www.uspto.gov

UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS
P.O. Box 1450
Alexandria, Virginia 22313-1450

Application No.: 16/814,666
Filing Date: 03/10/2020
First Named Inventor: Hiroki KASUGAI
Attorney Docket No.: AOYA.16PUSO1
Confirmation No.: 3955

MARK D. SARALINO (PAN)
RENNER, OTTO, BOISSELLE & SKLAR, LLP
1621 EUCLID AVENUE
19TH FLOOR
CLEVELAND, OH 44115

Examiner: NAZRUL, SHAHBAZ
Art Unit: 2697
Notification Date: 09/27/2021
Delivery Mode: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding.

The time period for reply, if any, is set in the attached communication.

Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es):

ipdocket@rennerotto.com

PTOL-90A (Rev. 04/07)
Disposition of Claims*

5) [X] Claim(s) 10-12 is/are pending in the application.
   5a) [ ] Of the above claim(s) ___ is/are withdrawn from consideration.
6) [ ] Claim(s) ___ is/are allowed.
7) [X] Claim(s) 10-12 is/are rejected.
8) [ ] Claim(s) ___ is/are objected to.
9) [ ] Claim(s) ___ are subject to restriction and/or election requirement.

* If any claims have been determined allowable, you may be eligible to benefit from the Patent Prosecution Highway program at a participating intellectual property office for the corresponding application. For more information, please see http://www.uspto.gov/patents/init_events/pph/index.jsp or send an inquiry to PPHfeedback@uspto.gov.
Application Papers

10) The specification is objected to by the Examiner.
11) [ ] The drawing(s) filed on ___ is/are: a) [ ] accepted or b) [ ] objected to by the Examiner.
    Applicant may not request that any objection to the drawing(s) be held in abeyance. See 37 CFR 1.85(a).
    Replacement drawing sheet(s) including the correction is required if the drawing(s) is objected to. See 37 CFR 1.121(d).
Priority under 35 U.S.C. § 119

12) [X] Acknowledgment is made of a claim for foreign priority under 35 U.S.C. § 119(a)-(d) or (f).
    Certified copies:
    a) [ ] All   b) [ ] Some**   c) [X] None of the:
    1. [ ] Certified copies of the priority documents have been received.
    2. [ ] Certified copies of the priority documents have been received in Application No. ___.
    3. [ ] Copies of the certified copies of the priority documents have been received in this National Stage application from the International Bureau (PCT Rule 17.2(a)).

** See the attached detailed Office action for a list of the certified copies not received.
Attachment(s)

1) Notice of References Cited (PTO-892)
2) Information Disclosure Statement(s) (PTO/SB/08a and/or PTO/SB/08b), Paper No(s)/Mail Date ___
3) [ ] Interview Summary (PTO-413), Paper No(s)/Mail Date ___
4) [ ] Other: ___

U.S. Patent and Trademark Office
PTOL-326 (Rev. 11-13)
Office Action Summary

Application No.: 16/814,666
Applicant(s): KASUGAI, Hiroki
Examiner: SHAHBAZ NAZRUL
Art Unit: 2697
AIA (FITF) Status: Yes
Part of Paper No./Mail Date: 20210921
-- The MAILING DATE of this communication appears on the cover sheet with the correspondence address --

Period for Reply

A SHORTENED STATUTORY PERIOD FOR REPLY IS SET TO EXPIRE 3 MONTHS FROM THE MAILING DATE OF THIS COMMUNICATION.
- Extensions of time may be available under the provisions of 37 CFR 1.136(a). In no event, however, may a reply be timely filed after SIX (6) MONTHS from the mailing date of this communication.
- If NO period for reply is specified above, the maximum statutory period will apply and will expire SIX (6) MONTHS from the mailing date of this communication.
- Failure to reply within the set or extended period for reply will, by statute, cause the application to become ABANDONED (35 U.S.C. § 133).
- Any reply received by the Office later than three months after the mailing date of this communication, even if timely filed, may reduce any earned patent term adjustment. See 37 CFR 1.704(b).
Status

1) [X] Responsive to communication(s) filed on applicant's response of 9/15/2021.
   [ ] A declaration(s)/affidavit(s) under 37 CFR 1.130(b) was/were filed on ___.
2a) [X] This action is FINAL.
2b) [ ] This action is non-final.
3) [ ] An election was made by the applicant in response to a restriction requirement set forth during the interview on ___; the restriction requirement and election have been incorporated into this action.
4) [ ] Since this application is in condition for allowance except for formal matters, prosecution as to the merits is closed in accordance with the practice under Ex parte Quayle, 1935 C.D. 11, 453 O.G. 213.
Application/Control Number: 16/814,666
Art Unit: 2697
Page 2
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Claims 10-12 are pending in this instant application. Claim 10 is amended. Claims 1-9 are cancelled. No new claims are added.
Response to Arguments

Applicant's arguments filed 9/15/2021 have been fully considered but they are not persuasive. First and foremost, applicant's arguments regarding the Hirose and Lee references are moot, since these references are not used in this Office Action. Regarding Arrasvuori, applicant argues (see pages 6-7 of the response of 9/15/2021) that, according to ¶0078, Arrasvuori only uses audio/sound to detect the sound source, and does not use image data at all. Examiner disagrees with Applicant's interpretation. In ¶0078, the source of analysis is a composite audio, which contains a corresponding video portion (¶0059, ¶0061, ¶0072, ¶0074). At least a combination of audio and video processing helps to ascertain the subject type (¶0059, ¶0061, ¶0072, ¶0074-0076), as Examiner understands from the disclosure of Arrasvuori. Also, according to the assertion in ¶0076, video analysis alone ascertains the subject type of the sound source (... and/or on basis of analysis of an image and/or video segment associated with the composite audio signal). Also, according to the recited claim limitation, there is no problem using audio data to ascertain the sound source, as long as video/image is used to pinpoint the sound source type and its position/location in the scene. It is very clear from the disclosures of
Arrasvuori that audio-visual correspondence of the composite audio (which contains corresponding video image attached with audio) is analyzed to determine the sound source and its location in the field of view (¶0059, ¶0061, ¶0072, ¶0074-0076) of the video image. Thus, Applicant's argument that Arrasvuori is not found to teach detecting a subject and type of subject based on visual image data rather than audio or sound data is not persuasive, based on the arguments provided above by Examiner.

Another of Applicant's arguments, that Arrasvuori does not teach displaying target type information indicating the type of the focus subject based on the type selected by the operation member based on a user operation, is moot and not applicable in this Office Action, since this feature is primarily taken from Yuan (the new primary reference used in this Office Action), and propagates to the combination of Yuan and Arrasvuori.

For details, see the rejection below.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Yuan (US 2017/0289681) in view of Arrasvuori et al. (US 2014/0369506; hereinafter Arrasvuori).

Regarding claim 10, Yuan discloses an imaging apparatus (e.g. abstract, claim 1 and dependents, device 1000 in fig. 10. Also see devices in figs. 6-9) comprising:

an imager configured to capture a subject image to generate image data (second capture unit 802, fig. 8, ¶0116. Also see steps s102, s202, s602, etc. in figs. 1, 2, 6, 7);

an audio input device configured to receive audio data indicating sound during shooting using the imager (unit 607, 901, and/or 1007 in figs. 8-10, ¶0115, ¶0130, ¶0136. Also see steps s101, s201, s601, etc. in figs. 1, 2, 6);

a detector configured to detect a subject based on the image data generated by the imager (processor 1003 functioning as a detector configured to detect a subject based on the image data generated by the imager. If the focusing object of a user changes from the object 41 to the object 43, for example, the electronic device can acquire object 43 as the target object in the real-time image according to the focusing operation of the user — ¶0083);

a display (display screen and/or display unit, ¶0030, ¶0118) configured to display an image indicated by the image data (¶0030, ¶0118);
an operation member (If a user selects an object 43 through a first operation (for instance, tapping on a touch screen of the electronic device) — ¶0081) configured to select a focus subject in the image from subjects detected by the detector based on a user operation on the imaging apparatus (If a user selects an object 43 through a first operation (for instance, tapping on a touch screen of the electronic device), the electronic device can then determine a target object from multiple objects in the real-time image based on the object which was selected by the user through the first operation — ¶0081.

The second scenario includes a case in which, during the video recording and sound recording of multiple people, the focusing direction is adjusted when multiple people are speaking, so as to aim the beam forming direction at a target person. One example of such a scenario may include the following: 1) multiple people are simultaneously speaking during video recording and sound recording; 2) a certain person is selected to be focused on the screen, and the beam forming direction is adjusted to be aimed at the speaker; 3) when the microphone array forms the indication of the beam forming direction, the noise reduction level is enhanced during audio zoom-in, so as to make the sound clearer. There are various advantages to employing the embodiments. First, the video recording and sound recording are combined together in order to be consistent with real human experiences. For example, the sound recording quality is changed with the adjustment of focus length during video recording, which is different from the unchanged sound quality as seen in the current market. Second, during video recording and sound recording of a single person, if the focus length is adjusted to zoom in or out on the person, the clarity of the person's voice will be changed therewith. 3) During
video recording and sound recording of multiple people, if the focus is moved to another speaker, the speaker's voices will be amplified or clarified, and the surrounding people's voices will be reduced in volume — ¶0100-0101);

an audio processor configured to process the audio data to emphasize or suppress specific sound in the audio data received by the audio input device based on the selected subject by the operation member (The second scenario includes a case in which, during the video recording and sound recording of multiple people, the focusing direction is adjusted when multiple people are speaking, so as to aim the beam forming direction at a target person. One example of such a scenario may include the following: 1) multiple people are simultaneously speaking during video recording and sound recording; 2) a certain person is selected to be focused on the screen, and the beam forming direction is adjusted to be aimed at the speaker; 3) when the microphone array forms the indication of the beam forming direction, the noise reduction level is enhanced during audio zoom-in, so as to make the sound clearer. There are various advantages to employing the embodiments. First, the video recording and sound recording are combined together in order to be consistent with real human experiences. For example, the sound recording quality is changed with the adjustment of focus length during video recording, which is different from the unchanged sound quality as seen in the current market. Second, during video recording and sound recording of a single person, if the focus length is adjusted to zoom in or out on the person, the clarity of the person's voice will be changed therewith. 3) During video recording and sound recording of multiple people, if the focus is moved to another speaker, the speaker's voices will be amplified
or clarified, and the surrounding people's voices will be reduced in volume — ¶0100-0101); and

a processor configured to cause the display to display the focus subject as a target type to be processed by the audio processor (figs. 5-6, ¶0100-0101).

Yuan is not found disclosing explicitly that the detector is configured to detect a type of the subject based on the image data generated by the imager, an audio processor configured to process the audio data to emphasize or suppress specific sound in the audio data received by the audio input device based on a type of selected subject by the operation member, and a processor configured to cause the display to display target type information indicating the type of the focus subject as a target type to be processed by the audio processor.
However, Arrasvuori discloses an apparatus 1100 in the form of a mobile phone, video camera, computer or the likes thereof (fig. 11, ¶0115), which can contain audio-visual processing elements (¶0059), capable of determining an object type (e.g. as shown in figs. 6-8, abstract, ¶0063) based on audio (¶0049, abstract) and video signals (composite image contains audio as well as video signal used for object detection, ¶0059, ¶0061; "The predetermined scale or scales may be determined, for example, on basis of a field of view provided by an image or a video segment associated with the composite audio signal, the predetermined scale or scales thereby corresponding to the field of view of the image or the video segment" — ¶0072. In case there is an image or a video segment associated with the composite audio signal, the image or the video segment may be
displayed on the display together with the one or more items representing respective sound sources of the composite audio signal. In particular, the one or more items representing the respective sound sources of the composite audio signal are positioned in the display such that their positions essentially coincide with positions of the respective objects of the displayed image or a video segment — ¶0074.

The item may comprise a figure, e.g. an icon, comprising a shape illustrating the respective type, e.g. a human shape illustrating a person as the sound source, a dog illustrating an animal as the sound source, a car illustrating a vehicle as the sound source, a question mark illustrating a sound source of unknown or unidentifiable type, each possibly provided together with a short explanatory text (e.g. "person", "animal", "vehicle", "unknown", ...). As another example, the item may comprise an image and/or name of a specific person in case such information is available or otherwise identifiable on basis of the sound source or on basis of an image and/or video segment associated with the composite audio signal comprising the sound source — ¶0075) and corroborations thereof (¶0061, ¶0072-0076).

After detecting a type of subject using video, audio, or a combination thereof, spatial position of the sound source is determined (e.g. ¶0059-0060). The user could also select a subject from the display (¶0080, step 1050, fig. 10) to apply user selected and/or user desired modifications (step 1070, fig. 10, ¶0109). The display also shows the target type information indicating the type of the focus subject as a target type to be processed by the audio processor (figs. 7-8).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Yuan to include the teaching of Arrasvuori of detecting a type of the subject based on the image data generated by the imager and a processor configured to cause the display to display target type information indicating the type of the focus subject as a target type to be processed by the audio processor, because combining prior art elements according to known methods ready for improvement to yield predictable results is obvious. Furthermore, the combination would enhance the versatility of the overall system.

After the combination, the limitation of an audio processor configured to process the audio data to emphasize or suppress specific sound in the audio data received by the audio input device based on a type of selected subject by the operation member would be met, since Yuan's user-selective audio-visual beamforming would not only now work for people, it would now work for animals as well, understood as a type of selected subject.
Regarding claim 11, Yuan in view of Arrasvuori discloses the imaging apparatus according to claim 10, wherein the processor is configured to further cause the display to display emphasis level information indicating a level at which the audio processor emphasizes or suppresses specific sound of the selected subject (Arrasvuori: figs. 7-8, ¶0094).
Regarding claim 12, Yuan in view of Arrasvuori discloses the imaging apparatus according to claim 10, wherein the processor is configured to,

in response to change of the focus subject with a type of the focus subject after the change being different from the target type before the change (after combination with Arrasvuori, Yuan's use case as described in ¶0100 would now not only have a person, but also other sound sources, e.g. an animal, and in the combination the focus subject could be of either subject type. Also see the combination of the references in claim 10),

update the target type information to indicate the type after the change as the target type, and cause the display to display the updated target type information (Arrasvuori: after combination, target type information is updated to indicate the type (e.g. John Doe, Dog and/or unknown, fig. 6) after the change as the target type in response to the user's selection of focus subject per Yuan's description of ¶0100, and cause the display to display the updated target type information, figs. 6, 8).
Conclusion

The prior and/or pertinent art made of record and not relied upon, considered pertinent to applicant's disclosure, is:

AN et al. (US 2015/0162019, see figs. 7, 8) and Kaine et al. (US 2014/0085538, see figs. 7D-F, 9-10), who disclose focusing on subjects of interest and amending audio features from the selected subject accordingly.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAHBAZ NAZRUL whose telephone number is (571) 270-1467. The examiner can normally be reached on M-Th: 9.30 am-3 pm, 6.30 pm-9 pm; F: 9.30 am-1.30 pm, 4 pm-8 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lin Ye, can be reached on 571-272-7372. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHAHBAZ NAZRUL/
Primary Examiner, Art Unit 2696