UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS
P.O. Box 1450
Alexandria, Virginia 22313-1450
`
APPLICATION NO.: 16/814,666
FILING DATE: 03/10/2020
FIRST NAMED INVENTOR: Hiroki KASUGAI
ATTORNEY DOCKET NO.: AOYA.16PUSO1
CONFIRMATION NO.: 3955

MARK D. SARALINO (PAN)
RENNER, OTTO, BOISSELLE & SKLAR, LLP
1621 EUCLID AVENUE
19TH FLOOR
CLEVELAND, OH 44115

EXAMINER: NAZRUL, SHAHBAZ
ART UNIT: 2697
NOTIFICATION DATE: 01/06/2022
DELIVERY MODE: ELECTRONIC
`
Please find below and/or attached an Office communication concerning this application or proceeding.

The time period for reply, if any, is set in the attached communication.

Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): ipdocket@rennerotto.com
`
PTOL-90A (Rev. 04/07)
`
`
`
`
`
Disposition of Claims*
5) [X] Claim(s) 10-13 is/are pending in the application.
    5a) Of the above claim(s) ___ is/are withdrawn from consideration.
6) [ ] Claim(s) ___ is/are allowed.
7) [X] Claim(s) 10-12 is/are rejected.
8) [X] Claim(s) 13 is/are objected to.
9) [ ] Claim(s) ___ are subject to restriction and/or election requirement.
* If any claims have been determined allowable, you may be eligible to benefit from the Patent Prosecution Highway program at a participating intellectual property office for the corresponding application. For more information, please see http://www.uspto.gov/patents/init_events/pph/index.jsp or send an inquiry to PPHfeedback@uspto.gov.
`
Application Papers
10) [ ] The specification is objected to by the Examiner.
11) [ ] The drawing(s) filed on ___ is/are: a) [ ] accepted or b) [ ] objected to by the Examiner.
    Applicant may not request that any objection to the drawing(s) be held in abeyance. See 37 CFR 1.85(a).
    Replacement drawing sheet(s) including the correction is required if the drawing(s) is objected to. See 37 CFR 1.121(d).
`
Priority under 35 U.S.C. § 119
12) [ ] Acknowledgment is made of a claim for foreign priority under 35 U.S.C. § 119(a)-(d) or (f).
    Certified copies:
    a) [ ] All    b) [ ] Some**    c) [ ] None of the:
        1. [ ] Certified copies of the priority documents have been received.
        2. [ ] Certified copies of the priority documents have been received in Application No. ___.
        3. [ ] Copies of the certified copies of the priority documents have been received in this National Stage application from the International Bureau (PCT Rule 17.2(a)).
    ** See the attached detailed Office action for a list of the certified copies not received.
`
Attachment(s)
1) [ ] Notice of References Cited (PTO-892)
2) [ ] Information Disclosure Statement(s) (PTO/SB/08a and/or PTO/SB/08b), Paper No(s)/Mail Date ___
3) [ ] Interview Summary (PTO-413), Paper No(s)/Mail Date ___
4) [ ] Other: ___

U.S. Patent and Trademark Office    PTOL-326 (Rev. 11-13)    Office Action Summary    Part of Paper No./Mail Date 20220101
`
Office Action Summary
Application No.: 16/814,666
Applicant(s): KASUGAI, Hiroki
Examiner: SHAHBAZ NAZRUL
Art Unit: 2697
AIA (FITF) Status: Yes
`
`
-- The MAILING DATE of this communication appears on the cover sheet with the correspondence address --

Period for Reply

A SHORTENED STATUTORY PERIOD FOR REPLY IS SET TO EXPIRE 3 MONTHS FROM THE MAILING DATE OF THIS COMMUNICATION.
- Extensions of time may be available under the provisions of 37 CFR 1.136(a). In no event, however, may a reply be timely filed after SIX (6) MONTHS from the mailing date of this communication.
- If NO period for reply is specified above, the maximum statutory period will apply and will expire SIX (6) MONTHS from the mailing date of this communication.
- Failure to reply within the set or extended period for reply will, by statute, cause the application to become ABANDONED (35 U.S.C. § 133).
- Any reply received by the Office later than three months after the mailing date of this communication, even if timely filed, may reduce any earned patent term adjustment. See 37 CFR 1.704(b).
`
Status

1) [X] Responsive to communication(s) filed on 12/17/2021.
    [ ] A declaration(s)/affidavit(s) under 37 CFR 1.130(b) was/were filed on ___.
2a) [ ] This action is FINAL.    2b) [X] This action is non-final.
3) [ ] An election was made by the applicant in response to a restriction requirement set forth during the interview on ___; the restriction requirement and election have been incorporated into this action.
4) [ ] Since this application is in condition for allowance except for formal matters, prosecution as to the merits is closed in accordance with the practice under Ex parte Quayle, 1935 C.D. 11, 453 O.G. 213.
`
`
`
Application/Control Number: 16/814,666
Art Unit: 2697
Page 2
`
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
`
Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/17/2021 has been entered.
`
Response to Amendment

Claims 10-13 are pending in the instant application. Claim 10 is amended. Claims 1-9 were previously cancelled. Claim 13 is newly added.
`
Response to Arguments

Applicant's arguments filed 12/17/2021 have been fully considered but they are not persuasive. Applicant primarily argues that the GUI of Arrasvuori is meant to take the user's input to change audio attributes, e.g. volume, while the invention as claimed is meant for target type information at the output side. In response, the Examiner respectfully points out that, because the applicant has the opportunity to amend the claims during prosecution, giving a claim its broadest reasonable interpretation will reduce the possibility that the claim, once issued, will be interpreted more broadly than is justified. In re Yamamoto,
`
`
`
`
740 F.2d 1569, 1571 (Fed. Cir. 1984); In re Zletz, 893 F.2d 319, 321 (Fed. Cir. 1989) ("During patent examination the pending claims must be interpreted as broadly as their terms reasonably allow."); In re Prater, 415 F.2d 1393, 1404-05, 162 USPQ 541, 550-51 (CCPA 1969).
`
The Examiner further wants to point out that claim 10 as presently formulated still reads on the combination of Yuan and Arrasvuori. Regardless of whether the icon in Arrasvuori is meant for input or output, claim 10 still reads on the combination of references, especially given that items 520-540 depict different subject types and are separate from the real images captured in Yuan. Furthermore, icons 520-540 can be selected by a pinch gesture, e.g., and re-positioned to any desired location within the display screen (¶0085-0086). Therefore, when combined with Yuan, an audio focus on a specific subject type, represented by an icon separate from the captured image, is actually displayed in the display screen. Furthermore, Arrasvuori's items 520-540 indicate the type of subject based on which the audio processor is currently processing the audio data, enabling the user to visually confirm that type of subject during the shooting of the moving image.
`
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of
`
`
`
rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
`
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

    A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
`
Claims 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Yuan (US 2017/0289681) in view of Arrasvuori et al. (US 2014/0369506; hereinafter Arrasvuori).
`
Regarding claim 10, Yuan discloses an imaging apparatus (title, abstract, claim 11 and dependents, device 1000 in fig. 10; also see devices in figs. 6-9) comprising:

an imager configured to capture a subject image to generate image data (second capture unit 802, fig. 8, ¶0116; also see steps s102, s202, s602, etc. in figs. 1, 2, 6, 7);

an audio input device configured to receive audio data indicating sound during shooting using the imager (units 807, 901, and/or 1007 in figs. 8-10, ¶0115, ¶0130, ¶0135; also see steps s101, s201, s501, etc. in figs. 1, 2, 5);
`
a detector configured to detect a subject based on the image data generated by the imager (processor 1003 functioning as a detector configured to detect a subject based on the image data generated by the imager. If the focusing object of a user changes from the object 41 to the object 43, for example, the electronic device can acquire object
`
`
43 as the target object in the real-time image according to the focusing operation of the user — ¶0083);
`
a display (display screen and/or display unit, ¶0030, ¶0118) configured to display an image indicated by the image data (¶0030, ¶0178);
`
an operation member (if a user selects an object 43 through a first operation, for instance, tapping on a touch screen of the electronic device — ¶0081) configured to select a focus subject in the image from subjects detected by the detector based on a user operation on the imaging apparatus during shooting of a moving image (If a user selects an object 43 through a first operation (for instance, tapping on a touch screen of the electronic device), the electronic device can then determine a target object from multiple objects in the real-time image based on the object which was selected by the user through the first operation — ¶0081.
`
The second scenario includes a case in which, during the video recording and sound recording of multiple people, the focusing direction is adjusted when multiple people are speaking, so as to aim the beam forming direction at a target person. One example of such a scenario may include the following: 1) multiple people are simultaneously speaking during video recording and sound recording; 2) a certain person is selected to be focused on the screen, and the beam forming direction is adjusted to be aimed at the speaker; 3) when the microphone array forms the indication of the beam forming direction, the noise reduction level is enhanced during audio zoom-in, so as to make the sound clearer. There are various advantages to employing the embodiments. First, the video recording and sound recording are combined together in order to be consistent with real human experiences. For example, the sound recording quality is changed with
`
`
`
the adjustment of focus length during video recording, which is different from the unchanged sound quality as seen in the current market. Second, during video recording and sound recording of a single person, if the focus length is adjusted to zoom in or out on the person, the clarity of the person's voice will be changed therewith. 3) During video recording and sound recording of multiple people, if the focus is moved to another speaker, the speaker's voices will be amplified or clarified, and the surrounding people's voices will be reduced in volume — ¶0100-0101.

The second scenario includes recording video (i.e., both the real-time sound and the real-time image are required to be stored) — ¶0033. Also see ¶0047-0048 for shooting a video image);
`
an audio processor configured to process the audio data to emphasize or suppress specific sound in the audio data received by the audio input device based on the subject selected by the operation member (The second scenario includes a case in which, during the video recording and sound recording of multiple people, the focusing direction is adjusted when multiple people are speaking, so as to aim the beam forming direction at a target person. One example of such a scenario may include the following: 1) multiple people are simultaneously speaking during video recording and sound recording; 2) a certain person is selected to be focused on the screen, and the beam forming direction is adjusted to be aimed at the speaker; 3) when the microphone array forms the indication of the beam forming direction, the noise reduction level is enhanced during audio zoom-in, so as to make the sound clearer. There are various advantages to employing the embodiments. First, the video recording and sound recording are combined together in order to be consistent with real human experiences. For example,
`
`
`
`
the sound recording quality is changed with the adjustment of focus length during video recording, which is different from the unchanged sound quality as seen in the current market. Second, during video recording and sound recording of a single person, if the focus length is adjusted to zoom in or out on the person, the clarity of the person's voice will be changed therewith. 3) During video recording and sound recording of multiple people, if the focus is moved to another speaker, the speaker's voices will be amplified or clarified, and the surrounding people's voices will be reduced in volume — ¶0100-0101); and

a processor configured to cause the display to display the focus subject as a target type to be processed by the audio processor (figs. 5-6, ¶0100-0101).
`
Yuan is not found to disclose explicitly that the detector is configured to detect a type of the subject based on the image data generated by the imager, an audio processor is configured to process the audio data to emphasize or suppress specific sound in the audio data received by the audio input device based on a type of subject selected by the operation member, and a processor is configured to cause the display to display target type information, in addition to the subject image, indicating the type of subject based on which the audio processor is currently processing the audio data to enable the user to visually confirm the type of subject based on which the audio processor is currently processing the audio data during the shooting of the moving image.
`
However, Arrasvuori discloses an apparatus 1100 in the form of a mobile phone, video camera, computer or the like (fig. 11, ¶0115), which can contain audio-visual
`
`
`
`
processing elements (¶0059), capable of determining an object type (e.g. as shown in figs. 6-8, abstract, ¶0063) based on audio (¶0049, abstract) and video signals (the composite image contains the audio as well as the video signal used for object detection, ¶0059, ¶0061; "The predetermined scale or scales may be determined, for example, on basis of a field of view provided by an image or a video segment associated with the composite audio signal, the predetermined scale or scales thereby corresponding to the field of view of the image or the video segment" — ¶0072. In case there is an image or a video segment associated with the composite audio signal, the image or the video segment may be displayed on the display together with the one or more items representing respective sound sources of the composite audio signal. In particular, the one or more items representing the respective sound sources of the composite audio signal are positioned in the display such that their positions essentially coincide with positions of the respective objects of the displayed image or a video segment — ¶0074.

The item may comprise a figure, e.g. an icon, comprising a shape illustrating the respective type, e.g. a human shape illustrating a person as the sound source, a dog illustrating an animal as the sound source, a car illustrating a vehicle as the sound source, a question mark illustrating a sound source of unknown or unidentifiable type, each possibly provided together with a short explanatory text (e.g. "person", "animal", "vehicle", "unknown", . . . ). As another example, the item may comprise an image and/or name of a specific person in case such information is available or otherwise identifiable on basis of the sound source or on basis of an image and/or video segment associated with the composite audio signal comprising the sound source — ¶0075. Also see ¶0075) and corroborations thereof (¶0061, ¶0072-0076).
`
`
`
`
After detecting a type of subject using video, audio, or a combination thereof, the spatial position of the sound source is determined (e.g. ¶0059-0060). The user could also select a subject from the display (¶0080, step 1050, fig. 10) to apply user selected and/or user desired modifications (step 1070, fig. 10, ¶0109). The display also shows the target type information indicating the type of the focus subject as a target type to be processed by the audio processor (figs. 7-8).

Arrasvuori further discloses that items 520-540 can be representative of the sound source (abstract), indicating that they are different from the real captured image in Yuan and are composited with video in Arrasvuori. Furthermore, Arrasvuori discloses that a selected sound source subject type can have a modified audio attribute, e.g. a changed volume (fig. 8). Items 520-540, which represent a selected sound source, can also be placed in a changed spatial location (¶0085-0086). Thus, Arrasvuori's items 520-540 indicate the type of subject based on which the audio processor is currently processing the audio data, enabling the user to visually confirm that type of subject during the shooting of the moving image.
`
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Yuan to include the teaching of Arrasvuori to detect a type of the subject based on the image data generated by the imager, an audio processor configured to process the audio data to emphasize or suppress specific sound in the audio data received by the audio input
`
`
`
`
device based on a type of subject selected by the operation member, and a processor configured to cause the display to display target type information, in addition to the subject image, indicating the type of subject based on which the audio processor is currently processing the audio data to enable the user to visually confirm the type of subject based on which the audio processor is currently processing the audio data during the shooting of the moving image, because combining prior art elements according to known methods, ready for improvement, to yield predictable results is obvious. Furthermore, the combination would enhance the versatility of the overall system.
`
Regarding claim 11, Yuan in view of Arrasvuori discloses the imaging apparatus according to claim 10, wherein the processor is configured to further cause the display to display emphasis level information indicating a level at which the audio processor emphasizes or suppresses specific sound of the selected subject (Arrasvuori: figs. 7-8, ¶0094).
`
Regarding claim 12, Yuan in view of Arrasvuori discloses the imaging apparatus according to claim 10, wherein the processor is configured to, in response to change of the focus subject with a type of the focus subject after the change being different from the target type before the change (after combination with Arrasvuori, Yuan's use case as described in ¶0100 would now not only have a person, but also other sound sources, e.g. an animal, and with the combination the focus
`
`
`
subject could be of either subject type. Also see the combination of the references in claim 10),

update the target type information to indicate the type after the change as the target type, and cause the display to display the updated target type information (Arrasvuori: After combination, target type information is updated to indicate the type (e.g. John Doe, Dog and/or unknown, fig. 6) after the change as the target type in response to the user's selection of the focus subject per Yuan's description of ¶0100, and cause the display to display the updated target type information, figs. 6, 8).
`
Allowable Subject Matter

Claim 13 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter: the prior art of record, taken alone or in combination, fails to reasonably disclose or suggest,

Regarding claim 13, wherein the detector is configured to detect different subjects and different types of subjects within the image data generated by the imager; the audio processor is configured to process the audio data to emphasize or suppress specific sound in the audio data received by the audio input device differently based on the type of subject
`
`
`
`
as selected by the user during shooting of the moving image; and the operation member is configured to select the focus subject in the image from different subjects detected by the detector based on the user operation on the imaging apparatus during the shooting of the moving image, wherein selection of the focus subject based on the user operation causes the audio processor to process the audio data in accordance with the type of the focus subject detected by the detector; wherein the operation member is configured to enable the user to change the type of subject based on which the audio processor is currently processing the audio data to a different type of subject during the shooting of the moving image, and the processor is configured to cause the display to display updated target type information indicating the changed type of subject.
`
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAHBAZ NAZRUL whose telephone number is (571) 270-1467. The examiner can normally be reached M-Th: 9.30 am-3 pm, 6.30 pm-9 pm; F: 9.30 am-1.30 pm, 4 pm-8 pm.
`
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
`
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lin Ye, can be reached at 571-272-7372. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
`
`
`
`
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
`
/SHAHBAZ NAZRUL/
Primary Examiner, Art Unit 2697
`