`
`
`
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS
P.O. Box 1450
Alexandria, Virginia 22313-1450
www.uspto.gov

APPLICATION NO.: 14/941,583
FILING DATE: 11/14/2015
FIRST NAMED INVENTOR: NORITAKA IGUCHI
ATTORNEY DOCKET NO.: 2015-1682T
CONFIRMATION NO.: 1081

Wenderoth, Lind & Ponack, L.L.P.
1025 Connecticut Avenue, NW
Suite 500
Washington, DC 20036

EXAMINER: RETALLICK, KAITLIN A
ART UNIT: 2482
NOTIFICATION DATE: 10/02/2019
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding.

The time period for reply, if any, is set in the attached communication.

Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the
following e-mail address(es):
eoa@wenderoth.com
kmiller@wenderoth.com

PTOL-90A (Rev. 04/07)
`
`
`
Office Action Summary

Application No.: 14/941,583
Applicant(s): IGUCHI et al.
Examiner: Kaitlin A Retallick
Art Unit: 2482
AIA (FITF) Status: Yes
`
- The MAILING DATE of this communication appears on the cover sheet with the correspondence address -
`Period for Reply
`
A SHORTENED STATUTORY PERIOD FOR REPLY IS SET TO EXPIRE 3 MONTHS FROM THE MAILING
`DATE OF THIS COMMUNICATION.
- Extensions of time may be available under the provisions of 37 CFR 1.136(a). In no event, however, may a reply be timely filed after SIX (6) MONTHS from the mailing date of this communication.
- If NO period for reply is specified above, the maximum statutory period will apply and will expire SIX (6) MONTHS from the mailing date of this communication.
- Failure to reply within the set or extended period for reply will, by statute, cause the application to become ABANDONED (35 U.S.C. § 133).
- Any reply received by the Office later than three months after the mailing date of this communication, even if timely filed, may reduce any earned patent term adjustment. See 37 CFR 1.704(b).
`
Status

1) [X] Responsive to communication(s) filed on 09/09/2019.
   [ ] A declaration(s)/affidavit(s) under 37 CFR 1.130(b) was/were filed on _____.
2a) [ ] This action is FINAL.
2b) [X] This action is non-final.
3) [ ] An election was made by the applicant in response to a restriction requirement set forth during the interview on _____; the restriction requirement and election have been incorporated into this action.
4) [ ] Since this application is in condition for allowance except for formal matters, prosecution as to the merits is closed in accordance with the practice under Ex parte Quayle, 1935 C.D. 11, 453 O.G. 213.
`
Disposition of Claims*

5) [X] Claim(s) 10-16 is/are pending in the application.
   5a) Of the above claim(s) _____ is/are withdrawn from consideration.
6) [ ] Claim(s) _____ is/are allowed.
7) [X] Claim(s) 10-16 is/are rejected.
8) [ ] Claim(s) _____ is/are objected to.
9) [ ] Claim(s) _____ are subject to restriction and/or election requirement.

* If any claims have been determined allowable, you may be eligible to benefit from the Patent Prosecution Highway program at a participating intellectual property office for the corresponding application. For more information, please see http://www.uspto.gov/patents/init_events/pph/index.jsp or send an inquiry to PPHfeedback@uspto.gov.
`
`Application Papers
`10):] The specification is objected to by the Examiner.
`
`11):] The drawing(s) filed on
`
`is/are: a)C] accepted or b)Ej objected to by the Examiner.
`
`Applicant may not request that any objection to the drawing(s) be held in abeyance. See 37 CFR 1.85(a).
`Replacement drawing sheet(s) including the correction is required if the drawing(s) is objected to. See 37 CFR 1.121 (d).
`
Priority under 35 U.S.C. § 119

12) [X] Acknowledgment is made of a claim for foreign priority under 35 U.S.C. § 119(a)-(d) or (f).
    Certified copies:
    a) [X] All   b) [ ] Some**   c) [ ] None of the:
    1. [X] Certified copies of the priority documents have been received.
    2. [ ] Certified copies of the priority documents have been received in Application No. _____.
    3. [ ] Copies of the certified copies of the priority documents have been received in this National Stage application from the International Bureau (PCT Rule 17.2(a)).

** See the attached detailed Office action for a list of the certified copies not received.
`
Attachment(s)

1) [X] Notice of References Cited (PTO-892)
2) [ ] Information Disclosure Statement(s) (PTO/SB/08a and/or PTO/SB/08b) Paper No(s)/Mail Date _____
3) [ ] Interview Summary (PTO-413) Paper No(s)/Mail Date _____
4) [ ] Other: _____

U.S. Patent and Trademark Office
PTOL-326 (Rev. 11-13)
Office Action Summary
Part of Paper No./Mail Date 20190926
`
`
`
`Application/Control Number: 14/941,583
`Art Unit: 2482
`
`Page 2
`
`DETAILED ACTION
`
`Notice of Pre-AIA or AIA Status
`
`The present application, filed on or after March 16, 2013, is being examined under the
`
`first inventor to file provisions of the AIA.
`
`Continued Examination Under 37 CFR 1.114
`
`A request for continued examination under 37 CFR 1.114, including the fee set forth in
`
`37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible
`
`for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been
`
`timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR
`
`1.114. Applicant's submission filed on 09/09/2019 has been entered.
`
`Status of the Application
`
Claims 1-9 have been cancelled. Claims 14-16 have been added. Claims 10-16 are
`
`currently pending in this application.
`
`Specification
`
`Due to the amendments to the claims, the objection to the specification has been
`
`withdrawn.
`
`Claim Rejections - 35 USC § 112
`
Claim 1 has been cancelled. Therefore, the rejection of claim 1 under 35 U.S.C. § 112 has
`
`been withdrawn.
`
`Response to Arguments
`
`Presented arguments have been fully considered, but are rendered moot in view of new
`
`ground(s) of rejection necessitated by amendment(s) initiated by the applicant(s).
`
`
`
`
`Claim Rejections - 35 USC § 103
`
`In the event the determination of the status of the application as subject to AIA 35
`
U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction
`
`of the statutory basis for the rejection will not be considered a new ground of rejection if the
`
`prior art relied upon, and the rationale supporting the rejection, would be the same under
`
`either status.
`
`The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness
`
`rejections set forth in this Office action:
`
`A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is
`not identically disclosed as set forth in section 102, if the differences between the claimed invention
`and the prior art are such that the claimed invention as a whole would have been obvious before the
`effective filing date of the claimed invention to a person having ordinary skill in the art to which the
`claimed invention pertains. Patentability shall not be negated by the manner in which the invention
`was made.
`
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459
`
`(1966), that are applied for establishing a background for determining obviousness under 35
`
U.S.C. 103 are summarized as follows:
`
1. Determining the scope and contents of the prior art.
`
`2. Ascertaining the differences between the prior art and the claims at issue.
`
`3. Resolving the level of ordinary skill in the pertinent art.
`
`4. Considering objective evidence present in the application indicating obviousness or
`
`nonobviousness.
`
`Claims 10-16 are rejected under 35 U.S.C. 103 as being unpatentable over Takahashi et
`
al. (Hereafter, "Takahashi") [US 2014/0137168 A1] in view of Park et al. (Hereafter, "Park")

[US 2012/0023254 A1] in further view of Kim et al. (Hereafter, "Kim") [US 2010/0272190 A1].
`
`
`
`
`In regards to claim 10, Takahashi discloses a transmitting method ([0044] a control
`
`method for a transmitting apparatus) comprising: generating an encoded stream including first
`
`contents and at least one second content ([0044] a control method for a transmitting
`
`apparatus which encodes a plurality of pieces of content corresponding to the same playback
`
`duration and which transmits a piece of content which is selected as a playback subject
`
`among the encoded pieces of content to a playback apparatus); generating control
`
information which corresponds to Media Presentation Description defined in an MPEG-DASH
`
`standard ([0078] The description information analyzer 12 obtains content description
`
`information which includes, in each playback duration period (period) of the content,
`
`information indicating resources that can be requested to the server 2, and selects a
`
`representation as a playback subject in accordance with the obtained description
`
`information. More specifically, the description information analyzer 12 receives MPD data
`
`from the server 2 as description information and determines a representation as a playback
`
`subject. [0088] As stated above, the segment data 27 is data obtained by encoding content
`
`that can be provided to the client 1 in accordance with predetermined encoding conditions
`
`and by dividing the content at intervals of predetermined playback duration periods.) and
`
`which includes: first identifiers to identify respective first contents; at least one second
`
`identifier to identify the at least one second content ([0018] Upon receiving this request, the
`
`server sends the requested segment data to the client. The segment data is divided into a
`
`plurality of IP packets, and the IP packets are sequentially sent to the client. The time
`
`required to finish sending the segment data from the start time varies in accordance with, for
`
`example, the traffic of a network which connects the client and the server.); level identifiers
`
`
`
`
`to identify first referenced contents to decode the at least one second content, the first
`
`referenced contents being included in the first contents ([0019] The client starts receiving the
`
`segment data, and upon receiving the segment data after the lapse of the minimum buffer
`
`time necessary for starting stream playback (minBufferTime) described in the MPD data, that
`
`is, six seconds in the example shown in FIG. 17, the client starts playing back the segment
`
`data. [0020] Upon the completion of receiving "Sample-div1.ts", the client makes a request to
`
`send "Sample-div2.ts", which is the subsequent data. In response to this request, the server
`
`sends data "Sample-div2.ts" to the client. The client then plays back this received segment
`
`data after finishing playing back "Sample-div1.ts".); time information to reproduce the first
`
`contents and the at least one second content ([0103] In FIG. 5, time t0 is a time at which the
`
`generation of MPD data is started. At time t0, in order to start generating MPD data, the
`
`server controller 26 instructs the description information generator 25 to generate
`
`description information by the use of predetermined distribution parameters. Upon receiving
`
`an instruction to generate description information, the description information generator 25
`
`generates description information on the basis of specified distribution parameters (the
`
`number of representations, distribution start time, playback duration of segments, playback
`
`duration period of sub segments, minimum buffer time, and so on). In order to implement
`
`adaptive streaming in the client, the server controller 26 controls the distribution parameters
`
`so that a plurality of sub segments may be included in the minimum buffer time. For
`
`example, in the data configuration shown in FIGS. 2 and 3, the minimum buffer time is set to
`
`be six seconds, and the playback duration period of sub segments is set to be two seconds.
`
`The description information generator 25 immediately generates MPD data and stores it in
`
`
`
`
`the server storage unit 22.); and reconstruction data for reconstructing the encoded stream
`
`([0108] At time t1, in order to start live encoding of segment data, the server controller 26
`
`instructs the encoder 23 to start encoding with predetermined distribution parameters. The
`
`encoder 23 starts encoding on the basis of specified distribution parameters (codec, bit rate,
`
`image size, playback duration of segments, playback duration period of sub segments, and so
`
`on). The distribution parameters are supplied to the encoder 23 for each representation.
`
`[0109] Upon receiving content, the encoder 23 encodes items of segment data at the same
`
`time in parallel with a plurality of distribution parameters. Since there are two
`
`representations in the example shown in FIG. 2, the encoder 23 encodes two items of
`
`segment data at the same time in parallel.); and transmitting the encoded stream and the
`
`control information from a communication server to a client device ([0077] The client
`
`communication unit 11 sends a request to send segment data or a sub segment which forms a
`
`representation selected by the description information analyzer 12 to the server 2, and
`
`obtains segment data or a sub segment. The client communication unit 11 also obtains index
`
`information requested by the index information analyzer 14. It is assumed that the client
`
`communication unit 11 sends a request by using an HTTP. [0083] The server communication
`
`unit 21 sends the description information 29 to the client 1 in response to a request to send
`
`description information from the client 1. More specifically, the server communication unit
`
`21 reads the description information 29 stored in the server storage unit 22 and sends it to
`
`the client 1. In this case, MPD data is used as the description information 29, as stated above.
`
`[0084] Similarly, in response to a request to send the segment data 27 or a request to send
`
`the index information 28 from the client 1, the server communication unit 21 reads the
`
`
`
`
`segment data 27 or the index information 28 stored in the server storage unit 22 and sends
`
`the segment data 27 or the index information 28 to the client 1.).
`
`Park discloses a transmitting method ([Title] Method and Apparatus for Providing
`
`Multimedia Streaming Service) comprising: generating an encoded stream including first
`
`contents and at least one second content ([0021] The segment includes at least one fragment,
`
`segment index information indicating the position of the at least one fragment in the
`
`segment, and fragment index information indicating the position of each of a plurality of
`
`samples included in the at least one fragment.); generating control information which
`
corresponds to Media Presentation Description defined in an MPEG-DASH standard ([0005]
`
`When the client initially accesses the server, the server transmits a serviceable content list
`
`and a Media Presentation Description (MPD) for media content to the client. The MPD
`
`describes information required for the client to receive the media content, such as the type of
`
`the media content, the average bit rate of the media content, and the Uniform Resource
`
Identifiers (URIs) or Uniform Resource Locators (URLs) of content Segments covering a time
`
`unit. The client repeatedly requests necessary content based on the MPD. [0078] The present
`
`invention can be used in HTTP streaming, the standardization of which is being developed by
`
the MPEG-Dynamic Adaptive HTTP Streaming (DASH) and 3rd Generation Partnership
`
`Project (3GPP). The MPEG-DASH and 3GPP defines an MPD for HTTP streaming. An HTTP
`
`streaming client may index one or more Periods, Representations, and Segments based on
`
`the MPD of media content, as illustrated in FIG. 10.) and which includes: first identifiers to
`
`identify respective first contents; at least one second identifier to identify the at least one
`
`second content ([0120] If ranges(s), a type, and duration(s) are defined according to temporal
`
`
`
`
`levels or a frame type, switching is possible without sidx(s), using ranges(s) with the highest
`
`priority (e.g. type="0"; I frame) and their duration(s). [0126] In the presence of Priorities (e.g.
`
type="0, 1, . . . , N") and ranges --according to temporal levels of a hierarchical prediction
`
`structure in Segmentlnfo of an MPD, a client transmits an HTTP partial request only for a
`
`high-priority range and plays it back.); level identifiers to identify first referenced contents to
`
`decode the at least one second content, the first referenced contents being included in the first
`
`contents ([0131] When a Segment has the configuration illustrated in FIG. 11, each moof
`
`includes frames with different temporal levels (e.g. I, P and B frames). When a Trick and
`
`Random Access (TRA) situation occurs upon user request, only a specific sample group (e.g.
`
`subfragments) such as an I frame group) in each moof is played back. To support this
`
`operation, index information enabling access to a certain group of samples in the moofs
`
`needs to be added. [0133] Table 3 illustrates a syntax that describes an sidx_extension-based
`
`method, according to the present invention. [0134] Parameters in Table 3 have the following
`
`meanings. [0135] contains_level: a flag bit indicating whether subfragmentwise index
`
`information is included; and [0136] assemble_type: an indicator indicating a media sample
`
`arrangement method of each moof. [0138] level_count: the total number of levels in a
`
`fragment; [0139] level: defines each level. A lower value indicates a higher priority; [0140]
`
`level_offset: position information of each level; [0141] offset_count: an offset count to
`
`support samplewise access, when needed; [0142] offset: the position of a sample; [0143] size:
`
`the size of a sample; [0144] reserved_bit: a reserved bit for extension. [0145] The above
`
`sidx_extension enables a user to directly access a sample group having a specific level and
`
`thus a trick mode, Picture in Picture (PIP), and rate adaptation to a network environment can
`
`
`
`
`be more effectively supported.); time information to reproduce the first contents and the at
`
`least one second content ([Table 2] decoding time); and reconstruction data for reconstructing
`
`the encoded stream ([0049] A stream is reconstructed to have the same NAL Reference Index
`
(NRI) value, for example, using an NRI field in the MPEG-4/AVC NAL header and the NRI
`
`value and range information about the stream is written in an MPD.); and transmitting the
`
`encoded stream and the control information from a communication server to a client device
`
`([0021] In accordance with the present invention, there is provided a method for providing a
`
`multimedia streaming service, in which a server transmits a Media Presentation Description
`
`(MPD) including information about media data to a client, receives from the client a partial
`
`request message requesting a part of media data having a range based on a range defined in
`
`the MPD, and transmits to the client a segment having the range in response to the partial
`
`request message. The segment includes at least one fragment, segment index information
`
`indicating the position of the at least one fragment in the segment, and fragment index
`
`information indicating the position of each of a plurality of samples included in the at least
`
`one fragment.)
`
Further, Park discloses that scalable video coding is a known method in the
`
`transmission method using DASH.
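Park's level-index fields ([0135]-[0143]) lend themselves to a small sketch. Only the field names (contains_level, level, level_offset, level_count) come from Park's Table 3 description; the class layout, the selection function, and the byte offsets below are hypothetical:

```python
from dataclasses import dataclass

# Illustrative sketch of Park's sidx_extension idea: per-level offsets let a
# client fetch only a high-priority sample group (e.g. the I-frame group)
# with an HTTP partial request, as Park describes in [0126] and [0145].

@dataclass
class LevelEntry:
    level: int         # lower value = higher priority (Park [0139])
    level_offset: int  # position information of this level's samples ([0140])

@dataclass
class SidxExtension:
    contains_level: bool   # subfragment-wise index information present? ([0135])
    levels: list           # one LevelEntry per level; len() plays level_count's role

def highest_priority_range(sidx):
    """Return the offset of the highest-priority (lowest-valued) level,
    i.e. the range a trick-mode client would request first."""
    if not sidx.contains_level or not sidx.levels:
        return None
    best = min(sidx.levels, key=lambda e: e.level)
    return best.level_offset

idx = SidxExtension(True, [LevelEntry(2, 9000), LevelEntry(0, 0), LevelEntry(1, 4000)])
```

With this index, `highest_priority_range(idx)` picks level 0 (the I-frame group at offset 0), mirroring how Park's client requests only the high-priority range and plays it back.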
`
`Kim discloses a transmitting method ([0001] a scalable transmitting/receiving
`
`apparatus and method for improving availability of a broadcasting service) comprising:
`
`generating an encoded stream ([0052] The MUX 114 packetizes the base-layer video stream,
`
`the enhancement video stream and the audio stream output from the encoders 112 and 113
`
`into respective PES packets 210. Thereafter, the MUX 114 packetizes the PES packets 210 into
`
`
`
`
`TS packets 220. One PES packet is packetized into one or more TS packets.) including first
`
`contents and at least one second content ([0051-0053 and Fig. 2] the TS packet consists of
`
`either a base-layer video or an enhancement layer video); generating control information
`
`([0062-0063] FIG. 4 is a view for explaining specification information of a PMT applied to the
`
present invention. As shown in FIG. 4, a PID type includes PIDs respectively representing a

PMT packet, a base-layer video packet, an enhancement-layer video packet (PID_PMT,

PID_video_base layer, PID_video_enhancement layer and PID_audio). Respective PID values

thereof are '100', '200', '201' and '202') [struck through in original: which corresponds to Media Presentation

Description defined in an MPEG-DASH standard] and which includes: first identifiers to identify respective
`
`first contents ([0053] an audio TS packet, a base-layer video TS packet and an enhancement-
`
layer video TS packet each having different PIDs); at least one second identifier to identify the
`
`at least one second content ([0053] an audio TS packet, a base-layer video TS packet and an
`
enhancement-layer video TS packet each having different PIDs); level identifiers to identify
`
`first referenced contents to decode the at least one second content, the first referenced
`
`contents being included in the first contents ([0058 and 0060] the packet streams are
`
identified as first layer (L1:base layer) and second layer (L2:enhancement-layer) [0051 and
`
`Fig. 4] The MUX 114 of FIG. 1 packetizes and multiplexes program specification information,
`
`i.e., a stream map, a compressed and encoded SVC video stream, i.e., a base-layer video
`
`stream and an enhancement-layer video stream, and an audio stream, thereby generating an
`
MPEG-2 TS. Different program identifications (PIDs) are allocated to a video stream
`
`corresponding to a base layer, a video stream corresponding to an enhancement layer, and
`
`an audio stream.);
`
`'
`
`
`
`
and transmitting the
`
`encoded stream ([0053] That is, as shown in FIG. 2, the TS generated at the MUX 114 includes
`
`an audio TS packet, a base-layer video TS packet and an enhancement-layer video TS packet
`
`each having different Ple. The program specification information is included in a header of
`
`the TS packet.) and the control information ([0062-0063] FIG. 4 is a view for explaining
`
`specification information of a PMT applied to the present invention. As shown in FIG. 4, a PID
`
type includes PIDs respectively representing a PMT packet, a base-layer video packet, an

enhancement-layer video packet (PID_PMT, PID_video_base layer, PID_video_enhancement

layer and PID_audio). Respective PID values thereof are '100', '200', '201' and '202') [struck through in original: from a communication server to a client device].
`
`It would have been obvious to one of ordinary skill in the art before the effective filing
`
`date of the claimed invention to modify the teachings of Takahashi with the teachings of Park in
`
`order to improve system efficiency in the multimedia streaming service. It would have been
`
`obvious to one of ordinary skill in the art before the effective filing date of the claimed
`
`invention to modify the teachings of Takahashi and Park with the explicit teachings of scalable
`
`video coding transmission as taught by Kim in order to improve the quality of the satellite
`
`broadcasting service.
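The PID-based multiplexing Kim describes ([0051]-[0053] and FIG. 4) amounts to routing TS packets to per-stream buffers by PID. The PID values below follow Kim's FIG. 4 ('200' base layer, '201' enhancement layer, '202' audio); the demux function and stream names are an illustrative sketch, not Kim's implementation:

```python
# Minimal sketch of PID-based TS demultiplexing, per Kim's FIG. 2 and FIG. 4:
# every TS packet carries a PID, and the receiver groups packets into the
# base-layer video, enhancement-layer video, and audio streams accordingly.

PID_MAP = {200: "video_base", 201: "video_enh", 202: "audio"}

def demux(ts_packets):
    """Group (pid, payload) packets into per-stream payload lists."""
    streams = {name: [] for name in PID_MAP.values()}
    for pid, payload in ts_packets:
        name = PID_MAP.get(pid)
        if name is not None:   # this sketch ignores PMT/unknown PIDs
            streams[name].append(payload)
    return streams

packets = [(200, b"I"), (201, b"e1"), (202, b"a"), (200, b"P")]
out = demux(packets)
```

A receiver that wants only the independently decodable base layer can simply discard everything except the PID-200 stream, which is the behavior the SVC layering relies on.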
`
`In regards to claim 11, the limitations of claim 10 have been addressed. Takahashi
`
`discloses wherein each content of the first contents and the at least one second content is a
`
`video content ([0010] For example, if, for a certain piece of video content, multiple
`
`representations with different codecs, bit rates, frame rates, and resolutions are described, a
`
`
`
`
`client selects a representation with a codec, bit rate, frame rate, and resolution which match
`
`the playback performance of a device of the client.) or an audio content.
`
`In regards to claim 12, the limitations of claim 10 have been addressed. Takahashi fails
`
`to explicitly disclose wherein the first contents are independently decodable.
`
`Park discloses wherein the first contents are independently decodable ([0120] If
`
`ranges(s), a type, and duration(s) are defined according to temporal levels or a frame type,
`
`switching is possible without sidx(s), using ranges(s) with the highest priority (e.g. type="0"; I
`
frame) and their duration(s). [0126] In the presence of Priorities (e.g. type="0, 1, . . . , N") and
`
`ranges --according to temporal levels of a hierarchical prediction structure in Segmentlnfo of
`
`an MPD, a client transmits an HTTP partial request only for a high-priority range and plays it
`
`back. [0131] When a Segment has the configuration illustrated in FIG. 11, each moof includes
`
`frames with different temporal levels (e.g. I, P and B frames). When a Trick and Random
`
`Access (TRA) situation occurs upon user request, only a specific sample group (e.g.
`
`subfragments) such as an I frame group) in each moof is played back. To support this
`
`operation, index information enabling access to a certain group of samples in the moofs
`
`needs to be added.).
`
`Kim discloses wherein the first contents are independently decodable ([0032] The base
`
`layer corresponds to a compression result of an SD resolution image compatible to the H.264
`
`Advanced Video Coding (AVC) standard, and the enhancement layer corresponds to a result
`
`of compression and encoding performed by referencing an input HD resolution image and an
`
`encoding result of the base layer according to the H.264 SVC standard. If only a base-layer
`
`
`
`
`video stream is decoded, an SD image may be restored, and if an enhancement-layer video
`
`stream is decoded together with the base-layer video stream, an HD image may be restored.).
`
`It would have been obvious to one of ordinary skill in the art before the effective filing
`
`date of the claimed invention to modify the teachings of Takahashi with the teachings of Park in
`
`order to improve system efficiency in the multimedia streaming service. It would have been
`
`obvious to one of ordinary skill in the art before the effective filing date of the claimed
`
`invention to modify the teachings of Takahashi and Park with the explicit teachings of scalable
`
`video coding transmission as taught by Kim in order to improve the quality of the satellite
`
`broadcasting service.
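The base/enhancement dependency Kim describes in [0032] reduces to a toy sketch: the base layer decodes on its own (SD), while the enhancement layer is only meaningful together with the base layer (HD). The function and its return values are illustrative stand-ins, not actual SVC decoding:

```python
# Hedged sketch of Kim [0032]: the base layer is independently decodable;
# the enhancement layer references the base layer's encoding result.

def decode(base_stream, enhancement_stream=None):
    """Return the restored resolution for the layers actually decoded."""
    if base_stream is None:
        # The enhancement layer cannot be decoded alone: it references the base.
        raise ValueError("enhancement layer requires the base layer")
    if enhancement_stream is None:
        return "SD"   # base layer alone restores the SD image
    return "HD"       # base + enhancement restores the HD image

# decode("base")  -> SD playback; decode("base", "enh") -> HD playback
```

This is exactly the sense in which the first contents (base layer) are "independently decodable" in the claim-12 analysis above.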
`
`In regards to claim 13, the limitations of claim 10 have been addressed. Takahashi
`
discloses wherein the reconstruction data includes time offset information ([0108] At time t1,
`
`in order to start live encoding of segment data, the server controller 26 instructs the encoder
`
`23 to start encoding with predetermined distribution parameters. The encoder 23 starts
`
`encoding on the basis of specified distribution parameters (codec, bit rate, image size,
`
`playback duration of segments, playback duration period of sub segments, and so on). The
`
`distribution parameters are supplied to the encoder 23 for each representation. [0109] Upon
`
`receiving content, the encoder 23 encodes items of segment data at the same time in parallel
`
`with a plurality of distribution parameters. Since there are two representations in the
`
`example shown in FIG. 2, the encoder 23 encodes two items of segment data at the same
`
`time in parallel.).
`
`
`
`
`In regards to claim 14, the limitations of claim 10 have been addressed. Takahashi
`
`discloses wherein the reconstruction data indicates a change in the first contents and the at
`
`least one second content ([0109] Upon receiving content, the encoder 23 encodes items of
`
`segment data at the same time in parallel with a plurality of distribution parameters. Since
`
`there are two representations in the example shown in FIG. 2, the encoder 23 encodes two
`
`items of segment data at the same time in parallel.).
`
`In regards to claim 15, the limitations of claim 14 have been addressed. Takahashi
`
`discloses wherein the reconstruction data indicates removal of at least one content ([0005]
`
`Accordingly, by describing multiple representations with different bit rates in the MPD data,
`
`the client may select a representation with a bit rate which matches, for example, a
`
`communication condition or the playback performance of a device of the client, obtain a
`
`resource specified by the selected representation, and play back the resource. [0015]
`
`Accordingly, when using this MPD data, the client may select one of a representation with
`
MPEG2-TS having a bandwidth "1024000" and a representation with MPEG2-TS having a
`
`bandwidth "102400". Since the content of these representations is the same video content,
`
`the client may select the first representation if a sufficient bandwidth is available and the
`
`client wishes to obtain high quality video, and may select the second representation if a
`
`sufficient bandwidth is not available. Note that, in the example shown in part (b) of FIG. 17, a
`
case in which MPEG2-TS is used as segment data is shown, but, in DASH, MP4 may be used as
`
`segment data.).
`
`
`
`
`Park discloses wherein the reconstruction data indicates removal of at least one content
`
`([0049] A stream is reconstructed to have the same NAL Reference Index (NRI) value, for
`
example, using an NRI field in the MPEG-4/AVC NAL header and the NRI value and range
`
`information about the stream is written in an MPD.).
`
`Kim discloses wherein the reconstruction data indicates removal of at least one content
`
`([0032] The base layer corresponds to a compression result of an SD resolution image
`
`compatible to the H.264 Advanced Video Coding (AVC) standard, and the enhancement layer
`
`corresponds to a result of compression and encoding performed by referencing an input HD
`
`resolution image and an encoding result of the base layer according to the H.264 SVC
`
`standard. If only a base-layer video stream is decoded, an SD image may be restored, and if
`
`an enhancement-layer video stream is decoded together with the base-layer video stream, an
`
`HD image may be restored.).
`
`It would have been obvious to one of ordinary skill in the art before the effective filing
`
`date of the claimed invention to modify the teachings of Takahashi with the teachings of Park in
`
`order to improve system efficiency in the multimedia streaming service. It would have been
`
`obvious to one of ordinary skill in the art before the effective filing date of the claimed
`
`invention to modify the teachings of Takahashi and Park with the explicit teachings of scalable
`
`video coding transmission as taught by Kim in order to improve the quality of the satellite
`
`broadcasting service.
`
`In regards to claim 16, the limitations of claim 10 have been addressed. Takahashi
`
`discloses wherein the reconstruction data is different from information for decoding the first
`
`
`
`
`contents and the at least one second content ([0108] At time t1, in order to start live encoding
`
`of segment data, the server controller 26 instructs the encoder 23 to start encoding with
`
`predetermined distribution parameters. The encoder 23 starts encoding on the basis of
`
`specified distribution parameters (codec, bit rate, image size, playback duration of segments,
`
`playback duration period of sub segments, and so on). The distribution parameters are
`
`supplied to the encoder 23 for each representation. [0109] Upon receiving content, the
`
`encoder 23 encodes items of segment data at the same time in parallel with a plurality of
`
`distribution parameters. Since there are two representations in the example shown in FIG. 2,
`
`the encoder 23 encodes two items of segment data at the same time in parallel.).
`
`Contact Information
`
`Any inquiry concerning this communication or earlier communications from the
`
examiner should be directed to Kaitlin A Retallick whose telephone number is (571) 270-3841.
`
The examiner can normally be reached on Monday-Friday 8am-5pm.
`
Examiner interviews are available via telephone, in-person, and video conferencing
`
using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is
`
`encouraged to use the USPTO Automated Interview Request (AIR) at
`
`http://www.uspto.gov/interviewpractice.
`
`If attempts to reach the examiner by telephone are unsuccessful, the examiner’s
`
supervisor, Chris Kelley, can be reached on (571) 272-7331. The fax phone number for the
`
`organiza