Inventor(s): YAMAMOTO TAKAYA; UCHIUMI TADASHI
Applicant(s): SHARP KK (SHARP CORP)
Classification (international / cooperative): H04N13/08; H04N7/32
Application number: JP20120099118 20120424
Priority number(s): JP20120099118 20120424
[Document Name] Description

[Title of the invention] Image encoding device, image decoding device, image encoding method, image decoding method, image encoding program, and image decoding program
`
[Technical field]

[0001] The present invention relates to an image encoding apparatus, an image decoding apparatus, an image encoding method, an image decoding method, an image encoding program, and an image decoding program.
`
[Background of the invention]

[0002] Recently, techniques relating to image coding have been discussed. For example, Non-Patent Document 1 describes a technique relating to HEVC (High Efficiency Video Coding, a high-efficiency moving picture compression coding scheme). According to Non-Patent Document 1, in HEVC an image (picture) constituting a moving image is divided into tree blocks, each tree block is further divided into encoding units (Coding Unit: CU), and encoding / decoding is performed for each encoding unit. In addition, in intra prediction and inter-screen prediction, the encoding unit, which is the unit of the encoding process, is further divided into prediction units (Prediction Unit: PU). The shape of each prediction unit is selected from a plurality of predetermined shapes. For example, the plurality of shapes, as in FIG. 3, generally have uniform (symmetrical) shapes in the longitudinal and transverse directions.
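The following minimal Python sketch (the function names and the symmetric partition modes 2Nx2N, 2NxN, Nx2N and NxN are illustrative assumptions consistent with the description above, not code from Non-Patent Document 1) shows how a 64 x 64 tree block can be recursively quad-split into coding-unit candidates and how a single coding unit is divided into prediction units of symmetric shape:

def split_tree_block(x, y, size, depth, max_depth=3):
    """Enumerate candidate CUs by recursive quad-splitting down to max_depth (8x8 for a 64x64 block)."""
    if depth == max_depth:
        return [(x, y, size)]
    cus = [(x, y, size)]                      # the CU at this depth is itself a candidate
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            cus += split_tree_block(x + dx, y + dy, half, depth + 1, max_depth)
    return cus

def partition_cu(x, y, size, mode):
    """Divide one CU into PUs (x, y, width, height) using a symmetric mode (2Nx2N, 2NxN, Nx2N, NxN)."""
    n = size // 2
    if mode == "2Nx2N":
        return [(x, y, size, size)]
    if mode == "2NxN":                        # two PUs: full width, half height
        return [(x, y, size, n), (x, y + n, size, n)]
    if mode == "Nx2N":                        # two PUs: half width, full height
        return [(x, y, n, size), (x + n, y, n, size)]
    if mode == "NxN":                         # four PUs: half width, half height
        return [(x + dx, y + dy, n, n) for dy in (0, n) for dx in (0, n)]
    raise ValueError(mode)

if __name__ == "__main__":
    print(len(split_tree_block(0, 0, 64, depth=0)))   # 1 + 4 + 16 + 64 = 85 candidate CUs
    print(partition_cu(0, 0, 16, "Nx2N"))             # two 8 x 16 prediction units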
`
`
[Prior art documents]

[Non-patent document]

[0003]
[Non-patent document 1] Ishikawa, [online], Internet <URL: http://whbb.forum.impressrd.jp/feature/20110412/837>, [Searched 2012 Apr. 17].
`
[Summary of the invention]

[Problem to be solved by the invention]

[0004] However, in MPEG (Moving Picture Experts Group)-3DV, for example, since the cameras are arranged in one dimension (with parallel optical axes, on a straight line in the horizontal direction), parallax occurs only in the horizontal direction. In some cases, a vertical edge is a region in which the horizontal parallax appears conspicuously. In this way, the direction in which an edge or the like appears may be horizontal or vertical and is not uniform. In such cases, in the prior art, if the width of the prediction unit is narrowed to detect an edge, it is necessary to reduce the size of the coding unit. In this case, a picture is divided into a larger number of encoding units, and accordingly the information associated with each encoding unit (e.g., information indicating the encoding mode and the method of dividing into prediction units) increases, resulting in an increase in the amount of code.
`
[0005] It is an object of the present invention to provide an image encoding apparatus, an image decoding apparatus, an image encoding method, an image decoding method, an image encoding program, and an image decoding program which can prevent an increase in the amount of code.

[Means for solving the problem]
`
[0006]
(1) According to an aspect of the present invention, in order to solve the above problems, one aspect of the present invention is an image encoding device which divides an image into prediction units, generates a predicted image for each prediction unit, and encodes the image, wherein the shape of each of the prediction units is selected from a plurality of predetermined shapes, and a prediction method selected from a plurality of prediction methods including at least parallax prediction, which is a prediction method for generating a predicted image with reference to an image based on a viewpoint different from the image, is used. The image encoding device includes a predicted image generation unit that generates a predicted image of the prediction unit, and the plurality of predetermined shapes include more shapes that are long in the direction perpendicular to the parallax direction of the parallax prediction than shapes that are long in the parallax direction.
`
[0007]
(2) In addition, according to an embodiment of the present invention, the image encoding apparatus described above further includes a unit shape storage unit that stores a first set and a second set, each of which is a set of shapes that can be used as the shape of the prediction unit, and a determination unit that determines whether the image is a reference viewpoint image, for which parallax prediction cannot be used, or a non-reference viewpoint image, for which parallax prediction can be used. The predicted image generation unit selects one of the sets stored in the unit shape storage unit according to the determination result of the determination unit and uses the shapes of the selected set as the plurality of predetermined shapes, and the second set, which is the set selected when the determination unit determines that the image is a non-reference viewpoint image, includes more shapes that are long in the direction perpendicular to the parallax direction of the parallax prediction than shapes that are long in the parallax direction.
`
[0008]
(3) In addition, in an embodiment of the present invention, in the image encoding apparatus according to the present invention, the first set is the set selected when the determination unit determines that the image is a reference viewpoint image, and the second set is obtained by changing the direction of some of the shapes of the first set so that they are long in the direction perpendicular to the parallax direction of the parallax prediction.
`
[0009]
(4) In addition, in an embodiment of the present invention, in the image encoding apparatus according to the present invention, the first set is the set selected when the determination unit determines that the image is a reference viewpoint image, and the second set is obtained by adding, to the first set, shapes that are long in the direction perpendicular to the parallax direction of the parallax prediction.
`
`
`
[0010]
(5) In addition, in an embodiment of the present invention, in the above-described image encoding apparatus, the direction of some of the shapes of the second set is changed according to the parallax direction so that they are elongated in the direction perpendicular to the parallax direction.
`
[0011]
(6) In addition, an embodiment of the present invention is an image decoding apparatus which generates a predicted image for each prediction unit and decodes an image, wherein the shape of each of the prediction units is selected from a plurality of predetermined shapes, and a prediction method selected from a plurality of prediction methods including at least parallax prediction, which is a prediction method for generating a predicted image with reference to an image based on a viewpoint different from the image, is used. The image decoding apparatus includes a predicted image generation unit that generates a predicted image of the prediction unit, and the plurality of predetermined shapes include more shapes that are long in the direction perpendicular to the parallax direction of the parallax prediction than shapes that are long in the parallax direction.
`
[0012]
(7) In addition, according to an aspect of the present invention, the image decoding apparatus described above further includes a unit shape storage unit that stores a first set and a second set, each of which is a set of shapes that can be used as the shape of the prediction unit, and a determination unit that determines whether the image is a reference viewpoint image, for which parallax prediction cannot be used, or a non-reference viewpoint image, for which parallax prediction can be used. The predicted image generation unit selects one of the sets stored in the unit shape storage unit according to the determination result of the determination unit and uses the shapes of the selected set as the plurality of predetermined shapes, and the second set, which is the set selected when the determination unit determines that the image is a non-reference viewpoint image, includes more shapes that are long in the direction perpendicular to the parallax direction of the parallax prediction than shapes that are long in the parallax direction.
`
[0013]
(8) In addition, in an embodiment of the present invention, in the image decoding apparatus according to the present invention, the first set is the set selected when the determination unit determines that the image is a reference viewpoint image, and the second set is obtained by changing the direction of some of the shapes of the first set so that they are long in the direction perpendicular to the parallax direction of the parallax prediction.
`
[0014]
(9) In addition, in an embodiment of the present invention, in the image decoding apparatus according to the present invention, the first set is the set selected when the determination unit determines that the image is a reference viewpoint image, and the second set is obtained by adding, to the first set, shapes that are long in the direction perpendicular to the parallax direction of the parallax prediction.
`
[0015]
(10) In addition, in an embodiment of the present invention, in the above-described image decoding apparatus, the direction of some of the shapes of the second set is changed according to the parallax direction so that they are elongated in the direction perpendicular to the parallax direction.
`
`
`
[0016]
(11) In addition, an embodiment of the present invention is an image encoding method in an image encoding apparatus which divides an image into prediction units, generates a predicted image for each prediction unit, and encodes the image, wherein the shape of each of the prediction units is selected from a plurality of predetermined shapes, and a prediction method selected from a plurality of prediction methods including at least parallax prediction, which is a prediction method for generating a predicted image with reference to an image based on a viewpoint different from the image, is used. The image encoding method includes a predicted image generation process for generating a predicted image of the prediction unit, and the plurality of predetermined shapes include more shapes that are long in the direction perpendicular to the parallax direction of the parallax prediction than shapes that are long in the parallax direction.
`
[0017]
(12) In addition, an embodiment of the present invention is an image decoding method in an image decoding apparatus which generates a predicted image for each prediction unit and decodes an image, wherein the shape of each of the prediction units is selected from a plurality of predetermined shapes, and a prediction method selected from a plurality of prediction methods including at least parallax prediction, which is a prediction method for generating a predicted image with reference to an image based on a viewpoint different from the image, is used. The image decoding method includes a predicted image generation process for generating a predicted image of the prediction unit, and the plurality of predetermined shapes include more shapes that are long in the direction perpendicular to the parallax direction of the parallax prediction than shapes that are long in the parallax direction.
`
[0018]
(13) In addition, one aspect of the present invention is an image encoding program for a computer of an image encoding apparatus which divides an image into prediction units, generates a predicted image for each prediction unit, and encodes the image, wherein the shape of each of the prediction units is selected from a plurality of predetermined shapes, and a prediction method selected from a plurality of prediction methods including at least parallax prediction, which is a prediction method for generating a predicted image with reference to an image based on a viewpoint different from the image, is used. The image encoding program causes the computer to execute a predicted image generation procedure for generating a predicted image of the prediction unit, and the plurality of predetermined shapes include more shapes that are long in the direction perpendicular to the parallax direction of the parallax prediction than shapes that are long in the parallax direction.
`
[0019]
(14) In addition, one aspect of the present invention is an image decoding program for a computer of an image decoding apparatus which generates a predicted image for each prediction unit and decodes an image, wherein the shape of each of the prediction units is selected from a plurality of predetermined shapes, and a prediction method selected from a plurality of prediction methods including at least parallax prediction, which is a prediction method for generating a predicted image with reference to an image based on a viewpoint different from the image, is used. The image decoding program causes the computer to execute a predicted image generation procedure for generating a predicted image of the prediction unit, and the plurality of predetermined shapes include more shapes that are long in the direction perpendicular to the parallax direction of the parallax prediction than shapes that are long in the parallax direction.
`
[Effect of the invention]

[0020] According to the present invention, an increase in the amount of code can be prevented.

[Brief description of the drawings]

[0021]
`
FIG. 1 is a schematic block diagram showing a configuration of a video system according to the present embodiment.

FIG. 2 is a schematic block diagram showing a configuration of an encoding apparatus according to the present embodiment.

FIG. 3 is a schematic block diagram showing a configuration of a predicted image generation unit according to the present embodiment.

FIG. 4 is an explanatory diagram illustrating division of a tree block according to an embodiment of the present invention into encoding units.

FIG. 5 is a diagram showing an example of a first set and a second set stored in the unit shape storage unit according to the present embodiment.

FIG. 6 is a table showing a relationship between shape information of a prediction unit and division according to the present embodiment.

FIG. 7 is a table showing the relationship between the shape information of the prediction unit, the layer information, the prediction mode, the division mode, and the intra-screen division flag stored in the unit shape storage unit according to the present embodiment.

FIG. 8 is a schematic diagram showing an example of a syntax table of an encoding unit according to the present embodiment.

FIG. 9 is a schematic block diagram showing a configuration of a decoding apparatus according to the present embodiment.

FIG. 10 is a schematic diagram showing an example of a syntax table of a NAL unit according to the present embodiment.

FIG. 11 is a schematic block diagram showing a configuration of a prediction unit information decoding unit according to the present embodiment.

FIG. 12 is a flowchart showing an example of an operation of the prediction unit information decoding unit according to the present embodiment.

FIG. 13 is a diagram showing an example of a first set and a second set stored in a unit shape storage unit according to a second embodiment of the present invention.

FIG. 14 is a table showing a relationship between shape information of a prediction unit and division according to the present embodiment.

FIG. 15 is a table showing the relationship between the shape information of the prediction unit, the layer information, the prediction mode, the division mode, and the intra-screen division flag stored in the unit shape storage unit according to the present embodiment.

FIG. 16 is a diagram showing an example of a first set and a second set stored in a unit shape storage unit according to a third embodiment of the present invention.

FIG. 17 is a table showing the relationship between the shape information of the prediction unit stored in the unit shape storage unit according to the present embodiment and the layer information, the prediction mode, the division mode, and the intra-screen division flag.

FIG. 18 is a diagram showing an example of a second set stored in a unit shape storage unit according to a fourth embodiment of the present invention.

FIG. 19 is a table showing the relationship between the shape information of the prediction unit, the layer information, the prediction mode, the division mode, and the intra-screen division flag stored in the unit shape storage unit according to the present embodiment.

FIG. 20 is a schematic diagram showing an example of a syntax table of an encoding unit according to the present embodiment.

FIG. 21 is a diagram showing an example of a second set stored in a unit shape storage unit according to a fifth embodiment of the present invention.

FIG. 22 is a table showing the relationship between the shape information of the prediction unit, the layer information, the prediction mode, the division mode, and the intra-screen division flag stored in the unit shape storage unit according to the present embodiment.

FIG. 23 is a schematic block diagram showing a configuration of an encoding apparatus according to a sixth embodiment of the present invention.

FIG. 24 is a schematic block diagram showing a configuration of a predicted image generation unit according to the present embodiment.

FIG. 25 is an explanatory diagram illustrating division of a tree block according to an embodiment of the present invention into encoding units.

FIG. 26 is a table showing the relationship between the shape information of an encoding unit, the layer information, the prediction mode, the division mode, and the intra-screen division flag stored in the unit shape storage unit according to the present embodiment.

FIG. 27 is a schematic block diagram showing a configuration of a decoding apparatus according to the present embodiment.

FIG. 28 is an explanatory diagram illustrating division from a tree block to a conversion unit according to the present embodiment.

FIG. 29 is a schematic block diagram showing a configuration of an encoding unit information decoding unit according to the present embodiment.

FIG. 30 is a flowchart showing an example of an operation of an encoding unit information decoding unit according to the present embodiment.

FIG. 31 is a schematic diagram showing an example of a syntax table of an encoding tree at a non-reference viewpoint according to the present embodiment.

FIG. 32 is an explanatory diagram illustrating an example of an effect of the present embodiment.
`
[Mode for carrying out the invention]
`
[0022]
(First Embodiment)
Hereinafter, a first embodiment of the present invention will be described in detail with reference to the accompanying drawings. FIG. 1 is a schematic block diagram showing a configuration of a video system according to the first embodiment of the present invention. In this figure, the video system includes an encoding device 1 and a display device D1. The display device D1 includes a decoding device 2 and a display unit D11. The encoding device 1 encodes the images of the input video (multi-viewpoint video) from a plurality of viewpoints, thereby generating encoded data, and transmits the encoded data to the decoding device 2. The decoding device 2 obtains an image by decoding the encoded data transmitted from the encoding device 1. The decoding device 2 inputs the acquired image to the display unit D11. The display unit D11 displays a multi-viewpoint image based on the image input from the decoding device 2.
`
[0023] FIG. 2 is a schematic block diagram showing a configuration of the encoding apparatus 1 according to the present embodiment. The encoding apparatus 1 includes an encoding information setting unit 100, a subtraction unit 101, a conversion quantization unit 102, an inverse quantization inverse transform unit 103, an addition unit 104, a frame memory 105, a unit shape storage unit 106, a predicted image generation unit 107, and an encoded data generation unit 108. The image represented by the video signal input to the encoding apparatus 1 is a multi-viewpoint image, i.e., a video composed of a plurality of viewpoint images each having a different viewpoint. Note that, between the viewpoint images constituting the multi-view image in the present embodiment, the viewpoint differs only in the horizontal direction. In addition, hereinafter, data representing an image or a video may simply be referred to as an image or a video.
`
[0024] The encoding information setting unit 100 determines whether each of the viewpoint images represented by the video signal input from the outside is a reference viewpoint image (Base_View) or a non-reference viewpoint image (Non-base_View). Here, a reference viewpoint image is a viewpoint image which is encoded so that it can be decoded by an encoding method for a single viewpoint, for example HEVC, without using parallax prediction, which is a prediction method for generating a predicted image with reference to another viewpoint image at the time of encoding. The non-reference viewpoint image is a viewpoint image for which parallax prediction can be used at the time of encoding. For example, the encoding information setting unit 100 sets the first input viewpoint image as the reference viewpoint image and the other viewpoint images as non-reference viewpoint images. For each of the viewpoint images, the encoding information setting unit 100 inputs information (also referred to as layer information) indicating whether the viewpoint image is a reference viewpoint image or a non-reference viewpoint image to the predicted image generation unit 107 and the encoded data generation unit 108. The encoding information setting unit 100 also inputs the input viewpoint image to the subtraction unit 101 and the predicted image generation unit 107.
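A minimal sketch of the rule described above (Python; the function name and the "base" / "non-base" labels are assumptions, not terms fixed by the patent): the first input viewpoint is treated as the reference viewpoint and every other viewpoint as a non-reference viewpoint that may use parallax prediction.

def assign_layer_info(viewpoint_ids):
    """Return {viewpoint_id: 'base' or 'non-base'}, with the first viewpoint as the reference viewpoint."""
    return {vid: ("base" if i == 0 else "non-base")
            for i, vid in enumerate(viewpoint_ids)}

if __name__ == "__main__":
    print(assign_layer_info(["view0", "view1", "view2"]))
    # {'view0': 'base', 'view1': 'non-base', 'view2': 'non-base'}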
`
`
`
[0025] The subtraction unit 101 subtracts the predicted image input by the predicted image generation unit 107 from the corresponding area of the viewpoint image input by the encoding information setting unit 100, and generates a difference image. The subtraction unit 101 inputs the generated difference image to the conversion quantization unit 102.
`
[0026] The conversion quantization unit 102 divides the difference image input from the subtraction unit 101 into conversion units (Transform Unit: TU), each serving as a unit for performing orthogonal transformation. In the division method, quad-division is performed recursively, and a flag (split_transform_flag) indicating whether or not quad-division is performed is generated for each conversion unit. The conversion quantization unit 102 performs orthogonal transformation on each of the divided conversion units. As the orthogonal transform, DCT coefficients are generated by a DCT (Discrete Cosine Transform), for example. Other methods, such as the FFT (Fast Fourier Transform), may be used instead of the DCT. The conversion quantization unit 102 quantizes the result of the orthogonal transformation (e.g., the DCT coefficients) to calculate quantization coefficients. The conversion quantization unit 102 inputs the quantization coefficients to the encoded data generation unit 108 and the inverse quantization inverse transform unit 103.
`
[0027] The inverse quantization inverse transform unit 103 performs inverse quantization on the quantization coefficients input from the conversion quantization unit 102, which is the opposite of the quantization performed by the conversion quantization unit 102, to generate inverse-quantized coefficients. The inverse quantization inverse transform unit 103 then generates a decoded difference image by performing an inverse transform, which is the inverse of the orthogonal transform of the conversion quantization unit 102, for example an inverse DCT. The inverse quantization inverse transform unit 103 inputs the generated decoded difference image to the addition unit 104.
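As a hedged illustration of units 102 and 103 (Python with NumPy; the 8 x 8 transform-unit size and the scalar quantization step are assumptions chosen for the example, not values specified by the patent), a separable 2-D DCT followed by uniform quantization, and the corresponding inverse operations, can be sketched as:

import numpy as np

def dct2_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] *= 1 / np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def transform_and_quantize(block, qstep):
    """Unit 102: separable 2-D DCT of one transform unit, then uniform quantization."""
    c = dct2_matrix(block.shape[0])
    coeff = c @ block @ c.T
    return np.round(coeff / qstep).astype(int)

def dequantize_and_inverse(qcoeff, qstep, size):
    """Unit 103: inverse quantization, then inverse DCT, giving the decoded difference block."""
    c = dct2_matrix(size)
    return c.T @ (qcoeff * qstep) @ c

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tu = rng.integers(-64, 64, size=(8, 8)).astype(float)      # one 8x8 transform unit of a difference image
    q = transform_and_quantize(tu, qstep=8.0)
    rec = dequantize_and_inverse(q, qstep=8.0, size=8)
    print("max reconstruction error:", np.abs(rec - tu).max())  # on the order of the quantization step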
`
[0028] The addition unit 104 adds the predicted image input by the predicted image generation unit 107 and the decoded difference image input by the inverse quantization inverse transform unit 103. The addition unit 104 thereby generates a reference image obtained by encoding and decoding the input viewpoint image (internal decoding). This reference image is input into the frame memory 105. The frame memory 105 stores the reference image input by the addition unit 104.
`
[0029] The unit shape storage unit 106 stores information on the shapes which can be used as the shape of a prediction unit (Prediction Unit), which is the unit in which the predicted image generation unit 107 generates a predicted image. The unit shape storage unit 106 in this embodiment stores a first set, which is a set of shapes that can be used when the viewpoint image is a reference viewpoint image, and a second set, which is a set of shapes that can be used when the viewpoint image is a non-reference viewpoint image. In this embodiment, the unit shape storage unit 106 stores the shape of a prediction unit by storing a division mode for dividing an encoding unit, described later, into prediction units.
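Because the concrete contents of the first and second sets are those shown in FIG. 5 (not reproduced here), the following Python sketch only assumes a plausible pair of sets: the second set, used for non-reference viewpoint images, additionally contains shapes that are long in the vertical direction, i.e. perpendicular to the horizontal parallax of this embodiment (the mode names nLx2N and nRx2N are assumptions, not taken from the patent).

UNIT_SHAPE_STORAGE = {
    # first set: division modes usable when the viewpoint image is a reference viewpoint image
    "first": ["2Nx2N", "2NxN", "Nx2N", "NxN"],
    # second set: division modes usable for a non-reference viewpoint image; the extra
    # entries (assumed here) are longer vertically than horizontally
    "second": ["2Nx2N", "2NxN", "Nx2N", "NxN", "nLx2N", "nRx2N"],
}

def usable_division_modes(layer_info):
    """Select the stored set according to the layer information (see [0029] and [0032])."""
    return UNIT_SHAPE_STORAGE["first" if layer_info == "base" else "second"]

if __name__ == "__main__":
    print(usable_division_modes("base"))      # the first set
    print(usable_division_modes("non-base"))  # the second set, with vertically long shapes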
`
[0030] The predicted image generation unit 107 divides the viewpoint image input by the encoding information setting unit 100 into prediction units, and generates a predicted image for each prediction unit by referring to the reference image stored in the frame memory 105. The predicted image generation unit 107 inputs the generated predicted image to the subtraction unit 101 and the addition unit 104. Details of the predicted image generation unit 107 will be described later. The encoded data generation unit 108 performs compression encoding, such as entropy coding, for reducing the number of bits of the input from each unit, and generates encoded data. The encoded data generation unit 108 outputs the generated encoded data to the outside of the encoding apparatus 1 (e.g., to the decoding apparatus 2).
`
[0031] FIG. 3 is a schematic block diagram showing a configuration of the predicted image generation unit 107 according to the present embodiment. The predicted image generation unit 107 includes a tree block division unit 171, an encoding unit division unit 172, a prediction unit division unit 173, an inter-screen prediction unit 174, an intra-screen prediction unit 175, and a selection unit 176. The tree block division unit 171 divides each frame of the viewpoint image into slices, and divides the frame into blocks of 64 pixels x 64 pixels called tree blocks. The encoding unit division unit 172 divides each tree block into units, called encoding units (Coding Unit), for which a prediction mode for generating a predicted image is determined. The possible sizes of an encoding unit range from 64 pixels x 64 pixels, when the tree block is divided 0 times, to 8 pixels x 8 pixels, when the tree block is divided 3 times. The encoding unit division unit 172 generates encoding units for all cases, from the 64 x 64 pixel encoding unit obtained by dividing the tree block 0 times to the 8 x 8 pixel encoding units obtained by dividing the tree block 3 times, and inputs them to the prediction unit division unit 173.
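A minimal sketch of this division (Python; the function and variable names are assumptions): a frame is tiled into 64 x 64 tree blocks, and each additional quad-split halves the coding-unit size, giving 64, 32, 16 and 8 pixels for 0 to 3 divisions.

def tree_blocks(width, height, tb_size=64):
    """Top-left coordinates of the tree blocks that tile a frame of width x height pixels."""
    return [(x, y) for y in range(0, height, tb_size) for x in range(0, width, tb_size)]

def cu_size(split_count, tb_size=64):
    """Coding-unit size after split_count quad-splits of a tree block (at most 3 in this embodiment)."""
    if not 0 <= split_count <= 3:
        raise ValueError("a tree block is divided at most 3 times in this embodiment")
    return tb_size >> split_count

if __name__ == "__main__":
    print(len(tree_blocks(1920, 1088)))    # 30 x 17 = 510 tree blocks
    print([cu_size(n) for n in range(4)])  # [64, 32, 16, 8]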
`
[0032] The prediction unit division unit 173 divides the input encoding unit into prediction units. When the encoding information setting unit 100 determines that the viewpoint image to be processed is a reference viewpoint image, the prediction unit division unit 173 divides the encoding unit into prediction units by all the division methods belonging to the first set stored in the unit shape storage unit 106. Similarly, when the encoding information setting unit 100 determines that the viewpoint image to be processed is a non-reference viewpoint image, the prediction unit division unit 173 divides the encoding unit into prediction units by all the division methods belonging to the second set stored in the unit shape storage unit 106. The prediction unit division unit 173 inputs all of the prediction units generated by the division into the inter-screen prediction unit 174. The prediction unit division unit 173 inputs, to the intra-screen prediction unit 175, only the prediction unit obtained when the encoding unit is divided 0 times and the prediction units obtained when the encoding unit is divided into four.
`
[0033] The inter-screen prediction unit 174 generates a predicted image of an input prediction unit using inter-screen prediction. Specifically, the inter-screen prediction unit 174 searches the reference images stored in the frame memory 105 for the image closest to the input prediction unit, and sets that image as the predicted image. Note that, when the viewpoint image to which the prediction unit belongs is a reference viewpoint image, only motion prediction can be used as the inter-screen prediction, and parallax prediction cannot be used. Accordingly, in that case, the inter-screen prediction unit 174 sets, as the predicted image, the image closest to the prediction unit among the other frames of the same viewpoint image within the reference images stored in the frame memory 105.
`
[0034] When the viewpoint image to which the prediction unit belongs is a non-reference viewpoint image, both motion prediction and parallax prediction can be used as inter-screen prediction. Accordingly, in that case, the inter-screen prediction unit 174 sets, as the predicted image, the image closest to the prediction unit among all the reference images stored in the frame memory 105 (including viewpoint images based on other viewpoints). The inter-screen prediction unit 174 inputs the predicted image and the reference information for the predicted image (the motion vector (also referred to as a disparity vector) and the index indicating the reference image) to the selection unit 176.
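The search in [0033] and [0034] can be illustrated by the following Python sketch (the helper names, the SAD criterion and the small search range are assumptions made for brevity): for a reference viewpoint image only reference frames of the same viewpoint are searched (motion prediction), while for a non-reference viewpoint image reference images of other viewpoints are searched as well (parallax prediction).

import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def search_prediction(pu, pu_pos, references, layer_info, search_range=4):
    """references: list of (is_same_viewpoint, frame). Returns (best_sad, ref_index, (dx, dy))."""
    x0, y0 = pu_pos
    h, w = pu.shape
    best = None
    for idx, (same_view, frame) in enumerate(references):
        if layer_info == "base" and not same_view:
            continue                      # parallax prediction is not allowed for the reference viewpoint
        for dy in range(-search_range, search_range + 1):
            for dx in range(-search_range, search_range + 1):
                y, x = y0 + dy, x0 + dx
                if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                    continue
                cost = sad(pu, frame[y:y + h, x:x + w])
                if best is None or cost < best[0]:
                    best = (cost, idx, (dx, dy))
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    ref_same_view = rng.integers(0, 255, (64, 64))
    ref_other_view = rng.integers(0, 255, (64, 64))
    pu = ref_other_view[10:26, 14:30]     # a 16x16 block best explained by the other viewpoint
    refs = [(True, ref_same_view), (False, ref_other_view)]
    print(search_prediction(pu, (12, 12), refs, "non-base"))  # finds the other-view match (SAD 0)
    print(search_prediction(pu, (12, 12), refs, "base"))      # restricted to the same viewpoint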
`
[0035] The intra-screen prediction unit 175 generates the predicted image of the input prediction unit using intra-screen prediction. At this time, the intra-screen prediction unit 175 uses, as the mode indicating the prediction direction or the like, the mode for which the generated predicted image is closest to the input prediction unit. The intra-screen prediction unit 175 inputs the generated predicted image and the intra prediction mode indicating the mode used in generating the predicted image to the selection unit 176.
`
[0036] Based on the information input by the inter-screen prediction unit 174 and the intra-screen prediction unit 175, the selection unit 176 selects the number of times each tree block is divided into encoding units, the prediction mode of each encoding unit, and the division method of each encoding unit into prediction units. Although the details of the selection method are omitted here, the selection unit 176 receives, from the inter-screen prediction unit 174 and the intra-screen prediction unit 175, for all patterns of division from the tree block into encoding units and all patterns of the division methods into prediction units, the predicted images obtained when intra-screen prediction is used and the predicted images obtained when inter-screen prediction is used for each prediction unit. Referring to these, the selection unit 176 can select, for example, the number of times of division from the tree block into encoding units, the prediction mode of each encoding unit, and the division method (division mode) of each encoding unit into prediction units so that the coding efficiency becomes highest.
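The paragraph above leaves the selection criterion at "so that the coding efficiency becomes highest". A common approximation, used here purely as an assumed illustration and not as the patent's method, is a rate-distortion style cost that adds a distortion term (SAD) and a crude rate penalty per prediction unit, keeping the cheapest combination:

import numpy as np

def sad(a, b):
    return float(np.abs(a.astype(int) - b.astype(int)).sum())

def select_best(original, candidates, lambda_bits=10.0):
    """candidates: list of (description, predicted_image, number_of_prediction_units)."""
    best = None
    for desc, pred, n_pu in candidates:
        cost = sad(original, pred) + lambda_bits * n_pu   # distortion + crude rate term
        if best is None or cost < best[0]:
            best = (cost, desc)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    orig = rng.integers(0, 255, (32, 32))
    candidates = [("2Nx2N, inter-screen", orig + rng.integers(-3, 4, (32, 32)), 1),
                  ("NxN, intra-screen",   orig + rng.integers(-2, 3, (32, 32)), 4)]
    print(select_best(orig, candidates))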
`
[0037] The selection unit 176 inputs the selected content, and the intra prediction mode or reference information corresponding to that content, to the encoded data generation unit 108 as the division / mode information. Further, the selection unit 176 inputs the predicted image corresponding to the selected content to the subtraction unit 101 and the addition unit 104. Here, the prediction modes include an intra-screen prediction mode (ModeINTRA) and an inter-screen prediction mode (ModeINTER). The intra-screen prediction mode is a mode in which intra-screen prediction is used for generating the predicted image of each prediction unit in the encoding unit. The inter-screen prediction mode is a mode in which inter-screen prediction is used for generating the predicted image of each prediction unit in the encoding unit. Each of the prediction modes is converted into a value of pred_mode_flag. In addition, the division mode for dividing the encoding unit into prediction units is converted into a value of part_mode.
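The numeric values actually assigned to pred_mode_flag and part_mode in this embodiment are those of the tables and syntax of FIGS. 6 to 8, which are not reproduced here; the Python mapping below only assumes an HEVC-like numbering to illustrate the conversion described in [0037].

PRED_MODE_FLAG = {"ModeINTER": 0, "ModeINTRA": 1}           # assumed values, not from the patent
PART_MODE = {"2Nx2N": 0, "2NxN": 1, "Nx2N": 2, "NxN": 3}    # assumed values, not from the patent

def encode_mode_info(prediction_mode, division_mode):
    """Convert the selected prediction mode and division mode into their flag values."""
    return PRED_MODE_FLAG[prediction_mode], PART_MODE[division_mode]

if __name__ == "__main__":
    print(encode_mode_info("ModeINTER", "Nx2N"))   # (0, 2) under the assumed numbering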
`
[0038] FIG. 4 is an explanatory diagram illustrating division from a tree block according to this embodiment into encoding units. As shown in FIG. 5, when the value of the encoding unit division flag (split_coding_unit_flag) is "1", the tree block TB is divided