`
(19) World Intellectual Property Organization
International Bureau

(43) International Publication Date: 5 May 2011 (05.05.2011)
(10) International Publication Number: WO 2011/053050 A2

(51) International Patent Classification: H04N 7/24 (2011.01)

(21) International Application Number: PCT/KR2010/007537

(22) International Filing Date: 29 October 2010 (29.10.2010)

(25) Filing Language: English

(26) Publication Language: English

(30) Priority Data: 10-2009-0104421, 30 October 2009 (30.10.2009), KR

(71) Applicant (for all designated States except US): SAMSUNG ELECTRONICS CO., LTD. [KR/KR]; 416, Maetan-dong, Yeongtong-gu, Suwon-si, Gyeonggi-do 442-742 (KR).

(72) Inventor: CHEON, Min-Su; #601, 337-65 Woncheon-dong, Yeongtong-gu, Suwon-si, Gyeonggi-do 443-822 (KR).

(74) Agent: Y.P. LEE, MOCK & PARTNERS; Koryo Building, 1575-1 Seocho-dong, Seocho-gu, Seoul 137-875 (KR).

(81) Designated States (unless otherwise indicated, for every kind of national protection available): AE, AG, AL, AM, AO, AT, AU, AZ, BA, BB, BG, BH, BR, BW, BY, BZ, CA, CH, CL, CN, CO, CR, CU, CZ, DE, DK, DM, DO, DZ, EC, EE, EG, ES, FI, GB, GD, GE, GH, GM, GT, HN, HR, HU, ID, IL, IN, IS, JP, KE, KG, KM, KN, KP, KZ, LA, LC, LK, LR, LS, LT, LU, LY, MA, MD, ME, MG, MK, MN, MW, MX, MY, MZ, NA, NG, NI, NO, NZ, OM, PE, PG, PH, PL, PT, RO, RS, RU, SC, SD, SE, SG, SK, SL, SM, ST, SV, SY, TH, TJ, TM, TN, TR, TT, TZ, UA, UG, US, UZ, VC, VN, ZA, ZM, ZW.

(84) Designated States (unless otherwise indicated, for every kind of regional protection available): ARIPO (BW, GH, GM, KE, LR, LS, MW, MZ, NA, SD, SL, SZ, TZ, UG, ZM, ZW), Eurasian (AM, AZ, BY, KG, KZ, MD, RU, TJ, TM), European (AL, AT, BE, BG, CH, CY, CZ, DE, DK, EE, ES, FI, FR, GB, GR, HR, HU, IE, IS, IT, LT, LU, LV, MC, MK, MT, NL, NO, PL, PT, RO, RS, SE, SI, SK, SM, TR), OAPI (BF, BJ, CF, CG, CI, CM, GA, GN, GQ, GW, ML, MR, NE, SN, TD, TG).

Published: without international search report and to be republished upon receipt of that report (Rule 48.2(g))

(54) Title: METHOD AND APPARATUS FOR ENCODING AND DECODING CODING UNIT OF PICTURE BOUNDARY

(57) Abstract: A method and apparatus for encoding an image is provided. An image coding unit, including a region that deviates from a boundary of a current picture, is divided to obtain a coding unit having a smaller size than the size of the image coding unit, and encoding is performed only in a region that does not deviate from the boundary of the current picture. A method and apparatus for decoding an image encoded by the method and apparatus for encoding an image is also provided.
`
`[Fig. 12b]
`
`
`
`
`
`
`Description
Title of Invention: METHOD AND APPARATUS FOR ENCODING AND DECODING CODING UNIT OF PICTURE BOUNDARY

Technical Field

Apparatuses and methods consistent with the exemplary embodiments relate to encoding and decoding an image, and more particularly, to a method and apparatus for encoding and decoding an image coding unit of a picture boundary.

Background Art

In image compression methods, such as Moving Pictures Experts Group (MPEG)-1, MPEG-2, and MPEG-4 H.264/MPEG-4 Advanced Video Coding (AVC), an image is divided into blocks having a predetermined size so as to encode the image. Then, each of the blocks is prediction-encoded using inter prediction or intra prediction.
`
`Disclosure of Invention
`
`Solution to Problem
`
`The exemplary embodiments provide a method and apparatus for encoding and
`decoding a coding unit of a picture boundary.
`The exemplary embodiments also provide a computer readable recording medium
`having recorded thereon a program for executing the method of encoding and decoding
`a coding unit of a picture boundary.
`Advantageous Effects of Invention
According to the present invention, a block at a picture boundary can be encoded efficiently without incurring overhead.
`Brief Description of Drawings
The above and other aspects will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings, in which:

FIG. 1 is a block diagram of an apparatus for encoding an image, according to an exemplary embodiment;

FIG. 2 is a block diagram of an apparatus for decoding an image, according to an exemplary embodiment;

FIG. 3 illustrates hierarchical coding units according to an exemplary embodiment;

FIG. 4 is a block diagram of an image encoder based on a coding unit, according to an exemplary embodiment;

FIG. 5 is a block diagram of an image decoder based on a coding unit, according to an exemplary embodiment;

FIG. 6 illustrates a maximum coding unit, a sub coding unit, and a prediction unit, according to an exemplary embodiment;

FIG. 7 illustrates a coding unit and a transformation unit, according to an exemplary embodiment;

FIGS. 8A and 8B illustrate division shapes of a coding unit, a prediction unit, and a frequency transformation unit, according to an exemplary embodiment;

FIG. 9 is a block diagram of an apparatus for encoding an image, according to another exemplary embodiment;

FIGS. 10A and 10B illustrate a coding unit of a picture boundary, according to an exemplary embodiment;

FIGS. 11A and 11B illustrate a method of dividing a coding unit of a picture boundary, according to an exemplary embodiment;

FIGS. 12A and 12B illustrate a method of dividing a coding unit of a picture boundary, according to another exemplary embodiment;

FIGS. 13A and 13B illustrate an intra prediction method according to an exemplary embodiment;

FIG. 14 illustrates indexing of a maximum coding unit, according to an exemplary embodiment;

FIG. 15 is a flowchart illustrating a method of encoding an image, according to an exemplary embodiment;

FIG. 16 is a block diagram of an apparatus for decoding an image, according to another exemplary embodiment;

FIG. 17 is a flowchart illustrating a method of decoding an image, according to an exemplary embodiment;

FIGS. 18A through 18G illustrate prediction modes in a first coding unit including a region that deviates from a boundary of a current picture;

FIG. 19 is a flowchart illustrating a method of encoding an image, according to another exemplary embodiment;

FIGS. 20A and 20B illustrate a method of encoding a coding unit of a picture boundary, according to an exemplary embodiment;

FIG. 21 is a flowchart illustrating a method of decoding an image, according to another exemplary embodiment;

FIG. 22 is a flowchart illustrating a method of encoding an image, according to another exemplary embodiment;

FIGS. 23A and 23B illustrate a method of encoding a coding unit of a picture boundary, according to another exemplary embodiment; and

FIG. 24 is a flowchart illustrating a method of decoding an image, according to another exemplary embodiment.
`
`
`
`
`Best Mode for Carrying out the Invention
According to an aspect of the exemplary embodiments, there is provided a method of encoding an image, the method including: determining whether a first coding unit includes a region that deviates from a boundary of a current picture; dividing the first coding unit to obtain at least one second coding unit based on a result of the determining; and encoding only a second coding unit that does not deviate from the boundary of the current picture, from among the at least one second coding unit generated as a result of the dividing.
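For illustration only (this is not part of the specification), the following sketch shows one way such a division could be carried out: a square first coding unit that crosses the picture boundary is recursively quartered, and only second coding units lying entirely inside the picture are kept. The function names, the quadtree split, and the minimum size are assumptions.

```python
# Illustrative sketch (not the specification's algorithm): recursively split a
# first coding unit that crosses the picture boundary and keep only the second
# coding units that lie entirely inside the picture.

def crosses_boundary(x, y, size, pic_width, pic_height):
    """True if the square coding unit at (x, y) deviates from the picture boundary."""
    return x + size > pic_width or y + size > pic_height

def inside_units(x, y, size, pic_width, pic_height, min_size=8):
    """Return the coding units that do not deviate from the boundary."""
    if x >= pic_width or y >= pic_height:
        return []                      # entirely outside: nothing to encode
    if not crosses_boundary(x, y, size, pic_width, pic_height):
        return [(x, y, size)]          # entirely inside: encode as-is
    if size <= min_size:
        return []                      # cannot split further
    half = size // 2                   # quadtree split into four second coding units
    units = []
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        units += inside_units(x + dx, y + dy, half, pic_width, pic_height, min_size)
    return units

# Example: a 64x64 coding unit at the bottom edge of a 1920x1080 picture.
print(inside_units(1856, 1024, 64, 1920, 1080))
```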
When the encoding of the second coding unit that does not deviate from the boundary of the current picture is performed, information about the dividing of the first coding unit is not encoded.

The determining of whether the first coding unit includes the region that deviates from the boundary of the current picture includes determining whether a left or right boundary of the first coding unit deviates from a left or right boundary of the current picture.

The determining of whether the first coding unit includes the region that deviates from the boundary of the current picture includes determining whether an upper or lower boundary of the first coding unit deviates from an upper or lower boundary of the current picture.
`
According to another aspect of the exemplary embodiments, there is provided a method of decoding an image, the method including: determining whether a first coding unit includes a region that deviates from a boundary of a current picture; parsing data regarding a second coding unit that does not deviate from the boundary of the current picture, from among at least one second coding unit generated by dividing the first coding unit based on a result of the determining; and decoding data regarding the second coding unit that does not deviate from the boundary of the current picture.
According to another aspect of the exemplary embodiments, there is provided an apparatus for encoding an image, the apparatus including: a determiner determining whether a first coding unit includes a region that deviates from a boundary of a current picture; a controller dividing the first coding unit to obtain at least one second coding unit based on a result of the determining; and an encoder encoding only a second coding unit that does not deviate from the boundary of the current picture, from among the at least one second coding unit generated as a result of the dividing.

According to another aspect of the exemplary embodiments, there is provided an apparatus for decoding an image, the apparatus including: a determiner determining whether a first coding unit includes a region that deviates from a boundary of a current picture; a parser parsing data regarding a second coding unit that does not deviate from
`
`
`
the boundary of the current picture, from among at least one second coding unit generated by dividing the first coding unit based on a result of the determining; and a decoder decoding data regarding the second coding unit that does not deviate from the boundary of the current picture.
`According to another aspect of the exemplary embodiments, there is provided a
`computer readable recording medium having embodied thereon a program for
`
`executing the method of encoding and decoding an image.
`
Mode for the Invention
`
The exemplary embodiments will now be described more fully with reference to the accompanying drawings, in which exemplary embodiments are shown. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. In the present specification, an “image” may denote a still image for a video or a moving image, that is, the video itself.
`
`FIG. 1 is a block diagram of an apparatus for encoding an image 100, according to an
`exemplary embodiment.
`Referring to FIG. 1, the apparatus for encoding an image 100 includes a maximum
`coding unit divider 110, an encoding depth determiner 120, an image data encoder 130,
`and an encoding information encoder 140.
`
`The maximum coding unit divider 110 can divide a current picture or slice based on
`a maximum coding unit that is a coding unit of the maximum size. That is, the
`maximum coding unit divider 110 can divide the current picture or slice to obtain at
`least one maximum coding unit.
According to an exemplary embodiment, a coding unit may be represented using a maximum coding unit and a depth. As described above, the maximum coding unit indicates a coding unit having the maximum size from among coding units of the current picture, and the depth indicates a degree obtained by hierarchically decreasing the coding unit. As a depth increases, a coding unit may decrease from a maximum coding unit to a minimum coding unit, wherein a depth of the maximum coding unit is defined as a minimum depth and a depth of the minimum coding unit is defined as a maximum depth. Since the size of a coding unit according to depths decreases from a maximum coding unit as a depth increases, a sub coding unit of a kth depth may include a plurality of sub coding units of a (k+n)th depth (k and n are integers equal to or greater than 1).
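As a purely illustrative aid (the names and the rule that each depth increase halves the width and height, i.e., a quartering into four sub coding units, are assumptions drawn from the description above), the size relation between depths can be sketched as follows.

```python
# Illustrative sketch: size of a coding unit at a given depth, assuming the
# width and height are halved each time the depth increases by one.

def coding_unit_size(max_size, depth):
    """Edge length of a coding unit at 'depth' inside a max_size maximum coding unit."""
    return max_size >> depth

def sub_units_per_unit(n):
    """A coding unit of depth k contains 4**n coding units of depth k + n."""
    return 4 ** n

# A 64x64 maximum coding unit (depth 0) down to a 4x4 minimum coding unit (depth 4).
print([coding_unit_size(64, d) for d in range(5)])   # [64, 32, 16, 8, 4]
print(sub_units_per_unit(2))                         # 16 sub coding units two depths down
```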
`
As the size of a picture to be encoded increases, encoding an image in a greater coding unit may cause a higher image compression ratio. However, if a greater coding unit is fixed, an image may not be efficiently encoded by reflecting continuously changing image characteristics.
`
`
`
For example, when a smooth area such as the sea or sky is encoded, the greater a coding unit is, the more a compression ratio may increase. However, when a complex area such as people or buildings is encoded, the smaller a coding unit is, the more a compression ratio may increase.
Accordingly, according to an exemplary embodiment, a maximum image coding unit and a maximum depth having different sizes are set for each picture or slice. Since a maximum depth denotes the maximum number of times by which a coding unit may decrease, the size of each minimum coding unit included in a maximum image coding unit may be variably set according to a maximum depth.
The encoding depth determiner 120 determines a maximum depth. The maximum depth may be determined based on calculation of Rate-Distortion (R-D) cost. The maximum depth may be determined differently for each picture or slice or for each maximum coding unit. The determined maximum depth is provided to the encoding information encoder 140, and image data according to maximum coding units is provided to the image data encoder 130.
`
The maximum depth denotes a coding unit having the smallest size, which may be included in a maximum coding unit, i.e., a minimum coding unit. In other words, a maximum coding unit may be divided into sub coding units having different sizes according to different depths. This is described in detail later with reference to FIGS. 8A and 8B. In addition, the sub coding units having different sizes, which are included in the maximum coding unit, may be prediction- or frequency-transformed based on processing units having different sizes (values of pixel domains may be transformed into values of frequency domains, for example, by performing discrete cosine transformation (DCT)). In other words, the apparatus 100 for encoding an image may perform a plurality of processing operations for image encoding based on processing units having various sizes and various shapes. To encode image data, processing operations such as prediction, frequency transformation, and entropy encoding are performed, wherein processing units having the same size may be used for every operation or processing units having different sizes may be used for every operation. For example, the apparatus 100 for encoding an image may select a processing unit that is different from a predetermined coding unit to predict the predetermined coding unit.
`
When the size of a coding unit is 2Nx2N (where N is a positive integer), processing units for prediction may be 2Nx2N, 2NxN, Nx2N, and NxN. In other words, motion prediction may be performed based on a processing unit having a shape whereby at least one of the height and width of a coding unit is equally divided by two. Hereinafter, a processing unit, which is the base of prediction, is defined as a ‘prediction unit’.
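For illustration only, the four prediction-unit partitions named above can be listed as follows; the function name is hypothetical.

```python
# Illustrative sketch: the prediction-unit partitions of a 2Nx2N coding unit,
# obtained by halving at least one of its height and width.

def prediction_unit_shapes(n):
    """Return (width, height) candidates for a 2Nx2N coding unit."""
    return {
        "2Nx2N": (2 * n, 2 * n),
        "2NxN":  (2 * n, n),
        "Nx2N":  (n, 2 * n),
        "NxN":   (n, n),
    }

print(prediction_unit_shapes(16))  # partitions of a 32x32 coding unit
```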
`
`
`
`WO 2011/053050
`
`PCT/KR2010/007537
`
A prediction mode may be at least one of an intra mode, an inter mode, and a skip mode, and a specific prediction mode may be performed for only a prediction unit having a specific size or shape. For example, the intra mode may be performed for only prediction units having the sizes of 2Nx2N and NxN, of which the shape is a square. Further, the skip mode may be performed for only a prediction unit having the size of 2Nx2N. If a plurality of prediction units exist in a coding unit, the prediction mode with the least encoding errors may be selected after performing prediction for every prediction unit.
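For illustration only, the mode restrictions described above can be sketched as follows; the assumption that the inter mode is available for every shape is not stated in the specification and is made here only to complete the example.

```python
# Illustrative sketch of the mode restrictions described above: intra prediction
# only for square prediction units (2Nx2N, NxN), skip only for 2Nx2N; inter
# prediction is assumed (for this sketch) to be allowed for every shape.

def allowed_modes(shape):
    """shape is one of '2Nx2N', '2NxN', 'Nx2N', 'NxN'."""
    modes = ["inter"]
    if shape in ("2Nx2N", "NxN"):   # square shapes
        modes.append("intra")
    if shape == "2Nx2N":
        modes.append("skip")
    return modes

for s in ("2Nx2N", "2NxN", "Nx2N", "NxN"):
    print(s, allowed_modes(s))
```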
Alternatively, the apparatus 100 for encoding an image may perform frequency transformation on image data based on a processing unit having a different size from a coding unit. For the frequency transformation in the coding unit, the frequency transformation may be performed based on a data unit having a size equal to or smaller than that of the coding unit. Hereinafter, a processing unit, which is the base of frequency transformation, is defined as a ‘transformation unit’.

The encoding depth determiner 120 may determine sub coding units included in a maximum coding unit using R-D optimization based on a Lagrangian multiplier. In other words, the encoding depth determiner 120 may determine which shape a plurality of sub coding units divided from the maximum coding unit have, wherein the plurality of sub coding units have different sizes according to their depths. The image data encoder 130 outputs a bitstream by encoding the maximum coding unit based on the division shapes determined by the encoding depth determiner 120.
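For illustration only, the Lagrangian rate-distortion selection referred to above can be sketched as follows; the cost form J = D + lambda * R, the candidate division shapes, and the numbers are illustrative assumptions, not values from the specification.

```python
# Illustrative sketch of R-D optimization with a Lagrangian multiplier: among
# candidate division shapes, pick the one with the smallest cost J = D + lambda * R
# (candidates and figures are made up for the example).

def rd_cost(distortion, rate_bits, lagrangian):
    return distortion + lagrangian * rate_bits

def best_division(candidates, lagrangian):
    """candidates: list of (name, distortion, rate_bits)."""
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lagrangian))

candidates = [
    ("no split",   1500.0,  90),   # hypothetical distortion / rate figures
    ("quad split",  900.0, 260),
]
print(best_division(candidates, lagrangian=4.0))
```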
`
The encoding information encoder 140 encodes information about an encoding mode of the maximum coding unit determined by the encoding depth determiner 120. In other words, the encoding information encoder 140 outputs a bitstream by encoding information about a division shape of the maximum coding unit, information about the maximum depth, and information about an encoding mode of a sub coding unit for each depth. The information about the encoding mode of the sub coding unit may include information about a prediction unit of the sub coding unit, information about a prediction mode for each prediction unit, and information about a transformation unit of the sub coding unit.

Information about division shapes of the maximum coding unit may be information that indicates whether each coding unit will be divided or not. For example, when the maximum coding unit is divided and encoded, information that indicates whether the maximum coding unit will be divided or not is encoded, and even when a sub coding unit that is generated by dividing the maximum coding unit is sub-divided and encoded, information that indicates whether each sub coding unit will be divided or not is encoded. Information that indicates division may be in the form of flag information that indicates division.
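For illustration only, one way such division flags could be emitted is sketched below; the depth-first order, the implicit flag at the minimum coding unit, and all names are assumptions rather than the specification's bitstream syntax.

```python
# Illustrative sketch (not the specification's syntax): emit one split flag per
# coding unit, depth-first, following a given division shape.

def write_split_flags(division, depth=0, max_depth=4, flags=None):
    """division is either a leaf marker (None) or a list of four sub-divisions."""
    if flags is None:
        flags = []
    if depth == max_depth:
        return flags                    # minimum coding unit: split flag is implicit
    if division is None:
        flags.append(0)                 # this coding unit is not divided
    else:
        flags.append(1)                 # divided into four sub coding units
        for sub in division:
            write_split_flags(sub, depth + 1, max_depth, flags)
    return flags

# A maximum coding unit whose first quadrant is divided once more.
shape = [[None, None, None, None], None, None, None]
print(write_split_flags(shape, max_depth=2))  # [1, 1, 0, 0, 0]
```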
`
`
`
`WO 2011/053050
`
`PCT/KR2010/007537
`
Since sub coding units having different sizes exist for each maximum coding unit and information about an encoding mode must be determined for each sub coding unit, information about at least one encoding mode may be determined for one maximum coding unit.

The apparatus 100 for encoding an image may generate sub coding units by equally dividing both the height and width of a maximum coding unit by two according to an increase of depth. That is, when the size of a coding unit of a kth depth is 2Nx2N, the size of a coding unit of a (k+1)th depth is NxN.
Accordingly, the apparatus 100 for encoding an image according to an exemplary embodiment may determine an optimal division shape for each maximum coding unit based on sizes of maximum coding units and a maximum depth in consideration of image characteristics. By variably controlling the size of a maximum coding unit in consideration of image characteristics and encoding an image through division of a maximum coding unit into sub coding units of different depths, images having various resolutions may be more efficiently encoded.
FIG. 2 is a block diagram of an apparatus 200 for decoding an image, according to an exemplary embodiment.

Referring to FIG. 2, the apparatus 200 for decoding an image includes an image data acquisition unit 210, an encoding information extractor 220, and an image data decoder 230.
`
The image data acquisition unit 210 acquires image data according to maximum coding units by parsing a bitstream received by the apparatus 200 for decoding an image and outputs the image data to the image data decoder 230. The image data acquisition unit 210 may extract information about a maximum coding unit of a current picture or slice from a header of the current picture or slice. In other words, the image data acquisition unit 210 divides the bitstream in the maximum coding unit so that the image data decoder 230 may decode the image data according to maximum coding units.
`
The encoding information extractor 220 extracts information about a maximum coding unit, a maximum depth, a division shape of the maximum coding unit, and an encoding mode of sub coding units from the header of the current picture by parsing the bitstream received by the apparatus 200 for decoding an image. The information about a division shape and the information about an encoding mode are provided to the image data decoder 230.

The information about a division shape of the maximum coding unit may include information about sub coding units having different sizes according to depths included in the maximum coding unit. As described above, the information about a division shape of the maximum coding unit may be information that indicates division, encoded for each coding unit, for example, flag information.
`
`
`
`
The information about an encoding mode may include information about a prediction unit according to a sub coding unit, information about a prediction mode, and information about a transformation unit.
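Mirroring the flag-writing sketch above, and again for illustration only, the division shape of a maximum coding unit could be rebuilt from parsed flag information as follows (same assumptions about the depth-first order and implicit flags).

```python
# Illustrative sketch (mirror of the flag-writing sketch above): rebuild the
# division shape of a maximum coding unit from a depth-first list of split flags.

def read_division(flags, depth=0, max_depth=4, pos=0):
    """Return (division, next_position); a leaf is None, a split is a list of four."""
    if depth == max_depth:
        return None, pos                      # minimum coding unit: no flag present
    if flags[pos] == 0:
        return None, pos + 1                  # not divided
    subs = []
    pos += 1
    for _ in range(4):                        # four sub coding units, depth-first
        sub, pos = read_division(flags, depth + 1, max_depth, pos)
        subs.append(sub)
    return subs, pos

division, _ = read_division([1, 1, 0, 0, 0], max_depth=2)
print(division)   # [[None, None, None, None], None, None, None]
```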
`
The image data decoder 230 restores the current picture by decoding image data of every maximum coding unit based on the information extracted by the encoding information extractor 220.

The image data decoder 230 may decode sub coding units included in a maximum coding unit based on the information about a division shape of the maximum coding unit. A decoding process may include a motion prediction process including intra prediction and motion compensation, and an inverse frequency transformation process.

The image data decoder 230 may perform intra prediction or inter prediction based on information about a prediction unit according to sub coding units and information about a prediction mode in order to predict a sub coding unit. The image data decoder 230 may also perform inverse frequency transformation for each sub coding unit based on information about a transformation unit of a sub coding unit.
`
`FIG. 3 illustrates hierarchical coding units according to an exemplary embodiment.
Referring to FIG. 3, the hierarchical coding units according to an exemplary embodiment may include coding units whose width x height dimensions are 64x64, 32x32, 16x16, 8x8, and 4x4. Besides these coding units having perfect square shapes, coding units whose width x height dimensions are 64x32, 32x64, 32x16, 16x32, 16x8, 8x16, 8x4, and 4x8 may also exist.

Referring to FIG. 3, for image data 310 whose resolution is 1920x1080, the size of a maximum coding unit is set to 64x64, and a maximum depth is set to 2. For image data 320 whose resolution is 1920x1080, the size of a maximum coding unit is set to 64x64, and a maximum depth is set to 3. For image data 330 whose resolution is 352x288, the size of a maximum coding unit is set to 16x16, and a maximum depth is set to 1.

When the resolution is high or the amount of data is great, it is preferable that the maximum size of a coding unit is relatively great to increase a compression ratio and exactly reflect image characteristics. Accordingly, for the image data 310 and 320 having higher resolution than the image data 330, 64x64 may be selected as the size of a maximum coding unit.

A maximum depth indicates the total number of layers in the hierarchical coding units. Since the maximum depth of the image data 310 is 2, a coding unit 315 of the image data 310 may include a maximum coding unit whose longer axis size is 64 and sub coding units whose longer axis sizes are 32 and 16, according to an increase in depth.
`
`
`
`
On the other hand, since the maximum depth of the image data 330 is 1, a coding unit 335 of the image data 330 may include a maximum coding unit whose longer axis size is 16 and coding units whose longer axis sizes are 8 and 4, according to an increase in depth.

However, since the maximum depth of the image data 320 is 3, a coding unit 325 of the image data 320 may include a maximum coding unit whose longer axis size is 64 and sub coding units whose longer axis sizes are 32, 16, 8, and 4 according to an increase in depth. Since an image is encoded based on a smaller sub coding unit as the depth increases, the exemplary embodiment is suitable for encoding an image including more minute scenes.
FIG. 4 is a block diagram of an image encoder 400 based on a coding unit, according to an exemplary embodiment.

An intra prediction unit 410 performs intra prediction on prediction units of the intra mode in a current frame 405, and a motion estimator 420 and a motion compensator 425 perform inter prediction and motion compensation on prediction units of the inter mode using the current frame 405 and a reference frame 495.
`
`Residual values are generated based on the prediction units output from the intra
`prediction unit 410, the motion estimator 420, and the motion compensator 425, and
`the generated residual values are output as quantized transform coefficients by passing
`through a frequency transformation unit 430 and a quantizer 440.
`The quantized transform coefficients are restored to residual values by passing
`
`through an inverse-quantizer 460 and an inverse frequency transformation unit 470,
`and the restored residual values are post-processed by passing through a deblocking
`unit 480 and a loop filtering unit 490 and output as the reference frame 495. The
`quantized transform coefficients may be output as a bitstream 455 by passing through
`an entropy encoder 450.
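For illustration only, the transform-and-quantize path and its inverse can be sketched numerically as follows, assuming a plain orthonormal DCT-II and a single uniform quantization step; none of the function names, the quantization step, or the numbers come from the specification.

```python
# Illustrative sketch of the transform/quantize path and its inverse, assuming a
# plain DCT-II and uniform quantization (names and the quantization step are
# assumptions made for this example).
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] *= np.sqrt(1 / n) / np.sqrt(2 / n)   # DC row gets its own scale factor
    return c * np.sqrt(2 / n)

def encode_block(residual, qstep):
    c = dct_matrix(residual.shape[0])
    coeffs = c @ residual @ c.T            # 2D frequency transformation
    return np.round(coeffs / qstep)        # quantized transform coefficients

def decode_block(levels, qstep):
    c = dct_matrix(levels.shape[0])
    coeffs = levels * qstep                # inverse quantization
    return c.T @ coeffs @ c                # inverse frequency transformation

residual = np.arange(16, dtype=float).reshape(4, 4) - 8.0
levels = encode_block(residual, qstep=2.0)
print(np.round(decode_block(levels, qstep=2.0), 1))   # approximate restored residual
```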
`
To perform encoding based on an encoding method according to an exemplary embodiment, components of the image encoder 400, i.e., the intra prediction unit 410, the motion estimator 420, the motion compensator 425, the frequency transformation unit 430, the quantizer 440, the entropy encoder 450, the inverse-quantizer 460, the inverse frequency transformation unit 470, the deblocking unit 480, and the loop filtering unit 490, perform image encoding processes based on a maximum coding unit, a sub coding unit according to depths, a prediction unit, and a transformation unit.
`FIG. 5 is a block diagram of an image decoder 500 based on a coding unit, according
`to an exemplary embodiment.
`A bitstream 505 passes through a parser 510 so that encoded image data to be
`decoded and encoding information necessary for decoding are parsed. The encoded
`image data is output as inverse-quantized data by passing through an entropy decoder
`
`
520 and an inverse-quantizer 530 and restored to residual values by passing through an inverse frequency transformation unit 540. The residual values are restored according to coding units by being added to an intra prediction result of an intra prediction unit 550 or a motion compensation result of a motion compensator 560. The restored coding units are used for prediction of next coding units or a next picture by passing through a deblocking unit 570 and a loop filtering unit 580.
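For illustration only, the reconstruction step described above (restored residual added to the intra prediction or motion compensation result) can be sketched as follows, assuming 8-bit samples and clipping to the valid range; the names are illustrative.

```python
# Illustrative sketch of the reconstruction step: a restored residual block is
# added to the prediction result and clipped to the sample range (8-bit assumed).
import numpy as np

def reconstruct(prediction, residual, bit_depth=8):
    """Restore a coding unit from its prediction and its restored residual."""
    restored = prediction.astype(np.int32) + residual.astype(np.int32)
    return np.clip(restored, 0, (1 << bit_depth) - 1).astype(np.uint8)

prediction = np.full((4, 4), 120, dtype=np.uint8)       # e.g. an intra prediction result
residual = np.array([[-3, 5, 0, 140]] * 4)              # restored residual values
print(reconstruct(prediction, residual))
```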
`
To perform decoding based on a decoding method according to an exemplary embodiment, components of the image decoder 500, i.e., the parser 510, the entropy decoder 520, the inverse-quantizer 530, the inverse frequency transformation unit 540, the intra prediction unit 550, the motion compensator 560, the deblocking unit 570, and the loop filtering unit 580, perform image decoding processes based on a maximum coding unit, a sub coding unit according to depths, a prediction unit, and a transformation unit.
`
In particular, the intra prediction unit 550 and the motion compensator 560 determine a prediction unit and a prediction mode in a sub coding unit by considering a maximum coding unit and a depth, and the inverse frequency transformation unit 540 performs inverse frequency transformation by considering the size of a transformation unit.
`
FIG. 6 illustrates a maximum coding unit, a sub coding unit, and a prediction unit, according to an exemplary embodiment.

The apparatus 100 for encoding an image and the apparatus 200 for decoding an image according to an exemplary embodiment use hierarchical coding units to perform encoding and decoding in consideration of image characteristics. A maximum coding unit and a maximum depth may be adaptively set according to the image characteristics or variously set according to requirements of a user.

A hierarchical coding unit structure 600 according to an exemplary embodiment illustrates a maximum coding unit 610 whose height and width are 64x64 and maximum depth is 4. A depth increases along a vertical axis of the hierarchical coding unit structure 600, and as a depth increases, heights and widths of sub coding units 620 to 650 decrease. Prediction units of the maximum coding unit 610 and the sub coding units 620 to 650 are shown along a horizontal axis of the hierarchical coding unit structure 600.

The maximum coding unit 610 has a depth of 0 and a size, i.e., height and width, of 64x64. A depth increases along the vertical axis, and there exist a sub coding unit 620 whose size is 32x32 and depth is 1, a sub coding unit 630 whose size is 16x16 and depth is 2, a sub coding unit 640 whose size is 8x8 and depth is 3, and a sub coding unit 650 whose size is 4x4 and depth is 4. The sub coding unit 650 whose size is 4x4 and depth is 4 is a minimum coding unit.
`
`
`
`
Referring to FIG. 6, examples of a prediction unit are shown along the horizontal axis according to each depth. That is, a prediction unit of the maximum coding unit 610 whose depth is 0 may be a prediction unit whose size is equal to the coding unit 610, i.e., 64x64, or a prediction unit 612 whose size is 64x32, a prediction unit 614 whose size is 32x64, or a prediction unit 616 whose size is 32x32, which all have sizes smaller than the coding unit 610 whose size is 64x64.

A prediction unit of the coding unit 620 whose depth is 1 and size is 32x32 may be a prediction unit whose size is equal to the coding unit 620, i.e., 32x32, or a prediction unit 622 whose size is 32x16, a prediction unit 624 whose size is 16x32, or a prediction unit 626 whose size is 16x16, which all have sizes smaller than the coding unit 620 whose size is 32x32.

A prediction unit of the coding unit 630 whose depth is 2 and size is 16x16 may be a prediction unit whose size is equal to the coding unit 630, i.e., 16x16, or a prediction unit 632 whose size is 16x8, a prediction unit 634 whose size is 8x16, or a prediction unit 636 whose size is 8x8, which all have sizes smaller than the coding unit 630 whose size is 16x16.

A prediction unit of the coding unit 640 whose depth is 3 and size is 8x8 may be a prediction unit whose size is equal to the coding unit 640, i.e., 8x8, or a prediction unit 642 whose size is 8x4, a prediction unit 644 whose size is 4x8, or a prediction unit 646 whose size is 4x4, which all have sizes smaller than the coding unit 640 whose size is 8x8.

Finally, the coding unit 650 whose depth is 4 and size is 4x4 is a minimum coding unit and a coding unit of a maximum depth, and a prediction unit of the coding unit 650 is a prediction unit 650 whose size is 4x4.
FIG. 7 illustrates a coding unit and a transformation unit, according to an exemplary embodiment.

The apparatus for encoding an image 100 and the apparatus for decoding an image 200, according to an exemplary embodiment, perform encoding with a maximum coding unit itself or with sub coding units, which are equal to or smaller than the maximum coding unit and are divided from the maximum coding unit.

In the encoding process, the size of a transformation unit for frequency transformation is selected to be no larger than that of a corresponding coding unit. For example, when a current coding unit 710 has the size of 64x64, frequency transformation may be performed using a transformation unit 720 having the size of 32x32.
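For illustration only, a transformation unit size no larger than the coding unit could be chosen as follows; the set of supported sizes and the selection rule are assumptions, not taken from the specification.

```python
# Illustrative sketch: pick a transformation unit size that does not exceed the
# current coding unit, from an assumed set of supported square transform sizes.

def transformation_unit_size(coding_unit_size, supported=(4, 8, 16, 32)):
    """Largest supported transform size no larger than the coding unit."""
    candidates = [s for s in supported if s <= coding_unit_size]
    return max(candidates)

print(transformation_unit_size(64))   # 32, as in the 64x64 coding unit example
print(transformation_unit_size(16))   # 16
```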
FIGS. 8A and 8B illustrate division shapes of a coding unit, a prediction unit, and a frequency transformation uni