`
`CROSS REFERENCE TO RELATED APPLICATIONS
`
This application is a U.S. continuation application of PCT International Patent Application Number PCT/JP2019/008417 filed on January 31, 2019, claiming the benefit of priority of U.S. Provisional Patent Application Number 62/625,674, U.S. Provisional Patent Application Number 62/625,669, and U.S. Provisional Patent Application Number 62/625,685 filed on February 2, 2018, the entire contents of which are hereby incorporated by reference.
`
BACKGROUND

1. Technical Field

The present disclosure relates to an encoder, an encoding method, a decoder, and a decoding method.

2. Description of the Related Art

A video coding standard called High-Efficiency Video Coding (HEVC) has been standardized by the Joint Collaborative Team on Video Coding (JCT-VC).
`
`20
`
`SUMMARY
`
An encoder according to an aspect of the present disclosure includes circuitry and memory. In the encoder, using the memory, the circuitry: calculates a cost that is an evaluation value for a current block to be encoded, for each of a plurality of search points included in a first set, the plurality of search points being a plurality of pixel positions in a reference picture; determines whether a base search point has a lowest cost among the base search point and a plurality of neighboring search points which spatially neighbor the base search point, the base search point and the plurality of neighboring search points being included in the first set as the plurality of search points; when the base search point is determined to have the lowest cost among the base search point and the plurality of neighboring search points, selects the base search point as a first best search point; when the base search point is determined not to have the lowest cost among the base search point and the plurality of neighboring search points, calculates a cost that is the evaluation value for the current block, for each of a plurality of search points which spatially neighbor the base search point and are included in a second set different from the first set; selects a search point having a lowest cost from among the first set and the second set, as a second best search point; and encodes the current block, using a motion vector corresponding to the first best search point or the second best search point.
`
A decoder according to an aspect of the present disclosure includes circuitry and memory. In the decoder, using the memory, the circuitry: calculates a cost that is an evaluation value for a current block to be decoded, for each of a plurality of search points included in a first set, the plurality of search points being a plurality of pixel positions in a reference picture; determines whether a base search point has a lowest cost among the base search point and a plurality of neighboring search points which spatially neighbor the base search point, the base search point and the plurality of neighboring search points being included in the first set as the plurality of search points; when the base search point is determined to have the lowest cost among the base search point and the plurality of neighboring search points, selects the base search point as a first best search point; when the base search point is determined not to have the lowest cost among the base search point and the plurality of neighboring search points, calculates a cost that is the evaluation value for the current block, for each of a plurality of search points which spatially neighbor the base search point and are included in a second set different from the first set; selects a search point having a lowest cost from among the first set and the second set, as a second best search point; and decodes the current block, using a motion vector corresponding to the first best search point or the second best search point.
`
It should be noted that these generic and specific aspects may be implemented using a system, a method, an integrated circuit, a computer program, or a computer-readable recording medium such as a compact disc read only memory (CD-ROM), and may also be implemented by any combination of systems, methods, integrated circuits, computer programs, and recording media.
`
`BRIEF DESCRIPTION OF DRAWINGS
`
These and other objects, advantages and features of the disclosure will become apparent from the following description thereof taken in conjunction with the accompanying drawings that illustrate a specific embodiment of the present disclosure.

FIG. 1 is a block diagram illustrating a functional configuration of an encoder according to Embodiment 1;
FIG. 2 illustrates one example of block splitting according to Embodiment 1;
FIG. 3 is a chart indicating transform basis functions for each transform type;
FIG. 4A illustrates one example of a filter shape used in ALF;
FIG. 4B illustrates another example of a filter shape used in ALF;
FIG. 4C illustrates another example of a filter shape used in ALF;
FIG. 5A illustrates 67 intra prediction modes used in intra prediction;
FIG. 5B is a flow chart for illustrating an outline of a prediction image correction process performed via OBMC processing;
FIG. 5C is a conceptual diagram for illustrating an outline of a prediction image correction process performed via OBMC processing;
FIG. 5D illustrates one example of FRUC;
FIG. 6 is for illustrating pattern matching (bilateral matching) between two blocks along a motion trajectory;
FIG. 7 is for illustrating pattern matching (template matching) between a template in the current picture and a block in a reference picture;
FIG. 8 is for illustrating a model assuming uniform linear motion;
FIG. 9A is for illustrating deriving a motion vector of each sub-block based on motion vectors of neighboring blocks;
FIG. 9B is for illustrating an outline of a process for deriving a motion vector via merge mode;
FIG. 9C is a conceptual diagram for illustrating an outline of DMVR processing;
FIG. 9D is for illustrating an outline of a prediction image generation method using a luminance correction process performed via LIC processing;
FIG. 10 is a block diagram illustrating a functional configuration of a decoder according to Embodiment 1;
FIG. 11 is a flowchart illustrating a first aspect of a decoding method and decoding processing performed by a decoder according to Embodiment 2;
FIG. 12 is a flowchart illustrating a second aspect of the decoding method and the decoding processing performed by the decoder according to Embodiment 2;
FIG. 13 is a flowchart illustrating a third aspect of the decoding method and the decoding processing performed by the decoder according to Embodiment 2;
FIG. 14 is a flowchart illustrating a fourth aspect of the decoding method and the decoding processing performed by the decoder according to Embodiment 2;
FIG. 15 is a flowchart illustrating a fifth aspect of an encoding method and encoding processing performed by an encoder according to Embodiment 2;
FIG. 16 is a diagram illustrating one example of each of search points included in a first set and a second set used in the fifth aspect according to Embodiment 2;
FIG. 17 is a diagram illustrating one example of each of the search points included in the first set and the second set used in the fifth aspect according to Embodiment 2;
FIG. 18 is a flowchart illustrating a sixth aspect of an encoding method and encoding processing performed by the encoder according to Embodiment 2;
FIG. 19 is a diagram for explaining the sixth aspect according to Embodiment 2;
FIG. 20A is a block diagram illustrating an implementation example of the encoder according to each of the embodiments;
FIG. 20B is a flowchart indicating operations performed by the encoder including circuitry and memory according to each of the embodiments;
FIG. 20C is a block diagram illustrating an implementation example of the decoder according to each of the embodiments;
FIG. 20D is a flowchart indicating operations performed by the decoder including circuitry and memory according to each of the embodiments;
FIG. 21 illustrates an overall configuration of a content providing system for implementing a content distribution service;
FIG. 22 illustrates one example of an encoding structure in scalable encoding;
FIG. 23 illustrates one example of an encoding structure in scalable encoding;
FIG. 24 illustrates an example of a display screen of a web page;
FIG. 25 illustrates an example of a display screen of a web page;
FIG. 26 illustrates one example of a smartphone; and
FIG. 27 is a block diagram illustrating a configuration example of a smartphone.
`
`DETAILED DESCRIPTION OF THE EMBODIMENTS
`
An encoder according to an aspect of the present disclosure includes circuitry and memory. In the encoder, using the memory, the circuitry: calculates a cost that is an evaluation value for a current block to be encoded, for each of a plurality of search points included in a first set, the plurality of search points being a plurality of pixel positions in a reference picture; determines whether a base search point has a lowest cost among the base search point and a plurality of neighboring search points which spatially neighbor the base search point, the base search point and the plurality of neighboring search points being included in the first set as the plurality of search points; when the base search point is determined to have the lowest cost among the base search point and the plurality of neighboring search points, selects the base search point as a first best search point; when the base search point is determined not to have the lowest cost among the base search point and the plurality of neighboring search points, calculates a cost that is the evaluation value for the current block, for each of a plurality of search points which spatially neighbor the base search point and are included in a second set different from the first set; selects a search point having a lowest cost from among the first set and the second set, as a second best search point; and encodes the current block, using a motion vector corresponding to the first best search point or the second best search point.
`
In this manner, the best search point is selected in two steps. More specifically, when the cost of the base search point is lowest in the first set, the base search point is selected as the best search point, and thus it is possible to omit cost calculation for the second set. Accordingly, it is possible to reduce the load of cost calculation processing and reduce the processing of motion estimation, compared to the case where a cost is calculated for each search point included in the first set and the second set, and then a search point having the lowest cost is selected as a best search point. In other words, it is possible to reduce the processing load while inhibiting a decrease in prediction accuracy. As a result, it is possible to reduce complexity in the inter prediction processing.
`
In addition, the circuitry: when the second best search point is selected, may further determine whether an end condition to end an update of the base search point is satisfied; when the end condition is determined not to be satisfied, may update the base search point to the second best search point, and select the first best search point based on the base search point updated or repeat selecting the second best search point; and when the end condition is determined to be satisfied, may encode the current block, using a motion vector corresponding to the second best search point selected most recently.

In this manner, selecting the best search point is repeated until the end condition is satisfied, and thus it is possible to select a best search point which is better optimized, while reducing the load of cost calculation processing.
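The two-step selection and the iterative update described above can be sketched as follows. This is a minimal illustration, not the claimed implementation: the cross-shaped first set, the diagonal second set, and a fixed round limit as the end condition are assumptions made here for concreteness, since the description leaves the exact sets and end condition open.

```python
# Sketch of the two-step best-search-point selection described above.
# Assumptions (not fixed by the text): the first set is the base point plus
# its 4-connected neighbors, the second set is the 4 diagonal neighbors,
# and the end condition is a maximum number of update rounds.

def two_step_search(cost, base, max_rounds=8):
    """cost(point) -> evaluation value; base is an (x, y) pixel position."""
    for _ in range(max_rounds):
        x, y = base
        first_set = [(x, y), (x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
        costs = {p: cost(p) for p in first_set}
        # Step 1: if the base search point already has the lowest cost in
        # the first set, select it and skip the second set entirely.
        if costs[(x, y)] <= min(costs.values()):
            return (x, y)  # first best search point
        # Step 2: evaluate the second set and pick the lowest-cost point
        # among both sets as the second best search point.
        second_set = [(x - 1, y - 1), (x + 1, y - 1),
                      (x - 1, y + 1), (x + 1, y + 1)]
        costs.update({p: cost(p) for p in second_set})
        base = min(costs, key=costs.get)  # update the base point and repeat
    return base
```

The two-step structure is visible in the loop: whenever the base point is already a minimum of the first set, the second set is never evaluated, which is exactly the saving the text describes.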
`
In addition, the circuitry: may use a pixel position indicated based on a motion vector of an encoded block, as the base search point; and when the cost is calculated for a search point included in the first set or the second set, may calculate the cost based on (i) an image of a region indicated by the search point in the reference picture and (ii) a base image, and the base image may be an image which is obtained from at least one encoded block which is used for deriving a motion vector of the current block instead of the current block.

In this manner, it is possible to reduce complexity in the inter prediction processing, in the FRUC mode or the DMVR mode, for example.
`
In addition, the circuitry: may use a pixel position indicated based on a motion vector of an encoded block, as the base search point; and when the cost is calculated for a search point included in the first set or the second set, may calculate the cost based on (i) an image of a region indicated by the search point in the reference picture and (ii) a base image, and the base image may be an image of the current block.

In this manner, it is possible to reduce complexity in the inter prediction processing in the normal inter mode.
`
In addition, when the cost is calculated for the search point included in the first set or the second set, the circuitry may calculate the cost using at least a distortion of the image of the region with respect to the base image.

In this manner, it is possible to calculate a proper cost. As a result, it is possible to improve coding efficiency.
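Where the cost uses at least a distortion of the region image with respect to the base image, one common distortion measure is the sum of absolute differences (SAD). The sketch below assumes SAD for concreteness; the description does not fix the distortion measure, and a practical encoder would typically add a motion-vector rate term to this distortion.

```python
# Minimal sketch: SAD distortion between the region indicated by a search
# point in the reference picture and the base image (the current block in
# normal inter mode, or an image derived from encoded/decoded blocks in
# FRUC/DMVR-style modes).

def sad_cost(reference, point, base_image):
    """reference: 2D list of pixels; point: (x, y) top-left of the region;
    base_image: 2D list giving the block to compare against."""
    x, y = point
    h, w = len(base_image), len(base_image[0])
    return sum(
        abs(reference[y + i][x + j] - base_image[i][j])
        for i in range(h)
        for j in range(w)
    )
```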
`
A decoder according to an aspect of the present disclosure includes circuitry and memory. In the decoder, using the memory, the circuitry: calculates a cost that is an evaluation value for a current block to be decoded, for each of a plurality of search points included in a first set, the plurality of search points being a plurality of pixel positions in a reference picture; determines whether a base search point has a lowest cost among the base search point and a plurality of neighboring search points which spatially neighbor the base search point, the base search point and the plurality of neighboring search points being included in the first set as the plurality of search points; when the base search point is determined to have the lowest cost among the base search point and the plurality of neighboring search points, selects the base search point as a first best search point; when the base search point is determined not to have the lowest cost among the base search point and the plurality of neighboring search points, calculates a cost that is the evaluation value for the current block, for each of a plurality of search points which spatially neighbor the base search point and are included in a second set different from the first set; selects a search point having a lowest cost from among the first set and the second set, as a second best search point; and decodes the current block, using a motion vector corresponding to the first best search point or the second best search point.
`
In this manner, the best search point is selected in two steps. More specifically, when the cost of the base search point is lowest in the first set, the base search point is selected as the best search point, and thus it is possible to omit cost calculation for the second set. Accordingly, it is possible to reduce the load of cost calculation processing and reduce the processing of motion estimation, compared to the case where a cost is calculated for each search point included in the first set and the second set, and then a search point having the lowest cost is selected as a best search point. In other words, it is possible to reduce the processing load while inhibiting a decrease in prediction accuracy. As a result, it is possible to reduce complexity in the inter prediction processing.
`
In addition, the circuitry: when the second best search point is selected, may further determine whether an end condition to end an update of the base search point is satisfied; when the end condition is determined not to be satisfied, may update the base search point to the second best search point, and select the first best search point based on the base search point updated or repeat selecting the second best search point; and when the end condition is determined to be satisfied, may decode the current block, using a motion vector corresponding to the second best search point selected most recently.

In this manner, selecting the best search point is repeated until the end condition is satisfied, and thus it is possible to select a best search point which is better optimized, while reducing the load of cost calculation processing.
`
In addition, the circuitry: may use a pixel position indicated based on a motion vector of a decoded block, as the base search point; and when the cost is calculated for the search point included in the first set or the second set, may calculate the cost based on (i) an image of a region indicated by the search point in the reference picture and (ii) a base image, and the base image may be an image which is obtained from at least one decoded block which is used for deriving a motion vector of the current block instead of the current block.

In this manner, it is possible to reduce complexity in the inter prediction processing, in the FRUC mode or the DMVR mode, for example.
`
In addition, when the cost is calculated for the search point included in the first set or the second set, the circuitry may calculate the cost using at least a distortion of the image of the region with respect to the base image.

In this manner, it is possible to calculate a proper cost. As a result, it is possible to improve coding efficiency.
`
Hereinafter, embodiments will be specifically described with reference to the drawings.

Note that the embodiments described below each show a general or specific example. The numerical values, shapes, materials, constituent elements, the arrangement and connection of the constituent elements, steps, order of the steps, etc., indicated in the following embodiments are mere examples, and therefore are not intended to limit the scope of the claims. Therefore, among the constituent elements in the following embodiments, those not recited in any of the independent claims defining the broadest inventive concepts are described as optional constituent elements.
`
EMBODIMENT 1
`
First, an outline of Embodiment 1 will be presented. Embodiment 1 is one example of an encoder and a decoder to which the processes and/or configurations presented in subsequent description of aspects of the present disclosure are applicable. Note that Embodiment 1 is merely one example of an encoder and a decoder to which the processes and/or configurations presented in the description of aspects of the present disclosure are applicable. The processes and/or configurations presented in the description of aspects of the present disclosure can also be implemented in an encoder and a decoder different from those according to Embodiment 1.
`
When the processes and/or configurations presented in the description of aspects of the present disclosure are applied to Embodiment 1, for example, any of the following may be performed:

(1) regarding the encoder or the decoder according to Embodiment 1, among components included in the encoder or the decoder according to Embodiment 1, substituting a component corresponding to a component presented in the description of aspects of the present disclosure with a component presented in the description of aspects of the present disclosure;

(2) regarding the encoder or the decoder according to Embodiment 1, implementing discretionary changes to functions or implemented processes performed by one or more components included in the encoder or the decoder according to Embodiment 1, such as addition, substitution, or removal, etc., of such functions or implemented processes, then substituting a component corresponding to a component presented in the description of aspects of the present disclosure with a component presented in the description of aspects of the present disclosure;

(3) regarding the method implemented by the encoder or the decoder according to Embodiment 1, implementing discretionary changes such as addition of processes and/or substitution, removal of one or more of the processes included in the method, and then substituting a process corresponding to a process presented in the description of aspects of the present disclosure with a process presented in the description of aspects of the present disclosure;

(4) combining one or more components included in the encoder or the decoder according to Embodiment 1 with a component presented in the description of aspects of the present disclosure, a component including one or more functions included in a component presented in the description of aspects of the present disclosure, or a component that implements one or more processes implemented by a component presented in the description of aspects of the present disclosure;

(5) combining a component including one or more functions included in one or more components included in the encoder or the decoder according to Embodiment 1, or a component that implements one or more processes implemented by one or more components included in the encoder or the decoder according to Embodiment 1 with a component presented in the description of aspects of the present disclosure, a component including one or more functions included in a component presented in the description of aspects of the present disclosure, or a component that implements one or more processes implemented by a component presented in the description of aspects of the present disclosure;

(6) regarding the method implemented by the encoder or the decoder according to Embodiment 1, among processes included in the method, substituting a process corresponding to a process presented in the description of aspects of the present disclosure with a process presented in the description of aspects of the present disclosure; and

(7) combining one or more processes included in the method implemented by the encoder or the decoder according to Embodiment 1 with a process presented in the description of aspects of the present disclosure.
`
Note that the implementation of the processes and/or configurations presented in the description of aspects of the present disclosure is not limited to the above examples. For example, the processes and/or configurations presented in the description of aspects of the present disclosure may be implemented in a device used for a purpose different from the moving picture/picture encoder or the moving picture/picture decoder disclosed in Embodiment 1. Moreover, the processes and/or configurations presented in the description of aspects of the present disclosure may be independently implemented. Moreover, processes and/or configurations described in different aspects may be combined.
`
`[Encoder Outline]
`
First, the encoder according to Embodiment 1 will be outlined. FIG. 1 is a block diagram illustrating a functional configuration of encoder 100 according to Embodiment 1. Encoder 100 is a moving picture/picture encoder that encodes a moving picture/picture block by block.

As illustrated in FIG. 1, encoder 100 is a device that encodes a picture block by block, and includes splitter 102, subtractor 104, transformer 106, quantizer 108, entropy encoder 110, inverse quantizer 112, inverse transformer 114, adder 116, block memory 118, loop filter 120, frame memory 122, intra predictor 124, inter predictor 126, and prediction controller 128.
`
Encoder 100 is realized as, for example, a generic processor and memory. In this case, when a software program stored in the memory is executed by the processor, the processor functions as splitter 102, subtractor 104, transformer 106, quantizer 108, entropy encoder 110, inverse quantizer 112, inverse transformer 114, adder 116, loop filter 120, intra predictor 124, inter predictor 126, and prediction controller 128. Alternatively, encoder 100 may be realized as one or more dedicated electronic circuits corresponding to splitter 102, subtractor 104, transformer 106, quantizer 108, entropy encoder 110, inverse quantizer 112, inverse transformer 114, adder 116, loop filter 120, intra predictor 124, inter predictor 126, and prediction controller 128.
`
`Hereinafter, each component included in encoder 100 will be described.
`
`[Splitter]
`
Splitter 102 splits each picture included in an input moving picture into blocks, and outputs each block to subtractor 104. For example, splitter 102 first splits a picture into blocks of a fixed size (for example, 128x128). The fixed size block is also referred to as coding tree unit (CTU). Splitter 102 then splits each fixed size block into blocks of variable sizes (for example, 64x64 or smaller), based on recursive quadtree and/or binary tree block splitting. The variable size block is also referred to as a coding unit (CU), a prediction unit (PU), or a transform unit (TU). Note that in this embodiment, there is no need to differentiate between CU, PU, and TU; all or some of the blocks in a picture may be processed per CU, PU, or TU.
`
FIG. 2 illustrates one example of block splitting according to Embodiment 1. In FIG. 2, the solid lines represent block boundaries of blocks split by quadtree block splitting, and the dashed lines represent block boundaries of blocks split by binary tree block splitting.
`
Here, block 10 is a square 128x128 pixel block (128x128 block). This 128x128 block 10 is first split into four square 64x64 blocks (quadtree block splitting).

The top left 64x64 block is further vertically split into two rectangle 32x64 blocks, and the left 32x64 block is further vertically split into two rectangle 16x64 blocks (binary tree block splitting). As a result, the top left 64x64 block is split into two 16x64 blocks 11 and 12 and one 32x64 block 13.
`
The top right 64x64 block is horizontally split into two rectangle 64x32 blocks 14 and 15 (binary tree block splitting).
`
The bottom left 64x64 block is first split into four square 32x32 blocks (quadtree block splitting). The top left block and the bottom right block among the four 32x32 blocks are further split. The top left 32x32 block is vertically split into two rectangle 16x32 blocks, and the right 16x32 block is further horizontally split into two 16x16 blocks (binary tree block splitting). The bottom right 32x32 block is horizontally split into two 32x16 blocks (binary tree block splitting). As a result, the bottom left 64x64 block is split into 16x32 block 16, two 16x16 blocks 17 and 18, two 32x32 blocks 19 and 20, and two 32x16 blocks 21 and 22.

The bottom right 64x64 block 23 is not split.
`
As described above, in FIG. 2, block 10 is split into 13 variable size blocks 11 through 23 based on recursive quadtree and binary tree block splitting. This type of splitting is also referred to as quadtree plus binary tree (QTBT) splitting.
`
Note that in FIG. 2, one block is split into four or two blocks (quadtree or binary tree block splitting), but splitting is not limited to this example. For example, one block may be split into three blocks (ternary block splitting). Splitting including such ternary block splitting is also referred to as multi-type tree (MBT) splitting.
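The recursive quadtree and/or binary tree splitting above can be sketched as follows. The split decision function decide is a hypothetical stand-in for the encoder's actual decision logic (for example, rate-distortion based), which is not specified in this description.

```python
# Sketch of recursive QTBT splitting: a block is either kept as a leaf,
# split into four quadrants (quadtree), or split into two halves
# (binary tree). decide(x, y, w, h) returns one of:
# "none", "quad", "vertical", "horizontal".

def split_block(x, y, w, h, decide, leaves):
    mode = decide(x, y, w, h)
    if mode == "quad":
        hw, hh = w // 2, h // 2
        for nx, ny in ((x, y), (x + hw, y), (x, y + hh), (x + hw, y + hh)):
            split_block(nx, ny, hw, hh, decide, leaves)
    elif mode == "vertical":    # two (w/2) x h blocks, side by side
        split_block(x, y, w // 2, h, decide, leaves)
        split_block(x + w // 2, y, w // 2, h, decide, leaves)
    elif mode == "horizontal":  # two w x (h/2) blocks, stacked
        split_block(x, y, w, h // 2, decide, leaves)
        split_block(x, y + h // 2, w, h // 2, decide, leaves)
    else:
        leaves.append((x, y, w, h))  # final variable-size block
```

Driving this with a decision function that quad-splits only the 128x128 CTU yields the four 64x64 blocks; richer decision functions reproduce mixed patterns like the one in FIG. 2.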
`
`[Subtractor]
`
Subtractor 104 subtracts a prediction signal (prediction sample) from an original signal (original sample) per block split by splitter 102. In other words, subtractor 104 calculates prediction errors (also referred to as residuals) of a block to be encoded (hereinafter referred to as a current block). Subtractor 104 then outputs the calculated prediction errors to transformer 106.

The original signal is a signal input into encoder 100, and is a signal representing an image for each picture included in a moving picture (for example, a luma signal and two chroma signals). Hereinafter, a signal representing an image is also referred to as a sample.
`
`[Transformer]
`
Transformer 106 transforms spatial domain prediction errors into frequency domain transform coefficients, and outputs the transform coefficients to quantizer 108. More specifically, transformer 106 applies, for example, a predefined discrete cosine transform (DCT) or discrete sine transform (DST) to spatial domain prediction errors.
`
`20
`
Note that transformer 106 may adaptively select a transform type from among a plurality of transform types, and transform prediction errors into transform coefficients by using a transform basis function corresponding to the selected transform type. This sort of transform is also referred to as explicit multiple core transform (EMT) or adaptive multiple transform (AMT).
`
`25
`
The transform types include, for example, DCT-II, DCT-V, DCT-VIII, DST-I, and DST-VII. FIG. 3 is a chart indicating transform basis functions for each transform type. In FIG. 3, N indicates the number of input pixels. For example, selection of a transform type from among the plurality of transform types may depend on the prediction type (intra prediction and inter prediction), and may depend on intra prediction mode.
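For reference, a DCT-II basis of the kind listed in FIG. 3 can be written out and applied directly. The orthonormal scaling used below is an assumption for illustration (FIG. 3 itself is not reproduced in this text), and real codecs use integer approximations of such matrices rather than floating-point sums.

```python
import math

# Orthonormal DCT-II basis T_i(j) for an N-point input. Applying the
# N x N matrix [T_i(j)] to a length-N vector of prediction errors yields
# its frequency-domain transform coefficients.

def dct2_basis(i, j, n):
    scale = math.sqrt(1.0 / n) if i == 0 else math.sqrt(2.0 / n)
    return scale * math.cos(math.pi * i * (2 * j + 1) / (2 * n))

def dct2(signal):
    n = len(signal)
    return [sum(dct2_basis(i, j, n) * signal[j] for j in range(n))
            for i in range(n)]
```

A constant input produces energy only in the first (DC) coefficient, which is why the transform compacts smooth prediction errors into few coefficients.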
`
Information indicating whether to apply such EMT or AMT (referred to as, for example, an AMT flag) and information indicating the selected transform type are signalled at the CU level. Note that the signaling of such information need not be performed at the CU level, and may be performed at another level (for example, at the sequence level, picture level, slice level, tile level, or CTU level).
`
Moreover, transformer 106 may apply a secondary transform to the transform coefficients (transform result). Such a secondary transform is also referred to as adaptive secondary transform (AST) or non-separable secondary transform (NSST). For example, transformer 106 applies a secondary transform to each sub-block (for example, each 4x4 sub-block) included in the block of the transform coefficients corresponding to the intra prediction errors. Information indicating whether to apply NSST and information related to the transform matrix used in NSST are signalled at the CU level. Note that the signaling of such information need not be performed at the CU level, and may be performed at another level (for example, at the sequence level, picture level, slice level, tile level, or CTU level).
`
Here, a separable transform is a method in which a transform is performed a plurality of times by separately performing a transform for each direction according to the number of dimensions input. A non-separable transform is a method of performing a collective transform in which two or more dimensions in a multidimensional input are collectively regarded as a single dimension.
`
In one example of a non-separable transform, when the input is a 4x4 block, the 4x4 block is regarded as a single array including 16 components, and the transform applies a 16x16 transform matrix to the array.

Moreover, similar to above, after an input 4x4 block is regarded as a single array including 16 components, a transform that performs a plurality of Givens rotations on the array (i.e., a Hypercube-Givens Transform) is also one example of a non-separable transform.
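The 4x4 non-separable case above amounts to flattening the block into a 16-component array and applying a 16x16 matrix. The sketch below shows that mechanism only; the matrix argument is a placeholder standing in for the actual NSST matrices, which are not reproduced in this description.

```python
# Sketch of a non-separable transform on a 4x4 block: flatten the block
# into a 16-component vector, multiply by a 16x16 matrix, and reshape the
# result back into a 4x4 block.

def non_separable_transform(block4x4, matrix16x16):
    vec = [v for row in block4x4 for v in row]  # 4x4 -> 16 components
    out = [sum(matrix16x16[i][j] * vec[j] for j in range(16))
           for i in range(16)]
    return [out[r * 4 : r * 4 + 4] for r in range(4)]  # 16 -> 4x4
```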
`
`[Quantizer]
`
Quantizer 108 quantizes the transform coefficients output from transformer 106. More specifically, quantizer 108 scans, in a predetermined scanning order, the transform coefficients of the current block, and quantizes the scanned transform coefficients based on quantization parameters (QP) corresponding to the transform coefficients. Quantizer 108 then outputs the quantized transform coefficients (hereafter referred to as quantized coefficients) of the current block to entropy encoder 110 and inverse quantizer 112.
`
A predetermined order is an order for quantizing/inverse quantizing transform coefficients. For example, a predetermined scanning order is defined as ascending order of frequency (from low to high frequency) or descending order of frequency (from high to low frequency).
`
`20
`
A quantization parameter is a parameter defining a quantization step size (quantization width). For example, if the value of the quantization parameter increases, the quantization step size also increases.
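The relationship between the quantization parameter and the step size can be illustrated with the exponential mapping used in HEVC-style codecs, where the step size roughly doubles for every increase of 6 in QP. The exact formula below is an assumption for illustration, not quoted from this description.

```python
# Sketch: quantize scanned transform coefficients with a step size that
# grows exponentially with the quantization parameter (QP); the step
# roughly doubles every 6 QP values, as in HEVC-style codecs.

def quantization_step(qp):
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coefficients, qp):
    step = quantization_step(qp)
    # Round-to-nearest division by the step size; real codecs use integer
    # arithmetic with rounding offsets, omitted here for clarity.
    return [round(c / step) for c in coefficients]
```

Here quantization_step(10) is twice quantization_step(4), matching the statement that a larger quantization parameter gives a larger quantization step size (and thus coarser coefficients).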
`