(19) World Intellectual Property Organization
International Bureau

(10) International Publication Number: WO 2014/120367 A1
(43) International Publication Date: 7 August 2014 (07.08.2014)

(51) International Patent Classification: H04N 19/60 (2014.01)

(21) International Application Number: PCT/US2013/077692

(22) International Filing Date: 24 December 2013 (24.12.2013)

(25) Filing Language: English

(26) Publication Language: English

(30) Priority Data: 61/758,314    30 January 2013 (30.01.2013)    US

(71) Applicant (for all designated States except US): INTEL CORPORATION [US/US]; 2200 Mission College Boulevard, Santa Clara, California 95054 (US).

(72) Inventors; and
(71) Applicants (for US only): GOKHALE, Neelesh N. [US/US]; 1825 10th Ave W APT C, Seattle, Washington 98119 (US). PURI, Atul [US/US]; 16080 NE 85th Street, Apt N204, Redmond, Washington 98052 (US).

(74) Agent: GREEN, Blayne D.; Lynch Law Patent Group PC, c/o CPA Global, P.O. Box 52050, Minneapolis, Minnesota 55402 (US).

(81) Designated States (unless otherwise indicated, for every kind of national protection available): AE, AG, AL, AM, AO, AT, AU, AZ, BA, BB, BG, BH, BN, BR, BW, BY, BZ, CA, CH, CL, CN, CO, CR, CU, CZ, DE, DK, DM, DO, DZ, EC, EE, EG, ES, FI, GB, GD, GE, GH, GM, GT, HN, HR, HU, ID, IL, IN, IR, IS, JP, KE, KG, KN, KP, KR, KZ, LA, LC, LK, LR, LS, LT, LU, LY, MA, MD, ME, MG, MK, MN, MW, MX, MY, MZ, NA, NG, NI, NO, NZ, OM, PA, PE, PG, PH, PL, PT, QA, RO, RS, RU, RW, SA, SC, SD, SE, SG, SK, SL, SM, ST, SV, SY, TH, TJ, TM, TN, TR, TT, TZ, UA, UG, US, UZ, VC, VN, ZA, ZM, ZW.

(84) Designated States (unless otherwise indicated, for every kind of regional protection available): ARIPO (BW, GH, GM, KE, LR, LS, MW, MZ, NA, RW, SD, SL, SZ, TZ, UG, ZM, ZW), Eurasian (AM, AZ, BY, KG, KZ, RU, TJ, TM), European (AL, AT, BE, BG, CH, CY, CZ, DE, DK, EE, ES, FI, FR, GB, GR, HR, HU, IE, IS, IT, LT, LU, LV, MC, MK, MT, NL, NO, PL, PT, RO, RS, SE, SI, SK, SM, TR), OAPI (BF, BJ, CF, CG, CI, CM, GA, GN, GQ, GW, KM, ML, MR, NE, SN, TD, TG).
(54) Title: CONTENT ADAPTIVE PARAMETRIC TRANSFORMS FOR CODING FOR NEXT GENERATION VIDEO

(57) Abstract: A method for video coding according to the present invention comprises the steps of: receiving prediction error data, a coding partition, or a partition of original pixel data for transform coding; performing a closed-form parametric transform on the prediction error data, coding partition, or partition of original pixel data to generate transform coefficients; quantizing the transform coefficients to generate quantized transform coefficients; and entropy encoding data associated with the quantized transform coefficients into a bitstream.

[FIG. 1: block diagram of example next generation video encoder 100. Recoverable block labels include: encode controller; adaptive transform; adaptive quantize; adaptive entropy encoder; output bitstream; adaptive inverse quantize; adaptive inverse transform; coding partitions assembler; prediction partitions assembler; blockiness analyzer and deblock filtering; quality restoration filtering; prediction analyzer and prediction fusion filtering; intra-directional prediction analyzer and prediction generation; morphing analyzer and generation; synthesizing analyzer and generation; motion estimator; characteristics and motion filtering; decoded picture buffer.]
Published:
- with international search report (Art. 21(3))

Declarations under Rule 4.17:
- of inventorship (Rule 4.17(iv))
CONTENT ADAPTIVE PARAMETRIC TRANSFORMS FOR CODING FOR NEXT GENERATION VIDEO

RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 61/758,314, filed 30 January 2013 and titled “NEXT GENERATION VIDEO CODING”, the contents of which are hereby incorporated in their entirety.
BACKGROUND

A video encoder compresses video information so that more information can be sent over a given bandwidth. The compressed signal may then be transmitted to a receiver having a decoder that decodes or decompresses the signal prior to display.

High Efficiency Video Coding (HEVC) is the latest video compression standard, which is being developed by the Joint Collaborative Team on Video Coding (JCT-VC) formed by the ISO/IEC Moving Picture Experts Group (MPEG) and the ITU-T Video Coding Experts Group (VCEG). HEVC is being developed in response to the previous H.264/AVC (Advanced Video Coding) standard not providing enough compression for evolving higher resolution video applications. Similar to previous video coding standards, HEVC includes basic functional modules such as intra/inter prediction, transform, quantization, in-loop filtering, and entropy coding.

The ongoing HEVC standard may attempt to improve on limitations of the H.264/AVC standard such as limited choices for allowed prediction partitions and coding partitions, limited allowed multiple references and prediction generation, limited transform block sizes and actual transforms, limited mechanisms for reducing coding artifacts, and inefficient entropy encoding techniques. However, the ongoing HEVC standard may use iterative approaches to solving such problems.
BRIEF DESCRIPTION OF THE DRAWINGS

The material described herein is illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements. In the figures:

FIG. 1 is an illustrative diagram of an example next generation video encoder;

FIG. 2 is an illustrative diagram of an example next generation video decoder;

FIG. 3 is an illustrative diagram of an example encoder subsystem;

FIG. 4 is an illustrative diagram of an example encoder subsystem;

FIG. 5 is an illustrative diagram of an example encoder subsystem;

FIG. 6 is an illustrative diagram of an example encoder subsystem;

FIG. 7 is a flow diagram illustrating an example process;

FIG. 8 illustrates example partitioning of video data using a bi-tree partitioning technique;

FIG. 9 illustrates example partitioning of video data using a k-d tree partitioning technique;

FIGS. 10(A), 10(B), and 10(C) illustrate example parametric and hybrid parametric transforms operating on a coding partition;

FIGS. 11(A) and 11(B) illustrate example neighboring reconstructed video data relative to a coding partition;

FIG. 12 illustrates example neighboring reconstructed video data for a coding partition;

FIG. 13 illustrates a directional rearrangement of pixels for use via a parametric transform to code coding partitions having a slope;

FIG. 14 illustrates an example bitstream;

FIG. 15 is a flow diagram illustrating an example process;

FIG. 16 illustrates example good transform pairs;

FIG. 17 is an illustrative diagram of an example wavelet based video encoder;

FIG. 18 is an illustrative diagram of an example wavelet based video decoder;

FIGS. 19(A) and 19(B) provide an illustrative diagram of an example video coding system and video coding process in operation;

FIG. 20 is an illustrative diagram of an example video coding system;

FIG. 21 is an illustrative diagram of an example system; and

FIG. 22 illustrates an example device, all arranged in accordance with at least some implementations of the present disclosure.
DETAILED DESCRIPTION

One or more embodiments or implementations are now described with reference to the enclosed figures. While specific configurations and arrangements are discussed, it should be understood that this is done for illustrative purposes only. Persons skilled in the relevant art will recognize that other configurations and arrangements may be employed without departing from the spirit and scope of the description. It will be apparent to those skilled in the relevant art that techniques and/or arrangements described herein may also be employed in a variety of other systems and applications other than what is described herein.
While the following description sets forth various implementations that may be manifested in architectures such as system-on-a-chip (SoC) architectures for example, implementation of the techniques and/or arrangements described herein is not restricted to particular architectures and/or computing systems and may be implemented by any architecture and/or computing system for similar purposes. For instance, various architectures employing, for example, multiple integrated circuit (IC) chips and/or packages, and/or various computing devices and/or consumer electronic (CE) devices such as set top boxes, smart phones, etc., may implement the techniques and/or arrangements described herein. Further, while the following description may set forth numerous specific details such as logic implementations, types and interrelationships of system components, logic partitioning/integration choices, etc., claimed subject matter may be practiced without such specific details. In other instances, some material such as, for example, control structures and full software instruction sequences, may not be shown in detail in order not to obscure the material disclosed herein.
The material disclosed herein may be implemented in hardware, firmware, software, or any combination thereof. The material disclosed herein may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any medium and/or mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.); and others.
References in the specification to "one implementation", "an implementation", "an example implementation", etc., indicate that the implementation described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same implementation. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other implementations whether or not explicitly described herein.
Systems, apparatus, articles, and methods are described below related to content adaptive transforms for video coding.

Next generation video (NGV) systems, apparatus, articles, and methods are described below. NGV video coding may incorporate significant content based adaptivity in the video coding process to achieve higher compression. As discussed above, the H.264/AVC standard may have a variety of limitations, and ongoing attempts to improve on the standard, such as, for example, the HEVC standard, may use iterative approaches to address such limitations. Herein, an NGV system including an encoder and a decoder will be described.

Also as discussed, the H.264/AVC standard may include limited choices of coding partitions and fixed transforms. In particular, as discussed herein, video data may be received for transform coding. The video data may include any suitable data for transform coding such as, for example, residual video data, prediction partitions, prediction error data partitions, tiles or super-fragments, or wavelet data. In some examples, the video data may include prediction error data partitions having error data for prediction partitions. For example, the prediction partitions may include partitions of tiles or super-fragments of a video frame.
The received video data may be partitioned. In various examples, the video data may be partitioned based on the data type (e.g., F/B-picture (e.g., functional or bi-directional), P-picture (e.g., predictive), or I-picture (e.g., intra compensation only)) or prediction technique (e.g., inter- or intra- or the like) associated with the video data using bi-tree partitioning or k-d tree partitioning. In some examples, the video data may include prediction error data partitions (e.g., associated with inter prediction for P- and F/B-pictures) and the prediction error data partitions may be partitioned into coding partitions. For example, prediction error data partitions may be partitioned into coding partitions using bi-tree partitioning or k-d tree partitioning. As used herein, the term F/B-picture may include an F-picture (e.g., functional) or a B-picture (e.g., bi-directional) such that the picture may use prior or future previously decoded pictures or frames for prediction.
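
For illustration only, the contrast between the two partitioning styles may be sketched as follows: bi-tree splits halve a rectangle along one dimension, while k-d tree splits may place the cut at an arbitrary, content-chosen position. The function names, the split callback, and the stopping rule below are assumptions made for this sketch, not the partitioning logic of any particular implementation.

```python
from typing import Callable, List, Optional, Tuple

Rect = Tuple[int, int, int, int]  # (y, x, height, width)

def bitree_partition(rect: Rect, min_size: int = 4) -> List[Rect]:
    """Bi-tree style: recursively halve the longer side at its midpoint."""
    y, x, h, w = rect
    if h <= min_size and w <= min_size:
        return [rect]
    if w >= h:  # cut the longer dimension in half
        return (bitree_partition((y, x, h, w // 2), min_size) +
                bitree_partition((y, x + w // 2, h, w - w // 2), min_size))
    return (bitree_partition((y, x, h // 2, w), min_size) +
            bitree_partition((y + h // 2, x, h - h // 2, w), min_size))

def kdtree_partition(rect: Rect,
                     split_at: Callable[[Rect], Optional[Tuple[str, int]]],
                     min_size: int = 4) -> List[Rect]:
    """k-d tree style: a callback (e.g., content analysis) may place the
    cut at any offset, not just the midpoint; None means stop splitting."""
    y, x, h, w = rect
    cut = split_at(rect)
    if cut is None or (h <= min_size and w <= min_size):
        return [rect]
    axis, off = cut
    if axis == 'v':  # vertical cut at column x + off
        return (kdtree_partition((y, x, h, off), split_at, min_size) +
                kdtree_partition((y, x + off, h, w - off), split_at, min_size))
    return (kdtree_partition((y, x, off, w), split_at, min_size) +  # horizontal cut
            kdtree_partition((y + off, x, h - off, w), split_at, min_size))
```

For example, `bitree_partition((0, 0, 64, 64), min_size=16)` yields sixteen 16x16 partitions, whereas a k-d tree callback may stop early or cut unevenly where the error data warrants it.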

In some examples, the video data may include tiles or super-fragments of a video frame for intra-prediction (either for I-pictures or for intra prediction in P- and F/B-pictures), which may be partitioned to generate partitions for prediction. In some examples, such partitioning may be performed using k-d tree partitioning for I-pictures and using bi-tree partitioning for P- and F/B-pictures. Prediction partitions associated with partitions for prediction may be generated (e.g., via intra-prediction techniques) and differenced with original pixel data to generate prediction error data partitions. In such examples, only a single level of partitioning may be performed and the prediction error data partitions may be considered coding partitions. In some examples, the video data may include original pixel data (or a partition thereof) such that original pixel data may be processed (e.g., transform coded and the like as discussed herein). For example, such original pixel data may be processed and transmitted via a bitstream for I-pictures and intra-coding.

In some examples, the plurality of coding partitions may be transformed such that a subset of the coding partitions (e.g., small to medium sized coding partitions) are transformed using a content adaptive transform (e.g., a transform which has a basis that is block wise adaptive) and another subset of the coding partitions (e.g., medium to large sized partitions) are transformed using a fixed transform (e.g., a transform having a fixed basis). In other examples, a subset of the coding partitions (e.g., small to medium sized partitions) may be transformed using a content adaptive transform and substantially all of the coding partitions may be transformed using a fixed transform. Based on the success of the transforms and the relative bit costs and error rates, a rate distortion optimization or the like may be made to choose between the content adaptive transform(s) and the fixed transform(s) for the coding partitions.
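
A minimal sketch of such a rate distortion choice is given below: each candidate transform is applied, quantized, and scored by a Lagrangian cost J = D + λ·R. The uniform quantizer, the bits-per-nonzero rate proxy, and the λ value are crude illustrative assumptions standing in for the actual rate and distortion estimates.

```python
import numpy as np

def select_transform(block, candidates, q=8.0, lam=0.1):
    """Choose among (forward, inverse) transform pairs by J = D + lam * R.

    `candidates` maps a name (e.g., 'adaptive', 'fixed') to a pair of
    callables; all interfaces here are placeholders for illustration.
    """
    best = None
    for name, (fwd, inv) in candidates.items():
        levels = np.round(fwd(block) / q)              # uniform quantization
        rate = 4 * np.count_nonzero(levels)            # crude rate proxy (bits)
        recon = inv(levels * q)                        # de-scale, inverse transform
        dist = float(np.sum((block - recon) ** 2))     # SSE distortion
        cost = dist + lam * rate
        if best is None or cost < best[0]:
            best = (cost, name, levels)
    return best  # (cost, chosen transform, quantized coefficients)
```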

In examples where both bi-tree and k-d tree partitioning are used based on picture or prediction type as discussed, similar techniques may be applied. In some examples, the bi-tree coding partitions may be transformed such that a subset of the coding partitions (e.g., small to medium sized partitions) are transformed using a content adaptive transform and another subset of the coding partitions (e.g., medium to large sized partitions) are transformed using a fixed transform. Similarly, in some examples, the k-d tree partitions may be transformed such that a subset of the partitions (e.g., small to medium sized partitions) are transformed using a content adaptive transform and another subset of the partitions (e.g., medium to large sized partitions) are transformed using a fixed transform. In other examples, the bi-tree partitions may be transformed such that a subset of the coding partitions (e.g., small to medium sized partitions) may be transformed using a content adaptive transform and substantially all of the coding partitions may be transformed using a fixed transform. Based on the success of the transforms and the relative bit costs and error rates, a rate distortion optimization or the like may be made to choose between the content adaptive transform(s) and the fixed transform(s). Similarly, in some examples, the k-d tree partitions may be transformed such that a subset of the partitions (e.g., small to medium sized partitions) may be transformed using a content adaptive transform and substantially all of the coding partitions may be transformed using a fixed transform, and, based on the success of the transforms and the relative bit costs and error rates, a rate distortion optimization or the like may be made to choose between the content adaptive transform result and the fixed transform result.

In various examples, the resulting data from the pertinent transforms (e.g., transform coefficients) and data defining the relevant coding partition(s) may be quantized, scanned, and entropy encoded into a bitstream for transmission. The bitstream may be decoded and the decoded data may be inverse transformed to generate, for example, prediction error data partitions that may be further used in the decoding process and eventual display via a display device. For coding partitions transformed using a content adaptive transform, the encode process may determine transform data that may be transmitted to the decode process (e.g., in addition to the transform coefficients). Further, for partitions transformed using a content adaptive transform, both the encode and decode processes may include determining basis function parameters associated with the partition based on another block of previously decoded video data. Further, in some examples, the content adaptive transform may include an adaptive parametric transform in either the vertical or horizontal direction and a fixed transform in a direction orthogonal to the adaptive parametric transform. In some examples, the content adaptive transform may include a closed-form hybrid parametric transform.
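
The hybrid structure described above may be sketched as follows: an adaptive basis applied in one direction, derived only from previously decoded neighboring data (so that encoder and decoder can derive the same basis without transmitting it), and a fixed DCT in the orthogonal direction. The eigen-decomposition (KLT-style) derivation below is a stand-in assumption; the closed-form parametric Haar construction itself is not reproduced here.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis for the fixed direction."""
    k, x = np.arange(n)[:, None], np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0] *= np.sqrt(0.5)
    return m

def hybrid_transform(block: np.ndarray, decoded_neighbor: np.ndarray) -> np.ndarray:
    """Adaptive basis vertically, fixed DCT horizontally.

    `decoded_neighbor` is previously decoded data with the same number of
    rows as `block`; both coder sides hold it, so the basis parameters
    need not be sent explicitly.
    """
    n, m = block.shape
    cols = decoded_neighbor - decoded_neighbor.mean(axis=0, keepdims=True)
    cov = cols @ cols.T / max(cols.shape[1], 1)  # n x n vertical covariance
    _, vecs = np.linalg.eigh(cov)                # orthonormal eigenbasis
    v = vecs[:, ::-1].T                          # rows = basis, high energy first
    return v @ block @ dct_matrix(m).T           # adaptive vertical, fixed horizontal
```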

In some examples, a prediction error data partition (either a prediction or coding partition) or a partition of original pixel data may be received for transform coding. In some examples, a prediction error data partition may be partitioned to generate a plurality of coding partitions of the prediction error data partition. A content adaptive transform including a closed-form solution for a parametric Haar transform may be performed on an individual coding partition of the plurality of coding partitions or a prediction error data partition or a partition of original pixel data to generate transform coefficients associated with the individual coding partition. The transform coefficients may be quantized to generate quantized transform coefficients. Data associated with the quantized transform coefficients may be entropy encoded into a bitstream.
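
The sequence just described (transform, quantize, entropy encode) may be sketched end to end as below. The `forward_transform` callable stands in for the closed-form parametric transform, the zigzag order and zero-run symbols are assumed details, and the entropy coding stage is reduced to a symbol list for brevity.

```python
import numpy as np

def zigzag(block: np.ndarray) -> np.ndarray:
    """Scan 2D coefficients along anti-diagonals, front-loading low frequencies."""
    h, w = block.shape
    order = sorted(((y, x) for y in range(h) for x in range(w)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return np.array([block[y, x] for y, x in order])

def encode_partition(partition: np.ndarray, forward_transform, q: float = 8.0):
    """Transform -> quantize -> scan -> (run, level) symbols for entropy coding."""
    coeffs = forward_transform(partition)
    levels = np.round(coeffs / q).astype(int)   # quantized transform coefficients
    symbols, run = [], 0
    for v in zigzag(levels):
        if v == 0:
            run += 1                            # trailing zeros stay implicit
        else:
            symbols.append((run, int(v)))       # zero run preceding a level
            run = 0
    return symbols                              # input to the entropy encoder
```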

As used herein, the term “coder” may refer to an encoder and/or a decoder. Similarly, as used herein, the term “coding” may refer to performing video encoding via an encoder and/or performing video decoding via a decoder. For example, a video encoder and video decoder may both be examples of coders capable of coding video data. In addition, as used herein, the term “codec” may refer to any process, program or set of operations, such as, for example, any combination of software, firmware, and/or hardware that may implement an encoder and/or a decoder. Further, as used herein, the phrase “video data” may refer to any type of data associated with video coding such as, for example, video frames, image data, encoded bit stream data, or the like.

FIG. 1 is an illustrative diagram of an example next generation video encoder 100, arranged in accordance with at least some implementations of the present disclosure. As shown, encoder 100 may receive input video 101. Input video 101 may include any suitable input video for encoding such as, for example, input frames of a video sequence. As shown, input video 101 may be received via a content pre-analyzer module 102. Content pre-analyzer module 102 may be configured to perform analysis of the content of video frames of input video 101 to determine various types of parameters for improving video coding efficiency and speed performance. For example, content pre-analyzer module 102 may determine horizontal and vertical gradient information (e.g., Rs, Cs), variance, spatial complexity per picture, temporal complexity per picture, scene change detection, motion range estimation, gain detection, prediction distance estimation, number of objects estimation, region boundary detection, spatial complexity map computation, focus estimation, film grain estimation, or the like. The parameters generated by content pre-analyzer module 102 may be used by encoder 100 (e.g., via encode controller 103) and/or quantized and communicated to a decoder. As shown, video frames and/or other data may be transmitted from content pre-analyzer module 102 to adaptive picture organizer module 104, which may determine the picture type (e.g., I-, P-, or F/B-picture) of each video frame and reorder the video frames as needed. In some examples, adaptive picture organizer module 104 may include a frame portion generator configured to generate frame portions. In some examples, content pre-analyzer module 102 and adaptive picture organizer module 104 may together be considered a pre-analyzer subsystem of encoder 100.
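
As one reading of the horizontal and vertical gradient information (Rs, Cs) named above, the sketch below takes them as the RMS of vertical and horizontal pixel differences over a frame or block; the exact definition is an assumption, as it is not given in this text.

```python
import numpy as np

def rs_cs(frame: np.ndarray):
    """Row-difference (Rs) and column-difference (Cs) activity measures."""
    f = frame.astype(np.float64)
    rs = np.sqrt(np.mean((f[1:, :] - f[:-1, :]) ** 2))  # vertical gradient energy
    cs = np.sqrt(np.mean((f[:, 1:] - f[:, :-1]) ** 2))  # horizontal gradient energy
    return rs, cs
```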

As shown, video frames and/or other data may be transmitted from adaptive picture organizer module 104 to prediction partitions generator module 105. In some examples, prediction partitions generator module 105 may divide a frame or picture into tiles or super-fragments or the like. In some examples, an additional module (e.g., between modules 104 and 105) may be provided for dividing a frame or picture into tiles or super-fragments. Prediction partitions generator module 105 may divide each tile or super-fragment into potential prediction partitionings or partitions. In some examples, the potential prediction partitionings may be determined using a partitioning technique such as, for example, a k-d tree partitioning technique, a bi-tree partitioning technique, or the like, which may be determined based on the picture type (e.g., I-, P-, or F/B-picture) of individual video frames, a characteristic of the frame portion being partitioned, or the like. In some examples, the determined potential prediction partitionings may be partitions for prediction (e.g., inter- or intra-prediction) and may be described as prediction partitions or prediction blocks or the like.

In some examples, a selected prediction partitioning (e.g., prediction partitions) may be determined from the potential prediction partitionings. For example, the selected prediction partitioning may be based on determining, for each potential prediction partitioning, predictions using characteristics and motion based multi-reference predictions or intra-predictions, and determining prediction parameters. For each potential prediction partitioning, a potential prediction error may be determined by differencing original pixels with prediction pixels, and the selected prediction partitioning may be the potential prediction partitioning with the minimum prediction error. In other examples, the selected prediction partitioning may be determined based on a rate distortion optimization including a weighted scoring based on the number of bits for coding the partitioning and a prediction error associated with the prediction partitioning.
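
That selection may be sketched by scoring each candidate partitioning with its summed squared prediction error plus a weighted bit cost; the `predict` and `bits_for` callables and the weight value are placeholders assumed for illustration.

```python
import numpy as np

def pick_partitioning(tile, candidates, predict, bits_for, weight=0.02):
    """Return the candidate partitioning with the lowest weighted score.

    Each candidate is a list of (y, x, h, w) rectangles; `predict` returns
    the predicted pixels for a rectangle and `bits_for` estimates the bits
    needed to signal the partitioning.
    """
    best = None
    for partitioning in candidates:
        sse = 0.0
        for (y, x, h, w) in partitioning:
            original = tile[y:y + h, x:x + w]
            sse += float(np.sum((original - predict(y, x, h, w)) ** 2))
        score = sse + weight * bits_for(partitioning)   # error + weighted bits
        if best is None or score < best[0]:
            best = (score, partitioning)
    return best[1]
```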

As shown, the original pixels of the selected prediction partitioning (e.g., prediction partitions of a current frame) may be differenced with predicted partitions (e.g., a prediction of the prediction partition of the current frame based on a reference frame or frames and other predictive data such as inter- or intra-prediction data) at differencer 106. The determination of the predicted partitions will be described further below and may include a decode loop as shown in FIG. 1. Any residuals or residual data (e.g., partition prediction error data) from the differencing may be transmitted to coding partitions generator module 107. In some examples, such as for intra-prediction of prediction partitions in any picture type (I-, F/B-, or P-pictures), coding partitions generator module 107 may be bypassed via switches 107a and 107b. In such examples, only a single level of partitioning may be performed. Such partitioning may be described as prediction partitioning (as discussed) or coding partitioning or both. In various examples, such partitioning may be performed via prediction partitions generator module 105 (as discussed) or, as is discussed further herein, such partitioning may be performed via a k-d tree intra-prediction/coding partitioner module or a bi-tree intra-prediction/coding partitioner module implemented via coding partitions generator module 107.

In some examples, the partition prediction error data, if any, may not be significant enough to warrant encoding. In other examples, where it may be desirable to encode the partition prediction error data and the partition prediction error data is associated with inter-prediction or the like, coding partitions generator module 107 may determine coding partitions of the prediction partitions. In some examples, coding partitions generator module 107 may not be needed as the partition may be encoded without coding partitioning (e.g., as shown via the bypass path available via switches 107a and 107b). With or without coding partitioning, the partition prediction error data (which may subsequently be described as coding partitions in either event) may be transmitted to adaptive transform module 108 in the event the residuals or residual data require encoding. In some examples, prediction partitions generator module 105 and coding partitions generator module 107 may together be considered a partitioner subsystem of encoder 100. In various examples, coding partitions generator module 107 may operate on partition prediction error data, original pixel data, residual data, or wavelet data.

Coding partitions generator module 107 may generate potential coding partitionings (e.g., coding partitions) of, for example, partition prediction error data using bi-tree and/or k-d tree partitioning techniques or the like. In some examples, the potential coding partitions may be transformed using adaptive or fixed transforms with various block sizes via adaptive transform module 108, and a selected coding partitioning and selected transforms (e.g., adaptive or fixed) may be determined based on a rate distortion optimization or other basis. In some examples, the selected coding partitioning and/or the selected transform(s) may be determined based on a predetermined selection method based on coding partitions size or the like.

For example, adaptive transform module 108 may include a first portion or component for performing a parametric transform to allow locally optimal transform coding of small to medium size blocks and a second portion or component for performing globally stable, low overhead transform coding using a fixed transform, such as a discrete cosine transform (DCT) or a picture based transform from a variety of transforms, including parametric transforms, or any other configuration as is discussed further herein. In some examples, for locally optimal transform coding, a Parametric Haar Transform (PHT) or a closed-form solution for a parametric Haar transform or the like may be performed, as is discussed further herein. In some examples, transforms may be performed on 2D blocks of rectangular sizes between about 4x4 pixels and 64x64 pixels, with actual sizes depending on a number of factors such as whether the transformed data is luma or chroma, or inter or intra, or whether the determined transform used is PHT or DCT or the like.
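
The routing between the two components may be sketched by block size as below; the 16-pixel threshold and the `parametric_fwd` callable are illustrative assumptions, since the exact split within the 4x4 to 64x64 range is left open here.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis."""
    k, x = np.arange(n)[:, None], np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0] *= np.sqrt(0.5)
    return m

def transform_partition(block: np.ndarray, parametric_fwd, small_max: int = 16):
    """Send small-to-medium blocks to the parametric (e.g., PHT) path and
    larger blocks to a fixed separable DCT."""
    h, w = block.shape
    if max(h, w) <= small_max:
        return 'parametric', parametric_fwd(block)
    return 'fixed', dct_matrix(h) @ block @ dct_matrix(w).T
```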

As shown, the resultant transform coefficients may be transmitted to adaptive quantize module 109. Adaptive quantize module 109 may quantize the resultant transform coefficients. Further, any data associated with a parametric transform, as needed, may be transmitted to either adaptive quantize module 109 (if quantization is desired) or adaptive entropy encoder module 110. Also as shown in FIG. 1, the quantized coefficients may be scanned and transmitted to adaptive entropy encoder module 110. Adaptive entropy encoder module 110 may entropy encode the quantized coefficients and include them in output bitstream 111. In some examples, adaptive transform module 108 and adaptive quantize module 109 may together be considered a transform encoder subsystem of encoder 100.

As also shown in FIG. 1, encoder 100 includes a local decode loop. The local decode loop may begin at adaptive inverse quantize module 112. Adaptive inverse quantize module 112 may be configured to perform the opposite operation(s) of adaptive quantize module 109 such that an inverse scan may be performed and quantized coefficients may be de-scaled to determine transform coefficients. Such an adaptive quantize operation may be lossy, for example. As shown, the transform coefficients may be transmitted to an adaptive inverse transform module 113. Adaptive inverse transform module 113 may perform the inverse of the transform performed by adaptive transform module 108, for example, to generate residuals or residual values or partition prediction error data (or original data or wavelet data, as discussed) associated with coding partitions. In some examples, adaptive inverse quantize module 112 and adaptive inverse transform module 113 may together be considered a transform decoder subsystem of encoder 100.
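
The quantize and de-scale round trip that makes this loop lossy may be sketched as follows; the dead-zone parameter is an assumed detail rather than something specified here.

```python
import numpy as np

def quantize(coeffs: np.ndarray, q: float, dead_zone: float = 0.25) -> np.ndarray:
    """Uniform quantizer with a small dead zone around zero."""
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / q + (0.5 - dead_zone))

def dequantize(levels: np.ndarray, q: float) -> np.ndarray:
    """De-scale quantized levels back to approximate transform coefficients."""
    return levels * q

# Round trip as used in the local decode loop; the result generally
# differs from the input coefficients, which is why the step is lossy:
# recon = dequantize(quantize(coeffs, q), q)
```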

As shown, the partition prediction error data (or the like) may be transmitted to optional coding partitions assembler 114. Coding partitions assembler 114 may assemble coding partitions into decoded prediction partitions as needed (as shown, in some examples, coding partitions assembler 114 may be skipped via switches 114a and 114b such that decoded prediction partitions may have been generated at adaptive inverse transform module 113) to generate prediction partitions of prediction error data or decoded residual prediction partitions or the like.

As shown, the decoded residual prediction partitions may be added to predicted partitions (e.g., prediction pixel data) at adder 115 to generate reconstructed prediction partitions. The reconstructed prediction partitions may be transmitted to prediction partitions assembler 116. Prediction partitions assembler 116 may assemble the reconstructed prediction partitions to generate reconstructed tiles or super-fragments. In some examples, coding partitions assembler module 114 and prediction partitions assembler module 116 may together be considered an un-partitioner subsystem of encoder 100.

The reconstructed tiles or super-fragments may be transmitted to blockiness analyzer and deblock filtering module 117. Blockiness analyzer and deblock filtering module 117 may deblock and dither the reconstructed tiles or super-fragments (or prediction partitions of tiles or super-fragments). The generated deblock and dither filter parameters may be used for the current filter operation and/or coded in bitstream 111 for use by a decoder, for example. The output of blockiness analyzer and deblock filtering module 117 may be transmitted to a quality analyzer and quality restoration filtering module