`INTERNATIONAL ORGANISATION FOR STANDARDISATION
`ORGANISATION INTERNATIONALE DE NORMALISATION
`ISO/IEC JTC1/SC29/WG11
`CODING OF MOVING PICTURES AND ASSOCIATED AUDIO
`
`ISO/IEC JTC1/SC29/WG11 N0702rev
`
`Incorporating N702 Delta of 24 March
`25 March 1994
`
`Video
`
`INFORMATION TECHNOLOGY -
`GENERIC CODING OF MOVING PICTURES AND
`ASSOCIATED AUDIO
`Recommendation H.262
`
`ISO/IEC 13818-2
`
`
`Draft International Standard
`
`
`Draft of: 10:18 Friday 25 March 1994
`
`
`
`
`
`
`
`
`
`
CONTENTS

CONTENTS .......... i
Foreword .......... iii
I      Introduction .......... iv
I.1    Purpose .......... iv
I.2    Application .......... iv
I.3    Profiles and levels .......... iv
I.4    The scalable and the non-scalable syntax .......... v
1      Scope .......... 1
2      Normative references .......... 1
3      Definitions .......... 3
4      Abbreviations and symbols .......... 9
4.1    Arithmetic operators .......... 9
4.2    Logical operators .......... 9
4.3    Relational operators .......... 9
4.4    Bitwise operators .......... 10
4.5    Assignment .......... 10
4.6    Mnemonics .......... 10
4.7    Constants .......... 10
5      Conventions .......... 11
5.1    Method of describing bitstream syntax .......... 11
5.2    Definition of functions .......... 12
5.3    Reserved, forbidden and marker_bit .......... 12
5.4    Arithmetic precision .......... 12
6      Video bitstream syntax and semantics .......... 13
6.1    Structure of coded video data .......... 13
6.2    Video bitstream syntax .......... 25
6.3    Video bitstream semantics .......... 39
7      The video decoding process .......... 63
7.1    Higher syntactic structures .......... 63
7.2    Variable length decoding .......... 64
7.3    Inverse scan .......... 67
7.4    Inverse quantisation .......... 68
7.5    Inverse DCT .......... 73
7.6    Motion compensation .......... 73
7.7    Spatial scalability .......... 90
7.8    SNR scalability .......... 103
7.9    Temporal scalability .......... 109
7.10   Data partitioning .......... 113
7.11   Hybrid scalability .......... 115
7.12   Output of the decoding process .......... 116
8      Profiles and levels .......... 119
8.1    ISO/IEC 11172-2 compatibility .......... 120
8.2    Relationship between defined profiles .......... 120
8.3    Relationship between defined levels .......... 122
8.4    Scalable layers .......... 123
8.4.1  Permissible layer combinations .......... 124
8.5    Parameter values for defined profiles, levels and layers .......... 126
Annex A  Discrete cosine transform .......... 131
Annex B  Variable length code tables .......... 132
B.1    Macroblock addressing .......... 132
B.2    Macroblock type .......... 133
B.3    Macroblock pattern .......... 138
B.4    Motion vectors .......... 139
B.5    DCT coefficients .......... 140
Annex C  Video buffering verifier .......... 149
`
`(10:18 Friday 25 March 1994)
`
`ITU-T Draft Rec. H.262
`
`i
`
Annex D  Features supported by the algorithm .......... 154
D.1    Overview .......... 154
D.2    Video Formats .......... 154
D.3    Picture Quality .......... 155
D.4    Data Rate Control .......... 155
D.5    Low Delay Mode .......... 156
D.6    Random Access/Channel Hopping .......... 156
D.7    Scalability .......... 156
D.8    Compatibility .......... 164
D.9    Differences between this specification and ISO/IEC 11172-2 .......... 164
D.10   Complexity .......... 167
D.11   Editing Encoded Bitstreams .......... 167
D.12   Trick modes .......... 167
D.13   Error Resilience .......... 169
Annex E  Profile and level restrictions .......... 178
E.1    Syntax element restrictions in profiles .......... 178
E.2    Permissible layer combinations (see 8.4.1) .......... 189
Annex F  Patent statements .......... 192
Annex G  Bibliography .......... 194
`
`
`
`
`
`
`
`Foreword
`The ITU-T (the ITU Telecommunication Standardisation Sector) is a permanent organ of the
`International Telecommunications Union (ITU). The ITU-T is responsible for studying technical,
`operating and tariff questions and issuing Recommendations on them with a view to developing
`telecommunication standards on a world-wide basis.
The World Telecommunication Standardisation Conference, which meets every four years, establishes,
among other things, the programme of work arising from the review of existing questions and new
questions. The approval of new or revised Recommendations by members of the ITU-T is covered by the
procedure laid down in ITU-T Resolution No. 1 (Helsinki, 1993). A proposal for a
Recommendation is accepted if 70% or more of the replies from members indicate approval.
`ISO (the International Organisation for Standardisation) and IEC (the International Electrotechnical
`Commission) form the specialised system for world-wide standardisation. National Bodies that are
`members of ISO and IEC participate in the development of International Standards through technical
`committees established by the respective organisation to deal with particular fields of technical
`activity. ISO and IEC technical committees collaborate in fields of mutual interest. Other
`international organisations, governmental and non-governmental, in liaison with ISO and IEC, also
`take part in the work.
`In the field of information technology, ISO and IEC have established a joint technical committee,
`ISO/IEC JTC1. Draft International Standards adopted by the joint technical committee are circulated
`to national bodies for voting. Publication as an International Standard requires approval by at least
`75% of the national bodies casting a vote.
This specification is a committee draft that is being submitted for approval to the ITU-T and to
ISO/IEC JTC1/SC29. It was prepared jointly by SC29/WG11, also known as MPEG (Moving Pictures
`Expert Group), and the Experts Group for ATM Video Coding in the ITU-T SG15. MPEG was
`formed in 1988 to establish standards for coding of moving pictures and associated audio for various
`applications such as digital storage media, distribution and communication. The Experts Group for
`ATM Video Coding was formed in 1990 to develop video coding standard(s) appropriate for B-ISDN
`using ATM transport.
`In this specification Annex A, Annex B and Annex C contain normative requirements and are an
`integral part of this specification. Annex D, Annex E, Annex F and Annex G are informative and
`contain no normative requirements.
This International Standard (ISO/IEC 13818) is published in four Parts.

13818-1 systems —      specifies the system coding of the specification. It defines a multiplexed
                       structure for combining audio and video data and means of representing the
                       timing information needed to replay synchronised sequences in real-time.

13818-2 video —        specifies the coded representation of video data and the decoding process
                       required to reconstruct pictures.

13818-3 audio —        specifies the coded representation of audio data.

13818-4 conformance —  specifies the procedures for determining the characteristics of coded
                       bitstreams and for testing compliance with the requirements stated in
                       13818-1, 13818-2 and 13818-3.
`
`
`
`
`
`
`
I Introduction

I.1 Purpose
`This Part of this specification was developed in response to the growing need for a generic coding
`method of moving pictures and of associated sound for various applications such as digital storage
`media, television broadcasting and communication. The use of this specification means that motion
`video can be manipulated as a form of computer data and can be stored on various storage media,
`transmitted and received over existing and future networks and distributed on existing and future
`broadcasting channels.
`
I.2 Application
`The applications of this specification cover, but are not limited to, such areas as listed below:
BSS    Broadcasting Satellite Service (to the home)
CATV   Cable TV Distribution on optical networks, copper, etc.
CDAD   Cable Digital Audio Distribution
DAB    Digital Audio Broadcasting (terrestrial and satellite broadcasting)
DTTB   Digital Terrestrial Television Broadcast
EC     Electronic Cinema
ENG    Electronic News Gathering (including SNG, Satellite News Gathering)
FSS    Fixed Satellite Service (e.g. to head ends)
HTT    Home Television Theatre
IPC    Interpersonal Communications (videoconferencing, videophone, etc.)
ISM    Interactive Storage Media (optical disks, etc.)
MMM    Multimedia Mailing
NCA    News and Current Affairs
NDB    Networked Database Services (via ATM, etc.)
RVS    Remote Video Surveillance
SSM    Serial Storage Media (digital VTR, etc.)
`
I.3 Profiles and levels
`This specification is intended to be generic in the sense that it serves a wide range of applications,
`bitrates, resolutions, qualities and services. Applications should cover, among other things, digital
`storage media, television broadcasting and communications. In the course of creating this
`specification, various requirements from typical applications have been considered, necessary
`algorithmic elements have been developed, and they have been integrated into a single syntax. Hence
`this specification will facilitate the bitstream interchange among different applications.
`Considering the practicality of implementing the full syntax of this specification, however, a limited
`number of subsets of the syntax are also stipulated by means of "profile" and "level". These and other
`related terms are formally defined in clause 3 of this specification.
`A “profile” is a defined subset of the entire bitstream syntax that is defined by this specification.
`Within the bounds imposed by the syntax of a given profile it is still possible to require a very large
`variation in the performance of encoders and decoders depending upon the values taken by parameters
in the bitstream. For instance it is possible to specify frame sizes as large as (approximately) 2^14
`
samples wide by 2^14 lines high. It is currently neither practical nor economic to implement a decoder
`capable of dealing with all possible frame sizes.
`In order to deal with this problem “levels” are defined within each profile. A level is a defined set of
`constraints imposed on parameters in the bitstream. These constraints may be simple limits on
`numbers. Alternatively they may take the form of constraints on arithmetic combinations of the
`parameters (e.g. frame width multiplied by frame height multiplied by frame rate).
Bitstreams complying with this specification use a common syntax. In order to achieve a subset of the
complete syntax, flags and parameters are included in the bitstream that signal the presence or
otherwise of syntactic elements that occur later in the bitstream. In order to specify constraints on the
`syntax (and hence define a profile) it is thus only necessary to constrain the values of these flags and
`parameters that specify the presence of later syntactic elements.
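As an illustration of the two kinds of constraint a level may impose, the sketch below checks simple per-parameter limits and one arithmetic combination (frame width multiplied by frame height multiplied by frame rate). This is illustrative only: the function name and the numeric limits are invented for this example, not values defined by this specification.

```python
# Hedged sketch: a "level" as simple limits plus a combined-parameter limit.
# The limit values below are hypothetical examples, not normative.

def within_level(frame_width, frame_height, frame_rate,
                 max_width=720, max_height=576, max_luminance_rate=10_368_000):
    """Check per-parameter limits and a combined samples-per-second limit."""
    if frame_width > max_width or frame_height > max_height:
        return False
    # Constraint on an arithmetic combination of parameters:
    return frame_width * frame_height * frame_rate <= max_luminance_rate

print(within_level(720, 576, 25))   # inside all limits
print(within_level(720, 576, 30))   # combined constraint exceeded
```

A decoder conforming to a given profile and level need only handle bitstreams whose parameters pass such checks.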
`
I.4 The scalable and the non-scalable syntax
`The full syntax can be divided into two major categories: One is the non-scalable syntax, which is
structured as a superset of the syntax defined in ISO/IEC 11172-2. The main feature of the non-
`scalable syntax is the extra compression tools for interlaced video signals. The second is the scalable
`syntax, the key property of which is to enable the reconstruction of useful video from pieces of a total
`bitstream. This is achieved by structuring the total bitstream in two or more layers, starting from a
`standalone base layer and adding a number of enhancement layers. The base layer can use the non-
`scalable syntax, or in some situations conform to the ISO/IEC 11172-2 syntax.
`
I.4.1 Overview of the non-scalable syntax
`The coded representation defined in the non-scalable syntax achieves a high compression ratio while
`preserving good image quality. The algorithm is not lossless as the exact sample values are not
`preserved during coding. Obtaining good image quality at the bitrates of interest demands very high
`compression, which is not achievable with intra picture coding alone. The need for random access,
`however, is best satisfied with pure intra picture coding. The choice of the techniques is based on the
`need to balance a high image quality and compression ratio with the requirement to make random
`access to the coded bitstream.
`A number of techniques are used to achieve high compression. The algorithm first uses block-based
`motion compensation to reduce the temporal redundancy. Motion compensation is used both for causal
`prediction of the current picture from a previous picture, and for non-causal, interpolative prediction
`from past and future pictures. Motion vectors are defined for each 16-sample by 16-line region of the
`picture. The difference signal, i.e., the prediction error, is further compressed using the discrete cosine
`transform (DCT) to remove spatial correlation before it is quantised in an irreversible process that
`discards the less important information. Finally, the motion vectors are combined with the residual
`DCT information, and encoded using variable length codes.
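The first stage of the pipeline just described can be sketched in miniature. This is an illustrative toy, not part of the specification: the block size, picture contents and function names are invented, and real macroblocks are 16 samples by 16 lines.

```python
# Toy sketch of block-based motion compensation: predict the current block
# from a displaced block of the reference picture, then form the difference
# signal (prediction error) that would subsequently be DCT-coded.

def predict_block(ref, x, y, mv, size=4):
    """Fetch the block of the reference picture displaced by mv = (dx, dy)."""
    dx, dy = mv
    return [[ref[y + dy + j][x + dx + i] for i in range(size)]
            for j in range(size)]

def prediction_error(cur_block, pred_block):
    """The difference signal that is further compressed with the DCT."""
    return [[c - p for c, p in zip(cr, pr)]
            for cr, pr in zip(cur_block, pred_block)]

# Reference picture: a simple gradient. The current picture is the reference
# shifted one sample left, i.e. true motion of (1, 0).
ref = [[r * 10 + c for c in range(8)] for r in range(8)]
cur_pic = [[ref[r][min(c + 1, 7)] for c in range(8)] for r in range(8)]

cur_block = [[cur_pic[2 + j][2 + i] for i in range(4)] for j in range(4)]
err = prediction_error(cur_block, predict_block(ref, 2, 2, (1, 0)))
assert all(v == 0 for row in err for v in row)  # correct vector: zero residual
```

With the correct motion vector the residual is zero and costs almost nothing to code; a poor vector leaves a large residual for the DCT stage to absorb.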
`
I.4.1.1 Temporal processing
`Because of the conflicting requirements of random access and highly efficient compression, three main
`picture types are defined. Intra coded pictures (I-Pictures) are coded without reference to other
`pictures. They provide access points to the coded sequence where decoding can begin, but are coded
`with only moderate compression. Predictive coded pictures (P-Pictures) are coded more efficiently
`using motion compensated prediction from a past intra or predictive coded picture and are generally
`used as a reference for further prediction. Bidirectionally-predictive coded pictures (B-Pictures)
`provide the highest degree of compression but require both past and future reference pictures for
`motion compensation. Bidirectionally-predictive coded pictures are never used as references for
`prediction (except in the case that the resulting picture is used as a reference in a spatially scalable
`enhancement layer). The organisation of the three picture types in a sequence is very flexible. The
`choice is left to the encoder and will depend on the requirements of the application. Figure I-1
`illustrates the relationship among the three different picture types.
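Because a B-picture needs both a past and a future reference before it can be decoded, encoders commonly transmit pictures in an order different from display order, so that both references of each B-picture precede it in the bitstream. The sketch below shows one common reordering; it is illustrative only (the reordering rule is an encoder choice, and `coding_order` is an invented name).

```python
# Hedged sketch: reorder a display-order sequence so every reference (I or P)
# picture precedes the B-pictures that depend on it.

def coding_order(display):
    """Move each reference picture ahead of the B-pictures that use it."""
    out, pending_b = [], []
    for pic in display:
        if pic.startswith('B'):
            pending_b.append(pic)      # wait for the future reference
        else:
            out.append(pic)            # reference picture is sent first
            out.extend(pending_b)      # then the B-pictures it completes
            pending_b = []
    return out + pending_b

display = ['I0', 'B1', 'B2', 'P3', 'B4', 'B5', 'P6']
print(coding_order(display))  # ['I0', 'P3', 'B1', 'B2', 'P6', 'B4', 'B5']
```

A decoder performs the inverse reordering before output, which is one source of the extra delay associated with B-pictures.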
`
`
[Figure I-1 shows an example temporal picture structure: the sequence
I B B P B B B P, with prediction arrows from each reference (I or P) picture
to the next P-picture, and bidirectional interpolation arrows into each
B-picture from the surrounding reference pictures.]

Figure I-1 Example of temporal picture structure
`
`
`
I.4.1.2 Coding interlaced video
`Each frame of interlaced video consists of two fields which are separated by one field-period. The
specification allows either the frame to be encoded as a single picture or the two fields to be encoded as two separate
`pictures. Frame encoding or field encoding can be adaptively selected on a frame-by-frame basis.
`Frame encoding is typically preferred when the video scene contains significant detail with limited
`motion. Field encoding, in which the second field can be predicted from the first, works better when
`there is fast movement.
`
I.4.1.3 Motion representation - macroblocks
`As in ISO/IEC 11172-2, the choice of 16 by 16 macroblocks for the motion-compensation unit is a
`result of the trade-off between the coding gain provided by using motion information and the overhead
`needed to represent it. Each macroblock can be temporally predicted in one of a number of different
`ways. For example, in frame encoding, the prediction from the previous reference frame can itself be
`either frame-based or field-based. Depending on the type of the macroblock, motion vector
`information and other side information is encoded with the compressed prediction error signal in each
`macroblock. The motion vectors are encoded differentially with respect to the last encoded motion
`vectors using variable length codes. The maximum length of the motion vectors that may be
`represented can be programmed, on a picture-by-picture basis, so that the most demanding
`applications can be met without compromising the performance of the system in more normal
`situations.
`It is the responsibility of the encoder to calculate appropriate motion vectors. The specification does
`not specify how this should be done.
`
I.4.1.4 Spatial redundancy reduction
`Both original pictures and prediction error signals have high spatial redundancy. This specification
`uses a block-based DCT method with visually weighted quantisation and run-length coding. After
`motion compensated prediction or interpolation, the residual picture is split into 8 by 8 blocks. These
`are transformed into the DCT domain where they are weighted before being quantised. After
`quantisation many of the coefficients are zero in value and so two-dimensional run-length and variable
`length coding is used to encode the remaining coefficients efficiently.
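The final stage described above can be sketched as follows. The quantiser shown is a toy uniform quantiser with an illustrative step size, and only positive coefficients are used for brevity; the visual weighting matrices and variable length code tables of the actual specification are omitted. The scan shown is the classic zigzag order.

```python
# Hedged sketch: after quantisation most DCT coefficients are zero, so the
# 8x8 block is scanned into a 1-D sequence and coded as (run, level) pairs.

def zigzag_indices(n=8):
    """Classic zigzag scan order over an n x n block of coefficients."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

def run_level_pairs(coeffs, step=16):
    """Quantise, then code non-zero coefficients as (run-of-zeros, level)."""
    quant = [c // step for c in coeffs]      # toy uniform quantiser
    pairs, run = [], 0
    for q in quant:
        if q == 0:
            run += 1
        else:
            pairs.append((run, q))
            run = 0
    return pairs                             # trailing zeros implied by EOB

# A typical post-DCT block: a few significant low-frequency coefficients.
block = [[0] * 8 for _ in range(8)]
block[0][0], block[0][1], block[2][0] = 640, 48, 33
scanned = [block[r][c] for r, c in zigzag_indices()]
print(run_level_pairs(scanned))              # [(0, 40), (0, 3), (1, 2)]
```

Only three pairs (plus an end-of-block code) are needed for the whole 64-coefficient block, which is where most of the compression of the residual comes from.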
`
I.4.1.5 Chrominance formats
`In addition to the 4:2:0 format supported in ISO/IEC 11172-2 this specification supports 4:2:2 and
`4:4:4 chrominance formats.
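The three formats differ only in how the chrominance components are subsampled relative to luminance. A brief illustrative sketch (the frame size and function name are invented for this example):

```python
# Hedged sketch: samples per chrominance component for each format,
# relative to a width x height luminance array.

def chroma_samples(width, height, fmt):
    if fmt == '4:2:0':
        return (width // 2) * (height // 2)   # subsampled both ways
    if fmt == '4:2:2':
        return (width // 2) * height          # subsampled horizontally only
    if fmt == '4:4:4':
        return width * height                 # full resolution
    raise ValueError(fmt)

for fmt in ('4:2:0', '4:2:2', '4:4:4'):
    print(fmt, chroma_samples(720, 576, fmt))
```

So moving from 4:2:0 to 4:2:2 doubles, and to 4:4:4 quadruples, the number of chrominance samples to be coded.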
`
`
`
`
`
`
`
`
I.4.2 Scalable extensions
The scalability tools in this specification are designed to support applications beyond those supported by
single layer video. Among the noteworthy application areas addressed are video telecommunications,
`video on asynchronous transfer mode networks (ATM), interworking of video standards, video
`service hierarchies with multiple spatial, temporal and quality resolutions, HDTV with embedded TV,
`systems allowing migration to higher temporal resolution HDTV etc. Although a simple solution to
`scalable video is the simulcast technique which is based on transmission/storage of multiple
`independently coded reproductions of video, a more efficient alternative is scalable video coding, in
`which the bandwidth allocated to a given reproduction of video can be partially reutilised in coding of
`the next reproduction of video. In scalable video coding, it is assumed that given an encoded
`bitstream, decoders of various complexities can decode and display appropriate reproductions of coded
`video. A scalable video encoder is likely to have increased complexity when compared to a single
`layer encoder. However, this standard provides several different forms of scalabilities that address
nonoverlapping applications with corresponding complexities. The basic scalability tools offered are:
data partitioning, SNR scalability, spatial scalability and temporal scalability. Moreover,
combinations of these basic scalability tools are also supported and are referred to as hybrid
scalability. In the case of basic scalability, two layers of video referred to as the lower layer and the
`enhancement layer are allowed, whereas in hybrid scalability up to three layers are supported. The
`following Tables provide a few example applications of various scalabilities.
`
Table I-1 Applications of SNR scalability

Lower layer             Enhancement layer                           Application
ITU-R-601               Same resolution and format as lower layer   Two quality service for Standard TV
High Definition         Same resolution and format as lower layer   Two quality service for HDTV
4:2:0 High Definition   4:2:2 chroma simulcast                      Video production / distribution

Table I-2 Applications of spatial scalability

Base                Enhancement         Application
progressive(30Hz)   progressive(30Hz)   CIF/SCIF compatibility or scalability
interlace(30Hz)     interlace(30Hz)     HDTV/SDTV scalability
progressive(30Hz)   interlace(30Hz)     ISO/IEC 11172-2/compatibility with this specification
interlace(30Hz)     progressive(60Hz)   Migration to HR progressive HDTV

Table I-3 Applications of temporal scalability

Base                Enhancement         Higher              Application
progressive(30Hz)   progressive(30Hz)   progressive(60Hz)   Migration to HR progressive HDTV
interlace(30Hz)     interlace(30Hz)     progressive(60Hz)   Migration to HR progressive HDTV
`
I.4.2.1 Spatial scalable extension
`Spatial scalability is a tool intended for use in video applications involving telecommunications,
`interworking of video standards, video database browsing, interworking of HDTV and TV etc., i.e.,
`video systems with the primary common feature that a minimum of two layers of spatial resolution are
`necessary. Spatial scalability involves generating two spatial resolution video layers from a single
`video source such that the lower layer is coded by itself to provide the basic spatial resolution and the
`
`
`enhancement layer employs the spatially interpolated lower layer and carries the full spatial resolution
of the input video source. The lower and the enhancement layers may either both use the coding tools
in this specification, or the lower layer may use ISO/IEC 11172-2 while the enhancement layer uses
this specification. The latter case achieves a further advantage by facilitating interworking between
video coding standards. Moreover, spatial scalability offers flexibility in the choice of video formats to be
employed in each layer. An additional advantage of spatial scalability is its ability to provide resilience
to transmission errors, as the more important data of the lower layer can be sent over a channel with
better error performance, while the less critical enhancement layer data can be sent over a channel with
poorer error performance.
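The spatial scalability scheme described above can be sketched as follows: the decoded lower layer is spatially interpolated to full resolution and used as a prediction for the enhancement layer, which carries only the residual. Nearest-neighbour interpolation and the function names are illustrative choices for this example, not the interpolation defined by this specification.

```python
# Hedged sketch of spatial scalability: upsample the lower layer, use it as
# a prediction, and reconstruct the full resolution by adding the residual.

def upsample2x(lower):
    """Nearest-neighbour 2x interpolation (illustrative only)."""
    return [[lower[r // 2][c // 2] for c in range(2 * len(lower[0]))]
            for r in range(2 * len(lower))]

def reconstruct(lower, residual):
    pred = upsample2x(lower)
    return [[p + e for p, e in zip(pr, er)] for pr, er in zip(pred, residual)]

lower = [[10, 20], [30, 40]]                   # basic spatial resolution
full  = [[10, 11, 20, 21], [12, 13, 22, 23],
         [30, 31, 40, 41], [32, 33, 42, 43]]   # full-resolution source
residual = [[f - p for f, p in zip(fr, pr)]
            for fr, pr in zip(full, upsample2x(lower))]
assert reconstruct(lower, residual) == full
```

A lower-layer-only decoder stops after decoding `lower`; an enhancement decoder additionally decodes the residual and reconstructs the full resolution.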
`
I.4.2.2 SNR scalable extension
`SNR scalability is a tool intended for use in video applications involving telecommunications, video
`services with multiple qualities, standard TV and HDTV, i.e., video systems with the primary
`common feature that a minimum of two layers of video quality are necessary. SNR scalability involves
`generating two video layers of same spatial resolution but different video qualities from a single video
`source such that the lower layer is coded by itself to provide the basic video quality and the
`enhancement layer is coded to enhance the lower layer. The enhancement layer when added back to
the lower layer regenerates a higher quality reproduction of the input video. The lower and the
enhancement layers may either both use this specification, or the lower layer may use ISO/IEC 11172-2
while the enhancement layer uses this specification. An additional advantage of SNR scalability is its
ability to provide a high degree of resilience to transmission errors, as the more important data of the
lower layer can be sent over a channel with better error performance, while the less critical enhancement
layer data can be sent over a channel with poorer error performance.
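The quality refinement described above can be sketched with scalar samples: the lower layer carries a coarsely quantised signal, and the enhancement layer carries a finer quantisation of what the lower layer missed. The step sizes and function name are illustrative, not values from this specification.

```python
# Hedged sketch of SNR scalability: a coarse lower layer plus a finer
# enhancement layer that refines the lower layer's quantisation error.

def snr_layers(samples, coarse=16, fine=2):
    base = [coarse * round(s / coarse) for s in samples]          # lower layer
    enh  = [fine * round((s - b) / fine) for s, b in zip(samples, base)]
    return base, enh

samples = [37, 120, 4, 255, 68]
base, enh = snr_layers(samples)
lower_quality  = base                          # lower layer decoded alone
higher_quality = [b + e for b, e in zip(base, enh)]
print(lower_quality)    # coarse reproduction
print(higher_quality)   # enhancement added back: closer to the source
```

Both layers have the same spatial and temporal resolution; only the fidelity of the reconstruction differs.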
`
I.4.2.3 Temporal scalable extension
`Temporal scalability is a tool intended for use in a range of diverse video applications from
`telecommunications to HDTV for which migration to higher temporal resolution systems from that of
`lower temporal resolution systems may be necessary. In many cases, the lower temporal resolution
`video systems may be either the existing systems or the less expensive early generation systems, with
the motivation of introducing more sophisticated systems gradually. Temporal scalability involves
partitioning of video frames into layers, in which the lower layer is coded by itself to provide the basic
temporal rate and the enhancement layer is coded with temporal prediction with respect to the lower
layer; these layers, when decoded and temporally multiplexed, yield the full temporal resolution of the
`video source. The lower temporal resolution systems may only decode the lower layer to provide
`basic temporal resolution, whereas more sophisticated systems of the future may decode both layers
`and provide high temporal resolution video while maintaining interworking with earlier generation
`systems. An additional advantage of temporal scalability is its ability to provide resilience to
transmission errors as the more important data of the lower layer can be sent over a channel with better
`error performance, while the less critical enhancement layer can be sent over a channel with poor error
`performance.
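The temporal partitioning described above can be sketched as follows: the source frames are split between a lower layer at the basic temporal rate and an enhancement layer, and a full-rate decoder interleaves the two decoded layers again. The even/odd split and the function names are illustrative choices for this example.

```python
# Hedged sketch of temporal scalability: split frames between two layers,
# then temporally multiplex the decoded layers back to full rate.

def split_layers(frames):
    lower = frames[0::2]          # e.g. every other frame: basic temporal rate
    enhancement = frames[1::2]    # coded with prediction from the lower layer
    return lower, enhancement

def temporal_multiplex(lower, enhancement):
    out = []
    for i, f in enumerate(lower):
        out.append(f)
        if i < len(enhancement):
            out.append(enhancement[i])
    return out

frames = ['f0', 'f1', 'f2', 'f3', 'f4', 'f5']
low, enh = split_layers(frames)
assert temporal_multiplex(low, enh) == frames      # full temporal resolution
assert low == ['f0', 'f2', 'f4']                   # lower layer alone: half rate
```

An early-generation decoder simply ignores the enhancement layer and displays `low` at the basic rate, preserving interworking as described above.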
`
I.4.2.4 Data partitioning extension
`Data partitioning is a tool intended for use when two channels are available for transmission and/or
`storage of a video bitstream, as may be the case in ATM networks, terrestrial broadcast, magnetic
`media, etc. The bitstream is partitioned between these channels such that more critical parts of the
`bitstream (such as headers, motion vectors, DC coefficients) are transmitted in the channel with the
`better error performance, and less critical data (such as higher DCT coefficients) is transmitted in the
channel with poorer error performance. Thus, degradation due to channel errors is minimised since the
`critical parts of a bitstream are better protected. Data from neither channel may be decoded on a
`decoder that is not intended for decoding data partitioned b