`Chau
`
`US005870087A
[11] Patent Number: 5,870,087
[45] Date of Patent: Feb. 9, 1999
`
[54] MPEG DECODER SYSTEM AND METHOD
`HAVING A UNIFIED MEMORY FOR
`TRANSPORT DECODE AND SYSTEM
`CONTROLLER FUNCTIONS
`
`[75] Inventor: Kwok Kit Chau, Los Altos, Calif.
`[73] Assignee: LSI Logic Corporation, Milpitas,
`Calif.
`
[21] Appl. No.: 748,269

[22] Filed: Nov. 13, 1996

[51] Int. Cl.⁶ ..................................... G06T 13/00
[52] U.S. Cl. ........................................ 345/302
[58] Field of Search ..................................... 345/302, 418;
707/101, 102, 103, 104

[57] ABSTRACT

An MPEG decoder system and method for performing video
decoding or decompression which includes a unified
memory for multiple functions according to the present
invention. The video decoding system includes transport
logic, a system controller, and MPEG decoder logic. The
video decoding system of the present invention includes a
single unified memory which stores code and data for the
transport, system controller and MPEG decoder functions.
The single unified memory is preferably a 16 Mbit memory.
The MPEG decoder logic includes a memory controller
which couples to the single unified memory, and each of the
transport logic, system controller and MPEG decoder logic
access the single unified memory through the memory
controller. The video decoding system implements various
frame memory saving schemes, such as compression or
dynamic allocation, to more efficiently use the memory. In
one embodiment, the memory is not required to store
reconstructed frame data during B-frame reconstruction,
thus considerably reducing the required amount of memory
for this function. Alternatively, the memory is only required
to store a portion of the reconstructed frame data. In
addition, these savings in memory allow portions of the
memory to also be used for transport and system controller
functions. The present invention thus provides a video
decoding system with reduced memory requirements.
`
[56] References Cited

U.S. PATENT DOCUMENTS
`
`5,675,511 10/1997 Prasad et al. ........................... 345/302
5,692,213 11/1997 Goldberg et al. ....................... 345/302
`5,767,846 6/1998 Nakamura et al. ..................... 345/302
`Primary Examiner—Phu K. Nguyen
`Assistant Examiner—Cliff N. Vo
`Attorney, Agent, or Firm—Conley, Rose & Tayon; Jeffrey C.
`Hood
`
[Front-page drawing: coded stream received from the channel into the transport + MIPS block (204, 208), MPEG decoder (206), and 16 Mbit SDRAM (212)]

20 Claims, 16 Drawing Sheets

SONY EX. 1001
Page 1
`
`
`
[Sheet 1 of 16 — drawing]
`
`
`
[Sheet 2 of 16 — drawing]
`
`
`
[Sheet 3 of 16 — drawing]
`
`
`
[Sheet 4 of 16 — drawing]
`
`
`
[Sheet 5 of 16 — drawing]
`
`
`
[Sheet 6 of 16 — drawing]
`
`
`
[Sheet 7 of 16 — drawing]
`
`
`
[Sheet 8 of 16 — drawing]
`
`
`
[Sheet 9 of 16 — chart: Memory Size (bit)]
`
`
`
[Sheet 10 of 16 — drawing]
`
`
`
[Sheet 11 of 16 — drawing]
`
`
`
[Sheet 12 of 16 — chart: Memory Size (bit) vs. cycle/macroblock; legend: 1-cycle/coeff, 2-cycle/coeff, 4-cycle/coeff; 27 MHz, 54 MHz, 81 MHz]
`
`
`
[Sheet 13 of 16 — drawing]
`
`
`
[Sheet 14 of 16 — drawing]
`
`
`
[Sheet 15 of 16 — drawing]
`
`
`
[Sheet 16 of 16 — drawing]
`
`
`
`MPEG DECODER SYSTEM AND METHOD
`HAVING A UNIFIED MEMORY FOR
`TRANSPORT DECODE AND SYSTEM
`CONTROLLER FUNCTIONS
`INCORPORATION BY REFERENCE
`The following references are hereby incorporated by
`reference.
`The ISO/IEC MPEG specification referred to as ISO/IEC
`13818 is hereby incorporated by reference in its entirety.
`U.S. patent application Ser. No. 08/654,321 titled
`“Method and Apparatus for Segmenting Memory to Reduce
`the Memory Required for Bidirectionally Predictive-Coded
`Frames” and filed May 28, 1996 is hereby incorporated by
`reference in its entirety as though fully and completely set
`forth herein.
`U.S. patent application Ser. No. 08/653,845 titled
`“Method and Apparatus for Reducing the Memory Required
`for Decoding Bidirectionally Predictive-Coded Frames Dur
`ing Pull-Down” and filed May 28, 1996 is hereby incorpo
`rated by reference in its entirety as though fully and com
`pletely set forth herein.
`U.S. patent application Ser. No. 08/689,300 titled
`“Method and Apparatus for Decoding B Frames in Video
`Codecs with Minimal Memory” and filed Aug. 8, 1996 now
`U.S. Pat. No. 5,818,533, whose inventors are David R. Auld
`and Kwok Chau, is hereby incorporated by reference in its
`entirety as though fully and completely set forth herein.
`1. Field of the Invention
`The present invention relates generally to digital video
`compression, and more particularly to an MPEG decoder
`system which includes a single unified memory for MPEG
`transport, decode and system controller functions.
`2. Description of the Related Art
`Full-motion digital video requires a large amount of
`storage and data transfer bandwidth. Thus, video systems
`use various types of video compression algorithms to reduce
`the amount of necessary storage and transfer bandwidth. In
`general, different video compression methods exist for still
`graphic images and for full-motion video. Intraframe com
`pression methods are used to compress data within a still
`image or single frame using spatial redundancies within the
`frame. Interframe compression methods are used to com
`press multiple frames, i.e., motion video, using the temporal
`redundancy between the frames. Interframe compression
`methods are used exclusively for motion video, either alone
`or in conjunction with intraframe compression methods.
`Intraframe or still image compression techniques gener
`ally use frequency domain techniques, such as the discrete
`cosine transform (DCT). Intraframe compression typically
`uses the frequency characteristics of a picture frame to
`efficiently encode a frame and remove spatial redundancy.
`Examples of video data compression for still graphic images
`are JPEG (Joint Photographic Experts Group) compression
`and RLE (run-length encoding). JPEG compression is a
`group of related standards that provide either lossless (no
`image quality degradation) or lossy (imperceptible to severe
`degradation) compression. Although JPEG compression was
`originally designed for the compression of still images rather
`than video, JPEG compression is used in some motion video
`applications. The RLE compression method operates by
`testing for duplicated pixels in a single line of the bit map
`and storing the number of consecutive duplicate pixels
`rather than the data for the pixels themselves.
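The run-length idea described above can be sketched in a few lines of Python. This is a hypothetical illustration of the technique, not code from any actual RLE implementation:

```python
def rle_encode(line):
    """Run-length encode one scan line of pixel values.

    Stores (count, value) pairs instead of the duplicate pixels
    themselves, as described for RLE above.
    """
    runs = []
    for pixel in line:
        if runs and runs[-1][1] == pixel:
            runs[-1][0] += 1          # extend the current run
        else:
            runs.append([1, pixel])   # start a new run
    return [(count, value) for count, value in runs]

# A scan line with long runs of duplicate pixels compresses well:
print(rle_encode([7, 7, 7, 7, 0, 0, 5]))  # [(4, 7), (2, 0), (1, 5)]
```

The method pays off exactly when consecutive duplicates are common, which is why it suits flat graphic images better than noisy photographic content.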
`In contrast to compression algorithms for still images,
`most video compression algorithms are designed to com
`
`press full motion video. As mentioned above, video com
`pression algorithms for motion video use a concept referred
`to as interframe compression to remove temporal redundan
`cies between frames. Interframe compression involves stor
`ing only the differences between successive frames in the
`data file. Interframe compression stores the entire image of
`a key frame or reference frame, generally in a moderately
`compressed format. Successive frames are compared with
`the key frame, and only the differences between the key
`frame and the successive frames are stored. Periodically,
`such as when new scenes are displayed, new key frames are
`stored, and subsequent comparisons begin from this new
`reference point. It is noted that the interframe compression
`ratio may be kept constant while varying the video quality.
`Alternatively, interframe compression ratios may be
`content-dependent, i.e., if the video clip being compressed
`includes many abrupt scene transitions from one image to
`another, the compression is less efficient. Examples of video
`compression which use an interframe compression tech
`nique are MPEG, DVI and Indeo, among others.
`MPEG BACKGROUND
`A compression standard referred to as MPEG (Moving
`Pictures Experts Group) compression is a set of methods for
`compression and decompression of full motion video images
`which uses the interframe and intraframe compression tech
`niques described above. MPEG compression uses both
`motion compensation and discrete cosine transform (DCT)
`processes, among others, and can yield compression ratios
`of more than 30:1.
`The two predominant MPEG standards are referred to as
`MPEG-1 and MPEG-2. The MPEG-1 standard generally
`concerns frame data reduction using block-based motion
`compensation prediction (MCP), which generally uses tem
`poral differential pulse code modulation (DPCM). The
`MPEG-2 standard is similar to the MPEG-1 standard, but
`includes extensions to cover a wider range of applications,
`including interlaced digital video such as high definition
`television (HDTV).
`Interframe compression methods such as MPEG are based
`on the fact that, in most video sequences, the background
`remains relatively stable while action takes place in the
`foreground. The background may move, but large portions
`of successive frames in a video sequence are redundant.
`MPEG compression uses this inherent redundancy to encode
`or compress frames in the sequence.
`An MPEG stream includes three types of pictures,
`referred to as the Intra (I) frame, the Predicted (P) frame, and
`the Bi-directional Interpolated (B) frame. The I or
`Intraframes contain the video data for the entire frame of
`video and are typically placed every 10 to 15 frames.
`Intraframes provide entry points into the file for random
`access, and are generally only moderately compressed.
`Predicted frames are encoded with reference to a past frame,
`i.e., a prior Intraframe or Predicted frame. Thus P frames
`only include changes relative to prior I or P frames. In
`general, Predicted frames receive a fairly high amount of
`compression and are used as references for future Predicted
`frames. Thus, both I and P frames are used as references for
`subsequent frames. Bi-directional pictures include the great
`est amount of compression and require both a past and a
`future reference in order to be encoded. Bi-directional
`frames are never used as references for other frames.
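The reference relationships above (I frames stand alone, P frames reference the previous I or P frame, and B frames reference the surrounding I/P pair) can be tabulated for a display-order sequence. The helper below is a hypothetical sketch of those dependency rules, not part of the MPEG standard itself:

```python
def reference_frames(sequence):
    """For each frame in display order, list the indices of the
    reference frames it depends on: none for an I frame, the nearest
    preceding I/P frame for a P frame, and the surrounding pair of
    I/P frames for a B frame. B frames are never references."""
    anchors = [i for i, t in enumerate(sequence) if t in "IP"]
    deps = []
    for i, t in enumerate(sequence):
        if t == "I":
            deps.append([])
        elif t == "P":
            deps.append([max(a for a in anchors if a < i)])
        else:  # B frame: nearest anchor on each side
            deps.append([max(a for a in anchors if a < i),
                         min(a for a in anchors if a > i)])
    return deps

print(reference_frames("IBBP"))  # [[], [0, 3], [0, 3], [0]]
```

Note the consequence for the decoder: both anchors of a B frame must already be decoded and held in memory before the B frame itself can be reconstructed.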
`In general, for the frame(s) following a reference frame,
`i.e., P and B frames that follow a reference I or P frame, only
`small portions of these frames are different from the corre
`
`sponding portions of the respective reference frame. Thus,
`for these frames, only the differences are captured, com
`pressed and stored. The differences between these frames are
`typically generated using motion vector estimation logic, as
`discussed below.
`When an MPEG encoder receives a video file, the MPEG
`encoder generally first creates the I frames. The MPEG
`encoder may compress the I frame using an intraframe
`compression technique. The MPEG encoder divides respec
`tive frames into a grid of 16x16 pixel squares called mac
`roblocks in order to perform motion estimation/
`compensation. Thus, for a respective target picture or frame,
`i.e., a frame being encoded, the encoder searches for an
`exact, or near exact, match between the target picture
`macroblock and a block in a neighboring picture referred to
`as a search frame. For a target P frame the encoder searches
`in a prior I or P frame. For a target B frame, the encoder
`searches in a prior or subsequent I or P frame. When a match
`is found, the encoder transmits a vector movement code or
`motion vector. The vector movement code or motion vector
`only includes information on the difference between the
`search frame and the respective target picture. The blocks in
`target pictures that have no change relative to the block in
the reference picture or I frame are ignored. Thus the amount
`of data that is actually stored for these frames is significantly
`reduced.
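The macroblock search just described amounts to minimizing a block-difference metric over candidate positions in the search frame. The sketch below is hypothetical: it uses a sum-of-absolute-differences metric and an exhaustive search over a small window, one common way (among several) of implementing the match:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
                          for a, b in zip(row_a, row_b))

def find_motion_vector(target, search, bx, by, n=2, radius=1):
    """Return the (dx, dy) offset into `search` whose n x n block best
    matches the target block with top-left corner at (bx, by)."""
    block = [row[bx:bx + n] for row in target[by:by + n]]
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = bx + dx, by + dy
            if 0 <= x <= len(search[0]) - n and 0 <= y <= len(search) - n:
                cost = sad(block, [row[x:x + n] for row in search[y:y + n]])
                if best is None or cost < best[0]:
                    best = (cost, (dx, dy))
    return best[1]  # motion vector with the lowest difference cost

# A bright 2x2 patch moved one pixel right between frames:
search = [[0, 0, 0, 0], [0, 9, 9, 0], [0, 9, 9, 0], [0, 0, 0, 0]]
target = [[0, 0, 0, 0], [0, 0, 9, 9], [0, 0, 9, 9], [0, 0, 0, 0]]
print(find_motion_vector(target, search, 2, 1))  # (-1, 0)
```

Real encoders use 16x16 macroblocks and much larger search windows, with faster-than-exhaustive search strategies, but the cost-minimization structure is the same.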
`After motion vectors have been generated, the encoder
`then encodes the changes using spatial redundancy. Thus,
`after finding the changes in location of the macroblocks, the
`MPEG algorithm further calculates and encodes the differ
`ence between corresponding macroblocks. Encoding the
difference is accomplished through a mathematical process referred
to as the discrete cosine transform or DCT. This process
divides the macroblock into four sub-blocks, seeking out
`changes in color and brightness. Human perception is more
`sensitive to brightness changes than color changes. Thus the
`MPEG algorithm devotes more effort to reducing color data
`than brightness.
`Therefore, MPEG compression is based on two types of
`redundancies in video sequences, these being spatial, which
`is the redundancy in an individual frame, and temporal,
`which is the redundancy between consecutive frames. Spa
`tial compression is achieved by considering the frequency
`characteristics of a picture frame. Each frame is divided into
`non-overlapping blocks, and each block is transformed via
`the discrete cosine transform (DCT). After the transformed
`blocks are converted to the “DCT domain”, each entry in the
`transformed block is quantized with respect to a set of
`quantization tables. The quantization step for each entry can
`vary, taking into account the sensitivity of the human visual
system (HVS) to the frequency. Since the HVS is more
`sensitive to low frequencies, most of the high frequency
`entries are quantized to zero. In this step where the entries
`are quantized, information is lost and errors are introduced
`to the reconstructed image. Run length encoding is used to
`transmit the quantized values. To further enhance
`compression, the blocks are scanned in a zig-zag ordering
`that scans the lower frequency entries first, and the non-zero
`quantized values, along with the zero run lengths, are
`entropy encoded.
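The zig-zag scan and zero run-length step described above can be made concrete. The sketch below is a hypothetical illustration (shown on a 4x4 block for brevity; MPEG uses 8x8): coordinates are ordered by anti-diagonal, alternating direction, and the scanned values are emitted as (zero_run, value) pairs:

```python
def zigzag_order(n):
    """(row, col) coordinates of an n x n block in zig-zag order, so
    low-frequency entries near the top-left corner come first."""
    coords = [(r, c) for r in range(n) for c in range(n)]
    # Sort by anti-diagonal; alternate the traversal direction per diagonal.
    return sorted(coords, key=lambda rc: (rc[0] + rc[1],
                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def run_level_pairs(block):
    """Scan a quantized block in zig-zag order and emit (zero_run, value)
    pairs for the non-zero entries; the trailing zeros are implicit."""
    pairs, run = [], 0
    for r, c in zigzag_order(len(block)):
        if block[r][c] == 0:
            run += 1
        else:
            pairs.append((run, block[r][c]))
            run = 0
    return pairs

# Typical quantized block: a few low-frequency values, the rest zeroed.
block = [[8, 3, 0, 0],
         [2, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
print(run_level_pairs(block))  # [(0, 8), (0, 3), (0, 2)]
```

Because quantization forces most high-frequency entries to zero, the zig-zag scan groups those zeros into one long final run, which is exactly what makes the run-length and entropy coding effective.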
`When an MPEG decoder receives an encoded stream, the
`MPEG decoder reverses the above operations. Thus the
MPEG decoder performs inverse scanning to remove the
zig-zag ordering, inverse quantization to de-quantize the data,
`and the inverse DCT to convert the data from the frequency
`domain back to the pixel domain. The MPEG decoder also
`performs motion compensation using the transmitted motion
`vectors to recreate the temporally compressed frames.
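The decode path is the mirror of the encode path. The sketch below is hypothetical (the helper name and parameters are illustrative, not the patent's own logic): it undoes the run-length coding and zig-zag scan, then de-quantizes; the inverse DCT and motion compensation stages are noted but stubbed out:

```python
def decode_block(pairs, quant_table, zigzag, n=8):
    """Reverse the encoder's per-block steps: expand (zero_run, value)
    pairs, undo the zig-zag scan, then de-quantize each entry."""
    # 1. Inverse run-length: rebuild the flat scan, zeros included.
    flat = []
    for run, value in pairs:
        flat.extend([0] * run)
        flat.append(value)
    flat.extend([0] * (n * n - len(flat)))   # implicit end-of-block zeros
    # 2. Inverse scan: place values back at their (row, col) positions.
    block = [[0] * n for _ in range(n)]
    for (r, c), value in zip(zigzag, flat):
        block[r][c] = value
    # 3. Inverse quantization: scale each entry by its quantizer step.
    block = [[block[r][c] * quant_table[r][c] for c in range(n)]
             for r in range(n)]
    # 4. The inverse DCT would now return the block to the pixel domain,
    #    and motion compensation would add the referenced block (omitted).
    return block

zigzag_2x2 = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(decode_block([(0, 3), (1, 1)], [[2, 2], [2, 2]], zigzag_2x2, n=2))
# [[6, 0], [2, 0]]
```

Note that step 3 cannot recover the information discarded during quantization; the reconstructed block is an approximation of the original.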
`
`4
`When frames are received which are used as references
`for other frames, such as I or P frames, these frames are
`decoded and stored in memory. When a temporally com
pressed or encoded frame is received, such as a P or B frame,
`motion compensation is performed on the frame using the
`prior decoded I or P reference frames. The temporally
`compressed or encoded frame, referred to as a target frame,
`will include motion vectors which reference blocks in prior
`decoded I or P frames stored in the memory. The MPEG
`decoder examines the motion vector, determines the respec
`tive reference block in the reference frame, and accesses the
`reference block pointed to by the motion vector from the
`memory.
`A typical MPEG decoder includes motion compensation
`logic which includes local or on-chip memory. The MPEG
`decoder also includes an external memory which stores prior
`decoded reference frames. The MPEG decoder accesses the
`reference frames or anchor frames stored in the external
`memory in order to reconstruct temporally compressed
`frames. The MPEG decoder also typically stores the frame
`being reconstructed in the external memory.
`An MPEG decoder system also typically includes trans
`port logic which operates to demultiplex received data into
`a plurality of individual multimedia streams. An MPEG
`decoder system also generally includes a system controller
`which controls operations in the system and executes pro
`grams or applets.
`Prior art MPEG video decoder systems have generally
`used a frame store memory for the MPEG decoder motion
`compensation logic which stores the reference frames or
`anchor frames as well as the frame being reconstructed.
`Prior art MPEG video decoder systems have also generally
`included a separate memory for the transport and system
`controller functions. It has generally not been possible to
`combine these memories, due to size limitations. For
example, current memory devices are fabricated on a 4
`Mbit granularity. In prior art systems, the memory require
`ments for the transport and system controller functions as
`well as the decoder motion compensation logic would
`exceed 16 Mbits of memory, thus requiring 20 or 24 Mbits
`of memory. This additional memory adds considerable cost
`to the system.
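The granularity argument above can be made concrete: if devices come in 4 Mbit steps, any requirement even slightly over 16 Mbit forces a jump to a 20 Mbit part. A small hypothetical illustration (using 1 Mbit = 2^20 bits):

```python
def device_size(required_bits, granularity=4 * 2**20):
    """Round a memory requirement up to the next 4 Mbit device boundary."""
    devices = -(-required_bits // granularity)   # ceiling division
    return devices * granularity

MBIT = 2**20
# A combined requirement just over 16 Mbit forces a 20 Mbit part:
print(device_size(17 * MBIT) // MBIT)   # 20
# Staying at or under 16 Mbit keeps the cheaper 16 Mbit part:
print(device_size(16 * MBIT) // MBIT)   # 16
```

This is why the memory-saving schemes described below target the 16 Mbit boundary specifically: crossing it by any margin costs a full extra 4 Mbit of silicon.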
`The amount of memory is a major cost item in the
`production of video decoders. Thus, it is desired to reduce
`the memory requirements of the decoder system as much as
`possible to reduce its size and cost. Since practical memory
`devices are implemented using particular convenient dis
`crete sizes, it is important to stay within a particular size if
`possible for commercial reasons. For example, it is desired
`to keep the memory requirements below a particular size of
`memory, such as 16 Mb, since otherwise a memory device
`of 20 or 24 Mb would have to be used, resulting in greater
`cost and extraneous storage area. As mentioned above, it has
`heretofore not been possible to combine the memory
`required for the transport and system controller functions
`with the memory required for the MPEG decoder logic due
`to the memory size requirements.
`Therefore, a new video decoder system and method is
`desired which efficiently uses memory and combines the
`memory subsystem for reduced memory requirements and
`hence reduced cost.
`
`SUMMARY OF THE INVENTION
`The present invention comprises an MPEG decoder sys
`tem and method for performing video decoding or decom
`pression which includes a unified memory for multiple
`
`functions according to the present invention. The video
`decoding system includes transport logic, a system
`controller, and MPEG decoder logic. The video decoding
`system of the present invention includes a single unified
`memory which stores code and data for the transport logic,
`system controller and MPEG decoder functions. The single
`unified memory is preferably a 16 Mbit memory. The
`present invention thus requires only a single memory, and
`thus has reduced memory requirements compared to prior
`art designs.
`The video decoding system includes transport logic which
`operates to demultiplex received data into a plurality of
`individual multimedia streams. The video decoding system
`also includes a system controller which controls operations
`in the system and executes programs or applets. The video
`decoding system further includes decoding logic, preferably
`MPEG decoder logic, which performs motion compensation
`between temporally compressed frames of a video sequence
`during video decoding or video decompression. The
`memory includes a plurality of memory portions, including
`a video frame portion for storing video frames, a system
`controller portion for storing code and data executable by
`the system controller, and a transport buffer for storing data
`used by the transport logic. The MPEG decoder logic
`preferably includes a memory controller which couples to
`the single unified memory. Each of the transport logic,
`system controller, and MPEG decoder logic accesses the
`single unified memory through the memory controller.
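The partitioning just described — one physical device carved into a video frame portion, a system controller portion, and a transport buffer — might be sketched as follows. All region sizes here are hypothetical placeholders, not the patent's actual partition figures (those appear in FIG. 9):

```python
# Hypothetical partitioning of a single 16 Mbit unified memory into the
# three regions named above (sizes are illustrative only).
MBIT = 2**20

partitions = {
    "video_frames":      10 * MBIT,  # reference frames + reconstruction
    "system_controller":  4 * MBIT,  # CPU code and data
    "transport_buffer":   2 * MBIT,  # demultiplexed stream buffering
}
assert sum(partitions.values()) == 16 * MBIT  # fits one 16 Mbit device

# Assign base offsets so all three clients address one shared device
# through the single memory controller:
offsets, base = {}, 0
for name, size in partitions.items():
    offsets[name] = base
    base += size
print(offsets["transport_buffer"] // MBIT)  # 14
```

The key property is that the regions share one address space behind one controller, so memory freed by the frame-store saving schemes can be reassigned to the transport or controller regions.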
`The video decoding system implements various frame
`memory saving schemes, such as compression or dynamic
`allocation, to reduce the required amount of frame store
`memory. Also, in one embodiment, the memory is not
`required to store reconstructed frame data during motion
`compensation, thus considerably reducing the required
`amount of memory for this function. Alternatively, the
`memory is only required to store a portion of the recon
`structed frame data. These savings in memory allow portions
`of the memory to also be used for transport and system
`controller functions.
`The present invention thus provides a video decoding
`system with reduced memory requirements.
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`A better understanding of the present invention can be
`obtained when the following detailed description of the
`preferred embodiment is considered in conjunction with the
`following drawings, in which:
`FIG. 1 illustrates a computer system which performs
video decoding and which includes motion compensation
`logic having a frame memory which stores reference block
`data according to the present invention;
`FIG. 2 is a block diagram illustrating the computer system
`of FIG. 1;
`FIG. 3 is a block diagram illustrating an MPEG decoder
`system including a unified memory for MPEG transport,
`system controller, and decode functions according to the
`present invention;
`FIG. 4 is a block diagram illustrating the MPEG decoder
`logic in the system of FIG. 3;
`FIG. 5 illustrates various frame memory saving schemes
`used in various embodiments of the invention;
`FIGS. 6a and 6b illustrate a table listing the memory
`partitions under different display schemes;
`FIG. 7 illustrates the relationship of memory bandwidth
`vs. memory size in the NTSC decoding scheme;
`
`6
`FIG. 8 illustrates the relationship of memory bandwidth
`vs. memory size in the PAL encoding scheme;
`FIG. 9 illustrates the memory partitions according to the
`preferred embodiment of the invention;
`FIG. 10 illustrates the estimated memory bandwidth dis
`tribution in the preferred embodiment of the invention;
`FIG. 11 illustrates the “worst case” relationship of pro
`cessing power vs. memory size in the NTSC decoding
`scheme;
`FIG. 12 illustrates the clock domains in the system;
`FIG. 13 illustrates clock operating frequencies according
`to the preferred embodiment of the invention;
`FIG. 14 illustrates an example of the packet data interface
`between the transport controller and the source decoder; and
`FIG. 15 illustrates packet header formats used in the
`preferred embodiment.
`DETAILED DESCRIPTION OF THE
`PREFERRED EMBODIMENT
`Video Compression System
`Referring now to FIG. 1, a system for performing video
`decoding or decompression and including a unified memory
`according to the present invention is shown. The video
`decoding system of the present invention includes a single
`unified memory which stores code and data for the transport,
`system controller and MPEG decoder functions. This sim
`plifies the design and reduces the memory requirements in
`the system.
`As shown, in one embodiment the video decoding or
`decompression system is comprised in a general purpose
`computer system 60. The video decoding system may com
`prise any of various types of systems, including a computer
`system, set-top box, television, or other device.
`The computer system 60 is preferably coupled to a media
`storage unit 62 which stores digital video files which are to
`be decompressed or decoded by the computer system 60.
`The media storage unit 62 may also store the resultant
`decoded or decompressed video file. In the preferred
`embodiment, the computer system 60 receives a compressed
video file or bitstream and generates a normal uncompressed
`digital video file. In the present disclosure, the term “com
`pressed video file” refers to a video file which has been
`compressed according to any of various video compression
`algorithms which use motion estimation techniques, includ
`ing the MPEG standard, among others, and the term
`“uncompressed digital video file” refers to a stream of
`decoded or uncompressed video.
`As shown, the computer system 60 preferably includes a
`video decoder 74 which performs video decoding or decom
`pression operations. The video decoder 74 is preferably an
`MPEG decoder. The computer system 60 optionally may
`also include an MPEG encoder 76. The MPEG decoder 74
`and MPEG encoder 76 are preferably adapter cards coupled
`to a bus in the computer system, but are shown external to
`the computer system 60 for illustrative purposes. The com
`puter system 60 also includes software, represented by
`floppy disks 72, which may perform portions of the video
`decompression or decoding operation and/or may perform
`other operations, as desired.
`The computer system 60 preferably includes various
`standard components, including one or more processors, one
`or more buses, a hard drive and memory. Referring now to
`FIG. 2, a block diagram illustrating the components com
`prised in the computer system of FIG. 1 is shown. It is noted
`that FIG. 2 is illustrative only, and other computer architec
`tures may be used, as desired. As shown, the computer
`
`system includes at least one processor 80 coupled through
`chipset logic 82 to a system memory 84. The chipset 82
`preferably includes a PCI (Peripheral Component
`Interconnect) bridge for interfacing to PCI bus 86, or another
`type of bus bridge for interfacing to another type of expan
`sion bus. In FIG. 2, MPEG decoder 74 and MPEG encoder
`76 are shown connected to PCI bus 86. Various other
`components may be comprised in the computer system, such
`as video 88 and hard drive 90.
`As mentioned above, in the preferred embodiment of FIG.
`1 the computer system 60 includes or is coupled to one or
`more digital storage or media storage devices. For example,
`in the embodiment of FIG. 1, the computer system 60
`couples to media storage unit 62 through cable 64. The
`media storage unit 62 preferably comprises a RAID
`(Redundant Array of Inexpensive Disks) disk array, or
`includes one or more CD-ROM drives and/or one or more
`Digital Video Disk (DVD) storage units, or other media, for
`storing digital video to be decompressed and/or for storing
`the resultant decoded video data. The computer system may
`also include one or more internal RAID arrays, CD-ROM
`drives and/or may couple to one or more separate Digital
`Video Disk (DVD) storage units. The computer system 60
`also may connect to other types of digital or analog storage
`devices or media, as desired.
`Alternatively, the compressed digital video file may be
`received from an external source, such as a remote storage
`device or remote computer system. In this embodiment, the
`computer system preferably includes an input device, such
`as an ATM (Asynchronous Transfer Mode) adapter card or
`an ISDN (Integrated Services Digital Network) terminal
`adapter, or other digital data receiver, for receiving the
`digital video file. The digital video file may also be stored or
`received in analog format and converted to digital data,
`either externally to the computer system 60 or within the
`computer system 60.
`As mentioned above, the MPEG decoder 74 in the com
`puter system 60 performs video decoding or video decom
`pression functions. As discussed further below, the video
`decoding system includes transport logic which operates to
`demultiplex received data into a plurality of individual
`multimedia streams. The video decoding system also
`includes a system controller which controls operations in the
`system and executes programs or applets comprised in the
`stream. The video decoding system further includes decod
`ing logic, preferably MPEG decoder logic, which performs
`motion compensation between temporally compressed
`frames of a video sequence during video decoding or video
`decompression. The video decoding system of the present
`invention includes a single unified memory which stores
`code and data for the transport, system controller and MPEG
`decoder functions. This simplifies the design and reduces the
`memory requirements in the system. The MPEG decoder 74
`thus performs functions with improved efficiency and
`reduced memory requirements according to the present
`invention.
`It is noted that the system for decoding or decompressing
`video data may comprise two or more interconnected
`computers, as desired. The system for decoding or decom
`pressing video data may also comprise other hardware, such
`as a set top box, either alone or used in conjunction with a
`general purpose programmable computer. It is noted that any
`of various types of systems may be used for decoding or
`decompressing video data according to the present
`invention, as desired.
`FIG. 3—MPEG Decoder Block Diagram
`Referring now to FIG. 3, a block diagram illustrating an
`MPEG decoder system architecture according to one
`
`embodiment of the present invention is shown. As shown,
`the MPEG decoder system includes a channel receiver 202
`for receiving a coded stream. As mentioned above, in the
`preferred embodiment, the coded stream is an MPEG
`encoded stream. The MPEG encoded stream may include
`interactive program content comprised within this stream, as
`desired. The channel receiver 202 receives the coded stream
`and provides the coded stream to a transport and system
`controller block 204.
`The transport and system controller block 204 includes
`transport logic 206 which operates to demultiplex the
`received MPEG encoded stream into a plurality of multi
`media data streams. In other words, the encoded stream
`preferably includes a plurality of multiplexed encoded chan
`nels or multimedia data streams which are combined into a
`single stream, such as a broadcast signal provided from a
`broadcast network. The transport logic 206 in the transport
`and system controller block 204 operates to demultiplex this
`multiplexed stream into one or more programs, wherein
`each of the programs comprise individual multimedia data
`streams including video and/or audio components.
`It is noted that the MPEG stream may comprise one of
`two types of streams including either a transport stream or
a program stream. A transport stream comprises a stream of
188 byte packets which includes error correction and which
is designed for an error prone environment. A program stream,
on the other hand, is designed for an error free environment
and thus does not include error correction capabilities.
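Each 188 byte transport packet begins with a 0x47 sync byte and carries a 13-bit packet identifier (PID) in its header; the demultiplexer routes packets to the individual multimedia streams by PID. The sketch below is a hypothetical, heavily simplified illustration of that routing (it ignores adaptation fields, continuity counters, and error handling from the real MPEG-2 systems layer):

```python
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def demux(data):
    """Split a byte stream into 188-byte transport packets and group
    the payloads by PID (13 bits spanning header bytes 1 and 2)."""
    streams = {}
    for off in range(0, len(data) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        packet = data[off:off + TS_PACKET_SIZE]
        if packet[0] != SYNC_BYTE:
            continue                  # lost sync; a real demuxer resyncs
        pid = ((packet[1] & 0x1F) << 8) | packet[2]
        streams.setdefault(pid, []).append(packet[4:])  # skip 4-byte header
    return streams

# Build two packets on PID 0x100 and one on PID 0x101 (test helper):
def make_packet(pid, fill):
    header = bytes([SYNC_BYTE, (pid >> 8) & 0x1F, pid & 0xFF, 0x10])
    return header + bytes([fill] * (TS_PACKET_SIZE - 4))

stream = make_packet(0x100, 1) + make_packet(0x101, 2) + make_packet(0x100, 3)
print(sorted((pid, len(chunks)) for pid, chunks in demux(stream).items()))
# [(256, 2), (257, 1)]
```

This per-PID grouping is the demultiplexing function the transport logic 206 performs before handing the video stream on to the MPEG decoder.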
`The transport and system controller block 204 also
`includes a system controller 208 which monitors the MPEG
`system and is programmable to display audio/graphics on
`the screen and/or execute interactive applets or programs
`which are embedded in the MPEG stream. The system
`controller 208 also preferably controls operations in the
`MPEG decoder system. In the preferred embodiment, the
`system controller 208 comprises a MIPS RISC CPU which
`is programmed to perform system controller functions.
`The transport and system controller block 204 couples
`through a memory controller 211 in MPEG decoder 224 to
`an external memory 212, also referred to as the single
`unified memory 212. The transport logic 206 and system
`controller logic 208 comprised in the transport and system
`controller block 204 utilize the external memory 212 to store
`and/or receive code and data. In the preferred embodiment,
the external memory 212 is a 16 Mbit synchronous dynamic
`random access memory (SDRAM).
`As shown, the transport and system c