(19) United States
(12) Patent Application Publication      (10) Pub. No.: US 2003/0028673 A1
     Lin et al.                          (43) Pub. Date: Feb. 6, 2003

(54) SYSTEM AND METHOD FOR COMPRESSING AND DECOMPRESSING BROWSER CACHE IN
     PORTABLE, HANDHELD AND WIRELESS COMMUNICATION DEVICES

(75) Inventors: Rui Lin, San Diego, CA (US); Gary Wang, San Diego, CA (US);
     Harvey Zien, San Diego, CA (US)

     Correspondence Address:
     Schwegman, Lundberg, Woessner & Kluth, P.A.
     P.O. Box 2938
     Minneapolis, MN 55402 (US)

(73) Assignee: Intel Corporation

(21) Appl. No.: 09/920,223

(22) Filed: Aug. 1, 2001

               Publication Classification

(51) Int. Cl.7 ............................................ G06F 15/16
(52) U.S. Cl. ............................................. 709/247; 709/218

(57)                          ABSTRACT

Improved browser caching performance in portable, handheld and wireless communication devices has been achieved by ASIC compression and decompression engines that compress cache content, resulting in a more efficient use of the limited memory of such devices. Based on a data type, the compression engine selects an appropriate compression accelerator which invokes a corresponding compression algorithm to compress the cache data. The process is reversed when the cache is requested by the browser.
[Sheet 1 of 3 (Feb. 6, 2003, US 2003/0028673 A1): FIG. 1 — block diagram of a host processor coupled by a system bus to a compression engine, a decompression engine, and memory (cache memory, main memory, other memory). FIG. 2 — compression engine comprising a controller, an input buffer, an output buffer and a plurality of compression accelerators.]

[Sheet 2 of 3 (Feb. 6, 2003, US 2003/0028673 A1): FIG. 3 — decompression engine comprising a controller, an input buffer, an output buffer and a plurality of decompression accelerators. FIG. 4 — cache management architecture showing the file system, the compression and decompression drivers, cache memory, and the compression and decompression engines.]

[Sheet 3 of 3 (Feb. 6, 2003, US 2003/0028673 A1): FIG. 5 — flow chart of a cache compression and storage procedure: receive cache request; move data to input buffer; identify data type; identify compression algorithm; invoke compression accelerator; move compressed data to output buffer; notify host processor; move to cache memory. FIG. 6 — flow chart of a cache retrieval and decompression procedure: identify compressed cache; transfer to input buffer; determine decompression algorithm; invoke decompression algorithm; move decompressed data to output buffer; notify host processor.]

SYSTEM AND METHOD FOR COMPRESSING AND DECOMPRESSING BROWSER CACHE IN PORTABLE, HANDHELD AND WIRELESS COMMUNICATION DEVICES

FIELD OF THE INVENTION

[0001] This invention relates in general to the field of wireless communication devices, in particular to wireless communication devices that provide internet type network access, and more particularly to wireless, handheld and portable communication devices that have a need to cache web pages.

BACKGROUND OF THE INVENTION

[0002] Caching is a process that web browsers typically use that provides for faster retrieval of web page content. When a user accesses a web page, a cache engine locally stores the page's content including graphics and HTML text. Later, when the same web page is accessed, the content for that web page is pulled from a memory. This process improves download time and reduces network bandwidth usage. Web requests are redirected to a cache engine to retrieve the content from cache memory rather than from over the network.
[0003] Caching places information closer to the user's device in order to make the information more readily and speedily accessible, and does this transparently. At the same time, the use of cached content places less strain on the limited input and output (I/O) elements of the user device's resources and the network's resources. Although caching provides significant benefits in wired computing environments, portable devices and systems that operate in a wireless environment would benefit even more from caching, especially because of the additional time required to retrieve content from the network source due to, for example, reduced bandwidth and lower reliability of wireless links.
[0004] One problem with caching web page content is that caching requires a significant amount of memory resources. Memory resources are usually not a major concern for most proxy servers, computer systems and personal computers because memory is fairly inexpensive and the additional space required for additional memory is available. Memory space, however, is a major concern for portable, handheld and wireless communication devices. Portable devices, and especially handheld and wireless devices, typically have significantly less memory available because of size, weight and power constraints, making it difficult to support the high memory requirements of browser caching. As a result, the browser performance of portable, handheld and wireless communication devices is often poor.
[0005] Thus what is needed is a method and system for providing improved browser performance in portable, handheld and wireless communication devices. What is also needed is a method and system that efficiently uses the limited memory of portable, handheld and wireless communication devices. What is also needed is a method and system that provides improved browser caching for portable, handheld and wireless communication devices.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] The invention is pointed out with particularity in the appended claims. However, a more complete understanding of the present invention may be derived by referring to the detailed description and claims when considered in connection with the figures, wherein like reference numbers refer to similar items throughout the figures, and:

[0007] FIG. 1 is a simplified functional block diagram of a portion of a communication device in accordance with one embodiment of the present invention;

[0008] FIG. 2 is a simplified functional block diagram of a compression engine in accordance with one embodiment of the present invention;

[0009] FIG. 3 is a simplified functional block diagram of a decompression engine in accordance with one embodiment of the present invention;

[0010] FIG. 4 is a simplified functional block diagram of a cache management architecture in accordance with one embodiment of the present invention;

[0011] FIG. 5 is a simplified flow chart of a cache compression and storage procedure in accordance with one embodiment of the present invention; and

[0012] FIG. 6 is a simplified flow chart of a cache retrieval and decompression procedure in accordance with one embodiment of the present invention.
[0013] The description set out herein illustrates several embodiments of the invention in one form thereof, and such description is not intended to be construed as limiting in any manner.

DETAILED DESCRIPTION OF THE DRAWINGS

[0014] The present invention provides, among other things, a method and system that improves browser caching, and is especially suitable for portable, handheld and wireless communication devices. The available memory is more efficiently utilized and browser performance is significantly improved. In accordance with one of the embodiments, a compression engine and a decompression engine are employed to compress and decompress cache content. The cache is stored in a compressed form in a cache memory of the hand-held device. The compression engine invokes one of several compression accelerators based on the type of data to be compressed. Each compression accelerator implements a particular compression algorithm in hardware. Accordingly, compressing cache content is done rapidly and transparently to the user, while the amount of cached content for use by a browser is significantly increased.
[0015] FIG. 1 is a simplified functional block diagram of a portion of a communication device in accordance with one embodiment of the present invention. Portion 100 may be a portion of any communication device that provides for digital communications. Although the present invention is equally applicable to any communication device, the advantages of the present invention are most applicable to portable, handheld and wireless communication devices. By way of example, portable, handheld and wireless communication devices include wireless and cellular telephones, smart phones, personal digital assistants (PDAs), web tablets, and any device that provides access to a network such as an intranet or the internet.
[0016] Portion 100 comprises host processor 102 that controls operations of the communication device. Host processor 102 runs, among other things, firmware as well as browser-type software. Not shown in portion 100 are other system components that are typical in wireless, handheld and portable devices. Other system components may include, for example, transceivers for communicating information over wireless links, a power source such as a battery, I/O elements, a display and user interface, as well as other elements common in communication devices. System bus 104 couples host processor 102 with memory 106, compression engine 114 and decompression engine 116. Memory 106 includes cache memory 110, a main memory 108 and other memory 112.
[0017] Processor 102 may be any processor or microprocessor including the XScale and ARM 7 processors, or processors based on the Micro-Signal Architecture (MSA) by Intel. Bus 104 may be a PCI-type bus or other bus suitable for transferring information in accordance with the present invention, including PX-type buses. Memory 106 may be virtually any type of memory and is desirably comprised of flash-type memory; however, SRAM, Electronically Erasable Programmable Read Only Memory (EEPROM) and others are equally suitable. It should be noted that FIGS. 1-3 illustrate functional block diagrams rather than physical diagrams. Accordingly, any of the functional elements may be implemented as single or combined hardware elements. Compression engine 114 and decompression engine 116 are desirably implemented on a single Application Specific Integrated Circuit (ASIC), although other implementations are equally suitable. For example, compression engine 114 and decompression engine 116 may be implemented as separate ASIC devices.
[0018] In accordance with one of the embodiments of the present invention, when the communication device is, for example, operating a web browser, the web-browser software will typically initiate a request to cache the content of a web page. In response, compression engine 114 receives web page content over bus 104 and compresses the web page content. The compressed web page content is transferred back over bus 104 and stored in cache memory 110. When the web-browser software requests retrieval of the cache for a web page, the compressed web page content is identified in cache memory 110 and transferred to decompression engine 116 where it is decompressed and provided back to the web browser software. As used herein, the compression and decompression of data refers equally to the compression and decompression of any digital or digitized information including digital data, text, interpreted data, graphics, images, music, speech, etc.
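
By way of illustration only, the round trip just described can be pictured in a minimal host-side C sketch; the helper names (engine_run, cache_store_page, cache_fetch_page) and the use of memcpy() as a stand-in for the hardware engines are assumptions for illustration and are not elements of this disclosure.

    /* Illustrative host-side view of the caching round trip: compress on the
     * way into cache memory, decompress on the way back to the browser. */
    #include <stdint.h>
    #include <string.h>

    /* Stand-in for submitting a buffer to a compression or decompression
     * engine and reading the result back from its output buffer. */
    static size_t engine_run(const uint8_t *in, size_t in_len,
                             uint8_t *out, size_t out_cap)
    {
        size_t n = in_len < out_cap ? in_len : out_cap;
        memcpy(out, in, n);          /* a real engine would transform the data */
        return n;
    }

    /* Browser asked to cache a page: compress it, then keep it in cache RAM. */
    static size_t cache_store_page(const uint8_t *page, size_t len,
                                   uint8_t *cache_slot, size_t slot_cap)
    {
        return engine_run(page, len, cache_slot, slot_cap);
    }

    /* Browser asked for the page back: decompress the stored bytes. */
    static size_t cache_fetch_page(const uint8_t *stored, size_t len,
                                   uint8_t *page_out, size_t page_cap)
    {
        return engine_run(stored, len, page_out, page_cap);
    }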
[0019] One goal of data compression is to reduce the number of bits required to represent data. Typically, data compression methods which provide the highest levels of data compression generally require the most complex data processing equipment and are often slow in execution. Those methods which offer lower levels of data compression often operate faster and employ less complex hardware. In general, the choice of a data compression method is made based upon a compromise between system complexity and time of execution versus desired level of data compression.
[0020] In accordance with one embodiment of the present invention, compression engine 114 employs a plurality of hardware-implemented compression accelerators, each dedicated to a particular type of data compression, while decompression engine 116 employs a plurality of hardware-implemented decompression accelerators, each dedicated to a particular type of data decompression. This provides for fast and efficient data compression and decompression because a hardware implementation is faster than a typical software implementation, different algorithms are more efficient and better suited for certain types of data, and the accelerators may operate on different portions of the data in parallel.
[0021] There are several known data compression techniques. For example, a two-dimensional coding scheme is discussed in the review paper entitled "Coding of Two-Tone Images", Hwang, IEEE Transactions on Communications, Vol. COM-25, No. 11, November 1977, pp. 1406-1424, which describes a number of techniques for efficient coding of both alphanumeric data and image data. Both single-dimension (run length) and two-dimension coding (e.g., per block of pixel data) are considered. Another two-dimensional coding scheme is described by Hunter et al. in "International Digital Facsimile Coding Standards", Proceedings of the IEEE, Vol. 68, No. 7, July 1980, pp. 854-867, which describes various algorithms used in facsimile transmission (generally one-dimension coding techniques). In these two-dimension coding schemes, conditions of a subsequent coding line are encoded in dependence upon conditions in a previous reference line.
[0022] Another compression technique has been described in a paper entitled "An Extremely Fast Ziv-Lempel Data Compression Algorithm" by Williams, Proceedings of the IEEE Data Compression Conference, April 1991, pp. 362-371, which describes a fast implementation of the Lempel-Ziv (LZ) compression algorithm that employs the LZ method. That method constructs a dictionary of data strings at both the receiving and transmitting nodes and transmits codes in dependence upon matches found between an input data string and a data string found in the dictionary.
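
For orientation only, the dictionary idea behind these LZ-type methods can be shown with a greatly simplified software sketch in the spirit of LZ77; the tiny window size and the (offset, length, next byte) framing are illustrative choices and do not reflect the hardware accelerators or the published variants referenced above.

    /* Toy LZ77-style encoder: emits (offset, length, next byte) triples by
     * searching a small sliding window for the longest match. */
    #include <stdio.h>
    #include <string.h>

    #define WINDOW 32                      /* tiny search window for the demo */

    static void lz77_demo(const unsigned char *in, size_t n)
    {
        size_t pos = 0;
        while (pos < n) {
            size_t best_len = 0, best_off = 0;
            size_t start = pos > WINDOW ? pos - WINDOW : 0;
            for (size_t cand = start; cand < pos; cand++) {
                size_t len = 0;
                while (pos + len < n && len < 255 &&
                       in[cand + len] == in[pos + len])
                    len++;
                if (len > best_len) { best_len = len; best_off = pos - cand; }
            }
            unsigned char next = (pos + best_len < n) ? in[pos + best_len] : 0;
            printf("(off=%zu, len=%zu, '%c')\n", best_off, best_len,
                   next ? next : '.');
            pos += best_len + 1;
        }
    }

    int main(void)
    {
        const char *s = "abcabcabcabd";
        lz77_demo((const unsigned char *)s, strlen(s));
        return 0;
    }

Run on the sample string, the sketch emits three literal triples for "abc" and then a single long back-reference, which is the dictionary effect the serial LZ methods exploit.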
[0023] FIG. 2 is a simplified functional block diagram of a compression engine in accordance with one embodiment of the present invention. Compression engine 200 comprises a plurality of compression accelerators 210, 212, 214 and 216, each for implementing a predetermined compression algorithm. Desirably, compression accelerators 210, 212, 214 and 216 implement compression algorithms in custom-designed hardware. Compression engine 200 also comprises input buffer 204 and output buffer 206, which couple with compression accelerators 210, 212, 214 and 216 through bus 208. Input buffer 204 and output buffer 206 are coupled to an external bus, such as bus 104 (FIG. 1). Data for compression by compression engine 200 is buffered in input buffer 204 by a host processor such as host processor 102 (FIG. 1), while data that has been compressed is buffered in output buffer 206 for transfer to a storage location such as cache memory 110 (FIG. 1). Compression engine 200 is suitable for use as compression engine 114 (FIG. 1).
[0024] In accordance with one of the embodiments of the present invention, web page content to be cached is transferred to input buffer 204, and one of compression accelerators 210, 212, 214 and 216 is selected and invoked depending on the data type to be compressed. Different compression accelerators may be invoked for different data types present in a web page.
[0025] In accordance with one embodiment, a host processor or other processing element external to compression engine 200 determines which of the compression accelerators to invoke based on the data type. In accordance with one embodiment, the compression engine includes controller 202, which includes a data analyzer element that identifies the data types present in the web page content. Controller 202 also includes a selector element that selects an appropriate one of the compression accelerators 210, 212, 214 and 216, invokes the selected compression accelerator, and desirably notifies a host processor when the compressed data is ready in output buffer 206. In this embodiment, controller 202 receives instructions from and communicates with an external host processor over a bus such as bus 104 (FIG. 1).
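
One way to picture the controller's analyze-select-invoke-notify role is the following C sketch; the enum values, the accel structure and the notify callback are invented for illustration and are not register-level details of this disclosure.

    /* Illustrative controller step: given the analyzed data type, pick the
     * matching accelerator, run it on the input buffer, and signal the host
     * when the output buffer holds the result. */
    #include <stdint.h>
    #include <stddef.h>

    enum data_type { DT_TEXT, DT_GIF, DT_JPEG, DT_UNKNOWN };

    struct accel {                     /* one hardware compression accelerator */
        enum data_type handles;
        size_t (*run)(const uint8_t *in, size_t n, uint8_t *out, size_t cap);
    };

    typedef void (*host_notify_fn)(size_t compressed_len);

    static size_t controller_compress(const struct accel *accels, size_t n_accels,
                                      enum data_type type,        /* analyzer  */
                                      const uint8_t *in_buf, size_t in_len,
                                      uint8_t *out_buf, size_t out_cap,
                                      host_notify_fn notify)
    {
        for (size_t i = 0; i < n_accels; i++) {       /* selector element      */
            if (accels[i].handles == type) {
                size_t len = accels[i].run(in_buf, in_len, out_buf, out_cap);
                notify(len);                          /* compressed data ready */
                return len;
            }
        }
        return 0;                                     /* no accelerator matched */
    }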
[0026] FIG. 3 is a simplified functional block diagram of a decompression engine in accordance with one embodiment of the present invention. Decompression engine 300 comprises a plurality of decompression accelerators 310, 312, 314 and 316, each for implementing a predetermined decompression algorithm and each desirably corresponding with one of the compression accelerators of compression engine 200 (FIG. 2). Desirably, decompression accelerators 310, 312, 314 and 316 implement decompression algorithms in custom-designed hardware. Decompression engine 300 also comprises input buffer 304 and output buffer 306, which couple with decompression accelerators 310, 312, 314 and 316 through bus 308. Input buffer 304 and output buffer 306 are coupled to an external bus, such as bus 104 (FIG. 1). Data for decompression by decompression engine 300 is buffered in input buffer 304 by a host processor, such as host processor 102 (FIG. 1), while data that has been decompressed is buffered in output buffer 306 for transfer to the browser software. Decompression engine 300 is suitable for use as decompression engine 116 (FIG. 1).
[0027] In accordance with one of the embodiments of the present invention, web page content to be retrieved from cache memory is transferred to input buffer 304 by a host processor. One of decompression accelerators 310, 312, 314 and 316 is selected and invoked depending on the data type to be decompressed. Different decompression accelerators are desirably invoked for different data types present in the compressed web page.
[0028] In accordance with one embodiment, a host processor or other processing element external to decompression engine 300 determines which of the decompression accelerators to invoke based on the data type. In accordance with one embodiment, the decompression engine includes controller 302, which includes a data analyzer element that identifies the data types present in the compressed web page content. Controller 302 also includes a selector element that selects an appropriate one of the decompression accelerators 310, 312, 314 and 316, invokes the selected decompression accelerator, and desirably notifies a host processor when the decompressed data is ready in output buffer 306. In this embodiment, controller 302 receives instructions from and communicates with an external host processor over a bus, such as bus 104 (FIG. 1).
[0029] In the various embodiments of the present invention, compression engine 200 and decompression engine 300 comprise one or more hardware accelerators (i.e., respectively compression accelerators 210, 212, 214 and 216, and decompression accelerators 310, 312, 314 and 316) that perform a serial dictionary-based algorithm, such as the LZ77 or LZ-Stac (LZS) dictionary-based compression and decompression algorithms. The accelerators may also implement the LZ78, LZ-Welsh (LZW), LZS, LZ Ross Williams (LZRW1) and/or other algorithms. Desirably, each compression accelerator (and an associated decompression accelerator) is designed to implement one algorithm in hardware. Accordingly, compression engine 200 (and decompression engine 300) may have many compression (and decompression) accelerators depending on the number of compression (or decompression) algorithms that are implemented.
[0030] In accordance with one embodiment, the content of the web page comprises a plurality of data types. Each data type is identifiable, desirably with a data type tag associated therewith. The controller reads the tag and selects one of the compression accelerators for each data type. In this embodiment, the first of the compression accelerators 210 is configured to hardware-implement a first compression algorithm for a first of the data types, and a second of the compression accelerators 212 is configured to hardware-implement a second compression algorithm for a second of the data types. The first and second data types are distinct, and the first and second compression algorithms are distinct.
[0031] Accelerators based on one of the LZ-type algorithms, for example the LZ77 algorithm, are invoked to compress, for example, text data, HTML data, XML data, XHTML data, interpreted data, and portable network graphics (PNG) data. Accelerators based on the LZW algorithm, for example, may be implemented for data types such as graphic interface format (GIF) data, while accelerators based on the LZH algorithm may be implemented for LZH data, joint photographic experts group (JPEG) data, and moving pictures experts group (MPEG) data including MPEG Layer-3 (MP3) data and MPEG Layer-4 (MP4) data. Accelerators based on the LZ77 algorithm, for example, may also be implemented for data types such as JAR and ZIP data, as well as for data types such as SQZ data, UC2 data, ZOO data, ARC data, ARJ data and PAK data. Accelerators may also implement any of the other LZ algorithms for compressing the various data types.
[0032] In one embodiment, the first compression accelerator 210 implements the LZ77 compression algorithm for a first group of data types that include PNG data, and the second compression accelerator 212 implements an LZW compression algorithm for a second group of data types that include GIF data. A third compression accelerator 214 may be included to hardware-implement a third compression algorithm for compressing a third group of data types including JPEG or MPEG data. In one embodiment, another compression accelerator may be configured to hardware-implement the LZ77 compression algorithm for a fourth group of data types. In these various embodiments of the present invention, decompression engine 300 (FIG. 3) includes decompression accelerators that correspond with each of the compression accelerators for decompressing data compressed by the corresponding compression accelerator.
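
Read as a lookup, the grouping of data types to algorithms described above can be sketched as a small table in C; the literal tag strings, the enum names and the fallback behavior are assumptions for illustration, and the disclosure associates groups with accelerators rather than with these strings.

    /* Hypothetical data-type-tag to algorithm mapping reflecting the groupings
     * described above (LZ77 for text/markup/PNG/archives, LZW for GIF, LZH for
     * JPEG/MPEG-family data). */
    #include <string.h>

    enum algo { ALGO_LZ77, ALGO_LZW, ALGO_LZH };

    struct type_map { const char *tag; enum algo algo; };

    static const struct type_map k_map[] = {
        { "text", ALGO_LZ77 }, { "html", ALGO_LZ77 }, { "xml",  ALGO_LZ77 },
        { "png",  ALGO_LZ77 }, { "zip",  ALGO_LZ77 }, { "jar",  ALGO_LZ77 },
        { "gif",  ALGO_LZW  },
        { "jpeg", ALGO_LZH  }, { "mpeg", ALGO_LZH  }, { "mp3",  ALGO_LZH  },
    };

    static enum algo algo_for_tag(const char *tag, enum algo fallback)
    {
        for (size_t i = 0; i < sizeof k_map / sizeof k_map[0]; i++)
            if (strcmp(k_map[i].tag, tag) == 0)
                return k_map[i].algo;
        return fallback;           /* e.g. a general-purpose LZ77 accelerator */
    }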
[0033] In situations where portions of web content are received in a compressed form, the controller refrains from invoking one of the compression accelerators. In this case, controller 202 recognizes the data as compressed or as a compressed object and causes these pre-compressed objects to be transferred to the cache memory without processing by any of the hardware accelerators. Controller 302 desirably recognizes pre-compressed objects that are stored in cache memory and refrains from invoking one of the decompression accelerators for these objects. In one alternate embodiment, controller 302 may invoke the appropriate decompression accelerator (based on the data type or tag) for such pre-compressed objects that are stored in cache memory as well as for pre-compressed objects received directly from an external source such as a web site.
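
A sketch of that bypass decision in C could look as follows; the recognition helper here is a toy check for a gzip magic number rather than the data type or tag of the disclosure, and all names are invented for illustration.

    /* Illustrative bypass: objects that arrive already compressed are written
     * to cache memory untouched; everything else goes through an accelerator. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    typedef size_t (*compress_fn)(const uint8_t *in, size_t n,
                                  uint8_t *out, size_t cap);

    /* Toy stand-in for recognizing a compressed object by sniffing bytes. */
    static bool object_is_precompressed(const uint8_t *obj, size_t n)
    {
        return n >= 2 && obj[0] == 0x1f && obj[1] == 0x8b;     /* gzip magic */
    }

    static size_t store_object(const uint8_t *obj, size_t n,
                               uint8_t *cache_slot, size_t cap,
                               compress_fn accel)
    {
        if (object_is_precompressed(obj, n)) {
            /* Refrain from invoking an accelerator; copy straight to cache. */
            size_t len = n < cap ? n : cap;
            memcpy(cache_slot, obj, len);
            return len;
        }
        return accel(obj, n, cache_slot, cap);                  /* normal path */
    }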
[0034] In one embodiment, the accelerators implement a parallel lossless compression algorithm, and desirably a "parallel" dictionary-based compression and decompression algorithm. The parallel algorithm may be based on a serial dictionary-based algorithm, such as the LZ77 or LZSS algorithms. The parallel algorithm may also be based on a variation of conventional serial LZ compression, including LZ77, LZ78, LZ-Welsh (LZW), LZ-Stac (LZS) and/or LZRW1, among others. The parallel algorithm could also be based on Run Length Encoding, Predictive Encoding, Huffman, Arithmetic, or any other compression algorithm or lossless compression algorithm. However, the paralleling of these is less preferred due to their lower compression capabilities and/or higher hardware costs.
[0035] Any of various compression methods may be implemented by the present invention; however, parallel implementations are desirably used, although other compression methods that provide fast parallel compression and decompression for improved memory bandwidth and efficiency are also suitable for use with the present invention.
[0036] In accordance with one of the embodiments of the present invention, compression engine 114 and decompression engine 116 are configured to implement several algorithms that, in addition to those methods described above, include the LZ, LZ77, LZ78, LZH, LZS, LZW, LZRW and LZRW1 algorithms. The present invention may employ any of a number of compression schemes and is equally applicable to other known compression methods as well as to compression methods yet unknown or unpublished.
[0037] In accordance with one embodiment of the present invention, improved performance is achieved by caching interpreted data. This avoids reinterpretation when retrieving the data. Interpreted data includes, for example, data such as codes that are hidden in a web page and cause the web browser to interpret information that follows in a certain way. When a web page is downloaded over a network connection, the interpreted data (such as HTML codes) is interpreted by the browser and displayed in accordance with the interpretation. The same occurs when the web page is retrieved from cache. In accordance with this embodiment of the present invention, the web page is compressed after interpretation of the interpreted data. In this way, it is not the interpreted data itself that is compressed but rather the result of applying the interpreted data to the other data. Accordingly, re-interpretation of interpreted data is avoided when retrieving the compressed web page from cache memory.
[0038] FIG. 4 is a simplified functional block diagram of a cache management architecture in accordance with one embodiment of the present invention. Architecture 400 manages the compressing and caching of content as well as the retrieving and decompressing of cache content. File system 406 is desirably used for writing data to cache memory 410 and for reading data from cache memory 410. File system 406 is also used for updating data in a cache directory. Architecture 400 comprises browser portion 402, a virtual cache management module portion 404, file system 406, and compression and decompression drivers 408. While browser portion 402, cache management portion 404, file system 406, and compression and decompression drivers 408 are desirably implemented in software and firmware that operate on a communication device, cache memory 410 and compression and decompression engines 412 are hardware elements invoked by file system 406 and drivers 408, respectively. Cache memory 410, for example, corresponds with cache memory 110 (FIG. 1), and compression and decompression engines 412, for example, correspond respectively with compression and decompression engines 114 and 116 (FIG. 1).
[0039] When browser 402 wants to cache data, virtual cache management module 404 passes the data to compression engine 412 using compression driver 408 to invoke the compression function. Compression driver 408 is desirably an accelerator driver for invoking one of the compression accelerators, such as compression accelerators 210, 212, 214 and/or 216 (FIG. 2). Compression driver 408 may be implemented as part of controller 202 (FIG. 2) or, in an alternative embodiment, may be implemented by a host processor external to compression engine 200 (FIG. 2). The compressed data is then written to cache memory 410 using file system 406.
[0040] Similarly, when browser 402 wants to retrieve cached data, virtual cache management module 404 fetches the compressed data from cache memory 410 using file system 406, and passes the compressed data to decompression engine 412 using decompression driver 408, which invokes a decompression function. Decompression driver 408 is desirably an accelerator driver for invoking one of the decompression accelerators, such as decompression accelerators 310, 312, 314 and/or 316 (FIG. 3). Decompression driver 408 may be implemented as part of controller 302 (FIG. 3) or, in an alternative embodiment, may be implemented by a host processor external to decompression engine 300 (FIG. 3). The decompressed data is then passed to browser 402.
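
The layering of FIG. 4 can be rendered as a rough, purely illustrative C sketch; the structures and functions below (driver, filesys, vcm_cache, vcm_retrieve) are invented names standing in for the virtual cache management module, the accelerator drivers and the file system, not interfaces defined by this disclosure.

    /* Illustrative layering: the virtual cache management module calls a
     * driver to compress or decompress and the file system to read or write
     * cache memory. */
    #include <stdint.h>
    #include <stddef.h>

    struct driver  { size_t (*xform)(const uint8_t *in, size_t n,
                                     uint8_t *out, size_t cap); };
    struct filesys { int    (*write)(const char *key, const uint8_t *buf, size_t n);
                     size_t (*read)(const char *key, uint8_t *buf, size_t cap); };

    /* Browser asks to cache data: compress via driver, write via file system. */
    static int vcm_cache(const struct driver *cmp, const struct filesys *fs,
                         const char *key, const uint8_t *data, size_t n,
                         uint8_t *scratch, size_t cap)
    {
        size_t clen = cmp->xform(data, n, scratch, cap);
        return fs->write(key, scratch, clen);
    }

    /* Browser asks for cached data: read via file system, then decompress. */
    static size_t vcm_retrieve(const struct driver *dec, const struct filesys *fs,
                               const char *key, uint8_t *out, size_t out_cap,
                               uint8_t *scratch, size_t cap)
    {
        size_t clen = fs->read(key, scratch, cap);
        return dec->xform(scratch, clen, out, out_cap);
    }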
[0041] FIG. 5 is a simplified flow chart of a cache compression and storage procedure in accordance with one embodiment of the present invention. Procedure 500 is desirably performed by portions of virtual cache memory architecture 400 (FIG. 4) as well as the functional elements of system 100 (FIG. 1). In task 502, a cache request is received from the browser. For example, in accordance with web browser software, a web page or portions thereof are requested to be cached. In task 504, the data is moved to the input buffer of the compression engine. In task 506, data types are identified for each portion of data of the web page to be cached. Task 506 identifies, for example, portable network graphics (PNG) data, graphic interface format (GIF) data, joint photographic experts group (JPEG) data, moving pictures experts group (MPEG) data, JAR and/or ZIP data types. Task 506 also identifies which of the data (i.e., objects) are received by the browser in compressed form (e.g., pre-compressed).
[0042] In task 508, a compression algorithm is selected based on the data type. The compression algorithm corresponds with a particular compression accelerator that implements the identified algorithm in hardware (e.g., without intervention of software). In task 510, the compression accelerator for the identified algorithm is invoked for each of the different data types. The identified compression accelerator compresses the data. In task 512, the compressed data is moved to the output buffer and, in task 514, the host is notified that compressed data is ready. In task 516, the compressed data is moved to the cache memory.
[0043] In one embodiment, for data that was identified in task 506 as being pre-compressed, procedure 500 refrains from performing tasks 508 and 510, and the pre-compressed data is moved directly to the output buffer. In an alternate embodiment, pre-compressed data portions of a web page are transferred directly to the cache memory without the involvement of the compression engines.
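
Purely for illustration, the tasks of procedure 500 can be lined up in a short C sketch with the flow-chart task numbers as comments; the fixed buffers, the selection callback and the use of memcpy() for the bus transfers are assumptions, not details of the disclosure.

    /* Straight-line rendering of procedure 500 for one portion of a web page. */
    #include <stdint.h>
    #include <string.h>

    #define BUF 4096
    static uint8_t input_buffer[BUF], output_buffer[BUF], cache_memory[BUF];

    typedef size_t (*accel_fn)(const uint8_t *in, size_t n,
                               uint8_t *out, size_t cap);

    static size_t procedure_500(const uint8_t *page, size_t n,    /* task 502 */
                                accel_fn (*select_accel)(const uint8_t *, size_t),
                                void (*notify_host)(size_t))
    {
        if (n > BUF)
            n = BUF;
        memcpy(input_buffer, page, n);                            /* task 504 */
        accel_fn accel = select_accel(page, n);                   /* 506, 508 */
        size_t clen = accel(input_buffer, n, output_buffer, BUF); /* 510, 512 */
        notify_host(clen);                                        /* task 514 */
        memcpy(cache_memory, output_buffer, clen);                /* task 516 */
        return clen;
    }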
[0044] FIG. 6 is a simplified flow chart of a cache retrieval and decompression procedure in accordance with one embodiment of the present invention. Procedure 600 is desirably performed by portions of virtual cache memory architecture 400 (FIG. 4) as well as the functional elements of system 100 (FIG. 1). In task 602, when the browser requests retrieval of cached data for a web page, the compressed cached data is identified in the cache memory. In task 604, the identified compressed cached data is transferred to an input buffer of the decompression engine. In task 608, a decompression algorithm is selected for each data type of compressed cached data (e.g., portions of a web page may be comprised of different data types) based on a data type tag. In task 610, one of the decompression accelerators that performs the identified decompression algorithm is invoked for each of the different data types. The compressed data is then decompressed, and in task 612, decompressed data from the decompression accelerator is transferred to
