US007831890B2

(12) United States Patent
     Tzannes et al.

(10) Patent No.:     US 7,831,890 B2
(45) Date of Patent: Nov. 9, 2010
(54) RESOURCE SHARING IN A TELECOMMUNICATIONS ENVIRONMENT

(75) Inventors: Marcos C. Tzannes, Orinda, CA (US); Michael Lund, West Newton, MA (US)

(73) Assignee: Aware, Inc., Bedford, MA (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 1123 days.

(21) Appl. No.: 11/246,163

(22) Filed: Oct. 11, 2005

(65) Prior Publication Data
     US 2006/0088054 A1    Apr. 27, 2006

Related U.S. Application Data
(60) Provisional application No. 60/618,269, filed on Oct. 12, 2004.

(51) Int. Cl.
     H03M 13/00 (2006.01)

(52) U.S. Cl. ...... 714/774; 714/784; 375/222

(58) Field of Classification Search ...... 709/215; 375/222; 714/774, 784; 711/147, 153, 157, 170, 173; 379/93.01
     See application file for complete search history.

(56) References Cited

U.S. PATENT DOCUMENTS
6,337,877          1/2002   Cole et al.
6,707,822          3/2004   Fadavi-Ardekani et al. ...... 370/395.5
6,775,320 B1       8/2004   Tzannes et al.
6,778,589 B1       8/2004   Ishii
6,778,596 B1       8/2004   Tzannes
2003/0067877 A1    4/2003   Sivakumar et al.
2004/0114536 A1    6/2004   O'Rourke
2005/0180323 A1    8/2005   Beightol et al.
2009/0300450 A1   12/2009   Tzannes
FOREIGN PATENT DOCUMENTS

EP    1225735            7/2002
EP    1246409           10/2002
WO    WO 03/063060 A     7/2003
WO    WO 2006/044227     4/2006
OTHER PUBLICATIONS

International Application WO 2006/044227 A1, published on Apr. 27, 2006.
PCT/US2005/036015 International Search Report, mailed Feb. 8, 2006.
http://www.sunrisetelecom.com/technotes/APP-xDSL-8B.pdf, "Sunset xDSL: Prequalification of ADSL Circuits with ATU-C Emulation," 2001, p. 3, Sunrise Telecom Inc., Application Series, San Jose, USA, XP002363272.

(Continued)

Primary Examiner—Joon H. Hwang
Assistant Examiner—Mark Pfizenmayer
(74) Attorney, Agent, or Firm—Jason H. Vick; Sheridan Ross, P.C.
(57) ABSTRACT

A transceiver is designed to share memory and processing power amongst a plurality of transmitter and/or receiver latency paths, in a communications transceiver that carries or supports multiple applications. For example, the transmitter and/or receiver latency paths of the transceiver can share an interleaver/deinterleaver memory. This allocation can be done based on the data rate, latency, BER, impulse noise protection requirements of the application, data or information being transported over each latency path, or in general any parameter associated with the communications system.

8 Claims, 3 Drawing Sheets
[Representative drawing: FIG. 1, an exemplary transceiver 100 (reproduced on Sheet 1 of 3) with a transmitter portion containing latency paths, a receiver portion containing latency paths, a shared processing module, a shared memory, a parameter determination module, a path module, an allocation module, and a shared resource management module.]
OTHER PUBLICATIONS (Continued)

Written Opinion for International (PCT) Patent Application No. PCT/US2005/036015, mailed Feb. 8, 2006.
International Preliminary Report on Patentability for International (PCT) Patent Application No. PCT/US2005/036015, mailed Apr. 26, 2007.
Examiner's First Report for Australian Patent Application No. 2005296086, mailed Jun. 24, 2009.
Notification of the First Office Action (including translation) for Chinese Patent Application No. 200580032703, mailed Sep. 25, 2009.
Shoji, T. et al., "Wireless Access Method to Ensure Each User's QOS in Unpredictable and Various QOS Requirements," Wireless Personal Communications, Springer, Dordrecht, NL, vol. 22, no. 2, Aug. 2002, pp. 139-151.
"ITU-T Recommendation G.992.5—Series G: Transmission Systems and Media, Digital Systems and Networks," International Telecommunication Union, ADSL2, May 2003, 92 pages.
U.S. Appl. No. 12/783,758, filed May 20, 2010, Tzannes.
U.S. Appl. No. 12/760,728, filed Apr. 15, 2010, Tzannes.
U.S. Appl. No. 12/783,765, filed May 20, 2010, Tzannes.
U.S. Appl. No. 12/761,586, filed Apr. 16, 2010, Lund et al.
"ITU-T Recommendation G.992.3," International Telecommunication Union, ADSL2, Jan. 2005, 436 pages.
"VDSL2 ITU-T Recommendation G.993.2," International Telecommunication Union, Feb. 2006, 252 pages.

* cited by examiner
[Drawing Sheet 1 of 3. FIG. 1: functional block diagram of the exemplary transceiver 100, showing the transmitter portion with its latency paths, the receiver portion with its latency paths (e.g., latency path 310), the shared processing module, the shared memory, the parameter determination module, the path module, the allocation module, and the shared resource management module.]
[Drawing Sheet 2 of 3. FIG. 2: flowchart of an exemplary resource-sharing method, in which shared interleaver/deinterleaver memory and/or shared coder/decoder processing resources are allocated (steps S200-S300). FIG. 3: flowchart of an exemplary method of determining the maximum amount of shared memory that can be allocated to a specific interleaver or deinterleaver of a plurality of interleavers or deinterleavers (step S310) and exchanging the determined amount with another transceiver.]
[Drawing Sheet 3 of 3. FIG. 4: flowchart of an exemplary resource-sharing methodology: determine the number of latency paths (S410), exchange latency path information, and, for each latency path (S430), determine one or more parameters (S440), allocate shared resource(s), monitor the resource allocation (S450), coordinate the shared resource allocation with another transceiver (S470), and adjust requirements as needed (S490).]
RESOURCE SHARING IN A TELECOMMUNICATIONS ENVIRONMENT
RELATED APPLICATION DATA

This application claims the benefit of and priority under 35 U.S.C. § 119(e) to U.S. Patent Application No. 60/618,269, filed Oct. 12, 2004, entitled "Sharing Memory and Processing Resources in DSL Systems," which is incorporated herein by reference in its entirety.
BACKGROUND
1. Field of the Invention

This invention generally relates to communication systems. More specifically, an exemplary embodiment of this invention relates to memory sharing in communication systems. Another exemplary embodiment relates to processing or coding resource sharing in a communication system.

2. Description of Related Art

U.S. Pat. Nos. 6,775,320 and 6,778,589 describe DSL systems supporting multiple applications and multiple framer/coder/interleaver (FCI) blocks (an FCI block is also referred to as a latency path). DSL systems carry applications that have different transmission requirements with regard to, for example, data rate, latency (delay), bit error rate (BER), and the like. For example, video typically requires a low BER (<1E-10) but can tolerate higher latency (>20 ms). Voice, on the other hand, typically requires a low latency (<1 ms) but can tolerate a higher BER (>1E-3).

As described in U.S. Pat. No. 6,775,320, different applications can use different latency paths in order to satisfy the different application requirements of the communication system. As a result, a transceiver must support multiple latency paths in order to support applications such as video, Internet access and voice telephony. When implemented in a transceiver, each of the latency paths will have a framer, coder, and interleaver block with different capabilities that depend on the application requirements.
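As a rough, non-patent illustration of the per-application requirements that drive latency path design, the video and voice figures quoted above can be captured as simple records; the record type and field names in the sketch below are assumptions made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class AppRequirements:
    """Transmission requirements one application places on its latency path."""
    name: str
    tolerable_ber: float         # highest bit error rate the application tolerates
    tolerable_latency_ms: float  # latency the application can absorb (or more)

# Figures quoted above: video needs BER < 1E-10 but tolerates > 20 ms of latency;
# voice needs < 1 ms of latency but tolerates a BER of 1E-3 or worse.
VIDEO = AppRequirements("video", tolerable_ber=1e-10, tolerable_latency_ms=20.0)
VOICE = AppRequirements("voice telephony", tolerable_ber=1e-3, tolerable_latency_ms=1.0)
```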
SUMMARY

One difficulty with implementing multiple latency paths in a transceiver is that a latency path is a complicated digital circuit that requires a large amount of memory and processing power. An interleaver within a latency path can consume a large amount of memory in order to provide error correcting capability. For example, a typical DSL transceiver will have at least one latency path with approximately 16 kbytes of memory for the interleaver. Likewise, the coding block, for example, a Reed-Solomon coder, consumes a large amount of processing power. In general, as the number of latency paths increases, the memory and processing power requirements for a communication system become larger.

Accordingly, an exemplary aspect of this invention relates to sharing memory between one or more interleavers and/or deinterleavers in a transceiver. More particularly, an exemplary aspect of this invention relates to shared latency path memory in a transceiver.

Additional aspects of this invention relate to configuring and initializing shared memory in a communication system. More particularly, an exemplary aspect of this invention relates to configuring and initializing interleaver/deinterleaver memory in a communication system.

Additional aspects of the invention relate to determining the amount of memory that can be allocated to a particular component by a communication system. More specifically, an exemplary aspect of the invention relates to determining the maximum amount of shared memory that can be allocated to one or more interleavers or deinterleavers.

According to another exemplary aspect of the invention, processing power is shared between a number of transceiver modules. More specifically, and in accordance with an exemplary embodiment of the invention, a coding module is shared between one or more coders and/or decoders.

Another exemplary embodiment of the invention relates to transitioning from a fixed memory configuration to a shared memory configuration during one or more of initialization and SHOWTIME (user data transmission).

An additional exemplary aspect of the invention relates to dynamically updating one or more of shared memory and processing resources based on changing communication conditions.

An additional exemplary aspect of the invention relates to updating one or more of shared memory and processing resources based on an updated communication parameter.

An additional exemplary aspect of the invention relates to updating the allocation of one or more of shared memory and processing resources based on an updated communication parameter(s).

Additional aspects of the invention relate to exchanging shared resource allocations between transceivers.

Additional exemplary aspects relate to a method of allocating shared memory in a transceiver comprising allocating the shared memory to a plurality of modules, wherein each of the plurality of modules comprises at least one interleaver, at least one deinterleaver or a combination thereof.

Still further aspects relate to the above method wherein the plurality of modules comprise interleavers.

Still further aspects relate to the above method wherein the plurality of modules comprise deinterleavers.

Still further aspects relate to the above method wherein the plurality of modules comprise at least one interleaver and at least one deinterleaver.

Additional exemplary aspects relate to a transceiver comprising a plurality of modules, each including at least one interleaver, at least one deinterleaver or a combination thereof, and a shared memory designed to be allocated to a plurality of the modules.

Still further aspects relate to the above transceiver wherein the plurality of modules comprise interleavers.

Still further aspects relate to the above transceiver wherein the plurality of modules comprise deinterleavers.

Still further aspects relate to the above transceiver wherein the plurality of modules comprise at least one interleaver and at least one deinterleaver.

These and other features and advantages of this invention are described in, or are apparent from, the following description of the embodiments.

BRIEF DESCRIPTION OF THE DRAWINGS

The embodiments of the invention will be described in detail, with reference to the following figures, wherein:

FIG. 1 is a functional block diagram illustrating an exemplary transceiver according to this invention;

FIG. 2 is a flowchart outlining an exemplary method of sharing resources according to this invention;

FIG. 3 is a flowchart outlining an exemplary method of determining a maximum amount of shared memory according to this invention; and
FIG. 4 is a flowchart outlining an exemplary resource sharing methodology according to this invention.
DETAILED DESCRIPTION

The exemplary embodiments of this invention will be described in relation to sharing resources in a wired and/or wireless communications environment. However, it should be appreciated that, in general, the systems and methods of this invention will work equally well for any type of communication system in any environment.

The exemplary systems and methods of this invention will also be described in relation to multicarrier modems, such as DSL modems and VDSL modems, and associated communication hardware, software and communication channels. However, to avoid unnecessarily obscuring the present invention, the following description omits well-known structures and devices that may be shown in block diagram form or otherwise summarized.

For purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the present invention. It should be appreciated, however, that the present invention may be practiced in a variety of ways beyond the specific details set forth herein.

Furthermore, while the exemplary embodiments illustrated herein show the various components of the system collocated, it is to be appreciated that the various components of the system can be located at distant portions of a distributed network, such as a telecommunications network and/or the Internet, or within a dedicated secure, unsecured and/or encrypted system. Thus, it should be appreciated that the components of the system can be combined into one or more devices, such as a modem, or collocated on a particular node of a distributed network, such as a telecommunications network. As will be appreciated from the following description, and for reasons of computational efficiency, the components of the system can be arranged at any location within a distributed network without affecting the operation of the system. For example, the various components can be located in a Central Office modem (CO, ATU-C, VTU-O), a Customer Premises modem (CPE, ATU-R, VTU-R), a DSL management device, or some combination thereof. Similarly, one or more functional portions of the system could be distributed between a modem and an associated computing device.

Furthermore, it should be appreciated that the various links, including communications channel 5, connecting the elements can be wired or wireless links, or any combination thereof, or any other known or later developed element(s) that is capable of supplying and/or communicating data to and from the connected elements. The term module as used herein can refer to any known or later developed hardware, software, firmware, or combination thereof that is capable of performing the functionality associated with that element. The terms determine, calculate and compute, and variations thereof, as used herein are used interchangeably and include any type of methodology, process, mathematical operation or technique. The terms FCI block and latency path are used interchangeably herein, as are transmitting modem and transmitting transceiver. Receiving modem and receiving transceiver are also used interchangeably.

FIG. 1 illustrates an exemplary embodiment of a transceiver 100 that utilizes shared resources. It should be appreciated that numerous functional components of the transceiver have been omitted for clarity. However, the transceiver 100 can also include the standard components found in the typical communications device(s) in which the technology of the subject invention is implemented.

According to an exemplary embodiment of the invention, memory and processing power can be shared among a plurality of transmitter and/or receiver latency paths, in a communications transceiver that carries or supports multiple applications. For example, the transmitter and/or receiver latency paths of the transceiver can share an interleaver/deinterleaver memory, and the shared memory can be allocated to the interleaver and/or deinterleaver of each latency path. This allocation can be done based on the data rate, latency, BER, impulse noise protection requirements of the application, data or information being transported over each latency path, or in general any parameter associated with the communications system.

Likewise, for example, the transmitter and/or receiver latency paths can share a Reed-Solomon coder/decoder processing module, and the processing power of this module can be allocated to each encoder and/or decoder. This allocation can be done based on the data rate, latency, BER, impulse noise protection requirements of the application, data or information being transported over each latency path, or in general based on any parameter associated with the communication system.

In accordance with an exemplary operational embodiment, a first transceiver and a second transceiver transmit messages to one another during, for example, initialization, which contain information on the total and/or shared memory capabilities of each transceiver and optionally information about the one or more latency paths. This information can be transmitted prior to determining how to configure the latency paths to support the application requirements. Based on this information, one of the modems can select FCI configuration parameter(s) that meet the transmission requirements of each application being transported over each latency path. While an exemplary embodiment of the invention will be described in relation to the operation of the invention and characteristics thereof being established during initialization, it should be appreciated that the sharing of resources can be modified, and messages transmitted between the two transceivers, at any time during initialization and/or user data transmission, i.e., SHOWTIME.

FIG. 1 illustrates an exemplary embodiment of a transceiver 100. The transceiver 100 includes a transmitter portion 200 and a receiver portion 300. The transmitter portion 200 includes one or more latency paths 210, 220, . . . . Similarly, the receiver portion 300 includes one or more latency paths 310, 320, . . . . Each of the latency paths in the transmitter portion 200 includes a framer, coder, and interleaver designated as 212, 214, 216 and 222, 224 and 226, respectively. Each of the latency paths in the receiver portion includes a deframer, decoder, and deinterleaver designated as 312, 314, 316 and 322, 324, and 326, respectively. The transceiver 100 further includes a shared processing module 110, a shared memory 120, a parameter determination module 130, a path module 140, an allocation module 150, and a shared resource management module 160, all interconnected by one or more links (not shown).

In this exemplary embodiment, the transceiver 100 is illustrated with four total transmitter portion and receiver portion latency paths, i.e., 210, 220, 310, and 320. The shared memory 120 is shared amongst the two transmitter portion interleavers 216 and 226 and the two receiver portion deinterleavers 316 and 326. The shared processing module 110, such as a shared coding module, is shared between the two transmitter portion coders 214 and 224 and the two receiver portion decoders 314 and 324.

While the exemplary embodiment of the invention will be described in relation to a transceiver having a number of
transmitter portion latency paths and receiver portion latency paths, it should be appreciated that this invention can be applied to any transceiver having any number of latency paths. Moreover, it should be appreciated that the sharing of resources can be allocated such that one or more of the transmitter portion latency paths are sharing a shared resource, one or more of the receiver portion latency paths are sharing a shared resource, or a portion of the transmitter portion latency paths and a portion of the receiver portion latency paths are sharing shared resources. Moreover, any one or more of the latency paths, or portions thereof, could also be assigned to a fixed resource while, for example, another portion of the latency path(s) is assigned to a shared resource. For example, in latency path 210, the interleaver 216 could be allocated a portion of the shared memory 120, while the coder 214 could be allocated to a dedicated processing module, vice versa, or the like.

In accordance with the exemplary embodiment, a plurality of transmitter portion or receiver portion latency paths share an interleaver/deinterleaver memory, such as shared memory 120, and a coding module, such as shared processing module 110. For example, the interleaver/deinterleaver memory can be allocated to different interleavers and/or deinterleavers. This allocation can be based on parameters associated with the communication system, such as the data rate, latency, BER, impulse noise protection, and the like, of the applications being transported. Similarly, a coding module, which can be a portion of the shared processing module 110, can be shared between any one or more of the latency paths. This sharing can be based on requirements such as the data rate, latency, BER, impulse noise protection, and the like, of the applications being transported.

For example, an exemplary transceiver could comprise a shared interleaver/deinterleaver memory and could be designed to allocate a first portion of the shared memory 120 to an interleaver, such as interleaver 216, in the transmitter portion of the transceiver and allocate a second portion of the shared memory 120 to a deinterleaver, such as 316, in the receiver portion of the transceiver.

Alternatively, for example, an exemplary transceiver can comprise a shared interleaver/deinterleaver memory, such as shared memory 120, and be designed to allocate a first portion of shared memory 120 to a first interleaver, e.g., 216, in the transmitter portion of the transceiver and allocate a second portion of the shared memory to a second interleaver, e.g., 226, in the transmitter portion of the transceiver.

Alternatively, for example, an exemplary transceiver can comprise a shared interleaver/deinterleaver memory and be designed to allocate a first portion of the shared memory 120 to a first deinterleaver, e.g., 316, in the receiver portion of the transceiver and allocate a second portion of the shared memory to a second deinterleaver, e.g., 326, in the receiver portion of the transceiver. Regardless of the configuration, in general any interleaver or deinterleaver, or grouping thereof, be it in a transmitter portion or receiver portion of the transceiver, can be associated with a portion of the shared memory 120.

Establishment, configuration and usage of shared resources is performed in the following exemplary manner. First, and in cooperation with the path module 140, the number of transmitter and receiver latency paths (N) is determined. The parameter determination module 130 then analyzes one or more parameters such as data rate, transmitter data rate, receiver data rate, impulse noise protection, bit error rate, latency, or the like. Based on one or more of these parameters, the allocation module 150 allocates a portion of the shared memory 120 to one or more of the interleavers
and/or deinterleavers, or groupings thereof. This process continues until the memory allocation has been determined and assigned to each of the N latency paths.

Having determined the memory allocation for each of the latency paths, and in conjunction with the shared resource management module 160, the transceiver 100 transmits to a second transceiver one or more of the number of latency paths (N), the maximum interleaver memory for any one or more of the latency paths, and/or the maximum total and/or shared memory for all of the latency paths.
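A minimal sketch of the establishment procedure just described, under assumed interfaces: the class, function, and field names below do not come from the patent, and the proportional allocation policy is only one possible choice among the parameters listed above.

```python
from dataclasses import dataclass

@dataclass
class LatencyPath:
    """One FCI block (latency path) and the share of shared memory granted to it."""
    data_rate_kbps: int
    latency_ms: float
    inp_bytes: int           # impulse noise protection requirement, in bytes
    allocated_bytes: int = 0

def allocate_shared_memory(paths: list[LatencyPath], shared_memory_bytes: int) -> None:
    """Split the shared interleaver/deinterleaver memory across the N latency paths.

    The split here is simply proportional to each path's impulse-noise-protection
    need; the patent leaves the actual policy open (any parameter associated with
    the communication system may drive it).
    """
    total_need = sum(p.inp_bytes for p in paths) or 1
    for path in paths:
        path.allocated_bytes = shared_memory_bytes * path.inp_bytes // total_need

# Illustration: three paths with different protection needs sharing 20 Kbytes.
paths = [
    LatencyPath(data_rate_kbps=4000, latency_ms=20.0, inp_bytes=512),  # video-like
    LatencyPath(data_rate_kbps=1000, latency_ms=8.0, inp_bytes=128),   # internet-like
    LatencyPath(data_rate_kbps=64, latency_ms=1.0, inp_bytes=0),       # voice-like
]
allocate_shared_memory(paths, shared_memory_bytes=20 * 1024)
# Resulting allocations: 16384, 4096 and 0 bytes, roughly the 16 K / 4 K / 0 split of
# Example #1 below. These per-path values, their maximum, and the total pool size are
# the kinds of figures the transceiver can then report to the second transceiver.
```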
Three examples of sharing interleaver/deinterleaver memory and coding processing in a transceiver are described below. The latency paths in these examples can be in the transmitter portion of the transceiver or the receiver portion of the transceiver.
EXAMPLE #1
A first transmitter portion or receiver portion latency path may carry data from a video application, which needs a very low BER but can tolerate higher latency. In this case, the video will be transported using a latency path that has a large amount of interleaving/deinterleaving and coding (also known as Forward Error Correction (FEC) coding). For example, the latency path may be configured with Reed-Solomon coding using a codeword size of 255 bytes (N=255) with 16 check bytes (R=16) and interleaving/deinterleaving using an interleaver depth of 64 (D=64). This latency path will require N*D = 255*64 = 16 Kbytes of interleaver memory at the transmitter (or deinterleaver memory at the receiver). This latency path will be able to correct a burst of errors that is less than 512 bytes in duration.

A second transmitter portion or receiver portion latency path may carry an internet access application that requires a medium BER and a medium amount of latency. In this case, the internet access application will be transported using a latency path that has a medium amount of interleaving and coding. For example, the latency path may be configured with Reed-Solomon coding using a codeword size of 128 bytes (N=128) with 8 check bytes (R=8) and interleaving using an interleaver depth of 32 (D=32). This latency path will require N*D = 128*32 = 4 Kbytes of interleaver memory and the same amount of deinterleaver memory. This latency path will be able to correct a burst of errors that is less than 128 bytes in duration.

A third transmitter portion or receiver portion latency path may carry a voice telephony application, which needs a very low latency but can tolerate a higher BER. In this case, the voice will be transported using a latency path that has little or no interleaving and coding. For example, the third transmitter portion or receiver portion latency path may be configured with no interleaving or coding, which will result in the lowest possible latency through the latency path but will provide no error correction capability.

According to the principles of this invention, a system carrying the three applications described above in Example #1 would have three latency paths that share one memory space containing at least (16+4) = 20 Kbytes. The three latency paths also share a common coding block that is able to simultaneously encode (in the transmitter portion) or decode (in the receiver portion) two codewords with N=255/R=16 and N=128/R=8.
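The memory and burst-protection figures in Example #1 follow from the codeword size N, the number of Reed-Solomon check bytes R, and the interleaver depth D: the interleaver (or deinterleaver) needs roughly N*D bytes, and the longest correctable burst is about (R/2)*D bytes, since R check bytes correct up to R/2 byte errors per codeword. A small sketch of that arithmetic (the helper names are illustrative, not the patent's):

```python
def interleaver_memory_bytes(N: int, D: int) -> int:
    """Interleaver (or deinterleaver) memory for one latency path: N * D bytes."""
    return N * D

def burst_protection_bytes(R: int, D: int) -> int:
    """Longest correctable error burst: (R / 2) * D bytes, since R Reed-Solomon
    check bytes correct up to R/2 byte errors in each interleaved codeword."""
    return (R // 2) * D

# Example #1: video path (N=255, R=16, D=64) and internet-access path (N=128, R=8, D=32).
assert interleaver_memory_bytes(255, 64) == 16320   # ~16 Kbytes
assert burst_protection_bytes(16, 64) == 512        # 512-byte bursts
assert interleaver_memory_bytes(128, 32) == 4096    # 4 Kbytes
assert burst_protection_bytes(8, 32) == 128         # 128-byte bursts
# The voice path uses no interleaving or coding, so it adds no interleaver memory.
```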
According to an exemplary embodiment of this invention, the latency paths can be reconfigured at initialization or during data transmission mode (also known as SHOWTIME in
ADSL and VDSL transceivers). This would occur if, for example, the applications or application requirements were to change.
EXAMPLE #2
If, instead of 1 video application, 1 internet application and 1 voice application, there were 3 internet access applications, then the transmitter portion and/or receiver portion latency paths would be reconfigured to utilize the shared memory and coding module in a different way. For example, the system could be reconfigured to have 3 transmitter portion or receiver portion latency paths, with each latency path being configured with Reed-Solomon coding using a codeword size of 128 bytes (N=128) with 8 check bytes (R=8) and interleaving using an interleaver depth of 32 (D=32). Each latency path will require N*D = 128*32 = 4 Kbytes of interleaver memory, and each block will be able to correct a burst of errors that is less than 128 bytes in duration. Based on this example of carrying three internet access applications, the three latency paths share one memory space containing at least 3*4 = 12 Kbytes. Also, the three latency paths share a common coding block that is able to simultaneously encode (on the transmitter side) or decode (on the receiver side) three codewords with N=128/R=8, N=128/R=8 and N=128/R=8.
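The same arithmetic, applied to the three identical internet-access paths of Example #2, checks out as follows (again an illustrative sketch, not part of the patent):

```python
# Three identical internet-access paths: N=128, D=32, R=8 (see Example #2).
N, D, R = 128, 32, 8
per_path_memory = N * D                    # 4096 bytes = 4 Kbytes per interleaver
total_shared_memory = 3 * per_path_memory  # 12288 bytes = 12 Kbytes across the three paths
burst_protection = (R // 2) * D            # 128-byte bursts correctable on each path
assert (per_path_memory, total_shared_memory, burst_protection) == (4096, 12288, 128)
```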
EXAMPLE #3
The system could be configured to carry yet another set of applications. For example, the latency paths could be configured to carry 2 video applications. In this case only 2 transmitter portion or receiver portion latency paths are needed, which means that the third latency path could simply be disabled. Also, assuming that the memory is constrained based on the first example above, the maximum shared memory for these 2 latency paths is 20 Kbytes. In this case, the system could be reconfigured to have 2 latency paths, with each block being configured with Reed-Solomon coding using a codeword size of 200 bytes (N=200) with 10 check bytes (R=10) and interleaving/deinterleaving using an interleaver depth of 50 (D=50). Each latency path will require N*D = 200*50 = 10 Kbytes of interleaver memory, and each block will be able to correct a burst of errors that is less than 250 bytes in duration. This configuration results in 20 Kbytes of shared memory for both latency paths, which is the same as in the first example. In order to stay within the memory constraints of the latency paths, the error correction capability for each latency path is decreased to 250 bytes from 512 bytes in Example #1.
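Example #3 effectively picks the deepest interleaver that lets two video paths fit the roughly 20-Kbyte budget carried over from Example #1. A hedged sketch of that constraint check (treating "20 Kbytes" as 20480 bytes is an assumption):

```python
# Two video paths must fit the roughly 20-Kbyte shared-memory budget from Example #1.
budget_bytes = 20 * 1024              # "20 Kbytes", taken here as 20480 bytes
N, R = 200, 10                        # Reed-Solomon codeword size and check bytes
max_depth = (budget_bytes // 2) // N  # deepest interleaver that fits per path: 51
D = 50                                # Example #3 settles on D = 50
assert D <= max_depth
per_path_memory = N * D               # 10000 bytes, i.e. ~10 Kbytes per path
burst_protection = (R // 2) * D       # 250 bytes, down from 512 bytes in Example #1
assert (per_path_memory, burst_protection) == (10000, 250)
```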
Another aspect of this invention is how FCI configuration information is transmitted between a first modem and a second modem. FCI configuration information will depend on the requirements of the applications being transported over the DSL connection. This information may need to be forwarded during initialization in order to initially configure the DSL connection. This information may also need to be forwarded during SHOWTIME in order to reconfigure the DSL connection based on a change in applications or the application requirements.

According to one embodiment, a first modem determines the specific FCI configuration parameters, e.g., N, D, R as defined above, needed to meet specific application requirements, such as latency, burst error correction capability, etc. In order to determine the FCI configuration parameters, the first modem must know what the capabilities of a second modem are. For example, the first modem must know how many latency paths (FCI blocks) the second modem can support.
Also, the first modem must know the maximum amount of interleaver memory for each transmitter latency path. In addition, since the transmitter latency paths may share a common memory space, the first modem must know the total shared memory for all transmitter latency paths. This way the first modem will be able to choose a configuration that can meet the application requirements and also meet the transmitter portion latency path capabilities of the second modem.
For example, using values from the examples above, a first transceiver could send a message to a second transceiver during initialization or during SHOWTIME containing the following information:

Number of supported transmitter and receiver latency paths = 3
Max Interleaver Memory for latency path #1 = 16 Kbytes
Max Interleaver Memory for latency path #2 = 16 Kbytes
Max Interleaver Memory for latency path #3 = 16 Kbytes
Maximum total/shared memory for all latency paths = 20 Kbytes
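A sketch of how the capability message listed above might be represented. The structure and field names are illustrative assumptions; they are not the patent's message format or that of any DSL standard.

```python
from dataclasses import dataclass

@dataclass
class CapabilityMessage:
    """Capabilities one transceiver reports to the other (illustrative structure only)."""
    num_latency_paths: int
    max_interleaver_memory_per_path: list[int]  # bytes, one entry per latency path
    max_total_shared_memory: int                # bytes, across all latency paths

# The values listed above, reading "16 Kbytes" as 16384 bytes and "20 Kbytes" as 20480 bytes.
msg = CapabilityMessage(
    num_latency_paths=3,
    max_interleaver_memory_per_path=[16 * 1024, 16 * 1024, 16 * 1024],
    max_total_shared_memory=20 * 1024,
)
```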
Based on this information, and the application requirements, the first transceiver would select latency path settings. For example, if the applications are 1 video, 1 internet access and 1 voice application, the first transceiver could configure 3 latency paths as follows:

latency path #1
