`
`(12) United States Patent
`Yancey et al.
`
(10) Patent No.: US 7,921,323 B2
(45) Date of Patent: Apr. 5, 2011
`
(54) RECONFIGURABLE COMMUNICATIONS
INFRASTRUCTURE FOR ASIC NETWORKS
`
`(75) Inventors: Jerry W. Yancey, Rockwall, TX (US);
`Yea Zong Kuo, Rockwall, TX (US)
`
`(73) Assignee: L-3 Communications Integrated
`Systems, L.P., Greenville, TX (US)
`
(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 1104 days.
`(21) Appl. No.: 11/600,934
`
(22) Filed: Nov. 16, 2006
`
`(65)
`
`Prior Publication Data
`US 2007/O1 O1242 A1
`May 3, 2007
`
`Related U.S. Application Data
(63) Continuation-in-part of application No. 10/843,226, filed on May 11, 2004, now Pat. No. 7,444,454.
(51) Int. Cl.
G06F 13/00 (2006.01)
(52) U.S. Cl. ........................................................... 714/4
(58) Field of Classification Search ................... 714/4, 5
`See application file for complete search history.
`
`(56)
`
`References Cited
`
`U.S. PATENT DOCUMENTS
4,528,658 A 7/1985 Israel
5,737,235 A 4/1998 Kean et al.
5,802,290 A * 9/1998 Casselman .................... 709/201
5,838,167 A 11/1998 Erickson et al.
5,931,959 A * 8/1999 Kwiat ............................. 714/48
5,941,988 A 8/1999 Bhagwat et al.
5,953,372 A 9/1999 Virzi
6,020,755 A 2/2000 Andrews et al.
6,201,829 B1 3/2001 Schneider
6,233,704 B1 5/2001 Scott et al.
6,259,693 B1 7/2001 Ganmukhi et al.
6,292,923 B1 9/2001 Genrich et al.
6,333,641 B1 12/2001 Wasson
6,339,819 B1 * 1/2002 Huppenthal et al. ............ 712/16
6,381,238 B1 4/2002 Hluchyj
6,385,236 B1 5/2002 Chen
6,389,379 B1 5/2002 Lin et al.
6,421,251 B1 7/2002 Lin
6,496,291 B1 12/2002 Raj et al.
`(Continued)
`
FOREIGN PATENT DOCUMENTS
GB 2377138 A 12/2002
`OTHER PUBLICATIONS
`Search Report, PCT/US07/23700; Apr. 18, 2008; 2 pgs.
`(Continued)
`Primary Examiner — Stephen M Baker
`(74) Attorney, Agent, or Firm — O'Keefe, Egan, Peterman &
`Enders LLP
`
(57) ABSTRACT
Reconfigurable communications infrastructures may be implemented to interconnect ASIC devices (e.g., FPGAs) and other computing and input/output devices using high bandwidth interconnection mediums. The computing and input/output devices may be positioned in locations that are physically segregated from each other, and/or may be provided to project a reconfigurable network across a wide area. The reconfigurable communications infrastructures may be implemented to allow such computing and input/output devices to be used in different arrangements and applications, e.g., for use in any application where a large array of ASIC devices may be usefully employed, such as supercomputing, etc.
`
`34 Claims, 18 Drawing Sheets
`
`Ex. 1001
`CISCO SYSTEMS, INC. / Page 1 of 37
`
`
`
`
U.S. PATENT DOCUMENTS
6,496,505 B2 12/2002 La Porta et al.
6,614,267 B2 * 9/2003 Taguchi ........................ 326/101
6,617,877 B1 9/2003 Cory et al.
6,651,225 B1 11/2003 Lin et al.
6,668,361 B2 * 12/2003 Bailis et al. ....................... 716/4
6,721,313 B1 4/2004 Van Duyne
6,754,881 B2 * 6/2004 Kuhlmann et al. ............. 716/16
6,873,180 B2 3/2005 Bentz
6,888,376 B1 5/2005 Venkata et al.
6,901,072 B1 5/2005 Wong
6,934,763 B2 8/2005 Kubota et al.
6,965,571 B2 11/2005 Webber
6,993,032 B1 1/2006 Dammann et al.
7,003,585 B2 2/2006 Phong et al.
7,020,147 B1 3/2006 Amadon et al.
7,035,228 B2 4/2006 Baumer
7,111,110 B1 9/2006 Pedersen
7,137,048 B2 11/2006 Zerbe et al.
7,188,283 B1 3/2007 Shafer et al.
7,224,184 B1 5/2007 Levi et al.
7,260,650 B1 8/2007 Lueckenhoff
7,389,487 B1 * 6/2008 Chan et al. ...................... 716/17
7,404,170 B2 * 7/2008 Schott et al. .................... 716/16
7,415,331 B2 * 8/2008 Dapp et al. ..................... 701/25
7,439,763 B1 * 10/2008 Kavipurapu et al. ........... 326/38
7,453,899 B1 * 11/2008 Vaida et al. .................. 370/419
7,506,297 B2 * 3/2009 Mukherjee et al. ............. 716/18
7,518,396 B1 4/2009 Kondapalli et al.
2002/0021680 A1 2/2002 Chen
2002/0057657 A1 5/2002 La Porta et al.
2002/0059274 A1 5/2002 Hartsell et al.
2002/0095400 A1 7/2002 Johnson et al.
2003/0009585 A1 1/2003 Antoine et al.
2003/0026260 A1 2/2003 Ogasawara et al.
2003/0167340 A1 9/2003 Jonsson
2004/0085902 A1 5/2004 Miller et al.
2004/0131072 A1 7/2004 Khan et al.
2004/0156368 A1 8/2004 Barri et al.
2004/0158784 A1 8/2004 Abuhamdeh et al.
2004/0240468 A1 12/2004 Chin et al.
2004/0249964 A1 12/2004 Mougel
2005/0044439 A1 2/2005 Shatas et al.
2005/0169311 A1 8/2005 Millet et al.
2005/0175018 A1 8/2005 Wong
2005/0183042 A1 8/2005 Vogel et al.
2005/0242834 A1 11/2005 Vadi et al.
2005/0248364 A1 11/2005 Vadi et al.
2005/0256969 A1 11/2005 Yancey et al.
2006/0002386 A1 1/2006 Yik et al.
`
`
`
`OTHER PUBLICATIONS
Copending U.S. Appl. No. 11/600,935, entitled "Methods and Systems for Relaying Data Packets," filed Nov. 16, 2006; 101 pgs.
Laxdal, "ELEC 563 Project Reconfigurable Computers," http://www.ece.uvic.ca/~elaxdal/Elect563/reconfigurable_computers.html; printed from the Internet Dec. 19, 2003; Dec. 2, 1999; 10 pgs.
"PCI DSP-4 Four Complete Channels of Digital Acoustic Emission Data Acquisition on a Single Board," http://www.pacndt.com/products/Multichannel/pcidsp.html, printed from the Internet Dec. 19, 2003, 3 pgs.
Zaiq Technologies, "Innovation: Methodology Briefs," http://www.zaiqtech.com/innovation/m_fpga.html, printed from the Internet Jan. 15, 2004, 12 pgs.
Hardt et al., "Flysig: Dataflow Oriented Delay-Insensitive Processor for Rapid Prototyping of Signal Processing," (obtained from Internet Dec. 2003), 6 pgs.
Chang et al., "Evaluation of Large Matrix Operations on a Reconfigurable Computing Platform for High Performance Scientific Computations," (obtained from Internet Dec. 2003), 10 pgs.
Alfke, "FPGA Configuration Guidelines," XAPP 090, Nov. 24, 1997, Version 1.1, pp. 31-38.
"XC18V00 Series of In-System Programmable Configuration PROMs," Xilinx Product Specification, DS026 (v.3.0), Nov. 12, 2001, 19 pgs.
`
Thacker, "System ACE Technology: Configuration Manager Breakthrough," New Technology, FPGA Configuration, Xcell Journal, Summer 2001, pp. 52-55.
"System ACE MPM Solution," Xilinx Product Specification, DS087 (v1.0), Sep. 25, 2001, 29 pgs.
"RapidIO™: An Embedded System Component Network Architecture," Architecture and Systems Platforms, Feb. 22, 2000, 25 pgs.
"Raceway Interlink Functional Specification," Mercury Computer Systems, Inc., Nov. 8, 2000, 118 pgs.
"XMC-3310 High Speed Transceiver ePMC Module," Spectrum Signal Processing, http://www.spectrumsignal.com/Products_Datasheets/XMC-3310_datasheet.asp, © 2002-2004, 5 pgs. (this reference describes a product available prior to the May 11, 2004 filing date of the present application).
"XMC-3310 High Speed Transceiver ePMC Module," Spectrum Signal Processing, Rev. May 2004, 4 pgs. (this reference describes a product available prior to the May 11, 2004 filing date of the present application).
RocketIO™ Transceiver User Guide, Xilinx, UG024 (v2.3), Feb. 24, 2004, 152 pgs.
"The FPGA Systems Connectivity Tool," Product Brief, Nallatech, DIMEtalk 2.1, Feb. 2004, pp. 1-8.
B. Hall, "BTeV Front End Readout & Links," BTEV Co., Aug. 17, 2000, 11 pgs.
Irwin, "Usage Models for Multi-Gigabit Serial Transceivers," Xilinx, xilinx.com, White Paper, WP157 (v1.0), Mar. 15, 2002, 10 pgs.
Campenhout, "Computing Structures and Optical Interconnect: Friends or Foes?," Department of Electronics and Information Systems, Ghent University, obtained from Internet Oct. 8, 2006, 11 pgs.
E. Hazen, "HCAL HO Trigger Link," Optical SLB-HTR Interface Specification, May 24, 2006, 4 pgs.
G. Russell, "Analysis and Modelling of Optically Interconnected Computing Systems," School of Engineering and Physical Sciences, Heriot-Watt University, May 2004, 170 pgs.
Copending U.S. Appl. No. 11/529,712, entitled "Systems and Methods for Interconnection of Multiple FPGA Devices," filed Sep. 28, 2006; 42 pgs.
Copending U.S. Appl. No. 11/529,713, entitled "Systems and Methods for Interconnection of Multiple FPGA Devices," filed Sep. 28, 2006; 42 pgs.
Yancey et al., "Systems and Methods for Data Transfer," U.S. Appl. No. 11/529,713, filed Sep. 28, 2006, Preliminary Amendment, Dec. 22, 2006, 11 pgs.
Yancey et al., "Systems and Methods for Data Transfer," U.S. Appl. No. 11/529,713, filed Sep. 28, 2006, Office Action, Feb. 19, 2009, 12 pgs.
Yancey et al., "Systems and Methods for Data Transfer," U.S. Appl. No. 11/529,713, filed Sep. 28, 2006, Amendment and Response to Office Action, May 19, 2009, 17 pgs.
Yancey et al., "Systems and Methods for Data Transfer," U.S. Appl. No. 11/529,713, filed Sep. 28, 2006, Office Action, Aug. 19, 2009, 5 pgs.
Yancey et al., "Systems and Methods for Data Transfer," U.S. Appl. No. 11/529,713, filed Sep. 28, 2006, Response to Office Action, Aug. 25, 2009, 4 pgs.
Yancey et al., "Systems and Methods for Data Transfer," U.S. Appl. No. 11/529,713, filed Sep. 28, 2006, Office Action, Oct. 2, 2009, 3 pgs.
Yancey et al., "Systems and Methods for Data Transfer," U.S. Appl. No. 11/529,713, filed Sep. 28, 2006, Response to Advisory Action, Oct. 14, 2009, 4 pgs.
Yancey et al., "Systems and Methods for Data Transfer," U.S. Appl. No. 11/529,713, filed Sep. 28, 2006, Notice of Allowance and Fees Due, Dec. 4, 2009, 4 pgs.
Yancey et al., "Systems and Methods for Interconnection of Multiple FPGA Devices," U.S. Appl. No. 10/843,226, filed May 11, 2004, Preliminary Amendment, Nov. 14, 2006, 19 pgs.
Yancey et al., "Systems and Methods for Interconnection of Multiple FPGA Devices," U.S. Appl. No. 10/843,226, filed May 11, 2004, Second Preliminary Amendment, Nov. 29, 2006, 3 pgs.
Yancey et al., "Systems and Methods for Interconnection of Multiple FPGA Devices," U.S. Appl. No. 10/843,226, filed May 11, 2004, Office Action, Jan. 4, 2007, 25 pgs.
`
`
`
`
`
`Yancey et al. “Systems and Methods for Interconnection of Multiple
`FPGA Devices”, U.S. Appl. No. 10/843,226, filed May 11, 2004,
`Amendment and response to Office Action, May 4, 2007, 32 pgs.
`Yancey et al. “Systems and Methods for Interconnection of Multiple
`FPGA Devices”, U.S. Appl. No. 10/843,226, filed May 11, 2004,
`Office Action, Jul. 27, 2007, 29 pgs.
`Yancey et al. “Systems and Methods for Interconnection of Multiple
`FPGA Devices”, U.S. Appl. No. 10/843,226, filed May 11, 2004,
`Amendment and Response to Office Action, Sep. 27, 2007, 37 pgs.
`Yancey et al. “Systems and Methods for Interconnection of Multiple
`FPGA Devices”, U.S. Appl. No. 10/843,226, filed May 11, 2004,
`Office Action, Nov. 6, 2007, 26 pgs.
`Yancey et al. “Systems and Methods for Interconnection of Multiple
`FPGA Devices”, U.S. Appl. No. 10/843,226, filed May 11, 2004,
`Amendment and Response to Office Action, Apr. 16, 2008, 46 pgs.
`Yancey et al. “Systems and Methods for Interconnection of Multiple
`FPGA Devices”, U.S. Appl. No. 10/843,226, filed May 11, 2004,
`Notice of Allowance and Fees Due, Jul. 23, 2008, 11 pgs.
`
Yancey et al., "Systems and Methods for Writing Data With a Fifo Interface," U.S. Appl. No. 11/529,712, filed Sep. 28, 2006, Preliminary Amendment, Dec. 7, 2006, 13 pgs.
Yancey et al., "Systems and Methods for Writing Data With a Fifo Interface," U.S. Appl. No. 11/529,712, filed Sep. 28, 2006, Office Action, Apr. 27, 2007, 17 pgs.
Yancey et al., "Systems and Methods for Writing Data With a Fifo Interface," U.S. Appl. No. 11/529,712, filed Sep. 28, 2006, Amendment and Response to Office Action, Jul. 25, 2007, 19 pgs.
Yancey et al., "Systems and Methods for Writing Data With a Fifo Interface," U.S. Appl. No. 11/529,712, filed Sep. 28, 2006, Office Action, Oct. 22, 2007, 17 pgs.
Yancey et al., "Systems and Methods for Writing Data With a Fifo Interface," U.S. Appl. No. 11/529,712, filed Sep. 28, 2006, RCE and Amendment, Mar. 19, 2008, 26 pgs.
Yancey et al., "Systems and Methods for Writing Data With a Fifo Interface," U.S. Appl. No. 11/529,712, filed Sep. 28, 2006, Notice of Allowance and Fees Due, May 30, 2008, 7 pgs.
`* cited by examiner
`
`
`
`
[Drawing Sheets 1-18 (FIGS. 1-18): the original patent drawings are not reproducible as text. Legible OCR fragments indicate, e.g., "User-Defined Functions," "PRISM," and "Router" blocks in FIG. 2, and serializer/deserializer, encoder, comma-detect, clock-correction, and channel-bonding transceiver blocks in FIG. 4.]
`1.
`RECONFIGURABLE COMMUNICATIONS
`NFRASTRUCTURE FORASC NETWORKS
`
`US 7,921,323 B2
`
This patent application is a continuation-in-part of U.S. patent application Ser. No. 10/843,226, titled "SYSTEMS AND METHODS FOR NETWORKING MULTIPLE FPGA DEVICES," by Jerry W. Yancey, et al., filed on May 11, 2004, now U.S. Pat. No. 7,444,454, which is incorporated herein by reference in its entirety.

BACKGROUND OF THE INVENTION
`
1. Field of the Invention

This invention relates generally to interconnection of multiple electrical devices, and more particularly to interconnection of multiple ASIC devices, for example, multiple Field Programmable Gate Array (FPGA) devices.

2. Description of the Related Art

In the past, multiple FPGA devices have been interconnected as an array on a single circuit card using point-to-point or bussed parallel wiring configurations. Such configurations use many wires (along with associated I/O counts and termination components) to achieve required data transfer bandwidths, thus requiring the creation of many connection layers on a circuit card and leading to undesirable outcomes such as a high degree of mechanical complexity and cost. Examples of these parallel interfaces include those using signaling standards such as Gunning Transceiver Logic ("GTL"), Stub Series Termination Logic ("SSTL"), and High-Speed Transceiver Logic ("HSTL"). Some of these standards require as many as three termination components per signal to implement.

Additional parallel wiring is typically employed when an FPGA array is used to implement multiple card-level interfaces and embedded processor nodes, further increasing circuit complexity. In addition, diverse types of interfaces (VME64x, Race++, and PCI), processors, and user hardware modules are often required to communicate with each other on a single card, further complicating inter-card communications issues. For example, current commercial products commonly bridge two standard interfaces together, such as VERSA-Module Europe ("VME") and Peripheral Component Interconnect ("PCI") interfaces, using parallel bridging chips. Additionally, system-level FPGAs with embedded Power PC ("PPC") or similar functions require implementation of more processing and interface nodes on a single card. Banking of I/O pins has reduced the need for termination components, but large I/O counts still require many layers to route, driving printed circuit board (PCB) layer counts and costs upward.

In addition to parallel wiring configurations, FPGAs on a single card have been interconnected using IEEE 1149 (Joint Test Action Group, "JTAG") serial interconnections for configuration purposes. However, such JTAG serial interconnections are not suitable for functions such as high-speed data transfer or signal processing. Thus, the use of multiple large FPGAs, embedded processors, and various standard interfaces on a single card presents significant problems with card layout/routing and inter-card communication.

In large systems, FPGA and other high-performance computing devices are often buried in many layers of custom I/O connections, making them difficult to access for general use. This characteristic compromises many of the benefits realized from using a reconfigurable circuit.

Medical imaging applications such as Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) are by nature massively parallel, calculation-intensive processes. Modern versions of these imaging technologies make extensive use of sophisticated digital signal processing (DSP) algorithms and matrix arithmetic to perform such functions as 3-D reconstruction, color coding, and real-time video display. Seismic oil exploration technology involves not only geology, but also the collection and processing of large amounts of data from geophone and hydrophone arrays. The analysis and multi-dimensional reconstruction of data from such arrays is a parallel problem which involves sophisticated matrix arithmetic as well as DSP.
Pharmaceutical and biotech-related applications such as drug interaction modeling and protein folding simulations are at the same time numerous and by nature extremely calculation-intensive. In one example, a simulation which works out the folding sequence for just 50 amino acid molecules (a very limited set compared to the chains which form real proteins) took 4 to 5 days to run. The computational problems with such calculations are so daunting that some researchers have even turned to volunteer computer networks to get more run-time on these simulations. One such group (the "folding@home" project from Stanford) runs protein folding and aggregation simulations by using the Internet to access screen saver programs on volunteer PCs which each run a small piece of the overall parallel calculation.

Special effects in motion pictures and television are also very calculation-intensive. Sophisticated effects such as shading, shadowing, and texturing, as well as full character animation, are becoming increasingly commonplace. One recent movie contained over 6,000 independent artificial intelligence (AI)-driven characters fighting in a lengthy battle sequence. Digital synthesis of a large number of such frames is very costly and time consuming. Because of the long times required to produce the final rendered product, wireframes and other shortcut methods are often used to facilitate the shooting process. As a result, intricate planning and post-production work is required to make sure that the final effects will fit together with the related live action.
`
`SUMMARY OF THE INVENTION
`
Disclosed are methods and systems for interconnecting Application Specific Integrated Circuit (ASIC) devices using simplex and/or duplex serial I/O connections, including high speed serial connections such as multi-gigabit serial transceiver ("MGT") connections. Examples of ASIC devices that may be interconnected using the disclosed systems and methods include, but are not limited to, Field Programmable Gate Arrays ("FPGAs") or other field programmable devices ("FPDs") or programmable logic devices ("PLDs"). In one embodiment of the practice of the disclosed systems and methods, serial I/O connections may be employed to interconnect a pair of ASICs to create a low signal count connection. For example, in one exemplary embodiment, high speed serial I/O connections (e.g., such as MGT connections) may be employed to interconnect a pair of ASICs to create a high bandwidth, low signal count connection.
In one embodiment of the disclosed systems and methods, any given pair of multiple ASIC devices on a single circuit card (e.g., selected from three or more ASIC devices present as an ASIC array on a single circuit card) may be interconnected by one or more serial data communication links (simplex and/or duplex serial data communication links formed between respective serial I/O connections of a given pair of ASIC devices) so that the given pair of ASIC devices may communicate with each other through the two serial I/O connections of each of the serial data communication links with no other serial connection intervening in between, or in other words, in a "one-step" fashion. Such a capability may be implemented, for example, such that each embedded processor, processor node, card-level interface, user-defined hardware module, etc. is provided with access to each of the other such entities on the card through one or more separate respective "one-step" data communication links that each includes no more than two respective serial connections coupled together (e.g., no more than two respective high speed serial connections coupled together) in the data communication path and through a minimum number of packet transfer points. In a further embodiment, such a respective data communication link may be further characterized as a "direct serial interconnection" between two such entities, meaning that no multi-port switch device (e.g., crossbar switch, etc.) exists in the serial data communication path between the boundaries of the two entities. Advantageously, the disclosed systems and methods may be so implemented in one embodiment to achieve communication between given pairs of devices with relatively high data transfer bandwidths and minimal wiring. Furthermore, the disclosed systems and methods may be utilized (e.g., extended) to establish a communications infrastructure across multiple circuit cards.
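As a rough illustration of the "one-step" arrangement described above, the following Python sketch (not part of the patent; the function name and link model are assumptions made for illustration) enumerates the pairwise links needed so that every pair of devices in an array communicates over its own direct serial link, with no intervening connection:

```python
# Hypothetical model: an all-to-all "one-step" topology gives every pair of
# the n devices its own duplex serial link, i.e. n*(n-1)/2 links in total.
from itertools import combinations

def one_step_links(n_devices: int) -> set:
    """Return the set of direct pairwise links for an n-device array."""
    return {frozenset(pair) for pair in combinations(range(n_devices), 2)}

links = one_step_links(4)  # a 4-FPGA array needs 6 duplex links
# Every pair is reachable in one step, with no intermediate serial connection.
assert all(frozenset(p) in links for p in combinations(range(4), 2))
```

This is only a counting model; the patent's point is that each such link contains at most two serial connections and no multi-port switch between the two endpoints.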
The disclosed systems and methods may be implemented in a variety of environments including, but not limited to, signal processing applications, communication applications, interfacing applications, networking applications, cognitive computing applications, test and measurement applications, etc. For example, the disclosed systems and methods may be implemented as part of a reconfigurable hardware architecture ("RHA"), such as a reconfigurable signal processing circuit, that serves as a consistent framework in which ASIC applications may be user-defined and/or deployed in such a way as to enhance code portability, design re-use, and intercommunication, as well as to support board-level simulations extending beyond and between individual ASIC boundaries.

In one embodiment, an RHA may be configured to include a packet-based communications infrastructure that uses a high bandwidth switch fabric (e.g., crossbar, etc.) packet router to establish standard communications protocols between multiple interfaces and/or multiple devices that may be present on a single circuit card (e.g., interfaces, processor nodes, and user-defined functions found on signal processing cards). Such an RHA may be further configured in one embodiment to provide a useful communications framework that promotes commonality across multiple (e.g., all) signal processing applications without restricting user utility. For example, packets conforming to a given interface (e.g., Race++ standard) may be processed by stripping the packet header off and then routing the remaining packet between ASIC devices using the standardized packet router infrastructure of the disclosed methods and systems. Advantageously, such an RHA may be implemented in a manner that does not preclude the addition of high-performance user connectivity, e.g., by only using a relatively small fraction of the available serial I/O connections (e.g., MGT connections) and ASIC (e.g., FPGA) gate resources. In one specific embodiment, embedded serial I/O connections (e.g., embedded MGT connections) of multiple FPGA devices may be used to interconnect the FPGA devices in a manner that advantageously reduces on-card I/O counts and the need for large numbers of termination components. However, it will be understood that non-embedded serial I/O connections may also be employed in the practice of the disclosed systems and methods.
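The header-stripping and routing behavior described above can be sketched in software as follows. This is a hypothetical model only: the `CrossbarRouter` class, the one-byte destination field, and the queue-based ports are illustrative assumptions, not the patent's hardware design:

```python
# Illustrative sketch (not from the patent): a packet router strips an
# interface-specific header and forwards the remaining packet to the device
# registered for the destination ID, in a single transfer step.

def strip_header(packet: bytes):
    """Assume byte 0 carries the destination device ID; the rest is payload."""
    return packet[0], packet[1:]

class CrossbarRouter:
    def __init__(self):
        self.ports = {}  # destination ID -> receive queue for that device

    def attach(self, dest_id: int, queue: list):
        self.ports[dest_id] = queue

    def route(self, packet: bytes):
        dest, payload = strip_header(packet)
        self.ports[dest].append(payload)  # forward payload; header is gone

fpga_a, fpga_b = [], []
router = CrossbarRouter()
router.attach(0, fpga_a)
router.attach(1, fpga_b)
router.route(bytes([1]) + b"sample-data")  # delivered to fpga_b only
```

In the actual architecture this routing is performed by switch-fabric hardware rather than software; the sketch only shows how a standardized router can carry packets from any interface once the interface-specific header is removed.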
In the practice of one exemplary embodiment of the disclosed systems and methods, multiple FPGAs of an FPGA array may be coupled together on a single card to communicate on a card-level basis using packet routing through one or more switch fabrics, e.g., crossbar switches, etc. In such an embodiment, each given pair of FPGA devices of an FPGA array may be linked in a manner that advantageously minimizes packet transfer latency times in the switch fabric, while at the same time allowing every source to have access to every destination in the array. In such an embodiment, a universal bridging method may be used in each FPGA to allow intercommunication between any two processors/interfaces on a single circuit card. In one exemplary embodiment, the bridging method may be implemented with a First-In First-Out ("FIFO") packet relay protocol that may be readily integrated into or mapped onto the slave functionality of standard interfaces and/or processor buses.
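A FIFO packet relay of the kind mentioned above might be modeled in software roughly as follows. This is a hedged sketch: the `FifoBridge` name and its interface are invented for illustration, and the patent's actual protocol details are not reproduced here:

```python
# Hypothetical sketch of a FIFO packet relay bridge: packets written on the
# slave side (e.g. by a processor bus) are relayed onto the serial link in
# strict first-in first-out order.
from collections import deque

class FifoBridge:
    def __init__(self):
        self.fifo = deque()

    def write(self, packet: bytes):
        """Slave-side write, as mapped onto a standard bus interface."""
        self.fifo.append(packet)

    def relay(self, link_send) -> int:
        """Drain queued packets onto the link in arrival order."""
        count = 0
        while self.fifo:
            link_send(self.fifo.popleft())
            count += 1
        return count

received = []
bridge = FifoBridge()
bridge.write(b"pkt1")
bridge.write(b"pkt2")
bridge.relay(received.append)  # relays pkt1 before pkt2
```

The appeal of such a scheme, as the text notes, is that a FIFO write/read discipline maps naturally onto the slave side of standard interfaces and processor buses, so the same bridge logic can front very different endpoints.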
Thus, the disclosed systems and methods may be implemented using a predictable and uniform or standardized interface across the boundaries between each pair of board-level components (e.g., FPGAs, ASICs, general-purpose processors, etc.) to help promote consistent communications, board-level testability, and design portability/re-use, and to provide a user with a relatively high degree of flexibility in establishing functional partitions for hardware modules mapped into an ASIC (e.g., FPGA) array. Further, built-in support for packet integrity checking and automatic retransmission of bad packets may be provided to facilitate the usage of the inter-ASIC links with hardware modules (e.g., signal processors such as Software-Defined Radios (SDRs), signal processing algorithms such as Fast-Fourier Transforms (FFTs) and wavelet transforms, data stream encryption and decryption, packet routing, etc.) that are sensitive to data corruption. For example, packet integrity checking (e.g., checksum, CRC, etc.) may be incorporated into the hardware layer (e.g., physical layer 1 of the Open System Interconnection ("OSI") protocol), for example, so that data may be transferred between hardware devices using a packet integrity checking method that is handled automatically by the hardware without the need for an upper layer of software to perform the packet integrity checking. For example, packet integrity protocol tasks (e.g., such as packet acknowledge, timeout, and retransmit tasks) may be built into interface/interconnection hardware present in a data communication link between ASICs or other devices. Using the configuration of the above-described embodiment, an ASIC array may be configured so as to be easily scalable to other cards, e.g., permitting expansion of ASIC resources. Where described herein in relation to an FPGA array, it will be understood that the disclosed systems and methods may be implemented with an array of any other type of ASIC device or an array of a combination of types of such devices.
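The check/acknowledge/retransmit behavior described above can be illustrated with a small software model. The specifics here are assumptions for illustration only: CRC-32 via Python's `zlib`, a 4-byte trailer, and a three-try retry policy; the patent attributes this machinery to the link hardware, not to software:

```python
# Hypothetical model of hardware-layer packet integrity checking with
# automatic retransmission of bad packets.
import zlib

def frame(payload: bytes) -> bytes:
    """Append a CRC-32 trailer to the payload (4 bytes, big-endian)."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check(framed: bytes):
    """Return the payload if its CRC matches, else None (bad packet)."""
    payload, crc = framed[:-4], int.from_bytes(framed[-4:], "big")
    return payload if zlib.crc32(payload) == crc else None

def send_with_retry(payload: bytes, channel, max_tries: int = 3):
    """Retransmit until the receiver's CRC check passes (an implicit ack)."""
    for _ in range(max_tries):
        received = check(channel(frame(payload)))
        if received is not None:
            return received
    raise TimeoutError("no good packet after retries")

# A channel that corrupts the first transmission, then delivers cleanly.
attempts = []
def flaky_channel(framed: bytes) -> bytes:
    attempts.append(framed)
    return b"\x00" + framed[1:] if len(attempts) == 1 else framed

result = send_with_retry(b"data", flaky_channel)  # succeeds on retry
```

The point the text makes is that because this check-and-retransmit loop lives in the interconnection hardware, modules that are sensitive to data corruption (SDRs, FFTs, encryption, etc.) see only clean packets, with no software protocol layer involved.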
As disclosed herein, reconfigurable communications infrastructures may be implemented to interconnect ASIC devices (e.g., FPGAs) and other computing and input/output devices using high bandwidth interconnection mediums. The disclosed reconfigurable communications infrastructures may be implemented in one embodiment to address communications infrastructure issues associated with interconnecting multiple computing devices such as ASICs. In this regard, the disclosed reconfigurable communications infrastructures may be implemented not only to interconnect ASIC devices that are provided on a single circuit card or that are provided within a single electronics chassis (e.g., provided on separate circuit cards within the same electronics chassis), but also to interconnect ASIC devices and other computing and input/output devices that are positioned in locations that are physically segregated from each other (e.g., that are positioned in different electronics chassis, positioned in different rooms of a given building or facility such as a military base, stationary oil and gas platform, shopping mall, or office building, positioned in different compartments of a given mobile vehicle such as an aircraft, truck and/or trailer, spacecraft, submarine, train, boat, mobile oil and gas platform, etc., and/or that are positioned at different locations using ports across a wide area network such as the Internet, wireless networks, public telephone system, cable television network, satellite communications system, etc.).
Examples of computing and input/output devices that may be interconnected using the disclosed systems and methods while positioned in locations that are physically segregated from each other include, but are not limited to, analog/digital converters, digital/analog converters, RF receivers and distribution systems, sensor interfaces and arrays of such devices (e.g., such as antennas, microphones, geophones, hydrophones, magnetic sensors, RFIDs, etc.). Other examples of such devices include, but are not limited to, wired network interfaces (e.g., such as Ethernet, Gigabit Ethernet, Universal Serial Bus (USB), Firewire, Infiniband, Serial and Parallel RapidIO, PCIe, Fibre Channel, optical interfaces, etc.), the Internet, wireless network interfaces (e.g., such as 802.11a, 802.11b, 802.11g, 802.11n, Multiple Input/Multiple Output (MIMO), Ultra-Wideband (UWB), etc.), bus interfaces (e.g., such as VME, PCI, ISA, Multibus, etc.), compute nodes (including both singl