US007373449B2

(12) United States Patent
     Radulescu et al.

(10) Patent No.:     US 7,373,449 B2
(45) Date of Patent: May 13, 2008

(54) APPARATUS AND METHOD FOR COMMUNICATING IN AN INTEGRATED CIRCUIT

(75) Inventors: Andrei Radulescu, Eindhoven (NL); Kees Gerard Willem Goossens, Eindhoven (NL)

(73) Assignee: Koninklijke Philips Electronics N.V., Eindhoven (NL)

( * ) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 270 days.

(21) Appl. No.: 10/530,267

(22) PCT Filed: Oct. 7, 2003

(86) PCT No.: PCT/IB03/04414
     § 371 (c)(1), (2), (4) Date: Apr. 5, 2005

(87) PCT Pub. No.: WO2004/034176
     PCT Pub. Date: Apr. 22, 2004

(65) Prior Publication Data
     US 2006/0041889 A1     Feb. 23, 2006

(30) Foreign Application Priority Data
     Oct. 8, 2002 (EP) .................................. 02079196

(51) Int. Cl.
     G06F 13/00 (2006.01)

(52) U.S. Cl. ....................... 710/316; 709/227; 709/228

(58) Field of Classification Search ........ 710/300-317, 710/8-19, 56-61, 104-110; 709/201-203, 709/310-317, 227-229
     See application file for complete search history.

(56) References Cited

     U.S. PATENT DOCUMENTS

     4,807,118 A *      2/1989   Lin et al. ................... 709/237
     5,317,568 A *      5/1994   Bixby et al. ................ 370/401
     6,539,450 B1 *     3/2003   James et al. ................ 710/306
     6,629,166 B1 *     9/2003   Grun ......................... 710/36
     6,769,046 B2 *     7/2004   Adams et al. ................ 710/316
     7,165,128 B2 *     1/2007   Gadre et al. ................ 710/52
     2003/0110339 A1 *  6/2003   Calvignac et al. ............ 710/305
     2003/0182419 A1 *  9/2003   Barr et al. ................. 709/224
     2004/0103230 A1 *  5/2004   Emerson et al. .............. 710/110

     OTHER PUBLICATIONS

     Kumar et al.; "A Network on Chip Architecture and Design Methodology"; IEEE Computer Society Annual Symposium on VLSI (ISVLSI 2002), Pittsburgh, PA, Apr. 25, 2002; Los Alamitos, CA; pp. 117-124; XP002263346.

     * cited by examiner

     Primary Examiner-Raymond N Phan

(57) ABSTRACT
`
An integrated circuit comprising a plurality of processing
modules (M; I; S; T) and a network (N; RN) arranged for
providing at least one connection between a first and at least
one second module is provided. Said connection comprises
a set of communication channels, each having a set of
connection properties. Said connection supports transactions
comprising outgoing messages from the first module to the
second module and return messages from the second module
to the first module. The connection properties of the different
communication channels of said connection can be adjusted
independently. Therefore, the utilization of the resources of
a network on chip is more efficient, since the connection
between modules can be efficiently adapted to their actual
requirements, such that the connection is not over-dimensioned
and unused network resources can be assigned to
other connections.
`
`18 Claims, 1 Drawing Sheet
`
[Representative drawing: module M connected through network interfaces NI to modules S1 and S2.]
`
`
`
`
[Drawing sheet: FIG. 1 (module M connected through network interfaces NI to modules S1 and S2); FIG. 2 (module I exchanging REQ/RESP messages with module T via an ANIP and an NI); FIG. 3 (module I exchanging REQ/RESP messages with module T via an ANIP and a PNIP over a router network RN).]
`
`
`
`APPARATUS AND METHOD FOR
`COMMUNICATING IN AN INTEGRATED
`CIRCUIT
`
`FIELD OF THE INVENTION
`
`The invention relates to an integrated circuit having a
`plurality of processing modules and a network arranged for
`providing connections between processing modules and a
`method for exchanging messages in such an integrated
`circuit.
`
`BACKGROUND OF THE INVENTION
`
Systems on silicon show a continuous increase in complexity
due to the ever-increasing need for implementing new
features and improvements of existing functions. This is
enabled by the increasing density with which components
can be integrated on an integrated circuit. At the same time
the clock speed at which circuits are operated tends to
increase too. The higher clock speed in combination with the
increased density of components has reduced the area which
can operate synchronously within the same clock domain.
This has created the need for a modular approach. According
to such an approach the processing system comprises a
plurality of relatively independent, complex modules. In
conventional processing systems the system modules usually
communicate with each other via a bus. As the number of
modules increases, however, this way of communication is
no longer practical for the following reasons. On the one
hand the large number of modules presents too high a bus
load. On the other hand the bus forms a communication
bottleneck, as it enables only one device at a time to send
data over the bus. A communication network forms an
effective way to overcome these disadvantages.
Networks on chip (NoC) have received considerable
attention recently as a solution to the interconnect problem
in highly complex chips. The reason is twofold. First, NoCs
help resolve the electrical problems in new deep-submicron
technologies, as they structure and manage global wires. At
the same time they share wires, lowering their number and
increasing their utilization. NoCs can also be energy efficient
and reliable, and are scalable compared to buses. Second,
NoCs also decouple computation from communication,
which is essential in managing the design of billion-transistor
chips. NoCs achieve this decoupling because they are
traditionally designed using protocol stacks, which provide
well-defined interfaces separating communication service
usage from service implementation.
Using networks for on-chip communication when designing
systems on chip (SoC), however, raises a number of new
issues that must be taken into account. This is because, in
contrast to existing on-chip interconnects (e.g., buses,
switches, or point-to-point wires), where the communicating
modules are directly connected, in a NoC the modules
communicate remotely via network nodes. As a result,
interconnect arbitration changes from centralized to
distributed, and issues like out-of-order transactions, higher
latencies, and end-to-end flow control must be handled either by
the intellectual property block (IP) or by the network.
Most of these topics have already been the subject of
research in the field of local and wide area networks (computer
networks) and of interconnect networks for parallel machines.
Both are very much related to on-chip networks, and many of
the results in those fields are also applicable on chip. However,
the premises of NoCs are different from those of off-chip
networks, and, therefore, most of the network design choices
must be reevaluated. On-chip networks have different
properties (e.g., tighter link synchronization) and constraints
(e.g., higher memory cost), leading to different design choices,
which ultimately affect the network services.
NoCs differ from off-chip networks mainly in their
constraints and synchronization. Typically, resource constraints
are tighter on chip than off chip. Storage (i.e., memory) and
computation resources are relatively more expensive,
whereas the number of point-to-point links is larger on chip
than off chip. Storage is expensive because general-purpose
on-chip memory, such as RAMs, occupies a large area.
Having the memory distributed in the network components
in relatively small sizes is even worse, as the overhead area
in the memory then becomes dominant.
For on-chip networks, computation too comes at a relatively
high cost compared to off-chip networks. An off-chip
network interface usually contains a dedicated processor to
implement the protocol stack up to the network layer or even
higher, to relieve the host processor of the communication
processing. Including a dedicated processor in a network
interface is not feasible on chip, as the size of the network
interface would become comparable to or larger than the IP
to be connected to the network. Moreover, running the protocol
stack on the IP itself may also not be feasible, because often
these IPs have only one dedicated function and do not have
the capabilities to run a network protocol stack.
The number of wires and pins to connect network components
is an order of magnitude larger on chip than off chip.
If they are not used massively for purposes other than NoC
communication, they allow wide point-to-point interconnects
(e.g., 300-bit links). This is not possible off chip,
where links are relatively narrower: 8-16 bits.
On-chip wires are also relatively shorter than off-chip wires,
allowing a much tighter synchronization than off chip. This
allows a reduction in the buffer space in the routers, because
the communication can be done at a smaller granularity. In
current semiconductor technologies, wires are also fast
and reliable, which allows simpler link-layer protocols (e.g.,
no need for error correction or retransmission). This also
compensates for the lack of memory and computational
resources.
`Reliable communication: A consequence of the tight
`on-chip resource constraints is that the network components
`(i.e., routers and network interfaces) must be fairly simple to
`minimize computation and memory requirements. Luckily,
`on-chip wires provide a reliable communication medium,
`which can help to avoid the considerable overhead incurred
`by off-chip networks for providing reliable communication.
`Data integrity can be provided at low cost at the data link
`layer. However, data loss also depends on the network
`architecture, as in most computer networks data is simply
`dropped if congestion occurs in the network.
Deadlock: Computer network topologies generally have
an irregular (possibly dynamic) structure, which can introduce
buffer cycles. Deadlock can be avoided, for example, by
introducing constraints in either the topology or the routing.
Fat-tree topologies have already been considered
for NoCs, where deadlock is avoided by bouncing back
packets in the network in case of buffer overflow. Tile-based
approaches to system design use mesh or torus network
topologies, where deadlock can be avoided using, for
example, a turn-model routing algorithm. Deadlock is
mainly caused by cycles in the buffers. To avoid deadlock,
routing must be cycle-free, because of its lower cost in
achieving reliable communication. A second cause of
deadlock is atomic chains of transactions. The reason is that
while a module is locked, the queues storing transactions
may get filled with transactions outside the atomic transaction
chain, blocking the transactions in the chain from reaching
the locked module. If atomic transaction chains
must be implemented (to be compatible with processors
allowing this, such as MIPS), the network nodes should be
able to filter the transactions in the atomic chain.
Data ordering: In a network, data sent from a source to a
destination may arrive out of order due to reordering in
network nodes, following different routes, or retransmission
after dropping. For off-chip networks out-of-order data
delivery is typical. However, for NoCs, where no data is
dropped, data can be forced to follow the same path between
a source and a destination (deterministic routing) with no
reordering. This in-order data transportation requires less
buffer space, and reordering modules are no longer necessary.
Network flow control and buffering strategy: Network
flow control and buffering strategy have a direct impact on
the memory utilization in the network. Wormhole routing
requires only a flit buffer (per queue) in the router, whereas
store-and-forward and virtual-cut-through routing require at
least enough buffer space to accommodate a packet. Consequently,
on chip, wormhole routing may be preferred over
virtual-cut-through or store-and-forward routing. Similarly,
input queuing may be a lower memory-cost alternative to
virtual-output-queuing or output-queuing buffering strategies,
because it has fewer queues. Dedicated (lower cost)
FIFO memory structures also enable on-chip usage of
virtual-cut-through routing or virtual output queuing for
better performance. However, using virtual-cut-through
routing and virtual output queuing at the same time is still
too costly.
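
By way of a non-limiting illustration, the following C++ sketch compares the per-router buffer memory implied by the two schemes discussed above (a flit buffer per input queue for wormhole routing versus a full packet buffer for store-and-forward routing). All numeric values (port count, flit width, packet length) are assumptions chosen only for the example, not figures taken from this description.

```cpp
#include <cstdio>

// Illustrative only: compare per-router buffer cost for wormhole routing
// (one flit buffer per input queue) versus store-and-forward routing
// (one full packet buffer per input queue). All sizes are hypothetical.
int main() {
    const int ports_per_router  = 5;   // e.g. 4 mesh neighbours + 1 local port (assumed)
    const int flit_size_bytes   = 4;   // assumed flit width
    const int packet_size_flits = 16;  // assumed maximum packet length

    const int wormhole_bytes = ports_per_router * flit_size_bytes;
    const int saf_bytes      = ports_per_router * packet_size_flits * flit_size_bytes;

    std::printf("wormhole buffering         : %d bytes per router\n", wormhole_bytes);
    std::printf("store-and-forward buffering: %d bytes per router\n", saf_bytes);
    return 0;
}
```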
Time-related guarantees: Off-chip networks typically use
packet switching and offer best-effort services. Contention
can occur at each network node, making latency guarantees
very hard to offer. Throughput guarantees can still be offered
using schemes such as rate-based switching or deadline-based
packet switching, but with high buffering costs. An
alternative way to provide such time-related guarantees is to use
time-division multiple access (TDMA) circuits, where every
circuit is dedicated to a network connection. Circuits provide
guarantees at a relatively low memory and computation
cost. Network resource utilization is increased when the
network architecture allows any left-over guaranteed
bandwidth to be used by best-effort communication.
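
As a non-limiting sketch of the TDMA idea mentioned above, the C++ fragment below models a revolving slot table on a link: slots reserved for a connection give it a share of the link bandwidth, and unreserved (left-over) slots can carry best-effort traffic. The table size, identifiers, and method names are illustrative assumptions.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <optional>

// Minimal TDMA slot-table sketch: a connection owning k of N slots gets
// roughly k/N of the link bandwidth as a throughput guarantee.
constexpr std::size_t kSlotsPerFrame = 8;  // assumed frame length

struct SlotTable {
    std::array<std::optional<std::uint16_t>, kSlotsPerFrame> owner{};  // connection id per slot

    // Try to reserve `needed` free slots for `connection`; return false if
    // the left-over capacity is insufficient.
    bool reserve(std::uint16_t connection, std::size_t needed) {
        std::size_t free = 0;
        for (const auto& o : owner) if (!o) ++free;
        if (free < needed) return false;
        for (auto& o : owner) {
            if (needed == 0) break;
            if (!o) { o = connection; --needed; }
        }
        return true;
    }

    // Unreserved slots may be used by best-effort communication.
    bool slot_is_best_effort(std::size_t slot) const { return !owner[slot % kSlotsPerFrame]; }
};
```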
Introducing networks as on-chip interconnects radically
changes the communication when compared to direct
interconnects, such as buses or switches. This is because of the
multi-hop nature of a network, where communication modules
are not directly connected, but separated by one or more
network nodes. This is in contrast with the prevalent existing
interconnects (i.e., buses), where modules are directly
connected. The implications of this change reside in the
arbitration (which must change from centralized to distributed),
and in the communication properties (e.g., ordering, or flow
control).
An outline of the differences between NoCs and buses is
given below. We refer mainly to buses as direct interconnects,
because currently they are the most-used on-chip
interconnect. Most of the bus characteristics also hold for
other direct interconnects (e.g., switches). Multilevel buses
are a hybrid between buses and NoCs. For our purposes,
depending on the functionality of the bridges, multilevel
buses behave either like simple buses or like NoCs. The
programming model of a bus typically consists of load and
store operations, which are implemented as a sequence of
primitive bus transactions. Bus interfaces typically have
dedicated groups of wires for command, address, write data,
and read data. A bus is a resource shared by multiple IPs.
Therefore, before using it, IPs must go through an arbitration
phase, where they request access to the bus and block until
the bus is granted to them.
A bus transaction involves a request and possibly a
response. Modules issuing requests are called masters, and
those serving requests are called slaves. If there is a single
arbitration for a request-response pair, the bus is called
non-split. In this case, the bus remains allocated to the
master of the transaction until the response is delivered, even
when this takes a long time. Alternatively, in a split bus, the
bus is released after the request to allow transactions from
different masters to be initiated. However, a new arbitration
must be performed for the response, so that the slave can
access the bus.
For both split and non-split buses, both communicating
parties have direct and immediate access to the status of the
transaction. In contrast, network transactions are one-way
transfers from an output buffer at the source to an input
buffer at the destination that cause some action at the
destination, the occurrence of which is not visible at the
source. The effects of a network transaction are observable
only through additional transactions. A request-response
type of operation is still possible, but requires at least two
distinct network transactions. Thus, a bus-like transaction in
a NoC will essentially be a split transaction.
Transaction Ordering: Traditionally, on a bus all transactions
are ordered (cf. Peripheral VCI, AMBA, or CoreConnect
PLB and OPB). This is possible at a low cost, because
the interconnect, being a direct link between the communicating
parties, does not reorder data. However, on a split bus,
a total ordering of transactions on a single master may still
cause performance penalties when slaves respond at different
speeds. To solve this problem, recent extensions to bus
protocols allow transactions to be performed on connections.
Ordering of transactions within a connection is still
preserved, but between connections there are no ordering
constraints (e.g., OCP, or Basic VCI). A few of the bus
protocols allow out-of-order responses per connection in
their advanced modes (e.g., Advanced VCI), but both
requests and responses arrive at the destination in the same
order as they were sent.
In a NoC, ordering becomes weaker. Global ordering can
only be provided at a very high cost, due to the conflict
between the distributed nature of the networks and the
requirement of a centralized arbitration necessary for global
ordering. Even local ordering, between a source-destination
pair, may be costly. Data may arrive out of order if it is
transported over multiple routes. In such cases, to still
achieve in-order delivery, data must be labeled with
sequence numbers and reordered at the destination before
being delivered. The communication network comprises a
plurality of partly connected nodes. Messages from a module
are redirected by the nodes to one or more other nodes.
To that end the message comprises first information indicative
of the location of the addressed module(s) within the
network. The message may further include second information
indicative of a particular location within the module,
such as a memory or a register address. The second information
may invoke a particular response of the addressed
module.
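
By way of a non-limiting illustration of the sequence-numbering just described, the C++ sketch below shows a destination-side reorder buffer that delivers packets strictly in sequence order, holding out-of-order arrivals until the gap is filled. The field and class names are assumptions made for the example only.

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Packet carrying a source-assigned sequence number (illustrative format).
struct Packet {
    std::uint32_t seq;
    std::string   payload;
};

class ReorderBuffer {
public:
    // Returns the payloads that become deliverable, in order, after `p` arrives.
    std::vector<std::string> receive(const Packet& p) {
        pending_[p.seq] = p.payload;
        std::vector<std::string> out;
        while (true) {
            auto it = pending_.find(next_);
            if (it == pending_.end()) break;   // gap: wait for the missing packet
            out.push_back(it->second);
            pending_.erase(it);
            ++next_;
        }
        return out;
    }

private:
    std::uint32_t next_ = 0;                        // next sequence number to deliver
    std::map<std::uint32_t, std::string> pending_;  // held out-of-order packets
};
```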
Atomic Chains of Transactions: An atomic chain of
transactions is a sequence of transactions initiated by a
single master that is executed on a single slave exclusively.
That is, other masters are denied access to that slave once
the first transaction in the chain has claimed it. This mechanism
is widely used to implement synchronization mechanisms
between master modules (e.g., semaphores). On a bus,
atomic operations can easily be implemented, as the central
arbiter will either (a) lock the bus for exclusive use by the
master requesting the atomic chain, or (b) know not to grant
access to a locked slave. In the former case, the time during
which resources are locked is shorter, because once a master has
been granted access to the bus, it can quickly perform all the
transactions in the chain (no arbitration delay is required for
the subsequent transactions in the chain). Consequently, the
locked slave and the bus can be opened up again in a short
time. This approach is used in AMBA and CoreConnect. In
the latter case, the bus is not locked and can still be used by
other modules, however at the price of a longer locking time
of the slave. This approach is used in VCI and OCP.
In a NoC, where the arbitration is distributed, masters do
not know that a slave is locked. Therefore, transactions to a
locked slave may still be initiated, even though the locked
slave cannot accept them. Consequently, to prevent deadlock,
the transactions in the atomic chain must be able to
bypass these other transactions in order to be served.
Moreover, the time a module is locked is much longer in the
case of NoCs, because of the higher latency per transaction.
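
As a non-limiting sketch of one way the filtering mentioned earlier could look, the C++ fragment below models a slave-side filter that serves requests from the master holding the lock and parks requests from other masters so that the chain can bypass them. This is purely an illustrative assumption; the names, fields, and unlocking rule are not taken from this description.

```cpp
#include <cstdint>
#include <deque>
#include <optional>
#include <utility>

// A request as seen at the slave's network interface (illustrative fields).
struct Request {
    std::uint16_t master_id;
    bool          ends_chain;  // assumed marker for the last transaction of the chain
};

class LockFilter {
public:
    void lock(std::uint16_t master) { owner_ = master; }

    // Returns true if the request may be forwarded to the slave now.
    bool admit(const Request& r) {
        if (!owner_ || r.master_id == *owner_) {
            if (owner_ && r.ends_chain) owner_.reset();  // chain finished, unlock
            return true;
        }
        parked_.push_back(r);  // bypassed for now, to be served after unlock
        return false;
    }

    // Requests parked during the chain, to be replayed once the slave is free.
    std::deque<Request> drain_parked() { return std::exchange(parked_, {}); }

private:
    std::optional<std::uint16_t> owner_;  // master currently holding the lock
    std::deque<Request> parked_;
};
```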
Media Arbitration: An important difference between
buses and NoCs is in the medium arbitration scheme. In a
bus, master modules request access to the interconnect, and
the arbiter grants the access for the whole interconnect at
once. Arbitration is centralized, as there is only one arbiter
component, and global, as all the requests as well as the state
of the interconnect are visible to the arbiter. Moreover, when
a grant is given, the complete path from the source to the
destination is exclusively reserved. In a non-split bus,
arbitration takes place once, when a transaction is initiated. As a
result, the bus is granted for both request and response. In a
split bus, requests and responses are arbitrated separately.
In a NoC, arbitration is also necessary, as it is a shared
interconnect. However, in contrast to buses, the arbitration is
distributed, because it is performed in every router, and is
based only on local information. Arbitration of the
communication resources (links, buffers) is performed incrementally
as the request or response advances.
Destination Name and Routing: For a bus, the command,
address, and data are broadcast on the interconnect. They
arrive at every destination, of which one activates, based on
the broadcast address, and executes the requested command.
This is possible because all modules are directly
connected to the same bus. In a NoC, it is not feasible to
broadcast information to all destinations, because it must be
copied to all routers and network interfaces. This floods the
network with data. The address is better decoded at the
source to find a route to the destination module. A transaction
address will therefore have two parts: (a) a destination
identifier, and (b) an internal address at the destination.
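
By way of a non-limiting illustration of this two-part addressing, the C++ sketch below splits a transaction address into a destination identifier and an internal address, and lets a source-side table map the identifier to a route (here assumed to be a list of router output ports). The table layout and field widths are assumptions for the example only.

```cpp
#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

// (a) destination identifier and (b) internal address at the destination.
struct TransactionAddress {
    std::uint16_t destination_id;
    std::uint32_t internal_address;
};

using Route = std::vector<std::uint8_t>;  // assumed: output port to take at each hop

class SourceRouter {
public:
    void set_route(std::uint16_t destination_id, Route r) {
        table_[destination_id] = std::move(r);
    }

    // Decode at the source: only the destination identifier selects the route;
    // the internal address travels with the payload to the destination.
    const Route& route_for(const TransactionAddress& a) const {
        return table_.at(a.destination_id);
    }

private:
    std::unordered_map<std::uint16_t, Route> table_;
};
```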
Latency: Transaction latency is caused by two factors: (a)
the access time to the bus, which is the time until the bus is
granted, and (b) the latency introduced by the interconnect
to transfer the data. For a bus, where the arbitration is
centralized, the access time is proportional to the number of
masters connected to the bus. The transfer latency itself
is typically constant and relatively small, because the modules
are linked directly. However, the speed of transfer is limited
by the bus speed, which is relatively low.
In a NoC, arbitration is performed at each router for the
following link. The access time per router is small. Both the
end-to-end access time and the transport time increase
proportionally to the number of hops between master and slave.
However, network links are unidirectional and point-to-point,
and hence can run at higher frequencies than buses,
thus lowering the latency. From a latency perspective, using
a bus or a network is a trade-off between the number of
modules connected to the interconnect (which affects the access
time), the speed of the interconnect, and the network
topology.
Data Format: In most modern bus interfaces the data
format is defined by separate wire groups for the transaction
type, address, write data, read data, and return acknowledgments/errors
(e.g., VCI, OCP, AMBA, or CoreConnect).
This is used to pipeline transactions. For example, concurrently
with sending the address of a read transaction, the data
of a previous write transaction can be sent, and the data from
an even earlier read transaction can be received. Moreover,
having dedicated wire groups simplifies the transaction
decoding; there is no need for a mechanism to select
between different kinds of data sent over a common set of
wires. Inside a network, there is typically no distinction
between different kinds of data. Data is treated uniformly
and passed from one router to another. This is done to
minimize the control overhead and buffering in the routers. If
separate wires were used for each of the above-mentioned
groups, separate routing, scheduling, and queuing
would be needed, increasing the cost of the routers.
In addition, in a network, at each layer in the protocol
stack, control information must be supplied together with
the data (e.g., packet type, network address, or packet size).
This control information is organized as an envelope around
the data. That is, first a header is sent, followed by the actual
data (payload), followed possibly by a trailer. Multiple such
envelopes may be provided for the same data, each carrying
the corresponding control information for each layer in the
network protocol stack.
Buffering and Flow Control: Buffering the data of a master
(output buffering) is used both for buses and for NoCs, to
decouple computation from communication. However, for
NoCs output buffering is also needed to marshal data, which
consists of (a) (optionally) splitting the outgoing data into
smaller packets which are transported by the network, and
(b) adding control information for the network around the
data (packet header). To avoid output buffer overflow, the
master must not initiate transactions that generate more data
than the currently available space. Similarly to output
buffering, input buffering is also used to decouple computation
from communication. In a NoC, input buffering is also
required to unmarshal data.
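
As a non-limiting illustration of the marshaling step described above, the C++ sketch below splits an outgoing message into packets no larger than an assumed maximum payload and prepends a small header with control information for the network. The header layout and size limit are assumptions made for the example.

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Illustrative packet header: control information added around the data.
struct PacketHeader {
    std::uint16_t destination_id;
    std::uint16_t payload_bytes;
    bool          last;  // marks the final packet of the message
};

struct Packet {
    PacketHeader header;
    std::vector<std::uint8_t> payload;
};

// Marshal: (a) split the outgoing data into packets, (b) wrap each in a header.
std::vector<Packet> marshal(std::uint16_t destination_id,
                            const std::vector<std::uint8_t>& data,
                            std::size_t max_payload = 32 /* assumed limit */) {
    std::vector<Packet> packets;
    for (std::size_t off = 0; off < data.size(); off += max_payload) {
        const std::size_t n = std::min(max_payload, data.size() - off);
        Packet p;
        p.header = {destination_id, static_cast<std::uint16_t>(n), off + n >= data.size()};
        p.payload.assign(data.begin() + off, data.begin() + off + n);
        packets.push_back(std::move(p));
    }
    return packets;
}
```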
In addition, flow control for input buffers differs between
buses and NoCs. For buses, the source and destination are
directly linked, and the destination can therefore signal directly
to a source that it cannot accept data. This information can even
be available to the arbiter, such that the bus is not granted to
a transaction trying to write to a full buffer.
In a NoC, however, the destination of a transaction cannot
signal directly to a source that its input buffer is full.
Consequently, transactions to a destination can be started,
possibly from multiple sources, even after the destination's input
buffer has filled up. If an input buffer is full, additional
incoming transactions are not accepted and are stored in the
network. However, this approach can easily lead to network
congestion, as the data could eventually be stored all the way
back to the sources, blocking the links in between.
To avoid input buffer overflow, connections can be used,
together with end-to-end flow control. At connection set-up
between a master and one or more slaves, buffer space is
allocated at the network interfaces of the slaves, and the
network interface of the master is assigned credits reflecting
the amount of buffer space at the slaves. The master can only
send data when it has enough credits for the destination
slave(s). The slaves grant credits to the master when they
consume data.
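
By way of a non-limiting illustration, the C++ sketch below models the credit counter kept at the master's network interface for this end-to-end flow control: credits start at the buffer space reserved at the slave, sending consumes credits, and the slave returns credits as it consumes data. The class and method names are assumptions for the example only.

```cpp
#include <cstdint>

class CreditCounter {
public:
    // Credits are initialized at connection set-up to the buffer space
    // allocated at the slave's network interface.
    explicit CreditCounter(std::uint32_t slave_buffer_bytes) : credits_(slave_buffer_bytes) {}

    // Master side: only send if enough credits (i.e. slave buffer space) remain.
    bool try_send(std::uint32_t bytes) {
        if (bytes > credits_) return false;  // would overflow the slave's input buffer
        credits_ -= bytes;
        return true;
    }

    // Called when the slave signals that it has consumed `bytes` of data.
    void credits_returned(std::uint32_t bytes) { credits_ += bytes; }

    std::uint32_t available() const { return credits_; }

private:
    std::uint32_t credits_;  // bytes of buffer space still free at the slave
};
```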
`
`SUMMARY OF THE INVENTION
`
It is an object of the invention to provide an integrated
circuit and a method for exchanging messages in an integrated
circuit with a more effective usage of the properties
of the network.
This object is achieved by an integrated circuit according
to claim 1 and a method for exchanging messages according
to claim 7.
Therefore, an integrated circuit comprising a plurality of
processing modules M; I; S; T and a network N; RN
arranged for providing at least one connection between a
first and at least one second module is provided. Said
connection comprises a set of communication channels,
each having a set of connection properties. Said connection
supports transactions comprising outgoing messages from
the first module to the second module and return messages
from the second module to the first module. The connection
properties of the different communication channels of said
connection can be adjusted independently.
Therefore, the utilization of the resources of a network on
chip is more efficient, since the connection between modules
can be efficiently adapted to their actual requirements, such
that the connection is not over-dimensioned and unused
network resources can be assigned to other connections.
The invention is based on the idea of allowing the connection
channels of a connection to have different connection
properties.
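
By way of a non-limiting illustration only (not a construction of the claims), the C++ sketch below models a connection as a set of channels whose properties are set independently, for example a guaranteed-throughput outgoing channel dimensioned differently from a best-effort return channel. The particular property fields and values are assumptions chosen for the example.

```cpp
#include <cstdint>

// Per-channel connection properties (illustrative fields).
struct ChannelProperties {
    bool          guaranteed_throughput;  // false = best effort
    std::uint32_t throughput_kbps;
    std::uint32_t max_latency_us;
    std::uint32_t buffer_bytes;           // end-to-end flow-control budget
};

// A connection: independently configurable channels for the two directions.
struct Connection {
    ChannelProperties request_channel;    // first module -> second module
    ChannelProperties response_channel;   // second module -> first module
};

// Example: the outgoing channel is dimensioned for heavy traffic while the
// return channel is kept small, so neither channel is over-dimensioned.
const Connection example{
    /*request_channel*/  {true,  2000, 50, 256},
    /*response_channel*/ {false,    0,  0,  64},
};
```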
According to an aspect of the invention, said integrated
circuit comprises at least one communication managing
means CM for managing the communication between different
modules, and at least one resource managing means
RM for managing the resources of the network N.
According to a further aspect of the invention, said first
module M; I issues a request for a connection to at least one
of said second modules to said communication managing
means CM. Said communication managing means CM
forwards the request for a connection with communication
channels each having a specific set of connection properties
to said resource managing means RM. Said resource
managing means RM determines whether the requested
connection, based on said communication channels with said
specific connection properties, is available, and reports the
availability of the requested connection to said communication
managing means CM. A connection between the first
and second module is established based on the available
properties or the available network resources required to
implement the properties of said communication channels of said
connection.
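
As a non-limiting sketch only (not the claimed implementation), the C++ fragment below illustrates the flow just described: a communication manager forwards per-channel requirements to a resource manager, which checks and reserves resources (here assumed to be TDMA slots and buffer space) and reports back whether the connection can be opened. All names and resource types are assumptions for the example.

```cpp
#include <vector>

// Assumed per-channel resource requirements derived from the requested
// connection properties (e.g. throughput -> slot reservation, flow control
// -> buffering).
struct ChannelRequest {
    unsigned slots_needed;
    unsigned buffer_bytes;
};

class ResourceManager {  // RM (illustrative)
public:
    ResourceManager(unsigned slots, unsigned buffers) : free_slots_(slots), free_buffer_(buffers) {}

    // Report whether all requested channels can be implemented, and reserve
    // the corresponding resources if so.
    bool request(const std::vector<ChannelRequest>& channels) {
        unsigned s = 0, b = 0;
        for (const auto& c : channels) { s += c.slots_needed; b += c.buffer_bytes; }
        if (s > free_slots_ || b > free_buffer_) return false;
        free_slots_ -= s;
        free_buffer_ -= b;
        return true;
    }

private:
    unsigned free_slots_;
    unsigned free_buffer_;
};

class CommunicationManager {  // CM (illustrative)
public:
    explicit CommunicationManager(ResourceManager& rm) : rm_(rm) {}

    // Returns true if the connection is established, false if it is rejected
    // because the available resources cannot satisfy the requested properties.
    bool open_connection(const std::vector<ChannelRequest>& channels) { return rm_.request(channels); }

private:
    ResourceManager& rm_;
};
```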
According to a still further aspect of the invention, said
communication managing means CM rejects establishing a
connection based on the available connection properties
when the available connection properties are not sufficient to
perform the requested connection between said first and
second module. The connection properties require that some
network resources are implemented, e.g. a throughput
guarantee requires slot reservation and flow control requires
buffering. Therefore, a connection requiring some properties
is opened or not, depending on the availability of these
resources. Accordingly, the communication manager CM has
some control over the minimum requirements for a connection.
`
`8
`According to a further aspect of the invention, said
`communication managing means CM issues a request to
`reset the connection between said first and second module,
`when said modules have successfully performed their trans-
`5 actions, so that the network resources can be used again for
`other connections.
`According to still a further aspect of the invention, said
`integrated circuit comprises at least one network interface
`means Nl, associated to each of said modules, for managing
`10 the communication between said modules and said network
`N. Hence the modules can be designed independent from the
`network and can therefore be re-used.
The invention also relates to a method for exchanging
messages in an integrated circuit comprising a plurality of
modules as described above. The messages between the
modules are exchanged over connections via a network.
Said connections comprise a set of communication channels,
each having a set of connection properties. Said connection
through the network supports transactions comprising
outgoing messages from the first module to the second
module and return messages from the second module to the
first module. The network manages the outgoing messages
in a way different from the return messages, i.e. the connection
channels can be configured independently.
Further aspects of the invention are described in the
dependent claims.
These and other aspects of the invention are apparent
from and will be elucidated with reference to the
embodiment(s) described hereinafter.
`
`25
`
`30
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
`FIG. 1 shows a System on chip according to a first
`embodiment,
`FIG. 2 shows a System on chip according to a second
`embodiment, and
`FIG. 3 shows a System on chip according to a third
`embodiment.
`
`DESCRIPTION OF THE PREFERRED
`EMBODIMENTS
`
The following embodiments relate to systems on chip, i.e.
a plurality of modules on the same chip that communicate with
each other via some kind of interconnect. The interconnect
is embodied as a network on chip (NoC). The network on chip
may include wires, a bus, time-division multiplexing, switches,
and/or routers within a network. At the transport layer of
said network, the communication between the modules is
performed over connections. A connection is considered as
a set of channels, each having a set of connection properties,
between a fi