Computer Systems

G. Bell, S. Fuller, and D. Siewiorek, Editors

Ethernet: Distributed Packet Switching for Local Computer Networks

Robert M. Metcalfe and David R. Boggs
Xerox Palo Alto Research Center
Ethernet is a branching broadcast communication system for carrying digital data packets among locally distributed computing stations. The packet transport mechanism provided by Ethernet has been used to build systems which can be viewed as either local computer networks or loosely coupled multiprocessors. An Ethernet's shared communication facility, its Ether, is a passive broadcast medium with no central control. Coordination of access to the Ether for packet broadcasts is distributed among the contending transmitting stations using controlled statistical arbitration. Switching of packets to their destinations on the Ether is distributed among the receiving stations using packet address recognition. Design principles and implementation are described, based on experience with an operating Ethernet of 100 nodes along a kilometer of coaxial cable. A model for estimating performance under heavy loads and a packet protocol for error controlled communication are included for completeness.

Key Words and Phrases: computer networks, packet switching, multiprocessing, distributed control, distributed computing, broadcast communication, statistical arbitration

CR Categories: 3.81, 4.32, 6.35
Copyright © 1976, Association for Computing Machinery, Inc. General permission to republish, but not for profit, all or part of this material is granted provided that ACM's copyright notice is given and that reference is made to the publication, to its date of issue, and to the fact that reprinting privileges were granted by permission of the Association for Computing Machinery.

Authors' present addresses: R.M. Metcalfe, Transaction Technology, Inc., 10880 Wilshire Boulevard, Los Angeles, CA 94304; D. Boggs, Xerox Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto, CA 94304.
1. Background

One can characterize distributed computing as a spectrum of activities varying in their degree of decentralization, with one extreme being remote computer networking and the other extreme being multiprocessing. Remote computer networking is the loose interconnection of previously isolated, widely separated, and rather large computing systems. Multiprocessing is the construction of previously monolithic and serial computing systems from increasingly numerous and smaller pieces computing in parallel. Near the middle of this spectrum is local networking, the interconnection of computers to gain the resource sharing of computer networking and the parallelism of multiprocessing.

The separation between computers and the associated bit rate of their communication can be used to divide the distributed computing spectrum into broad activities. The product of separation and bit rate, now about 1 gigabit-meter per second (1 Gbmps), is an indication of the limit of current communication technology and can be expected to increase with time:
Activity          Separation   Bit rate
Remote networks   > 10 km      < .1 Mbps
Local networks    10-.1 km     .1-10 Mbps
Multiprocessors   < .1 km      > 10 Mbps
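As a quick illustrative check (ours, not the paper's) that the separation-bit-rate products at the table's boundaries do sit near 1 Gbmps:

    # Illustrative check: separation x bit rate at the table's boundaries.
    # Both boundary products come out to 1 gigabit-meter per second.
    for name, meters, bits_per_sec in [
        ("remote/local boundary", 10_000, 0.1e6),       # 10 km at .1 Mbps
        ("local/multiprocessor boundary", 100, 10e6),   # .1 km at 10 Mbps
    ]:
        print(name, meters * bits_per_sec / 1e9, "Gbmps")  # prints 1.0 twice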
1.1 Remote Computer Networking
Computer networking evolved from telecommunications terminal-computer communication, where the object was to connect remote terminals to a central computing facility. As the need for computer-computer interconnection grew, computers themselves were used to provide communication [2, 4, 29]. Communication using computers as packet switches [15-21, 26] and communications among computers for resource sharing [10, 32] were both advanced by the development of the Arpa Computer Network.

The Aloha Network at the University of Hawaii was originally developed to apply packet radio techniques for communication between a central computer and its terminals scattered among the Hawaiian Islands [1, 2]. Many of the terminals are now minicomputers communicating among themselves using the Aloha Network's Menehune as a packet switch. The Menehune and an Arpanet Imp are now connected, providing terminals on the Aloha Network access to computing resources on the U.S. mainland.

Just as computer networks have grown across continents and oceans to interconnect major computing facilities around the world, they are now growing down corridors and between buildings to interconnect minicomputers in offices and laboratories [3, 12, 13, 14, 35].
1.2 Multiprocessing

Multiprocessing first took the form of connecting an I/O controller to a large central computer; IBM's ASP is a
classic example [29]. Next, multiple central processors were connected to a common memory to provide more power for compute-bound applications [33]. For certain of these applications, more exotic multiprocessor architectures such as Illiac IV were introduced [5].

More recently minicomputers have been connected in multiprocessor configurations for economy, reliability, and increased system modularity [24, 36]. The trend has been toward decentralization for reliability; loosely coupled multiprocessor systems depend less on shared central memory and more on thin wires for interprocess communication with increased component isolation [18, 26]. With the continued thinning of interprocessor communication for reliability and the development of distributable applications, multiprocessing is gradually approaching a local form of distributed computing.
1.3 Local Computer Networking
Ethernet shares many objectives with other local networks such as Mitre's Mitrix, Bell Telephone Laboratory's Spider, and U.C. Irvine's Distributed Computing System (DCS) [12, 13, 14, 35]. Prototypes of all four local networking schemes operate at bit rates between one and three megabits per second. Mitrix and Spider have a central minicomputer for switching and bandwidth allocation, while DCS and Ethernet use distributed control. Spider and DCS use a ring communication path, Mitrix uses off-the-shelf CATV technology to implement two one-way busses, and our experimental Ethernet uses a branching two-way passive bus. Differences among these systems are due to differences among their intended applications, differences among the cost constraints under which trade-offs were made, and differences of opinion among researchers.

Before going into a detailed description of Ethernet, we offer the following overview (see Figure 1).
2. System Summary

Ethernet is a system for local communication among computing stations. Our experimental Ethernet uses tapped coaxial cables to carry variable length digital data packets among, for example, personal minicomputers, printing facilities, large file storage devices, magnetic tape backup stations, larger central computers, and longer-haul communication equipment.

The shared communication facility, a branching Ether, is passive. A station's Ethernet interface connects bit-serially through an interface cable to a transceiver which in turn taps into the passing Ether. A packet is broadcast onto the Ether, is heard by all stations, and is copied from the Ether by destinations which select it according to the packet's leading address bits. This is broadcast packet switching and should be distinguished from store-and-forward packet switching, in which routing is performed by intermediate processing elements. To handle the demands of growth, an Ethernet can be extended using packet repeaters for signal regeneration, packet filters for traffic localization, and packet gateways for internetwork address extension.

Control is completely distributed among stations, with packet transmissions coordinated through statistical arbitration. Transmissions initiated by a station defer to any which may already be in progress. Once started, if interference with other packets is detected, a transmission is aborted and rescheduled by its source station. After a certain period of interference-free transmission, a packet is heard by all stations and will run to completion without interference. Ethernet controllers in colliding stations each generate random retransmission intervals to avoid repeated collisions. The mean of a packet's retransmission intervals is adjusted as a function of collision history to keep Ether utilization near the optimum with changing network load.

Even when transmitted without source-detected interference, a packet may still not reach its destination without error; thus, packets are delivered only with high probability. Stations requiring a residual error rate lower than that provided by the bare Ethernet packet transport mechanism must follow mutually agreed upon packet protocols.
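The arbitration just summarized (defer while the Ether carries a packet, abort on detected interference, retransmit after a random interval whose mean doubles with collisions) can be sketched in a few lines of Python. This is an illustration of the scheme, not the paper's controller; the Ether interface, the slot value, and the cap on doublings are all assumptions made here:

    import random, time

    SLOT = 32e-6  # assumed slot (one Ether round trip); illustrative value

    class Ether:
        """Stub standing in for the shared cable; a hypothetical interface."""
        def carrier_sense(self):                        # is a packet passing?
            return False
        def transmit_with_collision_detect(self, pkt):  # True iff no collision
            return random.random() > 0.1

    def send(packet, ether, max_doublings=8):
        attempts = 0
        while True:
            while ether.carrier_sense():   # deference: wait out passing packet
                time.sleep(SLOT)
            if ether.transmit_with_collision_detect(packet):
                return attempts            # Ether acquired; transmission done
            attempts += 1                  # collision: abort and reschedule
            mean_slots = 2 ** min(attempts, max_doublings)
            time.sleep(random.randint(0, 2 * mean_slots) * SLOT)

    send(b"example packet", Ether())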
3. Design Principles

Our object is to design a communication system which can grow smoothly to accommodate several buildings full of personal computers and the facilities needed for their support.

Like the computing stations to be connected, the communication system must be inexpensive. We choose to distribute control of the communications facility among the communicating computers to eliminate the reliability problems of an active central controller, to avoid creating a bottleneck in a system rich in parallelism, and to reduce the fixed costs which make small systems uneconomical.

Ethernet design started with the basic idea of packet collision and retransmission developed in the Aloha Network [1]. We expected that, like the Aloha Network, Ethernets would carry bursty traffic so that conventional synchronous time-division multiplexing (STDM) would be inefficient [1, 2, 21, 26]. We saw promise in the Aloha approach to distributed control of radio channel multiplexing and hoped that it could be applied effectively with media suited to local computer communication. With several innovations of our own, the promise is realized.
Ethernet is named for the historical luminiferous ether through which electromagnetic radiations were once alleged to propagate. Like an Aloha radio transmitter, an Ethernet transmitter broadcasts completely-addressed transmitter-synchronous bit sequences called packets onto the Ether and hopes that they are heard by the intended receivers. The Ether is a logically passive medium for the propagation of digital signals and can be constructed using any number of media including coaxial cables, twisted pairs, and optical fibers.
Fig. 1. A two-segment Ethernet. [Figure: two Ether segments, terminated at their ends, joined by a packet repeater with a transceiver on each segment; each station taps into a segment through a transceiver connected by interface cable to the station's interface and controller.]
3.1 Topology
We cannot afford the redundant connections and dynamic routing of store-and-forward packet switching to assure reliable communication, so we choose to achieve reliability through simplicity. We choose to make the shared communication facility passive so that the failure of an active element will tend to affect the communications of only a single station. The layout and changing needs of office and laboratory buildings lead us to pick a network topology with the potential for convenient incremental extension and reconfiguration with minimal service disruption.

The topology of the Ethernet is that of an unrooted tree. It is a tree so that the Ether can branch at the entrance to a building's corridor, yet avoid multipath interference. There must be only one path through the Ether between any source and destination; if more than one path were to exist, a transmission would interfere with itself, repeatedly arriving at its intended destination having travelled by paths of different length. The Ether is unrooted because it can be extended from any of its points in any direction. Any station wishing to join
an Ethernet taps into the Ether at the nearest convenient point.

Looking at the relationship of interconnection and control, we see that Ethernet is the dual of a star network. Rather than distributed interconnection through many separate links and central control in a switching node, as in a star network, the Ethernet has central interconnection through the Ether and distributed control among its stations.

Unlike an Aloha Network, which is a star network with an outgoing broadcast channel and an incoming multi-access channel, an Ethernet supports many-to-many communication with a single broadcast multi-access channel.
3.2 Control
Sharing of the Ether is controlled in such a way that it is not only possible but probable that two or more stations will attempt to transmit a packet at roughly the same time. Packets which overlap in time on the Ether are said to collide; they interfere so as to be unrecognizable by a receiver. A station recovers from a detected collision by abandoning the attempt and retransmitting the packet after some dynamically chosen random time period. Arbitration of conflicting transmission demands is both distributed and statistical.

When the Ether is largely unused, a station transmits its packets at will, the packets are received without error, and all is well. As more stations begin to transmit, the rate of packet interference increases. Ethernet controllers in each station are built to adjust the mean retransmission interval in proportion to the frequency of collisions; sharing of the Ether among competing station-station transmissions is thereby kept near the optimum [20, 21].

A degree of cooperation among the stations is required to share the Ether equitably. In demanding applications certain stations might usefully take transmission priority through some systematic violation of equity rules. A station could usurp the Ether by not adjusting its retransmission interval with increasing traffic or by sending very large packets. Both practices are now prohibited by low-level software in each station.
3.3 Addressing
Each packet has a source and destination, both of which are identified in the packet's header. A packet placed on the Ether eventually propagates to all stations. Any station can copy a packet from the Ether into its local memory, but normally only an active destination station matching its address in the packet's header will do so as the packet passes. By convention, a zero destination address is a wildcard and matches all addresses; a packet with a destination of zero is called a broadcast packet.
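The selection rule, including the zero wildcard, is simple enough to state as a few lines of illustrative Python (ours; in practice the matching is done by interface hardware, as described in Section 4.3):

    BROADCAST = 0  # by convention, destination address 0 matches every station

    def accepts(packet_dest: int, my_address: int) -> bool:
        """A station copies a packet iff it is addressed to it or broadcast."""
        return packet_dest == my_address or packet_dest == BROADCAST

    assert accepts(42, 42)      # directly addressed
    assert accepts(0, 42)       # broadcast packet, wildcard destination
    assert not accepts(7, 42)   # intended for another station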
3.4 Reliability
An Ethernet is probabilistic. Packets may be lost due to interference with other packets, impulse noise on the
Ether, an inactive receiver at a packet's intended destination, or purposeful discard. Protocols used to communicate through an Ethernet must assume that packets will be received correctly at intended destinations only with high probability.

An Ethernet gives its best efforts to transmit packets successfully, but it is the responsibility of processes in the source and destination stations to take the precautions necessary to assure reliable communication of the quality they themselves desire [18, 21]. Recognizing the costliness and dangers of promising "error-free" communication, we refrain from guaranteeing reliable delivery of any single packet to get both economy of transmission and high reliability averaged over many packets [21]. Removing the responsibility for reliable communication from the packet transport mechanism allows us to tailor reliability to the application and to place error recovery where it will do the most good. This policy becomes more important as Ethernets are interconnected in a hierarchy of networks through which packets must travel farther and suffer greater risks.
3.5 Mechanisms
A station connects to the Ether with a tap and a transceiver. A tap is a device for physically connecting to the Ether while disturbing its transmission characteristics as little as possible. The design of the transceiver must be an exercise in paranoia. Precautions must be taken to insure that likely failures in the transceiver or station do not result in pollution of the Ether. In particular, removing power from the transceiver should cause it to disconnect from the Ether.

Five mechanisms are provided in our experimental Ethernet for reducing the probability and cost of losing a packet. These are (1) carrier detection, (2) interference detection, (3) packet error detection, (4) truncated packet filtering, and (5) collision consensus enforcement.
3.5.1 Carrier detection. As a packet's bits are placed on the Ether by a station, they are phase encoded (like bits on a magnetic tape), which guarantees that there is at least one transition on the Ether during each bit time. The passing of a packet on the Ether can therefore be detected by listening for its transitions. To use a radio analogy, we speak of the presence of carrier as a packet passes a transceiver. Because a station can sense the carrier of a passing packet, it can delay sending one of its own until the detected packet passes safely. The Aloha Network does not have carrier detection and consequently suffers a substantially higher collision rate. Without carrier detection, efficient use of the Ether would decrease with increasing packet length. In Section 6 below, we show that with carrier detection, Ether efficiency increases with increasing packet length.
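The phase encoding can be sketched as Manchester coding, the usual reading of "phase encoded" here, though the paper does not spell out the convention; the polarity below is an assumption. Each bit becomes two half-bit levels, which forces the mid-bit transition that carrier detection listens for:

    def phase_encode(bits):
        """Manchester-encode bits: each bit becomes two half-bit levels with
        a guaranteed mid-bit transition (the polarity chosen is assumed)."""
        out = []
        for b in bits:
            out += [1, 0] if b else [0, 1]  # '1' -> high,low; '0' -> low,high
        return out

    encoded = phase_encode([1, 0, 1, 1])
    # The two halves of every bit cell differ, so each bit time carries at
    # least one transition for a receiver to hear as carrier:
    assert all(encoded[i] != encoded[i + 1] for i in range(0, len(encoded), 2))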
With carrier detection we are able to implement deference: no station will start transmitting while hearing carrier. With deference comes acquisition: once a packet transmission has been in progress for an Ether end-to-end propagation time, all stations are hearing carrier and are deferring; the Ether has been acquired and the transmission will complete without an interfering collision.

With carrier detection, collisions should occur only when two or more stations find the Ether silent and begin transmitting simultaneously: within an Ether end-to-end propagation time. This will almost always happen immediately after a packet transmission during which two or more stations were deferring. Because stations do not now randomize after deferring, when the transmission terminates, the waiting stations pile on together, collide, randomize, and retransmit.
3.5.2 Interference detection. Each transceiver has an interference detector. Interference is indicated when the transceiver notices a difference between the value of the bit it is receiving from the Ether and the value of the bit it is attempting to transmit.

Interference detection has three advantages. First, a station detecting a collision knows that its packet has been damaged. The packet can be scheduled for retransmission immediately, avoiding a long acknowledgment timeout. Second, interference periods on the Ether are limited to a maximum of one round trip time. Colliding packets in the Aloha Network run to completion, but the truncated packets resulting from Ethernet collisions waste only a small fraction of a packet time on the Ether. Third, the frequency of detected interference is used to estimate Ether traffic for adjusting retransmission intervals and optimizing channel efficiency.
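The detection rule itself is a per-bit comparison. In this illustrative sketch (ours; drive_ether and read_ether are hypothetical hooks standing in for the transceiver's analog circuitry), a transmitter aborts as soon as the Ether disagrees with the bit it is driving:

    def transmit_with_collision_detect(bits, drive_ether, read_ether):
        """Drive bits onto the Ether while comparing each driven bit with
        the value read back; a mismatch means another transmission is
        interfering. Returns True iff the whole packet went out cleanly."""
        for b in bits:
            drive_ether(b)
            if read_ether() != b:   # Ether value differs from the driven bit
                return False        # collision detected: abort immediately
        return True

    # Toy usage: a quiet Ether just echoes whatever we drive onto it.
    level = {}
    assert transmit_with_collision_detect(
        [1, 0, 1], lambda b: level.update(v=b), lambda: level["v"])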
3.5.3 Packet error detection. As a packet is placed on the Ether, a checksum is computed and appended. As the packet is read from the Ether, the checksum is recomputed. Packets which do not carry a consistent checksum are discarded. In this way transmission errors, impulse noise errors, and errors due to undetected interference are caught at a packet's destination.
3.5.4 Truncated packet filtering. Interference detection and deference cause most collisions to result in truncated packets of only a few bits; colliding stations detect interference and abort transmission within an Ether round trip time. To reduce the processing load that the rejection of such obviously damaged packets would place on listening station software, truncated packets are filtered out in hardware.
3.5.5 Collision consensus enforcement. When a station determines that its transmission is experiencing interference, it momentarily jams the Ether to insure that all other participants in the collision will detect interference and, because of deference, will be forced to abort. Without this collision consensus enforcement mechanism, it is possible that the transmitting station which would otherwise be the last to detect a collision might not do so as the other interfering transmissions successively abort and stop interfering. Although the packet may look good to that last transmitter, different path lengths
between the colliding transmitters and the intended receiver will cause the packet to arrive damaged.
4. Implementation

Our choices of 1 kilometer, 3 megabits per second, and 256 stations for the parameters of an experimental Ethernet were based on characteristics of the locally distributed computer communication environment and our assessments of what would be marginally achievable; they were certainly not hard restrictions essential to the Ethernet concept.

We expect that a reasonable maximum network size would be on the order of 1 kilometer of cable. We used this working number to choose among Ethers of varying signal attenuation and to design transceivers with appropriate power and sensitivity.

The dominant station on our experimental Ethernet is a minicomputer for which 3 megabits per second is a convenient data transfer rate. By keeping the peak rate well below that of the computer's path to main memory, we reduce the need for expensive special-purpose packet buffering in our Ethernet interfaces. By keeping the peak rate as high as is convenient, we provide for larger numbers of stations and more ambitious multiprocessing communications applications.

To expedite low-level packet handling among 256 stations, we allocate the first 8-bit byte of the packet to be the destination address field and the second byte to be the source address field (see Figure 2). 256 is a number small enough to allow each station to get an adequate share of the available bandwidth and approaches the limit of what we can achieve with current techniques for tapping cables. 256 is only a convenient number for the lowest level of protocol; higher levels can accommodate extended address spaces with additional fields inside the packet and software to interpret them.

Our experimental Ethernet implementation has four major parts: the Ether, transceivers, interfaces, and controllers (see Figure 1).
4.1 Ether
We chose to implement our experimental Ether using low-loss coaxial cable with off-the-shelf CATV taps and connectors. It is possible to mix Ethers on a single Ethernet; we use a smaller-diameter coax for convenient connection within station clusters and a larger-diameter coax for low-loss runs between clusters. The cost of coaxial cable Ether is insignificant relative to the cost of the distributed computing systems supported by Ethernet.
4.2 Transceivers
Our experimental transceivers can drive a kilometer of coaxial cable Ether tapped by 256 stations transmitting at 3 megabits per second. The transceivers can endure (i.e. work after) sustained direct shorting, improper termination of the Ether, and simultaneous drive by all 256 stations; they can tolerate (i.e. work during) ground differentials and everyday electrical noise, from typewriters or electric drills, encountered when stations are separated by as much as a kilometer.

An Ethernet transceiver attaches directly to the Ether which passes by in the ceiling or under the floor. It is powered and controlled through five twisted pairs in an interface cable carrying transmit data, receive data, interference detect, and power supply voltages. When unpowered, the transceiver disconnects itself electrically from the Ether. Here is where our fight for reliability is won or lost; a broken transceiver can, but should not, bring down an entire Ethernet. A watchdog timer circuit in each transceiver attempts to prevent pollution of the Ether by shutting down the output stage if it acts suspiciously. For transceiver simplicity we use the Ether's base frequency band, but an Ethernet could be built to use any suitably sized band of a frequency division multiplexed Ether.

Even though our experimental transceivers are very simple and can tolerate only limited signal attenuation, they have proven quite adequate and reliable. A more sophisticated transceiver design might permit passive branching of the Ether and wider station separation.
4.3 Interface
An Ethernet interface serializes and deserializes the parallel data used by its station. There are a number of different stations on our Ethernet; an interface must be built for each kind.

Each interface is equipped with the hardware necessary to compute a 16-bit cyclic redundancy checksum (CRC) on serial data as it is transmitted and received. This checksum protects only against errors in the Ether and specifically not against errors in the parallel portions of the interface hardware or station. Higher-level software checksums are recommended for applications in which a higher degree of reliability is required.

A transmitting interface uses a packet buffer address and word count to serialize and phase encode a variable number of 16-bit words which are taken from the station's memory and passed to the transceiver, preceded by a start bit (called SYNC in Figure 2) and followed by the CRC. A receiving interface uses the appearance of carrier to detect the start of a packet and uses the SYNC bit to acquire bit phase. As long as carrier stays on, the interface decodes and deserializes the incoming bit stream, depositing 16-bit words in a packet buffer in the station's main memory. When carrier goes away, the interface checks that an integral number of 16-bit words has been received and that the CRC is correct. The last word received is assumed to be the CRC and is not copied into the packet buffer.

These interfaces ordinarily include hardware for accepting only those packets with appropriate addresses in their headers. Hardware address filtering helps a station avoid burdensome software packet processing when the Ether is very busy carrying traffic intended for other stations.
Fig. 2. Ethernet packet layout.

    SYNC | DEST (8 bits) | SOURCE (8 bits) | DATA (~4000 bits) | CHECKSUM (16 bits)

    (DEST, SOURCE, and DATA are the fields accessible to software.)
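The receive-side check of Section 4.3 (recompute the 16-bit CRC, verify an integral number of 16-bit words, and strip the trailing CRC word) can be sketched in software as follows. This is illustrative only: the paper does not name the CRC polynomial, so the common CRC-16 polynomial 0x8005 (used here in its reflected form 0xA001) and the little-endian byte order are assumptions:

    def crc16(data: bytes) -> int:
        """CRC-16 over bytes; reflected form of polynomial 0x8005 (assumed)."""
        crc = 0
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
        return crc

    def receive(packet: bytes):
        """Verify an integral number of 16-bit words and a consistent CRC,
        then return (dest, source, data) without the trailing CRC word."""
        if len(packet) < 4 or len(packet) % 2:
            return None                      # not whole 16-bit words: discard
        body, crc = packet[:-2], int.from_bytes(packet[-2:], "little")
        if crc16(body) != crc:
            return None                      # inconsistent checksum: discard
        return body[0], body[1], body[2:]    # dest byte, source byte, data

    pkt = bytes([42, 7]) + b"hi"             # dest 42, source 7, data "hi"
    pkt += crc16(pkt).to_bytes(2, "little")  # append checksum, as on transmit
    assert receive(pkt) == (42, 7, b"hi")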
4.4 Controller
An Ethernet controller is the station-specific low-level firmware or software for getting packets onto and out of the Ether. When a source-detected collision occurs, it is the source controller's responsibility to generate a new random retransmission interval based on the updated collision count. We have studied a number of algorithms for controlling retransmission rates in stations to maintain Ether efficiency [20, 22]. The most practical of these algorithms estimate traffic load using recent collision history.

Retransmission intervals are multiples of a slot, the maximum time between starting a transmission and detecting a collision, one end-to-end round trip delay. An Ethernet controller begins transmission of each new packet with a mean retransmission interval of one slot. Each time a transmission attempt ends in collision, the controller delays for an interval of random length with a mean twice that of the previous interval, defers to any passing packet, and then attempts retransmission. This heuristic approximates an algorithm we have called Binary Exponential Backoff (see Figure 3) [22].

When the network is unloaded and collisions are rare, the mean seldom departs from one and retransmissions are prompt. As the traffic load increases, more collisions are experienced, a backlog of packets builds up in the stations, retransmission intervals increase, and retransmission traffic backs off to sustain channel efficiency.
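The interval rule just described can be sketched as follows (an illustration, not the Binary Exponential Backoff algorithm of [22]): the mean of the random retransmission interval is one slot for a fresh packet and doubles with each collision. The uniform draw is an assumption made for concreteness:

    import random

    def retransmission_delay(collisions: int, slot: float) -> float:
        """Random retransmission interval whose mean is 2**collisions slots:
        one slot before any collision, doubling after each one. A uniform
        draw on [0, 2*mean] realizes that mean (distribution assumed)."""
        mean_slots = 2 ** collisions
        return random.uniform(0, 2 * mean_slots) * slot

    # After 0, 1, 2, 3 collisions the mean delay is 1, 2, 4, 8 slots.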
5. Growth

5.1 Signal Cover
One can expand an Ethernet just so far by adding transceivers and Ether. At some point, the transceivers and Ether will be unable to carry the required signals. The signal cover can be extended with a simple unbuffered packet repeater. In our experimental Ethernet, where because of transceiver simplicity the Ether cannot be branched passively, a simple repeater may join any number of Ether segments to enrich the topology while extending the signal cover.

We operate an experimental two-segment packet repeater, but hope to avoid relying on them. In branching the Ether and extending its signal cover, there is a trade-off between using sophisticated transceivers and using repeaters. With increased power and sensitivity, transceivers become more expensive and less reliable. The introduction of repeaters into an Ethernet makes the centrally interconnecting Ether active. The failure of a transceiver will sever the communications of its owner; the failure of a repeater partitions the Ether, severing many communications.
5.2 Traffic Cover
One can expand an Ethernet just so far by adding Ether and packet repeaters. At some point the Ether will be so busy that additional stations will just divide more finely the already inadequate bandwidth. The traffic cover can be extended with an unbuffered traffic-filtering repeater or packet filter, which passes packets from one Ether segment to another only if the destination station is located on the new segment. A packet filter also extends the signal cover.
5.3 Address Cover
One can expand an Ethernet just so far by adding Ether, repeaters, and traffic filters. At some point there will be too many stations to be addressed with the Ethernet's 8-bit addresses. The address cover can be extended with packet gateways and the software addressing conventions they implement [7]. Addresses can be expanded in two directions: down into the station by adding fields to identify destination ports or processes within a station, and up into the internetwork by adding fields to identify destination stations on remote networks.

A gateway also extends the traffic and signal covers. There can be only one repeater or packet filter connecting two Ether segments; a packet repeated onto a segment by multiple repeaters would interfere with itself. However, there is no limit to the number of gateways connecting two segments; a gateway only repeats packets addressed to itself as an intermediary. Failure of the single repeater connecting two segments partitions the network; failure of a gateway need not partition the net if there are paths through other gateways between the segments.
6. Performance

We present here a simple set of formulas with which to characterize the performance expected of an Ethernet when it is heavily loaded. More elaborate analyses and several detailed simulations have been done, but the following simple model has proven very useful in understanding the Ethernet's distributed contention scheme, even when it is loaded beyond expectations [1, 20, 21, 22, 23, 27].

We develop a simple model of the performance of a loaded Ethernet by examining alternating Ether time periods. The first, called a transmission interval, is that during which the Ether has been acquired for a successful packet transmission. The second, called a contention interval, is that composed of the retransmission slots of Section 4.4, during which stations attempt to acquire control of the Ether. Because the model's Ethernets are loaded and because stations defer to passing packets before starting transmission, the slots are synchronized by the tail of the preceding acquisition interval. A slot will be empty when no station chooses to attempt transmission in it, and it will contain a collision if more than one station attempts to transmit. When a slot contains only one attempted transmission, then the Ether has been acquired for the duration of a packet and the contention interval ends.
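A brief illustrative computation in the spirit of this model (our sketch, not recovered text; it assumes Q contending stations each attempting transmission in a slot with probability 1/Q, an assumed loading):

    def acquisition_probability(q: int) -> float:
        """P(exactly one of q stations transmits in a slot), assuming each
        station attempts independently with probability 1/q (assumption)."""
        return (1 - 1 / q) ** (q - 1)

    def efficiency(q: int, packet_slots: float) -> float:
        """Fraction of Ether time carrying successful transmissions when a
        packet lasting packet_slots slots alternates with a contention
        interval of mean (1 - a)/a slots, a the per-slot success rate."""
        a = acquisition_probability(q)
        return packet_slots / (packet_slots + (1 - a) / a)

    # As q grows, a -> 1/e, about 1.72 wasted slots per acquisition; note
    # that efficiency rises with packet length, as Section 3.5.1 asserted.
    for q in (2, 10, 100):
        print(q, round(acquisition_probability(q), 3), round(efficiency(q, 64), 3))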

This document is available on Docket Alarm but you must sign up to view it.


Or .

Accessing this document will incur an additional charge of $.

After purchase, you can access this document again without charge.

Accept $ Charge
throbber

Still Working On It

This document is taking longer than usual to download. This can happen if we need to contact the court directly to obtain the document and their servers are running slowly.

Give it another minute or two to complete, and then try the refresh button.

throbber

A few More Minutes ... Still Working

It can take up to 5 minutes for us to download a document if the court servers are running slowly.

Thank you for your continued patience.

This document could not be displayed.

We could not find this document within its docket. Please go back to the docket page and check the link. If that does not work, go back to the docket and refresh it to pull the newest information.

Your account does not support viewing this document.

You need a Paid Account to view this document. Click here to change your account type.

Your account does not support viewing this document.

Set your membership status to view this document.

With a Docket Alarm membership, you'll get a whole lot more, including:

  • Up-to-date information for this case.
  • Email alerts whenever there is an update.
  • Full text search for other cases.
  • Get email alerts whenever a new case matches your search.

Become a Member

One Moment Please

The filing “” is large (MB) and is being downloaded.

Please refresh this page in a few minutes to see if the filing has been downloaded. The filing will also be emailed to you when the download completes.

Your document is on its way!

If you do not receive the document in five minutes, contact support at support@docketalarm.com.

Sealed Document

We are unable to display this document, it may be under a court ordered seal.

If you have proper credentials to access the file, you may proceed directly to the court's system using your government issued username and password.


Access Government Site

We are redirecting you
to a mobile optimized page.





Document Unreadable or Corrupt

Refresh this Document
Go to the Docket

We are unable to display this document.

Refresh this Document
Go to the Docket