Computer Systems
G. Bell, S. Fuller, and D. Siewiorek, Editors

Ethernet: Distributed Packet Switching for Local Computer Networks

Robert M. Metcalfe and David R. Boggs
Xerox Palo Alto Research Center
Ethernet is a branching broadcast communication system for carrying digital data packets among locally distributed computing stations. The packet transport mechanism provided by Ethernet has been used to build systems which can be viewed as either local computer networks or loosely coupled multiprocessors. An Ethernet's shared communication facility, its Ether, is a passive broadcast medium with no central control. Coordination of access to the Ether for packet broadcasts is distributed among the contending transmitting stations using controlled statistical arbitration. Switching of packets to their destinations on the Ether is distributed among the receiving stations using packet address recognition. Design principles and implementation are described, based on experience with an operating Ethernet of 100 nodes along a kilometer of coaxial cable. A model for estimating performance under heavy loads and a packet protocol for error controlled communication are included for completeness.

Key Words and Phrases: computer networks, packet switching, multiprocessing, distributed control, distributed computing, broadcast communication, statistical arbitration

CR Categories: 3.81, 4.32, 6.35

Copyright © 1976, Association for Computing Machinery, Inc. General permission to republish, but not for profit, all or part of this material is granted provided that ACM's copyright notice is given and that reference is made to the publication, to its date of issue, and to the fact that reprinting privileges were granted by permission of the Association for Computing Machinery.

Authors' present addresses: R.M. Metcalfe, Transaction Technology, Inc., 10880 Wilshire Boulevard, Los Angeles, CA 94304; D. Boggs, Xerox Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto, CA 94304.

Communications of the ACM, July 1976, Volume 19, Number 7.

1. Background

One can characterize distributed computing as a spectrum of activities varying in their degree of decentralization, with one extreme being remote computer networking and the other extreme being multiprocessing. Remote computer networking is the loose interconnection of previously isolated, widely separated, and rather large computing systems. Multiprocessing is the construction of previously monolithic and serial computing systems from increasingly numerous and smaller pieces computing in parallel. Near the middle of this spectrum is local networking, the interconnection of computers to gain the resource sharing of computer networking and the parallelism of multiprocessing.

The separation between computers and the associated bit rate of their communication can be used to divide the distributed computing spectrum into broad activities. The product of separation and bit rate, now about 1 gigabit-meter per second (1 Gbmps), is an indication of the limit of current communication technology and can be expected to increase with time:

Activity            Separation    Bit rate
Remote networks     > 10 km       < .1 Mbps
Local networks      10-.1 km      .1-10 Mbps
Multiprocessors     < .1 km       > 10 Mbps
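
Because kilometers times megabits per second works out to exactly gigabit-meters per second, the boundaries of this table are easy to check numerically. The snippet below is an illustrative sketch of ours, not from the paper; it shows that both corner points of the table sit at the 1 Gbmps figure cited above:

    # Illustrative sketch (not from the paper): the separation/bit-rate product.
    # km * Mbps = (1e3 m) * (1e6 bit/s) = 1e9 bit-m/s, i.e. exactly Gbmps.
    def product_gbmps(separation_km: float, bit_rate_mbps: float) -> float:
        return separation_km * bit_rate_mbps

    print(product_gbmps(10.0, 0.1))   # remote/local boundary: 1.0 Gbmps
    print(product_gbmps(0.1, 10.0))   # local/multiprocessor boundary: 1.0 Gbmps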

1.1 Remote Computer Networking
Computer networking evolved from telecommunications terminal-computer communication, where the object was to connect remote terminals to a central computing facility. As the need for computer-computer interconnection grew, computers themselves were used to provide communication [2, 4, 29]. Communication using computers as packet switches [15-21, 26] and communications among computers for resource sharing [10, 32] were both advanced by the development of the Arpa Computer Network.

The Aloha Network at the University of Hawaii was originally developed to apply packet radio techniques for communication between a central computer and its terminals scattered among the Hawaiian Islands [1, 2]. Many of the terminals are now minicomputers communicating among themselves using the Aloha Network's Menehune as a packet switch. The Menehune and an Arpanet Imp are now connected, providing terminals on the Aloha Network access to computing resources on the U.S. mainland.

Just as computer networks have grown across continents and oceans to interconnect major computing facilities around the world, they are now growing down corridors and between buildings to interconnect minicomputers in offices and laboratories [3, 12, 13, 14, 35].

1.2 Multiprocessing
Multiprocessing first took the form of connecting an I/O controller to a large central computer; IBM's ASP is a classic example [29]. Next, multiple central processors were connected to a common memory to provide more power for compute-bound applications [33]. For certain of these applications, more exotic multiprocessor architectures such as Illiac IV were introduced [5].

More recently minicomputers have been connected in multiprocessor configurations for economy, reliability, and increased system modularity [24, 36]. The trend has been toward decentralization for reliability; loosely coupled multiprocessor systems depend less on shared central memory and more on thin wires for interprocess communication with increased component isolation [18, 26]. With the continued thinning of interprocessor communication for reliability and the development of distributable applications, multiprocessing is gradually approaching a local form of distributed computing.

1.3 Local Computer Networking
Ethernet shares many objectives with other local networks such as Mitre's Mitrix, Bell Telephone Laboratory's Spider, and U.C. Irvine's Distributed Computing System (DCS) [12, 13, 14, 35]. Prototypes of all four local networking schemes operate at bit rates between one and three megabits per second. Mitrix and Spider have a central minicomputer for switching and bandwidth allocation, while DCS and Ethernet use distributed control. Spider and DCS use a ring communication path, Mitrix uses off-the-shelf CATV technology to implement two one-way busses, and our experimental Ethernet uses a branching two-way passive bus. Differences among these systems are due to differences among their intended applications, differences among the cost constraints under which trade-offs were made, and differences of opinion among researchers.

Before going into a detailed description of Ethernet, we offer the following overview (see Figure 1).

2. System Summary

Ethernet is a system for local communication among computing stations. Our experimental Ethernet uses tapped coaxial cables to carry variable length digital data packets among, for example, personal minicomputers, printing facilities, large file storage devices, magnetic tape backup stations, larger central computers, and longer-haul communication equipment.

The shared communication facility, a branching Ether, is passive. A station's Ethernet interface connects bit-serially through an interface cable to a transceiver which in turn taps into the passing Ether. A packet is broadcast onto the Ether, is heard by all stations, and is copied from the Ether by destinations which select it according to the packet's leading address bits. This is broadcast packet switching and should be distinguished from store-and-forward packet switching, in which routing is performed by intermediate processing elements. To handle the demands of growth, an Ethernet can be extended using packet repeaters for signal regeneration, packet filters for traffic localization, and packet gateways for internetwork address extension.

Control is completely distributed among stations, with packet transmissions coordinated through statistical arbitration. Transmissions initiated by a station defer to any which may already be in progress. Once started, if interference with other packets is detected, a transmission is aborted and rescheduled by its source station. After a certain period of interference-free transmission, a packet is heard by all stations and will run to completion without interference. Ethernet controllers in colliding stations each generate random retransmission intervals to avoid repeated collisions. The mean of a packet's retransmission intervals is adjusted as a function of collision history to keep Ether utilization near the optimum with changing network load.

Even when transmitted without source-detected interference, a packet may still not reach its destination without error; thus, packets are delivered only with high probability. Stations requiring a residual error rate lower than that provided by the bare Ethernet packet transport mechanism must follow mutually agreed upon packet protocols.
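
The control discipline just summarized can be sketched in a few lines. The toy below is a minimal illustration of ours, not the authors' code; ToyEther and its methods are hypothetical stand-ins for the transceiver and interface hardware:

    import random
    import time

    class ToyEther:
        """Hypothetical stand-in for the shared medium, just enough to run."""
        def carrier_sensed(self) -> bool:
            return False                    # pretend the Ether is idle
        def transmit(self, packet: bytes) -> bool:
            return random.random() < 0.9    # True: no interference detected
        def jam(self) -> None:
            pass                            # collision consensus enforcement

    def send(ether: ToyEther, packet: bytes, slot: float = 1e-5) -> None:
        mean = 1                            # mean retransmission interval, in slots
        while True:
            while ether.carrier_sensed():   # defer to any passing packet
                pass
            if ether.transmit(packet):      # interference-free: it will complete
                return
            ether.jam()                     # make sure every collider aborts
            time.sleep(random.randint(0, 2 * mean) * slot)
            mean *= 2                       # adjust mean with collision history

    send(ToyEther(), b"example packet")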

3. Design Principles

Our object is to design a communication system which can grow smoothly to accommodate several buildings full of personal computers and the facilities needed for their support.

Like the computing stations to be connected, the communication system must be inexpensive. We choose to distribute control of the communications facility among the communicating computers to eliminate the reliability problems of an active central controller, to avoid creating a bottleneck in a system rich in parallelism, and to reduce the fixed costs which make small systems uneconomical.

Ethernet design started with the basic idea of packet collision and retransmission developed in the Aloha Network [1]. We expected that, like the Aloha Network, Ethernets would carry bursty traffic so that conventional synchronous time-division multiplexing (STDM) would be inefficient [1, 2, 21, 26]. We saw promise in the Aloha approach to distributed control of radio channel multiplexing and hoped that it could be applied effectively with media suited to local computer communication. With several innovations of our own, the promise is realized.

Ethernet is named for the historical luminiferous ether through which electromagnetic radiations were once alleged to propagate. Like an Aloha radio transmitter, an Ethernet transmitter broadcasts completely-addressed transmitter-synchronous bit sequences called packets onto the Ether and hopes that they are heard by the intended receivers. The Ether is a logically passive medium for the propagation of digital signals and can be constructed using any number of media including coaxial cables, twisted pairs, and optical fibers.

3.1 Topology
We cannot afford the redundant connections and dynamic routing of store-and-forward packet switching to assure reliable communication, so we choose to achieve reliability through simplicity. We choose to make the shared communication facility passive so that the failure of an active element will tend to affect the communications of only a single station. The layout and changing needs of office and laboratory buildings lead us to pick a network topology with the potential for convenient incremental extension and reconfiguration with minimal service disruption.

The topology of the Ethernet is that of an unrooted tree. It is a tree so that the Ether can branch at the entrance to a building's corridor, yet avoid multipath interference. There must be only one path through the Ether between any source and destination; if more than one path were to exist, a transmission would interfere with itself, repeatedly arriving at its intended destination having travelled by paths of different length. The Ether is unrooted because it can be extended from any of its points in any direction. Any station wishing to join an Ethernet taps into the Ether at the nearest convenient point.

Looking at the relationship of interconnection and control, we see that Ethernet is the dual of a star network. Rather than distributed interconnection through many separate links and central control in a switching node, as in a star network, the Ethernet has central interconnection through the Ether and distributed control among its stations.

Unlike an Aloha Network, which is a star network with an outgoing broadcast channel and an incoming multi-access channel, an Ethernet supports many-to-many communication with a single broadcast multi-access channel.

3.2 Control
Sharing of the Ether is controlled in such a way that it is not only possible but probable that two or more stations will attempt to transmit a packet at roughly the same time. Packets which overlap in time on the Ether are said to collide; they interfere so as to be unrecognizable by a receiver. A station recovers from a detected collision by abandoning the attempt and retransmitting the packet after some dynamically chosen random time period. Arbitration of conflicting transmission demands is both distributed and statistical.

When the Ether is largely unused, a station transmits its packets at will, the packets are received without error, and all is well. As more stations begin to transmit, the rate of packet interference increases. Ethernet controllers in each station are built to adjust the mean retransmission interval in proportion to the frequency of collisions; sharing of the Ether among competing station-station transmissions is thereby kept near the optimum [20, 21].

A degree of cooperation among the stations is required to share the Ether equitably. In demanding applications certain stations might usefully take transmission priority through some systematic violation of equity rules. A station could usurp the Ether by not adjusting its retransmission interval with increasing traffic or by sending very large packets. Both practices are now prohibited by low-level software in each station.

3.3 Addressing
Each packet has a source and destination, both of which are identified in the packet's header. A packet placed on the Ether eventually propagates to all stations. Any station can copy a packet from the Ether into its local memory, but normally only an active destination station matching its address in the packet's header will do so as the packet passes. By convention, a zero destination address is a wildcard and matches all addresses; a packet with a destination of zero is called a broadcast packet.
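
The receiver's selection rule is thus a one-line predicate; a minimal sketch of ours:

    def should_copy(dest: int, my_address: int) -> bool:
        # Destination 0 is the wildcard: a broadcast packet matches every station.
        return dest == 0 or dest == my_address

    assert should_copy(0, 37)        # broadcast
    assert should_copy(37, 37)       # addressed to this station
    assert not should_copy(36, 37)   # addressed elsewhere: let it pass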

Fig. 1. A two-segment Ethernet. [Diagram: stations attach through controllers, interfaces, and transceivers to taps on two terminated Ether segments; a repeater joins Ether segment #1 to Ether segment #2.]

3.4 Reliability
An Ethernet is probabilistic. Packets may be lost due to interference with other packets, impulse noise on the Ether, an inactive receiver at a packet's intended destination, or purposeful discard. Protocols used to communicate through an Ethernet must assume that packets will be received correctly at intended destinations only with high probability.
An Ethernet gives its best efforts to transmit packets successfully, but it is the responsibility of processes in the source and destination stations to take the precautions necessary to assure reliable communication of the quality they themselves desire [18, 21]. Recognizing the costliness and dangers of promising "error-free" communication, we refrain from guaranteeing reliable delivery of any single packet to get both economy of transmission and high reliability averaged over many packets [21]. Removing the responsibility for reliable communication from the packet transport mechanism allows us to tailor reliability to the application and to place error recovery where it will do the most good. This policy becomes more important as Ethernets are interconnected in a hierarchy of networks through which packets must travel farther and suffer greater risks.
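
As an illustration of placing error recovery in the endpoints, the sketch below (ours, with a toy lossy transport standing in for the Ethernet) retransmits until an acknowledgment arrives; nothing in the transport itself promises delivery:

    import random

    def lossy_transport(packet: bytes) -> bool:
        """Toy stand-in for the bare packet transport: delivery with high
        probability, never with certainty."""
        return random.random() < 0.9

    def reliable_send(packet: bytes, retries: int = 8) -> bool:
        """Endpoint error control: retransmit until acknowledged or give up."""
        for _ in range(retries):
            if lossy_transport(packet):   # acknowledged by the destination
                return True
        return False                      # residual failure; caller decides

    assert reliable_send(b"file block") in (True, False)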

3.5 Mechanisms
A station connects to the Ether with a tap and a transceiver. A tap is a device for physically connecting to the Ether while disturbing its transmission characteristics as little as possible. The design of the transceiver must be an exercise in paranoia. Precautions must be taken to insure that likely failures in the transceiver or station do not result in pollution of the Ether. In particular, removing power from the transceiver should cause it to disconnect from the Ether.

Five mechanisms are provided in our experimental Ethernet for reducing the probability and cost of losing a packet. These are (1) carrier detection, (2) interference detection, (3) packet error detection, (4) truncated packet filtering, and (5) collision consensus enforcement.

3.5.1 Carrier detection. As a packet's bits are placed on the Ether by a station, they are phase encoded (like bits on a magnetic tape), which guarantees that there is at least one transition on the Ether during each bit time. The passing of a packet on the Ether can therefore be detected by listening for its transitions. To use a radio analogy, we speak of the presence of carrier as a packet passes a transceiver. Because a station can sense the carrier of a passing packet, it can delay sending one of its own until the detected packet passes safely. The Aloha Network does not have carrier detection and consequently suffers a substantially higher collision rate. Without carrier detection, efficient use of the Ether would decrease with increasing packet length. In Section 6 below, we show that with carrier detection, Ether efficiency increases with increasing packet length.

With carrier detection we are able to implement deference: no station will start transmitting while hearing carrier. With deference comes acquisition: once a packet transmission has been in progress for an Ether end-to-end propagation time, all stations are hearing carrier and are deferring; the Ether has been acquired and the transmission will complete without an interfering collision.

With carrier detection, collisions should occur only when two or more stations find the Ether silent and begin transmitting simultaneously: within an Ether end-to-end propagation time. This will almost always happen immediately after a packet transmission during which two or more stations were deferring. Because stations do not now randomize after deferring, when the transmission terminates, the waiting stations pile on together, collide, randomize, and retransmit.
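
Phase encoding and the resulting carrier test can be sketched directly (our illustration; the paper specifies no code, and the high/low convention below is an assumption):

    def phase_encode(bits):
        """Each bit becomes two half-bit levels, so every bit cell contains a
        transition (like bits on a magnetic tape)."""
        out = []
        for b in bits:
            out.extend([1, 0] if b else [0, 1])
        return out

    def carrier_present(levels) -> bool:
        """Carrier detection: listen for any transition in the window."""
        return any(a != b for a, b in zip(levels, levels[1:]))

    assert carrier_present(phase_encode([1, 1, 0, 1]))
    assert not carrier_present([0, 0, 0, 0])    # a silent Ether has no carrier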

3.5.2 Interference detection. Each transceiver has an interference detector. Interference is indicated when the transceiver notices a difference between the value of the bit it is receiving from the Ether and the value of the bit it is attempting to transmit.

Interference detection has three advantages. First, a station detecting a collision knows that its packet has been damaged. The packet can be scheduled for retransmission immediately, avoiding a long acknowledgment timeout. Second, interference periods on the Ether are limited to a maximum of one round trip time. Colliding packets in the Aloha Network run to completion, but the truncated packets resulting from Ethernet collisions waste only a small fraction of a packet time on the Ether. Third, the frequency of detected interference is used to estimate Ether traffic for adjusting retransmission intervals and optimizing channel efficiency.
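
The detector itself reduces to a comparison of driven and observed bits; a minimal sketch of ours:

    def interference_detected(bit_driven: int, bit_heard: int) -> bool:
        # The transceiver notices the Ether disagreeing with its own output.
        return bit_driven != bit_heard

    # When two stations drive opposite values, at least one hears a mismatch.
    assert interference_detected(1, 0)
    assert not interference_detected(1, 1)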

3.5.3 Packet error detection. As a packet is placed on the Ether, a checksum is computed and appended. As the packet is read from the Ether, the checksum is recomputed. Packets which do not carry a consistent checksum are discarded. In this way transmission errors, impulse noise errors, and errors due to undetected interference are caught at a packet's destination.
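
The paper does not name the checksum polynomial, so the sketch below (ours) uses the common CRC-16/CCITT polynomial 0x1021 purely to illustrate the append-and-verify discipline:

    def crc16(data: bytes, crc: int = 0xFFFF) -> int:
        """Bitwise CRC-16/CCITT; the polynomial choice is our assumption."""
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1) & 0xFFFF
        return crc

    def append_checksum(payload: bytes) -> bytes:
        return payload + crc16(payload).to_bytes(2, "big")

    def consistent(packet: bytes) -> bool:
        return crc16(packet[:-2]).to_bytes(2, "big") == packet[-2:]

    pkt = append_checksum(b"some data")
    assert consistent(pkt)
    assert not consistent(pkt[:-1] + bytes([pkt[-1] ^ 0x01]))   # damaged: discard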

3.5.4 Truncated packet filtering. Interference detection and deference cause most collisions to result in truncated packets of only a few bits; colliding stations detect interference and abort transmission within an Ether round trip time. To reduce the processing load that the rejection of such obviously damaged packets would place on listening station software, truncated packets are filtered out in hardware.

3.5.5 Collision consensus enforcement. When a station determines that its transmission is experiencing interference, it momentarily jams the Ether to insure that all other participants in the collision will detect interference and, because of deference, will be forced to abort. Without this collision consensus enforcement mechanism, it is possible that the transmitting station which would otherwise be the last to detect a collision might not do so as the other interfering transmissions successively abort and stop interfering. Although the packet may look good to that last transmitter, different path lengths between the colliding transmitters and the intended receiver will cause the packet to arrive damaged.

4. Implementation

Our choices of 1 kilometer, 3 megabits per second, and 256 stations for the parameters of an experimental Ethernet were based on characteristics of the locally distributed computer communication environment and our assessments of what would be marginally achievable; they were certainly not hard restrictions essential to the Ethernet concept.

We expect that a reasonable maximum network size would be on the order of 1 kilometer of cable. We used this working number to choose among Ethers of varying signal attenuation and to design transceivers with appropriate power and sensitivity.

The dominant station on our experimental Ethernet is a minicomputer for which 3 megabits per second is a convenient data transfer rate. By keeping the peak rate well below that of the computer's path to main memory, we reduce the need for expensive special-purpose packet buffering in our Ethernet interfaces. By keeping the peak rate as high as is convenient, we provide for larger numbers of stations and more ambitious multiprocessing communications applications.

To expedite low-level packet handling among 256 stations, we allocate the first 8-bit byte of the packet to be the destination address field and the second byte to be the source address field (see Figure 2). 256 is a number small enough to allow each station to get an adequate share of the available bandwidth and approaches the limit of what we can achieve with current techniques for tapping cables. 256 is only a convenient number for the lowest level of protocol; higher levels can accommodate extended address spaces with additional fields inside the packet and software to interpret them.

Our experimental Ethernet implementation has four major parts: the Ether, transceivers, interfaces, and controllers (see Figure 1).
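
With one-byte address fields, low-level packet assembly is simple concatenation. A sketch of the header layout of Figure 2 (ours; the checksum handling of Section 4.3 is elided):

    def pack(dest: int, source: int, data: bytes) -> bytes:
        """First byte: destination address; second byte: source address."""
        assert 0 <= dest <= 255 and 0 <= source <= 255
        return bytes([dest, source]) + data

    def unpack(packet: bytes):
        return packet[0], packet[1], packet[2:]

    assert unpack(pack(5, 7, b"payload")) == (5, 7, b"payload")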

4.1 Ether
We chose to implement our experimental Ether using low-loss coaxial cable with off-the-shelf CATV taps and connectors. It is possible to mix Ethers on a single Ethernet; we use a smaller-diameter coax for convenient connection within station clusters and a larger-diameter coax for low-loss runs between clusters. The cost of coaxial cable Ether is insignificant relative to the cost of the distributed computing systems supported by Ethernet.

4.2 Transceivers
Our experimental transceivers can drive a kilometer of coaxial cable Ether tapped by 256 stations transmitting at 3 megabits per second. The transceivers can endure (i.e. work after) sustained direct shorting, improper termination of the Ether, and simultaneous drive by all 256 stations; they can tolerate (i.e. work during) ground differentials and everyday electrical noise, from typewriters or electric drills, encountered when stations are separated by as much as a kilometer.

An Ethernet transceiver attaches directly to the Ether which passes by in the ceiling or under the floor. It is powered and controlled through five twisted pairs in an interface cable carrying transmit data, receive data, interference detect, and power supply voltages. When unpowered, the transceiver disconnects itself electrically from the Ether. Here is where our fight for reliability is won or lost; a broken transceiver can, but should not, bring down an entire Ethernet. A watchdog timer circuit in each transceiver attempts to prevent pollution of the Ether by shutting down the output stage if it acts suspiciously. For transceiver simplicity we use the Ether's base frequency band, but an Ethernet could be built to use any suitably sized band of a frequency division multiplexed Ether.

Even though our experimental transceivers are very simple and can tolerate only limited signal attenuation, they have proven quite adequate and reliable. A more sophisticated transceiver design might permit passive branching of the Ether and wider station separation.

4.3 Interface
An Ethernet interface serializes and deserializes the parallel data used by its station. There are a number of different stations on our Ethernet; an interface must be built for each kind.

Each interface is equipped with the hardware necessary to compute a 16-bit cyclic redundancy checksum (CRC) on serial data as it is transmitted and received. This checksum protects only against errors in the Ether and specifically not against errors in the parallel portions of the interface hardware or station. Higher-level software checksums are recommended for applications in which a higher degree of reliability is required.

A transmitting interface uses a packet buffer address and word count to serialize and phase encode a variable number of 16-bit words which are taken from the station's memory and passed to the transceiver, preceded by a start bit (called SYNC in Figure 2) and followed by the CRC. A receiving interface uses the appearance of carrier to detect the start of a packet and uses the SYNC bit to acquire bit phase. As long as carrier stays on, the interface decodes and deserializes the incoming bit stream, depositing 16-bit words in a packet buffer in the station's main memory. When carrier goes away, the interface checks that an integral number of 16-bit words has been received and that the CRC is correct. The last word received is assumed to be the CRC and is not copied into the packet buffer.

These interfaces ordinarily include hardware for accepting only those packets with appropriate addresses in their headers. Hardware address filtering helps a station avoid burdensome software packet processing when the Ether is very busy carrying traffic intended for other stations.

Fig. 2. Ethernet packet layout. [Diagram: a packet consists of a SYNC bit, an 8-bit DEST field, an 8-bit SOURCE field, a data field of up to about 4,000 bits, and a 16-bit CHECKSUM; the DEST, SOURCE, and data fields are accessible to software.]

4.4 Controller
An Ethernet controller is the station-specific low-level firmware or software for getting packets onto and out of the Ether. When a source-detected collision occurs, it is the source controller's responsibility to generate a new random retransmission interval based on the updated collision count. We have studied a number of algorithms for controlling retransmission rates in stations to maintain Ether efficiency [20, 22]. The most practical of these algorithms estimate traffic load using recent collision history.

Retransmission intervals are multiples of a slot, the maximum time between starting a transmission and detecting a collision, one end-to-end round trip delay. An Ethernet controller begins transmission of each new packet with a mean retransmission interval of one slot. Each time a transmission attempt ends in collision, the controller delays for an interval of random length with a mean twice that of the previous interval, defers to any passing packet, and then attempts retransmission. This heuristic approximates an algorithm we have called Binary Exponential Backoff (see Figure 3) [22].

When the network is unloaded and collisions are rare, the mean seldom departs from one and retransmissions are prompt. As the traffic load increases, more collisions are experienced, a backlog of packets builds up in the stations, retransmission intervals increase, and retransmission traffic backs off to sustain channel efficiency.
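
A sketch of this heuristic (ours, not the authors' controller code; real controllers also bound the mean, which we leave uncapped here):

    import random

    def retransmission_interval(collisions: int) -> int:
        """Binary Exponential Backoff: the mean interval starts at one slot and
        doubles with each collision on the current packet. Drawing uniformly
        from [0, 2 * mean] slots yields exactly that mean."""
        mean = 2 ** collisions
        return random.randint(0, 2 * mean)

    # Empirically, the mean delay after 3 collisions is about 8 slots.
    samples = [retransmission_interval(3) for _ in range(100000)]
    print(sum(samples) / len(samples))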

5. Growth

5.1 Signal Cover
One can expand an Ethernet just so far by adding transceivers and Ether. At some point, the transceivers and Ether will be unable to carry the required signals. The signal cover can be extended with a simple unbuffered packet repeater. In our experimental Ethernet, where because of transceiver simplicity the Ether cannot be branched passively, a simple repeater may join any number of Ether segments to enrich the topology while extending the signal cover.

We operate an experimental two-segment packet repeater, but hope to avoid relying on them. In branching the Ether and extending its signal cover, there is a trade-off between using sophisticated transceivers and using repeaters. With increased power and sensitivity, transceivers become more expensive and less reliable. The introduction of repeaters into an Ethernet makes the centrally interconnecting Ether active. The failure of a transceiver will sever the communications of its owner; the failure of a repeater partitions the Ether, severing many communications.

5.2 Traffic Cover
One can expand an Ethernet just so far by adding Ether and packet repeaters. At some point the Ether will be so busy that additional stations will just divide more finely the already inadequate bandwidth. The traffic cover can be extended with an unbuffered traffic-filtering repeater or packet filter, which passes packets from one Ether segment to another only if the destination station is located on the new segment. A packet filter also extends the signal cover.
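
The filter's forwarding rule is a small predicate; a sketch of ours, in which the set of far-segment addresses is a hypothetical configuration and the treatment of broadcast packets is our assumption:

    def should_forward(dest: int, far_segment_addresses: set) -> bool:
        # Cross only if the destination is on the other segment; broadcast
        # packets (destination 0) presumably must reach every segment.
        return dest == 0 or dest in far_segment_addresses

    far = {12, 13, 14}
    assert should_forward(13, far)       # destination lies across the filter
    assert not should_forward(5, far)    # local traffic stays local
    assert should_forward(0, far)        # broadcasts cross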

5.3 Address Cover
One can expand an Ethernet just so far by adding Ether, repeaters, and traffic filters. At some point there will be too many stations to be addressed with the Ethernet's 8-bit addresses. The address cover can be extended with packet gateways and the software addressing conventions they implement [7]. Addresses can be expanded in two directions: down into the station by adding fields to identify destination ports or processes within a station, and up into the internetwork by adding fields to identify destination stations on remote networks. A gateway also extends the traffic and signal covers.

There can be only one repeater or packet filter connecting two Ether segments; a packet repeated onto a segment by multiple repeaters would interfere with itself. However, there is no limit to the number of gateways connecting two segments; a gateway only repeats packets addressed to itself as an intermediary. Failure of the single repeater connecting two segments partitions the network; failure of a gateway need not partition the net if there are paths through other gateways between the segments.

6. Performance

We present here a simple set of formulas with which to characterize the performance expected of an Ethernet when it is heavily loaded. More elaborate analyses and several detailed simulations have been done, but the following simple model has proven very useful in understanding the Ethernet's distributed contention scheme, even when it is loaded beyond expectations [1, 20, 21, 22, 23, 27].

We develop a simple model of the performance of a loaded Ethernet by examining alternating Ether time periods. The first, called a transmission interval, is that during which the Ether has been acquired for a successful packet transmission.
We assume that Q stations are continuously queued to transmit and that each attempts transmission in a slot with probability 1/Q; this is known to be the optimum statistical decision rule, approximated in Ethernet stations by means of our load-estimating retransmission control algorithms [20, 21].

6.1 Acquisition Probability
We now compute A, the probability that exactly one station attempts a transmission in a slot and therefore acquires the Ether. A is Q * (1/Q) * ((1 - (1/Q)) ** (Q - 1)); there are Q ways in which one station can choose to transmit (with probability 1/Q) while Q - 1 stations choose to wait (with probability 1 - (1/Q)). Simplifying,

A = (1 - (1/Q)) ** (Q - 1).
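
Numerically (a quick check of ours), A is 1 for Q = 1 and decreases toward 1/e, approximately 0.368, as Q grows:

    for q in (1, 2, 5, 10, 100):
        a = (1 - 1 / q) ** (q - 1)
        print(q, round(a, 3))    # 1.0, 0.5, 0.41, 0.387, 0.37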

6.2 Waiting Time
We now compute W, the mean number of slots of waiting in a contention interval before a successful acquisition of the Ether by some station's transmission.
