The Kiewit Network: A Large AppleTalk Internetwork

Richard E. Brown
Dartmouth College, Kiewit Computer Center, Hanover, NH 03755

Abstract: Dartmouth College's Kiewit Network connects nearly all of the computing resources on the campus: mainframes, minicomputers, personal computers, terminals, printers, and file servers. It is a large internetwork, based on the AppleTalk protocols. There are currently over 2900 AppleTalk outlets in 44 zones on campus. Over 90 minicomputers act as bridges between 177 AppleTalk twisted pair busses. This paper describes the extent and facilities of the current network; the extensions made to the AppleTalk protocols, including a stream protocol and an asynchronous link protocol; and current development projects, including an AppleTalk stream to TCP converter.

The Kiewit Network is a general purpose local data network. It serves as glue between all the host computers and nearly all the personal computers and terminals on campus. The network extends to the academic and administrative buildings and the student residence halls, with network ports placed in almost all the offices and rooms. From these ports, users may access any of the host computers connected to the network. In addition, the hosts use network connections to communicate among themselves for electronic mail, file transfer, printer sharing, etc. The network also supports Apple Computer's AppleTalk® protocols. Departments and individuals can purchase standard devices which use AppleTalk (such as LaserWriters® and file servers) and connect them directly to the network.

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.

© 1988 ACM 0-89791-245-4/88/0001/0015 $1.50
The Kiewit Network is not part of the telephone, energy control, or security systems of the College. While there can be benefits from combining them, separating these systems has simplified our design in many ways. A network failure doesn't interrupt these other services. We can also schedule experimental time for testing new software at our convenience.

Dartmouth College started its campus network with General Electric Datanet 30 computers acting as front ends for the Dartmouth College Time Sharing (DCTS) system. In 1978, as the requirement for ports grew, Honeywell 716 minicomputers replaced the Datanet machines. In the same time period, two factors forced a change in perspective away from a front-end function toward a networked approach. First, new hosts began to arrive on campus; these new machines required enhancements to the network. Second, off-campus sites demanded more ports than the existing phone lines could supply. We served these remote users by using Prime P200 minicomputers as statistical multiplexors. In 1980, we implemented the DCTS terminal handling protocols on a less expensive minicomputer, and began to place these network nodes in the basements of buildings across the campus.

When the College recommended that the freshmen entering in the fall of 1984 purchase a Macintosh™ computer, funds were appropriated for three major network enhancements: installation of a network port for each undergraduate in the residence halls; conversion of the network to the AppleTalk protocols; and creation of a terminal emulation program to exploit the enhanced network capabilities.

Computers at Dartmouth
Dartmouth supports a variety of computing environments which include the Dartmouth College Time Sharing system (DCTS), Unix® (4.2 and 4.3 bsd), VAX/VMS, IBM VM/CMS, and Primos operating systems. The host computers which support these include two Honeywell DPS-8/49's, one VAX 8500, eight VAX 11/785 and 11/750's, six MicroVAXes, one IBM 4381, a Convex C1 XP, and a Prime 650. The network also serves Sun and Xerox workstations, many IBM RT PC's, and the Telenet public data network.

Several services are accessible from the network without logging in: the library's on-line card catalog, a computer help system, the campus events calendar, and several student jobs databases.

Owners of Macintosh computers use DarTerminal, a locally-developed terminal emulation program, to connect to any of the host computers on campus. DarTerminal supports multiple simultaneous sessions, each in its own window; cut-and-paste transfer between sessions; VT100 emulation; TEK 4012 graphics; the ability to transfer arbitrary Macintosh documents to and from hosts; and a distributed screen editor called MacAvatar®. This screen editor works with a back-end editor on a host computer to give a Macintosh-style editing interface for each of the hosts. This gives us essentially the same screen editor on four of the popular operating systems on campus: Macintosh, DCTS, Unix, and VAX/VMS.

There are about 4000 Macintosh computers on campus. Over half are owned by students who live in residence halls. About 800 IBM PCs of various models use asynchronous ports to connect to the network. There are over 1200 known RS-232 terminals, including CRT's, dot-matrix printers, and letter-quality printers.

Why choose AppleTalk?
A number of factors led to our choice of AppleTalk as the base protocol within the Kiewit Network: the difficulty of maintaining the existing network; the desire to use higher bandwidth links; the existence of a coaxial cable around the campus; the low cost of an AppleTalk connection; and the College's decision to recommend Macintosh computers to the students.

In early 1984, the Kiewit Network comprised about 40 minicomputers which acted as packet switches and terminal concentrators. The network served about 1500 terminal ports (at 2400 bps) with a normal load of 200-250 simultaneous terminal sessions. Internally, the network was based on X.25 level 2 data links, with the DCTS terminal handling protocols as a transport layer. The links between buildings were 19.2 kbps local area data circuits.

The network met our users' demands, but was quite hard to maintain. The major problem was that all routing information had to be hard-configured: a network engineer had to enter an explicit path from each of the network hub nodes to each host computer. Furthermore, we had to balance traffic flows through the network manually, by reconfiguring these paths. As the number of hosts increased, it was becoming difficult to manage and configure the network.

In 1978, as part of a major project to install cable TV wire around the campus, Kiewit had several runs of coaxial cable laid to the major buildings on campus. We believed that the best way to exploit the coax was to have a datagram-oriented network. We had considered stripping down TCP and IP to make a protocol with adequate performance across the medium speed links on and off campus.

The decision, in 1984, to recommend that freshmen purchase personal computers gave us access to real networking, with enough computing power to support reasonable protocols. After studying Apple's network plans, we saw that AppleTalk gave adequate internetwork facilities, yet was simple enough to be reliable and implementable. In addition, AppleTalk was considerably more efficient than the other contender, the Internet protocols.

AppleTalk requires only a $50 connector to access the network. By using relatively inexpensive bridges, AppleTalk is very cost competitive with other networking schemes, even RS-232 async terminal ports.

All these reasons convinced us that AppleTalk would be a reasonable alternative. AppleTalk offered low-cost connections, with the dynamic configuration that we desired. We had great hope that it would succeed in the market, so that third party products could enhance the value of the campus network. Even if it failed, though, we were sure that AppleTalk would work here at Dartmouth. We began a major redesign of the network in May 1984. When the freshmen arrived in September, the network was providing AppleTalk services in the residence halls.

Inside the Network
The Kiewit Network is a large AppleTalk internet. The network nodes act as AppleTalk bridges, providing routing, naming and zone services. In addition, the bridges act as terminal servers and as controllers for networked line printers using an AppleTalk stream protocol. Any device which uses the Apple protocols can be connected to one of the 230.4 kbps ports and communicate with other AppleTalk devices anywhere on campus. As commercial products (networked printers, file servers, electronic mail, etc.) have come to market, our users have gained a much wider choice of products than we could have developed internally.

We currently have 95 bridges active in the network. These support 82 AppleTalk busses (at 230.4 kbps). We give fictitious network numbers to the terminal servers, and thus have a total of 177 networks, divided over 44 zones. Typically, one zone serves a single building.

The network supports the full set of AppleTalk protocols including AppleTalk Link Access Protocol (ALAP), Datagram Delivery Protocol (DDP), Routing Table Maintenance Protocol (RTMP), Name Binding Protocol (NBP), and Zone Information Protocol (ZIP). ALAP sends packets between devices on a single bus. DDP is responsible for routing packets between devices on separate AppleTalk busses. RTMP defines how the network nodes keep track of the best route to all other nodes. NBP and ZIP define how the name of a service is converted to a (numeric) network address.

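As a concrete illustration of the routing service, the sketch below shows the kind of distance-vector update a bridge might perform when an RTMP tuple arrives from a neighbor. It is a minimal sketch only: the table layout, constants, and function names are invented here, not taken from Apple's RTMP specification.

```c
#include <stdint.h>

/* Hypothetical sketch of an RTMP-style routing table update.
 * Each bridge keeps, per destination network, a hop count and the
 * neighbor that advertised it.  An advertised tuple (net, hops)
 * replaces an entry when it offers a shorter path, or when it comes
 * from the neighbor currently used for that route. */
#define MAX_NETS 256
#define RTMP_UNREACHABLE 16   /* assumed "infinity" for this sketch */

struct route {
    uint8_t hops;      /* distance to the destination network */
    uint8_t next_hop;  /* node id of the advertising neighbor */
    uint8_t valid;
};

static struct route table[MAX_NETS];

/* Returns 1 if the advertisement changed the table. */
int rtmp_update(uint16_t net, uint8_t neighbor, uint8_t adv_hops)
{
    uint8_t hops = (uint8_t)(adv_hops + 1);   /* one extra hop to reach the neighbor */
    struct route *r = &table[net % MAX_NETS];

    if (hops >= RTMP_UNREACHABLE)
        hops = RTMP_UNREACHABLE;

    if (!r->valid || hops < r->hops || r->next_hop == neighbor) {
        if (r->valid && r->hops == hops && r->next_hop == neighbor)
            return 0;                          /* nothing new */
        r->hops = hops;
        r->next_hop = neighbor;
        r->valid = 1;
        return 1;
    }
    return 0;
}
```

A shorter advertisement from any neighbor, or any change from the neighbor a route already uses, updates the entry; everything else is ignored.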
The Kiewit Network also supports the X.25 protocol, connecting us to host computers and public data networks. This X.25 package has been certified by Telenet, Tymnet, and Uninet. The X.25 package also provides terminal connections and file transfers to our VMS, Unix, and VM/CMS hosts.

In addition to the standard Apple-defined protocols, we designed and implemented two enhancements. An Async AppleTalk Link Access Protocol can replace the standard (230.4 kbps) ALAP. Async AppleTalk allows a Mac to connect into an AppleTalk network over an async (RS-232) link, even on a 1200 baud dial-up line. Async AppleTalk has been implemented as a Macintosh driver. We use Async AppleTalk for printing to LaserWriters, file serving, electronic mail, terminal access, and several of the public domain AppleTalk packages. Currently, Async AppleTalk only works with the terminal ports of the Kiewit Network; commercial products which convert between the async and 230.4 kbps links are beginning to come to market.

We also designed a data stream protocol (DSP) which we use for terminal access to host computers (see The Data Stream Protocol, below).

Network Hardware
The packet switching computers of the network are 16 bit minicomputers, manufactured by New England Digital (NED) of White River Junction, VT. These nodes act as terminal concentrators, and perform packet switching and resource naming. Nodes contain up to 128 Kb of RAM and a variety of interface boards.

All nodes contain a small ROM bootstrap. The program in the ROM establishes a connection to a central reload server and requests a reload. This ROM bootstrap loads in a larger RAM bootstrap program, which continues the reload procedure.

Each processor contains a 10 second watchdog timer which is reset by the software's main loop. If the software or hardware reaches an error condition and fails to reset the watchdog timer, the processor transfers into the ROM bootstrap.

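The watchdog discipline described above follows a classic pattern, sketched below with hypothetical names; real NED hardware would decrement the counter itself rather than relying on a software tick routine.

```c
#include <stdint.h>

/* Sketch of the watchdog discipline: the main loop strokes a
 * countdown timer, and if the software wedges long enough for the
 * timer to reach zero, control transfers to the ROM bootstrap.
 * All names here are invented for illustration. */
#define WATCHDOG_SECONDS 10

static volatile uint32_t watchdog = WATCHDOG_SECONDS;
static int in_rom_bootstrap = 0;

void watchdog_reset(void) { watchdog = WATCHDOG_SECONDS; }

/* Called once per simulated second by the timer "hardware". */
void watchdog_tick(void)
{
    if (watchdog > 0 && --watchdog == 0)
        in_rom_bootstrap = 1;   /* processor traps into the ROM bootstrap */
}

/* One iteration of the node's main loop; a wedged loop never
 * reaches the reset. */
void main_loop_iteration(int wedged)
{
    if (!wedged)
        watchdog_reset();       /* the healthy path strokes the timer */
    /* ... service links, route packets ... */
}
```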
Each node computer can support a combination of up to 56 asynchronous (RS-232) ports, or up to 12 HDLC interfaces which use X.25 (to 56 kbps) or the 230.4 kbps AppleTalk ALAP. We connect to our Honeywell hosts through disk channels on the I/O multiplexor. The NED nodes also support Centronics and Data Products printer interfaces.

Network Topology
The nodes of the network are connected to form a tree, rooted in the reloading machine located in the computing services building. There are multiple interconnections between the nodes within the computer center. Notable among these links is a cross-tree AppleTalk bus which joins a dozen nodes in the building.

These central nodes fan out to other nodes which in turn connect to leaf nodes in the basements of buildings around campus. These leaf nodes contain asynchronous ports, AppleTalk interface boards, and printer interfaces.

Links between nodes are either medium speed synchronous (HDLC) lines or standard 230.4 kbps AppleTalk busses. The links between buildings are generally 19.2 kbps LADS (local area data set) circuits.

Within buildings, we place a four-prong telephone outlet (the old style square jack, not a modular jack) in each room or office and run four-conductor shielded station wire from the room to a central point on each floor. Additionally, we run a vertical backbone of 50-pair cable from the network node (generally in the basement) to each floor.

This wiring scheme has the advantage that it handles either asynchronous terminals with RS-232 signaling or AppleTalk devices with an RS-422 driver. We select asynchronous or AppleTalk connections simply by cross-wiring the station wire to the appropriate pair(s) in the backbone cable.

In the residence halls, we have one outlet per bed, for a total of 2600 outlets. In the academic and administrative buildings, there are more than 1650 active asynchronous terminal ports. Currently, a project to convert these RS-232 outlets to AppleTalk has installed about 300 ports.

The Master Node
A single network node (the "master node") watches over the network. It runs standard network software with additional functions, and never carries production traffic; it only monitors the network and reloads nodes which have failed.

The master node is at the top of the network tree. It is the only machine which has mass storage. Its disks hold reload images and configuration information for all the nodes in the network. When a failure occurs, the master node receives a copy of the failed memory image (a "dump"), sends a fresh copy of the program and configuration information to the failed node, and forwards a copy of the dump to the DCTS mainframe for examination. To track down the cause of the failure, a program on DCTS displays a stack trace, identifies and formats packets in a human-readable form, and makes several automatic checks on the binary image.

Configuration and Startup
Configuration information for the network is stored in a (text) file on the master node. The configuration can be changed by editing the file and reloading the affected node (we cannot currently perform on-the-fly configuration). To reload a node, a technician either presses a front panel switch or forces a crash from a privileged terminal. Forcing a crash is simple: the node turns off interrupts, and enters a tight loop. The watchdog timer and ROM bootstrap do the rest. A front panel switch on the remote node, or on the master node, can force the master node to skip the dump procedure, saving time and storage.

New revisions of the software are installed the same way. We update the binary image of the program on the master node's disk, and then force a reload in the particular node. We can reload the entire network by asking the master node to pass a "death packet" to its children. Each recipient passes the packet to its children and then begins its reload procedure. Thus the reload request spreads outward through the network, and the network reloads from the center. One node takes about two minutes to load (with a dump); the entire network reloads (without dumps) in 15 minutes.

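The death-packet flood can be modeled as a breadth-first traversal of the node tree; the sketch below, with an invented topology, shows how the reload begins at the master and spreads outward level by level.

```c
/* Sketch of the "death packet" reload: each node forwards the packet
 * to its children before beginning its own reload, so the request
 * spreads outward from the master (node 0).  The topology and names
 * are invented for illustration. */
#define MAX_NODES 16

static int parent[MAX_NODES];       /* parent[i] = parent of node i */
static int reload_order[MAX_NODES]; /* order in which nodes begin reloading */
static int reloaded = 0;

void propagate_death_packet(int n_nodes)
{
    int queue[MAX_NODES], head = 0, tail = 0;
    queue[tail++] = 0;                      /* master starts the flood */
    while (head < tail) {
        int node = queue[head++];
        for (int c = 1; c < n_nodes; c++)   /* forward to children first ... */
            if (parent[c] == node)
                queue[tail++] = c;
        reload_order[reloaded++] = node;    /* ... then begin this node's reload */
    }
}
```

With a six-node tree (two children under the master, each with leaves below), the reload begins at the center and reaches the leaves last.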
`18
`
`

`
Master Node Monitoring
The master node polls each of the other nodes, and prepares a display (called the "status display") which shows software response times, the round trip time from the master to the other node, the number of connections in a node, errors on a link, and other performance statistics. Any terminal on the network may connect to the master and receive this display.

The status display has several views on the network's state. The default screen shows errors: nodes or links which are in an unusual state. The terminal bell sounds whenever an entity changes to a worse state. A technician can silence the bell by acknowledging the problem from a network privileged terminal; typically, the technician enters information about the trouble, with their actions and initials.

The status display can also show information about the entire network, a specific node or link, or a certain class of link.

Troubleshooting Capabilities
We have invested heavily in the control and monitoring functions for the network. In the standard network software, between 10 and 15 per cent of the total lines of code are devoted to testing and diagnostics.

Each network node has a "control process" which accepts connections (with password protection) from terminals. This process provides a user interface to the monitoring functions in the machine. Support personnel can connect to the control process and request information about the relevant aspects of a node's performance. This includes link information (bytes and packets per second, errors per minute), node information (free storage, per cent busy, per cent interrupt processing), routing tables, the current configuration, terminal port activity, the state of active connections, etc.

Each code module within the node has calls to routines which can produce error messages with detailed information about interesting events. Examples are CRC errors on a link, checksum errors, routing table changes, protocol violations, etc. The various protocol layers can produce lines containing relevant protocol information. A support person can cause these lines to be formatted and displayed by setting the appropriate monitoring flag. We are careful to keep monitoring flags turned off most of the time, since formatting the data is computationally expensive and often increases the node's response time.

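The flag-gated monitoring just described can be sketched as follows. The flag names and logging interface are hypothetical; the point is that the expensive formatting happens only when a support person has turned the corresponding flag on.

```c
#include <stdio.h>

/* Sketch of flag-gated diagnostic logging: each protocol layer calls
 * mon_log() freely, but the (expensive) formatting is skipped unless
 * the relevant monitoring flag is set.  Names are invented. */
enum mon_flag { MON_CRC = 1 << 0, MON_ROUTING = 1 << 1, MON_PROTO = 1 << 2 };

static unsigned mon_flags;      /* normally all off */
static int lines_formatted;     /* counts how much work was actually done */

void mon_log(enum mon_flag flag, const char *fmt, int value)
{
    char line[80];
    if (!(mon_flags & flag))
        return;                 /* flag off: skip the expensive formatting */
    snprintf(line, sizeof line, fmt, value);
    lines_formatted++;
    /* a real node would hand `line` to the control process for display */
    (void)line;
}
```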
Each node maintains a connection to one of our DCTS host computers for archival purposes. There, a log file for each node continually receives information about unusual events from the control process. We perform daily maintenance on these files, watching for files which have grown too much or which show unusual occurrences.

The Data Stream Protocol
In the summer of 1984, we designed and implemented a data stream protocol (DSP) as part of our DarTerminal terminal emulation program. We chose to design a new protocol, rather than use the industry-standard TCP family, for these reasons:

At the time, none of our host computers used TCP (terminal connections were primarily over X.25 links); we would have had to integrate TCP/IP as well as the AppleTalk protocols into the existing network. There was no IP support within a Macintosh at the time.

The links of the network were slow. Most of the links between buildings were medium speed (19.2 kbps) local area data circuits; we had several links to off-campus sites running at 2400 baud. With the overhead of TCP and IP headers, these links would have given poor service for many applications (most notably host character echoes).

The original design used a two-byte header. This has served well for the last three years. In the summer of 1987, we converted to a four-byte header because: a) the two-byte header had an eight-bit packet sequence number, which was too small to prevent wraparound over fast links (the new protocol has 16 bits in the sequence number); and b) a cooperative effort with a third-party AppleTalk vendor required additional features that the two-byte protocol could not support. This commercial implementation will allow our users to use any Macintosh terminal emulator across an AppleTalk network. The rest of this document describes the four-byte header.

The DSP is a client of the AppleTalk Datagram Delivery Protocol (DDP) and Name Binding Protocol (NBP). A client may use DSP as its only interface to the AppleTalk network. The DSP establishes a connection between two entities on an AppleTalk network. The client may specify a name or the AppleTalk address of the peer for the connection.

A DSP connection provides full duplex, reliable, flow-controlled delivery of a stream of bytes. In addition to a raw stream of bytes, the DSP provides a record service whereby clients use an end of message (EOM) flag to indicate that one or more datagrams should be treated as a single transaction. A client may also create its own layered protocols, by using a protocol type field in the DSP header which is not interpreted by the DSP.

The sending side of the DSP accepts buffers of data from its client and sends that information across the connection. The DSP may fragment a buffer into several datagrams; each will contain a sequence number. Each of the datagrams is held in a retransmit queue, pending receipt of an acknowledge packet. The receiving side of the DSP checks the sequence number of the arriving datagrams, and sends back an acknowledge packet for any which are in-sequence. Acknowledge packets contain the sequence number of the next expected datagram and a receive window which indicates the number of additional maximum-length datagrams which may be sent. Datagrams which arrive ahead of sequence are stored in a short queue, pending arrival of the expected datagram(s).

If a block of data must be fragmented, each fragment retains the protocol type field, and all but the last have the EOM bit cleared. The last datagram has its EOM bit set to the value requested by the client. The receiver must observe the EOM bit and the protocol type field, and only reassemble packets into receive buffers which have the same type.

In addition to a reliable, sequenced stream of messages, DSP provides a means for its clients to exchange out-of-band information in the form of attention messages. Attention messages may be sent any time a connection is open, even if the other end's receive window is closed. Delivery of these messages is best-effort; they are not sequenced, acknowledged or automatically retransmitted. Only one unread attention message will be held in a connection; subsequent attentions will be discarded.

The DSP Header
The DSP packets are sent as AppleTalk DDP datagrams. The DDP header carries several information fields, including the internet source and destination socket addresses. A DSP header follows the DDP header, using DDP protocol type $26, and contains information specific to the DSP protocol.

[Figure: The DSP Header. Fields: Sequence/Acknowledge Number (2 bytes); control bits (ATN, EOM, ACK, CTL); Modifiers (4 bits); Connection ID (1 byte); Data field (optional).]

Byte ordering is assumed to have the most significant (high-order) byte first, conforming to the AppleTalk conventions.

Sequence/Acknowledge Number (16 bits)
Generally, this field contains the sequence number of the packet. If ACK is set among the control bits, this field contains the sequence number of the packet the sender next expects to receive. If ATN is set, this field must be zero.

Connection ID (8 bits)
The sender's identification of the connection. Coupled with the sender's internet address, this allows unique identification of the sender's client by the receiving DSP.

The DSP uses a unique, incrementing connection ID number (ConnID) for each connection half, which must never be zero. The ConnID is derived from a seed value maintained within each DSP implementation (initialized from a random source, such as the low order bits of a system clock); it is incremented at each connection attempt, and must be unique within the set of active connections.

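Under the assumption that the control bits occupy the high nibble of the third byte (the paper fixes the field widths but not the nibble order), the four-byte header can be packed and unpacked as follows:

```c
#include <stdint.h>

/* Sketch of the four-byte DSP header, packed in AppleTalk
 * (big-endian) byte order.  The placement of the control bits in the
 * high nibble is an assumption of this sketch. */
#define DSP_CTL 0x8   /* control packet          */
#define DSP_ACK 0x4   /* acknowledge packet      */
#define DSP_EOM 0x2   /* end of "record" message */
#define DSP_ATN 0x1   /* attention packet        */

struct dsp_header {
    uint16_t seq_ack;   /* sequence or acknowledge number */
    uint8_t  control;   /* one of the bits above, or 0 for plain data */
    uint8_t  modifier;  /* window, control code, or client protocol type */
    uint8_t  conn_id;   /* sender's connection ID; never zero */
};

void dsp_pack(const struct dsp_header *h, uint8_t out[4])
{
    out[0] = (uint8_t)(h->seq_ack >> 8);     /* most significant byte first */
    out[1] = (uint8_t)(h->seq_ack & 0xff);
    out[2] = (uint8_t)((h->control << 4) | (h->modifier & 0x0f));
    out[3] = h->conn_id;
}

void dsp_unpack(const uint8_t in[4], struct dsp_header *h)
{
    h->seq_ack  = (uint16_t)((in[0] << 8) | in[1]);
    h->control  = in[2] >> 4;
    h->modifier = in[2] & 0x0f;
    h->conn_id  = in[3];
}
```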
Control bits (4 bits, mutually exclusive)
CTL: This packet transfers information between DSP peers; the CTL bit is not set by the client. The modifier field defines the control function. The data field may carry supplementary control information.
ACK: This is an acknowledge packet; the ACK bit is not set by the client. The modifier field contains the receive window. The sequence field specifies the next requested message. There is no data field.
EOM: This is the last data packet in a "record" message; the EOM bit is set by the client. The modifier field is user-definable for implementing layered protocols.
ATN: This is a client supervisory control packet; the ATN bit is set by the client. The modifier and data fields carry user-defined supplementary information. The value of the sequence field must be zero.

Modifier Bits (4 bits)
This field expands the meaning of the four control bits defined above (the default case, with no control bits, indicates a data packet).

If CTL is set, this field contains a control code which identifies the type of DSP control packet; note that the values marked as "processed immediately" will be processed even if ahead-of-sequence.

$0     Connection Request (non-privileged)
$1     Connection Request (privileged)
$2     Reserved for future expansion
$3     Connection Deny
$4     Disconnect Request
$5     Abort Advise *
$6     Acknowledge Request
$7     Forward Reset Request *
$8-$F  Reserved for future expansion

* - processed immediately

If ACK is set, this field contains the sender's receive window, i.e. the number of maximum-length packets, beginning with the one indicated in the sequence field, which the sender of this packet is willing to accept.

In all other cases (ATN, EOM, no control bits), this field is available for client-defined, higher-level layered protocols. Reserved protocol values are:

$0  A data packet, requiring no special interpretation.
$1  Kiewit Network terminal handling protocol (TCFACE).
$2  Infosphere ComServe protocol.
$F  Extended modifier value - escape to next byte.

If the modifier field contains a value of $F, then the following byte (i.e. the first byte of the data field) contains the actual modifier value and the data field begins one byte later. This allows expansion to 256 modifier values. Extended modifier values of $00 to $0E have the same meanings as the basic value range.

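The $F escape described above can be decoded as in this sketch; the function name and error convention are invented.

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of decoding the DSP modifier: when the header's modifier
 * nibble is $F, the real modifier value is carried in the first data
 * byte and the data field begins one byte later.  Returns the
 * effective modifier value (or -1 on a malformed packet) and sets
 * *data_off to the offset where client data begins. */
int dsp_modifier(uint8_t nibble, const uint8_t *data, size_t len, size_t *data_off)
{
    if (nibble != 0x0F) {
        *data_off = 0;
        return nibble;            /* basic values $0-$E */
    }
    if (len == 0)
        return -1;                /* malformed: escape with no data byte */
    *data_off = 1;                /* data begins one byte later */
    return data[0];               /* extended values, up to 256 of them */
}
```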
Implementation recommendations
There is always a tradeoff between timely transmission of data and the desire to increase network efficiency by sending large packets. Optimal transmission strategies are an area of active research. The DSP specification offers recommendations which are simple (so they are likely to be implemented correctly) and safe (so they will work, with minimal impact, under all network conditions). Implementors may deviate from these recommendations, but must be sure that the differences will actually perform better than these recommendations.

Transmission Strategy
Whenever data is to be transmitted on an idle connection, the sender will forward a window-full of data. This is the number of packets specified in the receiver's receive window. Once a window-full has been sent, the sender will refrain from sending more packets until all outstanding packets have been acknowledged. This minimizes the number of packets that the network has to carry, while providing near-optimal response for the client.

If the sender receives more data from the client while packets are still outstanding, it may attempt to combine the new data with other packets which are to be transmitted (if the DSP protocol type and host permit). This increases the amount of data carried in each packet, increasing the network efficiency. If the sender cannot combine the new data, it will queue a separate message (with a new sequence number) to be sent when all outstanding packets have been acknowledged.

Retransmission Strategy
The sender retransmits packets which have not been acknowledged. To ensure that the network is not flooded with retransmissions, the DSP must dynamically adjust its retransmission rate. The algorithm must be robust, and take into account link speeds which can vary over three orders of magnitude (1200 baud to 230.4 kbps).

The DSP maintains two values: an estimate of the current round trip time (RTT) across the connection, and the retransmission timeout (RTO). The DSP estimates the RTT by measuring the elapsed time between the transmission of a packet and receipt of its acknowledge message; this estimate should only be used if the packet has been transmitted exactly once. The RTT estimate is a weighted average of recent samples, with a faster time constant for increasing RTT (to react quickly to congestion).

The DSP will: a) set the RTO to twice the current RTT estimate for the initial transmission of a packet; b) retransmit only the packet with the lowest sequence number (retransmitting all the messages in the retransmit queue generally won't improve performance, since only one packet is likely to have been lost; subsequent packets would be waiting on the receiver's queue of ahead-of-sequence messages, and would not need to be retransmitted); and c) double the RTO (but not the round trip time estimate) any time a packet is retransmitted. By backing off aggressively, the DSP will not flood the network when load suddenly increases.

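The timer rules above can be sketched as follows. The smoothing weights, the RTO ceiling, and the rule of ignoring samples from retransmitted packets are assumptions of this sketch; the text specifies only the initial value of twice the RTT estimate, the doubling on retransmission, and the faster reaction to increasing RTT.

```c
#include <stdint.h>

/* Sketch of the DSP retransmission timers.  Constants are invented. */
#define RTO_MAX_MS 60000   /* assumed ceiling; the text gives none */

struct dsp_timer {
    uint32_t srtt_ms;   /* weighted average of round trip samples */
    uint32_t rto_ms;    /* current retransmission timeout */
};

void dsp_first_transmission(struct dsp_timer *t)
{
    t->rto_ms = 2 * t->srtt_ms;          /* initial RTO: twice the RTT estimate */
}

void dsp_on_retransmit(struct dsp_timer *t)
{
    if (t->rto_ms < RTO_MAX_MS / 2)
        t->rto_ms *= 2;                  /* back off aggressively ... */
    else
        t->rto_ms = RTO_MAX_MS;
    /* ... but leave t->srtt_ms deliberately unchanged */
}

/* RTT samples are only taken from packets transmitted exactly once;
 * the weighted average reacts faster to increases than to decreases. */
void dsp_rtt_sample(struct dsp_timer *t, uint32_t sample_ms, int transmissions)
{
    if (transmissions != 1)
        return;                          /* ambiguous sample: ignore it */
    if (sample_ms > t->srtt_ms)
        t->srtt_ms = (t->srtt_ms + sample_ms) / 2;        /* fast up */
    else
        t->srtt_ms = (7 * t->srtt_ms + sample_ms) / 8;    /* slow down */
}
```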
Window Management
The receiving DSP process sends acknowledge messages to update the sender's notion of the sequence number or receive window. The goal of the DSP is to minimize the number of acknowledge messages sent, and still give timely information about the receiver's state. The DSP will send an acknowledge when: an in-sequence message is received; the client drains the receive queue such that the receive window is twice the value sent in the previous acknowledge message; or the 30 second tickle timer expires (without any other outstanding message).

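The three acknowledgement triggers can be collected into a single predicate; the structure fields below are invented names for the state the text describes.

```c
#include <stdint.h>

/* Sketch of the receiver's decision to send an acknowledge. */
struct dsp_rx {
    uint16_t window_now;       /* current receive window */
    uint16_t window_last_ack;  /* window advertised in the previous ack */
    uint32_t idle_seconds;     /* time since this side last sent anything */
};

int dsp_should_ack(const struct dsp_rx *rx, int got_in_sequence)
{
    if (got_in_sequence)
        return 1;                                   /* ack every in-sequence message */
    if (rx->window_now >= 2 * rx->window_last_ack)
        return 1;                                   /* client drained the receive queue */
    if (rx->idle_seconds >= 30)
        return 1;                                   /* 30 second tickle timer expired */
    return 0;
}
```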
`enaie0 [000lo.td asa ol ¿Di au,
`loafo.td
`-0jd4° am Jo luauzdojanap agi glitt jajTtaud ui
`slau rare jeooT `anogr paquosap 3Taom:au3jjÈ1
`-óT ia3j jtaanas UT p02aauza anEg 1011aa1pg 2utsn
`-12u0 ag: apnTouT asags, sndureo pun= suopEo
`am pug sftipTinq aouatos ralnduaoo put 2uuaau
`-tdX1 alp sraaaqm aaluaD uótlulnduzo0 :imaYx
`saotnap am `gsolutom u sT aotnap 311r1,ajddle Ito
`put) jnlaamod arouz ag op pual lauaaglg ag1 uo
`puu sNjA au1 apnToui /Sag', (anisuadxa aim"
`pur uns `anogt pauopuatuslsogXdA 'tun
`aaquanu Suistaaoui ut pur `suoprls3jaom xoaaX
`firms Jamo puE `Od ix ATEI `XäAoaoiva 10
`Mai, agp asn tiro saoin,ap asag111d saalnduaoo
`'Tauaaq:g 0q1 aano alEOtunuzuxoa o: sj000load
`
This division into two worlds, AppleTalk and Ethernet, has a major drawback: it is hard to communicate between the two networks. With AppleTalk everywhere on the campus, and Ethernet connected to (nearly) all our host computers, we needed to find a general scheme for connecting an AppleTalk device with a computer on the Ethernet.

Currently, there are a few solutions which convert between AppleTalk DDP and IP datagrams. The most widespread are based on Stanford's SEAGATE technology and its commercial implementation from Kinetics, Inc. While this hardware works, there are no off-the-shelf software packages which will support the terminal
sessions we require. Rather than implement TCP/IP, we decided to design a converter between the stream and terminal protocols of the AppleTalk DSP and TCP/Telnet. There were several reasons:

The links between buildings are still slow (19.2 kbps). Full TCP/IP headers would give poor performance. Async AppleTalk at 1200 baud would be unusable.

The network nodes which perform terminal service have very little code space left. A new protocol family probably would not fit in the machine.

We have three separate implementations of the DSP: the network terminal handlers, the DCTS host, and the Macintosh. All three implementations would have to be changed to a new protocol.

A third-party AppleTalk vendor is developing an "AppleTalk serial driver" which will replace the current async RS-232 driver in the Macintosh. Any terminal emulator will then work across the AppleTalk network, simply by installing the new driver.

The Internet protocols still cannot provide several of the services we get from AppleTalk: dynamic network address assignment; simple routing algorithms for a local network; naming algorithms which don't require a system administrator to update name tables.

The Gateway Implementation

We plan to use a Macintosh II computer running A/UX (Unix System V, Release
