A Communication Architecture for High-speed Networking

Zygmunt Haas
AT&T Bell Laboratories
Room 4F-501
Holmdel, NJ 07733
`
Abstract

The communication speed in wide-, metropolitan-, and local-area networking has increased over the past decade from Kbps lines to Gbps lines; i.e., six orders of magnitude, while the processing speed of commercial CPUs that can be employed as communications processors has changed by only two to three orders of magnitude. This discrepancy in speed translates to "bottlenecks" in the communications process, because the software that supports some of the high-level functionality of the communication process is now several orders of magnitude slower than the transmission media. Moreover, the overhead introduced by the operating system (OS) on the communication process strongly affects the application-to-application communication performance. As a result, new proposals for interface architecture have emerged that considerably reduce the overhead associated with the OS. With the alleviation of the OS bottleneck, the performance required from the communication system will be so high that a new protocol architecture that supports high-performance (high-throughput and low-delay) communication is needed. The purpose of this paper is to propose an alternative structure to the existing models of communication architecture. The architecture presented employs an approach quite different from that used in the existing layered models; i.e., the proposed architecture has a horizontal (parallel) structure, as opposed to the vertical (serial) structure of the layered models. The horizontal structure, coupled with a small number of layers, eliminates unnecessary replication of functions and unnecessary processing of functions, and lowers the interprocess communication burden of the protocol execution. (For example, unnecessary error detection for voice packets in integrated applications and unnecessary error detection in multiple layers for data applications are eliminated, the overhead associated with multiple peer-to-peer connections of multi-layer architectures is substantially reduced, etc.) Furthermore, the proposed architecture lends itself more naturally toward full parallel implementation. Parallelism offers the potential of increased protocol processing rates, reduction in processing latency, and further reduction in processing overhead. And with the advances in VLSI, full parallel implementation becomes more and more feasible. Some additional features of the proposed scheme include selective functionality, which permits the protocol to adjust itself easily to the kind of performance demanded by the network-application requirements. The architecture presented supports the notion of communication networks that resemble an extension of a computer bus, and are capable of providing very high bandwidth, on the order of Gbps, directly to the user.
1 Introduction and Motivation
The communication speed in wide-, metropolitan-, and local-area networking has increased over the past decade from Kbps lines to Gbps lines; i.e., six orders of magnitude. The processing speed of commercial CPUs that can be employed as communications processors has changed by only two to three orders of magnitude. This discrepancy in speed translates to "bottlenecks" in the communications process, because the software that supports some of the high-level functionality of the communication process is now several orders of magnitude slower than the transmission media. In other words, when the lines operated at 9.6 Kbps, the multi-layer conventional protocols implemented in software were fast enough to match the speed of the lines. However, now, when the lines operate at 1.7 Gbps, the mismatch in speed is so large that the advantage of high-speed lines is buried in the large processing overhead of high-level protocols, leading to large delay and low throughput. This change has shifted the bottleneck from the transmission to the (software) processing at the end points.
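The scale of this mismatch can be checked with back-of-the-envelope arithmetic. The 9.6 Kbps and 1.7 Gbps figures are from the text; the CPU factor below is simply the midpoint of the quoted "two to three orders of magnitude" range, chosen for illustration:

```python
import math

line_then = 9.6e3   # bps: the 9.6 Kbps lines mentioned above
line_now = 1.7e9    # bps: the 1.7 Gbps lines mentioned above
line_speedup = line_now / line_then   # roughly five to six orders of magnitude

cpu_speedup = 10 ** 2.5               # midpoint of "two to three orders"

# The gap that protocol software must bridge grew by roughly
# line_speedup / cpu_speedup, i.e. about three orders of magnitude.
gap = line_speedup / cpu_speedup
print(f"lines: x{line_speedup:.0f}, cpu: x{cpu_speedup:.0f}, gap: x{gap:.0f}")
```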
Another trend that has influenced the design of communication networks is the potential of hardware implementation (i.e., VLSI). It is much easier and cheaper today to implement large and fast communications hardware on a single silicon chip. However, this opportunity cannot be fully exploited with current protocols that were developed
CH28265/90/0000/0433/$01.00 © 1990 IEEE
DEFS-ALA0006965

Alacritech, Ex. 2033 Page 1
for software implementation. The purpose of this work is to investigate the various possibilities of improving the performance of communications protocols and interfaces, so that the slow-software-fast-transmission bottleneck can be alleviated, and to propose a new protocol architecture that is suitable for future, high-performance communication.
A basic question is whether or not the current (software-based) protocols can provide the required performance. As suggested in [1], TCP/IP ([2]) can, with some minor changes, provide up to a few hundred Mbps throughput. Assuming that the machines will, indeed, require such a high bandwidth, there are still other major communication bottlenecks that prevent delivery of such large throughput. Thus, [1] concludes that it is the other bottlenecks that we are supposed to be concerned with, not the high-layer protocols. Furthermore, there is still the question of what applications require such a large throughput. (The problem is not only in identifying such applications, but also in predicting future applications.)
Let us first provide an answer to the last question. The general trend of computing environments to move toward distributed and parallel processing systems is, and will be, strongly responsible for the demand for increased performance. For example, a parallel processing system implemented on the fine-grain level requires delays on the order of microseconds. In a distributed processing system, large files may be required to be transferred between machines with very low latency. Thus, for the parallel and distributed processing environment, both very low delay and large throughput are crucial. (Another bandwidth-demanding application that can be identified today is video. Video applications will become more and more important with traffic integration in packet-switched networks.)
Unfortunately, indeed, the overhead introduced by the operating system (OS) on the communication process strongly affects the application-to-application communication performance. The major sources of this overhead are (see also [3,4,5,6]):

- scheduling
- multiple data transfers from/to the user¹
- overhead of entities management:
  - timers
  - buffers
  - connection states
- overhead associated with division of the protocol processing into processes (including interprocess communication)
- interrupts
- context switching

¹It has already been emphasized several times in the literature (for example, [3,4]) that the number of data copies between buffers (i.e., the per-octet overhead) has a crucial effect on the performance of a protocol.
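The per-octet cost of data copies (footnote 1) can be made concrete with a toy model; the copy counts below are illustrative assumptions, not measurements from this paper:

```python
def octets_touched(payload: int, copies: int) -> int:
    """Total octets the CPU moves when a payload crosses `copies` buffer
    boundaries on its way between the network and the application."""
    return payload * copies

# Hypothetical conventional path: adapter buffer -> OS buffer -> user buffer.
conventional = octets_touched(4096, 3)

# Path advocated later in this paper: the interface writes user memory directly.
direct = octets_touched(4096, 1)

# Per-octet overhead scales linearly with the number of copies,
# independently of any fixed per-packet costs.
assert conventional == 3 * direct
```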
The reason for the large OS overhead is the structure of the communication process in general, and the implementation of network interfaces in particular. In other words, the CPU performs such an important role in the communication process that, as a consequence, there are too many interrupts, too many context switches, and too large a scheduling overhead. Network interfaces were invented to offload the CPU from the communication process. Unfortunately, they do only a partial job; interfaces are still being built that interrupt the processor for each received packet, leading to multiple context switches and scheduler invocations. (Another solution is to structure the process in a different way: to eliminate the scheduler calls by the interrupts, and to resolve the scheduling of the process that completed the communication at the "regular" scheduler invocations.) Some new proposals for interface architecture considerably reduce the overhead associated with the OS. These proposals structure the communication process much in the same way that Direct Memory Access (DMA) is implemented; the CPU initiates a communication event, but has little to do in the actual information exchange process. The considerable reduction in the OS overhead that results from the new structuring of the communication process will probably have a crucial impact on the feasibility of providing multi-Mbps bandwidth directly to the user (see also Section 6 for our ideas on a new interface structure).
The communication goal in the parallel and distributed system environment is to provide communication whose throughput is restricted only by the source capacity of the transmitter or the sink ability of the receiver. Also, the communication delay should be minimized. As stated, the OS is today the bottleneck of the communication process. However, once the OS bottleneck is resolved, the performance required from the communication systems will be so high that a new approach will be needed to the architecture of communication protocols and to the network interface design to support high performance (high throughput and low delay). This argumentation answers the basic question of whether the current protocols are adequate for future high-speed networks; local improvements in the protocol design might be adequate for current OS-limited systems. This will not be the case for future systems.
Along these lines, we propose an architecture that is an alternative to the existing layered architectures. The novel feature of the proposed architecture is the reduction in the vertical layering; services that correspond to the definitions of layers 4 to 6 in the ISO/OSI RM² are combined into a single layer that is horizontally structured. This approach lends itself more naturally to parallel implementation. Moreover, the delay of a set of processes implemented in parallel is determined by the delay of the longest process, and not by the sum of all the process delays, as is the case in sequential implementation. In the same way, the total throughput need not be limited by the lowest-capacity process, but can be increased by concurrently performing the function on several devices. Thus a protocol structure that lends itself to parallel implementation has the potential to provide the high performance matched to the requirements of the new generation of improved OS.

²The references to the ISO/OSI Reference Model in this paper relate only to the definition of services in the various layers of the model, and not to the actual architecture of the ISO/OSI protocols.
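The max-versus-sum argument above can be stated as a one-line model; the per-function delays and unit counts here are invented purely for illustration:

```python
# Hypothetical per-packet processing delays, in microseconds, of four
# protocol functions executed for each arriving packet.
delays = {"error_control": 8.0, "resequencing": 3.0,
          "flow_control": 2.0, "presentation": 12.0}

serial_latency = sum(delays.values())    # vertical (layered) execution
parallel_latency = max(delays.values())  # horizontal (parallel) execution

# Throughput of the parallel structure is set by the slowest function and
# can be raised by replicating only that function's hardware.
rates = {f: 1e6 / d for f, d in delays.items()}    # packets/second per unit
units = {"error_control": 1, "resequencing": 1,
         "flow_control": 1, "presentation": 2}     # replicate the bottleneck
throughput = min(units[f] * rates[f] for f in rates)
```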
2 The Challenge

The challenge is to propose a single protocol that successfully provides communication over diverse networks: data rates ranging from Kbps to Gbps and network diameters from LANs to WANs. Also, the diverse requirements of many different applications need to be supported: connection-oriented and connectionless service, stream-like traffic and bursty traffic, reliable transport and best-effort delivery (error control and flow control), different data sizes, different delay and throughput requirements, etc. The required throughput is on the order of hundreds of Mbps application-to-application (with a tendency toward Gbps), and the delay is on the order of hundreds of microseconds³.
Let us discuss the above a little bit further. Assuming a very reliable subnet, the error-recovery procedures can be simplified so that when there are no errors very little overhead is incurred. However, this low overhead comes at the expense of a large penalty in the event of an error. (On the other hand, in networks with a high bit error rate (BER), the recovery procedure should be minimized for both the error and the no-error cases.) This is what success-oriented⁴ protocols mean: minimize the overhead for successful delivery cases at the expense of a larger penalty for unsuccessful delivery attempts.
The reason for insisting on optimizing the performance⁵ over such an extensive range of the data rate (Kbps to Gbps) is that in the future we expect to have an enormous variety of networks.

³Of course, because of the propagation delay, such a low delay requirement has little advantage for a WAN, and is necessary only in a LAN/MAN environment. However, because of the requirement that the protocol design be independent of the actual subnet being used, the stringent performance requirements need to apply to any communication.
⁴This term refers to protocols that exploit the characteristics of reliable networks. Such networks, typically composed of fiber links and digital switches, have a very low error rate, deliver packets in order, and rarely drop packets.
⁵Here "performance" refers to throughput, delay, and transmission efficiency.
In other words, one cannot expect that, with the introduction of multi-megabit-per-second networks, Ethernets will (at least immediately) disappear. Even today, the range of communication is very large; 300 bps modems are still being used.
Thus, in order to optimize the performance of the protocol for all the diverse networks/applications, the protocol must consist of a set of versatile protocols that can be readily switched between. This is what we refer to in this work as selective functionality protocols.
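A selective functionality protocol can be pictured as a menu of per-function values carried in a per-packet option field. The sketch below is an illustration only: the nibble packing and the ErrorControl value set are invented here, while the retransmission values echo those proposed in Section 5:

```python
from enum import IntEnum

class Retransmission(IntEnum):
    NONE = 0
    SELECTIVE = 1
    GO_BACK_N = 2
    STOP_AND_WAIT = 3
    ANY = 4

class ErrorControl(IntEnum):   # hypothetical value set, for illustration
    NONE = 0
    DETECTION = 1
    CORRECTION = 2

def pack_options(retx: Retransmission, ec: ErrorControl) -> int:
    """Pack the menu selections into one option octet:
    low nibble = retransmission policy, high nibble = error control."""
    return (ec << 4) | retx

def unpack_options(octet: int) -> tuple[Retransmission, ErrorControl]:
    """Decode the option octet; each field is independent of the others,
    so in hardware the fields can be extracted in parallel, on the fly."""
    return Retransmission(octet & 0x0F), ErrorControl(octet >> 4)
```

Because every function's value occupies its own field, a receiver can hand each field to the corresponding processing unit without any sequential parsing.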
The work is organized in the following way: the next section outlines possible approaches to improving protocol performance. Section 4 presents our horizontal protocol approach, while the selective-functionality feature is described in Section 5. A new approach to network interfaces is discussed briefly in Section 6. Section 7 shows a basic design example of the horizontally structured architecture. Section 8 concludes the work.
3 The Possible Approaches

There are several possible solutions to overcome the slow-software-fast-transmission problem:

1. Improve the performance of current high-layer protocol implementations ([1,7])
2. Design new high-layer protocols based on the current philosophy; i.e., software-based, layered structure ([8,9,5])
3. Hardware implementation of high-layer protocols ([10])
4. Reduction of high-layer protocol overhead (i.e., "lite" protocols)⁶
5. New design philosophy that differently structures the high-layer protocols ([6])
6. Migration of high-layer functionality to lower layers, in particular to the physical layer (for example, networks that perform some high-layer functions by trading the physical bandwidth; i.e., end-to-end flow control done at the physical layer in Blazenet [11])

The approaches were arranged according to how much they diverge from the conventional protocol design philosophy. Of course, there can be any combination of the above six approaches.

⁶We note that a "lite" (i.e., lightweight) protocol is created by adjusting the number of states and the amount of control information that is passed between the protocol states, in such a way as, on one hand, to maximize the protocol functionality (to reduce the overhead of the software-based application layer) and, on the other hand, to minimize the overhead caused by the low-level implementation of the protocol states (for example, the number of chips for hardware implementation or the number of page faults for software implementation).
Note that there exists yet another approach: reduction of lower-layer functionality. (For example, [12] proposes to solve the link-by-link flow control problem by dropping excessive packets and correcting the packet loss at the higher level (transport) through retransmissions.) We consider this approach, which is the opposite of the fifth approach, to be, in general, unable to solve the high-speed communication problem, since pushing the problems to higher layers introduces, in fact, larger delays (the implementation of higher layers is typically software-based, and thus slower). Also, in this specific case of the flow control example, the long timeout associated with detecting the dropped packets (which is done on an end-to-end basis) increases the overall delay. (We note, however, that pushing some of the functionality to higher layers simplifies the network design, since at the lower layers there is more traffic aggregation, and the total number of packets per second is larger, leading to more complex processing within the network. Thus, in some cases this approach may be appropriate.)
Every one of the six approaches outlined can potentially improve the protocol performance. However, major improvement in performance can be obtained by combining a few of the above approaches. For instance, even though changes in the current version of TCP/IP that reduce overhead might increase the protocol throughput to several hundred Mbps, hardware implementation of the improved version will improve the performance even further. Consequently, we believe that the "correct" answer to the question of how to implement high-speed protocols is an intelligent integration of several approaches.
In the next section, we consider a combination of the third, fourth, fifth, and (in a limited sense) the sixth approaches. The proposed architecture is based on three layers: the Network Access Control (the NAC layer), the Communication Interface (the CI layer), and the Application (the A layer). The NAC layer consists of all the services defined in and below the Network layer of the ISO/OSI RM. The CI layer consists of the services defined by the Transport, Session, and Presentation layers. The A layer corresponds to the conventional Application layer. (The reason for the proposed three-layer structure is that the services of the NAC are mostly hardware-based, while the services of the A layer are software-implemented. Thus the CI represents the boundary between the software and the hardware.) Thus, in our view, the model of the communication protocol architecture can consist of the three layers, where the NAC is hardware-based, the A is software, and the CI is a mixture of software and hardware. It is our belief that to achieve high-speed communication, the structure of CI must lend itself easily (with little overhead) toward parallel hardware implementation, and, as shown in the next section, the proposed three-layer structure provides a basis for such a parallel implementation.
4 Horizontally oriented protocol for high-speed communication

We propose here an alternative structure to the existing communication architectures. The central observation is that protocols based on extensively-layered architectures possess an inherent disadvantage for high-speed communication. While the layering is beneficial for educational purposes, strict adherence to layering in implementation decreases the throughput and increases the communication delay. There are several reasons for this reduction in performance, among them (see also [6]) the replication of functions in different layers, performance of unnecessary functions, overhead of control messages, and the inability to parallelize protocol processing.
The architecture presented here employs an approach quite different from that used in the extensively-layered models; i.e., the proposed architecture has a horizontal structure, as opposed to the vertical structure of multi-layered architectures. We refer to our architecture as the Horizontally Oriented Protocol Structure, or HOPS. The main idea behind HOPS is the division of the protocol into functions, instead of layers. The functions, in general, are mutually independent, in the sense that the execution of one function can be performed without knowing the results of the execution of another. (Thus intercommunication between the functions is substantially reduced.) For example, flow control and decryption are independent functions. If the dependency between two (or more) functions is such that the execution of one depends on the result of another, the function can still be conditionally executed. For example, packet resequencing is to be executed only if error control detects no errors. Thus, resequencing can be conditionally executed in parallel with error control, and at the end a (binary) decision made whether to accept or ignore the resequencing results.
Because of the independence between the functions, they can be executed in parallel, thus reducing the latency of the protocol and improving throughput.
Figure 1 shows the structure of HOPS. In this figure, the correspondence in services between HOPS and the ISO/OSI RM is also shown. Thus the CI of HOPS implements in hardware the services defined by layers 4 to 6. The meaning of hardware implementation is not (necessarily) that HOPS is fully cast into silicon, but that specific hardware exists to perform the functions, rather than relying on the host software. Thus HOPS can be implemented as a collection of custom-designed hardware and general-purpose processors.
CI receives the raw information (unprocessed packets) from the Network Access Control (NAC) layer. The central layer in HOPS is the Communication Interface
(CI). CI is divided into parallel (and independent or conditionally-independent) functions. Before the results of the functions are passed to the Application layer, they are evaluated in the Connector. The Connector executes the conditional dependency among the functions, and passes the processed information to the Application layer.

Figure 1: HOPS (the Network Access, CI, and Application layers shown against the corresponding ISO/OSI layers)

HOPS is expected to lead to high-performance implementations for several reasons. First, because of the horizontal structure of functions, the delay of packet processing is determined by the slowest function, rather than by the sum of all delays. This is achieved in HOPS by the independence characteristic of the functions. Thus a function need not wait for the result of another function before its execution can begin. Second, the throughput can be easily improved by increasing the number of units of capacity-limited functions. Such an increase is rather natural in the parallel-structured CI. Third, because of the compression of layers, much of the replication and overhead is eliminated (such as buffering on different layers, for example). Fourth, the horizontal structure lends itself to parallel implementation on separate (possibly customized) hardware. The parallel implementation by itself has the potential of lowering the processing delay and increasing the processing throughput. Also, the overhead associated with switching between the execution of functions is grossly eliminated, as are the communication messages between the processes. Finally, the selective functionality feature⁷ can eliminate unnecessary function processing.
We also believe that in order to achieve the limit of performance, one should implement the HOPS-structured protocols in custom-designed hardware.
It should be noted that HOPS, as well as other solutions based on the fourth, fifth, and sixth approaches described in Section 3, are not compatible with the ISO/OSI model, and, in fact, violate the model boundaries.

⁷See Section 5.

5 HOPS as a Selective Functionality Protocol

Because HOPS is intended to support communication over diverse networks and for diverse applications, a single protocol cannot provide optimum performance. For example, the retransmission policy depends on the quality of the network: selective retransmission is better for networks with a large average BER, while go-back-n may be beneficial in a very reliable environment. Moreover, the requirements for a protocol may change with time and space; for example, increasing congestion may change the retransmission policy, or the required error-control mechanism may differ from subnet to subnet. Consequently, what we propose is a "protocol with a menu," whereby a user will request some combination of functions that are needed to achieve some particular level of performance. For instance, the function retransmission may be designed to receive the following values: selective, go-back-n, stop-and-wait, none, any. The network interface (NI) has to decide on the particular retransmission policy required. If the NI has some knowledge about the subnets the packet is going to travel on, then the NI can make an intelligent decision on the required policy. The NI can change its decision with time, if it learns that the conditions have changed or that its previous decision was incorrect.
The function values are communicated throughout the network by means of the special option field in the packet format, as shown in Figure 2. The values are decoded separately, in parallel, and "on the fly" at packet arrival time.

Figure 2: HOPS packet format (the header carries, among other fields, an address field and the per-function option values)

There is another, very important advantage of the selective functionality feature: possible compatibility with current protocols. It cannot be expected that there will be an immediate transfer from the rooted architectures and protocols. Thus protocols like TP4/CLNP will continue to exist, and it is of paramount importance that the current and the new high-speed-oriented protocols interwork.
The selective functionality approach enables one to compose any protocol from the extended menu. Thus, for example, communication between a TP4/CLNP site and a HOPS site can be easily achieved by an adaptation layer (to be abolished with time) providing simple translation of the TP4/CLNP packets to the HOPS format. Since, by virtue of the selective functionality feature, HOPS is a superset of all the TP4/CLNP functionalities, such a translation is possible.
Initial considerations of the HOPS architecture suggest that the delay and throughput will be determined by the slowest and capacity-limited functions. However, as opposed to conventional protocols, these limits are not fixed, and they change with the particular functionalities required by the network/application pair. Thus, transmission over a highly reliable network may skip most of the overhead associated with resequencing. Moreover, custom-design implementation may increase the performance of the limiting functions.

6 The structure of the network interface

The network interface (NI) is the hardware that interconnects the network and the host by receiving the packets addressed to the host and by transmitting the host output traffic over the network (Figure 3). The network interface proposed in this work also performs the high-layer protocols. Thus the NI is the actual hardware that implements the Communication Interface of HOPS.

Figure 3: Network Interface (the NI connecting the host to wide-, metropolitan-, and local-area networks)

The goal of the NI is to offload the protocol processing from the CPU. For example, in the NAB (Network Adapter Board) ([4]) the CPU initiates information exchange, but does not participate in the actual transfer of data. The NAB does substantially reduce the host load, thus increasing the interface bandwidth. We want to push this idea even further, and eliminate even more of the CPU's involvement in the communication process. In other words, we intend to:

- Eliminate the CPU involvement in processing of the transport functions.
`
- Eliminate the CPU involvement in processing of the presentation functions.

- Reduce the CPU involvement in processing of the session functions.

- Reduce the overhead associated with the communication between the NI and the OS (i.e., interrupts, scheduler, etc.).

- Eliminate the buffering and copying of information from the OS space into the user space.
`
The elimination of CPU involvement in processing the transport and the presentation functions is done by fully performing these functions within the CI layer by the NI, as explained in Section 4.
Reduction of CPU involvement in the processing of the session functions is a bit tricky, since obviously some processing must be done by the OS. For example, opening a new session usually involves space allocation to the user. This is done by the OS. However, most of the bookkeeping and some processing can be done by the NI; i.e., negotiation of parameter values, management of the dialogue once the connection is established, etc.
The reduction in communication between the NI and the OS is performed in several ways. First, the NI keeps a pointer to the user space, so that it can write the received data directly to the user buffers. This eliminates interrupts when data is received. (This also eliminates the overhead associated with copying the data from OS space into user space.) Second, the NI has access to the scheduler and monitor tables, directly adjusting the tables upon reception of data. When the scheduler is invoked, it checks on the status of the pending communication processes by referring to the NI-accessible table. Finally, the scheduler itself can be implemented as a different module.
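The NI/OS contract just described can be modeled in a few lines. Every name below is a hypothetical stand-in: a real NI would write host memory via DMA, not Python structures:

```python
# Shared state: in hardware this would be host memory the interface
# writes directly, plus a scheduler table it is allowed to update.
user_buffers: dict[int, bytearray] = {}   # process id -> user-space buffer
ready_table: dict[int, bool] = {}         # NI-maintained completion flags

def ni_receive(pid: int, data: bytes, offset: int) -> None:
    """The NI places received data directly in the user buffer and marks
    the completion flag: no interrupt and no OS-space copy is involved."""
    user_buffers[pid][offset:offset + len(data)] = data
    ready_table[pid] = True

def scheduler_invocation(blocked: list[int]) -> list[int]:
    """At its regular invocation, the scheduler consults the NI-accessible
    table to learn which blocked processes completed their communication."""
    return [pid for pid in blocked if ready_table.get(pid, False)]
```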
Some remarks are called for here. First, it may be necessary to keep the user buffers aligned on a word boundary. Second, either paging is inhibited (this may make sense when the data to be communicated is small, or when the machine is dedicated to a single job), or the NI must be informed of paging (so that it does not write into space that does not belong to the destination user). If paging occurs, the OS needs to provide a buffer assigned to the user (this might be a disadvantage, since once the user is running again, the data needs to be transferred between the OS and the user space). Another approach is to use communication buffers located in common space that belongs to the OS (to the NI, more specifically) and to the user process. The NI writes the received data into these buffers, which can be directly (without copying) accessed
`
by the user. Third, there must be a very intimate relation between the OS and the NI; the NI must have access to OS tables. The NI access to the OS data structures may be system dependent. Finally, the resequencing, if needed, can be done directly in the user buffers. The NI leaves "holes" in the buffer for una

Figure 4: The proposed communication model

CPU only if some other action is required to be performed. For example, when a new session open request arrives, the request will generate a new entry in the TOP; however, the CPU will be interrupted to allocate space for the new session.

- new connection-oriented packet: the packet is placed directly in the process space, without CPU intervention. If necessary, control information is set in the NI for future process incarnation.

- new request-response¹⁰ packet: managed directly by the NI by interpreting the request and directing it to the space of the service. If the service is not recognized, the CPU might be interrupted.

- new datagram packet: same as the request-response packet.

¹⁰In the transactional communication model ([8]), a request sent to a server and the server's response are both performed in the connectionless mode.

7 Example of HOPS Design Implementation

In this section, we provide an example to illustrate how the HOPS architecture can be implemented. The implementation presented here includes the receiver side only. Also, it is somewhat simplified, in the sense that not all the data structures are formally defined.
The following functions are incorporated into the design:

- Error-control
- Retransmissions
- Connection-option
- Sequencing
- Flow-control
- Addressing
- Presentation
- Session-management
- Congestion-control

The structure of the design, presented in Figure 5, shows the flow of information between the various blocks that implement functions. The parallel structure is emphasized. In what follows, each function is described by its attributes: the values that the function can receive, the functionality performed, what input is received by the function, and what output is produced.

Error-control
Values: detection, n-correction, correction.
Functionality: bit-detection and n-
