`
1. I am currently the Production Manager at USENIX, The Advanced Computing Systems Association. In that role I am responsible for USENIX publications. I have been employed at USENIX since 2012.

2. I have been asked to confirm certain dates regarding the following article: Marc F. Pucci, “Configurable Data Manipulation in an Attached Multiprocessor,” Computing Systems, Vol. 4 No. 3, Summer 1991 (“the Pucci article”), a true and correct copy of which accompanies this declaration as Exhibit A. Additional true and correct excerpts from the issue of Computing Systems, Vol. 4 No. 3, Summer 1991, in which the Pucci article appeared, are attached as Exhibit B.

3. My statements are based on my understanding of USENIX’s general practices and procedures for publishing in the ordinary course of business, as well as any records available to me regarding USENIX’s publishing activities.

4. I can confirm, based on the above, that USENIX arranged for publication of the Pucci article. The Pucci article was originally published in Computing Systems 1991, a quarterly publication of the USENIX Association. I have reviewed the Summer 1991 issue of Computing Systems (Vol. 4 No. 3), and have determined that the Pucci article was published in a paper version in the Summer of 1991.

5. Based on the above, I understand that it would have been USENIX’s practice to distribute paper versions of the publication Computing Systems to at least USENIX Association members and the Editor in Chief, Editorial Advisory Board, Staff, and Contributors of the publication. Members of the public would have been able to purchase the issue from USENIX beginning in the Summer of 1991.

Apple 1052
U.S. Pat. 9,189,437
`
`I hereby declare that all statements made herein of my own knowledge are
`
`true and that all statements made on information and belief are believed to be true;
`
`and further that these statements were made with the knowledge that willful false
`
`statements and the like so made are punishable by fine or imprisonment, or both,
`
`under Section 1001 of Title 18 of the United States Code.
`
`Executed this 19th day of September 2016.
`
Respectfully submitted,

Michele Nelson
`
`
`
`
`
`EXHIBIT A
`
`
`
`
Configurable Data Manipulation
in an Attached Multiprocessor

Marc F. Pucci, Bellcore
`
ABSTRACT: The ION Data Engine is a multiprocessor tasking system that provides data manipulation services for collections of workstations or other conventional computers. It is a back-end system, connecting to a workstation via the Small Computer Systems Interface (SCSI) disk interface. ION appears to the workstation as a large, high speed disk device, but with user extensible characteristics. By mapping an application's functionality into simple disk read and write accesses, ION achieves a high degree of application portability, while providing enhanced performance via dedicated processors closely positioned to I/O devices and a streamlined tasking system for device control.

The programming model for ION supports the notion of separation of control function from data transmission. Typically, a small list of data manipulation directives is transmitted from the workstation to the ION node, where data filtering or other forms of processing occur. Only results, as opposed to all data, need be returned to the workstation. In the extreme case, the ION system can acquire all input data and generate all output data, without any processing occurring in the workstation. An example application uses a simple set of directives to capture and digitize high quality stereo audio, mix it to monaural, rate adjust the digitized samples to ISDN rates, convert from binary to mu-law encoding, and transmit the result to a workstation.

© Computing Systems, Vol. 4 No. 3, Summer 1991   217
ION is being used as an experimental platform for voice mail services in a user-programmable telephone switch prototype, and as a tool for measuring the I/O performance of computer-disk interfaces. Applications under development include an automated camera positioning system and an object repository.
`
1. Introduction

The workstations that exploit the rapidly advancing state-of-the-art in processor technology can often be a bane to developers of applications that utilize dedicated special purpose hardware or that impose strict access requirements on conventional hardware. Such evolving systems can suffer from:

• Constantly porting hardware dependent components to new hardware.
• Being locked into a particular vendor to avoid major hardware disruptions.
• Forcing the use of high-end stations because entry-level stations are not easily expandable.
• Constantly upgrading local workstation based device drivers to coexist with operating system releases.
• Relying upon an operating system that is not appropriate for the system's functionality.
• Insufficient workstation capacity to support the hardware requirements of the application.
`
`Applications tied to obsolete processor technology will soon suffer
`from comparative performance problems as newer workstation technol-
`ogy passes it by. However, interfacing new workstations to an existing
`hardware base is not simple. Initial workstation offerings often possess
`
`
`
`
`meager expansion characteristics, typically just a disk and network
`connection, so achieving even the electrical connection can be
`difficult.
`Ideally, utilizing a new workstation should entail only simple re-
`compilation of the application code; however, machine dependencies
`that result from the use of special purpose hardware complicate a code
`port. Workstation hardware may not be portable to different manufac-
`turer's stations or even across a line of workstations from the same
`vendor. This can lead to the loss of a significant hardware investment
`as working components must be redesigned. Supporting multiple ver-
`sions of hardware in order to preserve customer satisfaction with older
`configurations can also be expensive. Even hardware common to mul-
`tiple stations, which is currently possible since many stations now offer
`VME bus interfaces, may still require device driver changes and must
`also track operating system variations from release to release.
`An additional problem of using special purpose hardware on a
`conventional workstation is that the internal structure of the host oper-
`ating system may not be conducive to the requirements of the hard-
`ware. It may be preferable to model an application into subtasks, each
`with its own flow of control; however, the relatively expensive context
`switch time for a general purpose operating system may make such an
`implementation infeasible for performance reasons. Also, the data
`rates generated by some hardware may have a detrimental effect on
`other functions in the workstation. In general, it is best to place com-
`pute power as close as possible to the source of data, passing only re-
`sults or preprocessed information on to higher levels in the system. In
`this manner, devices requiring rapid response need not interfere with
`time-sharing operations.
`ION addresses these problems by partitioning an application into
`hardware dependent and independent components, and providing a
`vendor independent interface between the two. The hardware indepen-
`dent components reside in the workstation, and are therefore easily
`ported to new architectures. The hardware dependent components are
`situated within a separate backplane-based environment, which is
portable in its entirety across workstation changes. The low level connection between these components is the Small Computer Systems Interface (SCSI) disk interface, ANSI X3.131. Since each workstation
accesses ION using its local disk system, which is a stable, well-defined interface, there is no need to change vendor supplied host system software. Current SCSI performance capabilities also provide a respectable (5 megabyte per second) data access rate.

Figure 1. An ION system. Multiple workstations connect to an ION node, which contains single board computers and other peripheral interfaces and devices (A-to-D converters, D-to-A converters, a video frame grabber, special purpose hardware, private disks, and ION disks). Each workstation views its ION connection as though it were a large conventional disk drive.
ION configurations are expandable and sharable as needs dictate. Additional single board computers (SBC's) in a backplane can connect multiple workstations to the same set of hardware resources, or provide extra CPU cycles for I/O devices or applications that require it. Further expansion is possible by using bus repeaters and local area networks to interconnect multiple ION nodes together. The basic structure of an ION system is shown in Figure 1.
`
`2. The ION Interface
`
`A workstation sees ION as though it were physically a local disk drive
`(an ION drive) with a data capacity of 2 terabytes (the SCSI limit).
`Software running within the ION system mimics the behavior of a
`conventional device, providing the workstation with a peripheral that
`it knows how to deal with. The "data" contained in this pseudo-disk
`
`
`
`
device can be random read/write data, traditional file system data, or more complex objects for a variety of applications managed by tasks running within the ION system. The latter is implemented by defining application specific functions, called actions, that are enabled by reading or writing specific disk block addresses within the ION drive.
`For example, ION supports an analog to digital (A-to-D) conver-
`sion application that provides voice messaging service for a prototype
`telephone switch. The bulk of the application resides in a conventional
`workstation, while the peripheral devices are located within ION. The
`application's interface to the A-to-D converters is implemented as an
`action defined on a set of 5 disk block addresses, each corresponding
`to 1 of the 5 analog channels. The controlling program within the
workstation merely reads from one of these designated disk block addresses to obtain the converted data (lseek() followed by read() in the
`Unix domain). By defining such interactions in terms of standard disk
`read and write accesses, the application remains portable across work-
`station changes, operating system releases, and to a large degree, com-
`plete operating system changes (e.g., Unix to VMS), while preserving
`any existing special purpose hardware investments.
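The host side of this access pattern can be sketched as below. Only the lseek()-then-read() pattern comes from the text; the block size and the channel-to-block mapping are invented here for illustration, since the paper does not give ION's actual values.

```c
/* Host-side sketch of reading an A-to-D action block. The block
 * size and channel-to-block mapping are illustrative assumptions;
 * only the lseek()/read() pattern is taken from the text. */
#include <fcntl.h>
#include <unistd.h>

#define ION_BLOCK_SIZE     512   /* assumed sector size                */
#define CHANNEL_BASE_BLOCK 1000  /* assumed start of the 5-block range */
                                 /* mapped to analog channels 0..4     */

/* Byte offset of the disk block that triggers channel n's action. */
off_t channel_offset(int channel)
{
    return (off_t)(CHANNEL_BASE_BLOCK + channel) * ION_BLOCK_SIZE;
}

/* Seek to the action's block and read: to the host this is an
 * ordinary disk access, but inside ION it invokes the conversion
 * action and returns fresh sample data. */
ssize_t read_channel(int fd, int channel, void *buf, size_t len)
{
    if (lseek(fd, channel_offset(channel), SEEK_SET) == (off_t)-1)
        return -1;
    return read(fd, buf, len);
}
```

Because only standard file operations appear here, the same code compiles unchanged on any host that can open its SCSI disk device, which is the portability argument the paper is making.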
`A further advantage of the disk-like interface of ION is its robust-
`ness in the face of application failure. Since ION mimics a local disk
`drive, the worst case scenario for failure merely results in the apparent
`symptom that the ION drive has gone into an offline condition equiv-
`alent to a real drive losing power or spinning down. This should not
`have any long lasting effect on the workstation and is remedied by re-
`booting the ION system.
`
`3. System Architecture
`
`The hardware configuration of an ION node is shown in Figure 2. The
`current ION configuration uses high speed Motorola 68030 micropro-
`cessor based single board computers (SBC's). A port to an Intel 960
`based product is underway, although the current system only deals
with homogeneous processors. These processors offer sufficient power
`for the current set of ION I/O devices and will be upgraded to faster
`processors when more demanding peripherals are in use.
`
`
`
`
Figure 2. An ION node. Each node contains a dedicated single board computer (SBC) to manage each workstation interface. Other SBC's control local storage, manage object repositories, control additional I/O devices, or run application code. (The figure shows CPUs with private memory and SCSI bus interfaces on a VME backplane, connecting to individual workstations, local ION storage, and application peripherals.)
`
`An SBC is dedicated to each workstation connection, primarily be-
`cause most hosts insist on using the same SCSI bus address. Each SBC
`contains its own SCSI interface chip and DMA interface, and is capa-
`ble of transferring data directly into its private memory without creat-
`ing external (VME) bus activity. Additional disk interfaces are used to
`control local node storage, which may consist of file system data or
`application managed object repositories. Large buffer memory, on the
`order of hundreds of megabytes, is used as a cache for physical device
`data. A network interface (Ethernet) connects an ION node to other
`hosts.
`SCSI exploits the use of programmed intelligence within each
`device on the bus, offloading many functions otherwise performed by
`the host. It is the fastest common interface supported by a variety of
`workstations. None of the design issues in ION are constrained to
`SCSI, and the current version of the system uses Ethernet and serial
`line interfaces for additional connectivity. Until better interfaces are
`commonly available, SCSI provides the fastest, common, expansion in-
`terface and has a defined next generation architecture offering
`significantly higher performance (40 megabytes per second SCSI-2).
`Additional details on SCSI can be found in the appendix.
`
`
`
`
`3.2 Software
`
ION is implemented as a fast tasking system, specifically designed to support peripheral devices by adding flexible processing power to their functioning, and then interfacing the enhanced device to other conventional computers in an easy and portable manner. Modularity is
`achieved by dedicating tasks to specific system functions, and passing
`requests for service through client-server transactions on the same or
`other ION processors.
`All ION tasks are memory resident and execute with their own
`flow of control. Although they share a common address space, tasks
`typically do not share any data, relying instead on the fast communica-
`tions mechanisms available in the system. Most tasks are simple filters
`that read an input queue, process data, and pass results to an output
queue. Tasks are classified into 3 fixed priority groups: interrupt, normal, and background. Within a group, tasks are non-preemptive and run until completion or resource unavailability. Across groups, tasks
`can be preempted, essentially allowing interrupt tasks to respond to
`hardware as quickly as possible.
While state machines rather than multiple tasks are often used for managing disk-like operations [2], the recursive state machines necessary for SCSI are difficult to design and enhance. Alternatively, assigning a task to the management of each individual workstation connection and I/O device simplifies the coordination of multiple objects,
`which in turn allows for easier parallelization of I/O activities. Individ-
`ual SCSI tasks manage their own disconnect/reconnect behavior on the
`SCSI bus on a device by device basis. The multiple flows of control
`offered by the tasking system are useful for application as well as sys-
`tem functions. Such operations as consistency management, network
`control and routing, and recovery management are more easily de-
`signed as separate tasks. Multiple processors fit naturally into such an
`environment and provide needed power and responsiveness for han-
`dling multiple powerful workstations and devices.
`While acknowledging that multiple tasks can lead to a loss of per-
`formance [3], ION alleviates this by exploiting certain characteristics
`of its environment: With a single address space and no need for the
`complete functionality of a general purpose operating system, many
`
`
`
`
optimizations are possible. Task switching, event synchronization and
`interrupt response time have been designed with minimal overhead.
`Table 1 summarizes some of these characteristics for the 68030-based
`system. The cost of using a tasking system over a conventional state
`machine can be seen in some of the performance measurements pre-
`sented in Section 6.
`
Null subroutine time                                  1 μs
Interrupt dispatch time                               4 μs
Null interrupt service time                           9 μs
Task switch                                          25 μs
Event synchronization                                16 μs
Simple system call                                    8 μs
Same processor null client/server interaction       105 μs
Remote processor null client/server interaction     140 μs

Table 1: ION system characteristic measurements.
`
`Interrupt dispatch time is the delay between when an I/O device
`signals its need for service and when its interrupt service routine is en-
`tered. Null interrupt service time is the time needed to save and re-
`store pre-interrupt state and increment a counter. The task switch time
`measures the delay incurred when one task suspends and a second task
`continues execution without any specific form of synchronization. It is
mostly the time required to save and restore 2 sets of the general purpose registers of the 68030. Event synchronization time must be added to the basic task switch time when 2 tasks synchronize through the data queueing and dequeueing primitives. A simple system call is similar in timing to the null interrupt time since much of the same functionality must occur.
`The null client/server interactions are essentially remote procedure
`call (RPC) interfaces between cooperating tasks. When on the same
`processor, this involves task-switching the receiving and sending tasks,
`queueing and dequeueing the request and response data, and determin-
`ing the location of the sending and receiving queues. When the RPC
`crosses processor boundaries, the timing includes the single processor
`case above plus interrupt latency for sending and receiving, and extra
`
`
`
`
interrupt processing necessary in the processor-to-processor communications functions. (A single interrupt indicates message reception from multiple processors in the system, so a number of input sources must be checked for the presence of a message.) This interprocessor communication takes place over a shared backplane bus, and copying of data can therefore be avoided. It should be noted that the processors are essentially independent of each other: they do not share tables or other system data. All interactions occur by placing data in appropriate queues.
`The point of the above measurements is not that ION is the fastest
`system around (it is not), but that a system constructed out of tradi-
`tional piece parts and written in a high-level language is capable of
`performance that encourages the application designer to use separate
`independent tasks for services rather than constructing large complex
procedures. The 25:1 ratio between task switch time and null subroutine execution is encouraging, but is still an order of magnitude higher than desired. More favorable is the 3:1 ratio of task switch time compared to a null, non-blocking system call, which indicates that the overhead associated with a request for service is close to becoming independent of how that service is implemented. The 30% overhead of the remote null-RPC over the local null-RPC is also encouraging, and suggests that the location of a service need not be constrained by closeness to its clients.
`
3.3 Internal ION System Services

The system primitives provided by ION fall into 5 categories:

Task control: Create or destroy a flow of control. Tasks can be created at interrupt time.

SCSI application interface: Define the set of disk block addresses to which an ION application will respond, and the action function to invoke when a workstation reads or writes a block from the set.

SCSI hardware interface: Exchange data with the workstation across the SCSI bus. Also, disconnect and reconnect from the bus to improve bus utilization.

Message exchange and task synchronization: Queue and dequeue messages. This is also the only task synchronization facility available in ION, essentially combining task activation with data availability.
`
`
`
`
Figure 3. Data queueing model. Queues are used to pass data between tasks, are of infinite length, and never block on a "queue put" operation. Queues are the only mechanism for task synchronization in the system. Data flows in a closed-loop path between tasks to control resource consumption. (The figure shows a producing task feeding a queue that a consuming task drains, with a free-list queue returning buffers from consumer to data source.)
`
Message buffer manipulation: Allocation and duplication of system
`buffer memory. Three types are available: cached, uncached, and ex-
`ternal bus (VME) memory. Cached memory is traditional system
`memory that can be cached by the processor's memory system hard-
`ware for faster access. Uncached memory is used for regions of local
`memory that can be changed by I/O devices or external bus refer-
`ences. This class of memory is necessary for SBC's that do not have
`snooping caches. VME memory is the pool of external memory avail-
`able for buffering large quantities of device data.
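The SCSI application interface primitive above can be sketched as a table of block ranges with attached action functions. Everything here (names, signatures, the example channel mapping) is invented for illustration; the text does not specify ION's actual API.

```c
/* Illustrative sketch of "define the set of disk block addresses
 * to which an application will respond": a table of block ranges,
 * each with an action function. Names and signatures are assumed,
 * not taken from ION. */
#include <stddef.h>

typedef int (*action_fn)(unsigned long block, int is_write,
                         void *buf, size_t len);

struct action {
    unsigned long first_block, last_block;  /* inclusive range */
    action_fn     fn;
};

#define MAX_ACTIONS 32
static struct action actions[MAX_ACTIONS];
static int n_actions;

int define_action(unsigned long first, unsigned long last, action_fn fn)
{
    if (n_actions == MAX_ACTIONS)
        return -1;
    actions[n_actions].first_block = first;
    actions[n_actions].last_block  = last;
    actions[n_actions].fn          = fn;
    n_actions++;
    return 0;
}

/* Called by the SCSI task for each host access; returns 0 when no
 * action matches, so the block falls through to plain disk I/O. */
int dispatch(unsigned long block, int is_write, void *buf, size_t len)
{
    for (int i = 0; i < n_actions; i++)
        if (block >= actions[i].first_block &&
            block <= actions[i].last_block)
            return actions[i].fn(block, is_write, buf, len);
    return 0;
}

/* Example action: maps 5 blocks starting at 100 to A-to-D channels
 * 0..4 and reports which channel was addressed (offset by 1 so a
 * match is distinguishable from the no-match return of 0). */
int atod_action(unsigned long block, int is_write, void *buf, size_t len)
{
    (void)is_write; (void)buf; (void)len;
    return (int)(block - 100) + 1;
}
```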
`
`3.4 The Data Queueing Model
`
The queueing of data objects for task-to-task communication uses a straightforward producer/consumer model as shown in Figure 3. The
`queue structures do not contain space for the pointers to the queued
`objects, but rather some space is claimed in the object for this pur-
`pose. In this manner, queues are effectively of "infinite" length, where
`any task capable of creating an object is guaranteed to have a place to
`put it. Tasks never block placing data on a queue, but only by de-
`queueing from an empty one.
ION places minimal structure on passed data: the data are manipulated by:

• list pointers used for queue linkage,
• a free-list pointer used when a buffer is no longer needed, and
• buffer length, data length, and data offset indicators used for defining the amount of data within the object.
`The data description is flexible enough to allow the expansion and
`contraction of a data object without requiring recopying. The offset
`and data length describe the location of data within the physical
`buffer. As an object is passed through a set of tasks, data can be
`added or removed from either end by adjusting these values within the
`confines of the actual buffer size. Thus, for example, protocol layers
`can add wrappers to data without extra copying.
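The offset/length bookkeeping just described can be sketched with a buffer descriptor like the one below. The field names and physical buffer size are assumptions for illustration, not ION's actual layout.

```c
/* Buffer descriptor sketch: list linkage, a free-list pointer, and
 * offset/length fields locating the valid data inside the fixed
 * physical buffer. Field names are assumptions, not ION's layout. */
#include <stddef.h>
#include <string.h>

struct ion_buf {
    struct ion_buf *next;     /* list pointer for queue linkage */
    struct ion_buf *freelist; /* queue to return the buffer to  */
    size_t          buf_len;  /* size of the physical buffer    */
    size_t          offset;   /* where valid data starts        */
    size_t          data_len; /* how much valid data there is   */
    unsigned char   data[256];
};

/* Prepend n bytes (e.g. a protocol wrapper) by moving the offset
 * back; the existing payload is never copied. */
int buf_prepend(struct ion_buf *b, const void *hdr, size_t n)
{
    if (b->offset < n)
        return -1;            /* no headroom left in the buffer */
    b->offset   -= n;
    b->data_len += n;
    memcpy(b->data + b->offset, hdr, n);
    return 0;
}
```

A matching tail-append would grow data_len toward buf_len, giving the expansion at "either end" the text describes.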
The queue structures contain two optional activation procedures that are invoked during queueing and dequeueing. A put routine is called after a data object has been placed on an empty queue. This allows a hardware device, which is not being maintained by a task, to be "kicked" in order to reactivate I/O. The I/O completion interrupt is used to maintain data flow when the device is active. A get routine is called before a dequeue operation on an empty queue. This allows a "status queue" to be constructed which will return information about an application as it exists immediately before the dequeue operation.
`
`3.5 Flow Control
`
`Flow control is the mechanism used to control the rate at which data
`moves about in a system. It prevents resource starvation by limiting
`the activity of tasks that generate data too quickly for consuming tasks
`to absorb. The conventional mechanism for flow control uses an upper
`and lower limit on the number of elements that can be stored in a
`queue (also known as a high and low-water mark). A process that at-
`tempts to queue more than the upper limit will suspend execution. It
`will be reactivated when consuming tasks reduce the number of ele-
`ments below the lower limit for the queue. These two values add hys-
`teresis and prevent the thrashing which can occur if a single maximum
`were imposed on queue size.
`Several complications exist with this scheme. Scheduling is im-
`pacted because tasks can be suspended at queueing time, rather than
`just dequeueing time. The ordering of queue operations cannot be
`guaranteed across separate tasks since it is impossible to predict the
closeness of a queue to its high-water mark. The number of buffers in
use by a pipe-line of cooperating tasks will grow with the number of
`
`
`
`
`tasks. This can stress the stability of the system's buffering mecha-
`nism. Finally, tasks must be designed with the understanding that
`any given queue operation can cause task suspension, rather than the
`more intuitive approach of suspension occurring only when data are
`requested.
`Because ION tasks do not block on queueing, flow control cannot
`be implemented in this manner. Instead, closed-loop paths are estab-
`lished that link together the original producer of data to the final con-
`sumer as well as all intermediate tasks. The advantage of such a
`scheme is that the maximum number of buffers that will be used by a
particular application is constant and can be preallocated. Also, a runaway task will be throttled back when it waits for a buffer to be released by its last consumer. This scheme also permits trivial tuning of
`the amount of read-ahead or pre-fetching that an application can per-
`form, since this will be limited by the number of assigned buffers in
`the pipe-line. Finally, the system does not waste time constantly per-
`forming dynamic buffer allocation and deallocation, which can require
`searching and coalescing of free space. Buffers remain in use until an
`application terminates, and are then returned to the global memory
`pool.
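The closed-loop scheme reduces to a small sketch: a fixed, preallocated pool whose exhaustion is what throttles the producer. The pool size and function names below are illustrative assumptions.

```c
/* Closed-loop flow control in miniature: POOL_SIZE buffers are
 * preallocated onto a free list; a producer that has consumed the
 * whole pool cannot run again until the final consumer releases a
 * buffer. Names and sizes are illustrative assumptions. */
#define POOL_SIZE 4

static int free_ids[POOL_SIZE];
static int free_top;

void pool_init(void)
{
    for (int i = 0; i < POOL_SIZE; i++)
        free_ids[free_top++] = i;
}

/* Producer side: take a buffer, or learn it must wait (-1). In the
 * real system "waiting" is a blocking dequeue on the free list. */
int acquire(void)
{
    return free_top > 0 ? free_ids[--free_top] : -1;
}

/* Final consumer side: returning the buffer is what re-enables the
 * producer, closing the loop. */
void release(int id)
{
    free_ids[free_top++] = id;
}
```

Read-ahead tuning falls out for free: a pipeline fed from this pool can never hold more than POOL_SIZE buffers of prefetched data.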
`
3.6 Specifying the Data Manipulation Description

A data manipulation list is used to describe the interconnection of tasks, define buffer allocations, and construct the closed-loop free-list circuits. The syntax is similar to the I/O redirection of Unix, except that the outputs and inputs specify queues, not files, and can indicate multiple inputs or outputs:

command argument < input-queue1 input-queue2 ...
                 > output-queue1 output-queue2 ...

where a queue specification includes:

queue-name:free-list(number-of-buffers,size-of-buffer)

The command can identify a built-in generic task such as a mixing or duplicating operation, or can specify hardware interfaces, which should generate or accept data from the indicated queues. The freelist specification is used to supply buffers to a task that must create data, rather than simply modify existing data. The buffering details are optional, and a default size and number exists for each hardware device.

Figure 4. Task connections for an audio mix application. The closed-loop free-list connections are omitted for clarity, but feed into the atod and dup tasks, and drain from the dtoa, mix and scsi tasks.
Figure 4 illustrates a simple example of an application used to mix a stereo source of analog data into a single stream, duplicate the stream for 1) monitoring at a loudspeaker, and 2) mu-law compression and output to a workstation by reading from an indicated SCSI I/O address.

The ION description of this behavior is shown in Figure 5, and is sent in a data buffer to a particular SCSI block address, placing it into a corresponding input queue. A previously created task that awaits input from that queue accepts the list, creates the tasks and buffers, and activates all the I/O devices. Further instructions can be sent to tear down the connections by disabling the atod devices. This will cause an end-of-file indication to pass through the system as tasks discover their inputs have closed and in turn close their outputs. Further details on the internal construction of a similar data manipulation can be found in the example application section on analog to digital conversion.
`
atod 0 > a:f(50)
atod 1 > b:f
mix a b > c
dup c > d e:f
dtoa 2 < d
mulaw e > g
scsi 32 < g

Figure 5. Data manipulation description. A short sequence of instructions is prepared by the workstation to configure the data acquisition tasks.
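Directive lines like those in Figure 5 could be tokenized by a parser along the following lines. It assumes whitespace-separated tokens and keeps freelist annotations such as "a:f(50)" attached to the queue name; the struct and function names are invented, and this is not ION's actual parser.

```c
/* Toy parser for a data manipulation directive: tokens before any
 * redirection go to argv, tokens after '<' are input queues, and
 * tokens after '>' are output queues. Illustrative only. */
#include <string.h>

struct directive {
    char *argv[16];    /* command and its arguments */
    char *inputs[16];  /* queue names after '<'     */
    char *outputs[16]; /* queue names after '>'     */
    int   nargv, nin, nout;
};

/* Tokenizes line in place; '<' and '>' switch the destination list. */
void parse_directive(char *line, struct directive *d)
{
    char **dst = d->argv;
    int   *n   = &d->nargv;

    d->nargv = d->nin = d->nout = 0;
    for (char *tok = strtok(line, " \t"); tok; tok = strtok(NULL, " \t")) {
        if (strcmp(tok, "<") == 0) {
            dst = d->inputs;  n = &d->nin;
        } else if (strcmp(tok, ">") == 0) {
            dst = d->outputs; n = &d->nout;
        } else {
            dst[(*n)++] = tok;
        }
    }
}
```

The receiving task would then look up the command ("mix", "dup", a hardware name), create the task, and wire its queues by name.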
`
`
`
`
`4. Relationship to Other Systems
`
`ION's tasking system is similar in scope to such minimal kernels as
Alpha [5], Arts [6], Chaos [7], Mach [8], Ra [9], Spring [10], Synthesis [11], V [12], and others. However, ION is specifically geared towards supporting peripheral devices and then interfacing the resultant
`modified, intelligent peripheral to other conventional computers in an
`easy and portable manner. Most of these systems provide similar task
`manipulation services and are optimized for low latency service re-
`quests. Unlike the Spring kernel, ION does not support dynamic dead-
`line guarantees for service [13], but uses a closed-loop control path
`with a fixed number of preallocated buffers to control service by limit-
`ing availability. This scheme prevents a task from using excessive ca-
`pacity by restricting its rate of production to the rate of consumption,
`while controlling the amount of read-ahead that can occur in an appli-
`cation. Although ION requires the user to assign fixed scheduling pri-
`orities to tasks, which is unnecessary in Spring, the use of 3 levels (in-
`terrupt service, normal task service, and background service) and the
`closed-loop buffer mechanism has proved sufficient for all applications
`constructed to date.
`The programming model of ION resembles a large-grain data flow
`system [14] such as that used in Max [15]. It is also similar to the
`Unix Streams package [16] although ION queues are not associated in
`pairs, and the processing of queue data is done by the Unix equivalent
`of a process rather than at interrupt time. The information flowing be-
`tween stream queues is typed to indicate whether it contains data or
`various forms of control, whereas ION information is untyped.
`The notion of sending a configuration description for required data
processing to a remote site is similar to the NeFS protocol specification [17] for remote file system access. This system has the advantage of using an interpreter to avoid system dependencies, while ION uses predefined generic services or compiled-and-downloaded modules
`that necessitate a system reboot. The NeFS programs sent to a remote
`host are intended to perform file system operations, possibly return
`data, and then terminate. ION descriptions can be used to establish
`perpetual data flow, or to set up transactions that are terminated by
`subsequent requests.
`Some of the features used in ION are now appearing in the com-
`mercial marketplace. For example, an analog data capture device [18]
`
`
`
`
`is available which returns data to a user in a manner similar to that of
`the voice messaging system described below. However, ION offers the
`advantage of user programmability of the interfaces and device charac-
`teristics. This leads to greater functionality and power located off the
host processor and in the peripheral device. ION can also accommodate multiple different devices within the same physical configuration,
`supporting data filtering operations that can reduce the amount of data
`that passes through the workstation, thereby improving system
`efficiency.
`
`5. An Example Application-Analog
`to Digital Conversion
`
`ION provides the platform for analog to digital (A-to-D) services for a
`voice messaging application of a prototype programmable telephone
`switch system called GARDEN. It provides the physical interface to
`readily available VME cards, and also provides additional processing
`power to off-load the interrupt handling and data formatting necessary
`for their operation. It has already provided protection against obsoles-
`cence of the hardware investment, since the workstation running the
`application has already been upgraded, without any impact on the I/O
`component of the application software. Additionally, since the hard-
`ware dependent A-to-D code remains within ION, no driver changes
`to the host's operating system are necessary upon workstation upgrade.
`The part of the A-to-D application that resides within ION is
`structured around three cooperating tasks. One task is activated by pe-
`riodic interrupts from the hardware and extracts the raw data from the
`converter, placing it into a queue for temporary storage. Since the data
`extraction is not done at interrupt time, less system activity occurs at a
`high CPU priority level. The interrupt routine and the task share a
`pair of queues and a token which is passed between the queues to co-
`ordinate activity. This prevents the interrupt routine from reactivating
`the task if the task has not completed its previous data extraction.
`The second task is a generic system utility that translates 16-bit
`linear data into 8-bit mu-law data, as required by this particular appli-
`cation. It is essentially performing data compression on the input
`stream.
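The second task's transformation is the standard G.711 mu-law compression of 16-bit linear samples into 8 bits. A conventional textbook implementation looks like the following; the paper does not show ION's own converter, so this is an illustration of the transformation, not the actual code.

```c
/* Standard G.711 mu-law encoder: compress a 16-bit linear sample
 * to 8 bits (sign, 3-bit exponent, 4-bit mantissa, all inverted).
 * Textbook algorithm, shown for illustration; not ION's code. */
unsigned char linear_to_mulaw(int sample)
{
    const int BIAS = 0x84;  /* shifts the segment boundaries */
    const int CLIP = 32635; /* largest encodable magnitude   */
    int sign = (sample >> 8) & 0x80;
    int exponent, mantissa, mask;

    if (sign)
        sample = -sample;
    if (sample > CLIP)
        sample = CLIP;
    sample += BIAS;

    /* Find the segment: position of the highest set bit. */
    for (exponent = 7, mask = 0x4000;
         (sample & mask) == 0 && exponent > 0;
         mask >>= 1)
        exponent--;

    mantissa = (sample >> (exponent + 3)) & 0x0F;
    return (unsigned char)~(sign | (exponent << 4) | mantissa);
}
```

As a queue filter, the task simply applies this per-sample function to each buffer it dequeues from its input and requeues the halved-size result on its output.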
`
`
`
`
`The t