Data-Driven and Demand-Driven Computer Architecture

PHILIP C. TRELEAVEN, DAVID R. BROWNBRIDGE, AND RICHARD P. HOPKINS
Computing Laboratory, University of Newcastle upon Tyne, Newcastle upon Tyne, NE1 7RU, England

Novel data-driven and demand-driven computer architectures are under development in a large number of laboratories in the United States, Japan, and Europe. These computers are not based on the traditional von Neumann organization; instead, they are attempts to identify the next generation of computer. Basically, in data-driven (e.g., data-flow) computers the availability of operands triggers the execution of the operation to be performed on them, whereas in demand-driven (e.g., reduction) computers the requirement for a result triggers the operation that will generate it. Although there are these two distinct areas of research, each laboratory has developed its own individual model of computation, stored program representation, and machine organization. Across this spectrum of designs there is, however, a significant sharing of concepts. The aim of this paper is to identify the concepts and relationships that exist both within and between the two areas of research. It does this by examining data-driven and demand-driven architecture at three levels: computation organization, (stored) program organization, and machine organization. Finally, a survey of various novel computer architectures under development is given.

Categories and Subject Descriptors: C.0 [Computer Systems Organization]: General--hardware/software interfaces; system architectures; C.1.2 [Processor Architecture]: Multiple Data Stream Architectures (Multiprocessors); C.1.3 [Processor Architecture]: Other Architecture Styles--data-flow architectures; high-level language architectures; D.3.2 [Programming Languages]: Language Classifications--data-flow languages; macro and assembly languages; very high-level languages

General Terms: Design

Additional Key Words and Phrases: Demand-driven architecture, data-driven architecture

INTRODUCTION

For more than thirty years the principles of computer architecture design have largely remained static [ORGA79], based on the von Neumann organization. These von Neumann principles include:

(1) a single computing element incorporating processor, communications, and memory;
(2) linear organization of fixed-size memory cells;
(3) one-level address space of cells;
(4) low-level machine language (instructions perform simple operations on elementary operands);
(5) sequential, centralized control of computation.

Over the last few years, however, a number of novel computer architectures based on new "naturally" parallel organizations for computation have been proposed, and some computers have even been built. The principal stimuli for these novel architectures have come from the pioneering work on data flow by Jack Dennis [DENN74a, DENN74b], and on reduction languages and machines by John Backus [BACK72, BACK73] and Klaus Berkling [BERK71, BERK75]. The resulting computer architecture research can be broadly classified as either data driven or demand driven.
In data-driven (e.g., data-flow) computers the availability of operands triggers the execution of the operation to be performed on them, whereas in demand-driven (e.g., reduction) computers the requirement for a result triggers the operation that will generate it.

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. © 1982 ACM 0010-4892/82/0300-0093 $00.75
CONTENTS

INTRODUCTION
1. BASIC CONCEPTS
   1.1 Control Flow
   1.2 Data Flow
   1.3 Reduction
2. COMPUTATION ORGANIZATION
   2.1 Classification
   2.2 Control Flow
   2.3 Data Flow
   2.4 Reduction
   2.5 Implications
3. PROGRAM ORGANIZATION
   3.1 Classification
   3.2 Control Flow
   3.3 Data Flow
   3.4 Reduction
   3.5 Implications
4. MACHINE ORGANIZATION
   4.1 Classification
   4.2 Control Flow
   4.3 Data Flow
   4.4 Reduction
   4.5 Implications
5. DATA-FLOW COMPUTERS
   5.1 M.I.T. Data-Flow Computer
   5.2 Texas Instruments Distributed Data Processor
   5.3 Utah Data-Driven Machine (DDM1)
   5.4 Irvine Data-Flow Machine
   5.5 Manchester Data-Flow Computer
   5.6 Toulouse LAU System
   5.7 Newcastle Data-Control Flow Computer
   5.8 Other Projects
6. REDUCTION COMPUTERS
   6.1 GMD Reduction Machine
   6.2 Newcastle Reduction Machine
   6.3 North Carolina Cellular Tree Machine
   6.4 Utah Applicative Multiprocessing System
   6.5 S-K Reduction Machine
   6.6 Cambridge SKIM Machine
   6.7 Other Projects
7. FUTURE DIRECTIONS
ACKNOWLEDGMENTS
REFERENCES
BIBLIOGRAPHY

Although the motivations and emphasis of individual research groups vary, there are basically three interacting driving forces. First, there is the desire to utilize concurrency to increase computer performance. This is based on the continuing demand from areas such as weather forecasting and wind tunnel simulation for computers with a higher performance. Natural physical laws place fundamental limitations on the performance increases obtainable from advances in technology alone, and conventional high-speed computers like the CRAY-1 and ILLIAC IV seem unable to meet these demands [TREL79]. Second, there is the desire to exploit very large scale integration (VLSI) in the design of computers [SEIT79, MEAD80, TREL80b]. One effective means of employing VLSI would be parallel architectures composed of identical computing elements, each containing integral capabilities for processing, communication, and memory. Unfortunately, "general-purpose" organizations for interconnecting and programming such architectures based on the von Neumann principles have not been forthcoming. Third, there is the growing interest in new classes of very high level programming languages. The most well-developed such class of languages comprises the functional languages, such as LISP [McCA62], FP [BACK78], LUCID [ASHC77], SASL [TURN79a], Id [ARVI78], and VAL [ACKE79b]. Because of the mismatch between the various principles on which these languages are based and those of the von Neumann computer, conventional implementations tend to be inefficient.

There is growing agreement, particularly in Japan and the United Kingdom, that the next generation of computers will be based on non-von Neumann architecture. (A report [JIPD81a] by Japan's Ministry of International Trade and Industry contains a good summary of the criteria for these fifth-generation computers.) Both data-driven and demand-driven computer architecture are possible fifth-generation architectures. The question then becomes, which architectural principles and features from the various research projects will contribute to this new generation of computers?

Work on data-driven and demand-driven architecture falls into two principal research areas, namely, data flow [DENN79b, GOST79a] and reduction [BERK75].
These areas are distinguished by the way computation, stored programs, and machine resources are organized.
Although research groups in each area share a basic set of concepts, each group has augmented the concepts, often by introducing ideas from other areas (including traditional control-flow architectures) to overcome difficulties. The aim of this paper is to identify the concepts and relationships that exist both within and between these areas of research. We start by presenting simple operational models for control flow, data flow, and reduction. Next we classify and analyze the way computation, stored programs, and machine resources are organized across the three groups. Finally, a survey of various novel computer architectures under development is given in terms of these classifications.

1. BASIC CONCEPTS

Here we present simple operational models of control flow, data flow, and reduction. In order to compare these three models we discuss each in terms of a simple machine code representation. These representations are viewed as instructions consisting of sequences of arguments--operators, literal operands, references--delimited by parentheses:

(arg0 arg1 arg2 arg3 ... argn-1 argn)

However, the terms "instruction" and "reference" are given a considerably more general meaning than their counterparts in conventional computers. To facilitate comparisons of control flow, data flow, and reduction, simple program representations for the statement a = (b + 1) * (b - c) are used. Although this statement consists of simple operators and operands, the concepts illustrated are equally applicable to more complex operations and data structures.

1.1 Control Flow

We start by examining control flow, the most familiar model. In the control-flow program representations shown in Figure 1, the statement a = (b + 1) * (b - c) is specified by a series of instructions, each consisting of an operator followed by one or more operands, which are literals or references. For instance, a dyadic operation such as + is followed by three operands; the first two, b and 1, provide the input data, and the last, t1, is the reference to the shared memory cell for the result. Shared memory cells are the means by which data are passed between instructions. Each reference in Figure 1 is also shown as a unidirectional arc. Solid arcs show the access to stored data, while dotted arcs define the flow of control.

In traditional sequential (von Neumann) control flow there is a single thread of control, as in Figure 1a, which is passed from instruction to instruction. When control reaches an instruction, the operator is initially examined to determine the number and usage of the following operands. Next the input addresses are dereferenced, the operator is executed, the result is stored in a memory cell, and control is passed implicitly to the next instruction in sequence. Explicit control transfers are caused by operators such as GOTO.

There are also parallel forms of control flow [FARR79, HOPK79]. In the parallel form of control flow, shown in Figure 1b, the implicit sequential control-flow model is augmented by parallel control operators. These parallel operators allow more than one thread of control to be active at an instant and also provide means for synchronizing these threads. For example, in Figure 1b the FORK operator activates the subtraction instruction at address i2 and passes an implicit flow of control on to the addition instruction. The addition and subtraction may then be executed in parallel.
When the addition finishes execution, control is passed via the GOTO i3 instruction to the JOIN instruction. The task of the JOIN is to synchronize the two threads of control that are released by the addition and subtraction instructions, and to release a single thread to activate the multiply instruction.

In the second parallel form of control flow, shown in Figure 1c, each instruction explicitly specifies its successor instructions. Such a reference, i1/0, defines the specific instruction and argument position for the control signal, or control token. Argument positions, one for each control signal required, are represented by empty bracket symbols ( ), and an instruction is executed when it has received the required control tokens.
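The control-token scheme is easy to simulate. The following minimal Python sketch (our own illustration, not taken from the paper; all names are hypothetical) executes a = (b + 1) * (b - c) with b = 4 and c = 2, as in the figures. Data still passes through shared memory cells; only the arrival of control tokens decides when an instruction fires, so i3 behaves as a two-way JOIN.

    # Control-token simulation in the spirit of Figure 1c (sketch only).
    memory = {"b": 4, "c": 2, "t1": None, "t2": None, "a": None}

    def add(): memory["t1"] = memory["b"] + 1               # (+ b 1 t1)
    def sub(): memory["t2"] = memory["b"] - memory["c"]     # (- b c t2)
    def mul(): memory["a"] = memory["t1"] * memory["t2"]    # (* t1 t2 a)

    # Each instruction waits for a number of control tokens, then
    # executes and sends one token to each successor instruction.
    instructions = {
        "i1": {"needs": 1, "got": 0, "op": add, "succ": ["i3"]},
        "i2": {"needs": 1, "got": 0, "op": sub, "succ": ["i3"]},
        "i3": {"needs": 2, "got": 0, "op": mul, "succ": []},  # JOIN of two threads
    }

    def send_token(name):
        instr = instructions[name]
        instr["got"] += 1
        if instr["got"] == instr["needs"]:
            instr["op"]()                   # execute
            for s in instr["succ"]:         # release control tokens
                send_token(s)

    send_token("i1")    # the predecessor (i0 in Figure 1c) signals i1 ...
    send_token("i2")    # ... and i2; both fire, and i3 then multiplies
    assert memory["a"] == 10

Note how the flow of control (the tokens) is entirely separate from the flow of data (the shared cells t1 and t2).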
[Figure 1. Control-flow programs for a = (b + 1) * (b - c): (a) sequential; (b) parallel "FORK-JOIN"; (c) parallel "control tokens".]

The two parallel forms of control flow, illustrated by Figures 1b and 1c, are semantically equivalent: FORKs are equivalent to multiple successor instruction addresses, and JOINs are equivalent to multiple empty bracket arguments.

The sequential and parallel control-flow models have a number of common features: (1) data are passed indirectly between instructions via references to shared memory cells; (2) literals may be stored in instructions, which can be viewed as an optimization of using a reference to access the literal; (3) flow of control is implicitly sequential, but explicit control operators can be used for parallelism, etc.; and (4) because the flows of data and control are separate, they can be made identical or distinct.

1.2 Data Flow

Data flow is very similar to the second form of parallel control flow, with instructions activated by tokens and the requirement for tokens being indicated by ( ) symbols. Data-flow programs are usually described in terms of directed graphs, used to illustrate the flow of data between instructions. In the data-flow program representation shown in Figure 2, each instruction consists of an operator, two inputs, which are either literal operands or "unknown" operands defined by empty bracket ( ) symbols, and a reference, i3/1, defining the specific instruction and argument position for the result. A reference, also shown as a unidirectional arc, is used by the producer instruction to store a data token (i.e., result) into the consumer. Thus data are passed directly between instructions.

An instruction is enabled for execution when all arguments are known, that is, when all unknowns have been replaced by partial results made available by other instructions. The operator then executes, removing the inputs from storage, processing them according to the specified operation,
and using the embedded reference to store the result at an unknown operand in a successor instruction. In terms of directed graphs, an instruction is enabled when a data token is present on each of its input arcs. During execution the operator removes one data token from each input arc and releases a set of result tokens onto the output arcs.

[Figure 2. Data-flow program for a = (b + 1) * (b - c): (a) Stage 1; (b) Stage 4.]

Figure 2 illustrates the sequence of execution for the program fragment a = (b + 1) * (b - c), using a black dot on an arc to indicate the presence of a data token. The two black dots at Stage 1 in Figure 2 indicate that the data tokens corresponding to the values of b and c have been generated by predecessor instructions. Since b is required as input for two subsequent instructions, two copies of the token are generated and stored into the respective locations in each instruction. The availability of these inputs completes both the addition and the subtraction instruction, and enables their operators for execution. Executing completely independently, each operator consumes its input tokens and stores its result into the multiplication instruction "i3." This enables the multiplication, which executes and stores its result corresponding to the identifier "a," shown at Stage 4.

In the data-flow model there are a number of interesting features: (1) partial results are passed directly as data tokens between instructions; (2) literals may be embedded in an instruction, which can be viewed as an optimization of the data token mechanism; (3) execution uses up data tokens--the values are no longer available as inputs to this or any other instruction; (4) there is no concept of shared data storage as embodied in the traditional notion of a variable; and (5) sequencing constraints--flows of control--are tied to the flow of data.
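The data-flow firing rule is equally simple to simulate. In the sketch below (again our own Python illustration with hypothetical names, not the paper's notation), results are stored directly into the argument slots of consumer instructions, an instruction fires as soon as no slot remains unknown (None), and firing consumes the tokens:

    from operator import add, sub, mul

    # Each instruction: operator, argument slots, destinations (instr, slot).
    instructions = {
        "i1": {"op": add, "args": [None, 1],    "dest": [("i3", 0)]},  # (+ () 1 i3/1)
        "i2": {"op": sub, "args": [None, None], "dest": [("i3", 1)]},  # (- () () i3/2)
        "i3": {"op": mul, "args": [None, None], "dest": [("a", 0)]},   # (* () () a/1)
        "a":  {"op": None, "args": [None], "dest": []},                # result cell
    }

    def deliver(token, target, slot):
        instr = instructions[target]
        instr["args"][slot] = token                 # store the data token
        if instr["op"] and all(x is not None for x in instr["args"]):
            result = instr["op"](*instr["args"])    # enabled: execute
            instr["args"] = [None] * len(instr["args"])  # tokens are consumed
            for t, s in instr["dest"]:
                deliver(result, t, s)

    deliver(4, "i1", 0)   # a token for b, first copy
    deliver(4, "i2", 0)   # a token for b, second copy
    deliver(2, "i2", 1)   # a token for c
    assert instructions["a"]["args"][0] == 10

Because b feeds two consumers, the producer must deliver two separate copies of the token, mirroring Stage 1 of Figure 2.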
1.3 Reduction

Control-flow and data-flow programs are built from fixed-size instructions whose arguments are primitive operators and operands. Higher level program structures are built from linear sequences of these primitive instructions. In contrast, reduction programs are built from nested expressions. The nearest analogy to an "instruction" in reduction is a function application, consisting of (function) (argument), which returns its result in place. Here a (function) or (argument) is recursively defined to be either an atom, such as + or 1, or an expression. Likewise, a reference may access, and a function application may return, either an atom or an expression. Higher level program structures are reflected in this machine representation, being themselves function applications built from more primitive functions. In reduction, a program is mathematically equivalent to its result in the same way that the expression 3 + 3 is equivalent to the number 6. Demanding the result of the definition "a," where a = (b + 1) * (b - c), means that the embedded reference to "a" is to be rewritten in a simpler form. (It may be helpful for the reader to view this evaluation of a reference as calling the corresponding definition, giving reduction a CALL-RETURN pattern of control.) Because of these attributes, only one definition of "a" may occur in a program, and all references to it give the same value, a property known as referential transparency.

There are two forms of reduction, differentiated by the way that arguments in a program are manipulated, called string reduction and graph reduction.

The basis of string reduction is that each instruction that accesses a particular definition will take and manipulate a separate copy of the definition. Figure 3 illustrates string manipulation for a reduction execution sequence involving the definition a = (b + 1) * (b - c). Each instruction consists of an operator followed by literals or embedded references, which are used to demand its input operands. At Stage 1 in Figure 3 some instruction, containing the reference "a," demands the value corresponding to the definition "a." This causes a copy of the definition to be loaded into the instruction, overwriting the reference "a," as also shown in Figure 3. Next the multiplication operator demands the values corresponding to i1 and i2, causing them to be overwritten by copies of their definitions. The multiplication then suspends, and the addition and subtraction operators demand the values of b and c. The substitution of the values 4 and 2 is shown at Stage 3 in Figure 3. The reducible subexpressions (+ 4 1) and (- 4 2) are then rewritten, causing the multiplication to be reenabled. Finally, at Stage 5, the multiplication is replaced by the constant 10, which is the value of "a."
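A Python sketch of string reduction (our illustration; the tuple representation and names are assumptions, not the paper's) makes the copying explicit: every reference is resolved by fetching and reducing its definition afresh, so the definition of b is re-evaluated once for each of its two uses, just as each use would manipulate a separate copy:

    definitions = {
        "b": 4,
        "c": 2,
        "i1": ("+", "b", 1),
        "i2": ("-", "b", "c"),
        "a":  ("*", "i1", "i2"),
    }

    ops = {"+": lambda x, y: x + y,
           "-": lambda x, y: x - y,
           "*": lambda x, y: x * y}

    def demand(expr):
        # A reference: take a fresh copy of its definition and reduce it.
        if isinstance(expr, str):
            return demand(definitions[expr])
        # A reducible application: demand both operands, then rewrite.
        if isinstance(expr, tuple):
            op, lhs, rhs = expr
            return ops[op](demand(lhs), demand(rhs))
        return expr                      # an atom (constant)

    assert demand("a") == 10   # "b" was fetched and evaluated twice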
The basis of graph reduction is that each instruction that accesses a particular definition will manipulate references to the definition. That is, graph manipulation is based on the sharing of arguments using pointers. Figure 4 illustrates graph reduction using the same program definition a = (b + 1) * (b - c) as above. At Stage 1 in Figure 4 an instruction demands the value corresponding to "a," but instead of a copy of the definition being taken, the reference is traversed in order to reduce the definition and return with the actual value. One of the ways of identifying the original source of the demand for "a," and thus supporting the return, is to reverse the arcs (as shown in Figure 4) by inserting a source reference in the definition. This traversal of the definition and the reversal of the references is continued until constant arguments, such as b and c in Figure 4, are encountered. In Figure 4, reduction of the subexpressions in the definition starts with the rewriting of the addition and the subtraction, as shown at Stage 4. This proceeds until the value of "a" is calculated and a copy is returned to the instruction originally demanding "a." (If there are no further references to b, c, i1, and i2, then they can be "garbage collected.") Any subsequent requests for the value of "a" will immediately receive the constant 10--one of the major benefits of graph reduction over string reduction.
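By contrast, a graph reduction sketch (same caveats: illustrative Python, with an assumed node representation) shares a single node for b between both consumers and overwrites each node with its value in situ, so a second demand for "a" returns the constant 10 immediately:

    class Node:
        def __init__(self, op=None, args=(), value=None):
            self.op, self.args, self.value = op, list(args), value

    b  = Node(value=4)
    c  = Node(value=2)
    i1 = Node(op=lambda x, y: x + y, args=[b, Node(value=1)])
    i2 = Node(op=lambda x, y: x - y, args=[b, c])    # b is shared, not copied
    a  = Node(op=lambda x, y: x * y, args=[i1, i2])

    def demand(node):
        # Traverse the graph, reduce the arguments, and rewrite the node
        # in place; the detached subgraph can then be garbage collected.
        if node.value is None:
            node.value = node.op(*[demand(arg) for arg in node.args])
            node.op, node.args = None, []
        return node.value

    assert demand(a) == 10   # reduces the whole graph
    assert demand(a) == 10   # immediate: "a" is already the constant 10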
In reduction the main points to note are that: (1) program structures, instructions, and arguments are all expressions; (2) there is no concept of updatable storage such as a variable; (3) there are no additional sequencing constraints over and above those implied by demands for operands; and (4) demands may return both simple and complex arguments, such as a function (as input to a higher order function).

[Figure 3. String reduction program for a = (b + 1) * (b - c): (a) Stages 1 and 3; (b) Stages 3-5.]

[Figure 4. Graph reduction program for a = (b + 1) * (b - c): (a) Stages 1 and 3; (b) Stages 4-6.]
Control flow, data flow, and reduction clearly have fundamental differences, which relate to their advantages and disadvantages for representing programs. However, they also have interesting underlying similarities. In the next two sections on computation organization and program organization we attempt to identify and classify these underlying concepts.

2. COMPUTATION ORGANIZATION

In this section we examine, at an abstract level, the way computation progresses. This progress takes the form of successive changes in the state of the computation brought about by executing instructions. Computation organization describes how these state changes come to take place by describing the sequencing and the effect of instructions. We describe the rules determining which instructions are selected for execution and how far reaching the effects of their execution may be.

This abstract level of classification enables a clear distinction to be drawn between the terms control driven, data driven, and demand driven. These three classes are often identified with, respectively, the operational models control flow, data flow, and reduction. Here we define the notions of control-driven, data-driven, and demand-driven computation organizations and identify their relationships to the three operational models.

2.1 Classification

Computation organizations may be classified by considering computation to be a continuous repetition of three phases: select, examine, and execute. It needs to be emphasized that this description does not necessarily reflect the way in which particular computer implementations operate, but rather that it is a logical description of the effects achieved.

(1) Select. At the select phase a set of instructions is chosen for possible execution. The rule for making this choice is called a computation rule. The computation rule selects a subset of instructions in the program. Only instructions chosen by the select phase may be executed, but selection does not guarantee execution. Three of the computation rules used in this classification are imperative, innermost, and outermost. The imperative computation rule selects the instructions indicated by, for example, a special ("program counter") register or the presence of control tokens. This selection is made regardless of the position of the instruction in the program structure. Innermost and outermost computation rules select, respectively, the instructions most deeply nested and least deeply nested in the program structure. An innermost instruction has no instructions as arguments to it (only values). An outermost instruction is not nested as an argument of any other instruction. The instructions selected by the three rules are illustrated in Figure 5.

(2) Examine. At the examine phase, each of the instructions previously chosen in the select phase is examined to see if it is executable. The decision is based on examination of each instruction's actual arguments. The rule for making this decision is called a firing rule. For instance, the firing rule may require all operands to be data values, or it may require only one operand to be a value, as, for example, in a conditional. If an instruction is executable, it is passed on to the next phase for execution; otherwise, the examine phase may take some action, such as delaying the instruction or attempting to coerce arguments so as to allow execution.
(3) Execute. At the execute or "target" phase, which is broadly similar in all computation organizations, instructions are actually executed. The result of execution is to change the state of the computer. Results are made available and are passed to other parts of the program. Execution may produce globally perceived changes, perhaps by changing the state of a globally shared memory, or it may produce localized changes, as when an expression is replaced by its value.

[Figure 5. Three computation rules applied to an expression. The expression denotes the product of two complex numbers: (a,b) * (c,d) = ((a*c) - (b*d), (a*d) + (b*c)). Imperative: the instructions selected depend on the value of the program counter (PC). Innermost: the instructions selected are the most deeply nested (the four multiplications). Outermost: the instructions selected are those un-nested (the subtraction and the addition).]
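The innermost and outermost rules of Figure 5 can be stated precisely in a small sketch (our Python formulation; the tuple encoding of applications is an assumption). Representing an application as a tuple (operator, argument, argument), the real part of the complex product is selected as follows:

    expr = ("-", ("*", "a", "c"), ("*", "b", "d"))   # (a*c) - (b*d)

    def is_app(e):
        return isinstance(e, tuple)

    def innermost(e):
        # Applications with no application among their arguments.
        if not is_app(e):
            return []
        _, x, y = e
        nested = innermost(x) + innermost(y)
        return nested if nested else [e]

    def outermost(e):
        # Applications not nested as an argument of any other application.
        return [e] if is_app(e) else []

    assert innermost(expr) == [("*", "a", "c"), ("*", "b", "d")]
    assert outermost(expr) == [expr]

The imperative rule, by contrast, ignores this nesting structure entirely: it selects whatever instruction the program counter (or a control token) happens to indicate.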
2.2 Control Flow

The select phase of control-flow computation corresponds to the fetch part of the fetch-execute control cycle. Each control-flow computing element has a program counter naming the next instruction to execute. In the select phase, the program counter is used to choose the instructions to be executed. Once chosen by select, instructions are not checked by an examine phase, but are automatically passed on to execution. The execute phase of control-flow instructions is allowed to change any part of the state. Control flow uses a shared memory to communicate results. The state of computation is represented by the contents of this shared memory and of the program counter register(s). A program counter is updated at the end of each cycle, either implicitly or, in the case of GOTOs, explicitly.

We define the term control driven to denote computation organizations in which instructions are executed as soon as they are selected. The select phase alone determines which instructions are to be executed. For all computation organizations in this class the examine phase is redundant, and instruction sequencing is independent of program structure.

2.3 Data Flow

There are many varieties of data-flow computers; here we restrict ourselves to "pure" data-flow computation as described in Section 1.2. In pure data flow, instructions are executed as soon as all their arguments are available. Logically at least, each instruction has a computing element allocated to it continuously, just waiting for arguments to arrive. So the select phase of data-flow computation may be viewed as logically allocating a computing element to every instruction. The examine phase implements the data-flow firing rule, which requires all arguments to be available before execution can take place. Arguments must be data items, not unevaluated expressions. If values are not yet available, the computing element will not try to execute the instruction but will remain dormant during the execute phase. The execute phase in data flow changes a local state consisting of the executing instruction and its set of successor instructions. The instruction consumes its arguments and places a result in each successor instruction.

We define the term data driven to denote computation organizations where instructions passively wait for some combination of their arguments to become available. This implies a select phase, which (logically) allocates computing elements to all instructions, and an examine phase, which suspends nonexecutable instructions. In data-driven computation organizations, the key factor governing execution is the availability of data. For this reason "data driven" is the same as "availability driven."
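The three phases can be drawn together in a schematic interpreter loop (our reading of Sections 2.1-2.3, not code from the paper). The control-driven instantiation below threads a program counter through the select phase and uses a firing rule that always answers yes, making the examine phase redundant; a data-driven machine would instead select every instruction and fire only those whose arguments are all values:

    def run(program, state, select, fires):
        while state["pc"] is not None:
            for instr in select(program, state):   # select phase
                if fires(instr, state):            # examine phase
                    instr(state)                   # execute phase

    def select(program, state):                    # imperative rule
        instr = program[state["pc"]]
        nxt = state["pc"] + 1
        state["pc"] = nxt if nxt < len(program) else None
        return [instr]

    always = lambda instr, state: True             # examine is redundant

    program = [                                    # a = (b + 1) * (b - c)
        lambda s: s.update(t1=s["b"] + 1),
        lambda s: s.update(t2=s["b"] - s["c"]),
        lambda s: s.update(a=s["t1"] * s["t2"]),
    ]
    state = {"pc": 0, "b": 4, "c": 2}
    run(program, state, select, always)
    assert state["a"] == 10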
2.4 Reduction

Reduction computers each have different rules embodied in their select phase. The choice of computation rule is a design choice for a particular reduction computer. The commonest rules used are innermost and outermost (see Figure 5), and in fact the discussion of reduction in Section 1 was restricted to outermost reduction. The computation rule in a reduction computer determines the allocation of computing elements at the beginning of each computation cycle. In the examine phase the arguments are examined to see whether execution is possible. If it is, the instruction is executed. Otherwise, the computing element tries to coerce the arguments into the required pattern. This coercion demands the evaluation of argument(s) until sufficient are available for execution. Logically, this demand consists of spawning one or more subcomputations to evaluate operands and waiting for them to return with a value. The instruction set of a reduction computer may contain many different firing rules, each instruction having the rule most suited to it. For example, all arithmetic operations will have a firing rule that forces their arguments to be values. The execute phase in a reduction machine involves rewriting an instruction in situ. The instruction is replaced by its result where it stands. Only the local state, consisting of the instruction itself and those instructions that use its results, is changed. Execution may thus also enable another instruction.

We define the term demand driven to denote a computation organization where instructions are only selected when the value they produce is needed by another, already selected instruction. All outermost reduction architectures fall into this category, but innermost reduction architectures do not. The essence of a demand-driven computation organization is that an instruction is executed only when its result is demanded by some other instruction, and the arguments may be recursively evaluated where necessary. In reduction computers with an innermost computation rule, instructions are never chosen by select until their arguments are available. This restriction means that all arguments reaching the examine stage are preevaluated, and hence no coercions need ever take place. It also means that all instructions have all their arguments evaluated whether or not this is necessary, exactly as occurs in data flow. Thus we believe innermost computation organizations are data driven.

2.5 Implications

The implications of the computation organization classification can now be summarized. Control-flow computers have a control-driven computation organization; instructions are arbitrarily selected, and once selected they are immediately executed. Data-flow computers have a data-driven computation organization; all instructions are in p
