`tremely helpful in the inference process.
A very simple example illustrates the foregoing points. Suppose that S consists of the set of clauses

SouthOf(river2,x), NorthOf(river1,x) → Between(river1, river2, x)   (12.4)
→ SouthOf(u, silo30)   (12.5)
→ NorthOf(river1, silo30)   (12.6)

Clause (12.5) might arise when it is determined that "silo30" is south of some feature in the image whose identity is not known. Bottom-up inference derives new assertions from old ones. Thus in the example above the variable substitutions

u = river2
x = silo30

match assertion (12.5) with the general clause (12.4) and allow the inference

NorthOf(river1, silo30) → Between(river1, river2, silo30)   (12.7)

Consequently, use (12.6) and (12.7) to assert

→ Between(river1, river2, silo30)   (12.8)

Suppose that this was not the case: that is, that

Between(river1, river2, silo30) →   (12.9)

and that S = {(12.4), (12.5), (12.9)}. One could then use top-down inference, which infers new denials from old ones. In this case

NorthOf(river1, silo30), SouthOf(river2, silo30) →   (12.10)

follows with the variable substitution x = silo30. This can be interpreted as follows: "If u is really silo30, then it is not both north of river1 and south of river2." Figure 12.2 shows two examples using the network notation.

Now suppose the goal is to prove that (12.8) logically follows from (12.4) through (12.6) and the substitutions. The strategy would be to negate (12.8), add it to the data base, and show that the empty clause can be derived. Negating an assertion produces a denial, in this case (12.9), and now the set of axioms (including the denial) consists of {(12.4), (12.5), (12.6), (12.9)}. It is easy to repeat the earlier steps to the point where the set of clauses includes (12.8) and (12.9), which resolve to produce the empty clause. Hence the theorem is proved.
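The bottom-up step above can be sketched in a few lines of Python. The `unify` helper and the tuple encoding of clauses are illustrative assumptions, not part of the text, and clause (12.5) is shown with u already resolved to river2.

```python
# A minimal sketch of the bottom-up derivation of (12.8) from (12.4)-(12.6);
# the helper names and data encoding are invented for illustration.

def unify(pattern, fact, bindings):
    """Match a tuple pattern (variables are strings starting with '?')
    against a ground fact, extending the bindings, or return None."""
    if len(pattern) != len(fact):
        return None
    bindings = dict(bindings)
    for p, f in zip(pattern, fact):
        if isinstance(p, str) and p.startswith('?'):
            if bindings.get(p, f) != f:
                return None
            bindings[p] = f
        elif p != f:
            return None
    return bindings

# Clause (12.4): SouthOf(river2,x), NorthOf(river1,x) -> Between(river1,river2,x)
conditions = [('SouthOf', 'river2', '?x'), ('NorthOf', 'river1', '?x')]
conclusion = ('Between', 'river1', 'river2', '?x')

# Assertions (12.5) (with u taken as river2) and (12.6):
facts = {('SouthOf', 'river2', 'silo30'), ('NorthOf', 'river1', 'silo30')}

# Bottom-up step: find bindings satisfying every condition, then assert
# the instantiated conclusion -- this derives (12.8).
b = {}
for cond in conditions:
    for fact in facts:
        nb = unify(cond, fact, b)
        if nb is not None:
            b = nb
            break

derived = tuple(b.get(t, t) for t in conclusion)
print(derived)   # ('Between', 'river1', 'river2', 'silo30')
```

The substitution x = silo30 is accumulated in `b` exactly as in the text's matching of (12.5) against (12.4).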
`
`12.1.6 Predicate Calculus And Knowledge Representation
`
Pure predicate calculus has strengths and weaknesses as a knowledge representation system. Some of the seeming weaknesses can be overcome by technical "tricks." Some are not inherent in the representation but are a property of the common interpreters used on it (i.e., of state-of-the-art theorem provers). Some problems are relatively basic, and the majority opinion seems to be that first-order
`
`392
`
`Ch. 72
`
`Inference
`
`IPR2021-00921
`Apple EX1015 Page 403
`
`
`
Fig. 12.2 Resolution using networks. (a) Bottom-up inference as a result of substitutions u = river2, x = silo30. (b) Top-down inference as a result of substitutions u = river2, x = silo30.
`
predicate logic must be extended in order to become a representation scheme that is satisfactorily matched to the power of the deductive methods applied by human beings. Opinion is divided on the technical aspects of such enhancements. Predicate calculus has several strengths, some of which we list below.

1. Predicate logic is a well-polished gem, having been refined and studied for several generations. It was designed to represent knowledge and inference. One knows what it means. Its model theory and proof theory are explicit and lucid [Hayes 1977; 1980].
`
Sec. 12.1 First Order Predicate Calculus  393
`
`
`
2. Predicate logic can be considered a language with a machine-independent semantics; the meaning of the language is determined by the laws of logic, not the actual programming system upon which the logic is "executed."
3. Predicate calculus clauses with only one conclusion atom (Horn clauses) may be considered as "procedures," with the single conclusion being the name of the procedure and the conditions being the procedure body, which itself is made up of procedure calls. This view of logic leads to the development of predicate logic-based programming languages (such as PROLOG [Warren et al. 1977; McDermott 1980]). These programs exhibit nondeterminism in several interesting ways; the order of computations is not specified by the proof procedure (and is not restricted by it, either). Several resolutions are in general possible for any clause; the combinations determine many computations and several distinguishable forms of nondeterminism [Kowalski 1974].
4. Predicate logic may be interpreted as a problem-reduction system. Then a (Horn) clause of the form

→ B

represents a solved problem. One of the form

A1, ..., An →

with variables x1, ..., xk is a goal statement, or command, which is to find the x's that solve the problems A1, ..., An. Finding the x's solves the goal. A clause

A1, ..., An → B

is a solution method, which reduces the solution of B to a combination of solutions of the A's. This interpretation of Horn clauses maps cleanly into a standard AND/OR goal tree formulation of problem solving.
5. Resolutions may be performed on the left or right of clauses, and the resulting derivation trees correspond, in the problem-solving interpretation of predicate calculus, to top-down and bottom-up versions of problem solving. This duality is very important in conceptualizing aspects of problem solving.
6. There is a uniform proof procedure for logic which is guaranteed to prove in finite time any true theorem (logic is semidecidable and complete). No false theorems are provable (logic is correct). These and other good formal properties are important when establishing formally the properties of a knowledge representation system.
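The Horn-clause readings in points 3 and 4 can be sketched with a toy goal-reduction solver over ground clauses from the river/silo example. This is an illustrative simplification, not PROLOG.

```python
# Horn clauses read as procedures: the conclusion names the procedure and
# the conditions form its body. Clauses are (body, head) pairs; an empty
# body is a solved problem (an assertion "-> B").

CLAUSES = [
    ([], 'SouthOf(river2,silo30)'),                  # -> B : solved problem
    ([], 'NorthOf(river1,silo30)'),
    (['SouthOf(river2,silo30)', 'NorthOf(river1,silo30)'],
     'Between(river1,river2,silo30)'),               # A1,...,An -> B : method
]

def solve(goal):
    """Reduce the goal via any clause whose head matches, recursively
    solving the body -- a depth-first walk of the AND/OR goal tree."""
    return any(head == goal and all(solve(g) for g in body)
               for body, head in CLAUSES)

print(solve('Between(river1,river2,silo30)'))  # True
```

Each clause behaves as a procedure call that succeeds when all of its sub-calls succeed, which is the problem-reduction view described above.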
`
Predicate calculus is not a favorite of everyone, however; some of the (perceived) disadvantages are given below, together with ways they might be countered.

1. Sometimes the axioms necessary to implement relatively common concepts are not immediately obvious. A standard example is "equality." These largely technical problems are annoying but not basic.
2. The "first order" in first-order predicate calculus means that the system
`
`394
`
`Ch. 12
`
`Inference
`
`IPR2021-00921
`Apple EX1015 Page 405
`
`
`
does not allow clauses with variables ranging over an infinite number of predicates, functions, assertions, and sentences (e.g., "All unary functions are boring" cannot be stated directly). This problem may be ameliorated by a notational trick; the situations under which predicates are true are indicated with a Holds predicate. Thus instead of writing On(block1, surface, situation1), write Holds(On(block1, surface), situation1). This notation allows inferences about many situations with only one added axiom. The "situational calculus" reappears in Section 12.3.1. Another useful notational trick is a Diff relation, which holds between two terms if they are syntactically different. There are infinitely many axioms asserting that terms are different; the actual system can be made to incorporate them implicitly in a well-defined way. The Diff relation is also used in Section 12.3.1.
3. The frame problem (so called for historical reasons and not related to the frames described in Section 10.3.1) is a classic bugbear of problem-solving methods, including predicate logic. One aspect of this problem is that, for technical reasons, it must be explicitly stated in axioms that describe actions (in a general sense a visual test is an action) that almost all assertions that were true in a world state remain true in the new world state after the action is performed. The addition of these new axioms causes a huge increase in the "bureaucratic overhead" necessary to maintain the state of the world. Currently, no really satisfactory way of handling this problem has been devised. The most common way to attack this aspect of the frame problem is to use explicit "add lists" and "delete lists" ([Fikes 1977], Chapter 13), which attempt to specify exactly what changes when an action occurs. New true assertions are added, and those that are false after an action must be deleted. This device is useful, but examples demonstrating its inadequacy are readily constructed. More aspects of the frame problem are given in Chapter 13.
4. There are several sorts of reasoning performed by human beings that predicate logic does not pretend to address. It does not include the ability to describe its own formulae (a form of "quotation"), the notion of defaults, or a mechanism for plausible reasoning. Extensions to predicate logic, such as modal logic, are classically motivated. More recently, work on extensions addressing the topics above has begun to receive attention [McCarthy 1978; Reiter 1978; Hayes 1977]. There is still active debate as to whether such extensions can capture many important aspects of human reasoning and knowledge within the model-theoretic system. The contrary view is that in some reasoning, the very process of reasoning itself is an important part of the semantics of the representation. Examples of such extended inference systems appear in the remainder of this chapter, and the issues are addressed in more detail in the next section.
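The add-list/delete-list device for the frame problem can be sketched as follows. The action format and predicate names are illustrative, in the spirit of STRIPS but not its actual notation.

```python
# A toy STRIPS-style action with explicit add and delete lists; the
# predicate vocabulary is invented for illustration.

def apply_action(state, add_list, delete_list):
    """Return the new world state: everything not explicitly deleted
    persists, which is exactly what frame axioms had to spell out."""
    return (state - delete_list) | add_list

state = {('On', 'block1', 'table'), ('Clear', 'block1'), ('Red', 'block1')}

# Action: pick up block1.
new_state = apply_action(
    state,
    add_list={('Holding', 'block1')},
    delete_list={('On', 'block1', 'table'), ('Clear', 'block1')},
)

# ('Red', 'block1') survives without any explicit frame axiom.
print(('Red', 'block1') in new_state)          # True
print(('On', 'block1', 'table') in new_state)  # False
```

The inadequacy mentioned in the text shows up when an action has side effects not listed (e.g., picking up a block also moves whatever is glued to it); the lists then silently leave the state wrong.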
`
`12.2 COMPUTER REASONING
`
Artificial intelligence in general and computer vision in particular must be concerned with efficiency and plausibility in inference [Winograd 1978]. Computer-based knowledge representations and their accompanying inference processes often sacrifice classical formal properties for gains in control of the inference process and for flexibility in the sorts of "truth" which may be inferred.
`
Automated inference systems usually have inference methods that achieve efficiency through implementational, computation-based inference criteria. For example, truth may be defined as a successful lookup in a data base, falsity as the failure to find a proof with a given allocation of computational resources, and the establishment of truth may depend on the order in which deductions are made.

The semantics of computer knowledge representations is intimately related to the inference process that acts on them. Therefore, it is possible to define knowledge representations and interpreters in computers whose properties differ fairly radically from those of classical representations and proof procedures, such as the first-order predicate calculus. For instance, although the systems are deterministic, they may not be formally consistent (loosely, they may contain contradictory information). They may not be complete (they cannot derive all true theorems from the givens); it may be possible to prove P from Q but ¬P from Q and R. The set of provable theorems may not be recursively enumerable [Reiter 1978].

Efforts are being made to account for the "extended inference" needed by artificial intelligence using more or less classical logic [McCarthy 1978; Reiter 1978; Hayes 1977; 1978a; 1978b; Kowalski 1974; 1979]. In each case, the classical view of logic demands that the deductive process and the deducible truths be independent. On the other hand, it is reasonable to devote attention to developing a nonclassical semantics of these inference processes; this topic is in the research stage at this writing.
Several knowledge representations and inference methods using them are "classical" in the artificial intelligence world; that is, they provide paradigmatic methods of dealing with the issues of computational inference. They include STRIPS [Fikes and Nilsson 1971], the situational calculus [McCarthy and Hayes 1969], PLANNER and CONNIVER [Hewitt 1972; Sussman and McDermott 1972], and semantic net representations [Hendrix 1979; Brachman 1979].

To illustrate the issue of consistency, and to illustrate how various sorts of propositions can be represented in semantic nets, we address the question of how the order of inference can affect the set of provable theorems in a system. Consider the semantic net of Fig. 12.3. The idea is that in the absence of specific information to the contrary, one should assume that railroad bridges are narrow. There are exceptions, however, such as Bridge02 (which has a highway bridge above the rail bridge, say). The network is clearly inconsistent, but trouble is avoided if inferences are made "from specific to general." Such ordering implies that the system is incomplete, but in this case incompleteness is an advantage.

Simple ordering constraints are possible only with simple inferential powers in the system [Winograd 1978]. Further, there is as yet little formal theory on the effects of ordering rules on computational inference, although this has been an active topic [Reiter 1978].
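The specific-to-general ordering can be sketched as follows; the bridge names follow Fig. 12.3, but the data-structure encoding is an invented simplification of a semantic net.

```python
# Default inheritance resolved by "specific to general" ordering, in the
# spirit of Fig. 12.3; the encoding of the net is illustrative only.

defaults = {'RailroadBridge': {'width': 'narrow'}}   # class-level default
instances = {
    'Bridge01': {'isa': 'RailroadBridge'},
    'Bridge02': {'isa': 'RailroadBridge', 'width': 'wide'},  # the exception
}

def width(name):
    """Specific facts are consulted before the general default, so the
    two contradictory pieces of knowledge never meet in one derivation."""
    inst = instances[name]
    if 'width' in inst:                       # specific first
        return inst['width']
    return defaults[inst['isa']]['width']     # then general

print(width('Bridge01'))  # narrow
print(width('Bridge02'))  # wide
```

The "inconsistency" of the net is harmless here precisely because the interpreter never asks both sources about the same bridge; that is the incompleteness the text calls an advantage.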
`
`12.3 PRODUCTION SYSTEMS
`
Fig. 12.3 An inconsistent network.

The last section explored why the process of inference itself could be an important part of the semantics of a knowledge representation system. This idea is an important part of production systems. Perceived limitations in logic inference mechanisms and the seductive power of arbitrary algorithmic processes for inference have spawned the development of rule-based systems, which differ from first-order logic in the following respects:
• Arbitrary additions and deletions to the clausal data base are allowed.
• An interpreter that controls the inference process in special ways is usually an integral part of the system.

Early examples of systems with the first addition are STRIPS [Fikes and Nilsson 1971] and PLANNER [Hewitt 1972]. Later examples of systems with both additions are given in [Waterman and Hayes-Roth 1978]. The virtues of trying to control inferences may be appreciated after our brief introduction to clausal automatic theorem proving, where there are no very good semantic heuristics to guide inferences. However, the price paid for restricting the inference process is the loss of the formal properties of consistency and correctness, which are not guaranteed in rule-based systems. We shall look in some detail at a particular form of rule-based inference system called production systems.
A production system supports a general sort of "inference." It has in common with resolution that matching is needed to identify which inference to make. It is different in that the action upon finding a matching data item is less constrained. Actions of arbitrary complexity are allowed. A production system consists of an explicit set of situation-action rules, which can be applied against a data base of situations. For example, in a very constrained visual domain the rule

(Green (Region X)) → (Grass (Region X))   (12.11)

could infer directly the interpretation of a given region. Segmentation rules can also be developed; the following example merges two adjacent green regions into a single region.
`
`
`
`
(Green (Region X)) ∧ (Green (Region Y)) ∧ (Adjacent (Region X) (Region Y)) →
(Green (Region Z)) ∧ ((Region Z) := (Union (Region X) (Region Y)))
`
These examples highlight several points. The first is that the basic idea of production systems is simple. The rules are easy to "read" by both the programmer and his program, and new rules are easily added. Although it is imaginable that "situations" could extend down to the pixel level, and production systems could be used (for instance) to find lines, the system overhead would render such an approach impractical. In the visual domain, the production system usually operates on the segmented image (Chapters 4 and 5) or with the high-level internal model. In the rules above, X and Y are variables that must be bound to specific instances of regions in a data base. This process of binding variables, or matching, can become very complex, and is one of the two central issues of this kind of inference. The other is how to choose rules from a set, all of whose situations match the current situation to some degree.
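The binding of X and Y in the merge rule above might be sketched as follows. The fact encoding and matcher are illustrative simplifications; real matching machinery is considerably more involved.

```python
# Finding bindings for the situation part of the region-merge rule:
# Green(X) and Green(Y) and Adjacent(X, Y). Region names are invented.

facts = {
    ('Green', 'Region1'), ('Green', 'Region2'), ('Green', 'Region4'),
    ('Adjacent', 'Region1', 'Region2'),
}

def merge_candidates(facts):
    """Yield (X, Y) bindings satisfying all three conjuncts."""
    green = {f[1] for f in facts if f[0] == 'Green'}
    for f in facts:
        if f[0] == 'Adjacent' and f[1] in green and f[2] in green:
            yield f[1], f[2]

print(list(merge_candidates(facts)))  # [('Region1', 'Region2')]
```

Note that Region4 is green but adjacent to nothing, so no binding includes it; the combinatorics of such joins are exactly why matching is one of the two central issues named above.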
`
`12.3.1 Production System Details
`
`In its simplest form a production system has three basic components:
`1. A data base
`2. A set of rules
`3. An interpreter for the rules
The vision data base is usually a set of facts that are known about the visual environment. Often the rules are considered to be themselves a manipulable part of the data base. Examples of some visual facts may be

(ABOVE (Region 5) (Region 10))
(SIZE (Region 5) 300)
(SKY (Region 5))                                        (12.12)
(TOP (Region 5) 255)

The data base is the sole storage medium for all state variables of the system. In particular, unlike procedurally oriented languages, there is no provision for separate storage of control state information: no separate program counter, pushdown stack, and so on [Davis and King 1975].

A rule is an ordered pair of patterns with a left-hand side and a right-hand side. A pattern may involve only data base primitives but usually will have variables and special forms as subpatterns which are matched against the data base by the interpreter. For example, applying the following rule to a data base which includes (12.12),
`
`398
`
`Ch. 12
`
`Inference
`
`IPR2021-00921
`Apple EX1015 Page 410
`
`
`
(TOP (Region X) (GreaterThan 200)) → (SKY (Region X))   (12.13)

region 5 can be inferred to be sky. The left-hand side matches a set of data base facts, and this causes (SKY (Region 5)) to be added to the data base. This example shows the kinds of matching that the interpreter must do: (1) the primitive TOP in the data base fact matches the same symbol in the rule; (2) (Region X) matched (Region 5), and X is bound to 5 as a side effect; and (3) (GreaterThan 200) matches 255. Naturally, the user must design his own interpreter to recognize the meaning of such operational subpatterns.
However, even the form of the rules outlined so far is relatively restrictive. There is no reason why the right-hand side cannot do almost arbitrary things. For instance, the application of a rule may result in various productions being deleted from or added to the set of productions; the data base of productions and assertions thus can be adaptive [Waterman and Hayes-Roth 1978]. Also, the right-hand side may specify programs to be run, which can result in facts being asserted into the data base or actions performed.

Control in a basic production system is relatively simple: rules are applied until some condition in the data base is reached. Rules may be applied in two distinct ways: (1) a match on the left-hand side of a rule may result in the addition of the consequences on the right-hand side to the data base, or (2) a match on the right-hand side may result in the addition of the antecedents in the left-hand side to the data base. The first case is termed forward chaining, where the objective is to see if a set of consequences can be derived from a given set of initial facts. The second case is known as backward chaining; the objective is to determine a set of facts that could have produced a particular consequence.
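The two cases can be sketched over one rule set. The rules loosely mirror the black-implies-shadow inference from the example in Section 12.3.3, but the symbols and the tiny interpreter are illustrative, not the actual system.

```python
# Forward and backward chaining over the same ground rules.
# A rule is a (set_of_conditions, consequence) pair.

RULES = [
    ({'wb_minus', 'rg_zero', 'yb_zero'}, 'black'),   # color classification
    ({'black'}, 'shadow'),                           # black things are shadow
]

def forward_chain(facts, rules):
    """Case (1): keep adding right-hand-side consequences until quiescence."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conds, conseq in rules:
            if conds <= facts and conseq not in facts:
                facts.add(conseq)
                changed = True
    return facts

def backward_chain(goal, facts, rules):
    """Case (2): a goal holds if it is a fact, or some rule concludes it
    and all of that rule's antecedents can themselves be established."""
    if goal in facts:
        return True
    return any(conseq == goal and all(backward_chain(c, facts, rules)
                                      for c in conds)
               for conds, conseq in rules)

start = {'wb_minus', 'rg_zero', 'yb_zero'}
print(sorted(forward_chain(start, RULES)))
# ['black', 'rg_zero', 'shadow', 'wb_minus', 'yb_zero']
print(backward_chain('shadow', start, RULES))   # True
```

Forward chaining runs from data toward conclusions; backward chaining starts at the desired consequence and recurses on antecedents, which is why it suits goal-directed queries.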
`
`12.3.2 Pattern Matching
`
In the process of matching rules against the data base, several problems occur:
• Many rule situations may match data base facts.
• Rules designed for a specific context may not be appropriate for a larger context.
• The pattern-matching process may become very expensive.
• The data base or set of rules may become unmanageably large.
The problem of multiple matches is important. Early systems simply resolved it by scanning the data base in a linear fashion and choosing the first match, but this is an ineffective strategy for large data bases, and has conceptual problems as well. Accordingly, strategies have evolved for dealing with these conflicts. Like most inference-controlling heuristics, their effectiveness can be domain-dependent, they can introduce incompleteness into the system, and so on.
On the principle of least commitment, when there are many chances of error, one strategy is to apply the most general rule, defined by some metric on the components of the pattern. One simple such metric is the number of elements in a pattern. Antithetical to this strategy is the heuristic of applying the most specific pattern. This may be appropriate where the likelihood of making a false inference is small, and where specific actions may be indicated (match (MAD DOG) with (MAD DOG), not with (DOG)). Another popular but inelegant technique is to exercise control over the order of production application by using state markers which are inserted into the data base by right-hand sides and looked for by left-hand sides.
`
` 1 >.
`1. A — BA < marker
`2. A+B[\ <marker2>.
`3. Bi\ <marker 1> — C.
`4. B/\ < marker 2> * D.
` 3 is now execut
`Here if rule 1 is executed, "control goes to rule 3," i.e., rule
`able, whereas if rule 2 is applied, "control goes to rule 4." Similarly, such control
`paradigms as subroutining, iteration and coroutining may be implemented with
`production sytems [Rychner 1978].
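Conflict resolution by specificity, using the pattern-length metric mentioned above, might look like the sketch below; the rule data are illustrative.

```python
# Choosing among rules whose situations all match, using the number of
# pattern elements as a (crude) specificity metric.

rules = [
    (('DOG',), 'generic-dog-action'),
    (('MAD', 'DOG'), 'back-away-slowly'),
]

def matches(pattern, situation):
    """A pattern matches if all of its elements occur in the situation."""
    return set(pattern) <= set(situation)

def most_specific(rules, situation):
    """Among matching rules, the longest pattern wins."""
    candidates = [r for r in rules if matches(r[0], situation)]
    return max(candidates, key=lambda r: len(r[0]))[1]

print(most_specific(rules, ('MAD', 'DOG')))  # back-away-slowly
```

Swapping `max` for `min` gives the antithetical most-general strategy; which is appropriate depends, as the text notes, on the cost of a false inference.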
The use of connectives and special symbols can make matching arbitrarily complex. Rules might be interpreted as allowing all partial matches in their antecedent clauses [Bajcsy and Joshi 1978]. Thus

(A B C) → (D)

is interpreted as

(A B C) ∨ (B C) ∨ (A B) ∨ (A C) ∨ (A) ∨ (B) ∨ (C) → (D)

where the leftmost actual match is used to compare the rule to others in the case of conflicts.
The problem of large data bases is usually overcome by structuring them in some way so that the interpreter applies the rules only to a subset of the data base or uses a subset of the rules. This structuring undermines a basic principle of pure rule-based systems: control should be dependent on the contents of the data base alone. Nevertheless, many systems divide the data base into two parts: an active smaller part, which functions like the original data base but is restricted in size, and a larger data base which is inaccessible to the rule set in the active smaller part. "Metarules" have actions that move situation-action rules and facts from the smaller data base to the larger one and vice versa. The incoming set of rules and facts is presumably that which is applicable in the context indicated by the situation triggering the metarule. This two-level organization of rules is used in "blackboard" systems, such as Hearsay for speech understanding [Erman and Lesser 1975]. The metarules seem to capture our idea of "mental set," or "context," or "frame" (Section 10.3.1, [Minsky 1975]). The two data bases are sometimes referred to as short-term memory and long-term memory, in analogy with certain models of human memory.
`
`400
`
`Ch. 12 Inference
`
`IPR2021-00921
`Apple EX1015 Page 412
`
`
`
`12.3.3 An Example
`
We shall follow the actions of a production system for vision [Sloan 1977; Sloan and Bajcsy 1979]. The intent here is to avoid a description of all the details (which may be found in the References) and concentrate on the performance of the system as reflected by a sample of its output. The program uses a production system architecture in the domain of outdoor scenes. The goal is to determine basic features of the scene, particularly the separation between sky and ground. The interpreter is termed the "observer," and the memory has a two-tiered structure: (1) short-term memory (STM) and (2) long-term memory (LTM), a data base of all facts ever known or established, structured to prefer access to the most recently used facts. The image to be analyzed is shown in Fig. 12.4, and the action may be followed in Fig. 12.5. The analysis starts with the initialization command

*(look 100000 100 nil)

This command directs the Observer to investigate all regions that fall in the size range 100 to 100000, in decreasing order of size. The LTM is initialized to NIL.
our first look at (region 11)

    right  left  bottom  top  size    x   rg   yb   wb    y
      127     2      97   35  2132   35    2   24   29    6

This report is produced by an image-processing procedure that produces assertions about (region 11). This region is shown highlighted in Fig. 12.5b.

Progress Report

regions on this branch:
(11)
context stack:
`
`Fig. 12.4 Outdoor scene to be analyzed with production system.
`
`
`
`
`
Fig. 12.5 Images corresponding to steps in production system analysis. (a) Texture in the scene. (b) Region 11 outlined. (c) Sky-ground separation. (d) Skyline.
`
`nil
`contents of short term memory:
`((farleft (region 11)) (farright (region 11))
`(right (region 11) 127) (left (region 11) 2)
`(bottom (region 11) 97) (top (region 11) 35)
`(wb (region 11) minus) (yb (region 11) zero)
`(rg (region 11) zero) (size (region 11) 2132))
`end of progress report
Note that gray-level information is represented as a vector in opponent color space (Chapter 2), where the component axes are WHITE-BLACK (wb), RED-GREEN (rg), and YELLOW-BLUE (yb). Three values (plus, zero, minus) are used for each component. The display above is generated once after every iteration of the Observer. The report shows that (REGION 11) is being investigated; there is no known context for this investigation; the information about (REGION 11) created by the image-processing apparatus has been placed in STM. The context stack is for information only, and shows a trace of activated sets of rules.
`
`402
`
`Ch. 12 Inference
`
`IPR2021-00921
`Apple EX1015 Page 414
`
`
`
`i think that (farleft (region 11))
`i think that (farright (region 11))
`i think that (right (region 11) 127)
i think that (left (region 11) 2)
`i think that (bottom (region 11) 97)
`i think that (top (region 11) 35)
`i think that (size (region 11) 2132)
`
`This portion of the trace shows assertions moving from STM to LTM. They
`are reported because this is the first time they have been REMEMBERed (a special
`procedure in the Observer).
`
`Progress Report
`
`regions on this branch:
`(11)
`context stack:
`nil
`contents of short term memory:
`((color (region 11) black))
`end of progress report
`The assertions created from the region data structure have been digested,
`and lead only to the conclusion that (REGION 11) is BLACK, based on a produc
`tion that looks like:
`
(wb (region x) minus) ∧ (rg (region x) zero) ∧ (yb (region x) zero)
→ (color (region x) black)
`
`Progress Report
`
`regions on this branch:
`(11)
`context stack:
`nil
`contents of short term memory:
`((ground (region 11)) (shadow (region 11)))
`end of progress report
The Observer knows that things that are black are GROUND and SHADOW. The facts it deduces about region 11 are again stored in the LTM.
Having discovered a piece of ground, the Observer has activated the GROUNDRULES and changed context. It now investigates the neighbors of (REGION 11).
our first look at (region 16)

    wb    y   rg   yb    x  size  top  bottom  left  right
     2   23   30    3   58  1833   57     119     2    127
`
`
`
`
`
(REGION 16) is a neighbor of (REGION 11), and the Observer is trying to determine whether or not they are sufficiently similar, in both color and texture, to justify merging them.
`
`Progress Report
`
`regions on this branch:
`(161