4. Checklists

5. Emergency procedures

6. Data from nav systems

7. Diagnostics

8. Backup for EICAS displays

EICAS (Engine Instrumentation and Crew Alerting Systems)

EICAS systems are currently available for the 757/767 and the Airbus 300 and 310. To date EICAS has not been certified in a general aviation aircraft, though all of the avionics vendors are working on systems for specific aircraft types. There are several issues with EICAS:
1. Is there sufficient panel space for two tubes?

2. Will the airframe support the weight and power penalty that EICAS brings with it?

3. During engine start, how will the system be powered?

4. Can the reversionary display tube be used as an MFD when not needed for EICAS?

5. What should be displayed (all parameters, only the essential engine parameters, all warnings, or only the noncritical advisories)?

Fig. 17 Engine Instruments

Fig. 18 Crew Alerting Systems

From the list above, it can be seen that the application of EICAS in general aviation aircraft needs careful study and perhaps some new technology before it can achieve the same level of acceptance as EFIS. The most promise lies in the integration of the entire cockpit by an all new system. In fact, this is the direction that all of the avionics vendors are headed.

Future Directions

It is obvious that the shell has only been cracked in the application of CRT's to the business aviation cockpit. Currently, work is going on in several areas:

1. Side by side 8"x8" tubes

2. Integrated symbol generators

3. High speed CMOS to reduce heat

4. Fully integrated avionics shipsets that share a great deal of their hardware

5. Three dimensional displays

6. Voice entry systems

7. Integrated FMS and EFIS systems

8. CRT (stand-alone) air data instruments

9. Raster only high resolution tubes

10. Liquid crystal and other "flat plate" displays
General aviation avionics finds its way into many other markets. It is typically an evolutionary rather than revolutionary growth. When Boeing developed the all-digital shipset, general aviation systems engineers were literally green with envy because they recognized the advantages of this type of change.

At this time, the first all-digital general aviation aircraft (the SF-340 and the BAe-800) have been certified and are starting the payoff period that EFIS initiated in 1982. The future is exciting, to say the least, now that the core systems are all-digital.
References

(1) Robert J. Tibor, John C. Hall, "Flight Management Systems III: Where Are We Going And Will It Be Worth It," Collins Air Transport Division, Rockwell International.
SESSION 8

FAULT TOLERANT AVIONICS

Chairmen:

Billy L. Dove
NASA Langley Research Center

Dennis B. Mulcare
Lockheed-Georgia Co.

This session addresses the application of fault tolerant technologies to digital avionics, including architectural concepts and software techniques used to achieve fault tolerance.
ADVANCED INFORMATION PROCESSING SYSTEM

84-2644

Jaynarayan H. Lala*
The Charles Stark Draper Laboratory, Inc.
Cambridge, Massachusetts 02139

The Advanced Information Processing System (AIPS) is designed to provide a fault tolerant and damage tolerant data processing architecture for a broad range of aerospace vehicles. This paper describes the AIPS concept and its operational philosophy. A proof-of-concept (POC) system is now in the detailed design phase. The second half of the paper describes the architecture of the POC system building blocks.

1.0 INTRODUCTION

The AIPS architecture permits application designers to select an appropriate set of the building blocks and system services and configure a specific processing system for their application. The application designer need not include all the building blocks that are present in the POC system. The number and type of building blocks and their configuration will be determined by the specific application's requirements.

The following sections define the AIPS in greater detail. Section 2 is a conceptual definition of the system and its operational philosophy. Section 3 is a more specific description of the proof-of-concept (POC) system configuration, the AIPS building blocks, and their architecture. For a more complete description of the AIPS system please see 'AIPS System Specification' [2].

2.0 AIPS CONCEPTS

2.1 OVERVIEW

The Advanced Information Processing System (AIPS) is designed to provide a fault tolerant and damage tolerant data processing architecture that meets aeronautical and space vehicle application requirements. The requirements for seven different applications are described in the 'AIPS System Requirements' [1]. The requirements can be divided into two categories: quantitative and qualitative. Examples of the former are processor throughput, memory size, transport lag, mission success probability, etc. Examples of the latter are graceful degradation, growth and change tolerance, integrability, etc. The AIPS architecture is intended to satisfy the quantitative requirements and also have attributes that make it responsive to the qualitative requirements.
The system is comprised of hardware 'building blocks' which are fault tolerant processing elements, a fault and damage tolerant intercomputer network and an input/output network, and a fault tolerant power distribution system. A network operating system ties these elements together in a coherent system.

The system is managed by a Global Computer that allocates functions to individual processing sites, performs system level redundancy management and reconfiguration, and maintains knowledge of the system state for distribution to the component elements. The system management functions may be reassigned to an alternate processing site in-flight. Redundancy management, task scheduling, and other local services at individual processing sites are handled by local operating systems. The network operating system links local operating systems together for such functions as intertask communications.
*Member AIAA, IEEE.

Copyright © American Institute of Aeronautics and Astronautics, Inc., 1984. All rights reserved.
The Advanced Information Processing System consists of a number of computers located at processing sites which may be physically dispersed throughout the vehicle. These processing sites are linked together by a reliable and damage tolerant data communication bus, called the Intercomputer Bus or the IC bus.

A computer at a given processing site may have access to varying numbers and types of Input/Output (I/O) buses. The I/O buses may be global, regional, or local in nature. Input/Output devices on the global I/O bus are available to all or at least a majority of the AIPS computers. Regional buses connect I/O devices in a given region to the processing sites located in their vicinity. Local buses connect a computer to the I/O devices dedicated to that computer. Additionally, there may be memory mapped I/O devices that can be accessed directly by a computer similar to a memory location. Finally, there is a system Mass Memory that is directly accessible from all computers in the system on a dedicated multiplex Mass Memory (MM) bus.
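The three bus scopes described above can be illustrated with a small sketch. This is our own model, not AIPS software; the device table, field names, and scope rules are hypothetical:

```python
# Illustrative model (ours) of I/O bus scoping: global devices are visible
# to all computers, regional devices to computers in the same region, and
# local devices only to their owning computer.

def visible_devices(computer, devices):
    """computer: dict with 'name' and 'region'.
    devices: list of dicts with 'name', 'scope' ('global', 'regional',
    or 'local'), and 'where' (a region name, an owning computer, or None).
    Returns the names of devices this computer can reach."""
    seen = []
    for d in devices:
        if d["scope"] == "global":
            seen.append(d["name"])                       # on the global I/O bus
        elif d["scope"] == "regional" and d["where"] == computer["region"]:
            seen.append(d["name"])                       # same region
        elif d["scope"] == "local" and d["where"] == computer["name"]:
            seen.append(d["name"])                       # dedicated to this GPC
    return seen
```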
Input/Output devices on the global I/O bus are available system wide. The global I/O bus is a time division multiple access (TDMA) contention bus. The intercomputer (IC) bus is used to transmit commands and data between computers. The IC bus is also a TDMA contention bus.

Figure 1 shows a simplified system level diagram of the AIPS architecture.
Computers at various AIPS processing sites are General Purpose Computers (GPCs) of varying capabilities in terms of processing throughput, memory, reliability, fault tolerance, and damage tolerance. Throughput may range from that of a single microprocessor to that of a large multiprocessor. Memory size will be determined by application requirements. Reliability, as measured by probability of failure due to random faults, ranges from 10^-4 per hour for a
simplex processor to 10^-10 per hour for a multiprocessor that uses parallel hybrid redundancy. For those functions requiring fault masking, a triplex level of redundancy is provided. For low criticality functions or noncritical functions, the GPCs may be duplex or simplex. Parallel hybrid redundancy is used for extremely high levels of fault tolerance and/or for longevity (long mission durations). GPCs can also be made damage tolerant by physically dispersing redundant GPC elements and providing secure and damage tolerant communications between these elements. Within AIPS, computers of varying levels of fault tolerance can coexist such that less reliable computers are not a detriment to higher reliability computers.
The overall framework in which AIPS operates can be characterized as a limited form of a fully distributed multicomputer system. A fully distributed fault and damage tolerant system must satisfy several requirements. The following subsections describe these requirements and characterize the AIPS architecture in the context of these requirements.
2.2 FUNCTION MIGRATION

A fully distributed system must have a multiplicity of resources which are freely assignable to functions on a short-term basis [3]. AIPS has multiple processing sites; however, they are not freely assigned to functions on a short-term basis. During routine operations the General Purpose Computers at various processing sites are assigned to perform a fixed set of functions, each computer doing a unique set of tasks. However, in response to some internal or external stimulus, the computers can be reassigned to a different set of functions. This results in some functions migrating from one processing site to another site in the system. Under certain conditions, it may also result in some functions being suspended entirely for a brief time period or for the remainder of the mission. In AIPS this form of limited distributed processing is called semi-dynamic function migration.

The internal stimuli that result in function migration may consist of detection of a fault in the system, a change in the system load due to a change in mission phase, etc. An example of an external stimulus is a crew initiated reconfiguration of the system.
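Semi-dynamic function migration can be sketched as a table update. This is our own illustration under assumed names (the assignment table, site capacities, and least-loaded placement rule are not from the paper):

```python
# Illustrative sketch (not the AIPS algorithm): on loss of a processing
# site, functions hosted there are reassigned to surviving sites; if no
# site has spare capacity, a function is suspended, as the paper allows.

def migrate_functions(assignments, failed_site, capacity):
    """assignments: dict function -> site
    failed_site:  the processing site that was lost
    capacity:     dict site -> max functions it may host
    Returns the new assignment table; unplaceable functions map to None."""
    load = {s: 0 for s in capacity}
    for f, s in assignments.items():
        if s in load:
            load[s] += 1
    new_table = {}
    for func, site in assignments.items():
        if site != failed_site:
            new_table[func] = site          # unaffected function stays put
            continue
        # Pick the least-loaded surviving site with spare capacity.
        candidates = [s for s in capacity
                      if s != failed_site and load[s] < capacity[s]]
        if candidates:
            target = min(candidates, key=lambda s: load[s])
            load[target] += 1
            new_table[func] = target        # function migrates
        else:
            new_table[func] = None          # suspended
    return new_table
```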
2.3 RESOURCE TRANSPARENCY

Another characteristic of a fully distributed system is that the multiplicity of resources should be transparent to the user. To a large extent, this is true in the AIPS. Function migration is transparent to the function and the person implementing that function in software. Interfunction communication is handled by the operating system such that the location of the two communicating functions is also transparent to both. The two functions could be collocated in a GPC or they may be executing in different GPCs. Indeed, at one time they may be collocated, while at a later time one of them may have been migrated to another site. This transparency is achieved through a layered approach to interfunction communication. One of these layers determines the current processing site of the function to which one wishes to communicate. If it is another GPC, another layer in the communication hierarchy is invoked that takes care of appropriate IC bus message formatting and interface to the bus transmitters and receivers, that is, the physical layer. This layered approach is responsible for hiding the existence of multiple computers from the applications programmer.

Figure 1. AIPS Architecture: A Software View

[Figure 1 shows Computers 1 through N and the System Mass Memory interconnected by the global I/O bus, local and regional I/O buses, the intercomputer bus, and the Mass Memory bus.]
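The layered, location-transparent send described in Section 2.3 can be sketched as follows. The class and names are our illustration, not AIPS software; the "IC bus layer" is reduced to a log:

```python
# Hypothetical sketch of layered interfunction communication: a routing
# layer looks up the destination function's current site (it may have
# migrated); a lower layer is invoked only when the destination is remote.

class InterfunctionComm:
    def __init__(self, local_site, function_sites):
        self.local_site = local_site
        self.function_sites = function_sites  # function name -> current GPC
        self.local_queue = []                 # messages delivered locally
        self.ic_bus_log = []                  # messages handed to IC bus layer

    def send(self, dest_function, message):
        # Layer 1: locate the destination function.
        site = self.function_sites[dest_function]
        if site == self.local_site:
            # Collocated: deliver through the local operating system.
            self.local_queue.append((dest_function, message))
        else:
            # Layer 2: format an IC bus message for the remote GPC.
            self.ic_bus_log.append({"to_site": site,
                                    "to_function": dest_function,
                                    "payload": message})
```

The sender never names a site, only a function, so migration only changes the lookup table, not the application code.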
2.4 SYSTEM CONTROL

Another characteristic of a totally distributed system is that the system control is through multiple cooperating autonomous operating systems. The AIPS operational philosophy differs considerably in this regard. The overall AIPS system management and control authority is vested in one GPC at any given time. This GPC is called the Global Computer. All other GPCs are subservient to this GPC as far as system level functions are concerned. However, all the local functions are handled quite independently by each computer. This philosophy is more akin to a hybrid of hierarchical and federated systems. This is explained in the following.
Under normal circumstances each GPC operates fairly autonomously of other computers. Each GPC has a Local Operating System that performs all the functions necessary to keep that processing site operating in the desired fashion. The local operating system is responsible for an orderly start and initialization of the GPC; scheduling and dispatching of tasks; input/output services; task synchronization and communication services; and resource management. It also is responsible for maintaining the overall integrity of the processing site in the presence of faults. This involves fault detection, isolation, and reconfiguration (FDIR). The local operating system performs all of the redundancy management functions including FDIR, background self tests, transient and hard fault analysis, and fault logging.

The services provided by local operating systems at various processing sites are similar although they may differ in implementation. For example, the multiprocessor version of the operating system must take into account the multiplicity of processors
for task scheduling. Similarly, it must also consider the more complex task of redundancy management and cycling of spare units. The uniprocessor operating system can also have different variations depending upon the level of redundancy and the I/O configuration.
The Local Operating System in each computer interfaces with the Network Operating System. The Network Operating System is responsible for system level functions. These include an orderly start and initialization of various buses and networks, communication between processes executing in different computers, system level resource management, and system level redundancy management. System level resources are the GPCs; the I/O, IC, and the MM buses; and the shared data and programs stored in the mass memory or in some other commonly accessible location. System redundancy management includes FDIR in the I/O and IC node networks, correlation of faults in GPCs (both transient and hard faults), reassignment of computers to functions (function migration), and graceful degradation in case of a loss of a processing site.
Some of the functions of the Network Operating System are centralized in the Global Computer. The Global Computer is responsible for system start, resource management, redundancy management, and function migration. It needs status knowledge of all processing sites and it must be able to command other GPCs to perform specific functions. This communication is accomplished via the Network Operating System, a portion of which is resident in each computer. The Global Computer does not participate in every system level transaction. Some of the system level functions performed by the Network Operating System may involve only a pair of nonglobal GPCs.
One of the GPCs is designated to be the Global Computer at the system bootstrap time. However, this designation can be changed during system operation by an internal or an external stimulus.
2.5 DATA BASE

Another important attribute of a distributed system is the treatment of the data base. The data base can be completely replicated in all subsystems or it can be partitioned among the subsystems. In addition, the data base directory can be centralized in one subsystem, duplicated in all subsystems, or partitioned among the subsystems. The AIPS approach is a combination of these.

For the mass memory data base, all GPCs will contain a directory of the MMU contents. This can be implemented as a 'directory to the directory' in order to limit the involvement of GPCs in the directory change process. The MMU directory will be static over extended intervals.

The data base that reflects the global system state will be maintained by the Global Computer in its local memory. A copy will be maintained by any alternate Global Computer, also in local memory.

The data base that reflects the distribution of functions among GPCs will be contained in all GPCs.

2.6 FAULT TOLERANCE

There is a considerable amount of hardware redundancy and complexity associated with each of the elements shown in Figure 1. This redundancy allows each hardware element to be reliable, fault tolerant, and damage tolerant. From a software viewpoint, however, this underlying complexity of the system is transparent. This is true not only in the context of the applications programs but for most of the operating system as well; however, those elements of the operating system that are concerned with fault detection and recovery and other redundancy management functions have an intimate knowledge of the underlying complexity.

Hardware redundancy in the AIPS is implemented at a fairly high level, typically at the processor, memory, and bus level. There are two fundamental reasons for providing redundancy in the system: one, to detect faults through comparison of redundant results, and two, to continue system operation after component failures. Processors, memories, and buses are replicated to achieve a very high degree of reliability and fault tolerance. In some cases coded redundancy is used to detect faults and to provide backups more efficiently than would be possible with replication.

The redundant elements are always operated in tight synchronism which results in exact replication of computations and data. Fault detection coverage with this approach is one hundred per cent once a fault is manifested. To uncover latent faults, temporal and diagnostic checks are employed. Given the low probability of latent faults, the checks need not be run frequently.

Fault detection and masking are implemented in hardware, relieving the software from the burden of verifying the correct operation of the hardware. Fault isolation and reconfiguration are largely performed in software with some help from the hardware. This approach has flexibility in reassigning resources after failures are encountered, and yet it is not burdensome since isolation and reconfiguration procedures are rarely invoked.

2.7 DAMAGE TOLERANCE
One of the AIPS survivability related requirements is that the information processing system be able to tolerate those damage events that do not otherwise impair the inherent capability of the vehicle to fly, be it an aircraft or a spacecraft.

The requirement for damage tolerance will be applied to redundant GPCs, intercomputer communications, and to communication links between GPCs and sensors, effectors, and other vehicle subsystems.
The internal architecture of the redundant computers supports the damage tolerance requirement in several ways. The links between redundant channels of a computer are point-to-point. That is, each channel has a dedicated link to every other channel. Second, these links can be several meters long. This makes it possible to physically disperse redundant channels in the target vehicle. The channel interface hardware is such that long links do not pose a problem in synchronizing widely dispersed processors.

For communication between GPCs and between a GPC and I/O devices a damage and fault tolerant network is employed. The basic concept of the network is as follows.

The network consists of a number of full duplex links that are interconnected by circuit switched
nodes to form a conventional multiplex bus. In steady state, the network configuration is static and the circuit switched nodes pass information through them without the delays which are associated with packet switched networks. The protocols and operation of the network are identical to a multiplex bus. Every transmission by any subscriber on a node is heard by all the subscribers on all the nodes just as if they were all linked together by a linear bus.
The network performs exactly as a virtual bus. However, the network concept has many advantages over a bus. First of all, a single fault can disable only a small fraction of the virtual bus, typically a link connecting two nodes, or a node. The network is able to tolerate such faults due to a richness of interconnections between nodes. By reconfiguring the network around the faulty element, a new virtual bus is constructed. Except for such reconfigurations, the structure of the virtual bus remains static.
The nodes are sufficiently smart to recognize reconfiguration commands from the network manager, which is one of the GPCs. The network manager can change the bus topology by sending appropriate reconfiguration commands to the affected nodes.
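Reconfiguring around a faulty element amounts to growing a new virtual bus as a spanning tree over the surviving links. A minimal sketch, with a graph representation and breadth-first search of our choosing (the paper does not specify the algorithm):

```python
# Illustrative reconstruction (names and algorithm are ours): given the
# node network and the failed elements, compute a set of links forming a
# new virtual bus; the network manager would then command only the
# affected nodes to open or close their circuit-switched ports.

from collections import deque

def build_virtual_bus(nodes, links, failed_nodes=(), failed_links=()):
    """Return the links (frozensets of node pairs) of a spanning tree over
    the surviving topology, or None if it is disconnected."""
    bad_links = {frozenset(l) for l in failed_links}
    adj = {n: [] for n in nodes if n not in failed_nodes}
    for a, b in links:
        if a in adj and b in adj and frozenset((a, b)) not in bad_links:
            adj[a].append(b)
            adj[b].append(a)
    if not adj:
        return None
    root = next(iter(adj))
    tree, seen, queue = set(), {root}, deque([root])
    while queue:                      # breadth-first search builds the tree
        n = queue.popleft()
        for m in adj[n]:
            if m not in seen:
                seen.add(m)
                tree.add(frozenset((n, m)))
                queue.append(m)
    return tree if len(seen) == len(adj) else None
```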
Second, weapons effect induced damage or other damage caused by electrical shorts, overheating, or localized fire would affect only subscribers in the damaged portion of the vehicle. The rest of the network, and the subscribers on it, can continue to operate normally. If the sensors and effectors are themselves physically dispersed for damage tolerance or other reasons and the damage event does not affect the inherent capability of the vehicle to continue to fly, then the control system would continue to function in a normal manner or in some degraded mode as determined by sensor/effector availability. The communication mechanism, that is, the network itself, would not be a reliability bottleneck.
Third, fault isolation is much easier in the network than in multiplex buses. For example, a remote terminal transmitting out of turn, a rather common failure mode, can be easily isolated in the network through a systematic search where one terminal is disabled at a time. This, in fact, is a standard algorithm for isolating faults in the network.
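The sequential search just described can be sketched in a few lines. The function and the predicate interface are our illustration of the idea, not AIPS code:

```python
# Illustrative sketch: disable one terminal at a time until the
# out-of-turn traffic stops; the terminal whose removal silences the bus
# is the babbling terminal.

def isolate_babbler(terminals, bus_is_quiet):
    """terminals: list of terminal ids.
    bus_is_quiet(disabled): predicate returning True when no out-of-turn
    traffic is observed with the given set of terminals disabled.
    Returns the faulty terminal, or None if the fault is not reproduced."""
    for t in terminals:
        if bus_is_quiet({t}):
            return t   # disabling t silenced the bus: t was the babbler
    return None
```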
Fourth, the network can be expanded very easily by adding more nodes. In fact, nodes and subscribers to the new nodes (I/O devices or GPCs) can be added without shutting down the existing network. In bus systems, power to buses must be turned off before new subscribers or remote terminals can be added.
Finally, there are no topological constraints which might be encountered with linear or ring buses.

2.8 SOURCE CONGRUENCY

An important consideration in designing AIPS is the interface between redundant and simplex elements. This interface design is crucial in avoiding single point faults in a redundant system. One must perform source congruency operations on all simplex data coming into a redundant computer. It is not sufficient to distribute simplex data to redundant elements in one step. The redundant elements must exchange their copy of the data with each other to make sure that every element has a congruent value of the simplex data. The AIPS architecture not only takes this requirement into account but also provides efficient ways of performing simplex source congruency through a mix of hardware and software. The simplex to redundant interface is also the place where the applications programmer gets involved in the processor redundancy and the applications code complexity multiplies. The AIPS processor level architecture is designed such that it separates the source congruency and computational tasks into two distinct functional areas. This reduces the applications code complexity and aids validation.

2.9 MASS MEMORY

The mass memory in AIPS provides the following capabilities:

1. System Cold Start/Restart.

2. Function Migration Support.

3. Overlays for local memory of General Purpose Computers.

4. System Table Backup.

5. Storage for system-wide common files.

6. Program Checkpointing.

3.0 PROOF-OF-CONCEPT SYSTEM

To demonstrate feasibility of the Advanced Information Processing System concept described in the preceding sections, a laboratory proof-of-concept system will be built. Such a system is now in the detailed design phase. The POC system configuration is shown in Figure 2. It consists of five processing sites which are interconnected by a triplex circuit switched network. Four of the five GPCs are uniprocessors: one simplex, one duplex, and two triplex processors. The fifth GPC is a multiprocessor that uses parallel hybrid redundancy. The redundant GPCs are to be built such that they can be physically dispersed for damage tolerance. Each of the redundant channels of a GPC could be as far as 5 meters from other channels of the same GPC.

Figure 2. Proof-of-Concept AIPS System Configuration

[Figure 2 shows the five processing sites, including the FTMP, with the Mass Memory connected to all processors.]

Each of the triplex fault tolerant processors (FTPs) and the fault tolerant multiprocessor (FTMP) interfaces with three nodes of the Intercomputer (IC) node network. The duplex and the simplex processors interface with two and one nodes, respectively.

The mass memory is a highly encoded memory that interfaces with the GPCs on a triplex multiplex bus.

The Input/Output is mechanized using a 16 node circuit switched network that interfaces with each of the GPCs on 1 to 6 nodes depending on the GPC redundancy level.

Redundant system displays and controls are driven by the Global Computer and interface through the I/O network.

Each GPC has a Local Operating System and a portion of the Network Operating System. For the proof-of-concept system, initially the FTMP will be the Global Computer.
3.1 ARCHITECTURE OF AIPS BUILDING BLOCKS

Architecture of the major hardware building blocks of the AIPS Proof-of-Concept System configuration is described in the following sections.

3.1.1 Fault Tolerant Processor
The architectural description of the FTP is divided into three sections: Software View, Hardware View, and External Interfaces.

3.1.1.1 Fault Tolerant Processor: Software View

The FTP or the uniprocessor architecture from a software viewpoint appears as shown in Figure 3.
Figure 3. Fault Tolerant Processor Architecture: Software View

[Figure 3 shows the two halves of the FTP: the computational core (CPU, RAM and ROM, data exchange registers, interval timer, real time clock, and watchdog timer on the CP bus) and the Input/Output channel (its own CPU, memory, and timers, with IC bus and I/O bus interfaces, on the IOP bus), linked through a shared memory.]
The uniprocessor can be thought of as consisting of two separate and rather independent sections: the computational core and the Input/Output channel.
The computational core has a conventional processor architecture. It has a CPU, memory (RAM and ROM), a Real Time Clock, and interval timer(s). The Real Time Clock counts up and can be read as a memory location (a pair of words) on the CP bus. Interval timers are used to time intervals for scheduling tasks and keeping time-out limits on applications tasks (task watchdog timers). An interval timer can be loaded with a given value which it immediately starts counting down, and when the counter has been decremented to zero, the CPU is interrupted with a timer interrupt. A watchdog timer is provided to increase fault coverage and to fail-safe in case of hardware or software malfunctions. The watchdog timer resets the processor and disables all its outputs if the timer is not reset periodically. The watchdog timer is mechanized independently of the basic processor timing circuitry.
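The watchdog behavior just described can be modeled in a few lines. This is our own illustrative model, not the AIPS hardware:

```python
# Illustrative model of the watchdog semantics described above: correctly
# running software resets ("kicks") the timer periodically; if it fails
# to, the watchdog trips, resetting the processor and disabling outputs.

class WatchdogTimer:
    def __init__(self, period_ticks):
        self.period = period_ticks
        self.remaining = period_ticks
        self.tripped = False
        self.outputs_enabled = True

    def kick(self):
        """Periodic reset performed by correctly running software."""
        self.remaining = self.period

    def tick(self):
        """Advance one clock tick, independent of the CPU's own timing."""
        if self.tripped:
            return
        self.remaining -= 1
        if self.remaining <= 0:
            self.tripped = True           # fail-safe: reset the processor
            self.outputs_enabled = False  # and disable all its outputs
```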
There also appears on the processor bus a set of registers, called the data exchange registers. These are used in the redundant fault tolerant processor to exchange data amongst redundant processors. From a software viewpoint, this is the only form in which hardware redundancy is manifested.
On a routine basis the only data that needs to be exchanged consists of error latches and cross channel comparisons of results for fault detection. These operations can be easily confined to the program responsible for Fault Detection, Isolation, and Reconfiguration. Voting of the results of the redundant computational processors is performed by the Input/Output processors. Therefore, the remaining pieces of the Operating System software and the applications programs need not be aware of the existence of the data exchange registers. The task scheduler and dispatcher, for example, can view the computational core as a single reliable processor.
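The 2-of-3 vote the I/O processors perform on redundant results can be illustrated with a sketch (ours, not the AIPS hardware voter). Because the channels run in tight synchronism, a non-faulty majority produces bit-for-bit identical values:

```python
# Minimal sketch of exact 2-of-3 majority voting over the outputs of
# three synchronized channels: a single faulty channel is masked and
# identified by its disagreement.

def vote3(a, b, c):
    """Return (majority value, index of the disagreeing channel or None).
    Raises if no majority exists (more than one channel faulty)."""
    if a == b == c:
        return a, None
    if a == b:
        return a, 2     # channel 2 (value c) disagrees
    if a == c:
        return a, 1
    if b == c:
        return b, 0
    raise RuntimeError("no majority: more than one channel faulty")
```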
The other half of the processor is the Input/Output channel. The I/O channel has a CPU (same instruction set architecture as the CP), memory (RAM and ROM), a Real Time Clock, and Interval Timer(s). This part of the I/O channel is identical to the CP except that it has less memory than the CP.
The IOP has interfaces to the intercomputer bus, one or more I/O buses, and memory mapped I/O devices. The CP and the IOP also have a shared interface to the system mass memory. These external interfaces of the FTP will be discussed in the next two sections.
The IOP and CP exchange data through a shared memory. The IOP and CP have independent operating systems that cooperate to assure that the sensor values and other data from input devices is made available to the control laws and other applications programs running in the CP in a timely and orderly fashion. Similarly, the two processors cooperate on the outgoing information so that the actuators and other output devices receive commands at appropriate times. This is necessary to minimize the transport lag for closed loop control functions such as flight control and structural control.
The CP and IOP actions are therefore synchronized to some extent. To help achieve this synchronization in software, a hardware feature has been provided. This feature enables one processor to interrupt the other processor. By writing to a reserved address in shared memory the CP can interrupt the IOP, and by writing to another reserved location the IOP can interrupt the CP. Different
meanings can be assigned to this interrupt by leaving an appropriate message, consisting of commands and/or data, in some other predefined part of the shared memory just before the cross-processor interrupt is asserted.
For routine flow of information in both directions, the shared memory will be used without interrupts but with suitable locking semaphores to pass a consistent set of data. The interrupts can be used to synchronize this activity as well as to pass time critical data that must meet tight response time requirements. In order to assure data consistency it is necessary that while one side is updating a block of data the other side does not access that block of data. This can be implemented either through semaphores in software or through double buffering. Hardware support for semaphores, in the form of a test & set instruction, is provided in the IOPs and CPs.

There are many attractive features of this architecture from an operational viewpoint. The most important of these is the decoupling of the computational stream from the input/output stream of transactions. The computational processor is totally unburdened from having to do any I/O transactions; to the CP, all I/O appears memory mapped. This includes not only I/O devices but also all other computers in the system. That is, each sensor, actuator, switch, computer, etc. to which the FTP interfaces can be addressed simply by writing to a word or words in the shared memory.

Data from other processing sites is received by each IOP on the redundant IC buses, hardware voted, and then deposited in their respective shared memories. Simplex source data, such as that from I/O devices, local processors, etc., is received by the single I/O processor that is connected to the target device. This data is then sent to the other two I/O processors using the IOP data exchange hardware. The congruent data is then deposited in all three shared memory modules. In either case, the computational processors obtain all data from outside that has already been processed for faults and source congruency requirements by the I/O processors.

The data exchange mechanism appears to the software as a set of registers on the processor bus. Data exchange between redundant processors takes place one word at a time. Two types of data exchanges are possible: a simplex
