United States Patent [19]                              [11] Patent Number: 5,815,411
Ellenby et al.                                         [45] Date of Patent: Sep. 29, 1998

US005815411A

[54] ELECTRO-OPTIC VISION SYSTEM WHICH EXPLOITS POSITION AND ATTITUDE

[75] Inventors: John Ellenby; Thomas William Ellenby, both of Palo Alto, Calif.

[73] Assignee: Criticom Corporation, San Francisco, Calif.

[21] Appl. No.: 119,360

[22] Filed: Sep. 10, 1993

[51] Int. Cl. .............................. G09G 1/00; G09G 1/28; G01C 11/26
[52] U.S. Cl. .............................. 364/559; 364/449.1; 345/9
[58] Field of Search ...................... 364/443, 449, 559, 578, 424.02;
                                           395/118, 125, 127, 135, 133;
                                           345/7, 8, 9; 348/115

[56] References Cited

U.S. PATENT DOCUMENTS

4,322,726   3/1982  Collier et al. ............ 340/705
4,380,024   4/1983  Olofsson .................. 348/115
4,489,389  12/1984  Beckwith et al. ........... 364/522

(List continued on next page.)

FOREIGN PATENT DOCUMENTS

05394517   7/1989  United Kingdom

OTHER PUBLICATIONS

"Mission Accomplished", NASA Tech Briefs, Jul. 1993.
"... Virtual Reality", San Diego Union Tribune, Bruce Biglow, no date.
"NASA Vision", Final Frontier, Mike Fisher, Aug. 1993.
"Sextants in Space", Wall Street Journal, Susan Carey, Jul. 20, 1993.
"Digital Electronic Still Camera", NASA Tech Briefs, Jun. 1993, Samual Holland and Herbert D. Yeates.
"Mapping Wildfires in Nearly Real Time", NASA Tech Briefs, Jun. 1993, Joseph D. Nichols et al.
"From the Ground Up", Publish magazine, Jan. 1994, Stuart Silverstone.
"Gyro Sensed Stereoscopic HMD", Information Display, Dec. 1993.
"Unveil Map Device", Wall Street Journal, no author, no date.
"Lost? The Little Box ...", Wall Street Journal, no author, no date.

Primary Examiner-Emanuel T. Voeltz
Assistant Examiner-M. Kemper
Attorney, Agent, or Firm-Page Lohr Associates

[57] ABSTRACT
The present invention is generally concerned with electronic vision devices and methods, and is specifically concerned with image augmentation in combination with navigation, position, and attitude devices. In the simplest form, devices of the invention can be envisioned to include six major components: 1) a camera to collect optical information about a real scene and present that information as an electronic signal to 2) a computer processor; 3) a device to measure the position of the camera; and 4) a device to measure the attitude of the camera (direction of the optic axis), thus uniquely identifying the scene being viewed, and thus identifying a location in 5) a data base where information associated with various scenes is stored; the computer processor combines the data from the camera and the data base and perfects a single image to be presented at 6) a display whose image is continuously aligned to the real scene as it is viewed by the user.

The present invention is a vision system including devices and methods of augmented reality wherein an image of some real scene is altered by a computer processor to include information from a data base having stored information of that scene in a storage location that is identified by the real-time position and attitude of the vision system. It is a primary function of the vision system of the invention, and a contrast to the prior art, to present augmented real images and data that are continuously aligned with the real scene as that scene is naturally viewed by the user of the vision system. An augmented image is one that represents a real scene but has deletions, additions and supplements.
`
`5 Claims, 6 Drawing Sheets
`
`
`
`...:34
`
`(MPTER
`NTERFA:
`39
`
`ULTRA 8).JN)
`SYRANGING
`37
`
`
`
`20
`
`Y
`SE,
`INP.T. S.
`lJSERINTERFACE.
`
`IISPLAY
`its
`
`AJ)(C)
`
`31
`
`—
`
`?
`
`taMRA
`
`ASUS-1022, Page 1
`
`
`
`5,815,411
`Page 2
`
`U.S. PATENT DOCUMENTS
`395/127 X
`2/1987 Graf etal
`4,645,459
`Ia el al. . . . . . . . . . . . . . . . . . . . . . . . . . .
`2 - - - 2
`... 358/136
`4,661,849 4/1987 Hinman ...
`... 340/747
`4,667,190 5/1987 Fant ........
`... 358/133
`4,682,225
`7/1987 Graham ......
`4,688,092 8/1987 Kamel et al. ........................... 358/109
`4,807,158 2/1989 Blanton et al. ...
`364/424.01
`5. SE Nandra et al. ....................... :s
`2Y- . 12
`ye . . . . . . . . . . . . . . .
`4,908,704 3/1990 Fujioka et al
`
`... 358/108
`
`356/152
`4,930,888 6/1990 Freisleden .........
`340/905
`4,937,570 6/1990 Matsukawa et al.
`4,940,972 7/1990 Mouchot et al. ................... 395/127 X
`
`4,970.666 11/1990 Welsh et al. ............................ 364/522
`5,034,812
`7/1991 Rawlings ................................ 358/108
`5,072,218 12/1991 Spero et al. ............................ 345/8 X
`5,115,398
`5/1992 De Jong .................................. 364/443
`5,124,915
`6/1992 Krenzel ................................... 364/420
`5,133,050 7/1992 George ...
`... 395/135
`5,252,950 10/1993 Saunders et al. ........................... 345/9
`5,296,854 3/1994 Hamilton et al. ....................... 345/9 X
`
`
`
`5,311,203
`
`5/1994 Norton - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 345/7
`
`6/1994 Lewis et al. ............................ 345/8 X
`5,322,441
`5,353,134 10/1994 Michel et al. ............................ 359/52
`5,394,517 2/1995 Kalawsky ................................ 395/129
`
`ASUS-1022, Page 2
`
`
`
U.S. Patent    Sep. 29, 1998    Sheet 1 of 6    5,815,411

[Drawing sheet 1: figure content not legible in this scan]
`
`
`
`
`
U.S. Patent    Sep. 29, 1998    Sheet 2 of 6    5,815,411

[Drawing sheet 2: figure content not legible in this scan]
`
`
`
U.S. Patent    Sep. 29, 1998    Sheet 3 of 6    5,815,411

[Figure 2B]
`
`
`
U.S. Patent    Sep. 29, 1998    Sheet 4 of 6    5,815,411

[Figure 3: block diagram showing Camera (15) with item (16), Data Store (12), Computer (14), and Display (13)]
`
`
`
U.S. Patent    Sep. 29, 1998    Sheet 5 of 6    5,815,411

[Figure 4: block diagram of Computer (14) with Attitude sensor (15), GPS (16), Frame Grabber, Data Store (12), Memory, External I/O (25), Audio I/O (26), Control I/O (27), Video (28), and Display (13); reference numerals 21 and 23 also appear]
`
`
`
U.S. Patent    Sep. 29, 1998    Sheet 6 of 6    5,815,411

[Drawing sheet 6: figure labels largely illegible in this scan (page appears rotated)]
`
`
`
`1
`ELECTRO-OPTIC VISION SYSTEM WHICH
`EXPLOITS POSITION AND ATTTUDE
`
`BACKGROUND OF THE INVENTION
The present invention is generally concerned with electronic vision devices and methods, and is specifically concerned with image augmentation in combination with navigation, position, and attitude devices.

One may have to look quite far into the annals of history to find the first uses of maps. Maps generally provide information to alert a user to things that are not readily apparent from simple viewing of a real scene from the user's location. For example, a user of a city road map may not be able to see a tunnel on Elm Street if the user is currently seven miles away on First Street and looking in the direction of the Elm Street tunnel. However, from the First Street location, the user could determine from a road map that there is a tunnel on Elm Street. He could learn that the tunnel is three miles long, starts on Eighth Street and ends on Eleventh Street. There may even be an indication of the size of the tunnel such that it could accommodate four traffic lanes and a bicycle lane.

Unfortunately, it is not always possible to translate the information from a map to the real scene that the information represents as the scene is actually viewed. It is common for users of maps to attempt to align the map to reality to get a better "feel" of where things are in relation to the real world. Those who are familiar with maps can verify that the fact that maps are drawn with north being generally in the direction of the top of the map is of little use when translating the information to the scene of interest. Regardless of where north is, one tends to turn the map so that the direction ahead of the user, or in the direction of travel, in a real scene matches that direction on the map. This may result in the condition of an "upside down" map that is quite difficult to read (the case when the user is traveling south). Although translating the directions of the map to reality is a formidable task, it is an even greater problem to translate the symbols on the map to those objects in reality which they represent. The tunnel symbol on the map does not show what the real tunnel actually looks like. The fact that the appearance of the tunnel from infinitely many points of view is prohibitively difficult to represent on a map accounts for the use of a simple symbol. Furthermore, the map does not have any indication from which point of view the user will first see the tunnel, nor any indication of the path which the user will take to approach the tunnel.

It is now possible to computerize city road map information and display the maps according to the path taken by a user. The map is updated in "real time" according to the progress of the user through the city streets. It is therefore possible to relieve the problem of upside-down maps, as the computer could re-draw the map with the text in correct orientation relative to the user even when one is traveling in a southerly direction. The computer-generated map is displayed at a monitor that can be easily refreshed with new information as the user progresses along his journey. Maps of this type for automobiles are well known in the art. Even very sophisticated maps with computer-generated indicia to assist the user in decision making are available and described in patents such as De Jong, U.S. Pat. No. 5,115,398. This device can display a local scene as it may appear and superimpose onto the scene symbolic information that suggests an action to be taken by the user, for example a left turn, as is shown in FIG. 3 of that disclosure. Even in these advanced systems, a high level of translation is required of the user. The computer-generated map does not attempt to present an accurate alignment of displayed images to the real objects which they represent.
Devices employing image supplementation are known and include Head Up Displays (HUDs) and Helmet Mounted Displays (HMDs). A HUD is a useful vision system which allows a user to view a real scene, usually through an optical image combiner such as a holographic mirror or a dichroic beamsplitter, and have superimposed thereon navigational information, for example symbols of real or imaginary objects, vehicle speed and altitude data, et cetera. It is a primary goal of the HUD to maximize the time that the user is looking into the scene of interest. For a fighter pilot, looking at a display device located nearby on an instrument panel, changing the focus of one's eyes to read that device, and returning to the scene of interest requires a critically long time and could cause a fatal error. A HUD allows a fighter pilot to maintain continuous concentration on a scene at optical infinity while reading instruments that appear to the eye to also be located at optical infinity, thereby eliminating the need to refocus one's eyes. A HUD allows a pilot to maintain a "head-up" position at all times.

For the airline industry, HUDs have been used to land airplanes in low-visibility conditions. HUDs are particularly useful in a landing situation where the boundaries of a runway are obscured in the pilot's field of view by fog, but artificial boundaries can be projected onto the optical combiner of the HUD system to show where in the user's vision field the real runway boundaries are. The virtual runway projection is positioned in the vision field according to data generated by communication between a computer and the airport instrument landing system (ILS), which employs a VHF radio beam. The system provides the computer with two data figures: first, a glide slope figure, and second, a localizer, which is a lateral position figure. With these data, the computer is able to generate an optical (photon) image to be projected and combined with the real (photon) scene that passes through the combiner, thereby enhancing certain features of the real scene, for example runway boundaries. The positioning of the overlay depends on the accuracy of the airplane boresight being in alignment with the ILS beam and other physical limitations. The computer is not able to recognize images in the real scene and does not attempt to manipulate the real scene except for highlighting parts thereof. HUDs are particularly characterized in that they are an optical combination of two photon scenes: a first scene, one that is normally viewed by the user's eyes, passes through an optical combiner, and a second, computer-generated photon image is combined with the real image at an optical element. In a HUD device it is not possible for the computer to address objects of the real scene, for example to alter or delete them. The system only adds enhancement to a feature of the real image by drawing interesting features thereon. Finally, HUDs are very bulky, are typically mounted into an airplane or automobile, and require a great deal of space and complex optics including holograms and specially designed lenses.
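The two ILS data figures lend themselves to a short worked illustration. The following is a minimal sketch, not anything disclosed in this patent or in any particular HUD: it assumes a simple small-angle pinhole projection, and the function name, focal length, and deviation values are all hypothetical.

```python
import math

def ils_overlay_offset(glide_slope_dev_deg, localizer_dev_deg, focal_px=800.0):
    """Convert the two ILS figures (angular deviations, in degrees) into a
    pixel offset for a projected runway overlay.

    Assumes a pinhole model: an angular deviation of theta shifts the
    projected symbol by focal_px * tan(theta) pixels on the combiner.
    """
    dx = focal_px * math.tan(math.radians(localizer_dev_deg))    # lateral (localizer)
    dy = focal_px * math.tan(math.radians(glide_slope_dev_deg))  # vertical (glide slope)
    return dx, dy

# Example: 0.5 degrees right of the localizer centerline,
# 0.2 degrees below the glide slope.
print(ils_overlay_offset(-0.2, 0.5))  # ~(7.0, -2.8) pixels
```

As the text notes, a real HUD's overlay accuracy also depends on the airplane boresight being aligned with the ILS beam, which this toy model ignores.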
Helmet Mounted Displays (HMDs) are similar to HUDs in that they also combine enhancement images with real-scene photon images, but they typically have very portable components. Micro CRTs and small combiners make the entire system helmet-mountable. It is a complicated matter to align computer-generated images to a real scene in relation to a fast-moving helmet. HUDs can align the data-generated image by indexing it to the airplane axis, which moves slowly in relation to a runway. For this reason, HMDs generally display data that does not change with the pilot's head movements, such as altitude and airspeed. HMDs suffer the same limitation as HUDs in that they do not provide the capacity to remove or augment elements of the real image.
Another related concept that has resulted in a rapidly developing field of computer-assisted vision systems is known as virtual reality (VR). Probably best embodied in the fictional television program "Star Trek: The Next Generation", the "Holodeck" is a place where a user can go to have all of his surroundings generated by a computer so as to appear to the user to be another place, or another place and time.

Virtual reality systems are useful in particular as a training means, for example in aircraft simulation devices. A student pilot can be surrounded by a virtual "cockpit" which is essentially a computer interface whereby the user "feels" the environment that may be present in a real aircraft, in a very real way, perhaps enhanced with computer-generated sounds, images and even mechanical stimuli. Actions taken by the user may be interpreted by the computer, and the computer can respond to those actions to control the stimuli that surround the user. VR machines can create an entire visual scene, and there is no effort to superimpose a computer-generated scene onto a real scene. A VR device generally does not have any communication between its actual location in reality and the stimuli being presented to the user. The location of the VR machine and the location of the scene being generated generally have no physical relationship.
VR systems can be used to visualize things that do not yet exist. For example, a home can be completely modeled with a computer so that a potential buyer can "walk through" before it is even built. The buyer could enter the VR atmosphere and proceed through computer-generated images and stimuli that accurately represent what the home would be like once it is built. In this way, one could know if a particular style of home is likable before the large cost of building the home is incurred. The VR machine, being entirely programmed with information from a designer, does not anticipate things that presently exist, and there is no communication between the elements presented in the VR system and those elements existing in reality.
While the systems and inventions of the prior art are designed to achieve particular goals, features, advantages, and objectives, some of those being no less than remarkable, these systems and inventions have limitations and faults that prevent their use in ways that are only possible by way of the present invention. The prior art systems and inventions cannot be used to realize the advantages and objectives of the present invention.
`SUMMARY OF THE INVENTION
Comes now an invention of a vision system including devices and methods of augmented reality wherein an image of some real scene is altered by a computer processor to include information from a data base having stored information of that scene in a storage location that is identified by the real-time position and attitude of the vision system. It is a primary function of the vision system of the invention, and a contrast to the prior art, to present augmented real images and data that are continuously aligned with the real scene as that scene is naturally viewed by the user of the vision system. An augmented image is one that represents a real scene but has deletions, additions and supplements. The camera of the device has an optical axis which defines the direction of viewing, as in a simple "camcorder"-type video camera, where the image displayed accurately represents the real scene as it appears from the point of view of one looking along the optical axis. In this way, one easily orients the information displayed to the world as it exists. A fundamental difference between the vision system of the invention and that of a camcorder can be found in the image augmentation. While a camcorder may present the superposition of an image and data such as a "low battery" indicator, et cetera, it has no "knowledge" of the scene that is being viewed. The data displayed usually is related to the vision device or something independent of the scene, such as the time and date. Image augmentation of the invention can include information particular to the scene being viewed with the invention.
The vision system of the invention can include a data base with prerecorded information regarding various scenes. The precise position and attitude of the vision system indicate to the data base the scene that is being viewed. A computer processor can receive information about the particular scene from the data base and can then augment an image of the scene generated by the camera of the vision system and present a final image at the display which includes a combination of information from the optical input and information that was stored in the data base. Particularly important is the possibility of communication between the data from the data base and the real image. Analyzing and processing routines may include recognition of items in the real scene and comparisons with artifacts of the stored data. This could be useful in alignment of the real images to the recalled data.
In a situation where the optical input of a scene is entirely blocked from the camera of the system, for example by dense fog, an image of the scene can be generated which includes only information from the data base. Alternatively, the data base, being of finite size, may not have any information about a particular scene. In this case, the image presented at the display would be entirely from the optical input of the real scene. This special case reduces the vision system to the equivalent of a simple camcorder or electronic binocular. It is also a very special case where the features of a real scene are selectively removed. If the bright lights of a city-scape obstruct the more subtle navigation lights of a marine port of entry, then it is possible for the processing routines to discriminate between the city lights and the navigation lights. The undesirable city lights could be removed in the processor before the final image is displayed. In the final image, the important navigation lights show clearly and the city lights are not present at all. Therefore, a final image of the invention can be comprised of information from two sources, in various combinations, superimposed together to form a single, high information-density image. The information of the two sources is compared and combined to form a single augmented image that is presented at a display and is aligned to the real scene as the scene is viewed by the user.
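The selective-removal case can be pictured with a toy sketch. This is only an illustration of the idea, not the patent's processing routine: detected lights are given as pixel coordinates, the known navigation lights come from a hypothetical chart data base, and matching by pixel proximity is an assumption made here for brevity.

```python
# Hypothetical chart data: pixel positions of known navigation lights
# after the data base records have been projected into the camera frame.
NAV_LIGHTS_DB = [(120, 340), (410, 355), (655, 348)]

def keep_navigation_lights(detected, db=NAV_LIGHTS_DB, tol=10.0):
    """Keep only detected lights within `tol` pixels of a known navigation
    light; city lights with no data base counterpart are suppressed
    before the final image is formed."""
    return [(x, y) for (x, y) in detected
            if any((x - u) ** 2 + (y - v) ** 2 <= tol ** 2 for (u, v) in db)]

detected = [(119, 342), (300, 100), (412, 353), (500, 90)]  # two are city lights
print(keep_navigation_lights(detected))  # -> [(119, 342), (412, 353)]
```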
In the simplest form, devices of the invention can be envisioned to include six major components: 1) a camera to collect optical information about a real scene and present that information as an electronic signal to 2) a computer processor; 3) a device to measure the position of the camera; and 4) a device to measure the attitude of the camera (direction of the optic axis), thus uniquely identifying the scene being viewed, and thus identifying a location in 5) a data base where information associated with various scenes is stored; the computer processor combines the data from the camera and the data base and perfects a single image to be presented at 6) a display whose image is continuously aligned to the real scene as it is viewed by the user.
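One way to picture the six-part arrangement is to wire the components together in code. The sketch below is minimal and assumed: every class, method, and field name is hypothetical, since the patent names the components but discloses no programming interface.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple   # from 3) the position measuring device
    attitude: tuple   # from 4) the attitude measuring device (optic axis direction)

class VisionSystem:
    """Minimal wiring of the six components (interfaces assumed, not disclosed)."""

    def __init__(self, camera, pos_sensor, att_sensor, database, display):
        self.camera = camera          # 1) real scene -> electronic signal
        self.pos_sensor = pos_sensor  # 3) position of the camera
        self.att_sensor = att_sensor  # 4) attitude of the camera
        self.database = database      # 5) scene-indexed stored information
        self.display = display        # 6) boresight-aligned output

    def frame(self):
        image = self.camera.capture()
        pose = Pose(self.pos_sensor.read(), self.att_sensor.read())
        data = self.database.lookup(pose)   # pose uniquely identifies the scene
        return self.display.show(self.combine(image, data))  # 2) processor's role

    @staticmethod
    def combine(image, data):
        # Placeholder for the augmentation routines (additions, deletions,
        # supplements); details are elided in this sketch.
        return {"image": image, "overlay": data}
```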
`
`
`
`
The camera is related to the position measuring device and the attitude measuring device in that the measurements of position and attitude are made at the camera with respect to arbitrary references. The position and attitude measurement means are related to the data base in that the values of those measurements specify particular data base locations where particular image data are stored. We can think of the position and attitude measurements as defining the data base pointer of two orthogonal variables. The camera is related to the computer processor in that the image generated at the camera is an electronic image and is processed by the computer processor. The data base is related to the computer in that the data base furnishes the processor information, including images, for use in processing routines. The display is related to the computer as it receives the final processed image and converts the computer's electric image signal into an optical image that can be viewed by the user. The display is boresight aligned with the optical axis of the camera such that the information corresponds to reality and appears to the user in a way that allows the user to view the final augmented image without needing to translate the image to the orientation of the real scene.
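The "pointer of two orthogonal variables" can be pictured as quantizing the two measurements into a discrete key that addresses stored scene data. The sketch below is illustrative only; the bin sizes, coordinate conventions, and sample record are assumptions, not values from the patent.

```python
def db_key(position, heading_deg, pos_bin=100.0, att_bin=5.0):
    """Quantize position (east, north in meters) and attitude (heading in
    degrees) into a discrete data base location; nearby poses that view
    the same scene map to the same key."""
    e, n = position
    return (round(e / pos_bin), round(n / pos_bin),
            round((heading_deg % 360.0) / att_bin))

SCENE_DB = {(12, 47, 18): ["Elm Street tunnel: four traffic lanes, bicycle lane"]}

key = db_key((1230.0, 4710.0), 91.0)
print(SCENE_DB.get(key, "no stored data: display the camera image alone"))
```

Here a pose of (1230 m, 4710 m) with heading 91 degrees lands in the same bin as the stored record, so the tunnel annotation would be recalled for augmentation.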
In the simplest form, methods of the invention can be envisioned to include seven major steps: 1) an acquire step whereby the light from a scene is imaged by a lens; 2) a conversion step whereby optical information of the acquire step is converted into an electrical signal; 3) a position determining step in which the position of the camera is measured; 4) an attitude determining step in which the attitude of the camera is measured; 5) a data recall step where a data location is selected in accordance with the measurements in steps 3 and 4 and user input data, and data is recalled by a computer processor; 6) a processing step wherein data from the data store and the electronic image are combined and processed; and 7) a display step wherein a processed final image is displayed.
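Read as code, the seven steps form a single processing cycle. The sketch below simply restates the list above under assumed interfaces; none of the function or parameter names come from the patent.

```python
def vision_cycle(lens, sensor, gps, gyro, datastore, processor, display,
                 user_input=None):
    """One pass through the seven method steps (interfaces hypothetical)."""
    optical = lens.acquire()                      # 1) acquire: scene imaged by a lens
    electronic = sensor.convert(optical)          # 2) conversion: optics -> signal
    position = gps.measure()                      # 3) position determination
    attitude = gyro.measure()                     # 4) attitude determination
    data = datastore.recall(position, attitude,   # 5) data recall, keyed by the
                            user_input)           #    measurements and user input
    final = processor.combine(electronic, data)   # 6) processing: combine and augment
    display.show(final)                           # 7) display the final image
```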
The product of the acquire step, an optical image, is converted to an electric signal in the conversion step. The electronic image of the conversion step is transmitted to the processor in the processing step. The products of the position determining step and attitude determining step are values that are used in the data recall step. The result of the data recall step is also transmitted to the processor to be combined with the electronic image of the conversion step in the processing step. The product of the processing step, a final electronic representation of an augmented image, is transmitted to, and displayed in, optical format in the display step.
The invention will be summarized further by presenting six examples of the invention wherein a description of the devices, methods, and uses thereof follows.
In a first summary example of the invention, the reader is to imagine a scenario where a boat is approaching a port of entry and the user of the invention is a navigation officer of that boat. It is quite common for a navigation officer to require many aids to guide his course through a shipping channel. Charts, a compass, lighted buoys, sonic devices, ranges, radar, and binoculars are some of the instruments that one may use to navigate a boat. Recent advances in position determining technologies, in particular the Global Positioning System, or GPS, have simplified the task of navigation. With the GPS, a navigation officer can rely on knowing the position of the craft to within approximately ±300 feet, north and east, and in some special cases within less. Even with such a good position determination, the navigation officer must locate where on the chart his position corresponds, and identify symbols on the chart to create a mental image of his surroundings. Then the navigation officer must look about at the real scene before him for identifiable objects to determine how what he sees corresponds to the symbols on the chart. Frequently, visibility is limited by darkness or weather conditions, and particular lights must be recognized to identify chart markings. These can be colored flashing lights and can easily be mistaken for the bright lights of a city skyline. In other cases, the markers may be unlit and may be impossible to find in the dark. Dangerous objects, for example sunken ships, kelp, and reefs, are generally marked on the chart but cannot be seen by the navigation officer because they can be partially or entirely submerged. The navigation officer must imagine in his mind his position with respect to objects in the real scene and those on the chart, and must also imagine where in the real scene the chart is warning of dangers. This procedure requires many complex translations and interpretations between the real scene, the markers of the chart, the scene as it is viewed by the navigation officer, and the chart as understood by the navigation officer. Obviously, there is great potential for mistakes. Many very skilled and experienced naval navigators have failed the complicated task of safely navigating into a port, resulting in tragic consequences. With the system of the invention, a navigation officer can look with certainty at a scene and locate known marks exactly. The system of the invention eliminates the need for determining where in a real scene the symbols of a chart correspond. The user of the invention can position the display between his eyes and the real scene to see an image of the real scene with the symbols of a chart superimposed thereon. In the navigator's mind it becomes very concrete where the otherwise invisible reefs are located, which lights are the real navigation lights, and which lights are simply street lights. It is possible for the computer to remove information such as stray lights from the image as it is recorded from the real scene and to present only those lights that are used for navigation in the display. This is possible because the data base of the invention "knows" of all navigation lights and the processor can eliminate any others. The display that the user views includes a representation of a scene with complicated undesirable objects removed and useful data and objects added thereto. Whenever the navigator points the device of the invention in some direction, the device records the optical image of the real scene, simultaneously determines the position and attitude of the device, and calls on a data base for information regarding the scene being observed. The processor analyzes the image and any data recalled and combines them to form the final displayed image.
In a further summary example of the invention, a city planning commission may wish to know how a proposed building may look in the skyline of the city. Of course it is possible to make a photograph of the skyline and to airbrush the proposed building into the photograph. This commonly used method has some shortfalls. It shows only a single perspective of the proposed building, which may be presented in the "best light" by a biased developer (or in an "undesirable light" by a biased opponent or competitor). The building may be presented to appear very handsome next to the city hall as shown in the developer's rendition. Since only one perspective is generally shown in a photograph, it may be impossible to determine the full impact the building may have with respect to other points of view. It may not be clear from the prepared photograph that the beautiful bay view enjoyed by users of city hall would be blocked after the building is constructed. With the current invention, the details of every perspective could be easily visualized. Data that accurately represents the proposed building could be entered into a data base of a device of the invention. When the camera of the invention is pointed in the direction of the new building, the camera portion records the real scene as it appears and transmits that signal to a processor. The device accurately determines the position and attitude of the camera with respect to the scene and recalls data from the data base that properly represents the perspective of the building from that point of view. The processor then combines the real scene with the data of the proposed building to create a final image of the building from that particular perspective. It would even be possible for a helicopter to fly in a circle around the location of the building and for a user to see it from all possible points of view. A council member could see what the future structure would be like in real life from any perspective before voting to approve the plan.
In a still further summary example, we choose a scenario where an engineer uses products of the invention for analysis and troubleshooting of an engineering problem; in particular, the case where a problem has been detected in the plumbing aboard a submarine. The complicated works, including pipes, tubes, pumps, cables, wires, et cetera, of a submarine may be extremely difficult to understand by looking at a design plan and translating the information from the plan to the real world. Immediate and positive identification of a particular element may be critical to survival of the ship in an emergency. The following illustrative engineering use of the invention provides for a greatly simplified way of positive identification of engineering features.

An engineer aboard a submarine is tasked to work in the torpedo room on a saltwater pump used to pump down the torpedo tubes after use. In preparation for the job, the data base of a portable electro-optic vision device is updated from the ship's central computers with information regarding the details of the pump in question and of the details of the torpedo room where the pump is located. In the event of a battle-damaged ship, or in case of limited visibility due to fire or power outages, the vision device can provide guidance to the location of the pump through visual and audio clues. As the various pipes may be routed through bulkheads and behind walls, the vision system can be used to "see" through walls and to "fill in" the locations of the pipes so that the otherwise hidden pipe can be followed continuously and without ambiguity. Upon arriving at the pump in question, the engineer points the camera axis of the vision device in the direction of the pump. In the display of the device, a real image of the pump is superimposed with data such as the part number of the pump and clues to features of the pump such as the type of material being pumped and flow direction