(12) Patent Application Publication      (10) Pub. No.: US 2011/0199479 A1
     Waldman                             (43) Pub. Date:       Aug. 18, 2011

(54) AUGMENTED REALITY MAPS

(75) Inventor:      Jaron Waldman, Palo Alto, CA (US)

(73) Assignee:      Apple Inc., Cupertino, CA (US)

(21) Appl. No.:     12/705,558

(22) Filed:         Feb. 12, 2010

                    Publication Classification

(51) Int. Cl.
     H04N 7/18      (2006.01)
     G01C 21/00     (2006.01)

(52) U.S. Cl. ............... 348/116; 701/201; 348/E07.085
(57)                        ABSTRACT

A user points a handheld communication device to capture and display a
real-time video stream. The handheld communication device detects
geographic position, camera direction, and tilt of the image capture
device. The user sends a search request to a server for nearby points of
interest. The handheld communication device receives search results based
on the search request, geographic position, camera direction, and tilt of
the handheld communication device. The handheld communication device
visually augments the captured video stream with data related to each
point of interest. The user then selects a point of interest to visit. The
handheld communication device visually augments the captured video stream
with a directional map to a selected point of interest in response to the
user input.
`
`
`
`
`
[Front-page drawing: block diagram of a handheld communication device showing a processor, memory, graphics accelerator, accelerometer, communications interface, camera, compass, display, and input device, in communication with a server 602.]
`
`
`
`
[Sheet 1 of 6: FIG. 1]
`
`
`
[Sheet 2 of 6: FIG. 2]
`
`
`
[Sheet 3 of 6: FIG. 3]
`
`
`
[Sheet 4 of 6: FIG. 4 — flow chart of an exemplary method:
 402  Capturing & displaying a video stream on a handheld electronic device
 404  Detecting geographic position, direction, and/or tilt of the handheld electronic device
 406  Sending a request for nearby POIs based on a search term
 408  Receiving nearby POIs in response to the request
 410  Visually augmenting the captured video stream with POI data
 412  Visually augmenting the captured video stream with a directional map to a selected point of interest in response to a user input]
`
`
`
[Sheet 5 of 6: FIG. 5 — block diagram of computer system 500 with keyboard, microphone, display, non-volatile storage, RAM, and pointing device.]
`
`
`
[Sheet 6 of 6: FIG. 6 — system block diagram including communications interface 628.]
`
`
`
`
`AUGMENTED REALITY MAPS
`
`FIELD
`
[0001] The following relates to searching for nearby points of interest, and more particularly to displaying information related to nearby points of interest overlaid onto a video feed of a surrounding area.
`
`BACKGROUND
`
[0002] Augmented reality systems supplement reality, in the form of a captured image or video stream, with additional information. In many cases, such systems take advantage of a portable electronic device's imaging and display capabilities and combine a video feed with data describing objects in the video. In some examples, the data describing the objects in the video can be the result of a search for nearby points of interest.
`
[0003] For example, a user visiting a foreign city can point a handheld communication device and capture a video stream of a particular view. A user can also enter a search term, such as museums. The system can then augment the captured video stream with search term result information related to nearby museums that are within the view of the video stream. This allows a user to supplement their view of reality with additional information available from search engines.
[0004] However, if a user desires to visit one of the museums, the user must switch applications, or at a minimum, switch out of an augmented reality view to learn directions to the museum. Such systems can fail to orient a user with a poor sense of direction and force the user to correlate the directions with objects in reality. Such a transition is not always as easy as it might seem. For example, an instruction that directs a user to go north on Main St. assumes that the user can discern which direction is north. Further, in some instances, street signs might be missing or indecipherable, making it difficult for the user to find the directed route.
`
`SUMMARY
`
[0005] Such challenges can be overcome using the present technology. Therefore, a method and system for displaying augmented reality maps are disclosed. By interpreting the data describing the surrounding areas, the device can determine what objects are presently being viewed on the display. The device can further overlay information regarding the presently viewed objects, thus enhancing reality. In some embodiments, the device can also display search results overlaid onto the displayed video feed. Search results need not be actually viewable by a user in real life. Instead, search results can also include more-distant objects.
`[0006] The user can interact with the display using an input
`device such as a touch screen. Using the input device, the user
`can select from among objects represented on the screen,
`including the search results.
`[0007]
`In one form of interaction, a device can receive an
`input from the user requesting directions from a present loca-
`tion to a selected search result. Directions can be overlaid
`
`onto the presently displayed Video feed, thus showing a
`course and upcoming turns. As the user and associated device
`progress along a route, the overlaid directions can automati-
`cally update to show the updated path.
`[0008]
`In some embodiments the display can also include
`indicator graphics to point the user in a proper direction. For
`example, if the user is facing south but a route requires the
`
`user to progress north, “no route” would be displayed in the
`display because the user would be looking to the south but the
`route would be behind him or her. In such instances, an
`indicator can point the user in the proper direction to find the
`route.
`
[0009] In some embodiments, multiple display views can be presented based on the orientation of the device. For example, when the device is held at an angle with respect to the ground of 45 degrees to 180 degrees, the display view can present the augmented reality embodiments described herein. However, when the device is held at an angle less than 45 degrees, an illustrated or schematic view can be presented. In such embodiments, when the device is held at an angle with respect to the ground of less than 45 degrees, the device is likely pointed at the ground, where few objects of interest are likely to be represented in the displayed video. In such instances, a different map view is more likely to be useful. It should be appreciated that the precise range of tilt can be adjusted according to the actual environment or user preferences.
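The tilt-based switch described above can be summarized with a short sketch. The following Python fragment is illustrative only; the function name and the fixed 45-degree threshold are assumptions rather than part of the disclosure:

```python
# Illustrative sketch: pick a display view from the device's tilt angle.
# The 45-degree threshold is the example value given above and is assumed
# to be adjustable per environment or user preference.
AR_VIEW_THRESHOLD_DEG = 45.0

def select_view(tilt_degrees: float) -> str:
    """tilt_degrees: angle between the device and the ground (0 = flat, 90 = upright)."""
    if tilt_degrees >= AR_VIEW_THRESHOLD_DEG:
        return "augmented_reality"  # device likely pointed at the surroundings
    return "schematic_map"          # device likely pointed at the ground

print(select_view(70.0))  # -> augmented_reality
print(select_view(20.0))  # -> schematic_map
```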
[0010] In practice, a user points a handheld communication device to capture and display a real-time video stream of a view. The handheld communication device detects a geographic position, camera direction, and tilt of the image capture device. The user sends a search request to a server for nearby points of interest. The handheld communication device receives search results based on the search request, geographic position, camera direction, and tilt of the handheld communication device. The handheld communication device visually augments the captured video stream with data related to each point of interest. The user then selects a point of interest to visit. The handheld communication device visually augments the captured video stream with a directional map to a selected point of interest in response to the user input.
[0011] A method of augmenting a video stream of a device's present surroundings with navigational information is disclosed. The user can instruct the device to initiate a live video feed using an onboard camera and display the captured video images on a display. By polling a Global Positioning System (GPS) device, a digital compass, and optionally, an accelerometer, location, camera direction, and orientation information can be determined. By using the location, camera direction, and orientation information, the device can request data describing the surrounding areas and the objects therein. In some embodiments, this data includes map vector data. The data can be requested from an onboard memory or a server. The data describing surrounding areas can further be requested in conjunction with a search request. The search request can also include a request for information about nearby places of interest.
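As a rough illustration of the request described in paragraph [0011], the Python sketch below bundles the polled position, camera direction, and tilt with a search term. All field, type, and function names are hypothetical; the disclosure does not specify a request format:

```python
# Illustrative sketch: assemble a point-of-interest request from polled sensor
# data. Field names and the radius parameter are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class DeviceState:
    latitude: float      # from the GPS device
    longitude: float     # from the GPS device
    heading_deg: float   # from the digital compass, degrees clockwise from north
    tilt_deg: float      # from the accelerometer, angle above the ground plane

def build_poi_request(state: DeviceState, search_term: str, radius_m: float = 2000.0) -> dict:
    """Build a request that an onboard database or a remote server could answer."""
    return {
        "lat": state.latitude,
        "lon": state.longitude,
        "heading": state.heading_deg,
        "tilt": state.tilt_deg,
        "query": search_term,
        "radius_m": radius_m,
    }

request = build_poi_request(DeviceState(37.7596, -122.4269, 315.0, 70.0), "parks")
```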
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
[0012] FIG. 1 illustrates an exemplary visually augmented captured image with data related to a search for points of interest;
[0013] FIG. 2 illustrates the results of a field-of-view and point-of-interest search;
[0014] FIG. 3 illustrates an exemplary captured image visually augmented with a route to a selected point of interest;
[0015] FIG. 4 is a flow chart illustrating an exemplary method of preparing and displaying an augmented reality map;
[0016] FIG. 5 is a schematic illustration of an exemplary system embodiment; and
`
`
[0017] FIG. 6 is a schematic illustration of an exemplary system embodiment.
`
`DESCRIPTION
`
[0018] The technology described herein visually augments a captured image or video stream with data for points of interest related to search terms entered by the user. The technology also visually augments the captured image or video stream with a directional map to a selected point of interest.
[0019] FIG. 1 is a screenshot illustrating an augmented reality embodiment as described herein. As illustrated, a handheld communication device has captured an image 102 of the northwest corner of the intersection of Dolores St. and 17th St. using its image-capturing device and displayed the image on its display. In this way, the display can function as a viewfinder. As illustrated, the captured image 102 has been augmented with information corresponding to points of interest 104, 106 and street labels 110, 112.
[0020] FIG. 1 illustrates a captured and presented image 102 using an image capture device, i.e., the camera of a smart phone, which is but one type of handheld communication device to which the present disclosure can be applied. In this illustrated embodiment, the user has entered a search term "parks" in search bar 108 to conduct a search for nearby parks, i.e., a specific type of point of interest. Using map data that describes the area surrounding the present location of the device and the points of interest located in the surrounding area, the device augments the displayed image with additional information. In this instance, the smart phone or handheld communication device displays points of interest described by the data that are visible in the viewfinder (such as Dolores St. 110 and 17th St. 112) or within a field of view and range from the geographic position of the device but that are obstructed by other in-screen objects, e.g., Golden Gate Park 104 and Buena Vista Park 106. While other parks might also be nearby, they are not shown because they fall outside the field of view of the device. However, the user could locate these parks by panning the device around the intersection, in which case those parks would appear on the screen.
`
[0021] In the captured image 102, the handheld communication device augments the captured image with bubbles showing the relative geographic position of "Golden Gate Park" 104 and "Buena Vista Park" 106 within the captured image 102. This allows the user to determine a general direction to a point of interest. A user can then select a point of interest, e.g., by selecting the "Buena Vista Park" 106 point of interest information bubble, e.g., by touching the point of interest information bubble with a finger or stylus if the smart phone employs a touch screen. In other implementations, a cursor and mouse can be used to select a desired point of interest.
`
[0022] Points of interest can be any map feature, but most often a point of interest can be a map feature that is identified as a result of a search for a category of such map features. For example, a point of interest can be a park when a user searches for nearby parks. Likewise, a point of interest can be places, buildings, structures, even friends that can be located on a map, when the point of interest is searched for. In some instances a point of interest is not necessarily identified as a result of a search. A point of interest can also be a map feature that is identified by the present system because it can be viewed in the captured image. In short, a point of interest can be any map feature for which the user has an interest.
`
[0023] FIG. 2 illustrates point-of-interest search results for nearby parks based on geographic position and also illustrates how a range and field of view correspond to the results displayed in the viewfinder. A handheld communication device captures a video stream of the view as shown in FIG. 1. The handheld communication device detects the geographic position, camera direction, and tilt of the handheld communication device.
`
[0024] The geographic position of the handheld communication device can be determined using GPS coordinates or triangulation methods using cell phone towers. In yet another example, a blend of GPS coordinates and triangulation information can be used to determine the position of the device.
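The disclosure does not specify how the blend is computed; one common approach, shown below purely as an assumed illustration, weights each fix by the inverse of its reported accuracy:

```python
# Illustrative sketch: blend a GPS fix with a cell-tower triangulation fix.
# The inverse-accuracy weighting is an assumption, not part of the disclosure.
def blend_position(gps, triangulated):
    """Each argument is a (lat, lon, accuracy_m) tuple; returns a blended (lat, lon)."""
    w_gps = 1.0 / gps[2]
    w_tri = 1.0 / triangulated[2]
    total = w_gps + w_tri
    lat = (gps[0] * w_gps + triangulated[0] * w_tri) / total
    lon = (gps[1] * w_gps + triangulated[1] * w_tri) / total
    return lat, lon

print(blend_position((37.7596, -122.4269, 5.0), (37.7601, -122.4280, 150.0)))
```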
`
[0025] The camera direction is a direction relative to a planet's magnetic field (i.e., Earth's magnetic field) in which the camera is pointing. The camera direction can be considered a direction that can be identified using a compass, such as a digital compass. The camera direction can be used to identify the direction in which the camera is pointing as it acquires an image to be augmented using the present technology.
[0026] The tilt direction is a direction that determines the direction in which either the camera device or display device is pointing relative to a horizontal or vertical axis. The tilt direction can most commonly be determined using an accelerometer.
`
[0027] The user can enter a search request for nearby points of interest based on a search term. In this example, upon entry by the user of a search for nearby "Parks," the handheld communication device sends a request for data related to nearby parks to a map database.
[0028] Either the request itself, or the database being queried, can determine a relevant range within which search results must be encompassed. Upon receipt of the request, the database will return search results for points of interest related to the search term that are also within a defined radius of the handheld communication device, as illustrated in FIG. 2. As shown in this example, the server returned points of interest "Golden Gate Park" 208, "Buena Vista Park" 206, "Midtown Terrace Playground" 210, and "Mission Dolores Park" 212. The handheld communication device determines that, of the point-of-interest search results, only "Golden Gate Park" 208 and "Buena Vista Park" 206 are within the field of view of the handheld communication device. The point-of-interest results "Golden Gate Park" 208 and "Buena Vista Park" 206 are displayed with their relative spatial relationship to the handheld communication device. In the example shown in FIG. 2, the camera direction of the handheld communication device is northwest.
`
[0029] A field of view can be determined using a digital compass to inform the device of the camera direction in which the camera is facing or, alternatively, the user could enter a heading. As explained above, in FIGS. 1 and 2, the camera is facing northwest and its theoretical line of sight is represented as 214 in FIG. 2. Any search results that are to be displayed on the viewfinder must be within a certain angle of line 214. For example, a camera on a handheld communication device might only be able to display a range of view encompassing 30 degrees. In such an instance, a given display would represent those items encompassed within 15 degrees in each direction from the center of the field of view. This concept is illustrated in FIG. 2, wherein 214 illustrates the center of the field of view and angles θ1 216 and θ2 218 represent angles from the center of the field of view to the outer limits of the field of view.
`
A distance from the device's geographic location can also be used to define a field of view. As discussed above, a distance or range can be defined by the device in its request for search results or by the database serving the request. Only search results encompassed in this field of view will be displayed on the display.
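The field-of-view test described in paragraph [0029] can be sketched as follows. This Python fragment is illustrative only: the 30-degree field of view follows the example above, while the range value and helper names are assumptions.

```python
# Illustrative sketch: keep a point of interest only if its bearing from the
# device lies within half the field of view of the camera heading (line 214 in
# FIG. 2) and it falls inside the search range.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance between two latitude/longitude points, in meters."""
    r = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_field_of_view(device, poi, heading_deg, fov_deg=30.0, range_m=5000.0):
    """device and poi are (lat, lon) tuples; heading_deg is the camera direction."""
    offset = (bearing_deg(*device, *poi) - heading_deg + 180.0) % 360.0 - 180.0
    return abs(offset) <= fov_deg / 2 and distance_m(*device, *poi) <= range_m
```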
[0030] In some embodiments, a device can also use an accelerometer to inform the device of what objects are displayed in its viewfinder. For example, if the device is in a hilly location, the accelerometer can tell the device that it is pointing downhill. In another example, the device can determine that, due to the topography surrounding its present location (described by map data), an object viewed at a certain angle from the horizon must be a neighboring hill or mountain peak in the distance. In yet another example, an angle from a horizon can indicate that the user is viewing a multiple-story building having places of interest in multiple stories of the building. An accelerometer can inform the device of the angle at which the device is pointed.
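As a small illustration of the angle-from-horizon reasoning above (the function name and example values are assumptions, not from the disclosure):

```python
# Illustrative sketch: the angle above the horizon at which a map feature of a
# given relative height and distance would appear, for comparison against the
# accelerometer-reported tilt.
import math

def elevation_angle_deg(height_diff_m: float, horizontal_dist_m: float) -> float:
    return math.degrees(math.atan2(height_diff_m, horizontal_dist_m))

# A peak 500 m higher than the device and 8 km away appears about 3.6 degrees
# above the horizon.
print(round(elevation_angle_deg(500.0, 8000.0), 1))
```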
[0031] FIG. 3 illustrates a captured image that has been visually augmented with route data to a selected point of interest. In this example, a user has selected the "Buena Vista Park" point of interest and, in response, the smart phone has visually augmented the captured image 302 with a directional map 310 to the selected point of interest, i.e., "Buena Vista Park." The route shows a direction 312 that the user must travel on Dolores St. to begin travelling to reach "Buena Vista Park." The directional map 310 further indicates a turn 314 that the user must take, i.e., a left turn onto Duboce Ave. from Dolores St. In the illustrated example, the map is shown overlaid onto Dolores St.
`
`[0032] The route 310 guides the user with complete navi-
`gation illustrations to reach “Buena Vista Park,” including
`any required turns. In some embodiments, the route can be
`represented as a schematic map, i.e., a simplified map that
`includes only relevant information for the user in an easy-to-
`read format.
`
`[0033] A schematic map can be thought of as similar to a
`subway map one would see on a subway train. While the
`subway track itself might wind and turn, a typical subway
`map represents the subway route as a mostly straight line.
`Further, the subway map often does not have any particular
`scale and frequently shows every destination approximately
`evenly dispersed along the route. Thus, a schematic map as
`discussed below is one that does not adhere to geographic
`“reality,” but rather represents map features in a schematic
`fashion by illustrating directions as a route made of one or
`more roads, trails, or ways that can be represented as substan-
`tially straight lines instead of by their actual shapes (which
`would be represented in a non-schematic map by adhering to
`geographic reality). The schematic map can also be devoid of
`uniform scale. Thus, in some parts of the map, such as an area
`of the map representing a destination, such area can be “dis-
`torted” somewhat to clearly illustrate important details, while
`map areas that represent portions of a route where there are no
`turns or other significant features can be very condensed. In
`short, the map can be a schematic of the real world that can
`provide a simple and clear representation that is sufficient to
`aid the user in guidance or orientation without displaying
`unnecessary map features or detail that could otherwise clut-
`ter a small display space.
[0034] FIG. 4 is a flow chart illustrating an exemplary method of preparing and displaying an augmented reality map. As shown at block 402, the method includes capturing and displaying a video stream on a handheld communication device. Although described here in reference to a video stream, another embodiment of the disclosed technology includes capturing and displaying a single still image or a series of still images.
[0035] As shown at block 404, the method includes detecting geographic position, camera direction, and/or tilt of the handheld communication device. This allows the device to determine features, such as streets, buildings, points of interest, etc., that are within a field of view for the captured video stream.
`
[0036] As shown at block 406, the method includes sending a request for nearby points of interest based on one or more search terms. For example, the user can search for nearby hotels, parks, or restaurants. The request can be sent to a database located on a server that is separate from the handheld communication device and that communicates via a wireless protocol. In another embodiment, the database can be stored locally on the device and the search request remains internal (sometimes termed "onboard" the device) to the handheld communication device.
`
[0037] In block 408, the method includes receiving nearby points of interest in response to the request. The server can filter point-of-interest results in one example. In this example, if the number of returned points of interest exceeds a set threshold, the server can filter the results to only return a fixed number of the best results. Various algorithms can be employed to filter points of interest to a desired number for visual augmentation of a captured video stream. In another embodiment, the handheld communication device can filter point-of-interest results received from the server for optimal display on a handheld communication device.
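One way the filtering step described in paragraph [0037] could look, sketched in Python with an assumed distance-based ranking (the disclosure leaves the ranking algorithm open):

```python
# Illustrative sketch: if more points of interest are returned than a set
# threshold, keep only a fixed number of the "best" results. Distance from the
# device is used here as one possible ranking; other scores could be used.
def filter_pois(pois, max_results=10):
    """pois: list of dicts, each with at least a 'distance_m' key."""
    if len(pois) <= max_results:
        return pois
    return sorted(pois, key=lambda p: p["distance_m"])[:max_results]
```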
[0038] In block 410, the handheld communication device visually augments the captured video stream with data related to each point of interest. As shown in FIG. 2, the handheld communication device can visually augment a captured video stream with a bubble for each point of interest within the field of view for the handheld communication device. The handheld communication device determines which points of interest are within its field of view by analyzing the geographic position, camera direction, and/or tilt of the handheld communication device in concert with the known geographic position of the returned points of interest.
[0039] In block 412, the handheld communication device visually augments the captured video stream with a directional map to a selected point of interest in response to the user input. For example, as described in connection with FIG. 3, the smart phone now visually augments the captured image 302 with a directional map 310 to the selected point of interest in response to the user input. The user input can be a selection of a displayed point of interest to indicate that the user wishes to view navigation data for reaching the selected point of interest.
`
[0040] In some embodiments, the display can also include indicator graphics to point the user in a proper direction. For example, if the user is facing south but a route requires the user to progress north, "no route" would be shown in the display because the route would be behind him or her. In such instances, an indicator can point the user in the proper direction to find the displayed route.
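A minimal sketch of such an indicator, assuming a simple relative-bearing test (the function name and the 30-degree field of view are illustrative, not from the disclosure):

```python
# Illustrative sketch: when the route starts outside the camera's field of
# view, tell the user which way to turn to bring it on screen.
def indicator_direction(route_bearing_deg: float, camera_heading_deg: float, fov_deg: float = 30.0):
    """Return None if the route start is already in view, else 'left' or 'right'."""
    offset = (route_bearing_deg - camera_heading_deg + 180.0) % 360.0 - 180.0
    if abs(offset) <= fov_deg / 2:
        return None
    return "right" if offset > 0 else "left"

# Facing south (180 degrees) with a route that heads due north (0 degrees):
print(indicator_direction(0.0, 180.0))  # -> 'left' (the route is directly behind the user)
```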
[0041] In some embodiments, multiple display views can be presented based on the orientation of the device. For example, when the device is held at an angle with respect to the ground of 45 degrees to 180 degrees, the display view can
`
present the augmented reality embodiments described herein. However, when the device is held at an angle less than 45 degrees, an illustrated or schematic view can be presented. In such embodiments, when the device is held at an angle with respect to the ground of less than 45 degrees, the device is likely pointed at the ground, where few objects of interest are likely to be represented in the displayed video. In such instances, a different map view than the augmented reality map is more likely to be useful. It should be appreciated that the precise range of tilt can be adjusted according to the actual environment or user preferences.
`[0042]
`FIG. 5 illustrates a computer system 500 used to
`execute the described method and generate and display aug-
`mented reality maps. Computer system 500 is an example of
`computer hardware, software, and firmware that can be used
`to implement the disclosures above. System 500 includes a
`processor 520, which is representative of any number of
`physically and/or logically distinct resources capable of
`executing software, firmware, and hardware configured to
`perform identified computations. Processor 520 communi-
`cates with a chipset 522 that can control input to and output
`from processor 520. In this example, chipset 522 outputs
`information to display 540 and can read and write informa-
`tion to non-volatile storage 560, which can include magnetic
`media and solid state media, for example. Chipset 522 also
`can read data from and write data to RAM 570. A bridge 535
`for interfacing with a variety of user interface components
`can be provided for interfacing with chipset 522. Such user
`interface components can include a keyboard 536, a micro-
`phone 537, touch-detection-and-processing circuitry 538, a
`pointing device such as a mouse 539, and so on. In general,
`inputs to system 500 can come from any of a variety of
`machine-generated and/or human-generated sources.
[0043] Chipset 522 also can interface with one or more data network interfaces 525 that can have different physical interfaces 517. Such data network interfaces can include interfaces for wired and wireless local area networks, for broadband wireless networks, as well as personal area networks. Some applications of the methods for generating, displaying, and using the augmented reality user interface disclosed herein can include receiving data over physical interface 517 or be generated by the machine itself by processor 520 analyzing data stored in memory 560 or 570. Further, the machine can receive inputs from the user via devices such as keyboard 536, microphone 537, touch device 538, and pointing device 539 and execute appropriate functions, such as browsing functions, by interpreting these inputs using processor 520.
`
[0044] While FIG. 5 illustrates an example of a common system architecture, it should also be appreciated that other system architectures are known and can be used with the present technology. For example, systems wherein most or all of the components described within FIG. 5 are joined to a bus, or wherein the peripherals write to a common shared memory that is connected to a processor or a bus, can be used. Other hardware architectures are possible, and such are considered to be within the scope of the present technology.
[0045] FIG. 6 illustrates an exemplary system embodiment. A server 602 is in electronic communication with a handheld communication device 618 having functional components such as a processor 620, memory 622, graphics accelerator 624, accelerometer 626, communications interface 628, compass 630, GPS 632, display 634, input device 636, and camera 638. None of the devices are limited to the illustrated components. The components may be hardware, software, or a combination of both.
`
[0046] In some embodiments, the server can be separate from the handheld communication device. The server and handheld communication device can communicate wirelessly, over a wired connection, or through a mixture of wireless and wired connections. The handheld communication device can communicate with the server over a TCP/IP connection. In another embodiment, the handheld communication device can be directly connected to the server. In another embodiment, the handheld communication device can also act as a server and store the points of interest locally.
[0047] In some embodiments, instructions are input to the handheld electronic device 618 through an input device 636 that instructs the processor 620 to execute functions in an augmented reality application. One potential instruction can be to generate an augmented reality map of travel directions to a point of interest. In that case, the processor 620 instructs the camera 638 to begin feeding video images to the display 634. In some embodiments, video images recorded by the camera are first sent to graphics accelerator 624 for processing before the images are displayed. In some embodiments, the processor can be the graphics accelerator. The image can be first drawn in memory 622 or, if available, memory directly associated with the graphics accelerator 624.
[0048] The processor 620 can also receive location and orientation information from devices such as a GPS device 632, communications interface 628, digital compass 630, and accelerometer 626. The GPS device can determine GPS coordinates by receiving signals from Global Positioning System (GPS) satellites and can communicate them to the processor. Likewise, the processor can determine the location of the device through triangulation techniques using signals received by the communications interface 628. The processor can determine the orientation of the device by receiving directional information from the digital compass 630 and tilt information from the accelerometer.
`
`[0049] The processor can also direct the communications
`interface to send a request to the server 602 for map data
`corresponding to the area surrounding the geographical loca-
`tion of the device. In some embodiments, the processor can
`receive signals from the input device, which can be inter-
`preted by the processor to be a search request for map data
`including features of interest.
[0050] The processor can interpret the location and orientation data received from the accelerometer 626, compass 630, or GPS 632 to determine the direction in which the camera 638 is facing. Using this information, the processor can further correlate the location and orientation data with the map data and the video images to identify objects recorded by the camera 638 and displayed on the display 634.
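The correlation described in paragraph [0050] can be illustrated with a simple horizontal projection. The sketch below is an assumption about one way to place a labeled object on the display; the disclosure does not specify a projection:

```python
# Illustrative sketch: map a point of interest's bearing offset from the camera
# heading to a horizontal pixel position on the display.
def screen_x(poi_bearing_deg: float, heading_deg: float,
             fov_deg: float = 30.0, screen_width_px: int = 640):
    """Return the x pixel for the point of interest, or None if it is out of view."""
    offset = (poi_bearing_deg - heading_deg + 180.0) % 360.0 - 180.0
    if abs(offset) > fov_deg / 2:
        return None
    return int((offset / fov_deg + 0.5) * screen_width_px)

print(screen_x(320.0, 315.0))  # 5 degrees right of center -> lands right of mid-screen
```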
`[0051] The processor can receive other inputs via the input
`device 636 such as an input that can be interpreted as a
`selection of a point of interest displayed on the display 634
`and a request for directions. The processor 620 can further
`interpret the map data to generate and display a route over the
`displayed image for guiding the user to a destination (selected
`point of interest).
[0052] As the user follows the specified direction to the selected points of interest, the processor can continue to receive updated location and directional information and video input and update the overlaid route.
`
`
[0053] Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer-readable media. Such instructions comprise, for example, instructions and data which cause or otherwise configure a general-purpose computer, a special-purpose computer, or a special-purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code.