(12) Patent Application Publication          (10) Pub. No.: US 2011/0199479 A1
     Waldman                                 (43) Pub. Date: Aug. 18, 2011

US 20110199479 A1

(54) AUGMENTED REALITY MAPS

(75) Inventor:   Jaron Waldman, Palo Alto, CA (US)

(73) Assignee:   Apple Inc., Cupertino, CA (US)

(21) Appl. No.:  12/705,558

(22) Filed:      Feb. 12, 2010

               Publication Classification

(51) Int. Cl.
     H04N 7/18    (2006.01)
     G01C 21/00   (2006.01)

(52) U.S. Cl. .......... 348/116; 701/201; 348/E07.085
`
`(57)
`ABSTRACT
`. .
`.
`A user points a handheld communication device to capture
`and display a real-time video stream. The handheld commu-
`nication device detects geographic position, camera direc-
`tion, and tilt of the image capture device. The user sends a
`search request to a server for nearby points of interest. The
`handheld communication device receives search results
`based on the search request, geographic position, camera
`direction, and tilt ofthe handheld communication device. The
`handheld communication device visually augments the cap-
`tured video stream with data related to each pointof interest.
`The userthen selects a pointofinterest to visit. The handheld
`communication device visually augments the captured video
`stream with a directional mapto a selected pointof interest in
`response to the user input.
`
`8|
`
`
`
`
`
`Graphics accelerator |
`Processor
`Memory
`
`622
`620
`624 |
`
`
`
`
`
`
`Accelerometer
`Communications
`
`
`Camera
`
`
`626
`interface
`
`
`1|?BINS
`
`
`Compass
`| Display ||
`Input Device
`
`
`
`
`
`
`
`630 |6«6347 §36
`
`
`
`
`
`
`
`602-7
`
`Snap Inc. Ex. 1011 Page 0001
`
`Snap Inc. Ex. 1011 Page 0001
`
`
`
[Drawing Sheet 1 of 6: FIG. 1]
`
`
`
[Drawing Sheet 2 of 6: FIG. 2]
`
`
`
[Drawing Sheet 3 of 6: FIG. 3]
`
`
`
[Drawing Sheet 4 of 6: FIG. 4, a flow chart: 402 Capturing & displaying a video stream on a handheld electronic device; 404 Detecting geographic position, direction, and/or tilt of the handheld electronic device; 406 Sending a request for nearby POIs based on a search term; 408 Receiving nearby POIs in response to the request; 410 Visually augmenting the captured video stream with POI data; 412 Visually augmenting the captured video stream with a directional map to a selected point of interest in response to a user input.]
`
`
`
[Drawing Sheet 5 of 6: FIG. 5, a schematic of an exemplary computer system (processor, display, keyboard, microphone, pointing device, non-volatile storage, RAM).]
`
`
`
[Drawing Sheet 6 of 6: FIG. 6, a schematic of handheld communication device 618 (processor 620, memory 622, graphics accelerator 624, accelerometer 626, communications interface 628, compass 630, GPS 632, display 634, input device 636) in communication with a server.]
`
`
`
`
`AUGMENTED REALITY MAPS
`
`FIELD
`
[0001] The following relates to searching for nearby points of interest, and more particularly to displaying information related to nearby points of interest overlaid onto a video feed of a surrounding area.
`
`BACKGROUND
`
[0002] Augmented reality systems supplement reality, in the form of a captured image or video stream, with additional information. In many cases, such systems take advantage of a portable electronic device's imaging and display capabilities and combine a video feed with data describing objects in the video. In some examples, the data describing the objects in the video can be the result of a search for nearby points of interest.
`
[0003] For example, a user visiting a foreign city can point a handheld communication device and capture a video stream of a particular view. A user can also enter a search term, such as museums. The system can then augment the captured video stream with search term result information related to nearby museums that are within the view of the video stream. This allows a user to supplement their view of reality with additional information available from search engines.
[0004] However, if a user desires to visit one of the museums, the user must switch applications, or at a minimum, switch out of an augmented reality view to learn directions to the museum. Such systems can fail to orient a user with a poor sense of direction and force the user to correlate the directions with objects in reality. Such a transition is not always as easy as it might seem. For example, an instruction that directs a user to go north on Main St. assumes that the user can discern which direction is north. Further, in some instances, street signs might be missing or indecipherable, making it difficult for the user to find the directed route.
`
SUMMARY

[0005] Such challenges can be overcome using the present technology. Therefore, a method and system for displaying augmented reality maps are disclosed. By interpreting the data describing the surrounding areas, the device can determine what objects are presently being viewed on the display. The device can further overlay information regarding the presently viewed objects, thus enhancing reality. In some embodiments, the device can also display search results overlaid onto the displayed video feed. Search results need not be actually viewable by a user in real life. Instead, search results can also include more-distant objects.

[0006] The user can interact with the display using an input device such as a touch screen. Using the input device, the user can select from among objects represented on the screen, including the search results.

[0007] In one form of interaction, a device can receive an input from the user requesting directions from a present location to a selected search result. Directions can be overlaid onto the presently displayed video feed, thus showing a course and upcoming turns. As the user and associated device progress along a route, the overlaid directions can automatically update to show the updated path.

[0008] In some embodiments the display can also include indicator graphics to point the user in a proper direction. For example, if the user is facing south but a route requires the user to progress north, "no route" would be displayed in the display because the user would be looking to the south but the route would be behind him or her. In such instances, an indicator can point the user in the proper direction to find the route.

[0009] In some embodiments, multiple display views can be presented based on the orientation of the device. For example, when the device is held at an angle with respect to the ground of 45 degrees to 180 degrees, the display view can present the augmented reality embodiments described herein. However, when the device is held at an angle less than 45 degrees, an illustrated or schematic view can be represented. In such embodiments, when the device is held at an angle with respect to the ground of less than 45 degrees, the device is likely pointed at the ground, where few objects of interest are likely to be represented in the displayed video. In such instances, a different map view is more likely to be useful. It should be appreciated that the precise range of tilt can be adjusted according to the actual environment or user preferences.

[0010] In practice, a user points a handheld communication device to capture and display a real-time video stream of a view. The handheld communication device detects a geographic position, camera direction, and tilt of the image capture device. The user sends a search request to a server for nearby points of interest. The handheld communication device receives search results based on the search request, geographic position, camera direction, and tilt of the handheld communication device. The handheld communication device visually augments the captured video stream with data related to each point of interest. The user then selects a point of interest to visit. The handheld communication device visually augments the captured video stream with a directional map to a selected point of interest in response to the user input.

[0011] A method of augmenting a video stream of a device's present surroundings with navigational information is disclosed. The user can instruct the device to initiate a live video feed using an onboard camera and display the captured video images on a display. By polling a Global Positioning System (GPS) device, a digital compass, and, optionally, an accelerometer, location, camera direction, and orientation information can be determined. By using the location, camera direction, and orientation information, the device can request data describing the surrounding areas and the objects therein. In some embodiments, this data includes map vector data. The data can be requested from an onboard memory or a server. The data describing surrounding areas can further be requested in conjunction with a search request. The search request can also include a request for information about nearby places of interest.
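By way of a non-limiting illustration of the method just summarized, the following Swift sketch bundles the sensed position, camera direction, and tilt into a point-of-interest request. Every type, field, and function name here (DevicePose, POISearchRequest, makeSearchRequest, the 5,000-meter range) is hypothetical and is not part of the disclosure.

    import Foundation

    // Hypothetical container for the sensed pose described in paragraphs [0010]-[0011].
    struct DevicePose {
        var latitude: Double          // from the GPS device (or triangulation)
        var longitude: Double
        var headingDegrees: Double    // camera direction from the digital compass
        var tiltDegrees: Double       // orientation from the accelerometer
    }

    // Hypothetical search request combining a user-entered term with the pose,
    // so the server (or onboard map data) can return nearby points of interest.
    struct POISearchRequest {
        var term: String              // e.g. "parks"
        var pose: DevicePose
        var rangeMeters: Double       // the range limit may instead be chosen by the server
    }

    func makeSearchRequest(term: String, pose: DevicePose) -> POISearchRequest {
        return POISearchRequest(term: term, pose: pose, rangeMeters: 5_000)
    }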
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
[0012] FIG. 1 illustrates an exemplary visually augmented captured image with data related to a search for points of interest;
[0013] FIG. 2 illustrates the results of a field-of-view and point-of-interest search;
[0014] FIG. 3 illustrates an exemplary captured image visually augmented with a route to a selected point of interest;
[0015] FIG. 4 is a flow chart illustrating an exemplary method of preparing and displaying an augmented reality map;
[0016] FIG. 5 is a schematic illustration of an exemplary system embodiment; and
`
[0017] FIG. 6 is a schematic illustration of an exemplary system embodiment.
`
`DESCRIPTION
`
[0018] The technology described herein visually augments a captured image or video stream with data for points of interest related to search terms entered by the user. The technology also visually augments the captured image or video stream with a directional map to a selected point of interest.

[0019] FIG. 1 is a screenshot illustrating an augmented reality embodiment as described herein. As illustrated, a handheld communication device has captured an image 102 of the northwest corner of the intersection of Dolores St. and 17th St. using its image-capturing device and displayed the image on its display. In this way, the display can function as a viewfinder. As illustrated, the captured image 102 has been augmented with information corresponding to points of interest 104, 106 and street labels 110, 112.

[0020] FIG. 1 illustrates a captured and presented image 102 using an image capture device, i.e., the camera of a smart phone, which is but one type of handheld communication device to which the present disclosure can be applied. In this illustrated embodiment, the user has entered a search term "parks" in search bar 108 to conduct a search for nearby parks, i.e., a specific type of point of interest. Using map data that describes the area surrounding the present location of the device and the points of interest located in the surrounding area, the device augments the displayed image with additional information. In this instance, the smart phone or handheld communication device displays points of interest described by the data that are displayed in the viewfinder (such as Dolores St. 110 and 17th St. 112) or that are within a field of view and range from the geographic position of the device but are obstructed by other in-screen objects, e.g., Golden Gate Park 104 and Buena Vista Park 106. While other parks might also be nearby, they are not shown because they fall outside the field of view of the device. However, the user could locate these parks by panning the device around the intersection, in which case those parks would appear on the screen.

[0021] In the captured image 102, the handheld communication device augments the captured image with bubbles showing the relative geographic position of "Golden Gate Park" 104 and "Buena Vista Park" 106 within the captured image 102. This allows the user to determine a general direction to a point of interest. A user can then select a point of interest, e.g., by selecting the "Buena Vista Park" 106 point of interest information bubble, e.g., by touching the point of interest information bubble with a finger or stylus if the smart phone employs a touch screen. In other implementations, a cursor and mouse can be used to select a desired point of interest.

[0022] Points of interest can be any map feature, but most often a point of interest is a map feature that is identified as a result of a search for a category of such map features. For example, a point of interest can be a park when a user searches for nearby parks. Likewise, a point of interest can be places, buildings, structures, even friends that can be located on a map, when the point of interest is searched for. In some instances a point of interest is not necessarily identified as a result of a search. A point of interest can also be a map feature that is identified by the present system because it can be viewed in the captured image. In short, a point of interest can be any map feature for which the user has an interest.

[0023] FIG. 2 illustrates point-of-interest search results for nearby parks based on geographic position and also illustrates how a range and field of view correspond to the results displayed in the viewfinder. A handheld communication device captures a video stream of the view as shown in FIG. 1. The handheld communication device detects the geographic position, camera direction, and tilt of the handheld communication device.

[0024] The geographic position of the handheld communication device can be determined using GPS coordinates or using triangulation methods using cell phone towers. In yet another example, a blend of GPS coordinates and triangulation information can be used to determine the position of the device.

[0025] The camera direction is a direction relative to a planet's magnetic field (i.e., Earth's magnetic field) in which the camera is pointing. The camera direction can be considered a direction that can be identified using a compass, such as a digital compass. The camera direction can be used to identify the direction in which the camera is pointing as it acquires an image to be augmented using the present technology.

[0026] The tilt direction is a direction that determines the direction in which either the camera device or display device is pointing relative to a horizontal or vertical axis. The tilt direction can most commonly be determined using an accelerometer.

[0027] The user can enter a search request for nearby points of interest based on a search term. In this example, upon entry by the user of a search for nearby "Parks," the handheld communication device sends a request for data related to nearby parks to a map database.

[0028] Either the request itself, or the database being queried, can determine a relevant range within which search results must be encompassed. Upon receipt of the request, the database will return search results for points of interest related to the search term that are also within a defined radius of the handheld communication device, as illustrated in FIG. 2. As shown in this example, the server returned points of interest "Golden Gate Park" 208, "Buena Vista Park" 206, "Midtown Terrace Playground" 210, and "Mission Dolores Park" 212. The handheld communication device determines that of the point-of-interest search results, only "Golden Gate Park" 208 and "Buena Vista Park" 206 are within the field of view of the handheld communication device. The point-of-interest results "Golden Gate Park" 208 and "Buena Vista Park" 206 are displayed with their relative spatial relationship to the handheld communication device. In the example shown in FIG. 2, the camera direction of the handheld communication device is northwest.

[0029] A field of view can be determined using a digital compass to inform the device of the camera direction in which the camera is facing or, alternatively, the user could enter in a heading. As explained above, in FIGS. 1 and 2, the camera is facing northwest and its theoretical line of sight is represented as 214 in FIG. 2. Any search results that are to be displayed on the viewfinder must be within a certain angle of line 214. For example, a camera on a handheld communication device might only be able to display a range of view encompassing 30 degrees. In such an instance, a given display would represent those items encompassed within 15 degrees in each direction from the center of the field of view. This concept is illustrated in FIG. 2, wherein 214 illustrates the center of the field of view and angles θ1 216 and θ2 218 represent the angles from the center of the field of view to the outer limits of the field of view. A distance from the device's geographic location can also be used to define a field of view. As discussed above, a distance or range can be defined by the device in its request for search results or by the database serving the request. Only search results encompassed in this field of view will be displayed on the display.
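As a non-limiting illustration of the angle-and-range test described in paragraphs [0028]-[0029], the following Swift sketch checks whether a returned point of interest lies within the field of view. It assumes a spherical-earth bearing and haversine distance, a symmetric half-angle about the camera direction (line 214), and hypothetical type and function names.

    import Foundation

    struct PointOfInterest {
        let name: String
        let latitude: Double
        let longitude: Double
    }

    // Initial bearing from the device to a point, in degrees clockwise from north
    // (spherical-earth approximation).
    func bearingDegrees(fromLat lat1: Double, lon lon1: Double,
                        toLat lat2: Double, lon lon2: Double) -> Double {
        let p1 = lat1 * .pi / 180, p2 = lat2 * .pi / 180
        let dl = (lon2 - lon1) * .pi / 180
        let y = sin(dl) * cos(p2)
        let x = cos(p1) * sin(p2) - sin(p1) * cos(p2) * cos(dl)
        let theta = atan2(y, x) * 180 / .pi
        return (theta + 360).truncatingRemainder(dividingBy: 360)
    }

    // Great-circle distance in meters (haversine), used for the range test.
    func distanceMeters(fromLat lat1: Double, lon lon1: Double,
                        toLat lat2: Double, lon lon2: Double) -> Double {
        let r = 6_371_000.0
        let p1 = lat1 * .pi / 180, p2 = lat2 * .pi / 180
        let dp = (lat2 - lat1) * .pi / 180, dl = (lon2 - lon1) * .pi / 180
        let a = sin(dp / 2) * sin(dp / 2) + cos(p1) * cos(p2) * sin(dl / 2) * sin(dl / 2)
        return r * 2 * atan2(sqrt(a), sqrt(1 - a))
    }

    // A result is displayable when its bearing lies within the half-angle on either
    // side of the camera direction (line 214) and it is inside the chosen range.
    func isInFieldOfView(poi: PointOfInterest,
                         deviceLat: Double, deviceLon: Double,
                         cameraHeadingDegrees: Double,
                         halfAngleDegrees: Double = 15,     // i.e. a 30-degree field of view
                         maxRangeMeters: Double = 5_000) -> Bool {
        let b = bearingDegrees(fromLat: deviceLat, lon: deviceLon,
                               toLat: poi.latitude, lon: poi.longitude)
        var offset = abs(b - cameraHeadingDegrees).truncatingRemainder(dividingBy: 360)
        if offset > 180 { offset = 360 - offset }
        let d = distanceMeters(fromLat: deviceLat, lon: deviceLon,
                               toLat: poi.latitude, lon: poi.longitude)
        return offset <= halfAngleDegrees && d <= maxRangeMeters
    }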
`[0030]
`In some embodiments, a device can also use an
`accelerometer to inform the device of what objects are dis-
`playedin its viewfinder. For example, ifthe device is ina hilly
`location, the accelerometer can tell the device that it is point-
`ing downhill. In another example, the device can determine
`that, due to the topography surrounding its present location
`(described by map data) an object viewedat a certain angle
`from the horizon must be a neighboring hill or mountain peak
`in the distance. In yet another example, an angle from a
`horizon can indicate that the user is viewing a multiple story
`building having places of interest in multiple stories of the
`building. An accelerometer can inform the device ofthe angle
`at which the device is pointed.
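A minimal Swift sketch of the angle-from-horizon measurement described in paragraph [0030], assuming a three-axis accelerometer sample dominated by gravity and a hypothetical axis convention in which the viewing axis corresponds to the sample's z component:

    import Foundation

    // Hypothetical raw accelerometer sample (units cancel out below).
    struct AccelerometerSample { let x: Double; let y: Double; let z: Double }

    // Angle of the viewing axis above (+) or below (-) the horizon, in degrees.
    // Valid only when the device is roughly still, so gravity dominates the reading.
    func elevationAngleDegrees(_ a: AccelerometerSample) -> Double {
        let g = sqrt(a.x * a.x + a.y * a.y + a.z * a.z)
        guard g > 0 else { return 0 }
        let clamped = max(-1.0, min(1.0, a.z / g))
        return asin(clamped) * 180 / .pi
    }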
`[0031]
`FIG. 3 illustrates a captured image that has been
`visually augmented with route data to a selected point of
`interest. In this example, a user has selected the “BuenaVista
`Park”point of interest and, in response, the smart phone has
`visually augmented the captured image 302 with a directional
`map 310 to the selected point of interest, i.e., “Buena Vista
`Park”. The route shows a direction 312 that the user must
`
`travel on DoloresSt. to begin travelling to reach “Buena Vista
`Park.” The directional map 310 further indicates a turn 314
`that the user musttake,i.e., a turn left onto Duboce Ave. from
`Dolores St. In the illustrated example, the map is shown
`overlaid onto Dolores St.
`
[0032] The route 310 guides the user with complete navigation illustrations to reach "Buena Vista Park," including any required turns. In some embodiments, the route can be represented as a schematic map, i.e., a simplified map that includes only relevant information for the user in an easy-to-read format.
`
[0033] A schematic map can be thought of as similar to a subway map one would see on a subway train. While the subway track itself might wind and turn, a typical subway map represents the subway route as a mostly straight line. Further, the subway map often does not have any particular scale and frequently shows every destination approximately evenly dispersed along the route. Thus, a schematic map as discussed below is one that does not adhere to geographic "reality," but rather represents map features in a schematic fashion by illustrating directions as a route made of one or more roads, trails, or ways that can be represented as substantially straight lines instead of by their actual shapes (which would be represented in a non-schematic map by adhering to geographic reality). The schematic map can also be devoid of uniform scale. Thus, in some parts of the map, such as an area of the map representing a destination, such area can be "distorted" somewhat to clearly illustrate important details, while map areas that represent portions of a route where there are no turns or other significant features can be very condensed. In short, the map can be a schematic of the real world that can provide a simple and clear representation that is sufficient to aid the user in guidance or orientation without displaying unnecessary map features or detail that could otherwise clutter a small display space.
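Purely as an illustration of the schematic style described in paragraphs [0032]-[0033] (substantially straight segments, no uniform scale), the following Swift sketch flattens a turn-by-turn route into evenly spaced straight segments; the Maneuver and SchematicSegment types are hypothetical.

    import Foundation

    struct Maneuver { let instruction: String; let street: String }   // e.g. "turn left", "Duboce Ave."

    struct SchematicSegment { let label: String; let start: Double; let end: Double }

    // Lay the maneuvers out along a single straight line with evenly spaced
    // vertices, much as a subway map spaces its stops, ignoring true shape and scale.
    func schematicRoute(_ maneuvers: [Maneuver], totalLength: Double = 1.0) -> [SchematicSegment] {
        guard !maneuvers.isEmpty else { return [] }
        let step = totalLength / Double(maneuvers.count)
        return maneuvers.enumerated().map { index, m in
            SchematicSegment(label: "\(m.instruction) onto \(m.street)",
                             start: Double(index) * step,
                             end: Double(index + 1) * step)
        }
    }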
`[0034]
`FIG. 4 is a flow chart illustrating an exemplary
`method of preparing and displaying an augmented reality
`map. As shownat block 402, the method includes capturing
`
`and displaying a video stream on a handheld communication
`device. Although described here in reference to a video
`stream, another embodiment of the disclosed technology
`includes capturing and displaying a single still image or a
`series ofstill images.
[0035] As shown at block 404, the method includes detecting geographic position, camera direction, and/or tilt of the handheld communication device. This allows the device to determine features, such as streets, buildings, points of interest, etc., that are within a field of view for the captured video stream.
`
[0036] As shown at block 406, the method includes sending a request for nearby points of interest based on one or more search terms. For example, the user can search for nearby hotels, parks, or restaurants. The request can be sent to a database located on a server that is separate from the handheld communication device and communicated via a wireless protocol. In another embodiment, the database can be stored locally on the device and the search request remains internal (sometimes termed "onboard" the device) to the handheld communication device.
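As an illustration of the two alternatives described in paragraph [0036] (a remote map database reached over a wireless protocol, or an onboard database), the query can be dispatched through a common interface. Every name in the Swift sketch below is hypothetical.

    import Foundation

    struct POIQuery { let term: String; let latitude: Double; let longitude: Double }
    struct POIRecord { let name: String; let latitude: Double; let longitude: Double }

    protocol POIStore {
        func lookup(_ query: POIQuery) -> [POIRecord]
    }

    struct RemoteMapDatabase: POIStore {
        // A real implementation would send the request to the map server,
        // for example over a TCP/IP connection, and decode the response.
        func lookup(_ query: POIQuery) -> [POIRecord] { return [] }
    }

    struct OnboardMapDatabase: POIStore {
        // A real implementation would search map data cached in device memory.
        func lookup(_ query: POIQuery) -> [POIRecord] { return [] }
    }

    func searchNearbyPOIs(_ query: POIQuery, preferOnboard: Bool) -> [POIRecord] {
        if preferOnboard {
            return OnboardMapDatabase().lookup(query)
        }
        return RemoteMapDatabase().lookup(query)
    }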
`
[0037] In block 408, the method includes receiving nearby points of interest in response to the request. The server can filter point-of-interest results in one example. In this example, if the number of returned points of interest exceeds a set threshold, the server can filter the results to only return a fixed number of the best results. Various algorithms can be employed to filter points of interest to a desired number for visual augmentation of a captured video stream. In another embodiment, the handheld communication device can filter point-of-interest results received from the server for optimal display on a handheld communication device.
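One simple illustration of such filtering is to keep a fixed number of the nearest results; distance is just one possible ranking, and the type and function names below are hypothetical.

    import Foundation

    struct POIResult { let name: String; let distanceMeters: Double }

    // Keep at most maxCount results, preferring the nearest ones. A server or the
    // handheld device itself could apply a ranking like this before display.
    func filterResults(_ results: [POIResult], maxCount: Int = 20) -> [POIResult] {
        guard results.count > maxCount else { return results }
        return Array(results.sorted { $0.distanceMeters < $1.distanceMeters }
                            .prefix(maxCount))
    }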
`[0038]
`In block 410, the handheld communication device
`visually augments the captured video stream with data related
`to each point of interest. As shown in FIG.2, the handheld
`communication device can visually augment a captured video
`stream with a bubble for each point of interest within thefield
`of view for the handheld communication device. The hand-
`
`held communication device determines which pointsof inter-
`est are within its field of view by analyzing the geographic
`position, camera direction, and/ortilt of the handheld com-
`munication device in concert with the known geographic
`position of the returned points ofinterest.
`[0039]
`In block 412, the handheld communication device
`visually augments the captured video stream with a direc-
`tional map to a selected point of interest in response to the
`user input. For example, as described in connection with FIG.
`3, the smart phone now visually augments the captured image
`302 witha directional map 310 to the selected point ofinterest
`in responseto the user input. The user input can be a selection
`of a displayed pointof interest to indicate that the user wishes
`to view navigation data for reaching the selected point of
`interest.
`
[0040] In some embodiments, the display can also include indicator graphics to point the user in a proper direction. For example, if the user is facing south but a route requires the user to progress north, "no route" would be shown in the display because the route would be behind him or her. In such instances, an indicator can point the user in the proper direction to find the displayed route.
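As an illustration only, the indicator decision can be reduced to a signed angular difference between the camera direction and the bearing of the route's first leg; the Swift sketch below uses hypothetical names and an assumed 15-degree half-angle.

    import Foundation

    enum RouteIndicator { case onScreen, pointLeft, pointRight }

    // Positive delta means the route lies clockwise of the camera direction (to the
    // user's right); negative means counterclockwise (to the left).
    func routeIndicator(cameraHeadingDegrees: Double,   // e.g. 180 when facing south
                        routeBearingDegrees: Double,    // e.g. 0 when the route runs north
                        halfAngleDegrees: Double = 15) -> RouteIndicator {
        var delta = (routeBearingDegrees - cameraHeadingDegrees)
            .truncatingRemainder(dividingBy: 360)
        if delta > 180 { delta -= 360 }
        if delta <= -180 { delta += 360 }
        if abs(delta) <= halfAngleDegrees { return .onScreen }
        return delta > 0 ? .pointRight : .pointLeft
    }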
`[0041]
`In some embodiments, multiple display views can
`be presented based on the orientation of the device. For
`example, when the device is held at an angle with respect to
`the ground of 45 degrees to 180 degrees, the display view can
present the augmented reality embodiments described herein. However, when the device is held at an angle less than 45 degrees, an illustrated or schematic view can be presented. In such embodiments, when the device is held at an angle with respect to the ground of less than 45 degrees, the device is likely pointed at the ground, where few objects of interest are likely to be represented in the displayed video. In such instances, a different map view than the augmented reality map is more likely to be useful. It should be appreciated that the precise range of tilt can be adjusted according to the actual environment or user preferences.

[0042] FIG. 5 illustrates a computer system 500 used to execute the described method and generate and display augmented reality maps. Computer system 500 is an example of computer hardware, software, and firmware that can be used to implement the disclosures above. System 500 includes a processor 520, which is representative of any number of physically and/or logically distinct resources capable of executing software, firmware, and hardware configured to perform identified computations. Processor 520 communicates with a chipset 522 that can control input to and output from processor 520. In this example, chipset 522 outputs information to display 540 and can read and write information to non-volatile storage 560, which can include magnetic media and solid state media, for example. Chipset 522 also can read data from and write data to RAM 570. A bridge 535 for interfacing with a variety of user interface components can be provided for interfacing with chipset 522. Such user interface components can include a keyboard 536, a microphone 537, touch-detection-and-processing circuitry 538, a pointing device such as a mouse 539, and so on. In general, inputs to system 500 can come from any of a variety of machine-generated and/or human-generated sources.

[0043] Chipset 522 also can interface with one or more data network interfaces 525 that can have different physical interfaces 517. Such data network interfaces can include interfaces for wired and wireless local area networks, for broadband wireless networks, as well as personal area networks. Some applications of the methods for generating, displaying, and using the augmented reality user interface disclosed herein can include receiving data over physical interface 517 or be generated by the machine itself by processor 520 analyzing data stored in memory 560 or 570. Further, the machine can receive inputs from the user via devices keyboard 536, microphone 537, touch device 538, and pointing device 539 and execute appropriate functions, such as browsing functions, by interpreting these inputs using processor 520.

[0044] While FIG. 5 illustrates an example of a common system architecture, it should also be appreciated that other system architectures are known and can be used with the present technology. For example, systems wherein most or all of the components described within FIG. 5 can be joined to a bus, or the peripherals could write to a common shared memory that is connected to a processor or a bus, can be used. Other hardware architectures are possible and such are considered to be within the scope of the present technology.

[0045] FIG. 6 illustrates an exemplary system embodiment. A server 602 is in electronic communication with a handheld communication device 618 having functional components such as a processor 620, memory 622, graphics accelerator 624, accelerometer 626, communications interface 628, compass 630, GPS 632, display 634, input device 636, and camera 638. None of the devices are limited to the illustrated components. The components may be hardware, software, or a combination of both.

[0046] In some embodiments, the server can be separate from the handheld communication device. The server and handheld communication device can communicate wirelessly, over a wired connection, or through a mixture of wireless and wired connections. The handheld communication device can communicate with the server over a TCP/IP connection. In another embodiment, the handheld communication device can be directly connected to the server. In another embodiment, the handheld communication device can also act as a server and store the points of interest locally.

[0047] In some embodiments, instructions are input to the handheld electronic device 618 through an input device 636 that instructs the processor 620 to execute functions in an augmented reality application. One potential instruction can be to generate an augmented reality map of travel directions to a point of interest. In that case, the processor 620 instructs the camera 638 to begin feeding video images to the display 634. In some embodiments, video images recorded by the camera are first sent to graphics accelerator 624 for processing before the images are displayed. In some embodiments, the processor can be the graphics accelerator. The image can be first drawn in memory 622 or, if available, memory directly associated with the graphics accelerator 624.

[0048] The processor 620 can also receive location and orientation information from devices such as a GPS device 632, communications interface 628, digital compass 630, and accelerometer 626. The GPS device can determine GPS coordinates by receiving signals from Global Positioning System (GPS) satellites and can communicate them to the processor. Likewise, the processor can determine the location of the device through triangulation techniques using signals received by the communications interface 628. The processor can determine the orientation of the device by receiving directional information from the digital compass 630 and tilt information from the accelerometer.
`
[0049] The processor can also direct the communications interface to send a request to the server 602 for map data corresponding to the area surrounding the geographical location of the device. In some embodiments, the processor can receive signals from the input device, which can be interpreted by the processor to be a search request for map data including features of interest.

[0050] The processor can interpret the location and orientation data received from the accelerometer 626, compass 630, or GPS 632 to determine the direction in which the camera 638 is facing. Using this information, the processor can further correlate the location and orientation data with the map data and the video images to identify objects recorded by the camera 638 and displayed on the display 634.

[0051] The processor can receive other inputs via the input device 636, such as an input that can be interpreted as a selection of a point of interest displayed on the display 634 and a request for directions. The processor 620 can further interpret the map data to generate and display a route over the displayed image for guiding the user to a destination (selected point of interest).

[0052] As the user follows the specified direction to the selected points of interest, the processor can continue to receive updated location and directional information and video input and update the overlaid route.
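By way of illustration, the continuous update of paragraph [0052] can be sketched as a loop that re-derives the route overlay each time a new pose sample arrives. The pose source, routing function, and renderer below are placeholders rather than actual device interfaces.

    import Foundation

    struct DevicePose { let latitude, longitude, heading, tilt: Double }
    struct RouteOverlay { let instructions: [String] }

    // Each new sensor sample yields a fresh pose; the route is re-derived and
    // redrawn over the current video frame so the overlaid turns stay aligned
    // with what the camera is showing.
    func runNavigationLoop(nextPose: () -> DevicePose?,
                           route: (DevicePose) -> RouteOverlay,
                           render: (RouteOverlay) -> Void) {
        while let pose = nextPose() {
            render(route(pose))
        }
    }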
`
`
`
`
[0053] Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer-readable media. Such instructions comprise, for example, instructions and data which cause or otherwise configure a general-purpose computer, a special-purpose computer, or a special-purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media