(12) Patent Application Publication (10) Pub. No.: US 2010/0171763 A1
Bhatt et al. (43) Pub. Date: Jul. 8, 2010
`(54) ORGANIZING DIGITAL IMAGES BASED ON
`LOCATIONS OF CAPTURE
`
(75) Inventors: Nikhil Bhatt, Cupertino, CA (US); Eric Hanson, Emeryville, CA (US); Joshua Fagans, Redwood City, CA (US); Greg Gilley, Los Altos, CA (US); Timothy B. Martin, Sunnyvale, CA (US); Gregory Charles Lindley, Sunnyvale, CA (US)
`
`Correspondence Address:
FISH & RICHARDSON P.C.
`PO BOX 1022
`MINNEAPOLIS, MN 55440-1022 (US)
`
(73) Assignee: APPLE INC., Cupertino, CA (US)
`
(21) Appl. No.: 12/545,765
`
(22) Filed: Aug. 21, 2009
`
`Related U.S. Application Data
(60) Provisional application No. 61/142,558, filed on Jan.
5, 2009.
`
`Publication Classification
`
(51) Int. Cl.
G09G 5/00 (2006.01)
G06F 3/048 (2006.01)
(52) U.S. Cl. .......... 345/660; 715/800; 715/810; 715/764; 715/781
`
(57) ABSTRACT

Methods, apparatuses, and systems for organizing digital images based on locations of capture. On a small-scale map of a geographic region that is displayed on a device, an object representing multiple digital media items associated with a location in the geographic region is displayed. In response to receiving an input to display, in a larger scale, a portion of the map that includes the object, multiple objects are displayed in the larger-scale map, each of which represents a location of at least one of the multiple digital media items represented by the object in the small-scale map.
`
`
`
[Representative drawing: user interface showing panels "All Countries (1)," "All States (3)," "All Cities (7)," "All Places (10)," a "Place 1 to Place 10" list, Album Name panel 135, Library Information panel 110, Image Information panel 120, and reference numerals 115, 125, and 130]
`
`MemoryWeb Ex. 2005
`Apple v. MemoryWeb – IPR2022-00031
`1 of 19
`
`
`
[Sheet 1 of 6: FIG. 1]
`
`
`
[Sheet 2 of 6: FIG. 2]
`
`
`
[Sheet 3 of 6: FIG. 3]
`
`
`
[Sheet 4 of 6: FIGS. 4A-4C]
`
`
`
[Sheet 5 of 6: FIG. 5 — Image Properties panel 505 listing Size, Latitude, Longitude, Country, State, County, City, Shutter, Aperture, Exposure, Focal length, Distance, Sensing]
`
`
`
[Sheet 6 of 6: FIG. 6]
`
`
`
`
ORGANIZING DIGITAL IMAGES BASED ON
`LOCATIONS OF CAPTURE
`
`CROSS-REFERENCE TO RELATED
`APPLICATIONS
0001. This application claims priority to U.S. Provisional Application Ser. No. 61/142,558, filed on Jan. 5, 2009, entitled "Organizing Digital Images Based on Locations of Capture," the entire contents of which are incorporated herein by reference.
`
`TECHNICAL FIELD
`0002 The present specification relates to presenting digi
`tal media, for example, digital photographs, digital video, and
`the like.
`
`BACKGROUND
0003 Digital media includes digital photographs, electronic images, digital audio and/or video, and the like. Digital images can be captured using a wide variety of cameras, for example, high-end equipment such as digital single lens reflex (SLR) cameras, low-resolution cameras including point-and-shoot cameras, and cellular telephone instruments with suitable image capture capabilities. Such images can be transferred, either individually as files or collectively as folders containing multiple files, from the cameras to other devices including computers, printers, and storage devices. Software applications enable users to arrange, display, and edit digital photographs obtained from a camera or any other electronic image in a digital format. Such software applications provide a user in possession of a large repository of photographs with the capabilities to organize, view, and edit the photographs. Editing includes tagging photographs with one or more identifiers and manipulating images tagged with the same identifiers simultaneously. Additionally, software applications provide users with user interfaces to perform such tagging and manipulating operations, and to view the outcome of such operations. For example, a user can tag multiple photographs as being black-and-white images. A user interface, provided by the software application, allows the user to simultaneously transfer all tagged black-and-white photographs from one storage device to another in a one-step operation.
`
`SUMMARY
0004. This specification describes technologies relating to organizing digital images based on associated location information, such as a location of capture.
`0005 Systems implementing techniques described here
`enable users to organize digital media, for example, digital
`images, that have been captured and stored, for example, on a
`computer-readable storage device. Geographic location
`information, such as information describing the location
`where the digital image was captured, can be associated with
`one or more digital images. The location information can be
`associated with the digital image either automatically, for
`example, through features built into the camera with which
the photograph is taken, or subsequent to image capture, for example, by a user of a software application. Such information serves as an identifier attached to or otherwise associated
`with a digital image. Further, the geographic location infor
`mation can be used to group images that share similar char
`acteristics. For example, based on the geographic informa
`tion, the systems described here can determine that all
`
`photographs in a group were captured in and around San
`Francisco, Calif. Subsequently, the systems can display, for
`example, one or more pins representing locations of one or
`more images on a map showing at least a portion of San
`Francisco. Further, when the systems determine that a new
`digital image was also taken in or around San Francisco, the
`systems can include the new photograph in the group. Details
`of these and additional techniques are described below.
`0006. The systems and techniques described here may
`provide one or more of the following advantages. Displaying
`objects on maps to represent locations allows users to create
`a travel-book of locations. Associating location-based iden
`tifiers with images enables grouping images associated with
`the same identifier. In addition to associating an identifier
with each photograph, users can group multiple images that fall within the same geographic region, even if the precise locations of the individual photographs differ. Enabling the coalescing and dividing of objects based on zoom levels of the maps avoids cluttering of objects on maps while maintaining objects for each location.
`0007. The details of one or more implementations of the
`specification are set forth in the accompanying drawings and
`the description below. Other features, aspects, and advan
`tages of the specification will become apparent from the
`description, the drawings, and the claims.
`
`BRIEF DESCRIPTION OF THE DRAWINGS
0008 FIG. 1 is a schematic of an exemplary user interface for displaying multiple images.
0009 FIG. 2 is a schematic of an exemplary user interface for receiving image location information.
`0010 FIG. 3 is a schematic of an exemplary user interface
`for displaying image location information.
0011 FIGS. 4A-4C are schematics of exemplary user interfaces for displaying image locations at different zoom levels.
`0012 FIG. 5 is a schematic of an exemplary user interface
`for displaying image file metadata.
`0013 FIG. 6 is a schematic of an exemplary user interface
`for entering location information to be associated with an
`image.
`0014. Like reference numbers and designations in the
`various drawings indicate like elements.
`
`DETAILED DESCRIPTION
0015 Digital media items, for example, digital images, digital photographs, and the like, can be captured at different
`locations. For example, a user who resides in San Jose, Calif.,
`can capture multiple photographs at multiple locations, such
`as San Jose, Cupertino, Big Basin Redwoods State Park, and
`the like, while traveling across Northern California. Simi
`larly, the user can also capture photographs in different cities
`across a state, in multiple states, and in multiple countries.
`The multiple photographs as well as locations in which the
`photographs are captured can be displayed in user interfaces
`that will be described later. Further, the systems and tech
`niques described below enable a user to edit the information
`describing a location in which a photograph is captured and
`also to simultaneously manipulate multiple photographs that
are related to each other based upon associated locations, such as when the locations are near each other.
0016 FIG. 1 is a schematic of an exemplary user interface
`100 for displaying multiple images. The user interface 100
`
`
`can be displayed in a display device operatively coupled to a
`computer. Within the user interface 100, the images (Image 1,
`Image 2, ..., Image n) can be displayed in either portrait or
`landscape orientation in corresponding thumbnails 105 that
are arranged in an array. The images, which are stored, for
`example, on a computer-readable storage device, can be
`retrieved from the storage device and displayed as thumbnails
`105 in the user interface 100. In addition, the storage device
`can include a library of multiple digital media items, for
`example, video, audio, other digital images, and the like.
`Information about the library can be displayed in the library
`information panel 110 in the user interface 100. For example,
`the storage device includes multiple folders and each folder
`includes multiple digital media items. The library informa
`tion panel 110 displays the titles of one or more folders and
`links through which a user can access the contents of the
`displayed one or more folders. Additionally, links to recently
`accessed albums and images also can be displayed in the
`library information panel 110.
0017. A user can access an image, for example, Image 2,
`by actuating the associated thumbnail 105. To do so, the user
`can position a cursor 115 that is controllable using, for
`example, a mouse, over the thumbnail 105 representing the
`image and opening that thumbnail 105. The mouse that con
`trols the cursor 115 is operatively coupled to the computer to
`which the display device displaying the user interface 100 is
`coupled. Information related to the accessed image can be
`displayed in the image information panel 120. Such informa
`tion can include a file name under which the digital image is
`stored in the storage device, a time when the image was
captured, a file type, for example, JPEG, GIF, or BMP, a file size,
`and the like. In some implementations, information about an
`image can be displayed in the image information panel 120
`when the user selects the thumbnail 105 representing the
`image. Alternatively, or in addition, image information can be
`displayed in the image information panel 120 when a user
`positions the cursor 115 over a thumbnail 105 in which the
`corresponding image is displayed.
0018. In addition, the user interface 100 can include a
`control panel 125 in which multiple control buttons 130 can
`be displayed. Each control button 130 can be configured such
`that selecting the control button 130 enables a user to perform
`operations on the thumbnails 105 and/or the corresponding
`images. For example, selecting a control button 130 can
`enable a user to rotate a thumbnail 105 to change the orien
`tation of an image from portrait to landscape, and vice versa.
`Any number of functions can be mapped to control buttons
`130 in the control panel 125. Further, the user interface 100
`can include a panel 135 for displaying the name of the album
`in which Image 1 to Image n are stored or otherwise orga
`nized. For example, the album name displayed in the panel
`135 can be the name of the folder in which the images are
`stored in the storage device.
`0019. In some implementations, the user can provide geo
`graphic location information related to each image displayed
`in the user interface 100. The geographic location informa
`tion can be information related to the location where the
`image was captured. The names of the locations and addi
`tional location information for a group of images can be
collected and information about the collection can be displayed in panels 140, 145, 150, and 155 in the user interface.
`For example, if a user has captured Image 1 to Image n in
`different locations in the United States of America (USA),
`then all images that are displayed in thumbnails 105 in the
`
user interface were captured in one country. Consequently, the panel 140 entitled "All Countries" displays "1" and the name of the country. Within the USA, the user can have captured a first set of images in a first state, a second set of images in a second state, and a third set of images in a third state. Therefore, the panel 145 entitled "All States" displays "3" and the names of the states in which the three sets of images were captured. Similarly, panel 150 entitled "All Cities" displays "7" and the names of seven cities, and panel 155 entitled "All Places" displays "10" and the names of ten places of interest in the seven cities.
`0020. The geographical designations or other such labels
`assigned to panels 140, 145, 150, and 155 can vary. For
`example, if it is determined that the place of interest is a group
`of islands, then an additional panel displaying the names of
`the islands in which images were captured can be displayed in
`the user interface 100. Alternatively, the names of the islands
could be displayed under an existing panel, such as a panel corresponding to cities or places. The panels can be adapted to display any type of geographical information. For example, names of oceans, lakes, rivers, and the like also can be displayed in the user interface 100. In some implementations, two or more panels can be coalesced and displayed as a single panel. For example, panel 145 and panel 150 can be coalesced into one panel entitled "All States and Cities." Techniques for
`receiving geographic location information, grouping images
`based on the information, and collecting information to dis
`play in panels such as panels 140, 145, 150, and 155 are
`described below.
0021 FIG. 2 is a schematic of an exemplary user interface
`100 for receiving image location information. Image location
`refers to the geographic location information related to an
`image. In some implementations, the location information
`can be obtained when the image is captured. For example, the
camera with which the user captures the image can be operatively coupled to a location identifier, for example, a Global Positioning System (GPS) receiver that is built into the camera, such that when the image is captured, in addition to
`storing the image on a storage device, the GPS coordinates of
`the location in which the image is captured also are stored on
`the storage device. The GPS coordinates for an image can be
`associated with the image, for example, in the form of image
`file metadata. In some implementations, the user can capture
`the image using a first device, for example, a camera, obtain
`the GPS coordinates of the camera's location using a second
`device, and subsequently associate the GPS coordinates to
one or more captured images, for example, by syncing the two
`devices.
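The metadata flow described in this paragraph can be sketched in Python. The tag names mirror the EXIF GPS convention, while the helper names and sample coordinates are illustrative assumptions, not part of the specification:

```python
# Sketch: turning EXIF-style GPS metadata (degrees/minutes/seconds plus a
# hemisphere reference) into signed decimal degrees suitable for storing
# with the image. Tag names follow the EXIF GPS convention; helper names
# and sample values are illustrative.

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert a D/M/S triple and an N/S/E/W reference to decimal degrees."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    # South latitudes and west longitudes are negative by convention.
    return -value if ref in ("S", "W") else value

def coordinates_from_metadata(metadata):
    """Extract a (latitude, longitude) pair from an EXIF-like metadata dict."""
    lat = dms_to_decimal(*metadata["GPSLatitude"], metadata["GPSLatitudeRef"])
    lon = dms_to_decimal(*metadata["GPSLongitude"], metadata["GPSLongitudeRef"])
    return lat, lon

# A photograph captured near downtown San Francisco (illustrative values).
image_metadata = {
    "GPSLatitude": (37, 46, 30.0), "GPSLatitudeRef": "N",
    "GPSLongitude": (122, 25, 10.0), "GPSLongitudeRef": "W",
}
lat, lon = coordinates_from_metadata(image_metadata)
```

Coordinates logged by a second device could be matched to capture timestamps in the same way before being written back as image file metadata.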
`0022. As an alternative to, or in addition to, using GPS
`coordinates as geographic location information associated
`with captured images, the user can manually input a location
`corresponding to an image. The manually input location
`information can be associated with the corresponding image,
such as in the form of image file metadata. In this manner, the
`user can create a database of locations in which images were
`captured. Once entered, the manually input locations also can
be associated with additional images. Methods for providing the user with previously input locations to associate with new images are described later.
`0023 To associate geographic location information with
`an image, the user can select the image, for example, Image 1,
`using the cursor 115. In response, a location panel 200 can be
`displayed in the user interface 100. The location panel 200
can be presented such that it appears in front of one or more
`
`
`thumbnails 105. In some implementations, the selected
`image, namely Image 1, can be displayed as a thumbnail
`within the location panel 200. In implementations in which
`the geographic location of the selected image, for example,
`GPS coordinates, is known, a map 205 of an area including
`the location in which the selected image was captured can be
`displayed within the location panel 200. The map can be
`obtained from an external source (not shown). In addition, an
`object 210 resembling, for example, a pin, can be displayed in
`the map 205 at the location where the selected image was
`captured. In this manner, the object 210 displayed in the map
`205 can graphically represent the location associated with the
`selected image.
`0024. In implementations in which the geographic loca
`tion information is associated with the image after the
`selected image is uploaded into the user interface, the map
`205 and the object 210 can be displayed after the location
`information is associated with the selected image. For
`example, when an image is selected for which no geographic
`location information is stored, the location panel 200 displays
`the thumbnail of the image. Subsequently, when the GPS
`coordinates and/or other location information are associated
`with the image, the map 205 is displayed in the location panel
`200 and the object 210 representing the selected image is
`displayed in the map 205.
`0025. In some implementations, the camera that is used to
`capture the image and obtain the GPS coordinates also can
`include a repository of names of locations for which GPS
`coordinates are available. In such scenarios, the name of a
`location in which the selected image was captured can be
`retrieved from the repository and associated with the selected
`image, for example, as image file metadata. When such an
`image is displayed in the location panel 200, the name of the
`location can also be displayed in the location panel 200, for
`example, in the panel entitled “Image 1 Information.” In some
`scenarios, although the GPS coordinates are available, the
names of locations are not available. In such scenarios, the
`names of the locations can be obtained from an external
`source, for example, a repository in which GPS coordinates of
`multiple locations and names of the multiple locations are
`stored.
`0026. For example, the display device in which the user
`interface 100 and the location panel 200 are displayed is
`operatively coupled to a computer that is connected to other
`computers through one or more networks, for example, the
`Internet. In such implementations, upon obtaining the GPS
`coordinates of selected images, the computer can access other
`computer-readable storage devices coupled to the Internet
`that store the names of locations and corresponding GPS
coordinates. From such storage devices, names of the locations corresponding to the GPS coordinates of the selected
`image are retrieved and displayed in the location panel 200.
`The GPS coordinates obtained from an external source can
include a range surrounding the coordinates, for example, a
`polygonal boundary having a specified planar shape. Alter
`natively, or in addition, the range can also be latitude/longi
`tude values.
`0027. In scenarios where the computer is not coupled to a
`network, the user can manually input the name of a location
`into a text box displayed in the location panel 200, for
`example, the Input Text Box 215. As the user continues to
`input names of locations, a database of locations is created.
`Subsequently, when the user begins to enter the name of a
`location for a selected image, names of previously entered
`
`locations are retrieved from the database and provided to the
user as suggestions available for selection. For example, if the user enters "Bi" in the Input Text Box 215, and if "Big Basin," "Big Sur," and "Bishop" are the names of three locations that have previously been entered and stored in the database, then based on the similarity in spelling of the places and the text entered in the Input Text Box 215, these three places are displayed to the user, for example, in selectable text boxes 220 entitled "Place 1," "Place 2," and "Place 3," so that the user can select the text box corresponding to the name of the location rather than re-enter the name. As additional text is entered into the Input Text Box 215, existing location names that no longer represent a match can be eliminated from the selectable text boxes 220. In some implementations, the database of locations can be provided to select locations even when the computer is coupled to the network. In some implementations, a previously created database of locations is provided to the user from which the user can select names of existing locations and to which the user can add names of new locations.
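The suggestion behavior described above can be sketched as a simple prefix filter over the stored location names. The stored names follow the "Big Basin"/"Big Sur"/"Bishop" example from the text; the function name is an assumption:

```python
# Sketch of the prefix-matching suggestion behavior: as the user types into
# the input box, previously stored location names beginning with the typed
# text are offered, and names that no longer match are eliminated.

def suggest_locations(prefix, stored_names):
    """Return stored location names matching the typed prefix (case-insensitive)."""
    p = prefix.lower()
    return [name for name in stored_names if name.lower().startswith(p)]

# Illustrative database of previously entered locations.
database = ["Big Basin", "Big Sur", "Bishop", "San Jose", "Cupertino"]

suggestions = suggest_locations("Bi", database)   # the three "Bi..." places
narrowed = suggest_locations("Big S", database)   # further typing narrows the list
```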
0028. In some implementations, the name of the location can be new, and therefore not in the database. In such implementations, the user can select the text box 225 entitled "New place," enter the name of the new location, and assign the new location to the selected image. The new location is stored in the database of locations and is available as a suggestion for names that are to be associated with future selected images. Alternatively, a new location can be stored in the database without accessing the text box 225 if the text in the Input Text Box 215 does not match any of the location names stored in the database. Once the user enters the name of a location or selects a name from the suggested names, the text boxes 215, 220, and 225 can be hidden from display. Subsequently, a thumbnail of the selected image, information related to the image, the map 205, and the object 210 are displayed in the location panel 200.
`0029 When a user enters a name of a new location, the
`user can also provide geographic location information, for
`example, latitude/longitude points, for the new location. In
`addition, the user can also provide a range, for example, in
`miles, that specifies an approximate size around the points.
`The combination of the latitude/longitude points and the
`range provided by the user represents the range covered by the
`new location. The name of the new location, the location
information, and the range are stored in the database. Subsequently, when the user provides geographic location information for a second new location, if it is determined that the
`location information for the second new location lies within
`the range of the stored new location, then the two new loca
`tions can be grouped.
`0030 Geographic location information for multiple
`known locations can be collected to form a database. For
`example, the GPS coordinates for several hundreds of thou
`sands of locations, the names of the locations in one or more
`languages, and a geographical hierarchy of the locations can
`be stored in the database. Each location can be associated
`with a corresponding range that represents the geographical
`area that is covered by the location. For example, a central
point can be selected in San Francisco, Calif., such as downtown San Francisco, and a five-mile circular range can be associated with this central point. The central point can represent any center, such as a geographic center or a social/population center. Thus, any location within a five-mile circular range from downtown San Francisco is considered to be
`
`
`lying within and thus associated with San Francisco. The
`example range described here is circular. Alternatively, or in
addition, the range can be represented by any planar surface,
`for example, a polygon. In some implementations, for a loca
`tion, the user can select the central point, the range, and the
`shape of the range. For example, for San Francisco, the user
`can select downtown San Francisco as the central point,
specify a range of five miles, and specify that the range should
`be a hexagonal shape in which downtown San Francisco is
`located at the center.
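One way to implement the circular-range membership described above is a great-circle distance test against the central point. The haversine formula is a standard choice, not one named by the specification, and the coordinates and five-mile radius here are illustrative:

```python
import math

# Sketch of the circular-range test: a location is associated with a place
# (e.g., San Francisco) when it lies within a fixed radius of a chosen
# central point. Great-circle (haversine) distance is used; the central
# point and radius are illustrative assumptions.

EARTH_RADIUS_MILES = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two latitude/longitude points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))

def within_range(point, center, radius_miles):
    """True if `point` lies inside the circular range around `center`."""
    return haversine_miles(*point, *center) <= radius_miles

DOWNTOWN_SF = (37.7793, -122.4193)                         # assumed central point
near = within_range((37.8080, -122.4177), DOWNTOWN_SF, 5.0)  # waterfront photo
far = within_range((37.3382, -121.8863), DOWNTOWN_SF, 5.0)   # San Jose photo
```

A non-circular range (the polygonal case mentioned above) would replace `within_range` with a point-in-polygon test while keeping the same association logic.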
`0031. In some implementations, to determine that a new
`location at which a new image was captured lies within a
`range of a location stored in the database, a distance between
`the GPS coordinates of the central point of the stored location
`and that of the new location can be determined. Based on the
`shape of the range for the stored location, if the distance is
`within the range for the stored location, then the new location
`is associated with the stored location. In some implementa
`tions, the range from a central point for each location need not
`be distinct. In other words, two or more ranges can overlap.
`Alternatively, the ranges can be distinct. When the geographic
`location information associated with a new image indicates
`that the location associated with the new image lies within
two ranges of two central points, then, in some implementations, the location can be associated with both central points.
`Alternatively, the location of the new image can be associated
`with one of the two central points based on a distance between
`the location and the central point. In the geographical hierar
`chy, a collection of ranges of locations at a lower level can be
the range of a location at a higher level. For example, the sum
`of ranges of each city in California can be the range of the
`state of California. Further, in some implementations, the
`boundaries of a territory, such as a city or place of interest, can
`be expanded by a certain distance outside of the land border.
Thus, e.g., a photograph taken just offshore of San Francisco, such as on a boat, can be associated with San Francisco
`instead of the Pacific Ocean. The boundaries of a territory can
`be expanded by any distance, and in some implementations
`the amount of expansion for any given territory can be cus
`tomized. For example, the boundaries of a country can be
`expanded by a large distance, such as 200 miles, while the
`boundaries of a city can be expanded by a smaller distance,
`such as 20 miles.
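The overlapping-range and boundary-expansion rules above can be sketched as follows. The per-level expansion distances follow the 200-mile/20-mile example in the text; the function, place names, and distances are hypothetical, and distances are abstract units standing in for a great-circle computation:

```python
# Sketch combining two behaviors from this paragraph: a place's range can be
# expanded by a per-level distance, and when a photo falls inside two
# overlapping (expanded) ranges it is assigned to the nearer central point.

# Assumed default boundary expansion, in miles, per hierarchy level.
DEFAULT_EXPANSION = {"country": 200.0, "state": 20.0, "city": 2.0}

def assign_to_center(distance_by_place, ranges, levels):
    """Pick the nearest place whose expanded range contains the photo.

    distance_by_place: {name: distance from photo to that place's center}
    ranges:            {name: base radius of the place's range}
    levels:            {name: hierarchy level, keying DEFAULT_EXPANSION}
    """
    candidates = [
        (dist, name)
        for name, dist in distance_by_place.items()
        if dist <= ranges[name] + DEFAULT_EXPANSION[levels[name]]
    ]
    return min(candidates)[1] if candidates else None

# An offshore photo 6 units from San Francisco's center: inside the expanded
# city range (5 + 2) but outside Oakland's (it is 9 units from Oakland).
place = assign_to_center(
    {"San Francisco": 6.0, "Oakland": 9.0},
    {"San Francisco": 5.0, "Oakland": 5.0},
    {"San Francisco": "city", "Oakland": "city"},
)
```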
0032 FIG. 3 is a schematic of an exemplary user interface
`100 for displaying image location information. When images,
`for example, Image 1 to Image n, have been associated with
`corresponding locations of capture, two or more images can
`be grouped based on location. For example, if Image 1 and
`Image 2 were both taken in Big Basin Redwoods State Park in
`California, USA, then both images can be grouped based on
`the common location. Further, a location-based association
can be formed without respect to time, such that Image 1 and
`Image 2 can be associated regardless of the time period by
`which they are separated.
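The location-based grouping described in this paragraph amounts to keying images by place while ignoring capture time. A minimal sketch, with illustrative image records:

```python
from collections import defaultdict

# Sketch of location-based grouping: images sharing a place name are grouped
# regardless of when they were captured. The (name, place, year) records are
# illustrative.

def group_by_place(images):
    """Map each place name to the list of image names captured there."""
    groups = defaultdict(list)
    for name, place, _year in images:   # capture time is deliberately ignored
        groups[place].append(name)
    return dict(groups)

library = [
    ("Image 1", "Big Basin Redwoods State Park", 2006),
    ("Image 2", "Big Basin Redwoods State Park", 2009),
    ("Image 3", "San Jose", 2009),
]
groups = group_by_place(library)
```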
`0033. In scenarios in which the locations are based on GPS
`coordinates, the coordinates of two images may not be the
same, even though the locations in which the two images were
`captured are near one another. For example, if the user cap
`tures Image 1 at a first location in Big Basin Redwoods State
`Park and Image 2 at a second location in the park, but at a
`distance of five miles from the first location, then the GPS
`coordinates associated with Image 1 and Image 2 are not the
same. However, based on the above description, both images
`can be grouped together using Big Basin Redwoods State
`
`Park as a common location if Image 2 falls within the geo
`graphical area associated with the central point of Image 1.
`0034. In some implementations, instead of the geographi
`cal hierarchy being based on countries, states, cities, and the
`like, the hierarchy of grouping can be distance-based, such as
`in accordance with a predetermined radius. For example, a
`five mile range can be the lowest level in the hierarchy. As the
`hierarchy progresses from the lowest to the highest level, the
`range can also increase from five miles to, for example, 25
`miles, 50 miles, 100 miles, 200 miles, and so on. In such
`scenarios, two images that were captured at locations that are
`60 miles apart can be grouped at a higher level in the hierar
`chy, such as a grouping based on a 100 mile range, but not
grouped at a lower level in the hierarchy, such as a grouping
`based on a 50 mile range. In some implementations, the
`default ranges can be altered in accordance with user input.
`Thus, a user can specify, e.g., that the range of the lowest level
`of the hierarchy is three miles.
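The distance-based hierarchy above can be sketched as a search for the lowest level whose range covers the distance between two capture locations. The default ranges follow the example in the text, and the text notes they can be altered by user input; the function name is an assumption:

```python
# Sketch of the distance-based grouping hierarchy: each level is a radius,
# and two images group at the lowest level whose range covers the distance
# between their capture locations.

DEFAULT_LEVELS = [5, 25, 50, 100, 200]   # miles, lowest level first

def lowest_grouping_level(distance_miles, levels=DEFAULT_LEVELS):
    """Return the smallest range that still groups the two images, or None."""
    for level_range in levels:
        if distance_miles <= level_range:
            return level_range
    return None

# Two images 60 miles apart group at the 100-mile level but not the 50-mile
# level, matching the example in the text.
level_for_60 = lowest_grouping_level(60)
# A user-altered hierarchy whose lowest level is three miles.
custom = lowest_grouping_level(2, [3, 25, 50, 100])
```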
`0035 Alternatively, or in addition, the range for each level
`in the hierarchy can be based upon the location in which the
`images are being captured. For example, if based on GPS
coordinates or user specification, it is determined that the first
`image was captured within the boundaries of a specific loca
`tion, such as Redwoods State Park, Disneyland, or the like,
`then the range of the lowest level of the hierarchy can be
`determined based on the boundaries of that location. To do so,
`for example, the GPS coordinates of the boundaries of Red
`woods State Park can be obtained and the distances of the
`reference location from the boundaries can be determined.
`Subsequently, if it is determined that a location of a new
`image falls within the boundaries of the park, then the new
`image can be grouped with the reference image. A higher
`level of hierarchy can be determined to be the boundary of a
`larger location, for example, the boundaries of a state or
`country. An intermediate level of hierarchy can be the bound
`ary of a region within a larger location, for example, the
`boundaries of Northern California or a county, such as
`Sonoma. Any number of levels can be defined within a hier
`archy. Thus, all captured images can be grouped based on the
`levels of the hierarchy.
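When a location's boundary is available as a polygon of coordinates, the containment check described above can be done with a standard ray-casting test. The rectangular "park boundary" below is illustrative, not real park data:

```python
# Sketch of boundary-based grouping: a new image is grouped with a reference
# image if its capture point falls inside the boundary polygon of the
# reference location. Standard ray-casting point-in-polygon test.

def point_in_polygon(point, polygon):
    """Ray-casting containment test for an (x, y) point and a vertex list."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending to the right of the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

park_boundary = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]
inside = point_in_polygon((2.0, 1.5), park_boundary)    # grouped with the park
outside = point_in_polygon((5.0, 1.5), park_boundary)   # not grouped
```

Expanding the boundary by a fixed distance, as described in the next paragraph, corresponds to testing against an outward-offset polygon instead.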
`0036. In some implementations, a user can increase or
`decrease the boundaries associated with a location. For
`example, the user can expand the boundary of Redwoods
State Park by a desired amount, e.g., one mile, such that an
`image captured within the expanded boundaries of the park is
`grouped with all of the images captured within the park. In
some scenarios, the distance by which the boundary is
`expanded can depend upon the position of a location in the
hierarchy. Thus, in a default implementation, at a higher level, the distance can be higher. For example, because "Country" represents a higher level in the geographical hierarchy, the default distance by which the boundary is expanded can be 200 miles. In comparison, at a lower level in the hierarchy, such as the "State" level, the default distance can be 20 miles. The
`distances can be altered based on user input.
`0037. In some implementations, the user can specify a new
`reference image and identify a new reference location. For
`example, after capturing images in California, the user can
`travel to Texas, capture a new image, and specify the location
`of the new image as the new reference location. Alternatively,
`it can be determined that a distance between a location of the
`new image and that of the previous reference image is greater
`than a threshold. Because the location of the new image
`
`
`exceeds the threshold distance from the refer