`(19) World Intellectual Property
`Organization
`International Bureau
`
`(43) International Publication Date
`21 February 2013 (21.02.2013)
`
`
`(10) International Publication Number
WO 2013/023705 A1
`
(51) International Patent Classification:
G06T 19/00 (2011.01)
`
(21) International Application Number: PCT/EP2011/064251

(22) International Filing Date: 18 August 2011 (18.08.2011)

(25) Filing Language: English

(26) Publication Language: English
`
(71) Applicant (for all designated States except US): LAYAR B.V. [NL/NL]; Rietlandpark 301, NL-1019 DW Amsterdam (NL).
`
(72) Inventors; and
(75) Inventors/Applicants (for US only): HOFMANN, Klaus Michael [DE/NL]; Zeeburgerpad 4c, NL-1018 AJ Amsterdam (NL). VAN DER KLEIN, Raimo Juhani [NL/NL]; De Muy 357, NL-2134 XJ Hoofddorp (NL). VAN DER LINGEN, Ronald [NL/NL]; Jan Campertlaan 125, NL-2624 PB Delft (NL). VAN DE ZANDSCHULP, Klasien [NL/NL]; Ligusterstraat 25, NL-6543 SV Nijmegen (NL).
`
(74) Agents: VISSCHER, Erik Henk et al.; Overschiestraat 180, NL-1062 XK Amsterdam (NL).
`
`(81) Designated States (unless otherwise indicated, for every
`kind of national protection available): AE, AG, AL, AM,
`AO, AT, AU, AZ, BA, BB, BG, BH, BR, BW, BY, BZ,
`CA, CH, CL, CN, CO, CR, CU, CZ, DE, DK, DM, DO,
`DZ, EC, EE, EG, ES, FI, GB, GD, GE, GH, GM, GT, HN,
`HR, HU, ID, IL, IN, IS, JP, KE, KG, KM, KN, KP, KR,
`KZ, LA, LC, LK, LR, LS, LT, LU, LY, MA, MD, ME,
`MG, MK, MN, MW, MX, MY, MZ, NA, NG, NI, NO, NZ,
`OM, PE, PG, PH, PL, PT, QA, RO, RS, RU, SC, SD, SE,
`SG, SK, SL, SM, ST, SV, SY, TH, TJ, TM, TN, TR, TT,
`TZ, UA, UG, US, UZ, VC, VN, ZA, ZM, ZW.
`
`(84) Designated States (unless otherwise indicated, for every
`kind of regional protection available): ARIPO (BW, GH,
`GM, KE, LR, LS, MW, MZ, NA, SD, SL, SZ, TZ, UG,
`ZM, ZW), Eurasian (AM, AZ, BY, KG, KZ, MD, RU, TJ,
`TM), European (AL, AT, BE, BG, CH, CY, CZ, DE, DK,
`EE, ES, FI, FR, GB, GR, HR, HU, IE, IS, IT, LT, LU, LV,
`MC, MK, MT, NL, NO, PL, PT, RO, RS, SE, SI, SK, SM,
`TR), OAPI (BF, BJ, CF, CG, CI, CM, GA, GN, GQ, GW,
`ML, MR, NE, SN, TD, TG).
`
`Published:
`— with international search report (Art. 21(3))
`
`(54) Title: METHODS AND SYSTEMS FOR ENABLING CREATION OF AUGMENTED REALITY CONTENT
`
[FIG. 1: illustrative system and data structure for enabling creation of augmented reality content, showing example data for a target object (an augmentation reading "Hello!"), an Object Storage (1010), a Feature Extractor / Object Recognizer System (1014), and a Fingerprint DB (132), which exchange HQ features, a reference image, a location, an object id, and an augmentation with the user device.]
`
(57) Abstract: Methods and systems for enabling creation of augmented reality content on a user device including a digital imaging part, a display, a user input part and an augmented reality client, wherein said augmented reality client is configured to provide an augmented reality view on the display of the user device using a live image data stream from the digital imaging part, are disclosed. User input is received from the user input part to augment a target object that is at least partially seen on the display while in the augmented reality view. A graphical user interface is rendered to the display part of the user device, said graphical user interface enabling a user to author augmented reality content for the two-dimensional image.
`
`
`
`
Methods and Systems for Enabling Creation of Augmented Reality Content
`
`RELATED APPLICATIONS
`
This application is related to co-pending International (Patent Cooperation Treaty) Patent Application No. XXXXXXXXXXXXX, filed on August 18, 2011, entitled "Computer-vision based augmented reality system," which application is incorporated herein by reference and made a part hereof in its entirety.
`
`FIELD OF INVENTION
`
The disclosure generally relates to methods and systems that enable the authoring and management of augmented reality content. In particular, though not necessarily, the disclosure relates to methods and systems for enabling a user to author augmented reality content onto real world objects.
`
`BACKGROUND
`
Due to the increasing capabilities of multimedia equipment, mobile augmented reality (AR) applications are rapidly expanding. These AR applications allow enrichment of a real scene with additional content (also referred to as "augmentation" or "augmented reality content"), which may be displayed to a user in the form of a graphical layer overlaying the real-world scenery.
`
Example augmented reality content may include two-dimensional graphics or three-dimensional objects that aim to augment a real world object with virtual content. Augmented reality content may exist in a three-dimensional (virtual) space. In particular, at least one of placement/position, shape, size, movement and any other spatial attributes of the augmented reality content correspond to a virtual three-dimensional space. For example, a rectangular billboard poster as augmented reality content has at least properties related to position, orientation, size and shape that exist in a three-dimensional augmented reality space.
`
While an experienced user may program and create three-dimensional objects easily using sophisticated three-dimensional graphics software running on a computer, a person without experience in creating virtual three-dimensional objects would find it difficult to create augmented reality content using devices such as a handheld tablet or mobile phone. The limited user interface offered by user devices hinders the authoring of three-dimensional objects, because the user input methods and user interfaces do not easily allow the manipulation of objects in a three-dimensional space.
`
Hence, it is desirable to provide methods and systems that facilitate the creation of augmented reality content and that at least alleviate the problems disclosed herein. Furthermore, it is desirable to provide a platform that manages a collection of augmented reality content created by users.
`
`SUMMARY
`
Augmented reality systems enable the visual presentation of augmented reality content over real objects in the real world. Within the system, augmented reality content may be represented as objects occupying a three-dimensional virtual space of the real world. The augmented reality content may have a particular spatial relationship with the objects in the real world. For instance, a virtual billboard poster used as augmented reality content may be positioned on the North side of an office building, with the front of the poster facing outward from the office building. Accordingly, the poster has position, size, shape, and/or orientation properties in relation to the virtual three-dimensional augmented reality space. In the context of this disclosure, the augmented reality space may include a virtual representation of the three-dimensional environment that represents the real world. Augmented reality content exists in the augmented reality space.
`
`
`
`
An augmented reality system or an augmented reality device may include a display part (e.g., an LED screen) that shows the augmented reality space (referred to as the "augmented reality view") by combining image frames from a live image data stream from a digital imaging part (e.g., a camera) with the augmented reality content. Furthermore, the augmented reality system includes a user input part where a user may provide user input. For example, the user input part may include a touch screen. Typically, the touch screen or the user input part is limited to receiving user input in a two-dimensional space (e.g., receiving user input events associated with x, y coordinates). This poses a problem for users wanting to create three-dimensional objects in the virtual augmented reality space, because the two-dimensional user input does not correspond directly to the three-dimensional virtual space as seen by the user through the display part of the augmented reality device. If the user input is mapped to the three-dimensional space in a way that is unnatural for the user (e.g., a user clicks on one of two buttons, but the intended button does not become activated and the other button becomes activated instead, due to a poor transformation of the user input event into three-dimensional space), the user experience is degraded.
`
Furthermore, from the augmented reality system's perspective, there is a technical problem with processing user input that exists in the two-dimensional space. When the user input was intended to interact with objects in the three-dimensional virtual space, the user input received by the augmented reality system only exists in two-dimensional space, thereby leaving one degree of freedom in which the system is free to interpret how the two-dimensional point may be projected into a three-dimensional space. A coarse projection could be performed. But when a user is performing a task where precision matters, such as drawing or creating objects in three-dimensional space, user inputs may not be projected properly onto the real world objects existing in the augmented reality space. The situation may be worsened when the user device and the user are continuously making small or large movements, causing further jitter in the accuracy of the projection.
`
When creating augmented reality content (e.g., drawing, sketching, etc.) on a two-dimensional plane by taking the user input and projecting the user input into three-dimensional space, the projection can be workable, and drawing in a three-dimensional context is possible in theory, given sufficient information about the user input in two-dimensional space and provided the user and the surroundings hold still. If the projection process has jitter, this jitter will also be visually apparent and present in the augmented reality content (e.g., the drawing or sketch) itself. Touching the screen or providing any user input requiring physical input on a mobile user device generally also causes slight movement of the user device, causing even more problems in accuracy.
`
The user input in two-dimensional space may not provide sufficient information to accurately translate/project the two-dimensional user inputs into a three-dimensional space. For example, a user taps on the screen at position x, y. The augmented reality system lacks information such as the desired direction of the tap (e.g., is the user directing the tap upwards or downwards, and at what angle?) such that the x, y coordinates may be more accurately projected into a three-dimensional space. Accordingly, it is desirable to have methods and systems that enable users to create augmented reality content and that at least alleviate some of the problems disclosed herein.
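By way of illustration only, the following minimal sketch shows the missing degree of freedom concretely: a two-dimensional tap is unprojected into a camera-space ray, and every depth along that ray maps back to the same screen position. The intrinsics matrix values and function names are assumptions for the example and do not form part of the disclosure.

    import numpy as np

    def tap_to_ray(x, y, K):
        """Unproject a 2D tap (pixel coordinates) into a camera-space ray.

        K is the 3x3 camera intrinsics matrix. The result is a unit direction;
        any depth d yields a 3D point d * direction that projects back to (x, y).
        """
        pixel = np.array([x, y, 1.0])
        direction = np.linalg.inv(K) @ pixel   # back-project through the lens model
        return direction / np.linalg.norm(direction)

    # Illustrative 640x480 camera: focal length ~500 px, centered principal point.
    K = np.array([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])
    ray = tap_to_ray(100, 200, K)
    # Points at depth 1 and 3 along the ray both project to pixel (100, 200):
    for depth in (1.0, 3.0):
        p = depth * ray
        reprojected = K @ p
        print(depth, reprojected[:2] / reprojected[2])  # (100.0, 200.0) both times

The tap alone cannot distinguish the two printed points; some further source of information (a tracked surface, an assumed plane, etc.) must supply the depth.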
A method for enabling creation of augmented reality content (also referred to as user-generated content) on a user device including a digital imaging part, a display output, a user input part and an augmented reality client is disclosed. An example user device may be a mobile phone or a mobile computing tablet having a touch-sensitive or pressure-sensitive screen. Said augmented reality client is configured to provide an augmented reality view on the display output using a live image data stream from the digital imaging part. An augmented reality client, implemented at least in part as software running on the user device, preferably includes a graphics engine to compose image frames from a live image data stream to form an augmented reality view.
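One way to picture the compositing role of such a graphics engine is the per-frame loop sketched below. Here camera, overlay_for and display are hypothetical stand-ins for the digital imaging part, the renderer of the augmented reality content and the display output; none of these names comes from the disclosure.

    import numpy as np

    def blend(frame, overlay_rgba):
        """Paint opaque overlay pixels over the camera frame (binary alpha)."""
        mask = overlay_rgba[:, :, 3] > 0           # where the overlay has content
        composed = frame.copy()
        composed[mask] = overlay_rgba[:, :, :3][mask]
        return composed

    def run_ar_view(camera, overlay_for, display):
        """Per-frame loop: camera frame + rendered overlay = augmented reality view."""
        while True:
            frame = camera.next_frame()            # frame from the digital imaging part
            overlay = overlay_for(frame)           # graphics layer with the AR content
            display.show(blend(frame, overlay))    # composed augmented reality view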
`
A first user input is received, preferably with a user event listener running on the user device, from the user input part to select a target object that is at least partially seen in the augmented reality view. A target object is an object of interest to which a user wishes to add augmented reality content. A graphical user interface is rendered for display on the display output, said graphical user interface enabling a user to create the augmented reality content. In this disclosure, a graphical user interface comprises the visual aspect of a user interface as well as any software or hardware components that enable a user to manipulate the state of the user device and/or the augmented reality client.
`
The enabling step comprises creating a graphical user interface object (an object preferably in the software environment) having a two-dimensional image of the target object, said graphical user interface object enabling the user to author the augmented reality content on top of the two-dimensional image, and rendering the graphical user interface object for display on the display output. The resulting graphical user interface (comprising graphics and user event listener(s), interactivity elements for enabling the receipt and processing of user input, thereby providing user interactivity) appears stuck to the display output screen, and the graphical user interface object (preferably in software) that makes up the graphical user interface is rendered such that the object is placed in parallel with the display output. As such, a plane of the graphical user interface object is substantially in parallel with a plane of the display output. Using the graphical user interface, a second user input, representative of the augmented reality content authored using the graphical user interface, is received, preferably with a user event listener running on the user device.
`
In one embodiment, the graphical user interface for enabling the user to author the augmented reality content on top of the two-dimensional image is a what-you-see-is-what-you-get (WYSIWYG) editor that enables the capture of the spatial relationship between the second user input and the two-dimensional image of the target object. A WYSIWYG editor enables a user to draw directly onto the two-dimensional image, enabling a direct one-to-one mapping of the user input space (e.g., the screen resolution) with the two-dimensional image (e.g., the image resolution). In this manner, the content as provided by the user appears later in the augmented reality view as if the user had drawn directly onto the target object. The editor captures the information needed to display the augmented reality content in the correct position when it is rendered for display in augmented reality view.
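A minimal sketch of the one-to-one mapping idea follows; the screen and image dimensions are arbitrary examples, and a real editor would also account for letterboxing, zoom and pan.

    def screen_to_image(x, y, screen_size, image_size):
        """Map a touch point from screen coordinates to 2D-image pixel coordinates.

        Assumes the two-dimensional image is drawn filling the screen region,
        so a stroke lands on the image exactly where the user sees it.
        """
        sw, sh = screen_size
        iw, ih = image_size
        return x * iw / sw, y * ih / sh

    # A tap at (320, 240) on a 640x480 screen maps to (512, 384) on a 1024x768 image.
    print(screen_to_image(320, 240, (640, 480), (1024, 768)))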
`
In one embodiment, an image frame from the live image data stream is captured in response to receiving the first user input. The user input may include a user tapping on the user input part to indicate that he/she wishes to take a photo of the target object, to recognize the target object, to begin augmenting the object, etc. The captured image frame is processed to extract tracking features. Preferably using a tracker, three-dimensional pose information of the target object is determined on the basis of the extracted tracking features and the image data stream.
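The disclosure does not prescribe a particular feature type or pose algorithm. As one concrete possibility, the sketch below uses OpenCV's ORB features and a RANSAC-estimated homography, which for a roughly planar target encodes the target's pose relative to the camera; the pipeline is an illustrative assumption, not the claimed method.

    import cv2
    import numpy as np

    def estimate_target_homography(reference_image, current_frame):
        """Extract ORB tracking features and estimate a planar target's homography.

        Inputs are grayscale numpy images. With known camera intrinsics, the
        homography can be decomposed into a rotation and translation (e.g., via
        cv2.decomposeHomographyMat), i.e., three-dimensional pose information.
        """
        orb = cv2.ORB_create(nfeatures=500)
        kp_ref, des_ref = orb.detectAndCompute(reference_image, None)
        kp_cur, des_cur = orb.detectAndCompute(current_frame, None)

        # Match reference features against the current frame.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_ref, des_cur), key=lambda m: m.distance)

        # Robustly fit the homography to the matched point pairs.
        src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_cur[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H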
`
In one embodiment, the user may prefer to attach the augmented reality content onto the target in the augmented reality view as quickly as possible, even before features are extracted at the remote object recognition/feature extraction system. Accordingly, the tracking features extracted locally on the user device have a quality that is lower than the quality of other tracking features that are associated with the target object and are extracted by an object recognition system remote from the user device. The processing of the image frame is performed if the tracking features from the object recognition system are not (yet) available at the user device. If desired, higher quality tracking features may be provided by a feature extraction module in a system remote from the user device.
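Read as logic, this fallback might look like the following sketch, where remote_features is None until the object recognition system has responded and extract_local_features stands for a hypothetical low-latency on-device extractor; both names are invented for illustration.

    def tracking_features_for(frame, remote_features, extract_local_features):
        """Prefer the remote system's high-quality features once available;
        until then, fall back to fast, lower-quality local extraction."""
        if remote_features is not None:
            return remote_features                 # extracted remotely, higher quality
        return extract_local_features(frame)       # extracted on-device, available now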
`
In one embodiment, an image frame from the live image data stream is captured in response to receiving the first user input, and the image frame, or a derivation of the image frame, is transmitted to an object recognition system remote from the user device. An identifier associated with the target object, tracking features and the two-dimensional image are received from the object recognition system. Three-dimensional pose information of the target object is determined on the basis of the received tracking features and the image data stream.
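As an illustration of this round trip (the endpoint URL, field names and JSON shape are hypothetical assumptions; the disclosure does not specify a wire format), a client might transmit the captured frame and receive the identifier, tracking features and reference image in response:

    import requests

    def recognize(frame_jpeg_bytes):
        """Send a captured frame to a hypothetical remote recognition endpoint.

        The URL and response fields below are illustrative assumptions only.
        """
        response = requests.post(
            "https://recognition.example.com/recognize",   # hypothetical endpoint
            files={"image": ("frame.jpg", frame_jpeg_bytes, "image/jpeg")},
        )
        response.raise_for_status()
        result = response.json()
        # Assumed fields: object id, tracking features, reference image URL.
        return result["obj_id"], result["features"], result["ref_image"]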
In one embodiment, the user device further includes a tracker part. The tracker part, preferably at least partially implemented on the user device as software, comprises processes for estimating the pose information about the target object using, for example, an image captured from the live image stream. The tracker enables the generation of matrices that would later be used by a graphics engine to create transformed graphics objects, so that augmented reality content appears (even though it is rendered in a two-dimensional space) to have a shape and pose in a three-dimensional virtual world.

Using the tracker part, the augmented reality content (sometimes part of a graphics object) is transformed by scaling, rotating and translating the augmented reality content based on three-dimensional pose information in the tracker part, to generate a graphics object having the transformed augmented reality content. In some situations, the graphics object is created first with the non-transformed augmented reality content, and then the graphics object is transformed using the three-dimensional pose information in the tracker part to render the graphics object in perspective with the target object. In some situations, the augmented reality content is transformed first and then a graphics object is created in the three-dimensional environment for rendering and display. The graphics object is rendered for display in the display output, the graphics object appearing in perspective with the target object in the augmented reality view. In some embodiments, the graphics object is referred to as a graphical overlay that is used in combination with images from the live image feed in composing the augmented reality view.
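The scale-rotate-translate step can be expressed as a single model matrix. The sketch below is a generic homogeneous-coordinates construction with an arbitrary example pose; it is not the tracker part's actual output format.

    import numpy as np

    def model_matrix(scale, rotation_3x3, translation_3):
        """Compose a 4x4 model matrix: first scale, then rotate, then translate."""
        M = np.eye(4)
        M[:3, :3] = rotation_3x3 @ (np.eye(3) * scale)
        M[:3, 3] = translation_3
        return M

    # Example pose (values are arbitrary): half-size content, rotated 90 degrees
    # about the y axis, placed one unit in front of the camera.
    R = np.array([[0.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0],
                  [-1.0, 0.0, 0.0]])
    M = model_matrix(0.5, R, np.array([0.0, 0.0, 1.0]))
    corner = np.array([1.0, 1.0, 0.0, 1.0])   # a corner of the content quad
    print(M @ corner)                          # that corner in augmented reality space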
`
`
`
`
In one embodiment, the augmented reality content (sometimes part of a graphics object) is transformed by scaling, rotating and translating the augmented reality content based on (1) three-dimensional pose information in the tracker part and (2) the spatial relationship, to generate a graphics object having the transformed augmented reality content. The graphics object is rendered for display in the display output, the graphics object appearing in perspective with the target object in the augmented reality view.
The augmentation is preferably stored in a format and data object that is suitable for retrieval, storage, and manipulation. The augmentations are preferably maintained remotely from the user device for the long term. The augmentations are preferably easy to transform.
In one embodiment, the second user input, representative of the augmented reality content, is received through the graphical user interface object from the user input part. The second user input, or a derivation of the second user input, is stored as a graphics data file in a non-transient computer readable medium. The graphics data file is associated with the target object. The second user input may be converted from user input events into data for the graphics data file.
`
In one embodiment, the storing of the derivation of the second user input data comprises deriving a scalable vector graphic of the augmented reality content based on the second user input and using the scalable vector graphic as the derivation of the user input data. A scalable vector graphic may be used as the format to facilitate the transformation process, which may involve scaling, transforming, and rotating. Naturally, other types of formats may be used as long as the format facilitates the transformation of graphics.
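As a sketch of this derivation (the shape of the recorded stroke is an assumption; the SVG path syntax itself is standard), a touch stroke can be serialized as an SVG path, which then scales, transforms and rotates without loss of quality:

    def stroke_to_svg(points, width=512, height=512):
        """Serialize a touch stroke (a list of (x, y) tuples) as a minimal SVG.

        'M' moves to the first point and 'L' draws lines to the rest; the
        resulting vector graphic can be transformed without pixelation.
        """
        path = "M " + " L ".join(f"{x:.1f} {y:.1f}" for x, y in points)
        return (f'<svg xmlns="http://www.w3.org/2000/svg" '
                f'width="{width}" height="{height}">'
                f'<path d="{path}" fill="none" stroke="black"/></svg>')

    print(stroke_to_svg([(10, 10), (40, 80), (90, 30)]))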
`
To promote the addition and proliferation of the augmented reality content, various target objects and the associated augmented reality content may belong to users within a social community. The users and their target objects and/or augmented reality content may be associated with a user profile associated with the individual users and/or user devices. As such, the graphics data file may be associated with a user profile associated with the user device.
`
`
`
`
In one embodiment, the graphical user interface object comprises at least one of the following interactive parts for augmenting the target object: a drawing part for drawing on the two-dimensional image displayed on the display output, a stamping part for adding a copy of a stored image onto the two-dimensional image displayed on the display output, a three-dimensional drawing part for adding a three-dimensional object to the target object, and a text part for adding text onto the two-dimensional image displayed on the display output. A graphical user interface having at least one of these interactive parts facilitates the creation and authoring of content on top of the two-dimensional image of the target object.
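For concreteness, the four interactive parts could be modelled as an enumeration of editor tools; this structuring is hypothetical and not mandated by the disclosure.

    from enum import Enum, auto

    class AuthoringTool(Enum):
        """Hypothetical modelling of the interactive parts named above."""
        DRAW = auto()     # freehand drawing on the two-dimensional image
        STAMP = auto()    # paste a copy of a stored image
        DRAW_3D = auto()  # add a three-dimensional object
        TEXT = auto()     # place text on the image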
As an extension, the two-dimensional image of the target object, the graphical user interface for authoring the content, the augmented reality content itself, and any other suitable graphics objects or graphical user interface objects may be flipped, attached and/or detached. Flipping comprises animating the object such that it is rotated around/about an axis in the plane of the object (preferably the object has a two-dimensional plane) by 180 degrees. Accordingly, an object having a front side facing one direction is turned from front to back to show a back side of the object as a result. Attaching involves taking an object and sticking it to a target object. The object is preferably animated to begin in a position parallel to the display output and to end in a position rendered in perspective with the target object. Detaching involves the reverse of the attaching process, preferably animating an object rendered in perspective with a tracked object to an end position where the object is stuck to the display output (out of perspective and in parallel with the display output). User input is received from the user to either flip, attach or detach the object. The user input may include any suitable user input such as a motion gesture, clicking, tapping, voice command, etc.
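A sketch of the flip animation follows: the rotation angle about an in-plane axis is interpolated from 0 to 180 degrees, one rotation matrix per frame. The frame count and axis choice are arbitrary for the example.

    import numpy as np

    def flip_keyframes(n_frames=30):
        """Yield rotation matrices animating a 180-degree flip about the y axis.

        The y axis lies in the plane of a screen-aligned object, so the object
        turns from front to back; the last frame shows its back side.
        """
        for t in np.linspace(0.0, np.pi, n_frames):
            c, s = np.cos(t), np.sin(t)
            yield np.array([[c, 0.0, s],
                            [0.0, 1.0, 0.0],
                            [-s, 0.0, c]])

    frames = list(flip_keyframes())
    print(np.round(frames[-1]))  # diag(-1, 1, -1): the object now faces away.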
`
In one embodiment, a third user input is received, preferably by a user input event listener, to flip the two-dimensional image. The two-dimensional image is animated on the display output by showing an effect of flipping over the two-dimensional image and displaying content associated with the target object. In some embodiments, the graphics object is animated on the display output by showing an effect of flipping over the augmented reality content and displaying other content associated with the target object.
In another embodiment, a third user input is received, preferably by a user input event listener, to detach the graphics object from the target object. The graphics object is updated by scaling, transforming, and rotating the graphics object to a pose where the graphics object has a two-dimensional plane substantially parallel to the plane of the display output.
In yet another embodiment, a third/fourth user input is received to attach the graphics object to the tracked object. Updated three-dimensional pose information of the tracked object is retrieved/received from the tracker part. The graphical object for display on the display output is updated by scaling, rotating and translating the graphical object based on the updated three-dimensional pose information.
In one embodiment, a fifth user input is received to flip the graphics object, the graphics object having a first pose, such that the graphics object is rotated from the first pose to a second pose by substantially 180 degrees around an axis lying in the plane of the graphics object. Back-side content to be displayed on the display output for a back side of the graphics object is retrieved/received. The back side of the graphics object is updated to include the back-side content. An animated sequence for the graphics object is generated, the animated sequence including graphics from the first pose to the second pose by scaling, rotating and translating the graphics object.
An augmented reality client configured to enable creation of augmented reality content on a user device having a digital imaging part, a display output and a user input part is disclosed. The augmented reality client comprises a user input event listener and a graphics engine. The user input event listener is configured to receive a first user input through the user input part to select a target object that is at least partially seen in the augmented reality view. A user input event listener may be partially implemented in the operating system or the augmented reality client to listen for user input events coming from the user input part. User input events may include the type of event and the coordinates of the event itself, as well as any relevant timing information. The graphics engine is configured to render a graphical user interface to the display, said graphical user interface enabling a user to create the augmented reality content by creating a graphical user interface object having a two-dimensional image of the target object, said graphical user interface object enabling the user to author the augmented reality content on top of the two-dimensional image, and rendering the graphical user interface object for display on the display output, wherein a plane of the graphical user interface object is substantially in parallel with a plane of the display. The user input event listener is further configured to receive a second user input representative of the augmented reality content authored using the graphical user interface.
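Read as a data structure, a user input event as described above might be captured as follows; the field names and example values are illustrative only.

    from dataclasses import dataclass

    @dataclass
    class UserInputEvent:
        """A user input event: type, screen coordinates, and timing information."""
        event_type: str   # e.g. "tap", "drag", "release" (illustrative names)
        x: float          # screen coordinates of the event
        y: float
        timestamp: float  # relevant timing information, e.g. seconds since epoch

    # A tap selecting the target object might arrive as:
    event = UserInputEvent("tap", 100.0, 200.0, 1313661000.0)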
`
The disclosure may also relate to a computer program product, implemented on a computer-readable non-transitory storage medium, wherein the computer program product may comprise software code portions configured for, when run on a computer, executing the method steps according to any of the methods described in the present disclosure. The computer program product is preferably implemented at least in part in any of: a computer processor, an operating system, an augmented reality client, a graphics engine, a user input event listener, etc. of the user device.
`
A method for enabling creation of user-generated content on a user device associated with a digital imaging part, a display output, a user input part and an augmented reality client is disclosed. Said augmented reality client is configured to provide an augmented reality view on the display output using image data from the digital imaging part. A first user input is received from the user input part to select a target object displayed in said display output. A first graphical user interface is provided, said interface comprising a two-dimensional image of at least part of the target object, said graphical user interface being configured to receive second user input associated with user-generated content, preferably said user-generated content being aligned with said two-dimensional image. A third user input is received from the user input part to attach said user-generated content to said target object. In a tracker part of the augmented reality client, three-dimensional pose information associated with said selected target object is determined on the basis of at least an image of the target object from the digital imaging part. Said user-generated content is rendered for display in the display output on the basis of said three-dimensional pose information, such that the user-generated content is displayed in perspective with the target object, said user-generated content being rendered matching the three-dimensional pose of said selected target object in the display output.
An augmented reality client configured to enable creation of user-generated content on a user device having a digital imaging part, a display output and a user input part is disclosed. The augmented reality client comprises user input listeners, a graphics engine, and a tracker part. A first user input listener (preferably software processes configured to listen for user input events) is configured to receive a first user input from the user input part to select a target object displayed in said display output. A graphics engine is configured to provide a first graphical user interface comprising a two-dimensional image of at least part of the target object, said graphical user interface being configured to receive second user input associated with user-generated content, preferably said user-generated content being aligned with said two-dimensional image. A second user input listener is configured to receive a third user input from the user input part to attach said user-generated content to said target object. A tracker part is configured to determine three-dimensional pose information associated with said selected target object on the basis of at least an image of the target object from the digital imaging part. The graphics engine is further configured to render said user-generated content for display in the display output on the basis of said three-dimensional pose information, such that the user-generated content is displayed in perspective with the target object, said user-generated content being rendered matching the three-dimensional pose of said selected target object in the display output.
A graphical user interface for enabling the creation of user-generated content on a user device having a digital imaging part, a display output and a user input part is disclosed. The graphical user interface comprises three (display) states, sketched as a state machine after this paragraph. A first display state comprises a first user input listener configured to receive a first user input from the user input part to select a target object displayed in said display output. A second display state, having a first transition from the first state in response to receiving the first user input, comprises a two-dimensional image of at least part of the target object, a second user input listener being configured to receive second user input associated with user-generated content, said user-generated content being preferably aligned with said two-dimensional image, and a third user input listener to receive a third user input from the user input part to attach said user-generated content to said target object. A third display state, having a second transition from the second state in response to receiving the third user input, comprises said user-generated content for display in the display output, said user-generated content being rendered on the basis of said three-dimensional pose information such that the user-generated content is displayed in perspective with the target object and matching the three-dimensional pose of said selected target object in the display output, said three-dimensional pose determined by a tracker part of the augmented reality client.
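The three display states and their two transitions form a small state machine, sketched below with invented state names:

    from enum import Enum, auto

    class DisplayState(Enum):
        SELECT_TARGET = auto()   # first state: waiting for target selection
        AUTHOR_CONTENT = auto()  # second state: 2D image shown, user authors content
        VIEW_IN_AR = auto()      # third state: content rendered in perspective

    # Transitions driven by the first and third user inputs described above.
    TRANSITIONS = {
        (DisplayState.SELECT_TARGET, "first_input"): DisplayState.AUTHOR_CONTENT,
        (DisplayState.AUTHOR_CONTENT, "third_input"): DisplayState.VIEW_IN_AR,
    }

    def next_state(state, user_input):
        return TRANSITIONS.get((state, user_input), state)

    assert next_state(DisplayState.SELECT_TARGET, "first_input") is DisplayState.AUTHOR_CONTENT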
A user device having an augmented reality client (as disclosed herein), configured to enable creation of user-generated content, said user device having a digital imaging part, a display output and a user input part, is disclosed.
`
The disclosure will further be illustrated with reference to the attached drawings, which schematically show embodiments according to the disclosure. It will be understood that the disclosure is not in any way restricted to these specific embodiments.
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
Aspects of the invention will be explained in greater detail by reference to exemplary embodiments shown in the drawings, in which:

FIG. 1 shows an illustrative system and data structure for enabling creation of augmented reality content accor