The Jumper Metaphor: An Effective Navigation Technique
for Immersive Display Setups

BOLTE, Benjamin, STEINICKE, Frank, BRUDER, Gerd
Visualization and Computer Graphics Research Group, University of Münster, Germany
{b.bolte|fsteini|gerd.bruder}@uni-muenster.de
`
`
`
Figure 1. Illustration of the jumper metaphor showing a jump sequence: (a) user's view of a target location before the jump, (b) frame during the jump, and (c) after the jump at the target location.
`
`
Abstract—Recently, several novel user interfaces have been introduced which allow users to actively move their body in order to interact with a displayed immersive virtual environment (IVE). However, in most situations the virtual world in which the user is immersed represents a space that is considerably larger than the available interaction space within which she can move. To overcome this limitation, users traditionally travel indirectly by exploiting input devices, with which the viewpoint in the environment can be changed without actually requiring a large physical movement. However, for several applications it appears reasonable to consider more natural methods for traveling through IVEs. In this paper, we introduce the jumper metaphor, which combines natural direct walking with magical locomotion through large-scale IVEs. The key characteristic of the jumper metaphor is that it supports real walking for short distances, whereas if the user intends to travel a larger distance, the metaphor predicts the planned target location in the virtual world and then lets the user virtually jump to that particular target. We evaluated this method in an IVE and found that the jumper metaphor has the potential to allow more effective exploration in comparison to real walking, with only minor effects on space cognition and disorientation.

Keywords—3D user interface; navigation; locomotion; immersive display environments
`
I. INTRODUCTION AND BACKGROUND
`
During the last few years, virtual reality (VR) display technologies such as head-mounted displays (HMDs), stereoscopic projection screens and autostereoscopic displays have become more and more popular for applications in the fields of entertainment, serious games and edutainment. These technologies can provide users with an unchallenged spatial impression of an immersive virtual environment (IVE) as well as an understanding of distances between objects or landmarks in the environment [28]. However, immersive displays are often combined with interaction devices, e.g., mouse, keyboard, joystick or gamepads, for providing (often unnatural) inputs to generate self-motion.
`
More and more research groups are investigating natural, multimodal methods of generating self-motion. For example, in [23] Ware and Osborne compared three different metaphors for modifying the viewpoint in desktop-based environments using six degrees of freedom (DOF) input devices. Because each considered metaphor had various advantages and disadvantages, Ware and Osborne suggested that the choice of method should depend on the requirements of the interaction task. In the context of immersive virtual environments, world-in-miniature (WIM) metaphors are often used as an indirect metaphor for navigation [18, 22, 13, 27]. With these approaches the virtual representation of the user is directly manipulated within a hand-held miniature or map of the IVE. These manipulations are applied directly to the user's point of view in the IVE.
`
Several novel devices and user interfaces have been developed over the past years which allow capturing the user's body movements in front of a display and mapping the detected movements to camera motions in a virtual world. These devices include motion trackers, such as the Nintendo Wii, but also video- and depth-based solutions such as the Microsoft Kinect or Sony EyeToy. Corresponding interaction metaphors are often based on navigation techniques that were introduced years ago in the context of 3D user interface and virtual reality research [4]. For instance, in [4] Bowman et al. conducted a series of experiments in which they compared two different metaphors to specify the direction of travel: (i) gaze-directed and (ii) hand-directed. Regarding the effects of different transition techniques on spatial awareness, they found that an abrupt change of view is particularly disorienting and suggested using smooth transitions.
`
With such tracking and immersive display devices, users may navigate through IVEs by real walking in a limited interaction space [12, 17, 29]. An obvious approach to support real walking in such a setup is to map a one meter movement of the user in the real world to a one meter movement of the camera in the virtual environment. This approach has the drawback that the user's movements are restricted by the limited range of the tracking sensors and a rather small workspace in the real world. Thus, concepts for virtual locomotion methods are needed that enable walking over large distances in the virtual world while remaining within a relatively small space in the real world. Various prototypes of advanced VR-based interface devices have been developed to prevent a displacement in the real world. These devices include torus-shaped omni-directional treadmills [3], motion foot pads [8], robot tiles [9], motion carpets [15] and stroller-based walking platforms [19]. Although these hardware systems represent enormous technological achievements, they are still very expensive and will not be generally accessible in the foreseeable future.
`
In the context of video games, some simple devices such as Nintendo's Power Pad and Balance Board have been proposed, which support walking-in-place (WIP) [12, 2, 16, 24] and leaning techniques [11, 6], and thus enable simplified locomotion. These body-centric navigation methods allow hands-free navigation; e.g., LaViola et al. developed several body- and foot-based metaphors for navigation in IVEs, including a leaning technique with which users could travel short and medium distances, whereas larger distances could be traveled with a floor-based WIM [11]. However, real walking has been shown to be a more presence-enhancing locomotion technique than other navigation metaphors [20]. While walking in the real world, sensory information such as vestibular, proprioceptive, and efferent copy signals as well as visual information create consistent multi-sensory cues that indicate one's own motion, i.e., acceleration, speed and direction of travel [14]. In this context walking is the most basic and intuitive way of moving. Keeping such an active and dynamic ability to navigate through large-scale environments is of great interest [5]; i.e., several approaches suggest supporting real walking but simply scaling translation motions. For instance, Williams et al. have exploited uniform tracker gains [25] and used mechanisms which reset the position of the participant within an IVE [26]. Using these approaches, users could travel through moderately large virtual spaces by directly walking within a smaller real space. Interrante et al. proposed the seven-league-boots metaphor, in which translational motions are scaled only in the user's main walking direction, which avoids discomfort due to lateral bumping [7].
`
In this paper we introduce and discuss the jumper metaphor, a new metaphor for hands-free traveling through moderately large IVEs that is based on real walking, but in which the mapping between the user's actual movement in the real world and her movement in the virtual world is manipulated.
`
`
II. THE JUMPER METAPHOR
In this section we describe the jumper metaphor for effective exploration of IVEs. The main idea of this metaphor is to combine natural interaction in the real world with the magical world of VR and games to provide an effective, but natural navigation technique. For the exploration of objects in a small range, we simply use real walking such that the user can walk around objects, or use small head movements to explore the environment while perceiving motion parallax and occlusion effects similar to the real world. To travel over large distances, the user is able to specify the travel destination using her viewing direction, and then can initiate a jump, which starts a smooth viewpoint animation that transfers the user to the corresponding target position. In the following, we assume that we can track the user's head position as well as orientation, for example, by an optical tracking system or Kinect sensors, and map it to a corresponding virtual camera. The jumper metaphor is composed of the three steps described in the following subsections.
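
To make the composition of these three steps concrete, the following minimal sketch (in Python; the phase names and update function are illustrative and ours, not the paper's) shows how the phases could be sequenced in a per-frame update loop:

    from enum import Enum, auto

    class Phase(Enum):
        # Illustrative phases of the jumper metaphor (names are ours).
        WALK = auto()   # real walking with one-to-one mapping, no target yet
        ARMED = auto()  # a jump target is predicted and displayed (Sec. II.A)
        JUMP = auto()   # viewpoint animation towards the target (Sec. II.C)

    def next_phase(phase, has_target, accel_exceeded, animation_done):
        # One step of the three-phase flow described in Sections II.A-II.C.
        if phase is Phase.WALK and has_target:
            return Phase.ARMED   # gaze dwell selected a target
        if phase is Phase.ARMED and not has_target:
            return Phase.WALK    # gaze left the target circle; target hidden
        if phase is Phase.ARMED and accel_exceeded:
            return Phase.JUMP    # walking acceleration exceeded a_t (Sec. II.B)
        if phase is Phase.JUMP and animation_done:
            return Phase.WALK    # arrived at the end point p_e
        return phase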
`
`
`
A. Jump Target Prediction
At first, the intended target position for the jump has to be identified. The target position can be specified in two ways, i.e., explicitly or implicitly.
`
As mentioned in Section I, we wanted to avoid explicit target selection via additional input devices for the navigation task. Therefore, we determine the target position p_t ∈ ℝ³ of the jump implicitly by calculating the first intersection point with the scene geometry of the ray extending from the user's virtual head position p_u ∈ ℝ³ (i.e., the position of the virtual camera) along the user's viewing direction d_view ∈ ℝ³. Hence, if the user wants to specify the target position for a jump, she simply has to look at that position for t_gaze ∈ ℝ milliseconds.
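
As a concrete illustration of this ray cast, the following sketch computes the first intersection of the viewing ray with the scene using the Möller-Trumbore ray/triangle test; representing the scene as a plain list of triangles is our simplification, not the paper's renderer:

    import numpy as np

    def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
        # Moeller-Trumbore ray/triangle intersection; returns the hit
        # distance t along the ray, or None if the triangle is missed.
        e1, e2 = v1 - v0, v2 - v0
        h = np.cross(direction, e2)
        a = np.dot(e1, h)
        if abs(a) < eps:               # ray is parallel to the triangle
            return None
        f = 1.0 / a
        s = origin - v0
        u = f * np.dot(s, h)
        if u < 0.0 or u > 1.0:
            return None
        q = np.cross(s, e1)
        v = f * np.dot(direction, q)
        if v < 0.0 or u + v > 1.0:
            return None
        t = f * np.dot(e2, q)
        return t if t > eps else None  # only accept hits in front of the user

    def jump_target(p_u, d_view, triangles):
        # First intersection p_t of the viewing ray with the scene geometry.
        hits = [t for tri in triangles
                if (t := ray_triangle(p_u, d_view, *tri)) is not None]
        return p_u + min(hits) * d_view if hits else None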
`
In head-tracked environments, the user usually moves the head slightly the entire time, which complicates the specification of a jump target if the user has to look at one specific position over time. Therefore, we predict the jump target based on all focus points within t_gaze milliseconds and tolerate slight variations in the user's viewing direction d_view. To ease target selection for distant objects, we unproject all focus points into image space coordinates with the inverted projection and model-view matrix of the first focus point within the last t_gaze milliseconds. If all focus points in image space coordinates are within a circle with radius r_gaze ∈ ℝ pixels for t_gaze milliseconds, we use the projection of the center of all focus points as the target point p_t. As users can move effectively by real walking for short distances, a jump can only be initiated to positions that are at least 2 meters away from the user's current position.
`
In order to give the user corresponding visual feedback about the target position, we display a visual target projected to the target position according to the user's viewing direction d_view and the face normal vector n ∈ ℝ³ at the target position p_t (see Fig. 1(a)). The visual target grows constantly to its full size within t_grow ∈ ℝ milliseconds. If the user wants to choose a different target position, she simply has to focus on a point outside the displayed target area. As soon as one focus point is outside the circle with radius r_gaze pixels, the projected visual target disappears and the user can specify a new target.
`
According to experimental observations (cf. Section III), we use t_gaze = 500 milliseconds, r_gaze = 75 pixels and t_grow = 2000 milliseconds, i.e., when a user looks within a circle with radius 75 pixels in image space coordinates for 500 milliseconds, she has specified a jump target position, and the visual target, which is projected on this position, grows to its full size within 2000 milliseconds.
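
A minimal sketch of this dwell-based prediction is given below; the helper project_to_pixels() stands in for the reprojection with the stored matrices of the first focus point and is an assumed interface, not the paper's API:

    import numpy as np

    T_GAZE = 0.5   # dwell time in seconds (paper: t_gaze = 500 ms)
    R_GAZE = 75.0  # tolerance circle radius in pixels (paper: r_gaze = 75)

    class GazeTargetPredictor:
        def __init__(self):
            self.samples = []         # (timestamp, 3D focus point) pairs
            self.ref_matrices = None  # matrices of the first focus point

        def add_focus_point(self, when, focus_3d, matrices):
            if not self.samples:
                self.ref_matrices = matrices
            self.samples.append((when, np.asarray(focus_3d, float)))

        def predict(self, now, project_to_pixels):
            # Returns the image-space center of the focus points once the
            # gaze has stayed within the R_GAZE circle for T_GAZE seconds,
            # else None. Casting a scene ray through this pixel yields p_t.
            if not self.samples:
                return None
            pixels = np.array([project_to_pixels(p, self.ref_matrices)
                               for _, p in self.samples])
            center = pixels.mean(axis=0)
            if np.linalg.norm(pixels - center, axis=1).max() > R_GAZE:
                self.samples = []   # one focus point left the circle:
                return None         # hide the target, start a new dwell
            if now - self.samples[0][0] < T_GAZE:
                return None         # not enough dwell time yet
            return center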
`
`
`
B. Jump Activation
After the user has specified the jump target position p_t ∈ ℝ³, she can initiate the jump to that target by moving towards the target with a reasonable speed. Therefore, we define an acceleration threshold a_t ∈ ℝ in meters per square second, which the user has to exceed in order to initiate the jump (see Fig. 4). We use this threshold to avoid unintended jumps and to allow the user to explore near objects by real walking with accelerations below a_t.
`
Due to the jitter and noise of the tracking system as well as head bumping during real walking (cf. Section I), numerical inaccuracies can occur when using two consecutive tracked head positions for velocity and acceleration calculations. Therefore, we use the tracked head positions p_1, …, p_n ∈ ℝ³ during the last Δ ∈ ℝ seconds to determine the direction of travel d_travel ∈ ℝ³, the velocity v ∈ ℝ in meters per second and the acceleration a ∈ ℝ in meters per square second as follows:

    d_travel = p_n − p_1,
    v = ||d_travel|| / Δ,
    a = (v − v_p) / Δ,

where ||·|| : ℝ³ → ℝ is the Euclidean norm and v_p ∈ ℝ is the predicted velocity Δ seconds ago.

According to experimental observations (cf. Section III), we use Δ = 0.25 seconds and a_t = 1.5 meters per square second, i.e., a user can initiate a jump by real walking with an acceleration greater than 1.5 meters per square second, where the prediction of the acceleration is based on the tracking data received during the last 0.25 seconds.
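
A direct transcription of these formulas, assuming a small buffer of head positions sampled over the last Δ seconds, might look as follows:

    import numpy as np

    DELTA = 0.25  # history window (delta) in seconds (Section III)
    A_T = 1.5     # acceleration threshold a_t in m/s^2 (Section III)

    def jump_activation(positions, v_p):
        # positions: tracked head positions p_1..p_n (meters) covering the
        # last DELTA seconds; v_p: the velocity predicted DELTA seconds ago.
        d_travel = positions[-1] - positions[0]  # d_travel = p_n - p_1
        v = np.linalg.norm(d_travel) / DELTA     # v = ||d_travel|| / delta
        a = (v - v_p) / DELTA                    # a = (v - v_p) / delta
        return a > A_T, d_travel, v

A complete implementation would additionally verify that d_travel points towards the selected target before triggering the jump, since the jump is meant to be initiated by moving towards the target.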
`
`
`
C. Jump Animation
As mentioned above, the jump is initiated if the user has specified a target and exceeds the acceleration threshold by walking towards the target. The start point p_s = p_u ∈ ℝ³ of the animation is defined by the current position of the user p_u ∈ ℝ³. The end point p_e = p_t + λn ∈ ℝ³ is given by the target position p_t, which is adjusted by λ ∈ ℝ meters in the direction towards the user's current position along the face normal n ∈ ℝ³ at the target position in order to prevent jumping directly into an object (cf. Fig. 1(c)). In addition, we adjust the height of the end point p_e,y ∈ ℝ according to the start point height p_s,y ∈ ℝ and the difference between the terrain height at the start point h_s ∈ ℝ and at the end point h_e ∈ ℝ, i.e., p_e,y = p_s,y + (h_e − h_s).
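
The end point computation can be sketched as follows; terrain_height() is an assumed height-field lookup, and the y-up coordinate system (vertical axis at index 1) is our assumption:

    import numpy as np

    LAMBDA = 1.0  # offset along the face normal, in meters (Section III)

    def jump_end_point(p_t, n, p_s, terrain_height):
        # p_e = p_t + lambda * n, backing away from the hit surface so the
        # user does not jump directly into an object.
        p_e = np.asarray(p_t, float) + LAMBDA * np.asarray(n, float)
        h_s = terrain_height(p_s)      # terrain height at the start point
        h_e = terrain_height(p_e)      # terrain height at the end point
        p_e[1] = p_s[1] + (h_e - h_s)  # p_e,y = p_s,y + (h_e - h_s)
        return p_e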
The position of the virtual camera during the jump animation is calculated using an interpolation function

    p(t) = p_s + f(t) · (p_e − p_s),

where t ∈ [0,1] denotes the time progress of the animation, i.e., the animation starts at t = 0 and ends at t = 1. The duration of the animation t_anim ∈ ℝ in milliseconds depends on the distance d ∈ ℝ to the target position p_t and a scaling factor s_anim ∈ ℝ, i.e., t_anim = d · s_anim.
`
In order to avoid disorientation, we use a smooth ease-in/ease-out jump animation along the straight connection between the start and end points (cf. Section I). We achieve this by using a sigmoid logistic function for our interpolation function f. To support the notion of a "magic" metaphor, we use a motion blur effect at the border of the viewport which fades out towards the center (see Fig. 1(b)).
`
According to experimental observations (cf. Section III), we use s_anim = 180 and λ = 1 meter, i.e., the jump animation towards the end position, which is adjusted by 1 meter towards the user's current position along the face normal at the jump target position, lasts 1800 milliseconds for a jump distance of 10 meters.
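
Since the text does not spell out the exact sigmoid, the sketch below uses a logistic curve rescaled to reach 0 and 1 at the endpoints, with an assumed steepness k; only s_anim = 180 and the ease-in/ease-out shape come from the paper:

    import numpy as np

    S_ANIM = 180.0  # animation scaling: milliseconds per meter (Section II.C)

    def ease(t, k=12.0):
        # Ease-in/ease-out profile from a logistic sigmoid, rescaled so
        # that f(0) = 0 and f(1) = 1; the steepness k is an assumed value.
        logistic = lambda x: 1.0 / (1.0 + np.exp(-k * (x - 0.5)))
        return (logistic(t) - logistic(0.0)) / (logistic(1.0) - logistic(0.0))

    def camera_position(p_s, p_e, elapsed_ms):
        # Camera position along the straight line from p_s to p_e, with
        # t_anim = d * s_anim (e.g., 1800 ms for a 10 m jump).
        d = np.linalg.norm(p_e - p_s)
        t = min(elapsed_ms / (d * S_ANIM), 1.0)  # clamp progress to [0, 1]
        return p_s + ease(t) * (p_e - p_s)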
`
`
`
III. EVALUATION
`
In this section we describe the user study that we conducted in order to evaluate the proposed jumper metaphor navigation technique. In the evaluation we compared the jumper metaphor to real walking and to teleportation. The goal of the study was to evaluate whether the jumper metaphor can be used as an effective navigation technique in immersive game environments.
`
`
Figure 2. Example images of the evaluation: (a) simple game environment with a user's avatar (note: during the experiment, the user could not see her own avatar from a third-person view), and (b) subject walking through the laboratory space in order to navigate to randomly highlighted virtual objects.
`
A. Materials and Methods
We performed the experiments in a darkened 10 m × 7 m laboratory room. The subjects wore an HMD (ProView SR80, 1280x1024@60Hz, 80° diagonal field of view) for the visual stimulus presentation. An infrared LED was fixed on top of the HMD, which we tracked within the laboratory with an active optical tracking system (WorldViz PPT X8) that provides sub-millimeter precision and sub-centimeter accuracy at an update rate of 60Hz. The orientation of the HMD was tracked with a three DOF inertial orientation sensor (InterSense InertiaCube 3) with an update rate of 180Hz. For visual display, system control and logging we used an Intel computer with a Core i7 processor, 6GB of main memory and an NVIDIA Quadro FX 4800 graphics card.
`
At the beginning of the experiment, we introduced subjects to the metaphors in the three experimental conditions:

• Condition RW: In this condition users could navigate through the game environment only by real walking, for which we used a one-to-one mapping between real and virtual motions.

• Condition JM: In this condition users could navigate through the game environment by real walking and the jumper metaphor as explained in Section II (see Fig. 4).

• Condition TP: This condition was similar to condition JM, but we did not use an animation sequence for the jump; instead, the user was placed directly at the predicted target location.
`
We tested the conditions with a within-subject design. During the experiment the room was darkened, and a black curtain was fixed around the HMD in order to reduce the subject's perception of the real world. The visual stimulus consisted of a simple virtual island with five primitive virtual objects, i.e., a blue box, red torus, orange sphere, green pyramid and pink cylinder, rendered stereoscopically by Crytek's CryEngine 3 (see Fig. 2) as well as our own software. The game environment covered an enclosed space of 9 m × 7 m and fitted entirely into our laboratory space.
`
In the first trial, subjects started in the center of one side of the room, both in the IVE and in the laboratory. One object was then randomly highlighted, and subjects had to navigate to this object by one of the three techniques and touch it with their hand. After the subject successfully touched the object, another object was randomly highlighted and subjects had to move from the current location to the next target location, and so on. After each object had been reached three times (15 trials), the series was over and the next condition was tested. We measured the time the subject needed to fulfill the task. The sequence of conditions in which subjects participated was randomly chosen. We used a different randomly generated arrangement of the five virtual objects for each condition. The assignment of an arrangement to a particular condition was chosen randomly.
`
After each condition, the subjects had to draw the virtual primitive objects into a top-view grid and fill out a usability questionnaire for the used condition. The questionnaire contained questions concerning ease-of-use, ease-of-learn, effectiveness, and satisfaction. Furthermore, subjects had to judge the difficulty of the task. All questions allowed responses on a 5-point Likert scale.
`
The maps sketched by the subjects were compared with the original map and scored on a 5-point Likert scale by an experimental observer with regard to object position and dimension. The observer did not know which map belonged to which condition. In addition, subjects filled out Kennedy's simulator sickness questionnaire (SSQ) [10] immediately before and after the experiment, as well as the Slater-Usoh-Steed (SUS) presence questionnaire [21].
`
`
`
`
`
`
`
B. Participants
Nine male and two female subjects (ages 22 to 33, mean 26.18) participated in the study. All subjects were students of computer science, mathematics or psychology. All had normal or corrected-to-normal vision. Two had no game experience, one had some, and eight had much game experience. Five of the subjects had experience with walking in an HMD setup. All subjects were naïve to the experimental conditions. The total time per subject, including pre-questionnaires, instructions, training, experiments, breaks, and debriefing, was 45 minutes. Subjects were allowed to take breaks at any time.
`
Figure 3. Pooled results of the usability questionnaires for the real walking (RW, blue), jumper metaphor (JM, red) and teleportation (TP, green) conditions.
`
`
`
C. Results
Figure 3 shows the average Likert-scale scores as colored bars with standard errors (SE) for conditions RW (blue), JM (red) and TP (green). The x-axis shows the usability categories; the y-axis represents the 5-point Likert scale (0 corresponds to a negative and 4 to a positive rating of the metaphor).
`
We analyzed the mean Likert-scale scores for each usability category, i.e., ease-of-learn, ease-of-use, effectiveness and satisfaction, the map sketching task, and the time to fulfill the task for each of the conditions with a one-way ANOVA, and performed a post-hoc Tukey test in order to analyze statistical effects between the conditions. We found a significant main effect (F(2, 30) = 10.975; p < 0.01) of the condition on the category satisfaction. Post-hoc analysis showed that subjects were significantly more satisfied while using real walking (p < 0.01) or the jumper metaphor (p < 0.01) compared to the teleportation metaphor. However, there was no significant difference in satisfaction between real walking and the jumper metaphor. The average Likert-scale scores for the conditions were 3.18 (SE: 0.12) for RW, 2.91 (SE: 0.16) for JM, and 1.82 (SE: 0.32) for TP.
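
For readers who want to reproduce this style of analysis, a minimal sketch with SciPy and statsmodels is shown below; the score arrays are random placeholders, not the study's data:

    import numpy as np
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Placeholder 5-point Likert scores (0-4) for 11 subjects per condition;
    # substitute the actual per-subject ratings here.
    rng = np.random.default_rng(0)
    rw, jm, tp = (rng.integers(0, 5, 11) for _ in range(3))

    # One-way ANOVA across the three conditions, as reported in the text.
    f_stat, p_value = stats.f_oneway(rw, jm, tp)
    print(f"F = {f_stat:.3f}, p = {p_value:.3f}")

    # Post-hoc Tukey HSD test between all pairs of conditions.
    scores = np.concatenate([rw, jm, tp])
    groups = ["RW"] * 11 + ["JM"] * 11 + ["TP"] * 11
    print(pairwise_tukeyhsd(scores, groups))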
`
In addition, we found a significant main effect (F(2, 30) = 4.731, p < 0.05) of the condition on the subjects' task judgment. The Tukey test showed that subjects judged that they were significantly better at the task using real walking (p < 0.05) and the jumper metaphor (p < 0.05) compared to teleportation. We found the following average Likert-scale scores: 2.70 (SE: 0.31) for condition RW, 2.58 (SE: 0.25) for condition JM, and 1.67 (SE: 0.21) for condition TP.
`
Furthermore, we found a significant main effect (F(2, 30) = 4.500; p ≤ 0.02) of the condition on the map drawing task. In the post-hoc analysis, we found that subjects were significantly better at the map drawing task using real walking compared to the teleportation metaphor (p < 0.02). However, there was no significant difference in sketching the map after real walking compared to the jumper metaphor. The average Likert-scale scores for the conditions were 2.73 (SE: 0.24) for RW, 1.91 (SE: 0.21) for JM, and 1.64 (SE: 0.34) for TP. We found no significant main effect of the condition on the categories ease-of-learn, ease-of-use, effectiveness, or on the time to fulfill the task.

The subjects' mean estimation of their level of feeling present in the IVE averaged 4.38 on a scale from 1 to 7, where a higher score indicates greater presence. Kennedy's SSQ showed an average pre-experiment score of 1.27 and a post-experiment score of 3.0, which is a typical result for HMD setups [1].
`
`
`
D. Discussion
On average, subjects judged the jumper metaphor to be slightly (though not significantly) less easy to learn and use compared to real walking in an immersive setup (average Likert-scale score differences of 0.41 for ease-of-learn and 0.27 for ease-of-use). Since the jumper metaphor extends real walking by an additional way of locomotion, it is reasonable that additional learning and training is required to use the jumper metaphor compared to real walking.
`
Although we found no significant effect on effectiveness or on the time to fulfill the task, on average subjects were 15.5 seconds (11.81%) faster using the jumper metaphor compared to real walking, even for the short distances (<6m) during the experiment. The efficiency benefit of the jumper metaphor compared to real walking depends on the distance a player intends to travel, i.e., the larger the distance to travel, the larger the efficiency benefit of the jumper metaphor.
`
Subjects were significantly worse at map sketching after using teleportation compared to real walking, which is an indicator of disorientation during the experiment. Subjects were also slightly (but not significantly) worse at map sketching after using the jumper metaphor in comparison to real walking. However, subjects stated that they were not disoriented more often using the jumper metaphor compared to real walking (0.0 average Likert-scale score difference). We assume two possible reasons for this: (1) subjects were more focused on the metaphor itself than on the object remembering task when using the jumper metaphor, and (2) the duration of the jump animation was too short (as one subject also noted in a questionnaire comment).
`
The jump animation seemed to play an important role not only for disorientation, but also for user acceptance, because subjects evaluated the teleportation metaphor as significantly less satisfying and judged the traveling task as more difficult with it.
`
`
Figure 4. Illustration of a user triggering a jump during the evaluation: (a)-(b) user accelerating from a standing position until (c) the acceleration threshold is exceeded, triggering the jump, and (d) user decelerating.
`
Even for the short distances during the experiment, 54.55% of the subjects preferred the jumper metaphor over real walking and teleportation. For long distances, such as in typical game levels, 90.91% of the subjects preferred the jumper metaphor.
`
`
`
IV. CONCLUSION
`
In this paper we introduced the jumper metaphor, which combines the strengths of real walking with magical jump navigation for effective exploration of large-scale IVEs. We showed that a jump can be initiated based on real walking only, without the need for additional 3D input devices. The evaluation has shown that the jumper metaphor has the potential to allow a more effective exploration of IVEs in comparison to real walking, with only minor effects on space cognition and disorientation.
`
The jumper metaphor can easily be used with current immersive display setups and novel video game interfaces such as the Microsoft Kinect sensor. Due to the positive evaluation of this metaphor, we plan to expand the study to such video game interfaces. In addition, we will analyze different animation effects as well as camera trajectories in order to further improve the jumper metaphor and to increase efficiency and space cognition.
`
`
`
ACKNOWLEDGMENT
`
The authors of this work are supported by the Deutsche Forschungsgemeinschaft (DFG 29160962). Furthermore, we acknowledge the development of the CryEngine by Crytek GmbH. All rendering assets in the figures, except the visual target and the torus, were provided by Crytek's CryEngine.
`
`
`
`
REFERENCES
`
[1] B. Bolte, G. Bruder, F. Steinicke, K. H. Hinrichs, and M. Lappe, "Augmentation techniques for efficient exploration in head-mounted display environments," in Proceedings of the 17th ACM Symposium on Virtual Reality Software and Technology (VRST '10), pp. 11–18, ACM, 2010.

[2] L. Bouguila, F. Evequoz, M. Courant, and B. Hirsbrunner, "Walking-pad: a step-in-place locomotion interface for virtual environments," in Proceedings of the 6th International Conference on Multimodal Interfaces (ICMI '04), pp. 77–81, ACM, 2004.

[3] L. Bouguila, M. Sato, S. Hasegawa, H. Naoki, N. Matsumoto, A. Toyama, J. Ezzine, and D. Maghrebi, "A new step-in-place locomotion interface for virtual environment with large display system," in Proceedings of ACM SIGGRAPH 2002 Conference Abstracts and Applications (SIGGRAPH '02), p. 63, ACM, 2002.

[4] D. A. Bowman, D. Koller, and L. F. Hodges, "Travel in immersive virtual environments: an evaluation of viewpoint motion control techniques," in Proceedings of the 1997 Virtual Reality Annual International Symposium (VRAIS '97), vol. 7, pp. 45–52, IEEE, 1997.

[5] G. Bruder, F. Steinicke, and K. H. Hinrichs, "Arch-Explore: a natural user interface for immersive architectural walkthroughs," in Proceedings of the 2009 IEEE Symposium on 3D User Interfaces (3DUI '09), pp. 75–82, IEEE, 2009.

[6] G. de Haan, E. J. Griffith, and F. H. Post, "Using the Wii balance board as a low-cost VR interaction device," in Proceedings of the 2008 ACM Symposium on Virtual Reality Software and Technology (VRST '08), pp. 289–290, ACM, 2008.

[7] V. Interrante, B. Ries, and L. Anderson, "Seven league boots: a new metaphor for augmented locomotion through moderately large scale immersive virtual environments," in Proceedings of the 2007 IEEE Symposium on 3D User Interfaces (3DUI '07), pp. 167–170, IEEE, 2007.

[8] H. Iwata, H. Yano, H. Fukushima, and H. Noma, "CirculaFloor," IEEE Computer Graphics and Applications, vol. 25(1), pp. 64–67, 2005.

[9] H. Iwata, H. Yano, and H. Tomioka, "Powered shoes," in Proceedings of ACM SIGGRAPH Emerging Technologies (SIGGRAPH '06), (28), ACM, 2006.

[10] R. S. Kennedy, N. E. Lane, K. S. Berbaum, and M. G. Lilienthal, "Simulator sickness questionnaire: an enhanced method for quantifying simulator sickness," International Journal of Aviation Psychology, vol. 3(3), pp. 203–220, 1993.

[11] J. J. LaViola Jr., D. A. Feliz, D. F. Keefe, and R. C. Zeleznik, "Hands-free multi-scale navigation in virtual environments," in Proceedings of the 2001 Symposium on Interactive 3D Graphics (I3D '01), pp. 9–15, ACM, 2001.

[12] S. Razzaque, "Redirected Walking," PhD thesis, University of North Carolina, Chapel Hill, 2005.
`
`
[13] T. Ropinski, F. Steinicke, and K. H. Hinrichs, "A constrained road-based VR navigation technique for travelling in 3D city models," in Proceedings of the 2005 International Conference on Augmented Tele-Existence (ICAT '05), pp. 228–235, ACM, 2005.

[14] R. A. Ruddle and S. Lessels, "The benefits of using a walking interface to navigate virtual environments," ACM Transactions on Computer-Human Interaction, vol. 16(1), pp. 1–18, 2009.

[15] M. Schwaiger, T. Thümmel, and H. Ulbrich, "Cyberwalk: implementation of a ball bearing platform for humans," in Proceedings of the

[22] D. Valkov, F. Steinicke, G. Bruder, and K. H. Hinrichs, "Traveling in 3D virtual environments with foot gestures and a multi-touch enabled WIM," in Proceedings of Virtual Reality International Conference (VRIC '10), pp. 171–180, 2010.
