`(ENGLISH TRANSLATION)
`
`
`
`
`
`1
`
`Petition for Inter Partes Review of
`U.S. Pat. No. 7,477,284
`IPR2013‐00327
`EXHIBIT
`Sony‐
`
`
`
`
`March 28, 2013
`
`Certification
`
`Park IP Translations
`
I, Christopher Girsch, hereby declare:

I possess advanced knowledge of the Japanese and English languages. The attached
translation is, to the best of my knowledge and belief, a true and accurate translation from
Japanese into English of the VRSJ Research Report.
`
`I declare under penalty of perjury that the foregoing is true and
`correct.
`
`Christopher Girsch
`
`Park Case # 38124:
`
134 W. 29th Street, 5th Floor • New York, N.Y. 10001
Phone: 212-581-8870 • Fax: 212-581-5577
`
`2
`
`
`
`VP Gaku Kenpo Vol. 2, No. 1
`ISSN 1343-0572
`
`- VRSJ Research Report -
`Virtual Reality Society of Japan Research Report
`
`
`
`SIG-CyberSpace
`Virtual City Research Group
`
`
`
`November 27, 1997
`
`
`
`Virtual Reality Society of Japan
`
`
`
`
`
`
`
`
`3
`
`
`
`Table of Contents of the Virtual Reality Society of Japan Research Report
`CONTENTS
`
`
`
`[Virtual City Research Group]
`[SIG-CyberSpace]
`
`
`
`
`November 27, 1997 (Thursday)
`
`
`
`VCR 97-11 A Virtual Park on Top of Spline - Bicycle Media Park –
`TAKAHASHI Katsuhide, Eric Young-Sang SHIM, MIYAUCHI Nobuhito, SAEKI Toshiaki,
`FUKUOKA Hisao (Mitsubishi Denki)
`
`VCR 97-12 Generation of Panoramic Stereo Images from Monocular Moving Images KAWAKITA
`Yasuhiro, HAMAGUCHI Yoshitaka, TSUKAMOTO Akitoshi, MIYAZAKI Toshihiko (Oki
`Denki)
`
`VCR 97-13 Video Workthrough for Positioning Live Video in a CG Space
`KIHARA Tamio, NISHIMURA Tsuyoshi, NAKAKURA Kazuaki (NTT)
`
`VCR 97-14 Methods for Expressing Realistic Action in Virtual Space
`HONDA Shinkuro, KIMURA Shoryo, OZAWA Ryuji, OTA Kenji, OKADA Ken’ichi,
`MATSUSHITA Yutaka (Keio University)
`
`VCR 97-15 Examination of the Registration of Information Icons in 3-Dimensional Virtual
`Space
`INOUE Masayuki, KIYOSUE Yasuyuki (NTT)
`
`
`
`4
`
`
`
`Virtual City Research Group VCR 97-11
`November 27, 1997
`
`A Virtual Park on Top of Spline - Bicycle Media Park -
`TAKAHASHI Katsuhide, Eric Young-Sang SHIM, MIYAUCHI Nobuhito, SAEKI Toshiaki,
`FUKUOKA Hisao
`
`Information Technology R&D Center, Mitsubishi Electric Corporation
`
`
`
`
`
`
`
We built the Bicycle Media Park, a virtual park in which multiple users can interact
through a network. The Bicycle Media Park is equipped with the facilities of an aquarium, a space
walk building, a Ferris wheel and a cycling racetrack inside a park that is 1.6 square kilometers, and
`multiple users who are distributed geographically can meet inside one park, share an experience
`with these facilities, and talk with one another by audio. This park was built with Spline, a
`software platform for distributed virtual environments (DVE) that we developed previously. We
`were able to confirm that it is possible to create distributed virtual environments flexibly and
`efficiently by employing Spline. In this essay, we describe the characteristics of the
`implementation of the service facilities inside Bicycle Media Park.
`
`
`
`
`
`
`
`5
`
`
`
`
`
`1. Introduction
`In recent years, research on distributed virtual environments (DVE) has been thriving [1],
[2], [3]. We built the Bicycle Media Park, a virtual park in which multiple users can interact
through a network. This park was implemented on Spline [4], [5], [6], a software platform for
`distributed virtual environments (DVE) that we developed previously. In this essay, we describe
`the characteristics of the implementation of the service facilities inside Bicycle Media Park.
`
`2. Bicycle Media Park
`
`In this section, we provide an overview of Bicycle Media Park and describe the service
`contents for each facility that is disposed inside the park.
`
`2.1 Overview
`
This is a system wherein multiple users roam with a bicycle-type input-output device
through a 1.6 square kilometer park. The park is composed of a rugged topography, and the weight
of the pedals of the bicycle-type input-output device changes according to its uphill and downhill
areas.
A user appears inside the park as an avatar (described below as the motorbike), which is
depicted as being mounted on a bicycle, and two users who meet one another can enjoy
`a chat with audio. Background noises, such as the chirping of birds, the water sounds of a pond,
`the fluttering of flags to express the wind or the rustling of the leaves of trees, elegant jazz
`sounds, etc., have been set for each place inside the park.
`
Service facilities that can be enjoyed by users, like an aquarium and a space walk
building, and user participation type service facilities, such as a Ferris wheel and bicycle
racetrack, have been set up inside the park.
`
`2.2 Facilities and Services inside the Park
`
`
`
` Aquarium
`At the huge water tank installed inside the park, users can enjoy the many fish swimming
`
`around inside it, as shown in Figure 1. In addition, when one approaches the water tank, the
`sound of bubbles can be heard.
`
`
`
`Figure 1 Fish swimming around the inside of the water tank
`
`6
`
`
`
`
`
` Space walk building
As shown in Figure 2, an outer space broader than the building's own capacity
stretches out inside the building. Users can observe the stars and planets, which revolve around a
`fixed star, and the astronauts from the observation platform, or walk in outer space.
`
`
`Figure 2 Interior of the space walk building
`
`
`
`
`
` Ferris wheel
`When the motorbike reaches the boarding spot, it automatically boards the passenger car
`
`of the Ferris wheel, and rotates along with the passenger car. Users can look out over the scenery
`inside the park from a rotating viewpoint. When the passenger car makes one round, the
motorbike automatically exits from the passenger car. Figure 3 shows the situation when another
motorbike is observing a motorbike that is riding on the Ferris wheel.
`
`
`
`Figure 3 A motorbike watching the Ferris wheel
`
`
`
` Cycling racetrack
`
`
`
`7
`
`
`
`This is a facility that was created with a general racetrack with a 400 meter bank as its
`
model. It is possible to race for one course with a pair of motorbikes; in addition, users can
engage in time trial races, and can also race against a robot bicycle driven by computer
simulation, for which three speeds can be chosen. An entry area for participating in these 5 kinds of races has
`been set up inside the racetrack, and a race is started automatically when a motorbike enters this
`area. Figure 4 shows a situation where a race with a robot bicycle has just started.
`
`
`
`
`
`Figure 4 Race with a robot bicycle
`
`3. Spline
`
Spline is a software platform on which DVEs are built, and is composed of distributed
world models that retain the information of the virtual space, a communications layer that
conducts communications between the world models, and an application support layer that
provides the Spline API. The DVE application program is implemented on top of the application
support layer (Figure 5).
`
`
[Figure: layer stack, top to bottom: Application; Application Support Layer (Spline API); World Model; World Model Communications Layer / Spline Internal Communications Layer (Spline Protocol Stack); Socket; TCP/UDP protocol stack; IP protocol stack]
`
`Figure 5 Configuration of Spline
`
`
`
`8
`
`
`
`
`
In Spline, one Spline session is composed of multiple processes with this
configuration.
`
`3.1 Distributed World Model
`
`Each application process that is participating in the same session under the Spline
`environment theoretically shares one world model. Each process acquires information from this
world model, and carries out interactions inside the virtual space. In many distributed virtual
environment systems [1], [2], [3], the world model is managed by a centralized server,
but in the case of Spline attention has been paid to scalability and flexible expandability, and a
distributed world model, in which each process maintains a replica of the world model, is
adopted.
`
In each process, the changes carried out on the respective world models are automatically
reflected in the world models possessed by the other processes by the Spline communications
layer, which has a reliable multicast mechanism. The information held by each world model is
not entirely the same at the same instant, and maintains a loose consistency. The world models
`are composed of the instances of the classes defined by Spline (Spline objects) and their parent-
`child relationships.
`
`3.2 Application Support Layer
`
The application support layer provides an API library so that the application can
access its own world model. Each Spline process can freely add new objects inside the world
`model with this API, and can make changes to the objects for which the process itself holds
`ownership. In addition, it is also possible to transfer the ownership of each object to another
`process.
`
`Spline provides two methods for expressing the movement of objects inside the 3-
`dimensional virtual space. The first method involves changing the position coordinates of the
object. Changes to the position coordinates in one's own world model are propagated to the
other world models. The other method is the function called smooth motion. In smooth motion,
`the pathway of the object and the time until it reaches the terminus are specified. This
`information is transmitted to the other world models, and it is possible to move the objects inside
`the respective world models while describing a smooth orbit along the specified pathway,
`without this involving message communications thereafter. In addition, even after the terminus is
`passed, it is possible to keep making the objects move based on that pathway.
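Spline's actual API is not reproduced in this paper, so the following is only an illustrative sketch of the smooth motion behavior described above; the class and method names are invented. A replica receives the waypoints and the time to the terminus once, then interpolates the position locally each frame, and keeps extrapolating along the final segment after the terminus is passed, with no further message traffic.

```python
class SmoothMotion:
    """Illustrative sketch (not Spline's real API): a pathway plus a
    duration is sent to each world model once; every replica then
    computes the position locally, without further messages."""

    def __init__(self, waypoints, duration):
        self.waypoints = waypoints  # list of (x, y, z) control points
        self.duration = duration    # seconds until the terminus is reached

    def position_at(self, t):
        """Position at time t by linear interpolation; past the terminus
        the object keeps moving along the final segment, as the paper
        describes."""
        segs = len(self.waypoints) - 1
        u = (t / self.duration) * segs
        i = min(int(u), segs - 1)   # clamp so t > duration extrapolates
        f = u - i
        (x0, y0, z0), (x1, y1, z1) = self.waypoints[i], self.waypoints[i + 1]
        return (x0 + f * (x1 - x0), y0 + f * (y1 - y0), z0 + f * (z1 - z0))
```

Every process holding a replica evaluates `position_at` against its own clock, which is one way the reduced communications volume discussed later can be understood.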
`
`4. Implementation of Services
`
`A description is now provided of the implementation and characteristics of the respective
`facilities of the space walk building, aquarium, Ferris wheel and bicycle racetrack inside the
`Bicycle Media Park.
`
`4.1 Flexible Configuration
`
A motorbike process is present for each user who operates a motorbike among the processes
comprising the Bicycle Media Park. In addition, the respective facilities and services of the space
`walk building, aquarium, Ferris wheel and bicycle racetrack are implemented as separate
`processes. The functions of the respective processes are shown below.
`
`9
`
`
`
`(1) Motorbike process
`This creates a motorbike for each user, carries out the input-output to the bicycle-type
`input-output devices, and displays a 3-dimensional virtual space on the screen.
`(2) Space walk building process
`This creates the observation platform, hallways, planets and astronauts inside the
`building, and imparts movement to the planets.
`(3) Aquarium process
`
`This creates the fish inside the water tank, and makes them move in circles.
`(4) Ferris wheel process
`This creates the Ferris wheel facilities, causes the passenger cars and rotating plate of the
`Ferris wheel to rotate, and causes the motorbike to enter or exit a passenger car of the
`Ferris wheel.
`(5) Bicycle racetrack process
`This creates the racetrack, and controls the motorbikes and robot bicycles in each race.
`
`
`
The respective processes are composed in such a manner that they each add facilities and
services to the virtual park, so the configuration of the facilities inside the park can be changed
flexibly. If a facility is not used, its process simply need not be booted up. In addition, in the event
that new facilities and services are provided to the Bicycle Media Park, these can be realized by
adding processes that manage the facilities and provide the services.
`
`4.2 Smooth Motion
`
`The fish in the aquarium, the planets inside the space walk building and the passenger
`cars and rotating plate of the Ferris wheel repeat fixed motions. These motions have been
`realized as the rotating movement that employs Spline’s smooth motion.
`
`
`
` Rotation and orbit of planets
`In the space walk building, one fixed star and three planets have been respectively
`
`arranged therein, and these are rotated on their own axes. At the same time, the three planets are
`made to orbit with the fixed star as the center. As shown in Figure 6, the orbiting of the planets is
`realized by setting the object of the fixed star as the parent object of the three planet objects. The
planets, which are the children, are disposed at positions offset from the position of the fixed
star by the orbital radius. An orbital movement is imparted to the
three planet objects due to the fact that the fixed star object rotates on its own axis.
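This parent-child trick can be illustrated with a minimal two-node scene graph; the node class, the radius, and the 2-D simplification are ours, not the paper's. Rotating the star node on its own axis carries its child planet around the orbit.

```python
import math

class Node:
    """Minimal 2-D scene-graph node: a local offset plus a local
    rotation. Parent-child nesting reproduces the paper's trick of
    orbiting planets by spinning the star object they hang from."""

    def __init__(self, offset=(0.0, 0.0), parent=None):
        self.offset = offset  # position relative to the parent
        self.angle = 0.0      # own-axis rotation in radians
        self.parent = parent

    def world_position(self):
        x, y = self.offset
        if self.parent is None:
            return (x, y)
        px, py = self.parent.world_position()
        c, s = math.cos(self.parent.angle), math.sin(self.parent.angle)
        # the parent's own-axis rotation carries its children around it
        return (px + c * x - s * y, py + s * x + c * y)

star = Node()                                  # fixed star at the origin
planet = Node(offset=(8.0, 0.0), parent=star)  # disposed 8 units out

star.angle = math.pi / 2   # a quarter turn of the star moves the
pos = planet.world_position()  # planet a quarter of the way around its orbit
```

No motion is ever applied to the planet object itself, matching the mechanism of Figure 6.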
`
`
`
`Figure 6 Rotation and orbit of planets
`
`
`10
`
`
`
`
`
` Passenger cars of the Ferris wheel
`Figure 7 shows the use of smooth motion for the Ferris wheel. The rotating plate of the
`
`Ferris wheel rotates in a clockwise direction, and the passenger cars of the Ferris wheel rotate in
a counter-clockwise direction. By setting the rotating plate and the passenger
cars to the same rotational speed, the passenger cars maintain a vertical orientation, like those
of actual Ferris wheels.
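A small numerical sketch of the counter-rotation (the radius is illustrative): a car's world orientation is the plate angle plus its own local angle, so rotating the car at equal speed in the opposite direction cancels the plate's rotation exactly and the car stays vertical.

```python
import math

def car_pose(plate_angle, radius=20.0):
    """Pose of a passenger car hung on a rotating plate that has turned
    by plate_angle (radians). Counter-rotating the car at the same
    speed keeps its world orientation fixed, as in Figure 7."""
    x = radius * math.cos(plate_angle)      # the car swings with the plate
    y = radius * math.sin(plate_angle)
    car_angle = -plate_angle                # same speed, opposite direction
    world_orientation = plate_angle + car_angle  # always zero
    return (x, y), world_orientation
```

Whatever the plate angle, the returned orientation is exactly zero, which is why riders inside the car always see an upright view of the park.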
`
`
`
`Figure 7 Rotation of the Ferris wheel
`
`
`
`
`
` Movement in circles of fish
`Figure 8 shows the circular swimming motion of the fish, which employs smooth motion.
`
`A rotating object (parent 2 object) that does not have visual data is created at the center of the
`orbit, and this is set as the parent of the parent 1 object. Moreover, a fish object is set as the child
`of that parent 1 object. By rotating the 2 parents, the double orbital movement overlaps, and the
`fish objects are caused to engage in a complicated circular motion.
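The double rotation can be written out directly; the radii and angles below are illustrative, not from the paper. Nesting two rotating frames adds their angles, so the fish traces an epicycle-like orbit around the tank center.

```python
import math

def fish_position(theta1, theta2, r1=6.0, r2=1.5):
    """World position of a fish hung two levels deep, as in Figure 8:
    an invisible parent-2 object at the tank center has turned by
    theta2, and parent 1 (r1 out from the center) has turned by
    theta1, carrying the fish at radius r2. Radii are illustrative."""
    inner = theta2 + theta1  # nested rotating frames add their angles
    x = r1 * math.cos(theta2) + r2 * math.cos(inner)
    y = r1 * math.sin(theta2) + r2 * math.sin(inner)
    return (x, y)
```

With unequal rotation rates for the two parents, the superposed circles give the complicated circular motion the paper describes, while each parent still uses only a simple smooth-motion rotation.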
`
`
`Figure 8 Movement in circles of fish objects
`
`
`
`11
`
`
`
Examples of the use of smooth motion for the three facilities have been shown above. In
the case of facilities that have objects that repeat a fixed movement, the implementation can
be done simply by employing smooth motion. In addition, a reduction in the communications
volume can also be cited as one of the advantages of smooth motion. The reason for this is that,
unlike the movement processing of ordinary objects, in the case of smooth motion the position
coordinates of the moving object are not updated between world models.
`
`4.3 Position monitoring and transfer of ownership
`
The entering and exiting of the passenger cars of the Ferris wheel by the motorbike and the
racetrack races are realized by position monitoring of the motorbike and the transfer of
ownership from the motorbike.
`
`
`
` Entering and exiting the passenger cars of the Ferris wheel
`The Ferris wheel process monitors the position of the motorbike, and if the motorbike is
`
`at the boarding spot for the Ferris wheel, it acquires ownership from the motorbike. The Ferris
`wheel process selects the passenger car in which the motorbike will ride, and sets the motorbike
`as the child of the object of the passenger car. The motorbike rotates together with the passenger
car. At the moment when the passenger car for which the motorbike has been set as the child has
made one round, the Ferris wheel process sets the position of the motorbike at the unloading
spot, and returns ownership to the motorbike.
`
`
`
` Bicycle race
`The bike racetrack process monitors the position of the motorbike, and if the motorbike is
`
at the entry area, it acquires ownership of the motorbikes that will participate in the race and of
the robot bicycles used in that race. Then, the bike racetrack process returns them to the control
of the respective processes by returning the ownership to the motorbikes and the robot bicycle,
and the race starts. In addition, the bike racetrack process determines the conclusion of each race
`and monitors whether the motorbikes and the robot bicycle are traveling in the right direction on
`the bank. These two functions that are performed in order to monitor the race are also realized by
`position monitoring of the motorbikes and the robot bicycle.
`
`The services of all of the facilities are realized due to the fact that the motorbike process
`and the processes for managing the facilities act in coordination, with the position monitoring
`and the function for transferring ownership as the media. The only processing that is required on
`the motorbike side is the processing for turning over ownership to the processes on the service
`side and making a request for the return of ownership. In this manner, user participation-type
`services can be easily provided by employing position monitoring and the function for
`transferring ownership.
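As a rough sketch of this coordination pattern (all class names, the one-dimensional trigger interval, and the ownership field are invented for illustration; Spline's real API differs), a facility process polls avatar positions, takes ownership of any self-owned avatar inside its trigger area, and returns ownership when the service ends:

```python
class Avatar:
    """An avatar that initially owns itself, as in the motorbike process."""
    def __init__(self, name, pos):
        self.name, self.pos, self.owner = name, pos, name

class FerrisWheel:
    """Facility-side process sketch: position monitoring plus ownership
    transfer, per section 4.3. The trigger interval is illustrative."""
    BOARDING = (10.0, 12.0)  # 1-D stand-in for the boarding spot

    def __init__(self):
        self.riding = []

    def monitor(self, avatars):
        lo, hi = self.BOARDING
        for a in avatars:
            if a.owner == a.name and lo <= a.pos <= hi:
                a.owner = "ferris-wheel"  # acquire ownership from the avatar
                self.riding.append(a)

    def finish_ride(self, unload_pos):
        for a in self.riding:
            a.pos = unload_pos  # set the avatar down at the unloading spot
            a.owner = a.name    # return ownership to the avatar
        self.riding.clear()
```

The avatar side needs no facility-specific logic beyond honoring the ownership hand-off, which is the point the paper makes about easily adding participation-type services.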
`
`5. Summary
`
`In this essay, we introduced the aquarium, space walk building, Ferris wheel and cycling
`racetrack, which are the service facilities of the Bicycle Media Park, and described Spline.
`Moreover, we demonstrated that the services provided by these can be flexibly expanded by
`employing Spline, and we related that user participation-type services can be easily realized by
employing the three functions provided by Spline, namely smooth motion, transfer of ownership
`and position monitoring.
`
`12
`
`
`
`At present, in the means for providing services based on position determination, the
`
`motorbikes that enter the entry area are compelled to participate in the bicycle race, and there is
`no room for choice by the users. As a means for allowing the users who operate the motorbikes
`to choose, a user interface wherein for example a menu for selecting the races in which to
`participate is displayed when that user is inside the bicycle racetrack is conceivable, but it is
`necessary to add a great deal of processing to the motorbike on the side where the services are
`provided.
`
`In addition, in the event that services such as schools and municipal offices, which will
`likely be included in virtual cities, are realized, even more processing will be needed, and adding
`processing to the avatar side in order to receive services is unrealistic. From the standpoint of
`development of the virtual city, it is necessary to consider a general purpose distribution means
`or use means for the service processing functions, from the side providing the services (facilities)
`to the side to which they are provided (avatar), and this is a future issue.
`
Based on the above-mentioned issues, we are planning to carry out an examination of
unified standards for avatars and classification of the service processing functions, through the
`addition of new avatars and the creation of a user interface for participation in the bicycle
`racetrack races.
`
`Bibliography
`
[1] Pak, “A VRML Application System That Creates a Virtual Space,” FUJITSU, Vol. 48, No. 2,
March 1997.
`
`
`
`
`
`[5] FUKUOKA, et al, “Spline, a Software Platform for Distributed Virtual Environments”,
`Mitsubishi Denki Giho, February 1997.
`
`[6] SATO et al, “Virtual Tradeshow: One Application System of Spline”, First Symposium of
`the Virtual City Research Group, VR Society, July 1997.
`
`
`
`13
`
`
`
`SIG-CyberSpace, VCR 97-12
`November 27, 1997
`
`
`Generation of Panoramic Stereo Images from Monocular Moving Images
`Yasuhiro Kawakita, Yoshitaka Hamaguchi, Akitoshi Tsukamoto, Toshihiko Miyazaki
`Kansai Laboratory Research & Development Group
`OKI Electric Industry Co., Ltd.
`
`This paper presents a technique for generating left-right panoramic images for binocular
`stereoscopic viewing from images from a single video camera, and a technique for stereoscopic
`display of panoramic images corresponding to the orientation of the user's line of sight by
`making use of parallax corresponding to depth. In these techniques, a video camera is attached to
`a tripod and rotated manually, and the optical flow between the continuously captured images is
`used to excise the vertical slit images and combine the images to generate left-right panoramic
`images. In addition, parallax corresponding to the distance to objects in the panoramic image
`from the video camera position is used for alignment in compositing the left-right panoramic
`images.
`
`
`
`
`14
`
`
`
`
`1. Introduction
`In recent years, a number of research papers have been presented on techniques of building
virtual spaces based on photographic images [1][2]. Panoramic images represent one of these
techniques [3][4]. In the case of a 360-degree panoramic image, a single panoramic image can be
generated from images in all directions with the photographing position at the center. However,
`most generation of panoramic images requires that the camera be rotated while maintaining a
`precise angular speed. This limitation exists because a slit width of fixed size is set in advance
`corresponding to the rotation angle speed of the camera, images are excised as vertical slits
`having that width, and are sequentially combined to generate the panoramic image.
`Furthermore, although a standard panoramic image enables one to see in all directions, it remains
`a 2-dimensional image with no sense of depth reproduced on a flat (cylinder) surface, and there
`is a problem with respect to the sense of realism.
`This paper presents a technique for generating panoramic images with few restrictions on the
`picture-taking time and capable of being viewed in binocular stereo. Our technique consists
`generally of the following steps. First, a single video camera is used to capture continuous
`images. The optical flow between frames of the captured images is detected, and the slit width is
`determined based on the optical flow detection results. By excising the slit images from the
`frame images and combining those images, the left and right panoramic images are generated. In
`addition, a description is provided of stereo viewing of the panoramic image using parallax that
corresponds to depth. The following sections describe the technique in detail, following the
sequence from generation through display.
`
`2. Image Capture
`A video camera is attached to a tripod, and the camera is rotated counterclockwise smoothly
`with the rotating axle of the tripod as the center to record images. At this time, the focal point of
`the video camera is set to a position a distance L from the rotating axle. In addition, there is no
`need to keep the rotation speed of the video camera constant, and the camera is rotated manually
`at an appropriate speed.
`
`
`
`
`Fig. 1: Image Capture Technique Using a Video Camera
`
`
`3. Detecting the Optical Flow
Frame images are captured from the recorded video images, and the optical flow between each
captured frame image (320x240 pixels; see Fig. 2(a)) and the next frame image is calculated.
Fig. 2(b) presents the result of detecting optical flow with the next frame image using
template matching. The size of the flow vector at each representative point in the template used
`15
`
`
`
`for matching is expressed as the length of a line segment with the representative point as the
`starting point. The search range for template matching extends only horizontally, since the
`movement of the camera is a rotating motion around a fixed axis. As a result, only horizontal line
`segments appear in the detection results. Also, areas where no optical flow line segments are
`present are uncertain areas where the difference between the value of the matching result for the
`1st candidate and the value of the matching result for the 2nd candidate was not at least a certain
`set value.
`
`
(a) Frame Image (b) Optical Flow (c) Slit Image
`
`Fig. 2: Optical Flow Detection Result and Slit Image
`
`
`4. Excising the Slit Images
`Once the optical flow is determined between all the continuous frame images, the size of the
`flow vector is used to set the slit width, and the respective right eye and left eye slit images are
excised from the frame images. Here, the flow vector size can be used as the slit width because
template matching performs matching in pixel units and because the search range is limited to
the horizontal direction.
`First, we derive the positions at which the left eye and right eye slit images are to be excised. In
`Fig. 3, the left and right camera positions correspond to the left and right eye positions when
`viewing a panoramic image stereoscopically. If the video camera rotation radius is represented as
`L(m), the sight line (eye) interval as 2e(m), and the video camera rotation angle as 2θ(deg), the
`following relationship obtains.
`
`L sinθ = e
`For example if L = 0.17(m) and e = 0.03(m), the result is θ = 10.2(deg). However, based on Fig.
`4, if the distance from the center of the screen to the slit position is represented as x(pixels), the
`window width as w(pixels), and the horizontal field of vision angle as fovx(deg), the result is as
`follows:
`
`w/2 : x = tan(fovx/2) : tanθ
`For example, if w = 320(pixels), θ = 10.2(deg), and fovx = 31(deg), the result is x = 104(pixels).
`Consequently, an image with a width corresponding to the flow vector is excised from a position
`104 (pixels) horizontally from the center position of the image, thus generating the slit image.
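The two worked examples above can be reproduced numerically; the helper function names are ours, and the formulas are exactly the two relationships just given.

```python
import math

def slit_angle_deg(L, e):
    """Rotation half-angle theta from L sin(theta) = e, in degrees."""
    return math.degrees(math.asin(e / L))

def slit_offset_px(w, theta_deg, fovx_deg):
    """Slit offset x from screen center, from the proportion
    w/2 : x = tan(fovx/2) : tan(theta)."""
    return (w / 2) * math.tan(math.radians(theta_deg)) / \
           math.tan(math.radians(fovx_deg / 2))

theta = slit_angle_deg(0.17, 0.03)   # about 10.2 degrees, as in the text
x = slit_offset_px(320, 10.2, 31.0)  # about 104 pixels, as in the text
```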
`
`
`
`16
`
`
`
`
`
`
`Fig. 3: Relationship Between Rotation Radius and Rotation Angle
`Fig. 4: Relationship Between Rotation Angle and Slit Position
`
`
Next, we determine the width of the slit image to be excised. The flow vector is detected only
at the representative points on the line at the slit excision position. We used
`the mode value for the size of the flow vector as the slit width. Using the mode value signifies
`that an object at the position corresponding to the size of the flow vector occupies most of the
`area within the slit, and attention will be focused on that object when the panoramic image is
`generated. Consequently, we take a majority vote of the valid flow vector values at the
`previously derived slit position (items representing horizontal line segments in Fig. 2(b)), and the
most frequently detected value is set as the slit width sw. The slit image that is ultimately excised
`is a slit image of width sw from the position w/2+x for the left eye and w/2-x-sw for the right
`eye. Fig. 2(c) shows the excised left eye slit image.
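The majority vote over valid flow vectors can be sketched as follows; representing the uncertain areas (where no flow line segment was detected) as None is our assumption about the data layout.

```python
from collections import Counter

def slit_width(flow_vectors):
    """Slit width sw = mode of the valid horizontal flow magnitudes at
    the slit position: a majority vote over the line segments of
    Fig. 2(b). None marks the uncertain areas, which are skipped."""
    valid = [abs(v) for v in flow_vectors if v is not None]
    return Counter(valid).most_common(1)[0][0]
```

Taking the mode rather than the mean keeps the slit width locked to whichever object dominates the slit area, as the text explains.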
`
`5. Compositing the Slit Images
`All of the slit images excised from the frame images are continuously composited in sequence.
`Fig. 5 shows left and right panoramic images created from images taken of an elevator hallway.
`The upper row is the left eye image, and the lower row is the right eye image.
`
`
`
`
`
`
`Fig. 5: Panoramic Images of an Elevator Hallway
`
`
`6. Stereoscopic Viewing Using Depth Parallax Angle
When the left and right panoramic images obtained using the foregoing procedure are viewed
in binocular stereo, a stereoscopic view is possible that faithfully reproduces the
positional relationships, if the image was captured from a sufficient distance. However, if the
`
`17
`
`
`
`camera was placed at a comparatively close distance, or if the distance from the camera to the
`objects varies greatly, the positions representing the left and right panoramic images must be
`adjusted. The reason why this process is necessary can be explained as follows. Fig. 6(a) shows
`the appearance of the imaging of objects A and B at different distances while rotating on a
`circle perimeter R with the axis of rotation O at the center. At this time, object A is recorded for
`the right eye and the left eye, respectively, at positions C and D in the panoramic image.
`Similarly, object B is recorded at positions E and F in the panoramic image. However, if the
panoramic image is viewed stereoscopically as-is, since θ is the parallax angle calculated from
parallel light beams, with a captured image in which the objects were placed at a close distance
object A and object B are displayed as if infinitely distant. As a result, objects appear to overlap
or exhibit other faults, making faithful stereoscopic viewing impossible.
`
`
`Fig. 6: Explanation of Depth Parallax Angle
`
`
`
`
`
`Because of this, it is necessary to make corrections so that displayed objects will appear
`to be at the correct position by using parallax angles corresponding to the distances to object
`A and object B, and dynamically changing the panoramic image display position.
`As shown in Fig. 6(b), when ∠C’OD’ and ∠E’OF’ are equalized to 2θ, the relationship
`between θ' and θ'' in the figure is θ<θ’’<θ’. In other words, as the position of the object draws
`nearer in relation to the parallax angle θ calculated based on parallel light beams, the parallax
`angle increases, and object A and object B must be seen at parallax angles θ' and θ''.
`Consequently, rotating the left and right panoramic images by only the parallax angle θ' or θ''
`with the respective viewpoint positions at the center enables accurate stereoscopic viewing of
`object A and object B. In this paper, since the parallax is based on the distance to an object
`(depth) captured by a camera at this parallax angle, we refer to it as the depth parallax angle.
`Fig. 7 presents an example of rotating the left and right panoramic images for display
`according to the depth parallax angle. In Fig. 6(a), assuming that the camera began rotation for
`capture from position G (this position is referred to as the reference position for the panoramic
`image), in the left eye panoramic image, object A is recorded to the left by an angle of only θ
`from the reference position (position D). For its part, in the right eye panoramic image, object A
`is recorded to the right by an angle of only θ from the reference position (position C). In
`addition, by rotating the left eye panoramic image clockwise by θ' and the right eye panoramic
`image counterclockwise by θ' with the viewpoint as the center, the recorded object A can be
`viewed as located at the correct position.
`
`18
`
`
`
`
`Fig. 7: Rotation of the Left and Right Panoramic Images According to the Depth Parallax Angle
`
`
`
`
`
`Next we show the relationship between the distance to the object and the depth parallax angle.
In Fig. 6(b), if the distance from the rotation axis to object A is AO = a, the rotation radius C'O
= L, the depth parallax angle in relation to parallel light beams ∠C'OA = θ, and the depth
parallax angle in relation to object A ∠HC'A = θ', the following relationship obtains.
a = L sinθ’ / sin(θ’ – θ)
When this equation is modified, it becomes:
tanθ’ = sinθ / (cosθ – L/a)
`thus yielding the depth parallax angle θ' corresponding to the distance to the object. By recording
`this depth parallax angle in correspondence to the panoramic image and applying it to the left
`and right panoramic images, alignment of the left and right panoramic images can be
`dynamically controlled, making stereoscopic viewing of the panoramic image possible.
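The two forms of the relationship can be cross-checked numerically; the function names are ours, and the formulas are the two equations just given.

```python
import math

def depth_parallax_angle(a, L, theta):
    """Depth parallax angle theta' for an object at distance a from the
    rotation axis, from tan(theta') = sin(theta) / (cos(theta) - L/a).
    All angles in radians."""
    return math.atan2(math.sin(theta), math.cos(theta) - L / a)

def object_distance(theta_p, L, theta):
    """Inverse check: a = L sin(theta') / sin(theta' - theta)."""
    return L * math.sin(theta_p) / math.sin(theta_p - theta)

theta = math.radians(10.2)                        # parallel-beam parallax angle
theta_p = depth_parallax_angle(2.0, 0.17, theta)  # larger than theta: the
                                                  # object is at a finite distance
```

As the distance a grows, θ' falls back toward θ, which matches the statement that distant scenes can be viewed faithfully without this correction.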
`
`7. Field Test
`A field test was conducted applying these techniques to panoramic images of an elevator
`hallway in which the distance to objects varies greatly. First, while actually looking at the
`panoramic images, alignment was performed in several sight line directions so faithful
`stereoscopic viewing would be possible, and the depth parallax angle in each sight line direction
`was recorded. In addition, the depth parallax angle obtained was continuously varied using linear
`interpolation, the depth parallax angle for all sight line directions was calculated, and
`correspondences were made with the panoramic images. As a result of stereoscopic viewing with
`alignment control of the panoramic images using the calculated depth parallax angles with 10
`research personnel, t