`
VOLUME 23 NUMBER 4 1992

SCRIPTA TECHNICA, INC.
A Subsidiary of John Wiley & Sons, Inc.

ENGLISH EDITION PUBLISHED JUNE 1992
`
`
`
`
`SYSTEMS AND COMPUTERS IN JAPAN
`(formerly Systems • Computers • Controls)
`
`A Translation of Denshi Joho Tsushin Gakkai Ronbunshi
`
DENSHI JOHO TSUSHIN GAKKAI RONBUNSHI (formerly Denshi Tsushin Gakkai Ronbunshi), the Transactions of the Institute of Electronics, Information and Communication Engineers of Japan, is published in seven monthly sections (Part A (ISSN 0913-5707), B-I (ISSN 0915-1877), B-II (ISSN 0915-1885), C-I (ISSN 0915-1893), C-II (ISSN 0915-1907), D-I (ISSN 0915-1915), and D-II (ISSN 0915-1923)), comprising 84 issues per year. Papers translated from Denshi Joho Tsushin Gakkai Ronbunshi are published in Systems and Computers in Japan (formerly Systems • Computers • Controls), as well as in Part 1: Communications, Part 2: Electronics, and Part 3: Fundamental Electronic Science, of Electronics and Communications in Japan. All four journals are published by Scripta Technica, Inc., a subsidiary of John Wiley & Sons, Inc., Publishers, with the permission and cooperation of the Institute of Electronics, Information and Communication Engineers of Japan. The Editorial Board of Denshi Joho Tsushin Gakkai Ronbunshi is headed by Toshio Sekiguchi. The Associate Editors are: Kunihiro Asada, Hideo Fujiwara, Hiroshi Harashima, Kazuo Horiuchi, Fumiaki Machihara, Kazuo Murano, Makoto Nagao, Michiharu Nakamura, Shoji Shinoda, Takuo Sugano, Shigeo Tsujii, and Haruo Yamaguchi. The consulting editor for Systems and Computers in Japan is C. M. Park, Catholic University of America, Washington, D.C.
`
SYSTEMS AND COMPUTERS IN JAPAN (ISSN 0882-1666) comprises 14 issues (one volume, monthly, except twice in June and
`November) per year, and publishes papers on computer architecture, large system design, advanced digital circuitry, data transmission,
`interface devices, data processing, programming techniques, automata, formal languages, and biomedical applications of computers.
`Papers are published in Systems and Computers in Japan approximately 18 weeks after the publication of their Japanese-language
`equivalents. The views expressed in these journals are exclusively those of the Japanese authors and editors. The Japanese authors are
`generally consulted regarding the translation of their papers, but are not responsible for the published English versions.
`
Copyright © 1992 by Scripta Technica, Inc. All rights reserved. Reproduction or translation of any part of this work beyond that permitted
`by Sections 107 and 108 of the U. S. Copyright Law without the permission of the copyright owner is unlawful. The code and copyright
`notice appearing at the bottom of the first page of an article in the journal indicates the copyright holder's consent that copies may be
`made for personal or internal use, or for personal or internal use of specific clients, on the condition that the copier pay for copying beyond
`that permitted by Sections 107 or 108 of the United States Copyright Law. The per-copy fee for each article appears after the dollar sign
`and is to be paid through the Copyright Clearance Center, Inc., 21 Congress Street, Salem, Massachusetts, 01970. (The fee for items
`published prior to 1992 is $7.50 per paper.) This consent does not extend to other kinds of copying, such as copying for general
`distribution, for advertising or promotional purposes, for creating new collective works or for resale. Such permission requests and other
`permission inquiries should be addressed to the publisher.
`
`SUBSCRIPTIONS (1992): Systems and Computers in Japan, $925 per volume in the U.S., $1,100 outside the U.S. Claims for
`undelivered copies will be accepted only after the following issue has been received. Please enclose a copy of the mailing label. Missing
`copies will be supplied when losses have been sustained in transit and where reserve stock permits. Please allow four weeks for processing
`a change of address.
`
`POSTMASTER: Send address changes to Systems and Computers in Japan, Subscription Department, John Wiley & Sons, Inc., 605
`Third Avenue, New York, New York 10158.
`
The contents of this journal are indexed or abstracted in Mathematical Reviews, Engineering Index, INSPEC, and the Zentralblatt für Mathematik/MATH database.
`
ADVERTISING: Inquiries concerning advertising should be forwarded to Roberta Fredericks, John Wiley & Sons, Inc., Advertising/Reprints Sales, 605 Third Ave., New York, NY 10158, (212) 850-6289. European advertising sales contact: Michael Levermore, Advertising Manager, John Wiley & Sons, Ltd., Baffins Lane, Chichester, Sussex PO19 1UD, England.
`
`Scripta Technica, Inc.
`A Subsidiary of John Wiley & Sons, Inc.
`
Second-class postage paid at New York and additional offices
`
`
`
`SYSTEMS AND COMPUTERS
`IN JAPAN
`
`1992
`
`VOLUME 23
`
`NUMBER 4
`
CONTENTS

Tsuyoshi Kawaguchi, Hiroshi Masuyama, and Tamotsu Maeda. An Asynchronous Parallel Branch-and-Bound Algorithm . . . 1

Masahiko Morita. A Neural Network Model of the Dynamics of a Short-Term Memory System in the Temporal Cortex . . . 14

Sadayuki Hongo, Mitsuo Kawato, Toshio Inui, and Sei Miyake. Contour Extraction by Local Parallel and Stochastic Algorithm Which Has Energy Learning Faculty . . . 26

Kouichirou Yamauchi, Takashi Jimbo, and Masayoshi Umeno. Self-Organizing Architecture for Learning Novel Patterns . . . 36

Hiroshi Ishiguro, Masashi Yamamoto, and Saburo Tsuji. Acquiring Omnidirectional Range Information . . . 47

Hiroshi Naruse, Atsushi Ide, Mitsuhiro Tateda, and Yoshihiko Nomura. Edge Feature Determination by Using Neural Networks Based on a Blurred Edge Model . . . 57

Yoshiyuki Saito and Shigeru Niinomi. Development of Software System Organized by Minimal Automata Realization Theorem for Gait Measuring . . . 69

Minoru Okada, Shigeki Yokoi, and Jun-ichiro Toriwaki. A Method of Digital Figure Decomposition Based on Distance Feature . . . 80

Yasuhiro Wada and Mitsuo Kawato. A New Information Criterion Combined with Cross-Validation Method to Estimate Generalization Capability . . . 92
`
`
`
Systems and Computers in Japan, Vol. 23, No. 4, 1992
Translated from Denshi Joho Tsushin Gakkai Ronbunshi, Vol. 74-D-II, No. 4, April 1991, pp. 500-508

Acquiring Omnidirectional Range Information

Hiroshi Ishiguro, Masashi Yamamoto, and Saburo Tsuji, Members

Faculty of Engineering Science, Osaka University, Toyonaka, Japan 560
`
`SUMMARY
`
This paper describes a method of obtaining a precise omnidirectional view and coarse ranges to objects by rotating a single camera. The omnidirectional view is obtained by arranging image data taken through a vertical slit on the image plane while the camera rotates around the vertical axis. The omnidirectional view contains precise azimuth information determined by the resolution of the camera rotation. The range is obtained from two omnidirectional images taken through two slits while the camera moves along a circular path. This range estimate contains errors due to the finite resolution of the camera rotation system, whereas a conventional binocular stereo method contains errors due to quantization of its images. The representation of an environment by the omnidirectional view and range information, the "panoramic representation," is useful for the vision sensor of a mobile robot.
`
`1. Introduction
`
The camera used as the vision sensor of a conventional mobile robot is mounted rigidly on the robot. To recognize an environment more flexibly, it is necessary to give the camera more active functions. To meet this requirement, a camera should be able to sense both the whole environment and specific parts of it.
`
Fixation of the gaze upon an object of interest is used in the paradigm of "Active Vision" [1, 2], a current topic in computer vision. In Active Vision, a camera gazes at a particular feature point in an environment while moving, and generates an environmental description with respect to that point.
`
Viewing the whole environment from an observation point is also important for a robot. A robot must build an accurate map when it works in an unknown environment. For this purpose, it is more efficient to use a wide view than to integrate partial views of the whole environment taken by a camera with a limited visual field. Since a conventional camera has a limited visual field, an extra facility must be added to the camera to obtain an omnidirectional view.
`
Yagi and Kawato [3] obtained an omnidirectional image by using a conic mirror, which forms an image of the environment in all directions, and by capturing this image with a camera viewing the cone from the top. Their method takes an omnidirectional image in real time and can analyze the image using the property that the optical flow of a point under linear camera motion forms a curve. However, the angular resolution of their method is rather limited (the minimum resolution is about 0.2° in an image of 512 × 512 pixels), and the resolution is coarse near the vertex of the cone, since the image of the entire environment is projected onto a single image plane.
`
Morita et al. [4] obtained a wide-view image by using a camera with a fisheye lens. The angular resolution of their method is also not very high, although they reconstruct a three-dimensional (3-D) structure by using the vanishing points of straight lines in the environment rather than by using the omnidirectional information directly.
`
ISSN 0882-1666/92/0004-0047$7.50/0 © 1992 Scripta Technica, Inc.
`
`
`
`Sarachik [5] has devised a method to obtain an
`omnidirectional image and the distance to a ceiling edge
`(a horizontal edge between a ceiling and a wall) by using
`two rotatable cameras on a robot. Her method can locate
`the robot in a room and estimate the camera direction.
`
In contrast to these passive methods, there are active methods that project an optical pattern onto objects to measure their distances. Blais et al. [12] constructed a compact stereo-based distance-measurement system by projecting light onto the environment. By rotating their system, omnidirectional range information can be obtained. A disadvantage of an active method is that its operating environment is restricted by physical conditions, especially those of illumination.
`
The method proposed in this paper is a passive one that projects no light pattern; it uses a camera rotating around a fixed vertical axis, similar to Sarachik's system, to obtain an omnidirectional image. However, unlike her method, which uses two cameras to obtain the distance information, the proposed method uses a single camera and obtains both the view and the range information with high angular accuracy over 360°. Let us call this camera-centered 2.5-D representation containing both the visual information and the range information the "panoramic representation." The method of Blais et al. [12] can also obtain both the omnidirectional view and range information. The difference of the proposed method from theirs, other than the light projection, is the cause of errors in azimuth measurements: the error in their method depends on the resolution of the image sensor, whereas that in the proposed method depends on the angular accuracy of the rotation of the camera around its axis (see section 2).
`
`Advantages of the panoramic representation are as
`follows:
`
`(A) Omnidirectional visual information has the
`following features:
`
(1) When an omnidirectional image is taken in a rectangular room, four vanishing points appear every 90°.
`
(2) When a camera moves on a straight line in a plane perpendicular to the rotation axis of the camera, two "foci of expansion" (FOE) of the optical flow of objects in the environment appear at an interval of 180° in the omnidirectional image.
`
`(3) The environmental structure is obtained
`by a set of two omnidirectional images.
`
By using feature (1) as a constraint, the vanishing points in an omnidirectional image can be determined more precisely; this is useful in obtaining the rotation angle of a mobile robot. By using feature (2), the rotation angle and the direction of the robot motion can be obtained from the projection of the features onto a Gaussian sphere [6]. Feature (3) can be used for stereovision by using two omnidirectional images taken at different locations in an environment. The correspondence of the two images can be obtained by using "circular dynamic programming" [7] based on the structure of the omnidirectional image.
`
`(B) Precise angular information
`
An image obtained by the proposed method has an angular resolution equal to that of the camera rotation; i.e., by using a precise camera rotation control system, the resolution of the angular information can be made higher than that of a conventional method. This is an excellent feature of the proposed method. For example, stereovision using a pair of omnidirectional images (feature (3) in (A)) is useful for determining both the structure of a wide environment and the positions of objects in it.
`
`(C) Omnidirectional range information
`
The proposed method obtains an omnidirectional view and, at the same time, the range information. The latter is useful for finding free regions in the environment, establishing the correspondence of a pair of omnidirectional images, and determining the motion parameters of a robot.
`
Fig. 1. Imaging method (omnidirectional view).
`
`
`
The panoramic representation proposed herein has features (B) and (C).

2. Method of Taking Images

2.1. Omnidirectional image

Figure 1 shows the mechanism for taking an omnidirectional image. A camera rotates around a vertical axis passing through its focal point, with the optical axis of its lens perpendicular to the rotation axis.

The omnidirectional image is obtained by arranging the images taken through a vertical slit with a width of a single pixel along a horizontal line. Figure 2 shows an example of an omnidirectional image (900 × 265 pixels) of a computer room obtained by rotating the camera with a step of 0.4°.

Fig. 2. Omnidirectional view.

We can improve the resolution of the angular information by controlling the camera rotation with a finer step. If the rotation step is made finer, it becomes smaller than the angle corresponding to a single pixel (Fig. 3). For example, when a camera with a focal length of 600 pixels is used, the image angle corresponding to a single pixel is about 0.1°. However, the azimuth of each vertical edge of an object (parallel to the rotation axis of the camera) in the 3-D environment can be measured more accurately by rotating the camera with a smaller step than is possible with the azimuth information from a conventional image sensor. In an edge image made by a gradient operator (e.g., the Sobel operator), an edge can be located where the edge strength is maximum. We confirmed experimentally that the edge image of an omnidirectional view taken with a very small rotation step also has a peak at each edge portion. From this result, it is expected that an omnidirectional image has a resolution equal to that of the camera rotation system.

Fig. 3. Rotation angle and resolution of the image.
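To make the slit-arranging procedure concrete, the following minimal Python sketch assembles an omnidirectional view from successive frames. It is an illustration, not the authors' implementation: grab_frame and rotate_camera are hypothetical stand-ins for the camera interface, and the 0.4° step matches the example of Fig. 2.

```python
import numpy as np

# Minimal sketch of the slit-imaging procedure of Fig. 1 (not the
# authors' code). grab_frame() and rotate_camera() are hypothetical
# stand-ins for the camera interface.

def build_omnidirectional_view(grab_frame, rotate_camera, step_deg=0.4):
    """Stack the central 1-pixel-wide vertical slit of each frame,
    one column per rotation step, over a full 360-degree turn."""
    columns = []
    for _ in range(int(round(360.0 / step_deg))):   # 900 columns at 0.4 deg
        frame = grab_frame()                        # H x W intensity image
        columns.append(frame[:, frame.shape[1] // 2])   # central slit
        rotate_camera(step_deg)                     # advance to next azimuth
    # The x coordinate of each column encodes the camera azimuth at
    # capture time, so the azimuth resolution equals the rotation step.
    return np.stack(columns, axis=1)                # H x N panorama
```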
`
2.2. Omnidirectional stereo method

Let us describe the method of obtaining the omnidirectional view and distance information simultaneously by modifying the imaging method described above.

A stereovision system using two cameras is a well-known method of obtaining distance information from images. Sarachik [5] obtained distance information from a set of two omnidirectional images taken by two cameras (arranged in the upper and lower parts of the rotation system) and by finding the correspondence between the two images.
`
`
`
`
Figure 4 shows the imaging method for the omnidirectional stereo. A camera rotates at a distance R from the rotation axis. Two vertical slits of 1-pixel width are set symmetrically about the image center. By arranging the images taken through the slits, we acquire a pair of omnidirectional images, as shown in Fig. 5. A feature point in 3-D space appears in and passes through one slit, and then appears in the other as the camera rotates. By measuring the camera rotation 2θ that carries the feature point from one slit to the other, we can estimate the range L of the point from the rotation center (Fig. 6) as

    L = R sin φ / sin(φ − θ)    (1)

where φ is half of the view angle between the two slits, φ = tan⁻¹(l/f), with l the distance of each slit from the image center and f the focal length.

Fig. 4. Imaging method (omnidirectional stereo).

Fig. 5. Two omnidirectional views for stereo method.

Fig. 6. Range estimation by omnidirectional stereo.
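As a numerical illustration of Eq. (1) (not the authors' code), the following sketch computes the range of a feature from the camera rotation 2θ, assuming slits offset l pixels from the image center and a focal length of f pixels, so that φ = tan⁻¹(l/f).

```python
import math

# Numerical sketch of Eq. (1). The slit offset l (pixels), focal length
# f (pixels), and rotation radius R (m) follow the experimental values
# quoted later in the paper (l = 200, f = 600, R = 0.2 m).

def range_from_rotation(R, l, f, two_theta_deg):
    """Range L of a feature from the rotation center, given the camera
    rotation 2*theta that carries it from one slit to the other."""
    phi = math.atan2(l, f)                    # half view angle of the slits
    theta = math.radians(two_theta_deg) / 2.0
    return R * math.sin(phi) / math.sin(phi - theta)   # Eq. (1)

# Example: a feature that takes a 30-degree rotation between the slits
# lies about 1.06 m from the rotation center.
print(range_from_rotation(0.2, 200.0, 600.0, 30.0))
```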
`
`
`
Fig. 7. Height estimation by omnidirectional stereo.
`
To measure the distance to an object, a conventional stereo method uses the object's parallax between a pair of images. Instead, the proposed method uses the difference between the two angles at which the same object is viewed through the two slits (this corresponds to the parallax in the conventional stereo method).
`
Referring to Fig. 7, the height H of an object in an omnidirectional stereo image is given by

    H = (y / f) d cos φ,    d = R sin θ / sin(φ − θ)    (2)

where d is the horizontal distance from the lens center to the object. This shows that the vertical position y of the object in the image and the focal length f of the camera are needed for estimating the height of the object.
`
The correspondence of an object in a set of two omnidirectional images can easily be found by tracking it between the two slits. This tracking is robust since the direction of movement is known and its amount is very small. If an object is hidden by another, its distance can be obtained from the rotation angle of the camera before occlusion. We use a conventional method [10] for finding the correspondence.
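The tracking idea can be sketched as follows; frames, x_lead, x_trail, and the brightest-pixel matching are illustrative assumptions only, and a real system would use the conventional matching method [10] mentioned above.

```python
import numpy as np

# Minimal sketch of the tracking idea, with hypothetical inputs: frames
# is a list of images sampled once per rotation step, x_lead and x_trail
# are the two slit columns, and row selects a horizontal scan line.

def steps_between_slits(frames, x_lead, x_trail, row, window=3):
    """Follow a feature from the leading slit toward the trailing slit;
    the returned step count times the rotation step gives 2*theta."""
    x = x_lead
    for step, frame in enumerate(frames):
        lo = max(0, x - window)
        patch = frame[row, lo:x + window + 1]
        # The per-step drift is small and its direction is known, so a
        # local search in a tiny window is sufficient and robust.
        x = lo + int(np.argmax(patch))
        if x >= x_trail:                 # feature reached the other slit
            return step + 1
    return None                          # lost (e.g., occluded)
```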
`
Another advantage of the proposed method is the ease of equipment setup. Unlike a conventional stereo method using two cameras, which requires calibration of many camera parameters, the proposed method, using a single camera, reduces such procedures.
`
(a) Conventional binocular stereo. (b) Omnidirectional stereo.

Fig. 8. Conventional binocular stereo vs. omnidirectional stereo.
`
2.3. Errors in omnidirectional stereo

A distance measured by a conventional binocular stereo method contains an error caused by image quantization. The equivalent quantity in the proposed omnidirectional stereo method is the resolution Δθ of the camera rotation system, as shown in Fig. 8. When the camera rotation system has an error of Δθ, the error in the distance to a feature point is given by

    ΔL = (L² cos(φ − θ) / (R sin φ)) Δθ ≈ L² Δθ / (R sin φ)    (3)

This shows that ΔL is proportional to the square of the distance L between the camera rotation center and the object if the object is sufficiently far (L ≫ R sin φ), and is inversely proportional to R sin φ, which corresponds to the baseline between the two cameras in a conventional binocular stereo system.
`
Figure 9 shows the relationship between the rotation radius R, the distance, and the error, when the resolution Δθ of the camera rotation system is 0.4°. Although the error decreases when R is large, there is a practical limit to its maximum value when the system is mounted on a mobile robot. In the experiment described herein, the focal length of the camera is f = 600 pixels, R = 0.2 m, and a distance of 200 pixels from the central line of the image plane to each slit is used. Despite the aforementioned limit on R, the resolution Δθ of the camera rotation system can be made very small (i.e., the error in distance measurement can be small) in the proposed method, since the azimuth of an object can be determined with an accuracy equal to that of the camera rotation system. The camera rotation system used for the experiment has a resolution of 0.005°. Therefore, assuming that the system is operated with a 0.005° step and that the error depends solely on the resolution of the rotation system, the error in the distance to an object separated from the rotation center by 1.0 m will be about 0.0014 m.
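The following sketch (an illustration, not the authors' code) checks the numbers quoted above against the far-object form of Eq. (3).

```python
import math

# Sketch checking the far-object form of Eq. (3) against the figures
# quoted above: L = 1.0 m, R = 0.2 m, slits 200 pixels from the center
# with a 600-pixel focal length, and a 0.005-degree rotation step.

def range_error(L, R, phi, dtheta):
    """dL ~ L^2 * dtheta / (R * sin(phi)) for L >> R * sin(phi)."""
    return L ** 2 * dtheta / (R * math.sin(phi))

phi = math.atan2(200.0, 600.0)          # half view angle of the two slits
dtheta = math.radians(0.005)            # rotation resolution in radians
print(range_error(1.0, 0.2, phi, dtheta))   # -> about 0.0014 (m)
```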
`
`
`
`
Fig. 9. Relationship between radius R and error (distance error vs. distance from the rotation center, 0.5-3.0 m).

Fig. 10. Imaging model (omnidirectional view).

Fig. 11. Imaging model (omnidirectional stereo).
`
`
3. Projection to Omnidirectional Image

The mathematical properties of the projection onto an omnidirectional image are described here. An omnidirectional image is a cylindrical projection of the 3-D space. Referring to Figs. 10 and 11, let us represent the 3-D space by a cylindrical coordinate system: the Z axis is taken along the camera rotation axis; the origin O is the point on the Z axis at the height of the focal point of the camera; and θ is the angle in a horizontal plane through the origin, measured from an arbitrary reference direction.
`
`
3.1. Omnidirectional image taken by the method shown in Fig. 1

Figure 10 shows a model of the projection onto an omnidirectional image taken by the method described in section 2.1. Point P = (ρ, θ, Z) in the 3-D environment is projected as point p = (x, y) on the omnidirectional image plane by

    x = f θ,    y = f Z / ρ    (4)

where f is the focal length of the camera.

A line in the 3-D space is projected onto the ellipse which appears when the cylinder is cut by the plane containing the line and the origin. This line appears as a sinusoidal curve in the omnidirectional image, which is an expanded plane of the cylindrical surface. For example, the simple case of a straight horizontal line (height H, distance D from the Z axis) perpendicular to the reference axis, i.e.,

    ρ cos θ = D,    Z = H    (5)

is projected as the sinusoidal curve y = (f H / D) cos(x / f).

As shown in Fig. 2, an object taken by this method is distorted when it is projected onto an omnidirectional image, but it is easy to reconstruct the original image from the distorted one. Considering this, an omnidirectional image taken by the proposed method contains all the information of the environment around the camera.

3.2. Omnidirectional image using the method shown in Fig. 4

Figure 11 shows the omnidirectional imaging through the vertical slit passing through the center of the image when the method shown in Fig. 4 is used. Point P = (ρ, θ, Z) in the environment is projected onto point p = (x, y) by

    x = f θ,    y = f Z / (ρ − R)    (6)

where f is the focal length of the camera.

When the horizontal straight line in 3-D space represented by Eq. (5) is projected onto the cylindrical plane, it does not form an ellipse but a more complex pattern given by

    y = f H cos(x / f) / (D − R cos(x / f))    (7)

However, a vertical line in the 3-D space appears as a vertical line in the omnidirectional image.

Almost all the information taken by the camera in this configuration can be extracted by some means. For example, in the omnidirectional stereo method (section 2.2), the angle difference produced by the rotation of the camera is first extracted and then converted into range information. Note that this angle difference is equivalent to the "parallax" in a conventional stereo method.
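The projection relations of Eqs. (4)-(7) can be illustrated with a short sketch; the function names are illustrative, and Eqs. (6) and (7) follow the reconstruction given above.

```python
import math

# Minimal sketch of the projection models of section 3 (not the authors'
# code). A point P = (rho, theta, Z) in cylindrical coordinates maps to
# omnidirectional image coordinates (x, y); f is the focal length in
# pixels and R the rotation radius of the camera.

def project_fig10(rho, theta, Z, f):
    """Eq. (4): focal point on the rotation axis (model of Fig. 10)."""
    return f * theta, f * Z / rho

def project_fig11(rho, theta, Z, f, R):
    """Eq. (6): focal point at radius R from the axis (model of Fig. 11)."""
    return f * theta, f * Z / (rho - R)

# The horizontal line rho * cos(theta) = D, Z = H of Eq. (5) traces the
# sinusoid y = (f * H / D) * cos(x / f) under Eq. (4), and the more
# complex pattern of Eq. (7) under Eq. (6):
def trace_horizontal_line(D, H, f, R, theta):
    rho = D / math.cos(theta)          # point of the line at azimuth theta
    return project_fig10(rho, theta, H, f), project_fig11(rho, theta, H, f, R)
```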
`
4. Experimental Results

The experiments were carried out in an indoor environment. The camera is arranged so that its optical axis is always parallel to the floor of the room, and the axis rotates at a height of 1.0 m. The camera (from Matsushita) has a color CCD with a 16-mm lens. The camera control system has a dc motor with a worm-geared speed-reduction mechanism for the camera rotation. An optical encoder connected directly to the motor shaft forms a feedback circuit, so that the rotation of the camera is controlled accurately with a 0.005° step.

The input image has 512 × 512 pixels. The focal length of the lens is calibrated as 591 pixels. A Nexus 6000 was used for image digitization, an Epson laptop computer for camera control, and a Sun 4 workstation as the host computer in this experiment.

Vertical straight lines were used as the feature lines in the environment. Almost all indoor environments have many vertical lines, and it is practical to choose them as features for robot navigation.
`
4.1. Acquisition of accurate azimuth information

Figure 2 shows an example of an omnidirectional view obtained by the proposed method in the configuration of Fig. 1, with a single vertical slit at the center of the image plane; it contains the information of an omnidirectional view. The image was taken by rotating the camera at a 0.4° step over 360° and consists of 900 × 512 pixels (reduced vertically to one-half).

It is not certain, however, that this image contains full azimuth information, considering that even in an image taken by a conventional method with a focal length of 600 pixels, the azimuth per pixel is about 0.1° (= tan⁻¹(1/600)). To confirm this point, the accuracy of the azimuth was checked experimentally by rotating the camera with a 0.01° step.

Figure 12(a) shows a part of an omnidirectional image obtained by rotating the camera with a 0.01° step over 5.0°. Figure 12(b) shows the edges emphasized by applying Sobel's gradient operator, and Fig. 12(c) shows the intensity of the edge sampled along the line A-B in Fig. 12(b).

Fig. 12. Precise azimuth information. (c) Edge strength along A-B.

The edge intensity shown in Fig. 12(c) has its maximum at the position of the vertical edge in Fig. 12(a). As this example shows, the azimuth of a vertical edge in an environment can be detected with the accuracy of the camera rotation by detecting the position of the maximum intensity in the edge image.

Even in a conventional image, the position of an edge can be determined with sub-pixel accuracy by using the blur of the edge in a linear mixing model [11]. In the proposed method, the shape of the blur can be determined accurately by moving the camera, as the experiment above shows, and this can be applied to difficult cases where even the linear mixing model is not applicable.

Although Fig. 12(a) was obtained by the method described in Fig. 1, the azimuth in an image obtained by the method shown in Fig. 4 can also be measured with an accuracy equal to that of the camera rotation.
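The peak-detection procedure of this section can be sketched as follows; the strip array, the 3 × 3 Sobel kernel, and the step_deg parameter are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

# Sketch of the peak-detection step described above. strip is a small
# grayscale portion of the omnidirectional image (H x W), and step_deg
# is the rotation step per column.

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def edge_azimuth(strip, step_deg):
    """Locate a vertical edge by the column of maximum Sobel response
    and convert that column index to an azimuth."""
    H, W = strip.shape
    response = np.zeros((H - 2, W - 2))
    for i in range(H - 2):                  # plain 3x3 convolution
        for j in range(W - 2):
            response[i, j] = abs((SOBEL_X * strip[i:i + 3, j:j + 3]).sum())
    column_strength = response.sum(axis=0)  # edge strength per column
    peak = int(column_strength.argmax())    # cf. the profile along A-B
    return (peak + 1) * step_deg            # azimuth of the vertical edge
```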
`
4.2. Acquisition of distance information

Experiments to obtain the omnidirectional distance information around the camera were carried out.

The camera rotation axis (vertical) must cross the optical axis (horizontal) perpendicularly. This condition is achieved by adjusting the camera position so that two feature points, lying on a horizontal line through the crossing point in directions separated by 180°, are each seen at the center of the image plane when the camera is rotated by 180°. The rotation radius R of the camera was measured by using an object at a known distance.

Figure 5 shows an example of a pair of omnidirectional images taken by the method shown in Fig. 4. Each image has 900 × 512 pixels (reduced vertically to one-half). The camera was rotated with a 0.4° step over 360° with R = 0.2 m, and the two slits were set at a distance of 200 pixels from the image center. The correspondence between the two images was obtained by tracking vertical lines in the environment between the two slits [13].
`
Figure 13 shows the positions on the room floor of the vertical lines in the environment estimated by the omnidirectional stereo method. The error in the positions of vertical lines within a radius of 1.0 m around the camera rotation center is about 0.05 m. This error can be reduced further by improving the resolution of the camera rotation system, which can be controlled with a resolution of 0.005°.

Fig. 13. Position of 3-D vertical lines.

Figure 14 shows the errors in the height measurements of the lower ends of the vertical edges, which are within about 0.1 m. The accuracy of the height measurements is poorer than that of the position measurements, since Eq. (2) contains the parameter y, whose resolution yields additional error.

Fig. 14. Error of heights.
`
`
`
`
`
5. Conclusions

This paper has described a method of simultaneously obtaining accurate azimuth and distance information in all directions by rotating a camera. The features of the method were described, including the accuracy of the distance information and the projection of objects in an environment, and experiments were carried out to confirm the usefulness of the method. The features of the method are summarized as follows:

(1) A distance measurement using a conventional binocular stereo method has errors due to quantization of its images; in contrast, the proposed method has errors due to the resolution of the camera rotation system. The resolution of a camera rotation system can easily be improved, so that a distance can be measured with higher accuracy even for a small azimuth. Note that an azimuth in the proposed method corresponds to a parallax in a conventional stereo method.

(2) The correspondence problem between a set of two images in the proposed method can be solved by tracking features between the two slits.

(3) Setting up the camera system is easier in the proposed method.

A disadvantage of the proposed method is its relatively slow imaging. This problem can be alleviated by the development of custom-designed hardware in the future and by limiting the sensing direction depending on each application.

An application of the method to an indoor mobile robot is currently under study.
`
REFERENCES

1. J. Aloimonos, A. Bandyopadhyay, and I. Weiss. Active Vision. Proc. Int. Conf. Computer Vision, pp. 35-54 (1987).
2. D. H. Ballard. Reference Frames for Animate Vision. Proc. Int. Joint Conf. Artificial Intelligence, pp. 1635-1641 (1989).
3. Y. Yagi and S. Kawato. Panorama Scene Analysis with Conic Projection. Proc. IEEE/RSJ Int. Workshop on Intelligent Robots and Systems '90, pp. 181-187 (1990).
4. T. Morita, Y. Yasukawa, Y. Inamoto, T. Uchiyama, and S. Kawakami. Measurement in Three Dimensions by Motion Stereo and Spherical Mapping. Proc. Conf. Computer Vision & Pattern Recognition, pp. 422-428 (1989).
5. K. B. Sarachik. Characterizing an Indoor Environment with a Mobile Robot and Uncalibrated Stereo. Proc. IEEE Int. Conf. Robotics & Automation, pp. 984-989 (1989).
6. R. C. Nelson and J. Aloimonos. Finding Motion Parameters from Spherical Flow Fields. Proc. Workshop on Computer Vision, pp. 145-150 (1987).
7. J. Y. Zheng and S. Tsuji. Panoramic Representation of Scenes for Route Understanding. Proc. 10th Int. Conf. Pattern Recognition (1990).
8. J. Y. Zheng and S. Tsuji. From Anorthoscope Perception to Dynamic Vision. Proc. IEEE Int. Conf. Robotics & Automation (1990).
9. H. Ishiguro, M. Yamamoto, and S. Tsuji. Analysis of Omnidirectional Views at Different Locations. Proc. IEEE/RSJ Int. Workshop on Intelligent Robots & Systems '90, pp. 659-664 (1990).
10. L. Matthies and T. Kanade. Kalman Filter-Based Algorithms for Estimating Depth from Image Sequences. Int. Journal of Computer Vision, 3, pp. 209-236 (1989).
11. M. Asada, H. Ichikawa, and S. Tsuji. Determining Surface Orientation by Projecting a Stripe Pattern. IEEE Trans. Pattern Analysis and Machine Intelligence, 10, 5
`
`
`
`
`