`
`
`
`July 9, 2014
`
`Certification
`
`
`
`This is to certify that the attached translation is, to the best of my knowledge and
`belief, a true and accurate translation from Japanese into English of the article
`that is entitled: Television Image Engineering Handbook, The Institute of
`Television Engineers of Japan, Seal of the Tokyo Metropolitan Library, Date: Jan.
`17, 1981, Ohmsha, Ltd.
`
`
`
`
`
`_______________________________________
`
`Abraham I. Holczer
`
`Project Manager
`
`Project Number: OSLI_1407_020
`
`
`
`
`15 W. 37th Street 8th Floor
`New York, NY 10018
`212.581.8870
`ParkIP.com
`
`VALEO EX. 1024_001
`
`
`
`
`
`Television Image Engineering
`Handbook
`
`The Institute of Television Engineers of Japan
`
`
`
`
`
`Seal of the Tokyo Metropolitan Library
`Date: Jan. 17, 1981
`
`
`Ohmsha, Ltd.
`
`
`
`
`
`
`
`
`
`
All rights reserved. No part of this publication may be
reproduced, by copier or any other means, without the express
written consent of Ohmsha, Ltd., as doing so would encroach on
the copyright and publishing rights of the publisher.
`
`
`
`
`
`
`
`
`
`
`
Foreword
`
`
`
In 1980, thirty years have passed since the Institute of Television Engineers of Japan
was established. A plan was made to commemorate the event by publishing this
handbook, with Professor Yasuo Taki as the chief editor, and the handbook has now been
completed. This is truly a happy day for the Institute of Television Engineers of Japan.
The first Television Engineering Handbook was published in 1959 with Professor
Kenjiro Takayanagi as the chief editor; for the 20th anniversary it was completely revised
and published again in 1969 with Kei Mizokami as the chief editor. Television engineering
has progressed remarkably in the ten years since that time. The field has experienced
innovative development in areas such as semiconductor electronics, digital circuitry and
systems, and image electronics, among many others. Moreover, this handbook is less a
revision of the old handbook than a completely new edition, which is why its name was
changed to the Television Image Engineering Handbook.
Television and image engineering have a close relationship, both direct and indirect,
with our social lives. Almost 100% of television broadcasting in Japan delivers high
quality visuals related to news, education, culture and entertainment. There is increasing
momentum to utilize digital technologies to provide an even wider range of teletext
broadcasting and image broadcasting. On the industrial side, imaging technology is
becoming increasingly high tech, and industrial television systems are of growing
importance for integrated control systems at nuclear power plants, steel mills and other
plants. Television and image applications are also expanding in fields such as space
science, medicine, transportation, communications and information processing.
This handbook integrates the knowledge currently available in the academic field of
television and image engineering. The writers are people on the front lines of television
science, foremost among the members of the institute. The hope is that this handbook will
be a beneficial daily reference not only for members of the Institute of Television
Engineers of Japan, but also for technicians and students concerned with image
engineering.
Television and image engineering technology is bound to continue evolving at a
rapid pace, and I am sure we will need to plan another revised edition later. I believe
handbooks such as this are extremely valuable for understanding how technology develops
and in what direction future development will go.
In closing, I would like to offer my sincere gratitude to everyone who contributed
to the editing and writing of this handbook over the last two editions, including the editors,
writers, publisher and everyone in the institute office who had a hand in its publication.
`
`Chairman of the Institute of Television Engineers of Japan, Toshio Utsunomiya
`
`December 30, 1980
`
`
`
`
`
`464
`
`VOLUME 5 IMAGE INFORMATION PROCESSING
`
[4] Pseudocolor Image Processing
The conversion processing described up to the preceding paragraph converts a
black-and-white image into another black-and-white image. However, a conversion is also
often used in which a black-and-white image is artificially colored according to its density
levels to create a color image. This is called pseudocolor image processing. It is effective
for clearly displaying fine intensity changes.
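As an illustration, a minimal sketch of pseudocolor processing in Python with NumPy (the lookup table, its color bands, and the toy image are hypothetical, not from the original):

```python
import numpy as np

def pseudocolor(gray, lut):
    """Map each gray level through a (256, 3) RGB lookup table."""
    return lut[gray]

# Toy LUT (hypothetical): low levels -> blue, mid -> green, high -> red,
# so fine intensity changes become visible as hue changes.
lut = np.zeros((256, 3), dtype=np.uint8)
lut[:85, 2] = 255      # blue band
lut[85:170, 1] = 255   # green band
lut[170:, 0] = 255     # red band

img = np.array([[0, 100], [200, 255]], dtype=np.uint8)
color = pseudocolor(img, lut)   # shape (2, 2, 3)
```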
`
`2.3
`
`
`Geometrical Conversion3), 6), 9)
`
`2.3.1 Geometrical Conversion of Image
`
The geometrical conversion of an image includes processes such as magnification,
reduction, rotation, and parallel displacement of a given image; correction of geometrical
distortion caused by distortion of the optical or electrical characteristics of the imaging
system; and production of an image in which the viewpoint or the field of view is moved.
This conversion processing can generally be handled as a coordinate conversion problem.
`
[1] General Method

The image after the conversion, or after the distortion of the image is corrected, is
represented by f and its coordinate by (x, y); the original image is represented by g and its
coordinate by (u, v). The coordinate conversion from (x, y) to (u, v) is shown as follows:

u = h1(x, y),   v = h2(x, y).
If h1 and h2 are known, a converted image can basically be obtained from the
following formula:

f(x, y) = g(h1(x, y), h2(x, y)).
However, for a digital image, the coordinate (us ≡ h1(xs, ys), vs ≡ h2(xs, ys)) on g
corresponding to an arbitrary sample point (xs, ys) of f does not generally match a sample
point of g. Therefore, it is necessary to approximately determine the value of f(xs, ys) from
the sample points of g obtained in the vicinity of (us, vs).
`
The basic methods for determining the value are as follows. As the value of f(xs, ys),

(1) the value of the sample point closest to (us, vs),

(2) the maximum value among the four sample values in the vicinity of (us, vs), or

(3) an interpolation value of the four sample values in the vicinity of (us, vs)

is chosen from the sample values of g. That is, for method (3),

f(xs, ys) = (1-α)(1-β)g(u0, v0) + α(1-β)g(u0+1, v0)
          + (1-α)β g(u0, v0+1) + αβ g(u0+1, v0+1),
`
`
`
`
`
`
`CHAPTER 2 IMAGE CONVERSION
`
wherein (u0, v0), (u0+1, v0), (u0, v0+1), and (u0+1, v0+1) are the sample points in the
vicinity of (us, vs); and u0 ≤ us ≤ u0+1, v0 ≤ vs ≤ v0+1, α = us − u0, and β = vs − v0.
`
Which method to use must be determined according to the purpose of the image
processing, considering the accuracy and the calculation time. An accurate result can be
obtained with method (3); however, it requires about ten times more calculation time
than methods (1) and (2).
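Methods (1) and (3) can be sketched as follows (Python/NumPy; the function names and the toy 2 × 2 image are illustrative assumptions, not from the original):

```python
import numpy as np

def nearest(g, u, v):
    # method (1): value of the sample point closest to (u, v)
    return g[int(round(v)), int(round(u))]

def bilinear(g, u, v):
    # method (3): interpolation of the four surrounding samples,
    # with alpha = u - u0 and beta = v - v0 as in the text
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    a, b = u - u0, v - v0
    return ((1 - a) * (1 - b) * g[v0, u0] + a * (1 - b) * g[v0, u0 + 1]
            + (1 - a) * b * g[v0 + 1, u0] + a * b * g[v0 + 1, u0 + 1])

g = np.array([[0.0, 10.0], [20.0, 30.0]])   # hypothetical source image
```

At the center of the four samples, method (3) returns their average, while method (1) snaps to one of them.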
`
`[2]
`
`
`Linear Conversion
`When the coordinate conversions h1 and h2 are considered to be linear,
`
`.
In this case, a parallel displacement can be implemented when A = E = 1 and B = D = 0;
a rotation around the origin of the coordinates can be implemented when A^2 + B^2 =
D^2 + E^2 = 1 and C = F = 0; and magnification and reduction can be implemented when
A/B = D/E and C = F = 0. In addition, the correction and moving of the viewpoint can be
expressed by a linear conversion. FIG. 5.6 is one example of such a correction of the
viewpoint.
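A sketch of the linear conversion under two of the parameter choices named above (Python; `affine` is a hypothetical helper name):

```python
import math

def affine(x, y, A, B, C, D, E, F):
    """Linear coordinate conversion u = Ax + By + C, v = Dx + Ey + F."""
    return A * x + B * y + C, D * x + E * y + F

# parallel displacement by (3, -1): A = E = 1, B = D = 0
u, v = affine(2.0, 5.0, 1, 0, 3, 0, 1, -1)

# rotation by 90 degrees about the origin:
# A^2 + B^2 = D^2 + E^2 = 1 and C = F = 0
t = math.pi / 2
ur, vr = affine(1.0, 0.0, math.cos(t), -math.sin(t), 0,
                math.sin(t), math.cos(t), 0)
```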
`
[3] Correction of Distortion

The shapes of the coordinate conversions h1 and h2 often cannot be determined in
advance for image distortion with a complicated cause. Even in such a case, if a sufficient
number of known standard points can be found in the image, h1 and h2 can be
approximately determined and the correction can be performed using them. For example,
in remote sensing it becomes necessary to convert the distorted image data onto the
coordinate system of a map. In this case, third- to fifth-order polynomial conversions

u = Σp Σq αpq x^p y^q,   v = Σp Σq βpq x^p y^q
`
`
`
`
`
`
`
`
are assumed between a coordinate (x, y) on the map and the coordinate (u, v) of the image,
and a sufficient number of corresponding points that can be found on both the image and
the map are selected to determine optimal values of the coefficients αpq and βpq with a
least-squares method.
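The least-squares fit can be sketched as follows; for brevity a first-order (affine) basis stands in for the third- to fifth-order polynomials mentioned in the text, and the control points are hypothetical (Python/NumPy):

```python
import numpy as np

def fit_mapping(xy, uv):
    """Estimate polynomial coefficients mapping map coords (x, y) to
    image coords (u, v) by least squares, with basis {1, x, y}."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y])
    coef_u, *_ = np.linalg.lstsq(A, uv[:, 0], rcond=None)
    coef_v, *_ = np.linalg.lstsq(A, uv[:, 1], rcond=None)
    return coef_u, coef_v

# hypothetical control points for a pure shift: u = x + 2, v = y - 1
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
uv = xy + np.array([2.0, -1.0])
cu, cv = fit_mapping(xy, uv)
```

With a higher-order basis the same `lstsq` call recovers the αpq and βpq of the polynomial model.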
`
For correcting the pincushion distortion of a television camera, a practical method
can be used in which a lattice calibration pattern is imaged in advance to measure a
distortion vector at each point on the display, and a correct image is reproduced using the
distortion vectors.
`
`2.3.2 Registration (Positioning)9)
`
When processing a multispectral photograph in remote sensing, comparing image
data of the same object taken at different times, comparing image data taken with different
sensors, etc., an operation becomes necessary of superimposing two or more images of the
same object with accuracy. This operation is called registration.
`
If a standard point common to the images being superimposed can be found, the
superimposition of one onto the other can be implemented with the approximate coordinate
conversion of Section 2.3.1 [3]. The superimposition of a plurality of images of the same
area is normally implemented by mapping all images onto the map using standard points
on the map.
`
Accuracy at the pixel level is desired in registration. However, it is generally very
difficult to distinguish a standard point at the pixel level. Even when the characteristics of
an image are clear when a certain region is viewed as a whole, they become unclear when
each point is viewed separately. Because of this, a method is used in which a standard
image of a small region, having clear characteristics and a correctly known position, is
prepared, and the portion of the given image that matches the standard image is found
automatically by matching. The size of the standard image is about 8 × 8 to 32 × 32
pixels.
`
`
`
`
`
`
`
`
`
`
`
FIG. 5.6 Visual Correction (JPL)6): (a) Original Image; (b) Corrected Image
`
`
`
`
`
The basic method of matching is to calculate the cross correlation between the
standard image and the candidate region at each position while successively moving the
standard image over the given image, and to find the region where the cross correlation
becomes maximum. A problem of this method is that the amount of calculation becomes
enormous; however, it has become more practical with the realization of special
processors that execute the FFT (Fast Fourier Transform).
The SSDA (Sequential Similarity Detection Algorithm)10) can be used as a high
speed calculation method for matching. In this method, pixels are successively selected at
random from the objective image region, the absolute value of the density difference
between each selected pixel and the corresponding pixel of the standard image is
calculated, and the results are added. When the added value exceeds a threshold value
determined in advance, the candidate region is considered to be a portion where a match
cannot be obtained, and the calculation is interrupted. This operation is repeated for each
region, and the region where the number of pixels added before the threshold value is
exceeded becomes maximum is considered to be the region where a match is obtained.
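An illustrative SSDA sketch under these assumptions (Python/NumPy; the image, template, threshold, and tie-breaking rule are all hypothetical choices, not from the original):

```python
import numpy as np

np.random.seed(0)  # fix the random pixel order for reproducibility

def ssda_count(region, template, threshold):
    """Add |density differences| in random pixel order; stop once the
    running sum exceeds the threshold. Returns (error, pixels added)."""
    err, count = 0.0, 0
    r, t = region.ravel(), template.ravel()
    for i in np.random.permutation(t.size):
        err += abs(float(r[i]) - float(t[i]))
        count += 1
        if err > threshold:
            break  # mismatch: interrupt the calculation early
    return err, count

img = np.array([[1, 2, 3, 4],
                [5, 6, 7, 8],
                [9, 8, 7, 6]], dtype=float)
tpl = img[1:3, 1:3]  # the true location is (1, 1)

best, best_score = None, (-1, 0.0)
for y in range(img.shape[0] - 1):
    for x in range(img.shape[1] - 1):
        e, c = ssda_count(img[y:y+2, x:x+2], tpl, threshold=2.0)
        score = (c, -e)        # most pixels added, then smallest error
        if score > best_score:
            best, best_score = (y, x), score
```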
`
`
`
`
`
`
`466
`
`
`
`
The calculation time of the SSDA is said to be reduced by two orders of magnitude
or more compared to directly calculating the correlation.
`
These matching methods are practical when the relationship between the given
image and the standard image is a parallel displacement. However, the amount of
calculation becomes enormous and unrealistic when rotation is involved.
`
2.4 Orthogonal Conversion5), 6), 11), 12), 14)
The Fourier transform has mainly been used as the orthogonal conversion for
signal processing. However, there has been a problem that the calculation takes a long
time with conventional methods.
`
When the high speed calculation method (FFT) for the discrete Fourier transform
was developed in 1965 by Cooley-Tukey13), together with the development of computers
and digital devices, the Fourier transform began to be applied to filtering in digital image
processing, band compression, extraction of pattern characteristics, etc. Further, various
types of discrete orthogonal conversions appropriate for digital signal processing have
come into practical use.
`
2.4.1 Discrete Fourier Transform (DFT)

When the sampled input signal is one-dimensional, [X(n)]^T = [X(0), X(1), …, X(N-1)],
the DFT is defined as follows with a matrix whose element in row k and column n is W^kn:

Cx(k) = (1/N) Σn X(n) W^kn,   k = 0, 1, …, N-1,

wherein W = exp(-i2π/N). The inversion is

X(n) = Σk Cx(k) W^(-kn),   n = 0, 1, …, N-1.
`
When the input signal is two-dimensional, [X(m, n)] (n = 0, 1, …, N-1; m = 0, 1, …, M-1),
a two-dimensional DFT is defined as follows:

Cx(k, l) = (1/MN) Σm Σn X(m, n) WM^km WN^ln.

The inversion is

X(m, n) = Σk Σl Cx(k, l) WM^(-km) WN^(-ln),

wherein WM = exp(-i2π/M) and WN = exp(-i2π/N).
`
The linearity, the shift theorem, the correlation, the convolution, and Parseval's
theorem hold between the time domain [X(n)] and the frequency domain [Cx(k)] of this
DFT.
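The definition above can be sketched as an explicit matrix product (Python/NumPy; the placement of the 1/N factor in the forward transform follows the reconstructed definition and is an assumption):

```python
import numpy as np

def dft_matrix(N):
    """Matrix with element W^{kn}, W = exp(-i 2 pi / N)."""
    k = np.arange(N)
    return np.exp(-2j * np.pi / N) ** np.outer(k, k)

def dft(x):
    return dft_matrix(len(x)) @ x / len(x)   # forward, with the 1/N factor

def idft(C):
    return np.conj(dft_matrix(len(C))) @ C   # inversion uses W^{-kn}

x = np.array([1.0, 2.0, 3.0, 4.0])
C = dft(x)          # C[0] is the mean of x under this scaling
```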
`
`
`
`
`
`
`
`
2.4.2 Walsh-Hadamard Transform (WHT)

Using a Hadamard matrix [HN], an orthogonal matrix having +1 and -1 as its
elements, the transform of the WHT is defined as

[Cx] = (1/N)[HN][X],

and the inversion is defined as

[X] = [HN][Cx].
`
The powers of two have been mainly considered as the order N of this Hadamard
matrix (N = 2^k), as well as other orders15). The Hadamard matrix corresponding to the
order N = 2^k can be produced from a Kronecker product (also called a direct product)
based on the matrix

[H2] = | 1  1 |
       | 1 -1 | .
`
For example, the Hadamard matrix of N = 8 is

       | 1  1  1  1  1  1  1  1 |
       | 1 -1  1 -1  1 -1  1 -1 |
       | 1  1 -1 -1  1  1 -1 -1 |
[H8] = | 1 -1 -1  1  1 -1 -1  1 |
       | 1  1  1  1 -1 -1 -1 -1 |
       | 1 -1  1 -1 -1  1 -1  1 |
       | 1  1 -1 -1 -1 -1  1  1 |
       | 1 -1 -1  1 -1  1  1 -1 | .
`
`
`
`
`
`
`
Half of the value obtained by adding the number of sign changes between +1 and -1
within each column of this Hadamard matrix and the change of sign between its two ends
is called the sequency, which corresponds to the frequency in the DFT.
`
The Hadamard matrix produced from a direct product is not lined up in order of
sequency; a Hadamard matrix lined up in order of sequency is called the Walsh-Hadamard
matrix. This Hadamard matrix is an orthogonal matrix as well as a symmetric matrix
([H] = [H]^T, where T denotes transposition). Therefore, the transform and the inversion
become almost the same operation. That is, the inversion becomes

[X] = [HN][Cx].
`
A relationship similar to that of the correlation and the convolution between the
time domain and the frequency domain in the DFT is also established in this WHT11).
For a two-dimensional input signal X(n, m), a two-dimensional WHT is defined in
the same way as for the DFT. FIG. 5.7 shows this two-dimensional WH function for the
case where the order is 4 × 4.
`
`2.4.3 Haar Transform (HT)
`
The elements in the first and second rows of the Haar matrix are the same as those
of the Walsh-Hadamard matrix, and the elements in the third row and up consist of
{±√2^n, 0}. The Haar transform is defined by the Haar matrix in the same way as the
WHT. It is effective in edge detection in image processing.
`
`
`
`
`FIG. 5. 7 Two-Dimensional Walsh-Hadamard Function (black represents +1, and white
`represents -1.)
An 8th order Haar matrix is shown as follows:

         | 1   1   1   1   1   1   1   1  |
         | 1   1   1   1  -1  -1  -1  -1  |
         | √2  √2 -√2 -√2  0   0   0   0  |
[Ha8] =  | 0   0   0   0   √2  √2 -√2 -√2 |
         | 2  -2   0   0   0   0   0   0  |
         | 0   0   2  -2   0   0   0   0  |
         | 0   0   0   0   2  -2   0   0  |
         | 0   0   0   0   0   0   2  -2  | .
`
`
2.4.4 Karhunen-Loeve Transform (KLT)

The KLT is an orthogonal conversion of the input signal of N samples that
minimizes the squared error when the conversion coefficients C(k) (k = 0, 1, …, N-1) are
truncated after M terms (M < N). Because it makes the C(k) uncorrelated, it is also used as
an optimum conversion for filtering and band compression. The KLT can be obtained as
follows.

First, the correlation matrix R of the input signal X(n) is formed, and a conversion
matrix Φ is obtained that satisfies the following formula:

RΦ = ΦɅ,

wherein Ʌ is the diagonal matrix of eigenvalues and Φ is the matrix of eigenvectors. The
conversion and the inversion are defined using the obtained Φ:

[C] = [Φ]^T[X],   [X] = [Φ][C].
`
`Because this Φ differs depending on the correlation matrix R of the input signal, it
`
`must be obtained for every input signal. Therefore, the processing method is complicated.
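A sketch of obtaining the KLT from the correlation matrix (Python/NumPy; the sample data are hypothetical):

```python
import numpy as np

# Hypothetical samples (rows); columns are the N signal components.
X = np.array([[ 2.0,  1.9,  0.1],
              [ 1.0,  1.1, -0.1],
              [ 0.0,  0.1,  0.2],
              [-1.0, -0.9,  0.0],
              [-2.0, -2.2, -0.2]])
R = X.T @ X / len(X)              # correlation matrix of the input
lam, Phi = np.linalg.eigh(R)      # solves R Phi = Phi Lambda
C = X @ Phi                       # conversion C = Phi^T X, per sample
X_rec = C @ Phi.T                 # inversion X = Phi C
```

Since Φ is orthogonal, the inversion reconstructs the input exactly, and Φ^T R Φ is diagonal, i.e. the coefficients are uncorrelated.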
`
2.4.5 Discrete Cosine Transform (DCT)

Using a matrix in which all of the elements in the first row are 1, and the elements
in the second row and up consist of {± cos((2x+1)kπ/2N)} (x = 0, 1, …, N-1; k = 1, 2, …,
N-1), the conversion and the inversion of the DCT are defined in the same way as for the
DFT and the WHT. The conversion characteristics are close to those of the KLT, and the
processing method is simpler than that of the KLT. Therefore, the DCT has recently begun
to be used in image processing.
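A sketch of the DCT matrix (Python/NumPy; the orthonormal scaling used here is one common normalization and is an assumption, since the text does not specify one):

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix: row k holds cos((2x+1) k pi / (2N))."""
    x = np.arange(N)
    k = np.arange(N)[:, None]
    M = np.cos((2 * x[None, :] + 1) * k * np.pi / (2 * N))
    M[0] /= np.sqrt(2)            # constant first row
    return M * np.sqrt(2.0 / N)

D = dct_matrix(8)                 # orthogonal: D @ D.T == I
```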
`
`
`
`
`
`
`
`
`468
`
`
`
`
2.4.6 High Speed Orthogonal Conversion

When the one-dimensional orthogonal conversion and inversion are carried out,
N^2 multiplications are necessary for a matrix of order N. However, a mod N operation is
performed on the exponent portion kx of the matrix element W^kx used in the DFT, so the
same values often appear in the matrix. Using this characteristic, a method was developed
by Cooley-Tukey13) in which identical calculations are grouped together to reduce the
amount of calculation and use memory economically. This method is called the FFT (Fast
Fourier Transform), in contrast with the DFT. The FFT is nothing but the decomposition
of the matrix consisting of the elements W^kx into sparse matrices in which most of the
elements are zero.

For example, when N = 8, the matrix used in the Fourier transform is decomposed
into three matrixes as follows. By decomposing in this way, eight multiplications are
performed in each sparse matrix, and the 64 multiplications of the original matrix are
reduced to 24.
`
`
`
`
`
`
`
`
`
`
`
`
wherein the blank elements are zero. By using this characteristic of the orthogonal matrix,
the N^2 multiplications are reduced to N log2 N multiplications in the FFT when N is a
power of 2.

Also in the case of the WHT, the Hadamard matrix is decomposed as follows, and
the N^2 addition operations are reduced to 2N log2 N addition operations.
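A minimal radix-2 FFT sketch illustrating the reduction to about N log2 N operations (Python/NumPy; the unnormalized convention of `np.fft` is used here, which differs from a 1/N-scaled forward definition):

```python
import numpy as np

def fft(x):
    """Radix-2 decimation-in-time FFT: split into even/odd halves,
    reuse the repeated twiddle factors W^{kx} (taken mod N)."""
    N = len(x)
    if N == 1:
        return x
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = np.exp(-2j * np.pi * np.arange(N // 2) / N) * odd
    return np.concatenate([even + tw, even - tw])

x = np.arange(1.0, 9.0).astype(complex)   # N = 8
X = fft(x)
```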
`
`
`
`
`
`
`
`
`
`
The high speed operation in the two-dimensional orthogonal conversion is the same
as in the one-dimensional case. For example, the two-dimensional DFT is written in
matrix form as

[Cx] = (1/MN)[WM][X][WN].

Therefore, the high speed operation becomes possible by rewriting the Fourier matrix into
sparse matrices.

2.5 Image Reconstruction6), 16)
`
`2.5.1 Reconstruction of Tomographic Image from Projection
`
The radiation transmitted through an object is weakened by absorption inside the
object. FIG. 5.8 shows a tomographic plane of the object when a fine radiation beam is
passed through the object while being moved in a parallel direction. The original intensity
of the beam is I0, the intensity after transmission is I, and the linear absorption rate of the
object at each point in the tomographic plane is µ(x, y). Then, the following relation is
established:

I = I0 exp(-∫ µ(x, y) ds),     (2.1)

wherein the integration is performed along the beam path. The formula (2.1) becomes the
formula (2.2):

ln(I0/I) = ∫ µ(x, y) ds.     (2.2)
`
`
`
`
`
`
Fig. 5.8 Projection Image (radiation source, section of object, and detector; the projection
image Pθ(u) is obtained on the u axis, with the standard coordinates and a coordinate
along the beam direction shown)
`
`
`
`
`
`
`
`
`
The value of the formula (2.2) obtained for each beam is called a projection, and
the one-dimensional distribution obtained on the u axis is called a projection image.
When many projection images with the direction θ changed are obtained, the inner
linear absorption rate of the object can be determined from them. That is, the distribution
of µ(x, y) in the tomographic plane is reconstructed as an image. Further, a three-
dimensional image of the object can be formed by obtaining many tomographic images at
fine intervals.
`
Such tomographic image reconstruction methods have recently seen remarkable
development as X-ray tomography by computer processing (refer to Computerized
Tomography, Volume 12, Chapter 3, Section 3.5) since the development of the EMI
scanner by Hounsfield.
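At the two axis-aligned angles, the projections of a discrete absorption image reduce to row and column sums, which can be sketched as (Python/NumPy; the image values are hypothetical):

```python
import numpy as np

# A tiny discrete absorption image mu(x, y) (hypothetical values).
mu = np.array([[0.0, 1.0, 0.0],
               [1.0, 2.0, 1.0],
               [0.0, 1.0, 0.0]])

p_0 = mu.sum(axis=0)    # rays parallel to the y axis (theta = 0)
p_90 = mu.sum(axis=1)   # rays parallel to the x axis (theta = 90 deg)
```

Every projection sums the same absorption values, so the totals of all projection images agree regardless of θ.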
`
`2.5.2 Method of Reconstruction17)
`
Many methods have been proposed; a basic one is explained here. In the following
explanation, the parallel beam shown in FIG. 5.8 is presupposed, but the method can be
modified and applied to a fan beam.
`
As shown in FIG. 5.9, the result of applying the one-dimensional Fourier transform
to a projection image of a two-dimensional image matches the values of the two-
dimensional Fourier transform of the original image on the straight line through the origin
of the spatial frequency spectrum corresponding to the direction of the projection (the
projection slice theorem).
`
`
`
`
`
Fig. 5.9 Principle of the Two-Dimensional Fourier Transform Method17): (a) real space;
(b) one-dimensional Fourier space; (c) Fourier space, connected to real space by the
two-dimensional inverse Fourier transform
`
`
`
Therefore, the two-dimensional Fourier transform of the original image is obtained
in the form of polar coordinates by arranging the results of the one-dimensional Fourier
transform of the obtained projection images on axes corresponding to the projections in
the two-dimensional frequency domain. The original image is reproduced by interpolating
the polar-coordinate values onto rectangular coordinates and performing the two-
dimensional inverse Fourier transform.
`
When the above-described operation is performed in the space domain, the
following convolutional integration is applied to the projection image, and a reconstructed
image is obtained by back-projecting the result:

qθ(u) = ∫ Pθ(τ) φ(u − τ) dτ,

wherein φ(τ) is a weight function, and various forms of it have been proposed by
considering noise, etc. This convolution method has been the most broadly used. Besides
this, a method of successively solving simultaneous linear equations formed for the
projections has been studied in various ways, and it is especially effective when the
projection angle is limited.
`
`
`
`CHAPTER 3 IMAGE ENHANCEMENT AND RESTORATION
`
`
`
Enhancement is a process of making a certain feature of the input image easily
visible, and restoration is a process of restoring an image blurred by the optical
characteristics to its original state. Either process can be considered as a filter operation
in a broad sense.
`
`3.1
`
`
`Enhancement
`
`
`
`
`
`
`
3.1.1 Gradation Processing

The simplest enhancement is to perform a gradation conversion on every pixel of
interest. Contrast improvement, histogram equalization, pseudocolor display, etc. are often
used (refer to the present volume, Chapter 2, Section 2.2).
`
`3.1.2 Smoothing18) to 22)
`
`Smoothing is mainly used when a large change of the region is captured by
`suppressing a false effect generated by sampling or the A-D conversion and noise and by
`restraining imperceptible changes.
`
`
`
`
FIG. 5.10 Neighboring Pixels Averaging: (a) 5 pixels; (b) 9 pixels
`
`
`
`
Also refer to Section 3.2 of the present chapter for the suppression of noise.

The simplest smoothing can be implemented by averaging with the neighboring
pixels. For example, the center pixel value can be replaced by the average of the 5 pixels
of FIG. 5.10 (a) or the 9 pixels of FIG. 5.10 (b). Care must be taken because excessive
smoothing causes blurring.
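The 5-pixel averaging of FIG. 5.10 (a) can be sketched as (Python/NumPy; leaving border pixels unchanged is an arbitrary choice made for this sketch):

```python
import numpy as np

def smooth5(f):
    """Replace each interior pixel by the average of itself and its
    four neighbors (the cross-shaped 5-pixel neighborhood)."""
    g = f.astype(float).copy()
    g[1:-1, 1:-1] = (f[1:-1, 1:-1] + f[:-2, 1:-1] + f[2:, 1:-1]
                     + f[1:-1, :-2] + f[1:-1, 2:]) / 5.0
    return g

f = np.array([[0, 0, 0], [0, 5, 0], [0, 0, 0]], dtype=float)
g = smooth5(f)   # the isolated spike is spread out (blurred)
```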
`
In the frequency domain, smoothing can be implemented by a low pass filter.
When the space frequencies are represented by u and v, the Fourier transformed image by
F(u, v), and the transfer function by H(u, v), the filtering is generally shown as

G(u, v) = H(u, v) F(u, v).

The inverse Fourier transform is performed on this to obtain the processed image.
`
`
`
`
`
`
`
`
`
When the Fourier transform is performed so that the DC component lies at the
origin, as shown in FIG. 5.11, the transfer function of an ideal low pass filter with cutoff
frequency D0 is given as

H(u, v) = 1  (D(u, v) ≤ D0),   H(u, v) = 0  (D(u, v) > D0),

wherein

D(u, v) = (u^2 + v^2)^(1/2).

The ideal low pass filter is simple. However, a striped pattern may appear in the processed
image because its point spread function exhibits a damped oscillation (ringing).
`
`FIG. 5.12 (b) is an example of processing by the ideal low pass filter.
Besides, an nth order Butterworth low pass filter as follows is also used:

H(u, v) = 1 / {1 + (√2 − 1)[D(u, v)/D0]^(2n)}.

(The amplitude at the cutoff frequency D0 is 1/√2 of the maximum value.)
`
Unlike smoothing, eliminating a specified spatial frequency region is effective for
removing periodic interference superimposed on the image.
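Ideal low pass filtering in the frequency domain can be sketched as (Python/NumPy; `ideal_lowpass` is a hypothetical helper, and centering the DC component via `fftshift` is one common convention):

```python
import numpy as np

def ideal_lowpass(img, D0):
    """Multiply the centered spectrum F(u, v) by H(u, v) = 1 where
    D(u, v) <= D0 and 0 elsewhere, then invert."""
    F = np.fft.fftshift(np.fft.fft2(img))     # DC at the center
    v, u = np.indices(F.shape)
    cv, cu = (np.array(F.shape) - 1) / 2.0
    D = np.sqrt((u - cu) ** 2 + (v - cv) ** 2)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * (D <= D0))))

img = np.ones((8, 8))               # constant image: only a DC component
out = ideal_lowpass(img, D0=2.0)    # passes through unchanged
```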
`
`
`
`FIG. 5.11 Transfer Function H(u, v) of Ideal Low Pass Filter (a) and Ideal High Pass Filter (b)
`
`
`
`
`
`
`(a) Original Image
`
`
`
`(b) Low Pass Filter
`
`
`
`
`
`(c) Gradient
`
`
`
`(d) High Pass Filter
`
`
`
`
`(e) High Enhancement Filter
`
`
`
`
`
`
`
`FIG. 5.12 Examples of Filtering
`
`
3.1.3 Sharpening18) to 22)

The useful information of an image is often in the edges or the portions where the
density changes, and human attention gathers at such portions. Therefore, sharpening that
enhances the characteristics of such portions is often very effective. It is also deeply
related to the detection of edge lines (the present volume, Chapter 4, Section 4.2) and the
restoration of blurring (the present chapter, Section 3.3).
`
`
`
`
`
`
`
The simplest processing for displaying only the edge portions is achieved by a
differential operation. In the case of a digital image, a finite difference is used for the first
derivative, and a processing can be performed of taking the amplitude of the gradient as
follows:

g(x, y) = {[f(x+1, y) − f(x, y)]^2 + [f(x, y+1) − f(x, y)]^2}^(1/2).

Approximately the same result can be obtained by taking absolute values as follows:

g(x, y) = |f(x+1, y) − f(x, y)| + |f(x, y+1) − f(x, y)|.
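The two gradient formulas can be sketched as (Python/NumPy; array slicing implements the finite differences, and the test image is hypothetical):

```python
import numpy as np

def gradient_mag(f):
    """Amplitude of the first-difference gradient."""
    dx = f[:-1, 1:] - f[:-1, :-1]     # f(x+1, y) - f(x, y)
    dy = f[1:, :-1] - f[:-1, :-1]     # f(x, y+1) - f(x, y)
    return np.sqrt(dx ** 2 + dy ** 2)

def gradient_abs(f):
    """Absolute-value approximation of the gradient amplitude."""
    dx = f[:-1, 1:] - f[:-1, :-1]
    dy = f[1:, :-1] - f[:-1, :-1]
    return np.abs(dx) + np.abs(dy)

f = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 1]], dtype=float)
```

Both functions respond only at the density transitions, displaying the edge portions.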
`
`
`
`
`
`
`
`530
`
`
`
`
FIG. 5.103 Interactive Image Processing System by Minicomputer (magnetic disc; image
input and output — input: television camera, output: dot printer; minicomputer; color
display; and an interactive terminal with joystick, function keys, etc. for the interactive
function)
`
`
`
`
9.5.1 Hardware for Interactive System

The forms in which an interactive image system is implemented range from using
TSS on a large computer to using a minicomputer. FIG. 5.103 shows a system
configuration using a minicomputer that can be implemented relatively inexpensively.

A special feature of the interactive system is an image display218) that instantly
shows the results of processing, etc., so that the algorithm can be changed. Further, it is
essential to be able to converse with the human operator efficiently. The image display
desirably has a function for indicating points and regions in the image, function keys to
which various operations can be assigned, etc. A color display function is also useful.
The image processing hardware (Section 9.2) that has been actively developed recently
also becomes necessary, as the required processing speed increases when the number of
objective images or the size of the objective image increases. In order to proceed with a
smooth conversation with humans, it is desirable to display the result within the time in
which the instruction of processing is sent.
`
9.5.2 Software for Interactive System

The software for the interactive system has a built-in image processing language
that is easy for humans to use. With this language, the procedure of image processing for
the objective image can easily be programmed on the spot, parameters can be changed
based on the processing results, and reprocessing can be partially performed. In order to
perform such operations, it is necessary to have an abundant library of basic image
processing functions and to make the programs and the image data easily accessible.
`
For executing interactive image processing, there is a method of selecting an
appropriate image processing function from a menu and a method of instructing the image
processing by command input. The former is for people who are not skilled in the system,
and is used when the target data are limited. With the latter, the image processing
functions can be freely instructed, and it is adopted in versatile systems. The command
input method desirably has a function for collectively and continuously executing a series
of processings (making them into macros) and a
`
`
`
`
`
`
`function of automatically performing a processing of a large image that cannot be
`processed in a main memory without human intervention.
Further, more highly developed systems have been studied219), such as a system in
which an image processing model can be described by a logical tree using AND and OR.
`
9.5.3 Examples of Interactive Image Processing System

An example of a software system on a large computer is a system220) in which
subroutines for image processing written in FORTRAN or assembly language are called
and executed by commands. It is targeted at image data on a magnetic disc or a magnetic
tape.

There are several examples of interactive image processing systems217), 221), 222) on
minicomputers. In these, the image processing programs registered on a magnetic disc
device are executed by command input, and each system has its own characteristics in the
setting of parameters, the making of programs into macros, the processing of large images,
etc.

Besides, there is also an interactive system223) that targets moving image data
recorded on video tape, movie film, etc.
`
`
`
`
`
`
`531
`
`References
`27) N.E. Nahi: Estimation Theory and Applications,
`John Willy & Sons
`28) N.E. Nahi: Proc. IEEE, 60, 7, pp.872~877,
`(1972)
`29) N.E. Nahi, T. Assefi : IEEE Trans, Comput., 21,
`7, pp. 734~738, (1972)
`30) N.E. Nahi, C.A. Franco: IEEE Trans. Compt.,
`22, 4, pp.305~311, (1973)
`31) A. Habibi: Proc. IEEE, 60, 7, pp.878~883,
`(1972)
`32) R.E. Graham: IRE Trans. IT, 8, 2, pp.129~144,
`(1962)
`33) H.J. Trussell: IEEE Trans. Syst. Man &
`Cybern., 7, 9, pp. 677~678, (1977)
`34) N.E. Nahi, A. Habibi: IEEE Trans. Circuit &
`Syst. 22, 3, pp.286~293, (1975)
`35) Ishizuka, Inose: Electronics Society, IEICE,
`61-B, 8, pp. 753~760, (1978)
`36) H.C. Andrews, B.R. Hunt: Digital Image
`Restoration, (1977), Prentice-Hall
`37) A. V. Oppenheim: Applications of Digital
`Signal Processing, (1978), Prentice-Hall
`38) B.R. Hunt: IEEE Trans. Compt., 22, 9, pp.
`805~812, (1973)
`39) Nagao, Kanade: Electronics Society, IEICE, 55,
`12, pp. 1618~1627, (1972)
`40) Kanade, Journal of Television, 31, 5, pp.
`385~392, (1977)
`41) G. VanderBrug: Comput. Graphics Image Proc.,
`4, 3, pp.287~293, (1975)
`42) M.H. Hueckel: J. ACM. 18, 1, pp.113~125.
`(1971)
`43) L. Mero, Z. Vassy: Proc. 4th Int. Joint Conf. on
`Artificial Intelligence, pp.650~655. (1975)
`44) I.E. Abdou : USCIPI Report, 830, (July 1978),
`Univ. of Southern California
`45) M.H. Hueckel: J. ACM, 20, 4, pp.634~647,
`(1973)
`46) R. Duda, P. Hart: C. ACM, 15, 1, pp 11~15,
`(1972)
`47) J. Sklansky : IEEE Trans.