`
`Cytometry 25:235-245 (1996)
`
`Automated 3-D Montage Synthesis From
`Laser-Scanning Confocal Images: Application to
`Quantitative Tissue-Level Cytological Analysis
Douglas E. Becker, Hakan Ancin, Donald H. Szarowski, James N. Turner, and Badrinath Roysam
`Rensselaer Polytechnic Institute, Troy (D.E.B., H.A., J.N.T., B.R.), and Wadsworth Center for Laboratories and Research, New
`York State Department of Health, Albany (D.H.S., J.N.T.), New York
Received for publication October 17, 1995; accepted June 6, 1996

This work has been supported by grants from Procter and Gamble Co.,
the National Science Foundation (SGER MIP 9412500 and BIR 9108492),
the National Institutes of Health (RR01219), the Whitaker Foundation,
the AT&T Foundation, and Digital Equipment Corporation.
Address reprint requests to Badrinath Roysam, Associate Professor,
Electrical, Computer, and Systems Engineering Department, Rensselaer
Polytechnic Institute, Troy, NY 12180-3590. E-mail: roysam@ecse.rpi.edu.
`
This paper presents a landmark-based method for
efficient, robust, and automated computational syn-
`thesis of high-resolution, two-dimensional (2-D) or
`three-dimensional (3-D) wide-area images of a spec-
`imen from a series of overlapping partial views. The
`synthesized image is the set union of the areas or
`volumes covered by the partial views, and is called
`the “montage.” This technique is used not only to
`produce gray-level montages, but also to montage
`the results of automated image analysis, such as 3-D
`cell segmentation and counting, so as to generate
`large representations that are equivalent to process-
`ing the large wide-area image at high resolution.
`The method is based on computing a concise set of
feature-tagged landmarks in each partial view, and
establishing correspondences between the land-
`marks using a combinatorial point matching algo-
`rithm. This algorithm yields a spatial transforma-
`tion linking the partial views that can be used to
create the montage. Such processing can be a first
`step towards high-resolution large-scale quantita-
`tive tissue studies. A detailed example using 3-D la-
`ser-scanning confocal microscope images of acrifla-
`vine-stained hippocampal sections of rat brain is
`presented to illustrate the method.
© 1996 Wiley-Liss, Inc.
`
`Key terms: Higher-level tissue architecture, confo-
`cal microscopy, automated 3-D image analysis, cell
`counting, computational montage synthesis
`
`An efficient, robust, and widely applicable technique is
`presented for computational synthesis of high-resolution,
`three-dimensional (3-D) wide-area images of a specimen
`from a series of overlapping 3-D partial views. The syn-
`thesized image is the set union of the volumes covered by
`the partial views, and is called the “3-D montage.” One
`application of this technique is the high-resolution digital
`imaging of specimens that are much wider than the field
`of view of the microscope (see Fig. 2). A powerful aspect
`of this technique is that it can be used to combine the
`results of various forms of image analysis, such as auto-
`mated 3-D segmentation and cell counting (1, 3), to gen-
`erate large visual and computer database representations
`that are equivalent to processing the large wide-area im-
`age at a high resolution all at once (see Figs. 2-4) (12,
`27). It is often impractical or impossible to analyze the
`full data set at once due to storage and computer pro-
`cessing limitations. A related problem is that equipment
`to obtain 3-D data may limit the amount of data obtained
`at one time. In this case, a large specimen may be imaged
`in several parts that are subsequently combined (7).
Currently, montaging is carried out manually, or with
interactive computer assistance (21). By automating this
procedure, this work allows montaging to be carried out
rapidly, accurately, and routinely over large volumes of
tissue, and for large batches of specimens. This improves
`upon the two-dimensional (2-D) correlation-based work
`of Dani and Chaudhuri (13). Their method can be too
`computationally intensive for 3-D volumetric data, such
`as that obtained by confocal microscopes. In addition, it
`is difficult to extend this method to transformations other
`than translation. For instance, they use a feature-based
`method to handle rotations. Finally, much of the reported
`work has involved gray-level montaging, rather than the
`montaging of image analysis results (13, 14, 22, 23).
`The data collection requirements of our method are
`minimally different from conventional microscopy. It
`only requires each partial view to have an adequate over-
`lap with the adjoining partial view. Adequate overlap is
`needed to ensure a sufficient number of common land-
`mark points. The precise extent of the overlap need not
be recorded, or even known. However, if the overlap is
`known to be in a fixed range a priori, then the algorithm
`is capable of exploiting this information to limit its
`search, and hence provide greater speed and robustness.
`Our method does not require a highly precise x-y speci-
`men translation stage because the montaging is carried
out by automatically matching computed landmarks in
the partial views independently of their absolute locations.
`The precision required to image a series of fields trans-
`lated relative to each other is easily achieved by using
`stepper motors driving precision spring-loaded lead
`screws in open loop operation. Because of these rela-
`tively relaxed mechanical precision requirements, back-
`lash does not need to be compensated for and elaborate
`sensors for feedback operation are avoided. Detailed con-
`sideration of the precision of the z-axis motion is a more
`complex issue and is beyond the scope of the present
`work. However, it is important to emphasize that the
`results presented here are from images collected with a
`simple z-drive stepper motor operated in open loop
`mode. Finally, the examples presented in this paper in-
`volve partial views that are simply translated relative to
`each other, although the core mathematical techniques
`discussed in this paper are general enough to allow the
`partial views to be rotated, scaled, and translated relative
`to each other.
`The problem of registering images is a much studied
`one. A comprehensive survey of these methods has been
`compiled by Brown (10). Briefly, there are two types of
`methods: those based on pixel intensity correlations; and
`those based on landmark features. Correlation based
methods make direct comparisons (of one sort or another)
among pixel or voxel (3-D pixel) intensity levels (11, 38).
`These methods have two major disadvantages. First, they
`can fail when faced with differences between images such
`as local lighting changes. Second, they require comparison
`of each voxel in each image for each potential registration,
`making computation time impractically long. Landmark
`based methods begin by identifying a concise set of land-
`mark points in each image. Comparisons between the
`images are made based on these landmarks alone, reducing
`the computation time dramatically (32). The performance
`of such methods depends heavily on the availability, ro-
`bustness, reproducibility, and spatial accuracy of the land-
`mark points. Note that the landmarks can be completely
`artificial in that they do not need to correspond to bio-
`logically significant structures. In this context, correlation
`based methods may still be appropriate when reliable
`landmarks are unavailable. Once a set of landmarks has
`been established, the problem of computing a correspon-
`dence between a pair of landmark sets arises. Many well-
known methods, such as that of Arun et al. (5), assume that
`correspondences are given and use them to perform a
`least-squares estimation of the transformation parameters.
`Others, such as Umeyama (35), assume that every point
`in each set has a correspondence in the other, i.e., no
`spurious or missed points. The problem of establishing
`correspondences is necessarily a combinatorial one, and
often computationally complex (6, 15, 16, 24). Yuille (37)
has discussed some of these issues, and described deform-
able template methods for their solution. Ravichandran
`(26) has improved upon this work, and described a ge-
`netic optimization algorithm that simultaneously performs
`correspondence, as well as transformation estimation.
The algorithm reported here also builds on ideas used
by Ravichandran, such as pose clustering (30, 31). Our
`approach is robust to small numbers of spurious and miss-
`ing points, and achieves significant speedups by using
`landmark features and solution constraints, when avail-
`able.
`
`METHODS
`Specimen Preparation and Imaging
`Wistar rats were anesthetized and perfusion-fixed with
`4% paraformaldehyde in 0.2M phosphate buffer. Brains
were removed, placed in fresh fixative overnight at 4°C,
blocked to expose the hippocampus, and sliced on a vi-
bratome at a thickness of 75 μm (33, 34). The slices were
`stained with acriflavine to contrast fluorescently the nu-
`clei using our previous procedure (20) which is a mod-
`ification of that of Schulte (29). Slices were post-fixed in
`4% paraformaldehyde dissolved in 0.1 M phosphate
`buffer at pH 7.4 with 5% sucrose for 2 hours, and rinsed
`several times in buffer followed by distilled water. En-
`dogenous aldehydes were blocked by immersion in 5
`mg/mL sodium borohydride for 1 hour, rinsed several
`times, and stored overnight in a large amount of distilled
`water. Slices were hydrolyzed in 6N HCI for 30 minutes,
`stained in 0.5% acriflavine for 30 minutes, rinsed, dehy-
`drated through graded ethanols, rehydrated to distilled
`water, and mounted in glycerol with 0.1% n-propylgal-
`late. The dehydration removed all nonspecifically bound
`dye and the rehydration expanded the tissue to its fixa-
`tion volume, making 3-D imaging and quantitation easier
`and more accurate.
`Confocal microscopy (25) was performed using a Bio-
`Rad (Hercules, CA) MRC 600 attachment mounted on an
`Olympus IMT-2 inverted microscope using 488 nm illu-
`mination from the argon-ion laser and either a 540-nm
`long pass filter (Ealing Electro Optic, Corp., S. Natick,
`MA) or a 488 nm notch filter (Kaiser Optical Systems,
Inc., Ann Arbor, MI) as the barrier filter. Optical sections
were recorded with a ×40 1.0 NA objective lens at 1.0-μm
intervals along the optic (z) axis, using the ×1.2
electronic zoom. The images were initially
`stored in the Bio-Rad pic format on the microscope host
`computer (IBM compatible), and uploaded to a net-
`worked IBM RS/6000 computer. The images were then
transferred to a Silicon Graphics Indigo II computer over
`the network, where they were processed using custom
`programs (1-4).
`The montaged sampling of large fields was accom-
`plished by imaging the specimen in a two-dimensional
`array of overlapped 3-D windows by systematically step-
`ping the motorized x-y stage of the microscope under the
`control of the host computer. The collected image files
`were numbered in the order that they were collected,
`and processed in the same order, although the image anal-
ysis algorithm is inherently capable of processing them in
`a different order, as discussed later in this paper. For the
`montages presented here, the extent of the overlap was
`varied randomly, and the overlap factors were not pro-
`vided to the computer program. For each window, the
`microscope was focused manually on the top and bottom
`of the specimen, and the appropriate number of optical
`slices were captured. All the 3-D images in the montage
`have the same z-axis spacing. In order to test the robust-
ness of the algorithm, a series of image pairs were col-
lected with overlap factors of 10, 15, 20, and 30%. For the
`examples, the algorithm was supplied with an overlap
`range of 10-75% in any one dimension in which to seek
`out plausible translation estimates. This allowed the
`method to operate about 6 times faster than in the case
`when no such constraint information was provided.
`Image Analysis
`The synthesis of a montage can be expressed mathe-
`matically in terms of spatial transformation of the individ-
`ual images in the series. Each partial view is stored in the
`computer as a digital image made up of voxels. Each
`voxel has a numeric gray scale value representing the
`brightness at that point of the image. Each digital image
`implicitly defines a coordinate system, with its units the
`size of the image voxels and its origin at a corner of the
image. By transforming each voxel (x, y, z) of an image to
`a new location, various processes such as translation, scal-
`ing, rotation, warping, and other effects can be per-
`formed. The problem of creating a montage is equivalent
`to determining the transformation of each digital image
`onto a common coordinate system (the “montage coor-
`dinate system”). These transformations must be deter-
`mined based on the contents of the partial views. There-
`fore, the partial views must be recorded with some
`degree of overlap, although it is not necessary to record
`the precise extent of the overlap. If the areas of overlap
`can be determined, then the transformations of the partial
`views onto the montage coordinate system can be deter-
`mined and the montage synthesized.
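To make the coordinate bookkeeping concrete, the following minimal
sketch (ours, not the authors' code) shows a partial-view voxel being
mapped into the montage coordinate system by a 3-D translation; the
displacement values are taken from Table 1 purely for illustration.

    import numpy as np

    def to_montage(voxel_xyz, translation_xyz):
        # Map a voxel coordinate from a partial view's local coordinate
        # system into the montage coordinate system (translation only).
        return np.asarray(voxel_xyz, float) + np.asarray(translation_xyz, float)

    # A voxel at (10, 20, 5) in partial view 2, whose displacement from
    # the montage origin is (122.9, 0.5, 2.4) voxels (Table 1):
    print(to_montage((10, 20, 5), (122.9, 0.5, 2.4)))  # [132.9  20.5   7.4]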
`First, a concise set of landmark points is identified in
`each partial view. In an area of overlap between two par-
`tial views, the pattern of landmarks is about the same.
`Therefore, if one set of landmarks is transformed cor-
`rectly onto another, the landmarks will fall on approxi-
`mately the same locations in the region of overlap. This
`leads to the following strategy. One partial view is se-
`lected to form the basis of the montage, and defines the
`montage coordinate system. The other partial views are
`added to the montage in turn. First, correspondences are
`hypothesized between landmarks in the montage (ini-
`tially a selected partial view) and landmarks in the partial
`view. A “correspondence” between two landmarks indi-
`cates a belief that the landmarks represent the same fea-
`ture at the same location. Associated with each set of
`correspondences is a spatial transformation (translation,
`scaling, and/or rotation) relating the montage and the
`next partial view. Knowing which landmarks correspond
`allows us to transform the partial view to align it with the
montage. Feasible transformations are evaluated by
counting the number of points that nearly coincide in the
`aligned images, and the best possible transformation is
`computed. The partial view is then merged with the mon-
`tage, and duplicate landmarks are removed. When all par-
`tial views have been added to the montage, the process is
`complete.
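This overall strategy can be summarized in a short sketch (an
illustration under our own naming, not the authors' implementation),
where register and merge stand for the matching and
landmark-combination steps described in the following subsections:

    def build_montage(views, register, merge):
        # views: list of landmark sets, one per partial view.
        # register(montage, view) returns the spatial transformation T
        # that best aligns the view's landmarks with the montage's.
        # merge(montage, view, T) adds the transformed view's landmarks
        # to the montage and removes duplicates.
        montage = views[0]              # seeds the montage coordinate system
        for view in views[1:]:
            T = register(montage, view)
            montage = merge(montage, view, T)
        return montage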
`
`Fast Transformation Search
`Given two sets of points representing landmarks over a
`pair of images, and, optionally, a set of features associated
`with each landmark, it is desired to extract a 3-D spatial
`transformation that maps one set of points into the other.
`The matching algorithm is not provided correspondences
`between the points in the two sets, so they must be com-
`puted. In general, this problem is known to be computa-
`tionally intensive. However, the nature of our application
`restricts the possible transformations between sets of
points. Each landmark in the image is at a location v =
(x, y, z). A transformation T: v → w can be defined that maps
each position vector v to a new position vector w. The transfor-
mations that are applicable for the problem of interest are
3-D translational transformations of the type T: w = v +
t, where t = (t_x, t_y, t_z) is the translation vector. This is valid
`when the specimen is moved in a plane perpendicular to
`the optical axis of the microscope without rotation, and
`the magnification of the microscope is fixed. This algo-
`rithm may be readily modified to provide for more gen-
`eral types of transformations, e.g., to account for inten-
`tional or inadvertent rotation, changes in magnification,
`and image warping. These changes could be deliberate,
`for studying particular regions, or due to aberrations.
`The landmark point matching algorithm has two steps.
`First, a set of correspondences are hypothesized between
`the landmark sets. Second, the transformations are evalu-
`ated by computing a “score” for each transformation that
`measures how well each hypothesized transformation
`corresponds to reality. The algorithm operates by corre-
`sponding pairs of landmarks. Every “plausible” corre-
`spondence between a point in the first set and a point in
`the second set defines a transformation. A “plausible” cor-
`respondence is a match that induces a transformation that
`is within the expected limits of the amount of overlap
between images. Given a plausible correspondence be-
tween a point a_1 in the first set and a point a_2 in the
second set, the translation vector is simply t = a_2 - a_1.
If there are n points in the first set and m points in the
second set, then there are n × m point-to-point matches.
`Although few of these matches are plausible, there are
`still a large number of transformations to evaluate.
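As a hedged illustration of this step (the parameter names are ours),
the plausible translations can be enumerated directly from the two
landmark sets, with the overlap constraint pruning most of the n × m
candidates:

    import numpy as np

    def plausible_translations(P, Q, t_min, t_max):
        # P: (n, 3) landmark coordinates of the partial view.
        # Q: (m, 3) landmark coordinates of the current montage.
        # t_min, t_max: per-axis translation bounds implied by the
        # expected overlap range (e.g., 10-75% in any one dimension).
        candidates = []
        for p in P:
            for q in Q:
                t = q - p                    # translation induced by match (p, q)
                if np.all(t >= t_min) and np.all(t <= t_max):
                    candidates.append(t)     # plausible: within overlap limits
        return np.asarray(candidates)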
`Since many of these transformations will often be sim-
`ilar or, ideally, the same, it would be wasteful to evaluate
`each transformation. The algorithm avoids this by main-
`taining a list of transformation “clusters” (31). For the
`results reported here, 45 transformation clusters were
`evaluated. This represents an extremely conservative
`choice; one could evaluate far fewer clusters. Each trans-
`formation cluster represents a set of plausible matches
`which induce similar transformations. Each transforma-
tion T = (t_x, t_y, t_z) can be considered to be a point in a
`3-D space. These points will be called “transformation
`points.” Transformation clusters can be defined as groups
`of transformation points that are within a certain distance
`(in 3-D space) of the center of the cluster, defined as the
`average of all the points in the cluster. Because there is no
`need to access the individual transformation points of a
`cluster, a cluster may be stored as a transformation point
`identifying the cluster center and a count variable indi-
`cating how many transformation points are in the cluster.
`In this way, the evaluation of the transformation, which is
`computationally expensive, need not be computed for
`each match, but only for each cluster. The algorithm for
`generating clusters is the following. For each plausible
`match, the transformation vector T induced by the match
`is computed. The vector T is then compared to the trans-
`formation represented by each cluster. If the distance (in
`transformation space) between any cluster center and T
is less than a certain threshold value d, the match in ques-
`tion is added to the cluster, the number of transformation
`points in the cluster is incremented, and the cluster cen-
`troid is re-computed. Otherwise, a new cluster is created
`consisting of one match at T.
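A minimal rendering of this clustering procedure, assuming a fixed
distance threshold d (our notation), is:

    import numpy as np

    def cluster_translations(candidates, d):
        # Each cluster is stored as a running centroid plus a member
        # count; individual transformation points need not be retained.
        centers, counts = [], []
        for t in candidates:
            for i, c in enumerate(centers):
                if np.linalg.norm(t - c) < d:             # within threshold of a center
                    counts[i] += 1
                    centers[i] = c + (t - c) / counts[i]  # incremental centroid update
                    break
            else:
                centers.append(np.asarray(t, float))      # start a new cluster
                counts.append(1)
        return centers, counts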
`Each of the clusters computed as above defines a trans-
`formation. The next step is to select the optimal transfor-
`mation. Because a correct transformation should map a
`large number of landmark points correctly, it is intuitively
`reasonable that the optimal transformation cluster will
`have a large number of transformation points associated
`with it. In fact, it is expected that the correct transforma-
`tion cluster would have more points than any other clus-
ter, although there may be exceptions. Therefore, not all
of the clusters need to be evaluated, only the ones with
the most transformation points.
`Evaluation of Transformations
`To evaluate a transformation, the landmark points of
`the partial view are transformed and compared with the
`points in the current montage. Let P and Q denote the set
`of landmark points in a partial view, and the current mon-
tage, respectively. A point p_i in the set P is transformed to
T(p_i). The closest landmark in the current montage to the
transformed point T(p_i) is denoted q*(p_i), and computed as
follows:

    q^*(p_i) = \arg\min_{q \in Q} \| T(p_i) - q \|.    (4)

Next, a set of points P* is computed that contains all
points in P that, when transformed by T, are less than
distance D from a point in the current montage:

    P^* = \{ p_i \in P : \| T(p_i) - q^*(p_i) \| < D \}.    (5)

Then, we can define an evaluation function as follows:

    f(T, P, Q) = \sum_{p_i \in P^*} ( D^2 - \| T(p_i) - q^*(p_i) \|^2 ).    (6)
`This function assigns high values for points in the partial
`view that are mapped close to points in the montage.
`Because the function does not require exact matches be-
tween points, but degrades gracefully, allowing some dif-
ference in the locations of points, the system will allow
`some error in the landmark point identification. The cen-
`ter of each of the N best transformation clusters is eval-
`uated, where N can be selected to effect a trade-off be-
`tween computation time and failure rate. In the results
`presented in this paper, N was selected to be 45, a very
`conservative value which virtually guarantees a minimal
`failure rate. The transformation with the highest evalua-
`tion function value is chosen as the transformation from
`the image onto the montage coordinate system.
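Equations (4)-(6) and the top-N cluster selection translate directly
into code; the sketch below is our paraphrase, with D the coincidence
distance and N = 45 as used in the paper:

    import numpy as np

    def score(T, P, Q, D):
        # Evaluation function f(T, P, Q) of equations (4)-(6).
        # P: (n, 3) partial-view landmarks; Q: (m, 3) montage landmarks.
        total = 0.0
        for p in P:
            dists = np.linalg.norm(Q - (p + T), axis=1)  # distances to all of Q
            nearest = dists.min()                        # ||T(p) - q*(p)||, eq. (4)
            if nearest < D:                              # p belongs to P*, eq. (5)
                total += D**2 - nearest**2               # graceful penalty, eq. (6)
        return total

    def best_transformation(centers, counts, P, Q, D, N=45):
        # Evaluate only the centers of the N most populous clusters.
        order = sorted(range(len(centers)), key=lambda i: counts[i], reverse=True)
        best = max(order[:N], key=lambda i: score(centers[i], P, Q, D))
        return centers[best]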
`Method for Combining Landmarks
As the montage is generated, the extent of the overlap be-
`tween the partial montage and the new image frame is
`determined. The portion of the new image that is not
`represented in the current wide-area map is now inserted
`into the montage. In addition, the portion of the montage
`that overlaps with the new image frame is updated to
`reflect the new level of confidence in the landmark
`points. In particular, associated with every landmark
`point in the montage is a count of the number of times it
`coincided with a landmark in a new image (coincidence
`counts), and also a count of the number of times a new
`image reliably overlapped with the relevant spatial region
`(observation counts). A “confidence value” is computed
`by dividing the number of coincidence counts by the
`number of observation counts. Points with higher confi-
`dence values are considered more reliable. Landmarks
`from the montage that are approximately in the same
`location as landmarks in the transformed partial view are
`considered to be duplicates. Duplicate landmarks are re-
`placed by a single landmark with an increased coinci-
`dence count. As more overlapping images are added to
the montage, a set of very reliable landmark points be-
comes available.
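A sketch of this bookkeeping (the data layout is ours; the paper does
not specify one) might represent each montage landmark as a position
with coincidence and observation counts:

    import numpy as np

    def merge_landmarks(montage, new_points, T, radius, in_overlap):
        # montage: list of dicts {'pos', 'coincidence', 'observation'},
        # where 'pos' is a length-3 numpy array in montage coordinates.
        # in_overlap(pos) -> True if pos lies in the region covered by
        # the newly added image (the "reliably overlapped" test).
        for lm in montage:
            if in_overlap(lm['pos']):
                lm['observation'] += 1                 # region observed again
        for p in new_points:
            tp = np.asarray(p, float) + T              # transform into montage frame
            dup = next((lm for lm in montage
                        if np.linalg.norm(lm['pos'] - tp) < radius), None)
            if dup is not None:
                dup['coincidence'] += 1                # duplicates collapse to one
            else:
                montage.append({'pos': tp, 'coincidence': 1, 'observation': 1})
        for lm in montage:                             # confidence = ratio of counts
            lm['confidence'] = lm['coincidence'] / lm['observation']
        return montage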
`
`EXPERIMENTAL RESULTS
`Nine individual 3-D image fields were montaged in all
`three dimensions. The algorithm was shown to work over
`a wide variation of image feature sizes and densities
`within the montage set. Nuclei in the extrapyramidal re-
`gion of the hippocampus tend to be small and are often
`separated by several nuclear diameters; those in the py-
`ramidal layer are always very densely packed and have a
`wider distribution of sizes with most of them larger than
`the majority of extrapyramidal cells. The algorithm per-
`formed well in both regions.
`The results of montaging two of the nine fields de-
`scribed above are detailed in Figure 1. Maximum inten-
`sity value projections of the 3-D image data are shown in
`Figure la. The first field has been assigned a pure red
`tone and the second a pure green. The region of overlap
is, therefore, yellow. The quality of the montage can be
visually assessed by comparing individual nuclei that
span the boundaries of the overlapped (yellow) region.
`In the bottom right corner of this region, there are four
`nuclei that are partially red. This is due to the fact that the
`optical sections composing the second field did not in-
`clude the entire volume of these nuclei while the first
`
FIG. 1. The result of montaging two adjacent and overlapping fields of
acriflavine-stained nuclei recorded from a thick tissue slice of a rat
hippocampus. a: A 24-bit color-tagged maximum-intensity projection of
the two montaged fields. The first field of the montage is shown using a
red color scale by assigning the 8 bits of the red channel the intensities
of the first field. The second field is shown using a green color scale,
using the 8 bits of the green channel's intensity values. The overlapping
registered region appears in shades of yellow. b: The result of montaging
the segmentations of the same two fields. Each segmented nucleus is
assigned a randomly chosen color. The two fields have a width of 200
μm, and are composed of 60 optical sections recorded at 1-μm intervals.
A ×40 1.0 NA objective lens was used.
`
`field did. Thus, nuclear cross sections in the first field
that were not in the second remain red. Figure 1b shows
`the result of montaging the segmentations of the two
`fields. This was accomplished by separately segmenting
the two fields using methods described elsewhere (3, 4).
`The object centroid estimates resulting from this segmen-
`tation were used as landmark points for the montaging.
`Nuclei that were partially imaged, i.e., were at the image
edges, were processed using Howard's brick rule (17) by
`consistently deleting such nuclei at the left and upper
`boundaries of each partial view image. In the merged
`region, each detected object has two segmentations from
`the two individual partial views. The segmentation with
`the larger volume was included in the montage. The pro-
`jections of each segmented nucleus were randomly as-
`signed colors. The pyramidal cell layer is the densely
`nucleated band running roughly horizontally across the
`center of the field.
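The edge-handling and duplicate-resolution conventions just described
admit a compact sketch (our formulation; the field and segmentation
records are hypothetical):

    def keep_nucleus(bbox_min, field_min):
        # Howard's brick rule (17) as applied here: nuclei touching the
        # left or upper boundary of a partial view are deleted, so each
        # nucleus spanning an overlap is counted exactly once.
        touches_left  = bbox_min[0] <= field_min[0]
        touches_upper = bbox_min[1] <= field_min[1]
        return not (touches_left or touches_upper)

    def resolve_duplicate(seg_a, seg_b):
        # In the merged region, each nucleus has two segmentations from
        # the two partial views; the larger-volume one enters the montage.
        return seg_a if seg_a['volume'] >= seg_b['volume'] else seg_b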
`An x-y view of the 3-D montage of nine individual
`fields that are overlapped in x, y, and z is shown in Figure
`2a, displayed as a maximum-intensity projection. For
`each joining operation, the displayed gray level in the
`overlap region is simply obtained from the maximum-
`intensity projection of the first image in the montaging
`sequence. This processing can optionally be performed
`using other operators, such as a linear blend operator.
`The pyramidal layer is seen to form an arc running across
`the entire montage. The first 6 fields from the left are
`mainly displaced with respect to each other in the x-di-
`rection (left-to-right) with a relatively small change in the
`y-direction. The remaining three fields are significantly
`displaced in both directions. Table 1 lists the values of the
`x, y, z displacements of the nine partial fields. There are
`large changes in x between all the fields with relatively
`small changes in y for the first six fields. Fields 7, 8, and
`9 are significantly displaced in both x and y. The z-dis-
placement ranges from 12 voxels along one direction in z
to 27 voxels in the opposite direction. The variation in z is
`caused by the tissue warping. This is a common problem
`with such thick tissue slices.
`The same projection of the montaged field is rendered
`in Figure 2b with each nucleus replaced with an artificial
`colored sphere. Each image field is delineated by a dif-
`ferent colored rectangle. The spheres have a diameter
`such that the volume of the sphere is equal to the volume
`of the image of the corresponding nucleus. The smallest
`nuclei are represented by red spheres with the color pro-
`gressing along the spectrum to violet as the size in-
`creases. It can be seen that the amount of overlap be-
`tween adjacent fields was widely varied.
`The vast majority of small (red) spheres in Figure 2b
`are located in the extrapyramidal regions and the vast
`majority of the larger spheres are located in the pyrami-
`dal layer. This corresponds to the known anatomic dis-
`tribution of neural and glial cells in the hippocampus. It
`is easy to visualize the overall distribution of the cells
`based on nuclear volume. A few large green, blue, and
`violet nuclei are seen distributed in the extrapyramidal
regions and the majority of the pyramidal nuclei are seen
to be in the size range corresponding to the yellow and
`green spheres.
`The montage is shown from another perspective pro-
`jection angle in Figure 3a. This view is from above the left
`end relative to Figures 2a and 2b. The image fields pre-
`viously delineated by rectangles are now delineated by
`3-D boxes of different colors, giving a better 3-D impres-
`sion of the distribution of the various sized nuclei. By
comparing the extrapyramidal nuclei in this projection to
that of Figure 2b, it is immediately recognized that these
`nuclei are distributed throughout the z-direction.
`The acriflavine stain is stoichiometric for the purine
`bases in DNA and its fluorescent signal is, therefore, a
`good indicator of DNA content. In Figure 3b, the color of
`the spheres indicates the average intensity level of each
`nucleus. We have classified the cells into two categories
`based on average intensity using an unsupervised hierar-
chical cluster analysis program described earlier by us (1,
27). A red color was assigned to the group of nuclei with
`the significantly higher average intensity, and a green
`color to the rest of the population. It is apparent that the
`ones with the higher average intensity are nearly all ex-
`trapyramidal cells and therefore have denser nuclei than
`those of the pyramidal layer. This image is only one ex-
`ample of a large variety of informative representations
`that can be computed in this manner. For instance, one
`can classify the cells by other properties, such as size and
`shape descriptors, or even by weighted combinations of
`such properties. It is also important to note that the ren-
`derings such as those shown in Figures 2 through 4 are
`just a few representative snapshots of images that can be
`rendered and modified interactively in real time. For ex-
`ample, the coloring scheme used for the spheres, their
`surface texture, size, and viewing parameters such as the
`magnification, angle, type of projection, and lighting ar-
`rangements can all be modified interactively on the com-
`puter, and the total cell population can also be moved
`around by the experimenter as desired.
`To produce montages of large tissue volumes most ef-
`ficiently from the standpoint of data collection and com-
`putation, it is desirable to make the overlap as small as
possible. Figure 4 shows the montage of a series of fields
with variable overlaps (left to right: 30, 20, 15, and 10% of
the field width along the horizontal direction). No differ-
ence in image quality is discernible in the overlapped
regions, demonstrating that for these data sets the overlap
can be as small as 10%. An enlargement of the montaging
of fields 4 and 5 of Figure 4a, i.e., the 10% overlap case,
not presented here, showed excellent montaging.
`
FIG. 2. a: A frontal (x, y) 2-D maximum-value projection of the 3-D
`gray scale montage of nine adjacent fields, including the two from Figure
`1. b: An x-y rendered projection of the image analysis result correspond-
`ing to the montage shown in a. Each detected nucleus is represented by
`a sphere. The color of the sphere is representative of the size of the
`nucleus, with the largest ones being violet, the smallest red, with the size
`being related to the visual color spectrum. The dense band through the
`center of the montage is the hippocampal pyramidal layer. The boxes
`indicate the relative locations of the individual 3-D partial views and
`depict the randomly varying x-y overlap.
`
Table 1
3-D Displacement Values Corresponding to the Montage Displayed in Figures 2-3*

Image number   Δx (voxels)   Δy (voxels)   Δz (voxels)
     1                 0            0            0
     2             122.9          0.5          2.4
     3             257.3          3.4         12.5
     4             396.8          7.4         12.9
     5             565.6          9.3         24.7
     6             724.2         11.6         27.1
     7             923.7         69.5         16.3
     8           1,095.3        107.3          0.4
     9           1,194.7        145.4        -12.1

*The first image is defined to be at the origin. The voxel size along
the x and y axes was 0.393 μm, and was 1 μm along the z-axis. The
individual images were of size 512 × 512 × 60, which were down-sampled
to 256 × 256 × 60 after segmentation at the full resolution. The images
are numbered from left to right, the order in which they were acquired.
All displacements are expressed as multiples of the voxel size along the
corresponding dimension of the down-sampled images.
`
`The 9-imag