`hypermedia documents
`
Authors:
`
Michael D. Doyle, Ph.D.
Chairman and CEO
Eôlas Technologies Incorporated
7677 Oakport St., Ste. 646
Oakland, CA 94621
email: mddoyle@netcom.com

Cheong S. Ang, M.S.
Programmer/Analyst
Innovative Software Systems Group
Center for Knowledge Management
University of California, San Francisco
email: cheong@ckm.ucsf.edu

David C. Martin, M.S.
Head, Innovative Software Systems Group
Center for Knowledge Management
University of California, San Francisco
email: martin@ckm.ucsf.edu
`
Abstract:

The World-Wide-Web (WWW) has created a new paradigm for online information retrieval by providing
immediate and ubiquitous access to digital information of any type from data repositories located
throughout the world. The web's development enables not only effective access for the generic user, but
also more efficient and timely information exchange among scientists and researchers. We have extended
the capabilities of the web to provide an improvement to the current paradigm for interacting with inline
images, and to allow multidimensional image datasets to be embedded, together with realtime interactive
viewers, within WWW documents. Those datasets can then be accessed via our modified version of
NCSA's Mosaic WWW browser. This paper will provide a brief background on the World-Wide-Web, an
overview of the extensions necessary to support these new data types, and a description of an
implementation of this approach in a WWW-compliant distributed visualization system.
`
`1. Introduction
`
This paper describes two proprietary technologies, CCI++ and MetaMAP®, which drastically expand
the ability to implement robust client/server applications over the Internet. Much of this work relates
to commercial applications for the World Wide Web (WWW). The WWW was developed by Tim
Berners-Lee, at Switzerland's Particle Physics Laboratory (CERN), in 1989 as a way to manage technical
documents for groups of geographically-remote collaborators. It wasn't until 1993, when Marc
Andreessen at the University of Illinois' National Center for Supercomputing Applications (NCSA)
developed MOSAIC, that things really began to take off. MOSAIC presented people with such an easy-to-
use interface to the Internet that it was soon called the "Killer App for the Information Superhighway"
[1,20]. Andreessen left NCSA to found a company, originally called Mosaic Communications, Inc., now
called Netscape Communications, Inc., to create a commercial competitor to MOSAIC. Despite the great
activity in the WWW browser market, current browsers are severely limited in the types of information
which they can handle. The CCI++ and MetaMAP technologies allow WWW developers to go far
beyond those limits in creating robust applications based on the Web.
`
`
`SPIE Vol. 2417 / 503
`
`Petitioner Microsoft Corporation - Ex. 1075, p. 1
`
`
`
A few facts will illustrate the speed at which new business opportunities are developing in this area.
Although Netscape was founded less than a year ago, its most recent funding round valued the company
at over $200 million. Several months after Netscape was founded, NCSA signed a deal with another
company, Spyglass, Inc., to commercialize MOSAIC. To date, Spyglass has licensed over 30 million
copies of MOSAIC. Microsoft recently announced that Spyglass MOSAIC would be bundled with their
next version of Windows (Windows 95) [3]. Financial institutions, such as Bank of America, MasterCard,
and Visa are already allied with either Spyglass or Netscape in anticipation of profiting from the vast
business opportunities the WWW provides [3]. Much of this commercial interest relates to the user-
friendly nature of MOSAIC and the other WWW browsers.
`
`2. MetaMAP®
`
In order to achieve the types of user-friendly interfaces that provide the "point and click" simplicity that
today's users demand, multimedia programmers typically encode "hot spots" into their applications. Hot
spots are areas on the screen that allow the program to respond in some fashion when they are clicked
upon. The earliest interactive applications employed only rectangular hot spots. These were the easiest
for the programmers to implement, since the program only had to compare ranges of x and y coordinates
to see if it should do anything when a mouse click was detected. This worked fine for things like menus
and dialog boxes, but around 1988 the approach began to prove inadequate for many of the newer
hypermedia applications that were beginning to appear. [17] Those newer applications often included
realistic photographic images or illustrations that included many components. A good example would be
an illustration of the anatomy of a frog in a program to teach high school biology. Teachers wanted
programs that would allow students to click anywhere on the picture of the frog and have the computer
tell them what the anatomical structure was. Since each of the anatomy "objects" in the illustration was
irregularly shaped, the use of rectangular hot spots proved woefully inaccurate. Such a limitation played a
significant role in the failure of IBM's InfoWindows system to gain significant market share against
competitors like Supercard on the Macintosh, which supported irregular hot spots.
`
Many innovative programmers at the time began to implement systems which allowed users to define
irregular hotspots on images. During the authoring phase of a multimedia application, the user would
interactively trace the outline of each object in an image that was intended to be "hot." The computer
would store the coordinates of each object as a separate polygon, together with the name of the object, in
the system's memory. The list of polygons, together with their associated names, then became a "map" of
the objects in the original image. Later, when a student would click on the frog in the irregular-hotspot
version of the teaching program, the system would automatically look through the list of polygons
associated with that image and use geometric operations to decide whether the mouse was pointing to an
area inside or outside of each polygon. When the search found a polygon that surrounded the point of
interest, the object name associated with that polygon would be displayed to the student. [6]
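The polygon-lookup procedure just described can be sketched as follows. This is an illustrative reconstruction, not code from the paper: the ray-casting test, the `frog_map` table, and all coordinates are invented for the example.

```python
# Hypothetical sketch of polygon-based hot-spot lookup, as described above.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: count how many polygon edges a rightward ray
    from (x, y) crosses; an odd count means the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal line through y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def identify_object(x, y, object_map):
    """Scan every stored polygon until one contains the click point."""
    for name, polygon in object_map:
        if point_in_polygon(x, y, polygon):
            return name
    return None

frog_map = [
    ("heart", [(100, 100), (140, 100), (140, 150), (100, 150)]),
    ("liver", [(200, 100), (260, 120), (230, 180)]),
]
print(identify_object(120, 120, frog_map))  # -> "heart"
```

Note that every click pays for a geometric test against every polygon, which is exactly the per-click cost the paper's palette-based approach avoids.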
`
This example is a little deceptive, since the geometric operations needed in order to decode the polygonal
object's identity were so complex that applications could use only a very few simple images before they
either ran out of memory or became too slow to operate effectively. An image as complex as a precise
illustration of frog anatomy, much less a full biology atlas, could never have been successfully "mapped"
using these outline-based techniques on the PC technology of the late '80s.
`
These limitations had been recognized several years earlier by Dr. Doyle, then a cell biology graduate
student, as he was trying to come up with a way to create an interactive atlas of medical histology using a
4.77 MHz IBM PC with 256K of RAM and a 3rd-party 8-bit (256 color) graphics board. The problem
was that the graphics board could display a high quality color image, similar to today's SVGA adapters,
but there was no way that the IBM PC would ever be able to effectively run an interactive atlas program
based upon irregular polygon-based hot spots. An additional problem involved the fact that histology
images can often contain thousands of individual objects, such as blood cells, and these objects could fall
into a number of general classes. For example, in an image of a section of lymph node tissue there could
be several hundred red blood cells, several hundred white blood cells, 40 or 50 clumps of connective
tissue, and so on. Even the relatively efficient method of using rectangular hot spots couldn't deal with so
many objects in a single image.
`
Dr. Doyle then began to think about how a computer displays a 256-color image on the screen. The
computer screen is laid out like an x,y cartesian coordinate grid with the origin (0,0), usually, at the upper
left hand corner. The upper right hand corner is x,y = 640,0. The lower left is 0,480, and the lower right
is 640,480. The computer then stores a number in its memory for each intersection of the grid. Each of
those numbers corresponds to the color which the monitor displays at the respective grid location. Since
color computer monitors have three color tubes, one each for red, green, and blue, one would think that
the computer would need to store three numbers for each location on the screen, each "pixel." That is,
indeed, how the most expensive display systems work, but a much more economical method has been
developed for mainstream machines. This system stores only one 8-bit number, ranging in value from 0 to
255, at each pixel location on the screen. Each of these numbers, or color index values, then points to one
of the rows of a "color look-up table," or CLUT (also called a palette). This table has rows 0-255
(corresponding to the 256 possible values for each pixel) and three columns, one each for red, green and
blue (RGB). These RGB values can be "programmed," or set, by individual software applications as they
run on the machine. Each row in the table stores three 8-bit numbers corresponding to the RGB values
for a given color index.
`
So, when the computer needs to know what color to draw at a given pixel location, it first looks at the
color index value at that location. Using that color index, it then finds the corresponding row of the
CLUT, and looks across that row to find the red, green and blue numbers which determine what actual
color is displayed on the screen. It is possible for many different pixel locations in an image to use the
same color index value. If the CLUT RGB numbers for that color index are changed in some way, then
all of the pixels on the screen which store that particular color index will change simultaneously to reflect
the new RGB values.
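This indexed-color lookup can be modeled in a few lines of Python. The frame-buffer layout and the sample values below are illustrative only, not any particular display adapter's interface.

```python
# Minimal model of indexed-color display lookup: an 8-bit frame buffer
# of color indices, resolved through a 256-row color look-up table.

# 256-row CLUT: each row holds (red, green, blue), each 0-255.
clut = [(0, 0, 0)] * 256
clut[17] = (40, 40, 40)   # a mid-dark gray "programmed" into row 17

# The frame buffer stores one 8-bit color index per pixel, not three RGB values.
frame_buffer = {(3, 5): 17}  # pixel at x=3, y=5 uses color index 17

def pixel_color(x, y):
    """Resolve a pixel to its displayed color: index first, then CLUT row."""
    index = frame_buffer[(x, y)]
    return clut[index]

print(pixel_color(3, 5))  # -> (40, 40, 40)
```

Changing `clut[17]` would instantly change every pixel whose stored index is 17, which is the property the palette-segmentation scheme below exploits.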
`
Dr. Doyle realized that, although such a system was capable of displaying up to 256 simultaneous shades
of gray, the human visual system can only distinguish around 64 shades of gray, at best, and that a good
quality grayscale image could be rendered with as few as 16 gray values. The computer only needs to use
a 6-bit number (ranging from 0-63) in order to store 64 possible values, and only 4 bits for 16 possible
values. It appeared that 2-4 bits of information were being wasted for each pixel on the screen. Dr. Doyle
then realized that the unneeded bits could be used in order to store information about the objects in the
image. The image could be processed so that, for our frog anatomy example, the heart was rendered with
color index values from 0 to 15, the liver used 16 to 31, the brain used values 32 to 47, and so on. Here,
only 4 bits (16 gray values) were being used to render the frog, so the other 4 bits were used to "segment"
the CLUT so that each of 16 anatomically-distinct regions of the frog illustration could "own" a unique
segment of the CLUT.
`
By programming the RGB values in the CLUT appropriately, each of these 16 segments could be made
into a small grayscale palette, ranging from black to white. In the heart, all of the pixels with a value of 0
would be black, 1-14 would be different shades of gray, and 15 would be white. In the liver, color index
16 would be black, 17-30 would be gray, and 31 would be white. Another way of thinking about this is
that the only pixels in the entire image that used color index values 16 to 31 were those found in the liver
region. If one wanted to highlight the liver on the computer screen, then, simply increasing the RGB
values in the "liver segment" of the CLUT (rows 16-31) would result in all of the liver pixels getting
brighter, compared with the rest of the image.
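Continuing the example, segment-wise highlighting can be sketched as follows (values invented for illustration; a full CLUT would hold 16 such segments, only two are built here). Because highlighting is a palette operation, the cost is 16 table writes no matter how many liver pixels are on screen.

```python
# Sketch of highlighting one palette segment, assuming 16 grayscale rows
# per object (heart = rows 0-15, liver = rows 16-31, per the example above).

SEGMENT_SIZE = 16

def grayscale_ramp():
    """One 16-row segment running from black (row 0) to white (row 15)."""
    return [(i * 255 // (SEGMENT_SIZE - 1),) * 3 for i in range(SEGMENT_SIZE)]

# Build a small CLUT where segment 0 is the heart and segment 1 is the liver.
clut = grayscale_ramp() + grayscale_ramp()

def highlight_segment(clut, segment, boost=60):
    """Brighten every CLUT row in one segment; all pixels using those
    indices change at once, with no per-pixel work."""
    start = segment * SEGMENT_SIZE
    for row in range(start, start + SEGMENT_SIZE):
        r, g, b = clut[row]
        clut[row] = (min(r + boost, 255), min(g + boost, 255), min(b + boost, 255))

highlight_segment(clut, 1)   # brighten the "liver" rows 16-31
print(clut[16])              # liver black is lifted to (60, 60, 60)
```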
`
This segmentation of the palette could also be used as the basis for an efficient means of object
identification. Each of the 16 different anatomical regions in the frog illustration could be turned
into an interactively selectable hot spot by writing a program subroutine that would look to see where the
student was clicking, find out the color index of the screen pixel at that location, and then determine
which palette segment that color index fell into. This method of object identification required no
polygons, and therefore no geometric operations, in order to decode an object's identity. The object
identities in the image were encoded into the image's pixel values instead. If the image was stored using
the popular GIF file format, then identification strings for the objects could even be saved as comment
fields in the GIF file. Essentially, each pixel in the image had been turned into an independently
addressable hot spot, and the information needed to identify each hotspot was fully "encapsulated" together
with the image data. This process of object encoding and interactive identification is called "palette
segment indexing," and is commercially referred to as the MetaMAP® process. Dr. Doyle was awarded a
U.S. patent (#4,847,604) for this technology in 1989. [8,9,10]
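Under the assumptions of the frog example (16 gray levels per object, so 16 CLUT rows per segment), the decode step amounts to a single integer division. The segment names and sample indices below are illustrative, not from the patent.

```python
# Sketch of palette-segment-index decoding, following the scheme above.

SEGMENT_SIZE = 16
segment_names = {0: "heart", 1: "liver", 2: "brain"}  # the "object table"

def identify(color_index):
    """Decode an object from a pixel's color index with one integer divide --
    no polygons, no geometry, no dependence on image resolution."""
    return segment_names.get(color_index // SEGMENT_SIZE)

print(identify(23))   # index 23 falls in rows 16-31 -> "liver"
print(identify(7))    # index 7 falls in rows 0-15  -> "heart"
```

Contrast this constant-time table lookup with the per-polygon geometric scan sketched earlier; the difference is the efficiency argument made in the next paragraph.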
`
This scheme of object indexing had other advantages as well. Dr. Doyle realized that it solved his
histology atlas problem of having hundreds of cells on the screen that could be divided into a few
categories. All of the red blood cells, for example, could be rendered with the "red blood cell palette
segment" to allow simple identification. The white blood cells would use a different palette segment. No
matter how many of these cells were on the screen, each category of cells only used a single palette
segment, so memory overhead was decreased. Objects could be identified simply by looking them up in
an object table, so computational overhead was decreased. And each object "class" on the screen owned
its own segment of the palette, so that all of the objects of a given type could be easily highlighted by
simply changing the RGB values in that palette segment. Furthermore, the efficiency of this system was
completely independent of the spatial resolution of the system. If it took 10 milliseconds to identify an
object in a 640 x 480 image, it would take 10 milliseconds to identify an object in an 8000 x 8000
resolution image. Since polygon-based identification methods are intimately linked to the spatial image
resolution, MetaMAP-based object identification becomes more and more advantageous as computer
image resolutions increase, as they inevitably do.
`
On the other hand, as PCs and Macs have become more powerful, the performance degradation that
polygon hotspot encoding imposes has become less and less noticeable for small databases. In fact,
polygon hotspot encoding is still the standard technique used by popular multimedia software authoring
systems, such as Hypercard and Toolbook. The WWW, however, presents new opportunities for
MetaMAP® encoding. Most of the "home pages" browsable on the WWW now include images with
hotspots. The current method for defining and interacting with those hotspots is called ISMAP [ref], is
based upon polygon hotspot encoding, and is very cumbersome. As is discussed below, the same
efficiencies that made the MetaMAP® process so effective on a single platform make it an excellent
alternative for the definition of hotspots on WWW pages.
`
`2.1 MetaMAP® as an improvement to ISMAP
`
Hotspots on images are a major way designers of WWW home pages try to distinguish themselves. The
latest and greatest home pages, such as those at www.whitehouse.gov or www.ibm.com, usually contain
large ISMAPped images [1]. The intent is that users will use these images as menus for browsing the
remaining pages at those Web sites. The ISMAP approach for doing this has several problems, however:

• Unlike the hotwords in the text areas of the WWW pages, there is no indication of hotspots as the
user's mouse cursor passes over them. There is also no display of anchor URLs, therefore the look and
feel of the system is different for the user when browsing these ISMAPs than when they interact with
hotwords in the text.
• ISMAPs are difficult to implement, requiring complex server-side files and operations.
• There is no capability to create nested hotspots (hotspots surrounded by other hotspots).
• Interaction is inefficient due to the need for geometric operations (polygon queries) to determine the
identity of the object selected by each of the user's mouse clicks.
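For comparison, a server-side ISMAP configuration in the NCSA imagemap style looks roughly like the sketch below (the URLs and coordinates are invented for illustration): each hot region must be spelled out as geometry in a separate map file on the server, and every click triggers a server round-trip to resolve it.

```
default /frog/index.html
rect    /frog/heart.html  100,100 140,150
poly    /frog/liver.html  200,100 260,120 230,180
```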
`
`
`
`
MetaMAP® solves these problems in several ways:

• Anchor URLs are dynamically displayed, and hotspots are highlighted, as the cursor passes over
them, so the look and feel are identical for interaction with both image-based and text-based hotspots.
• MetaMAP® is easy to implement for the Web designer, since the image and object names are fully
encapsulated. All decoding happens at the browser.
• Since each pixel is an independently-addressable hotspot, unlimited nesting is possible.
• Greater efficiency is provided, since no geometric operations are needed.
`
The end result of these improvements is that use of MetaMAP® encoding and decoding of interactive
hotspots allows a much greater degree of user friendliness for WWW-based online services than the
currently popular ISMAP approach. Another limitation of the WWW, however, relates to the poor page-
formatting features and limited interactive capabilities of HTML, the language upon which the Web is
based. Nevertheless, the WWW has become a de facto global standard for distributed hypermedia. The
large installed base of MOSAIC users makes it impractical to reengineer HTML to solve these problems.
One solution, therefore, would be to design a system where additional functionality could be added to
WWW browsers simply through the use of "plug-in" embeddable program objects.
`
3. CCI++
`
While researching ways to implement dynamic three-dimensional real-time imaging through the WWW,
the core technology for CCI++ was conceived and developed by the authors (Doyle, Martin & Ang)
during the summer of 1993 at the University of California, San Francisco's Center for Knowledge
Management [2]. The first implementation, in 1993, allowed real-time manipulation of 3D biomedical
image data, driven by a distributed parallel array of powerful computers, embedded within a WWW
document, and accessed via a remote low-end workstation running our enhanced version of NCSA
MOSAIC [14,15].
`
Advanced scanning devices, such as magnetic resonance imaging (MRI) and computer tomography (CT),
have been widely used in the fields of medicine, quality assurance and meteorology. The need to visualize
the resulting data has given rise to a wide variety of volume visualization techniques, and computer graphics
research groups have implemented a number of systems to provide volume visualization (e.g. AVS, ApE,
Sunvision Voxel and 3D Viewnix) [18]. Previously these systems have depended upon specialized
graphics hardware for rendering and significant local secondary storage for the data. The expense of
these requirements has limited the ability of researchers to exchange findings. To overcome the barrier of
cost, and to provide additional means for researchers to exchange and examine three-dimensional volume
data, we have implemented a distributed volume visualization tool for general purpose hardware, and we
have further integrated that visualization service with the distributed hypermedia system provided by the
World-Wide-Web.
`
Our distributed volume visualization tool, VIS, utilizes a pool of general purpose workstations to generate
three dimensional representations of volume data. The VIS tool provides integrated load-balancing across
any number of heterogeneous UNIX workstations (e.g. SGI, Sun, DEC, etc.), taking advantage of the
unused cycles that are generally available in academic and research environments. In addition, VIS
supports specialized graphics hardware (e.g. the RealityEngine from Silicon Graphics), when available,
for real-time visualization.
`
Distributing information that includes volume data requires the integration of visualization with a
document delivery mechanism. We have integrated VIS and volume data into the WWW, taking advantage of
the client-server architecture of WWW and its ability to access hypertext documents stored anywhere on
the Internet. We have enhanced the capabilities of the most popular WWW client, Mosaic, from the
`
`
`
`
Figure 1: A stereo-pair illustration of interactive real-time control, embedded within an NCSA MOSAIC document,
of a 3-dimensional volume reconstruction of human embryonic anatomy [7,11], showing a 7-week-old human
embryo found in the Carnegie Collection, in 1997 serial cross sections. Realtime interactive volume visualization
was supported by a farm of networked graphics workstations. The stereo pair was created by "cloning" the Mosaic
window and then interactively rotating the embryo 6° to the right in the cloned window, using the control panel
seen to the right of the Mosaic windows. This technology was developed by Doyle, Martin & Ang at the Center for
Knowledge Management at the University of California, San Francisco, and was demonstrated there in November,
1993.
`
National Center for Supercomputing Applications (NCSA), to support volume data, and have defined an
inter-client protocol for communication between VIS and Mosaic for volume visualization. It should be
noted that other types of interactive applications could be "embedded" within HTML documents as well.
Our approach can be generalized to allow the implementation of object linking and embedding over the
Internet, similar to the features that OLE 2.0 provides users of Microsoft Windows on an individual
machine. [16] We have begun working on several extensions and improvements on this software system:
`
Since the VIS display, when mapped into the Mosaic window, can be thought of as similar to an
interactive movie which is sent across the net, one way to reduce network transfer time would be to
compress the data before delivery. We propose to use the MPEG compression technique, which will not
only perform redundancy reduction, but also a quality-adjustable entropy reduction. [19] Furthermore, the
MPEG algorithm performs interframe, besides intraframe, compression. Consequently, only the
compressed difference between the current and the last frames is shipped to the client.
`
`
`
`
The protocols we have implemented are simple and general enough to allow most image-producing
programs to be modified to display in the Mosaic document page. We have successfully incorporated an
in-house CAD model rendering program into Mosaic.
`
With multiple users, the VIS/Mosaic distributed visualization system will need to better manage the server
resources, since multiple users utilizing the same computational servers will slow the servers down
significantly. The proposed solution is for the server resource manager to allocate servers per VIS client
request only if those servers are not overloaded. Otherwise, negotiation between the resource manager
and the VIS client will be necessary, and perhaps the resource manager will allocate less busy alternatives
to the client. Since the load distributing algorithm in the current VIS implementation is not an optimal
load distribution solution, we expect to see some improvement in future implementations, which
will use sender-initiated algorithms [12].
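The proposed allocation policy might be sketched as follows. The threshold, server names, and load figures are invented for illustration and are not part of the VIS implementation.

```python
# Hypothetical sketch of the resource-manager policy described above:
# grant a client's request only from servers below a load threshold,
# preferring the least busy; otherwise signal that negotiation is needed.

def allocate(servers, needed, max_load=0.8):
    """servers: dict of server name -> current load (0.0-1.0).
    Returns the `needed` least-loaded non-overloaded servers, or None."""
    candidates = sorted(
        (name for name, load in servers.items() if load < max_load),
        key=lambda name: servers[name],
    )
    if len(candidates) < needed:
        return None  # overloaded: fall back to negotiation with the client
    return candidates[:needed]

pool = {"sgi1": 0.2, "sun1": 0.9, "dec1": 0.5}
print(allocate(pool, 2))  # -> ['sgi1', 'dec1']
```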
`
Our system takes the technology of networked multimedia systems (especially the World Wide Web) a step
further by proving the feasibility of adding new interactive data types to both the WWW servers and
clients. The addition of the 3D-volume-data object type, in the form of an HDF file, to the WWW has been
welcomed by many medical researchers, for it is now possible for them to view volume datasets without a
high-cost workstation. Furthermore, these visualizations can be accessed via the WWW, through
hypertext and hypergraphics links within an HTML page. Future implementations of this approach using
other types of embedded applications will allow the creation of a new paradigm for the online distribution
of both passively-browsable multimedia information and interactive high-end applications via the Internet.
`
This software represents the first time interactive program objects (which have since been coined
"Weblets") have been embedded within distributed hypermedia documents. The resulting system
provides the user of the enhanced MOSAIC client with the ability to interactively control vast remote
computational server resources from a low-end (sub-$5000) client machine, connected via the Internet.
This also allows the creation of compound documents which combine text and graphic elements, as well
as fully interactive plug-in applications embedded inline (using the jargon of the WWW), into seamless
units, with the same look and feel on Unix, Mac and DOS client platforms.
`
We have created a standard application programming interface for this technology, called CCI++, which
has been released as an "Internet Draft Standard" (http://visembryo.ucsf.edu/eolas/ccipp.html). This API
represents an extension to the University of Illinois' Common Client Interface (CCI), which specifies how
external applications can "drive" MOSAIC. WWW browsers supporting the CCI++ API will provide
users with the type of functionality Microsoft's object linking and embedding (OLE) API provides to MS
Windows users on a single machine, but extending that functionality to easily link objects and documents
over open-standard wide area networks, such as the Internet.
`
This technology also represents the first public demonstration of a working distributed-object compound-
document model using the Internet. The "distributed-object" term refers to the nature of the server-side
computation driving the embedded application objects. In conventional compound-document client/server
systems a two-tier structure is standard. The embedded objects in such a system are hosted either by the
client machine (first tier) or by a second tier of server computers, where each embedded object runs on a
single server machine. A distributed-object system, on the other hand, allows each of the embedded
objects to be computed more efficiently by distributing the computational load across a large number of
server machines [5]. Such systems had been generally proposed by a number of manufacturers, but none
had been implemented prior to the invention of CCI++.
`
The CCI++ distributed-object system allows the design of three-tier client/server systems, where
server-side logic can be coordinated at the second tier, while computational efficiency can be achieved at
the third tier. An additional benefit of the CCI++ approach is that the look and feel of the compound
documents is identical across Unix, Mac and Windows client platforms, and the various machines
involved in the system only need to be connected to the Internet in order to intercommunicate.
`
`
`
`
`4. Combination of both technologies into a single system
`
One of the major advantages of the CCI++ API is that it can be used to embed full applications, with
only minor modifications, within WWW documents. This means the embedded application's entire
display, graphical user interface (GUI) and all, would appear within the compound document. Essentially,
the application is converted into an "interactive movie" where the embedded application is hosted on a
remote server. The server sends a compressed video stream to the MOSAIC client, and the client sends
only selection messages back to the server. To accomplish this, a flexible means to decode the user's GUI
interactions at the client side is needed. Since the system needs to be able to deal with any arbitrary GUI,
using polygonal hotspots to represent the GUI elements would be cumbersome, inefficient and limiting.
Using the MetaMAP® process solves this problem in an efficient and flexible way, with palette segment
indexing encoding hotspots on the digital "movie" video stream. This will allow us to embed any existing
application within a WWW document with a minimum of effort. Regardless of what operating system or
computer system the user has, all he or she will need in order to run any computer application will be a
CCI++-enabled WWW browser and a connection to the Internet. Such a technology has the power to
redefine the paradigm of personal computing in the coming years.
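Putting the two technologies together, the client-side decode for such an embedded "interactive movie" might look like the sketch below. The frame layout, the segment-to-action table, and the message tuple are all illustrative assumptions, not an API from the paper.

```python
# Sketch of MetaMAP®-style decoding applied to the remote application's
# video stream: a click is resolved locally to a GUI element, and only a
# small selection message is sent back to the server.

SEGMENT_SIZE = 16

def handle_click(x, y, frame_indices, width, gui_actions):
    """Decode which GUI element of the embedded app a click landed on,
    then return the selection message to send back to the remote server."""
    color_index = frame_indices[y * width + x]
    element = gui_actions.get(color_index // SEGMENT_SIZE)
    return ("select", element) if element else None

# One 2x2 "frame": left column uses segment 0 (rotate), right uses segment 1 (zoom).
frame = [3, 19, 3, 19]
actions = {0: "rotate", 1: "zoom"}
print(handle_click(1, 0, frame, 2, actions))  # -> ('select', 'zoom')
```

Because the hotspot information rides inside the pixel values of the stream itself, the client needs no per-application geometry, which is what makes the approach workable for any arbitrary GUI.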
`
`References:
`
1) Andreessen, M., "NCSA Mosaic Technical Summary", from FTP site ftp.ncsa.uiuc.edu, 8 May
(1993).
`
2) Ang, C.S., D.C. Martin and M.D. Doyle: "Integrated Control of Distributed Volume Visualization
Through the World Wide Web," Proc. Visualization '94, IEEE Press, (1994).
`
`3) Ayre, R., and Reichard, K. : "Web Browsers: The Web Untangled," PC Magazine, 14/3:173-196,
`(1995)
`
4) Berners-Lee, T., et al.: "The World Wide Web," Communications of the ACM, 37/8: 76-82, (1994).
`
5) Bloomer, J.: "Distributed Computing and the OSF/DCE," Dr. Dobb's J., 20/2:18-32, (1995)
`
6) Brinkley, J.F., Eno, K., Sundsten, J.W., "Knowledge-based client-server approach to structural
information retrieval: the Digital Anatomist Browser", Computer Methods and Programs in Biomedicine,
Vol. 40, No. 2, June, 131-145 (1993).
`
7) Doyle, M.D., Raju, R., Ang, C., Klein, G., Goshtasby, A., and DeFanti, T.: The Visible Embryo:
distributed (workstation/supercomputer) interactive visualization of high-density 3D image data of
embryonic anatomy, presented at the High Performance Computing and Communication Showcase,
SIGGRAPH '92, the annual meeting of the Association for Computing Machinery's Special Interest Group
for Graphics, Chicago. (August, 1992).
`
`8) Doyle, M.D. and Sadler, L.L.: Health Informatics: MetaMap indexing and retrieval of image-based
`data for an institutional PACS system, presented at the 1991 Forum in Cellular and Organ Biology,
`American Association of Anatomists, Chicago, (April 1991).
`
`9) Doyle, M.D.: Palette Segmentation Indexing: The MetaMap Process, SIGBIO Newsletter (The Journal
`of the ACM Special Interest Group for Biological Computing), 12/1, (1992).
`
`
`
`
10) Doyle, M.D.: A New Method for Identifying Features of an Image on a Digital Video Display,
Biostereometrics Technology and Applications, SPIE Press (1991).
`
11) Doyle, M.D., C. Ang, R. Raju, G. Klein, B.S. Williams, T. DeFanti, A. Goshtasby, R. Grzesczuk, and
A. Noe: "Processing cross-sectional image data for reconstruction of human developmental anatomy from
museum specimens," SIGBIO Newsletter (The Journal of the ACM Special Interest Group for Biological
Computing), 13/1 (1993).
`
12) Giertsen, C. and Petersen, J., "Parallel Volume Rendering on a Network of Workstations", IEEE
Computer Graphics and Applications, 16-23, November (1993).
`
`13) Green, E. L. Genetics and Probability in animal breeding experiments. Oxford University Press,
`(1981).
`
`14) http://visembryo.ucsf.edu/, "The Visible Embryo Project Home Page," (1994)
`
15) http://visembryo.ucsf.edu/eolas/ccipp.html, "CCI++/1.0 Internet Draft Standard," (1995)
`
16) LaPlante, J.: "Building an OLE Server Using Visual C++ 2.0," Dr. Dobb's J., 20/2:82-88, (1995)
`
`17) McConathy, D.A. and Doyle, M.D. Interactive Displays in Medical Art, chapter 6 in: Pictorial
`Communication in Virtual and Real Environments, Stephen R. Ellis, Editor, Pub: Taylor & Francis,
`London. (1991).
`
18) Udupa, J.K.: Course notes: Visualization of biomedical data: principles and algorithms, 1st Conf. on
Visualization in Biomed. Computing, IEEE Computer Society Press, pp. 14-16 (1990).
`
`19) Vaaben, J. and Niss, B. Photos of the future. Iris Universe 20: 62-66 (1992).
`
`20) Wolf, G.: "The (Second Phase of the) Revolution Has Begun," Wired, October, 1994, 116-154,
`(1994)
`
`
`