A Hybrid User Interface for Manipulation of Volumetric Medical Data

Alexander Bornik∗   Reinhard Beichel   Ernst Kruijff   Bernhard Reitinger   Dieter Schmalstieg

Institute for Computer Graphics and Vision
Graz University of Technology
ABSTRACT

This paper presents a novel system for interactive visualization and manipulation of medical datasets for surgery planning based on a hybrid VR / Tablet PC user interface. The goal of the system is to facilitate efficient visual inspection and correction of surface models generated by automated segmentation algorithms based on x-ray computed tomography scans, needed for planning surgical resections of liver tumors. Factors like the quality of the visualization, the nature of the dataset and interaction efficiency strongly influence system design decisions, in particular the design of the user interface, input devices and interaction techniques, leading to a hybrid setup. Finally, a user study is presented, which characterizes the system in terms of method efficiency and usability.

CR Categories: C.2.4 [Computer-Communication Networks]: Distributed Systems—Distributed Applications; I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—Boundary Representations; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Virtual Reality; J.3.2 [Medical information systems]: Project and People Management—Life Cycle
1 INTRODUCTION
Imaging modalities like X-ray computed tomography (CT) or magnetic resonance tomography (MR) are important information sources for surgical planning. Proper planning requires physicians to understand the 3D relations within the dataset. For example, the resection of a liver tumor requires understanding the arrangement of liver tissue, vasculature and tumor. Looking at individual 2D slices of CT data using conventional radiological workstation software makes this task difficult. Our aim was therefore to build a system for liver surgery planning using Virtual Reality (VR) techniques, capable of supporting radiologists and surgeons by providing visualization of 3D medical models and tools for computer-assisted planning of the surgical intervention.

Typically, the first step in liver surgery planning is segmentation of the individual structures needed to plan the surgical intervention. This task can be done manually, but this is tedious and time consuming, since it involves drawing contours on several hundred slices.

However, a fully automated segmentation of the liver is difficult to achieve, because the shape of the human liver varies highly. This fact makes it almost impossible to use a priori shape knowledge for the design of a segmentation algorithm. In addition, the gray-value appearance can show large variations due to pathological changes of the liver, which can cause problems in distinguishing the liver from adjacent organs with similar gray-values (e.g. heart or colon). Furthermore, tumors located close to the liver boundary might be excluded from the segmentation.

∗ e-mail: bornik@icg.tu-graz.ac.at
In advanced automatic segmentation algorithms, the segmentation problems are usually limited to local errors, while most areas of the liver boundary can be found correctly by the automatic algorithms. A radiologist's task can therefore be simplified from manual contour specification to interactively correcting errors in segmented datasets. This segmentation refinement approach is expected to be much less time consuming in most cases.

At first glance, 3D segmentation refinement tools seem to call for VR techniques: stereoscopic visualization provides good 3D perception of the dataset, whereas tracked input devices allow for direct 3D interaction with the dataset. However, 2D screens have a much higher resolution than their 3D counterparts, and an inexpensive optical mouse easily outperforms high-end tracking devices in terms of accuracy when precision input in 2D is required. In the medical field, where imprecision may have dire consequences, the virtues of established 2D techniques should not be discarded lightly. Moreover, physicians are used to desktop interfaces, and in particular for system control, VR interfaces are not yet mature.

These considerations led us to the design of a hybrid user interface that combines multiple display and interaction techniques in order to match the work processes at hand. The objective of the hybrid user interface is to pair 3D perception and direct 3D interaction with 2D system control and precise 2D interaction. For such an interface, it is important that the flow of action when working between 2D and 3D visualization and interaction techniques is not disturbed. Both the different views and the interaction with the data need to be handled coherently.

To ease the transition between the interface modalities, a hybrid input device, which can be conveniently used in all 2D and 3D tasks, was designed and developed. A focus was put on analyzing the differences between action performance in the 2D and 3D domain, leading to a more extensive human factors study. This paper presents results on the complexity of tasks and their associated tools, and the duration of usage in relation to ergonomics.
2 RELATED WORK
The three-dimensional nature of surgery planning and surgery simulation has led researchers to the use of VR techniques. An overview of VR systems and human interface issues in the medical context can be found in [21] and [9]. Liver surgery planning in particular has been addressed by a number of groups, although VR aspects are rudimentary in most projects.

The German cancer research center (DKFZ) located in Heidelberg has developed a computer-aided planning system for liver surgery [20]. Research was focused on medical image processing tasks such as segmentation. There have also been attempts to deal with segmentation errors, described in [16], although the VR aspects of the system are limited, since all image segmentation and planning procedures are performed on a normal desktop PC.

The Center for Medical Diagnostic Systems and Visualization (MeVis) in Bremen has developed a desktop-based liver surgery planning system. The main research focus was segmentation and modeling of liver structures [15].

Researchers at INRIA have addressed several aspects of liver surgery planning, such as the segmentation of the liver surface using deformable surface models. Later the group worked on surgical simulation with realistic liver tissue models using force feedback input devices [8]. The virtual liver surgery planning system [3], which is the foundation of the work presented in this paper, has been developed at Graz University of Technology since 2000. An earlier version of the VR-based segmentation refinement toolset was described in [4].

Segmentation refinement is a rather new field and there are hardly any publications dealing directly with it. 2D segmentation can be trivially implemented as a painting tool, but this approach is ineffective for large 3D datasets. Interactive segmentation techniques can be seen as closely related, although they address the segmentation problem and not the correction of erroneous segmentations. An interactive segmentation approach named Live-Wire was introduced in [1]. It reduces the amount of user interaction required to segment the object boundary. An extension to 3D data can be found in [10]. To our knowledge, the desktop-based interactive approach described in [19] is the only method based on the segmentation refinement principle. An application in the context of data preparation for liver surgery was reported in [2].

3D interaction with medical volumetric datasets is a recurring topic in VR. Some input devices have been developed that specifically focus on exploring medical data, including Hinckley's prop-based system [18] or the Fakespace CubicMouse [13].

Hybrid user interfaces in general are an emerging research field [11]. Most developments focus on combining different visual displays, as in mixed reality setups [23]. Only a few have focused on truly hybrid input, such as the Virtual Tricorder [27], the Pick-and-Drop approach by Rekimoto [22], and some tangible user interfaces [26]. Handhelds and touch screens have been integrated into immersive environments for interaction purposes, as in [12], [27], [14] and [6]. These handhelds are mostly used for GUI-style control elements (system control); little work has been done on direct manipulation of immersive data. Similar to handhelds, tablet interfaces have been designed for the display of 2D data and menus on a flat surface, for example [25]. Finally, a frequent approach to system control in immersive environments is pen-like devices, such as the Stylus products from Polhemus or Intersense. For an extensive overview of pen devices, please refer to [5].
3 HYBRID USER INTERFACE
Figure 1: Hybrid Setup: camera of the optical tracking system (1), Tablet PC and Eye of Ra (2), stereoscopic large screen projection system (3).

Figure 2: Desktop Setup: Tablet PC with conventional 2D user interface for system control; 2D view for viewing and interaction with the dataset using the Eye of Ra input device, which behaves similarly to a conventional stylus in the desktop setup.
3.1 Hardware Setup
The hardware setup consists of two main parts, the VR system and the 2D system. The VR system's display is a large stereo wall (stereoscopic back projection screen, 375 cm diameter, 1280x1024 pixels) viewed with shutter glasses. A Barco Galaxy 3-chip DLP projector provides high quality active stereo rendering with very good channel separation, which is important when displaying virtual objects close to the user. The stereo wall is driven by a PC workstation (dual 3 GHz Xeon, NVidia Quadro FX 3400). Optical tracking of the user's head and the input device is done using a 4-camera infrared system from Advanced Realtime Tracking.

The desktop system is a Tablet PC (Toshiba Portégé M200, 1.8 GHz CPU, GeForce Go 5200 graphics card, 12-inch TFT touchscreen at 1400x1050 pixels). The Tablet PC is placed on a desk approximately 2 meters in front of the screen, tilted at approximately 60 degrees for convenient readability. The user is seated at the desk so that both stereo wall and Tablet PC are within the field of view, as shown in Figure 1.
3.2 Hybrid Interaction
When referring to hybrid interaction, it is important to differentiate between two approaches: serial and parallel integration. Using serial integration, 2D and 3D methods are used in a sequential order, one after the other. In parallel integration, 2D methods are quasi embedded and used directly to control and adapt the data in the immersive environment. In the virtual liver planning system, the interaction makes use of serial integration, in which the 2D and the VR system are two separate systems that can be synchronized.

The combination of 2D and 3D interactions can have considerable advantages. 2D actions can be performed with relatively high precision, whereas 3D actions are executed at high speed in specific task situations. As such, a clear speed-accuracy trade-off can be noticed, depending on the task at hand. In that respect, the virtual liver planning application contains actions that are inherently 2D (like contour editing or point-based segmentation refinement) or 3D (including visual inspection of mixed data, or approximation of surfaces).
The main factor is the flow of action at macro and micro level, for which the mapping and mode changes of functionality, the synchronization of desktop and spatial environments, and the focus of attention play a key role.

At macro level, the performance of the actions is influenced by the work process preferences of the end users: the radiologist prefers the desktop, whereas the surgeon can work better within a spatial setting. In order to access the functionality, an effective system control method is needed that allows consistent interaction in both the desktop and the spatial environment. Redundant mapping of functionality is deliberately chosen in order to support the work preferences of the end users: all actions can be performed in both the desktop and the spatial environment.
At micro level, one of the issues that affect the flow of action in an application is the effectiveness of performing mode changes. The large number of functions cannot be accessed effectively by any of the currently available 3D system control techniques. Therefore, the only practical way to change modes is to access most of the functions through a standard GUI-style menu on the desktop screen. However, to support frequent interaction loops in the immersive environment, some functions are mapped to the input device. Specifically, manipulation actions are mixed with visual inspection actions (e.g. navigation) at high regularity. Therefore, both CT data movement and general camera movement actions are directly mapped to two buttons on the input device.
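The paper does not give the event handling code; the following C++ fragment is only a hypothetical sketch of such a button-to-mode mapping, with invented names (Button, Mode, InteractionState) standing in for the system's actual API:

    #include <cstdio>

    enum class Button { Primary, Secondary, ScrollWheel };
    enum class Mode { Idle, MoveCuttingPlane, MoveCamera };

    struct InteractionState {
        Mode mode = Mode::Idle;

        void onButton(Button b, bool pressed) {
            if (!pressed) { mode = Mode::Idle; return; }   // releasing ends the interaction
            switch (b) {
                case Button::Primary:   mode = Mode::MoveCuttingPlane; break;
                case Button::Secondary: mode = Mode::MoveCamera;       break;
                default:                mode = Mode::Idle;             break;
            }
        }

        // Called for every tracker update of the 6-DOF device pose.
        void onDeviceMoved(const float pos[3]) {
            if (mode == Mode::MoveCuttingPlane)
                std::printf("move CT plane to (%.1f %.1f %.1f)\n", pos[0], pos[1], pos[2]);
            else if (mode == Mode::MoveCamera)
                std::printf("move camera to (%.1f %.1f %.1f)\n", pos[0], pos[1], pos[2]);
        }
    };

    int main() {
        InteractionState state;
        const float pose[3] = {10.0f, 20.0f, 5.0f};
        state.onButton(Button::Primary, true);    // press: enter cutting plane mode
        state.onDeviceMoved(pose);
        state.onButton(Button::Primary, false);   // release: back to idle
        return 0;
    }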
Due to the different locations of the desktop display and the stereo wall in relation to the user, switching between desktop and spatial interaction (for example during a mode change) necessarily results in a change of visual focus. Best practice demands that head rotation and focal plane difference are as limited as possible, without the desktop display occluding the stereo wall.

In the current hardware setup, the Tablet PC is placed on a table and tilted towards the user. The user can conveniently use the touch screen for selecting menu items or manipulating objects. The user's arm may be placed on the table to reduce fatigue. The table is placed at a specific distance from the stereo wall, so that stereo objects are viewed in a depth plane that seems to be above or just behind the visuals viewed on the desktop screen. Consequently, both the angular movements of the head and the change of focus between depth planes are limited. There are still field of view differences. Combining a smaller stereo wall with a larger touch screen in an L-shaped configuration might alleviate this issue.
Performing the different actions in desktop or spatial mode necessarily leads to different kinds of input. In the spatial environment, most actions are coarse, mixed with some more fine-grained actions, whereas at the desktop, all actions are fine-grained. The different kinds of performance, and the necessity to use a pen-like device to control the touch screen, lead to different kinds of dynamic coupling between hand and device. This is mostly caused by the different grips on the device that match the precision needed to perform the task, as will be illuminated in the next section.
3.3 Hybrid Input Device
For the design of the new device, a close analysis of the tasks and the associated hand-device couplings and movements was made. In initial tests, a specific device for two-handed interaction was not found necessary, because all tasks could be performed well with one hand. However, we observed that the device would need to allow for both power and precision grasps. Rotational movements for placing the clipping plane or the CT data plane are generally performed at high speed and lower accuracy (a sweeping task), whereas other tools like deformation demand lower speed and higher accuracy, and are better performed in a precision grip.

To get an idea of a basic form for the device, we tried to match the movement and rotation patterns to existing devices. It was found that the hand activity partially matches a flying mouse and partially a pen-like device. The pen characteristics were considered important, since the device needed to function as a pen input device for the Tablet PC.

This resulted in an attempt to merge flying mouse and pen shapes into one single design, which allows for unobtrusive switching between power and precision grasps. From clay models, we arrived at the shape shown in Figure 3. Due to the visual shape of the device, it was nicknamed Eye of Ra. The form needed to be large enough to enclose the electronics, which were taken from an EZ5 Optical Pen Mouse, which has a very small circuit board. The wireless connection was tested and found suitable when combined with a longer antenna in the device. The final device was made from carbon and fiberglass mats layered with epoxy glue, which results in a lightweight yet sturdy surface. The original button casing from the EZ5 was directly included in the design, in order to make a stable connection between device casing and electronics. Finally, the tip of the electromagnetic pen for the Tablet PC was embedded at the front of the device, and retro-reflective markers required for tracking were rigidly mounted on the body of the device.
Figure 3: Eye of Ra - Input device for the hybrid user interface: The tip contains a conventional Tablet PC stylus tip for 2D interaction. Two buttons and a scroll wheel are used to trigger 3D interaction tasks. It is equipped with retro-reflective targets for optical tracking. The device is connected to the Tablet PC via a cable. Note the two different ways of grasping the device, the power grasp on the left and the precision grasp on the right.
The shape of the hybrid interaction device allows for easy switching between flying mouse and pen mode. By pronating the forearm and slightly changing the position of the fingers (mostly moving the thumb), the user can easily change between the different modes. This allows for dynamic coupling between device and hand without the user actively noticing it.
3.4 Software
Following the overall hybrid approach, the software of the system consists of two collaborating applications, a desktop application and a VR application. Both parts of the system are closely coupled and share a large portion of the code. The dataset is visualized on both systems simultaneously. Interaction with the data can take place in either application, while system control tasks like loading the datasets or setting parameters are limited to the 2D menu system on the Tablet PC. All user interaction is performed using the Eye of Ra input device described in Section 3.3, which acts like a normal Tablet PC stylus in the desktop application, while 6-DOF tracking, a scroll wheel and buttons on the device are used to trigger input in the VR application.

Both applications involve 3D rendering based on Coin (http://www.coin3d.org), a scene graph library compatible with the Open Inventor standard.
The desktop application uses the Qt framework (http://www.trolltech.no/products/qt) for graphical user interface programming. The VR application is based on the Studierstube [24] VR/AR library, which builds on top of Coin and provides handling of VR devices such as the stereo wall and tracking, as well as convenient programming of 3D interaction with the scene graph.
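For readers unfamiliar with the toolkit, the following minimal example shows generic Coin / Open Inventor usage, assembling a tiny scene graph and writing it in the Inventor file format; it is standard library usage for illustration, not code taken from the planning system:

    // Minimal Coin / Open Inventor example: build a scene graph and serialize it.
    #include <Inventor/SoDB.h>
    #include <Inventor/actions/SoWriteAction.h>
    #include <Inventor/nodes/SoMaterial.h>
    #include <Inventor/nodes/SoSeparator.h>
    #include <Inventor/nodes/SoSphere.h>

    int main() {
        SoDB::init();                                  // initialize the scene graph database

        SoSeparator* root = new SoSeparator;
        root->ref();                                   // scene graph nodes are reference counted

        SoMaterial* material = new SoMaterial;
        material->diffuseColor.setValue(0.8f, 0.3f, 0.3f);
        root->addChild(material);
        root->addChild(new SoSphere);                  // placeholder geometry

        SoWriteAction writer;                          // traverses the graph and writes .iv output
        writer.apply(root);

        root->unref();
        return 0;
    }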
Figure 4: Software Architecture: The two separate applications share data via a network connection based on a distributed scene graph.
Synchronization between the desktop and VR application is based on a distributed shared scene graph extension called DIV [17]. The two applications share synchronized copies of the scene graph, which stores all geometric and application relevant data. Modifications to one copy of the scene graph will be propagated to the other copy, and vice versa. The synchronization happens automatically within the scene graph library and need not be managed explicitly by the application programmer. Figure 4 gives an overview of the system architecture of the hybrid system.
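The paper does not include DIV code; the fragment below is only a minimal C++ sketch of the underlying idea, replaying serialized field changes on a second scene graph copy, with invented types (FieldUpdate, SharedGraph) rather than the actual DIV API:

    // Sketch of distributed scene graph synchronization (not the actual DIV API).
    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>

    struct FieldUpdate {            // one serialized field change
        std::string nodePath;       // identifies the node in the scene graph
        std::string fieldName;      // e.g. "translation"
        std::string value;          // serialized field value
    };

    class SharedGraph {
    public:
        // Register a callback invoked for every local modification,
        // so changes can be shipped to the remote copy.
        void onLocalChange(std::function<void(const FieldUpdate&)> cb) { notify_ = std::move(cb); }

        // Local edit: store the value and notify the peer.
        void setField(const std::string& node, const std::string& field, const std::string& value) {
            fields_[node + "." + field] = value;
            if (notify_) notify_({node, field, value});
        }

        // Remote edit: apply without re-notifying (avoids echo loops).
        void applyRemote(const FieldUpdate& u) { fields_[u.nodePath + "." + u.fieldName] = u.value; }

        void dump(const std::string& label) const {
            std::cout << label << ":\n";
            for (const auto& kv : fields_) std::cout << "  " << kv.first << " = " << kv.second << "\n";
        }

    private:
        std::map<std::string, std::string> fields_;
        std::function<void(const FieldUpdate&)> notify_;
    };

    int main() {
        SharedGraph desktop, vr;
        // In the real system the updates travel over the network; here they are forwarded directly.
        desktop.onLocalChange([&](const FieldUpdate& u) { vr.applyRemote(u); });
        vr.onLocalChange([&](const FieldUpdate& u) { desktop.applyRemote(u); });

        desktop.setField("LiverModel/Transform", "rotation", "0 1 0 0.3");  // edited on the Tablet PC
        vr.setField("CuttingPlane", "position", "12.5 40.0 8.0");           // edited in the VR system

        desktop.dump("desktop copy");
        vr.dump("VR copy");
        return 0;
    }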
4 INTERACTION TOOLS

There are three main tasks necessary to improve a segmented surface model:

• Model Inspection - The user tries to locate errors in the surface model by comparing raw CT data to the boundary of the surface.

• Error Marking - Regions of the surface model that were found erroneous in the inspection step are marked for further processing. This allows restricting the following correction step to the erroneous regions, and avoids accidentally modifying correct regions.

• Error Correction - Marked regions are corrected using special correction tools based on mesh deformation.

Usually these tasks are not performed in a strict order. For example, model inspection is repeatedly required throughout the correction task. Individual corrected parts should be marked as final after successful correction.

The following sections give an overview of the functionality of the system and the tools provided, mostly from the user's point of view, while technical details of the implementation of the segmentation refinement tools are described in Section 5.2.

4.1 Model Inspection
The first step, model inspection, can be performed on the Tablet PC screen using the Eye of Ra's tip for interaction with the rotation, movement and scaling controls in the 2D user interface. In VR the model can be moved and rotated by pressing the scroll-wheel button on the input device, which fixes the model to the input device while moving the device. The model navigation feature is bound to the scroll wheel and is permanently available.
CT data is visualized on a 2D cutting plane that can be arbitrarily placed inside the CT scan volume. On the Tablet PC the plane can be manipulated by dragging 3D control widgets provided by the scene graph library. In the VR system the cutting plane, visualized as a rectangle attached to the input device, can be set by dragging in 3D with a specific button pressed. Like the model transformation, the cutting plane manipulation feature is permanently available.
The user may also configure most visualization parameters. For example, they can choose to show the surface model in any combination of wireframe, Gouraud shading and texturing with the CT data. An optional plane clipping the 3D model slightly above the cutting plane allows inspecting the surface model near the cutting plane more efficiently.
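As a small illustration of this clipping behavior, the sketch below derives such a clip plane by offsetting the cutting plane along its normal; the Plane type and the offset value are assumptions for illustration, not taken from the system:

    #include <cstdio>

    struct Plane {      // n·p = d with unit normal n
        float nx, ny, nz;
        float d;
    };

    // Shift the plane along its (unit) normal so that geometry just above the CT
    // slice is clipped away and the surface near the slice stays visible.
    Plane clipPlaneAbove(const Plane& cutting, float offset) {
        return {cutting.nx, cutting.ny, cutting.nz, cutting.d + offset};
    }

    int main() {
        const Plane cutting{0.0f, 0.0f, 1.0f, 42.0f};      // axial slice at z = 42
        const Plane clip = clipPlaneAbove(cutting, 2.0f);  // e.g. 2 mm above the slice
        std::printf("clip plane: n=(%g %g %g) d=%g\n", clip.nx, clip.ny, clip.nz, clip.d);
        return 0;
    }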
4.2 Error Marking
For efficient organization of the correction procedure, the user may mark regions according to the type of observed error by painting a "traffic light" color code - green, yellow or red. Green indicates that a portion of the surface is correct and will be left unchanged by subsequent correction operations. Yellow indicates that the surface is mostly correct but may be moderately altered from its current state as needed, for example to smooth out differences at region boundaries. Finally, red indicates that the surface is incorrect and may be drastically altered by the error correction tools. The marking is done by painting on the surface, either on the Tablet PC or in the VR environment, with a brush of adjustable size.
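A minimal sketch of such brush-based marking is given below; the flat vertex array and the per-vertex label are illustrative assumptions about the data layout, not the system's actual structures:

    #include <cmath>
    #include <vector>

    enum class Mark { Green, Yellow, Red };   // traffic-light color code

    struct Vertex {
        float x, y, z;
        Mark mark = Mark::Yellow;             // yellow is the default state
    };

    // Assign 'mark' to every vertex within 'radius' of the brush center.
    void paint(std::vector<Vertex>& mesh,
               float cx, float cy, float cz,
               float radius, Mark mark) {
        const float r2 = radius * radius;
        for (Vertex& v : mesh) {
            const float dx = v.x - cx, dy = v.y - cy, dz = v.z - cz;
            if (dx * dx + dy * dy + dz * dz <= r2)
                v.mark = mark;
        }
    }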
4.3 Error Correction
The presented system allows for correction of segmentation errors using a number of different tools for interactively deforming the surface representation of the object.
The sphere deformation tool consists of a sphere of user-defined radius which can be interactively placed in the dataset. In the VR system it is moved into place by moving the input device, while in the desktop setup the tool position is calculated as the position on the cutting plane corresponding to the 2D position of the cursor. Triggering sphere deformation causes object surface parts located within the sphere shape to be successively moved out of the sphere on the shortest possible path. Therefore, placing the sphere tool so that most of it is outside the object causes the surface to move inwards, while outward movement is achieved by placing the sphere mostly inside the object. Moving the input device while the deformation tool is active causes the tool to respond, just as if one were deforming a piece of clay with a real-world modeling tool.
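The geometric effect can be sketched as follows: vertices inside the sphere are projected to its surface along the radial (shortest) path. In the actual system the displacement is realized through forces on the deformable mesh (Section 5.2); the direct projection below is a simplification with an assumed vertex representation:

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    void pushOutOfSphere(std::vector<Vec3>& vertices,
                         const Vec3& center, float radius) {
        for (Vec3& v : vertices) {
            const float dx = v.x - center.x, dy = v.y - center.y, dz = v.z - center.z;
            const float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
            if (dist >= radius || dist == 0.0f) continue;   // outside, or exactly at the center
            const float s = radius / dist;                  // scale factor to reach the sphere surface
            v.x = center.x + dx * s;
            v.y = center.y + dy * s;
            v.z = center.z + dz * s;
        }
    }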
The plane deformation tool is much like the sphere deformation tool, except that its behavior is similar to modeling using a scraper. It can be used to flatten the object's surface. In the VR system the position and orientation of the tool are directly determined by the input device, while the cutting plane and the pen stroke direction define the tool's behavior in the desktop setup.

Fine-grained deformation can be achieved using the point dragging tool, which can be used to pick individual surface vertices and move them directly to the desired location, while the surface deforms like a rubber sheet in the vicinity. Figure 5 shows a screenshot of the tools described above.
Figure 5: Interaction screenshot: Deformation of the model using the sphere deformation tool on the Tablet PC. The correctness of the model can be verified using the cutting plane showing CT data. Correct and erroneous regions have been marked before.

5 METHODS

5.1 Cutting Planes

The 2D cutting planes are based on OpenGL 3D textures derived from the CT data by downsampling to fit the texture volume into the graphics card memory (256³ in the VR setup and 128³ on the Tablet PC). The 3D texture is displayed while the plane is interactively moved. Once the user has fixed the plane's position, a 2D texture for the plane is sampled at the full resolution of the dataset, delivering maximum quality. Plane sampling is decoupled from rendering, since it would impact the frame rate.
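A possible implementation of the full-resolution sampling step is sketched below; the volume layout and the nearest-neighbor lookup are simplifying assumptions (the system may well use interpolation), and u and v are assumed to be orthogonal in-plane direction vectors scaled to one voxel step:

    #include <cstdint>
    #include <vector>

    struct Volume {
        int nx, ny, nz;
        std::vector<int16_t> voxels;    // CT values, x-fastest layout
        int16_t at(int x, int y, int z) const {
            if (x < 0 || y < 0 || z < 0 || x >= nx || y >= ny || z >= nz) return 0;
            return voxels[(static_cast<size_t>(z) * ny + y) * nx + x];
        }
    };

    std::vector<int16_t> samplePlane(const Volume& vol,
                                     const float origin[3],
                                     const float u[3], const float v[3],
                                     int width, int height) {
        std::vector<int16_t> image(static_cast<size_t>(width) * height);
        for (int j = 0; j < height; ++j) {
            for (int i = 0; i < width; ++i) {
                const float p[3] = {origin[0] + i * u[0] + j * v[0],
                                    origin[1] + i * u[1] + j * v[1],
                                    origin[2] + i * u[2] + j * v[2]};
                image[static_cast<size_t>(j) * width + i] =
                    vol.at(static_cast<int>(p[0] + 0.5f),
                           static_cast<int>(p[1] + 0.5f),
                           static_cast<int>(p[2] + 0.5f));
            }
        }
        return image;   // uploaded as a 2D texture for the fixed plane
    }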
5.2 Surface Model

The presented tools are all based on deformable simplex meshes [7]. A simplex mesh is a special surface mesh with the property that each vertex has exactly three neighbors, which makes it easy to calculate surface properties such as curvature and consequently to set up a deformable model based on a Newtonian law of motion, involving regularizing forces and other forces to deform the mesh. In order to avoid mesh degeneration, polygon splitting and merging operations are performed based on mesh quality criteria.

In our system we formulate forces towards the boundary of the binary segmentation calculated using an automated segmentation approach. This leads to a mesh accurately representing the segmentation.

In a VR setup the frame rates must be high in order not to discomfort the user, while the model should still be accurate. On average this leads to simplex meshes of around 250,000 polygons. We employ vertex buffer objects available on newer graphics hardware. They allow for mapping graphics memory into main memory, which makes selective updates on the graphics card possible.

For efficiency, we keep a set of active vertices. When a vertex does not move significantly over several iterations, it is removed from this set and only added to it again when neighboring vertices move significantly or new forces are applied. The refinement tools described in Section 4 alter the mesh locally, so the set of active vertices is always much smaller than the total number of vertices. Note that only vertices altered in an iteration need to be updated in graphics memory in each frame.
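This bookkeeping can be sketched as follows; the movement threshold, the idle-iteration limit and the neighbor indices are illustrative assumptions rather than the system's actual parameters:

    #include <cmath>
    #include <unordered_set>
    #include <vector>

    struct SimplexVertex {
        float x = 0, y = 0, z = 0;
        float lastX = 0, lastY = 0, lastZ = 0;   // position at the previous iteration
        int idleIterations = 0;                  // iterations without significant movement
        int neighbor[3] = {-1, -1, -1};          // a simplex mesh vertex has exactly three neighbors
    };

    void updateActiveSet(std::vector<SimplexVertex>& mesh,
                         std::unordered_set<int>& active,
                         float minMove = 1e-3f, int maxIdle = 5) {
        std::vector<int> toWake;
        for (auto it = active.begin(); it != active.end(); ) {
            SimplexVertex& v = mesh[*it];
            const float dx = v.x - v.lastX, dy = v.y - v.lastY, dz = v.z - v.lastZ;
            const bool moved = std::sqrt(dx * dx + dy * dy + dz * dz) > minMove;
            v.idleIterations = moved ? 0 : v.idleIterations + 1;
            v.lastX = v.x; v.lastY = v.y; v.lastZ = v.z;
            if (moved)                               // moving vertices wake up their neighbors
                for (int n : v.neighbor) if (n >= 0) toWake.push_back(n);
            if (v.idleIterations >= maxIdle)
                it = active.erase(it);               // settled: drop from the active set
            else
                ++it;
        }
        for (int n : toWake) active.insert(n);
    }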
Painting the mesh surface with the error marking tools introduced in Section 4.2 does not only alter the surface color. Painting the mesh red affects the deformable model in such a way that forces towards the erroneous segmentation boundary are no longer calculated. The mesh does not deform in these regions until refinement tool induced forces apply. If the mesh is painted green, indicating that the surface is correct, no forces are calculated for the affected vertices. The standard color is yellow. In this case the forces towards the segmentation apply. Regularizing forces may still cause limited response to nearby tool-based deformation.

Applying the sphere deformation tool presented in Section 4.3 results in force calculation for all red-colored surface vertices located within the tool's influence range. The force vector direction is spanned by the sphere center and the affected vertex. The length of the vector is the distance from the affected vertex to the sphere border. The plane deformation tool works similarly.

The point dragging tool affects the vertex closest to the input device location when it is triggered. The vertex can be placed freely in space. It is immediately removed from the set of active vertices, but its three neighbors are added to the active set in case they are either red or yellow. When the vertex is released it keeps its current position, while the red or yellow adjacent parts of the mesh deform and/or restructure until all quality criteria have been met.
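Putting the last two paragraphs together, a sketch of the color-gated force computation for the sphere tool might look like the fragment below; the data structures are illustrative, not the system's actual ones:

    #include <cmath>
    #include <vector>

    enum class Mark { Green, Yellow, Red };

    struct MeshVertex {
        float px, py, pz;                 // position
        float fx = 0, fy = 0, fz = 0;     // accumulated force for this iteration
        Mark mark = Mark::Yellow;
    };

    void addSphereToolForces(std::vector<MeshVertex>& mesh,
                             float cx, float cy, float cz, float radius) {
        for (MeshVertex& v : mesh) {
            if (v.mark != Mark::Red) continue;                 // only red vertices receive tool forces
            const float dx = v.px - cx, dy = v.py - cy, dz = v.pz - cz;
            const float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
            if (dist >= radius || dist == 0.0f) continue;      // outside the influence range
            const float magnitude = radius - dist;             // distance to the sphere border
            v.fx += (dx / dist) * magnitude;                   // direction: sphere center to vertex
            v.fy += (dy / dist) * magnitude;
            v.fz += (dz / dist) * magnitude;
        }
    }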
6 EVALUATION
6.1 Methodology and Procedure
The overall goal of the evaluation is to investigate the validity of our hybrid interface for liver surgery planning by comparing spatial (3D) and constrained (2D) tasks. Therefore, the evaluation is planned in two steps:

1. The first step examined the general spatial manipulation tools described in Section 4.

2. In the second step, specific constrained tasks like segmentation refinement based on local contour drawing will be evaluated.

In this paper, we only report on the first evaluation step. The different modes of the system, namely desktop, spatial interaction, or hybrid mode, were tested in a comparative study. The evaluation included mostly empirical testing, with some analytical methods, using a variety of data collection methods. The evaluation addressed several issues relevant in complex interaction tasks, in particular learning curve effects and mode switching. The evaluation included both qualitative and quantitative components, in which the user attitude and psycho-physiological abilities were collected and analyzed. All results were cross-compared to see if there were any notable differences between the user attitude towards the system and the data collected through observation and recording.

The qualitative component of the evaluation was dominated by the subjective measurements obtained from the questionnaires and the quasi thinking-aloud protocols. The thinking-aloud protocol was more or less a record of the thoughts expressed by the subjects. The subjects were asked to speak, but not forced. As such, results from the thinking-aloud protocols differed between users, since expression levels differed between persons. The questionnaire mostly focused on user satisfaction, using 17 questions. The main factors included were user learning curve, attitude towards tools, ease of use and effectiveness of tools, user comfort including fatigue and device ergonomics, and attention. Hence, the questionnaires focused on the main issues specified in the hybrid interaction methodology applied, as described in Section 4.

The quantitative data were collected from the external observer's notes, the quality of the final liver model delivered by the subjects,
and the logging files that tracked duration and changes of interaction modes. The observer noted all questions asked by the user as well as user behavior (grasps, observable dexterity in fingers and wrist, arm-hand steadiness, attention to desktop and projection screen). Furthermore, the workflow was observed and later compared with the logs. Finally, a comparison was performed between the data produced by the subjects and the best-practice model provided by an expert user.

The different steps of the evaluation were as follows: Subjects were first introduced to the system by the instructor, taking a 10 minute tour through the software. After the introduction, users could make use of the system for 12 minutes and ask questions. Next, the tests were performed. Use