US 20080071143A1
`
(19) United States
(12) Patent Application Publication
Gattani et al.

(10) Pub. No.: US 2008/0071143 A1
(43) Pub. Date: Mar. 20, 2008
`
`(54) MULTI-DIMENSIONAL NAVIGATION OF
`ENDOSCOPIC VIDEO
`
`(76)
`
`Inventors:
`
`Abhishek Gattani, San Jose, CA
`(US); Salmaan Hameed, San Jose,
`CA (US)
`
`Correspondence Address:
`BLAKELY SOKOLOFF TAYLOR & ZAFMAN
`1279 OAKMEAD PARKWAY
`SUNNYVALE, CA 94085-4040
`
(21) Appl. No.: 11/523,217

(22) Filed: Sep. 18, 2006
`
`Publication Classification
`
`(51)
`
`Int. Cl.
`A61B 1104
`(2006.01)
`A61B 1100
`(2006.01)
`(52) U.S. Cl. ........................................ 600/117; 600/109
(57) ABSTRACT
`
An endoscopic surgical navigation system comprises a multi-dimensional video generation module that enables a user to visually navigate captured endoscopic video with six degrees of freedom. This capability provides the user with control of a virtual camera (point of view) that can be translated along three orthogonal axes in 3-D space, with control of vertical panning (pitch), horizontal panning (yaw), tilt (roll), and zoom of the virtual camera.
`
`)0
`
`0001
`
`Exhibit 1105 page 1 of 40
`DENTAL IMAGING
`
`
`
[Drawing Sheet 1 of 28: FIG. 1]
`
`
`
[Drawing Sheet 2 of 28: FIG. 2, a display with side-by-side "Scan Image" and "Live Video Image" windows]
`
`
`
[Drawing Sheet 3 of 28: FIG. 3, endoscopic surgery visualization system including the VNS 30; legible labels include video camera, scope sensors, network, VNS]
`
`
`
[Drawing Sheet 4 of 28: FIG. 4, VNS block diagram with User Interface Subsystem, Registration Subsystem, Data Processing Subsystem, Measurement Subsystem, Data Acquisition Subsystem and Tracking Subsystem]
`
`
`
[Drawing Sheet 5 of 28: FIG. 5, flowchart: input intra-operative scan data; input data from scope sensors; input live video from scope video camera; determine scope distal tip's current position and orientation; generate real-time 3D scan images based on intra-operative scan data and distal tip current position and orientation; co-register the real-time 3D scan images with live video images from the video camera; send co-registered scan images and live video images to monitor, image recording device and/or network interface]
`
`
`
[Drawing Sheet 6 of 28: FIG. 6, VNS block diagram; legible blocks include Data Acquisition Subsystem, Similarity Transform, Image Resample, Multi-dimensional Video Generation, Endoscopic Camera Feed, Reference Path Database, Barrel Lens Distortion Correction, 3D Pose/Orientation, Graphical Model Generator, Path Correlation, Merge, Video Signal (to Display), Measurement Subsystem, User Interface Subsystem, pre-operative data, and user inputs from input devices/controls]
`
`
`
[Drawing Sheet 7 of 28: FIG. 7, multi-modal image coregistration; legible blocks include Moving Image m(x), Fixed Image f(x), B-Spline Interpolator, Optimizer and Affine Transform, with voxels, points, fitness value and transform parameters as labeled data flows]
`
`
`
[Drawing Sheet 8 of 28: FIG. 8, user input portion of the user interface subsystem; legible blocks include User Input, Speech to Text (audio in, commands out) and User Input Processing (control inputs out)]
`
`
`
[Drawing Sheet 9 of 28: FIG. 9, model generator; legible blocks include Segmentation Parameters, Manual or Auto Segment (label map), Cut ROI (voxels), Triangles (Polygons), Decimation, and Model Fitting (location & orientation in, model parameters out)]
`
`
`
[Drawing Sheet 10 of 28: FIG. 10]
`
`
`
[Drawing Sheet 11 of 28: FIG. 11]
`
`
`
[Drawing Sheet 12 of 28: FIG. 12]
`
`
`
[Drawing Sheet 13 of 28]
`
`
`
[Drawing Sheet 14 of 28]
`
`
`
[Drawing Sheet 15 of 28: FIG. 15, flowchart: receive signals from sensors on base of scope; receive signals from sensors on flexible portion of scope; compute current position of the rigid member as a reference point, based on signals from sensors on base; compute current position and orientation of the distal tip relative to the reference point, based on signals from sensors on flexible portion; compute current position and orientation of distal tip of the scope, based on current computed reference point and current computed position and orientation of distal tip relative to reference point]
`
`
`
[Drawing Sheet 16 of 28: FIG. 16]
`
`
`
[Drawing Sheet 17 of 28: FIG. 17]
`
`
`
[Drawing Sheet 18 of 28: FIG. 18]
`
`
`
[Drawing Sheet 19 of 28: FIG. 19A, flowchart: acquire video frames; determine position and orientation of distal tip of scope for each frame as it is acquired; associate corresponding position and orientation with each frame in memory; compute a position and orientation of image plane for each of the video frames, as a function of the corresponding position and orientation of distal tip of scope. FIG. 19B, flowchart: receive user input specifying a visual navigation action; identify frame(s) affected by the user input; transform the image plane for each affected frame to a new spatial position and/or orientation, based on the user input; cause each affected frame to be displayed according to its new image plane]
`
`
`
[Drawing Sheet 20 of 28: FIG. 20]
`
`
`
[Drawing Sheet 21 of 28: FIG. 21, flowchart: receive user inputs specifying three or more points on an anatomical feature (using tip of scope); compute 3D locations of the points; fit model volume to the points; compute surface of the model volume; display surface to user and compute physical parameter(s) of the anatomical feature; output computed parameter(s) to user; if additional user input(s) specifying additional point(s) are received, refit model volume to all specified points, recompute surface of the refitted model volume, display new surface to user, recompute the physical parameter(s) and output recomputed parameter(s) to user; otherwise end]
`
`
`
`
`
`
[Drawing Sheet 23 of 28: FIG. 23]
`
`
`
`
`
`
`
`
`
`
`
`
[Drawing Sheet 27 of 28: FIG. 27]
`
`
`
[Drawing Sheet 28 of 28: FIG. 28, flowchart: project first and second paths onto a plane; compute first line that connects starting point of first path with starting point of second path; compute second line that connects end point of first path with end point of second path; compute area of shape defined by first and second paths and first and second lines; repeat for all three orthogonal planes; compute total area for all three iterations as measure of correlation]
`
`
`
`
`MULTI-DIMENSIONAL NAVIGATION OF
`ENDOSCOPIC VIDEO
`
`FIELD OF THE INVENTION
`
[0001] At least one embodiment of the present invention pertains to medical devices, and more particularly, to a method and apparatus for enabling a user to navigate endoscopic video with multiple degrees of freedom.
`
`BACKGROUND
`
`[0002] To reduce the trauma to patients caused by invasive
`surgery, minimally invasive surgical techniques have been
`developed for performing surgical procedures within the
`body through very small incisions. Endoscopy is a technique
`that is commonly employed in minimally invasive surgery.
`Endoscopy allows internal features of the body of a patient
`to be viewed through an endoscope, either directly or
`through video generated by a video camera coupled to the
`endoscope. The endoscope typically can also be used as a
`conduit through which other surgical instruments can be
`inserted into the body.
`[0003] Endoscopes can be of the rigid type or the flexible
`type. A rigid endoscope is typically inserted into the body
through a small external incision, as in laparoscopy, arthroscopy, etc. Flexible endoscopes, on the other hand, are
`commonly used in procedures where the endoscope is
`inserted through a natural body orifice, such as the mouth or
`anus, as in gastroscopy or colonoscopy, respectively.
[0004] Endoluminal surgery is a newer form of minimally invasive surgery, in which the surgical instrument (i.e., the endoscope or an instrument inserted through it) initially enters the body through a natural bodily orifice, such as the mouth. Typically a flexible endoscope is used. The instrument is then "threaded" through a natural body lumen, such
`as the esophagus, until its distal tip is close to the target
`anatomy. Often the target anatomy is not in the immediate
`proximity of the orifice of entry, however. Therefore, the
`surgeon must navigate the endoscope to the target anatomy
`and may have to operate on portions of the anatomy that are
`not directly visible or are not easily visible.
[0005] Because endoscopes have a limited field of view, localization of target lesions and navigation to the desired areas through small entry points can be difficult. Furthermore, some parts of the body contain extremely small and/or complex structures that are difficult for a surgeon to see through an endoscope or in endoscopic video. These challenges become greater as the distance from the entry point to the target anatomy increases, as is the case in endoluminal surgery.
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
`[0006] One or more embodiments of the present invention
`are illustrated by way of example and not limitation in the
figures of the accompanying drawings, in which like references indicate similar elements and in which:
`[0007] FIG. 1 is a high level diagram of a system for
`performing endoluminal surgery;
`[0008] FIG. 2 schematically illustrates an example of a
`display of multiple coregistered, multi-modal images;
[0009] FIG. 3 schematically illustrates an endoscopic surgery visualization system that includes a Visual Navigation
`System (VNS);
`
`[0010] FIG. 4 is a block diagram showing the elements of
`the VNS, according to certain embodiments of the invention;
`[0011] FIG. 5 illustrates an example of the overall process
`that can be performed by the VNS while the VNS is in a
`particular operating mode to coregister multi-modal images;
`[0012] FIG. 6 is a block diagram showing the VNS in
`greater detail, according to certain embodiments of the
`invention;
`[0013] FIG. 7 is a block diagram showing an example of
`the implementation of the multi-modal image coregistration
`module;
`[0014] FIG. 8 is a block diagram showing an example of
`the user input portion of the user interface subsystem;
`[0015] FIG. 9 is a block diagram showing the model
generator according to certain embodiments of the invention;
`[0016] FIG. 10 schematically shows an example of a
system configuration for tracking the position and orientation of the endoscope using electromagnetic sensors;
`[0017] FIG. 11 schematically shows an example of a
system configuration for tracking the position and orientation of the endoscope using optical curvature sensors;
[0018] FIG. 12 shows the construction of an optical curvature sensor that can be used to track the endoscope;
`[0019] FIG. 13 shows the use of the light channel in the
`endoscope to provide a light source for optical curvature
`sensors on the endoscope;
`[0020] FIG. 14 shows the use of different length optical
`curvature sensors on the endoscope;
`[0021] FIG. 15 illustrates an example of a process for
`determining the current position and orientation of the distal
`tip of the scope;
`[0022] FIG. 16 shows the tagging of video frames with
`position and orientation information;
`[0023] FIG. 17 illustrates the relationship between the
`endoscope tip, the image plane and the object being viewed;
`[0024] FIG. 18 illustrates the placement of various video
`frames into a common 3D coordinate space;
[0025] FIG. 19A shows a process of acquiring and processing video data to enable subsequent visual navigation;
`[0026] FIG. 19B shows a process by which a user can
`visually navigate through video that has been processed as
`in FIG. 19A;
`[0027] FIG. 20 illustrates an endoscopic camera view of
`an anatomical object;
[0028] FIG. 21 shows a process of automatically measuring one or more parameters of an anatomical feature;
`[0029] FIG. 22 shows an example of a display of slice
`images, which can be generated automatically by the VNS
`in response to movement of the endoscope;
`[0030] FIG. 23 shows an example of a 3D rendering of a
`computed path taken by an endoscope during endoscopic
`surgery;
`[0031] FIG. 24 shows an example of how the spatial
`relationship between two paths can be displayed;
`[0032] FIG. 25 illustrates an example of projecting a path
`onto three orthogonal planes;
`[0033] FIG. 26 illustrates the connection of endpoints of
`two paths;
`[0034] FIG. 27 illustrates the computation of correlation
`between the current position of the endoscope tip and a
`reference path; and
`
`0030
`
`Exhibit 1105 page 30 of 40
`DENTAL IMAGING
`
`
`
`US 2008/0071143 Al
`
`Mar. 20, 2008
`
`2
`
[0035] FIG. 28 shows an example of a process for determining a correlation between two paths.
`
`DETAILED DESCRIPTION
`
[0036] A visual navigation system for use in endoscopic surgery, particularly (though not exclusively) in endoluminal surgery, is described. References in this specification to "an embodiment", "one embodiment", or the like, mean that the particular feature, structure or characteristic being described is included in at least one embodiment of the present invention. Occurrences of such phrases in this specification do not necessarily all refer to the same embodiment.
`[0037]
`In view of the challenges mentioned above, it is
`desirable to provide a visual navigation system to provide
`coupled three-dimensional (3D) visualization and naviga(cid:173)
`tion assistance to the surgeon in navigating an endoscope to
`the target anatomy, particularly during endoluminal surgery.
`As described in greater detail below, therefore, according to
`certain embodiments of the invention, a visual navigation
`system (VNS 30) comprises a data acquisition subsystem, an
`endoscope tracking subsystem, a registration subsystem, a
`data processing subsystem and a user interface subsystem.
`The data acquisition subsystem inputs intra-operative scan
`data from a medical scanning device during an endoscopic
`procedure. The tracking subsystem captures data represent(cid:173)
`ing positions and orientations of a flexible endoscope during
`the endoscopic procedure. The registration subsystem deter(cid:173)
`mines transformation parameters for coregistering the intra(cid:173)
`operative scan data and the data indicative of positions and
`orientations of the endoscope. The data processing sub(cid:173)
`system coregisters the intra-operative scan data and the data
`indicative of positions and orientations of the endoscope
`based on the transformation parameters and generates real(cid:173)
`time image data representing 3D internal views of a body
`that are coregistered with live video from an endoscopic
`video camera. The user interface subsystem receives input
`from a user for controlling the system and provides output
`to the user.
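To make the subsystem decomposition concrete, the following is a minimal structural sketch in Python. It is illustrative only: the class and method names are hypothetical and merely stand in for the data acquisition, tracking, registration, data processing and user interface subsystems described above; the patent does not define any programming interface.

```python
# Hypothetical sketch of the VNS subsystem decomposition described above.
# All names and interfaces are illustrative, not part of the disclosure.

class DataAcquisitionSubsystem:
    def read_intraoperative_scan(self):
        """Return the latest intra-operative scan volume from the scanner."""
        ...

class TrackingSubsystem:
    def read_scope_pose(self):
        """Return the current position and orientation of the scope's distal tip."""
        ...

class RegistrationSubsystem:
    def compute_transform(self, scan_volume, scope_pose):
        """Determine transformation parameters that bring the scan data and the
        scope pose data into a common coordinate space."""
        ...

class DataProcessingSubsystem:
    def render_coregistered_views(self, scan_volume, scope_pose, transform, video_frame):
        """Generate real-time 3D internal views coregistered with live video."""
        ...

class UserInterfaceSubsystem:
    def display(self, views):
        """Present coregistered images (and any audio output) to the user."""
        ...
```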
[0038] The following definitions and explanations shall apply to terms used herein:
[0039] "Coregistering" means bringing into a common coordinate space and orientation.
[0040] "Flexible" means designed to be flexed substantially without incurring damage to the instrument (not just capable of being deformed).
[0041] A "flexible endoscope" is an endoscope, a substantial portion of the length of which is flexible, including the distal end. (A "flexible endoscope" can and usually does have a rigid proximal portion, or "base".)
[0042] "Intra-operative scan data" is scan data acquired by a scan performed during a particular endoscopic procedure on a body. This term does not include video acquired from an endoscopic video camera.
[0043] "Logic" can be or include (but is not limited to) any one or more of: special-purpose hardwired circuitry, programmable circuitry, software, firmware, or any combination thereof.
[0044] A "module" means any one or more of: special-purpose hardwired circuitry; software and/or firmware in combination with one or more programmable processors; or any combination thereof.
[0045] "Positions" is synonymous with "locations".
[0046] "Pre-operative scan data" is scan data acquired by a scan performed prior to a particular endoscopic procedure on a body.
[0047] During an endoscopic procedure on a body, the VNS inputs intra-operative scan data generated by a medical scanning device, such as an x-ray computed tomography (CT) device, an MRI device, an ultrasound imaging device, etc. The intra-operative scan data is representative of a region of interest in the body. The VNS also captures data indicative of positions and orientations of a flexible endoscope during the endoscopic procedure, from various sensors on the endoscope. The VNS further generates real-time three-dimensional scan images of the region of interest based on the intra-operative scan data and/or the pre-operative scan data and the data indicative of positions and orientations of the flexible endoscope. The VNS coregisters the real-time three-dimensional scan images with live video images generated by the endoscopic video camera that is coupled to the endoscope. The VNS then causes the real-time three-dimensional (volumetric) scan images and the live video images to be displayed coregistered on a display device.
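The per-frame flow just described (and summarized in FIG. 5) can be pictured as a simple loop. This is a sketch under the same assumptions as above; the subsystem objects reuse the hypothetical interfaces from the earlier sketch, and camera.read_frame() and ui.is_running() are likewise invented for illustration.

```python
def coregistration_loop(acq, tracker, registration, processor, ui, camera):
    """One illustrative pass of the FIG. 5 style pipeline:
    scan data + scope pose + live video -> coregistered display."""
    while ui.is_running():
        scan_volume = acq.read_intraoperative_scan()   # intra-operative scan data
        scope_pose = tracker.read_scope_pose()          # sensor-derived tip pose
        video_frame = camera.read_frame()               # live endoscopic video
        transform = registration.compute_transform(scan_volume, scope_pose)
        views = processor.render_coregistered_views(
            scan_volume, scope_pose, transform, video_frame)
        ui.display(views)                               # monitor / recorder / network
```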
[0048] The VNS can automatically detect movement of the flexible endoscope during an endoscopic procedure and, in response, identify a particular slice of scan data corresponding to a current location and orientation of the endoscope tip and cause an image of the slice to be displayed, and similarly cause other slices of scan data to be displayed in response to additional movements of the endoscope.
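One simple way to picture this slice-selection behavior is to index into the scan volume with the slice nearest the tracked tip position, refreshing the display only when the tip has moved by more than some threshold. The patent does not prescribe this particular rule; the axis choice, array layout and threshold below are assumptions made for illustration.

```python
import numpy as np

def select_slice(scan_volume, tip_position, origin, spacing, axis=2):
    """Pick the scan slice closest to the tracked tip position.

    scan_volume : 3D numpy array of intra-operative scan data
    tip_position: (x, y, z) tip location in scanner coordinates (mm)
    origin      : (x, y, z) position of voxel (0, 0, 0) in the same frame
    spacing     : per-axis voxel size in mm
    """
    index = int(round((tip_position[axis] - origin[axis]) / spacing[axis]))
    index = max(0, min(scan_volume.shape[axis] - 1, index))  # clamp to volume
    return np.take(scan_volume, index, axis=axis)

def scope_moved(prev_pos, new_pos, threshold_mm=2.0):
    """Trigger a new slice display only when the tip has actually moved."""
    return np.linalg.norm(np.asarray(new_pos) - np.asarray(prev_pos)) > threshold_mm
```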
[0049] The VNS can also coregister and display the intra-operative scan data with pre-operative scan data representative of the region of interest in the body and generated prior to the endoscopic procedure by a medical scanning device.
[0050] Another feature of the VNS is the ability to correct for barrel lens distortion in the live video. Barrel lens distortion is a divergence, in the acquired endoscopic video, from the rectilinear projection of geometric optics, in which image magnification decreases with increasing distance from the optical axis.
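The patent does not specify how the correction is computed. A common approach, shown here only as an illustrative sketch, is to model the radial distortion with coefficients obtained from a prior calibration of the endoscopic camera (for example, a checkerboard calibration) and remap each frame with OpenCV; the variable names and the use of OpenCV are assumptions, not part of the disclosure.

```python
import cv2
import numpy as np

def correct_barrel_distortion(frame, camera_matrix, dist_coeffs):
    """Undo radial (barrel) distortion in one endoscopic video frame.

    camera_matrix: 3x3 intrinsic matrix from a prior camera calibration.
    dist_coeffs  : distortion coefficients (k1, k2, p1, p2, k3);
                   k1 < 0 is typical of barrel distortion.
    """
    h, w = frame.shape[:2]
    new_matrix, _ = cv2.getOptimalNewCameraMatrix(
        camera_matrix, dist_coeffs, (w, h), alpha=0)
    return cv2.undistort(frame, camera_matrix, dist_coeffs, None, new_matrix)
```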
[0051] Another feature of the VNS is a model-fitting technique that enables a user to easily obtain in vivo measurements of anatomical features in the body during an endoscopic procedure.
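FIG. 21 outlines the idea: the user indicates three or more points on the feature with the scope tip, a model volume is fitted to the points, and physical parameters are computed from the fitted model. The sketch below uses a sphere as the model volume purely for illustration (a unique sphere fit needs at least four suitably placed points); the patent does not limit the model to a sphere, and the function names are hypothetical.

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit to four or more points picked with the scope tip.

    Uses |p|^2 = 2 c.p + (r^2 - |c|^2), a linear system in the center c
    and the combined term (r^2 - |c|^2).
    """
    p = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])
    b = (p ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = x[:3]
    radius = np.sqrt(x[3] + (center ** 2).sum())
    return center, radius

def physical_parameters(radius):
    """Example derived measurements for the fitted feature."""
    return {"diameter": 2.0 * radius,
            "surface_area": 4.0 * np.pi * radius ** 2,
            "volume": 4.0 / 3.0 * np.pi * radius ** 3}
```

If the user later adds more points, the same fit can simply be rerun over all specified points, mirroring the refit branch of FIG. 21.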
[0052] Yet another feature of the VNS is that it enables a user to visually navigate captured endoscopic video with six degrees of freedom. This capability provides the user with control of a virtual camera (point of view) that can be translated along three orthogonal axes in 3-D space, with control of vertical panning (pitch), horizontal panning (yaw), tilt (roll), and zoom of the virtual camera.
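A six-degree-of-freedom virtual camera can be represented by a position, an orientation and a zoom factor. The sketch below is one possible representation, using SciPy rotations; it is not taken from the patent, and the class name, method names and parameterization are assumptions.

```python
from dataclasses import dataclass, field
import numpy as np
from scipy.spatial.transform import Rotation

@dataclass
class VirtualCamera:
    """Six-degree-of-freedom point of view for replaying endoscopic video."""
    position: np.ndarray = field(default_factory=lambda: np.zeros(3))
    rotation: Rotation = field(default_factory=Rotation.identity)
    zoom: float = 1.0

    def translate(self, dx, dy, dz):
        # Move along the camera's own axes (three translational DOF).
        self.position = self.position + self.rotation.apply([dx, dy, dz])

    def rotate(self, pitch, yaw, roll):
        # Vertical pan, horizontal pan and tilt, in degrees (three rotational DOF).
        self.rotation = self.rotation * Rotation.from_euler(
            "xyz", [pitch, yaw, roll], degrees=True)

    def view_matrix(self):
        """4x4 world-to-camera transform used to reproject stored frames."""
        m = np.eye(4)
        m[:3, :3] = self.rotation.as_matrix().T
        m[:3, 3] = -m[:3, :3] @ self.position
        return m
```

The zoom factor is left to the rendering stage, where it would scale the projection of each reprojected frame.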
[0053] Still another feature of the VNS is surgical instrument path correlation. In particular, the VNS can compute the path taken by an endoscope (or other medical instrument) during a procedure and various related attributes and parameters, and can compute and display a correlation between two paths.
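The correlation measure outlined in FIG. 28 projects both paths onto each of three orthogonal planes, closes the projected shape with lines joining the corresponding start points and end points, and sums the enclosed areas over the three planes, a smaller total indicating closer agreement. The sketch below follows that outline; the use of the shoelace formula for the enclosed area, and the function names, are illustrative assumptions.

```python
import numpy as np

def polygon_area(points_2d):
    """Shoelace area of a closed 2D polygon (one illustrative area measure)."""
    x, y = np.asarray(points_2d, dtype=float).T
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def path_correlation(path_a, path_b):
    """Area-based correlation between two 3D paths, per the FIG. 28 outline."""
    a, b = np.asarray(path_a, dtype=float), np.asarray(path_b, dtype=float)
    total = 0.0
    for drop_axis in range(3):                       # yz, xz and xy planes
        keep = [i for i in range(3) if i != drop_axis]
        # Walk along path A, then back along path B; the implicit closing
        # segments join start point to start point and end point to end point.
        loop = np.vstack([a[:, keep], b[::-1, keep]])
        total += polygon_area(loop)
    return total
```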
`
`I. Overall System Architecture and Operation
`
`[0054] FIG. 1 is a high level diagram of a system for
`performing endoluminal surgery. A flexible endoscope
`("scope") 1 is inserted into the body of a patient 2 through
`a natural orifice, such as the mouth 3. The scope 1 includes
a rigid base 4 and a flexible portion 5 which is inserted into the body. The distal tip 6 of the scope 1 is part of the flexible
`portion 5 and can be flexed about two or more orthogonal
`axes by the surgeon, by using controls (not shown) mounted
`on the base 4. The surgeon navigates the scope 1 through a
`natural body lumen, such as the esophagus 7 and stomach 8
`until the distal tip 6 of the scope 1 is in proximity to the
`target anatomy.
`[0055] Optically coupled to the base 4 of the scope 1 is an
`endoscopic video camera 9, which outputs a video signal to
`a display device (monitor) 10, which may be, for example,
`a cathode ray tube (CRT) display, liquid crystal display
(LCD), or other suitable type of display device. High-intensity light from a light source 11 is provided through a
`light conduit to a light port on the base 4 of the scope 1 and
`is transmitted through the flexible portion 5 and output
`through the distal tip 6. The scope 1 may include an
`instrument channel (not shown), through which a surgical
`instrument (such as a grabbing instrument for biopsies) can
`be passed through to an opening at the distal tip 6. The entry
`port for the instrument channel is normally on or near the
`base 4 of the scope 1.
[0056] As noted above, the VNS introduced here (not shown in FIG. 1) provides a display that includes coregistered views of intra-operative scan images and/or intra-operative scan images with live endoscopic video, among other features. FIG. 2 illustrates schematically how such a display may be presented to a user. The VNS may include its own display device, on which such a display can be presented. Alternatively, the display can be presented on a separate external display device that is connected to the VNS 30, such as the monitor 10.
`[0057] As shown in FIG. 2, a display 20 generated by the
`VNS includes several windows in proximity to each other,
`including a window 21 that contains a scan image and a
`window 22 that contains a corresponding live video image,
`presented side-by-side. The scan image is generated and
`updated in real-time based on intra-operative scan data
acquired during the endoscopic procedure from a CT scanning device, MRI device, ultrasonic imaging device, or other
`medical scanning device. The scan image is coregistered
`with the live video image, according to the technique
described herein. Other data, such as text, graphics, touchscreen controls, etc., can be included on a separate portion
`23 of the display 20.
[0058] FIG. 3 schematically illustrates an endoscopic surgery visualization system that includes the VNS 30. The VNS 30 receives intra-operative scan data from a medical scanning system 31, which can be, for example, a CT system, MRI system, ultrasonic imaging system, or the like. The VNS 30 also receives live video from the endoscopic video camera 9 that is coupled to the scope 1. The VNS 30 also receives inputs 32 from various sensors on the endoscope 1, which are used to determine the current position and orientation of the distal tip 6 of the endoscope 1. The VNS 30 may also input pre-operative scan data from a data storage facility 33 (e.g., a computer hard drive, file server, or the like). The pre-operative scan data can be coregistered with the intra-operative scan data and/or the live video.
[0059] The VNS 30 may have speech recognition/voice response capability; in that case, the VNS 30 further receives audio inputs from a microphone 34, through which to receive voice commands. The VNS 30 may also receive various other user inputs 35, such as from touchscreen controls or other input devices such as a keyboard, mouse, buttons, switches, etc. The VNS 30 outputs coregistered images such as described above to its own display device, if it is so equipped, and/or to an external monitor 10. The VNS 30 may also output synthesized speech and/or other forms of audible output (e.g., warnings or distance to target) to the user through an audio speaker 36. The VNS 30 may also include a network interface 37 through which to transmit and/or receive data over a network, such as a local-area network (LAN), a wide area network (WAN), a corporate intranet, the Internet, or any combination thereof. The VNS 30 may also include a separate video camera and appropriate software (not shown) to capture and recognize gestures of the user as commands, in real-time, and to cause corresponding actions to be performed.
[0060] FIG. 4 is a block diagram showing the major subsystems of the VNS 30, according to certain embodiments of the invention. As shown, the VNS 30 includes a data acquisition subsystem 41, a scope tracking subsystem 42, a measurement subsystem 43, a data processing subsystem 44, a registration subsystem 45 and a user interface subsystem 46. The purpose of the data acquisition subsystem 41 is to load intra-operative and pre-operative scan data representative of a region of interest of a given patient. The purpose of the registration subsystem is to bring the various data acquired into a common coordinate space. The registration subsystem 45 determines the transformation parameters needed to coregister the data acquired by the data acquisition subsystem 41 and the tracking subsystem 42 to a common coordinate space. These parameters are passed to the data processing subsystem 44, which transforms the input data, performs segmentation and creates the desired visualizations. The data processing subsystem is the main processing subsystem of the VNS 30.
`[0061] The visualizations are passed to the user interface
`subsystem 46 for audio and visual output. The user interface
`subsystem 46 also interprets and passes user commands
`received in the form of any one or more of: voice, gestures,
`touch screen inputs, button presses, etc.
`[0062] The purpose of the endoscope tracking subsystem
`42 is to capture, in real-time, data indicative of the position
`and orientation of the endoscope, particularly its distal tip, to
enable coregistration of multi-modal images. Note, however, that the techniques introduced here can also be used to
`track a surgical instrument other than an endoscope, such as
`a catheter, guide wire, pointer probe, stent, seed, or implant.
`[0063] The measurement subsystem 43 receives user
`inputs and processed data via the data processing subsystem
`44, computes measurements of anatomical features, and
formats the results to be passed to the user interface subsystem 46 for audio and/or visual output. These subsystems
`are described further below.
`[0064] FIG. 5 illustrates an example of a process that can
be performed by the VNS 30, according to certain embodiments of the invention, while the VNS 30 is in an operating
`mode to coregister scan images and live video images.
`Initially, the VNS 30 concurrently inputs intra-operative
`scan data, position/orientation data from the sensors on the
`scope, and live video from the endoscopic video camera, at
`501a, 501b and 501c, respectively. At 502 the VNS 30
`determines the current position and orientation of the
`scope's distal tip. At 503 the VNS 30 generates real-time
`3-D (volumetric) scan images, based on the intra-operative
`scan data and the current position and orientation of the
`
`0032
`
`Exhibit 1105 page 32 of 40
`DENTAL IMAGING
`
`
`
`US 2008/0071143 Al
`
`Mar. 20, 2008
`
`4
`
`scope's distal tip. The VNS 30 then coregisters the real-time
`3-D scan images with the live video from the endoscopic
`video camera at 504. The coregistered scan images and live
`video are then sent to a monitor (which may be integral with
`or external to the VNS 30) for display, to an image recording
`device for recording, and/or to a network interface for
`transmission over a network. The process then repeats from
`the beginning with new data while the VNS 30 is in this
`operating mode. It should be understood that the process of
`FIG. 5 is illustrated and described here at a conceptual level;
`consequently, the exact sequence of operations shown in
`FIG. 5 does not necessarily have to be the actual sequence
`in practice. For example, input data can be received (501a,
`501b and 501c) and buffered as necessary