Noname manuscript No. (will be inserted by the editor)

Multiple reconstruction and dynamic modeling of 3D digital objects using a morphing approach
Application to kidney animation and tumor tracking

Valentin Leonardi · Jean-Luc Mari · Vincent Vidal · Marc Daniel

Received: date / Accepted: date

Abstract Organ segmentation as well as its motion simulation can be useful for many clinical purposes like organ study, diagnostic aid, therapy planning or even tumor destruction. We present in this paper a full workflow starting from CT-scan data and resulting in kidney motion simulation and tumor tracking. Our method is divided into three major steps: kidney segmentation, surface reconstruction and animation. The segmentation is based on a semi-automatic region growing approach which is then refined in order to improve its results. The reconstruction is done through the Poisson surface reconstruction and gives a manifold 3D model of the kidney. Finally, the animation is done using an automatic mesh morphing among the models previously obtained. Thus, the results are purely geometric since they are 3D animated models. Moreover, our method only needs basic user interaction and is fast enough to be used in a medical environment, which satisfies our constraints. Finally, it can easily be adapted to MRI acquisitions, as only the segmentation part would need minor modifications.

Keywords Organ segmentation · region growing · surface reconstruction · kidney modeling · geometrical modeling · organ motion simulation · mesh morphing

Valentin Leonardi, Jean-Luc Mari, Marc Daniel
LSIS, UMR CNRS 7296, Aix-Marseille Université
Vincent Vidal
LIIE, EA 4264, CERIMED, Aix-Marseille Université
E-mail: [email protected]
E-mail: [email protected]
E-mail: [email protected]
E-mail: [email protected]

1 Introduction

Tumors can be treated by low-invasive approaches. The goal is to minimize interactions between the surrounding environment and the patient in order to limit the consequences of surgery (incision treatment, convalescence) and their possible complications (nosocomial infections). Kidney tumors can be treated by radiofrequency, a low-invasive, non-surgical percutaneous heat treatment. The principle is to locate the tumor through CT scan and insert a radiofrequency electrode at its center. An electric current is then delivered in order to destroy the tumor. However, there is a chance of cancerous cell displacement when removing the electrode. The KiTT project (for Kidney Tumor Tracking, in which we take part) is fully involved in the low-invasive protocol. Its goal is to create a totally non-invasive new approach by transmitting radiofrequency waves in a transcutaneous way until tumor eradication. The main difficulty is to keep the wave beam continuously focused on the tumor while the kidney deforms and moves because of the respiratory cycle. Tracking the kidney (and the tumor) is therefore necessary. Before this organ tracking stage, we need to obtain a solid 3D model of the organ. Thus, we present an entire workflow which aims at tracking a kidney tumor and simulating the deformation of the organ from three sets of medical acquisitions. Each of these sets is obtained for a precise breathing phase: one acquisition is done for the exhale phase, another for the inhale phase and the third for the middle phase of the respiratory cycle. In the rest of this paper, this phase will be referred to as the mid-cycle phase. The method presented is divided into three major steps. The first is the kidney segmentation in every slice of the three acquisitions. Three point
clouds are issued from this first step. The second step is their reconstruction, in order to have three manifold 3D models representing the same kidney for three different breathing phases. We call these models M1 (kidney model for the inhale phase), M2 (kidney model for the mid-cycle phase) and M3 (kidney model for the exhale phase). Finally, the last step is a soft transition among the three models in order to simulate the organ movements and deformation. This is done through mesh morphing between M1 and M2 and between M2 and M3. Section 2 of this article deals with previous work in segmentation and organ tracking. Section 3 introduces the whole process of kidney tracking from three medical acquisitions, where each step is detailed. In Section 4, results are presented and their performance commented. Finally, in Section 5, we discuss the limits of our method and the perspectives to overcome them. The present paper is an augmented and enhanced version of our previous work described in [23].

2 Related work

2.1 Segmentation

Clustering methods aim at assigning pixels to homogeneous subsets (or clusters). The main difficulty is the definition of the clusters (the conditions necessary for a pixel to belong to a cluster) and their number. These parameters can be set manually [1], [32], [16] or automatically [22], [24]. There are several approaches for the segmentation itself: Bayes classifier [22], [24], Expectation-Maximization algorithm [43], Maximum A Posteriori [34], or K-Means (or Fuzzy K-Means) [1], [32]. Strictly speaking, Markov Random Fields are not a segmentation approach; they are used along with other segmentation methods in order to improve their results. The principle is to model the spatial interactions between a pixel and its neighborhood. Markov Random Fields are generally used along with clustering methods [15], [12], [46]. Artificial Neural Networks are not often used for segmentation.
Most of the time, they are employed as clustering methods [21], [26]. The advantage of this approach is its capacity to learn through performed tasks: the more a network is used for segmentation, the better the results. Moreover, it is also possible to parallelize methods thanks to artificial neural networks and, therefore, to accelerate them [42]. Deformable models are widely used for segmentation. They consist in positioning a curve next to the object to segment, and then deforming the curve so that it fits the contour. Like clustering methods, the main difficulty here is the initialization. Two major approaches come out of the literature: the first is to segment the object coarsely in order to detect its contour and fit the model on it [3], [4], [7], [9]. The second needs a set of deformable models already used to segment the desired object; the model initialization is then obtained by computing a mean deformable model [6], [8], [10]. Finally, region growing approaches are often used in medical imaging, the principle being to place a point (seed) inside the object to segment (organ, tumor, bone, ...). This seed defines the first pixel of the region. The region then grows iteratively by adding the surrounding pixels according to a given homogeneity criterion. The first difficulty of this approach is to define a robust seed, and several approaches exist to define it automatically. In [25], a seed is considered robust if the difference between the greatest and the lowest grey value within its neighborhood does not exceed a given threshold. Wu et al. [44] define a Region Of Interest (ROI) around the organ to segment; they then apply a function to every pixel in the ROI based on spatial and feature-space distances between the pixel, its neighborhood and the ROI contour. The seed is the pixel for which the function is minimal. Finally, Rusko et al. [38] first binarize the image, then define the seed by successive erosions of the biggest connected component.

2.2 Tracking

Organ tracking methods are either based on mathematical models which represent the respiratory cycle as a periodic function, or on empirical algorithms which predict future movements by observation and analysis of previous ones. The most intuitive way to track an organ is to put a marker, highly detectable by a classic medical imaging acquisition, near this organ [28], [29], [31], [40], [41]. This formalism is also used in all-in-one robotic radiosurgery systems such as the Cyberknife [27]. This kind of method requires a surgical intervention, which is not suitable in our case. The following approaches assume the kidney has previously been segmented and reconstructed for two or more phases of the respiratory cycle. Most of the time, only two models are needed, but three [39] or even more [36] are sometimes necessary. These extra acquisitions can be used to improve the precision of the organ deformation. In other cases, it is not an extra acquisition of the organ that is needed, but other kinds of data essential to the method. Hostettler et
al. [13] use the diaphragm movement in order to reflect it on the abdominal organs. In [39], air, tissues and lungs have to be segmented in three acquisitions in order to obtain an organ tracking. Deformation fields are used to understand the motion of an organ: such a field encodes the deformations to apply to a given source model Ms to deform it into a given target model Mt. The deformation field can be computed using several methods, such as Maximum Likelihood / Expectation-Maximization [35], least squares [39] or approaches based on Normalized Mutual Information [37]. Deformations can also be applied to a mesh through a deformable superquadric in order to get the movement of an organ [5]. Deformation fields can also be found in registration methods, which can be used for organ tracking. Nicolau et al. [29] use two acquisitions: on the first one, markers are used in order to get the position of the organ of interest; a second acquisition is then done without these markers. By analyzing the difference in position of the spine between both acquisitions, the registration is performed using the minimization of the Extended Projective Point Criterion. In [37], two operations are done to compute the registration: an affine transformation handles global movements while Free Form Deformation handles local movements. Two registration algorithms based on optical flow are implemented and accelerated using GPU programming in [30] in order to perform image-guided radiotherapy.

3 Dynamic modeling process

3.1 Overview

An overview of our entire workflow can be seen in Figure 1. It shows how the kidney motion simulation is obtained from three sets of CT-scan or MRI images through the three major steps detailed in the following subsections.

3.2 Kidney segmentation

3.2.1 First segmentation

Among the existing approaches for segmentation, we want one which does not need any learning dataset: building up such a set would be time- and memory-consuming. The execution time has to be compatible with a medical real-time (or semi-real-time) environment; it should not exceed one minute. The initialization also has to be compatible with a real-time constraint. A method which can be initialized automatically or manually (in this case, it should need a few

Fig. 1: Overview of our entire workflow: three sets of images resulting from a medical imaging acquisition for the inhale, mid-cycle and exhale phases are acquired (first line). The kidney and the tumor are segmented in every image of these three acquisitions (second line). The Poisson surface reconstruction is then applied to the point cloud extracted from the segmentation of each of the three phases; we call the resulting models M1, M2 and M3 (third line). Mesh morphing is computed between M1 and M2 and between M2 and M3. The results are two metameshes which allow a smooth transition from M1 to M2 and from M2 to M3 (fourth line). By chaining the two metameshes, a full and smooth transition from M1 to M3 is possible, resulting in the kidney motion visualization (fifth line).

interactions) is necessary. Thus, region growing is the chosen approach. It does not need a learning dataset and the runtime is fast, since such methods are generally based on recursive functions. Finally, the initialization, which consists in defining a point, can be performed automatically or manually (the time needed for defining only one point is acceptable). In our case, we use a region growing which is manually initialized; the user defines the seed with the mouse. Only one click is necessary in order to segment all the kidney regions present in each image of the set. Let I_1, I_2, I_3, ..., I_N be the images to segment (we assume the kidney is present in every one of them), I_1 and I_N being both ends of the kidney. The seed is manually defined for image I_{N/2} and then automatically propagated to the adjacent images (I_{N/2-1} and I_{N/2+1}) (see Figure 2.a). The propagation is done by considering the weighted barycenter of the kidney contour on the previous image as the seed for the current image (Figures 2.b & 2.c). The segmentation is done first for images I_{N/2} to I_N, then from I_{N/2} to I_1.
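For illustration, the propagation step can be sketched as follows. This is a minimal version with hypothetical names; it uses a uniform barycenter of the contour pixels, whereas the weighting scheme of the actual weighted barycenter is not detailed here and is therefore an assumption.

```python
# Sketch of the seed propagation (hypothetical helper, not the authors' code):
# the seed for slice I_{k+1} is the barycenter of the kidney contour segmented
# in slice I_k. Uniform weights are assumed here.

def propagate_seed(contour_pixels):
    """Return the barycenter of a list of (row, col) contour pixels,
    used as the seed for the adjacent slice."""
    n = len(contour_pixels)
    r = sum(p[0] for p in contour_pixels) / n
    c = sum(p[1] for p in contour_pixels) / n
    return (round(r), round(c))
```

Starting from the user-defined seed on I_{N/2}, such a function is applied slice after slice towards I_N, then towards I_1.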

Fig. 2: (a, left) Initial seed propagation. – (b, above on the right) Kidney contour for image Ik . – (c, below on the right) New seed for image Ik+1 calculated from the kidney contour in image Ik

For each slice of the acquisition, the region growing method is the same and is detailed below. The growing is limited to a ROI of a given size, centered on the seed. First, in order to homogenize the grey values and to reduce the presence of noise (inherent in medical images), we apply a Gaussian blur of size 1. Then we evaluate the mean grey value within a 5-pixel radius around the seed (we do not take extreme values into account). Let mean_seed be this value. For each new image I_x to segment, we know the mean grey value of the kidney regions in the images segmented so far. Let mean_kid be this value. For each pixel p of grey value Gv_p inside the ROI and next to the region border, we calculate the distance between p and the seed. This distance sets a threshold thres (see Figure 3). p is then added to the region if |Gv_p − mean_seed| < thres and |Gv_p − mean_kid| < 30. Thus, the region grows iteratively until no pixel can be added anymore. Despite the Gaussian blur, noise is still present in the image, which leads to unsatisfying results (Figure 4.a). A succession of mathematical morphology operations overcomes this problem. The purpose of this post-treatment is to fill holes and to eliminate the parts where the region overflowed into an organ next to the kidney whose greyscale is slightly different. To do so, we first perform a morphological closing of a given size, followed by an opening of a bigger size. Note that it is essential to perform the closing first since, at this stage, the kidney binary volume is composed of several small connected components; an opening would suppress them. We can see that holes are filled correctly (Figure 4.b), but the most important overflows are still present (Figure 4.c).
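The growing rule can be sketched as follows. This is a toy version on a plain 2D array; the thres() profile shrinking with distance is an assumption, as the text only states that the condition becomes stricter as the pixel gets farther from the seed (Figure 3).

```python
from collections import deque

# Minimal region-growing sketch following the acceptance rule of this section:
# a border pixel p is added if |Gv_p - mean_seed| < thres(dist) and
# |Gv_p - mean_kid| < 30. The exact thres() profile below is an assumption.

def grow_region(img, seed, mean_seed, mean_kid, base_thres=40.0):
    rows, cols = len(img), len(img[0])
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                gv = img[nr][nc]
                dist = ((nr - seed[0]) ** 2 + (nc - seed[1]) ** 2) ** 0.5
                thres = base_thres / (1.0 + dist)  # stricter with distance
                if abs(gv - mean_seed) < thres and abs(gv - mean_kid) < 30:
                    region.add((nr, nc))
                    queue.append((nr, nc))
    return region
```

In the actual pipeline, this growth would run inside the blurred ROI and be followed by the morphological closing and opening described above.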

Fig. 3: The distance between the current pixel and the seed defines the threshold. The bigger the distance, the stricter the condition a pixel should meet to be considered as part of the region.

Fig. 4: (a, left) Region growing without post-treatment. Contours which should not be considered appear because of noise. – (b, middle) Post-treatment makes wrong contours disappear. – (c, right) Post-treatment cannot suppress areas of important overflowing.

3.2.2 Segmentation refinement

In order to improve the results and to avoid adjacent organs from being considered as part of the kidney, the cumulative histogram H of the kidney regions is computed. This histogram represents the number of appearances of each grey value in the segmented kidney region over all images I_1, ..., I_N. H shows a peak since the kidney has the same greyscale range [min; max] in all images (Figure 5.a). The refinement is done as follows. The same ROI as in 3.2.1 is defined and we apply to it a Gaussian blur of size 1 in order to homogenize the grey values. Let app_max be the maximal number of appearances over all grey values in H, and H(Gv) the function which returns the number of appearances in H for a given grey value Gv. For each pixel p inside the ROI, the distance dist between p and the center of the ROI sets a threshold thres according to the formula:

$$thres = \frac{dist - \frac{size_{ROI}}{4}}{100}$$

p is considered as part of the kidney if H(Gv_p) ≥ app_max (0.85 + thres). Literally, this approach defines a threshold on the number of appearances: grey values whose number of appearances in H is below this threshold are no longer considered as part of the kidney. In this way, the greyscale range [min; max] is reduced to a new one, [min_aff; max_aff] (Figure 5.b), so that the less frequent grey values are eliminated. The less frequent grey values represent the part where the region overflowed into an adjacent organ (Figure 5.c & 5.d). Finally, for the same reason as in 3.2.1, we perform a post-treatment based on mathematical morphology operators. First, we apply a closing followed by a small-particle elimination. Another closing is then applied in order to connect the remaining connected components. Finally, a hole-filling algorithm fills the final kidney.

Fig. 5: (a, above on the left) Cumulative histogram. The number of appearances of each grey value in the kidney region is shown. Greyscale is located between [min; max]. – (b, below on the left) A threshold is set so that the kidney greyscale is reduced to [min_aff; max_aff], which will define the refined kidney regions. – (c, above on the right) Segmentation showing inaccurate results. – (d, below on the right) Refinement of the kidney region shown in 5.c.

3.3 Point cloud reconstruction

As the output of the segmentation step is a point cloud, it is now necessary to reconstruct it in order to obtain a triangular, manifold model. The Poisson surface reconstruction [17] is a recent algorithm aiming at meshing a point cloud of a model M that has been oriented beforehand, i.e. the normal at each point is known. There are several reasons we use this method. The main one is that it computes an approximating surface, i.e. the final surface does not fit every point of the point cloud but represents its mean shape; thus, some segmentation errors that might remain are ignored. Moreover, it reduces the number of vertices of the final surface (but not too radically, so that the resulting shape stays coherent and correct), which is helpful for us since the more vertices there are, the slower morphing

methods are. Another reason we use this method is its short computing time, which is essential for our medical environment constraint. The principle of the Poisson surface reconstruction is to define an indicator function χ, peculiar to the model M, which equals 1 for every point inside the model and 0 outside. The final reconstruction is deduced from the extraction of an appropriate isosurface (Figure 6). There is a relation between the oriented point cloud and its function χ: the gradient of χ is a vector field that is 0 almost everywhere, except near the surface where it equals the inward surface normal. Thus, the oriented point cloud can be seen as
samples of the gradient of χ. Computing χ amounts to finding a function χ whose gradient best approximates a vector field $\vec{V}$ defined by the normals at each point, i.e. χ is a solution of $\nabla\chi = \vec{V}$. Applying the divergence operator transforms this problem into a classical Poisson equation: $\Delta\chi \equiv \nabla \cdot \nabla\chi = \nabla \cdot \vec{V}$.

The normal at each point of the point cloud is known, which implies that $\vec{V}\,(= \nabla\chi)$ is known at these points. Nevertheless, it has to be known for every point $p \in \mathbb{R}^3$. The main idea is then to find an expression for the vector field $\vec{V}$, gradient of χ, and deduce χ as the solution of the Poisson equation $\Delta\chi = \nabla \cdot \vec{V}$. Once χ is known, we can extract the isosurface, thus giving the final reconstruction. As we have an oriented point cloud which is often noisy, we consider the gradient of the smoothed indicator function, resulting from χ convolved with a smoothing filter. Let M be the model to reconstruct, ∂M its surface and χ_M its indicator function. Let $\vec{N}_{\partial M}(p)$ be the inward normal at point p (p ∈ ∂M), F(q) a smoothing filter and F_p(q) = F(q − p) its translation to point p. The gradient of the smoothed indicator function is defined as:

$$\nabla(\chi_M * F)(q) = \int_{\partial M} F_p(q)\, \vec{N}_{\partial M}(p)\, dp$$

Obviously, we cannot use this formula as we do not know ∂M yet and, therefore, cannot evaluate its integral. On the other hand, it is possible to approximate it by partitioning ∂M into distinct patches P_s, according to the initial point cloud. More precisely, an octree is defined so that every point of the original point cloud falls into a leaf, each leaf being considered as a patch P_s. Let S be the point cloud, composed of a set of points s whose positions are s.p and normals $s.\vec{N}$. To each point s is associated the leaf it falls into; the union of the leaves approximates ∂M. The integral over a patch P_s is thus approximated thanks to the coordinates of the point s.p, scaled by the area of P_s:

$$\nabla(\chi_M * F)(q) = \sum_{s \in S} \int_{P_s} F_p(q)\, \vec{N}_{\partial M}(p)\, dp \approx \sum_{s \in S} |P_s|\, F_{s.p}(q)\, s.\vec{N} \equiv \vec{V}(q)$$

The vector field $\vec{V}$ being defined, we look for χ such that its gradient is close to $\vec{V}$, i.e. a solution to the Poisson equation $\Delta\chi = \nabla \cdot \vec{V}$. Solving such an equation is a well-known problem (especially in physics) and several methods exist, but we will not describe them here. To obtain the desired surface ∂M of the initial point cloud, it is first necessary to set an isovalue in order to extract the corresponding isosurface. We choose the isovalue whose isosurface is the closest to the initial points. This is done by evaluating χ at these positions and using the mean value:

$$\partial M = \{\, q \in \mathbb{R}^3 \mid \chi(q) = \gamma \,\} \quad \text{with} \quad \gamma = \frac{1}{|S|} \sum_{s \in S} \chi(s.p)$$

where |S| is the number of points in the initial point cloud S. Finally, the isosurface of the indicator function is extracted using a method similar to the adaptation of Marching Cubes to octree representations.

Fig. 6: Overview of the Poisson surface reconstruction.

Fig. 7: Final result of a kidney point cloud (left) and its reconstruction using the Poisson surface reconstruction (right).
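As a toy illustration of the Poisson step (not the octree-based implementation of [17]), the discrete equation Δχ = rhs, where rhs plays the role of ∇·V, can be solved on a small 2D grid with Gauss–Seidel iteration; the grid size and iteration count below are arbitrary demo choices.

```python
import math

# Toy Poisson solve: Gauss-Seidel iteration for the 5-point discrete
# Laplacian (unit spacing, chi = 0 on the boundary). Not the authors' code.

def solve_poisson(rhs, iters=3000):
    n = len(rhs)
    chi = [[0.0] * n for _ in range(n)]
    for _ in range(iters):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                chi[i][j] = 0.25 * (chi[i - 1][j] + chi[i + 1][j]
                                    + chi[i][j - 1] + chi[i][j + 1] - rhs[i][j])
    return chi

# Sanity check: build rhs as the discrete Laplacian of a known function
# and verify the solver recovers that function.
n = 17
true = [[math.sin(math.pi * i / (n - 1)) * math.sin(math.pi * j / (n - 1))
         for j in range(n)] for i in range(n)]
rhs = [[true[i + 1][j] + true[i - 1][j] + true[i][j - 1] + true[i][j + 1]
        - 4 * true[i][j]
        if 0 < i < n - 1 and 0 < j < n - 1 else 0.0
        for j in range(n)] for i in range(n)]
chi = solve_poisson(rhs)
err = max(abs(chi[i][j] - true[i][j]) for i in range(n) for j in range(n))
```

The actual method solves the 3D analogue on an adaptive octree and then extracts the isosurface at the mean value γ.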

3.4 Animation through mesh morphing

The originality of our method is that the tracking part is based on a fully geometric approach: mesh morphing. Mesh morphing is a method used to progressively transform a source model Ms into a target model Mt by computing a smooth transition between both models. Thus, what we propose here is a new approach with two goals. The first is the visualization of the motion and deformation of an organ (the kidney in this case) under the influence of natural breathing. The second results directly from the first and is the tracking of a part of this organ: its tumor. The advantage of using mesh morphing is that our method is fast and only needs three models corresponding to three breathing phases: inhale, mid-cycle and exhale. Moreover, the results obtained are fully geometric; the output is an animated 3D model. Thereby, the general motion and all deformations can be studied at once, whereas some methods only offer the possibility of a 2D visualization. Finally, as the tumor is also
animated, it is possible to know its position at any time.

The most usual method to perform a mesh morphing is to find a common vertex/edge/face network for both models in order to compute a metamesh Mm which contains the topology of both Ms and Mt. This approach was first used by Kent et al. in [18], where both models are divided into small parts (also called patches); every part is then mapped onto the unit disk. Most mesh morphing methods are based on mapping onto a disk or another regular polygon ([2], [11], [14], [19], [20]). Unfortunately, these approaches always need either user interaction (which can be very time-consuming for some methods) or a vertex correspondence between Ms and Mt prior to the mesh morphing, which does not satisfy our constraints. There are several ways to fully automate a mesh morphing method. The most straightforward is to map the models onto a sphere, first introduced in [18]: there is no need to divide the models anymore since they are homeomorphic to a sphere. On the other hand, they have to be star-shaped, although the method described in [2] can solve minor covering problems. Another approach is to use a constraint field as described in [45]. The morphing stage must involve only very basic user interaction, and the two models to morph are close to each other since they both come from the same kidney. Thus, our mesh morphing method uses an automatic cutting of the mesh into two patches, unit disk mapping and metamesh creation. We cannot map onto a sphere as kidney models are not star-shaped. All these steps are described in detail hereunder. For the rest of this paper, we will use the following symbols: M represents a given model, Ms is the source model and Mt the target model. C is the connectivity between vertices, edges and faces of M. V = {v_1, v_2, v_3, ..., v_n} gives the positions in R^3 of the vertices. Edges are represented as a pair of vertices {i, j} and faces as a triplet of vertices {i, j, k}. Finally, N(i) is the set of vertices adjacent to vertex {i}, i.e. N(i) = {{j} | {i, j} ∈ C}.

Mesh cutting up: obtaining the tearing path

The first stage of the mesh dissection consists in computing its principal axis. This can be done by considering only the vertices and using Principal Component Analysis (PCA). Moreover, the PCA gives the 3 principal vectors of the mesh; the first two, together with the barycenter of the mesh, define the principal plane. The next stage consists in computing the intersections between the edges of M and the principal plane, defining what we call the intersected edges. In the same way, the vertices {i, j} of an intersected edge are called intersected vertices. This set of intersected edges is the first stage of the final tearing path (see Figure 8).

Fig. 8: Intersection between the kidney model and its principal plane (in blue). The resulting tearing path is displayed in red.
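The principal-plane computation and the detection of intersected edges can be sketched as follows; this is a NumPy-based illustration, not the authors' implementation.

```python
import numpy as np

# The principal plane is obtained by PCA on the vertices; an edge {i, j} is
# "intersected" when its endpoints lie on opposite sides of that plane.

def principal_plane(vertices):
    """Return (barycenter, unit normal) of the PCA principal plane."""
    V = np.asarray(vertices, dtype=float)
    center = V.mean(axis=0)
    cov = np.cov((V - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    normal = eigvecs[:, 0]                  # least-variance direction
    return center, normal

def intersected_edges(vertices, edges, center, normal):
    """Edges whose endpoints have opposite signed distances to the plane."""
    d = [np.dot(np.asarray(v) - center, normal) for v in vertices]
    return [(i, j) for (i, j) in edges if d[i] * d[j] < 0]
```

The endpoints of the edges returned by `intersected_edges` are the intersected vertices feeding the tearing-path post-process.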

The tearing path must be a unique loop of edges in C, i.e. {{i_1, i_2}, {i_2, i_3}, ..., {i_{n−1}, i_n}, {i_n, i_1}} with {i_k, i_m} ∈ C for all (k, m) ∈ [1; n]; this set of edges is a subset of C called c. Thus, two successive intersected edges must share a vertex. The purpose of the first post-process of the intersected edges is to remove dead-end edges from c. Such an edge has one vertex which is not shared with any other intersected edge, i.e. {{i, j} | ∀l ∈ N(j), {j, l} ∉ c}. To detect such edges, we first compute the partial adjacency list of each vertex in c. This list is the set of vertices {j} adjacent in c to a vertex {i}, i.e. {{j} | {i, j} ∈ c}. A dead-end edge is then simply detected when at least one of its vertices has only one neighbor, i.e. its partial adjacency list has length 1 (see Figure 9 - b). The second post-process consists in removing local loops: the tearing path must be a unique succession of edges, and each vertex must be shared by two and only two edges. Thanks to the partial adjacency list, vertices from which the tearing path splits are easily detected: such vertices have at least 3 neighbors. Local loops are removed as follows. Starting from a 2-adjacency vertex, we arbitrarily choose one of its neighbors, and so on, until a 3-adjacency vertex is reached. During this step, each vertex is visited only once so that it appears at most once in the final tearing path. An arbitrary neighbor of the current 3-adjacency vertex is still chosen, but every other edge containing the current vertex is suppressed from c. As such a process creates new dead-end edges, every edge of each 2-adjacency neighbor is recursively suppressed until the neighbor is a 3-adjacency vertex (see Figure 9 - c,d,e). As the current 3-adjacency vertex becomes a 2-adjacency vertex, the whole process is repeated until we fall back on the first vertex.
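The dead-end removal can be sketched as follows; a minimal version that recomputes the partial adjacency lists after each pass instead of updating them incrementally, since removing one dead-end edge may create another.

```python
from collections import defaultdict

# Sketch of the first post-process on the tearing path: edges with a
# 1-adjacency endpoint (dead ends) are removed repeatedly. Hypothetical
# minimal version, not the authors' code.

def prune_dead_ends(edges):
    edges = set(frozenset(e) for e in edges)
    while True:
        adj = defaultdict(set)
        for e in edges:
            a, b = tuple(e)
            adj[a].add(b)
            adj[b].add(a)
        dead = {e for e in edges if any(len(adj[v]) == 1 for v in e)}
        if not dead:
            return edges
        edges -= dead
```

On a loop 1–2–3–1 with a dangling tail 3–4–5, the two tail edges are removed in successive passes and only the loop survives.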

Fig. 9: Complete example of the post-processing of a tearing path. Although this example cannot occur in a real situation, it presents all the cases needed to understand how the full post-process works. From top to bottom: (a) Original tearing path. – (b) 1-adjacency vertex detection (diamond) and dead-end edge suppression. – (c) 3 (or more)-adjacency vertex detection (square); starting from the pointed vertex, an arbitrary neighbor is chosen. – (d) For a 3-adjacency vertex, an arbitrary neighbor is still chosen, but every other edge is suppressed. – (e) To avoid the apparition of new dead-end edges when edges are suppressed, every edge of a 2-adjacency neighbor is recursively suppressed. – (f) The final tearing path obtained after the post-process.

Mapping the mesh onto the unit disk

Once the tearing path has been computed, vertices are tagged in three different ways, called tags 0, 1 and 2. Tagging the mesh defines the two parts of it which will be mapped later. Vertices on the tearing path are tagged 0. A single arbitrary neighbor of a vertex tagged 0 is tagged 1, and all its neighbors are recursively tagged as well, so that a whole part of the mesh is tagged 1. The other part is tagged 2. Both meshes are then rotated so that their principal planes are aligned with the xz-plane. This way, it is possible to check whether the parts bearing the same tag in the two models have the same y orientation. If not, tags 1 and 2 of one model are swapped. This step is essential since the part of Ms tagged 1 (resp. 2) will be morphed into the part of Mt tagged 1 (resp. 2) (see Figure 10).

Fig. 10: Example of two models for which the same tag has a different y orientation. Vertices in red are tagged 0, vertices in cyan are tagged 1 and vertices in magenta are tagged 2.

Now that every vertex is tagged, the vertices can be mapped onto the unit disk. Although any kind of mapping is applicable, we choose the discrete harmonic mapping [33] since it preserves as much as possible the topology of the faces of both models. The most straightforward step of this mapping concerns the intersected vertices: they are fixed on the unit circle in such a way that the arc length between each pair of successive vertices is proportional to the length of the original edge in the mesh. For vertices tagged 1 or 2, the discrete harmonic mapping (like other mappings) amounts to solving a linear system, described as follows. Two distinct mappings are computed, one for each tag. Let v_i be the vertices to map, with 0 ≤ i < n the indices of vertices tagged 1 (resp. 2) and n ≤ i < N the indices of vertices tagged 0. Then, the linear system to solve is:

$$(I - \Lambda) \begin{pmatrix} v_0 \\ v_1 \\ \vdots \\ v_{n-1} \end{pmatrix} = \begin{pmatrix} \sum_{i=n}^{N-1} \lambda_{0,i}\, v_i \\ \sum_{i=n}^{N-1} \lambda_{1,i}\, v_i \\ \vdots \\ \sum_{i=n}^{N-1} \lambda_{n-1,i}\, v_i \end{pmatrix}$$

where Λ = {λ_{i,j}} and λ_{i,j} is a coefficient depending on the mapping used. Here, for the discrete harmonic mapping, we have:

$$\lambda_{i,j} = \begin{cases} \dfrac{\cot \alpha_{i,j} + \cot \beta_{i,j}}{\sum_{j \in N(i)} (\cot \alpha_{i,j} + \cot \beta_{i,j})} & \text{if } \{i,j\} \in C \\ 0 & \text{if } \{i,j\} \notin C \end{cases}$$

with α_{i,j} = ∠(i, k_0, j) and β_{i,j} = ∠(i, k_1, j). Edge {i, j} is adjacent to two and only two faces since M is a triangular mesh; k_0 and k_1 are the two vertices that define these faces. We call M′_{s,N} and M′_{t,N} the mappings of Ms and Mt for tag N. Similarly, we call {i′} a mapped vertex. Although such notation should not exist, since only the positions of the vertices (v_i) change during the mapping, it will make further expressions more straightforward.
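The boundary placement and the interior solve can be sketched as follows. For brevity this sketch uses uniform weights (Tutte's barycentric mapping) instead of the cotangent weights of the discrete harmonic mapping, and solves the system by simple relaxation; the structure (interior vertices as convex combinations of their neighbors, boundary vertices fixed on the unit circle) is the same.

```python
import math

# Sketch of the disk mapping: tearing-path vertices are fixed on the unit
# circle with arc lengths proportional to the original edge lengths, then
# interior vertices are relaxed to the average of their neighbors
# (uniform weights are an assumption; the paper uses cotangent weights).

def map_to_disk(n_boundary, edge_lengths, interior_adj, iters=500):
    total = sum(edge_lengths)
    pos, angle = {}, 0.0
    for b in range(n_boundary):
        pos[b] = (math.cos(angle), math.sin(angle))
        angle += 2 * math.pi * edge_lengths[b] / total
    for v in interior_adj:
        pos[v] = (0.0, 0.0)
    for _ in range(iters):
        for v, nbrs in interior_adj.items():
            xs = sum(pos[u][0] for u in nbrs) / len(nbrs)
            ys = sum(pos[u][1] for u in nbrs) / len(nbrs)
            pos[v] = (xs, ys)
    return pos
```

One such mapping is computed per tag, so each kidney half ends up flattened in the unit disk.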

Metamesh creation and animation: computing intersections and barycentric coordinates

The next step is to overlay Ms′N and Mt′N for both tags in order to compute the metamesh. The first stage is to detect intersections between mapped edges. When two edges {i′, j′} ∈ C of Ms′ and {k′, l′} ∈ C of Mt′ cross, a new vertex is created. Two valid definitions of this intersection point are v′i + α·(v′j − v′i) and v′k + β·(v′l − v′k). Coefficients α and β are saved along with the new vertex. These coefficients will be necessary for intermediate models, as they are sufficient to compute the coordinates of the vertex even when vi, vj, vk and vl are interpolated. This kind of vertex is called an intersection vertex. Once an intersection vertex is created, appropriate edges and faces are created along with it in order to build the topology of the metamesh Mm. These new edges and faces allow Mm to combine the topology of both Ms and Mt and to provide a continuous interpolation between the two models (see Figure 11).
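The α and β coefficients of an intersection vertex can be obtained by solving the 2×2 system formed by the two parametric edges. A minimal 2D sketch (the function name is ours, not the paper's):

```python
def segment_intersection(p, q, r, s):
    """Intersect mapped edges [p, q] (from Ms') and [r, s] (from Mt').

    Returns (alpha, beta) such that the intersection point equals both
    p + alpha * (q - p) and r + beta * (s - r), or None if the
    segments do not cross.
    """
    dx1, dy1 = q[0] - p[0], q[1] - p[1]
    dx2, dy2 = s[0] - r[0], s[1] - r[1]
    det = dx1 * dy2 - dy1 * dx2
    if det == 0.0:          # parallel edges: no intersection vertex
        return None
    ex, ey = r[0] - p[0], r[1] - p[1]
    alpha = (ex * dy2 - ey * dx2) / det
    beta = (ex * dy1 - ey * dx1) / det
    if 0.0 <= alpha <= 1.0 and 0.0 <= beta <= 1.0:
        return alpha, beta
    return None
```

Both coefficients are stored with the new vertex, so the intersection point can be recomputed on any intermediate model once the edge endpoints have been interpolated.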

Fig. 11: Example of intersections between mapped edges of Ms (solid line) and Mt (dotted line). Intersection points 1, 2, 3 and 4 are created, as well as appropriate edges (C1, 1D, C2, 2F, ...) and faces (C12, F32, ...).


The second stage of the metamesh creation is the computation of barycentric coordinates (BC) for every vertex of Ms and Mt. To do so, we first want to know on which mapped face {i′, j′, k′} of Mt′ (resp. Ms′) a mapped vertex v′m of Ms′ (resp. Mt′) lies. The BC are the unique triplet u, v, w such that v′m = uv′i + vv′j + wv′k. The face on which v′m lies and its BC are saved. This kind of vertex is called a mesh vertex. The metamesh is thus completely built, composed of a set of intersection vertices and mesh vertices. Intermediate models can now easily be obtained by interpolating the positions of the vertices. The interpolation is possible since we know, for each of them, an initial and a final position, as follows: for a mesh vertex coming from Ms, the initial position is its position in Ms, and the final position is given by the combination of its BC and the face of Mt it lies on. Conversely, for a mesh vertex coming from Mt, the initial position is given by its BC and the face of Ms it lies on, and the final position is its natural position in Mt. For an intersection vertex, the initial position is given by its α coefficient and the edge of Ms it lies on, and the final position is computed from its β coefficient and the edge of Mt it lies on.

3.5 Tracking the tumor

Tumor tracking is the second goal of our method. It is important to know where the tumor is located in order to adjust the wave beam accordingly. From this point of view, there are two main differences between the tumor and the kidney. The first is that the tumor is not deformed by the respiratory cycle: it only moves along with the kidney. The second is that the tumor is similar to an ellipsoid. In the segmentation step, the tumor is segmented separately from the kidney, in such a way that its center is known. Another mesh morphing to obtain the tumor movements (i.e. its tracking) would be inappropriate, since its shape remains the same from one breathing phase to another; moreover, it would cost useless computational time. A more convenient and faster way is to interpolate the position of the tumor, since we have the coordinates of its center for the inhale, exhale and mid-cycle phases. We use a quadratic Bézier curve interpolation, which gives the tumor position for intermediate phases. The 3D coordinates are thus known at any time, resulting in the tumor tracking.

4 Results

Results are evaluated on three sets of a CT-scan acquisition. The kidney is present in about 160 slices. If we consider only the slices where the kidney is present, the only user interactions during the whole process are three mouse clicks, which define the seeds necessary

to segment the kidney for the three sets. Despite the refinement of the region growing approach, some errors can still occur during the segmentation step. These errors can arise when an organ is located right next to the kidney, or around its natural cavity where various arteries and veins are present. The choice of the Poisson surface reconstruction is relevant in this case. One advantage of this method is that the final reconstruction can be more or less accurate depending on the depth of the octree. By setting this parameter to 5, we choose not to consider the details and to approximate the point cloud, i.e. points coming from segmentation errors are ignored, as long as they are neither frequent nor all located around the same region of the kidney. In order to have a full animation, two mesh morphings are performed: the first between M1 and M2 and the second between M2 and M3, which respectively correspond to the inhale, mid-cycle and exhale phases. Figures 13 and 14 present several intermediate models obtained while performing the morphing from M1 to M2 to M3. As the results are hard to appreciate on static models, an animated version can be seen at the following URL: http://www.youtube.com/watch?v=bjxtoSn4s04. The general movement and deformation of the kidney are respected: the natural rotation of the principal axis of the organ is present, as well as its enlargement. On the other hand, local deformations are not totally satisfying, especially around the tumor. The tumor region on the morphed kidney is absorbed into one part of the kidney and reappears from a different part, right next to its original location. The natural deformation would have been a smooth displacement between these two locations, almost like a translation. This is due to the morphing method itself. Although these false deformations are barely noticeable on the kidney alone, they become obvious when the tumor is displayed: it sticks out of the kidney model (Figure 12).

Fig. 12: Highlighting local deformation problem. Intermediate model with tumor (blue ellipsoid) presents local inaccuracy, especially for the tumor region (encircled).
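Section 3.5 describes the tumor tracking as a quadratic Bézier interpolation of the tumor center over the three acquired phases. A minimal sketch follows; the function name is ours, and the choice of the middle control point so that the curve actually passes through the mid-cycle center at t = 0.5 is our assumption, since the paper only states that a quadratic Bézier curve is used:

```python
def bezier_tumor_center(c0, m, c2, t):
    """Quadratic Bezier position of the tumor center at phase t in [0, 1].

    c0, m, c2 -- 3D centers at the inhale, mid-cycle and exhale phases.
    The middle control point p1 is chosen so that the curve passes
    through m at t = 0.5 (our assumption, see lead-in above).
    """
    p1 = tuple(2.0 * m[k] - 0.5 * (c0[k] + c2[k]) for k in range(3))
    return tuple((1.0 - t) ** 2 * c0[k]
                 + 2.0 * t * (1.0 - t) * p1[k]
                 + t ** 2 * c2[k] for k in range(3))
```

Evaluating the curve at any t gives the 3D tumor position for the corresponding intermediate breathing phase, without any extra mesh morphing.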

The whole process takes less than 2 minutes. The longest part is the segmentation (1 minute), more precisely the final post-treatment when mathematical morphology operators are applied (about 44 seconds). The shortest part is the


Poisson surface reconstruction, since the depth of the octree is low (up to 3 seconds). The morphing step is computed in 40 seconds for models composed of up to 2,300 vertices, 6,900 edges and 4,600 faces. Although these computational times do not allow a real-time use, they are acceptable for our medical environment, where the interventions used for our non-invasive tumor destruction (High-Intensity Focused Ultrasound) are very long (3 hours for uterus cancer). Moreover, the whole computation time is needed only once, at the beginning. The animation and the tracking are done in real time, as they simply consist of an interpolation between an initial and a final position of the vertices, as explained at the end of Section 3.4.
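The real-time animation step reduces to a linear interpolation of every metamesh vertex between its initial and final positions; a minimal sketch (function name ours):

```python
def interpolate_vertices(initial, final, t):
    """Positions of the metamesh vertices at breathing phase t in [0, 1].

    initial, final -- lists of (x, y, z) positions for every vertex,
    mesh vertices and intersection vertices alike.
    """
    return [tuple((1.0 - t) * a + t * b for a, b in zip(p0, p1))
            for p0, p1 in zip(initial, final)]
```

Since only one such pass is needed per frame, the animation cost is linear in the number of vertices and runs in real time.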

5 Conclusion

We have presented a full method going from three CT-scan or MRI acquisitions to kidney motion simulation and tumor tracking. It is divided into three major steps. The first one is the kidney segmentation, performed through a semi-automatic region growing approach; it only needs a mouse click to define the seed. Although the segmentation is then refined thanks to a histogram analysis, small errors can remain. Fortunately, these errors can be ignored during the second step, the surface reconstruction. This is done through the Poisson surface reconstruction, which offers the possibility to compute a smooth surface without considering all the points of the point cloud, resulting in its general shape. The third step is the mesh morphing among the three kidney models, corresponding to three breathing phases. Two mesh morphings are performed, providing smooth transitions between the first and the second model and between the second and the third. This step is fully automatic and is based on mesh cutting, unit disk mapping and metamesh creation. The output of the whole method is fully geometric, as it is a 3D model of the kidney for any phase of the respiratory cycle. Although the general deformation and movement of the kidney are well simulated, local deformations are not precise enough, especially for tumors near the surface. The perspectives of our work are multiple. The first one is the validation of the model, in order to quantify the error between its position and the real kidney and tumor positions. This can be done using a fourth acquisition, for a given phase of the breathing cycle, in which the kidney and the tumor are manually segmented by experts. The comparison between these borders and those of the model (for the same breathing phase) will give a quantitative indication of how far our model is from reality, and of the robustness of the method. The second perspective is the improvement of the segmentation step, as errors can still occur.
An approach based on mathematical morphology pre-treatment and watershed



Fig. 13: Final results showing the natural movements of the right kidney due to respiration. Source and target models obtained from the reconstruction are displayed in red. Intermediate models are displayed in grey. The morphing from M1 to M2 is shown here (from left to right).

Fig. 14: Morphing from M2 to M3 from a different point of view (rotation of 180 degrees around the vertical axis). Models are displayed in wireframe and the tumor is visible (blue ellipsoid).

algorithm is considered. Although the watershed is known for its over-segmented results, the pre-treatment yields an acceptable segmentation, and first results are encouraging. Nevertheless, CT-scan acquisitions will probably not be used anymore, as they offer no control of the tumor heating, which is essential for HiFU therapy. MRI acquisitions will be used instead. The contrasts in these images are greater than in CT-scan images; the segmentation part will thus work better, giving better results and reducing errors. Finally, the third perspective will deal with the mesh morphing, especially to correct the animation of some local deformations. A way to overcome this problem would be to impose that parts of the models with similar curvature morph into each other. First results are encouraging and will be published soon.

Acknowledgements This work is funded by the Foundation "Santé, Sport et Développement Durable", presided by Pr. Yvon Berland. The authors would like to thank everyone involved in the KiTT project: Christian Coulange for his precious help, Marc André, Frédéric Cohen and Philippe Souteyrand for their wise advice and for providing CT-scan data, and Pierre-Henri Rolland for his support.

References

1. Ahmed, M., Yamany, S., Mohamed, N., Farag, A., Moriarty, T.: A modified fuzzy C-means algorithm for bias field estimation and segmentation of MRI data. IEEE Transactions on Medical Imaging 21(3) (2002)
2. Alexa, M., Cohen-Or, D., Levin, D.: As-rigid-as-possible shape interpolation. Proceedings of Computer Graphics and Interactive Techniques (2000)
3. Atkins, S., Mackiewich, B.: Fully automatic segmentation of the brain in MRI. IEEE Transactions on Medical Imaging 17(1), 98–107 (1998)
4. Bardinet, E., Cohen, L., Ayache, N.: A parametric deformable model to fit unstructured 3D data. Computer Vision and Image Understanding 71, 39–54 (1995)
5. Bardinet, E., Cohen, L., Ayache, N.: Tracking and motion analysis of the left ventricle with deformable superquadrics. Medical Image Analysis 1(2), 129–149 (1996)
6. Boes, J., Weymouth, T., Meyer, C.: Multiple organ definition in CT using a Bayesian approach for 3D model fitting. Vision Geometry 2573, 244–251 (1995)
7. Boscolo, R., Brown, M., McNitt-Gray, M.: Medical image segmentation with knowledge-guided robust active contours. RadioGraphics 22, 437–448 (2002)
8. Fritsch, D., Pizer, S., Yu, L., Johnson, V., Chaney, E.: Localization and segmentation of medical image objects using deformable shape loci. Information Processing in Medical Imaging 1230, 127–140 (1997)
9. Gao, J., Kosaka, A., Kak, A.: A deformable model for human organ extraction. International Conference on Image Processing 3 (1998)
10. Ginneken, B., Frangi, A., Staal, J., ter Haar Romeny, B., Viergever, M.: Active shape model segmentation with optimal features. IEEE Transactions on Medical Imaging 21(8) (2002)
11. Gregory, A., State, A., Lin, M., Manocha, D., Livingston, M.: Feature-based surface decomposition for correspondence and morphing between polyhedra. Proceedings of Computer Animation, 64–71 (1998)

12. Held, K., Kops, E.R., Krause, B., Wells, W.M. III, Kikinis, R., Müller-Gärtner, H.W.: Markov random field segmentation of brain MR images. IEEE Transactions on Medical Imaging (1997)
13. Hostettler, A., Nicolau, S., Soler, L., Rémond, Y., Marescaux, J.: A real-time predictive simulation of abdominal organ positions induced by free breathing. International Symposium on Biomedical Simulation, 89–97 (2008)
14. Kanai, T., Suzuki, H., Kimura, F.: Metamorphosis of arbitrary triangular meshes. Proceedings of Computer Graphics and Application 20(2) (2000)
15. Kapur, T., Grimson, E., Kikinis, R., Wells, W.: Enhanced spatial priors for segmentation of magnetic resonance imagery. Medical Image Computing and Computer-Assisted Intervention (1998)
16. Kaus, M., Warfield, S., Nabavi, A., Black, P., Jolesz, F., Kikinis, R.: Automated segmentation of MR images of brain tumors. Radiology 218, 586–591 (2001)
17. Kazhdan, M., Bolitho, M., Hoppe, H.: Poisson surface reconstruction. Eurographics Symposium on Geometry Processing (2006)
18. Kent, J., Carlson, W., Parent, R.: Shape transformation for polyhedral objects. Computer Graphics 26(2) (1992)
19. Lee, A., Dobkin, D., Sweldens, W., Schröder, P.: Multiresolution mesh morphing. Proceedings of Computer Graphics and Interactive Techniques (1999)
20. Lee, A., Sweldens, W., Schröder, P., Cowsar, L., Dobkin, D.: MAPS: Multiresolution adaptive parameterization of surfaces. Proceedings of SIGGRAPH, 95–104 (1998)
21. Lee, C.C., Chung, P.C., Tsai, H.M.: Identifying multiple abdominal organs from CT image series using a multimodule contextual neural network and spatial fuzzy rules. IEEE Transactions on Information Technology in Biomedicine 7(3) (2003)
22. Lei, T., Sewchand, W.: A new stochastic model-based image segmentation technique for CT images. IEEE Transactions on Medical Imaging 11(1) (1992)
23.
Leonardi, V., Mari, J.L., Vidal, V., Daniel, M.: A morphing approach for kidney dynamic modeling: from 3D reconstruction to motion simulation. 20th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG), 179–187 (2012)
24. Liang, Z., MacFall, J., Harrington, D.: Parameter estimation and tissue segmentation from multispectral MR images. IEEE Transactions on Medical Imaging 13(3) (1994)
25. Lin, D.T., Lei, C.C., Hung, S.W.: Computer-aided kidney segmentation on abdominal CT images. IEEE Transactions on Information Technology in Biomedicine 10(1) (2006)
26. Lin, J.S., Cheng, K.S., Mao, C.W.: Multispectral magnetic resonance images segmentation using fuzzy Hopfield neural network. International Journal of Biomedical Computing 42, 205–214 (1996)
27. Murphy, M., Chang, S., Gibbs, I., Le, Q.T., Hai, J., Kim, D., Martin, D., Adler, J.: Patterns of patient movement during frameless image-guided radiosurgery. International Journal of Radiation Oncology Biology Physics 55(5), 1400–1408 (2003)
28. Nakamoto, M., Ukimura, O., Gill, I., Mahadevan, A., Miki, T., Hashizume, M., Sato, Y.: Realtime organ tracking for endoscopic augmented reality visualization using miniature wireless magnetic tracker. Medical Imaging and Augmented Reality, 359–366 (2008)
29. Nicolau, S., Pennec, X., Soler, L., Ayache, N.: Clinical evaluation of a respiratory gated guidance system for liver punctures. Medical Image Computing and Computer-Assisted Intervention, 77–85 (2007)
30. Noe, K.O., de Senneville, B.D., Elstrom, U.V., Tanderup, K., Sorensen, T.S.: Acceleration and validation of optical flow based deformable registration for image-guided radiotherapy. Acta Oncology 47(7), 1286–1293 (2008)

31. Olbricha, B., Trau, J., Wiesner, S., Wicherta, A., Feussner, H., Navab, N.: Respiratory motion analysis: towards gated augmentation of the liver. Computer Assisted Radiology and Surgery 1281, 248–253 (2005)
32. Pham, D., Prince, J.: Adaptive fuzzy segmentation of magnetic resonance images. IEEE Transactions on Medical Imaging (1998)
33. Polthier, K.: Conjugate harmonic maps and minimal surfaces. Tech. rep., Technische Universität Berlin (2000)
34. Rajapakse, J., Giedd, J., Rapoport, J.: Statistical approach to segmentation of single-channel cerebral MR images. IEEE Transactions on Medical Imaging 16(2) (2001)
35. Reyes, M., Malandain, G., Koulibaly, P.M., Ballester, M.G., Darcourt, J.: Respiratory motion correction in emission tomography image reconstruction. Medical Image Computing and Computer-Assisted Intervention, 369–376 (2005)
36. Rohlfing, T., Maurer, C., O'Dell, W., Zhong, J.: Modeling liver motion and deformation during the respiratory cycle using intensity-based free-form registration of gated MR images. Medical Imaging 2001: Visualization, Image-Guided Procedures, and Display, 337–348 (2001)
37. Rueckert, D., Sonoda, L.I., Hayes, C., Hill, D.L.G., Leach, M.O., Hawkes, D.J.: Nonrigid registration using free-form deformations: application to breast MR images. IEEE Transactions on Medical Imaging 18(8) (1999)
38. Rusko, L., Bekes, G., Fidrich, M.: Automatic segmentation of the liver from multi- and single-phase contrast-enhanced CT images. Medical Image Analysis 13, 871–882 (2009)
39. Sarrut, D., Boldea, V., Miguet, S., Ginestet, C.: Simulation of four-dimensional CT images from deformable registration between inhale and exhale breath-hold CT scans. Medical Physics 33(3) (2006)
40. Schweikard, A., Glosser, G., Bodduluri, M., Murphy, M., Adler, J.: Robotic motion compensation for respiratory movement during radiosurgery. Journal of Computer-Aided Surgery (2000)
41.
Shirato, H., Shimizu, S., Kitamura, K., Nishioka, T., Kagei, K., Hashimoto, S., Aoyama, H., Kunieda, T., Shinohara, N., Dosaka-Akita, H., Miyasaka, K.: Four-dimensional treatment planning and fluoroscopic real-time tumor tracking radiotherapy for moving tumor. International Journal of Radiation Oncology Biology Physics 48, 435–442 (2000)
42. Vilariño, D., Cabello, D., Balsi, M., Brea, V.: Image segmentation based on active contours using discrete time cellular neural networks. Journal of VLSI Signal Processing Systems 23, 403–414 (1999)
43. Wells, W., Grimson, W., Kikinis, R., Jolesz, F.A.: Adaptive segmentation of MRI data. IEEE Transactions on Medical Imaging 15, 429–442 (1996)
44. Wu, J., Poehlman, S., Noseworthy, M., Kamath, M.: Texture feature based automated seeded region growing in abdominal MRI segmentation. Journal of Biomedical Science and Engineering 2, 1–8 (2009)
45. Yan, H.B., Hu, S.M., Martin, R.: 3D morphing using strain field interpolation. Computer Science and Technology 1 (2007)
46. Zhang, Y., Brady, M., Smith, S.: Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Transactions on Medical Imaging 20(1) (2001)