In modern dentistry two very common types of imaging are CT-scan imaging and panoramic imaging (also known as panorex or OPG). CT-scan images of the teeth are acquired for many purposes such as dental implants, jaw reconstruction and surgery. Even though CT-scan images have many benefits and hold a great deal of information, panoramic images of the teeth contain information which is not readily available to the naked eye in CT-scan images. Acquiring these images separately imposes an extra cost and X-ray dose on the patient. Finding a way to construct panoramic images from CT-scan images, where applicable, will benefit the patient and the doctor alike in terms of expenses, X-ray dosage and diagnosis.
Panoramic projection of teeth in CT-scan images can also be used for teeth
segmentation (Keyhaninejad et al., 2006; Hosntalab
et al., 2008). Another benefit of extracting panoramic dental images
from CT-scan images is the fact that different attenuation functions can be
used to create the final panoramic image. In ordinary panoramic radiography this is not possible, since the attenuation of the X-ray beam depends only on the path it takes through the body and the substances it passes through. Using this technique, however, we can create images based, for example, on the average density of the substances the beam passes through, or even on the maximum intensity of all the substances along the beam's path. An extensive search of the global literature has shown that no fully automatic technique to perform this task using CT-scan images has so far been proposed. In this study a completely automatic algorithm to extract panoramic images from multi-slice CT-scan images of the teeth is presented.
MATERIALS AND METHODS
This study was performed at the Image Processing Laboratory, School of Electrical and Computer Engineering, Faculty of Engineering, University of Tehran. It was performed from 15 September 2008 to 25 November 2008.
The method proposed is fully automatic and no human intervention is needed.
It is separated into three main parts:

||Mandibular region isolation
||Mandibular curve creation
||Panoramic image extraction

|| Block diagram of proposed method
The purpose of the first part is to isolate the region which corresponds to the mandible. Once this region is isolated it is used in the second part to create the mandibular curve. The mandibular curve, simply put, is a curve which follows the shape of the mandible. This curve is a guideline which will be used to create the panoramic image. In the third part the region within an arbitrary distance from the mandibular curve is processed to create the final panoramic image.
A block diagram of the proposed method is shown in Fig. 1.
Mandibular region isolation: The mandible must be separated so that it can be used to create the primary curve needed in the present algorithm. To achieve this task, first the slices which contain the mandible must be separated from the slices which contain the maxilla. To do this a maximum intensity projection of the dataset is obtained parallel to the Y axis. This is shown in Fig. 2A.
Using this image the approximate midline between the 2 jaws is found. To find the midline we pass this image through a threshold filter to remove all non-bone tissues (Fig. 2B).
As can clearly be seen from Fig. 2B, the midline between the mandible and the maxilla is the horizontal line that, if started from the left edge of the image, has the greatest length before it reaches a bony area.
(A) Result of MIP in y direction, (B) result of threshold
filter and (C) sample lines to show how the slice separating the mandible
and maxilla is found. The thinner line is created by present algorithm which
is used to separate the slices
To find this line the algorithm starts from the topmost pixel in the first column of the image (the first column is the leftmost column), moves towards the right side of the image and counts the number of pixels until it reaches bone. This value is recorded and the procedure is repeated for every pixel in the first column, i.e., for every row. The slice corresponding to the row with the maximum pixel count is selected as the midline between the 2 jaws. The thinner of the lines drawn in Fig. 2C shows the midline.
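The row-scan described above can be sketched in a few lines. The bone threshold value and the (slice, y, x) axis ordering of the volume are illustrative assumptions, not values stated in the paper:

```python
import numpy as np

def find_jaw_midline(volume, bone_threshold=1200.0):
    """Locate the axial slice separating the mandible from the maxilla.

    volume is assumed to be indexed (slice, y, x); bone_threshold is an
    illustrative cutoff, not a value from the paper.
    """
    # Maximum intensity projection parallel to the Y axis: every slice
    # collapses to one row of the projection image.
    mip = volume.max(axis=1)                 # shape (n_slices, n_cols)
    bone = mip >= bone_threshold             # threshold filter: keep bone

    # For each row, count pixels from the left edge until bone is reached;
    # the row with the longest bone-free run marks the midline.
    best_row, best_run = 0, -1
    for row in range(bone.shape[0]):
        hits = np.flatnonzero(bone[row])
        run = int(hits[0]) if hits.size else bone.shape[1]
        if run > best_run:
            best_row, best_run = row, run
    return best_row
```

On an open-bite dataset the gap between the jaws yields a long bone-free run from the left edge, so the maximising row falls between the two jaws.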
|| MIP of slices belonging to mandible
||MIP image after being passed through threshold filter
As can be seen from the image, this midline will usually assign the top parts of the molar teeth to the slices belonging to the upper jaw. This is not important because they have little or no effect on the proposed algorithm.
The reason we do not use the slices from the upper jaw is that we want to find an approximate curve with which to create the panoramic images. This curve can easily be found using the mandible slices; using the maxilla slices would have created distortions in our image, making it much harder to find the curve.
|| Result of dilation
Once this curve is found it will be used to create the panoramic image using all the slices from both jaws (mandible and maxilla).
Next, the Maximum Intensity Projection (MIP) of the slices which are below the midline is obtained. The mandibular curve can be clearly seen in Fig. 3.
The maximum intensity projection image is then passed through a threshold filter to remove non-bony tissue and the softer bones. The result is shown in Fig. 4.
The region which corresponds to the mandible is clearly visible as a coarse
curved structure with the teeth lying in it. All we need is a seed point located
in this region. By growing this seed point the mandibular region will be completely
selected and we will be able to completely isolate it from the rest of the image.
Since the threshold procedure may break the mandible up into smaller parts, we slightly dilate this image (Fig.
Now a single seed point must be chosen in the mandibular region. By observing the image we can see that the mandibular region is the closest bony part to the middle of the top edge of the image. With this in mind we start from the middle pixel in the first row of the image and move down until we reach a pixel that is not black (i.e., it is bone). This pixel is chosen as the seed point. Since the mandibular region is the closest bone to our starting point, we can be certain that the seed point lies in the mandibular region.
|| Isolated mandibular region
The co-ordinates of the seed point are then passed to a connected image
region filter (a filter which isolates a region connected to a seed point with
the condition that the connected pixels selected are between 2 thresholds) (Fig.
After isolating the mandible it will be used to find an initial curve. Before this task is performed we dilate and then erode the image (a morphological closing). We have two reasons for doing this. First, this slightly smooths the isolated mandibular region. Second, in some datasets there are gaps and empty spaces within the mandibular region, created by the threshold filter. To fill these gaps the image is dilated; it is then eroded to bring its size back to normal.
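The seed-point search, connected-region filter and dilate/erode smoothing can be sketched as follows. The threshold values are illustrative assumptions, and SciPy's morphology routines stand in for the filters described in the text:

```python
import numpy as np
from scipy import ndimage

def isolate_mandible(mip_image, low=1200.0, high=4000.0):
    """Isolate the mandibular region from the MIP of the mandible slices.

    low/high are illustrative threshold values; mip_image is indexed
    (row, col) with row 0 at the top edge.
    """
    # Threshold filter: keep pixels between the two thresholds, then
    # dilate slightly so parts broken by thresholding reconnect.
    mask = (mip_image >= low) & (mip_image <= high)
    mask = ndimage.binary_dilation(mask)

    # Seed point: walk down from the middle of the top edge until bone.
    col = mask.shape[1] // 2
    row = int(np.argmax(mask[:, col]))   # first non-black pixel in column

    # Connected-region filter: keep only the component containing the seed.
    labels, _ = ndimage.label(mask)
    region = labels == labels[row, col]

    # Dilate then erode (morphological closing) to smooth the region and
    # fill gaps left by the threshold filter.
    return ndimage.binary_closing(region, iterations=2)
```

The closing at the end is exactly the dilate-then-erode step described above: gaps inside the region are filled while its overall size is brought back to normal.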
Mandibular curve creation: Now it is time to create the initial curve using the isolated mandibular region. Starting from the first column of pixels in the image, we scan from the topmost pixel down to the last pixel in the column. The Y co-ordinate of the topmost pixel (Yt) and the bottommost pixel (Yb) of the mandibular region in each column is recorded. In each column the pixel whose Y co-ordinate is the average of Yb and Yt of that column is selected as a point on the curve. The result of this curve overlaid on the maximum intensity projection image is shown in Fig. 7.
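As a minimal sketch, the per-column midpoint rule translates directly to code (the function name is illustrative):

```python
import numpy as np

def initial_curve(region):
    """For each column, return the average of the Y co-ordinates of the
    topmost (Yt) and bottommost (Yb) pixels of the mandibular region,
    or None where the column contains no region pixels."""
    curve = []
    for col in range(region.shape[1]):
        rows = np.flatnonzero(region[:, col])
        if rows.size == 0:
            curve.append(None)           # column outside the region
        else:
            yt, yb = int(rows[0]), int(rows[-1])
            curve.append((yt + yb) / 2.0)
    return curve
```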
It is clear that this curve will not pass through many of the teeth. To make sure the curve passes through the teeth another approach is used. In this approach the Y co-ordinates of all the pixels between Yt and Yb are added together, but with different weights. Two weights are chosen: the first weight is one (W1 = 1) and the second weight has a value much higher than one (W2 >> 1).
|| Initial mandibular curve overlaid on MIP image
||Smoothed and improved mandibular curve overlaid on MIP image
To bring the curve as close as possible to the teeth, the Y co-ordinates of pixels which are part of the teeth are multiplied by W2 and the Y co-ordinates of pixels which are not part of the teeth are multiplied by W1. The final Y co-ordinate representing the curve in that column is the average of the weighted Y co-ordinates. This procedure efficiently moves the curve towards the teeth. It must be noted that a pixel is considered to be part of the teeth if its intensity is higher than a specified threshold. The enamel of the teeth is the hardest substance in the body, so in the MIP image the pixels belonging to the teeth are the brightest. Using this information on multiple datasets a global threshold is easily found. Before we select our control points from the points on the curve we smooth the curve by passing it through an averaging filter. This reduces the probability of error (Fig. 8).
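The weighted averaging for a single column might look like this. The enamel threshold and the weight values are illustrative assumptions (the text only requires W2 >> 1):

```python
import numpy as np

def weighted_curve_point(mip_column, region_column,
                         tooth_threshold=2500.0, w1=1.0, w2=20.0):
    """Weighted-average Y co-ordinate for one image column.

    Pixels between the topmost (Yt) and bottommost (Yb) region pixels are
    averaged; pixels brighter than tooth_threshold (enamel) get the much
    larger weight w2, pulling the curve towards the teeth."""
    rows = np.flatnonzero(region_column)
    if rows.size == 0:
        return None                       # column outside the region
    yt, yb = int(rows[0]), int(rows[-1])
    ys = np.arange(yt, yb + 1)
    weights = np.where(mip_column[yt:yb + 1] > tooth_threshold, w2, w1)
    return float(np.sum(ys * weights) / np.sum(weights))
```

With w2 much larger than w1, a single bright enamel pixel dominates the average and the curve point moves towards it.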
Now, we will create the final curve. To create the final curve, splines are used.
||Final curve created using splines. Control points are drawn
The reasons we use splines are: (A) the smoothed curve is not continuous and we don't have the co-ordinates for many parts of it; by fitting these points to a spline we obtain the co-ordinates for every point and (B) in order to find the panoramic image we have to find the lines which are perpendicular to the curve. To reduce the error in the panoramic image, the slopes of these perpendicular lines must be found with minimal error. Such slopes cannot be found unless we know the exact formula of the curve that they are perpendicular to. By using the mentioned splines we can find the exact value of the slope of each perpendicular line.
To create a spline we need some control points. The control points are used
to find the parameters of the spline. From the points on the smoothed curve
nine control points are automatically chosen. The first control point is at
the beginning of the curve and the last one at the end of it. The rest are located
at equal spaces with respect to the X axis between these. We have used cubic
splines to perform this task. The formula for each segment of the spline between
two adjacent control points is as follows:
S(x) = Yi + Bi·W + Ci·W² + Di·W³, where W = x − Xi
In which i is the spline segment number, Yi and Xi are the co-ordinates of the first control point in spline segment i and Bi, Ci and Di are the co-efficients for spline segment i. x is the abscissa at which the spline is to be evaluated and S(x) is the value of the spline at abscissa x. The parameters for each segment are found from the control points of that segment using the above formula and its derivatives. With all the parameters available the final curve is drawn using the splines. This is shown in Fig. 9. The control points are shown in black.
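As a sketch, SciPy's `CubicSpline` can stand in for the spline fit. The control-point values below are synthetic, and the derivative evaluation provides the exact slope M needed later for the perpendicular lines:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fit_dental_curve(curve_ys):
    """Fit a cubic spline through nine control points chosen from the
    smoothed curve: the first at its start, the last at its end and the
    rest equally spaced along the X axis."""
    xs = np.linspace(0, len(curve_ys) - 1, 9).round().astype(int)
    return CubicSpline(xs, [curve_ys[x] for x in xs])

# Synthetic smoothed curve standing in for the real one.
ys = [30 + 10 * np.sin(x / 25.0) for x in range(81)]
spline = fit_dental_curve(ys)

# The spline is evaluable at every abscissa; its first derivative gives
# the exact slope M of the curve at that point.
slope_at_40 = float(spline(40.0, 1))
```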
Panoramic image extraction: Once we have found the mandibular curve we will use it to create the final panoramic image. To achieve this goal the next step is to find the lines perpendicular to the final curve at each pixel that the curve passes through. The pixels that each perpendicular line passes through are used to find the final value of the pixel on the curve that the line is perpendicular to. These final pixel values are used to create the panoramic image.
The formula of the line perpendicular to a curve at the point (x0, y0) is:

y - y0 = -(1/M)(x - x0)

where M is the slope of the curve at the point (x0, y0). Using this formula all the perpendicular lines are found. In Fig. 10A these lines can be seen overlaid on the image; in Fig. 10B only one in every five lines has been drawn to visualize them more clearly.
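The perpendicular-line formula translates directly to code (a minimal sketch; the vertical case M = 0 is treated separately since the formula divides by M):

```python
def perpendicular_y(x, x0, y0, m):
    """Y co-ordinate on the line through (x0, y0) perpendicular to a curve
    whose slope there is m: y - y0 = -(1/m) * (x - x0).

    m must be non-zero; when m == 0 the perpendicular is the vertical
    line x = x0 and cannot be written as y(x)."""
    if m == 0:
        raise ValueError("perpendicular to a horizontal tangent is vertical")
    return y0 - (x - x0) / m
```

The product of the curve's slope and the perpendicular's slope is -1, as required.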
The perpendicular lines are used on all slices to create the panoramic image.
The panoramic image will have a height equal to the number of slices in the
dataset and a width equal to the number of pixels that the curve passes through.
To calculate the value of the pixel that each of the perpendicular lines creates,
the attenuation formula for ordinary X-ray panoramic images has been used. This
formula is as follows:

I = I0·exp(-μe)

where I is the final intensity of the beam, I0 is the original intensity of the beam, μ is the linear attenuation co-efficient and e is the thickness of the material. For composite materials containing many substances the above formula changes to:

I = I0·exp(-Σi μi·ei)

where μi is the attenuation co-efficient of tissue element i and ei is the thickness of that element that the beam has passed through. The pixel intensity of each pixel in the image is proportional to its attenuation co-efficient (μJ = α·PJ). If PJ is the pixel intensity of the Jth pixel in a perpendicular line, PMAX is the maximum intensity value that a pixel can have in a CT-scan image and P is the final intensity value of a pixel on the panoramic image, the above formula can be re-written as follows:

P = PMAX·(1 - exp(-α·ΣJ=1..n PJ·eJ))
||(A) The region which the perpendicular lines cover and (B)
one in every five perpendicular lines drawn for better visualization
For simplicity, ei has been put equal to one and n is the total number of pixels that a perpendicular line passes through. The reason we subtract the result of the exponential from one is that the higher the final intensity of a beam, the darker the X-ray film becomes; i.e., higher intensity beams give lower intensity pixels and vice versa. When changing the formula from beam intensity to pixel value we must bear this in mind. The final pixel values of the panoramic image are calculated using the above formula.
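The three pixel functions used for the final image can be sketched as follows. The proportionality constant alpha and the PMAX value are assumptions for illustration (the text only states that μ is proportional to pixel intensity):

```python
import math

def attenuation_pixel(line_pixels, p_max=4095.0, alpha=1e-4):
    """X-ray attenuation emulation with e_i = 1 and mu_J = alpha * P_J.
    The exponential is subtracted from one because a higher-intensity
    beam darkens the film, i.e. maps to a lower pixel value."""
    return p_max * (1.0 - math.exp(-alpha * sum(line_pixels)))

def average_pixel(line_pixels):
    """Average imaging: mean of the pixels the virtual beam crosses."""
    return sum(line_pixels) / len(line_pixels)

def maximum_pixel(line_pixels):
    """MIP variant: brightest pixel along the virtual beam."""
    return max(line_pixels)
```

Each function maps the pixels along one perpendicular line to a single panoramic pixel; the attenuation variant is monotone in the summed intensity and bounded by p_max.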
To show the different applications this technique can have, three different types of images are presented in the results.
||Final results (A) X-ray attenuation formula emulation, (B)
averaging and (C) maximum intensity of pixels
Only one of these image types can be acquired using normal panoramic X-ray imaging. They are as follows:
||Normal X-ray imaging emulation: in which the X-ray attenuation formula is used to calculate the final attenuation of a virtual X-ray beam (Fig. 11A)
||Average imaging: in which the final pixel intensity is the average of all the pixel intensities the beam passes through (Fig. 11B)
||Maximum intensity projection image: in which the final pixel intensity is the maximum value among all the pixels that the virtual beam has passed through (Fig. 11C)
As we can see from Fig. 11, the images created by averaging and by the normal X-ray attenuation formula are nearly the same, but the image created using the maximum pixel value is very different. Using this image the dentist can clearly see where the crowns of the teeth are separated from the roots.
In this part the proposed method will be compared with earlier studies which pursued similar goals. The methods used in these studies and the proposed method are compared in Tables 1-3 to show why the proposed algorithm is superior. It should be noted that in the earlier studies only parts of the tasks done in the new method were performed, and only those relevant parts will be compared with the proposed method. As stated before, according to a search of the global literature there is currently no method which performs this task completely from beginning to end in a fully automatic manner. In the global literature five studies were found, each of which tried to reach some of the goals stated here. The studies are as follows:
||Keyhaninejad et al. (2006): Tooth segmentation of dental study models using range images. It must be noted that in this study the images were 3D images of the jaw acquired by laser scanners, not CT-scans
||Hosntalab et al. (2008): Segmentation of teeth in CT volumetric dataset by panoramic projection
and variational level set
||Keyhani et al. (2007): Automated segmentation of teeth in multi-slice CT images
||Tohnak et al. (2006): Synthesizing
panoramic radiographs by unwrapping dental CT data
||Gao and Chae (2008): Automatic
tooth region separation for dental CT images
Three different parts of this study which will be compared with these works are as following:
||Separation of slices corresponding to mandible and maxilla
from each other (Table 1)
||Finding a curve which will be used for the creation of the
panoramic image (Table 2)
||Finding perpendicular lines and creating the final panoramic
image (Table 3)
In Table 1 the numbers in the first column refer to the above five studies, respectively. In the second column the method used is given. In the third column the weaknesses of that method are shown. In the fourth column the present method is summarized, along with why it has advantages over the previous methods.
We also proposed a new technique for finding the final panoramic image. In this technique we suggested using functions different from the X-ray attenuation function for finding the final panoramic image.
|| A comparison between methods used to separate the maxilla
|| A comparison between methods used to find a dental curve
|| A comparison between methods used to find perpendicular lines
and creating final panoramic projection
||Results of algorithm on closed bite database, (A) X-ray attenuation,
(B) averaging and (C) maximum intensity of pixels
Using the average pixel value and also the maximum pixel value was proposed and implemented. These images cannot be obtained by panoramic imaging devices.
Sometimes CT-scan images are acquired closed-bite. In these images the midline between the mandible and maxilla cannot be found using the proposed algorithm. To show that this technique works equally well on such datasets, the slice separating the maxilla and mandible of a closed-bite dataset was selected by hand. Then the algorithm was applied to the dataset. The results show that the algorithm works equally well on these kinds of datasets (Fig. 12).
We have proposed a fully automatic method for the extraction of panoramic dental
images from volumetric CT-scan datasets of the head. New techniques were proposed
for the automatic separation of the slices corresponding to the mandible and
maxilla and also for finding the dental curve. These methods were both based on maximum intensity projections of the teeth with respect to different axes. The
dental arch was then used to create panoramic images. Implementing the algorithm
on multiple datasets showed its success and robustness. Using functions other
than normal X-ray attenuation co-efficients was also proposed and implemented.
In future studies we plan to find a technique to automatically separate
the mandible and maxilla in datasets which are not open-bite. We also plan to
find an optimum attenuation function (not necessarily based on X-ray attenuation)
for the best visualization of the teeth in the panoramic image. Another field
which can be explored is creating a system to extract any kind of dental images
from CT-scan images. These images can then be used by other systems for different
purposes. For example they can be used as input for human identification systems
based on dental X-ray images (Anil et al., 2004;
Hong and Jain, 2005; Nassar et
al., 2008; Nomir and Abdel-Mottaleb, 2007, 2008).
They can also be used to evaluate 2D tooth segmentation methods (Said
et al., 2006).