
Medical and biomedical image analysis

Digital mammography

OUTLINE Breast cancer is still the most common form of cancer among women. Mammography (Fig. 1) is a specific type of imaging that uses a low-dose X-ray system to examine the breast. Digital mammography, often called Full-Field Digital Mammography (FFDM) [MI1], is a mammography system in which the X-ray film is replaced by solid-state detectors that convert X-rays into electrical signals; these detectors are similar to those found in digital cameras. Mammography is currently the best imaging technique for the early detection and diagnosis of breast cancer, because it can show changes in the breast up to two years before a patient or a doctor can feel them. Fig. 1 also shows two kinds of breast lesions: microcalcifications and masses [MI2]. Microcalcifications are very small spots that are relatively bright compared with the surrounding normal tissue. Typically they are between 0.1 mm and 1.0 mm in size, and their presence in breast tissue is of particular clinical significance when they are found in clusters of five or more within a 1 cm² area. Most clusters consist of at least one evident microcalcification together with other, more subtle signals.


Fig. 1: From left to right: Cranio-caudal (CC) and mediolateral-oblique (MLO) views in mammograms; examples of microcalcifications and mass.

CAD (Computer-Aided Detection) systems use a digitally acquired mammogram to assist radiologists in detecting breast cancer: the computer output is presented to the radiologist as a second opinion and can improve detection accuracy. The CAD system highlights suspicious regions on the images, alerting the radiologist to the need for further analysis.
The many techniques developed for the automated detection of microcalcifications can be grouped into three main categories: multiresolution analysis, difference-image techniques and statistical methods. Comparing the different methods, it turns out that some microcalcifications are detected by one method but missed by the others, owing to the different types of microcalcifications. It is thus often hard for a single detection scheme to discover different types of signal with diverse characteristics.

METHOD The method we propose combines the coarse- and fine-level detection methods we developed, in order to get optimal performance [MI3][MI4]. The CAD system we realized performs automatic detection of microcalcifications; Fig. 2 shows the block-based flow chart of our system. Both methods are able to find the most evident signals, i.e. those having high local contrast and a high Signal-to-Noise Ratio (SNR).

Fig. 2: The block-based functional scheme of our CAD system.

Signal extraction. Denoising. After isolating the breast from the image through a histogram-based segmentation, the local contrast over 3×3 windows is computed, together with its standard deviation. This permits building a non-parametric, non-linear noise model of the acquisition system, which is used to perform an iso-precision equalization of the input image so that each gray level, hence each pixel, carries the same noise (Fig. 3).

Fig. 3: Coarse-level signal extraction. The original images are 12-bit grey-level images with a resolution of 0.1 mm/pixel.

In the coarse-level detection, two different operations are performed starting from this image: contrast enhancement and signal suppression. In particular, signal suppression is achieved by decomposing the image through the wavelet transform (the mother wavelet being LAD8) and reconstructing the signal keeping the low frequencies only; in other words, this extracts the structured background from the image. The difference between these two images then ends the preprocessing step. Basically, this represents a background difference, and the same principle (although not the same technique) has been employed in our motion detection algorithms as well. This difference image is now expected to contain signals only in the high grey-level range: the Gaussian distribution of the grey levels corresponding to the background should be biased towards the high levels, thus yielding a positive skewness. For this reason a local thresholding, function of the pixel grey levels and of the skewness, is performed to achieve a binary image of the microcalcifications (Fig. 3).
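A minimal sketch of this coarse-level step, in Python: a plain smoothing filter stands in for the LAD8 wavelet background extraction, and the skewness-dependent threshold rule is an illustrative assumption (the exact thresholding function is given in [MI3][MI4]).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coarse_detect(img, win=40, base_k=3.0):
    """Coarse-level sketch: suppress the structured background, then
    threshold the difference image locally, raising the threshold where
    the local grey-level skewness is strongly positive (assumed rule)."""
    img = img.astype(float)
    background = uniform_filter(img, size=win)  # stand-in for the wavelet low frequencies
    diff = img - background                     # background difference
    mu = uniform_filter(diff, size=win)         # local statistics of the difference image
    var = np.maximum(uniform_filter(diff**2, size=win) - mu**2, 1e-12)
    std = np.sqrt(var)
    third = uniform_filter(diff**3, size=win)   # local third moment
    skew = (third - 3*mu*var - mu**3) / std**3
    k = base_k + np.clip(skew, 0.0, 2.0)        # assumed skewness-dependent threshold
    return diff > mu + k*std

rng = np.random.default_rng(0)
img = rng.normal(100.0, 2.0, (64, 64))
img[30:32, 30:32] += 40.0                       # bright, microcalcification-like spot
mask = coarse_detect(img)
```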
The fine-level detection has been conceived to discover more subtle microcalcifications, by means of a multiresolution analysis based on the wavelet transform (LAD8). To reconstruct the image we use the first three scales, corresponding respectively to details of 100 µm, 200 µm and 400 µm. Although scale 1 retains high-frequency noise, it also retains useful information regarding microcalcifications. Therefore, rather than discarding it, we keep only the coefficients in scale 1 greater than 1.5σ, referring to the coefficient distributions. To extract interesting signals, we perform on the reconstructed image a local thresholding in a 40×40 window, retaining all pixels with grey-level values above 2.0σ. The same thresholding is performed on the image obtained as the difference between a suppressed image (obtained with a 9×9 moving-average filter) and an enhanced image (coming from a convolution with a 3×3 match filter). The output of the fine-level detection is the binary image attained by AND'ing the outputs of these thresholdings.
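The difference-image branch of the fine-level detection can be sketched as follows; the 9×9 moving average, the 40×40 window and the 2.0σ threshold are as in the text, while the 3×3 match-filter coefficients are an assumption (the real kernel is matched to the microcalcification profile).

```python
import numpy as np
from scipy.ndimage import uniform_filter, convolve

def fine_difference_branch(img, k=2.0):
    """Difference-image branch of the fine-level detection (sketch)."""
    img = img.astype(float)
    suppressed = uniform_filter(img, size=9)    # 9x9 moving-average suppression
    match = np.array([[1., 2., 1.],
                      [2., 4., 2.],
                      [1., 2., 1.]])
    match /= match.sum()                        # assumed 3x3 match-filter shape
    enhanced = convolve(img, match)
    diff = enhanced - suppressed
    mu = uniform_filter(diff, size=40)          # local statistics in 40x40 windows
    sd = np.sqrt(np.maximum(uniform_filter(diff**2, size=40) - mu**2, 1e-12))
    return diff > mu + k*sd                     # retain pixels above 2.0 sigma

rng = np.random.default_rng(1)
img = rng.normal(100.0, 1.0, (80, 80))
img[40, 40] += 25.0                             # subtle point-like signal
mask = fine_difference_branch(img)
```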
To summarize the performance of our two signal extraction methods, we can state that the coarse one is able to find even microcalcifications having a low contrast, provided that they retain a high SNR. Conversely, the fine-level detection is suitable for finding even low-SNR signals, provided that they have a high contrast, thanks to the wavelet transform, which can better enhance signals of a size similar to microcalcifications. The outputs of the two methods are then OR'ed, and the resulting binary image is used as a mask to identify the microcalcifications in the original image. More details are reported in [MI3][MI4].

False Positive Reduction (FPR). Most of the signals surviving the thresholding stages are false positives that have to be eliminated. A thorough discussion with radiologists allowed us to capture the method they use to detect microcalcifications and to turn it into digital features that can be computed automatically. Accordingly, we have considered five features (area, average pixel value, edge gradient, degree of linearity, average local gradient) to separate microcalcifications from false signals [MI4][MI5]. Each labelled (true, false) signal is therefore described by a five-dimensional feature vector, which constitutes the input of a Support Vector Machine (SVM) classifier.
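The feature-vector/SVM stage can be sketched with scikit-learn; the feature distributions below are synthetic stand-ins, and only the five feature names, the roughly 12:1 class ratio and the classifier type come from the text.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Synthetic stand-ins for the five features: area, average pixel value,
# edge gradient, degree of linearity, average local gradient.
n_true, n_false = 80, 960                 # roughly the 12:1 false/true ratio
X_true = rng.normal([6.0, 180.0, 30.0, 0.2, 12.0], 2.0, size=(n_true, 5))
X_false = rng.normal([3.0, 150.0, 10.0, 0.6, 4.0], 2.0, size=(n_false, 5))
X = np.vstack([X_true, X_false])
y = np.array([1]*n_true + [0]*n_false)

# 3rd-degree polynomial kernel, the architecture selected for the CAD system
clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3)).fit(X, y)
acc = clf.score(X, y)
```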
To set up the classifier we have used the well-known Nijmegen database, made of 40 12-bit images at 0.1 mm resolution, containing about 9000 signals in 105 classified clusters. This database provides the ground truth with respect to the clusters, but no information regarding the location of each signal; in order to define the true signals, three experienced radiologists have marked the true microcalcifications. Most of the signals (about 8300) are false positives, whereas only 8% are true microcalcifications. The detected signals have been randomly divided into three groups: training, validation and test set. Training and validation are used to choose the best classifier architecture, while the test group allows us to evaluate its performance on unknown cases. Signals are evenly distributed among the groups, and the ratio between false signals and true microcalcifications remains nearly 12.
All the parameters of our detection system have been tuned using a distributed genetic algorithm, which found the best parameter setting for general-purpose use [MI6][MI7].

Clustering. We simply define a cluster as an ensemble of at least three microcalcifications whose minimum pairwise distance is no more than 0.5 mm.
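This cluster definition translates directly into a single-linkage grouping; the sketch below is a straightforward (hypothetical) implementation with a small union-find.

```python
import numpy as np

def find_clusters(points_mm, max_gap=0.5, min_size=3):
    """Cluster = at least min_size signals in which each member lies
    within max_gap mm of some other member (single-linkage grouping)."""
    pts = [np.asarray(p, float) for p in points_mm]
    parent = list(range(len(pts)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            if np.linalg.norm(pts[i] - pts[j]) <= max_gap:
                parent[find(i)] = find(j)       # merge the two groups
    groups = {}
    for i in range(len(pts)):
        groups.setdefault(find(i), []).append(i)
    return [g for g in groups.values() if len(g) >= min_size]

pts = [(0.0, 0.0), (0.4, 0.0), (0.4, 0.3),      # a valid 3-signal cluster
       (10.0, 10.0), (10.3, 10.0)]              # only 2 signals: not a cluster
clusters = find_clusters(pts)
```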

RESULTS Here we show a comparison among the performances of three classifiers used to reduce false signals. We have randomly divided the 9000 detected signals into 3 groups, 9 different times, and the results have been averaged.
ROC (Receiver Operating Characteristic) analysis, a method widely used in the medical community to evaluate the performance of a binary decision-making process, has been employed to estimate the accuracy of the classifier. The ROC curve is a plot of the classifier's TPF (True Positive Fraction) versus its FPF (False Positive Fraction). The area under the ROC curve (named Az) is an accepted way of comparing the performance of different classifiers.
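Az can be computed directly from the classifier scores; the snippet below uses the rank-statistic formulation (Az equals the probability that a true signal outscores a false one), which is equivalent to integrating the ROC curve.

```python
import numpy as np

def roc_az(scores_true, scores_false):
    """Az as a rank statistic: the probability that a randomly chosen
    true signal scores higher than a randomly chosen false one
    (ties counted one half)."""
    t = np.asarray(scores_true, float)[:, None]
    f = np.asarray(scores_false, float)[None, :]
    return float((t > f).mean() + 0.5*(t == f).mean())

az_perfect = roc_az([0.9, 0.8, 0.7], [0.3, 0.2, 0.1])   # full separation
az_chance = roc_az([0.4, 0.6], [0.4, 0.6])              # indistinguishable scores
```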
Different kernels have been assessed for the SVM, and finally we selected a 3rd-degree polynomial kernel as the most suitable architecture. We have also compared the results of the SVM with MLP (Multi-Layer Perceptron) and LDA (Linear Discriminant Analysis) classifiers [MI8].

Fig. 4: Performance of our CAD system after FPR (left); the ROC curves of the three classifiers with full-size (middle) and reduced-size (right) training sets.

Fig. 4 (left) illustrates the overall performance of our CAD system using the SVM, through the FROC (Free Response Operating Characteristic) curve. Our CAD yields a sensitivity of 95% of the true clusters with 0.6 false-positive clusters on the 40 images of the Nijmegen database. The output is computed as the average, over the whole database, of the SVM classifier trained on the 9 training groups. Fig. 4 (middle) shows the ROC curves of the three classifiers, where LDA has the worst performance. Although SVM and MLP show comparable performances, the SVM classifier is far simpler to handle and set up. These results are comparable with the state of the art for this database, the reference one in this field. Finally, we tested the robustness of the classifiers when trained with a reduced training set. Fig. 4 (right) shows the variation of Az as a function of the size of the training set: here the SVM outperforms the MLP classifier, all the more when the training set is small.
Fig. 5 shows the simple Graphical User Interface we developed to prompt the radiologist in case of detection of a cluster of microcalcifications.

Fig. 5: The original image (left), with the highlighted cluster found out by our CAD (middle) and the cluster at half the original size (right).

It is worth remarking that we have patented our CAD system, which has been included in a wider CAD system now used in several hospitals around the world.

3D Positron Emission Tomography (3D PET)

OUTLINE Positron Emission Tomography (PET) is a tomographic method which allows imaging of any body part of a patient to whom a radiopharmaceutical has previously been administered. The radioactive isotopes (tracers) are targeted to specific molecules, organs or tissues by attaching them to other compounds, such as glucose. The image is created by measuring the distribution of the tracer inside the body. In particular, a positron resulting from isotope decay annihilates with an electron in the surrounding body tissue, producing two 511 keV photons (events) travelling in opposite directions. The coincidence detection of these opposite photons allows an image of the density of the annihilation points to be reconstructed, approximating the spatial distribution of the tracer.
PET imaging consists of two steps. First, the data acquisition system detects the pairs of opposite photons deriving from positron annihilations; then, through a reconstruction method, we can obtain the original distribution, hence the related image. Statistical model-based iterative algorithms and Fourier-based algorithms are the two major classes of tomographic reconstruction methods. Iterative algorithms exploit the statistical nature of the acquired projection data and incorporate the physical model into the reconstruction. The BPF (Backprojection Then Filter) algorithm is a Fourier-based method which operates directly on list-mode data, giving a backprojected image. Depending on the PET system used and the target object under study, one method can be preferable to the other. However, recent studies show that the BPF algorithm gives a better spatial resolution over the FOV (Field Of View) than other Fourier-transform-based or iterative methods.
PET reconstruction of small animals is a class of problems characterized by poor counting statistics. Small-animal studies require a very high resolution at a reasonable noise level, but are count-limited because of the dimension of the emitting object (e.g. a small animal's brain) compared with the human equivalent. On the other hand, high-sensitivity PET imaging systems can result in lower radiation doses to the patient and shorter imaging times. To this purpose, several 3D reconstruction methods can be applied.

METHOD The method we conceived performs a fully 3D image reconstruction, where photons with trajectories oblique to the plane of the detectors are also considered, thus significantly increasing the sensitivity of the system. The method has been implemented as a parallel distributed algorithm, to work in real time on a distributed computer made of a Cluster of Workstations. We conceived a BPF method for fully 3D image reconstruction because it works on list-mode data, thus being suitable for on-line operation. More details are given in [MI12].
Commercial PET scanners are composed of a number of adjacent detector rings (Fig. 1, left). Our experimental apparatus is a high-resolution PET scanner designed for imaging small laboratory animals (YAP-PET) (Fig. 1, right) and consists of four detector heads mounted on a rotating gantry, 90° from each other. Each detector consists of a 20×20 array of 2×2×30 mm³ YAP:Ce finger-like crystals coupled to a PSPMT (Position Sensitive Photo-Multiplier Tube). The ring diameter ranges from 10 to 25 cm, with a FOV of 40 mm transaxially by 40 mm axially. Our previous Monte Carlo studies [MI9][MI10] proved that a spatial resolution below 2 mm (FWHM) can be obtained over a FOV of 20×20×20 mm³ for a gantry diameter of 15 cm.

Fig. 1: The scheme of a classical PET scanner (left) and our experimental PET apparatus for small animals (right).

BPF method. We propose an original method that exploits all the LORs (lines of response, the ideal lines along which a pair of photons travels in opposite directions) to achieve reliable statistics in each voxel (a volume element of the body), thus achieving a reliable, fully 3D volume. Essentially, our method consists of filling, for any accepted event (50-415 keV), all the voxels crossed by the corresponding LOR with weighted values w (Fig. 2). At the end of this process the image is filtered using the 3D Colsher filter with the Hamming window. The basic novelty is that the number of intersected voxels changes as the LORs vary, producing a different computational load for each event.

Fig. 2: Left: Backprojection scheme in a 2D section of the image matrix: the voxels intersected (shaded) by the LORs (dotted lines) are filled with weighted values. Right: A pair of opposite detectors with geometrical centre O. Each LOR is subdivided into equally-spaced points; through each of them (e.g. P) a number of LORs could pass, geometrically limited by the maximum number of crystals that could be fired. Here, if an event had happened at point P, the maximum number of LORs that could theoretically have been detected would be 25 (left shaded surface).

Each LOR is subdivided into equally-spaced points, and weights are computed for each of them. We empirically found an optimum value of 100 µm for the distance between the points; the position of the detected event within the detectors' array is derived from its energy levels (50-415 keV).
Let p = n/N be the probability, from a geometrical point of view, that an event is detected, where n is the number of crystals inside the maximum lighted surface (Fig. 2, right, the shaded area on the left) and N is the total number of crystals, here 400. The weight w for that point is then the inverse of the probability p. Hence, a point belonging to the solid angle defined by the dotted lines crossing the centre O has the maximum probability p = 1 and the minimum weight w = 1. In contrast, a point belonging to a LOR joining two crystals placed at the corners of the detectors has the minimum probability p = 1/400 and the maximum weight w = 400. The kernel of the sequential algorithm is briefly outlined here, in pseudo-code:
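The weighting rule reduces to one line:

```python
def lor_weight(n_crystals, n_total=400):
    """Geometrical weight of a point on a LOR: p = n/N is the chance of
    detecting an event at that point, and w = 1/p compensates for it
    (N = 400 for the 20x20 crystal array)."""
    p = n_crystals / n_total
    return 1.0 / p

w_centre = lor_weight(400)   # point facing the whole array: p = 1, w = 1
w_corner = lor_weight(1)     # corner-to-corner LOR: p = 1/400, w = 400
```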

1. for (each accepted event)
2.     for (each crossed voxel)
3.         for (x from 1 to l/r)
4.             V(ix, jx, kx) = V(ix, jx, kx) + w
5. filter (V)

where line 4. represents the backprojection, V is the volume to be reconstructed, l is the length of the LOR and r depends on the resolution of the object (here r = 100 µm).
Parallel algorithm. Since there are no data dependencies among the events, the BPF algorithm is intrinsically parallel: each event can be processed independently, and so can the BPF reconstruction. Once all the data have been processed, filtering takes place.
To accomplish the parallel version of this algorithm we implemented the SPMD (Single Program Multiple Data) model, in which the same code runs on each processor. We follow the master-slave paradigm, in which a separate master program is responsible for assigning data to the processes (slaves) and for collecting the results. In practice, the filter procedure has not been parallelized, because it takes a very small percentage of the total time. For more details see [MI12].
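The kernel and its master-slave decomposition can be sketched in Python; the per-event voxel lists are a toy stand-in for the real LOR traversal, and the chunks are processed serially here, where the real system dispatches them to slave processes.

```python
import numpy as np

N = 16  # side of a toy reconstruction volume

def backproject_chunk(events, n=N):
    """Slave task: accumulate a partial volume from a chunk of events.
    Each event is (voxel_indices, weight); in the real system the voxel
    list comes from walking the LOR in 100 um steps."""
    vol = np.zeros((n, n, n))
    for voxels, w in events:
        for (i, j, k) in voxels:
            vol[i, j, k] += w
    return vol

rng = np.random.default_rng(0)
events = [([tuple(rng.integers(0, N, size=3))], 2.0) for _ in range(1000)]

# The master splits the event list among the slaves...
chunks = [events[i::4] for i in range(4)]
# ...each slave backprojects its chunk independently (no data dependencies;
# here the chunks are processed serially)...
partials = [backproject_chunk(c) for c in chunks]
# ...and the master sums the partial volumes; the Colsher filter would follow.
volume = sum(partials)
```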

RESULTS For our experiments we used a hot resolution phantom filled with a solution of radioactive water containing 18F-labelled fluoro-deoxyglucose (FDG): images of glucose utilization by the brain in FDG studies can be used to indicate differences in glucose metabolism in vivo. This type of phantom is frequently used to assess the quality of the images produced by PET scanners. The phantom used is a cylinder 38 mm in diameter by 40 mm in height, consisting of arrays of cylindrical "line" sources with 1.5, 2.0, 2.5 and 3.0 mm diameter, and centre-to-centre spacing equal to twice the source diameter. Practically speaking, it lies within a volume of 48×48×48 mm³ represented by a 3D matrix M of dimension N³, where N depends on the resolution of the final reconstructed volume. Here we use N = 64 with a resolution of 750 µm. Each head of the tomograph was rotated on a radius of 76 mm by 180°, and over 30×10⁶ events were detected.
In Fig. 4, first row, three slices extracted before filtering are shown: from left to right, at 1/5, 1/2 and 4/5 (8 mm, 20 mm, 32 mm) of the height of the phantom, measured from its upper side (note that this phantom covers almost the whole FOV). In the second row, the corresponding three slices are shown after filtering. In all the slices the 2.5 and 3 mm line-source sections are completely separated. In addition, near the centre of the FOV (Fig. 4, second row) we are able to distinguish six out of the ten 2 mm sections: this confirms the Monte Carlo results [MI9]. We can even recognize activity from the 1.5 mm diameter sources.

Fig. 4: First row: unfiltered transaxial slices: from left to right, taken at 8 mm (1/5 of the height), 20 mm (central slice) and 32 mm (4/5 of the height) from the upper side of the phantom. Second row: same slices after filtering.

Image reconstruction can easily be performed in real time with our algorithm, implementing dynamic load balancing on a small entry-level cluster of 8 Intel PIV 1.7 GHz machines inter-connected by a 100 Mbit Ethernet LAN. The resolution could be improved by decreasing r by at least one order of magnitude, while upgrading the PCs employed.

Combined tomographic and emission methods (CT, SPECT)

OUTLINE In recent years, nuclear imaging techniques such as PET (Positron Emission Tomography) and SPECT (Single-Photon Emission Computed Tomography) have found widespread oncologic use because they can visualize the metabolic activity of a tumour with high sensitivity and specificity. Although conventional mammography is the most widespread technique in clinical practice, its limitations in terms of specificity and sensitivity are well known. To improve the diagnostic information, scintigraphic techniques have been used, in particular scintimammography with an Anger camera. The recent introduction of dedicated imagers with a small field of view (FOV), coupled with moderate breast compression, has allowed the sensitivity to be increased up to 80% for tumour sizes between 0.5 and 1 cm. An alternative technique has also been proposed, with the breast in a pendular prone position and a dedicated ring for its SPECT imaging. It has been suggested that a dedicated CT (Computed Tomography) system could be of great benefit if combined with this SPECT modality. Indeed, by integrating the morphological (CT) image with the functional (SPECT) one, the sensitivity for lesions smaller than 1 cm could be further increased.

METHOD An integrated CT-SPECT system for breast cancer study is being investigated. A dedicated tomographic imager has been assembled which allows us to perform scintimammography and X-ray CT in the same geometrical conditions. The experimental set-up is schematically shown in Fig. 1. The system, whose toroidal ring has a diameter of about 13 cm, has been designed and built to image patients in a pendular prone position (leaning breast). In the prototype, two SPECT detectors are positioned orthogonally to the X-ray source-to-CT-detector axis and have been mounted on the same rotating gantry. The X-ray source produces a quasi-monochromatic beam tuned at 28.5 keV (ΔE/E ≈ 4%), with a measured flux of 1.7×10⁴ photons mm⁻² s⁻¹ mA⁻¹.

Fig. 1: Schematic of the integrated CT-SPECT system.

The CT detector is based on 9 modules of 16 elements each, 1×20 mm² in size, forming an array of 144 elements. The nine circuit boards use the same 10 MHz clock signal. The 20-bit digital signals are fed into a PC via the parallel port. Software in C was specifically written to control the integration time and to provide the trigger for starting data acquisition and retrieval. A lead collimator with a 3 mm aperture is placed at the entrance window of the detector, giving an actual pixel size of 1×3 mm².
The SPECT detector (two identical heads) is based on a scintillator matrix made up of 8×8 match-like crystals, 2.5×2.5×5 mm³ in size. The FOV (Field Of View) in SPECT is 2 cm axially and 2 cm in diameter. The energy resolution is 23% (FWHM) at 140 keV.

CT acquisition. CT profiles were recorded by rotating the system with an angular step of 2° from 0° to 180°. The X-ray tube conditions were 40 kV and 10 mA, and the exposure time was 500 ms for all acquisitions. The following data processing must be applied to the raw profiles to obtain the corrected sinogram:
(1) dark and white field normalization
(2) calculation of -ln(I/I0) to correct the sinogram
(3) centering of sinogram
where I and I0 are, respectively, the transmitted intensity and the intensity in the air outside the object. The raw and the corrected sinograms are shown in Fig. 2, left and right, respectively. More details are given in [MI13].
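Steps (1)-(2) can be sketched as follows; the centering of step (3), a shift by the measured COR offset, is omitted here.

```python
import numpy as np

def correct_profiles(raw, dark, white):
    """Steps (1)-(2): dark/white field normalization, then -ln(I/I0),
    turning transmitted intensities into line integrals of attenuation."""
    I = (raw - dark) / np.maximum(white - dark, 1e-12)  # flat-field normalization
    return -np.log(np.clip(I, 1e-12, None))

# A uniform object transmitting 50% of I0 must give mu*t = ln 2 everywhere.
raw = np.full((5, 10), 600.0)     # measured profiles
dark = np.full((5, 10), 100.0)    # dark field
white = np.full((5, 10), 1100.0)  # white (I0) field
sino = correct_profiles(raw, dark, white)
```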
method method

Fig. 2: CT acquisition of a cylindrical phantom: raw sinogram (left), corrected sinogram (right).

SPECT and CT acquisition. SPECT images were acquired by rotating the system with an angular step of 6° from 0° to 180°. The acquisition time of each view was about 1 min, and the data were stored in list mode. The data correction was performed by using three look-up tables that account for:
(1) the spatial non-linearity of the PSPMT (Position Sensitive Photo-Multiplier Tubes), i.e., the spatial distortions due to PSPMT were corrected via the appropriate algorithm (see Fig. 3)
(2) the uneven response of PSPMT, i.e., the 140 keV peak-shift of each detector element was corrected by applying a normalization procedure
(3) the variation of detection efficiency, i.e., the number of events detected by each element was corrected by using the image of a uniform (flood) radioactive source
As for the CT acquisition, a similar procedure for determining the Center Of Rotation (COR) of the SPECT system was carried out before acquiring the data.

Fig. 3: Correction of spatial distortion in SPECT: raw matrix (left), matrix corrected using the appropriate look-up table (right).

For the reconstruction of tomographic data, it is necessary to precisely determine the position of the rotation axis with respect to the digital data array. To this aim we measured the so-called Center Of Rotation (COR) which was obtained by acquiring a 270° rotational sinogram of a high attenuating point-like object positioned near the physical axis of rotation. The measured offset between the center of the data array and the center of the rotation axis was then used in the reconstruction program.
As for tomographic image reconstruction, the most innovative contribution concerns the SPECT image reconstruction. Unlike the most commonly used methods, where the previously acquired data are back-projected onto the image pixels, the method developed here aims to minimize a cost function, the cost being the deviation between the measured data and the data that would be generated by the "image" itself, also taking attenuation and scattering phenomena into account. This complex minimization problem has been solved by means of the parallel iterative Simulated Annealing (SA) algorithm we conceived and implemented on a parallel SMP (Symmetric Multi-Processor) system [MI14][MI15].
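A toy sketch of this cost-minimization idea, with Simulated Annealing on a tiny linear system standing in for the full projector with attenuation and scattering; the cooling schedule and move size are illustrative assumptions.

```python
import numpy as np

def sa_reconstruct(A, d, n_iter=20000, t0=1.0, seed=0):
    """Toy Simulated Annealing: look for a non-negative image x minimizing
    the deviation ||A x - d||^2 between measured data d and the data the
    candidate image would generate through the system matrix A."""
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    cost = float(np.sum((A @ x - d)**2))
    for it in range(n_iter):
        T = t0 / (1 + it)                        # assumed cooling schedule
        cand = x.copy()
        j = rng.integers(len(x))
        cand[j] = max(0.0, cand[j] + rng.normal(0.0, 0.1))
        c = float(np.sum((A @ cand - d)**2))
        if c < cost or rng.random() < np.exp(-(c - cost) / max(T, 1e-12)):
            x, cost = cand, c                    # accept (always if better)
    return x, cost

# Tiny 2-pixel "image" observed through 3 measurements.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([0.5, 1.0])
d = A @ x_true
x_hat, final_cost = sa_reconstruct(A, d)
```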

Image reconstruction and fusion. The CT sinograms were reconstructed by using the RECLB library for Reconstruction Tomography, with the conjugate gradient reconstruction method. We chose this reconstruction program because it takes into account the fan-beam geometry with a flat detector. With the platform we used, the minimum reconstruction frame rate was always 1 fps (frame per second). As far as the SPECT image reconstruction is concerned, the SA algorithm made use of the attenuation coefficients obtained from the CT image and of the corresponding coefficients at the energy of 140 keV, which we calculated by means of an analytical function.
Finally, the fusion of the CT and SPECT images was performed by weighting the two contributions equally. Four slices of the fused image of a test phantom are shown in Fig. 4. A complete description of the phantom is given in [MI16].
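A minimal sketch of the equal-weight fusion, assuming both images have first been rescaled to a common [0, 1] range (the rescaling rule is an assumption):

```python
import numpy as np

def fuse(ct, spect):
    """Equal-weight CT/SPECT fusion: rescale each image to [0, 1], then
    average with weight 1/2 each."""
    def norm(im):
        im = im.astype(float)
        return (im - im.min()) / max(im.max() - im.min(), 1e-12)
    return 0.5*norm(ct) + 0.5*norm(spect)

ct = np.array([[0.0, 10.0], [20.0, 30.0]])      # morphology
spect = np.array([[5.0, 5.0], [5.0, 45.0]])     # function (a hot spot)
fused = fuse(ct, spect)
```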

Fig. 4: Fusion of CT and SPECT images in diverse slices.

Preliminary results have thus demonstrated the feasibility of the combined CT-SPECT tomograph. However, before testing the system prototype in a clinical environment, various issues have to be addressed. The SPECT FOV must be enlarged so as to reconstruct full-size images, as in the CT modality. Furthermore, the use of more sophisticated fusion algorithms is desirable for better detection and display of lesions. Finally, the diagnostic limits of such a technique have to be investigated by means of more complex tomographic phantoms.

Characterization of the skin cells and wrinkles by using capacitive image analysis

OUTLINE The automated analysis of images and sequences could play a crucial role in most computer vision systems for medical images. The assessment of pathologies or treatments is often still left entirely to human (visual) evaluation, because of the objective difficulty of developing automated (or computer-aided) assessment systems, human health being a critical issue. In the field of dermatology and dermocosmetics, providing an automated and accurate characterization of the skin surface is a basic requirement to enable a quantitative assessment of the effects of environmental exposure, physiological changes or dermocosmetic treatments. In normal subjects, skin ageing is the most visible effect: with ageing, the skin surface changes its appearance from both the physiological and the morphological points of view.
Many approaches have been proposed to characterize the cells of the skin surface, some of them employing highly accurate, but very expensive, devices and methods for off-line analyses. Dermatologists demand systems that routinely achieve the automated assessment of treatments in vivo. On the other hand, some other approaches provide portable systems, but these often offer incomplete and low-quality information.

METHOD We have developed the first approach known in the scientific literature, and in industry, relying on a compact and low-cost capacitive device to achieve quantitative measurements automatically, in vivo and in real time. Originally developed for fingerprint acquisition and recognition, the device can give a detailed map (12×18 mm²) of the skin surface in terms of grey-level values, with a resolution of 50 µm/pixel (Fig. 1). A sample image of the ventral side of the forearm can be compared with one achieved by means of a camera (Fig. 1). This is the body site commonly considered in such studies, because of its limited photoageing due to environmental factors [S17]. Differently from optical images, capacitive images are obtained by contact and are insensitive to light changes. In general, the capacitive device outputs grey-level values that are darker under skin tissue than under skin wrinkles. However, black spots may be produced by skin hydration or pore sweat (Fig. 1, last two images on the right). In addition, the acquisition pressure also affects the darkness of the image: while the last two images of Fig. 1, on the right, show a lack of pressure at the image borders, the first image on the left was acquired with the right pressure. To conclude, the operator-dependent acquisition stage, together with the physiological factors, yields images presenting a high inhomogeneity.


Fig. 1: From left to right: the capacitive device and the capacitive image (31 y.o., male); just for comparison, an image of almost the same skin region achieved using a camera; examples of "bad" images, with black spots produced by skin hydration or pore sweat.

In order to reduce the effects of the sensor noise and of the grey-level deformation due to both pressure differences and skin hydration, we perform a local contrast normalization, after a flat-field correction. Fig. 2, first row, shows, on the right, three normalized samples in which every pixel has the same contrast, referring to a 1 year old (y.o.) baby and to 51 y.o. and 94 y.o. females; the first three images are the original ones. As one can see, looking at the original samples, as a person gets on in years more and more wrinkles appear, and they get deeper and wider.
We realized that the original images look like a DEM (Digital Elevation Map), whereby the value of a pixel represents an elevation rather than a luminance intensity: the darker the pixel, the higher the elevation of the terrain point. In our images, cells represent mountains and wrinkles represent valleys. Therefore, we reasoned that such images could be segmented by using the methods usually employed in geomorphologic applications. In particular, we focused our attention on watershed analysis, which subdivides the image into low-gradient catchment basins (locally homogeneous connected sets of pixels) surrounded by high-gradient watershed lines (connected pixels exhibiting local maxima in gradient magnitude). Wrinkles are watershed lines (first three images of Fig. 2, second row) and cells (last three, coloured, images) are catchment basins [S18]. Since wrinkle pixels are typically absorbed into their respective cells, the method is not suitable for detecting wrinkles.
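The basin/line idea can be sketched with scipy alone; a plain gradient threshold stands in for the full watershed growing (which would flood the basins from the local minima of the gradient image), and the mean-gradient threshold is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude, label

def segment_cells(img, sigma=1.0):
    """Watershed-flavoured sketch: low-gradient connected regions play
    the catchment basins (cells); high-gradient pixels play the
    watershed lines (wrinkles)."""
    g = gaussian_gradient_magnitude(img.astype(float), sigma=sigma)
    basins = g < g.mean()                 # assumed gradient threshold
    labels, n_cells = label(basins)       # connected catchment basins
    return labels, n_cells, ~basins       # ~basins: wrinkle pixels

# Toy "skin": two plateaus (cells) separated by a dark furrow (wrinkle).
img = np.full((40, 40), 200.0)
img[:, 19:21] = 50.0
labels, n_cells, wrinkles = segment_cells(img)
```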


Fig. 2: First row: From left to right: the capacitive images referring to a 1 year old (y.o.) baby, a 51 y.o. and a 94 y.o. females; normalized samples of the same images. Second row: From left to right: the normalized images with superimposed watershed lines; the coloured segmented cells.

Several features have been extracted from the skin topographic structures by using different image processing methods, referring to geometrical, photometric or statistical properties, on the original as well as on the Wavelet-transformed images [S19][S20]. Fig. 3, first row, shows on the left the histogram H(x) of the cell areas (in pixels), and on the right its cumulative histogram HC(x) (51 y.o., female). M is chosen so that the derivative of HC at P=(M, HC(M)) equals 1; the feature related to cells is then f1=HC(M). Fig. 3, second row, shows how a feature related to wrinkles has been extracted. The original image (first) is normalized (second) and the wrinkles are then enhanced using a bank of bidimensional Gaussian filters with 14 different orientations (third). The rightmost image shows the corresponding wrinkle orientation chart [S21]. On the enhanced image we compute the MLC (Mean Local Contrast). This measure is related only to the wrinkle component of each sample, so we expect it to be higher for aged people; it is the second feature discussed here.
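A minimal sketch of the cell feature f1, under our own assumptions about normalization (HC scaled to [0, 1] over a unit abscissa, so that the bin whose numerical slope is closest to 1 plays the role of M):

```python
import numpy as np

def feature_f1(cell_areas, bins=50):
    """f1 = HC(M), where HC is the normalized cumulative histogram of
    cell areas and M is the abscissa where its slope is closest to 1."""
    h, edges = np.histogram(cell_areas, bins=bins)
    hc = np.cumsum(h) / h.sum()              # HC scaled to [0, 1]
    x = (edges[:-1] + edges[1:]) / 2
    x = (x - x[0]) / (x[-1] - x[0])          # unit abscissa
    slope = np.gradient(hc, x)               # numerical derivative HC'
    m = np.argmin(np.abs(slope - 1.0))       # bin where HC'(M) ~ 1
    return hc[m]

# Synthetic cell areas (gamma-distributed, in pixels).
rng = np.random.default_rng(2)
areas = rng.gamma(shape=3.0, scale=40.0, size=500)
f1 = feature_f1(areas)
```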

Fig. 3: First row: the histogram H(x) of the cell areas and the related cumulative histogram HC(x) (the 51 y.o. subject of Fig. 2). Second row: From left to right: original sample (44 y.o., male); the normalized image; the image with the wrinkles enhanced; the corresponding wrinkle orientation chart, from which another feature has been extracted.
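The wrinkle enhancement and the MLC measure described above can be sketched as follows. The kernel shape (elongated, zero-mean Gaussian), its sizes and the local-contrast window are hypothetical choices of ours, not the parameters used in [S21]; only the number of orientations (14) comes from the text.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def oriented_kernel(theta, size=15, sigma_l=4.0, sigma_w=1.0):
    """Elongated, zero-mean Gaussian kernel aligned with angle theta:
    it responds strongly to line-like (wrinkle) structures."""
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    u = xx * np.cos(theta) + yy * np.sin(theta)    # along the line
    v = -xx * np.sin(theta) + yy * np.cos(theta)   # across the line
    k = np.exp(-u**2 / (2 * sigma_l**2) - v**2 / (2 * sigma_w**2))
    return k - k.mean()

def enhance_wrinkles(img, n_orient=14):
    """Max response over a bank of oriented kernels; the winning
    orientation per pixel gives the orientation chart."""
    thetas = np.pi * np.arange(n_orient) / n_orient
    stack = np.stack([convolve(img, oriented_kernel(t)) for t in thetas])
    return stack.max(0), thetas[stack.argmax(0)]

def mean_local_contrast(img, win=9):
    """MLC: the average local standard deviation of the image."""
    m = uniform_filter(img, win)
    sq = uniform_filter(img * img, win)
    return np.sqrt(np.maximum(sq - m * m, 0)).mean()

# One dark vertical "wrinkle" on a flat background.
img = np.full((48, 48), 0.5)
img[:, 24] = 0.0
enhanced, orient = enhance_wrinkles(img)
mlc = mean_local_contrast(enhanced)
```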

In order to investigate whether the capacitive device permits accurate 2D and 3D geometrical measurements of the skin surface, we have developed a method to compare measurements of the same body site obtained with the capacitive device and with an optical profilometer; the latter is considered the reference approach, but it is not suitable for in-vivo studies, since it requires silicone skin replicas [S22]. Fig. 4, top left, shows a 3D profilometric 640×480 image, covering a field of view of 627.6×470.7 µm2, with a resolution (0.981 µm/pixel) 50 times finer than the capacitive one. Accordingly, we employed our mosaicing techniques to build a profilometric mosaic (Fig. 4, top right), thus achieving a region wide enough to enable a comparison. Subsequently, we conceived an accurate image fusion method to overlap the profilometric mosaic onto the corresponding region of the capacitive image (Fig. 4, bottom left). The last step has been to develop a method that exploits B-splines to fit the skin surface and extracts absolute measurements of the wrinkles' width and depth from both images. Fig. 4, bottom right, shows the native profilometric line profile and the reconstructed capacitive one. Control points in the saturated regions that are close to high gradients of the capacitive profile are automatically removed from the spline interpolation [S23].
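The last step (B-spline fitting of a line profile, with automatic removal of saturated control points lying near high gradients) might be sketched like this; the saturation level, gradient threshold and smoothing factor are illustrative values of ours, not those of [S23].

```python
import numpy as np
from scipy.interpolate import splrep, splev

def fit_profile(x, y, sat_level, grad_thr, neigh=5):
    """Fit a smoothing B-spline to a line profile, dropping samples
    that are saturated and sit close to high-gradient regions."""
    grad = np.abs(np.gradient(y, x))
    # Dilate the high-gradient mask over a small neighbourhood.
    near_edge = np.convolve(grad > grad_thr, np.ones(neigh), 'same') > 0
    keep = ~((y >= sat_level) & near_edge)
    tck = splrep(x[keep], y[keep], s=len(x[keep]))   # smoothing spline
    return splev(x, tck)                             # reconstructed profile

# Synthetic capacitive profile: a sinusoid clipped (saturated) at 95.
x = np.linspace(0, 10, 200)
y = np.clip(50 * np.sin(x) + 50, 0, 95)
y = y + np.random.default_rng(3).normal(0, 1, x.size)
fitted = fit_profile(x, y, sat_level=94, grad_thr=30)
```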


Fig. 4: First row: 3D profilometric map (left) and mosaic of profilometric images (right). Second row: fusion of the profilometric mosaic and the capacitive image (left); native profilometric line profile and the reconstructed capacitive one.

Device assessment. Fig. 5, graphics 1 and 2 from left, show the correlation between capacitive and profilometric measurements, referring to wrinkles' depth and width, respectively (both in µm). As regards depth, the capacitive device can measure wrinkles whose depth is below 50 µm; above this value the device saturates, owing to a limit of the present technology, which was conceived to work with fingerprints. A device designed on purpose would overcome this limitation. In fact, R=0.981 in the region before saturation points out a strong correlation and the capability of this technology to measure wrinkles' depth with high accuracy. Conversely, as far as the width is concerned, there is no limit to the width of wrinkles that can be measured accurately (R=0.978): we tested it successfully with wrinkle widths up to 200 µm [S24]. To conclude, the method we set up to validate the capacitive device allows us to state that it is reliable for quantitative analysis of the skin surface.
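The correlation figure reported for depth corresponds to a plain Pearson coefficient computed only in the region before saturation. The synthetic data below are ours; only the 50 µm saturation limit comes from the text.

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two measurement sets."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    num = ((a - a.mean()) * (b - b.mean())).sum()
    den = np.sqrt(((a - a.mean())**2).sum() * ((b - b.mean())**2).sum())
    return num / den

# Synthetic depths: the capacitive readings track the profilometric
# ones but saturate at 50 um and carry measurement noise.
rng = np.random.default_rng(4)
depth_prof = rng.uniform(5, 120, 60)             # profilometric depths, um
depth_cap = np.clip(depth_prof, None, 50) + rng.normal(0, 1, 60)
pre_sat = depth_prof < 50                        # region before saturation
r = pearson_r(depth_prof[pre_sat], depth_cap[pre_sat])
```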


Fig. 5: First row: From left to right: correlation between capacitive and profilometric measurements of wrinkles' depth and width (in µm); results for features f1 and MLC.

Skin surface analysis. After the reliability of planar 2D measurements had been proved, we assessed the reliability of our features by searching for a correlation with skin aging, which in normal subjects corresponds to biological aging and can thus be considered the ground truth. A good correlation between the features and skin aging would accordingly validate them as reliable descriptors of the skin surface. To this purpose, a dataset of N=90 normal subjects has been built up together with a team of dermatologists during common dermatological examinations.
Fig. 5, graphics 3 and 4 from left, report the results attained for f1 (after normalization) and for MLC. The good correlation between the values of the extracted features (here, mainly f1) and skin aging shows that the features we conceived are suitable to measure the skin's health status. Besides, in order to assess the accuracy of our measurements for a given subject, we performed a measurement repeatability test on a set of 10 people, one per decade. The results achieved (e.g. f1 yields a relative error of 0.21%) prove that our system can be used to measure changes in the skin surface for follow-up studies in dermatology and dermocosmetics.
At present, we have started a follow-up study involving 30 subjects to evaluate middle/long-term changes due to hydrating treatments. So far, preliminary experiments aimed at assessing the skin surface modifications due to hydrating creams [S25] as well as to hyperhidrosis have yielded promising results.

It is worth remarking that we have been invited to present these results at the National Congress of the Italian Association of Hospital Dermatologists [S26].

[MI1] R. Campanini, A. Bazzani, A. Bevilacqua, D. Bollini, D. Dongiovanni, E. Iampieri, N. Lanconelli, A. Riccardi, M. Roffilli, R. Tazzoli, A novel approach to mass detection in digital mammography based on Support Vector Machines (SVM), 6th International Workshop on Digital Mammography (IWDM), Bremen, Germany, June 22-25, 2002, pp.399-401
[MI2] A. Albanese, A. Bevilacqua, R. Campanini, E. Iampieri, N. Lanconelli, M. Rossi, D. Romani, V. Salomoni, P. Vignoli, Characterization of an FFDM unit based on a-Se direct conversion detector, 6th International Workshop on Digital Mammography (IWDM), Bremen, Germany, June 22-25, 2002, pp.69-71
[MI3] A. Bazzani, A. Bevilacqua, D. Bollini, R. Brancaccio, R. Campanini, N. Lanconelli, D. Romani, System for automatic detection of clustered microcalcifications in digital mammograms, International Journal of Modern Physics C - Physics and Computers, Vol.11:5 (Jul 2000) 901-912
[MI4] A. Bazzani, A. Bevilacqua, D. Bollini, R. Campanini, N. Lanconelli, A.Riccardi, D. Romani, Automatic detection of clustered microcalcifications using a combined method and an SVM classifier, 5th International Workshop on Digital Mammography (IWDM), Toronto, Canada, June 11-14, 2000, pp.161-167
[MI5] A. Bazzani, A. Bevilacqua, D. Bollini, R. Brancaccio, R. Campanini, N. Lanconelli, A. Riccardi, D. Romani, G. Zamboni, Automatic detection of clustered microcalcifications in digital mammograms using an SVM classifier, 8th European Symposium on Artificial Neural Networks (ESANN 2000), Bruges, Belgium, April 26-28, 2000, pp.195-200
[MI6] A. Bevilacqua, R. Campanini, N. Lanconelli, Optimization of a distributed genetic algorithm on a cluster of workstations for the detection of microcalcifications, International Journal of Modern Physics C - Physics and Computers, Vol.12:1 (Jan 2001) 55-70
[MI7] A. Bevilacqua, R. Campanini, N. Lanconelli, A Distributed Genetic Algorithm for Parameters Optimization to Detect Microcalcifications in Digital Mammograms, Lecture Notes in Computer Science (LNCS), Vol.2037 (2001) 278-287
[MI8] A. Bazzani, A. Bevilacqua, D. Bollini, R. Brancaccio, R. Campanini, N. Lanconelli, A. Riccardi, D. Romani, An SVM classifier to separate false signals from microcalcifications in digital mammograms, Physics in Medicine and Biology, Vol.46:6 (Jun 2001) 1651-1663
[MI9] A. Bevilacqua, D. Bollini, A. Del Guerra, G. Di Domenico, M. Galli, M. Scandola, G. Zavattini, A 3D Monte Carlo simulation of a small animal Positron Emission Tomograph with millimeter spatial resolution, IEEE Nuclear Science Symposium, Albuquerque, New Mexico, November 9-15, 1997, Vol.2, pp.1702-1705
[MI10] A. Bevilacqua, D. Bollini, A. Del Guerra, G. Di Domenico, M. Galli, M. Scandola, G. Zavattini, A 3D Monte Carlo simulation of a small animal Positron Emission Tomograph with millimeter spatial resolution, IEEE Transactions on Nuclear Science, Vol.46:3 (Jun 1999) 697-701
[MI11] A. Bevilacqua, D. Bollini, R. Campanini, M. Galli, N. Lanconelli, A new approach to image reconstruction in Positron Emission Tomography using Artificial Neural Networks (ANN), International Journal of Modern Physics C - Physics and Computers, Vol.9:1 (Feb 1998) 71-85
[MI12] A. Bevilacqua, Evaluation of a Fully 3D BPF Method for Small Animal PET Images on MIMD Architectures, International Journal of Modern Physics C - Physics and Computers, Vol.10:4 (Jun 1999) 723-739
[MI13] A. del Guerra, A. Bevilacqua et al., A dedicated system for breast cancer study with combined SPECT CT modalities, Nuclear Instruments and Methods in Physics Research A, Vol.497 (2003), 129-134
[MI14] A. Bevilacqua, A Problem Independent Parallel Simulated Annealing on a SMP System, International Conference on Parallel and Distributed Computing and Systems, Anaheim, CA, USA, August 21-24, 2001, pp.414-418
[MI15] A. Bevilacqua, A Methodological Approach to Parallel Simulated Annealing on an SMP System, Journal of Parallel and Distributed Computing, Vol.10:2 (Oct 2002), pp.1548-1570
[MI16] M. Gambaccini, A. Bevilacqua et al., Combined CT-SPECT tomography system for breast cancer study, Physica Medica, Vol.XVII:4, Oct-Dec 2001, 249-252
[S17] A. Bevilacqua, A. Gherardi, Age-related Skin Analysis by Capacitance Images, 17th IEEE-IAPR International Conference on Pattern Recognition (ICPR 2004), Cambridge, UK, August 23-26, 2004, Vol.2, pp.703-706
[S18] A. Bevilacqua, A. Gherardi, M. Ferri, Predicting Biological Age from a Skin Surface Capacitive Analysis, International Journal of Modern Physics C - Physics and Computers, Vol.15:9 (2004), 1309-1320
[S19] A. Bevilacqua, A. Gherardi, R. Guerrieri, Studying Skin Ageing through Wavelet-based Analysis of Capacitive Images, IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS 2005), Como, Italy, September 15-16, 2005, pp.360-365
[S20] A. Bevilacqua, A. Gherardi, R. Guerrieri, Evaluation of skin ageing through wrinkle analysis in capacitive images, International Journal of Modern Physics C - Physics and Computers, Vol.17:11 (2006), 1663-1678
[S21] A. Bevilacqua, A. Gherardi, R. Guerrieri, In vivo quantitative evaluation of skin aging by capacitance image analysis, 7th IEEE Workshop on Motion and Video Computing (WACV'05), Breckenridge, CO, USA, January 5-7, 2005, Vol.1, pp.342-347
[S22] A. Bevilacqua, A. Gherardi, R. Guerrieri, Measuring Skin Topographic Structures through Capacitance Images Analysis, IEEE International Conference on Advanced Video and Signal based Surveillance (AVSS 2006), Sydney, NSW, Australia, November 22-24, 2006, pp.53-57
[S23] A. Bevilacqua, A. Gherardi, Characterization of a capacitive imaging system for skin surface analysis, 1st IEEE International Workshops on Image Processing Theory, Tools & Applications (IPTA08), Sousse, Tunisia, November 23-26, 2008, pp.265-271
[S24] A. Gherardi, A. Bevilacqua, A capacitive image analysis system to characterize the skin surface, International Journal of Modern Physics C - Physics and Computers, (2010)
[S25] A. Bevilacqua, A. Gherardi, Measuring the skin surface changes due to hydrating treatments through capacitive images analysis, International Symposium on Nonlinear Theory and its Applications (NOLTA 2006), Bologna, Italy, September 11-14, 2006, pp.831-834
[S26] A. Bevilacqua, G. Carboni, R. Davalli, V. Gutu, P. Bulletti, M. Reggiani, Characterizing the skin surface through image capacitive analysis - invited at XLII ADOI (Italian Association of Hospital Dermatologists) National Congress, Rimini (Italy), October 13-16, 2004, pp.166-167

Copyright © 2008, A.G. - All Rights Reserved