OBJECTIVE The goal of this industrial research is to develop a system that exploits image analysis to automatically characterize the photometric and geometric properties of vehicle headlamp beams projected into an optical chamber.
This research has been carried out together with
SIMPESFAIP SPA (CORGHI Group).
SUMMARY The orientation and the luminous and geometrical beam properties of vehicle headlamps are strictly regulated by the European Commission for Transportation. To test the headlamps, the test system is first aligned (usually manually) to the vehicle; a human operator then has the final say even on the beam-related measures, which are derived by looking at the reference points on the rear panel (Fig. 1, right) of the Optical Projection System (OPS) (Fig. 1, left and middle). The outcome of the project is an industrial prototype that performs both alignment and measurements automatically, where the operator's eyes are replaced with CCD cameras and real-time image and video analysis algorithms [HP1].
Fig. 1: From left to right: the side- and the front-view of the Optical Projection System (OPS); the rear panel of the OPS, with reference points.
In the automotive field, vision-based technology (often exploiting 3D sensors) has become popular since its introduction to detect obstacles, reduce braking time, or assist parking, to cite a few examples. Moreover, the automatic and accurate testing of automotive equipment is increasingly driving industrial research and stimulating new proposals even in regulation requirements, in order to improve safety standards.
Our research work [HP2][HP3] represents the first approach based on automatic video analysis to characterize headlamp beam profiles in an industrial prototype suitable for routine use in garages during periodic car tests. In particular, our automatic real-time system exploits image analysis in 3D to perform accurate alignment, and in 2D to measure the geometric and photometric regulation parameters of both driving- and passing-beam headlamps.
The system is made of two subunits, the alignment unit and the beam characterization unit, which act sequentially. First, an algorithm based on a stereo camera pair recovers the 3D alignment parameters while the vehicle approaches the OPS and stops at about 1 m. These parameters are then passed to the control engine that aligns the OPS. A camera sensor looks toward the panel at a prefixed distance and inclination, so as not to interfere with the incident beam light (Fig. 1, left).
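The depth-from-disparity step at the core of stereo alignment can be sketched as follows. This is a minimal illustration, not the prototype's algorithm: the focal length, baseline, disparities, and plate width below are hypothetical values chosen only to show how a yaw misalignment falls out of two triangulated points.

```python
import numpy as np

def triangulate_depth(disparity_px, focal_px, baseline_m):
    """Depth from a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def yaw_from_two_points(p_left, p_right):
    """Yaw of a vehicle feature (e.g., the license plate edge) relative to
    the rig, from two triangulated points given as (lateral x, depth z)."""
    dx = p_right[0] - p_left[0]
    dz = p_right[1] - p_left[1]
    return np.degrees(np.arctan2(dz, dx))

# Hypothetical rig: ~1600 px focal length, 30 cm baseline.
f, B = 1600.0, 0.30
# Two plate corners observed with disparities of 480 and 470 px.
zl = triangulate_depth(480.0, f, B)   # 1.0 m
zr = triangulate_depth(470.0, f, B)   # ~1.021 m
# Corners 0.52 m apart laterally: the depth difference reveals the yaw.
yaw = yaw_from_two_points((0.0, zl), (0.52, zr))
print(round(zl, 3), round(yaw, 2))  # → 1.0 2.34
```

In the real unit the correspondences would come from feature matching on the rectified pair (as in Fig. 2), but the triangulation geometry is the same.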
Fig. 2: Left rectified images of: the license plate moving along a known trajectory (left) and the vehicle moving toward the stereo rig, with the "sand track" of the vehicle's trajectory visible on the floor (middle); the absolute angular error yielded by our alignment procedure (right).
Beam characterization unit. After the vehicle has been properly aligned, our self-adaptive algorithm computes reliable parameters describing the luminous profile of the beam projected onto a panel and acquired by a CCD camera. Excluding the instrumentation assessment, the characterization algorithm is made of several parts. Two of them are camera dependent and have to be performed once and for all, after the camera has been chosen and installed on the prototype.
Instrumentation assessment. In order to validate the characterization method, together with the industrial partner we have built a Numerical Control Unit (hereinafter, NCU) on which headlamp projectors can be mounted and oriented according to three degrees of freedom (roll, pitch, and yaw). These movements are measured electronically with a resolution r=0.06°.
Fig. 2: A schematic representation of the NCU (left) and an image of the real prototype with a headlamp (right).
Since the aim of this work is to test systems and algorithms that must comply with strict regulations, before performing any measurement or algorithm assessment we thoroughly investigated the NCU accuracy [HP1]. An experimental procedure based on pattern recognition and image analysis methods has been devised to quantify the accuracy of the ground-truth measurements provided by the NCU. The algorithm collects and processes the angular variations returned by the NCU in response to a fixed angular displacement, whose magnitude is measured through pattern matching techniques using a pair of black-filled circular patterns. The yaw angle of the NCU is varied so that the camera moves toward the second pattern. The pattern recognition algorithm detects when the pattern centre is found at the same distance from the image centre as recorded in the first configuration, with a tolerance of Δd=0.5 pix, and the second reference position is set. Statistics over the collected results show a standard deviation of about σ=0.031°. Therefore, the accuracy of the measures provided by the NCU proves to be comparable with the resolution of the instrument (r ≈ 2σ).
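The radial-position check with the Δd=0.5 pix tolerance can be sketched on a synthetic frame. This is a minimal illustration only: the actual procedure in [HP1] performs pattern matching on real images of the circular targets, and the image size and disc position below are made up.

```python
import numpy as np

def pattern_centre(img, thresh=128):
    """Centroid of the dark (black-filled) circular pattern in a gray image."""
    ys, xs = np.nonzero(img < thresh)
    return xs.mean(), ys.mean()

def same_radial_position(img, ref_dist, tol=0.5):
    """True when the detected centre lies at the recorded distance from the
    image centre, within the tolerance (0.5 pix in the text)."""
    cx, cy = pattern_centre(img)
    h, w = img.shape
    d = np.hypot(cx - (w - 1) / 2, cy - (h - 1) / 2)
    return abs(d - ref_dist) <= tol

# Synthetic 200x200 frame with a black-filled disc of radius 10 at (130, 100).
img = np.full((200, 200), 255, np.uint8)
yy, xx = np.mgrid[:200, :200]
img[(xx - 130) ** 2 + (yy - 100) ** 2 <= 100] = 0
print(same_radial_position(img, ref_dist=30.5))  # → True
```

When the condition becomes true, the second reference position is latched and the angular displacement reported by the NCU is compared against the known one.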
Analysis of the optical device. The difficulty in achieving radiometric measurements by analyzing the gray-level values of image pixels lies in finding the relationship that binds the scene radiance to the image irradiance, i.e., the "power of light" recorded by the vision sensor. This relationship is known as the Response Function (RF) of the camera, and it has to be recovered. Fig. 3, left, shows the RF of the industrial B/W camera employed at this stage, recovered through our method (more details are given in [HP5]).
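The role of the RF can be shown with a toy example. Assuming, purely for illustration, a power-law (gamma) response — the real RF recovered in [HP5] need not have this form — one can fit the exponent from calibration-chart samples and then invert it to map gray levels back to relative irradiance:

```python
import numpy as np

def fit_gamma_rf(irradiance, gray, g_max=255.0):
    """Fit a power-law response g = g_max * E**gamma to calibration-chart
    samples (relative irradiance E in (0, 1], measured gray levels g)."""
    E = np.asarray(irradiance, float)
    g = np.asarray(gray, float) / g_max
    # Least squares in log-log space: log g = gamma * log E.
    return np.sum(np.log(g) * np.log(E)) / np.sum(np.log(E) ** 2)

def linearize(gray, gamma, g_max=255.0):
    """Invert the RF: recover relative irradiance from a gray level."""
    return (gray / g_max) ** (1.0 / gamma)

# Hypothetical chart patches: known reflectances and the gray levels that an
# ideal gamma = 0.5 sensor would record for them.
E = np.array([0.04, 0.09, 0.25, 0.49, 1.0])
g = 255.0 * E ** 0.5
gamma = fit_gamma_rf(E, g)
print(round(gamma, 3), round(linearize(127.5, gamma), 3))  # → 0.5 0.25
```

Once the RF is known, every pixel can be converted into a quantity proportional to irradiance, which is what the later photometric measures operate on.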
System calibration. Once the camera is fixed onto the OPS, it is necessary to compensate for the perspective effects caused by the inclination of the camera's optical axis with respect to the rear panel's plane, as can be seen in Fig. 1, left. Fig. 3 shows the calibration pattern before (middle) and after (right) correction [HP6].
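The perspective compensation amounts to estimating a plane-to-plane homography between the imaged pattern and the fronto-parallel panel. A minimal Direct Linear Transform sketch follows, with made-up corner coordinates; the actual calibration in [HP6] may proceed differently.

```python
import numpy as np

def homography(src, dst):
    """Direct Linear Transform: 3x3 homography mapping src points to dst."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)       # null vector = stacked homography entries
    return H / H[2, 2]

def apply_h(H, pt):
    """Map an image point through H (homogeneous division)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[0] / p[2], p[1] / p[2]

# Hypothetical corners of the calibration pattern as imaged by the tilted
# camera, mapped to the rectified panel rectangle (in panel units).
imaged = [(110, 95), (520, 80), (540, 410), (90, 430)]
panel = [(0, 0), (400, 0), (400, 300), (0, 300)]
H = homography(imaged, panel)
u, v = apply_h(H, imaged[0])
print(abs(u) < 1e-6, abs(v) < 1e-6)  # → True True
```

Warping every pixel through H produces the corrected view of Fig. 3, in which panel distances can be measured without perspective distortion.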
Fig. 3: From left to right: the recovered camera's RF together with the samples directly extracted using a calibration chart (left); the calibration pattern as seen by the CCD before (middle) and after (right) our correction.
The remaining two parts of the algorithm are camera independent: whatever the chosen camera is, they simply exploit the previous results to cope with two basic issues, auto-exposure and profile segmentation as performed by human eyes.
Auto-exposure. The light of vehicle headlamp beams has an extremely wide dynamic range, whereas the CCD sensors on the market that are economically compatible with the commercial cost of the final diagnostic equipment have a limited dynamic range. To face this challenging problem, we have conceived an original algorithm capable of extracting all the useful information by adjusting the radiometric resolution of the CCD and preventing it from entering saturation [HP7]. To this purpose, our algorithm uses all the image pixels of the real-time sequence to find the optimal exposure time, ensuring that no part of the image is saturated, even for such highly contrasted scenes. This permits the acquisition system to work with all possible light sources.
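The idea of searching for the longest exposure that still avoids saturation can be sketched with a simulated linear sensor. The bisection below is an illustration under that assumption, not the algorithm of [HP7], which works on the live image sequence.

```python
import numpy as np

SAT = 255  # saturation level of the simulated 8-bit sensor

def capture(exposure, scene):
    """Simulated linear sensor: pixel = clip(scene_radiance * exposure)."""
    return np.clip(scene * exposure, 0, SAT).astype(np.uint16)

def auto_exposure(scene, lo=1e-4, hi=1.0, iters=40):
    """Bisect the exposure time so the brightest pixel sits just below
    saturation; lo is always non-saturating, hi always saturating."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if capture(mid, scene).max() >= SAT:
            hi = mid          # saturated: shorten exposure
        else:
            lo = mid          # headroom left: lengthen exposure
    return lo

# Highly contrasted synthetic scene: hot spot at radiance 5000, floor at 2.
scene = np.array([2.0, 40.0, 900.0, 5000.0])
t = auto_exposure(scene)
img = capture(t, scene)
print(img.max() < SAT, img.max() > 0.9 * SAT)  # → True True
```

The returned exposure keeps the hot spot just under the clipping point, so the full radiometric content of the scene remains recoverable through the RF.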
Profile segmentation. We now want to mimic the response of the human eye even in a highly contrasted and untextured scene such as the one generated by beams projected onto a white panel. Therefore, instead of processing a synthetic image generated by tone-mapping operators, we exploit the knowledge of the camera RF within a locally adaptive segmentation algorithm, based on the gradient properties of visual perception and performed on the acquired non-saturated image. The automatic exposure algorithm only ensures that the radiometric content is preserved in the image; nevertheless, the difference between what human eyes perceive when looking at the panel directly and when watching an image of the panel can be relevant. We have therefore conceived an accurate and automatic eye-like segmentation algorithm able to detect the line corresponding to the light-dark border of the projected beam as perceived by a human being, rather than as it appears in the image captured by the CCD [HP8]. The algorithm is based on a method devised to find suitable local thresholding values that fit spatial luminous variations and automatically adjust to different light intensities. We have taken into account that the non-linear response of the human visual system depends on the relationship between local variations and the surrounding luminance, rather than on the absolute luminance.
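The key point — thresholding on a Weber-like relative contrast against the local surround rather than on absolute gray levels — can be sketched with a toy per-column border detector. This is not the algorithm of [HP8]; the window size, contrast fraction, and synthetic image are illustrative assumptions.

```python
import numpy as np

def cutoff_profile(img, weber=0.25, win=5):
    """Per-column light-dark border: first row (top-down) where the relative
    drop against the local lit-region level exceeds the Weber fraction."""
    h, w = img.shape
    border = np.zeros(w, int)
    for c in range(w):
        col = img[:, c].astype(float)
        for r in range(win, h):
            local_bright = col[r - win:r].mean()   # surround just above r
            if local_bright > 0 and (local_bright - col[r]) / local_bright > weber:
                border[c] = r
                break
    return border

# Synthetic beam: lit region whose brightness varies strongly from column to
# column; the dark side is proportional to the local level, so no single
# absolute threshold would separate all three columns.
img = np.zeros((40, 3))
levels = [200.0, 60.0, 20.0]
rows = [10, 20, 30]
for c, (lv, r) in enumerate(zip(levels, rows)):
    img[:r, c] = lv
    img[r:, c] = 0.2 * lv
print(cutoff_profile(img).tolist())  # → [10, 20, 30]
```

The border is recovered correctly in all three columns even though their absolute intensities differ by an order of magnitude, which is the behaviour a local, Weber-like criterion buys over a global threshold.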
Fig. 4: Level sets and profiles (left); a detail (right).
Fig. 4, left, shows the level sets of a passing-beam headlamp together with the profiles computed by our algorithm and those detected by human operators. While these are consistent with each other, they do not match any contour defined by the level sets (Fig. 4, right), because the human perception of what is being imaged is quite different.
Extraction of regulation points. Points representing geometrical references according to the current European regulations can now be identified by computing the first and second derivatives of this profile. In particular, for passing-beam headlamps the "elbow" point, which corresponds to a strong change of the profile slope, can be extracted from the maximum of the second derivative of the cut-off line. In Fig. 5, from top to bottom, the first derivative signal and its smoothed version (here obtained by running-mean filtering) are shown, together with the second derivative signal and its smoothed version. In the same figure, the final cut-off profile is represented by two line segments, obtained by linear regression (Least Squares Method, LSM) on the points of the profile on the left and on the right side of the detected elbow. From these segments we can compute another important geometric parameter required by the regulations: the "deviation" angle between the two line segments. Finally, the algorithm also provides the maximum peak of illumination (marked by the cross in Fig. 5).
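The derivative-based elbow extraction and the two-segment LSM fit can be sketched on a synthetic cut-off profile. Running-mean smoothing follows the text; the window sizes and the 15° test slope are arbitrary choices for this illustration.

```python
import numpy as np

def analyse_cutoff(profile):
    """Elbow from the maximum of the smoothed second derivative, then a
    least-squares line on each side and the deviation angle between them."""
    y = np.asarray(profile, float)
    x = np.arange(len(y))
    d1 = np.convolve(np.gradient(y), np.ones(5) / 5, mode="same")   # smoothed 1st deriv
    d2 = np.convolve(np.gradient(d1), np.ones(5) / 5, mode="same")  # smoothed 2nd deriv
    elbow = int(np.argmax(d2))
    mL, _ = np.polyfit(x[:elbow + 1], y[:elbow + 1], 1)   # LSM, left segment
    mR, _ = np.polyfit(x[elbow:], y[elbow:], 1)           # LSM, right segment
    deviation = np.degrees(abs(np.arctan(mR) - np.arctan(mL)))
    return elbow, deviation

# Synthetic cut-off line: flat up to the elbow at x = 50, then rising at 15°.
x = np.arange(100)
prof = np.where(x < 50, 0.0, (x - 50) * np.tan(np.radians(15.0)))
elbow, dev = analyse_cutoff(prof)
print(elbow, round(dev, 1))  # → 50 15.0
```

On real profiles the smoothing windows must be tuned to the noise level, but the pipeline — smoothed derivatives, argmax of the second derivative, side-wise LSM fits, angle between the fitted segments — is the one the text describes.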
Fig. 5: Profile derivative analysis for a passing headlamp beam. The extracted points of the light profile (thin red curve) and the segmented profile (blue line segments). From top to bottom: first derivative signal (1) and its smoothed version (2), second derivative signal (3) and its smoothed version (4).
RESULTS In Fig. 6, left, the raw image of a luminous profile projected by a passing-beam headlamp is shown. Two of the most representative (i.e., most different) perceived profiles are superimposed in Fig. 6, middle (dotted lines), together with the profile extracted by our algorithm (continuous green line). As can be seen, the trend of the profile is "correctly" followed even in the last (right) part of Fig. 6, left, where the SNR of the displayed image is dramatically low and where a common level-set method would fail (continuous blue line).
Fig. 6: The image of a luminous profile of a halogen passing headlamp (left); two significant perceived profiles (dotted lines) with the superimposed profiles extracted using our method (continuous line) and by level-set processing (middle); maximum distance and standard deviations (both in pixels) of 14 profiles, referring to the two most distant profiles seen by the human operators (right).
In Fig. 6, right, the average distances and the standard deviations (in pixels) are shown for 14 different profiles, generated by halogen, lenticular and xenon passing headlamps, using the two most representative human operators: that is, for each test the two most distant profiles are taken for comparison. The average distance is about 6.8 pixels with a standard deviation of about 2.9 pixels. Since the vertical resolution of the camera we used is about 0.16 mm/pixel, the average distance and standard deviation amount to about 1.1 mm and 0.47 mm, respectively. We can therefore conclude that this is an excellent result, since the accuracy of our method is comparable with the inter-operator standard deviation (about 0.3 mm).
Fig. 7 presents the results attained for yaw and pitch perturbations referred to the elbow of the beam profile of a halogen passing headlamp equipped with a lenticular lens. The hatched boxes show that within the European regulations range [−1.5°, +1.5°] the accuracy fulfils the requirements for both pitch and yaw. In terms of precision, the standard deviation shown in Fig. 7 is very low; even in the worst case (the yaw angle) it stays below 0.02°. More experiments on different kinds of headlamps are reported in [ICIAR2009].
Fig. 7: Alignment measurements (precision and accuracy) for halogen passing beam headlamp equipped with lenticular lens.
Copyright © 2008, A.G. - All Rights Reserved