Compact 3D-Camera

Thorsten Bothe*, Wolfgang Osten, Achim Gesierich, Werner Jüptner
Bremer Institut für Angewandte Strahltechnik (BIAS)

ABSTRACT

A new, miniaturized fringe projection system is presented whose size and handling approximate those of common 2D cameras. The system is based on the fringe projection technique: a miniaturized fringe projector and a camera are assembled into a housing of 21 x 20 x 11 cm with a triangulation basis of 10 cm. The advantage of the small triangulation basis is the ability to measure difficult objects with high gradients; normally, a small basis has the disadvantage of reduced sensitivity. We investigated methods to compensate the reduced sensitivity via the setup and enhanced evaluation methods. Special hardware issues are a high-quality, bright light source (and components to handle the high luminous flux), adapted optics to gain a large aperture angle, and a focus scan unit to increase the usable measurement volume. Adaptable synthetic wavelengths and integration times increase the measurement quality and allow robust measurements that are adaptable to the desired speed and accuracy. Algorithms were developed to generate automatic focus positions that completely cover extended measurement volumes. Principles, setup, measurement examples and applications are shown.

Keywords: structured light, fringe projection, 3-D shape measurement, miniaturization, depth scanning, phase shifting, synthetic wavelengths, absolute phase measurement, inverse projection

1. INTRODUCTION

1.1. Properties summary of the 3D camera

Industrial and multimedia applications need cost-effective, compact and flexible 3D profiling instruments. In this paper we show the principle of, and results from, a new miniaturized 3D profiling system for macroscopic scenes. The system uses a compact housing and is usable like a camera, needing only minimal stabilization such as a tripod. In future development states the 3D camera might be useful for semi-professional and private users to generate virtual models. The system is based on the common fringe projection technique; however, camera and projector are assembled with parallel optical axes and coplanar projection and imaging planes. The distance between their axes is comparable to the human interocular distance. The fringe projector uses an LCD, which enables fast and flexible pattern projection. Camera and projector have the same short focal length to obtain a high system aperture. Thus, objects can be measured from a shorter distance than with common systems, and high-gradient objects such as the interior of tubes are measurable with high accuracy. Both the projector grating and the camera chip can be focused simultaneously by a translation stage. The setup allows working with a completely opened aperture, giving a large amount of available light (high SNR) but also a reduced depth of focus. Nevertheless, it is possible to measure with narrow fringes for high accuracy; in this case multiple focal planes are measured, so all object points deliver valid data. A similar technique [1] uses a Ronchi grating for fringe generation and continuously scans the measurement volume with combined phase shifting. For measurement we use synthetic wavelengths, which are of special advantage for defocused fringes (the phase of sinusoidal fringes remains constant also in the defocused state) and which minimize the number of images to be taken.
The developed algorithms are completely adaptable to the user's needs: the operator defines the measurement volume, and a series of 1 to 5 focus distances covering the complete volume is then chosen automatically. The measurement time of 1 to 20 seconds depends on the user-selectable accuracy. Using standard accuracy (resolution of 100 µm height differences at half a meter distance), the device can be used like a camera on a tripod. The 3D camera is built from low-cost components, is robust and nearly handheld, and delivers insights even into difficult technical objects like tubes and small volumes.

* [email protected], [email protected]; phone 49 421 218-5014; fax 49 421 218-5063; www.bias.uni-bremen.de; Bremer Institut für Angewandte Strahltechnik - BIAS, Klagenfurter Straße 2, 28359 Bremen, Germany

48

Interferometry XI: Applications, Wolfgang Osten, Editor, Proceedings of SPIE Vol. 4778 (2002) © 2002 SPIE · 0277-786X/02/$15.00

Downloaded from SPIE Digital Library on 26 Aug 2011 to 130.215.169.248. Terms of Use: http://spiedl.org/terms

1.2. 3D profiling by fringe projection

In common fringe projection systems a projector is used to project straight fringes onto the object under investigation (Fig. 1), and a camera takes images of the fringes on the object. The fringe plane position recorded by camera pixel (i,j) is identified with the corresponding projector plane position; in the case of an LCD projector these are the LCD pixel coordinates (l,m). This process is called absolute phase measurement, and for automatic evaluation a series of fringes is used [2].

Fig. 1: Image formation principle: a) fringe projection; b) evaluated position correspondence by absolute phase measurement

When the object is viewed under a triangulation angle we record deformed fringes that allow evaluating the object's 3D shape. The sensitivity of the system is mainly determined by the triangulation angle and the fringe frequency: the bigger the triangulation angle and the narrower the fringes, the higher the measurement resolution. The disadvantage of big triangulation angles, however, is the shadowing of object parts, which makes measuring high-gradient objects impossible. Using a calibrated camera model, the positions (i,j) and (l,m) define two rays in space that intersect exactly at the scattering position on the object surface. Thus, we can calculate 3D object surface coordinates.

1.3. Synthetic wavelengths for absolute phase measurement

Typical fringe projection setups use high-frequency fringes (e.g. Ronchi gratings) for a basic high-accuracy measurement of an ambiguous, wrapped phase. A set of e.g. grey-coded images then follows to demodulate the phase [3]. In this system, we use a flexible LCD projector for fringe generation that allows the projection of any desired pattern. We use sinusoidal fringes with user-defined frequency and, for reliable demodulation, a series of low-frequency synthetic wavelengths [4] as shown in Fig. 2.

Fig. 2: Series of 12 fringe patterns for phase measurement (three synthetic wavelengths P1, P2 and P3, each with four phase shifts PS1...4).
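As a sketch, a 12-pattern series like the one in Fig. 2 can be generated as follows; the pattern width and the concrete choice of periods are illustrative assumptions, not the system's actual LCD parameters:

```python
import numpy as np

def fringe_series(width, periods, n_shifts=4):
    """Phase-shifted sinusoidal fringe profiles for an LCD projector.

    For each fringe period (in LCD pixels) n_shifts patterns are produced,
    each shifted by 2*pi/n_shifts.  Returns 1-D intensity profiles in [0, 1];
    the projector repeats each profile along the other LCD axis.
    """
    x = np.arange(width)
    patterns = []
    for period in periods:
        for s in range(n_shifts):
            phase = 2 * np.pi * x / period + 2 * np.pi * s / n_shifts
            patterns.append(0.5 + 0.5 * np.cos(phase))
    return patterns

# three synthetic wavelengths P1..P3, four shifts PS1..4 each -> 12 patterns
series = fringe_series(width=640, periods=[8, 64, 640])
```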

To clarify the demodulation process that generates an absolute phase: in Fig. 2 we see a series of three different fringe periods P1, P2, P3. For each frequency the phase is shifted four times (PS1, PS2, PS3, PS4) for automatic evaluation of a wrapped phase (P1, P2). The phase from the biggest fringe period P3 is noisy but unambiguous (not wrapped), as shown in Fig. 3 (P3).
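The four-step evaluation described above can be sketched as follows; the phase-shift increment of π/2 and the synthetic test data are assumptions for illustration:

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Wrapped phase and fringe modulation from four images shifted by pi/2.

    With I_s = A + B*cos(phi + s*pi/2), s = 0..3, the differences
    I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi) cancel the background A.
    """
    phi = np.arctan2(i4 - i2, i1 - i3)          # wrapped to (-pi, pi]
    mod = 0.5 * np.hypot(i4 - i2, i1 - i3)      # modulation B (focus measure)
    return phi, mod

# synthetic test: background 100, modulation 40, some true phase field
true_phi = np.linspace(-3, 3, 50)
imgs = [100 + 40 * np.cos(true_phi + s * np.pi / 2) for s in range(4)]
phi, mod = wrapped_phase(*imgs)
```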

Fig. 3: Demodulation of the wrapped phases from three synthetic wavelengths (fringe periods P1…3) and resultant 3D-model.

The less noisy data of P2 can be unwrapped by comparing with P3. The same is done with P1. Finally, we get a low noise, unwrapped phase P for the evaluation of the 3D coordinates.
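The coarse-to-fine demodulation can be sketched as a minimal 1-D example; the fringe period and noise level used below are illustrative assumptions:

```python
import numpy as np

def unwrap_with_coarse(phi_fine, xi_coarse, period_fine):
    """Demodulate a wrapped fine phase with a coarser, unambiguous estimate.

    phi_fine    wrapped phase in (-pi, pi], measured with fringe period
                period_fine (in LCD pixels)
    xi_coarse   unambiguous but noisy absolute LCD position (pixels),
                e.g. from the coarsest synthetic wavelength P3
    Returns a low-noise absolute LCD position.  Works as long as the error
    of xi_coarse stays below half of period_fine.
    """
    frac = phi_fine / (2 * np.pi)                 # fractional fringe order
    order = np.round(xi_coarse / period_fine - frac)
    return (frac + order) * period_fine

# chain P3 -> P2 -> P1 as in Fig. 3: each step removes the ambiguity of the
# next finer wavelength while keeping that wavelength's lower noise
xi_true = 123.4                                       # true LCD position (px)
phi_p1 = np.angle(np.exp(2j * np.pi * xi_true / 8))   # wrapped phase, P1 = 8
xi = unwrap_with_coarse(phi_p1, xi_true + 2.0, period_fine=8)
```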


A further important feature is available because a flexible LCD is used for projection: the minimum fringe period can be selected freely (down to a few LCD pixels) for phase evaluation. Thus, the system is completely adaptable to the user's needs. For a quick measurement, for example, it is possible to take only four phase-shifted broad fringes for a complete measurement, as shown in Fig. 4.

Fig. 4: Series of only 4 phase-shifted fringe patterns for fast absolute phase measurement with reduced accuracy.

Sec. 2.3 demonstrates how a variable minimum fringe period allows adapting the size of the focused area (measurement volume) to the user's needs. For very high accuracy, narrow fringes are used and the electronic noise is reduced by integrating multiple camera images.

2. 3D-CAMERA SETUP

2.1. Miniaturized combination of camera and projector

For the 3D camera we use the setup concept shown in Fig. 5.

[Fig. 5 labels: recording side with CCD camera and CCD lens; illumination side with UHP lamp, heat rejection, collimation optics & polarizer, LCD, LCD lens and polarizer (analyser); overlapping measurement volume]
b)

Fig. 5: 3D-camera concept: a) parallel assembly of camera and projector delivering common focused planes; CCD and LCD chips are mounted movably, so the common focused area can be adjusted. b) Concept for the miniaturized system (small CCD camera and miniaturized LCD projector).

In Fig. 5 a) the schematic fringe projection setup is shown: projector and camera are positioned with parallel optical axes. Thus, both can be adjusted to have the same coplanar focused region in the measurement volume. A triangulation angle still results from the 10 cm distance between the optical axes; the resolution of the setup decreases for large object distances because the triangulation angle becomes smaller. In Fig. 5 b) the components for the camera and the projector are shown. Both the CCD and the LCD chip are separated from the imaging objectives to allow a variable focus. The CCD camera is a lightweight miniature megapixel camera (PULNIX TM1020-15) with a big 9 x 9 mm sensor. The UHP lamp used for illumination offers a long lifetime (> 1000 h), high stability and a high intensity flux from a very small spot (Fig. 11a) of about 1 mm. The light beam of the lamp has a small solid angle. The product of solid angle and light spot size, the so-called étendue (or geometric extent), is constant in an optical system (provided the optics are of reasonable quality). The small étendue of the system allows generating a very small image of the light source inside the lens stop of the objective. Thus, the aperture of the system is self-limited, which allows projecting fringes with a very big depth of focus, even with a fully opened objective lens stop (compare Sec. 2.3). Also known from multimedia projectors is the possibility to homogenize the illumination beam directly in front of the light source by light tunnels (Fig. 12). The light is focused into a light tunnel and mixed by multiple reflections; imaging the exit surface of the light tunnel gives a very homogeneously illuminated area, at the cost of some étendue. We are currently trying to homogenize the illumination with this and similar homogenizers.


a)

b)

Fig. 12: Light tunnel: a) used with a focusing light source, b) built from 4 mirrors in square geometry

The light beam is collimated as shown in Fig. 5 b and then polarized before illuminating the LCD. In front of the LCD a second polarizer, turned by 90°, functions as analyzer. Because of the high intensity generated by the UHP lamp, common absorbing polarizers are destroyed; especially the first polarizer absorbs about 50% of the light energy and suffers thermal problems. Other common polarizers such as thin-film polarizers (or polarizing beam splitters) fail because they are very sensitive to the incident angle of the light, which cannot be fully collimated, so they produce colored beams.

Fig. 13: Micro wire polarizer sketch.

A solution was found in micro wire-grid polarizers (Fig. 13), which were formerly known for telecommunication wavelengths and are now available for visible light. Light polarized parallel to the micro wires is reflected as from a mirror; the perpendicular polarization passes the wire grid (which is isolating, so no electric current is possible). The elements are very insensitive to the incident angle. A further advantage is that the elements are very thin (0.6 mm) and do not disturb the optical path between LCD and objective.

2.3. Depth of focus: adapted fringes and/or scanning

For fringe projection, the depth of focus limits the size of the measurement volume: narrow fringes get blurred when they are projected onto object points outside of the focused region. There is a simple model to describe the focused area depending on the optical properties of a projection setup (Fig. 14).

Fig. 14: Simple model (circle of confusion) to calculate the focused area; labels: focused distance b, back/front limits bb and bf, circle of confusion u, aperture radius f/2k.

An imaged object point in space is perfectly focused in exactly one image plane, which can be calculated from the well-known thin-lens imaging formula. When the image plane moves out of the focused region the point blurs and is imaged as a circle of confusion. The size u of this circle depends on the focal length f, the aperture (via the f-stop number k), the focused object distance b and the differing, defocused distance b' (e.g. bf and bb). For a given maximum circle of confusion (for fringe projection this is e.g. one fringe period, Fig. 14, right side) the maximum distance bf and the minimum distance bb around the focused distance b can be calculated geometrically [6]:

bb = b / (1 + u·k·(b − f)/f²) ,   bf = b / (1 − u·k·(b − f)/f²)    (3)
For a given position of the LCD plane relative to the objective's principal plane, the focused plane can be calculated, as well as the nearest and farthest distances that are still in focus, as shown in Fig. 15.
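Eq. 3 can be evaluated directly; the numbers below (focal length, f-stop, allowed circle of confusion) are illustrative assumptions, not the 3D camera's actual parameters:

```python
def dof_limits(b, f, k, u):
    """Near and far limits of the focused region around distance b (Eq. 3).

    b: focused object distance, f: focal length, k: f-stop number,
    u: allowed circle of confusion, all in the same units (e.g. mm).
    Beyond the hyperfocal distance the far limit becomes infinite.
    """
    t = u * k * (b - f) / f**2
    b_back = b / (1 + t)                       # nearest still-focused plane
    b_front = b / (1 - t) if t < 1 else float("inf")
    return b_back, b_front

# example: assumed f = 12 mm objective at f/2, u = 0.05 mm, focused at 0.5 m
near, far = dof_limits(b=500.0, f=12.0, k=2.0, u=0.05)
```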

Fig. 15: Automatic calculation of the focus series, from infinity to the near position: focal distances bmax, b1...b5, bmin span the scanned area, each focus position having a front focal plane bf, focal plane b and back focal plane bb; the corresponding LCD/CCD positions are set by the translation stage.

For very narrow fringes only a small part of the measurement volume is in focus. If we still want to measure a larger area we can change the focus position with the translation stage shown in Fig. 7a. To cover a given measurement volume by overlapping, subsequent focused areas we use a series of focus positions as shown in Fig. 15. The series is chosen such that the front focal plane of one step is the back focal plane of the next, and so on. Applying Eq. 3 repeatedly in this way, a series of positions can be calculated, starting from the farthest point bmax and proceeding step by step to planes near the 3D camera (never reaching it, though):

bN = f / (1 − ((bmax − f)/bmax) · (1 − u·k/f)^N / (1 + u·k/f)^(N+1))    (4)
The 3D camera has a large depth of focus by construction (self-limited aperture and short objective focal length). Currently we use fringes of 4 to 6 pixels per period to get a sufficiently correct sinusoidal signal. For this fringe size it is not necessary to scan the measurement volume, because the complete reasonable distance interval is in focus in the current version of the 3D camera. In a future version an LCD with smaller pixels (14 µm instead of 18 µm) might be used, and then scanning can become useful again. There are cases in which focal plane scanning is not wanted, e.g. when a quick measurement is needed or when the effort of calibrating multiple focus planes is to be avoided. In such cases it is always possible to increase the fringe size, and therefore the allowed circle of confusion u, until the complete area of interest is covered by focused fringes (compare Sec. 1.2). To demonstrate a scanning series, a tilted plane covering distances from 15 cm to 140 cm was placed in front of the 3D camera (Fig. 16).
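Under the same assumed optics as above, Eq. 4 together with the stopping criterion from Eq. 3 yields a focus series; this is an illustrative sketch, not the system's calibrated values:

```python
def focus_series(b_max, b_min, f, k, u):
    """Focus positions covering [b_min, b_max] with abutting focused areas.

    Eq. 4: the front focal plane of each step coincides with the back focal
    plane of the previous one.  Iterates from b_max towards the camera and
    stops once the near limit (Eq. 3) drops below b_min; requires b_min > f.
    """
    a = u * k / f
    positions = []
    n = 0
    while True:
        b_n = f / (1 - (b_max - f) / b_max * (1 - a)**n / (1 + a)**(n + 1))
        positions.append(b_n)
        near = b_n / (1 + u * k * (b_n - f) / f**2)   # back limit, Eq. 3
        if near <= b_min:
            return positions
        n += 1

# e.g. cover 15 cm .. 2 m with an assumed f = 12 mm, f/2, u = 0.05 mm
series = focus_series(b_max=2000.0, b_min=150.0, f=12.0, k=2.0, u=0.05)
```

With these assumed parameters the volume is covered by five focus positions; the far limit of the first position lands exactly on b_max by construction.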


a)

b)

Fig. 16: Measurement of a deep measurement volume: tilted plane from 15 cm to 140 cm (and some objects at larger distance)

Two fringe periods were used (2 and 6 LCD pixels per period) and the modulation changes were measured to demonstrate the depth of focus. Fig. 17a shows the projection image for fringes of 6 pixels period: the fringes are focused over the complete measurement volume. To visualize the modulation, the background was removed and a column cut is shown on the right.

a)

b)

Fig. 17: Depth of focus: a) a fringe period of 6 pixels is focused from 12 cm to infinity (photo, background-removed image and column cut); b) for a period of 2 pixels, 5 focused distances cover the range from 12 cm to 2 m (markers at the positions of the focused planes); shown are the calculated modulation and a column cut through the fringe pattern.

Markers were affixed at the main focus positions of a focal plane series for the two-pixel period. Fig. 17b shows, for each of these focus states, the modulation and a column cut of the pattern difference between even and odd fringes (pattern shifted by half a period). As claimed, the focused areas overlap and each point is focused at least once. For the patterns at larger distance the modulation function is degraded because the scattered intensity decreases quickly with distance (1/r²) and by the change from scattering into total reflection. For a scanning measurement the coordinates are measured in each focus position and merged into a complete dataset. It is not possible to merge the measured absolute phase data directly, because the imaged size changes for different LCD distances, and the effort to correct these effects in the measured phase data is comparable to that of the coordinate calculation itself. Thus, it only makes sense to calculate calibrated 3D coordinate datasets and merge them. For faster measurements only one focus position is used and the fringe size is chosen such that all points of interest are in focus.

3. RESULTS

The main advantages of the chosen setup for the 3D camera are the compact design, big opening angle, large quantity of light and large focused area (i.e. measurement volume). The compact housing principally reduces the sensitivity of the system because of the small triangulation angle at larger measurement distances; by optimizing the measurement process and the algorithms used, this disadvantage was compensated. The small triangulation angle turns into an advantage for the measurement of objects that have big gradients and a large extension along the optical axis. Typical triangulation systems have shadowing problems and are not able to focus the complete measurement volume. The 3D camera is able to measure this kind of object in one step, as shown in Fig. 18. In the upper part we see photos of an automobile gear box with an extension of 40 x 40 cm and 45 cm depth and a lot of cavities. Especially because of its depth it is very difficult for standard systems to measure this object from top to bottom, which was possible with the 3D camera. The lower part of Fig. 18 shows the virtual model of a single measurement with color-coded height values.

Fig. 18: Gear box: photos and the virtual model obtained from one measurement only (no merging of different views was necessary). A legend shows the color-coded heights (with kind permission from Volkswagen).

The model shows valid data in deep cavities over the complete depth range. Another view of the gear box shows the gear labyrinth (Fig. 19a). Until now it was impossible to measure the labyrinth by optical means; the measurement had to be performed tactilely. The 3D camera is able to measure down to the ground of the labyrinth cavities and even deeper, to the ground of tubes inside. Fig. 19b shows the color-coded measured heights. The resolution of the measurement (about 0.1 mm) is higher than can be seen in the high-dynamics height data. A view of the virtual model (Fig. 19c) reveals more details and a deep insight into the measured object.

a)

b)

c)

Fig. 19: Gear labyrinth: a) photo, b) colour-coded height, c) virtual model obtained from one measurement only (no merging of different views was necessary). A legend shows the colour-coded heights (with kind permission from Volkswagen).

The compact, transportable setup is well suited for multimedia 3D measurement of complete scenes, as shown in Fig. 20. Fig. 20a contains a photo of the system during a measurement: on the left side the 3D camera is placed on a tripod, projecting fringes into the lab, which contains objects at distances from 0.5 m to 4 m. Fig. 20b shows the virtual model derived from the measurement. A cut-out of the data (Fig. 20c) reveals that there is much higher resolution in the data than is visible over a big depth interval. The measurement time of 8 seconds (faster for smaller scenes with less integration time) is short enough to measure e.g. a resting person. Many applications are possible, e.g. in multimedia and medicine.

a)

b)

c)

Fig. 20: 3D multimedia view: a) photo of the 3D camera during measurement, b) color-coded height, c) cut-out region with a different lookup table showing the resolution existing in dataset b). A legend shows the color-coded heights.

An application also realized for the 3D camera is inverse pattern projection [7]. By a fringe projection measurement, the exact geometric transformation between LCD matrix and CCD image is evaluated. On the basis of this information an "inverse" projection pattern can be generated that recreates any pre-defined target image in the camera plane.

a)

b)

c)

Fig. 21: Inverse projection process: a) projection of common straight fringes, b) inverse pattern for the projector, c) camera image while the inverse pattern is projected (the target pattern for the inversion was straight fringes).

Fig. 21a shows the camera image for projected straight sinusoidal fringes; the fringes are deformed by the shape of the object. We define a target pattern that we wish the camera to see: straight sinusoidal fringes. From this information an inverse pattern (Fig. 21b) is calculated and projected. In Fig. 21c we see the result: now the camera records straight fringes, as defined before. In Fig. 22b some letters were used instead of a fringe pattern. There are several applications for this kind of projection. One simple application is implemented in current multimedia projectors: the keystone correction for projection onto tilted planes. Using our implementation of inverse projection, a correction is possible on any surface, even discontinuous surfaces as in Fig. 21 and Fig. 22b. Using straight fringes for inverse projection (Fig. 21) we perform inverse fringe projection, which allows an advanced 3D defect analysis because changes of the object's surface show up immediately as deformed fringes. In the upper part of Fig. 22a we see inverse fringes projected onto an object without and with two defects (upper dent 300 µm, lower dent 100 µm). By analyzing the frequency modulation in both images we quickly evaluate the location and size of the defects. Fig. 22b shows the possibility to generate arbitrary inverse patterns. One option using the results from Fig. 22a is to map the previously found defects (color-coded) directly onto the object; this augmented reality helps an operator to investigate the defects. Another option is to use the inverse projection the other way round: an object needs to be repositioned, so it is moved until all fringes appear straight to the camera.
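A minimal sketch of the inverse-pattern idea: given the camera-to-LCD correspondence obtained from the absolute phase measurement, the target camera image is scattered back into projector coordinates. The array interface and the nearest-neighbour resampling are assumptions for illustration; a real system interpolates and fills unmapped LCD pixels.

```python
import numpy as np

def inverse_pattern(target_cam, lcd_of_cam, lcd_shape):
    """Build a projector pattern that reproduces target_cam in the camera.

    target_cam  (H, W) image we want the camera to record
    lcd_of_cam  (H, W, 2) integer LCD pixel (l, m) seen by each camera pixel,
                i.e. the correspondence from the absolute phase measurement
    Returns a pattern of shape lcd_shape; LCD pixels not hit by any camera
    ray stay dark (holes would be filled by interpolation in practice).
    """
    pattern = np.zeros(lcd_shape)
    l = lcd_of_cam[..., 0].ravel()
    m = lcd_of_cam[..., 1].ravel()
    pattern[l, m] = target_cam.ravel()
    return pattern

# sanity check with an identity correspondence: pattern equals target image
h, w = 4, 5
ident = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
target = np.arange(h * w, dtype=float).reshape(h, w)
pat = inverse_pattern(target, ident, (h, w))
```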

a)

b)

Fig. 22: Applications for inverse projection a) defect detection b) augmented reality

4. CONCLUSIONS

A 3D camera was demonstrated which is based on the standard fringe projection technique and required some algorithmic and technical improvements to yield a compact system with a variety of possible applications. We are still optimizing the algorithms for phase evaluation to gain robustness, to enable better masking of invalid image parts and to reduce the number of images that must be recorded (increasing speed while maintaining quality). To fully use the potential of a compact system that has all components on board, we intend to make the system mobile by enabling full system control via standard notebook computers. With a FireWire-compatible camera, the necessary connectors (DVI and FireWire) are part of current standard notebooks. Another option could be the integration of a small computer board into the system.

ACKNOWLEDGEMENTS

This research was supported by the German Ministry for Research and Technology (BMBF) within the joint project MicroScan under contract no. 16SV941/5.

REFERENCES

1. K. Körner, U. Droste, R. Windecker, M. Fleischer, H. Tiziani, T. Bothe, W. Osten: "Depth-Scanning Fringe Projection for absolute 3-D profiling", Proc. Fringe '01, Elsevier, Paris, 2001, pp. 394-401.
2. J. Burke, T. Bothe, W. Osten, C. Hess: "Reverse engineering by fringe projection", Proc. SPIE Vol. 4778, 2002 (in this proceedings).
3. F. Chen, G. M. Brown, M. Song: "Overview of three-dimensional shape measurement using optical methods", Opt. Eng. 39 (2000): 10-22.
4. W. Nadeborn, P. Andrä, W. Osten: "A robust procedure for absolute phase measurement", Opt. and Lasers in Eng. 24 (1996): 245-260.
5. H. Naumann, G. Schröder: "Bauelemente der Optik", Carl Hanser Verlag, Munich, 1987, pp. 304-307.
6. G. Franke: "Photographische Optik", Akademische Verlagsgesellschaft, Frankfurt, 1964.
7. W. Li, T. Bothe, W. Osten: "Object Adapted Pattern Projection", to be published in OLE.
