There are many methods an industrial professional can use to capture 3D machine vision data. As digital transformation becomes a major part of many manufacturers’ operational focus, integrating the right vision technology can make or break an automation system.
Capturing 3D Data with Geometric Measurement Techniques
3D machine vision techniques can be categorized into two fields based on what the vision components use to calculate depth:
- The geometric relationship between the target and the measurement device.
- The examination of changes in the properties of light.
This is the first video in a two-part series on “3D Machine Vision Scanning Techniques”. Part 1 features techniques that use the geometric relationships between the target scene and the elements of the 3D measurement device. Click here for Part 2, which covers 3D scanning techniques that measure changing attributes of light.
Stereo vision, active stereo vision, laser triangulation, coded light, and structured light are some of the most commonly used 3D vision techniques in the industry. Depending on the scene and the target objects being viewed, automation professionals should deploy different methods to optimize the result.
Machine vision has evolved to become an integral part of any manufacturer’s quality control process. The speed of change in consumers’ buying behavior has increased exponentially, thanks to improved e-commerce capabilities. 3D vision data has the advantage of maintaining system flexibility and scalability, which traditional vision systems could not achieve. In the next installment, Hermary’s Josh Harrington will cover techniques that were developed based on manipulating the properties of light.
Machine vision works best when working alongside human operators. If you would like us to cover any specific subjects or have any questions or feedback, leave us a comment below or contact us.
Hi, welcome back to Machine Vision for Industry Professionals, an educational video series for engineers to learn about machine vision and how to work with it.
In this video, we will be exploring the geometric methods used to capture 3D machine vision data. By the end of this video, you should have a basic understanding of the different triangulation-based methods and techniques used to capture 3D information in industrial environments.
My name is Josh Harrington, and I am a Product Applications Engineer at Hermary, a leading machine vision hardware manufacturer. I’m here to answer common questions regarding machine vision and share my experience in the field.
Methods of Capturing 3D Data
3D machine vision devices capture 3D information most often relating to the physical structure of what is being viewed. There are two primary categories that most 3D machine vision techniques will fall into.
The first category includes all methods that rely on the geometric relationships between the target scene and the elements of the 3D measurement device. By using known and established geometries within the design of the 3D measurement device, triangulation can be used to identify unique positions in space.
The second category encompasses all methods that take advantage of properties of light to determine the 3D nature of a scene. These light characteristics include speed, wavelength, modulation, and interference. The next video in this series will cover in detail different light-based techniques.
To get started, we will look at the most relatable method, because it is the same way we as humans extract 3D information from the world: stereo vision.
Stereo vision requires two (or more) cameras with overlapping fields of view. As an object moves within the view of the two cameras, the object is visible in different locations within each camera.
With the distance between the two cameras, commonly referred to as the baseline, being fixed, any combination of the positions of an object within each camera will correspond to a unique location in space.
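This fixed-baseline relationship can be sketched as a minimal depth-from-disparity calculation. The focal length and baseline values below are assumed purely for illustration, not taken from any particular camera rig:

```python
# Hypothetical stereo rig parameters (illustrative values only):
FOCAL_LENGTH_PX = 800.0  # camera focal length, in pixels
BASELINE_M = 0.12        # distance between the two cameras, in metres

def depth_from_disparity(x_left: float, x_right: float) -> float:
    """Depth of a point from its horizontal pixel position in each camera.

    With a fixed baseline B and focal length f, depth Z = f * B / d,
    where d (the disparity) is the difference in the feature's
    horizontal position between the two images.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity

# A feature seen at pixel 420 in the left image and 400 in the right
# has a disparity of 20 px, placing it 4.8 m from the rig.
print(depth_from_disparity(420.0, 400.0))
```

Note how depth resolution degrades with distance: the same one-pixel disparity error corresponds to a much larger depth error for far objects than for near ones, which is one reason stereo resolution is scene-dependent.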
Actually finding and identifying a feature within a scene requires the images from each camera to have enough contrast to uniquely identify many different image features. This can be difficult if, for example, you want to view a white object in front of a white background. To overcome this, a technique called active stereo vision has been developed: a light projector projects a random pattern, giving the viewed scene “texture” that is visible in both cameras.
Stereo vision is widely used in applications such as self-driving cars, robotic navigation, and laboratory environments as it is easily relatable and accessible. It is also capable of combining 2D and 3D information. However, due to the high computation requirements, the scene-dependent resolution, and often complex calibration required for many industrial automation tasks, other techniques exist that more easily and deterministically provide the desired 3D information.
One of the most common and widespread techniques to geometrically capture 3D information in industrial environments is laser triangulation. An object is illuminated with a laser — typically either a pencil beam for a single range measurement or a fan beam for a profile measurement — and then viewed by an off-axis camera. Where the light reflected off the object is found within the camera will correspond to the depth of the object from the laser projector. Typically, the devices that make these measurements are factory calibrated by the manufacturer to be able to make measurements accurate to fractions of a millimeter.
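The triangulation geometry behind this can be sketched as follows. This is a minimal model, not any manufacturer's actual calibration: the baseline, focal length, and camera tilt values are hypothetical, and real devices ship with far more elaborate factory calibrations:

```python
import math

# Hypothetical, pre-calibrated sensor geometry (illustrative values only):
BASELINE_M = 0.20                     # laser-to-camera separation
FOCAL_LENGTH_PX = 1000.0              # camera focal length, in pixels
CAMERA_TILT_RAD = math.radians(30.0)  # camera axis angle toward the beam

def depth_from_spot(x_px: float) -> float:
    """Depth along the laser beam from the spot's horizontal pixel offset.

    The pixel offset x_px from the image centre gives the viewing angle
    relative to the camera axis; adding the fixed camera tilt yields the
    angle beta between the camera-to-spot ray and the beam direction,
    and the depth along the beam follows from Z = B / tan(beta).
    """
    beta = CAMERA_TILT_RAD + math.atan2(x_px, FOCAL_LENGTH_PX)
    return BASELINE_M / math.tan(beta)

# The spot appears at the image centre when the object sits at the
# depth where the camera axis crosses the beam.
print(depth_from_spot(0.0))
```

Because the geometry is fixed at manufacture, the mapping from pixel position to depth can be computed (or tabulated) once at the factory, which is what makes sub-millimetre accuracy practical without user calibration.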
Laser triangulation is used across many industries, including lumber production, electronics, food and beverage, automotive and more. Some of the primary drawbacks or considerations when using this technique are that it can have trouble measuring surfaces that are specularly reflective, and measurements often require some relative motion between the measurement device and the object. However, due to its robustness and simplicity of integration, the laser triangulation technique will remain a primary method of capturing 3D data in industrial environments.
Coded Light Projection
The next method we will discuss is coded light projection. Coded light projection has a similar setup to a two-camera stereo arrangement, but one of the cameras is replaced with a coded light projector.
What is meant by coded light? Coded light is light broken up into a pattern that is used to illuminate a scene. The pattern is designed so that any position within the pattern can be uniquely determined by the surrounding pattern elements. This could be a 2D pattern to capture 3D area measurements of a complete scene or a 1D pattern to capture a profile of an object similar to laser triangulation.
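One classic way to build a 1D pattern with this "locally unique" property (a common design choice in the literature, not necessarily what any particular scanner uses) is a De Bruijn sequence, in which every fixed-length window of stripe values appears exactly once:

```python
def de_bruijn(k: int, n: int) -> list[int]:
    """De Bruijn sequence B(k, n) over symbols 0..k-1.

    Every length-n window appears exactly once, so the content of a
    window uniquely identifies its position in the pattern -- exactly
    the property a coded-light stripe pattern needs. This is the
    standard recursive (Lyndon-word) construction.
    """
    sequence: list[int] = []
    a = [0] * k * n

    def db(t: int, p: int) -> None:
        if t > n:
            if n % p == 0:
                sequence.extend(a[1 : p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return sequence

# Assign each stripe one of 3 colours; any run of 3 neighbouring
# stripes then identifies its position uniquely in the 27-stripe pattern.
pattern = de_bruijn(3, 3)
windows = [tuple(pattern[i : i + 3]) for i in range(len(pattern) - 2)]
assert len(windows) == len(set(windows))  # every window is unique
```

A decoder viewing the projected stripes can therefore recover the projector position of any stripe from its immediate neighbours alone, which is what makes single-shot measurement possible.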
An off-axis camera is used to view the coded light as projected onto a scene. With the geometry between the camera and the projector fixed, any combination of pattern position and camera location will correspond to a unique position in space. As with laser triangulation, devices that include this technology are typically factory calibrated, removing the need for a complex calibration task on the user’s part.
Coded light projection has become a very common method for capturing 3D data in consumer electronics, but it has also been widely used in industrial applications such as log and board scanning for over two decades. Some drawbacks to the coded light technique are that the algorithms used to recover the pattern can be computationally intensive, especially as the code becomes more complex, and that gaps can exist in the data if the code is not recovered correctly.
The final geometric method that we will discuss is multi-shot structured light with sequential projections of encoding patterns. Similar to coded light, this technique uses a projector and an off-axis camera, but rather than making a measurement using one image, a series of images is captured using changing fringe patterns. There are many potential fringe patterns to use, but for our purposes here we will discuss the two most common: binary patterns and sinusoidal patterns.
When a series of images is captured of binary patterns of increasing frequency, each position in the camera accumulates a binary code whose length matches the number of images used. The patterns are designed so that any unique binary code at a given camera position will correspond uniquely to a location in space.
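The decoding step at a single pixel can be sketched as below. This uses plain binary coding for clarity; in practice Gray codes are often preferred so that neighbouring stripes differ in only one bit, limiting errors at stripe boundaries:

```python
def decode_binary_code(bits: list[int]) -> int:
    """Coarse stripe index at one camera pixel.

    bits[i] is 1 if the pixel was lit in pattern image i, with the
    coarsest (lowest-frequency) pattern first. With N pattern images,
    the scene is divided into 2**N uniquely coded stripes.
    """
    index = 0
    for b in bits:
        index = (index << 1) | b
    return index

# A pixel observed lit, dark, lit, lit across four patterns lies in
# stripe 0b1011 = 11 of the 16 possible stripes.
print(decode_binary_code([1, 0, 1, 1]))
```

Each additional pattern image doubles the number of distinguishable stripes, so depth resolution grows quickly with the number of exposures.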
A series of phase-shifted sinusoidal patterns can then be projected to determine the phase of a position relative to the frequency of the sinusoidal pattern. In conjunction with the binary encoded coarse position, the phase position can provide a much higher resolution to the final measurement.
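The standard N-step phase-shifting calculation can be sketched as follows; the function name and the synthetic example are illustrative, but the underlying formula is the usual one, which recovers the phase regardless of the unknown background brightness and fringe contrast:

```python
import math

def phase_from_shifts(intensities: list[float]) -> float:
    """Wrapped phase at one pixel from N equally phase-shifted images.

    Assumes intensities[i] = A + B*cos(phi + 2*pi*i/N) for i = 0..N-1.
    The N-step algorithm (valid for N >= 3) recovers phi modulo 2*pi,
    independently of the offset A and modulation B.
    """
    n = len(intensities)
    s = sum(v * math.sin(2 * math.pi * i / n) for i, v in enumerate(intensities))
    c = sum(v * math.cos(2 * math.pi * i / n) for i, v in enumerate(intensities))
    return math.atan2(-s, c)

# Synthetic 4-step example with a true phase of 1.0 rad:
phi = 1.0
images = [2.0 + 0.5 * math.cos(phi + 2 * math.pi * i / 4) for i in range(4)]
print(phase_from_shifts(images))  # recovers 1.0
```

The recovered phase is only known modulo one fringe period, which is precisely why the coarse binary code from the previous step is needed: it resolves which fringe period the pixel lies in, while the phase refines the position within that period.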
These 3D measurement techniques have proven useful for handheld or robot-mounted inspection of stationary objects. The major drawback of these multi-image encodings is that they often require the object to remain still while multiple images are captured, and they rely on a projector that may only be effective at relatively short ranges.
There are many other geometric techniques that are hybrids, tweaks, or extensions of the methods discussed here. By taking advantage of simple triangulation principles, 3D machine vision techniques have been, and continue to be, developed.
3D machine vision is transforming and enhancing our ability to automate and optimize tasks and processes across all industries. In the next video in this series, we will explore the 3D measurement techniques associated with the properties of light. If you have any comments or questions, please drop a line in the comment section below. Thanks, it’s been great having you here.