This is a two-part video on “A Brief History of 3D Vision Technologies”. Click here for Part 1 on light curtain and single point measurement.
Sheet of Light Scanner
Now shown here is a sheet of light scanner. And this is very similar to the single point scanner.
How they differ from a single point sensor is that rather than emitting a single point of light, they emit a fan beam of light. Now, this isn’t a fan beam that rasters or moves. We use optics to spread a single collimated source, like a laser pointer’s beam, into a fan. And then rather than using a linear camera with just a single row of pixels, we use an area camera.
And what this allows us to do is capture images like you can see over here on the left and, in the same triangulation manner that we showed for the single point sensor, convert where we find the laser in that image into known physical coordinates. Shown here is the scanner, and this would be the distance out from the scanner. Here we just have a kind of zigzag target and then the wall behind it. Just for completeness, the image here is shown as a heat map. It doesn’t correspond to heat; it’s just colored that way to make the laser more visible, rather than just a white line on a black field.
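As a concrete illustration of that triangulation, here is a minimal sketch in Python. The geometry it assumes (camera at the origin looking along the z axis, laser emitter offset along a baseline, laser sheet at a fixed angle) and the parameter values are simplifications for illustration only, not the actual calibration model of any real scanner.

```python
import math

def laser_point_to_xz(u_px, f_px, baseline_m, laser_angle_deg):
    """Triangulate one detected laser pixel into physical (x, z) coordinates.

    Simplified pinhole geometry (an assumption for illustration):
      - camera at the origin, optical axis along +z
      - laser emitter at (baseline_m, 0) on the x axis
      - laser sheet tilted laser_angle_deg up from the baseline
      - u_px: the laser's pixel offset from the image centre
      - f_px: focal length expressed in pixels
    """
    phi = math.atan2(u_px, f_px)           # camera ray angle from the optical axis
    theta = math.radians(laser_angle_deg)  # laser ray angle from the baseline
    # Intersect the camera ray with the laser ray: two angles on a known
    # baseline fix the triangle, so range falls out directly.
    z = baseline_m / (math.tan(phi) + 1.0 / math.tan(theta))
    x = z * math.tan(phi)                  # lateral position across the fan
    return x, z
```

Each camera column yields one such (x, z) point, which is why a single exposure produces a whole profile rather than a single point.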
A similar type of scanner would be the co-planar scanner; the main difference is that the geometry is set up differently. These are very popular scanners for edgers and trimmers because they don’t take up as much room and don’t need quite as wide an area. You are still creating triangles, or geometric measurements, but here you’re trying to find a given ray and where that ray lands on a pixel in the camera. So say we had a board out here, right where my mouse is, and we found a given ray on a given pixel in the camera. We would then be able to determine where that ray, at that pixel, existed in space.

The way that we find a given ray is that we code the light, so it looks kind of like a barcode, where any four rays combined together are completely unique within the scanner. So with the light shining on a board, you can always identify a given ray, find the pixel that ray lands on, and then determine the depth of the object at that ray.

This is a more specialized machine vision device, designed specifically for looking at boards and logs, whereas the SL-1880, the sheet of light scanner I showed on the previous slide, is a much more ubiquitous device that can be used across a huge range of applications. The drawback with a co-planar scanner is that it produces far fewer points: you only get a data point for every ray, whereas the sheet of light scanner gets a data point for every pixel, or every column, in the camera. So you’re talking hundreds of points versus thousands of points per scan.

One thing to note with both of these scanners is that they require relative motion, because they only scan one single profile of an object at a time. As the log moves underneath one of these scanners, it scans one profile of that log; the log needs to be moving, or it’ll just scan that same profile every single time. So that is a co-planar scanner.
Another benefit is that it has much higher noise immunity, because it’s not looking at quite as wide an area. So we do design a scanner very similar to this that is capable of being used outdoors, in direct sunlight.
So here’s an example of a bank of scanners. You probably have something like eight or ten scanners on top and eight or ten scanners on the bottom, scanning the board from both sides. They’re set up at an angle here so that they can capture both the leading and trailing faces at a reasonable resolution as well. A scan bank like this will typically accommodate more than just one size of board: two-by-fours, wider boards, and potentially taller boards, four-by-fours, etc. That all has to do with how the system is designed. Our company does not design the complete end systems; that’s generally the optimizer or the machine builder. We design the scanners themselves.
So those are a couple of the major scanning technologies that we make at Hermary. But there are quite a number of other 3D machine vision technologies, and stereo vision is a prime example. It’s probably, without our necessarily knowing it, the one we’re most familiar with, because this is how human vision works. It’s still very similar to the methods I already mentioned, in that everything is about triangles, or geometric measurements. So here we have a representation of two eyes, and depending on where an object is and the distance between the two eyes, the position of that object can be found. One drawback to stereo vision is that it requires texture. You can imagine yourself standing in front of an infinite white wall; you would never really know how far away that wall is. So if you’re looking at, say, a robot moving around with stereo vision, it typically has a much better ability to see things that have some sort of contrast, pattern, edge, or shadow break to them. This type of machine vision also often requires quite a bit of compute time, and the resolution can vary depending on what you’re looking at and how it’s been found. Now, one way to improve stereo vision is what’s called active stereo vision: projecting a pattern. That pattern, projected with, say, a laser or a projector like you might use for a movie theater or a home theater, gives the texture that you need to be able to measure depth. So even on that infinite flat white wall, you’d be able to determine how far away it is.
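For rectified cameras, the two-eye triangle above reduces to a simple depth-from-disparity relation, sketched here. The focal length and baseline values are placeholder calibration numbers, and real stereo pipelines spend most of their compute finding the disparity in the first place; this only shows the final geometric step.

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth of a point from its pixel disparity between two rectified
    cameras. The same point appears shifted between the left and right
    images; nearer objects shift more, so depth is inversely
    proportional to disparity."""
    if disparity_px <= 0:
        # No match found, e.g. a textureless white wall: depth is unknown.
        return float("inf")
    return focal_px * baseline_m / disparity_px
```

Note how zero disparity gives no depth at all: that is the featureless-wall problem in one line, and exactly what projecting a pattern (active stereo) fixes.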
Shown here are some examples of stereo vision. Stereo vision has been quite popular for autonomous vehicles and autonomous vehicle research, because it’s very similar to how we see the world. It knows roughly where things are, maybe not as accurately as you might need for the wood products industry, but you can know that there is a car in front of you and that it’s roughly this far away. It’s also an easy technology to get started with when exploring machine vision, because cameras are easy to obtain at relatively low cost.
Tying back into pattern projection, there are other methods of pattern projection as well. You don’t necessarily need stereo vision for a pattern projection type of 3D machine vision sensor; such sensors often use it just for redundancy. You can also project something like binary patterns, where a given depth can be isolated by which combination of binary patterns exists at a given pixel. Or even something like the iPhone’s Face ID: it uses a somewhat random pattern, but it’s a calibrated random pattern that it can use to determine whether the shape of your face matches what it expects.
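The binary-pattern idea can be sketched as follows: each projected pattern contributes one bit, and the sequence of on/off values a single pixel observes identifies a projector stripe, which in turn fixes depth by triangulation. Gray codes, also shown, are a common refinement since adjacent stripes then differ by only one bit; both decoders are illustrations, not any particular product’s scheme.

```python
def decode_binary_patterns(bits):
    """Recover a projector stripe index from the on/off values one camera
    pixel observed across a sequence of plain binary patterns, MSB first.
    With n patterns, 2**n distinct stripes can be isolated."""
    index = 0
    for b in bits:
        index = (index << 1) | b
    return index

def decode_gray_patterns(bits):
    """Same idea with Gray-coded patterns (MSB first). Binary bit i is the
    XOR of Gray bits 0..i, so a running XOR converts as we go. Gray codes
    are preferred in practice because a pixel sitting on a stripe boundary
    can be wrong by at most one stripe, not half the field of view."""
    acc = 0
    index = 0
    for g in bits:
        acc ^= g
        index = (index << 1) | acc
    return index
```

So with just 10 projected patterns, each pixel can be assigned one of 1024 stripes, which is how a handful of exposures isolates depth densely across the whole image.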
There are also machine vision devices that operate somewhat fundamentally differently from the ones I’ve already mentioned, insofar as they’re not using a geometric relationship such as a triangle to determine depth, but are using light itself. What I mean by this is that the device emits a pulse or a snapshot of light, and it then either measures the time it takes that light to hit an object and return, or it measures the phase that light returns at, to determine the depth of the object you’re looking at. These aren’t as common in the wood industry, because they can suffer from quite a bit of noise, and they don’t necessarily have the same accuracy or robustness as some of the geometric measurements. But I do know for a fact that some people use the device over on the right side of the screen for back pressure measurement, measuring the back pressure going into a machine center. The LIDAR devices shown over here are very common in lots of self-driving car applications, because they can see very, very far away, since they use single, strong pulses of infrared laser light.
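Both time-of-flight variants mentioned above boil down to short formulas, sketched here. This assumes ideal, noise-free measurements, and the modulation frequency in the phase example is a placeholder value.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def pulse_tof_depth(round_trip_s):
    """Direct (pulsed) time of flight: the light travels out and back,
    so range is half the round-trip distance."""
    return C * round_trip_s / 2.0

def phase_tof_depth(phase_rad, modulation_hz):
    """Indirect (phase) time of flight: the emitted light is amplitude
    modulated, and range is recovered from the phase shift of the return.
    The result is only unambiguous within half a modulation wavelength,
    one reason these devices can be noisier than geometric methods."""
    wavelength_m = C / modulation_hz
    return (phase_rad / (2.0 * math.pi)) * wavelength_m / 2.0
```

The pulsed formula also shows why timing is the hard part: resolving 1 mm of range means resolving about 7 picoseconds of round-trip time.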
And the final type of machine vision we’ll touch on here today is X-ray, because it’s becoming more popular in the wood industry. As things are evolving, people want to get more and more data and be able to optimize based on more and more data. One thing that we haven’t been able to do much of is see inside the log and make good decisions farther and farther upstream within the mill. We want to isolate where knots, cracks, and other defects are, and then actually cut the log at the primary breakdown stage to maximize our yield of high-quality boards, rather than just the number of boards itself. This can be done, and there are applications being explored and researched to accomplish this with X-ray, or some combination involving X-ray. One major hurdle for X-ray is that it’s very expensive. There are also added risks: X-rays are known to be dangerous.
Machine Vision Overview
To do a simple review of what we just discussed: there are many types of machine vision, and we didn’t even really touch on all the different types of devices or the nuances of the differences between them. What we have discussed today is by no means comprehensive; it’s just meant to give you a taste. The selection of which type of machine vision technology to use, 3D or 2D, and which specific device to use, is very much application dependent and should be based on application requirements and capabilities. And where and how machine vision can be used is constantly growing and evolving. It’s becoming more and more prevalent in the industry, especially as the desire to remove people from harsh environments and dangerous jobs becomes a driver in these types of environments.