So one of the oldest and simplest methods of 3D measurement, and one that was used very widely in the wood industry, is what's called the light curtain. And this was the first product that my company made. Basically, what this consists of is a bank of emitters, which you can see over on the left here, and a bank of detectors over on the right here. The emitters emit light from some source, typically an LED or laser diode, and that light propagates over to where the detectors are. If there's any object in the way, it blocks the light from reaching the detectors on the other side. So if detectors are dark, it means there's an object there, and you get a dimension across that object. Now, it doesn't give you any idea of what the shape of that object is other than its most extreme dimensions. But you can use multiple banks of these light curtains to get dimensions in multiple planes. The setup you see right here was very common in the wood industry back in the early 90s, where you'd have two banks set up. That would allow you to get the four points shown in purple around a log and give you a rough idea of the dimensions, or a representative ellipse, at a given time as the log moves through the bank of light curtains.
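The per-beam logic is simple enough to sketch. This is a toy illustration with an assumed 5 mm beam pitch and invented function names, not any real light-curtain product's behaviour:

```python
# Toy sketch of light-curtain logic: the controller reads which detector
# beams are dark and reports the extent they span. The 5 mm beam pitch and
# the function name are assumptions for illustration, not a real product's.

def curtain_extent(blocked, pitch_mm=5.0):
    """Extent (mm) spanned by blocked detector beams; 0.0 if nothing is in the way."""
    hits = [i for i, dark in enumerate(blocked) if dark]
    if not hits:
        return 0.0
    # The object spans first-to-last dark beam; +1 so a single blocked
    # beam still reports one beam width rather than zero.
    return (hits[-1] - hits[0] + 1) * pitch_mm

# A 12-beam curtain with an object blocking beams 3 through 7:
beams = [False] * 3 + [True] * 5 + [False] * 4
print(curtain_extent(beams))  # 25.0 (5 beams x 5 mm)
```

Note that this only gives the outermost extent, which is exactly the limitation described above: everything between the first and last dark beam is invisible to the curtain.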
Single Point Depth Measurement
The next logical step after the light curtain is single point depth measurement. Being able to measure depth at a single point proved to be a very powerful tool for real-time dimensional monitoring, say in something like a planer mill, but we'll get to that in a minute. Basically, what this type of device consists of is the device itself, shown over here in blue, which emits a single collimated laser beam, kind of like a laser pointer, out from its face. If there is an object that it hits, say a board, some of that light will bounce back towards a camera in the device, or what we'll call an imager here. And if that object were in a different position, maybe a couple of inches farther out, that same laser would hit a different position within the imager. What you'll notice here is that these are triangles. So this ends up being very much a geometric measurement, where we can calibrate any position in the imager to a given depth. And we can do that very accurately, to submillimeter accuracy, and get a very high-quality result, largely because it's based on very solid geometry. So here's a good example of using an imager, or a camera that captures intensity, in conjunction with structured light to get a 3D measurement. This is a very common way of getting 3D measurements, and there are lots of different devices that use this geometric property of positioning some sort of light source with some sort of imager and getting data back.
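The similar-triangles relation behind this can be sketched in a few lines. The baseline and focal length below are invented example numbers for illustration, not the specs of any real sensor:

```python
# Sketch of the similar-triangles relation in single-point laser triangulation:
# a laser spot imaged farther from the optical axis means a closer object.
# The baseline and focal length are made-up example values, not sensor specs.

def depth_from_spot(spot_x_mm, baseline_mm=100.0, focal_mm=16.0):
    """Depth at which the laser spot images spot_x_mm from the optical axis."""
    # Similar triangles: spot_x / focal = baseline / depth
    return focal_mm * baseline_mm / spot_x_mm

print(depth_from_spot(4.0))  # 400.0 mm
print(depth_from_spot(2.0))  # 800.0 mm: farther objects image closer to the axis
```

Calibration in a real device amounts to building this mapping empirically for every imager position, which is what makes submillimeter accuracy achievable.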
Shown here is a mock-up of a planer mill, where in real-time, using three or four of these devices, you can monitor the dimensions of a board. If anything's out of dimension, you know it in real-time. And this was a big step up from having somebody check once or a couple of times a shift, with callipers or some other manual measurement method, whether the dimensions were on or off. By having this be real-time, if the dimensions were off, you knew right away, you could have the machine center adjust itself, and you wouldn't have a lot of undersized or oversized product to deal with. Shown over here is a model of what we call the LRS-50. It's one of the products we make; it's very small and can basically sit in the palm of your hand. This is an example of a single point sensor. There are quite a number of these on the market, so you might run into a number of different examples of them in your careers.
Sheet of Light Scanner
Now shown here is a sheet of light scanner. And this is very similar to the single point scanner.
How they differ from a single point sensor is that rather than emitting a single point of light, they emit a fan beam of light. Now this isn't a fan beam that's rastering or moving. We use optics to spread a single collimated, laser-pointer-style source into a fan beam. And then rather than using a linear camera with just a single row of pixels, we use an area camera.
What this allows us to do is capture images, like you can see over here on the left, and, in the same triangulation manner that we showed for the single point sensor, convert where we find the laser in the image into known physical coordinates. Shown here is the scanner, and this would be the distance out from the scanner. Here we just have a kind of zigzag target and then the wall behind it. Just for completeness, the image here is shown as a kind of heat map. It doesn't correspond to heat; it's just shown that way to make the laser more visible with color, rather than just a white line on a black field.
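A minimal sketch of the per-column laser extraction described above, using a toy grayscale image and an invented row-to-depth calibration table (real scanners do subpixel peak fitting, which is omitted here):

```python
# Sketch of the per-column laser extraction a sheet-of-light scanner performs.
# The image, the calibration table, and all names are illustrative assumptions.

def extract_profile(image, row_to_depth_mm):
    """For each camera column, find the brightest row (the laser line)
    and map it to a calibrated depth. Returns one (column, depth) per column."""
    profile = []
    for col in range(len(image[0])):
        column = [image[row][col] for row in range(len(image))]
        peak_row = max(range(len(column)), key=lambda r: column[r])
        profile.append((col, row_to_depth_mm[peak_row]))
    return profile

# 4x3 toy image: the laser peak sits on row 1 in column 0 and row 2 elsewhere.
img = [
    [10, 10, 10],
    [90, 20, 15],
    [30, 95, 88],
    [12, 11, 10],
]
depths = [300.0, 320.0, 340.0, 360.0]  # pretend calibration: row index -> mm
print(extract_profile(img, depths))  # [(0, 320.0), (1, 340.0), (2, 340.0)]
```

Each captured frame yields one such profile, which is why the scan is a cross-section rather than a full surface.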
A similar type of scanner would be the co-planar scanner. The main difference is that the geometry is set up differently. These are very popular scanners for edgers and trimmers because they don't take up as much room; they don't need quite as wide of an area. The difference here is that you're still creating triangles, a geometric measurement, but you're trying to find a given ray and where that ray lands on a pixel in the camera. So say we had a board out here, right where my mouse is, and we found a ray in the camera on a given pixel. We would then be able to determine where that ray, hitting that pixel, existed in space. And the way that we find which ray is which is that we code the light. It looks kind of like a barcode, where any four rays combined together are completely unique within the scanner. So wherever the light is shining on a board, you can always identify a given ray, find the pixel that ray lies on, and then determine the depth of the object at that ray. This is a more specialty machine vision device, specifically designed for looking at boards and logs, whereas the SL-1880, the sheet of light scanner that I showed on the previous slide, is a much more ubiquitous device that can be used across a huge range of applications. The drawback of a co-planar scanner is that it produces far fewer points: you only get a data point for every ray, whereas the sheet of light scanner gets a data point for every pixel, or every column, in the camera. So you're talking hundreds of points versus thousands of points per scan. One thing to note with both of these scanners is that they require relative motion, because they only scan one single profile of an object at a time. As the log moves underneath one of these scanners, it scans one profile of that log, but the log needs to be moving; otherwise it will just scan that same profile every single time. So that is a co-planar scanner.
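The "barcode" coding idea can be illustrated with a short binary sequence in which every window of four consecutive rays is unique. The sequence and function below are stand-ins for illustration, not the actual coding the scanner uses:

```python
# Illustration of coded light: if every run of 4 consecutive rays forms a
# unique on/off pattern, observing any 4 neighbouring rays on an object tells
# you exactly which rays they are, and therefore which calibration applies.
# The code string below is a toy example, not a real scanner's coding.

def window_index(code, window):
    """Return the starting ray number of an observed window within the code."""
    for i in range(len(code) - len(window) + 1):
        if code[i:i + len(window)] == window:
            return i
    raise ValueError("window not found; not a valid observation of this code")

# A short code in which every 4-symbol window is distinct.
code = "0000100110101111"
print(window_index(code, "1001"))  # 4: this pattern only starts at ray 4
```

Once the ray number is known, the ray-to-pixel correspondence fixes the depth, exactly as described above.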
Another benefit is that it has much higher noise immunity, because it's not looking at quite as wide of an area. So we do design a scanner very similar to this that is capable of being used outdoors in direct sunlight.
So here's an example of a bank of scanners. You probably have something like eight or ten scanners on top and eight or ten on the bottom, scanning the board from both sides. They're set up at an angle here so that they can get both the leading and trailing faces at a reasonable resolution as well. And a setup like this will typically accommodate more than just one size of board: probably two-by-fours and wider boards, but also potentially taller boards, four-by-fours, etc. That all has to do with how the system is designed. Our company does not design the complete end systems; that's generally the optimizer or the machine builder. We design the scanners themselves.
So those are a couple of the major scanning technologies that we make at Hermary. But there are quite a number of other 3D machine vision technologies, and stereo vision is a prime example. It's probably, without us necessarily knowing it, the one we're most familiar with, because this is how human vision works. It's still very similar to the methods I already mentioned, in that everything is about triangles, or geometric measurements. So here we have a representation of two eyes. Depending on where an object is, and given the distance between the two eyes, the position of that object can be found. One drawback to stereo vision is that it requires texture. You can imagine yourself standing in front of an infinite white wall: you would never really know how far away that wall is. So if you're looking at, say, a robot moving around with stereo vision, it typically has a much better ability to see things that have some sort of contrast, pattern, edge, or shadow break to them. This type of machine vision also often requires quite a bit of compute time, and the resolution can vary depending on what you're looking at and how it's been found. Now one way to improve stereo vision is what's called active stereo vision, which is to project a pattern. That pattern, projected with, say, a laser, or a projector like you might use for a movie theater or home theater, gives the texture you need to be able to measure depth. So even on that infinite flat white wall, you'd be able to determine how far away it is.
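The two-eye triangle reduces to the standard pinhole stereo relation, depth = focal length x baseline / disparity. The camera numbers below are made up for illustration:

```python
# Standard pinhole stereo relation sketched with invented camera numbers:
# a feature matched in both images shifts by "disparity" pixels, and
# depth = focal_length_px * baseline / disparity_px.

def stereo_depth_mm(disparity_px, focal_px=800.0, baseline_mm=60.0):
    """Depth of a point whose left/right image positions differ by disparity_px."""
    if disparity_px <= 0:
        # No match (e.g. a textureless white wall) means no disparity, no depth.
        raise ValueError("no match found; disparity must be positive")
    return focal_px * baseline_mm / disparity_px

print(stereo_depth_mm(48.0))  # 1000.0 mm
print(stereo_depth_mm(24.0))  # 2000.0 mm: half the disparity, twice the depth
```

The guard clause is the texture requirement in miniature: with nothing to match between the two views, there is no disparity to triangulate from.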
Shown here are some examples of stereo vision. Stereo vision has been quite popular for autonomous vehicles and autonomous vehicle research, because it's very similar to how we see the world. It knows where things are roughly, maybe not as accurately as you might need for the wood products industry, but you can know that that's a car in front of you and it's roughly this far away. It's also an easy one to get involved with when starting to explore machine vision, because cameras are easy to get at relatively low cost.
Tying back into pattern projection, there are other methods of pattern projection as well. You don't necessarily need stereo vision for a pattern-projection type of 3D machine vision sensor; they often use it just for redundancy. You can also project something like binary patterns, where a given depth can be isolated by what combination of binary patterns exists at a given pixel. Or even something like the iPhone's Face ID: it uses a somewhat random pattern, but it's a calibrated random pattern that it can use to determine whether the shape of your face matches what it expects it should be.
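Binary patterns are often projected as Gray codes, so adjacent stripes differ by one bit. As a hedged sketch (pattern counts and decoding details vary by system), here is how the on/off observations at one pixel decode to a projector column:

```python
# Sketch of binary (Gray-code) structured light: project N stripe patterns,
# and the on/off sequence observed at a camera pixel identifies which
# projector column illuminated it. This is the standard Gray-to-binary
# decode; the pattern count and setup here are illustrative assumptions.

def gray_to_index(bits):
    """Decode per-pixel on/off observations (most significant pattern first)."""
    index, acc = 0, 0
    for b in bits:
        acc ^= b                    # each Gray bit XORs into a running parity
        index = (index << 1) | acc  # accumulate the plain binary column index
    return index

# With 3 patterns (8 columns), a pixel that saw on, on, on is column 5.
print(gray_to_index([1, 1, 1]))  # 5
```

Knowing the projector column at a pixel re-creates the same ray-to-pixel triangle used by the scanners earlier, which is how the depth falls out.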
There are also machine vision devices that operate somewhat fundamentally differently from the ones I've already mentioned, insofar as they're not using a geometric relationship, such as a triangle, to determine depth, but are using light itself. What I mean by this is that the device emits a pulse or a snapshot of light, and it then either accounts for the time it takes that light to hit an object and return, or accounts for the phase that light returns at, to determine the depth of the object you're looking at. These aren't as common in the wood industry, because they can suffer from quite a bit of noise, and they also don't necessarily have the same accuracy or robustness as some of the geometric measurements. But I do know for a fact that some people use the device over on the right side of the screen to measure the back pressure going into a machine center. The LIDAR devices shown over here are very common in lots of self-driving car applications, because they can see very far away, as they use single, strong pulses of infrared laser light.
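The time-of-flight arithmetic itself is just distance = speed of light x round-trip time / 2:

```python
# Time-of-flight arithmetic: the pulse travels out and back, so
# distance = (speed of light * round-trip time) / 2.

C_MM_PER_NS = 299.792458  # light covers roughly 0.3 m per nanosecond

def tof_distance_mm(round_trip_ns):
    """Distance to an object whose reflected pulse returned after round_trip_ns."""
    return C_MM_PER_NS * round_trip_ns / 2.0

# A pulse returning after 20 ns puts the object about 3 metres away.
print(round(tof_distance_mm(20.0)))  # 2998
```

The tiny numbers involved hint at why these devices are noise-prone at short range: a 1 mm depth change corresponds to only about 7 picoseconds of round-trip time.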
And the final type of machine vision we'll touch on here today is X-ray, and that's because it's becoming more popular in the wood industry. As things are evolving, people want to get more and more data and be able to optimize with more and more data. One thing we haven't been able to do as much is see inside the log, and try to make great decisions farther and farther upstream within the mill. We want to isolate where knots, cracks, and other defects are, and then actually cut the log at the primary breakdown stage to maximize our yield of high-quality boards, rather than just the number of boards itself. This can be done, and there are applications being explored and researched to accomplish this with X-ray, or some combination involving X-ray. One major hurdle for X-ray is that it's very expensive. And there are also added risks: X-rays are known to be dangerous.
Machine Vision Overview
To do a simple review of what we just discussed: there are many types of machine vision, and we didn't even really touch on all the different types of devices and the nuances of the differences between them. What we have discussed today is by no means close to comprehensive; it's just meant to give you a taste of it. The selection of what type of machine vision technology to use, 3D or 2D, and which specific device to use, is very much application dependent, and should be based on application requirements and capabilities. Where and how machine vision can be used is constantly growing and evolving. It's becoming more and more prevalent in the industry, especially as the desire to remove people from harsh environments and dangerous jobs becomes a driver in these types of environments.
Visit our YouTube channel for more learning resources.