Small Automation, Big Impacts: A Sandwich Assembly Plant’s Journey

6 min read

Prepackaged sandwiches are a quick and nutritious meal for many busy office workers. The global prepackaged sandwich industry was reported at USD 13 billion in 2018, with North America leading all other regions at 30% of the market share.

The journey to finding the “right” other half

Commercial sandwich assembly factories are under tremendous daily pressure to ensure these ready-to-eat meals arrive on time at retail locations, including airlines, chain coffee shops, delis, supermarkets, and convenience stores spread out across the country. Working with food is also time-sensitive: any spoilage can harm consumers, tarnish the company’s reputation, and generate waste that hurts the bottom line.

From gourmet ciabatta sandwiches to breakfast burritos, sandwich assembly plants must rely on automation to ensure their production lines stay flexible and responsive to various customer requests.

About the Small Automation, Big Impacts series

The automation journey series is based on Hermary’s extensive field experience and the many proofs of concept we have worked on with our partners and end customers. We want to inspire the industry and help automation experts by sharing what 3D machine vision can do for them. Together, we will improve industrial automation as a whole.

We want to hear about your journeys with industrial automation. Share your story with us, and we will make sure that it is published on our website for the whole industry to see.

Production bottleneck is an opportunity to automate

One of the bottlenecks in the production lines is flipping buns onto their spreadable sides and sending the top and bottom halves down different lines. The initial solution was to use 2D cameras to inspect each bun and determine which side it was resting on. The objectives were to:

  1. Determine whether the halved buns were on their spreadable sides or not, and
  2. Sort the bottom from the top halves

2D machine vision works by comparing contrast against reference images, which satisfies the first objective: a bun’s crusty exterior is always darker than its cut interior. The second objective, however, is challenging for a 2D camera system, as there is often not enough contrast to tell a top half from a bottom half.
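As a rough illustration of the contrast-based approach, the sketch below classifies a grayscale crop of a halved bun by comparing its mean brightness against a threshold. The threshold value and function name are hypothetical, not taken from the plant’s actual vision software.

```python
import numpy as np

# Hypothetical threshold: crust-up buns image darker than cut-side-up buns
# (8-bit grayscale, tuned per camera and lighting setup).
CRUST_BRIGHTNESS_THRESHOLD = 110

def is_spreadable_side_up(gray_crop: np.ndarray) -> bool:
    """Classify one grayscale crop of a halved bun.

    The cut (spreadable) surface is pale crumb and the crust is darker, so
    mean intensity separates the two states -- as long as the lighting stays
    consistent, which is exactly the weakness noted above.
    """
    return bool(gray_crop.mean() > CRUST_BRIGHTNESS_THRESHOLD)

if __name__ == "__main__":
    fake_crop = np.full((64, 64), 180, dtype=np.uint8)   # synthetic bright crop
    print(is_spreadable_side_up(fake_crop))              # True: cut side up
```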

3D data help robots become sandwich experts

3D scanners using laser triangulation can capture geometric features of objects within the field of view. After the workers unload the buns onto the assembly line, these buns pass under an array of 3D scanners. Laser scanners can perform up to 1,250 scans per second, with a minimum of 650 data points per scan. One scanner, therefore, can capture up to 812,500 points per second, far exceeding what human vision can process.
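The throughput figure follows directly from the scan rate and the points captured per profile; a quick back-of-the-envelope check:

```python
scans_per_second = 1_250   # profiles per second, per scanner
points_per_scan = 650      # minimum data points in each profile

points_per_second = scans_per_second * points_per_scan
print(f"{points_per_second:,} points per second")   # 812,500 from a single scanner
```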

These scanners send point cloud data to a computer programmed with algorithms that identify the unique geometric features of each bun’s possible state. Based on the bun’s state, the program passes appropriate instructions to downstream robotic arms. Laser triangulation does not rely on contrast and is immune to ambient lighting. Additionally, the deterministic nature of point clouds ensures the scanning system delivers accurate and consistent results for the robots regardless of the bun’s shape, orientation, and color.
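One simple way to picture that state classification is the heuristic sketch below. It is a hypothetical illustration, not Hermary’s production algorithm: because the crust side of a halved bun is domed while the cut side is flat, the height profile in the point cloud alone can hint at the bun’s state. All threshold values are invented.

```python
import numpy as np

def classify_bun(points: np.ndarray) -> dict:
    """Classify one bun from its point cloud (N x 3 array of x, y, z in mm).

    Heuristic sketch: a crust-up bun presents a domed top surface with large
    height variation, while a cut-side-up bun presents a flat top; the overall
    height spread hints at whether it is a (thicker) top or (thinner) bottom half.
    """
    z = points[:, 2]
    top_surface = z[z > np.percentile(z, 75)]   # upper quarter of the profile
    domed = top_surface.std() > 2.0             # mm; flat cut faces vary little
    tall = (z.max() - z.min()) > 35.0           # mm; assumed top-half thickness
    return {
        "half": "top" if tall else "bottom",
        "spreadable_side_up": not domed,
    }
```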

In simplified terms, each bun is in one of four states: a top or a bottom half, landing either spreadable side up or spreadable side down. Buns that land spreadable side down are flipped, and top and bottom halves are routed to their respective lines.
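A minimal sketch of that state-to-instruction mapping, continuing the hypothetical example above (the actual controller commands and line names are plant-specific):

```python
# (half, spreadable_side_up) -> (flip action, destination line); names are invented.
BUN_INSTRUCTIONS = {
    ("top",    True):  ("no_flip", "top_half_line"),
    ("top",    False): ("flip",    "top_half_line"),
    ("bottom", True):  ("no_flip", "bottom_half_line"),
    ("bottom", False): ("flip",    "bottom_half_line"),
}

def instruction_for(state: dict) -> tuple:
    """Look up the robot instruction for a classified bun state."""
    return BUN_INSTRUCTIONS[(state["half"], state["spreadable_side_up"])]
```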

Flipping buns and sending each one down the correct path may seem like a trivial task, but when manufacturing faces an unprecedented labor shortage, small automation can bring outsized, long-lasting impacts.

The bonus benefits

Deep learning is a subset of machine learning that perceives and processes data closer to the way the human brain works. Whereas traditional machine learning recognizes patterns from manually labeled examples and engineered features, deep learning consumes large amounts of data and extracts object features on its own. The volume of high-quality data a 3D laser scanner generates is ideal for deep learning. Additionally, recent advances in deep learning on unstructured vector data have made point clouds a powerful input for training scanning systems to become faster and smarter.

Using point cloud data from Hermary’s 3D scanners and a Convolutional Neural Network-based model, a scanning system can identify a single image in 3.5 milliseconds at an accuracy rate of 98%. The result roughly translates to the system processing 285 buns per second, far exceeding what is humanly possible.

(Hermary & Ou, 2018)

Relieving humans from this task eliminates the ergonomic injuries workers can sustain from standing for entire shifts. Reduced human contact with fresh ingredients also significantly lowers the risk of food contamination. Food processing plants generally must keep the production floor below 4 °C (40 °F). While this keeps ingredients fresh and prevents spoilage, it is a demanding environment for workers. Robotic arms, however, do not mind working at reduced temperatures 24/7 without breaks. Manufacturers benefit from upskilling these workers and reassigning them where their experience and skill sets add significant value to production.

Small Automation, Big Impacts is a collection of industrial automation journeys made possible by 3D machine vision

We believe sharing these stories is a way to stay connected with our partners and positively impact the manufacturing community.

This free eBook also includes a bonus chapter to help automation professionals easily decide when to use 2D and 3D machine vision

All this is just one click away in our new eBook, Small Automation, Big Impacts.

The journey continues

Various dietary preferences are shaping the ready-to-eat market. Manufacturers must strengthen their production flexibility to stay ahead of the competition.

Consumers’ food and beverage preferences are constantly changing, putting restaurants and food manufacturers alike under enormous pressure to innovate. McKinsey research from 2019 shows that rising food sensitivities lead approximately 85 million American shoppers to avoid foods that may contain milk, eggs, wheat, fish, shellfish, peanuts, tree nuts, soy, or sesame. Growing awareness of environmental sustainability and health consciousness has also spawned dietary preferences such as plant-based, ketogenic, and paleo diets. These forces are shaping the market, and food manufacturers must remain flexible and agile to avoid being left behind by the competition.

3D machine vision is agnostic to the type of baked goods it scans, and its data has proven valuable when optimizing a given application. Moreover, using deep learning with high-quality point cloud data allows a scanning system to learn discriminative features that generalize further. Flexible automation is thus achieved by training a scanning system to recognize different classes of foods. What works for a sandwich assembly line can work just as smoothly for a burrito maker or a made-to-order cake factory.
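One common way to get that flexibility is transfer learning: keep the learned feature extractor and retrain only the final layer for a new product’s classes. The sketch below reuses the hypothetical BunStateNet from the earlier example rather than any actual Hermary model, and the class count for the new product is invented.

```python
import torch.nn as nn

# Reuse the feature extractor trained on buns and swap in a new output layer
# for, say, five burrito fold states; only the new layer is trained at first.
def adapt_for_new_product(model: "BunStateNet", new_num_classes: int) -> "BunStateNet":
    for param in model.features.parameters():
        param.requires_grad = False                     # freeze learned features
    model.classifier = nn.Linear(32 * 16 * 16, new_num_classes)
    return model
```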

Big impacts of small automation

Industry: Food & beverage, ready-to-eat, grab-and-go, prepackaged foods

Safety

  • Minimizes the risk of food contamination
  • Relieves workers from standing for prolonged periods

Operation

  • Fewer manual workers needed
  • Facility can run 24/7 if needed
  • Strengthened production flexibility and scalability

Future expansibility

  • Upskilling workers to carry out value-added tasks
  • 3D machine vision is agnostic to the types of food scanned
  • High-speed scanners can accommodate even faster conveyor speeds
  • Use of deep learning to recognize different categories of foods

Bottom line

  • Increased throughput
  • Reduced reliance on manual work
  • Minimized food waste and spoilage

Sign up for our newsletter today or follow us on LinkedIn to stay up to date on the latest installment of Small Automation, Big Impacts: Automation Journeys Made Possible by Industrial 3D Scanners.

Read about Small Automation Big Impacts: 3D Machine Vision Sees Clearly in Mines.


Footnotes

Hermary and Ou. “Using a Convolution Neural Network to Identify the Quality and Orientation of An English Muffin on a Manufacturing Line.” 13 Dec. 2018. Machine Learning. University of Vermont.

About The Author - Terry Hermary

Co-founder of Hermary.

Terry is the customer-facing machine vision expert at Hermary with over 30 years of experience. With a background in electrical engineering, he specializes in developing 3D vision applications with system integrators and machine builders. He is passionate about solving unique automation challenges using 3D vision technologies. Over the past three decades, Terry and his team have established Hermary as the leading innovative 3D machine vision provider, revolutionizing industries from sawmilling to meat processing.

Qualifications:

  • Co-founded Hermary Machine Vision in 1991
  • Patent holder of many 3D machine vision inventions