A high-speed machine cuts a hole to a wide tolerance because the tool wears over the life of the equipment. That tolerance compromises downstream manufacturing steps, driving a scrap rate of 15 percent. Scrap reduces the number of good parts the machine produces in a day, so engineers shut it down to reprogram. The next iteration is better, but the scrap rate is still 8.5 percent. To make up for the downtime and scrap, the team accepts the scrap rate and increases fabrication speed to avoid further reprogramming downtime. The increased speed improves throughput, but the wide tolerance pushes the scrap rate back up to 12 percent. Capacity continues to lag. Soon production is so far behind that management must approve another shift and overtime.
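The trade-off above can be put in numbers. The sketch below uses the scrap rates from the scenario (8.5 percent and 12 percent); the gross rate of 1,000 parts per day and the 15 percent speed increase are illustrative assumptions, not figures from the scenario.

```python
# Effective throughput under scrap: good parts/day = gross rate * (1 - scrap).
# Scrap rates (8.5%, 12%) come from the scenario above; the gross rate of
# 1,000 parts/day and the 15% speed increase are illustrative assumptions.

def good_parts_per_day(gross_rate: float, scrap_rate: float) -> float:
    """Parts that survive inspection per day."""
    return gross_rate * (1.0 - scrap_rate)

baseline = good_parts_per_day(1000, 0.085)   # reprogrammed machine
sped_up = good_parts_per_day(1150, 0.12)     # +15% speed, wider tolerance

print(f"baseline: {baseline:.0f} good parts/day")  # 915
print(f"sped up:  {sped_up:.0f} good parts/day")   # 1012
```

Running faster buys only about a hundred extra good parts per day, because the wider tolerance converts much of the added speed directly into scrap.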
Downtime, reprogram, run, check, repeat.
This cycle now seems to be a death spiral, hopelessly incapable of catching up to the production schedule. High-speed robotics hinges on accuracy, and the cost of an errant dimensional reading can be catastrophic. 3D machine vision has disrupted intelligent robotics by improving the quality, throughput, and cost of mass manufacturing. Able to react in real time, the technology auto-corrects while providing feedback to operators, keeping production running and proactively alerting the robot’s human counterparts to the issue.
3D machine vision receives, processes, and reacts to unexpected events during operation, completing its task without reprogramming for a fully automated experience. In the following, we explore three manufacturing tasks handled by robotics—pick and place, outgoing dimensional inspection, and defect identification—along with how 3D vision solves a known process error in each.
Pick and Place
While automated outgoing dimensional inspection benefits from the accuracy of 3D machine vision, pick and place benefits from its flexibility. Pick and place is a critical process step, especially in a climate when supply chains everywhere are strained. Software engineers develop algorithms to detect, reach, grab, move, and place an item for order fulfillment. The enterprise resource planning (ERP) system receives an order and transmits it to the robotics for product pull.
With an influx of orders, wasted time is lost revenue. If the robot grips with improper force, it can damage the product, losing time and a saleable good in the process. 3D imaging captures a holistic picture of the product and feeds grip-strength information back to the processor in real time, issuing a dynamic response for the next pick-and-place action. Instead of matching against a 2D drawing, the robot calibrates to a CAD model and triangulates the item’s position in real time. Intelligent robotics learns and optimizes the best way to grab the item over many repetitions of picking up the product. The accuracy of the image location in space, coupled with a view of the product’s structural integrity, enables the robotics to continuously improve (and speed up) order fulfillment.
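The dynamic grip response described above amounts to a simple closed loop: loosen when the imaging shows deformation, tighten when the part slips. The sketch below is a hedged illustration of that loop; the sensor flags, force values, and adjustment step are assumptions, not a specific vendor API.

```python
# Illustrative closed-loop grip adjustment. The slip/deformation flags would
# come from 3D imaging of the part; values and step size are assumptions.

def adjust_grip(force: float, slip_detected: bool, deform_detected: bool,
                step: float = 0.5) -> float:
    """Return the grip force (N) to use on the next pick."""
    if slip_detected:        # too loose: the part moved in the gripper
        return force + step
    if deform_detected:      # too tight: 3D image shows part deformation
        return force - step
    return force             # within the acceptable window: keep it

# Simulated convergence: start too loose, tighten until slip stops.
force = 5.0
for _ in range(4):
    force = adjust_grip(force, slip_detected=force < 7.0, deform_detected=False)
print(force)  # 7.0
```

Each repetition nudges the force toward the window between slipping and crushing, which is the "learning over many repetitions" the text describes in its simplest form.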
Outgoing Dimensional Inspection
3D machine vision is ideal for measurement and dimensional inspection. It uses image sensors to record data in the height, width, and depth dimensions and locates the part’s orientation in the remaining degrees of freedom on the yaw, pitch, and roll axes. This approach yields the enhanced accuracy required for tightly toleranced control-plan dimensions. 2D inspection compares a flat image of a part to a 2D engineering drawing or a known set of measurements. With 3D imaging, a picture of the shape, volume, or depth position of an object or feature augments those benefits.
A common challenge in 2D inspection is a feature that misses specification in depth or in rotation about an axis normal to the viewing surface. A planar view might not catch such a deviation, passing a part that should be rejected and flagged for the operator to check the process or equipment. 3D machine vision addresses this challenge by collecting positional data along all six degrees of freedom and building a complete image. Inspector confidence rises with the added data as the robotics compares multiple versions of the picture and the software superimposes them into a single, real-time perspective of the relevant part or feature, all without human intervention.
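A six-degree-of-freedom check can be sketched as a comparison of a measured pose against nominal values in x/y/z and yaw/pitch/roll. The nominal dimensions and tolerances below are assumptions for illustration, not a real control plan.

```python
# Illustrative 6-DOF tolerance check: x/y/z in mm, yaw/pitch/roll in degrees.
# Nominal values and tolerances are assumed for the sketch.

NOMINAL = {"x": 0.0, "y": 0.0, "z": 12.5, "yaw": 0.0, "pitch": 0.0, "roll": 0.0}
TOLERANCE = {"x": 0.1, "y": 0.1, "z": 0.05, "yaw": 0.5, "pitch": 0.5, "roll": 0.5}

def out_of_tolerance(measured: dict) -> list:
    """Return the axes where the measured pose exceeds its tolerance."""
    return [axis for axis, nom in NOMINAL.items()
            if abs(measured[axis] - nom) > TOLERANCE[axis]]

# A part that looks fine in a top-down 2D view (x and y in spec) but is
# too deep and slightly tilted: only the 3D check catches z and pitch.
measured = {"x": 0.02, "y": -0.03, "z": 12.62, "yaw": 0.1, "pitch": 0.8, "roll": 0.0}
print(out_of_tolerance(measured))  # ['z', 'pitch']
```

The example makes the failure mode from the text concrete: a planar check on x and y alone would pass this part, while the depth and rotational axes reveal the deviation.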
Defect Identification
If pick and place requires macroscale accuracy and dimensional inspection calls for microscale accuracy, part-defect identification needs the nanoscale. Throughput rate and fit/form/function are essential, but a defective product compromises your company’s reputation and image. Consumers and customers are only too happy to leave a negative review for a faulty product. The marketplace is too advanced to compromise on product integrity, and a product defect that quality engineers cannot detect presents an enormous risk of losing market share.
The many approaches to collecting 3D images of a part can also describe the geometry and location of a product defect: density inconsistencies from material non-homogeneity, broken interior features, residual support material remaining in an additively manufactured part, or anything in between. In addition to dimensional accuracy, quality-assurance engineers can define a set of success criteria for approving the part. A defective part increases the scrap rate, reduces throughput, and risks field failures if the engineers do not address it. 3D machine imaging collects, analyzes, and transmits information that immediately alerts operators to the issue. The data the robotics collects gives engineers real-time insight into the frequency, consistency, and placement of defects for root-cause analysis. Early identification of a product defect is critical to de-risking production and maintaining the manufacturing timeline.
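The defect summary the text describes can be sketched as a simple tally of defect type and location across inspected parts, so that clusters stand out for root-cause analysis. The record fields and sample data below are illustrative assumptions.

```python
# Sketch of a defect summary for root-cause analysis: tally defect type and
# location across inspected parts. Fields and sample records are assumed.
from collections import Counter

defect_log = [
    {"part": "A101", "type": "void", "zone": "boss-2"},
    {"part": "A102", "type": "void", "zone": "boss-2"},
    {"part": "A105", "type": "residual_support", "zone": "channel-1"},
    {"part": "A109", "type": "void", "zone": "boss-2"},
]

by_type = Counter(d["type"] for d in defect_log)
by_zone = Counter(d["zone"] for d in defect_log)

print(by_type.most_common(1))  # [('void', 3)]
print(by_zone.most_common(1))  # [('boss-2', 3)]
# Three voids clustered in the same zone point toward one process cause.
```

Even this minimal aggregation turns individual rejections into the frequency-and-placement pattern engineers need to trace a defect back to its source.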
As with any high-volume, capital-intensive process, the machines wear over time, so you need to plan for unexpected process variables disrupting production. Machinery that uses this disruptive technology can absorb unforeseen variables and obstacles, navigating them and completing its task without reprogramming. The more information you gather during operation, the sooner you can reach a solution.
3D machine vision is a strong ally in this quest for more informed process conditions. It collects significantly more data than its 2D counterpart and uses that data to create and act on full images during mass manufacturing. These images can guide machines to converge on pick-and-place positions, improve outgoing dimensional inspection with insight along the depth and rotational axes, and identify harmful product defects that could lead to recalls or safety issues.
3D machine vision is gaining popularity as part of the Internet of Things (IoT) by executing dynamic responses without reprogramming. Not yet wholly mainstream, the technology will rapidly go from novel innovation to commodity expectation. Businesses will keep pushing to shorten the feedback loop between machine and process control, and the power of this technology will continue to grow as more industries and mass-manufacturing processes connect.