
NanoSteel Launches BLDRmetal L-40 for Additive Manufacturing | Laser-View

By Manufacturing, Steel production

PROVIDENCE, RI, June 17, 2017 – NanoSteel, the leader in nanostructured steel materials, announced the launch of its first product for the laser powder bed fusion additive manufacturing process. BLDRmetal L-40 is a case-hardening steel powder that provides high hardness and ductility (case hardness >70 HRC, core elongation of 10%+) and prints easily on standard commercial equipment. The alloy provides superior performance to M300 maraging steel and is an alternative to difficult-to-print tool steels such as H13. Expanding the potential use of 3D printing in a wide variety of hard-materials markets, BLDRmetal L-40 is designed to be used for parts including tools, dies, bearings and gears.

Source: NanoSteel Launches BLDRmetal L-40 for Additive Manufacturing | Laser-View

Embedded Vision Propels Bead Inspection – Photonics.com

By Automotive Industry, Editor's Choice, Industrial automation, Manufacturing, News, Sensors

Automotive designers are more frequently turning to structural adhesives as a solution for strengthening, structurally optimizing and lightweighting automotive bodies. These adhesives enable mixing material types to achieve lightweight structures while meeting goals for safety, noise, vibration and harshness. Vehicle assemblies may now contain hundreds of adhesive beads. The dramatic increase in dispensed meters of these beads is driving a need for cost-effective, real-time, 100 percent 3D bead inspection. The inspection objective is to quickly and accurately detect and locate gaps and volumetric defects in the adhesive bead to support efficient in-process repair or rejection of defective beads.

The bead inspection challenge was presented by a supplier of adhesive-dispensing systems that wanted a self-contained 3D smart sensor residing at the dispensing nozzle. At high speed and in real time, the sensor needed to be capable of 3D measurement of the bead — height, width and volume — at the point of dispensing. The focus application was automotive body-in-white sheet metal assembly, where structural adhesive is increasingly replacing welding and riveting.

Demand for adhesive bead inspection has been growing for years. But existing solutions have all been 2D and were deemed inadequate because of limited inspection capability (no bead height), large computer racks that are difficult to integrate, and sensor sensitivity to ambient lighting.

Basic 3D imaging of the bead as it is dispensed on the part can be accomplished by a number of well-known techniques. The technique chosen here, arguably the most robust, is laser line triangulation. As with most automation inspection applications, there are many layers of complexity to a complete and robust solution. The adhesive bead can be dispensed in any direction relative to the nozzle. The part can take any shape or form. The nozzle can be positioned obliquely to the surface. And the speed of the dispensing motion may vary.
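To make the technique concrete, here is a minimal sketch of the core laser-triangulation relation: a lateral shift of the imaged laser line maps to surface height through the triangulation angle. This is a generic textbook form, not the authors' implementation; the function name, magnification and angle values are illustrative assumptions.

```python
import math

def profile_height_um(pixel_shift_um, magnification, laser_angle_deg):
    """Height from laser line triangulation: a lateral line shift dx on
    the sensor back-projects to dx/m in object space, and the height is
    dz = (dx/m) / tan(theta). Names and values are illustrative."""
    shift_on_object_um = pixel_shift_um / magnification
    return shift_on_object_um / math.tan(math.radians(laser_angle_deg))

# Example: a 12 um line shift at 0.5x magnification and a 30-degree
# triangulation angle corresponds to roughly 41.6 um of surface height.
print(profile_height_um(12.0, 0.5, 30.0))
```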

A sensor residing on the nozzle and capturing laser profiles will need either to track the bead mechanically or to surround the nozzle with 360-degree profile data. In either case, substantial information is required to process the profiles into a continuous and accurate bead path and location in 3D space. The 3D path and location are derived from a combination of a priori setup information recorded during system teach, speed and position information from the motion controller, and information derived from the part surface by the precalibrated laser profiles themselves. All of these data sources are handled by sophisticated algorithms and application software.

The solution described here enables continuous capture of surface profile data by surrounding the dispenser nozzle with four orthogonal, overlapping laser line profilers. From a systems design perspective, continuously measuring 360 degrees around the nozzle offers many advantages. These include capturing the part surface geometry before and after the bead is dispensed, completely mapping the nozzle orientation and distance relative to the surface, and avoiding the liability of mechanical motion.

The optoelectromechanical design requires an embedded vision architecture to integrate high-performance CPU capability for high-speed image capture, 3D processing, visualization and factory interfacing. Fortunately, such embedded vision architectures are becoming more common in high-end machine vision applications because of the availability of off-the-shelf CMOS image sensors, lasers, optics, advanced FPGAs, and compact form factor single-board computers and system on modules (SoMs). Additionally, the abundance of reference design IP and robust CAD/CAE tools accelerates design work and lowers the barriers to a field that used to be the realm of camera specialists.

Design tools

Embedded vision systems span a diverse set of technologies and associated CAD/CAE design automation tools for both hardware and embedded code development. A core set of powerful but accessible tools, vendor IP blocks, reference designs and evaluation kits enable rapid and robust development. In addition, online user forums incubate an environment where engineers can quickly come up to speed on the tools and leading design techniques.

This project design workflow used three core tools to manage electrical, mechanical and optical compatibility and design optimization: Altium Designer for circuit board schematic design and layout; SOLIDWORKS for 3D mechanical and thermal design; and Zemax for optical design of field of view, focus depth, pixel resolution, illumination geometry and signal-to-noise ratio (SNR) modeling. Altium generates 3D STEP models and interfaces nicely with SOLIDWORKS, and Zemax outputs optical geometry to SOLIDWORKS. In addition, 3D printing was leveraged throughout the design process to build testable prototypes and achieve a compact but robust and assembly-friendly design.

For embedded logic, timing and control code development, the tools were dictated by the selected components — Xilinx FPGA and Microchip PIC32 microcontroller. The design workflow included three core tools: Xilinx Vivado Design Suite, ModelSim simulator, and MPLAB Integrated Development Environment and In-Circuit Emulator. These tools can be quickly accessed via evaluation kits or development boards that include example projects and libraries that can be retargeted to the specific embedded design.

A significant part of the development included the design of the vision engine, implementation of a direct image data path and creation of useful tools for testing and debug work.

Architecture

The embedded vision architecture can be divided into the application engine and the vision engine. The goal of the vision engine is to capture image data efficiently and determinately from the image sensors into CPU memory for processing. “Efficiently” means handling use case flexibility while maintaining low latency and requiring minimal resources. “Determinately” means keeping track of frame counts, timestamps and other image parameters. Adhesive bead inspection can last for seconds or minutes, acquiring hundreds of thousands of images. Therefore, it requires a continuous streaming architecture as opposed to a burst architecture.
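As a rough illustration of the continuous streaming requirement, the sketch below models a circular frame buffer in which the acquisition side writes and the processing side reads, with an overrun detected when processing falls behind. It is a simplified stand-in for the actual DMA-fed buffer described later; all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    index: int         # monotonically increasing frame count
    timestamp_us: int  # acquisition timestamp appended by the FPGA
    data: bytes        # raw profile image

class RingBuffer:
    """Fixed-size circular frame buffer: acquisition pushes, processing
    pops, and the writer lapping the reader signals an overrun."""
    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.capacity = capacity
        self.write_count = 0
        self.read_count = 0

    def push(self, frame):
        if self.write_count - self.read_count >= self.capacity:
            raise OverflowError("processing fell behind acquisition")
        self.slots[self.write_count % self.capacity] = frame
        self.write_count += 1

    def pop(self):
        if self.read_count == self.write_count:
            return None  # nothing new yet
        frame = self.slots[self.read_count % self.capacity]
        self.read_count += 1
        return frame
```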

The vision engine comprises four laser line profilers, each consisting of a CMOS image sensor chip, visible laser line projector illumination source and associated optics. The vision engine is governed by the field-programmable gate array (FPGA) and the PIC32 microcontroller.

The FPGA is the heart of the vision engine, managing the image data paths from the four image sensors, applying preprocessing and appending acquisition information. The image sensors are directly controlled by and interfaced to the FPGA for tight exposure and illumination synchronization and for low-latency image data processing. The microcontroller has interrupt inputs from the FPGA and can be used as a low-latency path to the application engine. Otherwise, the microcontroller connects to various system resources and diagnostic chips — e.g., inertial motion sensors, temperature sensors, current detectors and voltage monitors.

The application engine needs to keep up with the image acquisitions arriving in the CPU memory circular buffer and apply algorithms to extract and transform image data into application data. The application engine maintains multiple communication paths with the vision engine (via the microcontroller and via a PCIe channel) to begin/end image acquisition and to set or update imaging parameters (e.g., exposure time, frame rate and readout window size). The 3D processing application software suite resides on a small form factor SoM single-board computer. All external factory IO and protocols are managed by the application engine.

Debug and test

Tight integration of the embedded system brings many advantages. But it can make debugging and testing very difficult. The embedded architecture does not give visibility into subcomponents of the system, much less individual signals internal to the FPGA or microprocessor. ModelSim enables end-to-end verification of the image path. Verilog models of the image sensor can be quickly coded, and back-end DMA transfers are modeled by vendor-specific test benches and bus functional models.

One technique that embedded vision enables is recording metadata in each image acquired. The FPGA records image count, timestamp, image sensor settings, illumination settings and firmware revisions, and allocates space for custom data that can be set by the microcontroller (or the application engine via the microcontroller). This supports run-time diagnostics and post-analysis of settings and signals via stored image sets.
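A hypothetical sketch of what unpacking such per-image metadata might look like on the application side. The field names, sizes and byte order below are assumptions for illustration, not the product's actual layout.

```python
import struct

# Assumed little-endian layout: frame count (u32), timestamp in us (u64),
# exposure in us (u16), gain (u16), firmware revision (u32), and 16 bytes
# of custom data set by the microcontroller. Purely illustrative.
META_FORMAT = "<IQHHI16s"

def unpack_metadata(blob: bytes) -> dict:
    """Decode the metadata block assumed to be prepended to each image."""
    count, ts_us, exp_us, gain, fw_rev, custom = struct.unpack(
        META_FORMAT, blob[: struct.calcsize(META_FORMAT)])
    return {"frame_count": count, "timestamp_us": ts_us,
            "exposure_us": exp_us, "gain": gain,
            "firmware_rev": fw_rev, "custom": custom}
```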

Additional debug and test is supported via the FPGA register interface and test applications with read-and-write access to the FPGA via the microcontroller interface. The Xilinx ChipScope Pro tool and Microchip MPLAB debugger are used together for detailed testing scenarios. A major challenge is that no one system component has direct access to all relevant information. The application engine integrates the timing and event information from the adhesive dispenser and robot with the image data to allow full replay and step-by-step debug.

Maximizing performance

Each bead-dispensing application brings its own requirements based on linear dispense speed (part motion or nozzle motion) and the minimum detectable defect desired. Faster dispensing speeds and smaller defects require higher acquisition rates; higher acquisition rates are achievable with smaller inspection ranges (and vice versa). The highest-speed applications dispense beads at 1000 mm/s. Inspecting for gaps as small as 2 to 3 mm requires sampling every 1 mm along the bead, which equates to 1000 bead profiles per second (pps).

A primary goal of the vision engine was to maximize the number of measured profiles. A profile is made up of range samples digitized along the laser line on the part surface. The decision to capture profiles surrounding the nozzle for processing surface and bead information meant four lasers each sampling 1000 pps for a bandwidth of 4000 pps total. This speed goal was exceeded by ensuring that the combined pixel rate of the four imagers was supported by the entire system.
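The sampling arithmetic from the two paragraphs above reduces to a one-line relation; a quick worked check:

```python
def required_profile_rate(dispense_speed_mm_s, sample_spacing_mm):
    """Profiles per second needed to sample the bead at a given pitch."""
    return dispense_speed_mm_s / sample_spacing_mm

per_laser = required_profile_rate(1000, 1.0)  # 1000 mm/s, 1 mm pitch
print(per_laser)      # 1000 profiles/s per laser
print(4 * per_laser)  # 4000 profiles/s for four lasers around the nozzle
```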

Fundamentally, the acquisition rate should only be limited by the pixel rate of the imager. Raw pixel data races out of the imagers via high-speed serialized low-voltage differential signaling (LVDS) channels. The FPGA has dedicated resources to handle LVDS deserialization and inter-channel synchronization at the highest pixel rates. Reference design blocks are available from either the FPGA or image sensor vendor and can be integrated with application-specific FPGA code.

The FPGA interface to CPU memory via direct memory access (DMA) has finite bandwidth. The trade-off between acquisition rate (pps) and bead height (inspection depth range, i.e., the number of image lines read out per image) pivots on the bandwidth of this interface. Acquisition may also be limited by the image exposure time needed to achieve the laser-image SNR required for reliable image processing. The dominant system trade-off is therefore among speed, bead height and SNR.
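As a back-of-the-envelope illustration of that trade-off (the bandwidth and window figures below are assumptions, not the product's specifications):

```python
def max_profile_rate(dma_bytes_per_s, window_lines, line_width_px, bytes_per_px=1):
    """Upper bound on profiles/s set by the DMA link for a given readout
    window: a taller bead needs more lines per image, cutting the rate."""
    return dma_bytes_per_s / (window_lines * line_width_px * bytes_per_px)

# With an assumed 400 MB/s link and a 1280-pixel-wide line:
print(max_profile_rate(400e6, 100, 1280))  # ~3125 frames/s per imager
print(max_profile_rate(400e6, 400, 1280))  # ~781 frames/s with a 4x taller window
```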

In practice, two of these three constraints will limit the system profile sampling speed. If a certain application has a tall bead, then a larger imager window that reduces pps must be used. The larger window increases both the readout time from the imager and the required PCIe bandwidth to transfer the images.

Finally, the application engine must be capable of accessing and processing bead profile shapes at high rates and, on average, not fall behind the vision engine acquisition rate. The circular image buffer in CPU memory provides some elasticity but, ultimately, the acquisition speed may be limited by the processing speed. The highest speeds are achieved with a combination of a SoM with the latest multicore processor, intelligent memory management and highly optimized algorithms.

High-speed and high-fidelity 3D inspection of adhesive beads can be achieved in a compact, robust, flexible, embedded vision design. Higher-bandwidth or lower-cost goals can be met on a per-application basis by using more sophisticated (or more economical) SoMs and FPGAs that support faster (or slower) DMA interfaces. The result is highly tailored solutions that are scalable and cost-effective.

Meet the authors

Dave Kelly is vice president of research and development at Coherix Inc., with more than 25 years’ experience developing embedded vision systems for industrial and military applications; email: [email protected].

Andres Tamez is an embedded specialist and electronics manager at Coherix Inc.; email: [email protected].

 

Advance Introduces Rover, the Next Generation Automated Storage and Retrieval System – DC Velocity

By Automated storage and retrieval system (ASRS), Editor's Choice, Industrial automation, Manufacturing, Materials Handling, News

Advance Storage Products, the leading provider of pallet racking solutions, introduces Rover, the next generation of Automated Storage and Retrieval Systems (AS/RS). Rover is a highly configurable, three-dimensional, shuttle-based AS/RS that is cost-effective, flexible and scalable. The system is ideal for manufacturers and distributors, including those in the food, beverage and frozen food industries, whose operations demand flexible, high-density storage with high throughput.

According to John Krummell, President and CEO of Advance, “Rover provides extremely high throughput and is easily reconfigured to accommodate changing storage needs, SKU profiles and production demands. Rover can help distribution and production facility operators dramatically improve the storage density and performance of their warehouses.”

Rover advantages include:

  • High Density: From selective to deep-lane storage, Rover provides the ability to configure lane depths to match slotting needs
  • Modularity: Easily expanded throughput (additional vehicles or VRCs) or storage capacity (racking)
  • High Throughput: Multiple vehicles can work in a single aisle, allowing throughput unmatched by traditional AS/RS
  • Design Flexibility: Vary storage depths within a lane, configure to existing building footprints, and output to multiple locations
  • Single Source: As the direct producer of the vehicles, system control software and storage rack system, Advance is in a unique position to provide unmatched service at a very competitive cost


To learn more, visit www.advancestorageproducts.com/rover-as-rs or call Kevin Darby at 714.657.1608.

About Advance Storage Products

Advance Storage Products, headquartered in Huntington Beach, California, and manufacturing in Cedartown, Georgia, and Salt Lake City, Utah, is a market leader in quality warehouse rack systems. For more than 50 years, Advance has been providing design, engineering and project management for a full line of substantial material handling installations.


How to keep a machine in position – Control Design

By Automated storage and retrieval system (ASRS), Editor's Choice, Industrial automation, Manufacturing, Materials Handling, News


Are you going to crash and damage expensive tooling or just one manufactured part?

In machine automation terms, linear measurement is not just a measurement of length; it's a one-dimensional measurement of position. However, two and three dimensions of measurement should be considered, because error in any dimension can change the measurement based on position. As a linear actuator, gantry or pallet moves, the positional error can change as well; some error adds up, but some reliably repeats. There is much to consider when adding position feedback to a machine controller.

There are many ways to determine position of an actuator or part in machine automation. Some are simple; some are complex; some are accurate; some are not. The starting point for linear measurements in a machine is understanding the position requirements to properly assemble, machine or otherwise affect the final product. What are the part assembly requirements, and how tight are final part tolerances?

Do you need accuracy or repeatability in your linear measurement? Yes. You need both, but sometimes one is more important than the other. What about linearity and resolution? Sure. Which is more important? It likely depends on the measurement device, the sensor salesperson and the application. Sometimes several are important, but one bad apple can upset the cart. What do you need in the way of machine position measurement?

Accuracy, the correctness of the measurement at any point in the measurement range, is important for CNC machining applications. In this case the part needs to be made to a tolerance, so the measurement needs to be close to the actual value in perhaps five or more axes. Accuracy is critical here. However, good repeatability (how close the measurement is each time it is made) may be all that is necessary in other applications. An actuator may only need to move to a repeatable position each automatic machine cycle, for example. And, to add another term, the Six Sigma and metrology folks like to refer to repeatability as precision, and then they'll talk reproducibility. Have fun with that.

When it comes to linearity, I think of it as accuracy. A noncontact magnetostrictive position sensor, for example, may have a linearity of +/-0.02% of full scale. That's approximately +/-100 µm of linearity for a sensor with a 500 mm measurement range. The specification lists repeatability of this device at +/-3 µm; it's 33 times more repeatable than its full-scale linearity, which is the measurement accuracy of the analog position output from the device. A stepper motor on a machine using this device to close its position loop would have very good repeatability, moving to within 3 µm of individual position set points. The same device may not be very accurate if programmed to move 100 mm from an arbitrary position in its travel range, based on its specifications.
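The arithmetic behind those figures, as a quick check:

```python
full_scale_mm = 500.0
linearity_pct_fs = 0.02   # +/-0.02% of full scale, from the example above
repeatability_um = 3.0

linearity_um = full_scale_mm * 1000.0 * linearity_pct_fs / 100.0
print(linearity_um)                     # +/-100 um of linearity
print(linearity_um / repeatability_um)  # ~33x more repeatable than linear
```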

Resolution of the measurement device plays into accuracy, repeatability and linearity. Too coarse a resolution on a magnetic scale sensor, or too few pulses per revolution in an encoder, can hurt accuracy and repeatability. The resolution of an encoder or analog circuit must be finer than the measurement tolerance for proper assembly or machining. A resolution of 20% to 30% of the measurement tolerance usually works. Some will say an even finer resolution is needed, but too much can cause noise in the measurement.

A rule of thumb, often heard in machine shops, is that the measurement repeatability should be 10 times better than required for the application. The end results should pass an in-house gage repeatability and reproducibility (GR&R). Results will differ, so study up on some statistics and other good measurement practices for more information.
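Both rules of thumb are simple ratios; a small sketch with an illustrative 50 µm tolerance:

```python
def resolution_target_um(tolerance_um, fraction=0.25):
    """Resolution at 20-30% of the measurement tolerance (25% used here)."""
    return tolerance_um * fraction

def repeatability_target_um(tolerance_um, factor=10.0):
    """Rule of thumb: repeatability 10x better than the application needs."""
    return tolerance_um / factor

print(resolution_target_um(50))     # 12.5 um resolution target
print(repeatability_target_um(50))  # 5.0 um repeatability target
```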

Got Position?

Not to delve too deeply into component considerations, but there are many forms of linear measurement in automation. My favorite is machine vision. Calibrate the image pixels to engineering units, then calibrate a robot's coordinates to the image plane as well and, voilà, vision-guided robots. Okay, that's two-dimensional, but it is very common today.

Some linear measurement in automation is like a tape measure. The linear encoder, magnetostrictive position sensor and magnetic scale position sensor are examples. However, a rotary encoder can also provide linear measurement to determine a machine actuator's position. Cable-actuated position sensors, also called draw-wire or pull-cord sensors, are examples. An encoder on a servo motor positioning a linear actuator or slide is another. The actual linear position of the actuator is calculated from the encoder count and the actuator mechanics, such as gear ratio or ball screw pitch. The mechanical specifications of the actuator, encoder resolution, servo-motor tuning and even the servo-to-ball-screw coupling can affect accuracy and repeatability. The position errors can and will add up, and approaching a position from the opposite direction can make the error worse.
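The count-to-position conversion described above is a chain of ratios; a minimal sketch with illustrative numbers:

```python
def linear_position_mm(encoder_counts, counts_per_rev, screw_pitch_mm,
                       gear_ratio=1.0):
    """Actuator position from motor-shaft encoder counts: counts ->
    motor revs -> screw revs (through the gearbox) -> linear travel."""
    motor_revs = encoder_counts / counts_per_rev
    screw_revs = motor_revs / gear_ratio
    return screw_revs * screw_pitch_mm

# 40960 counts on a 4096-count/rev encoder driving a 5 mm pitch ball
# screw through a 2:1 reduction -> 25 mm of travel.
print(linear_position_mm(40960, 4096, 5.0, gear_ratio=2.0))
```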

Other linear measurement devices include an LVDT that must contact the measured surface and laser displacement measurement, which is contact-free. Both devices and many similar measurement methods provide very good accuracy and repeatability specifications, in many cases less than a few microns.

Whether measuring the surface of a semiconductor die with sub-micron accuracy or the position of a gantry over a 12-station, 54-foot tin plating line, there is a linear-measurement device to do the job. Carefully consider the cost of failed, inaccurate or non-repeatable measurements and find the actual position.

What IS an Encoder? – Robotics Tomorrow (press release)

By Automated storage and retrieval system (ASRS), Editor's Choice, Fabrication and metalworking, Food processing, Industrial automation, Manufacturing, Materials Handling, News, Sensors


Contributed by | Encoder Products Company

A VERY basic introduction

If you Google encoder, you'll get a vast and confusing array of responses. For our purposes, encoders are used in machinery for motion control.

Encoders are found in machinery in all industries. You'll find encoders used in cut-to-length applications, plotters, robotics, packaging, conveying, automation, sorting, filling, imaging and many, many more. You may have never noticed them, but they are there. In this blog post, we will give you a very basic introduction to what an encoder is and what it does.


What IS an encoder?

Simply put, an encoder is a sensing device that provides feedback. Encoders convert motion to an electrical signal that can be read by some type of control device in a motion control system, such as a counter or PLC. The encoder sends a feedback signal that can be used to determine position, count, speed or direction. A control device can use this information to send a command for a particular function. For example:

  • In a cut-to-length application, an encoder with a measuring wheel tells the control device how much material has been fed, so the control device knows when to cut.
  • In an observatory, the encoders tell actuators what position a moveable mirror is in by providing positioning feedback.
  • On railroad-car lifting jacks, precision-motion feedback is provided by encoders, so the jacks lift in unison.
  • In a precision servo label application system, the encoder signal is used by the PLC to control the timing and speed of bottle rotation.
  • In a printing application, feedback from the encoder activates a print head to create a mark at a specific location.
  • With a large crane, encoders mounted to a motor shaft provide positioning feedback so the crane knows when to pick up or release its load.
  • In an application where bottles or jars are being filled, feedback tells the filling machines the position of the containers.
  • In an elevator, encoders tell the controller when the car has reached the correct floor, in the correct position. That is, encoder motion feedback to the elevator’s controller ensures that elevator doors open level with the floor. Without encoders, you might find yourself climbing in or out of an elevator, rather than simply walking out onto a level floor.
  • On automated assembly lines, encoders give motion feedback to robots. On an automotive assembly line, this might mean ensuring the robotic welding arms have the correct information to weld in the correct locations.

In any application, the process is the same: a count is generated by the encoder and sent to the controller, which then sends a signal to the machine to perform a function.

How does an encoder work?

Encoders use different types of technologies to create a signal, including: mechanical, magnetic, resistive and optical – optical being the most common. In optical sensing, the encoder provides feedback based on the interruption of light.

[Illustration: the parts of an incremental optical encoder]

The graphic above outlines the basic construction of an incremental rotary encoder using optical technology. A beam of light emitted from an LED passes through the Code Disk, which is patterned with opaque lines (much like the spokes on a bike wheel). As the encoder shaft rotates, the light beam from the LED is interrupted by the opaque lines on the Code Disk before being picked up by the Photodetector Assembly. This produces a pulse signal: light = on; no light = off. The signal is sent to the counter or controller, which will then send the signal to produce the desired function.
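The single on/off pulse train described above gives counts but not direction. In practice, incremental encoders typically add a second channel offset by 90 degrees (quadrature); below is a minimal software model of quadrature decoding, with illustrative values, not any particular product's logic.

```python
# Quadrature decode lookup: (previous AB, current AB) -> -1, 0 or +1 count.
TRANSITIONS = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(samples):
    """Accumulate position from successive 2-bit A/B channel samples."""
    position = 0
    for prev, curr in zip(samples, samples[1:]):
        position += TRANSITIONS.get((prev, curr), 0)  # 0 = no change/invalid
    return position

# One full forward quadrature cycle adds 4 counts.
print(decode([0b00, 0b01, 0b11, 0b10, 0b00]))  # -> 4
```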

What’s the difference between Absolute and Incremental encoders?

Encoders may produce either incremental or absolute signals. Incremental signals do not indicate specific position, only that the position has changed. Absolute encoders, on the other hand, use a different “word” for each position, meaning that an absolute encoder provides both the indication that the position has changed and an indication of the absolute position of the encoder.
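One common encoding for those per-position "words" is Gray code, in which only one bit changes between adjacent positions, so a read taken mid-transition can never be off by more than one count. A small decoding sketch, for illustration:

```python
def gray_to_binary(gray):
    """Convert a Gray-coded position word (common on absolute encoder
    disks) to a plain binary position."""
    binary = 0
    while gray:
        binary ^= gray
        gray >>= 1
    return binary

# The 4-bit Gray word 0b1100 decodes to position 8 of 16.
print(gray_to_binary(0b1100))
```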

For more detailed information on how encoders work, see WP-2011: The Basics of How an Encoder Works, and the Encoder Basics section of EPC’s Installation and Wiring Guide.

 

About Encoder Products Company

Encoder Products Company (EPC) is a leading manufacturer of premium rotary incremental and absolute encoders used for motion feedback. Our encoders are available worldwide to OEMs, MROs, End Users, Service/Repair Organizations and others through a qualified network of electrical, motion control, and industrial distributors. The Americas Division of EPC, located in Sagle, Idaho, USA, is headquarters for worldwide manufacturing and encoder research and development. To service and support regional markets, EPC also operates manufacturing facilities in Europe and Asia.