At Toyota, The Automation Is Human-Powered

Categories: Automation, Automotive Industry, Editor's Choice, Sensors, technology
While the rest of the auto industry increasingly uses robots in manufacturing, Toyota has taken a contrarian stance by accentuating human craftsmanship

[Photo: courtesy of Toyota]

On the assembly line in Toyota’s low-slung, sprawling Georgetown, Kentucky, factory, worker ingenuity pops up in the least expected places. For instance, installing a gas tank is normally a tedious, relatively complicated procedure in auto plants. Because the tank is so heavy, a crane usually positions and holds it against the skeletal frame while employees tighten its straps and bolts from under the chassis, a strained and time-consuming maneuver that requires keeping their arms in the air for long periods.

To allay the obvious shortcomings in this process, a group of Toyota workers designed an ingenious device–a multi-armed piece of industrial machinery that in a single action lifts the tank into the air, places it in its crevice, and reaches underneath the vehicle’s skeletal body to permanently attach the tank to the chassis. The process is fast, seamless, and ergonomically safe.

Freed from securing the bolts, the workers (the designers of this new device) would seem to be superfluous. However, that’s not the case at all. Indeed, the same number of employees as before are still at that assembly station. But instead of turning bolts in cramped crannies, they are doing the types of less obviously essential human tasks that manufacturers tend to eliminate when automation is introduced; namely, painstaking tactile and visual inspections to check and double-check for flaws on the tank and its connections and for holes or weaknesses in the critical fuel line.

The central role that people play in this corner of the Georgetown plant is repeated in varying degrees throughout the factory and exemplifies the uniqueness of Toyota’s manufacturing philosophy, which, while still cutting edge, curiously nods to the past. Even as the automaker unveils an updated version of its vaunted production system, called the Toyota New Global Architecture (TNGA), the company has resisted the very modern allure of automation–a particularly contrarian stance to take in the car industry, which is estimated to be responsible for over half of commercial robot purchases in North America.

“Our automation ratio today is no higher than it was 15 years ago,” Wil James, president of Toyota Motor Manufacturing in Kentucky, told me as we sat in his office above the 8.1-million-square-foot (170 football fields) factory. And that ratio was low to begin with: For at least the last 10 years, robots have been responsible for less than 8% of the work on Toyota’s global assembly lines. “Machines are good for repetitive things,” James continued, “but they can’t improve their own efficiency or the quality of their work. Only people can.” He added that Toyota has conducted internal studies comparing the time it took people and machines to assemble a car; over and over, human labor won.

The Robotic Mystique

Such thinking seems unorthodox, but it’s not surprising given Toyota’s well-known manufacturing system, which was first popularized in The Machine That Changed the World, an unlikely best-seller in the early 1990s written by three MIT academics. Despite its dry subject, the book had a radical impact inside and outside the business community–for the first time unveiling the mysteries of Japanese industrial expertise and popularizing terms like lean production, continuous improvement, andon assembly lines, the seven wastes (mudas), and product flow. With its publication, it became de rigueur for every manufacturer, large and small, to at least pay lip service to emulating Toyota’s production strategy.

But as the decades passed, you’d be forgiven for thinking that much of the balletic set of assembly line systems depicted in The Machine That Changed the World was anachronistic, especially the parts involving the contribution of human workers. Fundamentally, Toyota’s production principles were keyed to the notion that people are indispensable–the eyes, ears, and hands on the assembly line–identifying problems, recommending creative fixes, and offering new solutions for enhancing the product or process. Today, that idea seems quaint. In the industrial world now, manufacturing prowess is measured more by robotic agility than human ingenuity. As an aspiration, lean is taking a back seat to lights-out–a manufacturing concept Elon Musk is championing for his Model 3 Tesla plant, in which illumination will ultimately not be needed because the factory will be devoid of people. Even before we get there, auto companies like Kia–headquartered in Korea, where the use of robots in manufacturing outpaces all other countries–are already claiming productivity improvements of nearly 200% from automation. Some plants have more than 1,000 robots–and fewer than 1,000 people–on an assembly line.

Indeed, a nearly fetishistic appreciation of automation is ubiquitous these days. Dozens of articles, white papers, and books, written by respected thought leaders, executives, and consultants, depict an industrial future inevitably overrun by robots able to do the most sophisticated tasks at inhuman levels of efficiency. These are siren calls to most manufacturers whose growth plans are conditioned on cutting labor costs, which often make up as much as 25% of the value of their products. Some of the Pollyannaish views about the onslaught of robots foresee a period of unprecedented free time for individuals to cater to the whims of their imagination, turning us all into freelance artisans and entrepreneurs. Other, more sober forecasts worry about what people will do without the satisfaction of a job and the stability of a paycheck. Either way, a revolution awaits us, so goes the conventional wisdom. An oft-quoted Oxford University analysis predicts that over the next two decades, some 47% of American jobs will be lost to automation. In China and India, that figure is even higher: 77% and 69% respectively.

Links To The Past

But Toyota has forged a different path. The automaker, now jockeying with Volkswagen and Renault-Nissan for the top spot in worldwide sales, consistently generates industry-best profit margins, often 8% or more. To maintain this performance, Toyota has eschewed seeking growth primarily through cost-cutting (read: automation) and has instead focused on automobile improvements offered at aggressively competitive prices. Codified as the Toyota New Global Architecture, this strategy doesn’t primarily target labor to reduce production expenses but is weighted toward smarter use of materials; reengineering automobiles so their component parts are lighter and more compact and their weight distribution is optimized for performance and fuel efficiency; more economical global sharing of engine and vehicle models (trimming back more than 100 different platforms to fewer than ten); and a renewed emphasis on elusive lean concepts, such as processes that allow assembly lines to produce a different car one after another with no downtime. In TNGA’s framework, robots are not the strategic centerpiece but merely enablers and handmaidens, helping assemblers do their jobs better, stimulating employee innovation, and, when possible, facilitating cost gains.

As if to punctuate how old-school this way of thinking is today, Toyota made an unusual executive appointment in 2015. Unexpectedly, the automaker named Mitsuru Kawai, a 52-year veteran of the firm (he was hired at 15), to head up global manufacturing, the highest position ever held by a former blue-collar worker. Kawai is one of the last remaining links at Toyota to Taiichi Ohno, the godfather of lean manufacturing and the Toyota production system. Ohno, who died in 1990, emphasized the importance of seasoned and practiced individual workers to the success of the organization. Kawai recalled with some nostalgia how this attitude elevated employee self-regard and, in turn, the quality of Toyota’s assembly line. When he first started at the company, experienced factory employees were called gods because they were masters who could make anything by hand. Regrettably, he said, more recently Toyota has had less appetite for “making use of human skills and wisdom.”

Kawai’s job now is to imbue TNGA with Ohno’s memory by bringing human craftsmanship back to the fore. Soft-spoken and unassuming, Kawai described the manufacturing philosophy he uses to achieve this as uncomplicated: “Humans should produce goods manually and make the process as simple as possible. Then when the process is thoroughly simplified, machines can take over. But rather than gigantic multi-function robots, we should use equipment that is adept at single simple purposes.”

A Series Of Elementary Innovations

Aspects of TNGA are being implemented in most Toyota factories around the world. But Toyota has invested in a $1.3 billion overhaul of Georgetown–its largest plant, where 550,000 Camrys, Avalons, and Lexuses are produced each year–as TNGA’s pilot site before disseminating the new system globally. The imprints of Kawai and Ohno are already evident in large and small ways in the Kentucky facility. For instance, the outsized overhead conveyor belts that used to carry a steady stream of engines to the assembly line have been swapped out for moving pedestals that skate across the factory guided by electronic sensors in the floor. This new engine delivery system (which is, after all, merely a machine replacing a machine) accomplishes a series of manufacturing goals. By eliminating the complex web of conveyor belts, Toyota is able to downsize its plants considerably, essentially to one story from as many as three. That, in turn, results in substantial savings on construction, real estate, cooling, heating, and maintenance, some of the highest costs in managing a factory network.

In addition, the pedestals’ payloads are computer-directed, each engine matched directly with customer purchases. That means Toyota can theoretically make three SUVs for every sedan one hour and do the opposite the next, depending on market orders. Such flexible, one-by-one production is the elusive Holy Grail of the auto industry.
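To make the idea concrete, here is a minimal sketch, in Python, of order-driven engine dispatch of the kind described above; the model names, engine types, and queue structure are hypothetical illustrations, not Toyota's actual system.

```python
from collections import deque

# Hypothetical sketch of order-driven dispatch: each pedestal is loaded
# with the engine matching the next vehicle in the customer order queue,
# so the SUV/sedan mix can flip hour to hour with demand.
orders = deque(["SUV", "SUV", "SUV", "sedan", "SUV", "sedan"])

ENGINE_FOR_MODEL = {"SUV": "V6 engine", "sedan": "I4 engine"}  # illustrative

def next_pedestal_payload() -> tuple[str, str]:
    """Pop the next customer order and return (model, engine to load)."""
    model = orders.popleft()
    return model, ENGINE_FOR_MODEL[model]

while orders:
    model, engine = next_pedestal_payload()
    print(f"Pedestal dispatched with {engine} for a {model}")
```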

And equally significant, freed of the conveyor belts, the assembly line workspaces are relatively uncluttered, cleared of pulleys, tubes, and pipes. As a result, assemblers can spend their limited time with the automobile–usually less than a minute–completing their tasks and checking for defects rather than wasting seconds navigating inelegantly around it.

A more rudimentary innovation in Georgetown that dovetails perfectly with TNGA tenets is the floating chair, or raku seat (raku roughly means easy in Japanese). This assembly aid glides along rails in and out of the vehicle and then front to back inside the car, giving installers unimpeded access to difficult-to-reach spots like the dashboard console without having to bend or squeeze into awkward positions. The Toyota employee who designed this device patterned it after the moving swivel chair in his bass fishing rig–and used a seat from his boat to beta test the concept.

Besides its ergonomic benefits, the raku seat prunes seconds off the production process, a persistent goal of Toyota’s manufacturing systems. Repeated across the assembly line, trimming small slices of time adds up to meaningful productivity benefits. “In our world, we see work in 55-second bursts,” said Kentucky Toyota manufacturing president James. “And we challenge our workers to chop a second or more off if they can. If we gain back 55 seconds throughout the factory, we can ultimately eliminate a job and move that worker to another slot where they can begin the innovation process over again. Humans are amazing at finding those stray seconds to remove.”
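The arithmetic behind those 55-second bursts is simple enough to sketch. The station count and per-station savings below are invented for illustration only.

```python
# Illustrative arithmetic for the 55-second work cycle James describes:
# small per-station savings, repeated across the line, add up to whole
# work cycles that can be reassigned. The numbers here are made up.
TAKT_SECONDS = 55          # one work cycle per station, per the article
stations = 120             # hypothetical number of assembly stations
saved_per_station = 1.5    # hypothetical seconds shaved at each station

total_saved = stations * saved_per_station
freed_slots = int(total_saved // TAKT_SECONDS)
print(f"{total_saved:.0f}s saved per cycle -> {freed_slots} full 55-second slots freed")
# -> 180s saved per cycle -> 3 full 55-second slots freed
```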

Among the more compelling experiments underway in Georgetown is a training exercise meant to instill the TNGA idea that automation should grow solely and organically out of human innovation. To this end, assemblers were given a karakuri assignment–a lean management drill that requires workers to build a Rube Goldberg-inspired contraption that operates under its own force to improve a workspace activity. One team is reengineering the flow rack, the ubiquitous stand next to each assembly station that holds the parts needed for the local task. Currently, as shelves are emptied, workers have to manually set them aside and replace them with a full bin of parts. The “modernized” version will instead rely on a combination of springs, ropes, and weights to perform this task after a button is pressed. When this decidedly low-tech device is perfected, Toyota plans to use the prototype as the blueprint for a robot to emulate the process.

Toyota Emerges From A Debacle

TNGA, which Toyota expects will reduce manufacturing expenditures by as much as 40%, emerged from a dark period in the early 2000s when the automaker overeagerly attempted to outpace General Motors and Volkswagen as the world’s No. 1 vehicle manufacturer. Toyota has admitted that by juicing production growth too rapidly at that time, quality, manufacturing controls, and factory productivity were allowed to lag. So much so that in 2009 Toyota had its first loss in 59 years (due in part to the global financial crisis) and during the next two years recalled more than 10 million vehicles after a spate of sudden-acceleration accidents. In 2014, Toyota agreed to pay a record $1.2 billion penalty to end a criminal probe by the U.S. Justice Department into its alleged attempts to mislead the public and hide the true facts about the dangerous problems with its vehicles. Toyota’s CEO Akio Toyoda apologized publicly and abjectly for the company’s failures and said the automaker was “grasping for salvation.” An internal soul-searching followed, which in turn led to the new manufacturing system and ultimately to Kawai’s appointment to lead its implementation and a return to craftsmanship.

As part of the TNGA rollout, Kawai has demanded that factories establish manual workspaces for critical plant processes, in some cases eliminating automation where it had already been installed during the period that Toyota overtaxed its production capacity. Kawai’s goal is twofold. First, to ensure that Toyota’s workers have the expertise and skills to manufacture a car by hand even if they wouldn’t be called upon to fully do so again. Kawai believes that without this body of knowledge assemblers become myopic, focused solely on their small part of the operation and blinded to their responsibility to design improvements for the larger team effort that are required to consistently produce high-quality vehicles. Worse yet, in this narrow outlook, workers often mistakenly see robots as replacements for people rather than basic tools that can be used to enhance factory performance.

Kawai’s second aim in replacing automated factory zones with people is to revisit with a clear mind–removed from the anxiety of a surge in production volume–whether robots have actually improved efficiency in individual plant activities. Some of the results of this experiment are unexpected. For instance, in a Japanese Toyota factory where workers have taken over forging crankshafts out of metal from automated equipment, subsequent innovations have reduced material waste by 10% and shortened the crankshaft production line by 96%.

Is The Robot Threat A Fantasy?

Toyota’s aversion toward automation is noteworthy for the obvious reason that the automaker is arguably one of the most creative and successful manufacturing companies in history, and has never followed the herd but rather set the course. However, beyond that, also worth examining more closely is the question raised by Toyota’s choice of direction: Do robots kill jobs or create them? Toyota, of course, would argue that while some manufacturers eagerly embrace automation and more will in the future, on a larger scale (and ironically in the more innovative and pioneering factories) robots are best used to precipitate more human plant activity rather than reduce it.

Recent analyses of employment data support this somewhat contrarian point of view. In one bit of research, James Bessen, a Boston University law professor, found that although automation has been increasingly prevalent in all types of services and manufacturing industries since 1950, in that time only one of 270 occupations categorized by the Census Bureau was eliminated by technology: namely, the elevator operator. Other jobs were partially automated, and in many cases automation led to more jobs, often higher-skilled positions at companies that used technology to design and develop new products and new ways to reach customers.

For instance, ATMs have radically altered consumer-banking habits, yet the number of branch employees has grown since money machines became widespread. “Why didn’t employment fall?” writes Bessen. “Because the ATM allowed banks to operate branch offices at lower cost; this prompted them to open many more branches (their demand was elastic), offsetting the erstwhile loss in teller jobs.” And simultaneously, banks morphed into financial services companies, introducing an array of customized products that tellers were deputized to sell, giving behind-the-cage clerks the same opportunities for upward job mobility as deskbound bankers used to have.

In addition, historically there is a direct correlation between productivity growth, which robots should naturally contribute to, and job creation not explained by population gains. In theory, companies able to manufacture products more quickly and efficiently will reinvest the money from higher sales in assets and innovation and, in turn, additional workers. Or they may lower prices, which drives more consumer spending, higher GDP, and an improved employment outlook. A trenchant study on this topic by the Information Technology and Innovation Foundation illustrated the relationship between productivity and employment by examining economic data from the post-World War II era. ITIF found that in the 1960s, when U.S. productivity grew 3.1% per year, unemployment averaged 4.9%. A couple of decades later, annual productivity growth had fallen to just under 2%, and unemployment rates averaged 7.3%. In the 1990s and early 2000s, in the wake of the internet boom, annual productivity growth was nearly 3% again; in turn, the unemployment rate declined. From 2008 to 2015, productivity gains ticked downward once more, to only 1.2%; concomitantly, the rate of job creation has been sluggish compared with the pre-recession period.

These productivity statistics lead to a few significant conclusions about automation today. For one thing, robots thus far do not make up a significant portion of manufacturing activities–responsible for only around 10% of the work in factories, according to some estimates–and companies that have embraced automation have yet to see significant gains from it since productivity growth continues to trend downward. Moreover, the effect that automation has had on employment has been muted. Another bit of data is worth mentioning in this regard: Workers are not leaving occupations as frequently as they did in past decades–the rate of occupational change in the 2000s is 45% lower than the 1940–1980 period and 33% lower than the 1990s, according to the Economic Policy Institute.

Robert Atkinson, ITIF president, believes that robots will in fact have a substantial presence in global factories before too long, although he doesn’t view automation as a threat to human jobs. He asserts that the productivity slump reflects a recent slowdown in innovation. Technology waves lasting as long as 50 years have traditionally transformed society and revitalized economies, but IT has stalled out, at the bottom of the S curve, Atkinson argues. “In the 1990s we went from dialup to 3 megabit broadband; that was transformative,” Atkinson says. “But going from 10 megabit to 50 megabit is not. Same thing with how much chips progressed in capabilities in the 1990s, but no more.”

As Atkinson sees it, the somewhat labored abilities of artificial intelligence are holding back robotic skills. That position is shared by John Launchbury, director of DARPA’s Information Innovation Office, who says that AI is within reach of, but still a distance away from, the type of contextual adaptation that true factory automation requires; in other words, AI systems still lack the cognitive skills to understand and manipulate underlying explanatory models and to identify and analyze real-world objects. According to Launchbury, today’s second-wave AI systems are capable of statistical learning; based on millions of bits of data, they can separate one voice from another or a cat from a dog, among many other more complex distinctions. A contextual adaptation system, though, “could say if a specific animal it sees with little more than a cursory glance has ears, paws, or fur and how they differ from another animal in the most minute ways,” he says.

When that level of AI is available, Atkinson argues, the next technological wave–the robot era–will have arrived. And annual productivity could increase to as much as 3.5%. “Which will create hundreds of thousands of jobs for people working with and around robots,” he says.

That, anyway, is what Toyota is counting on–or, better yet, cutting its own curve to make sure it happens.

Tracking Space Debris in Earth’s Orbit With Centimeter Precision Using Efficient Laser Technology

Categories: Aerospace, Editor's Choice, laser distance sensor, Sensors, technology
Fighting the perils of space debris: Fraunhofer IOF’s fiber laser technology. Credit: Fraunhofer IOF

Uncontrollable flying objects in orbit are a massive risk for modern space travel and, because of our dependence on satellites today, a risk to the global economy as well. A research team at the Fraunhofer Institute for Applied Optics and Precision Engineering IOF in Jena, Germany, has now developed a special fiber laser that reliably determines the position and direction of movement of space debris, helping to mitigate these risks.

A short-pulse fiber laser suitable for LIDAR applications (light detection and ranging) for the centimeter-accurate detection of space debris. Credit: Fraunhofer IOF

Space debris is a massive problem in low Earth orbit space flight. Decommissioned or damaged satellites, fragments of space stations, and other remnants of space missions pose a potential threat of collision with active satellites and spacecraft every day. Beyond their destructive force, collisions also create thousands of new pieces of debris, which in turn can collide with other objects – a dangerous snowball effect.

Today, the global economy depends to a substantial degree on satellites and their functions – in telecommunications, the transmission of TV signals, navigation, weather forecasting, and climate research, for example. The damage or destruction of such satellites through a collision with orbiting debris or rocket remnants can therefore have immense and lasting consequences. Hazardous space debris needs to be reliably tracked and recorded before any salvaging or other countermeasures can be considered. Experts from Fraunhofer IOF in Jena have developed a laser system that is well suited to this task.

Reliable recording of the position and movement of objects in the Earth’s orbit

“With our robust and efficient system we can reliably and accurately determine the objects’ exact position and direction of movement in orbit,” explains Dr. Thomas Schreiber from the fiber lasers group at Fraunhofer IOF. “Laser systems like ours must be exceptionally powerful in order to withstand the extreme conditions in space – in particular, the high physical strain on the carrier rocket during launch, where the technology is subjected to very strong vibrations. In low Earth orbit, the high level of radiation exposure, the extreme temperature fluctuations, and the low energy supply are just as great obstacles to overcome.” This necessitated the new development by the Jena research team, since common laser technologies cannot cope with these challenges.

Moreover, space debris must be analyzed over comparatively long distances. For this purpose, the laser pulse is propagated through a glass-fiber-based amplifier and sent on its kilometers-long journey.

Measurements with tens of thousands of laser pulses per second

“Very short laser pulses, which last only a few billionths of a second, are shot at different positions in space to determine the speed, direction of motion, and rotation of the objects,” explains Dr. Oliver de Vries. “With our laser system it is possible to fire thousands of pulses per second. If an object is actually at one of the positions examined, part of the radiation is reflected back to a special scanner, which is directly integrated into the system. Even though the laser beam is very fast, it takes some time for the emitted light to reach the object and return. This so-called ‘time of flight’ can then be converted into a distance and, accordingly, a real 3D coordinate.” The system’s sophisticated sensors, which collect the returning light, can register a reflection even when only billionths of the emitted light come back.
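The time-of-flight conversion de Vries describes reduces to a short calculation. Below is a minimal sketch in Python, assuming a measured round-trip time and a known beam direction; it illustrates the principle only and is not Fraunhofer's software.

```python
import numpy as np

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_to_point(round_trip_s: float, direction: np.ndarray) -> np.ndarray:
    """Convert a round-trip time and beam direction into a 3D point.

    Distance is half the round trip at light speed; scaling the (unit)
    beam direction by that range yields the object's coordinate.
    """
    one_way_m = C * round_trip_s / 2.0
    return one_way_m * direction / np.linalg.norm(direction)

# Example: a ~6.7 ms round trip corresponds to roughly 1,000 km one way
print(tof_to_point(6.67e-3, np.array([0.0, 0.0, 1.0])))
```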

The principle – originally developed by the two Fraunhofer IOF researchers for Jena-Optronik and the German Aerospace Centre (Deutsches Zentrum für Luft- und Raumfahrt, DLR) – has already been successfully tested during a space transporter’s docking maneuver at the International Space Station (ISS). The laser system had previously been installed in a sensor made by the Thuringian aerospace company Jena-Optronik GmbH and was launched in 2016 with the autonomous supply transporter ATV-5. Jena-Optronik’s system also excels in energy efficiency: the fiber laser operates at a total power of less than 10 watts – significantly less than a commercial laptop, for instance.

Source: Phys.org. Read more at: https://phys.org/news/2017-09-tracking-debris-earth-orbit-centimeter.html

Surveying: ‘The future is here’ – KHL Group

Categories: Automation, Editor's Choice, Engineering and construction, Sensors, Structural monitoring, technology


The days when surveying meant a group of people holding up poles and measuring angles and distances, marking out a site with yet more poles, are long gone, and the techniques used today are becoming more and more sophisticated all the time.

BIM (building information modelling) is a term that was coined only a few short years ago, but it is now the key to unlocking the data needed on a big project. And the basic information that allows BIM to hold that powerful position can be sourced easily from many different places – even the sky, with drones increasingly playing a part.

However, all these new technologies and the possibilities they offer have to be harnessed.

Elżbieta Bieńkowska, EU Commissioner for Internal Market, Industry, Entrepreneurship and SMEs, wrote in the introduction to the Handbook for the Introduction of Building Information Modelling by the European Public Sector, “Similar to other sectors, construction is now seeing its own digital revolution, having previously benefited from only modest productivity improvements.

“Building Information Modelling is being adopted rapidly by different parts of the value chain as a strategic tool to deliver cost savings, productivity and operations efficiencies, improved infrastructure quality and better environmental performance.”

She said, “The future is here, and the moment has now come to build a common European approach for this sector. Both public procurement – which is accountable for a major share of construction expenditure – and policy makers can play a pivotal role to encourage the wider use of BIM in support of innovation and sustainable growth, while actively including our SMEs – and generating better value for money for the European taxpayer.”

In the handbook’s executive summary, it says, “The prize is large: if the wider adoption of BIM across Europe delivered 10% savings to the construction sector then an additional €130 billion would be generated for the €1.3 trillion market.

“Even this impact could be small when compared with the potential social and environmental benefits that could be delivered to the climate change and resource efficiency agenda.”

Roads and bridges

One of the leading companies in this area is US-based Bentley Systems. Santanu Das, senior vice president, design modelling, said that one of the biggest advances in information modelling was its use not only in buildings, but also in transportation and heavy civil engineering projects like roads and bridges.

He said there was increased use in brownfield projects.

“Brownfield projects require some sort of starting data,” he said. In the past, 2D drawings were the starting point, then a 3D model.

“One advancement that came out years ago was point clouds – LiDAR,” said Das. “The issue with LiDAR was two big things – it was quite bulky and expensive, you can only do it once every four or five years. The data that are generated would be in the terabytes sometimes and there was nothing really available to process it properly.”

He said a third problem was classification.

“If you took a point cloud, you had no idea what the hell those points meant. A human can figure out, that’s a wall, that’s a column, but in order to do what we call classification, automatically, was impossible.”

He continued, “So what Bentley’s been working on in the past couple years on its BIM platform is reality modelling, and that’s now all a part of our Connect Edition platform.”


Connect Edition converges Bentley’s platform technology to support a hybrid environment across desktop modelling applications, cloud services, on-premise servers, and mobile apps.

“Every single Connect Edition product we have – from Building Designer to OpenPlant to OpenRoads – uses this fundamental datatype that we create from our ContextCapture piece in there.

“That’s one huge advantage that we have for people who want to start off on brownfield projects.”

Bentley can now process this information in the cloud. Das said that with LiDAR and any type of photogrammetry, the number of pictures captured could be astronomical.

“What we’re doing with our new ContextCapture in the cloud is that we have the ability to process hundreds of thousands of these pictures in a very, very quick manner, because we’re using the power of multiple servers.

“Then we stream that information as needed to the BIM platform via our ProjectWise.”

With ProjectWise, what Bentley calls i-models can be combined together with other documents into a single package, so that models and associated content can be accessed on an iPad using Bentley mobile apps.

“Reality modelling classification is huge,” said Das. “The other thing that we are finding in information modelling today and the advancement in BIM is the collaboration aspect.”

While people can work together and share data, Das said there was a problem: the lack of a common terminology for communication.

“So what we have done is to work really hard to come up with a common terminology for all asset class types. So if we’re talking about a beam in a building, or a beam in a plant scenario, it understands what a beam is.”

Some years ago, Bentley introduced i-models, which Das described as “a sort of pdf for the AEC (architecture, engineering and construction) industry”.

He said, “We’ve taken that to the next level. We’re going to be introducing this thing called the i-model hub, which allows for data to flow from discipline to discipline, and different hierarchies.”

He said there were different levels of detail.

“The hub can filter out the information depending on what your role is, and what your discipline is. It also manages change – which is huge – it’s communicating and constantly keeping that model up to date.”
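As a rough illustration of the role-based filtering Das describes, the sketch below tags each model element with a discipline and serves only what the requesting role should see. The element list, role map, and function names are hypothetical, not Bentley's actual API.

```python
# Hypothetical sketch of role-based model filtering of the kind an
# i-model hub might perform: each element carries a discipline tag, and
# the hub returns only the elements relevant to the requesting role.
MODEL_ELEMENTS = [
    {"id": 1, "type": "beam",   "discipline": "structural"},
    {"id": 2, "type": "duct",   "discipline": "mechanical"},
    {"id": 3, "type": "column", "discipline": "structural"},
    {"id": 4, "type": "cable",  "discipline": "electrical"},
]

ROLE_DISCIPLINES = {
    "structural_engineer": {"structural"},
    "mep_coordinator": {"mechanical", "electrical"},
}

def filter_for_role(role: str) -> list:
    """Return only the model elements visible to the given role."""
    allowed = ROLE_DISCIPLINES.get(role, set())
    return [e for e in MODEL_ELEMENTS if e["discipline"] in allowed]

print(filter_for_role("mep_coordinator"))  # ducts and cables only
```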

This communication can be with the products of other companies, too.

“We believe in third party interoperability,” said Das.

The daily visualisation of a jobsite can help minimise construction delays, prevent clashes between the work onsite and the design, eliminate the need for rework, facilitate stakeholder communication and align schedules.

Pix4D claims its Crane Camera combines hardware and software to help with this. A camera is mounted on a tower crane jib, from where it captures site images. These are transmitted wirelessly to the Pix4D cloud and processed automatically into 2D maps and 3D models.

The company behind it said it was designed to monitor any type of construction, and had already been endorsed by some large companies worldwide.

Early adopter

A metro line project in western France was among the early adopters. Dodin Campenon Bernard, part of the Vinci group, was awarded a 14km project that included a tunnel and underground stations.

The station to be monitored was in the heart of the city centre, which made drone flights – one option – impossible.

With digging 32m deep and massive brace frames supporting the excavation, the building site was said to be a challenge. However, through the data collected with the crane camera, Dodin Campenon Bernard was able to follow the evolution of the site day by day.

Romain Nicolas, deputy technical director at Dodin Campenon Bernard, said, “Projects are complicated – unforeseen circumstances can happen and delay the project. Projects of this kind take a few years to achieve and, meanwhile, can greatly disturb the neighbourhood.”

He said it was crucial to communicate progress on the construction site, and share visual updates from the site to local residents and all stakeholders.

In Zurich, Switzerland, it was a railway bridge that was the focus for Pix4D. The capacity of the Zurich rail network and surrounding region was felt to have reached its limit, and Porr Suisse, part of one of the largest Austrian construction groups, was given the job of expanding the railway infrastructure. This included the construction of a 200m bridge and a new track.

Swiss company BSF Swissphoto was in charge of surveying the infrastructure. It used the crane camera to document the current situation of the site, capturing data daily.

The weekly work progress reports produced were said to have improved communication and collaboration between the companies and subcontractors involved.


A large new hospital complex being built in Denmark covers more than 150,000m², with 13 cranes erected on site.

Pix4D said that with BIM and digital construction technologies, this project was a perfect example of a connected site. The contractors have been continually testing new technologies, and selected the crane camera to be a part of the project.

The results were said to have quickly proved to be a huge time saver for the project team. Although the team was based on the work site, the area monitored by the camera was on the other side of the site, and walking over to check building status could take a few hours. With a permanent monitoring solution like the crane camera, data is automatically available when needed, enabling the project team to get information quickly and make faster decisions when confirming or realigning the schedule.

Pix4D said that combining crane camera use with drones could ensure the most complete aerial site overview, from the earliest earthwork stages of a project.

Unmanned

Drones go under several aliases – UAS (unmanned aircraft system) and UAV (unmanned aerial vehicle), for example.

Trimble is collaborating with Propeller Aero to distribute its UAS analytics platform. Propeller, based in Australia and the US, said it was a leader in the advanced collection, visualisation and analysis of data from drones.

Trimble said Propeller’s simple automated ground control targets, cloud-based visualisation and rapid analysis platform would also be integrated with Trimble Connected Site solutions to bring “an end-to-end cloud-based UAS solution to civil engineering and construction contractors”.

It said that pairing Propeller’s web-based interface with Trimble Connected Site solutions would allow users to unlock the full value of UAS information.


Users can get access to simple tools to measure surface geometry, track trends and changes across time, and perform visual inspections. Trimble said that both technical and non-technical professionals were now able to gather insights remotely and collaborate. It added this would drive improvements in safety and efficiency as well as reducing environmental impact across a construction worksite.

Scott Crozier, director of marketing for Trimble Civil Engineering & Construction, said, “Propeller combines ease of use with powerful analysis tools that allow users to view 2D and 3D deliverables and extract valuable information.

“Like Trimble, Propeller understands the value of quality and accurate data for integration with civil engineering and construction workflows.”

Rory San Miguel, CEO of Propeller Aero, said, “We pride ourselves in taking the most trusted, technical data and tools, and wrapping that up in an easy-to-use online platform that is relevant to the entire organisation, not just technical users.

“Integrating our platform into Trimble’s Connected Site solutions will bring a new class of information to construction sites and organisations globally.”

Also working with UAVs, Texo Drone Survey & Inspection (DSI) said that a big part of keeping on top of potential challenges involved talking to clients before they encountered particular issues, and developing bespoke platforms that meet their needs precisely by engineering solutions from the bottom up.

It said it had been investing in technology that allowed for heavier payloads and enabled its fleet of UAVs to operate in more difficult weather conditions.

The UAVs currently in operation can deal with wind speeds of up to 15m/s, with the flexibility to carry a variety of custom payloads. Texo DSI has permits for operations up to 20kg, which it said was a game changer for the construction sector.

The company said that with a standard LiDAR survey, data accuracy is generally around 40mm. However, it claimed that through investment in and development of its LiDAR UAV fleet and associated survey software, it was achieving accuracy of 1 to 3mm with its survey-grade UAV-integrated LiDAR system. The system is delivered via a custom-built UAV platform that measures over 1 million points per second.

Topcon Positioning Group has added advanced connectivity options to its DS-200i direct aiming imaging station.

The DS-200i, now with Wi-Fi access, provides real-time, touchscreen video and photo imaging to capture measured positions.

Ray Kerwin, director of global surveying products, said, “The ultra-wide 5MP on-board camera provides photo documentation in the field and can now transmit live video using either LongLink or high-speed WLAN as an access point, which allows the FC-5000 or Windows 10 tablets to connect easily.

“The addition of Wi-Fi connectivity offers convenience to the powerful video capabilities of the DS-200i. The system allows for non-prism measurements to be aimed and measured to remote objects – saving time without having to return to the tripod.”

He added, “The live video allows a remote user to know exactly what is being measured.”

Additional standard features include Hybrid Positioning functionality, Xpointing technology for quick and reliable prism acquisition, TSshield telematics security and maintenance technology, and a rating of IP65 for water-resistant construction.

GNSS supported

Leica Geosystems has just released the Leica Spider v7.0 software suite, which is now said to support all GNSS (global navigation satellite systems) – GPS, GLONASS, BeiDou, Galileo and QZSS – as well as the GPS L5 signal, for improved network RTK (real-time kinematic) correction services.

The all-in-one solution is said to offer users working on surveying and mapping, among other tasks, improved positioning accuracy and correction service. Leica said that professionals could now increase productivity while they operated reliably in environments with obstructions, like urban canyons, or at high latitudes, thanks to the higher number of usable satellites from multiple GNSS constellations.

Leica said that for the first time, all important GNSS network information was available in one “convenient and easy-to-access web user interface”. The Leica Spider Business Centre web portal is said to combine all the elements to operate the infrastructure efficiently, including user and access management, and network and rover status monitoring.


Markus Roland, product manager for GNSS Networks and Reference Stations, said, “Our goal with this new version is to incorporate the latest developments into our solution to continue our history of pioneering in GNSS.

“We strive to deliver reliable productivity improvements for our customers. With the new Spider v7.0, customer benefits are tangible and quality is ensured.”

Another new surveying technology which is increasingly apparent on jobsites is augmented and virtual reality (AR and VR).

In the UK, Scotland’s University of Strathclyde’s Advanced Forming Research Centre (AFRC) and the Advanced Manufacturing Research Centre with Boeing (AMRC) in Sheffield, South Yorkshire, have been working with Glasgow-based design visualisation company Soluis Group and modular building designer and manufacturer Carbon Dynamic.

Together, they claim to have successfully built a demonstrator for the use of AR and VR in the construction industry.

The technology was first trialled on a 2.2m plasterboard wall which, when viewed with a Microsoft HoloLens, showed a 3D rendering of the plumbing and wiring behind the façade.

The system can also be used to examine different wall parts to ensure there are no gaps in insulation before being sent to a construction site.

David Grant, partnership development leader at the AFRC, said, “This new technology has a role to play before, during and after construction of both domestic and commercial properties.”


He said that before work starts, those involved in a construction project would be able to accurately visualise and walk through a building before the foundations were even dug, helping to identify any potential issues before they occur.

And a Danish BIM-software company is claiming that, for the first time, construction workers will be able to see a mix of reality and digital drawings from their smartphones.

Dalux has launched what it says is the world’s first AR technology that works on mobile devices, and shows a mix of construction drawings and reality – based on what is being looked at and the location.

Jakob Andreas Bærentzen, associate professor at DTU Compute, the Technical University of Denmark, said he was impressed that an AR product was already mature enough to aid the construction industry.

He said, “Dalux’s AR-technology already seems to be useful in practice. This is several years earlier than I expected we would see such solutions.

“It makes the accomplishment even more impressive that the software can handle large amounts of data and is mature for practical use on mobile devices – that are not designed for such tasks in the construction industry as the HoloLens is.”

Dalux co-founder Bent Dalgaard said, “Now, at most large construction projects, a digital BIM model is often created. We can access these drawings through mobile devices, based on the construction worker’s location, and show it as AR.

“The fact that the technology can be used on mobile devices makes the adoption in the construction industry much faster, since everybody has a smartphone or tablet these days, and HoloLens is much more expensive, meaning that not all workers have access to the AR drawings.”

Real-time collaboration

Another company, HoloBuilder, which provides 360° reality capturing of construction sites, is releasing a product featuring new capabilities for real-time collaboration and offline handover for project close-out.

HoloBuilder offers a scalable SaaS (software as a service – licensed on subscription) solution. It is said to be a collection of all features that HoloBuilder offers as a collaborative enterprise package – 360° reality capturing with the JobWalk mobile app, TimeTravel for progress documentation, the measurement tool to measure within 360° images, and annotations.

The company said that users could now collaborate with the whole team and enjoy enterprise level service and security. HoloBuilder lets entire construction project teams contribute to the documentation process.

During project close-out, the project can be downloaded and saved as a view-only deliverable for the owner to keep throughout the lifetime of the building.

Depth-sensing tech from Qualcomm challenges Apple

Categories: Editor's Choice, laser distance sensor, Sensors, technology

New camera and image processing technology from Qualcomm promises to change how Android smartphones and VR headsets see the world. Depth sensing isn’t new to smartphones and tablets, first seeing significant use in Google’s Project Tango and Intel’s RealSense technology. Tango uses a laser-based implementation that measures the round-trip time of light bouncing off surfaces but requires a bulky lens on the rear of the device; early Tango phones like the Lenovo Phab 2 were hindered by the large size requirements as a result. Intel RealSense was featured in the Dell Venue 8 7000 tablet and allowed the camera to adjust depth of field and focal points after the image had been captured. It used a pair of cameras and calculated depth based on parallax mapping between them, much as human binocular vision does.

Modern devices like the iPhone 7 Plus and Samsung Galaxy S8 offer faux depth perception for features like portrait photo modes. In reality, they only emulate the ability to sense depth by using camera lenses with different ranges and don’t provide true depth-mapping capability.

New technology and integration programs at Qualcomm aim to improve the performance, capability, and availability of true depth sensing for Android-based smartphones and VR headsets this year. For entry-level devices that today cannot utilize depth sensing, a passive camera module was built that uses parallax displacement to estimate depth. This requires two matching camera lenses and a known offset distance between them. Even low-cost phones will be able to integrate image-quality enhancements like blurred bokeh and basic mixed or augmented reality, bringing the technology to a mass market.
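The passive approach rests on classic stereo triangulation: depth equals focal length times baseline divided by pixel disparity. A minimal sketch, with purely illustrative numbers:

```python
# Depth from parallax with two matched cameras a known baseline apart.
# focal_px: focal length in pixels; baseline_m: lens separation in metres;
# disparity_px: pixel shift of the same feature between the two images.
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulate depth (in metres) from the disparity between two cameras."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: feature at infinity or mismatched")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers only: 1,400 px focal length, 10 mm baseline,
# 28 px disparity -> the feature is about half a metre away.
print(depth_from_disparity(1400.0, 0.010, 28.0))  # -> 0.5
```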

The more advanced integration of the Qualcomm Spectra module program provides active depth sensing with a set of three devices. A standard high resolution camera is paired with both an infrared projector and an infrared camera that are utilized for high resolution depth map creation. The technology projects an infrared image with a preset pattern into the world, invisible to the human eye, but picked up by the IR camera. The Spectra image processor on the Qualcomm Snapdragon mobile platform then measures the displacement and deformations of the pattern to determine the depth and location of the items in the physical world. This is done in real-time, at high frame rates and high resolution to create a 10,000 data point “cloud” in a virtual 3D space.
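Active structured light works by the same triangulation, applied to every projected dot at once. A hedged sketch follows, with assumed camera intrinsics and synthetic dot data standing in for real IR camera output.

```python
import numpy as np

FOCAL_PX = 1400.0   # assumed IR camera focal length, pixels
BASELINE_M = 0.05   # assumed projector-to-camera offset, metres

def cloud_from_shifts(dots_px: np.ndarray, shifts_px: np.ndarray) -> np.ndarray:
    """Triangulate each dot's observed pattern shift into an (x, y, z) point."""
    z = FOCAL_PX * BASELINE_M / shifts_px          # depth per dot, metres
    xy = dots_px * (z / FOCAL_PX)[:, None]         # back-project pixel x, y
    return np.column_stack([xy, z])

# Synthetic stand-ins for the ~10,000-point cloud the article mentions
dots = np.random.uniform(-700, 700, size=(10_000, 2))   # dot positions, px
shifts = np.random.uniform(20, 140, size=10_000)        # observed shifts, px
print(cloud_from_shifts(dots, shifts).shape)            # -> (10000, 3)
```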


For consumers this means more advanced security and features on mobile devices. Face detection and mapping that combines the standard camera input with IR depth sensing will allow for incredibly accurate and secure authentication. Qualcomm claims the accuracy level is high enough to prevent photos of faces, and even 3D models of faces, from unlocking the device, thanks to the way human skin and eyes interact with IR light.
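One way to see why depth defeats a printed photo: a photo is essentially flat, so the depth map over the face region shows almost no relief. The heuristic and threshold below are illustrative only, not Qualcomm's algorithm.

```python
# Hypothetical anti-spoofing check: reject targets whose depth map is
# nearly flat, as a printed photo of a face would be.
import numpy as np

MIN_FACE_RELIEF_M = 0.01  # assume a real face spans more than 1 cm of depth

def looks_three_dimensional(face_depth_map: np.ndarray) -> bool:
    """Require real depth variation (relief) across the face region."""
    relief = np.percentile(face_depth_map, 95) - np.percentile(face_depth_map, 5)
    return relief > MIN_FACE_RELIEF_M

flat_photo = np.full((64, 64), 0.40)                # every pixel ~40 cm away
real_face = 0.40 + 0.02 * np.random.rand(64, 64)    # up to 2 cm of relief
print(looks_three_dimensional(flat_photo))          # -> False
print(looks_three_dimensional(real_face))           # -> True
```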

3D reconstruction of physical objects will also be possible with active depth sensing, allowing gamers to bring real items into virtual worlds. It also allows designers to accurately measure physical spaces that they can look through in full 3D. Virtual reality and augmented reality will benefit from the increased accuracy of its localization and mapping algorithms, improving the “inside-out” tracking capabilities of dedicated headsets and slot-in devices like Samsung’s Gear VR and Google Daydream.

Though the second-generation Qualcomm Spectra ISP (image signal processor) is required for the complex compute tasks that depth sensing will create, the module program the company has created is more important for the adoption, speed of integration, and cost of the technology to potential customers. By working with companies like Sony for image sensors and integration on modules, Qualcomm has pre-qualified sets of hardware and provides calibration profiles for its licensees to select from and build into upcoming devices. These arrangements allow Qualcomm to remove some of the burden from handset vendors, lowering development time and costs and getting depth sensing and advanced photo capabilities to Android phones faster.

It has been all but confirmed that the upcoming Apple iPhone 8 will have face detection integrated on it and the company’s push into AR (augmented reality) with iOS 11 points to a bet on depth sensing technology as well. Though Apple is letting developers build applications and integrations with the current A9 and A10 processors, it will likely build its own co-processor to handle the compute workloads that come from active depth sensing and offset power consumption concerns of using a general purpose processor.

Early leaks indicate that Apple will focus its face detection technology on a similar path to the one Qualcomm has paved: security and convenience. By using depth-based facial recognition for both login and security (as a Touch ID replacement), users will have an alternative to fingerprints. That is good news for a device that is having problems moving to a fingerprint sensor design that uses the entire screen.

It now looks like a race to integration for Android and Apple smartphones and devices. The Qualcomm Spectra ISP and module program will accelerate adoption in the large and financially variable Android market, giving handset vendors another reason to consider Qualcomm chipsets over competing solutions. Apple benefits from control over the entire hardware, software, and supply chain, and will see immediate adoption of the capabilities when the next-generation iPhone makes its debut.

New Sensor System Helps Track ISIS Explosives

Categories: Aerospace, Editor's Choice, laser distance sensor, Sensors, technology

A new use has been found for a bomb-detecting technology: a system that identifies the dangerous opioid fentanyl from a distance using a laser beam, protecting soldiers and law enforcement officers and helping with prosecutions.

Fentanyl — 100 times more potent than morphine — can be dangerous even to touch. Officers are often sickened by fentanyl exposure during busts.

“Farther, faster, safer,” said Ed Dottery, owner of Alakai Defense Systems, describing the advantages of the technology. Dottery said his sensor system is already being used by the US military to protect troops against truck bombs built by the Islamic State of Iraq and Syria (ISIS).

It works by shooting a laser beam at an object, then reading the change in the beam when it bounces back. Those changes signal whether there are chemicals used to make explosives.
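A minimal sketch of the matching step this implies: average many return spectra to suppress noise (the engineers quoted below mention taking 100 or 200 shots), then compare the result against a library of known chemical signatures. The library, threshold, and cosine-similarity choice are assumptions for illustration, not Alakai's actual algorithm.

```python
import numpy as np

def identify(shots: np.ndarray, library: dict, threshold: float = 0.9):
    """Average repeated laser-return spectra, then cosine-match against
    known chemical signatures; return the best match above threshold."""
    mean = shots.mean(axis=0)
    mean = mean / np.linalg.norm(mean)
    best_name, best_score = None, 0.0
    for name, signature in library.items():
        score = float(mean @ (signature / np.linalg.norm(signature)))
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Synthetic example: 200 noisy shots of a made-up "fentanyl" signature
rng = np.random.default_rng(0)
sig = rng.random(64)
shots = sig + 0.3 * rng.standard_normal((200, 64))
print(identify(shots, {"fentanyl": sig, "TNT": rng.random(64)}))
```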

According to tampabay.com, a portable version of the system appears to have great value in law enforcement narcotics and bomb-detection investigations, especially those involving fentanyl, making technology like this important for officer safety.

Dottery said his system could prevent officers from having to don protective gear because they could detect the presence of the drug without getting near it.

Still, while promising, the technology has a long way to go before it can be used by law enforcement to detect fentanyl. The current version is too bulky and too expensive to be practical, a sheriff’s official said.

Alen Tomczak, an Alakai field engineer who was present for the Sheriff’s Office test, said the company is not trying to sell a system to either agency at this point because more research is needed. But the initial result was promising, Tomczak said.

The test, he said, only took one laser shot of the fentanyl. “To make it official, we usually take 100 or 200 shots to make sure our algorithms are right,” he said.

Alakai is looking to collaborate with law enforcement to refine and develop the system, Tomczak said. That includes finding enough fentanyl to conduct further testing.