
Design News
Serving the 21st Century Design Engineer

10 Scary True Stories from Engineering History

Fri, 2018-10-19 05:00

 


Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, and robotics.

Festo Rolls Out New Machine Controller at Pack Expo

Fri, 2018-10-19 04:00

A new controller from Festo Corp. promises to help machine builders bring servo-based automation equipment to market faster and for less cost.

Introduced this week at Pack Expo, the CPX-E modular controller is targeted at linear motion applications employing multiple servo drives. “This is a game-changer in terms of synchronized control of servo motors in the field,” John Holmes, sales director for the food and packaging industries at Festo, told Design News. “It allows OEM machine builders to control up to 16 different axes of servo control, from single-phase, 350-watt motors all the way up to three-phase, 250-kilowatt motors.”

At Pack Expo, Festo rolled out a controller targeted at designers of automation and handling equipment. (Image source: Festo Corp.)

At the show, Festo used the new controller to demonstrate a conveyor-based flow wrapper system that included sensors, image processing, electric servos, and actuators.

The new platform is said to be well-suited for a variety of applications. Festo said it could be incorporated into automation systems, such as packaging machines, palletizers, and soldering systems. It could also serve in handling machinery—such as parts handlers, assembly systems, dispensing machinery, and gluing systems. Holmes said it could even serve in the auto industry for construction of automated machinery for the assembly of printed circuit boards and headlights.

Designed as an EtherCAT master controller and motion controller, the CPX-E is said to work in plug-and-play fashion with such components as sensors, cameras, and human-machine interfaces, as well as electric and pneumatic servo motors and drives.

“The beauty of this is it allows the engineer to synchronize all the drives off one master controller,” Holmes said.

To be sure, other suppliers have offered the capability of synchronizing multiple servos off a single I/O platform. Such systems, however, have typically required machine builders to get their components from multiple suppliers, Holmes said. “Here, instead of having to go to different companies, they can get the pneumatics, they can get the servos, and they can get all the digital I/O from one technology provider.” 
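The idea of synchronizing every drive off one master can be sketched in a few lines. This is a hypothetical illustration, not Festo's API: the `Axis` and `MasterController` names are invented, and a real CPX-E would issue setpoints as EtherCAT process-data writes once per bus cycle.

```python
# Hypothetical sketch (not Festo's API) of what "synchronizing all the
# drives off one master controller" means: one cyclic master computes a
# setpoint and commands every axis in the same cycle, keeping them in phase.
class Axis:
    def __init__(self, name):
        self.name = name
        self.position = 0.0

    def command(self, target):
        # In a real EtherCAT cycle, this would be a process-data (PDO) write.
        self.position = target

class MasterController:
    """Single source of truth for up to 16 axes, one update per cycle."""
    def __init__(self, axes):
        self.axes = axes

    def cycle(self, profile, t):
        target = profile(t)
        for axis in self.axes:  # every axis receives the SAME cycle's setpoint
            axis.command(target)

axes = [Axis(f"servo{i}") for i in range(16)]
master = MasterController(axes)
master.cycle(lambda t: 10.0 * t, t=0.5)  # all 16 axes land at position 5.0
```

The point of the single-master architecture is that no axis can drift a cycle behind another, which is what Holmes means by synchronized control in the field.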


Holmes added that the turnkey nature of the package translates to benefits in terms of machine-building time and economics. “It’s about convenience and cost,” he told us. “You’re going to have a lot of cost savings with this platform.”

Senior technical editor Chuck Murray has been writing about technology for 34 years. He joined Design News in 1987, and has covered electronics, automation, fluid power, and automotive technology.

Today's Insights. Tomorrow's Technologies.
ESC returns to Minneapolis, Oct. 31-Nov. 1, 2018, with a fresh, in-depth, two-day educational program designed specifically for the needs of today's embedded systems professionals. With four comprehensive tracks, new technical tutorials, and a host of top engineering talent on stage, you'll get the specialized training you need to create competitive embedded products. Get hands-on in the classroom and speak directly to the engineers and developers who can help you work faster, cheaper, and smarter. Click here to register today!

Anything Can Become a Robot Using Technology from Yale

Fri, 2018-10-19 03:00

Robots are starting to make their way into our homes through inventions like the Roomba vacuum cleaner. Now, researchers at Yale University have developed technology that can animate common objects, turning pretty much anything into a self-actuating robot. At the Yale laboratory of Rebecca Kramer-Bottiglio, assistant professor of mechanical engineering and materials science, the team has developed robotic skins: elastic sheets embedded with sensors and actuators.

Using robotic skin developed in a lab at Yale University, researchers turn objects like this stuffed animal into robots. (Image source: Rebecca Kramer-Bottiglio, Yale University)

Unlike many robots built today, the skins weren’t developed with any particular task in mind, Kramer-Bottiglio said in a Yale news release. Instead, they can be placed on any deformable object—such as a stuffed animal or a foam tube—to animate these objects from their surfaces to perform different tasks, depending on the properties of the soft objects and how the skins are applied, she said.

“We can take the skins and wrap them around one object to perform a task—locomotion, for example—and then take them off and put them on a different object to perform a different task, such as grasping and moving an object,” she explained. “We can then take those same skins off that object and put them on a shirt to make an active wearable device.”

In this way, people can use the technology to create robots that can perform a variety of functions almost spontaneously, paving the way for new types of devices and uses for those devices in numerous settings and applications, Kramer-Bottiglio stated.

NASA

The Yale team partnered with U.S. aerospace agency NASA on the development of the robotic skins. Indeed, it was a call by NASA for soft robotic systems that inspired Kramer-Bottiglio to invent the technology, which has potential for use by astronauts to accomplish various tasks with the same reconfigurable material.

RELATED ARTICLES:

“One of the main things I considered was the importance of multifunctionality—especially for deep space exploration, where the environment is unpredictable,” Kramer-Bottiglio said. “The question is: How do you prepare for the unknown unknowns?”

NASA and the Yale team envision that the robotic skins can be used on board the International Space Station or other spacecraft during missions to help the astronauts create a diverse range of robotic devices out of anything—from balloons to balls of crumpled paper—depending on their need. For instance, the same skins used to make a robotic arm out of a piece of foam could be removed and applied to create a soft Mars rover that can roll over rough terrain, researchers said.

Moreover, people can use the skin to give these insta-robots more complex movements by using more than one skin in a layering effect that allows for different types of motion, Kramer-Bottiglio said. “Now, we can get combined modes of actuation—for example, simultaneous compression and bending.” 

The team developed a number of prototypes to demonstrate their research in action, including foam cylinders that move like an inchworm, a shirt-like wearable device designed to correct a person’s posture, and a device with a gripper that can grasp and move objects. Researchers published a paper on their work in the journal Science Robotics.

Using a $2 million grant recently awarded to Kramer-Bottiglio by the National Science Foundation (NSF), she and her researchers will continue working to streamline the devices and explore the possibility of 3D printing the components, she said. The grant was given to her as part of the NSF’s Emerging Frontiers in Research and Innovation program.

Elizabeth Montalbano is a freelance writer who has written about technology and culture for 20 years. She has lived and worked as a professional journalist in Phoenix, San Francisco, and New York City. In her free time, she enjoys surfing, traveling, music, yoga, and cooking. She currently resides in a village on the southwest coast of Portugal.

SAVE THE DATE FOR PACIFIC DESIGN & MANUFACTURING 2019! 
Pacific Design & Manufacturing, North America’s premier conference that connects you with thousands of professionals across the advanced design & manufacturing spectrum, will be back at the Anaheim Convention Center February 5-7, 2019! Don’t miss your chance to connect and share your expertise with industry peers during this can't-miss event. Click here to pre-register for the event today!

 

iPhone XS and XS Max Teardowns: Better Machine Learning, New Battery Shapes, and Beer-Proofing

Thu, 2018-10-18 05:00
Can the iPhone XS and XS Max survive being fully submerged in beer? (Image source: SquareTrade)

It wouldn't be another year if there wasn't another iteration of the iPhone. Last year's iPhone X brought some notable innovations to the product line—most notably, the A11 Bionic chip designed for machine learning and augmented reality (AR) applications.

Let's forgo the usual, expected upgrades: bigger screen, better resolution...higher price tag. On the hardware end, the iPhone XS and XS Max (that's XS as in “10 S,” not “extra small”) do offer some upgrades over the X, but nothing as dramatic as the X offered over previous versions. The aforementioned A11 chip is replaced by a new chip called the A12 that Apple says is up to 15 percent faster and up to 50 percent lower power than the A11. The goal is still the same: leveraging machine learning for improved performance in AR, photos, gaming, and other applications that require graphics-heavy performance. But this time around, Apple has tucked a bit more under the hood. The A12 is manufactured using a 7-nanometer process, making it the first of its kind to appear in a smartphone.

Like the A11, the A12 is an Arm-based, 64-bit SoC that integrates a six-core CPU (two performance cores, four efficiency cores) with a proprietary GPU. This time, the CPU cores have had a slight upgrade, with the high-performance cores running at 2.49 GHz according to benchmarks, compared with the A11's 2.39 GHz. The GPU has also been upgraded from three cores to four, allowing it to perform up to 50 percent faster than the A11's GPU, according to Apple.

In addition, Apple has upgraded the “Neural Engine”—a proprietary, hardware-based neural network framework built into the A11 chip. The A12's Neural Engine is an eight-core architecture that Apple says can perform up to 5 trillion operations per second, making it up to nine times faster than the A11 at processing machine learning functions. Apple has also decided to open this version of the Neural Engine to its Core ML platform for developing machine learning apps, meaning developers will be able to create machine learning-based apps optimized for the XS and XS Max.
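Those figures are easy to sanity-check. A back-of-envelope comparison (using Apple's published 600-billion-operations-per-second figure for the A11's Neural Engine) shows the CPU clock gain is modest while the Neural Engine gain dominates:

```python
# Back-of-envelope check of the quoted A12 vs. A11 figures.
a11_cpu_ghz = 2.39
a12_cpu_ghz = 2.49
cpu_speedup = a12_cpu_ghz / a11_cpu_ghz  # ~1.04, i.e. a ~4% clock bump

# Apple quoted the A11 Neural Engine at 600 billion ops/s;
# the A12's eight-core engine is quoted at 5 trillion ops/s.
a11_ne_ops = 600e9
a12_ne_ops = 5e12
ne_speedup = a12_ne_ops / a11_ne_ops  # ~8.3x, consistent with "up to nine times"

print(f"CPU clock gain: {cpu_speedup:.2f}x")
print(f"Neural Engine gain: {ne_speedup:.1f}x")
```

In other words, nearly all of the machine-learning headroom in the A12 comes from the dedicated Neural Engine rather than the CPU clock.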

Teardowns of the XS and XS Max over on iFixit reveal much of the same engineering and design behind the X. Apple is still using the folded circuit board design to optimize space, with a few component changes here and there that more or less look to be moving up to the latest versions of components used in the iPhone X.

For the battery aficionados, it is particularly worth noting that the XS does some interesting things with a contoured battery for further space savings. Where the iPhone X used two cells joined into an L shape, the XS uses a single, L-shaped battery. Anyone wondering about the challenges of having your battery do yoga inside your device need only ask Samsung about the notorious Galaxy Note 7, which had an irregular battery shape that led to issues with... “exploding.”


Apple has been working on new battery form factors since at least 2011, according to a few patents pointed to by iFixit. One patent is for “Non-rectangular batteries for portable electronic devices” and another discusses creating “Battery cells having notched electrodes.” Both discuss the challenges of designing batteries with atypical shapes and methods of overcoming their inherent problems.

But on to the important stuff. Real Apple fans are curious about another feature touched on in the XS and XS Max announcements. According to Apple, the XS and XS Max are the most durable iPhones ever made and have been tested in many different liquids—including beer. The new iPhones feature an IP68 water resistance rating, meaning they can withstand being submerged in up to 1.5 meters of water for 30 minutes.

To test just how durable (and beer proof) the XS and XS Max really are, SquareTrade—a subsidiary of Allstate that provides insurance for electronic devices—dunked the phones into a five-foot tube containing the equivalent of 138 cans of beer (Pabst Blue Ribbon, specifically) for 30 minutes. It found that both phones functioned normally after being submerged.

Drop the iPhone XS and you could be in for a $599 repair bill. (Image source: SquareTrade)

iFixit did a beer test of its own, sinking an iPhone XS into 2 meters of beer from Figueroa Mountain Brewing Company. At the end of iFixit's nearly five-hour livestream teardown of both phones, the XS was still going strong in all of that beer. And yes, you can watch an iPhone submerged in beer for several hours if you are so inclined.

Drop resistance was another story. SquareTrade submitted the XS and XS Max to several drop tests. The short version of the story is: Don't drop either phone from a height of six feet or higher.

Overall, iFixit gave the XS and XS Max repairability scores of 6 out of 10—the same as the iPhone X. On one hand, display and battery repairs can be made pretty easily with enough know-how. But there are a lot of screws, fasteners, and adhesives to get through, and the shiny glass front and back are prone to damage, which means a hefty repair bill.

In a statement regarding SquareTrade's iPhone tests, Jason Siciliano, vice president and global creative director at SquareTrade, agreed with much of iFixit's assessment—most notably the fragile glass exterior. “Repair costs for the new iPhones are expected to be around $399 to replace a front screen and $599 to fix a shattered back,” Siciliano said. “Considering $599 was the cost of the most expensive version of the very first iPhone, repair costs are now something to consider when buying a new iPhone. They’re beautiful phones. Just hang on tight.”

Watch SquareTrade's iPhone XS and XS Max testing below.

And check out iFixit's website for full teardowns on the iPhone XS and XS Max.


Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, and robotics.


Cobot Rings the New York Stock Exchange Bell

Thu, 2018-10-18 04:00

In news that contrasts with last week's demise of Rethink Robotics, a robot arm rang the closing bell of the New York Stock Exchange on Wednesday. The bell ringer was the UR5e from Universal Robots. The robot used a two-fingered gripper from Robotiq to clang the bell, offering viewers a display of cobots interacting with humans.

The display at the stock exchange was timely, given that Rethink Robotics pulled the curtain on Baxter and Sawyer last week, closing a very visible chapter in the progress of cobots. While leaders in the robot industry were quick to argue that Rethink's troubles were not reflective of the industry in general, the ringing of the bell in downtown New York cast a positive light on the world of small robots.

This NYSE bell ringing was a celebration of the five-year anniversary of ROBO Global—a robotics, automation, and AI index. Launched in October 2013, ROBO invests in more than 80 of the most innovative companies across the globe, spanning 12 subsectors from manufacturing to healthcare to sensing.


Cobots are a fast-growing segment of industrial automation. Sales of cobots are expected to jump tenfold to 34% of all industrial robot sales by 2025, according to the International Federation of Robotics.  

Rob Spiegel has covered automation and control for 17 years, 15 of them for Design News. Other topics he has covered include supply chain technology, alternative energy, and cyber security. For 10 years, he was owner and publisher of the food magazine Chile Pepper.


Moving Toward an Industrial Internet Connectivity Framework

Thu, 2018-10-18 03:00

A new white paper from the Industrial Internet Consortium is setting the stage for an important publication from this influential standards group: a new Industrial Internet Networking Framework. As a complement to the upcoming framework document, a key goal of the white paper is to provide advance information detailing the requirements and best available technologies for the lower three layers of the Industrial Internet Communication Stack.

Industrial Networking Enabling IIoT Communication

IIoT communication has a number of complicating factors: the convergence of IT and OT technologies, along with technical requirements including network latency, jitter, reliability, and availability. Networks must meet application performance requirements, which can be very different, depending on technical needs of specific systems.

Shown is the Yin-Yang Model of industrial networking and IIoT. (Image source: Industrial Internet Consortium)

The authors of the white paper include David Zhe Lou (Huawei Technologies), Jan Holler (Ericsson), Clifford Whitehead (Rockwell Automation), Sari Germanos (B&R Automation), Michael Hilgner (TE Connectivity), Wei Qiu (Huawei Technologies), and Manish Sharma (Ligado Networks LLC). They point to the need for tools to assess available technologies and leverage best practices to achieve appropriate IIoT networking solutions.

According to the white paper, “the derived networking requirements lead to the diversity of design considerations, which provides introductory guidance to IIoT solution architects and industrial networking engineers to help them make the right choices.”

Addressing Different IIoT Networking Scenarios

Six different IIoT networking scenarios, or examples, are used to explore how collected data needs to be communicated across industrial networks. One interesting scenario is what they call “elastic virtual industrial control” to address market demands for effectively deploying scalable automation systems.

Today’s industrial processes typically comprise distributed subsystems controlled by discrete industrial controllers, often programmable logic controllers (PLCs). Future production systems will need to meet fast-growing market demands for agile, flexible production processes, but PLCs offer constrained resources, and their hardware function is not “elastic.”

One option could be “PLC-as-a-service” at the edge of the network or as part of a cloud solution to help meet the scalable automation requirements of a business. This approach of using an Edge/Cloud-to-Field network could potentially guarantee the required latency and jitter between layers and lossless networks.
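Whether a given network path can host such a virtualized controller comes down to comparing measured latency and jitter against the scenario's budget. The sketch below is illustrative only; the numeric budgets are assumptions for a tight motion-control loop, not figures from the white paper:

```python
# Illustrative requirement check (values are assumptions, not from the
# IIC white paper): given a scenario's latency/jitter budget, decide
# whether a candidate network path can host a virtualized PLC.
from dataclasses import dataclass

@dataclass
class Requirement:
    max_latency_ms: float
    max_jitter_ms: float

@dataclass
class PathMeasurement:
    latency_ms: float
    jitter_ms: float

def meets(req: Requirement, path: PathMeasurement) -> bool:
    return (path.latency_ms <= req.max_latency_ms
            and path.jitter_ms <= req.max_jitter_ms)

# Assumed budget for a tight motion-control loop, and two candidate paths.
motion_control = Requirement(max_latency_ms=1.0, max_jitter_ms=0.1)
cloud_plc_path = PathMeasurement(latency_ms=8.0, jitter_ms=2.5)
edge_plc_path = PathMeasurement(latency_ms=0.5, jitter_ms=0.05)

print("cloud PLC ok:", meets(motion_control, cloud_plc_path))  # False
print("edge PLC ok:", meets(motion_control, edge_plc_path))    # True
```

This is the kind of requirement-versus-technology matching the forthcoming IINF toolbox is meant to systematize; in this toy example only the edge deployment satisfies the control loop's budget.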

Standards and New Technologies

Another important aspect of IIoT connectivity solutions is the network’s role in the Industrial Internet communication stack. Ongoing standards work is addressing needs at the network, link, and physical layers of IIoT networks.

One priority for the Industrial Internet Networking Framework (IINF) is to provide a conceptual toolbox, along with guidance and recommendations for suitable network infrastructure in various industrial scenarios. A second priority is to create network architectures by applying design considerations to derived requirements, along with reference models or blueprints for various vertical application areas.


The white paper authors conclude that “because there is no universal or preferred networking technology for IIoT, we introduced the concept of the toolbox and methodology, which will be elaborated in the forthcoming publication of an Industrial Internet Networking Framework. The IINF will include requirements and solutions overviews as well as tools to support the process of deriving requirements from usage scenarios and selecting the appropriate technology.”

The complete white paper on the IIC website is definitely worth a detailed read for automation and control engineers interested in learning more about these future possibilities.

Al Presher is a contributing editor for Design News specializing in automation and control. He has written on automation, machine control, robotics, fluid power, and power transmission since 2002. Previously, he worked in the electronic motion control field for 18 years, most recently as VP of Marketing for ORMEC Systems Corp (a manufacturer of PC-based servo control systems).


Drone Batteries Receive Attention

Wed, 2018-10-17 04:00

Unmanned Aerial Vehicles (UAVs), more commonly called drones, have found an amazing number of commercial applications. They provide an unmatched platform for aerial surveillance for search and rescue, security, and gathering intelligence. As a camera platform, they have become the tool of choice for documentary filmmakers and cinematographers, used by everyone from local real estate agents to Hollywood directors. Agricultural practices often include drones for measuring water needs and crop conditions. They even provide capability for aerial spraying. Industrial uses include automated inspections of pipelines or chemical plants to detect leaks and other problems.

According to a report by DRONELIFE, as many as 96% of commercial UAVs are powered by electric batteries. Lithium ion battery technology makes this kind of electrified flight possible by providing a combination of sufficient power and energy output, with relatively light weight, and at a reasonable cost. Lithium ion batteries have become the power source for the 21st Century, powering everything from cell phones and personal electronics to electric vehicles (EVs) and providing backup storage for electric power grids. But as UAVs become more sophisticated and require better performance and longer flight times, their batteries are becoming more specialized.

Lithium ion batteries are becoming more specialized depending upon the application. Argonne National Laboratory has put into place research programs that will enhance drone (UAV) performance through improved battery materials. (Image source: Argonne National Laboratory)

Lithium Ion Limitations

Present-day, commercial lithium ion batteries have three main components. The positive electrode (cathode) is a combination of metal oxides. The negative electrode (anode) is usually made from carbon graphite that allows the storage of lithium ions between the graphite layers during charging. An organic solvent liquid electrolyte allows the flow of lithium ions back and forth between the cathode and anode during charging and discharging.

For land vehicles and stationary applications, additional range or battery capacity is added by simply adding more battery cells to increase the amount of energy that can be stored. When the batteries must be lifted into the air for flight, the additional weight that comes with additional battery cells becomes a problem. So as UAVs are growing in popularity and usefulness, battery systems are beginning to evolve to meet their specific requirements.

Shabbir Ahmed, a chemical engineer and group leader in the Chemical Sciences and Engineering division of the U.S. Department of Energy’s (DoE) Argonne National Laboratory, notes that moving from land-based to air-based vehicles creates very different battery and energy storage needs. “The safety margins for something that flies versus something on the road are different,” Ahmed said in an Argonne news release. ​“To lift something in the air also has different power requirements than for something that rolls on the ground. The longer the range, the heavier the battery. All these things must be considered when the application changes,” he added.

Searching Materials

In many ways, the search for higher energy density (energy per unit mass) comes down to materials research—an area in which Argonne is particularly well suited. The metal oxides used in lithium ion batteries are typically a combination of nickel, manganese, and cobalt. Cathodes made from these metals are heavy and relatively expensive, and because each lithium ion can provide only a single electron, batteries with this configuration are limited to around 200 watt-hours per kilogram (Wh/kg).

On the anode side, the use of graphite to collect and store lithium ions prevents the buildup of spiky dendritic crystals during charging, as would occur if pure lithium metal were used as the anode. These dendrites can grow large enough to bridge the gap between the anode and cathode, shorting out the battery and causing safety issues. Graphite, however, can only supply a limited amount of lithium during discharge. Lithium metal anodes are attractive because they can increase output by a factor of two to three over what graphite can do.

Solid electrolytes may provide an alternative to liquid electrolytes. Not only can they prevent the growth of dendritic crystals when lithium metal is used as an anode; they also don’t provide the same fire hazard that the current commercial flammable organic solvent electrolytes do. Solid state batteries with solid electrolytes are a subject of intense research worldwide.


Flight Requirements

Deciding which battery materials are best suited for drone flight first requires a careful examination of the requirements for UAV missions. To address these questions, Argonne has recently embarked on a new initiative, supported by the Laboratory Directed Research and Development program. It has established a new Mission-Driven Unmanned Aircraft Systems (UAS) Design Center. This center addresses the various interconnected challenges and tradeoffs of energy consumption, noise, flight time, and payload. Among the goals of the new design center is to develop a tool to evaluate potential drone design options, including battery-powered and hybrid architectures.

The Argonne release notes that, “All the research on batteries for UAVs will connect to work currently underway in the Argonne Collaborative Center for Energy Storage Science (ACCESS), a powerful association of scientists and engineers from across Argonne that solves energy storage problems through multidisciplinary research.”

The Real World

One of the most important customers for this research is the US military, whose use of UAVs has increased dramatically in recent years. “When you work with end users like the military on advanced battery systems, they want solutions,” said Christopher Claxton, who oversees commercialization management of Argonne’s battery intellectual property portfolio. “We have a demonstrated ability to work directly with people who have highly specific needs to create specialized materials that are matched to their exact mission requirements,” he added.

“Right now, the big thinking is about moving from land-based to the third dimension,” said ACCESS director Venkat Srinivasan in the Argonne release. ​“We’re very interested in everything in the third dimension—which includes drones. It’s all part of a continuum.”

It is safe to say that as real world applications become more specialized, so will the batteries that serve them. “The reason we’ve been so effective is because we are making a difference in the real world,” said Srinivasan. ​“Our lab discoveries lead to market impact. We are always looking at what the world needs and where it’s going, and looking at what these questions mean for batteries.”

Senior Editor Kevin Clemens has been writing about energy, automotive, and transportation topics for more than 30 years. He has masters degrees in Materials Engineering and Environmental Education and a doctorate degree in Mechanical Engineering, specializing in aerodynamics. He has set several world land speed records on electric motorcycles that he built in his workshop.


Novel Catalyst Allows for Artificial Photosynthesis

Wed, 2018-10-17 03:00

Natural plant photosynthesis still remains the most efficient way of using energy from sunlight to create a “fuel” source—in this case, food for plants. Researchers have been working to mimic this process artificially to use it to turn that energy into clean sources of power to replace fossil fuels and other types of energy that create pollution—so far, to modest success.

A team at Ludwig Maximilian University (LMU) in Munich has advanced this research with the identification of a novel catalyst based on semiconductor nanoparticles that can facilitate all of the reactions needed for an artificial photosynthesis process. The LMU physicists—led by Jacek Stolarczyk and Jochen Feldmann—developed a water-splitting system based on the catalysts in collaboration with chemists at the University of Würzburg in Germany. Their work is part of a European project aimed at turning sunlight into non-fossil-fuel energy, reflecting ongoing interest in more sustainable sources of energy, researchers said in an LMU news release.

The new catalyst system developed by an interdisciplinary group of researchers in Germany functions as a multifunctional tool for splitting water. (Image source: C. Hohmann)

Because it happens so easily in nature, the process of photosynthesis is deceptively complex. This has made it hard for scientists to mimic it synthetically, as it requires a combination of processes that can interfere with each other.

Current technical methods for the photocatalytic splitting of water molecules use synthetic components to mimic these processes. In such systems, semiconductor nanoparticles that absorb light quanta, or photons, can serve as the photocatalysts.

Absorption of a photon generates a negatively charged particle, or an electron, and a positively charged species known as a “hole.” The two must be spatially separated so that a water molecule can be reduced to hydrogen by the electron and oxidized by the hole to form oxygen. The problem with this process has been allowing the two half-reactions to take place at the same time on a single particle while simultaneously ensuring that the oppositely charged species do not recombine, researchers said.
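The bookkeeping behind those two half-reactions is simple: each absorbed photon yields one electron-hole pair, two electrons reduce water to one molecule of hydrogen, and four holes oxidize water to one molecule of oxygen. A quick sketch, assuming ideal 100 percent quantum efficiency (an assumption for illustration, not a measured figure):

```python
# Stoichiometry of the two half-reactions described above:
#   reduction: 2 H2O + 2 e-  -> H2 + 2 OH-
#   oxidation: 2 H2O -> O2 + 4 H+ + 4 e-
# so every 4 absorbed-photon electron-hole pairs yield 2 H2 and 1 O2
# (assuming ideal 100% quantum efficiency).
AVOGADRO = 6.022e23

def ideal_gas_output(photons_absorbed):
    electrons = photons_absorbed      # one electron-hole pair per photon
    h2 = electrons / 2 / AVOGADRO     # mol H2 (2 electrons per H2)
    o2 = electrons / 4 / AVOGADRO     # mol O2 (4 holes per O2)
    return h2, o2

h2, o2 = ideal_gas_output(1.0e20)
print(f"H2: {h2:.2e} mol, O2: {o2:.2e} mol")  # always a 2:1 ratio
```

The fixed 2:1 hydrogen-to-oxygen ratio is why both half-reactions must proceed on the same particle without recombination; losing holes (for example, to sacrificial reagents) produces hydrogen but no complete water splitting.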

Another complexity is that many semiconductors can be oxidized themselves, and thereby destroyed, by the positively charged holes, Stolarczyk said in the release. “If one only wants to generate hydrogen gas from water, the holes are usually removed rapidly by adding sacrificial chemical reagents,” he explained. “But to achieve complete water splitting, the holes must be retained in the system to drive the slow process of water oxidation.”

Nanorods

Researchers solved the problem by using nanorods made of cadmium sulfide, a semiconducting material, the tips of which were decorated with tiny particles of platinum, he said. The platinum particles act as acceptors for the electrons excited by light absorption. Researchers also spatially separated the areas of the nanocrystals on which the oxidation and reduction reactions occur, they said.

This configuration overall provides an efficient photocatalyst for the reduction of water to hydrogen, Stolarczyk explained. The oxidation reaction, on the other hand, takes place on the sides of the nanorod.

To make this happen, researchers attached to the lateral surfaces a ruthenium-based oxidation catalyst developed by the scientists from the University of Würzburg. They equipped the compound with functional groups that anchored it to the nanorod.

“These groups provide for extremely fast transport of holes to the catalyst, which facilitates the efficient generation of oxygen and minimizes damage to the nanorods,” said Peter Frischmann, a researcher from the Würzburg team.

The result is a complete splitting of water in a system that uses only one catalyst, eliminating some of the previous complexities with artificial photosynthesis, researchers said. The interdisciplinary team published a paper on its work in the journal Nature Energy.

The team hopes its work will be used to explore the possibility of using sunlight not only to generate electricity using solar panels, but also to spur photosynthesis to provide even cleaner forms of fuel.

Elizabeth Montalbano is a freelance writer who has written about technology and culture for 20 years. She has lived and worked as a professional journalist in Phoenix, San Francisco, and New York City. In her free time, she enjoys surfing, traveling, music, yoga, and cooking. She currently resides in a village on the southwest coast of Portugal.

SAVE THE DATE FOR PACIFIC DESIGN & MANUFACTURING 2019! 
Pacific Design & Manufacturing, North America’s premier conference that connects you with thousands of professionals across the advanced design & manufacturing spectrum, will be back at the Anaheim Convention Center February 5-7, 2019! Don’t miss your chance to connect and share your expertise with industry peers during this can't-miss event. Click here to pre-register for the event today!

 

What Is Crowdsensing? ...And Where Can I Park?

Tue, 2018-10-16 05:00
Someday, smart cities could use crowdsourced data to alleviate parking woes. (Image source: Omer Rana on Unsplash)

You're running late for an appointment. By some miracle, you manage to catch all the green lights on the way...and you still end up late because you spent 15 minutes driving in circles looking for parking! If you're lucky enough to find street parking in a major city, you're probably rolling the dice as to whether you'll get a parking ticket or not. Even the newer parking garages, which show you the number of available spaces, are prone to sending you on wild goose chases.

A group of researchers from Singapore-based Nanyang Technological University (NTU Singapore) think the solution for our parking woes lies in a mobile data-gathering technology called crowdsensing. But parking is just the first step toward using our mobile data to make our day-to-day lives more efficient.

In 2016, Jim Cherian—a senior research engineer at NTU Singapore—conducted a study around ParkGauge, a method of leveraging crowdsensing to capture mobile phone data and track the states of cars in parking garages. ParkGauge uses 3G, GPS, and Wi-Fi to collect data from a smartphone's sensors—including its gyroscope, accelerometer, and barometer—to determine driving states (i.e., turning or braking). Based on the driving state, and with enough data from other vehicles in the garage, ParkGauge's machine learning algorithm can infer which cars are parked, where they are parked or likely to park, and where parking spaces are most likely to be available. This can all be delivered in real time—not only in a parking area, but also online, so users can understand the parking situation before they even arrive.

What Is Crowdsensing?

Most have probably heard of crowdsourcing, and may even have funded a few projects on sites like Kickstarter or Indiegogo. Crowdsensing is somewhat similar in that it seeks to leverage large groups of people. But what it wants is your data, not your money. “Crowdsensing refers to the process of collecting sensor data using smart devices from a crowd of contributing users. This is akin to a crowd of connected sensors on the move. In other words, it simply refers to the crowdsourcing of sensor data to enable and empower specialized applications and services,” NTU's Cherian told Design News. “Most of the crowdsensing efforts are done using commodity mobile devices (such as smartphones or smartwatches) and hence it also is known as 'Mobile Crowdsensing.'”

Cherian explained that crowdsensing typically comes in one of two forms. There's participatory crowdsensing, in which each user manually contributes data. Opportunistic crowdsensing, in contrast, refers to automatic, non-intrusive, “behind the scenes” data collection—usually with no or minimal user intervention.


“Participatory crowdsensing has been around for a decade or even longer, but it often fails due to insufficient user motivation and incentives,” Cherian said. Some may remember an app called Open Spot released by Google back in 2011. The app's heart was in the right place, but it only worked if users manually notified the app that they were leaving a parking space. It was the digital equivalent of flagging someone down to let them know you're leaving a space. But politeness only goes so far. Without the immediacy of seeing someone searching for a spot, there was no real incentive for users to interact with the app.


Opportunistic schemes have their own challenges. Because the data collected is generally not verified or contextualized by humans, it can create sets of bad or low-fidelity data and/or noise. The idea of having data collected in the background also sparks understandable privacy concerns from users.

The key challenges for crowdsensing, Cherian said, are to incentivize human users, discriminate trust levels and data quality, and ensure data privacy. If this is done, Cherian imagines that crowdsensing could be applied to a variety of applications beyond parking. “There are several problem areas where such methods could be applied,” he said. “This includes traffic congestion estimation, prediction of stopped duration at signalized intersections, and, in general, almost any problem that involves estimating a super-state that can be represented as the temporal evolution of sub-states in a hierarchical fashion.”

The data privacy hurdle could be a major one, however—particularly in today's post-Cambridge Analytica climate. There are major risks associated with any sort of data that deals with location and activity. Is it worth risking exposing your real-time location to strangers in exchange for a smoother commute?

“Overcoming these concerns while ensuring a good quality of service (i.e., differential privacy) is an active area of ongoing research,” Cherian said. “Typical ways of addressing this include data anonymization, data obfuscation, and stochastic (randomized) data sampling. But achieving practical, efficient, privacy-preserving crowdsensing schemes still remains a challenging subject.” The ParkGauge study explains that a major goal is to get the most out of as little data as possible in any particular application:

“...Unlike existing crowdsensing-based parking systems that directly count the available parking lots, ParkGauge does not require a “crowd” (hence high penetration of the application) for sensing individual parking garages. Instead, a minimal amount of sensing data acquired from a small number of users for each parking garage would be sufficient for ParkGauge to deliver useful information, whereas the “crowd” is needed only for covering many parking garages across a large urban area.”

Because of this, technologies like ParkGauge are more likely to roll out in places such as shopping malls before a wider, smart city deployment.
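The data obfuscation Cherian mentions can be illustrated with a minimal sketch of the Laplace mechanism from differential privacy. All numbers here (the privacy parameter, the coordinates, the report count) are hypothetical and chosen only to show the idea: individual reports are heavily perturbed, yet the aggregate remains usable.

```python
import math
import random

def obfuscate(x_m, y_m, epsilon, rng):
    """Perturb a position (in meters) with Laplace noise of scale 1/epsilon.
    Smaller epsilon means more noise and stronger privacy."""
    def laplace(scale):
        # The difference of two Exp(1) variates is Laplace-distributed.
        e1 = -math.log(1.0 - rng.random())
        e2 = -math.log(1.0 - rng.random())
        return scale * (e1 - e2)
    scale = 1.0 / epsilon
    return x_m + laplace(scale), y_m + laplace(scale)

# Many obfuscated reports of the same spot: each one is noisy,
# but their average still recovers the true location closely.
rng = random.Random(42)
reports = [obfuscate(100.0, 200.0, epsilon=0.1, rng=rng) for _ in range(5000)]
avg_x = sum(p[0] for p in reports) / len(reports)
avg_y = sum(p[1] for p in reports) / len(reports)
```

This is why crowdsensing aggregates, such as occupancy counts, can tolerate per-user noise that would make any single report unreliable on its own.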

Connecting Crowds of Devices

Cherian and his colleagues were able to conduct their initial study with ParkGauge using only a low-energy 3G connection. Surely, a larger deployment will require better and faster connectivity? When thinking about city-wide deployments, crowdsensing as a whole looks like an emerging use case for 5G.

“Communication bandwidth and energy consumption requirements are important aspects to consider and fine-tune in a productive crowdsensing application,” Cherian said. “These requirements vary based on application, the level of accuracy required, the amount of data to be transmitted, and the rate (how frequently) at which it has to be transmitted to achieve a certain level of quality of service.” Cherian said large-scale crowdsensing applications and services could be delivered with today's level of connectivity, but newer technologies like 5G could only help.

When discussing vehicles, the natural question is: In a world of connected vehicles, could smartphones be taken out of the equation entirely in favor of data from in-vehicle sensors? “Vehicle sensor data are more reliable and they can be obtained using different methods. But this will demand some level of in-car instrumentation,” Cherian explained. “For example, the OBD2 [on-board diagnostics] connector that is installed in most cars manufactured in the last two decades offers some basic information about the vehicle that can be read wirelessly over Bluetooth or over a USB cable. More advanced vehicle information can be gathered using open connectivity protocols (such as OpenXC on Ford vehicles) or directly from the CAN bus (proprietary to each vehicle manufacturer) or using connected sensing devices including cameras.”
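To make the OBD2 example concrete, here is a minimal parser for two standard mode-01 responses, using the scaling formulas defined in SAE J1979. The raw byte strings are hypothetical samples, not captured from a real vehicle.

```python
def parse_obd_response(raw):
    """Parse a raw OBD-II mode-01 response such as '41 0D 32'.

    '41' marks a response to mode 01; the next byte is the PID and the
    remaining bytes are the data (scaling formulas per SAE J1979)."""
    b = [int(tok, 16) for tok in raw.split()]
    if b[0] != 0x41:
        raise ValueError("not a mode-01 response")
    pid, data = b[1], b[2:]
    if pid == 0x0D:                       # vehicle speed: A km/h
        return ("speed_kmh", data[0])
    if pid == 0x0C:                       # engine RPM: (256*A + B) / 4
        return ("rpm", (256 * data[0] + data[1]) / 4)
    raise ValueError("unsupported PID 0x%02X" % pid)

# Hypothetical responses: 0x32 = 50 km/h; (256*0x1A + 0xF8) / 4 = 1726 rpm.
speed = parse_obd_response("41 0D 32")
rpm = parse_obd_response("41 0C 1A F8")
```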

Cherian said that any additional sensor-rich hardware, if installed, can yield a wealth of information about the vehicle and/or its environment, which would in turn lead to more reliable services. “However, all of these methods require instrumenting the car, and may not be attractive enough to motivate a large crowd of users who may have their own concerns or reservations,” he said.

Cherian added, “It is a different story, though, if the local governments can also encourage or enforce the installation of connected on-board hardware for specialized purposes, such as parking/toll-collection or dynamic road pricing. For example, GPS-based ERP2 [Electronic Road Pricing] boxes are due to be rolled out across vehicles here in Singapore in the next few years. If they are crowdsensed, they can offer solutions to many other problems. On the other hand, many of the current proposals for vehicular mobile crowdsensing rely on infrastructure such as GPS and/or Wi-Fi. Unfortunately, they either do not work indoors or are unavailable to indoor parking garages—especially when underground.”


Cherian said research like ParkGauge, which proposes an infrastructure-free solution, becomes more relevant in this context. While the accuracy offered by mobile devices may be slightly inferior, it still offers a practically acceptable degree of accuracy. This is also where the use of machine learning plays a role, allowing for the development of crowdsensing systems that improve over time.

Learning from the Crowd

“It all depends on the quantity and, more importantly, the quality of data available for learning,” Cherian explained. “In our experiments, we have made use of a hierarchical combination of ‘supervised’ machine learning methods. This means that we have utilized ground truth (in terms of labels that were manually provided by experienced human users or occupancy data that has been precisely obtained by infrastructure-based methods) so that the learning algorithm was able to learn patterns from it in a reliable manner.”

The ParkGauge researchers use a statistical analysis approach called a Hidden Markov Model that allows the algorithm to work with states that are not immediately observable. In other words, you don't have to actually see every vehicle to predict where it is or what it is doing.

“To detect the driving contexts (e.g., ‘driving’ or ‘parked’) from a smartphone placed in a vehicle, we use a Hidden Markov Model that can model the temporal evolution of contexts as a function of driving states (such as accelerating, braking, turning, or walking). These driving states can be easily detected from smartphone inertial sensor data, such as an accelerometer or gyroscope (and barometer, if available) using a decision tree-learning method, such as Random Forest classifier,” Cherian explained.
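The context-detection step Cherian describes can be sketched as a tiny Viterbi decode over a two-state Hidden Markov Model. The states, transition, and emission probabilities below are purely illustrative and not taken from ParkGauge:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely sequence of hidden contexts given observed driving states."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor for state s at this time step.
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

states = ("driving", "parked")
start_p = {"driving": 0.8, "parked": 0.2}
trans_p = {"driving": {"driving": 0.9, "parked": 0.1},
           "parked": {"driving": 0.05, "parked": 0.95}}
# Illustrative emissions: walking strongly suggests the car is parked.
emit_p = {"driving": {"accelerating": 0.35, "braking": 0.35,
                      "turning": 0.25, "walking": 0.05},
          "parked": {"accelerating": 0.05, "braking": 0.05,
                     "turning": 0.05, "walking": 0.85}}

obs = ["accelerating", "turning", "braking", "walking", "walking"]
contexts = viterbi(obs, states, start_p, trans_p, emit_p)
```

Walking observations pull the decode toward “parked” even though no single reading is conclusive; this temporal smoothing over noisy per-sample classifications is exactly what the HMM provides.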

“Once we have the driving contexts and their time stamps, we can compute some temporal characteristics of parking garages and feed them into nonlinear regression methods to estimate the occupancy of the parking garage,” he said. “A primary reason why this works remarkably well is actually quite intuitive. The occupancy of a large parking garage is inversely correlated to the time taken to park there and the time spent in queuing or looking for an available space upon arrival. Combined with additional features or characteristics to learn from, we can obtain quite useful accuracy levels. Furthermore, as we acquire more data, we are able to progressively make better predictions on real-time parking occupancy for arriving drivers. We are also able to identify and report trends that are useful to businesses and parking operators alike.”
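Whatever form the correlation takes in a given garage, the estimation step reduces to fitting a curve from timing features to occupancy. Below is a minimal least-squares sketch; the sample points are synthetic, chosen only to exercise the fit, and real systems would use richer features and nonlinear models, as Cherian notes:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b (pure Python)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Synthetic (time_to_park_minutes, occupancy_percent) pairs -- illustrative only.
samples = [(1.0, 20), (2.0, 35), (3.0, 55), (4.0, 70), (5.0, 90)]
a, b = fit_line([s[0] for s in samples], [s[1] for s in samples])
predicted = a * 3.5 + b   # estimated occupancy when parking takes 3.5 minutes
```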

Following their 2016 study, the NTU Singapore researchers were hoping to test a large-scale deployment of ParkGauge, but Cherian said this hasn't come about due to internal factors and operational limitations. However, he said his team has explored methods to enhance the scalability of its crowdsensing solution while reducing the need to do a major data collection for deployments at new, unseen parking garages. All of this work is currently undergoing peer review, according to Cherian.

The team also recently published new research around another prototype crowdsensing system called ParkLoc, which is aimed at another parking-related pain point. “We have worked on yet another aspect of using infrastructure-free mobile sensing for indoor parking garages regarding the localization of a parked vehicle indoors,” Cherian said. “This work in particular is motivated by the often embarrassing problem of forgetting where we parked.”

Chris Wiltz is a Senior Editor at  Design News covering emerging technologies including AI, VR/AR, and robotics.

Today's Insights. Tomorrow's Technologies
ESC returns to Minneapolis, Oct. 31-Nov. 1, 2018, with a fresh, in-depth,  two-day educational program  designed specifically for the needs of today's embedded systems professionals. With four comprehensive tracks, new technical tutorials, and a host of top engineering talent on stage, you'll get the specialized training you need to create competitive embedded products. Get hands-on in the classroom and speak directly to the engineers and developers who can help you work faster, cheaper, and smarter.  Click here to register today!

Edge Computing Emerges as Megatrend in Automation

Tue, 2018-10-16 04:00

Edge computing technology is quickly becoming a megatrend in industrial control, offering a wide range of benefits for factory automation applications. While the major cloud suppliers continue to expand, new communications hardware and software are beginning to offer alternatives to the solutions previously used in factory automation.

A future application possibility that illustrates both the general concept and potential impact of edge computing in automation and control is edge data being visualized on a tablet in a brownfield application. (Image source: B&R Industrial Automation)

“The most important benefit [compared to existing solutions] will be interoperability—from the device level to the cloud,” John Kowal, director of business development for B&R Industrial Automation, told Design News. “So it’s very important that communications be standards-based, as you see with OPC UA TSN. ‘Flavors’ of Ethernet including ‘flavors’ of TSN should not be considered as providing interoperable edge communications, although they will function perfectly well in a closed system. Interoperability is one of the primary differences between previous solutions. OPC UA TSN is critical to connecting the edge device to everything else.”

Emerging Technology Solutions

Kowal added that, in legacy installations, gateways will be necessary to translate data from proprietary systems—ideally using OPC UA over standard Ethernet to the cloud. An edge computing device can also provide this gateway translation capability. “One of the benefits of edge technology is its ability to perform analytics and optimization locally, and therefore achieve faster response for more dynamic applications, such as adjusting line speeds and product accumulation to balance the line. You do not expect this capability of a gateway,” Kowal added.

Sari Germanos of B&R added that these comments about edge computing can also be equally applied to the cloud. “With edge, you are using fog instead of cloud with a gateway. Edge controllers need things like redundancy and backup, while cloud services do that for you automatically,” Germanos said. He also noted that cloud computing generally makes data readily accessible from anywhere in the world, while the choice of serious cloud providers for industrial production applications is limited. Edge controllers are likely to have more local features and functions, though the responsibility for tasks like maintenance and backup falls on the user.

Factory Automation Applications

Kowal noted that virtually any automation application would benefit from collecting and analyzing data at the edge. But the key questions are: what kind of data, which aspects of operations, and what expectations of analytics can deliver actionable productivity improvements? “If your goal is uptime, then you will want to collect data on machine health, such as bearing frequencies, temperatures, lubrication and coolant levels, increased friction on mechanical systems, gauging, and metrology,” he said.
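Monitoring bearing frequencies, for example, boils down to spectral analysis of vibration data. The sketch below runs a plain DFT on a synthetic signal in which a hypothetical 120 Hz fault component dominates a 50 Hz shaft-rotation component; a real deployment would use an FFT library and actual accelerometer data.

```python
import cmath
import math

def dominant_frequency(signal, sample_rate):
    """Return the strongest positive-frequency component via a plain DFT."""
    n = len(signal)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):  # skip the DC bin
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    return best_k * sample_rate / n

# Synthetic vibration: 50 Hz shaft rotation plus a stronger 120 Hz
# component standing in for a bearing defect frequency.
rate, n = 1024, 256
sig = [0.5 * math.sin(2 * math.pi * 50 * t / rate) +
       1.0 * math.sin(2 * math.pi * 120 * t / rate) for t in range(n)]
fault_freq = dominant_frequency(sig, rate)
```

Running this kind of analysis locally, rather than streaming raw vibration waveforms to the cloud, is the sort of workload that distinguishes an edge controller from a simple gateway.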

Some of the same logic applies to product quality. Machine wear and tear leads to reduced yield, which can in turn be expressed through OEE (overall equipment effectiveness) data gathering that may already be taking place—but that data is typically not captured at short intervals, automatically communicated, and analyzed.
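OEE itself is simply the product of three ratios: availability, performance, and quality. The numbers below are hypothetical shift data, used only to show the arithmetic:

```python
def oee(planned_min, downtime_min, ideal_cycle_s, total_count, good_count):
    """Overall Equipment Effectiveness = Availability x Performance x Quality."""
    run_time_min = planned_min - downtime_min
    availability = run_time_min / planned_min            # uptime share
    performance = (ideal_cycle_s * total_count) / (run_time_min * 60.0)
    quality = good_count / total_count                   # first-pass yield
    return availability * performance * quality

# Hypothetical shift: 480 min planned, 60 min down, 1.5 s ideal cycle,
# 14,000 parts produced, 13,300 of them good.
score = oee(480, 60, 1.5, 14000, 13300)  # about 0.69
```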

Capturing Production Capacity as well as Machine and Materials Availability

Beyond the maintenance and production efficiency aspects, Kowal said that users should consider capturing production capacity, machine and raw material availability, and constraint and output data. These will be needed to schedule smaller batch sizes, tier more effectively into ordering and production scheduling systems, and ultimately improve delivery times to customers.

Edge control technology also offers benefits compared to IoT gateway products. Kowal said that he’s never been big on splitting hairs with technology definitions—at least not from the perspective of results. But fundamentally, brownfield operators tend to want gateways to translate between their installed base of equipment, which may not even be currently networked, and the cloud. Typically, these are boxes equipped with legacy communications interfaces that act as a gateway to get data from the control system without a controls retrofit, which can be costly, risky, and even ineffective. 

“We have done some work in this space, though B&R’s primary market is in new equipment,” Kowal added. “In that case, you have many options how to implement edge computing on a new machine or production line. You can use smart sensors and other devices direct to cloud or to an edge controller. The edge controller or computing resource can take many form factors. It can be a machine controller, an industrial PC that’s also used for other tasks like HMI or cell control, a small PLC used within the machine, or a standalone dedicated edge controller.”

Boosted Memory, Processing, and Connections

Germanos noted that industrial controllers were not designed to be edge controllers; they are typically designed to control one machine versus a complete production line.  Edge controllers have built-in redundancy to maintain production line operation.

“If I was designing a new machine, cell, line, or facility, I would set up the machine controllers as the edge controller/computers rather than add another piece of control hardware or gateway,” Germanos said. “Today, you can get machine controllers with plenty of memory, processing power, and network connections. I would not select a control platform unless it supports OPC UA, and I would strongly urge selecting a technology provider that supports the OPC UA TSN movement known as “The Shapers,” so that as this new standard for Industrial Ethernet evolves, I would be free from the ‘flavors’ of Ethernet.”

His recommendation is to use a platform that runs a real-time operating system for the machinery on one core or, using a Hypervisor, whatever other OS might be appropriate for any additional applications that run on Windows or Linux.

Al Presher is a veteran contributing writer for Design News, covering automation and control, motion control, power transmission, robotics, and fluid power.


Smartphone-Powered Device Paves Way for Wearable Ultrasound

Tue, 2018-10-16 03:00

Mobile devices and advanced sensing technology have made it possible to design medical diagnostic tools—such as blood tests and vital-sign monitoring—that are portable, less expensive, and easier to use than previous technology. Now, engineers at the University of British Columbia (UBC) have added ultrasound to the list with the development of a new ultrasound transducer, or probe, that is portable, wearable, and powered by a smartphone.

The transducer, which is about the size of a Band-Aid, could be made for as little as $100. This dramatically reduces the cost usually required to develop such technology, said Carlos Gerardo, a Ph.D. candidate in electrical and computer engineering at UBC, who was part of the research.

“The key significance for our work is the fact that we were able to fabricate high-quality ultrasound transducers at a very low cost,” he told Design News. “In the current market, you are able to either get a high-quality transducer at a high price or a low-quality transducer at a low price, creating high-resolution images or low-resolution images, respectively.”

In terms of quality, the transducer developed by Gerardo and his colleagues at UBC can detect ultrasound echoes coming from deep tissue—a capability on par with some of today's top machines, he said.

Engineers at the University of British Columbia (including Carlos Gerardo, pictured) have developed a new ultrasound transducer, or probe, that could dramatically lower the cost of ultrasound scanners to as little as $100. (Image source: UBC)

Vibrating Drums

Conventional ultrasound scanners use piezoelectric crystals to generate and receive the sound waves that a computer converts into sonograms of the inside of the body. To make their technology less expensive to manufacture, the UBC team replaced the crystals with tiny vibrating drums made of polymer resin, called polyCMUTs—polymer capacitive micromachined ultrasound transducers—which are cheaper to produce, Gerardo said.

“The diameter of each of these drums is around 1/10th of a millimeter—the same as the diameter of a human hair,” he explained. “We have thousands of these drums in our transducer that vibrate at the same time and produce ultrasound waves.”

While the concept of using these vibrating drums for this purpose is not new, it typically has been expensive to fabricate the drums—involving a process that uses semiconductors, metals, toxic gases, and solvents, Gerardo said. “Our approach is different in the sense that we use plastic-like materials instead of semiconductors,” he explained.

The team used a plastic-like polymer resin—a photosensitive polymer called SU-8—to significantly reduce the fabrication costs, using only some basic equipment found in even the most basic microfabrication lab, Gerardo said. This gave them an unexpected positive result. “It turns out that, by using plastic-like materials, the sensitivity of your device is boosted,” he said. “This enabled the creation of high-quality ultrasound transducers for biomedical applications at very low costs.”

Moreover, the transducer developed by the team needs just 10 volts to operate, so it can be powered by a smartphone, which makes it suitable for use in remote or low-power locations, researchers said. The researchers published a paper on their work in the journal Nature Microsystems & Nanoengineering.

Gerardo and the UBC team envision their technology potentially making medical diagnosis much quicker and easier for even family physicians—possibly even saving people’s lives through early detection of disease, he said.

“Many of people’s top killers—breast cancer, prostate cancer, liver diseases, and cardiac problems—can be detected using ultrasound. And in all cases, early detection could mean the difference between life or death,” he said. “Now, imagine that your family doctor could immediately confirm the presence of breast cancer or a heart problem in the same room using a portable and ultra-cheap ultrasound scanner connected to a smartphone.”

The team plans to continue its work to turn the technology into a commercial product, and is currently seeking industrial partners to help shorten the timeframe it takes to go to market, Gerardo said.

Elizabeth Montalbano is a freelance writer who has written about technology and culture for 20 years. She has lived and worked as a professional journalist in Phoenix, San Francisco, and New York City. In her free time, she enjoys surfing, traveling, music, yoga, and cooking. She currently resides in a village on the southwest coast of Portugal.


Using 3D Printing to Blend Biology and Electronics

Mon, 2018-10-15 06:00

Interfacing active devices with biology in 3D printing could impact a variety of fields, from regenerative bioelectronics and smart prosthetics to biomedical devices and human-machine interfaces. Researchers at the University of Minnesota are exploring 3D printing with biology from the molecular scale of DNA and proteins to the macroscopic scale of tissues and organs. The goal is to print three-dimensional biological material that is soft and stretchable as well as temperature sensitive. 

At the University of Minnesota, Michael McAlpine’s group is working with a wide range of biomaterials in 3D printing. (Image source: University of Minnesota)

“We build 3D printers that go beyond hard plastics. Hard plastics have limited abilities. We’ve been asked to expand capabilities to include cells, soft materials, or electronic materials,” said Michael McAlpine, Benjamin Mayhugh associate professor of mechanical engineering at the University of Minnesota. “We’re working on a whole range of possibilities, from bionic organs to 3D printed boards with tissue engineering.”

McAlpine will present the keynote address, "3D Printing Functional Materials & Devices," at the Embedded Systems Conference in Minneapolis on Oct. 31. In that talk, he will explore the possibilities of using 3D printing capabilities with biological materials.

Interweaving Biology and Electronics

The ability to three-dimensionally interweave biological and functional materials could lead to the creation of devices that possess unique and compelling geometries, properties, and functionalities. Interfacing these active devices with biology in a 3D print environment could impact a variety of fields including regenerative bioelectronics, smart prosthetics, biomedical devices, and human-machine interfaces. “We’re working to enable printers to print different materials on the same platform. For the first time, it would allow anyone to fabricate or architect diverse materials like biological or electronics that are interwoven into arbitrary shapes,” said McAlpine. “This includes advanced biomedical objects that are custom-fitted or designed for function. The 3D printer is the tool that enables the next revolution in biomedical objects.”

Biology—from the molecular scale of DNA and proteins to the macroscopic scale of tissues and organs—is three-dimensional, often soft and stretchable, and temperature sensitive. This renders most biological platforms incompatible with the fabrication and materials processing methods that have been developed for functional electronics, which are typically planar, rigid, and brittle. McAlpine is working to overcome these limitations by experimenting with new materials. “Since there are so many types of materials, you have a whole host of problems from energy harvesting to biomedical and electronics,” he said.

Personalized 3D Printing Biological Objects

The idea is to use an additive manufacturing technology that offers freeform, autonomous fabrication. This approach addresses a number of possibilities: (1) using 3D printing and imaging for personalized, multifunctional device architectures; (2) employing nano-inks as an enabling route for introducing diverse material functionality; and (3) 3D printing a range of functional inks to enable the interweaving of a diverse palette of materials, from biological to electronic.

3D printing offers a multiscale platform that can incorporate functional nanoscale inks. This includes the printing of microscale features and, ultimately, the creation of macroscale devices. This blending of 3D printing, functional materials, and “living” platforms may enable next-generation 3D printed devices from a one-pot printer. “We’ve shown a lot in piecemeal demonstrations,” said McAlpine. “We’re integrating multi functionalities in complicated ways for advanced therapeutics and advanced prosthetics. This is an exciting future that’s getting more and more complicated.”

McAlpine has obtained funding for the research from a wide range of interests. “There are lots of different applications and agencies that want to support this. We’re getting funds from all over—from the NIH (National Institutes of Health), to the U.S. Army, to individual companies,” said McAlpine. “It’s the next wave in the materials revolution. It’s well beyond hard plastics. So we’re working with different types of organizations for funding.”

Rob Spiegel has covered automation and control for 17 years, 15 of them for Design News. Other topics he has covered include supply chain technology, alternative energy, and cyber security. For 10 years, he was owner and publisher of the food magazine Chile Pepper.

SAVE THE DATE FOR PACIFIC DESIGN & MANUFACTURING 2019!    
Pacific Design & Manufacturing, North America’s premier conference that connects you with thousands of professionals across the advanced design & manufacturing spectrum, will be back at the Anaheim Convention Center February 5-7, 2019! Don’t miss your chance to connect and share your expertise with industry peers during this can't-miss event. Click here to pre-register for the event today!

 

Magic Leap Looks for Its Enterprise Footing

Mon, 2018-10-15 05:00
Engineers from Onshape examine a 3D CAD model through the Magic Leap One. (Image source: Onshape)

Should Magic Leap be focusing more on the enterprise and B2B market?

Depending on your level of enthusiasm, there was plenty to be either very excited or very cynical about during the opening keynote of Magic Leap's first-ever LEAP Developer Conference last week. On one hand, there was company CEO Rony Abovitz attempting to channel the spirit of Steve Jobs while sketching a future for “spatial computing,” in which all mixed reality (MR) devices are connected in a vast IoT-like network called a “Verse” and developers enjoy Burning Man-like festivals geared around MR. There was also discussion of a commitment to build a diverse and inclusive developer landscape, with the company promising initiatives to attract female developers and developers of color. And if you're into cute animals, you got to see noted science fiction author Neal Stephenson talk about virtual goats.

Underneath all of the excitement, however, is a sobering dose of reality. Aside from failing to deliver on the lofty promises it made before the release of the Magic Leap One, the company has also been criticized for hardware that does nothing to “leap” beyond what's currently available on the market. (That pun will never get old, ever!) Most notably, Oculus founder Palmer Luckey wrote a scathing review of the Magic Leap One, calling it a “Tragic Heap.”

On the application side, there hasn't been much to get engineers or enterprise professionals excited. Most of the keynote focused on gaming, social, and entertainment applications.

However, the silver lining is that Magic Leap is finally beginning to demonstrate some potential for the enterprise. While enterprise announcements made up perhaps 15 to 20 minutes of the over three-hour-long opening keynote, what was discussed points to a possible future for Magic Leap in offices and even on factory floors.

The most promising announcement was a deal with Onshape—the creators of a cloud-based, multiplatform CAD system—that will bring Onshape's CAD platform to Magic Leap. Speaking during the keynote, Jon Hirschtick, CEO and co-founder of Onshape, offered a brief glimpse of the technology. Given how new the product is, however, it was not made clear whether what was shown was an actual product demo or, as with many Magic Leap demos, a proof of concept. “We're ready to take a huge leap in the CAD era,” Hirschtick told the audience, later adding, “ …I truly believe mixed reality and CAD can improve the way every product on Earth is designed.”

VR has been used in CAD design in some form for decades, but it is seeing renewed interest with the availability of new AR/VR hardware. Nvidia, for example, has been developing a system it calls Holodeck for design collaboration in virtual environments, and CAD tools for VR from Autodesk, SolidWorks, and MakeVR are already available.

Hirschtick, who was also a founder of SolidWorks, said the advantage of mixed reality hardware like Magic Leap is that it allows for design in the context of the real world, not just in a scaled virtual space. If engineers wanted to see how well a machine part being designed fits, they could use MR to overlay that part onto the actual machine rather than onto a virtual model of it.
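The overlay Hirschtick describes comes down to composing rigid-body transforms: the headset tracks an anchor on the real machine, and the virtual part is rendered at a pose defined relative to that anchor. A minimal sketch of the idea, with hypothetical frame names and poses (not Magic Leap's or Onshape's actual API):

```python
import numpy as np

def pose(rot_z_deg, t):
    """4x4 homogeneous transform: rotation about the z-axis plus a translation."""
    a = np.radians(rot_z_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0.0],
                 [np.sin(a),  np.cos(a), 0.0],
                 [0.0,        0.0,       1.0]]
    T[:3, 3] = t
    return T

# Hypothetical poses: where the headset sees the machine's tracked anchor
# in world coordinates, and where the CAD part sits relative to that anchor
# according to the design.
world_T_anchor = pose(90, [2.0, 0.5, 1.0])   # tracked real-world anchor
anchor_T_part = pose(0, [0.1, 0.0, 0.25])    # part's offset in the CAD model

# Composing the two gives the world pose at which to render the virtual
# part so it lands on the physical machine.
world_T_part = world_T_anchor @ anchor_T_part
print(world_T_part[:3, 3])
```

Fitting a part into a virtual model instead would simply swap `world_T_anchor` for a pose tracked against the model; the composition is the same.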

As exciting as consumer entertainment applications may be, Magic Leap—like many of its competitors—could be quietly heading toward the enterprise: a less exciting market, but arguably a more lucrative one in the long term. Products like Microsoft's Hololens and offerings from companies like Upskill and Vuzix have eschewed consumer markets altogether in favor of the enterprise. Those who track Gartner's annual Hype Cycle report may have noted the absence of VR, suggesting the technology is deep into the “Plateau of Productivity,” the point where a technology goes mainstream and its value and relevance to the market are clear. Augmented reality is headed that way, but it remains in the “Trough of Disillusionment,” according to Gartner: the point where failed use cases emerge and steer a technology in a more productive direction.

Aside from Onshape, Trimble—maker of the SketchUp design software—and Wacom have announced Magic Leap functionality for their software and products.

Weeks before the Leap Conference, German medtech company Brainlab announced it was partnering with Magic Leap to develop a version of Magic Leap's technology platform targeted specifically at medical applications. Essentially, it will be a medtech-focused operating system for Magic Leap.

“Truly unique and first-of-its-kind, our operating system will enable anyone to integrate multi-dimensional virtual data into real-world clinical workflows, driving precision, productivity, and an intuitive user experience,” Stefan Vilsmeier, founder and CEO of Brainlab, said in a press statement. “We will deploy mixed reality and machine learning to capture vital data that will allow physicians to optimize their procedures for every patient.”

The platform will combine data management, cloud computing, visualization, and data pre-processing software with Magic Leap’s spatial computing to support management, visualization, augmentation, and navigation of diagnostic imaging data for a broad range of clinical procedures. It will also be open to third-party developers looking to build medical applications for Magic Leap. Brainlab said the first release will focus on enabling the planning and simulation of in-office interventional procedures. The company hopes to quickly expand into intensive care, radiology, and even surgical applications soon after.

Watch the full Magic Leap Conference keynote below. The enterprise application discussion begins around the 2:30:00 mark.

 

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, and robotics.

Today's Insights. Tomorrow's Technologies
ESC returns to Minneapolis, Oct. 31-Nov. 1, 2018, with a fresh, in-depth, two-day educational program designed specifically for the needs of today's embedded systems professionals. With four comprehensive tracks, new technical tutorials, and a host of top engineering talent on stage, you'll get the specialized training you need to create competitive embedded products. Get hands-on in the classroom and speak directly to the engineers and developers who can help you work faster, cheaper, and smarter. Click here to register today!

New Hydrogels Are Capable of Complex Movements

Mon, 2018-10-15 04:00

A new process applies force to the surfaces of 2D hydrogels in a space- and time-controlled way. The hydrogel materials, developed by a team at the University of Texas at Arlington, can be programmed to expand and contract like actual human soft tissues. This capability paves the way for new applications in soft robotics, medicine, and other fields where programmable materials are useful.

This programming enables the formation of complex 3D shapes and motions that mimic how real human soft tissue moves, said Kyungsuk Yum, an assistant professor in the Materials Science and Engineering Department at UTA, who led the research. “We studied how biological organisms use continuously deformable soft tissues, such as muscle, to make shapes, change shape, and move because we were interested in using this type of method to create dynamic 3D structures,” he said in a UTA news release.

It has historically been difficult to replicate these types of movements in man-made materials, which is why the team’s work is significant and could potentially transform the way that soft engineering systems or devices are designed and fabricated, Yum said. The materials used in the research are temperature-responsive hydrogels with locally programmable degrees and rates of swelling and shrinking. It’s these properties that allow the researchers to program how the hydrogels expand or contract in response to temperature change, Yum said in the release.
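The “locally programmable degrees and rates of swelling” can be pictured with a toy first-order model (my illustration, not the UTA team's actual material model): each printed region relaxes toward a temperature-dependent equilibrium swelling ratio at its own programmed rate. The transition temperature of 32 °C below is an assumption, typical of PNIPAM-type thermoresponsive gels:

```python
import numpy as np

LCST = 32.0  # deg C; assumed PNIPAM-like transition temperature

def q_equilibrium(T):
    """Toy equilibrium swelling ratio: swollen below the LCST, collapsed above."""
    return 1.0 + 2.0 / (1.0 + np.exp(T - LCST))

def swell(T, k, q0=1.0, dt=0.1, steps=600):
    """First-order relaxation dQ/dt = k * (Q_eq(T) - Q), integrated with Euler."""
    q = q0
    for _ in range(steps):
        q += dt * k * (q_equilibrium(T) - q)
    return q

# Two regions printed with different rate constants k reach the same
# equilibrium swelling but on different timescales. That programmed
# mismatch in rates is what can drive sequential, shape-changing motion.
fast = swell(T=25.0, k=1.0)
slow = swell(T=25.0, k=0.05)
print(fast, slow)
```

After the same elapsed time, the fast region has essentially equilibrated while the slow region is still mid-swell, so the sheet bends first one way and then relaxes as the slow region catches up.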

These bio-inspired 3D structures were created in the lab of Kyungsuk Yum, an assistant professor in the Materials Science and Engineering Department of the University of Texas at Arlington (UTA). (Image source: UTA)

4D Printing

Researchers used a novel digital-light 4D-printing method that Yum developed to create movement in the materials. 4D printing—an emerging research field—goes beyond 3D printing in that it uses time as the fourth dimension. In the process, it mathematically programs the structures' shrinking and swelling to form 3D shapes—such as saddle shapes, wrinkles, and cones—and their direction of movement. The method allows the team to print multiple 3D structures simultaneously in a one-step process, Yum said.
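One standard way to see how a programmed swelling pattern selects a 3D shape, sketched here as an illustration rather than the paper's actual algorithm: isotropic in-plane swelling by a factor Ω(x, y) gives a flat sheet the conformal metric Ω²(dx² + dy²), whose Gaussian curvature is K = −Ω⁻² ∇² ln Ω. Regions of K > 0 bow into domes and cones; K < 0 produces saddles and wrinkles.

```python
import numpy as np

def gaussian_curvature(omega, h):
    """Finite-difference K = -(1/omega^2) * Laplacian(ln omega) on a square grid."""
    log_om = np.log(omega)
    lap = (np.roll(log_om, 1, 0) + np.roll(log_om, -1, 0)
           + np.roll(log_om, 1, 1) + np.roll(log_om, -1, 1)
           - 4.0 * log_om) / h**2
    return -lap / omega**2

# Example swelling program: a Gaussian bump, so the sheet swells most at
# the center. That makes K > 0 there, driving a dome-like buckled shape.
n, h = 101, 0.05
x = (np.arange(n) - n // 2) * h
X, Y = np.meshgrid(x, x)
omega = 1.0 + 0.5 * np.exp(-(X**2 + Y**2))

K = gaussian_curvature(omega, h)
print(K[n // 2, n // 2])  # positive at the center; near zero at the flat edges
```

Inverting this relation, i.e. choosing Ω(x, y) to hit a target curvature field, is the mathematical programming step; the digital-light printing then writes that swelling field into the gel pixel by pixel.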

Also unique to this work, Yum said, is a set of design rules he developed based on the concept of modularity to create even more complex structures. These include bio-inspired structures with programmed sequential motions, making the shapes dynamic so they can move through space. Yum can also control the speed at which the structures change shape, and thus create complex, sequential motion that hasn’t been possible before, he added.

“Unlike traditional additive manufacturing, our digital light 4D-printing method allows us to print multiple, custom-designed 3D structures simultaneously,” he said. “Most importantly, our method is very fast, taking less than 60 seconds to print, and thus highly scalable.” Researchers published a video of the process online. They also published a paper on the research in the journal Nature Communications.

Potential applications for the technology include bio-inspired soft robotics and artificial muscles that can change shape or move in response to external signals just as human muscles do, Yum said. The research also could be used to develop programmable matter and other programmable materials.

Elizabeth Montalbano is a freelance writer who has written about technology and culture for 20 years. She has lived and worked as a professional journalist in Phoenix, San Francisco, and New York City. In her free time, she enjoys surfing, traveling, music, yoga, and cooking. She currently resides in a village on the southwest coast of Portugal.
