Robotic assistants may adapt to humans in the factory, thanks to new algorithm
In today's manufacturing plants, the division of labor between humans and robots is quite clear: Large, automated robots are typically cordoned off in metal cages, manipulating heavy machinery and performing repetitive tasks, while humans work in less hazardous areas on jobs requiring finer detail.
But according to Julie Shah, the Boeing Career Development Assistant Professor of Aeronautics and Astronautics at MIT, the factory floor of the future may host humans and robots working side by side, each helping the other in common tasks. Shah envisions robotic assistants performing tasks that would otherwise hinder a human's efficiency, particularly in airplane manufacturing.
"If the robot can provide tools and materials so the person doesn't have to walk over to pick up parts and walk back to the plane, you can significantly reduce the idle time of the person," says Shah, who leads the Interactive Robotics Group in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). "It's really hard to make robots do careful refinishing tasks that people do really well. But providing robotic assistants to do the non-value-added work can actually increase the productivity of the overall factory."
A robot working in isolation simply has to follow a set of preprogrammed instructions to perform a repetitive task. But working with humans is a different matter: For example, each mechanic working at the same station at an aircraft assembly plant may prefer to work differently, and Shah says a robotic assistant would have to effortlessly adapt to an individual's particular style to be of any practical use.
Now Shah and her colleagues at MIT have devised an algorithm that enables a robot to quickly learn an individual's preference for a certain task, and adapt accordingly to help complete the task. The group is using the algorithm in simulations to train robots and humans to work together, and will present its findings at the Robotics: Science and Systems Conference in Sydney in July.
"It's an interesting machine-learning human-factors problem," Shah says. "Using this algorithm, we can significantly improve the robot's understanding of what the person's next likely actions are."
As a test case, Shah's team looked at spar assembly, a process of building the main structural element of an aircraft's wing. In the typical manufacturing process, two pieces of the wing are aligned. Once in place, a mechanic applies sealant to predrilled holes, hammers bolts into the holes to secure the two pieces, then wipes away excess sealant. The entire process can be highly individualized: For example, one mechanic may choose to apply sealant to every hole before hammering in bolts, while another may like to completely finish one hole before moving on to the next. The only constraint is the sealant, which dries within three minutes.
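The two work styles described above differ only in ordering, and both must respect the drying-time constraint. As a rough sketch (the task names, step duration, and checker are invented for illustration and are not from the study), one can verify whether a given sequence of seal and bolt steps keeps every hole within the three-minute window:

```python
# Hypothetical sketch of the spar-assembly timing constraint.
# Task names and the 30-second step duration are illustrative assumptions.

SEALANT_DRY_TIME = 180  # seconds: the article says sealant dries within three minutes

def sealant_ok(sequence, step_duration=30):
    """Check that each hole is bolted before its sealant dries.
    `sequence` is a list of ("seal" | "bolt", hole_id) steps
    executed back to back, each taking `step_duration` seconds."""
    sealed_at = {}
    clock = 0
    for action, hole in sequence:
        clock += step_duration
        if action == "seal":
            sealed_at[hole] = clock
        elif action == "bolt":
            if clock - sealed_at[hole] > SEALANT_DRY_TIME:
                return False  # sealant dried before the bolt went in
    return True

# One mechanic seals every hole before bolting any of them:
batch = [("seal", h) for h in range(3)] + [("bolt", h) for h in range(3)]
# Another finishes each hole completely before moving on:
per_hole = [(a, h) for h in range(3) for a in ("seal", "bolt")]

print(sealant_ok(batch))     # True: three holes fit inside the window
print(sealant_ok(per_hole))  # True

# But batching too many holes violates the constraint:
batch_7 = [("seal", h) for h in range(7)] + [("bolt", h) for h in range(7)]
print(sealant_ok(batch_7))   # False: the first hole's sealant dries first
```

With only a few holes, both styles are valid, which is exactly why the robot must learn which ordering an individual mechanic prefers rather than enforce one of its own.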
The researchers say robots such as FRIDA, designed by Swiss robotics company ABB, may be programmed to help in the spar-assembly process. FRIDA is a flexible robot with two arms capable of a wide range of motion that Shah says can be manipulated to either fasten bolts or paint sealant into holes, depending on a human's preferences.
To enable such a robot to anticipate a human's actions, the group first developed a computational model in the form of a decision tree. Each branch along the tree represents a choice that a mechanic may make: for example, continue to hammer a bolt after applying sealant, or apply sealant to the next hole?
"If the robot places the bolt, how sure is it that the person will then hammer the bolt, or just wait for the robot to place the next bolt?" Shah says. "There are many branches."
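A minimal sketch of such a decision tree might map each state of the task to its possible branches, each weighted by how likely the mechanic is to take it. The states and probabilities below are invented for illustration; the paper's actual model is learned from observation, not hand-coded:

```python
# Toy decision tree over a mechanic's next action. State names and
# branch probabilities are illustrative assumptions, not from the paper.

decision_tree = {
    "sealed_hole": {              # mechanic has just applied sealant
        "hammer_same_hole": 0.7,  # finish this hole first
        "seal_next_hole": 0.3,    # move on and batch the hammering
    },
    "bolt_placed": {              # robot has just placed a bolt
        "hammer_bolt": 0.6,
        "wait_for_next_bolt": 0.4,
    },
}

def most_likely_action(state):
    """Return the branch with the highest estimated probability."""
    branches = decision_tree[state]
    return max(branches, key=branches.get)

print(most_likely_action("sealed_hole"))  # hammer_same_hole
print(most_likely_action("bolt_placed"))  # hammer_bolt
```

The robot's prediction problem is then choosing the branch whose probability is highest given everything it has observed so far.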
Using the model, the group performed human experiments, training a laboratory robot to observe an individual's chain of preferences. Once the robot learned a person's preferred order of tasks, it then quickly adapted, either applying sealant or fastening a bolt according to a person's particular style of work.
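One simple way to picture this learning step, as a hedged stand-in for the group's actual algorithm, is a model that counts observed state-to-action transitions and predicts the most frequent choice. Everything here (class, state names, counts) is an illustrative assumption:

```python
from collections import defaultdict

# Illustrative preference learner: count observed (state -> next action)
# transitions, then predict the mechanic's most frequent choice.
# This is a stand-in sketch, not the algorithm from the paper.

class PreferenceModel:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, state, action):
        """Record one observed choice the mechanic made in `state`."""
        self.counts[state][action] += 1

    def predict(self, state):
        """Return the most frequently observed action, or None if unseen."""
        branches = self.counts.get(state)
        if not branches:
            return None
        return max(branches, key=branches.get)

model = PreferenceModel()
# Watch a mechanic who finishes each hole before moving on:
for _ in range(5):
    model.observe("sealed_hole", "hammer_same_hole")
model.observe("sealed_hole", "seal_next_hole")  # one exception

print(model.predict("sealed_hole"))  # hammer_same_hole
```

After only a handful of observations, the predicted action matches the mechanic's dominant style, which is the behavior the robot then uses to stage the right tool or part.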
Working side by side
Shah says in a real-life manufacturing setting, she envisions robots and humans undergoing an initial training session off the factory floor. Once the robot learns a person's work habits, its factory counterpart can be programmed to recognize that same person and initialize the appropriate task plan. Shah adds that many workers in existing plants wear radio-frequency identification (RFID) tags, a potential way for robots to identify individuals.
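The hand-off from the training session to the factory floor could amount to a profile lookup keyed by the worker's RFID tag. The tag values and plan contents below are invented for this sketch:

```python
# Hypothetical lookup of a learned task plan by RFID tag.
# Tag IDs and plan contents are illustrative assumptions.

task_plans = {
    "rfid:0A1B2C": ["seal_all_holes", "hammer_all_bolts"],  # batch-style worker
    "rfid:3D4E5F": ["seal_hole", "hammer_bolt"],            # per-hole worker
}

def initialize_plan(tag):
    """Load the worker's learned plan, or fall back to a default."""
    return task_plans.get(tag, ["default_plan"])

print(initialize_plan("rfid:0A1B2C"))  # ['seal_all_holes', 'hammer_all_bolts']
```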
Steve Derby, associate professor and co-director of the Flexible Manufacturing Center at Rensselaer Polytechnic Institute, says the group's adaptive algorithm moves the field of robotics one step closer to true collaboration between humans and robots.
"The evolution of the robot itself has been way too slow on all fronts, whether on mechanical design, controls or programming interface," Derby says. "I think this paper is important; it fits in with the whole spectrum of things that need to happen in getting people and robots to work next to each other."
Shah says robotic assistants may also be programmed to help in medical settings. For instance, a robot may be trained to monitor lengthy procedures in an operating room and anticipate a surgeon's needs, handing over scalpels and gauze, depending on a doctor's preference. While such a scenario may be years away, robots and humans may eventually work side by side, with the right algorithms.
"We have hardware, sensing, and can do manipulation and vision, but unless the robot really develops an almost seamless understanding of how it can help the person, the person's just going to get frustrated and say, 'Never mind, I'll just go pick up the piece myself,'" Shah says.
This research was supported in part by Boeing Research and Technology and conducted in collaboration with ABB.
This story is republished courtesy of MIT News (http://web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.