Let’s say you program one robot to do a certain task, and another to do something related. How do you keep them from getting in each other’s way? Rather than walling them off from each other, a better idea is to let the robots work in the same space while synchronizing their actions in real time. That’s the robot-collaboration problem the MIT team is working on.
The research deals with a mathematical framework called Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs). These models describe how a multi-agent system behaves, and they aren’t just for robots: any autonomous networked system applies. The problem the MIT researchers are seeking to solve is one of uncertainty: the more agents in a system, the more complex and prone to breakdown it becomes.
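To give a rough sense of what such a model contains, here is a minimal sketch in Python (with hypothetical names, not anything from the MIT team’s code) of the pieces a Dec-POMDP specifies. Each agent only gets its own observations, but transitions, observations, and rewards all depend on the joint action of the whole team, which is exactly why the problem blows up as agents are added.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# Sketch of a Dec-POMDP's ingredients (illustrative only).
@dataclass
class DecPOMDP:
    agents: List[str]                    # e.g. ["heli_1", "heli_2"]
    states: List[str]                    # global states of the world
    actions: Dict[str, List[str]]        # per-agent action sets
    observations: Dict[str, List[str]]   # per-agent observation sets
    # P(next state | state, joint action)
    transition: Callable[[str, Tuple[str, ...]], Dict[str, float]]
    # P(joint observation | next state, joint action)
    observation_fn: Callable[[str, Tuple[str, ...]], Dict[Tuple[str, ...], float]]
    # Shared team reward R(state, joint action)
    reward: Callable[[str, Tuple[str, ...]], float]
```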
The MIT team published a paper showing how Dec-POMDPs could be used to bring together existing robotic control systems to accomplish tasks cooperatively, and they tested the algorithms with remote-control helicopters. The test involved a number of base stations scattered across a room, with package delivery locations nearby. The helicopters had to cross each other’s paths to make all the “deliveries” without causing a crash. Before the robots start flying around, there’s an offline planning phase where each agent maps out a theoretical way of accomplishing the task. From there, it’s up to the graphs.
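Before getting to those graphs, here is a toy illustration of what that offline phase is doing; this is not the paper’s algorithm, just a made-up example where two helicopters each pick a time slot to cross a shared waypoint, and the plan is searched once before anything takes off.

```python
from itertools import product

helicopters = ["heli_1", "heli_2"]
time_slots = [1, 2]   # when each helicopter crosses the shared waypoint

def team_score(assignment):
    """Reward early deliveries, but a collision (same slot) is worthless."""
    if len(set(assignment.values())) < len(assignment):
        return float("-inf")             # both crossed at once: crash
    return -sum(assignment.values())     # earlier crossings score higher

# Offline phase: search over joint plans once, before execution.
best = max(
    (dict(zip(helicopters, slots)) for slots in product(time_slots, repeat=2)),
    key=team_score,
)
print(best)   # e.g. {'heli_1': 1, 'heli_2': 2} -- each robot then flies its own slot
```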
The algorithms break the situation down into two graphs. One captures the set of potential micro-actions, and the other represents transitions between macro-actions in light of observations from all the networked agents. The result is a graph of the probability that an agent (a robot, in this case) should perform a particular action at a particular time. This process is repeated for each action until all the drones have made it safely to where they need to go.
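A hedged sketch of what such an observation-driven policy graph might look like in code is below: nodes carry a macro-action, and incoming observations pick (with some probability) which node comes next. All node names, observations, and probabilities here are invented for illustration, not taken from the paper.

```python
import random

# Toy policy graph: node -> macro-action plus observation-conditioned transitions.
policy_graph = {
    "go_to_base": {"macro_action": "fly_to_base",
                   "transitions": {"package_ready": [("deliver", 1.0)],
                                   "path_blocked":  [("hover", 0.8), ("go_to_base", 0.2)]}},
    "hover":      {"macro_action": "hold_position",
                   "transitions": {"path_clear":    [("go_to_base", 1.0)]}},
    "deliver":    {"macro_action": "fly_to_delivery_point",
                   "transitions": {"delivered":     [("go_to_base", 1.0)]}},
}

def step(node, observation):
    """Pick the next node for an observation, weighted by the transition probabilities."""
    options = policy_graph[node]["transitions"].get(observation, [(node, 1.0)])
    nodes, probs = zip(*options)
    return random.choices(nodes, weights=probs, k=1)[0]

# One helicopter reacting to a stream of observations.
node = "go_to_base"
for obs in ["path_blocked", "path_clear", "package_ready", "delivered"]:
    print(node, "->", policy_graph[node]["macro_action"])
    node = step(node, obs)
```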
See full story on extremetech.com
Image courtesy of extremetech.com