MIT Visualization Can Read Minds of Robots in Real Time


Video screenshot courtesy of Melanie Gonick/MIT

Increasingly autonomous robots and advances in technology are bringing package-delivering drones and self-driving cars closer to reality. Robotic concepts like these involve a great deal of complexity at the algorithmic level; building robots we can trust to reliably make "decisions" without risking human safety is certainly a tall order.

The actions of a robot should be anything but random, particularly in applications like these. Yet many visitors to MIT's research lab found it difficult to understand why a robot chose a particular action during demonstrations of robotic missions.

Graduate student Shayegan Omidshafiei agreed, saying that

"some of the decisions almost seemed random."

In an effort to address this common confusion, Omidshafiei teamed up with Ali-akbar Agha-mohammadi, a postdoc in MIT's Aerospace Controls Lab. Together with their colleagues, the pair developed a visualization system they have dubbed "measurable virtual reality," which lets people effectively read a robot's mind in real time.

The system combines ceiling-mounted projectors, motion-capture tracking, and animation software. The result is dots and lines of light projected onto the floor: a visual map of the robot's possible routes and its perception of any moving obstacles, such as pedestrians, standing between it and its destination.
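MIT has not published the projection software itself, but the core idea of drawing a robot's internal plan on the floor can be sketched. The snippet below is only illustrative: the homography `FLOOR_TO_PROJECTOR` and the helper names are hypothetical stand-ins for a calibrated projector-to-floor mapping, not MIT's actual implementation.

```python
import numpy as np

# Hypothetical 3x3 homography mapping floor coordinates (metres) to projector
# pixels; in practice it would come from calibrating the projector against
# motion-capture markers placed on the floor.
FLOOR_TO_PROJECTOR = np.array([
    [120.0,   0.0, 640.0],
    [  0.0, 120.0, 360.0],
    [  0.0,   0.0,   1.0],
])

def floor_to_pixels(points_xy):
    """Map N x 2 floor-plane points (metres) to projector pixel coordinates."""
    pts = np.hstack([points_xy, np.ones((len(points_xy), 1))])  # homogeneous coords
    pix = (FLOOR_TO_PROJECTOR @ pts.T).T
    return pix[:, :2] / pix[:, 2:3]  # divide out the homogeneous scale

def render_plan(planned_path, perceived_obstacles):
    """Return the 'dots and lines' to draw: the planned route as a polyline and
    each perceived obstacle as a dot, all in projector pixels."""
    return {
        "path_polyline": floor_to_pixels(np.asarray(planned_path, dtype=float)),
        "obstacle_dots": floor_to_pixels(np.asarray(perceived_obstacles, dtype=float)),
    }

# Example: a robot planning to move 2 m forward while tracking a pedestrian at (1.0, 0.4).
overlay = render_plan(planned_path=[(0.0, 0.0), (1.0, 0.1), (2.0, 0.0)],
                      perceived_obstacles=[(1.0, 0.4)])
print(overlay["path_polyline"])
```

Because the overlay is driven by the robot's own plan and perception data rather than a camera feed, whatever is drawn on the floor reflects what the robot actually "thinks," which is what makes the visualization useful for onlookers and debuggers alike.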

Having this visual representation of a robot's decision-making algorithms in action makes it much easier to understand why it ends up doing what it does.

Agha-mohammadi explains,

"if you can see the robot’s plan projected on the ground, you can connect what it perceives with what it does to make sense of its actions.”

The MIT researchers hope the system can be used to speed up the development of route-planning vehicles and delivery drones, and they outline more ambitious visions as well, such as testing drones designed to fight forest fires in a virtual environment. They also stressed the benefits for the engineers designing such robots, since seeing the robot's plan greatly cuts down the time needed to debug code when problems arise.

Cars that drive themselves and robots delivering our packages may seem fantastical, but safety concerns remain the most significant obstacle to their use in the real world. In manufacturing environments, robotic arms have traditionally been bolted down and kept behind cages for the same reason: to protect human workers.

Companies like Rethink Robotics are working to change that, however. Their collaborative robot, Baxter, is among the first industrial robots to work uncaged, designed specifically to operate alongside humans in a factory setting.

Even robots designed for domestic use make navigation decisions in response to physical obstacles. For example, the robotic Roomba vacuum cleaner can detect steep drops to prevent itself from falling down stairs.
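As a rough illustration of that kind of decision, consider the control-loop sketch below. It assumes hypothetical sensor and motor helpers (real Roombas expose cliff sensors through the iRobot Open Interface, but the names and threshold here are purely illustrative).

```python
# Assumed floor-distance reading (cm) beyond which we treat the ground ahead as a drop.
CLIFF_THRESHOLD_CM = 4.0

def step(read_cliff_sensors_cm, drive_forward, back_up_and_turn):
    """One control-loop iteration: keep moving unless any downward-facing
    sensor reports a distance large enough to indicate a cliff."""
    if any(d > CLIFF_THRESHOLD_CM for d in read_cliff_sensors_cm()):
        back_up_and_turn()   # retreat from the edge instead of driving over it
    else:
        drive_forward()      # no drop detected, continue on the current path

# Simulated usage: the second reading exceeds the threshold, so the robot backs up.
step(lambda: [2.1, 6.5, 2.0],
     lambda: print("driving forward"),
     lambda: print("backing up and turning"))
```

Simple as it is, a rule like this is exactly the sort of internal decision that is invisible to an observer, and that a system like MIT's projection could make legible.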

For more details on MIT's research, visit their website linked below.

Source: MIT
