MIT teaches robots to 'feel' objects just by looking at them

The process is something people do all the time

By David Williams, CNN
Photo: MIT researchers used a sophisticated touch sensor and a web camera to teach robots to predict what something feels like by looking at it. (MIT/CSAIL via CNN)

Researchers at the Massachusetts Institute of Technology are teaching robots to "see" what an object looks like just by touching it, and to predict what something will feel like just by looking at it.

A team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) has created a predictive artificial intelligence system to help robots combine multi-modal sensory inputs.

It's something people do all the time.

If you look at a fuzzy sweater and a barbell, you'll be able to tell that one is soft and the other is hard. Or if someone hands you a hammer, you'll be able to get a pretty good idea of what it looks like, even if you're blindfolded.

Robots can be equipped with visual and touch sensors, but it's tougher for them to combine that information.

"The two directions is only possible because we humans have this synchronized information all through our childhood," said Yunzhu Li, the lead author of a paper on the research. "We also want our robots to have this capability."

The researchers put a special tactile sensor on a robot arm and had it poke stuff.

A webcam recorded the arm touching almost 200 objects, such as tools, fabrics and household products, more than 12,000 times.

They broke the video clips down into individual frames, which gave them a data set of more than 3 million visual/tactile-paired images. That information was used to help the robot predict what an object would feel like when it saw an image of it, and vice versa.
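To picture the pairing step, here is a minimal sketch, not the CSAIL team's code. It assumes, purely for illustration, that webcam frames and tactile-sensor readings arrive as timestamped arrays, and matches each frame to the tactile reading captured closest to the same moment; the `build_pairs` helper and `max_gap` threshold are hypothetical names introduced for this example.

```python
import numpy as np

# Hypothetical illustration only: pair each video frame with the tactile
# reading recorded nearest in time, producing the kind of
# visual/tactile-paired examples described in the article.

rng = np.random.default_rng(0)

# Stand-ins for recorded data: 100 webcam frames (64x64 grayscale) with
# timestamps, and 250 tactile-sensor readings (32x32 pressure maps).
frame_times = np.sort(rng.uniform(0.0, 10.0, size=100))
frames = rng.random((100, 64, 64))

touch_times = np.sort(rng.uniform(0.0, 10.0, size=250))
touch_maps = rng.random((250, 32, 32))

def build_pairs(frame_times, frames, touch_times, touch_maps, max_gap=0.05):
    """Match each frame to the nearest-in-time tactile reading.

    Frames with no tactile reading within `max_gap` seconds are skipped,
    so only moments where the arm was actually sensing contact
    contribute a visual/tactile training pair.
    """
    pairs = []
    for t, frame in zip(frame_times, frames):
        i = np.searchsorted(touch_times, t)
        # Candidates: the reading just before and just after the frame time.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(touch_times)]
        j = min(candidates, key=lambda k: abs(touch_times[k] - t))
        if abs(touch_times[j] - t) <= max_gap:
            pairs.append((frame, touch_maps[j]))
    return pairs

pairs = build_pairs(frame_times, frames, touch_times, touch_maps)
print(f"Collected {len(pairs)} visual/tactile training pairs")
```

With real recordings, pairs like these are what would let a model learn to predict one modality from the other in both directions.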

"If you see a sharp edge or a flat surface, you can imagine how it would feel if you go and touch it," Yunzhu Li said. "This what we also want, to have our robots have this capability."

This technology could be used to help robots figure out the best way to hold an object just by looking at it.

It could also help them locate a specific item, even if they can't see it.

"You handle some cases where the light is off or you're reaching into some box where you have very limited vision available, or even to the extreme consider you are reaching into your pocket and trying to find out where your keys are and grasp them out," he said.

The data set only includes data collected in a controlled environment, but the team hopes to improve this by collecting new data out in the world.

A paper based on the research is scheduled to be presented Thursday at the Conference on Computer Vision and Pattern Recognition in Long Beach, California.
