Robots can now learn to cook just like you do: by watching YouTube videos
The researchers, from the University of Maryland and the Australian research center NICTA, have just published a paper on their achievements, which they will present this month at the 29th annual conference of the Association for the Advancement of Artificial Intelligence.
The demonstration is the latest impressive use of a type of artificial intelligence called deep learning. A hot area for acquisitions of late, deep learning entails training systems called artificial neural networks on large amounts of audio, images, and other data, and then presenting the systems with new information and receiving inferences about it in response.
The researchers employed convolutional neural networks, which are now in use at Facebook, among other companies, to identify the way a hand is grasping an item and to recognize specific objects. The system also predicts the action involving the hand and the object.
To train their model, the researchers selected data from 88 YouTube videos of people cooking. From there, the researchers generated commands that a robot could then execute.
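As a rough illustration of that last step, once a vision system has labeled a video frame with a grasp type, an object, and an action, turning the triple into an executable command can be a straightforward mapping. The sketch below is hypothetical (the grasp names, function names, and command format are assumptions for illustration, not the authors' actual code):

```python
# Hypothetical sketch: map a recognized (grasp, object, action) triple
# from a cooking video into a robot command string.

# Assumed grasp taxonomy (illustrative; loosely based on a
# power/precision, small/large split).
GRASP_TYPES = {"power-small", "power-large", "precision-small", "precision-large"}

def to_command(grasp: str, obj: str, action: str) -> str:
    """Build a command string from one recognized triple."""
    if grasp not in GRASP_TYPES:
        raise ValueError(f"unknown grasp type: {grasp}")
    return f"grasp_{grasp}({obj}); {action}({obj})"

# e.g. a frame showing a precision grasp of a knife while cutting:
print(to_command("precision-small", "knife", "cut"))
# prints: grasp_precision-small(knife); cut(knife)
```

The real system infers the triples with convolutional networks rather than taking them as given; the point here is only that the final video-to-command step can reduce to simple symbolic composition.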
“We believe this preliminary integrated system raises hope towards a fully intelligent robot for manipulation tasks that can automatically enrich its own knowledge resource by ‘watching’ recordings from the World Wide Web,” the researchers concluded.
Read their full paper, “Robot Learning Manipulation Action Plans by ‘Watching’ Unconstrained Videos from the World Wide Web,” here (PDF).