The Next Big Step for AI? Understanding Video
Will Knight for MIT Technology Review: For a computer, recognizing a cat or a duck in a still image is pretty clever. But a stiffer test for artificial intelligence will be understanding when the cat is riding a Roomba and chasing the duck around a kitchen.
MIT and IBM this week released a vast data set of video clips painstakingly annotated with details of the action being carried out. The Moments in Time Dataset includes three-second snippets of everything from fishing to break-dancing.
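To make the shape of such a dataset concrete, here is a minimal sketch of how labeled three-second clips might be represented for training. The annotation file layout, column names, and label strings are hypothetical illustrations, not the dataset's actual distribution format.

```python
# Minimal sketch: representing short video clips paired with action labels.
# The CSV layout and field names here are hypothetical, not the actual
# Moments in Time distribution format.
import csv
from dataclasses import dataclass
from typing import List

@dataclass
class LabeledClip:
    path: str      # path to a roughly three-second video file
    action: str    # single action label, e.g. "fishing" or "break-dancing"

def load_annotations(csv_path: str) -> List[LabeledClip]:
    """Read one clip per row from a CSV with 'path' and 'action' columns."""
    clips = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            clips.append(LabeledClip(path=row["path"], action=row["action"]))
    return clips

if __name__ == "__main__":
    clips = load_annotations("annotations.csv")  # hypothetical file
    print(f"Loaded {len(clips)} labeled clips")
```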
“A lot of things in the world change from one second to the next,” says Aude Oliva, a principal research scientist at MIT and one of the people behind the project. “If you want to understand why something is happening, motion gives you a lot of information that you cannot capture in a single frame.”
The current boom in artificial intelligence was sparked, in part, by success in teaching computers to recognize the contents of static images by training deep neural networks on large labeled data sets (see “The Revolutionary Technique That Quietly Changed Machine Vision Forever”).
AI systems that interpret video today, including those found in some self-driving cars, often rely on identifying objects in static frames rather than interpreting actions. On Monday, Google launched a tool capable of recognizing objects in video as part of its Cloud Platform, a service that already includes AI tools for processing images, audio, and text.
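For context, per-video label detection with Google's Cloud Video Intelligence service looks roughly like the sketch below. The bucket URI is a placeholder, and the exact client calls may differ depending on the library version.

```python
# Rough sketch of label detection with the Cloud Video Intelligence API
# using the Python client. The bucket URI is a placeholder, and exact
# client details may vary between library versions.
from google.cloud import videointelligence

def detect_labels(gcs_uri: str) -> None:
    client = videointelligence.VideoIntelligenceServiceClient()
    operation = client.annotate_video(
        request={
            "features": [videointelligence.Feature.LABEL_DETECTION],
            "input_uri": gcs_uri,
        }
    )
    result = operation.result(timeout=300)  # long-running operation

    # Shot-level labels: objects and concepts recognized in each shot.
    for label in result.annotation_results[0].shot_label_annotations:
        print(label.entity.description)

if __name__ == "__main__":
    detect_labels("gs://your-bucket/your-video.mp4")  # placeholder URI
```

Note that this identifies what appears in the footage, shot by shot, rather than what is being done; recognizing actions across frames is the harder problem the Moments in Time data set is meant to support.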