Visionary Research: Teaching Computers to See Like a Human
Published February 20, 2008, in Scientific American by Larry Greenemeier
For all their sophistication, computers still can't compete with nature's gift: a brain that sorts objects quickly and accurately enough for people and primates to interpret what they see in real time. Despite decades of development, computer vision systems still get bogged down by the massive amounts of data needed just to identify the most basic images. Throw that same image into a different setting or change the lighting, and artificial intelligence is even less of a match for good old gray matter.
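To see why a lighting change alone can trip up a machine, consider a minimal sketch in Python, not drawn from the MIT work and using made-up toy images: two copies of the same simple scene are compared pixel by pixel, and a uniform brightening is enough to make the naive comparison fail, while an equally simple brightness-normalized comparison still recognizes the match.

```python
import numpy as np

# Hypothetical 8x8 grayscale "images": the same simple pattern,
# rendered once under normal light and once under brighter light.
scene = np.zeros((8, 8))
scene[2:6, 2:6] = 0.5                             # a gray square on a dark background
brighter_scene = np.clip(scene + 0.3, 0.0, 1.0)   # uniform lighting change

def naive_match(a, b, tolerance=0.1):
    """Declare a match if the average raw pixel difference is small."""
    return np.mean(np.abs(a - b)) < tolerance

def normalized_match(a, b, tolerance=0.1):
    """Same test, but after subtracting each image's mean brightness."""
    return np.mean(np.abs((a - a.mean()) - (b - b.mean()))) < tolerance

print(naive_match(scene, brighter_scene))       # False: raw pixels no longer agree
print(normalized_match(scene, brighter_scene))  # True: the underlying pattern is identical
```

The point of the toy example is only that raw pixel values change with illumination even when the object does not, so any system that can't abstract away such variation, as the brain effortlessly does, will stumble.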
These shortcomings become more pressing as demand grows for security systems that can recognize a known terrorist's face in a crowded airport, and for car safety mechanisms such as sensors that hit the brakes when they detect a pedestrian or another vehicle in the car's path. Seeking a way forward, Massachusetts Institute of Technology researchers are looking to advances in neuroscience for ways to improve artificial intelligence, and vice versa. The school's leading minds in both neural and computer sciences are pooling their research, combining complex computational models of the brain with their work on image processing.