Thursday, May 29, 2008

Free choice activates a decision circuit between frontal and parietal cortex

Published in Nature by Bijan Pesaran, Matthew J. Nelson & Richard A. Andersen (16/04/08)

We often face alternatives that we are free to choose between. Planning movements to select an alternative involves several areas in frontal and parietal cortex that are anatomically connected into long-range circuits. These areas must coordinate their activity to select a common movement goal, but how neural circuits make decisions remains poorly understood. Here we simultaneously record from the dorsal premotor area (PMd) in frontal cortex and the parietal reach region (PRR) in parietal cortex to investigate neural circuit mechanisms for decision making. We find that correlations in spike and local field potential (LFP) activity between these areas are greater when monkeys are freely making choices than when they are following instructions. We propose that a decision circuit featuring a sub-population of cells in frontal and parietal cortex may exchange information to coordinate activity between these areas. Cells participating in this decision circuit may influence movement choices by providing a common bias to the selection of movement goals. Click for more...
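
For readers wondering what a "correlation in spike and LFP activity between areas" looks like in practice, here is a minimal sketch in Python. It uses synthetic data and hypothetical variable names, and it is not the authors' analysis code; it only illustrates comparing trial-by-trial PMd/PRR coupling across free-choice and instructed conditions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200

# Hypothetical per-trial measures: PMd spike counts and PRR LFP power.
pmd_free = rng.poisson(20, n_trials).astype(float)
prr_free = 0.5 * pmd_free + rng.normal(0, 4, n_trials)      # strongly coupled areas
pmd_instr = rng.poisson(20, n_trials).astype(float)
prr_instr = rng.normal(10, 4, n_trials)                     # weakly coupled areas

def trial_correlation(x, y):
    """Pearson correlation across trials between two per-trial signals."""
    return np.corrcoef(x, y)[0, 1]

print("free-choice r:", trial_correlation(pmd_free, prr_free))
print("instructed  r:", trial_correlation(pmd_instr, prr_instr))
```

A larger correlation on free-choice than on instructed trials is the kind of pattern the paper reports, although the actual study works with real simultaneous recordings rather than toy numbers.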

Wednesday, May 28, 2008

Neuroscience: Brain control of a helping hand

Published in Nature by John F. Kalaska (28/05/08)
Paralysed patients would benefit if their thoughts could become everyday actions. The demonstration that monkeys can use brain activity for precise control of an arm-like robot is a step towards that end. Strokes, spinal-cord injuries and degenerative neuromuscular disease all cause damage that can severely compromise the ability of patients to use their muscles. The loss of mobility and independence that results from such motor deficits takes a devastating toll on their quality of life. Medical research is striving on many fronts to reverse the disease or injury state of such patients. Meanwhile, other approaches are needed to enhance their quality of life. Often, the patient's condition leaves intact parts of the cerebral cortex involved in voluntary motor control, including the primary motor cortex, premotor cortex and posterior parietal cortex. These patients are still able to produce the brain activity that would normally result in voluntary movements, but their condition prevents those signals from either getting to the muscles or activating them adequately. In such cases, one possible solution is to let the subjects think about what they would like to do as if they were mentally rehearsing the desired actions, record the resulting brain activity, and use those signals to control a robotic device. The development of such brain–machine interfaces (BMIs), or neuroprosthetic controllers, is being pursued in several laboratories. Click for more...
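
As a rough illustration of the decoding step described above, the sketch below fits a linear map from neural firing rates to a two-dimensional movement command and then applies it to new activity. The data are synthetic and the names hypothetical; this is not a description of any particular lab's brain-machine interface.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_neurons = 1000, 40

# Synthetic "recordings": firing rates and the arm velocities they encode.
tuning = rng.normal(size=(n_neurons, 2))
rates = rng.poisson(10, size=(n_samples, n_neurons)).astype(float)
velocity = rates @ tuning + rng.normal(0, 5, size=(n_samples, 2))

# Calibration block: least-squares fit of the decoding weights.
W, *_ = np.linalg.lstsq(rates[:800], velocity[:800], rcond=None)

# "Online" use: turn new neural activity into velocity commands for the robot.
decoded = rates[800:] @ W
print("mean decoding error:", np.mean(np.abs(decoded - velocity[800:])))
```

Real neuroprosthetic controllers add filtering, adaptation and safety constraints on top of this basic mapping, but the calibrate-then-decode loop is the core idea.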

Wednesday, March 19, 2008

The man with two brains

Published in E&T, the magazine of the IET, by Paul Dempsey
Jeff Hawkins sired the personal digital assistant and today’s smartphones as the founder of Palm Computing. But, frankly, this was never much of a priority for him. Even as a young engineer at Intel, he was nagging microprocessor pioneer Ted Hoff to let him investigate parallels between computing and the human brain. Back in the 1970s, Hoff said no – a decision Hawkins now agrees with – but as science and theory advanced, he returned to his obsession. In 2002, Hawkins set up the Redwood Neuroscience Institute, concentrating on brain theory, and in 2004, launched the start-up Numenta, which is developing a computing architecture called Hierarchical Temporal Memory (HTM). At this year’s International Solid State Circuits Conference (ISSCC), Hawkins described prototype HTM visual recognition systems, addressing a task that remains hugely challenging for computers. But it is one that the animal brain finds quite straightforward. It is one reason why a growing number of people, such as Hawkins, are looking more closely at the structure of the brain. Click for more...

It thinks... therefore...

Published in E&T, the magazine of the IET, by Chris Edwards
Research into machine consciousness is leading engineers to re-evaluate the relevance of philosophy, discovers Chris Edwards. Inventor Ray Kurzweil has a dream. And he intends to live to see it. He claims he takes a daily cocktail of pills to help make sure he is around to witness the creation of the first artificial brain: something he reckons will happen by the end of the 2020s. One of Kurzweil’s hopes is that this will make it possible to cheat death: we could upload our consciousness into the machine and remain conscious just as long as the computer receives power and maintenance. But the prospect raises a conundrum: would that machine actually be conscious? Would it think? How would we know? Even if it told us it could see and feel, would we believe it? Or would we consider it no more than a simulation of consciousness, where the lights are on but nobody is home? Questions like these are leading engineers to consider whether it is time for the discipline to merge with philosophy, as engineering by itself will be unable to provide the answers. Philosophy as such still struggles with the questions. We don’t really know what intelligence is and whether consciousness is effectively synonymous with it. But the quest to uncover consciousness in an artificial entity may provide clues that classic philosophical introspection has not managed to uncover. Click for more...

Tuesday, March 11, 2008

Chemical brain controls nanobots

Published by BBC on 11/03/2008

A tiny chemical "brain" which could one day act as a remote control for swarms of nano-machines has been invented.
The molecular device - just two billionths of a metre across - was able to control eight of the microscopic machines simultaneously.
Writing in Proceedings of the National Academy of Sciences, scientists say it could also be used to boost the processing power of future computers. Many experts have high hopes for nano-machines in treating disease. "If you want to remotely operate on a tumour you might want to send some molecular machines there," explained Dr Anirban Bandyopadhyay of the National Institute for Materials Science, Tsukuba, Japan. "But you cannot just put them into the blood and [expect them] to go to the right place." Dr Bandyopadhyay believes his device may offer a solution. One day they may be able to guide the nanobots through the body and control their functions, he said. "That kind of device simply did not exist; this is the first time we have created a nano-brain". Click for more...

Sunday, March 02, 2008

Plan to teach baby robot to talk

Published by BBC on 28/02/2008

A university in Devon is preparing to find out if a baby robot can be taught to talk. Staff at the University of Plymouth will work with a 1m-high (3ft) humanoid baby robot called iCub. Over the next four years robotics experts will work with language development specialists who research how parents teach children to speak. Their findings could lead to the development of humanoid robots which learn, think and talk. The project is believed to be the first of its kind in the world and typical experiments with the iCub robot will include activities such as inserting objects of various shapes into the corresponding holes in a box, serialising nested cups and stacking wooden blocks. Click for more...

Thursday, February 28, 2008

Visionary Research: Teaching Computers to See Like a Human

Published in Scientific American by Larry Greenemeier (20/02/08)

For all their sophistication, computers still can't compete with nature's gift—a brain that sorts objects quickly and accurately enough so that people and primates can interpret what they see as it happens. Despite decades of development, computer vision systems still get bogged down by the massive amounts of data necessary just to identify the most basic images. Throw that same image into a different setting or change the lighting and artificial intelligence is even less of a match for good old gray matter.

These shortcomings become more pressing as demand grows for security systems that can recognize a known terrorist's face in a crowded airport and car safety mechanisms such as a sensor that can hit the brakes when it detects a pedestrian or another vehicle in the car's path. Seeking the way forward, Massachusetts Institute of Technology researchers are looking to advances in neuroscience for ways to improve artificial intelligence, and vice versa. The school's leading minds in both neural and computer sciences are pooling their research, mixing complex computational models of the brain with their work on image processing. Click for more...
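
As a toy illustration of what a cortex-inspired image model can look like, the sketch below alternates oriented filtering (roughly, simple cells) with max pooling (roughly, complex cells) over an image patch. It is only an assumption about the general flavour of such models, not the MIT group's actual system.

```python
import numpy as np

def filter2d(image, kernel):
    """Valid-mode 2-D filtering (simple-cell-like oriented response)."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Max pooling (complex-cell-like tolerance to small shifts)."""
    h = fmap.shape[0] - fmap.shape[0] % size
    w = fmap.shape[1] - fmap.shape[1] % size
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

image = np.random.rand(16, 16)               # stand-in for a gray-scale patch
edge = np.array([[1.0, 0.0, -1.0]] * 3)      # crude vertical-edge detector
features = max_pool(np.abs(filter2d(image, edge)))
print(features.shape)                        # (7, 7) map of pooled edge responses
```

Stacking several such filter-and-pool stages, with many filters per stage, gives the kind of hierarchy that loosely mirrors the ventral visual pathway and that the researchers aim to combine with conventional image processing.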

Monday, January 21, 2008

Harnessing Digital Evolution

The following article, published in the IEEE Computer magazine (January 2008), describes the evolution of digital organisms. It makes for very interesting reading, and digital evolution is definitely one of the most promising areas of computer design and robotics. Look out for it!

Abstract
In digital evolution, self-replicating computer programs—digital organisms—experience mutations and selective pressures, potentially producing computational systems that, like natural organisms, adapt to their environment and protect themselves from threats. Such organisms can help guide the design of computer software. Click here for more information...

Citation: Philip McKinley, Betty H.C. Cheng, Charles Ofria, David Knoester, Benjamin Beckmann, Heather Goldsby, "Harnessing Digital Evolution," Computer, vol. 41, no. 1, pp. 54-63, January 2008
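
To make the mutation-and-selection cycle concrete, here is a toy evolutionary loop in Python. The "organisms" are plain bit strings rather than self-replicating programs, and the fitness function is invented, so this is only a sketch of the idea the article describes, not the system its authors use.

```python
import random

GENOME_LEN, POP_SIZE, MUTATION_RATE, GENERATIONS = 32, 50, 0.02, 200

def fitness(genome):
    # Stand-in "environment": genomes with more 1-bits are better adapted.
    return sum(genome)

def mutate(genome):
    # Point mutations: each bit flips with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: the fitter half of the population replicates (with mutation).
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    offspring = [mutate(random.choice(survivors))
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

print("best fitness:", max(fitness(g) for g in population))
```

In real digital-evolution platforms the genomes are executable programs and fitness emerges from how they behave in a virtual environment, which is what allows unexpected, adaptive solutions to appear.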

Saturday, January 05, 2008

Words of wisdom for scientists...

A. "...intuition is often the biggest obstacle to discovering the truth"
B. "...the best solutions to scientific problems are simple and elegant"
Jeff Hawkins, "On Intelligence", pp. 32, 34