Wednesday, September 19, 2007

Neural Networks vs. HTMs Part 2: From the OnIntelligence Forum

Quote 1:
"HTMs are similar to Bayesian networks; however, they differ from most Bayesian networks in the way that time, hierarchy, action, and attention are used. An HTM can be considered a form of Bayesian network where the network consists of a collection of nodes arranged in a tree-shaped hierarchy. Each node in the hierarchy self-discovers a set of causes in its input through a process of finding common spatial patterns and then finding common temporal patterns. Unlike many Bayesian networks, HTMs are self-training, have a well-defined parent/child relationship between each node, inherently handle time-varying data, and afford mechanisms for covert attention."

Quote 2:
"HTM's are a type of neural network. But in saying that, you should know that there are many different types of neural networks (single layer feedforward network, multi-layer network, recurrant, etc). 99% of these types of networks tend to emulate the neurons, yet don't have the overall infrastructure of the actual cortex. Additionally, neural networks tend not to deal with temporal data very well, they ignore the hierarchy in the brain, and use a different set of learning algorithms that our implementation. But, in a nutshell, HTMs are built according to biology. Whereas neural networks ignore the structure and focus on the emulation of the neurons, HTMs tend to focus on the structure and ignores the emulation of the neurons. I hope that clears things up.
_________________
Phillip B. Shoemaker
Director, Developer Services
Numenta, Inc."

Friday, September 14, 2007

Neural Networks vs. HTMs: What does Jeff Hawkins think?

Published by Evan Ratliff on Wired.com

Neural networks rose to prominence in the 1980s. But despite some successes in pattern recognition, they never scaled to more complex problems. Hawkins argues that such networks have traditionally lacked “neuro-realism”: Although they use the basic principle of inter-connected neurons, they don’t employ the information-processing hierarchy used by the cortex. Whereas HTMs continually pass information up and down a hierarchy, from large collections of nodes at the bottom to a few at the top and back down again, neural networks typically send information through their layers of nodes in one direction — and if they send information in both directions, it’s often just to train the system. In other words, while HTMs attempt to mimic the way the brain learns — for instance, by recognizing that the common elements of a car occur together — neural networks use static input, which prevents prediction.
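The up-and-down flow the article describes can also be sketched in toy form. The following is a hedged illustration under my own assumptions (a made-up Node class, simple transition counts), not Hawkins' actual algorithm: beliefs pass up a small tree as a time-varying input arrives, and the top node passes a prediction of the next pattern back down.

```python
# Hedged toy sketch of bottom-up / top-down flow in a tree hierarchy
# (assumed design for illustration, not the real HTM algorithm).
from collections import defaultdict

class Node:
    def __init__(self):
        self.transitions = defaultdict(int)  # (prev, current) -> count
        self.prev = None

    def up(self, pattern):
        # bottom-up: record the temporal transition and pass the pattern upward
        if self.prev is not None:
            self.transitions[(self.prev, pattern)] += 1
        self.prev = pattern
        return pattern

    def down(self):
        # top-down: predict the most frequent successor of the current pattern
        candidates = {nxt: c for (p, nxt), c in self.transitions.items() if p == self.prev}
        return max(candidates, key=candidates.get) if candidates else None

left, right, top = Node(), Node(), Node()

for frame in ["AB", "CD", "AB", "CD", "AB"]:   # a repeating time-varying input
    combined = (left.up(frame[0]), right.up(frame[1]))
    top.up(combined)

print(top.down())  # the hierarchy's prediction of what comes next: ('C', 'D')
```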