Thursday, December 13, 2007

Baby's errors are crucial first step for a smarter robot

Published on 05.10.07 by Michael Reilly and David Robson
in New Scientist magazine, issue 2624

When your software crashes, you probably restart your PC and hope it doesn't happen again, or you get the bug fixed. But not Rachel Wood. When a program she was testing screwed up a task that a 2-year-old would find easy, she was elated. The reason for this seemingly perverse reaction is that Wood's program didn't contain a bug, but had committed a famous cognitive goof identified by the psychology pioneer Jean Piaget. Known as the A-not-B error, it is made by babies between 7 and 12 months old and is seen as one of the hallmarks of fledgling human intelligence. Wood's robot has a brain far simpler than a baby's. But unravelling the events that led to this human-like behaviour - something that is easier to do in a computer program than a real brain - could help improve our understanding of artificial intelligence. Click for more...

[G.K Comment: Robots with built-in trial & error functionality for fixing "bugs"... that's the future.]

Monday, November 26, 2007

Essential circuits of cognition: The brain’s basic operations, architecture, and representations

Published by Richard Granger (University of California Irvine and Dartmouth College)

The goals of artificial intelligence have always been twofold: i) formal explanation of the mechanisms underlying human (and animal) intelligence and ii) construction of powerful intelligent artifacts based on those mechanisms. The latter engineering goal may pragmatically benefit from the former scientific one: extant face recognition systems and automated telephone operators might have been considered the best possible mechanisms were it not for our own abilities. The only reason that we know that these industrial systems can be outperformed is that humans do so. Biological systems achieve their cognitive capabilities solely through brain mechanisms: the physiological operation of anatomical circuitries. Brain circuits are circuits; that is, they can be understood in computational terms. An explosion of knowledge in neuroscience and related fields is revealing the data crucial for characterizing the layout and properties of these circuits. Click for more... (.pdf)

Thursday, November 22, 2007

Automated Killers and the Computing Profession

Published by Noel Sharkey of the University of Sheffield

When will we realize that our artificial-intelligence and autonomous-robotics research projects have been harnessed to manufacture killing machines? This is not terminator-style science fiction but grim reality: South Korea and Israel have both deployed armed robot border guards, while other nations—including China, India, Russia, Singapore, and the UK—increasingly use military robots. Currently, the biggest player, the US, has robots playing an integral part in its Future Combat Systems project, with spending estimated to exceed $230 billion. The US military has massive and realistic plans to develop unmanned vehicles that can strike from the air, under the sea, and on land. The US Congress set a goal in 2001 for one-third of US operational ground combat vehicles to be unmanned by 2015. More than 4,000 robots presently serve in Iraq, with others deployed in Afghanistan. The US military will spend $1.7 billion on more ground-based robots over the next five years, several of which will be armed and dangerous. Click for more...
[G.K Comment: Computer ethics is an extremely complex issue that is going to play a major role in the months and years to come, as more and more robots are deployed on the battlefield. Autonomous systems are always going to fail under specific circumstances regardless of how intelligent they become. But who is responsible if a robot unfairly causes a casualty? This article raises many valid questions that all computer scientists should be concerned about.]

Tuesday, November 20, 2007

Scientific Revolution: Skin transformed into stem cells through reprogramming!

Published by BBC on 20/11/2007

Human skin cells have been reprogrammed by two groups of scientists to mimic embryonic stem cells with the potential to become any tissue in the body. The breakthrough promises a plentiful new source of cells for use in research into new treatments for many diseases. Crucially, it could mean that such research is no longer dependent on using cells from human embryos, which has proved highly controversial. The US and Japanese studies feature in the journals Science and Cell. Until now only cells taken from embryos were thought to have an unlimited capacity to become any of the 220 types of cell in the human body - a so-called pluripotent state. Click for more...

[G.K Comment: You may wonder why such an article appears on an eBrain blog. Well, it was suspected for many years that cells are generic structures that can be programmed to perform different body functions. Now, with this revolutionary discovery, scientists have proven that theory to be true. Likewise, neurons are generic structures that get programmed differently for vision, speech, abstract thinking etc... If we can decode the way they process and exchange information, then one of the few remaining unsolved mysteries of science may come to light.]

Wednesday, November 14, 2007

Future directions in computing

Published on 14/11/2007 by BBC

Silicon electronics are a staple of the computing industry, but researchers are now exploring other techniques to deliver powerful computers. Quantum computers are able to tackle complex problems. A quantum computer is a theoretical device that would make use of the properties of quantum mechanics, the realm of physics that deals with energy and matter at atomic scales. In a quantum computer data is not processed by electrons passing through transistors, as is the case in today's computers, but by caged atoms known as quantum bits, or qubits. "It is a new paradigm for computation," said Professor Artur Ekert of the University of Oxford. "It's doing computation differently." Click for more...

[G.K Comment: This article lists some new technologies that could potentially replace silicon in computers. They are all very exciting prospects that could have a positive effect on the way eBrains are designed in the future.]

Tuesday, October 16, 2007

Belief Propagation and Wiring Length Optimization as Organizing Principles for Cortical Microcircuits

Published by Dileep George and Jeff Hawkins

This paper explores how functional and anatomical constraints and resource optimization could be combined to obtain a canonical cortical micro-circuit and an explanation for its laminar organization. We start with the assumption that cortical regions are involved in Bayesian Belief Propagation. This imposes a set of constraints on the type of neurons and the connection patterns between neurons in that region. In addition there are anatomical constraints that a region has to adhere to. There are several different configurations of neurons consistent with both these constraints. Among all such configurations, it is reasonable to expect that Nature has chosen the configuration with the minimum wiring length. We cast the problem of finding the optimum configuration as a combinatorial optimization problem. A near-optimal solution to this problem matched anatomical and physiological data. As the result of this investigation, we propose a canonical cortical micro-circuit that will support Bayesian Belief Propagation computation and whose laminar organization is near optimal in its wiring length. We describe how the details of this circuit match many of the anatomical and physiological findings and discuss the implications of these results for experimenters and theorists. Click for more... (.pdf)
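As a reminder of what the Bayesian Belief Propagation assumption means computationally, here is a minimal sum-product sketch in Python for a toy tree (one parent node, two children, binary states). This illustrates only the generic message-passing operation; the network, likelihoods, and numbers are invented and have nothing to do with the paper's cortical circuit.

```python
import numpy as np

def normalize(v):
    return v / v.sum()

def message_up(child_evidence, likelihood):
    # Sum-product message from a child to its parent:
    # m(parent) = sum_j P(child = j | parent) * evidence(j)
    # likelihood[i, j] = P(child = j | parent = i)
    return likelihood @ child_evidence

prior = np.array([0.5, 0.5])              # P(parent state)
lik = np.array([[0.9, 0.1],               # row i: P(child | parent = i)
                [0.2, 0.8]])              # (same table for both children)

ev1 = np.array([1.0, 0.0])  # child 1 observed in state 0
ev2 = np.array([1.0, 0.0])  # child 2 observed in state 0

# The parent's belief combines its prior with the upward messages.
belief = normalize(prior * message_up(ev1, lik) * message_up(ev2, lik))
```

With both children observed in state 0, the belief concentrates on the parent state most likely to have produced them.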

Wednesday, October 10, 2007

Attention and consciousness: two distinct brain processes

Published by Christof Koch and Naotsugu Tsuchiya

The close relationship between attention and consciousness has led many scholars to conflate these processes. This article summarizes psychophysical evidence, arguing that top-down attention and consciousness are distinct phenomena that need not occur together and that can be manipulated using distinct paradigms. Subjects can become conscious of an isolated object or the gist of a scene despite the near absence of top-down attention; conversely, subjects can attend to perceptually invisible objects. Furthermore, top-down attention and consciousness can have opposing effects. Such dissociations are easier to understand when the different functions of these two processes are considered. Untangling their tight relationship is necessary for the scientific elucidation of consciousness and its material substrate. Click for more... (.pdf)

Friday, October 05, 2007

Theory of Brain Function, Quantum Mechanics and Superstrings

Published by D.V. Nanopoulos

"Theory of brain function, quantum mechanics, and superstrings are three fascinating topics, which at first look bear little, if any at all, relation to each other. Trying to put them together in a cohesive way, as described in this task, becomes a most demanding challenge and unique experience. The main thrust of the present work is to put forward a, maybe, foolhardy attempt at developing a new, general, but hopefully scientifically sound framework of Brain Dynamics, based upon some recent developments, both in (sub)neural science and in (non)critical string theory. I do understand that Microtubules are not considered by all neuroscientists, to put it politely, as the microsites of consciousness." Click for more...
[G.K Comment: Interesting... but is it all true? More to follow...]

Wednesday, September 19, 2007

Neural Networks vs. HTMs Part 2: From the OnIntelligence Forum

Quote 1:
"HTMs are similar to Bayesian networks; however, they differ from most Bayesian networks in the way that time, hierarchy, action, and attention are used. An HTM can be considered a form of Bayesian network where the network consists of a collection of nodes arranged in a tree-shaped hierarchy. Each node in the hierarchy self-discovers a set of causes in its input through a process of finding common spatial patterns and then finding common temporal patterns. Unlike many Bayesian networks, HTMs are self-training, have a well-defined parent/child relationship between each node, inherently handle time-varying data, and afford mechanisms for covert attention."

Quote 2:
"HTMs are a type of neural network. But in saying that, you should know that there are many different types of neural networks (single layer feedforward network, multi-layer network, recurrent, etc). 99% of these types of networks tend to emulate the neurons, yet don't have the overall infrastructure of the actual cortex. Additionally, neural networks tend not to deal with temporal data very well, they ignore the hierarchy in the brain, and use a different set of learning algorithms than our implementation. But, in a nutshell, HTMs are built according to biology. Whereas neural networks ignore the structure and focus on the emulation of the neurons, HTMs tend to focus on the structure and ignore the emulation of the neurons. I hope that clears things up.
Phillip B. Shoemaker, Director
Developer Services, Numenta, Inc."
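To make the contrast in these quotes concrete, here is a toy Python sketch of an HTM-style node that first memorizes spatial patterns and then groups patterns that follow one another in time. This is my own drastic simplification for illustration (exact-match pattern memory and a union-find over observed transitions); it is not Numenta's actual spatial pooling or temporal grouping algorithm.

```python
class HTMNodeSketch:
    """Toy HTM-style node: memorize distinct spatial patterns, then
    group patterns that succeed each other in time. Both steps are
    grossly simplified relative to any real HTM implementation."""

    def __init__(self):
        self.patterns = []      # distinct spatial patterns seen so far
        self.transitions = {}   # counts of (pattern -> next pattern)

    def _pattern_id(self, x):
        x = tuple(x)
        if x not in self.patterns:
            self.patterns.append(x)
        return self.patterns.index(x)

    def learn(self, sequence):
        prev = None
        for x in sequence:
            cur = self._pattern_id(x)
            if prev is not None:
                key = (prev, cur)
                self.transitions[key] = self.transitions.get(key, 0) + 1
            prev = cur

    def temporal_groups(self):
        # Union patterns linked by observed transitions into groups;
        # a real HTM node does this with a weighted similarity graph.
        parent = list(range(len(self.patterns)))

        def find(i):
            while parent[i] != i:
                i = parent[i]
            return i

        for (a, b) in self.transitions:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[rb] = ra
        groups = {}
        for i in range(len(self.patterns)):
            groups.setdefault(find(i), []).append(i)
        return list(groups.values())

node = HTMNodeSketch()
node.learn([(0, 0), (0, 1), (0, 0)])   # sequence A
node.learn([(1, 0), (1, 1)])           # sequence B
groups = node.temporal_groups()        # patterns from A and B end up in separate groups
```

The point of the sketch is the ordering: structure (patterns and their temporal relations) is learned explicitly, while individual "neurons" are not modelled at all.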

Friday, September 14, 2007

Neural Networks vs. HTMs: What does Jeff Hawkins think?

Published by Evan Ratliff

Neural networks rose to prominence in the 1980s. But despite some successes in pattern recognition, they never scaled to more complex problems. Hawkins argues that such networks have traditionally lacked “neuro-realism”: Although they use the basic principle of inter-connected neurons, they don’t employ the information-processing hierarchy used by the cortex. Whereas HTMs continually pass information up and down a hierarchy, from large collections of nodes at the bottom to a few at the top and back down again, neural networks typically send information through their layers of nodes in one direction — and if they send information in both directions, it’s often just to train the system. In other words, while HTMs attempt to mimic the way the brain learns — for instance, by recognizing that the common elements of a car occur together — neural networks use static input, which prevents prediction. Click for more...

Friday, August 31, 2007

Learn Like A Human

Published by Jeff Hawkins
IEEE Spectrum magazine, April 2007

By the age of five, a child can understand spoken language, distinguish a cat from a dog, and play a game of catch. These are three of the many things humans find easy that computers and robots currently cannot do. Despite decades of research, we computer scientists have not figured out how to do basic tasks of perception and robotics with a computer. Our few successes at building "intelligent" machines are notable equally for what they can and cannot do. Computers, at long last, can play winning chess. But the program that can beat the world champion can't talk about chess, let alone learn backgammon. Today's programs, at best, solve specific problems. Where humans have broad and flexible capabilities, computers do not. Perhaps we've been going about it in the wrong way. Click for more...

[G.K Comment: An excellent article by Jeff Hawkins on Hierarchical Temporal Memory. ]

“Forward” software engineering: "Brain-like software architecture... Confessions of an ex-neuroscientist"

Published by Bill Softky

Which comes first: the problem or the solution?

Reverse engineering starts with the hardware and works backward; it usually succeeds only if the problem is understood. "Forward" software engineering starts with the problem, and saves the hardware for last. Click for more... (.ppt file)

[G.K Comment: An interesting presentation by Bill Softky on how we could use forward software engineering to solve hard problems, such as the "brain" one.]

Tuesday, August 14, 2007

Neural Darwinism

Published by David Cofer

"There are a multitude of different theories on the mind. Many more than have been discussed in this document. However, of all the ones I have seen before, I feel that this one offers the greatest hope of coming up with a real, working understanding of the science and neurobiology of how the mind works and what consciousness really is. Its author, Gerald Edelman, is a Nobel laureate who was instrumental in cracking the mystery of how our immune systems work. After that he turned his attention to something far more difficult, attempting to understand how the neurobiology of the brain forms the mind. The main thrust of his theory of neural Darwinism is that the brain is a somatic selection system similar to evolution, and not an instructional system. (Somatic means that it operates over the time scale of your body instead of the time scale of evolution.)" Click for more...

Monday, August 13, 2007

Published by David Cofer

"I built this website to document the progress on my research into machine intelligence. Specifically, I am currently focused on building a computer simulation that behaves like a common, everyday insect using neural networks. Most researchers in the field of Artificial Intelligence (AI) try to understand and replicate human thought and abilities. I believe this is a mistake. You must start small and work your way up the evolutionary ladder, not immediately start with the most complicated thing in the known universe. Insects seem pretty stupid when compared with humans, but they are capable of a variety of intelligent, adaptive behaviors in a very unpredictable environment. And that is something that no man made system is yet capable of emulating. Also, when you get groups of insects working together in a cooperative manner they are capable of almost miraculous accomplishments. Once we begin to understand how these tiny brains work to produce such incredible behaviors then we will be able to harness that power for useful purposes. " Click for more...

[G.K Comment: David Cofer takes the approach of simulating a relatively simple insect's brain and then following the evolutionary ladder to understand more complex brain formations. Although this could make sense for solving some other real-life problems, I believe that understanding an insect's brain is far more difficult than understanding a human baby's brain! I mean it! The reason is that human beings are the most incapable living organisms at the moment of their birth. We also take a long time before we can perform even the most basic tasks such as walking & talking. In my opinion, we can develop a functional brain that looks nothing like any existing organism, as long as it can sense its environment and gain knowledge about it without pre-programming.]

Tuesday, July 24, 2007

Hierarchical Temporal Memory: Concepts, Theory, and Terminology

Published by Jeff Hawkins and Dileep George, Numenta Inc.

There are many things humans find easy to do that computers are currently unable to do. Tasks such as visual pattern recognition, understanding spoken language, recognizing and manipulating objects by touch, and navigating in a complex world are easy for humans. Yet, despite decades of research, we have no viable algorithms for performing these and other cognitive functions on a computer. In a human, these capabilities are largely performed by the neocortex. Hierarchical Temporal Memory (HTM) is a technology that replicates the structural and algorithmic properties of the neocortex. HTM therefore offers the promise of building machines that approach or exceed human level performance for many cognitive tasks. Click for more... (.pdf file)

Jeff Hawkins: Brain science is about to fundamentally change computing

About this Talk
To date, there hasn't been an overarching theory of how the human brain really works, Jeff Hawkins argues in this compelling talk. That's because we still haven't defined intelligence accurately. But one thing's for sure, he says: The brain isn't like a powerful computer processor. It's more like a memory system that records everything we experience and helps us predict, intelligently, what will happen next. Bringing this new brain science to computer devices will enable powerful new applications -- and it will happen sooner than you think.

Evolution in Your Brain: A biological point of view from a great Nobelist

Published by Susan Kruglinski in Discover magazine

Some of the most profound questions in science are also the least tangible. What does it mean to be sentient? What is the self? When the discussion turns to these imponderables, many minds defer rather than get mired in such muddy issues. Neuroscientist Gerald Edelman dives right in. A physician and cell biologist who won a Nobel Prize for his work on the structure of antibodies, Edelman is now obsessed with the enigma of human consciousness—except he doesn’t see it as a mystery. In Edelman’s grand theory of the mind, consciousness is a biological phenomenon. The developing brain undergoes its own process, similar to natural selection: Neurons proliferate and form connections in infancy; experience weeds out the useless from the useful, molding the adult brain in sync with its environment. Click for more...

Thursday, July 19, 2007

Consciousness and Computers

Published by Neville Holmes (University of Tasmania)
(IEEE Computer magazine, July 2007)

Recently, the cover of an issue of Time (29 Jan. 2007) that appeared to promote ancient phrenology caught my attention. Closer inspection showed it to be a "Mind & Body Special Issue." This surprised me because The Economist's special Christmas/New Year issue (23 Dec. 2006) had featured the supplement "A Survey of the Brain." So I bought the copy of Time to compare with the earlier issue of The Economist. Although both started with a signed introduction, the rest of the content was stylistically opposed. The Economist had five anonymous reports compiled from interviews with, and quotations of, experts, the lot decorated with a few drawings and diagrams. Time had 10 richly illustrated essays informed and often written by experts, and followed by a short puzzle section. The contest, if indeed it was one, seemed to be a standoff, like a saber versus a shillelagh. The computing profession is relevant here. Click for more...

[G.K Comment: As Neville Holmes says: "The conscious mind is not mysterious, just misunderstood".]

Tuesday, July 17, 2007

Comparison of the brain and a computer

Published in Wikipedia

Much interest has been focused on comparing the brain with computers. A variety of obvious analogies exist: for example, individual neurons can be compared with a microchip, and the specialised parts of the brain can be compared with graphics cards and other system components. However, such comparisons are fraught with difficulties. Perhaps the most fundamental difference between brains and computers is that today's computers operate by performing often sequential instructions from an input program, while no clear analogy of a program appears in human brains. The closest equivalent would be the idea of a logical process, but the nature and existence of such entities are subjects of philosophical debate. Given Turing's model of computation, the Turing machine, this may be a functional, not fundamental, distinction. However, Maass and Markram have recently argued that "in contrast to Turing machines, generic computations by neural circuits are not digital, and are not carried out on static inputs, but rather on functions of time" (the Turing machine computes computable functions). Click for more...

Thursday, July 12, 2007

Robot unravels mystery of walking

Published on 12/07/2007 by BBC

Roboticists are using the lessons of a 1930s human physiologist to build the world's fastest walking robot. Runbot is a self-learning, dynamic robot, which has been built around the theories of Nikolai Bernstein. "Getting a robot to walk like a human requires a dynamic machine," said Professor Florentin Woergoetter. Runbot is a small, biped robot which can move at speeds of more than three leg lengths per second, slightly slower than the fastest walking human. Click for more...

The hack of the century: Greek mobile wiretap scandal unpicked

Published on 11/07/2007
by John Leyden on "The Register"

More details have emerged on how Vodafone's Greek network was bugged three years ago to spy on top government officials.
To recap one of the most extraordinary wiretapping scandals of the post-Cold War era: eavesdroppers tapped the mobile phones of Greek Prime Minister Costas Karamanlis, cabinet ministers and security officials for about nine months between June 2004 and March 2005, around the time of the Athens Olympics.

The mobile phones of about 100 people, whose ranks include journalists and Arabs living in Greece, as well as the country's political and security elite and a US embassy worker, were monitored after snooping software was illegally installed on the systems of Vodafone Greece. Click for more...

Monday, July 09, 2007

How To Think About Cognitive Systems: Requirements and Designs

Published by Aaron Sloman,

School of Computer Science,
The University of Birmingham, UK

Much early thinking about AI was about forms of representation, the knowledge expressed, and the algorithms to operate on those representations. Later there was much in-fighting between factions promoting particular forms of representation and associated algorithms, e.g. neural computations, evolutionary algorithms, reactive behaviours, physics-inspired dynamical systems. More recently, attention has turned to ways of combining different mechanisms, formalisms and kinds of knowledge within a single multi-functional system, i.e. within one architecture. Minsky's Society of Mind was a major example. Click for more...

Saturday, July 07, 2007

Grand Challenge 5 (GC-5): Architecture of Brain and Mind

Integrating high level cognitive processes with brain mechanisms and functions in a working robot.

Click for more...

Thursday, July 05, 2007

Rat-brained robot thinks like the real thing

Published 04/07/07 by Duncan Graham-Rowe on "NewScientistTech"

A robot controlled by a simulated rat brain has proved itself to be a remarkable mimic of rodent behaviour in a series of classic animal experiments. The robot's biologically inspired control software uses a functional model of "place cells". These are neurons in an area of the brain called the hippocampus that help real rats to map their environment. They fire when an animal is in a familiar location. Alfredo Weitzenfeld, a roboticist at the ITAM technical institute in Mexico City, carried out the work by reprogramming an AIBO robot dog, made by Japanese firm Sony, with the rat-inspired control software. Click for more...
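For readers unfamiliar with place cells, the textbook abstraction is a cell whose firing rate is a Gaussian bump centred on its preferred location, with position read back out by population decoding. The Python sketch below shows that abstraction only; the article does not describe Weitzenfeld's actual model, so treat every detail here as an assumption.

```python
import math

def place_cell_rate(pos, centre, width=0.5, max_rate=20.0):
    """Firing rate (Hz) of a model place cell: a Gaussian bump of
    activity centred on the cell's preferred location."""
    d2 = (pos[0] - centre[0]) ** 2 + (pos[1] - centre[1]) ** 2
    return max_rate * math.exp(-d2 / (2 * width ** 2))

# A tiny "map" of three cells; the animal's estimated location is the
# rate-weighted average of preferred locations (population decoding).
centres = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
pos = (0.2, 0.1)  # true position

rates = [place_cell_rate(pos, c) for c in centres]
total = sum(rates)
decoded = (sum(r * c[0] for r, c in zip(rates, centres)) / total,
           sum(r * c[1] for r, c in zip(rates, centres)) / total)
```

The cell nearest the true position fires hardest, and the decoded estimate lands close to where the animal actually is, which is how a map built from place-cell activity can guide navigation.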

[G.K Comment: Please allow me to say that simulating animal brains by creating highly complex software is not the way forward. The question is how we can create a brain platform that can program itself in a complex way(!)... i.e. in the same way that a baby evolves into an adult. Does that make sense?]

Tuesday, July 03, 2007

Brain - some basic concepts by IBM

A very good webpage by IBM, listing some basic scientific concepts on the way the human brain works. Click here for more...

Friday, June 29, 2007

Supercomputer to build 3D brain

Published 07/06/05 by BBC

The neocortex is organised into thousands of columns of neurons. Neuroscientists are to build the most detailed model of the human brain with the help of an IBM supercomputer. Experts at the École Polytechnique Fédérale de Lausanne, Switzerland, will spend the next two years creating a 3D simulation of the neocortex. Click for more...

Thursday, June 28, 2007

Einstein(s) needed to simulate the human brain. Job specs below...

Published 28/06/07 by Bill Softky on "The Register"

So the tinkerers can't do the math, and the boffins can't tinker. To break that logjam we need an Einstein of engineering. He would be part hacker, part statistician: a special blend of mathematical genius, programmer, and tinkerer. And hopefully a businessman too. Click for more...

[G.K Comment: Bill Softky has a point here. But surely with the vast amount of research that goes into the sector of simulating the human brain on a machine... you would think that the result would come as a result of teamwork rather than individual excellence. Let's wait and see...]

Statistical Inference Software - The future is here

Published 25/05/07 by Bill Softky on "The Register"

...The fact is, what allowed Stanford's "Stanley" car (in the DARPA Grand Challenge competition) to cross a hundred miles of desert dirt unaided was not mechanical wizardry or "intelligence," but the careful application of statistical inference and software design to merging three kinds of sensor data: GPS coordinates, laser range-finders, and video color/texture signals. The secret sauce was in detecting the road fifty metres ahead and avoiding large obstacles, and even that apparently simple task consumed a year of the lives of a dozen computer science graduate students. It's hard to imagine Joe Tinkerer doing such things at home with his Mindstorms kit... Click here for the full article...
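The "careful application of statistical inference" to merging sensor data can be illustrated with the simplest possible case: fusing independent noisy estimates by inverse-variance weighting, so that more precise sensors count for more. The sensors, readings, and variances below are invented for illustration and are not from Stanley's actual software.

```python
def fuse(estimates):
    """Fuse independent noisy 1-D estimates, given as (value, variance)
    pairs, by inverse-variance weighting. Returns the fused value and
    its variance, which is smaller than any single sensor's."""
    weights = [1.0 / var for _, var in estimates]
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / sum(weights)
    variance = 1.0 / sum(weights)
    return value, variance

# Hypothetical lateral offset of the road centre (metres), as seen by
# three sensors of very different quality:
readings = [(1.2, 4.0),    # GPS: coarse
            (0.9, 0.25),   # laser range-finder: precise
            (1.1, 1.0)]    # video colour/texture classifier: medium

offset, variance = fuse(readings)
```

The fused estimate sits closest to the precise laser reading, and its variance is lower than the best individual sensor's, which is the whole point of combining them statistically.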

[G.K Comment: Well, another excellent article by Bill Softky. My favourite part of the article is the brief explanation of why statistical inference software is what really matters in some robot systems. I can't wait to see what other new technologies will emerge from the DARPA Urban Challenge, which is due in late 2007.]

The flexi-laws of physics

Published by Paul Davies on 30/06/2007
New Scientist Magazine issue 2610

SCIENCE WORKS because the universe is ordered in an intelligible way. The most refined manifestation of this order is found in the laws of physics, the fundamental mathematical rules that govern all natural phenomena. One of the biggest questions of existence is the origin of those laws: where do they come from, and why do they have the form that they do?

Until recently this problem was considered off-limits to scientists. Their job was to discover the laws and apply them, not inquire into their form or origin. Now the mood has changed. One reason for this stems from the growing realisation that the laws of physics possess a weird and surprising property: collectively they give the universe the ability to generate life and conscious beings, such as ourselves, who can ponder the big questions.

If the universe came with any old rag-bag of laws, life would almost certainly be ruled out. Indeed, changing the existing laws by even a scintilla could have lethal consequences. For example, if protons were 0.1 per cent heavier than neutrons, rather than the other way about, all the protons coughed out of the big bang would soon have decayed into neutrons. Without protons and their crucial electric charge, atoms could not exist and chemistry would be impossible.

Physicists and cosmologists know many such examples of uncanny bio-friendly "coincidences" and fortuitous fine-tuned properties in the laws of physics. Like Baby Bear's porridge in the story of Goldilocks, our universe seems "just right" for life. It looks, to use astronomer Fred Hoyle's dramatic description, as if "a super-intellect has been monkeying with physics". So what is going on?

A popular way to explain the Goldilocks factor is the multiverse theory. This says that a god's-eye-view of the cosmos would reveal a patchwork quilt of universes, of which ours is but an infinitesimal fragment. Crucially, each patch, or "universe", comes with its own distinctive set of local by-laws. Maybe the by-laws are assigned randomly, as in a vast cosmic lottery. It is then no surprise that we find ourselves living in a patch so well suited to ...

Saturday, May 26, 2007

Software engineers – the ultimate brain scientists?

Published 17/10/03 by Bill Softky on "The Register"

Can software engineers hope to create a digital brain? Not before understanding how the brain works, and that's one of the biggest mysteries left in science. Brains are hugely intricate circuits of billions of elements. We each have one very close by, but can't open it up: it's the ultimate Black Box. Click for more...

[G.K Comment: What do you think? Can a computer simulate the human brain? Can a robot start making sensible conclusions and even start having a high-level discussion with a human? This article answers only some of the questions on this very debatable issue. As a software engineer I believe it can be done. Let's discuss...]

Friday, May 25, 2007

Design patterns for a Black Box Brain?

Published 20/10/03 by Bill Softky on "The Register"

The bad news is that biologists are very far from figuring out the grand mystery of the brain. The good news is that software engineers might get there first. [...] What we desperately need - and what software and signal processing can help us with - is a theory of what a brain ought to do. A human brain is a black box with a million wires coming in and half a million going out; a rat's brain is smaller, with fewer wires, but faces the same basic signal-processing problems: what kind of input patterns can it expect, and how can it deal with them? Click for more...

[G.K Comment: Without a doubt, the most interesting article I have ever read. It is exactly what my favourite research topic is all about... i.e. using software to create a human-like brain. Science fiction? ...well it's only 2 years away I say. Let's see...]

Sunday, May 20, 2007

How Google translates without understanding!

Published 15/05/07 by Bill Softky on "The Register"

After just a couple of years of practice, Google can claim to produce the best computer-generated language translations in the world - in languages their boffin creators don't even understand.

Last summer, Google took top honors at a bake-off competition sponsored by the American agency NIST between machine-translation engines, besting IBM in English-Arabic and English-Chinese. The crazy part is that no one on the Google team even understands those languages.... the automatic-translation engines they constructed triumphed by sheer brute-force statistical extrapolation rather than "understanding". Click for more...
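For the curious, here is a tiny caricature of how brute-force statistics can pick a fluent translation with no "understanding": score each candidate by word-translation probabilities times a target-language bigram model, and keep the highest scorer. Everything here (the made-up source tokens, the probabilities, the word-for-word alignment with no reordering) is invented for illustration; real systems like Google's are vastly more elaborate.

```python
# Hypothetical translation probabilities: P(target word | source word).
phrase_table = {
    "w1": {"black": 0.8, "dark": 0.2},
    "w2": {"cat": 0.7, "feline": 0.3},
}

# Hypothetical target-language bigram model: P(second word | first word).
bigram = {("black", "cat"): 0.3, ("dark", "cat"): 0.05}

def score(candidate, source):
    """Product of translation probabilities (aligning word i of the
    source to word i of the candidate) and bigram fluency scores,
    with small floor values for unseen events."""
    p = 1.0
    for src, tgt in zip(source, candidate):
        p *= phrase_table[src].get(tgt, 1e-6)
    for a, b in zip(candidate, candidate[1:]):
        p *= bigram.get((a, b), 1e-3)
    return p

source = ["w1", "w2"]
candidates = [["black", "cat"], ["dark", "cat"], ["black", "feline"]]
best = max(candidates, key=lambda c: score(c, source))
```

Nothing in this pipeline knows what a cat is; the counts alone favour the candidate that is both a likely translation and fluent in the target language.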

[G.K Comment: I urge every one of you to try and understand this article even if you are not a software engineer. Google has proven that the time when machines will become more intelligent than humans is not very far away.]