Several researchers in the field of artificial intelligence (AI) are warning about a possible AI winter: a period in which scientists lose interest in the discipline, institutions drastically cut research funding, and the field fades from public debate. It would not be the first AI winter, though. The last two decades have been a period of almost unrivalled optimism about the subject: better hardware, big datasets, and deep learning have finally produced artificial intelligence that wows consumers and funders alike. However, we are still a long way from obtaining a general AI, or more human-like systems.
Personally, I am not so sure that AI research will decline, but I am convinced that from now on the field will work much more closely with neuroscience and follow its lead. We could call an "AI winter" the period these two sciences will need to take the next step towards a better understanding of how the human brain learns and extracts information from its environment. This article describes the trend among researchers in this direction, along with some practical examples of neuroscience's contribution.
The current situation is paradoxical, since in its origins AI was based on neuroscience and psychology. As each of those disciplines later developed and expanded, their boundaries became clearer and the interaction was lost.
Neuroscience offers two advantages for AI. First, it provides a rich source of inspiration for new types of algorithms, independent of the mathematical ideas that have largely dominated traditional approaches to AI, such as deep learning and neural networks. Second, it can validate AI techniques that already exist: if a known algorithm is subsequently found to be implemented in the brain, that is strong support for its plausibility as an integral component of an overall general intelligence system.
However, it is at this point that two scientific camps diverge. Researchers such as Henry Markram, Dharmendra Modha, and Stephen Larson focus on the complete simulation of the brain, even from a biological point of view; they estimate that a million lines of code would be enough for this purpose. They are trying to replicate all the synapses, dendrites, and axon firings in order to understand how the brain learns, gathers information about the environment, and even how to fight mental diseases. In this article, however, I will focus on work that tries to reproduce only the learning process, from an engineering point of view.
Neuroscience has brought reinforcement learning (RL) back into fashion. It was never a particularly attractive tool: the algorithm is computationally very inefficient, requiring hundreds or thousands of trials to reach the optimal value. But this experimentation with different states and decisions is becoming a great advantage for modelling and representing some crucial human abilities.
For instance, RL has become a good model of how animals and humans learn motor skills: it pursues learning through repetition of an action. Just try to remember how you learned to ride a bike or to swim, or how you memorized the way back home in a new city. It is a sum of trial-and-error decisions in which the person reinforces the good decisions that led to keeping balance or reaching the destination.
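This trial-and-error loop can be sketched with tabular Q-learning; the tiny corridor environment below is an invented toy example, not something from the works discussed here:

```python
import random

# Toy corridor: states 0..4, goal at state 4; actions: 0 = left, 1 = right.
# Reward is 1 on reaching the goal, 0 otherwise (an invented toy problem).
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0)

random.seed(0)
for episode in range(200):          # many repetitions, as in motor learning
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit what worked, sometimes explore
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2, r = step(s, a)
        # reinforce the decision in proportion to the outcome (trial and error)
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# After enough repetition, the greedy policy moves right towards the goal.
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(GOAL)]
print(policy)
```

Note how inefficient this is, as the article says: the agent needs hundreds of episodes of blind experimentation before the good decisions dominate.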
Moreover, a great achievement has been the combination of deep neural networks with RL, which incorporates the use of episodic memories. RL in this sense represents the natural learning of skills, for instance, the rules of a game. This information is stored, and a neural network then draws on the experiences collected by RL to find the optimal solution, or makes sense of past experiences to understand a process of some complexity. This use of deep RL has proved valid for simulating how children gain experience and common sense by interacting with the environment.
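The episodic-memory component of deep RL is often implemented as a replay buffer: experiences are stored as they happen and later replayed to train the network. Here is a minimal sketch of that idea (the class name, capacity, and data are my own invention, not from any specific system):

```python
import random
from collections import deque

# Episodic memory as a replay buffer: store experiences during play,
# replay random batches of them later to train a value network.
class ReplayBuffer:
    def __init__(self, capacity=10_000):
        self.memory = deque(maxlen=capacity)   # oldest episodes fade out

    def store(self, state, action, reward, next_state):
        self.memory.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Replaying random past experiences breaks the correlation between
        # consecutive steps and lets each experience be reused many times.
        return random.sample(self.memory, batch_size)

buffer = ReplayBuffer()
for t in range(100):                  # pretend interaction with a game
    buffer.store(t, t % 2, 0.0, t + 1)

batch = buffer.sample(32)
print(len(batch))   # 32 experiences drawn from episodic memory
```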
Finally, RL is also turning out to be a route towards people's capability for imagining and planning. Humans can forecast long-term future outcomes through simulation-based planning, thanks to a model of the environment learned through experience.
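Simulation-based planning can be sketched as follows: a model of the environment (here hand-coded rather than learned, and reusing an invented corridor toy problem) lets the agent "imagine" the outcome of each action sequence before committing to anything:

```python
# Simulation-based planning: use a model of the environment to imagine
# the outcome of each action sequence, then act on the best first step.
# The corridor model below is an invented toy example.
GOAL = 4

def model(state, action):
    """Imagined transition: what would happen if we took this action?"""
    return max(0, min(GOAL, state + (1 if action == 1 else -1)))

def plan(state, horizon=4):
    """Enumerate imagined action sequences; return the best first action."""
    best_action, best_progress = 0, -1
    for first in (0, 1):
        for rest in range(2 ** (horizon - 1)):
            s = model(state, first)
            for i in range(horizon - 1):
                s = model(s, (rest >> i) & 1)   # unroll the imagined future
            if s > best_progress:               # closer to the goal is better
                best_progress, best_action = s, first
    return best_action

print(plan(0))   # → 1: the planner imagines futures and chooses to move right
```

The contrast with the Q-learning sketch above is the point: no real trial is wasted, because the trials happen inside the learned model.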
Until quite recently, most neural network models (typically convolutional) worked directly on entire images or video frames, giving equal priority to all pixels at the earliest stage of processing. However, this is not how the brain works: it focuses attention on moving objects, colours, or specific regions. Image-recognition algorithms are therefore implementing attention, which also reduces their computational cost.
One of the main characteristics of the human brain is its ability to learn continuously without forgetting previously acquired knowledge or skills. For neural networks, until recently, every new piece of knowledge required retraining the network, which was catastrophic for what it had already learned: the weights and biases that encode a network's knowledge were simply overwritten. Researchers are now developing a form of elastic weight consolidation so that the same network can learn different tasks without losing any information.
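The idea behind elastic weight consolidation can be sketched with a single penalty term: weights that were important for an old task are anchored, so learning a new task moves them less. The numbers and importance values below are toy examples of my own:

```python
# Elastic weight consolidation, sketched for one parameter vector: anchor
# the weights that mattered for task A with a quadratic penalty, so that
# training on task B prefers to move the unimportant weights instead.
LAMBDA = 10.0

old_weights = [1.0, -0.5, 2.0]    # weights after learning task A
importance  = [5.0, 0.01, 3.0]    # per-weight importance for task A

def ewc_penalty(weights):
    """Penalty that grows when important old weights drift."""
    return 0.5 * LAMBDA * sum(
        f * (w - w_old) ** 2
        for f, w, w_old in zip(importance, weights, old_weights)
    )

# Moving an unimportant weight is cheap; moving an important one is costly,
# so gradient descent on (new-task loss + penalty) protects old knowledge.
cheap  = ewc_penalty([1.0, 1.5, 2.0])   # only the unimportant weight moved
costly = ewc_penalty([3.0, -0.5, 2.0])  # an important weight moved
print(cheap < costly)   # → True
```

In a real system this penalty is added to the new task's loss, and the importance values are estimated from the old task's data; here they are simply given.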
Humans have a great ability to rapidly learn new concepts from only a handful of examples, which makes human knowledge and learning very flexible. This is a very hard task for AI. However, recent models are creating neural networks that learn to learn. The following example makes it easy to understand: a child can naturally recognize handwritten letters, even when they are written by different people in different styles. Neural networks are adopting this ability by leveraging prior experience with related problems to support one-shot concept learning.
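Once a good feature representation has been learned from prior, related problems, one-shot classification can be as simple as finding the nearest stored example. The "feature vectors" below are invented toy numbers standing in for learned features:

```python
import math

# One-shot classification sketch: one labelled example per class is stored,
# and a new input is assigned to the class of its nearest example. This only
# works because the features (learned from related problems) are good.

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# A single example per letter, e.g. features of a handwritten 'a' and 'b'.
support = {"a": [0.9, 0.1], "b": [0.1, 0.8]}

def classify(features):
    """Label a new sample with the class of the closest stored example."""
    return min(support, key=lambda label: distance(features, support[label]))

# A differently styled 'a' still lands nearest the stored 'a' example.
print(classify([0.7, 0.3]))   # → a
```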
This is also related to how humans transfer learning: a person who knows how to use a laptop or drive a car can generally use an unfamiliar computer or vehicle.
It is not only AI that will benefit from neuroscience's feedback. In the opposite direction, AI, and mainly machine-learning algorithms, have transformed neuroscience forever, along with the tools to analyze MRI scans, make diagnoses from big data, and develop new medicines.
In this new era, neither science will be able to evolve without the other.