AI researchers draw much of their inspiration from the human brain, and many are working to recreate its cognitive abilities in deep neural networks. The intersection of artificial intelligence and neuroscience has become one of the most fascinating fields of research.
Much of the current research focuses on emulating the synaptic connections between neurons: recreating cognitive capabilities means assembling neural network architectures analogous to those that power cognitive functions in the human brain. This task is daunting because neuroscientists still struggle to understand the brain's cognitive mechanisms. Even so, AI research is producing promising results. DeepMind, for instance, is among the most active companies working at the intersection of neuroscience and AI.
Today, the cognitive abilities that scientists most often try to recreate with AI include attention, consciousness and planning, continual learning, episodic memory, and meta-learning.
Attention is one of the brain functions that people still don't understand well. It remains an open question which mechanisms of the brain allow us to focus on certain tasks and ignore the rest of the environment. The mechanism of attention has inspired scientists to create deep learning models such as convolutional neural networks (CNNs) and deep generative models. Modern CNN models can classify objects in a picture by building a schematic representation of the input and ignoring irrelevant information.
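The idea of weighting relevant inputs and ignoring the rest can be sketched with scaled dot-product attention, the standard formulation used in modern deep learning (the function names here are illustrative, not from any specific library):

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    """Scaled dot-product attention: weight each value by how well its
    key matches the query, so relevant inputs dominate the output."""
    d_k = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)  # similarity of query to each key
    weights = softmax(scores, axis=-1)        # normalized "focus" over inputs
    return weights @ values, weights

# One query attends over three inputs; the second key matches the query
# best, so most of the attention weight lands on the second value.
q = np.array([[1.0, 0.0]])
k = np.array([[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]])
v = np.array([[10.0], [20.0], [30.0]])
out, w = attention(q, k, v)
```

The softmax over similarity scores is what implements "focus": it concentrates weight on the best-matching inputs while everything else is effectively ignored.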
Consciousness involves the ability to forecast and think about the future. Accordingly, modern AI research focuses on simulation-based planning, which lets deep generative models plan for long-term outcomes. Recently introduced architectures can generate consistent sequences, providing a parallel to the function of the hippocampus by creating an imagined experience.
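Simulation-based planning can be sketched as "imagining" futures with a model of the world and acting on the best one. A minimal illustration, assuming a toy one-dimensional world (the environment, reward, and function names are all invented for this example):

```python
import itertools

GOAL = 10.0

def dynamics(state, action):
    # Toy world model: the agent moves one step along a line.
    return state + action

def reward(state):
    # Higher (less negative) reward the closer we are to the goal.
    return -abs(GOAL - state)

def plan(state, horizon=5):
    """Exhaustive simulation-based planner: roll every candidate action
    sequence forward through the model, score the imagined future, and
    commit to the first action of the best sequence."""
    best_seq, best_score = None, float("-inf")
    for seq in itertools.product((-1.0, 1.0), repeat=horizon):
        s, score = state, 0.0
        for a in seq:
            s = dynamics(s, a)
            score += reward(s)
        if score > best_score:
            best_score, best_seq = score, seq
    return best_seq[0]

first_move = plan(state=0.0)  # stepping toward the goal scores best
```

Real systems replace the exhaustive search with sampling or learned policies, and the hand-written dynamics with a learned generative model, but the structure is the same: plan by simulating consequences before acting.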
The human ability to learn new things without forgetting previous ones has inspired scientists to tackle catastrophic forgetting in neural networks: as a network's parameters shift toward the optimal state for the next task, they overwrite the configuration that allowed it to perform the previous one.
Continual learning has inspired researchers to create a deep learning technique called elastic weight consolidation (EWC). It slows down learning in the subset of network weights identified as essential to previous tasks, anchoring those parameters to the solutions found earlier and thereby supporting continual learning.
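The anchoring idea in EWC is a quadratic penalty that pulls each weight back toward its old value in proportion to its importance on the previous task. A minimal sketch with hand-picked importances and a toy quadratic task loss (all numbers and names are illustrative, not the full published algorithm):

```python
import numpy as np

def ewc_penalty(params, old_params, fisher, lam):
    """EWC's quadratic penalty: weights with high Fisher importance
    (essential to the previous task) are anchored near their old values."""
    return 0.5 * lam * np.sum(fisher * (params - old_params) ** 2)

def train_on_task_b(params, old_params, fisher, lam, lr=0.01, steps=2000):
    # Task B alone would pull every weight toward 5.0; the EWC term
    # resists that pull in proportion to each weight's importance.
    target_b = np.array([5.0, 5.0])
    for _ in range(steps):
        grad_b = params - target_b                       # grad of 0.5*(w - target)^2
        grad_ewc = lam * fisher * (params - old_params)  # grad of the penalty
        params = params - lr * (grad_b + grad_ewc)
    return params

old = np.array([1.0, 1.0])       # weights found on task A
fisher = np.array([100.0, 0.0])  # only the first weight mattered for task A
new = train_on_task_b(old.copy(), old, fisher, lam=1.0)
# new[0] stays near 1.0 (protected); new[1] moves to ~5.0 (free to adapt)
```

The contrast in the final weights is the whole point: the important weight barely moves, so task A survives, while the unimportant weight is free to specialize for task B.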
Episodic memory is the brain function people use when they remember specific events or places. AI scientists are trying to incorporate methods similar to episodic memory into reinforcement learning algorithms. Recent networks store specific experiences and choose new actions based on the similarity between the current input and previous events stored in memory.
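The store-and-recall behavior described above can be sketched as a tiny episodic controller: keep a table of past experiences and act by recalling what worked in the most similar situation (the class and method names are invented for this illustration):

```python
import math

class EpisodicMemory:
    """Stores past (state, action, value) experiences and acts by
    recalling the action from the most similar stored state -- a
    minimal sketch of episodic control, not any specific published model."""

    def __init__(self):
        self.events = []  # list of (state, action, value) tuples

    def store(self, state, action, value):
        self.events.append((state, action, value))

    def act(self, state):
        # Recall the event whose state is closest to the current input,
        # breaking ties in favor of the higher-valued outcome.
        def preference(event):
            s, _, v = event
            return (-math.dist(state, s), v)
        _, action, _ = max(self.events, key=preference)
        return action

mem = EpisodicMemory()
mem.store((0.0, 0.0), "left", 1.0)
mem.store((5.0, 5.0), "right", 2.0)
chosen = mem.act((0.2, -0.1))  # nearest stored experience suggests "left"
```

Practical systems use learned state embeddings and k-nearest-neighbor lookups rather than raw states and a single best match, but the principle is the same: similarity to stored episodes drives the action choice.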
Human cognition includes the ability to learn new things by drawing conclusions from prior knowledge through inductive inference. Researchers have recently begun to incorporate such inference mechanisms into AI systems, so that models can reason about completely new concepts despite scarce data and generate new samples from a single example of a concept. This line of work has grown into meta-learning, a new area of AI research inspired by the abilities of the human brain.
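Learning a new concept from a single example can be illustrated with a nearest-prototype classifier: one stored example per concept is enough to classify new inputs. This is a deliberately simplified sketch of few-shot inference, not any specific published meta-learning method, and all names are invented:

```python
import math

class OneShotClassifier:
    """Learns a brand-new concept from a single example by storing it
    as a prototype, then classifies inputs by the nearest prototype."""

    def __init__(self):
        self.prototypes = {}

    def learn(self, label, example):
        # A single example is enough to define a new concept.
        self.prototypes[label] = example

    def classify(self, sample):
        return min(self.prototypes,
                   key=lambda lbl: math.dist(sample, self.prototypes[lbl]))

clf = OneShotClassifier()
clf.learn("circle", (1.0, 1.0))   # one example per concept
clf.learn("square", (8.0, 8.0))
prediction = clf.classify((1.5, 0.5))
```

Real meta-learning systems work in a learned embedding space where distances reflect prior knowledge gathered across many tasks; that prior is what lets a single example carry so much weight.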