The mathematician Alan Turing's question, "Can machines think?", sparked the quest for artificial intelligence (AI). Since the biological nervous system is the only known system capable of such complex computation, knowledge of the physiology of the brain's neural circuits has become an important source of reference for AI researchers. One route, which has had great success recently, is to perform intelligent computation using circuits that resemble the brain's neural structures, modeling cortical circuits in a highly reductionist way. The resulting brain-inspired models, known as deep networks, are layered hierarchies of neuron-like elements connected by adjustable weights, the biomimetic counterparts of synapses. The use of deep networks has revolutionized artificial intelligence. In central areas of AI research, including computer vision, speech recognition and production, and the playing of complex games, deep networks outperform all previous approaches. They are now widely used in computer vision and in speech and text translation, and are expanding rapidly into many other fields. Here, I will discuss aspects of neural circuits that can offer guidance for network models of cognition and general AI.
The key problem in deep networks is learning: adjusting the synaptic weights so that the network produces the desired output for each input pattern. This adjustment is performed automatically from a set of training examples, each pairing an input pattern with its desired output; the weights are tuned until the network's outputs on the training inputs match the expected outputs. Successful learning allows the network not only to reproduce the training examples but also to generalize, providing correct outputs for new input patterns that never occurred during training.
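The weight-adjustment process described above can be illustrated with a minimal sketch: a single sigmoid unit trained by gradient descent to reproduce the logical OR function. This is a toy example chosen for illustration, not a model from the article; the learning rate, task, and network size are all assumptions.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training examples: pairings of input patterns and desired outputs (OR).
training = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]  # adjustable weights ("synapses")
b = 0.0                                        # bias term
lr = 1.0                                       # learning rate (assumed)

for epoch in range(2000):
    for x, target in training:
        y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = y - target               # difference from the desired output
        grad = err * y * (1 - y)       # gradient of squared error through sigmoid
        w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
        b -= lr * grad

# After training, the rounded outputs match the desired outputs.
for x, target in training:
    print(x, round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)))
```

A real deep network applies the same principle, propagating the error gradient back through many layers of weights rather than a single unit.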
Comparing deep-network models with physiological, fMRI, and behavioural data from primates reveals both similarities and differences between these models and the brain.
Compared with existing deep-network models, the advantages of human cognitive learning and understanding may stem largely from the rich and complex innate structure of the human cognitive system. Recent models of visual learning in infancy show an effective combination of learning and innate mechanisms. Meaningful complex concepts are neither fully innate nor learned entirely on their own. The innate components are not developed concepts but simpler "proto-concepts". These provide internal teaching signals that guide the learning system to gradually acquire and construct complex concepts, essentially without external training. For example, a specific pattern of image motion can provide an internal teaching signal for hand recognition. Hand detection, and the hand's manipulation of objects, can in turn guide the learning system to detect gaze direction, and detecting the gaze target is useful for learning to infer human intention. Such innate structure could be wired to specific brain regions through specific initial connectivity, providing inputs and error signals to specific targets.
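The internal-teaching-signal idea above can be sketched in code: a fixed, unlearned heuristic (standing in for a proto-concept such as a motion cue) labels raw data, and a learned classifier is trained from those internally generated labels rather than from external supervision. Everything here, including the one-feature heuristic and the perceptron learner, is a hypothetical illustration, not the article's model.

```python
import random

random.seed(1)

def innate_cue(example):
    # Stand-in for an innate proto-concept: a fixed, unlearned heuristic
    # that fires when a simple feature exceeds a threshold (assumed).
    return 1 if example[0] > 0.5 else 0

# Unlabeled "sensory" data; no external labels are ever provided.
data = [[random.random(), random.random()] for _ in range(200)]

# Train a perceptron using the innate cue's output as the teaching signal.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for epoch in range(50):
    for x in data:
        teach = innate_cue(x)                               # internal teaching signal
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        err = teach - pred                                   # perceptron error
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b += lr * err

# The learned detector now approximates the innate heuristic; in the article's
# account it could then be refined on richer features the heuristic cannot see.
agreement = sum(
    innate_cue(x) == (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0)
    for x in data
) / len(data)
print(agreement)
```

The design point is that the teacher is crude but free: once the learned detector matches it, further learning can bootstrap from the detector's output, as in the hand-to-gaze-to-intention chain described above.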
Reprinted from the WeChat public account "Brain-computer Interface Community"; in case of infringement, please notify us and the content will be removed.