The artificial brain can be created now


The time has come for computing inspired by the structure of the brain. Algorithms that use neural networks and deep learning, simulating some aspects of the human brain, allow digital computers to reach incredible heights: translating between languages, finding subtle patterns in huge amounts of data, and beating people at Go.

But even as engineers continue to actively develop this computational strategy, which is capable of so much, the energy efficiency of digital computing is approaching its limit. Our data centers and supercomputers already consume megawatts; 2% of all the electricity consumed in the US goes to data centers. The human brain, meanwhile, gets by on about 20 watts, a small fraction of the energy contained in the food we eat each day. If we want to improve computing systems, we need to make computers more like the brain.

This idea has driven a surge of interest in neuromorphic technologies, which promise to take computers beyond simple neural networks, toward circuits that work like neurons and synapses. The development of physical, brain-like circuits is already fairly mature. Work done in my laboratory and in other institutions around the world over the past 35 years has produced artificial nerve components, analogous to synapses and dendrites, that react to and produce electrical signals in much the same way as the real ones.

So what would it take to integrate these building blocks into a full-scale computer brain? In 2013, Bo Marr, my former graduate student at the Georgia Institute of Technology, helped me survey the best modern achievements in engineering and neurobiology. We came to the conclusion that it is quite possible to build a silicon version of the human cerebral cortex with transistors. Moreover, the complete machine would take up less than a cubic meter of space and consume less than 100 W, not far off from the human brain.

I do not want to say that building such a computer would be easy. The system we conceived would require several billion dollars to develop and build, and making it compact would depend on several major innovations. There is also the question of how we would program and train such a computer. Neuromorphic researchers are still working out how to make thousands of artificial neurons work together and how to find useful applications for pseudo-brain activity.

Still, the fact that we can sketch out such a system suggests that we are not far from smaller-scale chips suitable for portable and wearable electronics. Such gadgets must consume little energy, so a highly energy-efficient neuromorphic chip, even if it takes on only part of the computation, say, signal processing, could be revolutionary. Existing capabilities, such as speech recognition, could be made to work in noisy environments. You can even imagine future smartphones performing real-time speech translation in a conversation between two people. Think about it: in the 40 years since the advent of integrated circuits for signal processing, Moore's law has improved their energy efficiency by a factor of about 1,000. Brain-like neuromorphic chips could easily surpass these improvements, reducing energy consumption by another factor of 100 million. As a result, computations that once required a data center would fit in the palm of your hand.

An ideal machine approaching the brain would need to recreate analogues of all the brain's main functional components: the synapses, which connect neurons and allow them to receive and respond to signals; the dendrites, which combine incoming signals and perform local computations on them; and the cell body, or soma, the region of each neuron that integrates the input from the dendrites and transmits the output to the axon.
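To make the division of labor concrete, here is a minimal software sketch, not the author's hardware, of the signal path just described: synapses scale incoming spikes, a dendrite combines them, and a soma fires when the combined current crosses a threshold. All class names, weights, and the threshold value are illustrative assumptions.

```python
# Toy signal path: synapse -> dendrite -> soma -> axon output.
from dataclasses import dataclass

@dataclass
class Synapse:
    weight: float = 0.5                       # stored connection strength
    def respond(self, spike: float) -> float:
        return self.weight * spike            # scale the incoming signal

@dataclass
class Dendrite:
    def combine(self, currents: list) -> float:
        return sum(currents)                  # local summation of inputs

@dataclass
class Soma:
    threshold: float = 1.0
    potential: float = 0.0
    def integrate(self, current: float) -> bool:
        self.potential += current
        if self.potential >= self.threshold:  # fire and reset
            self.potential = 0.0
            return True
        return False

synapses = [Synapse(0.4), Synapse(0.3), Synapse(0.6)]
currents = [s.respond(1.0) for s in synapses]   # three presynaptic spikes
fired = Soma().integrate(Dendrite().combine(currents))
print("axon output:", fired)                    # True: 1.3 crosses threshold
```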

The simplest versions of these basic components have already been implemented in silicon. This work began with the same metal-oxide-semiconductor field-effect transistor, or MOSFET, billions of copies of which are used to build the logic circuits in modern digital processors.

These devices have much in common with neurons. Neurons operate through voltage-controlled barriers, and their electrical and chemical activity depends primarily on channels through which ions move between the inside and outside of the cell. This is a smooth, analog process, in which the signal steadily accumulates or decays rather than flipping between simple on/off states.

MOSFETs are also voltage-controlled, and they operate through the movement of individual units of charge. And when MOSFETs operate in the "subthreshold" mode, below the voltage threshold that switches them on and off, the current flowing through the device is tiny: less than one-thousandth of the current found in typical switches or digital logic gates.
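For a feel for the numbers, here is a rough numeric sketch of the textbook subthreshold relation: drain current grows exponentially with gate voltage, roughly I_D ≈ I0 · exp(Vgs / (n·UT)). The prefactor I0 and slope factor n below are illustrative assumptions, not measurements from any particular device.

```python
import math

I0 = 1e-12     # leakage-scale prefactor, amps (assumed)
n = 1.5        # subthreshold slope factor (typically 1-2)
UT = 0.0258    # thermal voltage kT/q at room temperature, volts

def subthreshold_current(vgs: float) -> float:
    """Drain current (A) for a gate-source voltage below threshold."""
    return I0 * math.exp(vgs / (n * UT))

for vgs in (0.1, 0.2, 0.3):
    print(f"Vgs = {vgs:.1f} V -> I_D = {subthreshold_current(vgs):.2e} A")
# Every extra ~90 mV of gate voltage multiplies the current by ~10,
# yet the absolute currents stay far below digital switching levels.
```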

The idea that the physics of subthreshold transistors could be used to create brain-like circuits came from Carver Mead of Caltech, who helped spark the revolution in very-large-scale integrated circuits in the 1970s. Mead pointed out that chip designers were ignoring many interesting aspects of transistor behavior by using transistors exclusively for digital logic. That process, as he wrote in 1990, is as if "all the beautiful physics that exists in transistors is crushed to zeros and ones, and then AND and OR gates are painfully built on that basis, in order to reinvent multiplication". A more "physical" or physics-based computer could perform more computation per unit of energy than a conventional digital one. Mead predicted that such a computer would take up less space, too.

In the years that followed, neuromorphic engineers built all the basic building blocks of the brain out of silicon with high biological accuracy. Dendrites, axons, and neuron somas can all be made from standard transistors and other elements. For example, in 2005 Ethan Farquhar and I created a neural circuit from a set of six MOSFETs and a handful of capacitors. Our circuit produced electrical spikes very similar to those produced by the soma of squid neurons, a classic experimental subject. Moreover, our circuit achieved this with current levels and energy consumption close to those in the squid's own brain. If we had wanted to use analog circuits to model the equations that neurobiologists derived to describe this behavior, we would have needed about 10 times as many transistors. Performing such calculations on a digital computer would require even more space.
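As a software stand-in, far simpler than either the six-MOSFET circuit or the neurobiologists' equations, the classic leaky integrate-and-fire model shows the same qualitative spiking behavior. All constants here are illustrative assumptions, not values from the 2005 circuit.

```python
def lif_spike_times(input_current, dt=1e-3, tau=0.02,
                    v_thresh=1.0, v_reset=0.0, duration=0.2):
    """Spike times (s) of a leaky integrate-and-fire neuron
    driven by a constant input current (arbitrary units)."""
    v, spikes = v_reset, []
    for step in range(round(duration / dt)):
        v += dt * (-v / tau + input_current)  # leak toward rest, add drive
        if v >= v_thresh:                     # threshold crossed: spike
            spikes.append(step * dt)
            v = v_reset                       # reset after each spike
    return spikes

print(lif_spike_times(input_current=60.0))
# A regular train of spike times: the same qualitative behavior the
# silicon soma reproduces directly in transistor physics.
```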

spectrum.ieee.org/image/MjkwMTM1MQ.jpeg
Synapses and somas: a floating-gate transistor (top left), which can store varying amounts of charge, can be used to build a crossbar array of artificial synapses (bottom left). Electronic versions of other neuron components, such as the soma (right), can be made from standard transistors and other elements.

Synapses are somewhat harder to emulate. A device that behaves like a synapse must be able to remember what state it is in, respond in a particular way to an incoming signal, and adapt its responses over time.
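A minimal behavioral sketch of those three requirements, remembered state, a state-dependent response, and adaptation, might look like the following. The delta-rule update is an illustrative assumption, not the device physics of any real synapse.

```python
class PlasticSynapse:
    def __init__(self, weight: float = 0.2):
        self.weight = weight                  # persistent state: "memory"

    def respond(self, presynaptic: float) -> float:
        return self.weight * presynaptic      # response depends on state

    def adapt(self, presynaptic: float, error: float, rate: float = 0.05):
        self.weight += rate * error * presynaptic  # adjust with experience

syn = PlasticSynapse()
for _ in range(50):                           # train toward a target output
    out = syn.respond(1.0)
    syn.adapt(1.0, error=0.8 - out)
print(round(syn.weight, 3))                   # weight has adapted toward 0.8
```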

There are several potential approaches to building synapses. The most mature of them is the single-transistor learning synapse (STLS), which I worked on with colleagues at Caltech in the 1990s, when I was a graduate student in Mead's lab.

We first presented the STLS in 1994, and it became an important tool for engineers building modern analog circuits, such as physical neural networks. In a neural network, each node has a weight associated with it, and those weights determine how the data from different nodes are combined. The STLS was the first device that could hold a set of different weights and be reprogrammed on the fly. The device is also non-volatile, meaning it remembers its state even when not in use, which significantly reduces its energy needs.

The STLS is a kind of floating-gate transistor, the device used to make the cells of flash memory. In a conventional MOSFET, the gate controls the current passing through the channel. A floating-gate transistor has a second gate sitting between the electrical gate and the channel. This floating gate is not directly connected to ground or to any other component. Thanks to that electrical isolation, reinforced by high-quality silicon insulators, charge remains in the floating gate for a long time. The floating gate can take on differing amounts of charge and can therefore produce an electrical response at many levels, which is just what is needed for an artificial synapse that can vary its response to a stimulus.

My colleagues and I used the STLS to demonstrate the first crossbar network, a computational model now popular among nanodevice researchers. In this two-dimensional array, devices sit at the intersections of input lines running from top to bottom and output lines running from left to right. This configuration is useful because it lets you program the connection strength of each "synapse" individually, without disturbing the other elements of the array.
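Here is a minimal sketch of that crossbar idea, with NumPy standing in for the device physics: one programmable weight sits at each crossing, and every output line sums the weighted contributions of all the inputs, the same multiply-accumulate the analog array performs in parallel. The class and method names are illustrative assumptions.

```python
import numpy as np

class Crossbar:
    def __init__(self, n_inputs: int, n_outputs: int):
        self.w = np.zeros((n_inputs, n_outputs))  # one weight per crossing

    def program(self, i: int, j: int, weight: float):
        self.w[i, j] = weight     # address one synapse, neighbors untouched

    def forward(self, x: np.ndarray) -> np.ndarray:
        return x @ self.w         # each output line sums weighted inputs

xb = Crossbar(3, 2)
xb.program(0, 0, 0.9)
xb.program(2, 1, -0.4)
print(xb.forward(np.array([1.0, 1.0, 1.0])))  # -> [ 0.9 -0.4]
```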

Thanks in part to the recent DARPA program called SyNAPSE, the field of neuromorphic engineering has seen a surge of research into artificial synapses built from nanodevices such as memristors, resistive RAM, and phase-change memory, as well as from floating-gate devices. But these new artificial synapses will find it hard to improve on the floating-gate arrays of 20 years ago. Memristors and other new types of memory are difficult to program. The architecture of some makes it quite difficult to address a particular device in a crossbar array. Others require a dedicated transistor for programming, which significantly increases their size. Because floating-gate memory can be programmed to a wide range of values, it is easier to tune to compensate for device-to-device manufacturing variation than other nanodevices. Several neuromorphic research groups that tried to bring nanodevices into their designs eventually switched to floating-gate devices.

And how do we combine all these brain-like components? In the human brain, neurons and synapses are intermingled. Neuromorphic chip designers should also take an integrated approach, placing all the components on a single chip. But in many laboratories you will not see this: to make research projects easier to work with, the individual building blocks are put in different places. Synapses may sit in an off-chip array. Connections may be routed through a separate chip, a field-programmable gate array (FPGA).

But as we scale up neuromorphic systems, we need to make sure we do not copy the arrangement of modern computers, which lose a significant amount of energy shuttling bits back and forth between logic, memory, and storage. Today a computer can easily consume 10 times more energy moving data than computing with it.

The brain, by contrast, minimizes the energy cost of communication by keeping operations highly local. The brain's memory elements, such as synaptic strengths, are intermingled with the components that carry the signals. And the brain's "wires", the dendrites and axons that carry incoming signals and outgoing spikes, are usually short relative to the size of the brain and do not need much energy to sustain a signal. From anatomy we know that more than 90 percent of neurons connect only with their 1,000 nearest neighbors.

Another big question for the creators of brain-like chips and computers is the algorithms that will run on them. Even a weakly brain-like system can offer a great advantage over a conventional digital one. For example, in 2004 my group used floating-gate devices to perform the multiplications in signal processing, using 1,000 times less energy and 100 times less space than a digital system. In the years since, researchers have successfully demonstrated neuromorphic approaches to other kinds of signal-processing computation.
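To show the kind of multiply-heavy signal-processing kernel such analog multipliers target, here is a plain finite impulse response (FIR) filter, essentially nothing but multiply-accumulate steps. The 3-tap moving average below is an illustrative choice, not the 2004 system.

```python
def fir(signal, taps):
    """Convolve a signal with filter taps: one multiply per tap per sample."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * signal[n - k]  # the multiply an analog cell replaces
        out.append(acc)
    return out

print(fir([1.0, 2.0, 3.0, 4.0], taps=[1/3, 1/3, 1/3]))
# Every output sample costs len(taps) multiplies; performing each multiply
# in analog floating-gate hardware is where the energy savings come from.
```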

But the brain is still 100,000 times more efficient than these systems. That is because, although our current neuromorphic technologies take advantage of the neuron-like physics of transistors, they do not use algorithms like the ones the brain itself runs.

Today we are just beginning to discover these physical algorithms, that is, processes that could let brain-like chips work with efficiency close to the brain's. Four years ago, my group used silicon somas, synapses, and dendrites to run a word-spotting algorithm that recognized words in an audio recording. That algorithm showed a thousandfold improvement in energy efficiency over analog signal processing. Eventually, by lowering the voltage applied to the chips and using smaller transistors, researchers should be able to create chips that match the brain's efficiency on many kinds of computation.

When I started research in neuromorphics 30 years ago, everyone believed that developing brain-like systems would give us amazing capabilities. Indeed, entire industries are now built around AI and deep learning, and these applications promise to completely transform our mobile devices, our financial institutions, and how people interact in public places.

Yet these applications rely very little on what we know about how the brain works. In the next 30 years we will no doubt see this knowledge put to ever greater use. We already have many of the basic hardware blocks needed to translate neurobiology into computing. But we need an even better understanding of how this hardware should behave, and of which computational schemes will give the best results.

Consider this a call to action. We have achieved a lot using a very rough model of how the brain works. But neuroscience can lead us to create far more sophisticated brain-like computers. And what better use for our own brains than figuring out how to build these new computers?