The end of the Moore's Law era and how it may affect the future of information technology

Moore's Law hardly needs a lengthy introduction – we all know it, at least approximately. To recap briefly: it is an empirical observation made by Gordon Moore. The law has had several formulations, but the modern one states that the number of transistors placed on an integrated circuit doubles every 24 months. A little later, a variant of the law appeared with 18 months instead of two years. That version is associated not with Moore but with David House of Intel, who argued that processor performance should double every 18 months thanks to simultaneous growth in both the number of transistors and the speed of each of them.
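To make the two formulations concrete, here is a minimal Python sketch of what each doubling period implies over a decade (the function and starting values are mine, purely for illustration):

```python
def projected(initial, years, doubling_period_years):
    """Quantity that doubles every `doubling_period_years` years."""
    return initial * 2 ** (years / doubling_period_years)

# Transistor count under the canonical 24-month doubling:
print(projected(2300, 10, 2.0))   # 2300 * 2**5 = 73,600 transistors
# Relative performance under House's 18-month variant:
print(projected(1.0, 10, 1.5))    # ~101x over the same 10 years
```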

Ever since the law was formulated, developers of electronic components have tried to keep pace with it. For 1965, the observation was something unusual, even radical. Back then, "minicomputers" were not all that small, occupying the volume of an ordinary desk or more. It was hard to imagine that computers could eventually become part of a refrigerator, a washing machine, or other household appliances. Most people had never seen a computer, and those who had almost never worked with one. Those who did relied on punched cards and other inconvenient means of interaction, and the machines themselves solved a rather narrow range of problems.

Once Moore's idea became known, magazines even began to poke fun at it. One of them, for example, ran this caricature:

At the time it was hard to imagine that such computers would soon not be considered small at all. Moore, incidentally, saw the illustration, and it surprised him with its originality. As far as one can judge, the artist was trying to convey skepticism about the idea of an ever-shrinking PC. Yet within 25 years the illustration had become ordinary reality.

The Influence of Moore's Law

As mentioned above, Moore's Law has several variations; it is not just about the constant growth of the transistor count on a chip. One consequence of Moore's idea is the attempt to work out how fast ever-smaller transistors will run. Scientists and information technology specialists have also used Moore's idea to predict how quickly RAM and main memory capacity will grow, how productive chips will become, and so on.

But the main point is not which version of Moore's Law is more curious or useful; it is the impact the basic idea has had on our world. That influence takes three main forms: competition among developers, forecasting, and changes in the architecture of computing systems.

Rivalry

Moore's Law can be used to estimate how much information will fit into a single chip, and it applies to memory as well. At the dawn of the PC era, a memory chip could store only a tiny amount of data; such chips were called RAM (Random Access Memory). Soon many manufacturers were producing 16K chips. Then, in full accordance with Moore's Law or even faster, 64K chips appeared. The engineers who developed them knew about the law and tried to match it. Thus, from the very beginning, a special non-stop production cycle took hold: while releasing one chip, engineers were already finishing work on its next generation. The same situation holds today: everyone knows the rules of the game, and everyone plays by them. Knowing the trend in transistor counts (the formula was quite clear from the start), engineers at any company producing electronic components could roughly predict when each generation of chips would be released, and the forecast was fairly accurate. One could likewise predict in which year a processor with a given performance would appear.
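As a toy illustration of that kind of forecasting (the start year and the cadence below are assumptions for the sake of the example, not historical data), one could tabulate when each successive memory generation "should" arrive if capacity quadruples roughly every four years:

```python
# Hypothetical schedule: capacity quadruples every ~4 years
# (two 24-month doublings per generation).
year, capacity_kbit = 1976, 16         # assumed starting point: 16K chips
for _ in range(4):
    print(f"~{year}: {capacity_kbit}K generation")
    capacity_kbit *= 4                 # 16K -> 64K -> 256K -> 1024K
    year += 4
```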

Manufacturers began drawing up production plans with Moore's Law as the main reference point. Computer vendors knew exactly when one generation of machines should leave the market and when the next should appear.

Moore's Law, one could say, set the tempo of the manufacturing cycle for electronic components and systems. There were no surprises in this respect, and there could be none: everyone worked at roughly the same speed, neither trying to outrun nor falling behind the timeline Moore had set. Everything was perfectly predictable.

The architecture of PCs and their components

The same law allowed engineers to design chips that became the standard for a long time, starting with the Intel 4004 and its subsequent incarnations. These chips were built on an organization of computation that became known as the von Neumann architecture.

In the first half of 1945, the principles of this logical architecture were set out in a document called the "First Draft of a Report on the EDVAC", a report for the US Army's ballistics laboratory, whose money paid for building ENIAC and developing EDVAC. Because it was only a draft, the report was not intended for publication, merely for circulation within the group; but Herman Goldstine, the Army's curator of the project, duplicated the work and sent it out to a wide circle of scientists for review. Since the first page of the document carried only von Neumann's name [1], readers were left with the false impression that he was the author of all the ideas presented in it. The document contained enough information for its readers to build computers like EDVAC on the same principles and with the same architecture, which as a result became known as the "von Neumann architecture."

After the end of World War II and the completion of work on ENIAC in February 1946, the team of engineers and scientists broke up. John Mauchly and John Presper Eckert decided to go into business and build computers commercially. Von Neumann, Goldstine, and Burks moved to the Institute for Advanced Study, where they decided to create their own computer, the "IAS machine", similar to EDVAC, and use it for research work. In June 1946 they [2] [3] laid out their principles of computer design in the classic paper "Preliminary Discussion of the Logical Design of an Electronic Computing Instrument". More than half a century has passed since then, but the provisions put forward there remain valid today. The paper convincingly justified the use of the binary system for representing numbers; until then, computers had stored the numbers they processed in decimal form. The authors demonstrated the advantages of binary for hardware implementation and the convenience and simplicity of performing arithmetic and logical operations in it. Computers later came to process non-numeric kinds of information (text, graphics, sound, and others), but binary coding of data still forms the informational basis of any modern computer.
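The advantage the authors describe is easy to see even now: in binary, addition decomposes into the two simplest logic operations, XOR (the sum bits) and AND (the carry bits), which map directly onto hardware gates. A small Python illustration of that decomposition (the code is mine, not from the 1946 paper):

```python
def add_binary(a: int, b: int) -> int:
    """Add two non-negative integers using only bitwise logic,
    mirroring how a hardware adder combines XOR and AND."""
    while b:
        carry = (a & b) << 1   # positions where a carry is generated
        a = a ^ b              # bitwise sum, ignoring carries
        b = carry              # feed the carries back in
    return a

print(add_binary(0b1011, 0b0110))  # 11 + 6 = 17
```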

All these foundations were laid several decades ago and have remained the basis ever since. Almost nothing has changed; developers have simply tried to make computers ever more productive.

It is worth remembering that Moore's Law lies at the core of all this. All of its incarnations supported the basic model of computer development, and little could break that cycle. And the more actively computer technology developed, the deeper, one might say, its developers became mired in the law. Creating another computer architecture takes years and years, and few companies could afford that luxury, the search for alternative paths of development. Research organizations like MIT ran bold experiments such as the Lisp Machine and the Connection Machine, and one of the Japanese projects of the era belongs here as well. But it all came to nothing; the von Neumann architecture stayed in use.

The work of engineers and programmers came down to optimizing their programs and hardware so that every square millimeter of a chip worked more efficiently. Developers competed in caching ever larger amounts of data. Manufacturers of electronic components also tried (and still try) to place as many cores as possible inside a single processor. Whatever the case, all the work revolved around a limited number of processor architectures: x86, ARM, and PowerPC. Thirty years ago there were many more.

x86 is used mainly in desktops, laptops, and cloud servers. ARM processors power phones and tablets. PowerPC chips, in most cases, are found in embedded applications such as the automotive industry.

An interesting exception to the hard rules of the game set by Moore's Law is the GPU. GPUs were developed to process graphics with a high degree of efficiency, so their architecture differs (so far) from that of CPUs. To cope with its task, the GPU had to evolve independently of the CPU: its architecture is optimized for processing the large volumes of data needed to draw an image on screen. So engineers developed a different type of chip, one that did not replace existing processors but complemented their capabilities.
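It is easy to feel the difference in workload even without a GPU: graphics consists of millions of identical, independent per-pixel operations, which reward hardware and code that process data in parallel. A rough CPU-side sketch of the principle in Python with NumPy, offered as an analogy only, not as actual GPU code:

```python
import time
import numpy as np

# A per-pixel brightness adjustment over a full-HD "frame":
# millions of independent, identical operations.
frame = np.random.rand(1080, 1920).astype(np.float32)

t0 = time.perf_counter()
out = np.empty_like(frame)
for i in range(frame.shape[0]):            # scalar style: one value at a time
    for j in range(frame.shape[1]):
        out[i, j] = min(frame[i, j] * 1.2, 1.0)
scalar_time = time.perf_counter() - t0

t0 = time.perf_counter()
out_vec = np.minimum(frame * 1.2, 1.0)     # one data-parallel operation
vector_time = time.perf_counter() - t0

print(f"scalar: {scalar_time:.2f} s, data-parallel: {vector_time:.4f} s")
```

The data-parallel version runs orders of magnitude faster on the same machine, and a GPU pushes the same idea much further by executing thousands of such operations simultaneously in hardware.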

When does Moore's Law cease to work?

In the classical sense discussed above, it has already ceased to work. Various sources attest to this, including, for example, this one. The race still goes on, though. The first commercial microprocessor, the 4-bit Intel 4004 released in 1971, contained 2,300 transistors. Forty-five years later, in 2016, Intel introduced the 24-core Xeon Broadwell-WS with 5.7 billion transistors, built on a 14 nm process. IBM, meanwhile, has announced a 7 nm processor with 20 billion transistors, followed by a 5 nm processor with 30 billion.
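A quick back-of-the-envelope check in Python shows how those endpoints compare with a strict 24-month doubling:

```python
count_1971 = 2_300                            # Intel 4004
predicted_2016 = count_1971 * 2 ** ((2016 - 1971) / 2)
print(f"predicted: ~{predicted_2016 / 1e9:.1f} billion")  # ~13.6 billion
print("reported:    5.7 billion (Xeon Broadwell-WS)")
```

A strict doubling every two years would predict roughly 13.6 billion transistors by 2016; the reported 5.7 billion is the same order of magnitude but about one doubling behind, consistent with the slowdown described above.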

But at 5 nm a layer is only about 20 atoms thick; here engineering is approaching the physical limit of further improvement of the manufacturing process. The density of transistors in modern processors is also enormous, on the order of a hundred million per square millimeter. The speed at which signals propagate through a transistor is very high and matters greatly. The record core clock frequency stands near 8.8 GHz, and it was achieved only with extreme cooling; production chips run far lower. Further acceleration, while possible, is a very serious technical problem. That is why engineers have chosen to build multi-core processors rather than keep raising the frequency of a single core.

Multi-core designs have made it possible to maintain the growth in the number of operations per second that Moore's Law implies. Still, multiple cores are something of a deviation from the law. Nevertheless, a number of specialists believe it does not matter how we "keep up"; what matters is that the pace of technological development, computing in particular, still more or less follows Moore's Law.

Below is a graph compiled by Steve Jurvetson, co-founder of Draper Fisher Jurvetson. He says it is an extended version of a chart previously presented by Ray Kurzweil.

The chart shows the relative cost of computation: how many operations per second a fixed sum of money buys. It makes clear how dramatically computing has fallen in price over time, and how computation has become more and more general-purpose, so to speak. In the 1940s there were specialized machines built to break military codes. In the 1950s computers began to be used for general tasks, and that trend continues to this day.

Interestingly, the last two entries on the graph are GPUs: the GTX 450 and the NVIDIA Titan X. Back in 2010 there were no GPUs on the chart at all, only multi-core processors.

In general, the GPU is already here, and many are happy with it. Moreover, deep learning, one application of neural networks, is becoming ever more popular. Neural networks are being developed by many companies, large and small, and GPUs suit them well.

Why does all this matter? Because the overall growth in the amount of computation does continue, yes, but the methods and the equipment are changing.

What does all this mean?

The very form of computing is changing. Soon architects will no longer need to ask what else can be done to keep up with Moore's Law. New ideas are gradually being introduced that will reach heights inaccessible to conventional computer systems with traditional architecture. Perhaps in the near future raw computation speed will matter less, and system performance will be improved in other ways.

Self-learning systems

Many neural networks today depend on the GPU, and systems with specialized architectures are being created for them. For example, Google has developed its own chips, called Tensor Processing Units (TPUs). They save computing power through more efficient calculation. Google uses these chips in its data centers, and many of the company's cloud services run on them. As a result, system efficiency is higher and energy consumption lower.
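Part of that efficiency comes from doing inference in low-precision integer arithmetic instead of 32-bit floating point (Google's first-generation TPU, by its published description, relied on 8-bit operations). A minimal Python sketch of the quantization idea, with the helper name my own invention:

```python
import numpy as np

def quantize_int8(x):
    """Map float32 values to int8 plus a single scale factor, the kind
    of low-precision representation inference hardware exploits."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

weights = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(weights)
restored = q.astype(np.float32) * scale
print("max quantization error:", np.abs(weights - restored).max())
```

Eight-bit multipliers are far smaller and cheaper in silicon than 32-bit floating-point units, so a chip of the same area can perform many more such operations per joule, at the cost of a small and usually tolerable loss of precision.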

Specialized chips

Ordinary mobile devices now run ARM processors that include specialized blocks: they process the data coming from the cameras, optimize speech processing, and perform face recognition in real time. Specialization in everything is what awaits electronics.

Specialized architecture

The world does not end with the von Neumann architecture: systems with different architectures designed for different tasks are now being developed. This trend is not only holding, it is accelerating.

Computer systems security

Cybercriminals are becoming more skilled; cracking certain systems can now yield millions or even tens of millions of dollars. In most cases, a system can be hacked because of errors in its software or hardware. The overwhelming majority of the tricks attackers use work on systems with the von Neumann architecture, but they will not work on systems of other kinds.

Quantum systems

So-called quantum computers are an experimental technology, and among other things a very expensive one. They use cryogenic elements and much else that is absent from conventional systems. Quantum computers are completely different from the computers we are used to, and Moore's Law does not apply to them. Nevertheless, experts believe that with their help it is possible to radically improve the performance of certain kinds of computation. Perhaps it was Moore's Law that pushed scientists and engineers to seek new ways of improving computational efficiency, and they have found them.

As an afterword

Most likely, within 5-10 years we will see entirely new computing systems (we are still speaking of semiconductor technology) that will outrun our boldest plans and develop at a very fast pace. Most likely, specialists trying to get around Moore's Law will create new chip technologies that would seem like magic if we were told about them now. What would people who lived 50 years ago say if they were handed a modern smartphone? Few of them would understand how it works. So it will be in our case.