
The Nvidia Era


Apparently, the era of GPU computing has arrived, and Intel is not doing well. If you haven't been reading my blog regularly over the last few years, I'll explain: I [Alex St. John] was one of the founders of the original DirectX team at Microsoft back in 1994, created the Direct3D API together with the other early DirectX creators (Craig Eisler and Eric Engstrom), and helped spread it through the video game industry and among the makers of graphics chips. There are plenty of stories on this topic in my blog, but one that I wrote back in 2013 is directly relevant to this post.

The History of Nvidia

I think Nvidia's vision of the future of games is the right one, and I really enjoy living in an era when I can work with such amazing computing hardware. I feel as though I've lived to see the day when I can walk the bridge of the Enterprise and play with the warp engine. And quite literally: a warp is what Nvidia calls the minimal unit of parallel threads that can be scheduled on the GPU.
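For readers who haven't met the term: on current Nvidia hardware a warp is a group of 32 threads that execute in lockstep, and CUDA exposes warp-level operations directly. Below is a minimal illustrative sketch (the kernel and variable names are my own, nothing from Nvidia's documentation beyond the standard CUDA runtime and the __shfl_down_sync intrinsic) that sums 32 values inside a single warp:

```cuda
#include <cstdio>

// Minimal warp-level reduction: 32 threads (one warp) cooperatively
// sum their values using register-to-register shuffles, no shared memory.
__global__ void warpSum(const float* in, float* out) {
    float v = in[threadIdx.x];                  // one element per thread
    // Halve the number of active lanes each step: 16, 8, 4, 2, 1.
    for (int offset = warpSize / 2; offset > 0; offset /= 2)
        v += __shfl_down_sync(0xffffffff, v, offset);
    if (threadIdx.x == 0) *out = v;             // lane 0 ends up with the total
}

int main() {
    float h_in[32], h_out = 0.0f, *d_in, *d_out;
    for (int i = 0; i < 32; ++i) h_in[i] = 1.0f;    // expect a sum of 32
    cudaMalloc(&d_in, 32 * sizeof(float));
    cudaMalloc(&d_out, sizeof(float));
    cudaMemcpy(d_in, h_in, 32 * sizeof(float), cudaMemcpyHostToDevice);
    warpSum<<<1, 32>>>(d_in, d_out);                // launch exactly one warp
    cudaMemcpy(&h_out, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    printf("warp sum = %f\n", h_out);
    cudaFree(d_in); cudaFree(d_out);
    return 0;
}
```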


Those who follow stock quotes may have noticed that Nvidia shares recently shot up sharply after many years of slow climbing. It seems to me that this sudden jump marks a revolutionary shift in computing, the culmination of many years of progress in GPGPU development. Until now Intel has held a monopoly on industrial computing, successfully repelling competitors' attacks on its dominance of that space. That dominance ends this year, and the market sees it coming. To understand what is happening and why, I'll go back to my early years at Microsoft.

In the 90s, Bill Gates coined the term "coopetition" [coopetition = competition + cooperation] to describe the strained competitive partnerships with the other technology leaders of the time. The term came up especially often in conversations about Intel. And although the fates and successes of Microsoft and Intel were intertwined, the two companies constantly fought each other for dominance. Both companies had teams of people who "specialized" in trying to gain an advantage over the opponent. Paul Maritz, then a top Microsoft executive, worried that Intel might try to virtualize Windows, allowing many competing OSes to enter the market and coexist with Windows on the desktop PC. (Interestingly, Paul Maritz later became CEO of VMware.) And indeed, Intel actively invested in such attempts. One of its strategies was to emulate in software all the common hardware functionality that OEMs usually shipped with PCs: video cards, modems, sound cards, network hardware and so on. By pulling all external computation onto the Intel processor, the company could choke off the sales and growth of any alternative computing platform that might otherwise grow up and threaten Intel's CPU. Specifically, it was Intel's announcement of its 3DR technology in 1994 that prompted Microsoft to create DirectX.

I worked on the team at Microsoft responsible for the company's strategic positioning against competitive threats in the market, the Developer Relations Group (DRG). Intel demanded that Microsoft send a representative to speak at the 3DR launch. As Microsoft's expert on graphics and 3D, I was sent with a special mission: to assess the threat the new Intel initiative might pose and to shape an effective strategy for countering it. I concluded that Intel really was trying to virtualize Windows by emulating every possible data-processing device in software. I wrote a proposal called "Taking Fun Seriously," in which I proposed blocking Intel's attempts to make the Windows OS irrelevant by creating a competitive consumer market for new hardware capabilities. I wanted to create a new family of Windows drivers that would enable massive competition in the hardware market, so that support for new media (audio, input, video, networking and so on) in the consumer PC market we were creating would depend on our own Windows drivers. Intel could not cope with the free-market competition we created among consumer hardware makers, and therefore could not build a CPU capable of effectively virtualizing all the functionality users might demand. That is how DirectX was born.

In this blog you can find many stories about the events surrounding the creation of DirectX, but in short, our "evil strategy" was a success. Microsoft realized that to dominate the consumer market and hold off Intel it needed to focus on video games, and dozens of 3D chip makers soon appeared. Twenty-odd years later, among the small number of survivors, Nvidia, together with ATI (since acquired by AMD), came to dominate first the consumer graphics market and, more recently, the industrial computing market.

This brings us back to the present year, 2017, when the GPU finally begins to fully displace the x86 processors everyone used to revere. Why now, and why the GPU? The secret of x86 hegemony was the success of Windows and backward compatibility with x86 instructions dating back to the 1970s. Intel could maintain and grow its monopoly in the industrial market because the cost of porting applications to a CPU with any other instruction set, one that held no market niche, was too great. The phenomenal feature set of the Windows OS, tied to the x86 platform, reinforced Intel's market position. The beginning of the end came when Microsoft and Intel together failed to make the leap to dominance of the emerging mobile computing market. For the first time in decades a crack appeared in the x86 CPU market; ARM processors filled it, and new alternative OSes from Apple and Google were able to capture the new market. Why did Microsoft and Intel fail to make that leap? You can find a trainload of interesting reasons, but within this article I want to emphasize one: the x86 backward-compatibility baggage. For the first time, energy efficiency mattered more to a CPU's success than speed. All the transistors and all the millions of lines of x86 code that Intel and Microsoft had invested in the PC became obstacles to energy efficiency. The most important pillar of Intel's and Microsoft's market hegemony suddenly became a liability.


Intel's need for ever-increasing speed and continued backward compatibility forced the company to spend more and more power-hungry transistors to extract ever-smaller speed gains from each new generation of x86 processors. Backward compatibility also severely limited Intel's ability to parallelize its chips. The first parallel GPUs appeared in the 90s, while the first dual-core CPU was released only in 2005. Even today Intel's most powerful CPUs top out at 24 cores, while most modern video cards carry processors with thousands of cores. GPUs, parallel from the start, carried no backward-compatibility baggage, and thanks to architecture-independent APIs like Direct3D and OpenGL they were free to innovate and increase parallelism without compromising on compatibility or wasting transistors. By 2005 GPUs had even become general-purpose computing platforms supporting heterogeneous parallel computing. By heterogeneous I mean that chips from AMD and Nvidia can execute the same compiled programs despite completely different low-level architectures and instruction sets. And while Intel's chips were achieving ever-smaller performance jumps, GPUs were doubling in speed every 12 months while halving power consumption! Extreme parallelism made it possible to use transistors very efficiently: every additional core added to a GPU contributes directly to throughput, while a growing share of the ever-growing number of x86 transistors sits idle.
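To make the "thousands of cores" point concrete, here is a hedged sketch, with names of my own invention and nothing vendor-specific beyond the standard CUDA runtime, of the same SAXPY computation written once as a serial CPU loop and once as a GPU kernel where every element gets its own thread:

```cuda
#include <cstdio>
#include <vector>

// Serial CPU version: one core walks the array element by element.
void saxpy_cpu(int n, float a, const float* x, float* y) {
    for (int i = 0; i < n; ++i) y[i] = a * x[i] + y[i];
}

// GPU version: the loop disappears; each of thousands of threads
// handles exactly one element, so extra cores translate directly
// into extra throughput.
__global__ void saxpy_gpu(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                         // ~1 million elements
    std::vector<float> x(n, 1.0f), y(n, 2.0f);

    float *d_x, *d_y;
    cudaMalloc(&d_x, n * sizeof(float));
    cudaMalloc(&d_y, n * sizeof(float));
    cudaMemcpy(d_x, x.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, y.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // Thousands of threads in flight at once: 4096 blocks of 256 threads.
    saxpy_gpu<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);
    cudaMemcpy(y.data(), d_y, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("y[0] = %f (expected 4.0)\n", y[0]);
    cudaFree(d_x); cudaFree(d_y);
    return 0;
}
```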

And although GPUs were increasingly encroaching on the territory of industrial supercomputing, media production and VDI, the main turn in the market came when Google began using GPUs effectively to train neural networks capable of doing very useful things. The market realized that AI would become the future of big-data processing and would open huge new markets for automation. GPUs turned out to be ideally suited to running neural networks. Until then, Intel had relied on two approaches to suppress the GPU's growing influence on industrial computing:

1. Intel kept PCI bus speeds low and limited the number of I/O lanes its processors supported, ensuring that GPUs would always depend on Intel processors to feed them their workloads and would remain cut off from many valuable real-time, high-speed computing applications by latency and bandwidth limits. As long as its CPUs could restrict applications' access to GPU performance, Nvidia languished at the far end of the PCI bus without access to many practical industrial workloads.
2. Intel bundled a cheap GPU with minimal functionality into its consumer processors to confine Nvidia and AMD to the premium gaming market and keep their GPUs from universal market adoption.

The growing threat from Nvidia and Intel's failed attempts to build x86-compatible supercomputer accelerators forced Intel to choose a different tactic. It bought Altera and plans to include programmable FPGAs in the next generation of Intel processors. This is a clever way to ensure that Intel processors offer large I/O capacity compared to competitors constrained by the PCI bus, and that GPUs gain no advantage from it. FPGA support let Intel move toward parallel computing on its own chips without playing into the hands of the growing market for GPU-accelerated applications. It also lets industrial computer makers build highly specialized hardware that still depends on x86. It was a brilliant move by Intel, because it cut off the GPU's path into the industrial market on several fronts at once. Brilliant, but most likely doomed to failure.

Five recent news stories explain why I'm sure the x86 party will end in 2017.

1. SoftBank's VisionFund raised $93 billion in investments from companies eager to take Intel's place
2. SoftBank bought ARM Holdings for $32 billion
3. SoftBank bought $4 billion worth of Nvidia shares
4. Nvidia launches Project Denver [the code name of an Nvidia microarchitecture implementing the ARMv8-A 64/32-bit instruction set using a combination of a simple hardware decoder and a software binary translator with dynamic recompilation / translator's note]
5. Nvidia announced the Xavier Tegra SoC with a Volta GPU: 7 billion transistors, 512 CUDA cores and 8 custom ARM64 cores, a hybrid mobile ARM chip whose ARM cores are accelerated by the GPU

Why does this sequence of events matter? This is the year the first generation of stand-alone GPUs able to run their own OS, with no PCI bus in the way, reached the broad market. Nvidia no longer needs an x86 processor. ARM already has an impressive number of consumer and industrial operating systems and applications ported to it. The industrial and cloud markets are adopting ARM chips as controllers across a wide range of their solutions. ARM chips are already being integrated with FPGAs. ARM chips draw little power and lag in performance, but GPUs are extremely fast and efficient, so the GPU can supply the processing power while the ARM cores handle the tedious I/O and UI work that doesn't need it. More and more big-data, high-performance-computing and machine-learning applications no longer need Windows and don't run on x86. 2017 is the year Nvidia slips the leash and becomes a truly viable competitive alternative to x86-based industrial computing in valuable new markets that x86-based solutions don't fit.

If an ARM processor is not powerful enough for your needs, IBM, in collaboration with Nvidia, is about to ship a new generation of its Power9 CPU for big-data processing, with 160 PCIe lanes.

AMD is also launching its new Ryzen CPUs, and unlike Intel, AMD has no strategic interest in strangling PCI speeds. Its consumer chips support 64 PCIe 3.0 lanes, and its professional ones 128. AMD is also shipping its new HIP cross-compiler, which makes CUDA applications compatible with AMD GPUs. Although the two companies compete, both benefit from displacing Intel in the industrial market with alternative approaches to GPU computing.
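To give a feel for what that portability means in practice, here is a hedged sketch of a trivially portable CUDA program; the comments note how AMD's hipify tools conventionally rename the runtime calls (the kernel body itself normally needs no changes), though the exact translation workflow will depend on your toolchain:

```cuda
#include <cstdio>

// A trivially "hipifiable" program: the CUDA runtime calls below are
// rewritten by hipify (cudaMalloc -> hipMalloc, cudaMemcpy -> hipMemcpy,
// cudaFree -> hipFree), while the __global__ kernel is left untouched.
__global__ void scale(float* data, float k, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= k;
}

int main() {
    const int n = 256;
    float h[n], *d;
    for (int i = 0; i < n; ++i) h[i] = 1.0f;
    cudaMalloc(&d, n * sizeof(float));                            // -> hipMalloc
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);  // -> hipMemcpy
    scale<<<1, n>>>(d, 3.0f, n);                 // same launch syntax under HIP
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d);                                                  // -> hipFree
    printf("h[0] = %f (expected 3.0)\n", h[0]);
    return 0;
}
```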

All this means that in the coming years GPU-based solutions will capture industrial computing at an ever-increasing pace, while the world of desktop interfaces will increasingly rely on cloud rendering or run on mobile ARM processors, since even Microsoft has announced ARM support.

Putting it all together, I predict that for the next few years we will hear only about the battle between GPUs and FPGAs for dominance in industrial computing, while the era of the CPU gradually comes to an end.