A GPU (graphics processing unit), as the initialism suggests, is an electronic circuit that serves as a processor for handling graphical information to output on a display. The scope of this book is to go beyond just handling graphical information and step into the arena of general-purpose computing with GPUs (GPGPU). GPGPU is all about using GPUs to perform computations that are typically handled by central processing units (CPUs), which we are going to discuss in detail in the next section. The terms GPU and graphics card are used interchangeably very frequently, but the two are in fact quite different. The graphics card is a platform that serves as an interface to the GPU. Just as a CPU is seated in a motherboard socket, a GPU is seated in a socket on the graphics card (we may think of the card as a mini-motherboard, but one that exists only to host the GPU and its cooling).
If you look at computing on a universal scale, you'd find that the requirement we spoke of in the previous paragraph isn't limited to computer science. Computing can be understood as a technique for calculating any measurable entity, in any field, be it science or even art. Now that we have described the terms GPU and computing individually, let's go ahead with an introduction to our primary topic: GPU computing.
As we can comprehend by now, GPU computing is all about using a GPGPU to run program code on GPUs. When a GPU programmer writes a GPU program, the primary motive is to hand over workloads that are too computationally intensive for a CPU to handle efficiently.
Within the code, the CPU is instructed to hand those particular operations over to the GPU, which then performs the computations. When these computations are done, the GPU sends the results back to the CPU, which presents the output to you. Since the results are computed many times faster than they would be on a CPU alone, such work is also called GPU-accelerated computing.
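To make this hand-off concrete, here is a minimal sketch of the pattern just described, written in Python. The choice of the CuPy library here is an assumption made for illustration only (any comparable GPU array library follows the same host-to-device-and-back pattern), and the code requires an NVIDIA GPU with CUDA and CuPy installed:

import numpy as np
import cupy as cp  # assumption: an NVIDIA GPU with CUDA and CuPy installed

# Prepare the input data on the CPU (the "host")
a_host = np.random.rand(4096, 4096).astype(np.float32)
b_host = np.random.rand(4096, 4096).astype(np.float32)

# Hand the data over to the GPU (the "device")
a_gpu = cp.asarray(a_host)
b_gpu = cp.asarray(b_host)

# The computationally intensive operation executes on the GPU
c_gpu = a_gpu @ b_gpu

# The GPU sends the result back to the CPU, which can then display it
c_host = cp.asnumpy(c_gpu)
print(c_host[0, 0])

A large matrix multiplication such as this is exactly the kind of workload that benefits from the hand-off: the CPU only prepares the data and collects the result, while the heavy arithmetic runs on the GPU.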
Before GPUs arrived, general-purpose computing, as we know it, was only possible with CPUs, which were the first mainstream processors manufactured for both consumers and advanced computing enthusiasts.
Both computational and graphical processing were handled by the CPU alone: it carried out the computation on the input and also produced the corresponding output on a display.
The history of general-purpose computing goes way back to the 1950s, before GPUs arrived and revolutionized the concept. The 1970s witnessed the rise of a new era, when the first commercially available microprocessor, the Intel 4004, was released by Intel in 1971. AMD followed in the same decade with its Am2900 family in 1975. There was no looking back, and a new cycle of CPU manufacturing came into effect, bringing a new range of microprocessors with every generation.
Though Intel and AMD are the best-known competitors in the CPU sector, there are other manufacturers as well, such as Motorola and IBM; Qualcomm and MediaTek, in particular, dominate the mobile industry.
Since this book is about GPU computing with Python, let's briefly look back at how CPU computing evolved before Python had any GPU implementations. If we want to appreciate the computing power of CPUs, we have to look into how modern CPUs evolved before GPU computing was ever heard of or deployed.
Since the inception of the third generation of integrated circuits (ICs) and microprocessors, the thinking has always been about how much power you can put into a single chip to get the maximum performance out of it. In the early 60s, a chip contained just tens of transistors, but that number rose to tens of thousands during the 70s. In the 80s, it became hundreds of thousands, while today's chips contain billions.
This is why CPUs have evolved continuously. During this time, both Intel and AMD invented new technologies to improve CPU design. Being in the same field, they entered into a 10-year agreement in 1981 to enable mutual technology exchange. Dual-core designs, Intel's Core 2 Duo, and many other technologies became popular.
But, eventually, a time arrived when the need for a device to accelerate general-purpose CPU computing was acknowledged. That's when GPGPUs entered the arena, multiplying the processing power available for general-purpose workloads many times over.
Gaming is now an industry worth over $100 billion. But way back in the 1950s, video games were made purely for academic purposes. Video games were a medium to demonstrate the capabilities of newly invented technology. They were also a good way to test early AI programs through games such as tic-tac-toe or chess. But access to such platforms was still limited to computer lab environments.
Spacewar!, released in 1962, became the first purpose-built computer game.
By the 1970s, the gaming landscape had started to change, and arcade gaming became very popular. PC gaming took proper shape in the 80s, with programmable home computers in many households running popular games such as Super Mario Bros, Donkey Kong, and Prince of Persia.
The 90s saw the emergence of legendary games such as Doom and Quake, which radically changed the PC gaming scenario. Many PC enthusiasts and gamers developed an immense interest in hardware customization, since customizing a PC's hardware promised smooth gameplay and the best possible visuals of the time.
During this time, the console market also went through the roof, a trend that continued through the 00s, with branded hardware shipped as a single unit. Many became curious about the specifications of these devices to learn about their full potential, and even today, when a new console arrives on the market, it is very common to debate about the GPU that lies inside it.
By 2016, there were over 2 billion gamers, half of whom lived in the Asia-Pacific region. As we can see, the rise of the gaming industry is familiar to many, and ...