Not by RTX 3090 alone, or integrated graphics in server processors


Remember the times when recommended retail prices from official video card resellers seemed like a robbery? Many turned their noses up and went to the gray market to save a few thousand rubles. It was then that the article “Crysis at maximum speed, or why a server needs a video card” was published on our blog.

Now times are hard: it is almost impossible to buy a video card at RRP, and prices from gray-market stores, scalpers and resellers are several times higher. Well, since video cards these days are more for looking at than for buying, I decided to go the opposite way and ask: why doesn’t a server need a discrete video card?

In this article we will go over iGPUs, discuss the differences between GPU, CPU and APU, find out why not all Xeons were developed for servers, and cover many other interesting things.

Warning: it's a long read!

Origins

The first computers, such as the ABC or ENIAC, could not handle graphics at all and were used exclusively for mathematical calculations. However, already in the 1950s, enthusiasts were creating patterns on banks of incandescent lamps by programming memory and output devices. That is where the history of computer graphics began.

Wikipedia lists the first graphics adapters as the ones IBM developed in 1981, but back in 1977-1978 the Canadian company Matrox had already developed its own solution: the ALT-256 for the S-100 bus.

The card consisted of a single board with 38 TTL chips and 16 TMS4027 DRAM chips (4 kilobits each). This baby could generate an image of 256x240 visible pixels on a composite monitor. The design did not allow shades of gray: a pixel was either on or off, and that's it.
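As a quick sanity check (my own back-of-the-envelope arithmetic, not from the original article), the numbers add up: a one-bit-per-pixel frame at that resolution fits into the sixteen 4-kilobit DRAM chips with a little room to spare, and combining three such cards - one per RGB channel, as described below - gives 2³ = 8 displayable colors.

```python
# Back-of-the-envelope check of the ALT-256 numbers (illustrative only).
width, height = 256, 240
bits_per_pixel = 1                 # a pixel is either on or off

frame_bits = width * height * bits_per_pixel
dram_bits = 16 * 4 * 1024          # 16 TMS4027 chips, 4 kilobits each

print(frame_bits)                  # 61440 bits, i.e. 7.5 KB per frame
print(dram_bits)                   # 65536 bits, i.e. 8 KB on the board

# Three cards, one per RGB channel, each channel simply on or off:
print(2 ** 3)                      # 8 displayable colors
```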


Image output with ALT-256.

The solution was to combine three cards, one for each color of the RGB palette; this made it possible to display up to 8 shades of gray on black-and-white monitors and up to 8 colors on color monitors (red, yellow, green, cyan, indigo, violet, white and black). And everything would be fine, but each card cost $395, which is more than $1,500 in today's money. That adds up to about $4,500 for a non-monochrome picture - roughly what scalpers ask for an RTX 3090. Shall we take it?

The first video adapters from IBM were released in 1981. One was called MDA (Monochrome Display Adapter), and the other was CGA (Color Graphics Adapter).


MDA - 720x350, text mode only, up to 25 lines. Image produced by MDA.


CGA - 320x200; color depth in text mode - 16 colors, in graphics mode - 4 colors; 640x200 in monochrome mode.


Arachne browser in monochrome mode 640x200.


Sim City (Maxis, 1989) in monochrome 640x200.


Prince of Persia (1990) in graphics mode 320x200

By the way, connection to compatible monitors, such as the IBM 5153, was via a 4-bit digital (TTL) RGBI interface.


From left to right: IBM 5151, IBM 5153, IBM 5154.

To connect to an NTSC-compatible TV over RCA connectors, a separate RF modulator was needed. MDA and CGA, of course, were not integrated 3D accelerators, but they reflected the changes coming to the industry. By the way, both cards could be used simultaneously, since their functionality was different.

Time passed, requests for graphics grew, and video games gained more and more popularity and promised huge profits to vendors and developers. New companies began to appear on the market, like drops on a windshield:

  • In December 1984, Chips and Technologies (C&T) was founded. In 1997, the company was absorbed by Intel.
  • In 1985, ATi was founded by Chinese immigrants Kwok Yuen Ho, Lee Lau and Benny Lau, who had moved to Canada. ATi started with sound cards and only in 1991 released its first video accelerator, the Mach 8, a clone of the IBM 8514/A. ATi is now the graphics division of AMD; since 2010, the old name has no longer been used.
  • In January 1989, Diosdado Banatao and Ronald Yara founded S3 Graphics. The company was the first to develop a single-chip GUI accelerator. Before being sold to VIA Technologies, it primarily produced low-cost 2D graphics chips. In 2000, after failures in the 3D market, it had to sell off its assets. In 2011, the company was bought by HTC.
  • In 1993, Nvidia was founded by Jensen Huang, one of the executives of the integrated circuit maker LSI Logic, together with Chris Malachowsky and Curtis Priem, engineers dissatisfied with their work at Sun Microsystems.
  • In 1994, 3dfx Interactive was founded; its first product, Voodoo Graphics, became a success in arcade machines. The history of this company resembles a match: 3dfx flared up brightly, de facto founding the 3D accelerator market and developing SLI technology, but went bankrupt in 2002 and its assets went to Nvidia. For a limited time, users left without support could exchange a 3dfx product for a comparable one from Nvidia.

At the same time, old-timers like Matrox and IBM had not gone anywhere. Intense competition drove rapid development and the emergence of a large number of products.

Just look at the professional IBM PGC, released in 1984: 640x480 resolution, 256 colors, 320 KB of RAM and 68 KB of ROM, its own Intel 8088 processor and backward compatibility with CGA modes. All for just $3,000 (equivalent to $7,700 in 2022), plus $1,300 ($3,340 in 2022) for the IBM 5175 monitor, since the accelerator worked only with it. Nobody ran games on this miracle of technology: its main job was CAD applications such as AutoCAD 2.5.


The images show that the IBM PGC consisted of three separate boards.


Two full-size ISA (Industry Standard Architecture) boards and a smaller one between them. This thing was held together with bolts.

Some of you may remember the S3 Trio3D and Trio3D/2X accelerators, announced in November 1997. The cards got their name from the integration of three components: a clock generator, a graphics core and a RAMDAC (random access memory digital-to-analog converter). The cards were sold in the budget segment, yet supported texture mapping, 3D acceleration, Gouraud shading, overlays, bilinear and trilinear filtering, and much more.

Features of S3 Trio3D and Trio3D/2X:

  • Resolution up to 1600×1200 and 85 Hz;
  • Up to 8 MB SDRAM or SGRAM;
  • AGP and AGP 2X bus respectively;
  • D-SUB 15-PIN (VGA).

Another card that deserves attention is the Nvidia RIVA TNT2, released in 1999. The card was ahead of its closest competitor, the Voodoo3 from 3dfx. Not only could the TNT2 output true 32-bit color in 3D, it also had two single-texture pipelines, versus one dual-texture pipeline on the Voodoo3. In simple terms, in games where a single texture was applied to the surface of an object, the TNT2 was faster. 32-bit rendering became an important advantage as games increasingly used alpha compositing and multi-pass effects.

The characteristics were as follows:

  • Resolution up to 2048 x 1536;
  • 32 MB VRAM;
  • AGP 4X bus;
  • D-SUB 15-PIN (VGA)

During these years, truly great events took place: the technological boom, the dizzying growth of the consumer electronics market, the war between Windows and Mac. The era is fascinating, you can plunge into it like quicksand, so let’s move a little further.

Intel: better late than never

In 1997, Intel, assessing the situation, bought C&T to launch production of its own graphics chips. The company's plans were Napoleonic: break into the rapidly developing market and lead it. Just a year later it released its first 3D accelerator under its own brand, the Intel 740 - and this is where we finish our walk through the paleontological museum. The design used two chips to process graphics and textures separately. The accelerator worked over the 32-bit AGP (Accelerated Graphics Port) bus, developed by Intel in 1996; a PCI version was also released.

Characteristics:

  • Resolution: 1280×1024 16 bits, 1600×1200 8 bits, 160 Hertz;
  • Frame buffer - 4 or 8 MB, memory on PCI variants - 8 or 16 MB, SGRAM or SDRAM chips;
  • Bus: AGP or PCI;


Intel 740


Different versions of the AGP bus.

But the discrete video card from Intel could not compete with products from Nvidia, ATi and 3dfx. The last attempt to produce discrete video cards was made in 1999, when the i752 and i754 were announced. The first was released on a limited basis, and the second was cancelled.

Well, let's see if Intel succeeds in a second attempt in 2022. Their discrete 3D accelerator of the Arc Alchemist line on the DG2-512EU GPU outperforms the GeForce RTX 3070 Ti and RX 6700 XT in preliminary tests. Considering the total shortage, there is every chance to gain a foothold in the market.

Still, the work done on the i752 and i754 proved useful in the Intel 810 and Intel 815 chipsets with integrated graphics. Motherboards with a built-in video accelerator turned out to be inexpensive and energy efficient, but came with a number of obvious disadvantages:

  • Relatively low performance;
  • If you need to update the graphics accelerator, you will also have to change the motherboard;
  • Such chips did not have their own video memory, so system RAM was used instead; because of this, the overall performance of the PC dropped (see the rough estimate below).
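To put a rough number on that last point (my own illustrative arithmetic, not a figure from Intel): merely scanning the finished picture out of shared system RAM consumes a noticeable slice of memory bandwidth before any actual rendering is done.

```python
# Rough estimate of the RAM traffic an iGPU generates just refreshing the
# screen from shared system memory (illustrative numbers only).
width, height = 1920, 1080      # FullHD; early iGPUs ran at lower resolutions
bytes_per_pixel = 4             # 32-bit color
refresh_hz = 60

scanout_bytes_per_sec = width * height * bytes_per_pixel * refresh_hz
print(f"{scanout_bytes_per_sec / 1e9:.2f} GB/s")  # ~0.50 GB/s, before any
                                                  # textures or 3D rendering
```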

In this form, integrated graphics from Intel lived on in the Extreme Graphics and Graphics Media Accelerator (GMA) series, until the northbridge disappeared from chipsets - its functions were taken over by the CPU and partially by the PCH (Platform Controller Hub). As you may have guessed, the GPU became part of the CPU.

5th generation Intel architecture.

Thus, in 2010, the history of Intel HD Graphics began - graphics processors (iGPUs) integrated into the CPU. Strictly speaking, a CPU with graphics on the same chip is more correctly called an APU (Accelerated Processing Unit). However, in the first Intel solutions based on the Westmere microarchitecture, the graphics sat on a separate die manufactured on an older process (45 nm versus 32 nm for the processor); such assemblies are called MCMs (multi-chip modules).

Starting with the Sandy Bridge microarchitecture, the solutions became true APUs. Even then, Intel continued to call its products the old way - CPUs. By the way, the technology is still being developed today and will also appear in the 12th-generation Alder Lake processors.

The line of CPU-integrated graphics from Intel can be found here. There have been a lot of models released over the years, so it’s pointless to consider them all in this article.

But there is one interesting feature: some Xeon server processors are equipped with a graphics processor with a “P” (Professional) prefix.

For example:

HD Graphics 3000: Core i3-21×5, Core i5-2405S, Core i5-2500K, Core i7-2x00K;

HD Graphics P3000: Xeon E3-12×5.

Why is that? And why does a server processor need integrated graphics? After all, most Xeons have no graphics at all, yet the servers keep working.

The answer is very simple: the “P” graphics cores in these processors are certified for CAD, BIM and other professional applications - AutoCAD, Revit, Inventor, Adobe Photoshop, 3ds Max, etc.

It turns out that Intel itself is positioning Xeon processors with iGPUs as solutions for workstations, so that you can work with professional software without discrete video cards.

Here's what Intel writes on its website about the E-series:

“Intel Xeon E processors deliver the performance and advanced security technologies you need for entry-level server solutions, professional workstations, and secure cloud environments. Available with integrated Intel UHD graphics.”

Obviously, the E-series Xeons without an iGPU are intended for servers, while the rest (v4, v5) are solutions for workstations.

Intel has another line - Xeon W, introduced in 2019 along with the Mac Pro. These processors were designed for workstations from the start, each with integrated graphics, and the “W” can be read as “Workstation.” They do not support Optane DC Persistent Memory, multi-socket configurations, Speed Select or Resource Director (RDT). There are, however, variants with the letter “M” that support more memory (up to 2 TB), and all models have error correction, which brings them a little closer to server solutions.

“Platforms with Intel Xeon W processors are the best platforms for creative professionals, delivering outstanding performance, security and reliability, and advanced capabilities for VFX, 3D rendering, complex 3D CAD workloads, and AI application development and edge deployments.”

So, many of us are used to calling Intel Xeons “server processors,” but not all of them actually are, and the iGPU acts as a litmus test. Nothing prevents you from installing them in a server and using them normally, but you should first evaluate how sensible such a solution is in terms of cost and functionality.

What is a video adapter

A video card (video adapter, graphics accelerator, video accelerator) is the PC component that, in simple terms, converts information stored in the computer’s memory or in the adapter itself into an image - the “picture” on the monitor. It consists of several main parts: a graphics core, a video controller, read-only memory (ROM), video memory (video RAM), a digital-to-analog converter (DAC), connectors and a cooling system.

The graphics core is a processor that performs all the calculations behind the “picture” shown on the screen and executes the commands used to build graphics. It is the main component on the board: the higher its clock frequency, the greater its power and processing speed. Modern video accelerators can surpass central processors in raw performance, and with two cores working independently of each other they can drive several displays simultaneously. Depending on the video accelerator, the core frequency ranges from roughly 500 MHz to 1600 MHz.

A video controller is a microcircuit responsible for forming an image in memory and then displaying it on the screen.

ROM is memory that stores basic data (the video BIOS, fonts) used to initialize and control the video card before the computer’s operating system loads.

Video RAM stores image data that changes constantly and requires quick access, which is why video memory runs at high frequencies and offers high transfer speeds. Several classes of memory have succeeded one another, growing in speed and frequency from DDR to GDDR5. Capacity ranges from 2 GB to 64 GB on a single card.

The DAC is designed to display the image generated by the video controller on the monitor screen. Depending on the type of DAC, from 16.7 million to more than 1 billion colors can be output.

The connectors link the video card to the monitor to output the image. They come in various interfaces, from analog VGA (now obsolete) to digital DVI, DisplayPort and HDMI.

The cooling system maintains the temperature of the core and memory within acceptable limits, preventing overheating.

We also advise you to familiarize yourself with the differences between an integrated video card and a discrete one.

AMD: hold my beer

Of course, it wasn't just Intel that was moving towards integrating graphics into the CPU. AMD and IBM/GlobalFoundries also had their own developments.

AMD got to work immediately after purchasing ATI in 2006. Development took a long time: from the idea of combining two dies in one package (which Intel did first), things evolved into what we have now. The first single-chip APUs, built on a 32-nanometer process, were released in 2011 under the “Fusion” name. And unlike Intel, AMD’s marketers embraced the APU abbreviation.

Within the AMD APU line, 3 series were initially distinguished:

C (Ontario) - the least powerful (from 1 core with a frequency of 1.2 GHz to 2 cores with 1-1.33 GHz);

E (Zacate) - average power (from 1 core with a frequency of 1.3 GHz to 2 cores with 1.65 GHz);

A (Llano) - the most powerful (from 2 cores with a frequency of 1.9-2.5 GHz to 4 cores with 2.9 GHz).


AMD Llano APU microarchitecture, using the top model as an example.

If we talk about the entire block diagram of the AMD platform (using the A-series as an example), it is called Lynx and looks like this: APU Llano + FCH (Fusion Controller Hub).

Despite the fact that integrated graphics are usually treated as a low-power solution, the PS5 and Xbox Series X (and the previous generation too) use APUs from AMD. The latest consoles carry a Radeon Navi graphics processor based on the RDNA 2 architecture, albeit with a different implementation.


Just look at how much on-die space the GPU takes up in the Xbox Series X. This allows for an impressive 12.2 TFLOPS.

As for the actual integrated graphics (or lack thereof) in AMD processors, the separation seems more logical and simpler than that of Intel. Let's look at the example of the 7-nanometer Zen 2 microarchitecture.

Zen 2 based processors:

  1. The Ryzen line, which includes solutions for home and office, laptops, gaming PCs and workstations (Threadripper). Not all models have integrated graphics.
  2. The EPYC line - server processors; none of its models have integrated graphics.
  3. The Embedded line - embedded processors designed for information and point-of-sale kiosks, industrial and medical equipment, set-top boxes, thin clients, media servers, etc.

Zen 2 microarchitecture processors are divided into APUs and CPUs as follows:

  • Desktop CPUs (no integrated graphics): entry level - Ryzen 5 3xxx; mid-tier - Ryzen 7 3xxx; senior level - Ryzen 9 3xxx; HEDT (High-End Desktop) and workstations - Ryzen Threadripper 3xxx.
  • Server CPUs (no integrated graphics): single socket - EPYC 7xxxP; dual socket - EPYC 7xxx.
  • Mobile APUs (with integrated graphics): entry level - Ryzen 3 4xxx; mid-tier - Ryzen 5 4xxx, Ryzen 7 4xxx (8 threads); senior level - Ryzen 7 4xxx (16 threads), Ryzen 9 4xxx; Lucienne (5000 series) - Ryzen 3 5300U, Ryzen 5 5500U, Ryzen 7 5700U.
  • Desktop APUs (with integrated graphics): entry level - Ryzen 3 4xxxG; mid-range - Ryzen 5 4xxxG; senior level - Ryzen 7 4xxxG.
  • Embedded processors (with integrated graphics): Radeon Vega 6 - V25xx; Radeon Vega 7 - V27xx.

Everything is so logical and neatly sorted that if I were AMD, I would put a table just like this on the website:

  1. Desktop processors have options with and without integrated graphics - choose according to your tasks and budget;
  2. Server CPUs and workstation CPUs are separated both technically and by name to avoid any confusion. Integrated graphics were removed as unnecessary;
  3. All laptop processors have integrated graphics.

Interestingly, from 2012 to 2016 inclusive, AMD was an unprofitable company and was rapidly losing market share and cutting staff.

In 2013, the decline in demand in the PC market also affected Intel, but they ended the fiscal year with profits, albeit down 25%.

In 2017-2018, AMD was able to get out of a protracted dive. They were saved by excellent sales of processors based on the Zen x86 architecture.

The benefits of integrated solutions in games

So, why do you need an integrated card, and how does it differ from a discrete one?

We will try to compare them point by point, keeping the reasoning as solid as possible. Let's start with a characteristic such as performance. We will compare the most current solutions from Intel (HD 630, with a graphics clock of 350-1200 MHz) and AMD (Vega 11, 300-1300 MHz), as well as the advantages each provides.


Let's start with the cost of the system. Integrated graphics allow you to save a lot on the purchase of a discrete solution, up to $150, which is critically important when creating the most economical PC for office and home use.

The AMD graphics accelerator's clock is noticeably higher, and the red team's adapter performs significantly better, which translates into the following figures in the same games:

Game             Settings           Intel          AMD
PUBG             FullHD, low        8-14 fps       26-36 fps
GTA V            FullHD, medium     15-22 fps      55-66 fps
Wolfenstein II   HD, low            9-14 fps       85-99 fps
Fortnite         FullHD, medium     9-13 fps       36-45 fps
Rocket League    FullHD, high       15-27 fps      35-53 fps
CS:GO            FullHD, maximum    32-63 fps      105-164 fps
Overwatch        FullHD, medium     15-22 fps      50-60 fps

As you can see, Vega 11 is the best choice for an inexpensive “gaming” system: in some cases its performance approaches that of a full-fledged GeForce GT 1050, and it holds up well in most online games.

For now, this graphics only comes with the AMD Ryzen 5 2400G processor, but it's definitely worth a look.

Other types of graphics

History shows that graphics started out discrete: a separate board with the graphics processor and video memory soldered onto it - de facto a computer within a computer. To this day this type remains the most powerful and, at the same time, the most expensive.

Next came iGPUs - shared-memory graphics. Examples: the Intel 740, Intel HD Graphics and AMD's Vega. As a rule, such graphics adapters do not have their own video memory; system RAM is used instead. A popular solution for compact and office PCs, laptops, and game consoles.

What else is interesting?

Engineers are passionate and often creative people, so they developed other types of graphics as well. The impetus for these new ideas was the appearance of PCIe in 2002.

Nvidia, together with other vendors, developed the MXM (Mobile PCI Express Module) standard, which was meant to make it possible to upgrade the graphics in laptops. MXM modules can be swapped as needed without completely disassembling the device - essentially a familiar discrete video card, but without its own active cooling.

Another interesting technology is hybrid graphics. As APUs gained popularity, the graphics cores in many PCs sat idle - the video card took on all the load. Because of this, cheaper CPU versions without an iGPU even appeared, for example Intel processors with the “F” suffix. However, there are ways to use both the video card and the integrated graphics at the same time. For example, AMD states on its website: “Dual AMD Radeon Graphics is an innovative technology exclusive to AMD platforms that enables AMD APUs and select AMD Radeon discrete graphics cards to work together. Thanks to this combination, the platform provides amazing quality and performance, better than either device alone.”

Nvidia also has a similar technology, it’s called Optimus. When the load is low, the iGPU in the processor is used, when the load is high, discrete graphics are turned on with the words: “Now dad will figure it out.”

Of course, dual graphics from AMD and Nvidia have many nuances and compromises, but you can get both performance and increased battery life with their help.

Another technology that deserves attention is the external graphics card, or eGPU (External Graphics Processing Unit).

In their early days, laptops could not boast powerful graphics. Nowadays you can buy a gaming monster with an RTX 3080 on board, but back then laptops were mostly used for undemanding office applications. It eventually became obvious that PCIe made it possible to create an interface for connecting external graphics, which solves this problem.

Well, let's finish with the genius of Chinese engineering.

Meet the ZA-KB1650 board from Zeal-All with a built-in discrete (pardon the pun) GeForce GTX 1650 video card. Why on earth they soldered a discrete video card directly onto the motherboard, we will never know. But this is clearly a product for a specific task.

Dedicated graphics also use more power

There's a reason why dedicated graphics cards have built-in fans: they get hot.

Tests show that under heavy load, the Titan Xp can reach 185 degrees Fahrenheit (about 85 °C) or more. This comes on top of similar levels of heat from the processor and other components inside the computer. It is important that your computer does not overheat.

In comparison, an Intel Core M processor with integrated graphics tops out at around 160 degrees Fahrenheit (about 71 °C) while gaming. It has no fan at all and uses much less energy.

Benchmarks show that graphics performance on this rig compares to a dedicated card that is several years old. But if you're not a gamer and value energy efficiency, then this is probably your best choice.

Let's return to our servers

We will not linger long in the last millennium. Intel released the first server processors under the Xeon brand in 1998. The debut product was the Pentium II Xeon Processor.

Since 2001, Intel has also produced Itanium family processors based on the IA-64 architecture - the result of joint development with HP. Further development ended with the 32-nanometer Itanium 9700 (Kittson), presented in 2017.


First Itanium.

AMD's first attempt to enter the server market was the AMD Athlon MP processor, built on the Palomino core in 2001. Two years later, AMD (in partnership with IBM) introduced the Opteron server processors. That line ended with the Opteron A1170 in 2016, and in 2017 the new EPYC processors were introduced, which are still relevant today.


The AMD Opteron 246 processor is labeled 2001, although it was released in 2003. Maybe one of the readers knows why?

Above, I already touched on the topic of integrated graphics in server processors. Xeons with an iGPU are positioned by Intel as workstation solutions. At the same time, HD Graphics only began appearing in the Xeon family with its second generation (2011).

AMD simply split its lines - server Opterons and EPYCs, plus Threadripper for workstations, all without an iGPU - so as not to confuse buyers.

It turns out that microprocessor vendors do not integrate graphics into server CPUs, and server vendors ship their equipment without discrete graphics by default. Which raises the question: how do you work with such a machine at all?

I came across a question on the Dell forum from a poor guy who bought a Dell R330 server with an Intel Xeon E3-1225 v5 processor, which has an Intel HD P530 iGPU. He needed to run some kind of video encoding and decoding software that worked with Intel QuickSync Video technology, but the integrated graphics turned out to be disabled.

The answer he received was that the chipset does not support the graphics built into the CPU.

So the guy had to either install discrete graphics via an expansion card, or be content with the Matrox G200 graphics integrated into the chipset - which is clearly not enough for his task. This video accelerator was developed by Matrox back in 1998 (!) and looked like this.

Interesting fact: heatsinks used to be painted black, even though in theory an extra layer of paint interferes with heat dissipation. The reason is Kirchhoff's law of thermal radiation, which states:
“The ratio of the emissivity of any body to its absorption capacity is the same for all bodies at a given temperature for a given frequency and does not depend on their shape and chemical nature”.

Later, with the advent of active cooling, they decided to neglect emissivity.

These days the Matrox G200 has shrunk considerably and is soldered right onto the motherboard. :) The version found in modern servers (Matrox G200eW) has twice as much SDRAM - a whole 16 MB. And even though this dinosaur is hopelessly outdated, it is enough to display a console and even a GUI on a server - and you don't need more. By the way, at one time you could even play the legendary Half-Life, Quake II, SiN and Heretic II on it.
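If you are curious which display adapter your server actually exposes to the operating system, it is easy to check. Below is a minimal sketch (my own example, assuming a Linux host with the pciutils package installed) that simply filters the output of lspci for graphics devices; on a typical server you will see exactly that built-in Matrox G200 rather than anything discrete.

```python
# Minimal sketch: list the display adapters the OS sees (assumes Linux + pciutils).
import subprocess

GRAPHICS_CLASSES = ("VGA compatible controller",
                    "Display controller",
                    "3D controller")

def list_display_adapters():
    # lspci prints one PCI device per line; graphics devices are reported
    # under classes such as "VGA compatible controller".
    out = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
    return [line for line in out.splitlines()
            if any(cls in line for cls in GRAPHICS_CLASSES)]

if __name__ == "__main__":
    for adapter in list_display_adapters():
        print(adapter)   # e.g. a line mentioning the Matrox G200eW on many servers
```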

Without loading the CPU with unnecessary graphics, users and vendors receive the following benefits:

  • The cost of the processor is reduced, and there may be several of them;
  • Less power consumption and TDP requirements;
  • You can reduce the die size or fit more transistors on it.

Dedicated graphics laptops exist

You can get laptops with dedicated graphics cards, but your options are more limited. The tradeoffs are larger size and higher price.

Integrated graphics laptops like the Dell XPS 13 or Acer Swift 7 are less than half an inch thick. The comparable Dell model adds about a quarter of an inch to the depth. At 0.55 inches, the Asus ZenBook 13 is said to be the thinnest laptop with dedicated graphics.

Most laptops with discrete graphics are either gaming laptops or high-end machines aimed at professional users. The large size also means that 13-inch models are rare, with 15-inch and above more common.

Don't want to compromise on size but still want the best possible performance? There is a third, lesser-known choice: an external GPU.

A little about the difference between CPU and GPU

Server CPUs are powerful all-purpose workhorses, but a CPU has significantly fewer cores: a handful or a few dozen versus thousands in a GPU. For example, the AMD Threadripper 3990X has 64 cores and will execute fewer instructions per clock than the Nvidia RTX 3090 with its 10,496 CUDA cores, but its cores are much more complex and capable of executing long instructions with high precision.

If you compare rendering on the CPU and GPU, it turns out that the CPU will produce a more complex image, with less noise, but the speed will not be as impressive. To perform calculations of similar complexity, the GPU will have to break down long instructions into small and simple ones. Processing will be faster, but the caveat is that not all instructions can be split, and therefore rendering on the GPU is limited in its capabilities.
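To make the “few complex cores versus thousands of simple ones” idea a bit more tangible, here is a toy illustration in Python/NumPy (my own analogy, not real GPU code): the same per-pixel brightness computation written as a sequential loop, the way a single CPU core would walk through it, and as one data-parallel expression, which is the shape a GPU kernel takes so that thousands of cores can each handle their own pixel.

```python
# Toy analogy (not real GPU code): the same work in "CPU style" and "GPU style".
import numpy as np

frame = np.random.rand(270, 480, 3)            # a small RGB test frame, values in [0, 1]
weights = np.array([0.2126, 0.7152, 0.0722])   # standard luminance coefficients

def brightness_cpu(img):
    # CPU style: one core walks the pixels one by one; each iteration could
    # contain arbitrarily complex, branching logic.
    h, w, _ = img.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            r, g, b = img[y, x]
            out[y, x] = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return out

def brightness_gpu_style(img):
    # GPU style: one tiny, simple operation applied to every pixel
    # independently - the kind of work that maps onto thousands of
    # CUDA cores (here NumPy merely emulates the data-parallel form).
    return img @ weights

# Both produce the same picture; only the execution model differs.
assert np.allclose(brightness_cpu(frame), brightness_gpu_style(frame))
```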

When comparing head-to-head rendering, we get the following:

Parameter             CPU    GPU
Speed                         +
Complexity             +
Quality                +
Real-time rendering    —*     +
Price                         +

*CPUs can render in real time, but they do it incomparably worse.

In the video, from the 10th minute onward, you can see this for yourself: Crysis 1 running on an AMD Threadripper 3990X manages only very low FPS at minimum settings. Let me remind you that this is a workstation processor without integrated graphics, with 64 cores, 128 threads and a price tag of $3,990.

It seems that the advantage is in favor of the GPU, but when it comes to expensive projects where quality is a priority, they still choose the CPU. Therefore, servers without discrete graphics can easily take on the following tasks: rendering, video conversion, photo processing, video editing, video content creation, archiving, scientific calculations and much more. And the CPU will be able to process the 2D interface for transmission to thin clients - no doubt about it.

Animated films like Disney's Big Hero 6 rely on incredible render farms with 55,000 or more processors. Similar infrastructure is built to implement parallel rendering technologies, which significantly speeds up the processing of complex graphics and the visualization of large amounts of data.

Why new if the old can handle it?

All servers are equipped by default with integrated low-power GPUs that can provide basic functionality. And if additional resources are needed, then there is always the possibility of modification and expansion.

A server is, first and foremost, a business tool, and therefore vendors use time-tested solutions that are sufficient for the assigned tasks. This does not overload the product with unnecessary functionality and does not increase its production costs. If servers ever need powerful graphics to function properly, you can rest assured that competition and common sense will instantly force manufacturers to install it. Perhaps in the processor, or perhaps in the chipset. But now even the developments from 1998 can cope.

In lieu of a conclusion, I would like to say that the Matrox G200 was, in its day, an amazingly advanced video accelerator. Perhaps in 20 years the RTX 3090 will be just as hopelessly outdated and will live on as a small chip driving the simple (by future standards) GUI of a refrigerator or multicooker.

In the meantime, let's dream about adequate prices for discrete graphics.

How to compare video chips

Comparing them by eye is quite difficult, so we recommend that you take a look at this page, where you can see information about all integrated Intel solutions, and this one, where you can see the performance rating of video adapters and their results in benchmarks. To find out what graphics are available on the processor you need, go to the Intel website, search for your processor using the filters, and then look in the “Graphics built into the processor” column.
