In the last 12 months, the most popular graphics cards in history have been released. There's only one problem: none of them are for sale. Fortunately, despite the scarcity of new silicon, some of us have been extended an olive branch that lets us keep pushing our frame rates ever higher. It has arrived in the form of upscaling technology, AI-powered or otherwise.
Upscaling solutions come in a variety of shapes and sizes, but they all strive to accomplish the same goal: take a frame rendered at a lower resolution and enlarge it to a higher one. Where they differ is in how they maintain visual quality during the process.
Basic upscaling, like Mike Teavee, will just stretch the pixels, resulting in poor image quality and a loss of clarity. Smarter upscaling solutions will infer information from the scene or enhance certain elements to produce a sharper, more defined image.
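To make that distinction concrete, here's a minimal, hypothetical sketch in Python with NumPy. It is nothing like the real DLSS or FSR pipelines; it just contrasts dumb pixel-stretching with simple bilinear interpolation, which is roughly the gap between "stretch it and hope" and doing something slightly cleverer with the data you have.

```python
# Minimal illustration only: naive pixel-stretching vs. bilinear interpolation.
# Assumes a grayscale frame stored as a 2D NumPy array of floats.
import numpy as np

def nearest_neighbor_upscale(frame: np.ndarray, factor: int) -> np.ndarray:
    """Basic upscaling: every pixel is simply repeated, so edges turn blocky."""
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

def bilinear_upscale(frame: np.ndarray, factor: int) -> np.ndarray:
    """Slightly smarter upscaling: new pixels are interpolated from their
    neighbors, trading blockiness for a softer but more coherent image."""
    h, w = frame.shape
    new_h, new_w = h * factor, w * factor
    # Map each output pixel back into the source frame's coordinate space.
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    # Blend the four surrounding source pixels for every output pixel.
    top = frame[np.ix_(y0, x0)] * (1 - wx) + frame[np.ix_(y0, x1)] * wx
    bottom = frame[np.ix_(y1, x0)] * (1 - wx) + frame[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bottom * wy

if __name__ == "__main__":
    frame = np.array([[0.0, 1.0], [1.0, 0.0]])  # a tiny 2x2 "frame"
    print(nearest_neighbor_upscale(frame, 2))   # blocky 4x4 result
    print(bilinear_upscale(frame, 2))           # smoother 4x4 result
```

The modern techniques go much further than this, folding in motion vectors, previous frames, and trained models, but the basic job is the same: fill in pixels you never actually rendered.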
Upscaling is by no means a novel notion, but it is a tool in the PC gaming toolkit that is becoming increasingly crucial. Not only are genuinely faster GPUs hard to come by, but the demands of resolution and rendering are growing so large, and so quickly, that making a major leap in PC performance is becoming ever more difficult, and certainly more expensive.
Today, 4K is the peak of PC gaming resolution. Perhaps 8K for a select few, but let's stick to the more practical option. The transition to 4K gaming has taken a long time. The fastest cards can now render it natively, even at high frame rates, but the emergence of upscaling technologies, which coax more performance out of the silicon than native rendering allows, has been a crucial driver of 4K's adoption among PC gamers.
Deep Learning Super Sampling (DLSS), FidelityFX Super Resolution (FSR), and Temporal Super Resolution (TSR) are the technologies I'm referring to. Techniques like these, once consigned to the task of anti-aliasing, now hold a key position in playing the latest games at high resolutions and high fidelity.
Game requirements are growing so quickly that even the best graphics cards in the world are struggling to keep up. Don't get me wrong: an Nvidia GeForce RTX 3090 will keep you playing the most demanding games for a long time, but even the once-proud stallion of the previous generation, the GeForce RTX 2080 Ti, struggles at 4K with ray tracing enabled in many of today's titles.
The basic reality is that, just as graphics architectures are rapidly evolving, so are game development, fidelity, texture quality, models, environments, display technology, and much more.
Upscaling solutions have enabled us to defy the GPU drought while maintaining strong performance, and their importance will only grow. Can you imagine how important DLSS will be in 10 years, when it has only been in development for a few? Between DLSS 1.0, which was respectable but noticeably softer than native rendering, and DLSS 2.0 and 2.1, which come close to the real thing while delivering significant speed gains, this technology has advanced by leaps and bounds.
It's tough to envision now, but the next huge jump in PC graphics may come from an upscaling technique implemented in software, possibly hardware accelerated in some way, the likes of which we've only seen in rudimentary form so far.
DLSS is already how Nvidia envisions RTX 3090 owners reaching 8K's gigantic pixel count without their PC melting down into a sad little puddle, so I'm guessing it's how the leap to 8K will be made in earnest.
Despite the name, 8K is not merely double the resolution of 4K. A genuine 4K image requires a graphics card to render 8,294,400 pixels per frame. At 8K, that number jumps to 33,177,600. That's a 300 percent increase in pixels, one that will be difficult, if not impossible, to achieve through raw silicon alone.
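If you want to sanity-check those figures, the arithmetic falls straight out of the standard pixel grids, assuming the common UHD definitions of 3,840 × 2,160 for 4K and 7,680 × 4,320 for 8K:

```python
# Pixel counts behind the figures quoted above.
pixels_4k = 3840 * 2160   # 8,294,400 pixels per frame
pixels_8k = 7680 * 4320   # 33,177,600 pixels per frame

print(pixels_8k / pixels_4k)                       # 4.0 -> four times the pixels
print((pixels_8k - pixels_4k) / pixels_4k * 100)   # 300.0 -> a 300 percent increase
```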
GPUs are already approaching the reticle limit, the physical limit of the lithographic processes used to build these chips, and the cost of producing such massive dies is going to be a major concern for almost everyone involved. Yields, packaging, power… you name it, there's a limit to what can practically be achieved without making a sacrifice.
That’s not to suggest GPU development is over—of course not; you’ve got some of the greatest minds in the business working on it—but there is a cost/performance trade-off to consider. You could just load up a multi-chip GPU with a slew of cores and call it a day, which I can see happening, but is there a ‘cheaper’ method to get a significant performance boost alongside that?
I believe there is: upscaling technologies taking over ever larger chunks of the task. DLSS has proven to be a powerful weapon in the RTX toolkit, but this experiment is far from over. Its success, like that of AMD's FSR, will only spur further investigation into the utility of upscalers and AI algorithms, and bigger and better upscalers will aim for performance increases that were previously thought to come only from bigger and better chips.
So what might that look like? Epic's answer, Temporal Super Resolution, is built directly into Unreal Engine 5, paving the way for its use in a wide variety of games. Then there's AMD, which says FSR is just the start of its journey, whether that journey continues with FSR or with something entirely new. Both of those options are hardware agnostic, which is a big plus.
Similarly, Microsoft appears to be interested in building an upscaler on DirectML, its own machine learning API, that can compete with the best. It may not be a top priority for us PC users, but consider the impact strong upscaling technology could have on the console wars: it could deliver a mid-gen performance jump unlike anything a console generation has seen before.
And if it's delivered through a DirectX-based API, it could easily follow DirectStorage's path from console roots to a home in a future OS such as Windows 11.
Then there's Nvidia, the current AI upscaling champion. DLSS 2.0 and 2.1 are already fantastic, but the prospect of 3.0 excites me almost as much as the prospect of the next graphics architecture after Ampere, and I don't say that lightly.
Machine learning models can't conjure detail out of thin air, though, so dedicated and powerful silicon isn't going anywhere. But if these upscaling strategies are only the first of many to use clever algorithms to deliver more performant gaming at greater resolutions and fidelity, there is surely a rich vein of performance yet to be tapped.