What is artificial intelligence scaling? This handy development in TV imaging takes content at a lower resolution than the TV’s own panel and optimizes it to look better – sharper and more detailed.
It may sound a lot like plain old upscaling, and you’d be right – the AI part just means the scaling is done with more awareness of context.
This is because AI upscaling creates new pixels to add detail where it didn’t previously exist, filling in the gaps to build a higher-resolution image, while using machine learning to improve the result.
Handle this poorly and it can look like overdone picture sharpening – a feature found in the menus of most TVs, and one we usually recommend turning all the way down. But top TV brands – and Nvidia, a maker of PC graphics hardware – now have compelling 4K and 8K AI upscaling technologies that elevate AI scaling beyond the insignificant marketing buzzword it could have been.
AI Scaling FAQ
- What is artificial intelligence scaling? AI upscaling attempts to make video below 4K (or below 8K on 8K TVs) look more like video captured at the screen’s native resolution.
- How is AI scaling different from normal scaling? Artificial intelligence scaling is simply smarter scaling. Both types prepare video for displays with a larger number of pixels, but AI scaling uses machine learning to improve the image.
- Who offers the best artificial intelligence scaling? Sony and Samsung are at the forefront of AI upscaling in the world of televisions. The Nvidia Shield TV box also has very effective upscaling if you have a 4K TV with a basic scaler but otherwise satisfactory picture quality.
Which TVs have AI scaling?
All high-end televisions have some form of AI upscaling, even if the manufacturer does not explicitly call it artificial intelligence.
Samsung’s AI upscaling is available on sets with its Quantum Processor hardware. These include the Samsung Q70, Q75, Q80, Q90 and Q900 series – meaning the new QN900A and QN95A are included.
Sony’s state-of-the-art scaling is called X-Reality Pro or XR Upscaling, depending on the series you purchase. 4K and 8K XR scaling are Sony’s best.
LG also uses the term AI Upscaling, but for best results you need a TV with an α9 Gen 4 or α7 Gen 4 processor. These include its best OLED TVs and higher-end LG NanoCell TVs.
Panasonic doesn’t talk up its upscaling as much as its competitors, but its high-end TVs have an ‘HCX PRO intelligent processor’ that performs comparable processing.
How does artificial intelligence scaling work?
Take a piece of 1080p content. It is designed for displays with just over two million pixels. But 4K TVs have almost 8.3 million pixels, while 8K TVs have an amazing 33 million pixels.
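The pixel arithmetic above is easy to verify – a quick sketch using the standard 1080p, 4K UHD and 8K UHD panel dimensions:

```python
# Pixel counts for common panel resolutions (width x height).
resolutions = {
    "1080p": (1920, 1080),
    "4K":    (3840, 2160),
    "8K":    (7680, 4320),
}

for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h:,} pixels")

# An upscaler going from 1080p to 4K must invent 3 of every 4 output pixels;
# going to 8K, it must invent 15 of every 16.
ratio_4k = (3840 * 2160) / (1920 * 1080)   # 4.0
ratio_8k = (7680 * 4320) / (1920 * 1080)   # 16.0
```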
The simplest way to fill those extra pixels is duplication, where blocks of pixels reproduce the same visual information, resulting in blocky images better suited to retro games than home movies. This is called nearest-neighbor scaling.
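Nearest-neighbor scaling is trivial to sketch with NumPy’s `repeat` – a minimal illustration, not how a TV chipset actually implements it:

```python
import numpy as np

def nearest_neighbor_upscale(img: np.ndarray, factor: int) -> np.ndarray:
    """Duplicate each pixel into a factor x factor block."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# A 2x2 grayscale "image" doubled to 4x4: every pixel becomes a 2x2 block,
# which is exactly what produces the blocky, retro-game look.
src = np.array([[0, 255],
                [255, 0]], dtype=np.uint8)
out = nearest_neighbor_upscale(src, 2)
print(out)
```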
Another approach is to create intermediate pixels, smoothing the transitions between pixels in low-resolution sources. A bright pixel next to a dark pixel in the source video would be bridged by mid-brightness pixels in the final upscaled image. The result is a softer picture.
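That smoothing is just linear interpolation. A one-dimensional sketch shows a dark/bright pixel pair bridged by a mid-level value:

```python
import numpy as np

# Two source pixels: dark (0) then bright (255).
src = np.array([0.0, 255.0])

# Upscale to 3 samples: positions 0, 0.5 and 1 along the source.
positions = np.linspace(0, 1, 3)
upscaled = np.interp(positions, [0, 1], src)
print(upscaled)  # the inserted middle pixel lands at mid-brightness, 127.5
```

Real scalers use fancier 2D kernels (bilinear, bicubic), but the principle – averaging neighbors to fill gaps – is the same, which is why the result looks soft.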
AI upscaling instead tries to figure out what those extra, “spare” TV pixels should display, using machine learning. Sony provides the best one-sentence summary, saying that “the patterns in the images are compared to the patterns stored in a unique database to find the best hue, saturation and brightness for each pixel.”
The friendliest mental model of AI upscaling is that these TV processors recognize objects such as grass, fur, or eyelashes and fill in the detail missing from the source image. But it would be more accurate to say the upscaling relies on algorithms that identify common contrast patterns in real-life scenes.
The upscaling engine takes cues from the source material to estimate what it would have looked like if it had been captured at native 4K or 8K.
For example, grass shot at 1080p loses much of the crispness it would have at 4K. The sharp edges of each blade and some of the green hue contrasts are gone, but the “patterns” Sony and others use in developing their upscaling algorithms try to recover some of this information.
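The “stored patterns” idea can be caricatured as example-based super-resolution: match each low-resolution patch against a database and paste in the high-resolution detail stored alongside it. A toy sketch, where the dictionary and patch values are invented for illustration:

```python
import numpy as np

# Invented dictionary: each entry pairs a 2-pixel low-res pattern with the
# 4-pixel high-res pattern it was learned from. Real systems store vastly
# more patterns, learned from training images.
pattern_db = [
    (np.array([0.0, 0.0]),     np.array([0.0, 0.0, 0.0, 0.0])),          # flat dark
    (np.array([255.0, 255.0]), np.array([255.0, 255.0, 255.0, 255.0])),  # flat bright
    (np.array([0.0, 255.0]),   np.array([0.0, 40.0, 215.0, 255.0])),     # sharp edge
]

def upscale_patch(low_res_patch: np.ndarray) -> np.ndarray:
    """Return the stored high-res pattern whose low-res key is closest."""
    best = min(pattern_db, key=lambda e: np.sum((e[0] - low_res_patch) ** 2))
    return best[1]

# An edge-like patch gets a reconstructed steep edge rather than a blurry ramp.
hr = upscale_patch(np.array([10.0, 250.0]))
print(hr)
```

Modern TV chips bake this kind of lookup into neural-network weights rather than an explicit table, but the intuition – “this looks like an edge I’ve seen, so draw an edge” – carries over.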
It is especially useful in high-contrast areas, such as the outline of the iris in someone’s eye. A good AI upscaler not only sharpens that edge but also smooths the transition between the darker iris and the white of the eye, while avoiding the halo effect caused by over-sharpening.
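Halos come from sharpening overshooting at an edge; one common remedy is clamping the sharpened value to its local neighborhood range. A rough one-dimensional sketch of that idea – not any vendor’s actual algorithm:

```python
import numpy as np

def sharpen_no_halo(signal: np.ndarray, amount: float = 1.0) -> np.ndarray:
    """Unsharp mask with overshoot clamped to the local min/max."""
    blurred = np.convolve(signal, np.ones(3) / 3, mode="same")
    sharpened = signal + amount * (signal - blurred)
    # Clamp each sample to its 3-sample neighborhood to suppress halos.
    # (np.roll wraps at the array ends; fine for a sketch.)
    lo = np.minimum.reduce([np.roll(signal, 1), signal, np.roll(signal, -1)])
    hi = np.maximum.reduce([np.roll(signal, 1), signal, np.roll(signal, -1)])
    return np.clip(sharpened, lo, hi)

# A dark-to-bright edge: plain unsharp masking would overshoot past 200
# (a bright halo) and undershoot below 50 (a dark halo); clamping stops both.
edge = np.array([50.0, 50.0, 50.0, 200.0, 200.0, 200.0])
print(sharpen_no_halo(edge))
```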
Balance is key, as with any kind of image processing. Engineers at Samsung, Sony and their competitors will have tested far more aggressive AI upscaler settings than we see today before settling on the final profiles in current TVs, which best balance sharpness and detail against a natural-looking picture.
The best scalers also track how pixels change across frames, to stop detail invented by the scaler from popping in and out of existence as the image moves.
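One way to picture that temporal check is smoothing the synthesized detail over time, so invented texture doesn’t flicker frame to frame. A toy sketch using an exponential moving average (the smoothing factor is an assumption, not a known vendor value):

```python
import numpy as np

def stabilize_detail(detail_frames, alpha: float = 0.3):
    """Blend each frame's synthesized detail with a running average."""
    smoothed = []
    running = detail_frames[0].astype(float)
    for frame in detail_frames:
        running = alpha * frame + (1 - alpha) * running
        smoothed.append(running.copy())
    return smoothed

# Detail that pops in and out across frames (0 <-> 100) gets damped
# instead of flickering at full strength.
frames = [np.array([0.0]), np.array([100.0]), np.array([0.0]), np.array([100.0])]
out = stabilize_detail(frames)
print([f[0] for f in out])
```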
At its core, AI scaling is arguably just a better way to sell upscaling that has improved in fairly predictable ways, as several of the same core technologies are also used in image processing for smartphone photos. But that doesn’t mean it’s not important.
Televisions have processors designed to do these calculations on the fly, which is why older TVs can’t simply gain next-generation upscaling through over-the-air firmware updates.
The real-time aspect is also a limiting factor. TV processors are designed for narrow jobs, and they are nowhere near as powerful as, say, a high-end smartphone chip or a PC graphics card. Upscaling must also happen almost instantly, and every millisecond it takes adds input lag.
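The time budget is easy to quantify: at 60 frames per second the whole pipeline, upscaling included, has well under 17 milliseconds per frame. A back-of-envelope sketch:

```python
# Per-frame time budget at common refresh rates, in milliseconds.
budgets = {fps: 1000 / fps for fps in (24, 30, 60, 120)}

for fps, budget_ms in budgets.items():
    print(f"{fps} fps -> {budget_ms:.2f} ms per frame")
```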
What we said earlier about AI upscaling may make it sound like a kind of magic, but the limits of time and processing power mean it’s actually relatively shallow. Still, it’s a powerful technology that, done correctly, only improves image quality and helps low-resolution sources look their best on high-resolution displays.
What about Nvidia DLSS?
Nvidia is another major force in AI upscaling. The Nvidia Shield TV offers excellent 4K upscaling and is a good option if you are not quite ready to buy a new TV. The Shield TV and Shield TV Pro are well equipped for the job, with a 256-core graphics processor that is far more powerful than a TV chipset, allowing them to upscale 720p and 1080p content to 4K at 30fps.
The company’s revolutionary DLSS gaming feature is also, undeniably, a form of AI upscaling. DLSS stands for deep learning super sampling. Available on Nvidia RTX 20- and RTX 30-series graphics cards, it lets a gaming PC render at a lower resolution while DLSS makes up the difference.
A game can render at 1080p while the final image you see is, to most eyes, nearly indistinguishable from 4K. That means higher frame rates and more headroom for expensive graphical effects, such as ray tracing, even if you don’t have one of the most powerful graphics cards.
Digital Foundry explored the differences between the two latest versions of DLSS, 1.9 and 2.0, on YouTube. Give it a watch. It also serves as a good visual example of the improvements successive generations of AI enhancement can provide, even though AI upscaling for games is not quite the same as AI upscaling for movies and other video content.
Want to do your own AI upscaling? Here’s how
Mac and PC software packages offer AI upscaling of their own. These aren’t usually meant for real-time upscaling, but they’re great if you want to clean up some of your own videos.
Topaz Labs Video Enhance AI is one of the best around. You drag and drop your file into its interface, select an upscaling profile, and let it do its thing. Different profiles use different AI models, so each gives the video a different character and upscaling intensity.
Topaz says upscaling HD footage to 8K takes about 2-3 seconds per frame with an Nvidia GTX 1080 graphics card. Even with a powerful computer, that’s nowhere near real-time processing. But it allows the software to use more complex AI processing to push detail-limited video further.
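At that rate, the gap to real time is stark: a single minute of 24fps video takes about an hour to process. Quick arithmetic, using the 2.5-second midpoint of Topaz’s figure:

```python
seconds_per_frame = 2.5   # midpoint of Topaz's quoted 2-3 s per frame
fps = 24
clip_seconds = 60

total_frames = fps * clip_seconds                       # frames in one minute
processing_minutes = total_frames * seconds_per_frame / 60
print(f"{total_frames} frames -> {processing_minutes:.0f} minutes to process")
```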
- Check out the best TVs available today