What Video AI Upscalers Can and Can’t Do, and How to Make Them Do It Better
For the last 2.5 years, I’ve worked to restore Star Trek: Deep Space Nine and more recently, Star Trek: Voyager. I’ve written about those efforts on multiple occasions, but today’s story is a bit different, and it casts a much broader net. Instead of focusing on Star Trek, this article is a comprehensive overview of what AI upscaling software does well, where it falls short, and how to improve your final upscale quality. In some scenarios, a properly processed, upscaled DVD can rival or even surpass an official studio Blu-ray release.
I know that’s an eyebrow-raising claim. It’s one of the topics we’ll be discussing today, and you can see the evidence below.
I have tested multiple AI upscaling applications, both paid and free, using everything from old VHS-equivalent footage to native 720p HD content. This article is also a guide to maximizing your output quality while avoiding some of the pitfalls that can wreck your video. There is some Star Trek footage here, but it’s only one type of content among many.
The question I get most often is “Can AI upscaling improve a video?” Here’s one of my answers to that. If you like what you see, keep reading.
Based on what I’ve heard from readers, there’s a lot of confusion about when AI upscaling is useful, how much improvement can be achieved, and whether paid products like AVCLabs Video Enhancer AI, DVDFab Video Enhancer AI, or Topaz Video Enhance AI are worth the money. My goal is to help you answer these questions and give you a feel for what’s possible, no matter what your project is.
It is also possible to train your own AI models at home if you have a sufficiently powerful GPU and a training data set, but that’s a separate topic from the question of whether any currently available paid or free AI products are worth using. This article does not address the relative merits of training one’s own model versus using already-available products. This story also does not consider online services, as these are not suitable for large-scale image processing.
How to Read This Article
I’ve never had to write a section telling people how to read one of my stories, but I’ve also never written a story that was 16,000 words long with 50 or so videos across eight pages.
This is an omnibus article that deals with three distinct topics. The text on this page discusses what AI upscalers can and can’t do, with example images and videos embedded throughout. There’s also a section on advanced AI upscaling tips for those looking to get more out of these applications, and a specific demonstration of how an upscaled version of Stargate SG-1 based on a DVD source can beat the quality of MGM’s official Blu-ray release.
The links in the table of contents below allow you to jump to any section of the article while the video links point to a dedicated page for each clip. These supplemental pages contain additional encodes and side-by-side comparisons for interested readers. At the end of each section, you’ll see two small links labeled “Contents” and “Video.” Those links will return you to the table of contents and the list of video appendices, respectively. I’ve also written a separate page for anyone who wants advanced tips on maximizing upscale quality. All of the supplemental appendices will be linked again at the bottom of the page for easy navigation, so you don’t have to jump back up to the top to access them.
If you have questions about what AI upscaling is, what it can do for you, and how to achieve the highest possible quality, you’ll want to keep reading this page. If you want to know more about advanced methods for improving upscale quality, grab the “Master Class” link. If you want to see the DVD v. Blu-ray discussion, specifically, it’s the last link in the video list below.
Table of Contents
- What is AI Upscaling?
- What Kinds of Upscaling Software Exist?
- What’s the Difference Between Cupscale and Topaz Video Enhance AI?
- The Impact of Pre-Processing
- Is Your Video a Good Candidate For Upscaling?
- Why is Upscaling Controversial?
- How Much Improvement is it Possible to Get?
- Conclusion: The Current State of Consumer AI Upscaling Software
- Bonus Round: AI Upscaling Master Class
I opted to split the story up this way after realizing there was no practical way to discuss both the software and the video it creates in detail in a single-page article. The video-specific pages at the bottom of this article document every step of the workflow. Every appendix contains multiple encodes and comparisons, but I created over 20 videos for Final Fantasy X, specifically. If you want a one-page look at how different AI upscalers and non-AI improvement methods compare against each other, check there.
The Blu-ray v. DVD claim is addressed on the last page with a comparison between the Blu-ray release of Stargate SG-1 versus an upscaled version of the same episodes from DVD. All of the videos shown in this story were resized to 4K for upload to minimize the impact of YouTube’s bad low-resolution video encoding.
Videos
- Chance v. Stream (320×240, Canon PowerShot circa 2006)
- Final Fantasy X: The Dance (582×416, Original PS2 disc)
- Battlestar Galactica: The Living Legend Project (720×400 as provided)
- Mamma Grizzly (720p base, original source)
- The Making of Dick Tracy (Encoded at 582×320, VHS-quality, not original source)
- Beyond the Blu-ray: Stargate SG-1 Upscaled DVD v. Official Blu-ray Release
You will want to set YouTube to 4K, or as high as your monitor supports, in all cases. You can use the “,” and “.” keys to navigate frame-by-frame in a paused YouTube video. When you see a label like “Original Video” in a comparison, that means you are seeing the original, unaltered footage or an upscaled version of that footage with no other processing applied. I deinterlaced/detelecined these clips if necessary but changed nothing else. Terms like “filtered” and “processed” mean the video was edited in a different application before I upscaled it.
These sample videos ranged in size from 320×240 to 1280×720, and their baseline quality and level of baked-in damage varied widely. Each of the individual video pages shows the footage at every stage of processing, from the initial video through to the final upscale. This will help you track where improvements come from and how each stage of processing changes the final output.
You may not like every output I created. You may not like any output I created. That’s fine. The goal is to show you the breadth and depth of what’s possible with various settings and applications, not to sell you on my own work as the crème de la crème of upscaling.
Before we get started, a few quick terms: “Original source” means “The original video file created by the authoring device.” In the context of a DVD, original source might be an .m2v or set of VOB files. “Native” refers to the original resolution of a file. A video can still be in its native resolution without being original source. I will also sometimes refer to a 200 percent upscale as a 2x upscale and a 400 percent upscale as a 4x upscale. These terms are equivalent.
Table of Contents / Videos
What Is AI Upscaling?
Upscaling — with or without artificial intelligence — is the process of converting a video or image from a lower resolution to a higher one. When you resize an image from 640×480 to 1280×960 in a program like Photoshop or Paint.net you are also upscaling the image, just not with AI. Some video and image-editing applications allow the end-user to choose a specific image resizing algorithm, including options like Lanczos, Bicubic, Bilinear, and Nearest Neighbor.
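If you’d like to see how much the algorithm choice matters, here’s a quick Python sketch using the Pillow imaging library. Pillow isn’t part of any workflow discussed in this article, and the file name is a placeholder; this just demonstrates the concept of selecting a resampling method.

```python
# Upscale the same still frame with three classic (non-AI) resampling algorithms.
from PIL import Image

img = Image.open("frame.png")               # placeholder path
target = (img.width * 2, img.height * 2)    # a 200 percent (2x) upscale

# Each algorithm interprets the same source pixels differently:
nearest = img.resize(target, Image.Resampling.NEAREST)    # blocky; preserves hard edges
bilinear = img.resize(target, Image.Resampling.BILINEAR)  # smooth, but soft
lanczos = img.resize(target, Image.Resampling.LANCZOS)    # sharper; can ring at edges

for name, out in (("nearest", nearest), ("bilinear", bilinear), ("lanczos", lanczos)):
    out.save(f"frame_2x_{name}.png")
```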
The image below shows how different image resizing algorithms interpret the same source image. By using a different algorithm, you can change how your newly-resized image looks whether you are making it smaller or larger than before. It isn’t very easy to compare all the filtering patterns, so I cropped the Bilinear, Lanczos, None, Spline36, and Spline16 samples and uploaded them to imgsli if you’d like to see a close-up comparison. Use the slider to check differences and the drop-down menu to select different images. The little square icon will maximize the comparison.
AI upscalers also process and resize images/video, but they don’t rely on the same scaling algorithms. AI upscalers use models trained with machine learning to process video and increase its apparent quality and/or display resolution. Developers can choose to train the same base model on different types of content to create similar models with different specializations, or they can use new training sets for each model. Similar models with different specializations are often said to belong to the same model family. One model in a family might be tuned only to remove noise, while another might also sharpen the picture. Run the same video through two different AI models, and you’ll get two different-looking outputs.
The example below compares frame 85,020 from the original Star Trek: Deep Space Nine episode “The Way of the Warrior” with the same frame after upscaling with different models in Topaz Video Enhance AI. Included are examples from the Dione Robust, Artemis High Quality (AHQ), and Proteus models as rendered in TVEAI 2.6.4, as well as AHQ from the just-released Topaz 3.0 Beta. The color tweaks between the original source and the three upscaled frames from TVEAI 2.6.4 were intentional on my part. The 3.0 Beta shifted color a bit in its own direction, however.
The model you choose determines what your output will look like, no matter what AI upscaling application you use. Run live-action content through an AI designed to upscale animation, and you’ll get results that could charitably be described as “odd.” Pick a model that includes heavy denoising when your video doesn’t need it, and your output video may be over-smoothed. Choosing an AI model to resize content is conceptually similar to choosing a specific resizing algorithm in an image editing application, even though the mechanism of action is very different. In both cases, you are selecting a particular processing method that will impact how your resized image looks.
Table of Contents / Videos
What Kinds of Upscaling Software Exist?
There are a variety of AI upscaling applications you can download, both paid and free. Out of all of them, there are two clear leaders: Topaz Video Enhance AI and Cupscale. Topaz Video Enhance AI is easily the best paid product you can purchase, but at $200, it’s not cheap. Cupscale is a free alternative worth investigating if you don’t have the cash, though there are some tradeoffs we’ll discuss in this section.
Initially, I had planned to include multiple paid applications across this entire article, but I ran into serious problems with both AVCLabs Video Enhancer AI and DVDFab’s Video Enhancer. DVDFab’s problem is that it produces generally low-quality output that doesn’t qualify as “improved” in any meaningful sense of the word. AVCLabs had similar problems and often broke at scene boundaries, pulling information from the next scene backward into the current one. I have included results for both AVCLabs and DVDFab in the Final Fantasy X appendix but did not run extensive tests with these applications.
In this comparison, AVCLabs Broken 1 is blurry and destroys detail, while Broken 2 causes certain items of clothing to pop out of the scene. Both cause unintended color shifts. DVDFab Broken 1 is full of odd vertical bars, Broken 2 flattens everything and has a lovely error on the bottom-right side of the frame, and Broken 3 gives up on upscaling in favor of smearing Vaseline on your display. The Video2x and Cupscale outputs all show color shifts, but they do not cause damage the way AVCLabs and DVDFab do.
As of this writing, neither AVCLabs nor DVDFab has built a video upscaler worth spending money on. If you decide to spend money on an upscaler after reading this article, Topaz Video Enhance AI is the one to buy. If you don’t think TVEAI is worth spending money on, there is no other paid application I’m aware of that remotely compares. The content samples provided throughout this article and in the appendices should help you make that decision.
There is another free application that I also tested: Video2x. It produced some reasonable results in Final Fantasy X that I’ve included on that video’s supplemental page, but it’s a bit tougher to use than Cupscale and somewhat less flexible. Overall, I think Cupscale gives a better sense of what modern AI can accomplish, but Video2x is also worth a look.
Note: This article focuses on Topaz Video Enhance AI 2.6.4 and Cupscale 1.390f1, with some specific sample videos for AVCLabs (paid), DVDFab (paid), and Video2x (free). Because I made some intentional color changes during upscaling, I’ll note where I intended for those to happen.
Table of Contents / Videos
What’s the Difference Between Cupscale and Topaz Video Enhance AI?
Cupscale and TVEAI are aimed at rather different audiences. Topaz VEAI very much wants to be an easy, one-click solution for improving videos. Unfortunately, the app cannot automagically determine which models and settings will give the best results for your specific video. As a result, the end-user must experiment to discover which model(s) produce the best results. The application has a preview function that’s quite useful for this.
Model choice isn’t the only thing that impacts your final output. Users also need to test the difference between 200 percent and 400 percent upscales. 400 percent upscales are not just bigger versions of 200 percent upscales. The subjective difference between 200 percent and 400 percent can be just as large as the difference between two different model families. I recommend testing 200 percent upscales before 400 percent because the 400 percent models are not as flexible at dealing with a wide range of content as the 200 percent models and they also take much longer to process.
Topaz Video Enhance AI’s 200 percent models tend to maintain detail better and sometimes render more accurately than its 400 percent models; they also require less VRAM and render more quickly. The 400 percent models are sometimes better at smoothing and noise removal. This is not always the case with Cupscale, where individual model processing times vary to the point that a fast 4x model might outperform a slow 2x or even 1x model. If you’ve read earlier stories I’ve written, you may recall that I used to recommend TVEAI’s 400 percent models over its 200 percent models, but I’ve changed my approach as Topaz’s video upscaler has matured. I suggest starting with 200 percent models and testing 400 percent models as needed. Final resolution matters much less when upscaling than people think (the ‘Master Class’ page has more details on this).
The video below compares an unmodified clip of Battlestar Galactica with a clip I upscaled 200 percent in TVEAI using the Artemis High Quality (AHQ) preset. Longtime special effects artist Adam “Mojo” Leibowitz created updated VFX for this clip back in 2010 to show what a modern remaster of the show might look like. This video and a number of other comparisons are posted on the BSG dedicated page.
The comparison above represents the beginning of the improvements Topaz Video Enhance AI can deliver, not the end. The next video shows a Cupscale-created video rendered with the 4xUniscaleRestore model and using a version of this clip that was preprocessed in Hybrid with VapourSynth.
Topaz VEAI includes features Cupscale lacks, including the option to only upscale a specific subset of frames within a larger video. It supports at least one file type (.mov) that Cupscale doesn’t. TVEAI’s models create color shifts less often and the degree of change is typically smaller. Topaz Video Enhance AI’s user interface is also a bit easier to navigate, in my opinion.
Cupscale’s base installation includes several AI models, but the real strength of the application is its ability to run a plethora of user-created models available from pages like this one and scattered around the web. You can experiment with a much larger total range of models in Cupscale than you can in TVEAI, and installing them is as simple as copying them to the appropriate subdirectory. There are Cupscale-compatible models that can improve a video just as much as any paid application I’ve seen, including Topaz’s. The output below is color-shifted, but the overall level of detail improvement is reasonable for a single pass on the original video.
I had to install Python 3.9 to enable the Python AI network model; the included embedded version did not work with my GPU. If you follow this set of instructions closely you should be able to get it up and running. Python proved faster than NCNN, but not always dramatically. I did not test Cupscale on AMD or Intel GPUs, but both are supported through Vulkan.
I really like Cupscale — some of the videos in this article are TVEAI + Cupscale hybrids — but the application also has some downsides that being free doesn’t fix.
First, Cupscale is slower than Topaz Video Enhance AI. This is true whether you use the ESRGAN (Pytorch) network or the ESRGAN (NCNN) option. My understanding is that Pytorch is Nvidia-only and that AMD GPUs need to use Vulkan. I tested both options on an RTX 3080 and found Pytorch anywhere from roughly equal to 3x faster than Vulkan, depending on the model. My RTX 3080 upscaled Final Fantasy X’s “The Dance” CGI scene at roughly 41.5 frames per minute using the ESRGAN (NCNN) AI network and the 2x_KemonoScale_v2 model. The fastest performance I saw in any clip using Pytorch was ~90 fpm. Unfortunately, even 90 fpm does not compare to Topaz: Topaz Video Enhance AI is anywhere from 4x – 10x faster than Cupscale. The performance gap is large enough to meaningfully improve your ability to check samples and proof content.
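To put those frames-per-minute figures in more concrete terms, here’s the back-of-the-envelope math. The 90-second clip length is an illustrative assumption, and the 6x Topaz multiplier is simply the midpoint of the 4x – 10x range above.

```python
# Rough render-time math for a hypothetical 90-second, 29.97 fps clip.
frames = 90 * 29.97              # ~2,697 frames
cupscale_fpm = 41.5              # frames/minute: ESRGAN (NCNN) + 2x_KemonoScale_v2
topaz_fpm = cupscale_fpm * 6     # assumed midpoint of the 4x-10x advantage

print(f"Cupscale: {frames / cupscale_fpm:.0f} minutes")  # ~65 minutes
print(f"Topaz:    {frames / topaz_fpm:.0f} minutes")     # ~11 minutes
```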
The difference between Cupscale and Topaz Video Enhance AI isn’t just a matter of speed. Topaz Video Enhance AI’s models collectively do a much better job dealing with content. I refer to a model’s ability to produce good results in a wide range of samples as flexibility. A model is flexible if you can apply it to many different kinds of video with reasonably good results and inflexible if it is limited to specific types of content.
Inflexibility is not automatically a bad thing. A number of TVEAI models are designed to deal with problems like haloing, and they don’t yield great results if applied to all content across the board. Unfortunately, many of the Cupscale-compatible models you find online aren’t very flexible.
A rather public example of inflexibility went viral earlier this year after the Oscars. After Will Smith slapped Chris Rock, a rumor made the rounds that Chris had been wearing a flesh-colored patch. He wasn’t. The rumor got started when someone fed a photo of the event into an upscaling service, which generated this:
Technically, this is an example of the AI “hallucinating” detail that isn’t there, but it’s also an example of what I mean when I say a model is inflexible. An inflexible model might work well for space combat or special effects shots, but break when asked to handle humans. Alternatively, it might work beautifully with live-action up-close shots but create errors in backgrounds, as shown in the image below.
This critique really only applies to models that are designed for general use cases. A model designed for animation might not handle live-action very well because it isn’t intended to. Inflexibility in this case would be an animation model that can only handle one specific show. Even then, that might not be a problem; I know several people working on show-specific AI models. Inflexibility is not always a bad thing, but it limits the scenarios in which a model can be used and may require the end-user to test more models to find one that works.
Topaz Video Enhance AI’s models sometimes break, but they break much less often than those of any other application I have tested. The errors TVEAI occasionally introduces can almost always be reduced or fixed completely through the use of other software, provided two things are true: 1) your video’s quality is high enough for it to upscale well, and 2) you’re willing to put some effort into fixing the problem.
As for Cupscale, the user-created AI models you can download and run are more likely to introduce or magnify the following effects, based on my testing:
- Strobing (rapid light/dark shifts across large parts of the scene)
- Texture flicker in fine details
- Significant color shifts
- Improperly rendered detail
- Improper changes in foreground/background image focus
- Inconsistent output quality from scene to scene
Cupscale also occasionally has a problem with skipping and will periodically jump forward in a video and begin upscaling a new scene. Frames 0-750 might be followed by frames 11235 – 12570. I couldn’t figure out the reason behind this behavior, but decompressing video files first and telling the app to upscale images seems to prevent the problem.
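If you want to try that workaround, a minimal sketch looks something like the code below. The file names are placeholders, and it assumes FFmpeg is installed and on your PATH.

```python
# Decompress a video to numbered PNG frames so Cupscale upscales images
# instead of the video file, and stash the audio for remuxing later.
import subprocess
from pathlib import Path

src = "episode.mkv"  # placeholder input file
Path("frames-in").mkdir(exist_ok=True)

# One PNG per frame; %06d zero-pads the numbers so frames sort correctly.
subprocess.run(["ffmpeg", "-i", src, "frames-in/%06d.png"], check=True)

# Copy the audio stream out untouched so it can be remuxed after upscaling.
subprocess.run(["ffmpeg", "-i", src, "-vn", "-c:a", "copy", "audio.mka"], check=True)
```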
After investigating some 16 Cupscale models, I’d recommend 4x-UniRestore for live action. I also had good results with the Kemono family in both Final Fantasy X and live-action footage, apart from some significant and unexpected color shifts, as shown below.
After extensive testing, I’ve found Cupscale is more likely than TVEAI to crash if you launch another GPU-accelerated application while already upscaling a video. In some cases, just launching a hardware-accelerated application can trigger a crash. Topaz Video Enhance AI used to have this problem but it’s much better behaved than it once was.
The fact that Cupscale models are more likely to break and cause a wider range of problems does not mean that no Cupscale-compatible model is capable of matching or even beating Topaz Video Enhance AI’s quality. I’ve been genuinely impressed with the 4x-UniRestore and Kemono models. The UniScale_CartoonRestore_Lite model is shown below, this time run on a video that had been preprocessed in AviSynth. Don’t worry if that sentence doesn’t make sense to you — we’ll discuss what AviSynth is a little farther on.
Unfortunately, Cupscale’s slower rendering speed and the relative inflexibility of its models means you’ll spend more time waiting for clips to render so you can check to see if the output is useful or not. Individual frames can be found in the “frames-out” subdirectory if you want to check mid-run output, but certain problems may not be visible until you see the clip in motion. Run enough video through enough models and you’ll develop a better sense of whether any given model will work for a piece of content… eventually. The same learning process happens more quickly in Topaz VEAI.
Topaz Video Enhance AI 2.6.4 is not perfect. It occasionally misunderstands aspect ratios or introduces small contrast changes. Despite these lingering growing pains, the application has matured since I last wrote about it and now supports Intel, AMD, and Nvidia GPUs, as well as both x86 and Apple Silicon Macs. The company has continued to improve its AI upscaling models and has added new frame-rate interpolation models over the past year.
Here’s how I would summarize the comparison between TVEAI and Cupscale:
Cupscale is a lot of fun if you are an enthusiast who likes testing various models or you think you want to build your own someday. I said from the beginning that I wanted my Deep Space Nine project to be based on free software, and while Cupscale isn’t fast enough to handle a job that size quite yet, you can squint and see the day coming. Cupscale is worth testing as an adjunct or additional option for upscaling content, especially if TVEAI isn’t yielding good results. The fact that the app is as flexible, powerful, and stable as it is says nothing but good things about its authors.
The clip below is a 50/50 blend between two Cupscale models. I discuss blending in more detail on the “Master Class” page, but it’s a great way to minimize errors while capturing the benefits upscaling can provide.
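For a taste of what blending involves, here’s a minimal VapourSynth sketch of a 50/50 blend between two finished upscales. The file names are placeholders, the clips must share the same resolution, format, and length, and it assumes the L-SMASH Works source plugin is installed. My actual blends were produced as part of a larger workflow, so treat this as an illustration of the concept rather than my exact method.

```python
# Average two upscaled clips frame-by-frame to split the difference
# between two models' rendering styles (and their errors).
import vapoursynth as vs

core = vs.core
a = core.lsmas.LWLibavSource("upscale_model_a.mp4")  # placeholder paths
b = core.lsmas.LWLibavSource("upscale_model_b.mp4")

# std.Merge with weight=0.5 produces an equal 50/50 blend.
blended = core.std.Merge(a, b, weight=0.5)
blended.set_output()
```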
Topaz Video AI is the application to use if you have a project of any significant size, you need somewhat predictable results, and/or if you care about performance. TVEAI also requires experimentation, but it renders much more quickly and its models are less likely to break.
Now that we’ve discussed what AI upscaling is and some of the differences between the upscalers themselves, I’d like to switch gears a bit and talk about two important applications: AviSynth and VapourSynth. These applications are not upscalers themselves, but they often play a large role in determining an upscale’s final quality.
Table of Contents / Videos
The Impact of Pre-Processing
Topaz Video Enhance AI is an application focused on one task: Upscaling video content. Unfortunately, a lot of older videos need more than just a trip through an AI upscaler to look their best. While the Dione model family offers deinterlacing and the Proteus model features user-adjustable denoising, deringing, and sharpening options, deinterlacing and other general video editing tasks are not what Topaz VEAI or Cupscale are designed to do.
AviSynth (technically AviSynth+) and VapourSynth are frameservers with extensive video editing capabilities. Both applications can deinterlace and detelecine video, convert content between the NTSC and PAL standards, shift video from one container format and/or codec to another, and offer a wide range of repair and modification functions for virtually every type of content. There are filters for antialiasing, sharpening, smoothing, denoising, degraining, line darkening, line thinning, rainbow removal, fixing chroma errors, and more. A lot more. Below, I’ve embedded a video demonstrating the same BSG clip as earlier, only this time post-filtering.
Experiment with filters and you’ll often find a combination that yields a net improvement in your final upscale. For example, both AviSynth and VapourSynth can inject grain and noise in ways that improve TVEAI’s output. Cupscale models also respond to added grain and noise, but not in the same way; I am not certain the technique is as helpful there, and the benefit is always model-dependent. Readers should be aware that grain and noise injection do not always improve detail, and some experimentation may be needed to find the right settings. QTGMC’s GrainRestore and NoiseRestore functions are a great place to start.
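For the curious, here’s roughly what that looks like as a VapourSynth script. This is a minimal sketch rather than my exact pipeline: the source path is a placeholder, TFF=True assumes top-field-first interlaced content, and the GrainRestore/NoiseRestore values are starting points to experiment with. It also assumes the L-SMASH Works source plugin and the havsfunc script collection (which provides the VapourSynth port of QTGMC) are installed.

```python
# Deinterlace with QTGMC while re-injecting some grain and noise,
# which often helps TVEAI resolve more detail downstream.
import vapoursynth as vs
import havsfunc as haf

core = vs.core
clip = core.lsmas.LWLibavSource("episode.m2v")  # placeholder path

# FPSDivisor=2 keeps the original frame rate instead of doubling it.
clip = haf.QTGMC(clip, Preset="Slower", TFF=True, FPSDivisor=2,
                 GrainRestore=0.3, NoiseRestore=0.1)
clip.set_output()
```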
Here’s a still from Battlestar Galactica. It can be difficult to see the difference between processed and unprocessed footage in rapid motion, so this mirrored comparison should make it easier. Pull the slider right and left and you’ll see the images swap as you cross the center line. Processing the video first removes rainbowing and improves overall detail.
The video below shows the difference between upscaling the original Final Fantasy X clip with no AviSynth processing versus upscaling it after AviSynth processing. The Final Fantasy X appendix page shows two different methods of processing this video in AviSynth and the different output each produces. This is the second method.
VapourSynth is a successor to AviSynth that’s been rewritten to use Python for scripting. AviSynth scripting is said to be simpler if you aren’t a programmer; VapourSynth is more flexible but requires the end-user to learn more programming syntax. There are GUI front-ends that make either application easier to use. I use StaxRip as a front-end for AviSynth and Hybrid as a front-end for VapourSynth.
Again, you can typically use AviSynth or VapourSynth to process your video. I came across Hybrid after I had a workflow established in StaxRip, and I like the application, so I’ve kept using it. AviSynth is older than VapourSynth and so has more filters, but many common AVS filters have been ported to VS. There’s also a handy encyclopedia devoted to AviSynth if you want to know more.
For simplicity, I will sometimes refer to both applications as AVS/VS, but you probably don’t need to use both unless a specific filter is only available for one of them. I refer to running AVS/VS as “pre-processing” in this context because the video is being processed in preparation for upscaling rather than ordinary viewing.
If you want access to all the models TVEAI offers, and/or you have telecined content you want to revert to a 23.976 fps progressive frame rate, you’ll need to use a third-party application. AviSynth and/or VapourSynth is typically the best way to go, but there’s definitely a learning curve associated with these programs. Handbrake is another option, and while it’s not my personal favorite, it sometimes does a reasonable job. As far as paid applications go, TMPGEnc 7 did a good job detelecining shows like Deep Space Nine and Voyager when I tested it earlier this year. AviSynth and VapourSynth are both free, as is Handbrake. TMPGEnc 7 is a paid application, but it has a 30-day free trial.
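In VapourSynth, a basic detelecine looks something like the sketch below, using the VIVTC plugin’s field matching and decimation. The path is a placeholder, order=1 assumes top-field-first content, and hybrid film/video material needs more careful handling than this.

```python
# Revert 29.97 fps telecined video to its native 23.976 fps progressive frames.
import vapoursynth as vs

core = vs.core
clip = core.lsmas.LWLibavSource("telecined_episode.vob")  # placeholder path

clip = core.vivtc.VFM(clip, order=1)  # field matching rebuilds progressive frames
clip = core.vivtc.VDecimate(clip)     # drops the duplicate frame in each 5-frame cycle
clip.set_output()
```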
It is sometimes possible to use AviSynth and/or VapourSynth to repair a non-viable video to the point that it can benefit from upscaling, but this must be evaluated on a case-by-case basis. It may take longer to repair a video than you care to spend. In some cases, the filter choices you’d make to achieve a high-quality upscale are not the same as what you’d choose if you were planning to watch the video sans upscaling. Topaz often creates more detailed output if additional noise and grain are injected into the video, as shown below:
I originally showed these frames in a Star Trek article earlier this year to show off the impact of injecting extra grain and noise. I altered the color in these images to make the effect easier to see, and I applied more grain and noise than you’d probably want to use. You don’t have to inject this much grain and noise to see a benefit, and that’s important, because not all of the changes this much injection causes are beneficial ones.
Proper footage treatment can yield dramatic differences in final image quality. Both of the frames below were upscaled using the same Proteus settings in TVEAI. The only difference is how I pre-processed the footage. The settings I recommend testing in the appendices will not produce this strong an effect.
The relationship between AviSynth, VapourSynth, and Topaz Video Enhance AI is best understood as complementary. It is often better to perform deinterlacing using the QTGMC filter in AviSynth or VapourSynth as opposed to using one of the Dione models in Topaz Video Enhance AI. Similarly, injecting grain and noise via the QTGMC filter often improves TVEAI’s final output.
Because an upscaler will enthusiastically enhance errors as well as desired detail, the only way to upscale some footage and wind up with a net improvement is to fix what’s broken first. Proper pre-processing is pivotal to the process.
Table of Contents / Videos
Is Your Video a Good Candidate for Upscaling?
Now that we’ve discussed the basic capabilities and differences of AI upscalers and their supporting applications, let’s talk about how to evaluate a video. Although the only way to know for certain how a video will look is to feed it through the upscaler, there are broad guidelines that can help you determine if your video is a good candidate. Videos that tend to upscale well are videos that:
- Were professionally edited, mastered, or captured by someone who knew what they were doing and took the time to treat the material well.
- Are either the original source video with the original noise/grain pattern or as close to that video as possible.
- Have relatively few problems to start with.
- Were shot in the relatively recent past, on relatively modern equipment.
Here’s what this means, in practical terms:
It is much easier to upscale footage of a sunny outdoor birthday party shot on a Samsung Galaxy S5 than it is to upscale footage of a sunny outdoor birthday party shot back in 1991 on a camcorder. A good outcome is fairly likely in the first case and highly unlikely in the second. The lower the quality of the source video, the less likely you are to get a satisfactory result.
The video below was shot at 720p back in July 2010 on a Nikon D300S by professional photographer and colleague David Cardinal. As you can see, it cleans up pretty well.
There is a certain minimum quality threshold that a video needs to meet before an upscaler can improve it. If the video is below that threshold, it will require additional processing (at minimum) before it can benefit from upscaling. Sufficiently low-quality videos do not benefit from current common upscaling tools, no matter how they are processed first. Upscaling may make such footage look worse.
The last few questions have nothing to do with your video quality, but they’re arguably the most important when it comes to evaluating your chances of success:
- How much time and energy are you willing to devote to this project?
- Are you willing to learn to use other applications if they improve your video’s final quality?
- Are you willing to test different methods of processing a video until you find the result you are looking for?
There are no wrong answers to these questions, but you need to consider them. I suggest taking advantage of Topaz Video Enhance AI’s free trial before buying the program. Test it on the actual content you want to upscale, if possible. The old joke about “Enhance” being able to extract infinite levels of detail from a fuzzy 32×32 pixel image is much closer to reality than it used to be, but we still don’t have anything like a push-button solution. Repairing damaged video to prep it for upscaling is often a fair bit of work.
Table of Contents / Videos
Why Is Upscaling Controversial?
Upscaling is not well-loved in certain corners of the video editing community. The reasons vary depending on the individual. Some people dislike Topaz Video Enhance AI specifically, some dislike the entire concept of AI upscaling, and some people are fine with the concept but unhappy with the way current products are marketed and/or what they can achieve.
The big-picture problem is that AI upscaler output isn’t always very good. Sometimes, it’s downright bad. It’s possible for 38 minutes of a 43-minute video to upscale beautifully, while five minutes of it look as if they’ve been scraped through an MPEG-1 decoder and left out in the rain for a week. Final upscale quality can vary considerably, even within the same professionally produced episode of television. It doesn’t help that the entire field is new and people are still figuring out what works and what doesn’t.
Here’s an example of a video that doesn’t upscale very well if you just toss the original clip through Topaz Video Enhance AI:
What’s clear is that the back and forth online has left readers confused. I’ve heard from at least a half-dozen people who weren’t sure what to think about AI upscaling because my previous articles and sample clips showcasing Deep Space Nine and Voyager argued in favor of one conclusion, while knowledgeable video editors in online forums had made very different arguments.
Instead of arguing over theory, I launched this project to explore the facts. I did not cherry-pick my sources for this story. I was asked to restore the Dick Tracy and BSG clips by their respective owners, the 320×240 Jack Russell video is one of the oldest and lowest-resolution videos I had sitting around, and I chose the Final Fantasy X video on a whim after seeing another restoration project on YouTube. I asked David Cardinal for an older video that would let me test upscaling on 720p content, but he picked the video to share.
Take the same video clip from above and run it through several AI models and different processing workflows, however, and the end result is quite different. While the degree of improvement is not enormous, the upscaler no longer causes damage and can even be said to improve the content somewhat.
The general video editing community is well aware of the myriad ways AI upscaling can go wrong, but there’s less awareness of the ways that AI upscaling can be nudged into going right, even if it wasn’t headed that way to start with.
Table of Contents / Videos
How Much Improvement Is It Possible to Get?
This is a difficult question to answer. What people want to hear is something specific, like: “Easily upscale DVDs to 1080p HD quality!” Some companies make claims like this in their marketing literature. They shouldn’t. This is a rare situation in which speaking literally can leave people with the wrong impression. When I say that improving DVD footage to the point that it looks like native 720p is difficult, I do not mean the literal shift from 720×480 to 1280×720. I mean the difficulty of raising a DVD’s perceived quality high enough that it could be mistaken for a native 720p source.
Those who wish to try and fully bridge the gap between SD and HD will find themselves on the hook for more than a quick single pass through any video-editing application, including Topaz Video Enhance AI or Cupscale. One reason I’ve included so many samples in this article is to illustrate not just how much video can be improved, but where those improvements come from and how much work it takes to get them.
As far as a rough guide is concerned, here’s what I can offer:
Imagine a quality scale with notches at the usual points – VHS, DVD, 720p, 1080p, 4K. Now, add four notches between each standard. VHS —*—*—*—*—DVD. These points represent dimensionless intermediate improvements in image quality that are large enough for you to notice, but not large enough to declare that your video looks like it was shot in a later standard. Four is an arbitrary number I chose to make my example math work. An upscaled DVD that improved by three points would look like an excellent DVD, but it wouldn’t quite fool a knowledgeable viewer into thinking they were watching 720p or 1080p-native source. This four-point scale works best for VHS -> DVD or DVD -> 720p. There would probably only be 2-3 points of “quality” between 720p and 1080p.
If you ask Topaz Video Enhance AI to upscale the equivalent of a high-quality DVD source file with no pre-processing, you might reasonably expect a gain of 1-2 points. If your source is of middling quality, you might get 0.5 – 1. Low-quality source, and you’ll get anywhere from -3 to +1 points at most. I went negative here to acknowledge that upscaling poor-quality footage can reduce its quality.
The point of pre-processing, model blending, and the other workflows I discuss in the “Master Class” section is to increase the quality of your final upscale. Put your footage through AviSynth or VapourSynth first, and you might step up 1.25 – 2.5 points instead of 1 – 2. Are you willing to blend multiple upscaling models? Add another 0.5 – 1 points, depending on the condition of your source and the quality of your pre-processing. Are you willing to try different pre-processing methods and filters in other utilities in addition to AVS/VS, and to experiment with blending all of the output together, possibly with more than one trip through more than one upscaler? If you do, and your source footage is of sufficiently high quality, you may be able to gain enough perceived visual quality to pass as a native source of the next-highest standard. Even then, your video probably won’t maintain the illusion perfectly in every scene, and overall quality will still dip in wide shots with a lot of people in them. Your chances of achieving this level of improvement are better if you know how to color grade.
An example of my own work that I would argue meets the “could pass for native 720p” threshold is below. While I did not cherry-pick this episode or this scene, I would say that “Trials and Tribble-ations” is one of the better-looking episodes of Deep Space Nine to start with.
I suggest focusing less on whether or not your upscale would pass as native HD and more on whether or not you like the output. Some TV shows never received high-quality releases and it is very difficult to compensate for a low-quality source. Raising the perceived visual quality of an upscale gets easier the higher your base resolution and overall quality. 720p is easier to convert to 1080p than DVD to 720p. DVD to ~720p is easier than boosting VHS to DVD-equivalent quality.
There is no consumer AI upscaling application that can take DVD footage and transform it into 4K-equivalent. 1080p-equivalent might be possible with literally ideal footage and a master video editor + color grader who knew every aspect of every application in their workflow. Anyone who claims they can upscale Deep Space Nine, Voyager, or any other late 1990s DVD-based show to 4K is misrepresenting the scope of what’s achievable by pretending a literal 4K resolution will deliver 4K-equivalent quality.
Table of Contents / Videos
Conclusion: The Current State of Consumer AI Upscaling Software
Right now, the only paid application worth considering is Topaz Video Enhance AI. None of the other paid apps I tested was fast enough, and they all produced broken output. TVEAI is far more capable, but readers should be aware that older, lower-quality footage is much harder to improve. A lot of professionally produced DVD footage from the mid-1990s is still marginal and requires extra work to bring it to its best quality.
Cupscale’s speed is good enough for small clips but a bit painful for professional use. At its best, its quality can rival or even surpass Topaz Video Enhance AI, but finding the right models is a slower affair and the app does not play nice with other GPU applications. If you do not have the money to spend on Topaz but you want to experiment with AI, I highly recommend Cupscale. I would’ve been thrilled to see an app this good two years ago, and I expect it will continue to improve over time.
Neither TVEAI nor Cupscale is magic. They don’t – and can’t – replace the Mark I Human Eyeball or the need for some good old-fashioned experimentation. Neither application can ingest a DVD and magically spit out a native 4K-equivalent video. That doesn’t mean upscaling can’t work wonders – it just means reality continues to demand a troublesome amount of actual work and expertise that science-fiction TV shows often manage to skip.
Personally? I find the level of achievable improvement astonishing. The DVD release of SG-1 can be repaired and improved to the point that it rivals or surpasses the Blu-ray. Under ideal circumstances, shows like Deep Space Nine and Voyager can be enormously improved over their current states. Proper handling allows more marginal videos to be recovered. Upscaling is not the right tool for every job, but it shares that distinction with every video filter and application ever created.
If you’ve looked through the results I’ve shared and don’t find them particularly interesting, check back in two years and see how things have changed. I’ve watched upscalers become far more capable in just the past two years and I don’t see any reason to think improvements are going to stall. GPUs from AMD and Nvidia continue to improve at a rapid clip. AMD and Intel will continue to improve on-chip AI performance through some combination of SIMD instruction support, integrated GPU capabilities, and specialized, on-chip accelerators. If your favorite TV show is languishing on a mediocre DVD transfer, that’s a static workload that isn’t going anywhere while software and hardware continue improving at a rapid pace. In 5 years we ought to have real-time application-level AI acceleration that delivers better quality than Cupscale and TVEAI do today. By then, the performance of non-real-time applications will have leaped ahead as well.
When I started this project in 2020, top-end performance was ~0.44s per frame. 2.5 years later, top-end performance is more like 0.09s/frame for the same DVD source material. Real-time at 23.976 fps requires ~0.04s/frame with a little room for overhead. It’s not crazy to think GPUs might be able to deliver this kind of uplift in the not-so-distant future, given how quickly performance has improved thus far.
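The arithmetic behind that optimism is simple enough to check, using the per-frame figures from my own testing above:

```python
# How far upscaling performance has come, and how far real-time remains.
per_frame_2020 = 0.44        # seconds/frame, top-end performance in 2020
per_frame_now = 0.09         # seconds/frame, 2.5 years later
real_time = 1 / 23.976       # ~0.0417 seconds/frame for real-time playback

print(f"Speedup so far: {per_frame_2020 / per_frame_now:.1f}x")  # ~4.9x
print(f"Still needed:   {per_frame_now / real_time:.1f}x")       # ~2.2x more
```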
If you would like to see more samples from the videos above, you can use the links below to access the video-specific pages. There are additional samples and footage, plus more information on how each video was processed. If you’d like to read more about advanced AI processing and how to squeeze every last detail out of your video, click here.
- Chance v. Stream (320×240, Canon PowerShot circa 2006)
- Final Fantasy X: The Dance (582×416, Original PS2 disc)
- Battlestar Galactica: The Living Legend Project (720×400 as provided)
- Mamma Grizzly (720p base, original source)
- The Making of Dick Tracy (Encoded at 582×320, VHS-quality, not original source)
- Beyond the Blu-ray: Stargate SG-1 Upscaled DVD v. Official Blu-ray Release