The main working ZIP is uploaded to OneDrive (IW).
🔥 AnimeVideo-v3 model (small model for anime videos). Please see [anime video models] and [comparisons]
🔥 RealESRGAN_x4plus_anime_6B for anime images (anime illustration model). Please see [anime_model]
[Paper] [YouTube Video] [Bilibili walkthrough] [Poster] [PPT slides]
Xintao Wang, Liangbin Xie, Chao Dong, Ying Shan
Tencent ARC Lab; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
Super Resolution Using Deep Learning
Goal: upscale thermal images with the official Real-ESRGAN repository and a RealESRGAN_x{scale}plus model of your choice, producing high-resolution results (e.g., 2560×2048 pixels, or whatever resolution you need) while ensuring compatibility and avoiding common errors.
Requirements
- Python 3.10+
- PyTorch (CPU or GPU)
- CUDA 11+ (for GPU inference)
- Real-ESRGAN dependencies
Steps for downloading and installation:
pip install realesrgan
git clone https://github.com/xinntao/Real-ESRGAN.git
cd Real-ESRGAN
pip install -r requirements.txt
pip install basicsr facexlib gfpgan -U
pip install -e . --user
python inference_realesrgan.py -n RealESRGAN_x2plus (the first run downloads the required .pth weights for the model you pick)
For a CUDA GPU, install a matching PyTorch build first, e.g.:
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121 (for CUDA 12.1)
Then run the same inference command:
python inference_realesrgan.py -n RealESRGAN_x2plus
To check your CUDA version:
nvcc --version
If the reported release is, for example, 11.5, the matching install command is:
pip install torch==1.11.0+cu115 torchvision==0.12.0+cu115 --extra-index-url https://download.pytorch.org/whl/cu115
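The mapping from the `nvcc --version` output to the pip wheel suffix can be sketched in Python. The `cuXXX` tag format follows PyTorch's wheel naming; the regex and function name here are illustrative, not part of any official tool:

```python
import re

def cuda_wheel_tag(nvcc_output: str) -> str:
    """Extract the CUDA release (e.g. 11.5) from `nvcc --version`
    output and turn it into a PyTorch wheel tag (e.g. cu115)."""
    match = re.search(r"release (\d+)\.(\d+)", nvcc_output)
    if match is None:
        raise ValueError("could not find a CUDA release in the nvcc output")
    major, minor = match.groups()
    return f"cu{major}{minor}"

sample = "Cuda compilation tools, release 11.5, V11.5.119"
print(cuda_wheel_tag(sample))  # cu115
```

With CUDA 12.1 the same helper yields `cu121`, matching the index URL used in the install step above.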
To verify the installation:
cd ~/Real-ESRGAN
python inference_realesrgan.py -n RealESRGAN_x2plus -i inputs/your_image.jpg (replace with the path to your image)
Output:
Testing 0 pic......
The output is saved inside the results folder of Real-ESRGAN.
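By default the script appends an `_out` suffix and keeps the input extension, so the output location can be predicted with a small stdlib helper (a sketch; the function name is my own, and it assumes the script's default `--suffix out` and `--ext auto` settings):

```python
from pathlib import Path

def expected_output_path(input_path: str, results_dir: str = "results",
                         suffix: str = "out") -> Path:
    """Predict where inference_realesrgan.py saves its result:
    <results_dir>/<stem>_<suffix><same extension as the input>."""
    src = Path(input_path)
    return Path(results_dir) / f"{src.stem}_{suffix}{src.suffix}"

print(expected_output_path("inputs/your_image.jpg"))  # results/your_image_out.jpg
```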
Batch Script with Official Real-ESRGAN
Build your batch-processing script with RRDBNet (the actual network) and the same internal logic used by inference_realesrgan.py. The script will:
- Load the pretrained RRDBNet model (x4)
- Use the RealESRGANer inference wrapper (as in inference_realesrgan.py)
- Load and upscale each input image (640×512 in this case; any resolution works)
- Resize to 1280×720 (in this case; you can go up to 2560×2048)
- Save the result to another folder
**Run batch_upscaling_resize.py for more detail and more configuration options for your image enhancer (tune it to your needs).**
**Run upscale_crop_resize.py for the default settings of the image enhancer with cropping (no personal modifications, just defaults).**
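A minimal sketch of the batch pipeline described above. The RRDBNet/RealESRGANer wiring mirrors the official inference_realesrgan.py; the heavy imports are deferred into `build_upsampler()`/`main()` so the pure path logic stands on its own, and all folder names, sizes, and the weights path are placeholders to adjust:

```python
from pathlib import Path

IN_DIR, OUT_DIR = Path("inputs"), Path("upscaled")   # placeholder folders
TARGET_SIZE = (1280, 720)  # (width, height); raise up to e.g. 2560x2048

def list_images(folder: Path) -> list[Path]:
    """Collect the images to process, in a stable order."""
    exts = {".jpg", ".jpeg", ".png", ".bmp"}
    return sorted(p for p in folder.iterdir() if p.suffix.lower() in exts)

def build_upsampler(weights: str = "weights/RealESRGAN_x4plus.pth"):
    """Assemble the x4 RRDBNet + RealESRGANer pair, as in
    inference_realesrgan.py. Imports are deferred so the rest of the
    script works even where the libraries are not installed."""
    from basicsr.archs.rrdbnet_arch import RRDBNet
    from realesrgan import RealESRGANer
    model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
                    num_block=23, num_grow_ch=32, scale=4)
    return RealESRGANer(scale=4, model_path=weights, model=model,
                        tile=0, half=False)

def main():
    import cv2
    OUT_DIR.mkdir(exist_ok=True)
    upsampler = build_upsampler()
    for src in list_images(IN_DIR):
        img = cv2.imread(str(src), cv2.IMREAD_COLOR)
        output, _ = upsampler.enhance(img, outscale=4)  # 640x512 -> 2560x2048
        output = cv2.resize(output, TARGET_SIZE, interpolation=cv2.INTER_AREA)
        cv2.imwrite(str(OUT_DIR / src.name), output)

# Call main() to run the batch job once the folders and weights are in place.
```

Separating the model setup from the per-image loop also makes it easy to swap in a different model (e.g. RealESRGAN_x2plus with `scale=2`) without touching the batch logic.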
There are usually three ways to run inference with Real-ESRGAN.
- You can try it on our website: ARC Demo (currently only supports RealESRGAN_x4plus_anime_6B)
- Colab Demo for Real-ESRGAN | Colab Demo for Real-ESRGAN (anime videos).
- You can download Windows / Linux / macOS executable files for Intel/AMD/Nvidia GPUs.
This executable file is portable and includes all the binaries and models required. No CUDA or PyTorch environment is needed.
You can simply run the following command (the Windows example; more information is in the README.md of each executable file):
./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n model_name
We have provided the following models:
- realesrgan-x4plus (default)
- realesrnet-x4plus
- realesrgan-x4plus-anime (optimized for anime images, small model size)
- realesr-animevideov3 (animation video)
You can use the -n argument for other models, for example: ./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n realesrnet-x4plus
- Please refer to Real-ESRGAN-ncnn-vulkan for more details.
- Note that it does not support all the functions (such as outscale) of the Python script inference_realesrgan.py.
Usage: realesrgan-ncnn-vulkan.exe -i infile -o outfile [options]...
-h show this help
-i input-path input image path (jpg/png/webp) or directory
-o output-path output image path (jpg/png/webp) or directory
-s scale upscale ratio (can be 2, 3, 4. default=4)
-t tile-size tile size (>=32/0=auto, default=0) can be 0,0,0 for multi-gpu
-m model-path folder path to the pre-trained models. default=models
-n model-name model name (default=realesr-animevideov3, can be realesr-animevideov3 | realesrgan-x4plus | realesrgan-x4plus-anime | realesrnet-x4plus)
-g gpu-id gpu device to use (default=auto) can be 0,1,2 for multi-gpu
-j load:proc:save thread count for load/proc/save (default=1:2:2) can be 1:2,2,2:2 for multi-gpu
-x enable tta mode
-f format output image format (jpg/png/webp, default=ext/png)
-v verbose output
Note that it may introduce block inconsistency (and also generate slightly different results from the PyTorch implementation), because this executable first crops the input image into several tiles, processes them separately, and finally stitches them together.
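The tiling behaviour can be quantified with a rough estimate; this ignores the overlap/padding the executable adds around each tile, so treat it as an approximation, not the exact count:

```python
from math import ceil

def tile_count(width: int, height: int, tile: int) -> int:
    """Rough number of tiles processed for a given -t tile size
    (padding and overlap between tiles are ignored here)."""
    return ceil(width / tile) * ceil(height / tile)

print(tile_count(2560, 2048, 512))  # 5 * 4 = 20 tiles
```

Larger -t values mean fewer tiles (fewer seams, more memory per tile); -t 0 lets the program pick automatically.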
- You can use the X4 model for an arbitrary output size with the --outscale argument. The program performs a cheap resize operation after the Real-ESRGAN output.
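The effect of --outscale on the final resolution is simple arithmetic: the x4 model quadruples the input, then the script resizes so the end result is the input size times --outscale. A worked example (the rounding here is illustrative; the script's exact rounding may differ by a pixel):

```python
def final_size(width: int, height: int, outscale: float) -> tuple[int, int]:
    """Final output size: input dimensions times --outscale."""
    return round(width * outscale), round(height * outscale)

print(final_size(640, 512, 4))    # (2560, 2048): native x4, no extra resize
print(final_size(640, 512, 3.5))  # (2240, 1792): x4 output shrunk afterwards
```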
Usage: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile -o outfile [options]...
A common command: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile --outscale 3.5 --face_enhance
-h show this help
-i --input Input image or folder. Default: inputs
-o --output Output folder. Default: results
-n --model_name Model name. Default: RealESRGAN_x4plus
-s, --outscale The final upsampling scale of the image. Default: 4
--suffix Suffix of the restored image. Default: out
-t, --tile Tile size, 0 for no tile during testing. Default: 0
--face_enhance Whether to use GFPGAN to enhance face. Default: False
--fp32 Use fp32 precision during inference. Default: fp16 (half precision).
--ext Image extension. Options: auto | jpg | png; auto means using the same extension as the input. Default: auto
Download pre-trained models: RealESRGAN_x4plus.pth
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P weights
Inference!
python inference_realesrgan.py -n RealESRGAN_x4plus -i inputs --face_enhance
Results are in the results folder
Pre-trained models: RealESRGAN_x4plus_anime_6B
More details and comparisons with waifu2x are in anime_model.md
# download model
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth -P weights
# inference
python inference_realesrgan.py -n RealESRGAN_x4plus_anime_6B -i inputs
Results are in the results folder
@InProceedings{wang2021realesrgan,
    author    = {Xintao Wang and Liangbin Xie and Chao Dong and Ying Shan},
    title     = {Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data},
    booktitle = {International Conference on Computer Vision Workshops (ICCVW)},
    year      = {2021}
}
Thanks to all the contributors.
- AK391: Integrated Real-ESRGAN into Hugging Face Spaces with Gradio. See Gradio Web Demo.
- Asiimoviet: Translated the README.md into Chinese.
- 2ji3150: Thanks for the detailed and valuable feedback/suggestions.
- Jared-02: Translated the Training.md into Chinese.


