Training-free Stylized Text-to-Image Generation with Fast Inference
Xin Ma, Yaohui Wang*, Xinyuan Chen, Tien-Tsin Wong, Cunjian Chen*
(*Corresponding authors)
This repo contains the sampling code for OmniPainter. Please visit our project page for more results.
- 🔥 May 17, 2025 💥 The code is released.
Download and set up the repo:

```shell
git clone https://github.com/maxin-cn/OmniPainter
cd OmniPainter
conda env create -f environment.yml
conda activate omnipainter
```

You can sample high-quality images that match both the given prompt and the style reference image within just 4 to 6 timesteps, without requiring any inversion. The script provides arguments for adjusting the number of sampling steps, changing the classifier-free guidance scale, and more:
```shell
bash run.sh
```

The required model weights are downloaded automatically, and the following results can be obtained:
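The few-step behaviour follows the latent-consistency-model line of work that OmniPainter builds on: the sampler visits only a handful of timesteps instead of the full diffusion trajectory, and the classifier-free guidance scale blends unconditional and text-conditional predictions. A minimal, illustrative sketch of these two knobs (the function names and the uniform spacing rule are our assumptions, not OmniPainter's exact schedule):

```python
import numpy as np

def few_step_timesteps(num_train_timesteps: int = 1000,
                       num_inference_steps: int = 4) -> list[int]:
    """Pick a few evenly spaced timesteps from the training schedule,
    ordered from most to least noisy, as LCM-style few-step samplers do.
    (Illustrative spacing rule, not OmniPainter's exact schedule.)"""
    step = num_train_timesteps // num_inference_steps
    return [num_train_timesteps - 1 - i * step for i in range(num_inference_steps)]

def cfg_combine(eps_uncond: np.ndarray, eps_cond: np.ndarray,
                guidance_scale: float) -> np.ndarray:
    """Classifier-free guidance: move the noise prediction from the
    unconditional estimate toward the text-conditional one."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

print(few_step_timesteps(1000, 4))  # [999, 749, 499, 249]
```

With `guidance_scale = 1.0` the combination reduces to the conditional prediction; larger scales push the sample harder toward the text prompt at the cost of diversity.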
*(Result gallery: each row pairs a style reference image with images generated for the prompts "Bird", "Forest", and "Lion".)*
Xin Ma: xin.ma1@monash.edu, Yaohui Wang: wangyaohui@pjlab.org.cn
If you find this work useful for your research, please consider citing it.
```bibtex
@article{ma2025omnipainter,
  title={Training-free Stylized Text-to-Image Generation with Fast Inference},
  author={Ma, Xin and Wang, Yaohui and Chen, Xinyuan and Wong, Tien-Tsin and Chen, Cunjian},
  journal={arXiv preprint arXiv:2505.19063},
  year={2025}
}
```

OmniPainter has been greatly inspired by the following amazing works and teams: Prompt-to-Prompt, latent-consistency-model, ZePo, Z∗, and MasaCtrl. We thank all the contributors for open-sourcing.
The code is licensed under the terms in LICENSE.