This repository contains the implementation of the following paper.
ProEdit: Inversion-based Editing From Prompts Done Right
Zhi Ouyang∗, Dian Zheng∗, Xiao-Ming Wu, Jian-Jian Jiang, Kun-Yu Lin, Jingke Meng+, Wei-Shi Zheng+
- 🔥 Updates
- 📣 Overview
- 📋 ToDo List
- 📖 Pipeline
- ✨ Text-driven Image / Video Editing
- 🎓 Editing by Instruction
- ✒️ Citation
- ♥️ Acknowledgement
- [2025.12.28] The paper ProEdit is released on arXiv. 🚀
Overview of ProEdit. We propose a highly accurate, plug-and-play editing method for flow inversion that addresses excessive injection of source-image information, which prevents proper modification of attributes such as pose, number, and color. Our method demonstrates impressive performance on both image and video editing tasks.
- Release the code for image editing (in two weeks)
- Release the code for video editing
Pipeline of our ProEdit. The mask extraction module identifies the edited region based on source and target prompts during the first inversion step. After obtaining the inverted noise, we apply Latents-Shift to perturb the initial distribution in the edited region, reducing source image information. In selected sampling steps, we fuse source and target attention features in the edited region while directly injecting source features in non-edited regions to achieve accurate attribute editing and background preservation simultaneously.
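As a rough illustration (not the official code), the minimal PyTorch sketch below shows what the two edited-region operations might look like. The function names, the Gaussian blending in `latents_shift`, the linear feature mixing in `fuse_features`, and all parameter values are our assumptions; the actual method applies the fusion only at selected sampling steps inside the model's attention layers.

```python
import torch

def latents_shift(inverted_noise, edit_mask, shift_strength=0.5, generator=None):
    # Blend the inverted noise toward fresh Gaussian noise, but only inside
    # the edited region, so less source-image information survives there.
    fresh = torch.randn(inverted_noise.shape, generator=generator,
                        dtype=inverted_noise.dtype)
    shifted = torch.lerp(inverted_noise, fresh, shift_strength)
    return torch.where(edit_mask.bool(), shifted, inverted_noise)

def fuse_features(src_feat, tgt_feat, edit_mask, fuse_weight=0.5):
    # Edited region: mix source and target attention features.
    # Non-edited region: inject the source features directly,
    # which preserves the background.
    fused = fuse_weight * src_feat + (1.0 - fuse_weight) * tgt_feat
    return torch.where(edit_mask.bool(), fused, src_feat)

# Toy usage with latents of shape (batch, channels, height, width).
g = torch.Generator().manual_seed(0)
latents = torch.randn(1, 4, 8, 8, generator=g)
mask = torch.zeros(1, 1, 8, 8)
mask[..., 2:6, 2:6] = 1.0  # pretend this came from the mask extraction module
shifted = latents_shift(latents, mask, shift_strength=0.5, generator=g)
assert torch.equal(shifted[..., :2, :], latents[..., :2, :])  # background untouched
```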
More results can be found on our project page.
With the assistance of a large language model, our method can directly perform edits guided by editing instructions.
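One plausible way this step could work (a hypothetical sketch, not the repository's implementation) is to have the LLM rewrite the instruction into the source/target prompt pair that inversion-based editing expects; here `query_llm` stands in for any chat-completion API and the prompt wording is our own.

```python
def instruction_to_prompts(instruction, query_llm):
    # Ask the LLM to rewrite one editing instruction into a prompt pair:
    # a source prompt describing the original image and a target prompt
    # describing the desired edited image.
    reply = query_llm(
        "Rewrite the editing instruction below as two lines: line 1 is the "
        "source prompt describing the original image, line 2 is the target "
        "prompt describing the edited image.\n"
        f"Instruction: {instruction}"
    )
    source_prompt, target_prompt = reply.strip().splitlines()[:2]
    return source_prompt, target_prompt
```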
If you find our repo useful for your research, please consider citing our paper:
@misc{ouyang2025proedit,
      title={ProEdit: Inversion-based Editing From Prompts Done Right},
      author={Zhi Ouyang and Dian Zheng and Xiao-Ming Wu and Jian-Jian Jiang and Kun-Yu Lin and Jingke Meng and Wei-Shi Zheng},
      year={2025},
      eprint={2512.22118},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2512.22118}
}

We sincerely thank FireFlow, RF-Solver, UniEdit-Flow, and FLUX for their awesome work! We also thank Pnp-Inversion for providing a comprehensive baseline survey and implementations, as well as their great benchmark.

