```bash
pip install -r requirements.txt
```

The environment is very similar to Video-P2P's.
We use a pre-trained Stable Diffusion model; you can download it here.
Since our code is built on the Video-P2P codebase, you can refer to their GitHub repository if needed.
To download the pre-trained model, please refer to diffusers, then replace `pretrained_model_path` with the path to your Stable Diffusion checkpoint.
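If the path is set correctly, the checkpoint folder should follow the standard diffusers layout. The sketch below is a quick sanity check (the folder path is a placeholder, not part of this repo) that the expected component subfolders are present:

```python
from pathlib import Path

# Placeholder path: point this at wherever you saved the checkpoint.
pretrained_model_path = Path("./checkpoints/stable-diffusion")

# A diffusers-format Stable Diffusion checkpoint stores each component
# in its own subfolder next to a model_index.json manifest.
expected = ["model_index.json", "unet", "vae", "text_encoder", "tokenizer", "scheduler"]
missing = [name for name in expected if not (pretrained_model_path / name).exists()]
print("missing components:", missing)
```

One way to produce such a folder is `StableDiffusionPipeline.from_pretrained(...)` followed by `save_pretrained(path)` in diffusers.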
```bash
# Stage 1: tuning for model initialization.
# You can reduce the number of tuning epochs to speed this up.
python run_tuning.py --config="configs/cloud-1-tune.yaml"

# Stage 2: attention control.
python run_attention_flow.py --config="configs/cloud-1-p2p.yaml"
```

Find your results in `Video-P2P/outputs/xxx/results`.