# FreePCA: Integrating Consistency Information across Long-short Frames in Training-free Long Video Generation via Principal Component Analysis (arXiv link)
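As a rough illustration of the idea named in the title, the sketch below uses PCA to share a principal subspace between "long" (full-sequence) and "short" (local-window) frame features. All names, shapes, and the fusion rule are assumptions for illustration only, not the authors' implementation:

```python
import numpy as np

# Illustrative sketch only (shapes and the fusion rule are assumptions,
# not the FreePCA implementation): transfer the principal subspace of
# long-range global features onto short local-window features, so the
# windows share consistent low-frequency appearance components.

rng = np.random.default_rng(0)
F, D, K = 16, 64, 8                          # frames, feature dim, kept components

long_feats = rng.standard_normal((F, D))     # features from the full sequence
short_feats = rng.standard_normal((F, D))    # features from one local window

# PCA basis of the long-range features via SVD on centered data.
mean = long_feats.mean(axis=0, keepdims=True)
_, _, Vt = np.linalg.svd(long_feats - mean, full_matrices=False)
basis = Vt[:K]                               # (K, D) top principal directions

def project(X):
    """Component of X lying in the shared principal subspace."""
    return (X - mean) @ basis.T @ basis + mean

# Keep the long sequence's principal component, plus the short window's
# residual (detail) outside that subspace.
fused = project(long_feats) + (short_feats - project(short_feats))
print(fused.shape)  # (16, 64)
```

The point of the decomposition is that every window fused this way agrees with the long sequence inside the shared principal subspace, while retaining its own fine detail in the orthogonal complement.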
## Setup (based on VideoCrafter2)

1. Create and activate the conda environment, then install dependencies:

```shell
conda create -n freepca python=3.8.5
conda activate freepca
pip install -r requirements.txt
```

2. Download pretrained T2V models via Hugging Face, and put the `model.ckpt` in `checkpoints/base_512_v2/model.ckpt`.
| T2V-Models | Resolution | Checkpoints |
|---|---|---|
| VideoCrafter2 | 320x512 | Hugging Face |
| VideoCrafter1 | 576x1024 | Hugging Face |
| VideoCrafter1 | 320x512 | Hugging Face |
## Inference

```shell
sh scripts/run_text2video.sh
```

## Citation

```bibtex
@inproceedings{tan2025freepca,
  title={FreePCA: Integrating Consistency Information across Long-short Frames in Training-free Long Video Generation via Principal Component Analysis},
  author={Tan, Jiangtong and Yu, Hu and Huang, Jie and Xiao, Jie and Zhao, Feng},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={27979--27988},
  year={2025}
}
```
Our codebase builds on VideoCrafter2. Thanks to the authors for sharing their awesome codebase!
