A B.Tech final-year project by Anz Jo Xavier, Nidheesh S, and Yedhu Krishnan PG.
- Converts a raw video into an animated video
- The input video is first sliced into individual frames
- Uses Stable Diffusion to convert each frame into an animated style
- The animated frames are then joined back together to produce the animated video
- Uses ControlNet with Stable Diffusion to track movement between frames
- Trained a CNN model to predict Stable Diffusion parameters based on the quality of the input video
- Built a FastAPI server for easy usage
**Tip:** Checkpoint used in Stable Diffusion: `aniverse_V13.safetensors [9fe4fec28d]`
**Note:** Using the server requires the Stable Diffusion Automatic1111 web UI to be preinstalled on your PC.
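With Automatic1111 running with its API enabled (the `--api` flag), each frame can be restyled over HTTP. A sketch of the request body: the endpoint and top-level field names follow the Automatic1111 web API and the sd-webui-controlnet extension, while the prompt, step count, and ControlNet module/model choices are illustrative assumptions:

```python
import base64

# Default Automatic1111 address when started with --api.
A1111_URL = "http://127.0.0.1:7860"

def img2img_payload(frame_png: bytes, prompt: str,
                    denoising_strength: float = 0.5) -> dict:
    """Build a request body for POST /sdapi/v1/img2img restyling one frame."""
    b64 = base64.b64encode(frame_png).decode()
    return {
        "init_images": [b64],                      # the frame to restyle
        "prompt": prompt,
        "denoising_strength": denoising_strength,  # e.g. predicted by the CNN
        "steps": 20,
        "override_settings": {
            # Pin the checkpoint mentioned in the tip above.
            "sd_model_checkpoint": "aniverse_V13.safetensors [9fe4fec28d]",
        },
        # ControlNet extension payload; module/model here are assumed examples.
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "input_image": b64,
                    "module": "openpose",
                    "model": "control_v11p_sd15_openpose",
                }]
            }
        },
    }
```

POST this dict as JSON to `{A1111_URL}/sdapi/v1/img2img`; the response's `images` list holds the base64-encoded animated frame.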