- HeyGen
- HeyGen tutorial here. My guess is that most services will have a similar workflow.
- Synthesia
- Synthesia tutorial
- invideo AI (waitlist).
- Colossyan
- Veed
- Vidyard
- Creatify
- Bigvu
- Virbo
- AI Studios
- ComfyUI implementation
- ComfyUI implementation on GitHub
- ComfyUI workflow
- ComfyUI lipsync demo
- ComfyUI Latentsync on GitHub
- Latentsync node
- voice clone in ComfyUI
- test sentences from Harvard
- CosyVoice, another cloner
- DeepFuze
- DeepFuze video instructions
- Descript
Andrew video src here
- Andrew Kastner's Zoo-s1-full.mp4 = audio only
- Week 02 Demonstration.mp4 = audio only
- Week 05 Demonstration.mp4 = audio only
- P1210614_Trim.mov = audio and video. Andrew sent me this a month or so ago when we first discussed the project. Of the four clips I have, it's the only one with video. BUT it's static: just a talking head with no gestures, very stiff and stilted. The other clips have great audio of Andrew speaking more expressively, but no video of him to match.
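Since the clips above are a mix of audio-only and audio+video files, it may be worth verifying each one programmatically before building a workflow around it. A minimal sketch, assuming `ffprobe` (part of FFmpeg) is installed and on the PATH; the filenames are the ones listed above:

```python
import subprocess

def classify(codec_types):
    """Label a clip based on the stream types ffprobe reports."""
    has_video = "video" in codec_types
    has_audio = "audio" in codec_types
    if has_video and has_audio:
        return "audio and video"
    if has_audio:
        return "audio only"
    return "no usable streams"

def probe(path):
    """Return the codec_type of every stream in the file, via ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "error",
         "-show_entries", "stream=codec_type",
         "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines() if line]

# Usage (requires the clips locally):
#   for clip in ["Zoo-s1-full.mp4", "P1210614_Trim.mov"]:
#       print(clip, "->", classify(probe(clip)))
```

This would confirm at a glance which files can feed a lipsync workflow directly and which will need video generated or matched to the audio.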
We'll be working from sub-optimal source material. Typically a clone is made from a subject in a studio situation: well-lit, good audio, possibly green screen. We might be able to doctor the one video file I have of Andrew talking using Premiere to drop out the background, enhance the image and audio quality, and add pauses between sentences.
- Background removal via Adobe Express. This may be a good preliminary approach.
- General Premiere background removal
- Firefly extend feature. Maybe useful for adding pauses between sentences.