Hello,
First of all, thank you for sharing this impressive project and demo video.
We had a couple of questions after watching the real-time demo:
Was the model used in the demo video fine-tuned for that specific environment, or is it the same checkpoint shared on Hugging Face?
We're curious whether any additional environment-specific adaptation was performed.
We are also planning to deploy an end-to-end driving model in our real-world setup.
Are there any key considerations or potential challenges we should be aware of when adapting DiffusionDrive for real-time use in a custom environment (e.g., latency, sensor synchronization, or map dependencies)?
Any guidance or suggestions would be greatly appreciated.
Thanks again for your excellent work!