I was really impressed after reading the NeoVerse paper and watching the demo videos shared on the Hugging Face papers page. Learning a 4D world model from in-the-wild monocular videos is a compelling idea, and the results look excellent.
I wanted to ask whether there are any plans to release the code and pretrained models. I've been checking the repository regularly and would love to experiment with the method once it's available.
Great work — looking forward to future updates!