Hi @wenbowen123 🤗
Niels here from the open-source team at Hugging Face. I discovered your work on arXiv and was wondering whether you would like to submit it to hf.co/papers to improve its discoverability. If you are one of the authors, you can submit it at https://huggingface.co/papers/submit.
The paper page lets people discuss your paper and find its artifacts (your models, datasets, or a demo, for instance). You can also claim the paper as yours, which will show up on your public profile at HF, and add GitHub and project page URLs.
Your paper introduces the Fast-FoundationStereo models (for depth estimation) and a curated dataset of 1.4M in-the-wild stereo pairs. I saw that your GitHub repository says "Code coming soon, please stay tuned", and your paper abstract mentions that "Our code, models and pseudo-labels will be released upon acceptance."
It'd be great to make these Fast-FoundationStereo checkpoints and the 1.4M in-the-wild stereo pairs dataset available on the 🤗 hub once they are released, to improve their discoverability/visibility.
We can add tags so that people find them when filtering https://huggingface.co/models and https://huggingface.co/datasets.
Uploading models
For your Fast-FoundationStereo models (which appear to be for depth estimation), see here for a guide: https://huggingface.co/docs/hub/models-uploading.
In this case, we could leverage the PyTorchModelHubMixin class, which adds from_pretrained and push_to_hub to any custom nn.Module. Alternatively, one can leverage the hf_hub_download one-liner to download a checkpoint from the Hub.
We encourage researchers to push each model checkpoint to a separate model repository, so that things like download stats also work. We can then also link the checkpoints to the paper page.
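To make the mixin idea concrete, here is a minimal sketch of how it could look. The class name, layer, and repo id below are placeholders invented for illustration, not your actual architecture; only the PyTorchModelHubMixin / hf_hub_download APIs themselves are from huggingface_hub.

```python
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Placeholder model: a single conv layer standing in for the real network.
class FastFoundationStereo(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_dim: int = 8):
        super().__init__()
        self.encoder = nn.Conv2d(3, hidden_dim, kernel_size=3, padding=1)

    def forward(self, left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
        # Toy "cost" between the two views; the real model would do much more.
        return self.encoder(left) - self.encoder(right)

model = FastFoundationStereo(hidden_dim=8)
out = model(torch.randn(1, 3, 16, 16), torch.randn(1, 3, 16, 16))

# After training, each checkpoint could be pushed to its own repo
# (repo id is a placeholder; requires being logged in):
# model.push_to_hub("your-hf-org/fast-foundationstereo-small")
# ...and reloaded by anyone with:
# model = FastFoundationStereo.from_pretrained("your-hf-org/fast-foundationstereo-small")

# Alternatively, download a raw checkpoint file with the one-liner:
# from huggingface_hub import hf_hub_download
# ckpt_path = hf_hub_download(repo_id="your-hf-org/fast-foundationstereo-small",
#                             filename="model.safetensors")
```

The mixin approach keeps download stats and from_pretrained working out of the box; the hf_hub_download route is handy if you prefer your existing checkpoint-loading code untouched.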
Uploading dataset
For your "1.4M in-the-wild stereo pairs" dataset (related to depth estimation), it would be awesome to make it available on the 🤗 hub, so that people can do:
from datasets import load_dataset

dataset = load_dataset("your-hf-org-or-username/your-dataset")
See here for a guide: https://huggingface.co/docs/datasets/loading.
Besides that, there's the dataset viewer which allows people to quickly explore the first few rows of the data in the browser.
Let me know if you're interested/need any help regarding this, especially when the code and data are ready for release!
Cheers,
Niels
ML Engineer @ HF 🤗