Hello,
Niels here from the open-source team at Hugging Face. I discovered your work through Hugging Face's daily papers, where it was featured: https://huggingface.co/papers/2510.26794.
The paper page lets people discuss your paper and find its artifacts (your models, datasets, or a demo, for instance). You can also claim the paper as yours, which will make it show up on your public HF profile, and add GitHub and project page URLs.
I saw in your abstract and GitHub README that "The code, data, and benchmark will be made publicly available." It'd be great to make the ViMoGen and ViMoGen-light models, the ViMoGen-228K dataset, and the MBench benchmark available on the 🤗 hub, to improve their discoverability/visibility.
We can add tags so that people find them when filtering https://huggingface.co/models and https://huggingface.co/datasets.
Uploading models
See here for a guide: https://huggingface.co/docs/hub/models-uploading.
In this case, we could leverage the PyTorchModelHubMixin class, which adds from_pretrained and push_to_hub to any custom nn.Module. Alternatively, one can leverage the hf_hub_download one-liner to download a checkpoint from the hub.
For your ViMoGen and ViMoGen-light models, given their focus on 3D human motion generation, the relevant pipeline tag would likely be text-to-3d or image-to-3d, depending on whether they primarily take text or video frames as input for generating 3D motion.
We encourage researchers to push each model checkpoint to a separate model repository, so that things like download stats also work. We can then also link the checkpoints to the paper page.
Uploading dataset
It would be awesome to make the ViMoGen-228K dataset and MBench benchmark available on the 🤗 hub, so that people can do:
```python
from datasets import load_dataset

dataset = load_dataset("your-hf-org-or-username/your-dataset")
```

See here for a guide: https://huggingface.co/docs/datasets/loading.
We also support Webdataset, useful for image/video datasets: https://huggingface.co/docs/datasets/en/loading#webdataset.
For ViMoGen-228K and MBench, the task categories would also align with motion generation, likely text-to-3d or image-to-3d.
Besides that, there's the dataset viewer, which allows people to quickly explore the first few rows of the data in the browser.
Let me know if you're interested/need any help regarding this, especially when you are ready to release the artifacts!
Cheers,
Niels
ML Engineer @ HF 🤗