
OpenVLA Worklog

  • Set up OpenVLA from its pyproject.toml (the required PyTorch versions are pinned there).
  • Install FlashAttention and BitsAndBytes at versions compatible with those pins.
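Before running anything, it can help to confirm the environment actually matches the pins. A minimal sketch (the package names below are assumptions; adjust them to whatever pyproject.toml actually pins):

```python
from importlib import metadata

def check_versions(packages=("torch", "flash-attn", "bitsandbytes")):
    """Report the installed version of each package; missing packages map to None."""
    versions = {}
    for pkg in packages:
        try:
            versions[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            versions[pkg] = None
    return versions

print(check_versions())
```

Compare the printed versions against the pins before moving on; a `None` means the package is not installed in the active environment.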

Action Comparison Pipeline

1) Download the Bridge-v2 dataset

  • Change directory to your base datasets folder:
cd <PATH TO BASE DATASETS DIR>
  • Download the full dataset (approximately 124 GB):
wget -r -nH --cut-dirs=4 --reject="index.html*" https://rail.eecs.berkeley.edu/datasets/bridge_release/data/tfds/bridge_dataset/
  • Rename the dataset to bridge_orig (required; the data loaders look the dataset up by this name, so skipping the rename causes runtime errors later):
mv bridge_dataset bridge_orig
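A quick sanity check that the download and rename landed where the pipeline expects them (a sketch only; the exact layout TFDS writes, e.g. version subdirectories, may vary):

```python
from pathlib import Path

def dataset_ready(base_dir: str, name: str = "bridge_orig") -> bool:
    """True if <base_dir>/<name> exists and contains files
    (e.g. TFDS shards and metadata)."""
    root = Path(base_dir) / name
    return root.is_dir() and any(root.iterdir())
```

Call `dataset_ready` with your base datasets directory; `False` usually means the download is incomplete or the `mv` to bridge_orig was skipped.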

2) Generate action-comparison JSONs

Run the following script:

python scripts/action-compare-pipeline/bridgev2_eval.py \
  --num_action_samples 8 \
  --sample_temperature 0.8
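The two flags control stochastic decoding: action logits are divided by sample_temperature before sampling, and num_action_samples independent draws are taken for comparison. A pure-Python sketch of that mechanism (the actual internals of bridgev2_eval.py may differ):

```python
import math
import random

def sample_with_temperature(logits, temperature=0.8, num_samples=8, rng=None):
    """Draw num_samples indices from a temperature-scaled categorical
    distribution over logits. Lower temperature sharpens toward argmax;
    higher temperature flattens toward uniform."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility in this sketch
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return [rng.choices(range(len(logits)), weights=probs)[0]
            for _ in range(num_samples)]
```

With temperature near zero the draws collapse onto the highest-logit action, so the 0.8 used above keeps meaningful diversity across the 8 samples.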

Notes

  • Quantized model variants (8-bit or 4-bit via BitsAndBytes) fit more comfortably within the 24 GB of VRAM on an RTX 4090.
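The reason is memory: at 16-bit precision the weights of a ~7B-parameter model alone approach the 24 GB of an RTX 4090, while 8-bit and 4-bit variants leave headroom for activations. A back-of-envelope check (the ~7.5e9 parameter count for OpenVLA-7B is an assumption):

```python
def weight_footprint_gb(num_params: float, bits: int) -> float:
    """Weight memory only; activations, KV cache, and framework
    overhead are excluded."""
    return num_params * bits / 8 / 1e9

# assuming ~7.5e9 parameters for OpenVLA-7B:
for bits in (16, 8, 4):
    print(f"{bits}-bit: {weight_footprint_gb(7.5e9, bits):.1f} GB")
```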

About

OpenVLA: An open-source vision-language-action model for robotic manipulation.
