Benchmarks

GenManip natively supports a series of benchmarks developed by the GenManip team and community contributors. Below is a detailed introduction to each benchmark.

If you have built your own benchmark based on GenManip, we warmly welcome you to submit an issue to our repository. Please include:

  • The name of your benchmark
  • A brief description (you can include your project’s external link)
  • The corresponding asset links (these can be added directly to download_assets.py)
  • The corresponding config file
  • If your benchmark is from a paper, please include the citation information as well.
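As a reference point for the config file mentioned above, a submission might look like the sketch below. All field names and URLs here are illustrative assumptions, not a confirmed GenManip config schema; check the existing files under configs/tasks/ for the actual format.

```yaml
# Illustrative only: the keys below are assumptions, not a confirmed
# GenManip schema. Use configs/tasks/*.yml as the authoritative reference.
benchmark: my_benchmark
description: Pick-and-place tasks with household objects
assets:
  usd: https://example.com/my_benchmark.usd.zip      # hypothetical asset link
  layout: https://example.com/my_benchmark-layout.zip
citation: "Doe et al., My Benchmark Paper, 2025"
```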

GenManip IROS Benchmark

The IROS 2025 Challenge of Multimodal Robot Learning in InternUtopia and Real World is built on top of GenManip and forms the foundation of its Manipulation Track. The challenge also supports the InternManip framework. GenManip natively includes these benchmarks and their variants.

# Download data
python standalone_tools/download_assets.py --dataset IROS_Aloha # USD File
python standalone_tools/download_assets.py --dataset IROS_Aloha-dataset # Dataset
python standalone_tools/download_assets.py --dataset IROS_Aloha-layout # Layout
# Launch your model service
python standalone_tools/fake_port.py
# Run the evaluation script
python eval_V3.py -cfg configs/tasks/IROS_Aloha.yml
# Do the same for IROS_RoboTiq
python standalone_tools/download_assets.py --dataset IROS_RoboTiq # USD File
python standalone_tools/download_assets.py --dataset IROS_RoboTiq-dataset # Dataset
python standalone_tools/download_assets.py --dataset IROS_RoboTiq-layout # Layout
python standalone_tools/fake_port.py
python eval_V3.py -cfg configs/tasks/IROS_RoboTiq.yml
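Since the two embodiments share the same download-then-evaluate sequence, the steps above can be wrapped in a small driver script. This is a minimal sketch, not part of GenManip: it only rebuilds the CLI invocations shown above, and it assumes the model service (standalone_tools/fake_port.py) has already been started in a separate process before eval_V3.py runs.

```python
# Sketch: run the IROS evaluation pipeline for both embodiments.
# Assumes the model service (standalone_tools/fake_port.py) is already
# running; dry_run=True prints the commands instead of executing them.
import subprocess

BENCHMARKS = ["IROS_Aloha", "IROS_RoboTiq"]

def pipeline_commands(benchmark):
    """Build the download and evaluation commands for one benchmark."""
    download = ["python", "standalone_tools/download_assets.py", "--dataset"]
    return [
        download + [benchmark],               # USD file
        download + [f"{benchmark}-dataset"],  # dataset
        download + [f"{benchmark}-layout"],   # layout
        ["python", "eval_V3.py", "-cfg", f"configs/tasks/{benchmark}.yml"],
    ]

def run_all(dry_run=True):
    for benchmark in BENCHMARKS:
        for cmd in pipeline_commands(benchmark):
            if dry_run:
                print(" ".join(cmd))
            else:
                subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run_all(dry_run=True)
```

Set dry_run=False to actually execute the commands once the assets and the model service are in place.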

GenManip Scaling Pick-and-Place Benchmark

The GenManip Scaling Pick-and-Place Benchmark evaluates a model's ability to generalize across a large number of objects and tasks. It comprises 200 randomly generated scenes built from Objaverse assets, each verified to be executable.

# Download data
python standalone_tools/download_assets.py --dataset objaverse_scaling # USD File
python standalone_tools/download_assets.py --dataset objaverse_scaling-layout # Layout
python standalone_tools/download_assets.py --dataset objaverse_scaling-pre_train_dataset # Dataset
python standalone_tools/download_assets.py --dataset objaverse_scaling-post_train_dataset # Dataset
# Launch your model service
python standalone_tools/fake_port.py
# Run the evaluation script
python eval_V3.py -cfg configs/tasks/objaverse_scaling.yml
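The scaling benchmark ships four asset bundles (USD file, layout, and the pre-/post-training datasets), so the download commands above differ only in a dataset-name suffix. The snippet below is a small sketch that enumerates those invocations; nothing in it is a GenManip API, it just rebuilds the same CLI strings.

```python
# Sketch: enumerate the download commands for the scaling benchmark's
# four asset bundles. Mirrors the commands above; not a GenManip API.
SUFFIXES = ["", "-layout", "-pre_train_dataset", "-post_train_dataset"]

def download_commands(benchmark="objaverse_scaling"):
    """Return one download command string per asset bundle."""
    return [
        f"python standalone_tools/download_assets.py --dataset {benchmark}{suffix}"
        for suffix in SUFFIXES
    ]

if __name__ == "__main__":
    for cmd in download_commands():
        print(cmd)
```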