[Video: main.mp4]
Yufei Jia†, Guangyu Wang†, Yuhang Dong, Junzhe Wu, Yupei Zeng, Haizhou Ge, Kairui Ding, Zike Yan, Weibin Gu, Chuxuan Li, Ziming Wang, Yunjie Cheng, Wei Sui, Ruqi Huang‡, Guyue Zhou‡
- High-fidelity, hierarchical Real2Sim generation for both the background node and interactive scene nodes in a variety of complex real-world scenarios, leveraging advanced laser scanning, generative models, physically based relighting, and Mesh-Gaussian transfer.
- Efficient simulation and user-friendly configuration. By seamlessly integrating the 3DGS rendering engine, the MuJoCo physics engine, and the ROS2 robotics interface, we provide an easy-to-use, massively parallel implementation for rapid deployment and flexible extension. The overall throughput of DISCOVERSE reaches 650 FPS when rendering RGB-D frames from 5 cameras, which is ∼3× faster than ORBIT (Isaac Lab).
- Compatibility with existing 3D assets and inclusive support for robot models (robotic arms, mobile manipulators, quadcopters, etc.), sensor modalities (RGB, depth, LiDAR), ROS plugins, and a variety of Sim&Real data-mixing schemes. DISCOVERSE lays a solid foundation for developing a comprehensive set of Sim2Real robotic benchmarks for end-to-end robot learning, with real-world tasks including manipulation, navigation, multi-agent collaboration, etc., to stimulate further research and practical applications in the related fields.
Please refer to the docker deployment guide, or directly download the v1.6.1 docker images. If docker is used, the 📦 Install section and steps 1-3 of 📷 Photorealistic/Preparation can be skipped.
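If you go the docker route, a typical GPU-enabled launch looks roughly like the sketch below. The image name is a placeholder, not the actual published tag; use the tag given in the docker deployment guide, and note that `--gpus all` requires the NVIDIA Container Toolkit.

```bash
# Placeholder image name -- substitute the tag from the docker deployment guide.
docker run --gpus all -it --rm \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    discoverse:v1.6.1
```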
git clone https://github.com/TATP-233/DISCOVERSE.git --recursive
cd DISCOVERSE
pip install -r requirements.txt
pip install -e .
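To sanity-check the installation, you can try importing the package (this assumes the editable install exposes the discoverse package, as the example scripts below do):

```bash
# Quick import test for the editable install.
python3 -c "import discoverse; print('DISCOVERSE imported successfully')"
```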
Download the meshes and textures folders from Baidu Netdisk or Tsinghua Netdisk and place them under the models directory. After downloading the model files, the models directory will contain the following contents.
models
├── meshes
├── mjcf
├── textures
└── urdf
The physics engine of DISCOVERSE is MuJoCo. If you do not need high-fidelity rendering based on 3DGS, this section can be skipped. If photorealistic rendering is required, please follow the instructions in this subsection.
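Since the core simulation runs on MuJoCo, any MJCF scene shipped under models/mjcf can also be loaded and stepped directly with the standard mujoco Python bindings. A minimal sketch is shown below; the file name is a placeholder, so point it at whichever MJCF file you actually downloaded.

```python
import mujoco

# Placeholder path: substitute any MJCF file from the models/mjcf directory.
model = mujoco.MjModel.from_xml_path("models/mjcf/your_scene.xml")
data = mujoco.MjData(model)

# Advance the simulation by a few physics steps.
for _ in range(100):
    mujoco.mj_step(model, data)

print("simulated time:", data.time)
```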
- Install CUDA. Please install the CUDA version corresponding to your graphics card model from the download link.
- pip install -r requirements_gs.txt
- Install diff-gaussian-rasterization:
  cd submodules/diff-gaussian-rasterization/
  git checkout 8829d14
  Modify line 154 of submodules/diff-gaussian-rasterization/cuda_rasterizer/auxiliary.h, changing (p_view.z <= 0.2f) to (p_view.z <= 0.01f). Then:
  cd ../..
  pip install submodules/diff-gaussian-rasterization
- Prepare the 3DGS model files. The high-fidelity visual effect of DISCOVERSE depends on 3DGS technology and the corresponding model files. Pre-reconstructed robot, object, and scene models are available via the Baidu Netdisk link and the Tsinghua Netdisk link. After downloading the model files, the models directory will contain the following contents. (Note: not all models are necessary. Users can download according to their own needs. It is recommended to download all ply models except those in the scene directory, and to download only the scene models that will actually be used.)
models
├── 3dgs
│ ├── airbot_play
│ ├── mmk2
│ ├── tok2
│ ├── skyrover
│ ├── hinge
│ ├── object
│ └── scene
├── meshes
├── mjcf
├── textures
└── urdf
If you want to view a single ply model, you can open SuperSplat in the browser and drag the ply file into the page to view it and perform simple edits. The webpage looks as follows.
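For a quick local inspection without a browser, the elements and Gaussian count of a downloaded ply model can be listed with the plyfile package (an extra dependency, not part of the DISCOVERSE requirements; the path below is a placeholder):

```python
from plyfile import PlyData

# Placeholder path: point this at any downloaded 3DGS ply model.
ply = PlyData.read("models/3dgs/object/example.ply")

for element in ply.elements:
    # 3DGS models typically store a single "vertex" element whose
    # count equals the number of Gaussians.
    print(element.name, "count:", element.count)
    print("properties:", [p.name for p in element.properties])
```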
For this part, please refer to our Real2Sim repository, DISCOVERSE-Real2Sim.
- airbot_play robotic arm
python3 discoverse/envs/airbot_play_base.py
- Robotic arm desktop manipulation tasks
python3 discoverse/examples/tasks_airbot_play/block_place.py
python3 discoverse/examples/tasks_airbot_play/coffeecup_place.py
python3 discoverse/examples/tasks_airbot_play/cuplid_cover.py
python3 discoverse/examples/tasks_airbot_play/drawer_open.py
[Video: sim2real.mp4]
There are many examples under the discoverse/examples path, covering ROS1, ROS2, gRPC, imitation learning, active mapping, and more.
- Active SLAM
python3 discoverse/examples/active_slam/dummy_robot.py
- Collision Detection
jupyter notebook discoverse/examples/collision_detection/mmk2_collision_detection.ipynb
- Vehicle and Drone Collaboration
python3 discoverse/examples/skyrover_on_rm2car/skyrover_and_rm2car.py
We currently provide the complete workflow of data collection, model training, and inference for the ACT algorithm in the simulator. Please refer to the corresponding tutorials: Data Collection and Format Conversion, Training, and Inference.
- Press 'h' to print help
- Press 'F5' to reload the mjcf file
- Press 'r' to reset the state
- Press '[' or ']' to switch camera view
- Press 'Esc' to set free camera
- Press 'p' to print the robot state
- Press 'g' to toggle Gaussian rendering
- Press 'd' to toggle depth rendering
- 2025.01.13: DISCOVERSE is open source
- 2025.01.16: add Dockerfile
- If diff-gaussian-rasterization fails to install due to mismatched PyTorch and CUDA versions, please install the specified version of PyTorch.
- If you want to run on a headless server, set the environment variable:
export MUJOCO_GL=egl
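The same setting can also be applied from inside a Python script, as long as it happens before MuJoCo is imported (a convenience sketch; exporting the variable in the shell as above works equally well):

```python
import os

# Must be set before mujoco (or any module that imports it) is loaded.
os.environ["MUJOCO_GL"] = "egl"

import mujoco  # imported after setting the environment variable on purpose
```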
You are welcome to add the authors' contact information. Please include a note when doing so.
DISCOVERSE is licensed under the MIT License. See LICENSE for additional details.
If you find this work helpful, please consider citing our paper:
@misc{discoverse2024,
title={DISCOVERSE: Efficient Robot Simulation in Complex High-Fidelity Environments},
author={Yufei Jia and Guangyu Wang and Yuhang Dong and Junzhe Wu and Yupei Zeng and Haizhou Ge and Kairui Ding and Zike Yan and Weibin Gu and Chuxuan Li and Ziming Wang and Yunjie Cheng and Wei Sui and Ruqi Huang and Guyue Zhou},
url={https://air-discoverse.github.io/},
year={2024}
}