feat: sync upstream changes #4

Merged · 25 commits · Jan 10, 2025

Commits (changes shown from all commits):
- `538455a` feat: enable to use multiple rgb encoders per camera in diffusion pol… (HiroIshida, Oct 30, 2024)
- `e0df56d` Fix config file (#495) (ohharsen, Oct 31, 2024)
- `963738d` fix: broken images and a few minor typos in README (#499) (ivelin, Nov 5, 2024)
- `8af6935` Add support for Windows (#494) (brainwavecoder9, Nov 22, 2024)
- `20f4667` bug causes error uploading to huggingface, unicode issue on windows. … (resolver101757, Nov 22, 2024)
- `975c1c2` Add distinction between two unallowed cases in name check "eval_" (#489) (J4nn1K, Nov 22, 2024)
- `96c7052` Rename deprecated argument (temporal_ensemble_momentum) (#490) (KasparSLT, Nov 25, 2024)
- `32eb0ce` Dataset v2.0 (#461) (aliberts, Nov 29, 2024)
- `a2c1819` Refactor OpenX (#505) (michel-aractingi, Dec 2, 2024)
- `286bca3` Fix missing local_files_only in record/replay (#540) (Cadene, Dec 3, 2024)
- `8e7d697` Control simulated robot with real leader (#514) (michel-aractingi, Dec 3, 2024)
- `03f49ce` Update 7_get_started_with_real_robot.md (#559) (ma3oun, Dec 8, 2024)
- `44f9b21` LerobotDataset pushable to HF from any folder (#563) (Raziel90, Dec 9, 2024)
- `4c41f6f` Fix example 6 (#572) (aliberts, Dec 11, 2024)
- `66f8736` fixing typo from 'teloperation' to 'teleoperation' (#566) (s1lent4gnt, Dec 11, 2024)
- `73b64c3` [vizualizer] for LeRobodDataset V2 (#576) (mishig25, Dec 20, 2024)
- `59e2757` Fix broken `create_lerobot_dataset_card` (#590) (helper2424, Dec 23, 2024)
- `4519016` Update README.md (#612) (CharlesCNorton, Jan 3, 2025)
- `8f821ec` Fix Quality workflow (#622) (aliberts, Jan 8, 2025)
- `bc16e1b` fix(docs): typos in benchmark readme.md (#614) (CharlesCNorton, Jan 9, 2025)
- `5097cd9` fix(visualise): use correct language description for each episode id … (villekuosmanen, Jan 9, 2025)
- `b8b3683` typo fix: batch_convert_dataset_v1_to_v2.py (#615) (CharlesCNorton, Jan 9, 2025)
- `25a8597` [viz] Fixes & updates to html visualizer (#617) (mishig25, Jan 9, 2025)
- `eec6796` fixes to SO-100 readme (#600) (philfung, Jan 10, 2025)
- `ec767f4` Merge remote-tracking branch 'upstream/main' into feat/sync-upstream-… (villekuosmanen, Jan 10, 2025)
4 changes: 2 additions & 2 deletions .github/workflows/quality.yml
@@ -50,7 +50,7 @@ jobs:
uses: actions/checkout@v3

- name: Install poetry
-        run: pipx install poetry
+        run: pipx install "poetry<2.0.0"

- name: Poetry check
run: poetry check
@@ -64,7 +64,7 @@
uses: actions/checkout@v3

- name: Install poetry
-        run: pipx install poetry
+        run: pipx install "poetry<2.0.0"

- name: Install poetry-relax
run: poetry self add poetry-relax
2 changes: 1 addition & 1 deletion README.md
@@ -68,7 +68,7 @@

### Acknowledgment

-- Thanks to Tony Zaho, Zipeng Fu and colleagues for open sourcing ACT policy, ALOHA environments and datasets. Ours are adapted from [ALOHA](https://tonyzhaozh.github.io/aloha) and [Mobile ALOHA](https://mobile-aloha.github.io).
+- Thanks to Tony Zhao, Zipeng Fu and colleagues for open sourcing ACT policy, ALOHA environments and datasets. Ours are adapted from [ALOHA](https://tonyzhaozh.github.io/aloha) and [Mobile ALOHA](https://mobile-aloha.github.io).
- Thanks to Cheng Chi, Zhenjia Xu and colleagues for open sourcing Diffusion policy, Pusht environment and datasets, as well as UMI datasets. Ours are adapted from [Diffusion Policy](https://diffusion-policy.cs.columbia.edu) and [UMI Gripper](https://umi-gripper.github.io).
- Thanks to Nicklas Hansen, Yunhai Feng and colleagues for open sourcing TDMPC policy, Simxarm environments and datasets. Ours are adapted from [TDMPC](https://github.com/nicklashansen/tdmpc) and [FOWM](https://www.yunhaifeng.com/FOWM).
- Thanks to Antonio Loquercio and Ashish Kumar for their early support.
10 changes: 5 additions & 5 deletions benchmarks/video/README.md
@@ -21,7 +21,7 @@ How to decode videos?

## Variables
**Image content & size**
-We don't expect the same optimal settings for a dataset of images from a simulation, or from real-world in an appartment, or in a factory, or outdoor, or with lots of moving objects in the scene, etc. Similarly, loading times might not vary linearly with the image size (resolution).
+We don't expect the same optimal settings for a dataset of images from a simulation, or from real-world in an apartment, or in a factory, or outdoor, or with lots of moving objects in the scene, etc. Similarly, loading times might not vary linearly with the image size (resolution).
For these reasons, we run this benchmark on four representative datasets:
- `lerobot/pusht_image`: (96 x 96 pixels) simulation with simple geometric shapes, fixed camera.
- `aliberts/aloha_mobile_shrimp_image`: (480 x 640 pixels) real-world indoor, moving camera.
@@ -63,7 +63,7 @@ This of course is affected by the `-g` parameter during encoding, which specifies …

Note that this differs significantly from a typical use case like watching a movie, in which every frame is loaded sequentially from the beginning to the end and it's acceptable to have big values for `-g`.

-Additionally, because some policies might request single timestamps that are a few frames appart, we also have the following scenario:
+Additionally, because some policies might request single timestamps that are a few frames apart, we also have the following scenario:
- `2_frames_4_space`: 2 frames with 4 consecutive frames of spacing in between (e.g. `[t, t + 5 / fps]`),

However, due to how video decoding is implemented with `pyav`, we don't have access to an accurate seek so in practice this scenario is essentially the same as `6_frames` since all 6 frames between `t` and `t + 5 / fps` will be decoded.
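For illustration, here is a minimal sketch of this behavior using `pyav`'s API (this is not the benchmark code itself; the file path and timestamps are placeholders):

```python
import av

def decode_frames_between(path: str, start_s: float, end_s: float):
    """Decode frames between two timestamps with pyav.

    pyav can only seek to the nearest keyframe at or before start_s,
    so every frame from that keyframe onward gets decoded.
    """
    container = av.open(path)
    stream = container.streams.video[0]
    # Seek to the closest preceding keyframe (offset is in stream.time_base units).
    container.seek(int(start_s / stream.time_base), backward=True, stream=stream)
    frames = []
    for frame in container.decode(stream):
        t = float(frame.pts * stream.time_base)
        if t > end_s:
            break
        if t >= start_s:
            frames.append(frame)
        # Frames before start_s are still decoded here, just discarded.
    container.close()
    return frames
```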
@@ -85,8 +85,8 @@ However, due to how video decoding is implemented with `pyav`, we don't have acc…
**Average Structural Similarity Index Measure (higher is better)**
`avg_ssim` evaluates the perceived quality of images by comparing luminance, contrast, and structure. SSIM values range from -1 to 1, where 1 indicates perfect similarity.
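For instance, here is a sketch of how an SSIM score could be computed for one frame pair with `scikit-image` (not necessarily the implementation used by the benchmark; the arrays are placeholders for a reference frame and its encoded/decoded counterpart):

```python
import numpy as np
from skimage.metrics import structural_similarity

# Placeholder frames; in practice these are the original and re-encoded images.
original = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
compressed = original.copy()

# channel_axis indicates which axis holds the color channels.
score = structural_similarity(original, compressed, channel_axis=2)
print(f"SSIM: {score:.3f}")  # 1.0 here, since both frames are identical
```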

-One aspect that can't be measured here with those metrics is the compatibility of the encoding accross platforms, in particular on web browser, for visualization purposes.
-h264, h265 and AV1 are all commonly used codecs and should not be pose an issue. However, the chroma subsampling (`pix_fmt`) format might affect compatibility:
+One aspect that can't be measured here with those metrics is the compatibility of the encoding across platforms, in particular in web browsers, for visualization purposes.
+h264, h265 and AV1 are all commonly used codecs and should not pose an issue. However, the chroma subsampling (`pix_fmt`) format might affect compatibility:
- `yuv420p` is more widely supported across various platforms, including web browsers.
- `yuv444p` offers higher color fidelity but might not be supported as broadly.
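If in doubt, the chroma subsampling of an encoded file can be checked with `ffprobe` (`video.mp4` is a placeholder path):

```bash
ffprobe -v error -select_streams v:0 \
    -show_entries stream=codec_name,pix_fmt \
    -of default=noprint_wrappers=1 video.mp4
```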

@@ -116,7 +116,7 @@ Additional encoding parameters exist that are not included in this benchmark. In…
- `-preset` which allows for selecting encoding presets. This represents a collection of options that will provide a certain encoding speed to compression ratio. By leaving this parameter unspecified, it is considered to be `medium` for libx264 and libx265 and `8` for libsvtav1.
- `-tune` which allows optimizing the encoding for certain aspects (e.g. film quality, fast decoding, etc.).

-See the documentation mentioned above for more detailled info on these settings and for a more comprehensive list of other parameters.
+See the documentation mentioned above for more detailed info on these settings and for a more comprehensive list of other parameters.
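As a rough illustration of how these flags fit together, here is a sketch of an encoding call (the parameter values and paths are illustrative, not the benchmark's defaults):

```python
import subprocess

cmd = [
    "ffmpeg", "-y",
    "-framerate", "30",
    "-i", "frames/frame_%06d.png",  # placeholder image sequence
    "-c:v", "libx264",
    "-g", "2",               # keyframe interval: one keyframe every 2 frames
    "-crf", "23",            # constant rate factor: quality/size trade-off
    "-preset", "medium",     # encoding speed vs. compression ratio
    "-pix_fmt", "yuv420p",   # chroma subsampling with the broadest support
    "out.mp4",
]
subprocess.run(cmd, check=True)
```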

Similarly on the decoding side, other decoders exist but are not implemented in our current benchmark. To name a few:
- `torchaudio`
85 changes: 53 additions & 32 deletions examples/10_use_so100.md
@@ -1,25 +1,31 @@
-This tutorial explains how to use [SO-100](https://github.com/TheRobotStudio/SO-ARM100) with LeRobot.
+# Using the [SO-100](https://github.com/TheRobotStudio/SO-ARM100) with LeRobot

-## Source the parts
+## A. Source the parts

Follow this [README](https://github.com/TheRobotStudio/SO-ARM100). It contains the bill of materials with links to source the parts, instructions to 3D print the parts, and advice if it's your first time printing or if you don't own a 3D printer already.

**Important**: Before assembling, you will first need to configure your motors. To this end, we provide a nice script, so let's first install LeRobot. After configuration, we will also guide you through assembly.

-## Install LeRobot
+## B. Install LeRobot

On your computer:

1. [Install Miniconda](https://docs.anaconda.com/miniconda/#quick-command-line-install):
```bash
mkdir -p ~/miniconda3
# Linux:
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
# Mac M-series:
# curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh -o ~/miniconda3/miniconda.sh
# Mac Intel:
# curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh -o ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm ~/miniconda3/miniconda.sh
~/miniconda3/bin/conda init bash
```

-2. Restart shell or `source ~/.bashrc`
+2. Restart shell or `source ~/.bashrc` (*Mac*: `source ~/.bash_profile`), or `source ~/.zshrc` if you're using zsh

3. Create and activate a fresh conda environment for lerobot
```bash
...
```

```bash
git clone https://github.com/huggingface/lerobot.git ~/lerobot
cd ~/lerobot && pip install -e ".[feetech]"
```

-For Linux only (not Mac), install extra dependencies for recording datasets:
+*For Linux only (not Mac)*: install extra dependencies for recording datasets:
```bash
conda install -y -c conda-forge ffmpeg
pip uninstall -y opencv-python
conda install -y -c conda-forge "opencv>=4.10.0"
```

-## Configure the motors
+## C. Configure the motors

### 1. Find the USB ports associated to each arm

Designate one bus servo adapter and 6 motors for your leader arm, and similarly the other bus servo adapter and 6 motors for the follower arm.

-Follow steps 1 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I) which illustrates the use of our scripts below.
+#### a. Run the script to find ports

-**Find USB ports associated to your arms**
-To find the correct ports for each arm, run the utility script twice:
+Follow Step 1 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I), which illustrates the use of our scripts below.
+
+To find the port for each bus servo adapter, run the utility script:
```bash
python lerobot/scripts/find_motors_bus_port.py
```

#### b. Example outputs

Example output when identifying the leader arm's port (e.g., `/dev/tty.usbmodem575E0031751` on Mac, or possibly `/dev/ttyACM0` on Linux):
```
Finding all available ports for the MotorBus.
Remove the usb cable from your DynamixelMotorsBus and press Enter when done.
The port of this DynamixelMotorsBus is /dev/tty.usbmodem575E0031751
Reconnect the usb cable.
```

Example output when identifying the follower arm's port (e.g., `/dev/tty.usbmodem575E0032081`, or possibly `/dev/ttyACM1` on Linux):
```
Finding all available ports for the MotorBus.
The port of this DynamixelMotorsBus is /dev/tty.usbmodem575E0032081
Reconnect the usb cable.
```

-Troubleshooting: On Linux, you might need to give access to the USB ports by running:
+#### c. Troubleshooting
+On Linux, you might need to give access to the USB ports by running:
```bash
sudo chmod 666 /dev/ttyACM0
sudo chmod 666 /dev/ttyACM1
```

-**Configure your motors**
+#### d. Update YAML file
+
+Now that you have the ports, modify the *port* sections in `so100.yaml`.
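For reference, the relevant entries look roughly like this (a sketch only; the exact layout of `so100.yaml` may differ, and the port values are the example ones from the outputs above):

```yaml
leader_arms:
  main:
    port: /dev/tty.usbmodem575E0031751
follower_arms:
  main:
    port: /dev/tty.usbmodem575E0032081
```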

### 2. Configure the motors

#### a. Set IDs for all 12 motors
Plug your first motor and run this script to set its ID to 1. It will also set its present position to 2048, so expect your motor to rotate:
```bash
python lerobot/scripts/configure_motor.py \
    ...
--ID 1
```

-Note: These motors are currently limitated. They can take values between 0 and 4096 only, which corresponds to a full turn. They can't turn more than that. 2048 is at the middle of this range, so we can take -2048 steps (180 degrees anticlockwise) and reach the maximum range, or take +2048 steps (180 degrees clockwise) and reach the maximum range. The configuration step also sets the homing offset to 0, so that if you misassembled the arm, you can always update the homing offset to account for a shift up to ± 2048 steps (± 180 degrees).
+*Note: These motors are currently limited. They can take values between 0 and 4096 only, which corresponds to a full turn. They can't turn more than that. 2048 is in the middle of this range, so we can take -2048 steps (180 degrees anticlockwise) and reach the maximum range, or take +2048 steps (180 degrees clockwise) and reach the maximum range. The configuration step also sets the homing offset to 0, so that if you misassembled the arm, you can always update the homing offset to account for a shift up to ± 2048 steps (± 180 degrees).*
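In other words (a small helper just to illustrate the arithmetic):

```python
def steps_to_degrees(steps: int) -> float:
    # 4096 steps correspond to one full turn (360 degrees); 2048 is mid-range.
    return steps * 360.0 / 4096.0

assert steps_to_degrees(2048) == 180.0
assert steps_to_degrees(-2048) == -180.0
```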

Then unplug your motor and plug the second motor and set its ID to 2.
```bash
python lerobot/scripts/configure_motor.py \
    ...
```

Redo the process for all your motors until ID 6. Do the same for the 6 motors of the leader arm.

-**Remove the gears of the 6 leader motors**
-Follow step 2 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I). You need to remove the gear for the motors of the leader arm. As a result, you will only use the position encoding of the motor and reduce friction to more easily operate the leader arm.
-
-**Add motor horn to the motors**
-Follow step 3 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I). For SO-100, you need to align the holes on the motor horn to the motor spline to be approximately 1:30, 4:30, 7:30 and 10:30.
+#### b. Remove the gears of the 6 leader motors
+
+Follow step 2 of the [assembly video](https://youtu.be/FioA2oeFZ5I?t=248). You need to remove the gear for the motors of the leader arm. As a result, you will only use the position encoding of the motor and reduce friction to more easily operate the leader arm.
+
+#### c. Add motor horn to all 12 motors
+Follow step 3 of the [assembly video](https://youtu.be/FioA2oeFZ5I?t=569). For SO-100, you need to align the holes on the motor horn to the motor spline to be approximately 1:30, 4:30, 7:30 and 10:30.
Try to avoid rotating the motor while doing so, to keep position 2048 set during configuration. This is especially tricky for the leader motors, which are more sensitive without the gears, but it's ok if they end up a bit rotated.

-## Assemble the arms
+## D. Assemble the arms

-Follow step 4 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I). The first arm should take a bit more than 1 hour to assemble, but once you get use to it, you can do it under 1 hour for the second arm.
+Follow step 4 of the [assembly video](https://youtu.be/FioA2oeFZ5I?t=610). The first arm should take a bit more than 1 hour to assemble, but once you get used to it, you can do the second arm in under 1 hour.

-## Calibrate
+## E. Calibrate

Next, you'll need to calibrate your SO-100 robot to ensure that the leader and follower arms have the same position values when they are in the same physical position. This calibration is essential because it allows a neural network trained on one SO-100 robot to work on another.

-**Manual calibration of follower arm**
-/!\ Contrarily to step 6 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I) which illustrates the auto calibration, we will actually do manual calibration of follower for now.
+#### a. Manual calibration of follower arm
+/!\ Contrary to step 6 of the [assembly video](https://youtu.be/FioA2oeFZ5I?t=724), which illustrates auto calibration, we will do manual calibration of the follower for now.

You will need to move the follower arm to these positions sequentially:

```bash
python lerobot/scripts/control_robot.py calibrate \
    ...
--robot-overrides '~cameras' --arms main_follower
```

-**Manual calibration of leader arm**
-Follow step 6 of the [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I) which illustrates the manual calibration. You will need to move the leader arm to these positions sequentially:
+#### b. Manual calibration of leader arm
+Follow step 6 of the [assembly video](https://youtu.be/FioA2oeFZ5I?t=724), which illustrates the manual calibration. You will need to move the leader arm to these positions sequentially:

| 1. Zero position | 2. Rotated position | 3. Rest position |
|---|---|---|
```bash
python lerobot/scripts/control_robot.py calibrate \
    ...
--robot-overrides '~cameras' --arms main_leader
```

-## Teleoperate
+## F. Teleoperate

**Simple teleop**
Then you are ready to teleoperate your robot! Run this simple script (it won't connect to or display the cameras):
```bash
python lerobot/scripts/control_robot.py teleoperate \
    ...
```


-**Teleop with displaying cameras**
+#### a. Teleop with displaying cameras
Follow [this guide to setup your cameras](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#c-add-your-cameras-with-opencvcamera). Then you will be able to display the cameras on your computer while you are teleoperating by running the following code. This is useful to prepare your setup before recording your first dataset.
```bash
python lerobot/scripts/control_robot.py teleoperate \
--robot-path lerobot/configs/robot/so100.yaml
```

-## Record a dataset
+## G. Record a dataset

Once you're familiar with teleoperation, you can record your first dataset with SO-100.

```bash
python lerobot/scripts/control_robot.py record \
    ...
--push-to-hub 1
```

-## Visualize a dataset
+## H. Visualize a dataset

If you uploaded your dataset to the hub with `--push-to-hub 1`, you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy-pasting your repo id, which is given by:
```bash
echo ${HF_USER}/so100_test
```

You can also visualize the dataset locally with:
```bash
python lerobot/scripts/visualize_dataset_html.py \
--repo-id ${HF_USER}/so100_test
```

-## Replay an episode
+## I. Replay an episode

Now try to replay the first episode on your robot:
```bash
python lerobot/scripts/control_robot.py replay \
    ...
--episode 0
```

-## Train a policy
+## J. Train a policy

To train a policy to control your robot, use the [`python lerobot/scripts/train.py`](../lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
```bash
python lerobot/scripts/train.py \
    ...
```

Training should take several hours. You will find checkpoints in `outputs/train/act_so100_test/checkpoints`.

-## Evaluate your policy
+## K. Evaluate your policy

You can use the `record` function from [`lerobot/scripts/control_robot.py`](../lerobot/scripts/control_robot.py) but with a policy checkpoint as input. For instance, run this command to record 10 evaluation episodes:
```bash
python lerobot/scripts/control_robot.py record \
    ...
```

As you can see, it's almost the same command as previously used to record your training dataset. Two things changed:
1. There is an additional `-p` argument which indicates the path to your policy checkpoint (e.g. `-p outputs/train/eval_so100_test/checkpoints/last/pretrained_model`). You can also use the model repository if you uploaded a model checkpoint to the hub (e.g. `-p ${HF_USER}/act_so100_test`).
2. The dataset name begins with `eval` to reflect that you are running inference (e.g. `--repo-id ${HF_USER}/eval_act_so100_test`).

-## More
+## L. More Information

Follow this [previous tutorial](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#4-train-a-policy-on-your-data) for a more in-depth guide to controlling real robots with LeRobot.

4 changes: 2 additions & 2 deletions lerobot/common/datasets/v2/batch_convert_dataset_v1_to_v2.py
@@ -159,11 +159,11 @@
**ALOHA_STATIC_INFO,
},
"aloha_static_vinh_cup": {
"single_task": "Pick up the platic cup with the right arm, then pop its lid open with the left arm.",
"single_task": "Pick up the plastic cup with the right arm, then pop its lid open with the left arm.",
**ALOHA_STATIC_INFO,
},
"aloha_static_vinh_cup_left": {
"single_task": "Pick up the platic cup with the left arm, then pop its lid open with the right arm.",
"single_task": "Pick up the plastic cup with the left arm, then pop its lid open with the right arm.",
**ALOHA_STATIC_INFO,
},
"aloha_static_ziploc_slide": {"single_task": "Slide open the ziploc bag.", **ALOHA_STATIC_INFO},
6 changes: 6 additions & 0 deletions lerobot/scripts/control_robot.py
@@ -600,6 +600,12 @@ def replay(
default="lerobot/test",
help="Dataset identifier. By convention it should match '{hf_username}/{dataset_name}' (e.g. `lerobot/test`).",
)
+    parser_replay.add_argument(
+        "--local-files-only",
+        type=int,
+        default=0,
+        help="Use local files only. By default, this script will try to fetch the dataset from the hub if it exists.",
+    )
parser_replay.add_argument("--episode", type=int, default=0, help="Index of the episode to replay.")

args = parser.parse_args()
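With this flag, replaying from a dataset that only exists locally might look like the following (a sketch: the repo id is the parser's default, and the other flag values are illustrative, matching the SO-100 tutorial above):

```bash
python lerobot/scripts/control_robot.py replay \
    --robot-path lerobot/configs/robot/so100.yaml \
    --repo-id lerobot/test \
    --episode 0 \
    --local-files-only 1
```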