Adds documentation and demo script for IMU sensor (#1694)
# Description

Adds a new documentation page and demo script explaining the IMU sensor.
Also replaces the previous .png images with .jpg files for better storage efficiency.

## Type of change

- This change requires a documentation update


## Checklist

- [X] I have run the [`pre-commit` checks](https://pre-commit.com/) with
`./isaaclab.sh --format`
- [X] I have made corresponding changes to the documentation
- [X] My changes generate no new warnings
- [X] I have added tests that prove my fix is effective or that my
feature works
- [X] I have updated the changelog and the corresponding version in the
extension's `config/extension.toml` file
- [X] I have added my name to the `CONTRIBUTORS.md` or my name already
exists there


---------

Signed-off-by: Michael Gussert <[email protected]>
Co-authored-by: James Tigue <[email protected]>
mpgussert and jtigue-bdai authored Jan 17, 2025
1 parent af6fa10 commit b5078de
Showing 31 changed files with 243 additions and 11 deletions.
12 changes: 6 additions & 6 deletions docs/source/overview/sensors/camera.rst
@@ -71,7 +71,7 @@ Both :class:`~sensors.TiledCamera` and :class:`~sensors.Camera` classes provide
RGB and RGBA
~~~~~~~~~~~~

.. figure:: ../../_static/overview/overview_sensors_rgb.png
.. figure:: ../../_static/overview/overview_sensors_rgb.jpg
:align: center
:figwidth: 100%
:alt: A scene captured in RGB
@@ -85,7 +85,7 @@ To convert the ``torch.uint8`` data to ``torch.float32``, divide the buffer by 2
Depth and Distances
~~~~~~~~~~~~~~~~~~~

.. figure:: ../../_static/overview/overview_sensors_depth.png
.. figure:: ../../_static/overview/overview_sensors_depth.jpg
:align: center
:figwidth: 100%
:alt: A scene captured in RGB
@@ -99,7 +99,7 @@ Depth and Distances
Normals
~~~~~~~

.. figure:: ../../_static/overview/overview_sensors_normals.png
.. figure:: ../../_static/overview/overview_sensors_normals.jpg
:align: center
:figwidth: 100%
:alt: A scene captured in RGB
@@ -114,7 +114,7 @@ Motion Vectors
Semantic Segmentation
~~~~~~~~~~~~~~~~~~~~~

.. figure:: ../../_static/overview/overview_sensors_semantic.png
.. figure:: ../../_static/overview/overview_sensors_semantic.jpg
:align: center
:figwidth: 100%
:alt: A scene captured in RGB
@@ -128,7 +128,7 @@ Semantic Segmentation
Instance ID Segmentation
~~~~~~~~~~~~~~~~~~~~~~~~

.. figure:: ../../_static/overview/overview_sensors_instanceID.png
.. figure:: ../../_static/overview/overview_sensors_instanceID.jpg
:align: center
:figwidth: 100%
:alt: A scene captured in RGB
@@ -144,7 +144,7 @@ The main difference between ``instance_id_segmentation_fast`` and ``instance_seg
Instance Segmentation
"""""""""""""""""""""

.. figure:: ../../_static/overview/overview_sensors_instance.png
.. figure:: ../../_static/overview/overview_sensors_instance.jpg
:align: center
:figwidth: 100%
:alt: A scene captured in RGB
2 changes: 1 addition & 1 deletion docs/source/overview/sensors/contact_sensor.rst
@@ -3,7 +3,7 @@
Contact Sensor
================

.. figure:: ../../_static/overview/overview_sensors_contact_diagram.png
.. figure:: ../../_static/overview/overview_sensors_contact_diagram.jpg
:align: center
:figwidth: 100%
:alt: A contact sensor with filtering
4 changes: 2 additions & 2 deletions docs/source/overview/sensors/frame_transformer.rst
@@ -3,7 +3,7 @@
Frame Transformer
====================

.. figure:: ../../_static/overview/overview_sensors_frame_transformer.png
.. figure:: ../../_static/overview/overview_sensors_frame_transformer.jpg
:align: center
:figwidth: 100%
:alt: A diagram outlining the basic geometry of frame transformations
@@ -61,7 +61,7 @@ Let's take a look at the result for tracking specific objects. First, we can tak
relative orientations: tensor([[[ 0.9623, 0.0072, -0.2717, -0.0020],
[ 0.9639, 0.0052, -0.2663, -0.0014]]], device='cuda:0')
.. figure:: ../../_static/overview/overview_sensors_ft_visualizer.png
.. figure:: ../../_static/overview/overview_sensors_ft_visualizer.jpg
:align: center
:figwidth: 100%
:alt: The frame transformer visualizer
87 changes: 87 additions & 0 deletions docs/source/overview/sensors/imu.rst
@@ -0,0 +1,87 @@
.. _overview_sensors_imu:

Inertial Measurement Unit (IMU)
===================================

.. figure:: ../../_static/overview/overview_sensors_imu_diagram.jpg
:align: center
:figwidth: 100%
:alt: A diagram outlining the basic force relationships for the IMU sensor

Inertial Measurement Units (IMUs) are a type of sensor that measures the acceleration of an object. These sensors are traditionally designed to report linear accelerations and angular velocities, and they function on principles similar to those of a digital scale: they report accelerations derived from the **net force acting on the sensor**.

A naive implementation of an IMU would report a negative acceleration due to gravity while the sensor is at rest in some local gravitational field. This is generally not desired for practical applications, and so real IMU sensors typically include a **gravity bias** and assume that the device is operating on the surface of the Earth. The IMU we provide in Isaac Lab includes a similar bias term, which defaults to +g. This means that if you add an IMU to your simulation and do not change this bias term, you will detect an acceleration of :math:`+9.81 \, m/s^{2}` anti-parallel to the gravitational acceleration.
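
To make the effect of this bias term concrete, here is a minimal sketch (plain PyTorch, not the Isaac Lab implementation) that assumes the bias is simply added to the acceleration measured by the sensor:

.. code-block:: python

    import torch

    # Illustrative only: model the IMU reading as the measured (finite-difference)
    # acceleration plus the configured gravity bias.
    measured_acc = torch.tensor([0.0, 0.0, 0.0])    # a body at rest has ~zero coordinate acceleration
    default_bias = torch.tensor([0.0, 0.0, 9.81])   # default bias: +g
    zero_bias = torch.tensor([0.0, 0.0, 0.0])       # bias explicitly removed

    print(measured_acc + default_bias)  # tensor([0.0000, 0.0000, 9.8100]) -- anti-parallel to gravity
    print(measured_acc + zero_bias)     # tensor([0., 0., 0.])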

Consider a simple environment with an Anymal Quadruped equipped with an IMU on each of its two front feet.

.. literalinclude:: ../../../../source/standalone/demos/sensors/imu_sensor.py
:language: python
:lines: 39-63

Here we have explicitly removed the bias from one of the sensors, and we can see how this affects the reported values by visualizing the sensors when we run the sample script.

.. figure:: ../../_static/overview/overview_sensors_imu_visualizer.jpg
:align: center
:figwidth: 100%
:alt: IMU visualized

Notice that the right front foot explicitly has a bias of (0,0,0). In the visualization, you should see that the arrow indicating the acceleration from the right IMU rapidly changes over time, while the arrow visualizing the left IMU points constantly along the vertical axis.

Retrieving values from the sensor is done in the usual way:

.. code-block:: python

    def run_simulator(sim: sim_utils.SimulationContext, scene: InteractiveScene):
        .
        .
        .
        # Simulate physics
        while simulation_app.is_running():
            .
            .
            .
            # print information from the sensors
            print("-------------------------------")
            print(scene["imu_LF"])
            print("Received linear velocity: ", scene["imu_LF"].data.lin_vel_b)
            print("Received angular velocity: ", scene["imu_LF"].data.ang_vel_b)
            print("Received linear acceleration: ", scene["imu_LF"].data.lin_acc_b)
            print("Received angular acceleration: ", scene["imu_LF"].data.ang_acc_b)
            print("-------------------------------")
            print(scene["imu_RF"])
            print("Received linear velocity: ", scene["imu_RF"].data.lin_vel_b)
            print("Received angular velocity: ", scene["imu_RF"].data.ang_vel_b)
            print("Received linear acceleration: ", scene["imu_RF"].data.lin_acc_b)
            print("Received angular acceleration: ", scene["imu_RF"].data.ang_acc_b)

The oscillations in the values reported by the sensor are a direct result of how the sensor calculates the acceleration: through a finite-difference approximation between adjacent ground-truth velocity values reported by the simulation. We can see this in the reported result below (pay attention to the **linear acceleration**): the acceleration from the right foot is small, but not exactly zero.

.. code-block:: bash
Imu sensor @ '/World/envs/env_.*/Robot/LF_FOOT':
view type : <class 'omni.physics.tensors.impl.api.RigidBodyView'>
update period (s) : 0.0
number of sensors : 1
Received linear velocity: tensor([[ 0.0203, -0.0054, 0.0380]], device='cuda:0')
Received angular velocity: tensor([[-0.0104, -0.1189, 0.0080]], device='cuda:0')
Received linear acceleration: tensor([[ 4.8344, -0.0205, 8.5305]], device='cuda:0')
Received angular acceleration: tensor([[-0.0389, -0.0262, -0.0045]], device='cuda:0')
-------------------------------
Imu sensor @ '/World/envs/env_.*/Robot/RF_FOOT':
view type : <class 'omni.physics.tensors.impl.api.RigidBodyView'>
update period (s) : 0.0
number of sensors : 1
Received linear velocity: tensor([[0.0244, 0.0077, 0.0431]], device='cuda:0')
Received angular velocity: tensor([[ 0.0122, -0.1360, -0.0042]], device='cuda:0')
Received linear acceleration: tensor([[-0.0018, 0.0010, -0.0032]], device='cuda:0')
Received angular acceleration: tensor([[-0.0373, -0.0050, -0.0053]], device='cuda:0')
-------------------------------
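
For intuition, the acceleration estimate can be sketched as a finite difference of consecutive velocity samples. The following is a simplified illustration with hypothetical values, not the exact Isaac Lab implementation:

.. code-block:: python

    import torch

    dt = 0.005  # physics time step used by the demo script

    # Hypothetical consecutive ground-truth velocity samples of the foot body.
    v_prev = torch.tensor([[0.0240, 0.0075, 0.0428]])
    v_curr = torch.tensor([[0.0244, 0.0077, 0.0431]])

    # Finite-difference estimate: small, noisy, and generally nonzero even when
    # the body is nearly at rest.
    lin_acc = (v_curr - v_prev) / dt
    print(lin_acc)  # ~ tensor([[0.0800, 0.0400, 0.0600]])
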
.. dropdown:: Code for imu_sensor.py
:icon: code

.. literalinclude:: ../../../../source/standalone/demos/sensors/imu_sensor.py
:language: python
:linenos:
1 change: 1 addition & 0 deletions docs/source/overview/sensors/index.rst
@@ -17,4 +17,5 @@ The following pages describe the available sensors in more detail:
camera
contact_sensor
frame_transformer
imu
ray_caster
4 changes: 2 additions & 2 deletions docs/source/overview/sensors/ray_caster.rst
@@ -3,7 +3,7 @@
Ray Caster
=============

.. figure:: ../../_static/overview/overview_sensors_rc_patterns.png
.. figure:: ../../_static/overview/overview_sensors_rc_patterns.jpg
:align: center
:figwidth: 100%
:alt: A diagram outlining the basic geometry of frame transformations
@@ -20,7 +20,7 @@ Using a ray caster sensor requires a **pattern** and a parent xform to be attach

Notice that the units on the pattern config are in degrees! Also, we enable visualization here to explicitly show the pattern in the rendering, but this is not required and should be disabled for performance tuning.

.. figure:: ../../_static/overview/overview_sensors_rc_visualizer.png
.. figure:: ../../_static/overview/overview_sensors_rc_visualizer.jpg
:align: center
:figwidth: 100%
:alt: Lidar Pattern visualized
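
As a reminder of the degree convention, here is a small illustrative sketch (not the library's pattern implementation) of turning a fan of angles given in degrees into unit ray directions:

.. code-block:: python

    import math

    # Illustrative only: a horizontal fan of rays whose angles are specified in degrees.
    angles_deg = [-30.0, -15.0, 0.0, 15.0, 30.0]
    directions = [(math.cos(math.radians(a)), math.sin(math.radians(a)), 0.0) for a in angles_deg]
    print(directions)
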
144 changes: 144 additions & 0 deletions source/standalone/demos/sensors/imu_sensor.py
@@ -0,0 +1,144 @@
# Copyright (c) 2022-2025, The Isaac Lab Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

"""Launch Isaac Sim Simulator first."""

import argparse

from omni.isaac.lab.app import AppLauncher

# add argparse arguments
parser = argparse.ArgumentParser(description="Tutorial on adding sensors on a robot.")
parser.add_argument("--num_envs", type=int, default=1, help="Number of environments to spawn.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()

# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app

"""Rest everything follows."""

import torch

import omni.isaac.lab.sim as sim_utils
from omni.isaac.lab.assets import AssetBaseCfg
from omni.isaac.lab.scene import InteractiveScene, InteractiveSceneCfg
from omni.isaac.lab.sensors import ImuCfg
from omni.isaac.lab.utils import configclass

##
# Pre-defined configs
##
from omni.isaac.lab_assets.anymal import ANYMAL_C_CFG # isort: skip


@configclass
class ImuSensorSceneCfg(InteractiveSceneCfg):
    """Design the scene with sensors on the robot."""

    # ground plane
    ground = AssetBaseCfg(prim_path="/World/defaultGroundPlane", spawn=sim_utils.GroundPlaneCfg())

    # lights
    dome_light = AssetBaseCfg(
        prim_path="/World/Light", spawn=sim_utils.DomeLightCfg(intensity=3000.0, color=(0.75, 0.75, 0.75))
    )

    # robot
    robot = ANYMAL_C_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")

    imu_RF = ImuCfg(prim_path="{ENV_REGEX_NS}/Robot/LF_FOOT", debug_vis=True)

    imu_LF = ImuCfg(prim_path="{ENV_REGEX_NS}/Robot/RF_FOOT", gravity_bias=(0, 0, 0), debug_vis=True)


def run_simulator(sim: sim_utils.SimulationContext, scene: InteractiveScene):
    """Run the simulator."""
    # Define simulation stepping
    sim_dt = sim.get_physics_dt()
    sim_time = 0.0
    count = 0

    # Simulate physics
    while simulation_app.is_running():

        if count % 500 == 0:
            # reset counter
            count = 0
            # reset the scene entities
            # root state
            # we offset the root state by the origin since the states are written in simulation world frame
            # if this is not done, then the robots will be spawned at the (0, 0, 0) of the simulation world
            root_state = scene["robot"].data.default_root_state.clone()
            root_state[:, :3] += scene.env_origins
            scene["robot"].write_root_link_pose_to_sim(root_state[:, :7])
            scene["robot"].write_root_com_velocity_to_sim(root_state[:, 7:])
            # set joint positions with some noise
            joint_pos, joint_vel = (
                scene["robot"].data.default_joint_pos.clone(),
                scene["robot"].data.default_joint_vel.clone(),
            )
            joint_pos += torch.rand_like(joint_pos) * 0.1
            scene["robot"].write_joint_state_to_sim(joint_pos, joint_vel)
            # clear internal buffers
            scene.reset()
            print("[INFO]: Resetting robot state...")
        # Apply default actions to the robot
        # -- generate actions/commands
        targets = scene["robot"].data.default_joint_pos
        # -- apply action to the robot
        scene["robot"].set_joint_position_target(targets)
        # -- write data to sim
        scene.write_data_to_sim()
        # perform step
        sim.step()
        # update sim-time
        sim_time += sim_dt
        count += 1
        # update buffers
        scene.update(sim_dt)

        # print information from the sensors
        print("-------------------------------")
        print(scene["imu_LF"])
        print("Received linear velocity: ", scene["imu_LF"].data.lin_vel_b)
        print("Received angular velocity: ", scene["imu_LF"].data.ang_vel_b)
        print("Received linear acceleration: ", scene["imu_LF"].data.lin_acc_b)
        print("Received angular acceleration: ", scene["imu_LF"].data.ang_acc_b)
        print("-------------------------------")
        print(scene["imu_RF"])
        print("Received linear velocity: ", scene["imu_RF"].data.lin_vel_b)
        print("Received angular velocity: ", scene["imu_RF"].data.ang_vel_b)
        print("Received linear acceleration: ", scene["imu_RF"].data.lin_acc_b)
        print("Received angular acceleration: ", scene["imu_RF"].data.ang_acc_b)


def main():
    """Main function."""

    # Initialize the simulation context
    sim_cfg = sim_utils.SimulationCfg(dt=0.005, device=args_cli.device)
    sim = sim_utils.SimulationContext(sim_cfg)
    # Set main camera
    sim.set_camera_view(eye=[3.5, 3.5, 3.5], target=[0.0, 0.0, 0.0])
    # design scene
    scene_cfg = ImuSensorSceneCfg(num_envs=args_cli.num_envs, env_spacing=2.0)
    scene = InteractiveScene(scene_cfg)
    # Play the simulator
    sim.reset()
    # Now we are ready!
    print("[INFO]: Setup complete...")
    # Run the simulator
    run_simulator(sim, scene)


if __name__ == "__main__":
    # run the main function
    main()
    # close sim app
    simulation_app.close()
