This repository has been archived by the owner on Apr 6, 2020. It is now read-only.

Road to BERDY

claudia-lat edited this page Jul 24, 2015 · 4 revisions

Final Goal

I will try to briefly describe the problem that BERDY is expected to solve. Given a set of measurements, we want to estimate the whole-body "state" of the robot.

State of the robot

In this dynamics estimation context, by "state" we mean all the quantities of interest, not only the "state" in the control-theoretic sense. This encompasses:

  • the "position" of the free floating robot (i.e. its joint positions and the position/orientation of a link wrt an inertial frame) q,
  • the "velocity" of the free floating robot (i.e. its joint velocities and the twist of a link wrt an inertial frame) \nu.
  • the "acceleration" of the free floating robot (i.e. its joint acceleration and the twist acceleration [or equivalently the angular acceleration and a point acceleratio] of a link wrt an inertial frame) \dot{\nu}.
  • the internal torques exerted on the joints of the robot \tau
  • the external force/torques applied on the robot (i.e. where they are applied and their intensities) f^x_i .
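As a purely illustrative sketch, these quantities could be grouped in a single container; the field names below mirror the symbols above and are an assumption, not part of any existing BERDY interface:

```python
from dataclasses import dataclass, field

import numpy as np

@dataclass
class WholeBodyState:
    """All quantities that BERDY treats as the robot "state" (hypothetical container)."""
    q: np.ndarray        # joint positions + base position/orientation
    nu: np.ndarray       # joint velocities + base twist
    dnu: np.ndarray      # joint accelerations + base twist acceleration
    tau: np.ndarray      # internal joint torques
    f_ext: dict = field(default_factory=dict)  # contact link -> external wrench (6,)

# Example: a 2-joint snapshot with one (hypothetical) contact link "l_sole"
state = WholeBodyState(
    q=np.zeros(2 + 7),    # 2 joints + base position (3) and quaternion (4)
    nu=np.zeros(2 + 6),   # 2 joints + base twist (6)
    dnu=np.zeros(2 + 6),
    tau=np.zeros(2),
    f_ext={"l_sole": np.zeros(6)},
)
```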

Considered Sensors

The sensors that we should merge in the estimation are:

  • Six Axis F/T sensors embedded in a link (measuring the internal forces f between the two parts of the link)
  • Gyroscope (measuring the angular velocity of a link wrt an inertial frame)
  • Accelerometer (measuring the proper linear acceleration wrt an inertial frame, i.e. the acceleration of a link minus the gravity acceleration)
  • Skin (measuring the normal pressure exerted by an external force on a link)

Some additional information can be modeled as "sensors" from BERDY's point of view, even if it does not come from sensors strictly speaking:

  • Orientation sensors (measuring the orientation of a link wrt a given external frame [this is not strictly speaking a sensor, but rather the output of a lower-level processing of some sensors, such as an IMU or a camera])
  • At some point we can make the reasonable assumption (given the output of the other sensors or the state of a controller) that a given link is fixed, i.e. it has zero velocity and acceleration. This is really useful information that we may want to feed into BERDY, for example to realize a basic odometry system. We still need to understand how we want to insert this information, and with which interface.
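To make the measurement models above concrete, here is a minimal sketch of the gyroscope and accelerometer equations (the gravity convention and frame names are assumptions, not part of BERDY):

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # gravity in the inertial frame (assumed convention)

def gyro_measurement(omega_link, R_sensor_link=np.eye(3)):
    """Angular velocity of the link wrt the inertial frame, expressed in the sensor frame."""
    return R_sensor_link @ omega_link

def accelerometer_measurement(a_link, R_sensor_inertial=np.eye(3)):
    """Proper acceleration: link acceleration minus gravity, expressed in the sensor frame."""
    return R_sensor_inertial @ (a_link - GRAVITY)

# A link at rest does not measure zero: the accelerometer reads minus gravity.
at_rest = accelerometer_measurement(np.zeros(3))  # -> [0, 0, 9.81]
```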

Intermediate goals

I think it is useful to define some intermediate goals to reach before the final goal of the global estimation:

Note that goals T1, T2, T3 are all related to improving the joint torque estimation (hence the T) and do not require any kind of estimation of the orientation or position of the floating base.

Torque estimation related Goals

Goal T0: Replicate wholeBodyDynamicsTree algorithm with external forces at the end effectors

Given the measurement of:

  • the embedded Six Axis F/T sensors
  • an IMU (plus the numerical derivative of the angular velocity)
  • the joint encoders (plus the numerical derivatives for getting angular velocities and accelerations)
  • the assumption that the external wrenches are applied at the end effectors

Return:

  • the intensity of the external wrenches
  • the value of the joint torques

This is exactly the algorithm implemented (but with a hardcoded assumption on the number of IMUs) in the TorqueEstimationTree class used by wholeBodyDynamicsTree. Replicating the same results within the BERDY framework will nevertheless be an important first step.
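For reference, the "numerical derivatives" of the encoder signals mentioned above can be as simple as finite differences (a sketch; a real pipeline would typically also low-pass filter the result, which is exactly the noise/delay problem Goal T2 aims to remove):

```python
import numpy as np

def finite_difference(samples, dt):
    """Central differences in the interior, one-sided differences at the boundaries."""
    return np.gradient(np.asarray(samples, dtype=float), dt, axis=0)

# Joint position ramping at 10 rad/s, sampled at 100 Hz:
q = np.array([0.0, 0.1, 0.2, 0.3])
dq = finite_difference(q, dt=0.01)    # ~10 rad/s at every sample
ddq = finite_difference(dq, dt=0.01)  # ~0 rad/s^2 (noise-free case)
```

On real, noisy encoder data the second derivative amplifies noise badly, which is why the later goals replace it with accelerometer measurements.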

Goal T1: Replicate wholeBodyDynamicsTree algorithm

Given the measurement of:

  • the embedded Six Axis F/T sensors
  • an IMU (plus the numerical derivative of the angular velocity)
  • the joint encoders (plus the numerical derivatives for getting angular velocities and accelerations)
  • the location of the external wrenches (from the processed skin)

Return:

  • the intensity of the external wrenches
  • the value of the joint torques

This is exactly the algorithm implemented (but with a hardcoded assumption on the number of IMUs) in the TorqueEstimationTree class used by wholeBodyDynamicsTree. Replicating the same results within the BERDY framework will nevertheless be an important first step.

Goal T2: Torque estimation with accelerometers

Given the measurement of:

  • the embedded Six Axis F/T sensors
  • an IMU (plus the numerical derivative of the angular velocity)
  • an arbitrary number of accelerometers
  • the joint encoders (plus the numerical derivatives for getting angular velocities)
  • the location of the external wrenches (from the processed skin)

Return:

  • the intensity of the external wrenches
  • the value of the joint torques
  • the joint accelerations

With respect to Goal T1, we stop using the joint accelerations obtained through numerical derivation (which are inherently noisy and delayed) and instead perform the torque estimation using at least some accelerometers.

Goal T3: Torque estimation with accelerometers & gyros

Given the measurement of:

  • the embedded Six Axis F/T sensors
  • an arbitrary number of accelerometers
  • an arbitrary number of gyroscopes
  • the joint encoders (plus the numerical derivatives for getting angular velocities ==> only if necessary)
  • the location of the external wrenches (from the processed skin)

Return:

  • the intensity of the external wrenches
  • the value of the joint torques
  • the joint accelerations
  • the joint velocities

With respect to Goal T2, we use the EKF result submitted to IROS to exploit redundancy also for the estimation of joint velocities, not just for the estimation of the "dynamics" variables.

Floating base position/orientation estimate

Once we have completed Goal T3, we should have a working implementation of both the MAP estimation of the dynamics variables (d) and the estimation of positions and velocities using the EKF. We then have everything ready to start estimating the position and orientation of the floating base of the robot.
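As a reminder of the machinery involved, in the linear-Gaussian case the MAP estimation of the dynamics variables d reduces to a regularized least-squares problem. The sketch below assumes a generic measurement model y = Y d + e with Gaussian noise and a Gaussian prior on d; how Y and the covariances are actually built is BERDY-specific and not shown here:

```python
import numpy as np

def map_estimate(Y, y, Sigma_y, mu_d, Sigma_d):
    """MAP estimate of d for y = Y d + e, e ~ N(0, Sigma_y), prior d ~ N(mu_d, Sigma_d)."""
    Sy_inv = np.linalg.inv(Sigma_y)
    Sd_inv = np.linalg.inv(Sigma_d)
    A = Y.T @ Sy_inv @ Y + Sd_inv          # posterior information matrix
    b = Y.T @ Sy_inv @ y + Sd_inv @ mu_d   # information vector
    return np.linalg.solve(A, b)

# With precise measurements and a weak prior, the estimate follows the data:
d_hat = map_estimate(
    Y=np.eye(2), y=np.array([1.0, 2.0]),
    Sigma_y=1e-6 * np.eye(2),
    mu_d=np.zeros(2), Sigma_d=1e6 * np.eye(2),
)  # -> approximately [1, 2]
```

The same formula also shows why measurement redundancy helps: each extra sensor row in Y sharpens the posterior information matrix A.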

Goal FB1

Given the measurement of:

  • the embedded Six Axis F/T sensors
  • an arbitrary number of accelerometers
  • an arbitrary number of gyroscopes
  • the joint encoders (plus the numerical derivatives for getting angular velocities ==> only if necessary)
  • the location of the external wrenches (from the processed skin)
  • an orientation sensor of a link (i.e. an output of the low-level estimate from an IMU)

Return:

  • the intensity of the external wrenches
  • the value of the joint torques
  • the joint accelerations
  • the joint velocities
  • the orientation of the base

Using the BERDY machinery to estimate the orientation of the floating base may seem overkill, especially given that we assume an orientation sensor is available, but it is useful to exploit the measurement redundancy to remove constant offsets in the estimation.

Goal FB2

Given the measurement of:

  • the embedded Six Axis F/T sensors
  • an arbitrary number of accelerometers
  • an arbitrary number of gyroscopes
  • the joint encoders (plus the numerical derivatives for getting angular velocities ==> only if necessary)
  • the location of the external wrenches (from the processed skin)
  • an orientation sensor of a link (i.e. an output of the low-level estimate from an IMU)
  • information on an absolute position (initially this can come from knowing which link can be assumed fixed on the ground; in the future it could come from vision)

Return:

  • the intensity of the external wrenches
  • the value of the joint torques
  • the joint accelerations
  • the joint velocities
  • the orientation of the base wrt an inertial frame
  • the position of the base origin wrt an inertial frame

This scenario will be quite similar to the final goal stated at the beginning.