
bullet_envs

[Watch a demo]

This library extends some of the original PyBullet environments provided in pybullet_envs, in order to develop and compare state representation learning algorithms with reinforcement learning.

PyBullet environments are similar to the MuJoCo environments and are fully compatible with OpenAI Gym, with the difference that PyBullet is open source.

Only the following environments are fully supported:

  • TurtlebotMazeEnv-v0
  • ReacherBulletEnv-v0
  • HalfCheetahBulletEnv-v0
  • InvertedPendulumSwingupBulletEnv-v0

Contributions

  • All environments provide camera rendering through their render method, which is wrapped in an OpenAI Gym wrapper.

  • TurtlebotMazeEnv-v0 is proposed here as a new environment, built from the original Turtlebot implemented in pybullet_robots. It includes a version where one of the walls has a randomly sampled color at each time step. The observation space corresponds to a first-person perspective camera.

  • ReacherBulletEnv-v0 has a new version with a randomly moving ball as a distractor, defined in the file reacher_distractor.xml.

Installation

Clone this repo and add its path to your PYTHONPATH environment variable:

cd <installation_path_of_your_choice>
git clone https://github.com/astrid-merckling/bullet_envs.git
cd bullet_envs
export PYTHONPATH=$(pwd):${PYTHONPATH}

You can install the dependencies with:

pip install gym==0.17.2
pip install pybullet==2.6.4
pip install opencv-python==4.1.2.30
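
As a quick sanity check (a minimal sketch, assuming the clone and installs above completed without error), you can verify that the package is importable and that its environments get registered in Gym:

python -c "import bullet_envs.__init__; print('bullet_envs is importable')"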

Usage

Example to run and visualize ReacherBulletEnv-v0 with a randomly moving ball (distractor=True), where the observation space is chosen to be the camera:

import gym

# register bullet_envs in gym
import bullet_envs.__init__

env_name = 'ReacherBulletEnv-v0'
actionRepeat = 1
maxSteps = 50
# OpenAI Gym env creation
env = gym.make('PybulletEnv-v0', env_name=env_name, renders=True, distractor=True, actionRepeat=actionRepeat,
               maxSteps=maxSteps * actionRepeat, image_size=64, display_target=True)

# running env on 5 episodes
num_ep = 5
for episode in range(num_ep):
    obs = env.reset()
    done = False
    while not done:
        # follow a random policy
        action = env.action_space.sample()
        obs, reward, done, info = env.step(action)
        # get the image observation from the camera
        obs = env.render()
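
Since opencv-python is among the dependencies, the camera frames returned by env.render() can be saved to disk for inspection. A minimal sketch, assuming obs is the RGB image array of shape (image_size, image_size, 3) with dtype uint8 returned above (the file name reacher_frame.png is illustrative):

import cv2

# env.render() is assumed to return an RGB array; OpenCV expects BGR channel order
frame_bgr = cv2.cvtColor(obs, cv2.COLOR_RGB2BGR)
cv2.imwrite('reacher_frame.png', frame_bgr)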

Example to run and visualize TurtlebotMazeEnv-v0 with a randomly sampled wall color (wallDistractor=True), where the observation space is chosen to be the first-person camera:

import gym

# register bullet_envs in gym
import bullet_envs.__init__

env_name = 'TurtlebotMazeEnv-v0'
actionRepeat = 1
maxSteps = 100
# OpenAI Gym env creation
env = gym.make(env_name, renders=True, wallDistractor=True, maxSteps=maxSteps, image_size=64, display_target=True)

# running env on 2 episodes
num_ep = 2
for episode in range(num_ep):
    obs = env.reset()
    done = False
    while not done:
        # follow a random policy
        action = env.action_space.sample()
        obs, reward, done, info = env.step(action)
        # get the image observation from the camera
        obs = env.render()
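
The same loop can also be adapted to record a short video of the first-person camera. A minimal sketch, again assuming env.render() returns an RGB uint8 array of shape (64, 64, 3) (the output file name turtlebot_episode.mp4 is illustrative):

import cv2

# collect the camera frames from one episode under a random policy (illustrative only)
frames = []
obs = env.reset()
done = False
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
    frames.append(env.render())

# write the frames to a video file; OpenCV expects BGR channel order
writer = cv2.VideoWriter('turtlebot_episode.mp4', cv2.VideoWriter_fourcc(*'mp4v'), 20, (64, 64))
for frame in frames:
    writer.write(cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
writer.release()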

See the OpenAI Gym documentation for more details on the Env class.
