OpenArm Isaac Lab


Overview

This repository provides simulation and learning environments for the OpenArm robotic platform, built on NVIDIA Isaac Sim and Isaac Lab. It enables research and development in reinforcement learning (RL), imitation learning (IL), teleoperation, and sim-to-real transfer for both unimanual (single-arm) and bimanual (dual-arm) robotic systems.

What this repo offers

  • Isaac Sim models for OpenArm robots.
  • Isaac Lab training environments for RL tasks (reach, lift a cube, open a drawer).
  • Imitation learning, teleoperation interfaces, and sim-to-sim / sim-to-real transfer pipelines are currently under development and will be available soon.

This repository has been tested with:

  • Ubuntu 22.04
  • Isaac Sim v5.1.0
  • Isaac Lab v2.3.0
  • Python 3.11

Index

  • Installation Guide
  • Reinforcement Learning (RL)
  • Sim2sim
  • Sim2Real Deployment using OpenArm
  • Related links
  • License
  • Code of Conduct

Installation Guide

(Option 1) Docker installation (Linux only)

  1. Pull the minimal Isaac Lab container
docker pull nvcr.io/nvidia/isaac-lab:2.3.0
  2. Create the container (see the note after this list for opening additional shells in it)
xhost +
docker run --name isaac-lab --entrypoint bash -it --gpus all --rm -e "ACCEPT_EULA=Y" --network=host \
   -e "PRIVACY_CONSENT=Y" \
   -e DISPLAY \
   -v $HOME/.Xauthority:/root/.Xauthority \
   -v ~/docker/isaac-sim/cache/kit:/isaac-sim/kit/cache:rw \
   -v ~/docker/isaac-sim/cache/ov:/root/.cache/ov:rw \
   -v ~/docker/isaac-sim/cache/pip:/root/.cache/pip:rw \
   -v ~/docker/isaac-sim/cache/glcache:/root/.cache/nvidia/GLCache:rw \
   -v ~/docker/isaac-sim/cache/computecache:/root/.nv/ComputeCache:rw \
   -v ~/docker/isaac-sim/logs:/root/.nvidia-omniverse/logs:rw \
   -v ~/docker/isaac-sim/data:/root/.local/share/ov/data:rw \
   -v ~/docker/isaac-sim/documents:/root/Documents:rw \
   nvcr.io/nvidia/isaac-lab:2.3.0
  3. Clone the repository inside the container (its working directory is /workspace)
cd /workspace
git clone [email protected]:enactic/openarm_isaac_lab.git
  4. Install the Python package
cd openarm_isaac_lab
python -m pip install -e source/openarm
  5. Verify that the OpenArm package has been installed correctly and list all the environments it provides
python ./scripts/tools/list_envs.py
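
Note: the container is started with --rm, so it is removed as soon as you exit its shell. If you need an additional shell inside the running container (for example to start TensorBoard next to a training run), you can attach to it by the name set with --name above:

docker exec -it isaac-lab bash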

(Option 2) Local installation

It is assumed that you have created a virtual environment named env_isaaclab using Miniconda or Anaconda and that you are working within that environment.

  1. Clone the repository into your HOME directory
cd ~
git clone [email protected]:enactic/openarm_isaac_lab.git
  2. Activate the virtual environment that contains the Isaac Lab package
conda activate env_isaaclab
  3. Install the Python package
cd openarm_isaac_lab
python -m pip install -e source/openarm
  4. Verify that the OpenArm package has been installed correctly and list all the environments it provides
python ./scripts/tools/list_envs.py

Reinforcement Learning (RL)

You can train and evaluate policies on several tasks.

Replace <TASK_NAME> and <POLICY_NAME> in the commands below with one of the available combinations from the following table. The listed tasks can also be created directly from Python, as shown in the sketch after the table.

Task Description                     Task Name                       Policy Name
Reach target position                Isaac-Reach-OpenArm-v0          rsl_rl, rl_games, skrl
Lift a cube                          Isaac-Lift-Cube-OpenArm-v0      rsl_rl, rl_games, skrl
Open a cabinet's drawer              Isaac-Open-Drawer-OpenArm-v0    rsl_rl, rl_games, skrl
Reach target position (Bimanual)     Isaac-Reach-OpenArm-Bi-v0       rsl_rl, rl_games, skrl
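
The task names above are standard Gymnasium registrations, so an environment can also be built directly from Python. The following is a minimal sketch assuming Isaac Lab 2.x APIs (AppLauncher, parse_env_cfg) and that the installed package registers its tasks via an openarm.tasks module; that module name is an assumption about this repository's layout, not something confirmed by it.

# Minimal sketch: create one of the OpenArm tasks directly from Python.
# Assumes Isaac Lab 2.x and that importing openarm.tasks registers the Isaac-*-OpenArm-* tasks.
from isaaclab.app import AppLauncher

# Isaac Sim must be launched before importing anything that depends on it.
app_launcher = AppLauncher(headless=True)
simulation_app = app_launcher.app

import gymnasium as gym
import torch
from isaaclab_tasks.utils import parse_env_cfg
import openarm.tasks  # noqa: F401  (assumed module that registers the OpenArm tasks)

# Build the environment configuration and create a small batch of environments.
env_cfg = parse_env_cfg("Isaac-Reach-OpenArm-v0", num_envs=4)
env = gym.make("Isaac-Reach-OpenArm-v0", cfg=env_cfg)

obs, _ = env.reset()
zero_actions = torch.zeros(env.action_space.shape, device=env.unwrapped.device)
for _ in range(100):
    obs, rew, terminated, truncated, info = env.step(zero_actions)

env.close()
simulation_app.close()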

Train a Model

python ./scripts/reinforcement_learning/<POLICY_NAME>/train.py --task <TASK_NAME> --headless
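
For example, to train the single-arm reach task with the rsl_rl framework (task and policy names taken from the table above):

python ./scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Reach-OpenArm-v0 --headless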

Replay a Trained Model

python ./scripts/reinforcement_learning/<POLICY_NAME>/play.py --task <TASK_NAME> --num_envs 64
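
For example, to replay the reach policy trained above:

python ./scripts/reinforcement_learning/rsl_rl/play.py --task Isaac-Reach-OpenArm-v0 --num_envs 64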

Analyze logs

python -m tensorboard.main --logdir=logs

Then open your browser and go to http://localhost:6006/.
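
If port 6006 is already in use, or TensorBoard is running inside the Docker container, TensorBoard's standard flags can be used to pick a different port and bind to all interfaces:

python -m tensorboard.main --logdir=logs --port 6007 --bind_all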

Sim2sim

Coming soon...

Sim2Real Deployment using OpenArm

Coming soon...

Related links

License

Apache License 2.0

Copyright 2025 Enactic, Inc.

Code of Conduct

All participation in the OpenArm project is governed by our Code of Conduct.