This repository contains code for the paper "Efficient Symbolic Approaches for Quantitative Reactive Synthesis with Finite Tasks." In this work we introduce symbolic algorithms for quantitative reactive synthesis. We consider resource-constrained robotic manipulators that need to interact with a human to achieve a complex task expressed in linear temporal logic. This code implements Symbolic Value Iteration for Min-Max Reachability games with quantitative constraints as well as a Hybrid Regret-Minimizing strategy synthesis algorithm. See the paper for more details.
Clone this repo with:
git clone --recurse-submodules [email protected]:aria-systems-group/sym_quant_reactive_synth.git .
Make sure you have Docker installed. If not, follow the instructions here.
Note: The --recurse-submodules flag will automatically initialize and update each submodule in the repository. The command will throw an error if you do not have access to the private submodule cudd_and_wrapper. Please send me an email at [email protected] so that I can give you access to the dependent submodule.
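If you have already cloned the repository without the flag, you can initialize and update the submodules afterwards with the standard git command:
git submodule update --init --recursive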
- cd into the root of the project
- Build the image from the Dockerfile:
docker build -t <image_name> .
Note: the dot tells Docker to look for a Dockerfile in the current directory. Then spin up an instance of the container using the following command:
docker run -it --name <docker_container_name> <docker image name>
For volume binding:
docker run -v <HOST-PATH>:<Container-path>
For example:
docker run -it -v $PWD:/root/symbolic_planning/src --name symbolic_planning sym_planning_image
In the above command, symbolic_planning is the name of the container and sym_planning_image is the name of the Docker image.
Additionally, if you prefer a GUI and would like to edit inside or attach a container instance to VS Code (Link), then follow the instructions below:
- Make sure you have the right VS Code extensions installed:
  - Install the Docker extension
  - Install the Python extension
  - Install the Remote - Containers extension
- Now click on the Remote Explorer tab on the left and attach VS Code to a container.
- This will launch a new VS Code window attached to the container and prompt you for a location to open. The default is root, and you can just press Enter. Congrats, you have attached your container to VS Code.
All test-related scripts are available in the tests/ directory. I use Python's unittest for testing individual components of the source code. Here are some commands to run the tests:
To run a specific test package:
python3 -m unittest discover -s tests.<directory-name> -bv
To run a specific test script:
python3 -m tests.<directory-name>.<module-name> -b
To run all tests:
python3 -m unittest -bv
For more details see tests/README.md. Note, all commands must be run from <root/of/project>. Additionally, I have included two shell scripts to run the regret and non-regret unit tests. You can call these <test_script>.sh files with -v (default is False) for verbose test results.
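For example, assuming the regret test script is named run_regret_tests.sh (a hypothetical name; check the repository root for the actual script names):
./run_regret_tests.sh -v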
To run the code, first make sure that you set the correct flags to True in the config.py script. Specifically, for the Robotic Manipulation scenarios the relevant flags are listed below; a combined config.py sketch follows the optimization flags.
- REGRET_SYNTHESIS: bool = True - Set this flag to True when you want to compute regret-minimizing strategies using our ADD approach (IROS 23).
- formulas: List[str] = [<insert formula here>] - The formula, specified in LTLf or scLTL. Note that you can only input one formula and it should be passed as a list.
- SIMULATE_STRATEGY: bool = True - To roll out the computed strategy.
Some optimization flags:
- REGRET_HYBRID: bool = True - Set this flag to True to construct the graph of utility and best response explicitly (see the IROS 23 paper).
- MONOLITHIC_TR: bool = True - Set this flag to True to use a Monolithic TR (instead of a Partitioned TR) during regret strategy synthesis. The Monolithic approach has faster synthesis times than the Partitioned TR approach.
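Putting the flags together, here is a minimal sketch of what the relevant section of config.py might look like (flag names are taken from the lists above; the actual layout of the file may differ):

```python
from typing import List

# Synthesis flags
REGRET_SYNTHESIS: bool = True    # compute regret-minimizing strategies (ADD approach, IROS 23)
SIMULATE_STRATEGY: bool = True   # roll out the computed strategy

# Optimization flags
REGRET_HYBRID: bool = True       # build the graph of utility and best response explicitly
MONOLITHIC_TR: bool = True       # Monolithic TR: faster synthesis than the Partitioned TR

# Exactly one LTLf/scLTL formula, passed as a list
formulas: List[str] = ['F(p01 & p22)']
```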
Within the main.py script found in the root folder, on Line 226, you can manually set the following arguments to True to modify the output of the framework:
- verbose: bool = True - For more verbose output.
- just_adv_game: bool = True - To only play a min-max game where the human is viewed as an adversary.
- run_monitor: bool = True - Set this flag to True if you want the option to choose an action as the human player.
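For illustration only, a sketch of what setting these arguments could look like (the function name below is hypothetical; see Line 226 of main.py for the actual call site):

```python
# Hypothetical call site -- see main.py, Line 226, for the real one.
strategy = synthesize_strategy(
    verbose=True,         # more verbose output
    just_adv_game=False,  # True: only play the min-max game (human as adversary)
    run_monitor=True,     # True: you pick the human player's actions
)
```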
In all the experiments we have a Robot Region and a Human Region:
- Robot Region - Set of locations where only the robot can manipulate objects.
- Human Region - Set of locations where both the robot and the human can manipulate objects.
Refer to the pddl_files folder's README for more details about the semantics of the domain and how it is encoded in the domain and problem files.
Note on formulas: The formulas must be correctly written in the config.py script (as List[str]) for the code to run correctly. For example, the formula F(p01 & p22) indicates that "eventually box 0 should be at location 1 and box 2 should be at location 2". The predicates pij correspond to box i being placed at location j.
We follow Spot's syntax, i.e., and: &, &&; or: |, ||; implies: ->, -->; equivalent: <->, <-->; not: ~, !. For more details, click on the Help button on this website.
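For example, the following two config.py entries are equivalent spellings of the same specification:

```python
# "eventually box 0 at location 1 and box 2 at location 2", in two equivalent spellings
formulas = ['F(p01 & p22)']
formulas = ['F(p01 && p22)']
```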
To run the main script use:
python3 main.py
Note on running the code: Currently, the toolbox only outputs the strategy as a sequence of actions on the game graph. The simulator code (for strategy simulation) has not been integrated yet! I plan to finish this integration and add a command-line interface to easily set the above flags to True or False, choose domain and problem files, and input formulas. For now, we need to manually edit config.py to set the flags and write the formulas, and modify main.py to select the appropriate PDDL files.
For simple pick-and-place domain experiments, run the main.py script as mentioned above. Make sure the formula in config.py (Lines 78-82) matches the problem file name in main.py (Line 178). You can play with the specification as long as pij is a valid predicate as per the problem file.
Uncomment Lines 169-170 and comment out the other domain and problem file lines in main.py to input the correct domain and problem files. In config.py, uncomment Lines 33-34 and comment out Lines 35-36 to specify which locations are the support and top locations of the arch. The corresponding LTL formula is ['( F(p01 & p20 & p12) ) | ( F(p36 & p07 & p18) )'] and should be written in config.py before running the code.
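That is, the formula entry in config.py should read:

```python
# Build the arch at one of the two sets of locations
formulas = ['( F(p01 & p20 & p12) ) | ( F(p36 & p07 & p18) )']
```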
Experiments:
ICRA 22 - Supplementary video - Robot Demos - YouTube Link
To run the ARIA LAB construction example from the IROS23 paper, switch to this branch and just run the main script.
Note: Running these experiments will take around 30 minutes. All the experiments were run single-threaded on an Intel i5 13th-Gen 3.5 GHz CPU with 32 GB RAM. The source code is in Python, and we use a Cython-based wrapper for the CUDD package for ADD operations, MONA for LTLf-to-DFA translation, and SPOT for LTL-to-DFA translation.
More Links:
- IROS23 Presentation - YouTube Link
Experiments:
Task - Build ARIA LAB either on the left or right side
- IROS23 Exp1 - Regret-minimizing behavior where the human is cooperative - YouTube Link
- IROS23 Exp2 - Regret-minimizing behavior where the human does not cooperate - YouTube Link
- IROS23 Exp3 - Regret-minimizing behavior where the human is cooperative but intervenes twice - YouTube Link
- IROS23 Exp4 - Regret-minimizing behavior where the human is cooperative but intervenes only once - YouTube Link
- IROS23 Exp5 - Adversarial behavior where the human does not intervene - YouTube Link
- IROS23 Exp6 - Adversarial behavior where the human intervenes cooperatively. The robot still finishes the task away from the human - YouTube Link
- src/symbolic_graphs - Contains code for running the main scripts and constructing the graphs.
- src/algorithms - Contains various synthesis code.
- tests - Test scripts to check that the code for abstraction construction, strategy synthesis, and simulating the synthesized strategy/policy is functioning correctly.
- pddl_files - PDDL domains for the static and dynamic manipulator domains. Also contains grid-world examples.
- cudd_and_wrapper - A Cython-based wrapper for the CUDD decision diagram package.
- Regret_synthesis_toolbox - Source code for the explicit implementation of the min-max reachability algorithm and regret-minimizing strategy synthesis. See this paper for more information.
The PDDL files used for benchmarking the symbolic synthesis code (Fig. 5 from Experiments section) can be found here.
If you get GPG key errors for standard Ubuntu packages or when installing the SPOT library, make sure that the root partition is not full. Running sudo apt clean may resolve the issue.
In addition, the following commands should help clean up space related to Docker:
docker system df # which can show disk usage and 'Build Cache'
docker image prune # add -f or --force to not prompt for confirmation
docker container prune # add -f or --force to not prompt for confirmation
docker builder prune # remove docker build cache
Note, the above commands will permanently remove existing images and containers. Run at your own risk.
Install X11 using the sudo apt-get install xorg openbox command and run xhost + in a terminal to enable GUI forwarding. This enables visualizing the RViz simulation from within the Docker container. Please refer to ServerGUI for more info. Additionally, run the Docker container with explicit X11-forwarding instructions, like:
docker run -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --name <docker_container_name> <docker_image_name>
See this link for more details.
All contributions are welcome! All content in this repository is licensed under the MIT license.
If the code is useful in your research, and you would like to acknowledge it, please cite this paper:
@inproceedings{muvvala2023efficient,
  title={Efficient Symbolic Approaches for Quantitative Reactive Synthesis with Finite Tasks},
  author={Muvvala, Karan and Lahijanian, Morteza},
  booktitle={2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2023},
  pages={8666-8672},
  doi={10.1109/IROS55552.2023.10342496}
}
Please contact me at [email protected] if you have questions.