
Landing page with tools and frameworks for a new class of urban routing games, where fleets of collaborative autonomous vehicles (CAVs) learn to make better route-choice decisions in urban traffic systems.

The core elements are:

  1. RouteRL a multi-agent reinforcement learning framework for modeling and simulating the collective route choices of humans and autonomous vehicles - SoftwareX


  2. URB Urban Routing Benchmark: benchmarking MARL algorithms on fleet routing tasks - NeurIPS 2025



With these you may run a standard task, such as:

In the town of Nemours, inhabited only by human drivers, at some point a given share of drivers mutate to CAVs and delegate their routing decisions to algorithms. Then, for a period of time, the CAV agents develop routing strategies to minimize their delay (e.g. using MARL). This process (both the learning and the new state) affects traffic and all its users (human and autonomous vehicles).


RouteRL can run this task for an arbitrary city with arbitrary demand (most likely from the predefined case studies) and configuration. You may plug in an algorithm of your choice (your own or one from TorchRL) and analyze the results to draw conclusions. Or you may compete in URB to dominate the leaderboard with your best-performing algorithm tested across a variety of tasks.
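A minimal sketch of what running such an experiment could look like in Python, assuming RouteRL exposes a PettingZoo-style multi-agent environment (the import path, the `TrafficEnvironment` name, its constructor arguments and the start/stop calls below are illustrative assumptions, not the confirmed RouteRL API; check the RouteRL documentation for the real interface):

```python
# Illustrative only: RouteRL-specific names below are assumptions, not the confirmed API.
from routerl import TrafficEnvironment  # assumed import path

# Hypothetical setup: 100 travellers, 10 of which later mutate from humans to CAVs.
env = TrafficEnvironment(
    agent_parameters={"num_agents": 100, "new_machines_after_mutation": 10},  # assumed argument names
)
env.start()   # assumed: launches the underlying SUMO simulation
env.reset()

# Generic PettingZoo AEC-style interaction loop with a random baseline policy;
# in practice the CAV agents would be trained with a (MA)RL algorithm, e.g. from TorchRL.
for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    action = None if termination or truncation else env.action_space(agent).sample()
    env.step(action)

env.close()  # shut the environment down (clean-up method name may differ)
```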


🔖 For an overview of the scientific contributions and societal impact, see the COeXISTENCE group web page.

🫵 To collaborate, mail us or see the contribution guidelines in the respective repos.

👩‍🎓 Prospective students, PhDs or visiting scholars are welcome - please mail Rafał Kucharski

🏃‍♀️ In the typical use-case:

  • You import the road network of a given urban area from OpenStreetMap
  • You generate a demand pattern, where each agent is specified with its own traits and travel demand $(o_i, d_i, \tau_i)$
  • You control your experiment with a .json file specifying the details of the conducted experiment (or set of experiments).
  • You specify your human behaviour models to accurately reproduce how human drivers select routes.
  • You generate a choice set of paths for each agent to select from.
  • You connect with the SUMO traffic simulator, used as the environment to compute travel costs.
  • You run $n$ days of human learning (SUMO days), hoping the system will stabilize in the proximity of the Wardrop User Equilibrium (sketched after this list).
  • You introduce mutation and replace some human agents with AVs.
  • You choose a reinforcement learning algorithm for each agent by defining rewards, observations and hyperparameters.
  • You train your algorithms until they find suitable policies.
  • You roll out the trained policies and observe the impact of the new routing on the system.
  • You further allow humans to adapt to the actions of the AVs, and allow the AVs to refine their policies.
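For reference, the Wardrop User Equilibrium mentioned above is the standard condition that no driver can reduce their travel cost by unilaterally switching routes; a compact textbook formulation (not RouteRL-specific notation) is

$$
f_k \left( c_k(f) - \min_{l \in K_{od}} c_l(f) \right) = 0, \qquad c_k(f) \ge \min_{l \in K_{od}} c_l(f), \qquad \forall k \in K_{od},
$$

where $f_k$ is the flow on path $k$ between an origin-destination pair $(o, d)$, $K_{od}$ is its path set, and $c_k(f)$ is the path's travel cost under the path-flow vector $f$: every used path carries the minimal cost, and no unused path is cheaper.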

📜 The complete list of available software (works-in-progress, sandboxes, discontinued projects or side quests):

  1. JanuX a tool for generating sets of path options in directed graphs, designed for efficient routing or creating path options for custom requirements (see the conceptual sketch after this list).


  2. Coalition formation a repo where we demonstrate (for the first time) that CAVs may form exclusive routing coalitions in traffic.
  3. General Decision Model a framework to simulate the decision process of humans who may join a CAV fleet.
  4. RoutingZOO a simulation platform where virtual drivers experiment with routing strategies to navigate from origins to destinations in dense urban networks.
  5. Wardropian Cycles a concept bridging System Optimum and User Equilibrium assignment in a day-to-day context.
  6. parcour an early prototype of RouteRL by Onur Akman.
  7. BottleCOEX a lightweight simulation of the coexistence of CAVs and human drivers in two-route bottleneck scenarios with a macroscopic traffic model.
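To illustrate the kind of path-set generation that JanuX targets, here is a conceptual sketch using networkx (this is not JanuX's actual API; the toy graph, edge weights and value of k are made up for illustration):

```python
# Conceptual sketch: build a choice set of k alternative paths in a directed,
# weighted graph. JanuX's real interface may differ.
from itertools import islice
import networkx as nx

# Hypothetical toy network; edge weights stand in for travel times.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("o", "a", 4.0), ("o", "b", 2.0),
    ("a", "d", 3.0), ("b", "a", 1.0),
    ("b", "d", 6.0),
])

def k_shortest_paths(graph, source, target, k, weight="weight"):
    """Return up to k simple paths in order of increasing total weight."""
    return list(islice(nx.shortest_simple_paths(graph, source, target, weight=weight), k))

# Choice set of up to 3 path options between origin "o" and destination "d".
print(k_shortest_paths(G, "o", "d", k=3))
# [['o', 'b', 'a', 'd'], ['o', 'a', 'd'], ['o', 'b', 'd']]
```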



Credits

URB is part of COeXISTENCE (ERC Starting Grant, grant agreement No 101075838) and is the joint work of a team at Jagiellonian University in Kraków, Poland: Ahmet Onur Akman, Anastasia Psarou, Łukasz Gorczyca, Michał Hoffmann, Lukasz Kowalski, Paweł Gora, and Grzegorz Jamróz, within the research group of Rafał Kucharski.


Pipeline at a glance (from here)
