This tutorial walks you through the process of running traffic simulations in Flow with trainable RLlib-powered agents. Autonomous agents learn to maximize a given reward over rollouts, using the RLlib library (citation) (installation instructions). Simulations of this form demonstrate the ability of RL agents to influence the traffic of a human fleet in order to make the whole fleet more efficient (with respect to some given metrics).
In this exercise, we simulate an initially perturbed single-lane ring road into which we introduce a single autonomous vehicle. We witness that, after some training, the autonomous vehicle learns to dissipate the formation and propagation of "phantom jams" which form when only human driver dynamics are involved.
All simulations, both in the presence and absence of RL, require two components: a scenario and an environment. Scenarios describe the features of the transportation network used in simulation. This includes the positions and properties of the nodes and edges constituting the lanes and junctions, as well as the properties of the vehicles, traffic lights, inflows, etc. in the network. Environments, on the other hand, initialize, reset, and advance simulations, and act as the primary interface between the reinforcement learning algorithm and the scenario. Moreover, custom environments may be used to modify the dynamical features of a scenario. Finally, in the RL case, it is in the environment that the state/action spaces and the reward function are defined.
Flow contains a plethora of pre-designed scenarios used to replicate highways, intersections, and merges in both closed and open settings. All these scenarios are located in flow/scenarios. For this exercise, which involves a single-lane ring road, we will use the scenario LoopScenario.
The scenario mentioned at the start of this section, as well as all other scenarios in Flow, are parameterized by the following arguments:
These parameters are explained in detail in exercise 1. Moreover, all parameters excluding vehicles (covered in section 2.2) do not change from the previous exercise. Accordingly, we specify them nearly as we have before, and leave further explanations of the parameters to exercise 1.
One important difference between SUMO and RLlib experiments is that, in RLlib experiments, the scenario classes are not imported, but rather called via their string names, which (for serialization and execution purposes) must be located within flow/scenarios/__init__.py. To check which scenarios are currently available, we execute the command below.
In [ ]:
import flow.scenarios as scenarios
print(scenarios.__all__)
Accordingly, to use the ring road scenario for this tutorial, we specify its (string) name as follows:
In [ ]:
# ring road scenario class
scenario_name = "LoopScenario"
Another difference between SUMO and RLlib experiments is that, in RLlib experiments, the scenario classes do not need to be defined; instead, users simply name the scenario class they wish to use. Later on, an environment setup module will import the correct scenario class based on the provided name.
In [ ]:
# input parameter classes to the scenario class
from flow.core.params import NetParams, InitialConfig
# name of the scenario
name = "training_example"
# network-specific parameters
from flow.scenarios.loop import ADDITIONAL_NET_PARAMS
net_params = NetParams(additional_params=ADDITIONAL_NET_PARAMS)
# initial configuration to vehicles
initial_config = InitialConfig(spacing="uniform", perturbation=1)
The VehicleParams class stores state information on all vehicles in the network. This class is used to identify the dynamical features of a vehicle and whether it is controlled by a reinforcement learning agent. Moreover, information pertaining to the observations and reward function can be collected from the various get methods within this class.
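As an illustration of these get methods (a minimal sketch, not part of this tutorial's training setup), an observation helper could collect the speed of every RL-controlled vehicle. The kernel-style accessors used below (env.k.vehicle.get_rl_ids, env.k.vehicle.get_speed) follow recent Flow versions; older versions expose the same getters directly on the vehicles object.
import numpy as np

def rl_vehicle_speeds(env):
    """Illustrative helper: speeds of all RL-controlled vehicles in env."""
    rl_ids = env.k.vehicle.get_rl_ids()  # IDs of RL-controlled vehicles
    return np.array([env.k.vehicle.get_speed(veh_id) for veh_id in rl_ids])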
The dynamics of vehicles in the VehicleParams class can either be depicted by SUMO or by the dynamical methods located in flow/controllers. For human-driven vehicles, we use the IDM model for acceleration behavior, with exogenous Gaussian acceleration noise with std 0.2 m/s² to induce perturbations that produce stop-and-go behavior. In addition, we use the ContinuousRouter routing controller so that the vehicles may maintain their routes in closed networks.
As we have done in exercise 1, human-driven vehicles are defined in the VehicleParams class as follows:
In [ ]:
# vehicles class
from flow.core.params import VehicleParams
# vehicles dynamics models
from flow.controllers import IDMController, ContinuousRouter
vehicles = VehicleParams()
vehicles.add("human",
acceleration_controller=(IDMController, {}),
routing_controller=(ContinuousRouter, {}),
num_vehicles=21)
The above addition to the VehicleParams class only accounts for 21 of the 22 vehicles that are placed in the network. We now add an additional trainable autonomous vehicle whose actions are dictated by an RL agent. This is done by specifying an RLController as the acceleration controller for the vehicle.
In [ ]:
from flow.controllers import RLController
Note that this controller serves primarily as a placeholder that marks the vehicle as a component of the RL agent; lane-changing and routing actions can also be specified by the RL agent for this vehicle.
We finally add the vehicle as follows, again using the ContinuousRouter to perpetually maintain the vehicle within the network.
In [ ]:
vehicles.add(veh_id="rl",
acceleration_controller=(RLController, {}),
routing_controller=(ContinuousRouter, {}),
num_vehicles=1)
Several environments in Flow exist to train RL agents of different forms (e.g. autonomous vehicles, traffic lights) to perform a variety of different tasks. The use of an environment allows us to view the cumulative reward that simulation rollouts receive, as well as to specify the state/action spaces.

Environments in Flow are parametrized by three components:
SumoParams specifies simulation-specific variables. These variables include the length of a simulation step and whether to render the GUI when running the experiment. For this example, we consider a simulation step length of 0.1 s.

Note: for training purposes, it is highly recommended to deactivate the GUI in order to avoid slowing the training down. This is done by specifying render=False, as in the cell below.
In [ ]:
from flow.core.params import SumoParams
sumo_params = SumoParams(sim_step=0.1, render=False)
EnvParams specifies environment- and experiment-specific parameters that either affect the training process or the dynamics of various components within the scenario. For the environment WaveAttenuationPOEnv, these parameters dictate bounds on the accelerations of the autonomous vehicles, as well as the range of ring lengths (and accordingly network densities) the agent is trained on.

Finally, it is important to specify here the horizon of the experiment, which is the duration of one episode (during which the RL agent acquires data).
In [ ]:
from flow.core.params import EnvParams
# Define horizon as a variable to ensure consistent use across notebook
HORIZON=100
env_params = EnvParams(
    # length of one rollout
    horizon=HORIZON,
    additional_params={
        # maximum acceleration of autonomous vehicles
        "max_accel": 1,
        # maximum deceleration of autonomous vehicles
        "max_decel": 1,
        # bounds on the ranges of ring road lengths the autonomous vehicle
        # is trained on
        "ring_length": [220, 270],
    },
)
Now, we have to specify our Gym environment and the algorithm that our RL agents will use. To specify the environment, one has to use the environment's name (a simple string). A list of all environment names is located in flow/envs/__init__.py. The names of available environments can be seen below.
In [ ]:
import flow.envs as flowenvs
print(flowenvs.__all__)
We will use the environment "WaveAttenuationPOEnv", which is used to train autonomous vehicles to attenuate the formation and propagation of waves in a partially observable variable density ring road. To create the Gym Environment, the only necessary parameters are the environment name plus the previously defined variables. These are defined as follows:
In [ ]:
env_name = "WaveAttenuationPOEnv"
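As an optional sanity check (illustrative only), you can print the additional environment parameters that WaveAttenuationPOEnv expects. The import path below matches older Flow releases and may need adjusting in newer ones.
# Print the additional parameters expected by WaveAttenuationPOEnv.
# Assumption: the module lives at flow/envs/loop/wave_attenuation.py,
# as in older Flow releases; adjust the path for newer releases.
from flow.envs.loop.wave_attenuation import ADDITIONAL_ENV_PARAMS
print(ADDITIONAL_ENV_PARAMS)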
RLlib and rllab experiments both generate a params.json file for each experiment run. For RLlib experiments, the parameters defining the Flow scenario and environment must be stored as well. As such, in this section we define the dictionary flow_params, which contains the variables required by the utility function make_create_env. make_create_env is a higher-order function which returns a function create_env that initializes a Gym environment corresponding to the Flow scenario specified.
In [ ]:
# Creating flow_params. Make sure the dictionary keys are as specified.
flow_params = dict(
    # name of the experiment
    exp_tag=name,
    # name of the flow environment the experiment is running on
    env_name=env_name,
    # name of the scenario class the experiment uses
    scenario=scenario_name,
    # simulator that is used by the experiment
    simulator='traci',
    # sumo-related parameters (see flow.core.params.SumoParams)
    sim=sumo_params,
    # environment related parameters (see flow.core.params.EnvParams)
    env=env_params,
    # network-related parameters (see flow.core.params.NetParams and
    # the scenario's documentation or ADDITIONAL_NET_PARAMS component)
    net=net_params,
    # vehicles to be placed in the network at the start of a rollout
    # (see flow.core.params.VehicleParams)
    veh=vehicles,
    # (optional) parameters affecting the positioning of vehicles upon
    # initialization/reset (see flow.core.params.InitialConfig)
    initial=initial_config,
)
First, we must import the modules required to run experiments in Ray. The json package is required to store the Flow experiment parameters in the params.json file, as is FlowParamsEncoder. Ray-related imports are also required: the PPO algorithm agent, ray.tune's experiment runner, and the environment helper methods register_env and make_create_env.
In [ ]:
import json
import ray
try:
    from ray.rllib.agents.agent import get_agent_class
except ImportError:
    from ray.rllib.agents.registry import get_agent_class
from ray.tune import run_experiments
from ray.tune.registry import register_env
from flow.utils.registry import make_create_env
from flow.utils.rllib import FlowParamsEncoder
In [ ]:
# number of parallel workers
N_CPUS = 2
# number of rollouts per training iteration
N_ROLLOUTS = 1
ray.init(redirect_output=True, num_cpus=N_CPUS)
Here, we copy and modify the default configuration for the PPO algorithm. The agent uses the specified number of parallel workers, a batch size corresponding to N_ROLLOUTS rollouts (each of which has length HORIZON steps), a discount rate $\gamma$ of 0.999, two hidden layers of size 16, Generalized Advantage Estimation with $\lambda$ of 0.97, and other parameters as set below.
Once config contains the desired parameters, a JSON string corresponding to the flow_params specified in section 3 is generated. The FlowParamsEncoder maps objects to string representations so that the experiment can be reproduced later. That string representation is stored within the env_config section of the config dictionary. Later, config is written out to the file params.json.
Next, we call make_create_env and pass in the flow_params to return a function we can use to register our Flow environment with Gym.
In [ ]:
# The algorithm or model to train. This may refer to the name of a
# built-in algorithm (e.g. RLlib's DQN or PPO), or a user-defined
# trainable function or class registered in the tune registry.
alg_run = "PPO"
agent_cls = get_agent_class(alg_run)
config = agent_cls._default_config.copy()
config["num_workers"] = N_CPUS - 1 # number of parallel workers
config["train_batch_size"] = HORIZON * N_ROLLOUTS # batch size
config["gamma"] = 0.999 # discount rate
config["model"].update({"fcnet_hiddens": [16, 16]}) # size of hidden layers in network
config["use_gae"] = True # using generalized advantage estimation
config["lambda"] = 0.97
config["sgd_minibatch_size"] = min(16 * 1024, config["train_batch_size"]) # stochastic gradient descent
config["kl_target"] = 0.02 # target KL divergence
config["num_sgd_iter"] = 10 # number of SGD iterations
config["horizon"] = HORIZON # rollout horizon
# save the flow params for replay
flow_json = json.dumps(flow_params, cls=FlowParamsEncoder, sort_keys=True,
                       indent=4)  # generating a string version of flow_params
config['env_config']['flow_params'] = flow_json # adding the flow_params to config dict
config['env_config']['run'] = alg_run
# Call the utility function make_create_env to be able to
# register the Flow env for this experiment
create_env, gym_name = make_create_env(params=flow_params, version=0)
# Register as rllib env with Gym
register_env(gym_name, create_env)
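Before launching training, it can be useful to instantiate the registered environment once and inspect its state/action spaces. A minimal sketch, assuming create_env can be called with no arguments (as in recent Flow versions); note that this starts a SUMO instance, so we also close it.
# Optional sanity check (illustrative): build one environment instance and
# print its observation and action spaces.
test_env = create_env()            # assumption: callable with no arguments
print(test_env.observation_space)
print(test_env.action_space)
test_env.terminate()               # assumption: closes the underlying SUMO instance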
In [ ]:
trials = run_experiments({
    flow_params["exp_tag"]: {
        "run": alg_run,
        "env": gym_name,
        "config": {
            **config
        },
        "checkpoint_freq": 1,  # number of iterations between checkpoints
        "checkpoint_at_end": True,  # generate a checkpoint at the end
        "max_failures": 999,
        "stop": {  # stopping conditions
            "training_iteration": 1,  # number of iterations to stop after
        },
    },
})
The simulation results are saved within the ray_results/training_example directory (we defined the experiment name training_example at the start of this tutorial). The ray_results folder is located by default in your home directory, at ~/ray_results.
You can run tensorboard --logdir=~/ray_results/training_example (install it with pip install tensorboard) to visualize the different data outputted by your simulation.
For more instructions about visualizing, please see tutorial05_visualize.ipynb.
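If you prefer to stay inside the notebook, Ray Tune also writes a progress.csv file in each trial directory. The sketch below reads the latest mean episode reward from those files; the directory layout and the episode_reward_mean column name are assumptions that may vary with your Ray version.
import glob
import os
import pandas as pd

# Read training progress directly from Ray Tune's progress.csv files.
pattern = os.path.expanduser("~/ray_results/training_example/*/progress.csv")
for csv_path in glob.glob(pattern):
    progress = pd.read_csv(csv_path)
    # last recorded mean episode reward for this trial
    print(csv_path, progress["episode_reward_mean"].iloc[-1])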
If you wish to do transfer learning, or to resume a previous training, you will need to start the simulation from a previous checkpoint. To do that, you can add a restore parameter to the experiment specification passed to run_experiments, as follows:
trials = run_experiments({
    flow_params["exp_tag"]: {
        "run": alg_run,
        "env": gym_name,
        "config": {
            **config
        },
        "restore": "/ray_results/experiment/dir/checkpoint_50/checkpoint-50",
        "checkpoint_freq": 1,
        "checkpoint_at_end": True,
        "max_failures": 999,
        "stop": {
            "training_iteration": 1,
        },
    },
})
The "restore"
path should be such that the [restore]/.tune_metadata
file exists.
There is also a "resume" parameter that you can set to True if you just wish to continue the training from a previously saved checkpoint, in case you are still training on the same experiment.
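For reference, a minimal sketch of resuming a run this way. Depending on your Ray version, resume may need to be passed as a keyword argument to run_experiments rather than inside the experiment specification; the sketch below shows the keyword-argument form.
trials = run_experiments({
    flow_params["exp_tag"]: {
        "run": alg_run,
        "env": gym_name,
        "config": {
            **config
        },
        "checkpoint_freq": 1,
        "checkpoint_at_end": True,
        "max_failures": 999,
        "stop": {
            "training_iteration": 1,
        },
    },
}, resume=True)  # continue from the last checkpoint of this same experiment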