This tutorial walks you through the process of running traffic simulations in Flow with trainable rllab-powered agents. Autonomous agents will learn to maximize a certain reward over the rollouts, using the rllab library [1]. Simulations of this form illustrate the ability of RL agents to influence the traffic of a fleet of human-driven vehicles and thereby make the whole fleet more efficient (with respect to some given metrics).
In this exercise, we simulate an initially perturbed single-lane ring road into which we introduce a single autonomous vehicle. We witness that, after some training, the autonomous vehicle learns to dissipate the formation and propagation of "phantom jams", which form when only human driver dynamics are involved.
All simulations, both in the presence and absence of RL, require two components: a scenario and an environment. Scenarios describe the features of the transportation network used in simulation. This includes the positions and properties of nodes and edges constituting the lanes and junctions, as well as properties of the vehicles, traffic lights, inflows, etc., in the network. Environments, on the other hand, initialize, reset, and advance simulations, and act as the primary interface between the reinforcement learning algorithm and the scenario. Moreover, custom environments may be used to modify the dynamical features of a scenario. Finally, in the RL case, it is in the environment that the state/action spaces and the reward function are defined.
Flow contains a plethora of pre-designed scenarios used to replicate highways, intersections, and merges in both closed and open settings. All these scenarios are located in flow/scenarios. For this exercise, which involves a single-lane ring road, we will use the scenario LoopScenario.
The scenario mentioned at the start of this section, as well as all other scenarios in Flow, is parameterized by the following arguments, previewed in the sketch below:
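This sketch simply mirrors the constructor call made at the end of this section; each of these inputs is built in the cells that follow, so nothing here is new beyond the later cell:

LoopScenario(name=name,                      # a string naming the scenario
             vehicles=vehicles,              # a VehicleParams object describing the vehicles (section 2.2)
             net_params=net_params,          # network-specific parameters (NetParams)
             initial_config=initial_config,  # initial placement of vehicles (InitialConfig)
             traffic_lights=traffic_lights)  # traffic light properties (TrafficLightParams)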
These parameters are explained in detail in exercise 1. Moreover, all parameters excluding vehicles (covered in section 2.2) do not change from the previous exercise. Accordingly, we specify them as we have before, and leave further explanations of the parameters to exercise 1.
In [ ]:
# ring road scenario class
from flow.scenarios.loop import LoopScenario
# input parameter classes to the scenario class
from flow.core.params import NetParams, InitialConfig
# name of the scenario
name = "training_example"
# network-specific parameters
from flow.scenarios.loop import ADDITIONAL_NET_PARAMS
net_params = NetParams(additional_params=ADDITIONAL_NET_PARAMS)
# initial configuration to vehicles
initial_config = InitialConfig(spacing="uniform", perturbation=1)
# traffic lights (empty)
from flow.core.params import TrafficLightParams
traffic_lights = TrafficLightParams()
The VehicleParams class stores state information on all vehicles in the network. This class is used to identify the dynamical features of a vehicle and whether it is controlled by a reinforcement learning agent. Moreover, information pertaining to the observations and reward function can be collected from the various "get" methods within this class.
The dynamics of vehicles in the VehicleParams class can either be depicted by sumo or by the dynamical methods located in flow/controllers. For human-driven vehicles, we use the IDM model for acceleration behavior, with exogenous Gaussian acceleration noise with a standard deviation of 0.2 m/s² to induce perturbations that produce stop-and-go behavior. In addition, we use the ContinuousRouter routing controller so that the vehicles maintain their routes within the closed network.
As we have done in exercise 1, human-driven vehicles are defined in the VehicleParams class as follows:
In [ ]:
# vehicles class
from flow.core.params import VehicleParams
# vehicles dynamics models
from flow.controllers import IDMController, ContinuousRouter
vehicles = VehicleParams()
vehicles.add("human",
             acceleration_controller=(IDMController, {}),
             routing_controller=(ContinuousRouter, {}),
             num_vehicles=21)
The above addition to the VehicleParams class only accounts for 21 of the 22 vehicles that are placed in the network. We now add an additional trainable autonomous vehicle whose actions are dictated by an RL agent. This is done by specifying an RLController as the acceleration controller of the vehicle.
In [ ]:
from flow.controllers import RLController
Note that this controller serves primarily as a placeholder that marks the vehicle as a component of the RL agent; lane-changing and routing actions can also be assigned to this vehicle by the RL agent.
We finally add the vehicle as follows, again using the ContinuousRouter to perpetually maintain the vehicle within the network.
In [ ]:
vehicles.add(veh_id="rl",
             acceleration_controller=(RLController, {}),
             routing_controller=(ContinuousRouter, {}),
             num_vehicles=1)
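To make the "get" methods mentioned in section 2.2 concrete, the minimal sketch below inspects the object we just populated. The attribute and method names used here (num_vehicles, ids, get_type) and the auto-generated identifier "rl_0" are assumptions that may differ between Flow versions:

# Illustrative sketch only; the accessor names below are assumptions and may
# differ between Flow versions.
print(vehicles.num_vehicles)      # expected: 22 (21 human-driven + 1 autonomous)
print(vehicles.ids)               # identifiers of every vehicle added so far
print(vehicles.get_type("rl_0"))  # the type of a given vehicle, e.g. "rl" (id assumed)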
With the vehicles and the remaining parameters specified, we can now create the scenario object:
In [ ]:
scenario = LoopScenario(name=name,
                        vehicles=vehicles,
                        net_params=net_params,
                        initial_config=initial_config,
                        traffic_lights=traffic_lights)
Several environments in Flow exist to train RL agents of different forms (e.g. autonomous vehicles, traffic lights) to perform a variety of tasks. The environment allows us to view the cumulative reward that simulation rollouts receive, and to specify the state and action spaces.
Environments in Flow are parametrized by three components:
SumoParams specifies simulation-specific variables, such as the length of a simulation step and whether to render the GUI when running the experiment. For this example, we consider a simulation step length of 0.1 s.
Note: for training purposes, it is highly recommended to deactivate the GUI in order to avoid slowing everything down. To do so, one just needs to specify render=False, as we do below.
In [ ]:
from flow.core.params import SumoParams
sumo_params = SumoParams(sim_step=0.1, render=False)
EnvParams specifies environment- and experiment-specific parameters that either affect the training process or the dynamics of various components within the scenario. For the environment "WaveAttenuationPOEnv", these parameters are used to dictate bounds on the accelerations of the autonomous vehicles, as well as the range of ring lengths (and accordingly network densities) the agent is trained on.
Finally, it is important to specify the horizon of the experiment here, i.e. the duration of one episode (during which the RL agent acquires data).
In [ ]:
from flow.core.params import EnvParams
env_params = EnvParams(
    # length of one rollout
    horizon=100,

    additional_params={
        # maximum acceleration of autonomous vehicles
        "max_accel": 1,
        # maximum deceleration of autonomous vehicles
        "max_decel": 1,
        # bounds on the ranges of ring road lengths the autonomous vehicle
        # is trained on
        "ring_length": [220, 270],
    },
)
Now, we have to specify our Gym environment and the RL algorithm that our agents will use. To specify the environment, one simply has to use the environment's name (a string). A list of all environment names is located in flow/envs/__init__.py. The names of the available environments can be seen below.
In [ ]:
import flow.envs as flowenvs
print(flowenvs.__all__)
We will use the environment "WaveAttenuationPOEnv", which is used to train autonomous vehicles to attenuate the formation and propagation of waves in a partially observable variable density ring road. To create the Gym Environment, the only necessary parameters are the environment name plus the previously defined variables. These are defined as follows:
In [ ]:
env_name = "WaveAttenuationPOEnv"
pass_params = (env_name, sumo_params, vehicles, env_params, net_params,
               initial_config, scenario)
We begin by creating a run_task method, which defines various components of the RL algorithm within rllab, such as the environment, the type of policy, and the policy training method.
Inside it, we create the Gym environment defined in section 3 using rllab's GymEnv wrapper.
In this experiment, we use a Gaussian MLP policy: we just need to specify its hidden-layer dimensions, (32, 32), and the environment specification. We'll use a linear feature baseline and the Trust Region Policy Optimization (TRPO) algorithm (see https://arxiv.org/abs/1502.05477):
- The batch_size parameter specifies the number of samples collected during one step of the gradient descent.
- The max_path_length parameter indicates the maximum length a rollout in the experiment may have.
- The n_itr parameter gives the number of iterations used in training the agent.
In the following cell, we regroup all the previous commands.
In [ ]:
from rllab.algos.trpo import TRPO
from rllab.baselines.linear_feature_baseline import LinearFeatureBaseline
from rllab.policies.gaussian_mlp_policy import GaussianMLPPolicy
from rllab.envs.normalized_env import normalize
from rllab.envs.gym_env import GymEnv
def run_task(*_):
    env = GymEnv(
        env_name,
        record_video=False,
        register_params=pass_params
    )
    horizon = env.horizon
    env = normalize(env)

    policy = GaussianMLPPolicy(
        env_spec=env.spec,
        hidden_sizes=(32, 32)
    )

    baseline = LinearFeatureBaseline(env_spec=env.spec)

    algo = TRPO(
        env=env,
        policy=policy,
        baseline=baseline,
        batch_size=1000,
        max_path_length=horizon,
        discount=0.999,
        n_itr=1,
    )
    algo.train()
Using the above run_task method, we execute the training process with rllab's run_experiment_lite method, in which we are able to specify:
- n_parallel: the number of cores to use for your experiment. If you set n_parallel > 1, multiple workers will sample rollouts in parallel, which results in a roughly linear speed-up.
- snapshot_mode: which specifies how frequently (i.e. for which iterations) the snapshot parameters are saved.
- mode: which can be set to local if you want to run the experiment locally, or to ec2 to launch the experiment on an Amazon Web Services instance.
- seed: which calibrates the randomness in the experiment.
- exp_prefix: a tag, or name, for your experiment.
Finally, we are ready to begin the training process.
In [ ]:
from rllab.misc.instrument import run_experiment_lite
for seed in [5]:  # , 20, 68]:
    run_experiment_lite(
        run_task,
        # Number of parallel workers for sampling
        n_parallel=1,
        # Keeps the snapshot parameters for all iterations
        snapshot_mode="all",
        # Specifies the seed for the experiment. If this is not provided, a
        # random seed will be used
        seed=seed,
        mode="local",
        exp_prefix="training_example",
    )
In [ ]: