This demo shows how to train a Pendulum agent (exciting!) with our simple density-based imitation learning baselines. DensityTrainer has a few interesting parameters, but the key ones are:
- `density_type`: governs whether density is measured on $(s, s')$ pairs (`db.STATE_STATE_DENSITY`), $(s, a)$ pairs (`db.STATE_ACTION_DENSITY`), or single states (`db.STATE_DENSITY`).
- `is_stationary`: determines whether a separate density model is used for each time step $t$ (`False`), or the same model is used for transitions at all times (`True`).
- `standardise_inputs`: if `True`, each dimension of the agent state vectors is normalised to have zero mean and unit variance over the training dataset. This is useful when the elements of the demonstration vector are on different scales, or when some elements vary too widely to be captured by the fixed kernel width (1 for the Gaussian kernel).
- `kernel`: the kernel used for non-parametric density estimation. `gaussian` and `exponential` are the best bets; see the sklearn docs for the rest.
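To make these parameters concrete, here is a minimal standalone sketch (plain scikit-learn, not the imitation API) of the underlying idea: standardise the demonstration $(s, a)$ pairs, fit a kernel density estimate with a chosen kernel and bandwidth, and score new pairs by their log-density under the demonstrations. All data and variable names below are illustrative.
In [ ]:
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.preprocessing import StandardScaler

# Illustrative demonstration data: 500 (state, action) pairs.
# Pendulum-v0 has a 3-dim state and a 1-dim action.
demo_pairs = np.random.randn(500, 3 + 1)

# standardise_inputs=True corresponds to this step: zero mean and unit
# variance per dimension, so one bandwidth is sensible for all dimensions.
scaler = StandardScaler().fit(demo_pairs)
kde = KernelDensity(kernel='gaussian', bandwidth=0.2)
kde.fit(scaler.transform(demo_pairs))

# The imitation reward for a new (s, a) pair is, conceptually, its
# log-density under the demonstration distribution.
new_pair = np.random.randn(1, 3 + 1)
print(kde.score_samples(scaler.transform(new_pair))[0])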
In [ ]:
%matplotlib inline
#%load_ext autoreload
#%autoreload 2
import pprint
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
from imitation import density_baselines as db
from imitation.data import rollout
from imitation.util import util
In [ ]:
env_name = 'Pendulum-v0'
env = util.make_vec_env(env_name, 8)
rollouts = rollout.load_trajectories("../tests/data/expert_models/pendulum_0/rollouts/final.pkl")
imitation_trainer = util.init_rl(env, learning_rate=3e-4, nminibatches=32, noptepochs=10, n_steps=2048)
density_trainer = db.DensityTrainer(
    env,
    rollouts=rollouts,
    imitation_trainer=imitation_trainer,
    density_type=db.STATE_ACTION_DENSITY,
    is_stationary=False,
    kernel='gaussian',
    kernel_bandwidth=0.2,  # found using divination & some palm reading
    standardise_inputs=True,
)
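For intuition about `is_stationary=False`, here is a rough sketch (hypothetical names, outside the imitation API) of what fitting a separate density model per time step looks like; the actual `DensityTrainer` internals may differ.
In [ ]:
from collections import defaultdict
from sklearn.neighbors import KernelDensity
import numpy as np

# Hypothetical: group (state, action) pairs by the time step at which
# they occurred, then fit one KDE per time step.
pairs_by_timestep = defaultdict(list)
for traj in np.random.randn(10, 200, 4):  # 10 fake trajectories of length 200
    for t, pair in enumerate(traj):
        pairs_by_timestep[t].append(pair)

models = {
    t: KernelDensity(kernel='gaussian', bandwidth=0.2).fit(np.stack(pairs))
    for t, pairs in pairs_by_timestep.items()
}

# A pair observed at time step t is scored by that step's model.
t, pair = 5, np.random.randn(1, 4)
print(models[t].score_samples(pair)[0])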
In [ ]:
novice_stats = density_trainer.test_policy()
print('Novice stats (true reward function):')
pprint.pprint(novice_stats)
novice_stats_im = density_trainer.test_policy(true_reward=False)
print('Novice stats (imitation reward function):')
pprint.pprint(novice_stats_im)
for i in range(100):
    density_trainer.train_policy(100000)

    trained_stats = density_trainer.test_policy()
    print(f'Trained stats (true reward function, epoch {i}):')
    pprint.pprint(trained_stats)

    trained_stats_im = density_trainer.test_policy(true_reward=False)
    print(f'Trained stats (imitation reward function, epoch {i}):')
    pprint.pprint(trained_stats_im)
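The matplotlib/seaborn imports at the top go unused in the loop above. As a minimal sketch, here is one way to record and plot mean returns across epochs. It assumes the stats dict returned by `test_policy` contains a `return_mean` key; confirm against the `pprint` output for your version.
In [ ]:
# Hypothetical logging variant of the training loop: collect the mean
# true return each epoch and plot a learning curve.
returns = []
for i in range(10):
    density_trainer.train_policy(100000)
    stats = density_trainer.test_policy()
    returns.append(stats['return_mean'])  # assumed key; check pprint output

sns.set()
plt.plot(returns)
plt.xlabel('epoch')
plt.ylabel('mean true return')
plt.title('DensityTrainer learning curve')
plt.show()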