## Publications

**Distributionally Robust Behavioral Cloning for Robust Imitation Learning**

Kishan Panaganti*, __Zaiyan Xu__*, Dileep Kalathil, Mohammad Ghavamzadeh

IEEE Conference on Decision and Control (CDC), 2023

[abstract][cite]

Robust reinforcement learning (RL) aims to learn a policy that can withstand uncertainties in model parameters, which often arise in practical RL applications due to modeling errors in simulators, variations in real-world system dynamics, and adversarial disturbances. This paper introduces the robust imitation learning (IL) problem in a Markov decision process (MDP) framework, where an agent learns a policy that mimics an expert demonstrator and can withstand uncertainties in model parameters, without additional online environment interactions. The agent is only provided with a dataset of state-action pairs from the expert under a single (nominal) dynamics model, without any information about the true rewards from the environment. Behavioral cloning (BC), a supervised learning method, is a powerful algorithm for the vanilla IL problem. We propose an algorithm for the robust IL problem that combines distributionally robust optimization (DRO) with BC. We call the algorithm DR-BC and show its robust performance against parameter uncertainties both in theory and in practice. We also demonstrate empirically that our approach mitigates model perturbations on several MuJoCo continuous control tasks.
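As a rough illustration of the DRO ingredient only (not the paper's actual DR-BC implementation), the worst-case expected loss over all distributions within a KL-divergence ball of radius \(\rho\) around the empirical distribution has a well-known dual form, \(\inf_{\eta > 0}\, \eta \log \mathbb{E}[e^{\ell/\eta}] + \eta\rho\), which can be approximately minimized over a grid of the scalar \(\eta\); the function name and grid below are illustrative choices:

```python
import numpy as np

def kl_dro_loss(losses, rho, etas=np.geomspace(1e-2, 1e2, 200)):
    """Upper bound on the worst-case expected loss over all distributions
    within KL-divergence rho of the empirical one, computed via the dual
    form  inf_eta  eta * log E[exp(loss / eta)] + eta * rho."""
    losses = np.asarray(losses, dtype=float)
    m = losses.max()  # shift for numerically stable log-sum-exp
    vals = [eta * np.log(np.mean(np.exp((losses - m) / eta))) + m + eta * rho
            for eta in etas]
    return min(vals)
```

Plugging per-sample BC losses into such a robust objective up-weights the hardest samples; the radius \(\rho\) interpolates between the empirical mean (\(\rho = 0\)) and the maximum loss (\(\rho \to \infty\)).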

**Bridging Distributionally Robust Learning and Offline RL: An Approach to Mitigate Distribution Shift and Partial Data Coverage**

Kishan Panaganti*, __Zaiyan Xu__*, Dileep Kalathil, Mohammad Ghavamzadeh

[abstract][arXiv][code]

The goal of an offline reinforcement learning (RL) algorithm is to learn optimal policies using historical (offline) data, without access to the environment for online exploration. One of the main challenges in offline RL is distribution shift, which refers to the difference between the state-action visitation distribution of the data-generating policy and that of the learning policy. Many recent works have used the idea of pessimism to develop offline RL algorithms and characterize their sample complexity under a relatively weak assumption of single-policy concentrability. Distinct from the offline RL literature, the area of distributionally robust learning (DRL) offers a principled framework that uses a minimax formulation to tackle model mismatch between training and testing environments. In this work, we aim to bridge these two areas by showing that the DRL approach can be used to tackle the distribution shift problem in offline RL. In particular, we propose two offline RL algorithms using the DRL framework, for the tabular and linear function approximation settings, and characterize their sample complexity under the single-policy concentrability assumption. We also demonstrate the superior performance of our proposed algorithms through simulation experiments.
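For context, the minimax formulation used in distributionally robust learning for MDPs can be sketched as follows (generic robust-MDP notation, not notation taken from the paper): the learner maximizes the worst-case value over an uncertainty set \(\mathcal{P}\) of models around the nominal one,

\[
\max_{\pi} \; \min_{P \in \mathcal{P}} \; V^{\pi}_{P},
\qquad
V^{\pi}_{P} = \mathbb{E}^{\pi}_{P}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right].
\]

The inner minimization is what provides the built-in pessimism that this work connects to the distribution-shift problem in offline RL.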

**Improved Sample Complexity Bounds For Distributionally Robust Reinforcement Learning**

__Zaiyan Xu__*, Kishan Panaganti*, Dileep Kalathil

International Conference on Artificial Intelligence and Statistics (AISTATS), 2023

[abstract][arXiv][code][poster][cite]

We consider the problem of learning a control policy that is robust against parameter mismatches between the training and testing environments. We formulate this as a distributionally robust reinforcement learning (DR-RL) problem where the objective is to learn the policy which maximizes the value function against the worst possible stochastic model of the environment in an uncertainty set. We focus on the tabular episodic learning setting where the algorithm has access to a generative model of the nominal (training) environment around which the uncertainty set is defined. We propose the Robust Phased Value Learning (RPVL) algorithm to solve this problem for the uncertainty sets specified by four different divergences: total variation, chi-square, Kullback-Leibler, and Wasserstein. We show that our algorithm achieves \(\tilde{\mathcal{O}}(|\mathcal{S}||\mathcal{A}| H^{5})\) sample complexity, which is uniformly better than the existing results by a factor of \(|\mathcal{S}|\), where \(|\mathcal{S}|\) is the number of states, \(|\mathcal{A}|\) is the number of actions, and \(H\) is the horizon length. We also provide the first-ever sample complexity result for the Wasserstein uncertainty set. Finally, we demonstrate the performance of our algorithm using simulation experiments.
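As a hedged sketch of the inner step such robust algorithms must solve (illustrative only; RPVL's actual phased procedure and the other three divergences are in the paper), the worst case over a total-variation ball of radius \(\rho\) around a nominal next-state distribution has a simple greedy solution: shift up to \(\rho\) probability mass from the highest-value states onto the lowest-value state. The function names below are illustrative, not from the paper:

```python
import numpy as np

def tv_worst_case_value(p, v, rho):
    """Worst-case expected value  min_{TV(p', p) <= rho}  p' . v,
    found by moving up to rho probability mass from the highest-value
    states to the lowest-value state."""
    p = p.astype(float)          # astype returns a fresh copy
    lo = np.argmin(v)            # destination: lowest-value state
    budget = min(rho, 1.0 - p[lo])
    for s in np.argsort(v)[::-1]:  # highest-value states first
        if budget <= 0 or s == lo:
            continue
        move = min(p[s], budget)
        p[s] -= move
        p[lo] += move
        budget -= move
    return p @ v

def robust_value_iteration(P, R, rho, gamma=0.9, iters=200):
    """Tabular robust value iteration with a TV uncertainty set of
    radius rho around each nominal row P[s, a]; P has shape (S, A, S)
    and R has shape (S, A)."""
    S, A = R.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = np.array([[R[s, a] + gamma * tv_worst_case_value(P[s, a], V, rho)
                       for a in range(A)] for s in range(S)])
        V = Q.max(axis=1)
    return V, Q.argmax(axis=1)
```

Setting \(\rho = 0\) recovers standard value iteration, and the robust value function is never larger than the nominal one.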

**Robust Reinforcement Learning Using Offline Data**

Kishan Panaganti*, __Zaiyan Xu__*, Dileep Kalathil, Mohammad Ghavamzadeh

Neural Information Processing Systems (NeurIPS), 2022

[abstract][arXiv][code][cite]

The goal of robust reinforcement learning (RL) is to learn a policy that is robust against the uncertainty in model parameters. Parameter uncertainty commonly occurs in many real-world RL applications due to simulator modeling errors, changes in the real-world system dynamics over time, and adversarial disturbances. Robust RL is typically formulated as a max-min problem, where the objective is to learn the policy that maximizes the value against the worst possible models that lie in an uncertainty set. In this work, we propose a robust RL algorithm called Robust Fitted Q-Iteration (RFQI), which uses only an offline dataset to learn the optimal robust policy. Robust RL with offline data is significantly more challenging than its non-robust counterpart because of the minimization over all models present in the robust Bellman operator. This poses challenges in offline data collection, optimization over the models, and unbiased estimation. In this work, we propose a systematic approach to overcome these challenges, resulting in our RFQI algorithm. We prove that RFQI learns a near-optimal robust policy under standard assumptions and demonstrate its superior performance on standard benchmark problems.
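The minimization inside the robust Bellman operator mentioned in the abstract can be written, in standard robust-MDP notation (a generic sketch, not the paper's exact notation), as

\[
(T Q)(s,a) \;=\; r(s,a) \;+\; \gamma \inf_{P \in \mathcal{P}(s,a)}
\mathbb{E}_{s' \sim P}\!\left[\max_{a'} Q(s', a')\right],
\]

where \(\mathcal{P}(s,a)\) is the uncertainty set of transition models at \((s,a)\). Estimating this inner infimum from offline data collected under a single nominal model is what makes the offline robust setting harder than its non-robust counterpart.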

(* denotes equal contribution)