Apprenticeship Learning Using Linear Programming

Umar Syed, Robert Schapire, and Michael Bowling. Apprenticeship Learning Using Linear Programming. In Proceedings of the Twenty-Fifth International Conference on Machine Learning (ICML), pp. 1032–1039, 2008.

Download

[PDF] 

Abstract

In apprenticeship learning, the goal is to learn a policy in a Markov decision process that is at least as good as a policy demonstrated by an expert. The difficulty is that the MDP's true reward function is assumed to be unknown. We show how to frame apprenticeship learning as a linear programming problem, and show that using an off-the-shelf LP solver to solve this problem results in a substantial improvement in running time over existing methods -- up to two orders of magnitude faster in our experiments. Additionally, our approach produces stationary policies, while all existing methods for apprenticeship learning output policies that are "mixed", i.e., randomized combinations of stationary policies. The technique used is general enough to convert any mixed policy to a stationary policy.
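As a rough illustration of the idea (not the paper's exact formulation), the LP can be posed over the occupancy measure x[s, a] of a policy: maximize the worst-case margin B subject to B <= F_i . x - mu_E[i] for every reward feature i, plus the Bellman flow constraints that make x a valid occupancy measure. A stationary policy then falls out by normalizing x per state. The sketch below, using SciPy's generic `linprog` solver, assumes a small tabular MDP; all names (`lpal_policy`, the shapes of `P`, `F`, `mu_E`, `alpha`) are hypothetical choices for this example.

```python
import numpy as np
from scipy.optimize import linprog

def lpal_policy(P, F, mu_E, alpha, gamma=0.9):
    """Sketch of an LP for apprenticeship learning (assumed formulation).

    P:     transition tensor, P[s, a, s2] = Pr(s2 | s, a)
    F:     feature matrix, F[i, s, a] = value of reward feature i at (s, a)
    mu_E:  expert feature expectations, one entry per feature
    alpha: initial state distribution
    Returns (pi, res): a stationary policy pi[s, a] and the solver result.
    """
    S, A, _ = P.shape
    k = F.shape[0]
    n = S * A  # occupancy-measure variables x[s, a], flattened row-major
    # Augmented variables [x, B]; maximizing B means minimizing -B.
    c = np.zeros(n + 1)
    c[-1] = -1.0
    # Margin constraints: B <= F_i . x - mu_E[i]  =>  -F_i . x + B <= -mu_E[i].
    A_ub = np.zeros((k, n + 1))
    A_ub[:, :n] = -F.reshape(k, n)
    A_ub[:, -1] = 1.0
    b_ub = -np.asarray(mu_E, dtype=float)
    # Bellman flow constraints: for each state s,
    #   sum_a x[s, a] - gamma * sum_{s2, a2} P[s2, a2, s] * x[s2, a2] = alpha[s].
    A_eq = np.zeros((S, n + 1))
    for s in range(S):
        for a in range(A):
            A_eq[s, s * A + a] += 1.0
        for s2 in range(S):
            for a2 in range(A):
                A_eq[s, s2 * A + a2] -= gamma * P[s2, a2, s]
    b_eq = np.asarray(alpha, dtype=float)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n + [(None, None)])
    x = res.x[:n].reshape(S, A)
    # Normalize the occupancy measure per state to get a stationary policy.
    pi = x / np.maximum(x.sum(axis=1, keepdims=True), 1e-12)
    return pi, res
```

Because the optimization variable is the occupancy measure itself, the solution directly encodes a single stationary policy, in contrast to methods that return a randomized mixture of policies.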

BibTeX

@InProceedings(08icml-apprenticeship,
  Title = "Apprenticeship Learning Using Linear Programming",
  Author = "Umar Syed and Robert Schapire and Michael Bowling",
  Booktitle = "Proceedings of the Twenty-Fifth International Conference on Machine Learning (ICML)",
  Year = "2008",
  Pages = "1032--1039",
  AcceptRate = "27\%",
  AcceptNumbers = "155 of 583"
)

Generated by bib2html.pl (written by Patrick Riley) on Fri Feb 13, 2015 15:54:28