
Planning under uncertainty with Bayesian nonparametric models

Authors :
Jonathan P. How.
Massachusetts Institute of Technology. Department of Aeronautics and Astronautics.
Klein, Robert H. (Robert Henry)
Publication Year :
2014

Abstract

Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 111-119).

Autonomous agents are increasingly called upon to perform challenging tasks in complex settings with little information about the underlying environment dynamics. To complete such tasks successfully, the agent must learn from its interactions with the environment. Many existing techniques make assumptions about problem structure to remain tractable, such as limiting the class of possible models or fixing the expressive power of the model. Complicating matters, in many scenarios the environment exhibits multiple underlying sets of dynamics; in these cases, most existing approaches either assume the number of underlying models is known a priori or ignore the possibility of multiple models altogether. Bayesian nonparametric (BNP) methods provide the flexibility to address both of these problems, but their high inference complexity has limited their adoption. This thesis provides several methods for tractable planning under uncertainty using BNP models. The first is Simultaneous Clustering on Representation Expansion (SCORE), for learning Markov Decision Processes (MDPs) that exhibit an underlying multiple-model structure; SCORE addresses the co-dependence between observation clustering and model expansion. The second contribution is a real-time, non-myopic, risk-aware planning solution for camera surveillance scenarios in which the number of underlying target behaviors and their parameterization are unknown. A BNP model captures target behaviors, and a camera-allocation solution is presented that reduces uncertainty only as needed to perform a mission. The final contribution is RLPy, a reinforcement learning (RL) software framework intended to promote collaboration and speed innovation in the RL community. RLPy provides a library of learning agents, function approximators, and problem domains for performing RL experiments, along with a suite of tools that help automate tasks throughout the experiment pipeline, from initial prototyping th…

by Robert H. Klein.
S.M.
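A key property of the BNP models the abstract refers to is that the number of clusters (e.g., underlying target behaviors) is not fixed in advance but inferred from the data. A minimal illustrative sketch of this idea, using a Chinese restaurant process prior (a standard BNP construction, not the thesis's specific model; the function name and parameters are hypothetical):

```python
import random

def crp_partition(n_points, alpha, seed=0):
    """Sample a partition of n_points items from a Chinese restaurant
    process with concentration parameter alpha. Illustrates how a BNP
    prior lets the number of clusters grow with the data rather than
    being specified a priori."""
    rng = random.Random(seed)
    assignments = []  # cluster index assigned to each item
    counts = []       # current size of each cluster
    for i in range(n_points):
        # Item i joins existing cluster k with probability counts[k] / (i + alpha),
        # or starts a new cluster with probability alpha / (i + alpha).
        r = rng.uniform(0, i + alpha)
        running = 0.0
        for k, c in enumerate(counts):
            running += c
            if r < running:
                assignments.append(k)
                counts[k] += 1
                break
        else:
            # r fell in the remaining alpha mass: open a new cluster
            assignments.append(len(counts))
            counts.append(1)
    return assignments, counts

assignments, counts = crp_partition(100, alpha=1.0)
print(len(counts))  # number of clusters discovered; grows roughly as alpha * log(n)
```

The expected number of clusters grows logarithmically with the number of observations, which is why such priors suit settings where the count of underlying behavior models is unknown.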

Details

Database :
OAIster
Notes :
119 pages, application/pdf, English
Publication Type :
Electronic Resource
Accession number :
edsoai.on1140045156
Document Type :
Electronic Resource