Sample-efficient reinforcement learning with stochastic ensemble value expansion

J Buckman, D Hafner, G Tucker… - Advances in Neural Information Processing Systems, 2018 - proceedings.neurips.cc
Abstract
There is growing interest in combining model-free and model-based approaches in reinforcement learning with the goal of achieving the high performance of model-free algorithms with low sample complexity. This is difficult because an imperfect dynamics model can degrade the performance of the learning algorithm, and in sufficiently complex environments, the dynamics model will always be imperfect. As a result, a key challenge is to combine model-based approaches with model-free learning in such a way that errors in the model do not degrade performance. We propose stochastic ensemble value expansion (STEVE), a novel model-based technique that addresses this issue. By dynamically interpolating between model rollouts of various horizon lengths, STEVE ensures that the model is only utilized when doing so does not introduce significant errors. Our approach outperforms model-free baselines on challenging continuous control benchmarks with an order-of-magnitude increase in sample efficiency.
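The core idea sketched in the abstract is to weight value-expansion targets of different rollout horizons by how much an ensemble agrees on them. The snippet below is a minimal illustration of that interpolation idea, not the authors' implementation: the function name, data layout, and the specific inverse-variance weighting rule are assumptions made for this sketch.

```python
import numpy as np

def steve_style_target(horizon_targets):
    """Combine candidate Q-value targets from rollouts of different horizons.

    horizon_targets: array of shape (num_horizons, ensemble_size), where each
    row holds the targets produced by an ensemble of models/Q-functions for
    one rollout length. Hypothetical layout, for illustration only.
    """
    means = horizon_targets.mean(axis=1)             # per-horizon mean target
    variances = horizon_targets.var(axis=1) + 1e-8   # per-horizon ensemble variance

    # Down-weight horizons where the ensemble disagrees, i.e. where relying
    # on the learned dynamics model is likely to introduce error.
    weights = 1.0 / variances
    weights /= weights.sum()
    return float(np.dot(weights, means))

# Example: three horizons, five ensemble members each. The third horizon has
# high ensemble variance, so it contributes little to the blended target.
rng = np.random.default_rng(0)
targets = rng.normal(loc=[[1.0], [1.1], [0.5]],
                     scale=[[0.05], [0.1], [0.8]],
                     size=(3, 5))
print(steve_style_target(targets))
```

Under these assumptions, horizons whose targets the ensemble estimates consistently dominate the blend, while uncertain (likely model-error-prone) horizons are effectively ignored, which mirrors the behavior the abstract describes.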