Volume 13 / Issue 9


 

Focus of Attention in Reinforcement Learning

Lihong Li (Rutgers University, USA)

Vadim Bulitko (University of Alberta, Canada)

Russell Greiner (University of Alberta, Canada)

Abstract: Classification-based reinforcement learning (RL) methods have recently been proposed as an alternative to the traditional value-function based methods. These methods use a classifier to represent a policy, where the input (features) to the classifier is the state and the output (class label) for that state is the desired action. The reinforcement-learning community knows that focusing on more important states can lead to improved performance. In this paper, we investigate the idea of focused learning in the context of classification-based RL. Specifically, we define a useful notion of state importance, which we use to prove rigorous bounds on policy loss. Furthermore, we show that a classification-based RL agent may behave arbitrarily poorly if it treats all states as equally important.
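To make the classifier-as-policy idea concrete, the following is a minimal illustrative sketch, not the authors' algorithm: the "classifier" is a simple weighted nearest-neighbor rule over labeled training states, and each training state carries a hypothetical importance weight so that more important states dominate action selection. All function and variable names here are invented for illustration.

```python
# Sketch of a classification-based policy: states are feature tuples,
# class labels are actions, and each training state has an importance weight.

def train_policy(samples):
    """samples: list of (state_features, action, importance) tuples.
    A real method would fit a classifier; here we just keep the samples
    for a weighted nearest-neighbor lookup."""
    return list(samples)

def select_action(policy, state):
    """Pick the action of the most 'relevant' training state, where
    relevance = importance / (distance + 1). When distances are
    comparable, higher-importance states win."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    best = max(policy, key=lambda s: s[2] / (dist(s[0], state) + 1.0))
    return best[1]

# Two training states; the second is deemed twice as important.
policy = train_policy([((0.0, 0.0), "left", 1.0),
                       ((1.0, 1.0), "right", 2.0)])
```

Near the midpoint (0.5, 0.5) the two states are equidistant, so the importance weight breaks the tie in favor of "right"; this is the intuition the abstract points at, that treating all states as equally important can steer the policy away from what matters.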

Keywords: attention, function approximation, generalization, reinforcement learning

Categories: I.2.6, M.0, M.1