
Can Q-learning (and SARSA) be directly used in a Partially Observable Markov Decision Process (POMDP)? If not, why not? My intuition is that the policies learned will be terrible because of partial observability. Are there ways to transform these algorithms so that they can be easily used in a POMDP?

DrMcCleod
drerD
  • I think the answers so far are missing the fact that classifying a problem as a POMDP is not a simple binary choice. There are both types and degrees of POMDP. For instance, OpenAI Gym's [LunarLander-v2](https://gym.openai.com/envs/LunarLander-v2/) is strictly a POMDP, as the agent does not observe the mountains, which vary on each episode and can have a strong effect on the outcome. However, a straightforward DQN can solve it well. Do you have a specific problem in mind when asking this question? Are you interested in these degrees of POMDP, or should we assume a difficult one, like Poker? – Neil Slater Apr 03 '19 at 09:33
  • In the paper [Deep Recurrent Q-Learning for Partially Observable MDPs](https://arxiv.org/pdf/1507.06527.pdf), the DQN (an extension of Q-learning) is further extended to handle the partially observed nature of a single frame of an Atari 2600 game. – Philip Raeisghasem Apr 03 '19 at 09:49
  • @NeilSlater What do you mean by "degrees"? Do you mean that full information about states is sometimes available even in the context of POMDPs? I don't see how this is strictly related to the question. – nbro Apr 03 '19 at 09:58
  • @nbro: I mean there is more than one way for a system to be a POMDP. Philip linked a paper above that I was going to refer to as well - one way for an environment to be a POMDP is to have states that do not summarise history fully, but where the actual state is relatively easy to infer from history - and you can see that DQN can be extended using an RNN in place of a regular NN, without many other changes, for that kind of environment. There are other ways that a system can be a POMDP; for example, in some games, the environment will use additional knowledge *against* the agent, which is harder to solve. – Neil Slater Apr 03 '19 at 11:44
  • @NeilSlater Yes, I am aware of this. I added a paragraph to my answer that is related to this. – nbro Apr 03 '19 at 11:45
  • @nbro: That's fine. It would be hard to cover all the ways a system can be a POMDP (and would require a different sort of answer). In quite a few cases, extensions to DQN may not even be required, or can be straightforward. Which is why I am asking OP for clarification. – Neil Slater Apr 03 '19 at 11:47
  • @NeilSlater I don't think the question needs more clarification. In general, Q-learning assumes that the current state is Markov and accessible, but this is not the case in a POMDP, so Q-learning (the tabular one, at least) shouldn't be directly applicable to a POMDP. Nowadays, people use NNs to represent the Q functions, so a lot of approximations can be done, and Q-learning or variations of it are applied to partially observable problems/environments (where some engineering is done in order to build/represent artificial states). – nbro Apr 03 '19 at 11:50
  • @nbro: There's more to it than that, and I think clarification would help. – Neil Slater Apr 03 '19 at 11:51
  • @NeilSlater I think you brought up a good point that I wasn't aware of. I would then like to know how applicable Q-learning and SARSA are to each type and degree of POMDP. – drerD Apr 03 '19 at 16:59

1 Answer


The usual $Q$-learning and SARSA algorithms (as presented in *Reinforcement Learning: An Introduction*) use (and update) a function of a state $s$ and an action $a$, denoted $Q(s, a)$. These algorithms assume that the current state $s$ is known. However, in a POMDP, at each time step, the agent does not know the current state; it only maintains a "belief" (which, mathematically, is represented as a probability distribution) about what the current state might be, so it cannot maintain (an approximation of) the function $Q(s, a)$. Hence, the usual Q-learning and SARSA algorithms shouldn't be directly applicable to a POMDP.
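
To make the dependence on the true state concrete, here is a minimal sketch of tabular Q-learning (the environment interface and names are hypothetical): the update indexes the Q-table by the current and next states, which is exactly the information a POMDP agent does not observe directly.

```python
import numpy as np

# Minimal tabular Q-learning sketch (hypothetical env with reset()/step()).
# The update indexes the table by the *true* states s and s_next, which is
# exactly what a POMDP agent does not get to observe directly.
def q_learning(env, n_states, n_actions, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = env.reset()                      # assumes the true Markov state is returned
        done = False
        while not done:
            # epsilon-greedy action selection based on the current state s
            if np.random.rand() < eps:
                a = np.random.randint(n_actions)
            else:
                a = int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)    # assumes the next true state is returned
            # with only a partial observation, there is no well-defined row to update
            target = r + (0.0 if done else gamma * np.max(Q[s_next]))
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q
```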

However, $Q$-learning is often used in contexts where observations emitted by the environment (or transformations of the raw observations) are used to build the current state (which is then assumed to be Markov, even if it is not). For example, in the original DQN, the action taken at the current step, the raw observation, and the reward emitted by the environment (after this action is taken) are combined to produce the current (assumed Markov) state. The way the action, the reward, and the observation are combined might not be sufficient to fully describe the current state (which might not even be Markov).
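
As an illustration of this state-building step, here is a small sketch of the common frame-stacking trick used by Atari-style DQN agents: the last few raw observations are stacked and treated as if the stack were the Markov state. The class name and interface below are made up for this example.

```python
from collections import deque
import numpy as np

# Illustrative observation stacker: the last k raw observations are stacked and
# fed to the Q-network as if the stack were a Markov state.
class ObservationStacker:
    def __init__(self, k=4):
        self.k = k
        self.frames = deque(maxlen=k)

    def reset(self, first_obs):
        # at the start of an episode, fill the buffer with copies of the first observation
        for _ in range(self.k):
            self.frames.append(first_obs)
        return self.state()

    def append(self, obs):
        # push the newest observation; the oldest one is dropped automatically
        self.frames.append(obs)
        return self.state()

    def state(self):
        # this stack plays the role of the "state" passed to Q(s, a; theta)
        return np.stack(list(self.frames), axis=0)
```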

In this report, *Deep Reinforcement Learning with POMDPs*, the author attempts to use $Q$-learning in a POMDP setting. He suggests representing a function, either $Q(b, a)$ or $Q(h, a)$, where $b$ is the "belief" over the states and $h$ is the history of previously executed actions, using neural networks. The resulting parameterized functions would thus be denoted by $Q(b, a; \theta)$ or $Q(h, a; \theta)$, where $\theta$ is a vector representing the parameters of the corresponding neural network. Essentially, the author uses a DQN (with an experience replay buffer and a target network), but the results are not great: the $Q$ values converge, but the policies do not, and they are not robust (in that they are sensitive to small perturbations).
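
For the history-based variant $Q(h, a; \theta)$, one common way to summarise the history $h$ is with a recurrent network, as in the Deep Recurrent Q-Network (DRQN) mentioned in the comments. The sketch below (in PyTorch, with illustrative sizes) only shows the idea of conditioning the Q-values on the observation history; it is not the exact architecture used in the report.

```python
import torch
import torch.nn as nn

# Sketch of a recurrent Q-network approximating Q(h, a; theta): an LSTM summarises
# the observation history h, and a linear head outputs one Q-value per action.
class RecurrentQNetwork(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden_dim=64):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_actions)

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, time, obs_dim) -- the observed history, not the true state
        x = torch.relu(self.encoder(obs_seq))
        x, hidden = self.lstm(x, hidden)
        q_values = self.head(x)   # (batch, time, n_actions): Q(h_t, a) at each step
        return q_values, hidden
```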

nbro
  • Another approach (I can't recall the paper) is to factorize the state space into observability classes (states of the same class have the same observation), assign reward and transition distributions to those classes, and after that work with them as an observable MDP – mirror2image Apr 03 '19 at 10:10
  • @mirror2image any reference to that? – drerD Apr 04 '20 at 02:34
  • @drerD google POMDP infosets (also infosets determinization) – mirror2image Apr 06 '20 at 07:48
  • @mirror2image You can provide a formal answer! I encourage you to do it. – nbro Apr 06 '20 at 12:47