5

Following the DQN algorithm with experience replay:

Store transition $\left(\phi_{t}, a_{t}, r_{t}, \phi_{t+1}\right)$ in $D$.
Sample a random minibatch of transitions $\left(\phi_{j}, a_{j}, r_{j}, \phi_{j+1}\right)$ from $D$.
Set

$$y_{j}=\begin{cases}r_{j} & \text{if episode terminates at step } j+1 \\ r_{j}+\gamma \max_{a^{\prime}} \hat{Q}\left(\phi_{j+1}, a^{\prime} ; \theta^{-}\right) & \text{otherwise}\end{cases}$$

Perform a gradient descent step on $\left(y_{j}-Q\left(\phi_{j}, a_{j} ; \theta\right)\right)^{2}$ with respect to the network parameters $\theta$.
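For concreteness, here is a minimal PyTorch-style sketch of this update; the names `q_net`, `target_net`, `optimizer` and the minibatch layout are placeholders for my setup, not part of the algorithm as stated above:

```python
import torch
import torch.nn.functional as F

def dqn_update(q_net, target_net, optimizer, batch, gamma=0.99):
    """One gradient step of the DQN update described above."""
    # done is 1.0 where the episode terminated at j+1, else 0.0
    states, actions, rewards, next_states, done = batch  # minibatch sampled from D

    # Q(phi_j, a_j; theta) for the actions that were actually taken
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)

    # y_j = r_j                                                 if episode terminates at j+1
    # y_j = r_j + gamma * max_a' Q_hat(phi_{j+1}, a'; theta^-)  otherwise
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * (1.0 - done) * next_q

    # gradient descent step on (y_j - Q(phi_j, a_j; theta))^2
    loss = F.mse_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```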

We calculate the loss as $\left(Q(s,a)-\left(r+\gamma \max_{a^{\prime}} Q(s^{\prime},a^{\prime})\right)\right)^{2}$.

Assume the rewards are positive but vary over time, i.e. $r>0$.

Since the rewards are positive, when I compute the loss I notice that almost always $Q(s,a) < r + \gamma \max_{a^{\prime}} Q(s^{\prime},a^{\prime})$.

Therefore, the network learns to keep increasing the $Q$ function, and eventually the $Q$ values for the same states become higher and higher at later learning steps.
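For example (illustrative numbers only): if $Q(s,a)=3$, $r=1$, $\gamma=0.9$ and $\max_{a^{\prime}}Q(s^{\prime},a^{\prime})=3$, the target is $1+0.9\cdot 3=3.7>3$, so the regression keeps pushing $Q(s,a)$ upward.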

How can I stabilize the learning process?

Faizy
BestR

4 Answers

1
  1. You can use a discount factor $\gamma$ less than one.

  2. You can use a finite time horizon: rewards propagate back only from states that are no farther than $T$ time steps away.

  3. You can use the sum of rewards averaged over time for $Q$.

All of these are legitimate approaches; see the sketch below for the first one.
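To illustrate the first point numerically (the code below is only a sketch with names of my own choosing): with $\gamma<1$, repeatedly bootstrapping a constant positive reward through $Q \leftarrow r + \gamma Q$ converges to $r/(1-\gamma)$ instead of growing without bound.

```python
# Illustration of point 1: with gamma < 1, repeatedly bootstrapping a constant
# positive reward through Q <- r + gamma * Q converges to r / (1 - gamma).
def bootstrap_limit(r, gamma, steps=200):
    q = 0.0
    for _ in range(steps):
        q = r + gamma * q
    return q

print(bootstrap_limit(r=1.0, gamma=0.9))  # ~10.0 (= 1 / (1 - 0.9))
print(bootstrap_limit(r=1.0, gamma=0.5))  # ~2.0  (= 1 / (1 - 0.5))
```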

mirror2image
  • @mirror2image 1. My discount factor is 0.9; this increase happens even with a discount factor of 0.1. 2. How would I calculate it? The network should calculate it. 3. The sum of rewards will still be > 0, which leads back to the same process of the Q function increasing – BestR Apr 24 '19 at 15:23
1

> Therefore, the network learns to keep increasing the Q function, and eventually the Q values for the same states become higher and higher at later learning steps

If your value function keeps increasing in later steps, that means the network is still learning those Q-values; you shouldn't necessarily prevent that. Your Q-values won't increase forever, even if the rewards are always positive. You basically have a regression problem here, and once the value of $Q(s,a)$ becomes very close to the predicted target $r+\gamma Q(s',a')$, $Q(s,a)$ will stop increasing by itself.
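To put a rough number on "won't increase forever" (my own addition, assuming a maximum reward $r_{\max}$ and a discount factor $\gamma<1$): at the fixed point the Q-value is a discounted sum of future rewards, so it is bounded by a geometric series,

$$Q(s,a)\;\le\;r_{\max}+\gamma r_{\max}+\gamma^{2} r_{\max}+\dots\;=\;\frac{r_{\max}}{1-\gamma}.$$

With $\gamma=0.9$ that is at most $10\,r_{\max}$, so the values level off rather than diverge.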

Brale
  • Yes, but then the rewards are swallowed up by the standard deviation of the Q function. The Q function shouldn't be more than 5*max_reward. – BestR Apr 24 '19 at 15:20
  • Sorry, I didn't quite understand: are you saying that a Q-value can't be more than 5 times the maximum possible reward? What makes you think that? – Brale Apr 24 '19 at 15:41
  • I'm saying it should not be. Otherwise, the rewards will be swallowed up by the standard deviation of the Q function – BestR Apr 24 '19 at 16:04
  • @BestR: there is no basis in theory for your statement about this limit to the action value - its value is supposed to be the discounted sum of future rewards, and that can be any arbitrary amount times individual rewards, depending on how you have set up the environment. Please could you explain more where this constraint that you want has come from, as it may help clarify your question? – Neil Slater Apr 24 '19 at 18:53
  • I'm still a bit confused: why exactly are you calculating the std of the Q-values? It's not part of the DQN algorithm, or are you doing that for your own statistics? Also, what do you mean by the rewards getting "swallowed up" in the Q-value std, and why exactly are you using rewards to calculate the std of the Q-values? Are you maybe changing your rewards based on the value of the std of the Q-values? – Brale Apr 25 '19 at 07:35
0

I changed the rewards to be both negative and positive by subtracting the mean reward.

It seems to improve the bounds of the Q function.
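A minimal sketch of this kind of reward centering (the running-mean implementation and the names below are my own choice, not necessarily exactly what I ran):

```python
class RewardCenterer:
    """Subtracts a running mean from incoming rewards so that the centered
    rewards take both signs instead of being always positive."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def __call__(self, reward):
        self.count += 1
        self.mean += (reward - self.mean) / self.count  # incremental running mean
        return reward - self.mean

# usage: center each reward before storing the transition in the replay buffer
# centerer = RewardCenterer()
# replay_buffer.add(state, action, centerer(reward), next_state, done)
```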

BestR
0

This is the known overestimation problem that Double DQN addresses. From the abstract of "Deep Reinforcement Learning with Double Q-learning" (van Hasselt et al., AAAI 2016):

> The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they harm performance, and whether they can generally be prevented. In this paper, we answer all these questions affirmatively. In particular, we first show that the recent DQN algorithm, which combines Q-learning with a deep neural network, suffers from substantial overestimations in some games in the Atari 2600 domain. We then show that the idea behind the Double Q-learning algorithm, which was introduced in a tabular setting, can be generalized to work with large-scale function approximation. We propose a specific adaptation to the DQN algorithm and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games.

https://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/viewPaper/12389
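A minimal sketch of the Double DQN target from that paper, assuming the usual online/target network split of DQN (the function and variable names here are my own):

```python
import torch

def double_dqn_targets(q_net, target_net, rewards, next_states, done, gamma=0.99):
    """Double DQN target: the online network selects the next action and the
    target network evaluates it, which reduces max-operator overestimation."""
    with torch.no_grad():
        best_actions = q_net(next_states).argmax(dim=1, keepdim=True)        # argmax_a' Q(s', a'; theta)
        next_q = target_net(next_states).gather(1, best_actions).squeeze(1)  # Q_hat(s', a*; theta^-)
        return rewards + gamma * (1.0 - done) * next_q
```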

taarraas