I'm training both DQN and Double DQN in the same environment, but DQN performs significantly better than Double DQN. As I've seen in the Double DQN paper, Double DQN should perform better than DQN. Am I doing something wrong, or is this possible?
- Hello. I am facing the same issue. What explanation did you finally give? – ddaedalus Jan 10 '21 at 12:18
- I have the same result: DQN performs better than Double DQN. I'm trying to understand why. – Mustafa Jan 25 '21 at 16:28
- DDQN was designed to address Q-function overestimation. So it is possible that your DQN which performs "better" is just a red herring, where the learnt Q is bogus and over-optimistic. When DDQN produces lower numbers, it only looks "worse" but is in fact closer to a realistic solution. Just a thought. – Boppity Bop Sep 10 '22 at 13:26
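To make that distinction concrete, here is a minimal sketch of the two target computations, assuming PyTorch and hypothetical `online_net`/`target_net` Q-networks (these names are illustrative, not from the question's code); the only difference is which network selects the next action.

```python
# Minimal sketch (PyTorch assumed; `online_net`, `target_net`, and the batch tensors
# are hypothetical names, not taken from the question's code).
import torch

def dqn_target(rewards, next_states, dones, gamma, target_net):
    # Vanilla DQN: the target network both selects and evaluates the next action.
    # Taking the max over noisy estimates tends to pick up positive noise -> overestimation.
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
    return rewards + gamma * (1.0 - dones) * next_q

def double_dqn_target(rewards, next_states, dones, gamma, online_net, target_net):
    # Double DQN: the online network selects the action, the target network evaluates it,
    # decoupling selection from evaluation and reducing the overestimation bias.
    with torch.no_grad():
        best_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        next_q = target_net(next_states).gather(1, best_actions).squeeze(1)
    return rewards + gamma * (1.0 - dones) * next_q
```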
2 Answers
There is no thorough proof, theoretical or experimental, that Double DQN is better than vanilla DQN. There are many different tasks, and the paper and later experiments only explore some of them. What a practitioner can take away from this is that DDQN is better on some tasks. That's the essence of DeepMind's "Rainbow" approach: drop a lot of different methods into the bucket and take the best results.

mirror2image
- I have a problem where my DQN is better than Double DQN and I cannot give a valid explanation of why. – ddaedalus Jan 10 '21 at 12:14
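One way to sanity-check such a result, in the spirit of the analysis in the Double DQN paper, is to log the network's value estimates against the discounted returns the agent actually obtains; if the estimates sit well above the returns, the higher-scoring DQN may simply be overestimating. A rough sketch, assuming PyTorch and hypothetical names for the trained network and one recorded evaluation episode:

```python
# Rough diagnostic sketch (PyTorch/NumPy assumed; `q_net`, `states`, `rewards` are
# hypothetical names for a trained Q-network and one recorded evaluation episode).
import numpy as np
import torch

def estimates_vs_returns(q_net, states, rewards, gamma=0.99):
    """Compare max-Q estimates along an episode with the realized discounted returns.

    If the estimates sit well above the returns, the agent is likely overestimating,
    which is the effect Double DQN is meant to reduce.
    """
    with torch.no_grad():
        q_values = q_net(torch.as_tensor(np.asarray(states), dtype=torch.float32))
        estimates = q_values.max(dim=1).values.numpy()

    # Discounted return-to-go for each step of the episode.
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running

    return estimates.mean(), returns.mean()
```

Running a check like this periodically for both agents makes the comparison about estimation quality rather than raw score alone.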
That may happen when the value of the state is bad. You can find an example and an explanation of that in this blog post: https://medium.com/@parsa_h_m/deep-reinforcement-learning-dqn-double-dqn-dueling-dqn-noisy-dqn-and-dqn-with-prioritized-551f621a9823

I_Al-thamary
- What do you mean by "the value of the state is bad"? As I'm using an OpenAI Gym environment, the value of the state is just the observation that I'm getting from it. – Angelo Apr 09 '19 at 07:53
- @Angelo You can read the blog above to understand the answer. You compute values for all actions at one state, but these actions do not affect the environment in a relevant way. – I_Al-thamary Apr 09 '19 at 08:11
- We prefer answers that contain the necessary info to answer the question, rather than answers that just link to another external resource, so I suggest that you provide the necessary info from that article in your answer. – nbro Nov 04 '20 at 21:23
- In your link, that is discussed in the Dueling DQN section, not in Double DQN as asked. – ddaedalus Jan 10 '21 at 12:16
- @ddaedalus It has both of them, and you can see this: https://medium.com/@parsa_h_m/deep-reinforcement-learning-dqn-double-dqn-dueling-dqn-noisy-dqn-and-dqn-with-prioritized-551f621a9823 – I_Al-thamary Jan 10 '21 at 14:57