I have the following code, in which an agent uses Q-learning (RL) to play a simple game.
What seems questionable to me in this code is the fixed learning rate. When it is set low, the update always favours the old Q-value over the newly learnt one (which is the case in this code example), and vice versa when it is set high.
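To make that concrete, the update line in the code is the standard weighted average (writing the learning rate as alpha and the discount rate as gamma):

Q(s, a)  <-  (1 - alpha) * Q(s, a) + alpha * (reward + gamma * max over a' of Q(s', a'))

so a small alpha keeps most of the old estimate, while a large alpha mostly overwrites it with the newly computed target.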
My thinking was: shouldn't the learning rate be dynamic? It should start high, because at the beginning the Q-table holds nothing but zeros and the agent is simply acting on whatever estimates it first encounters, so we should favour the new Q-values over the existing ones. Then, over time (say every n episodes), we decrease the learning rate to reflect the fact that the values in the Q-table are becoming more and more accurate (thanks to the Bellman-equation updates). Lowering the learning rate then starts to favour the existing value in the Q-table over the new one. I'm not sure whether my logic has gaps or flaws, so I'm putting it out to the community to get feedback from experienced people and experts.
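To illustrate what I mean, here is a rough sketch (not something I have tested; max_learning_rate, min_learning_rate and learning_rate_decay are names I made up, and I simply reuse the same exponential schedule the code already applies to the exploration rate):

import numpy as np

num_episodes = 10000
max_learning_rate = 1.0      # start by trusting the new targets almost completely
min_learning_rate = 0.01     # but never stop updating entirely
learning_rate_decay = 0.001  # controls how quickly trust shifts to the existing Q-values

learning_rate = max_learning_rate

for episode in range(num_episodes):
    # ... run the episode and apply the Q-updates using the current learning_rate ...

    # Decay the learning rate at the end of each episode,
    # mirroring the exploration-rate decay in the full code below
    learning_rate = min_learning_rate + \
        (max_learning_rate - min_learning_rate) * np.exp(-learning_rate_decay * episode)

With a schedule like this, early episodes would overwrite the zero-initialised entries almost completely, and later episodes would only nudge them, which is the behaviour I described above.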
Just to make things easier, the line to refer to in my original code below (where the Q-value is updated using the learning rate) is under the comment: # Update Q-table for Q(s,a) with learning rate
import numpy as np
import gym
import random
import time
from IPython.display import clear_output
env = gym.make("FrozenLake-v0")
action_space_size = env.action_space.n
state_space_size = env.observation_space.n
q_table = np.zeros((state_space_size, action_space_size))
num_episodes = 10000
max_steps_per_episode = 100
learning_rate = 0.1
discount_rate = 0.99
exploration_rate = 1
max_exploration_rate = 1
min_exploration_rate = 0.01
exploration_decay_rate = 0.001
rewards_all_episodes = []
for episode in range(num_episodes):
    # Initialize new episode params
    state = env.reset()
    done = False
    rewards_current_episode = 0

    for step in range(max_steps_per_episode):
        # Exploration-exploitation trade-off
        exploration_rate_threshold = random.uniform(0, 1)
        if exploration_rate_threshold > exploration_rate:
            action = np.argmax(q_table[state, :])
        else:
            action = env.action_space.sample()

        new_state, reward, done, info = env.step(action)

        # Update Q-table for Q(s,a) with learning rate
        q_table[state, action] = q_table[state, action] * (1 - learning_rate) + \
            learning_rate * (reward + discount_rate * np.max(q_table[new_state, :]))

        state = new_state
        rewards_current_episode += reward

        if done:
            break

    # Exploration rate decay
    exploration_rate = min_exploration_rate + \
        (max_exploration_rate - min_exploration_rate) * np.exp(-exploration_decay_rate * episode)

    rewards_all_episodes.append(rewards_current_episode)
# Calculate and print the average rewards per thousand episodes
rewards_per_thousand_episodes = np.array_split(np.array(rewards_all_episodes), num_episodes // 1000)
count = 1000

print("******* Average reward per thousand episodes ************")
for r in rewards_per_thousand_episodes:
    print(count, ": ", str(sum(r / 1000)))
    count += 1000
# Print updated Q-table
print("\n\n********* Q-table *************\n")
print(q_table)