
I have a single-agent RL model in which the dimension of the action space is $70$. This action space is too large and the deep RL agent is not learning properly. The boundaries of the action space are $-1$ and $1$.

My question is: how can I reduce the dimensionality of the action space? I have tried using an auto-encoder trained on random vectors of dimension $70$ with components in $[-1, 1]$, but it is not working properly. I am training the auto-encoder with a single hidden layer of 10 neurons. However, when I compare an original action with the result of encoding and then decoding it, the average difference between the components is about $0.2$, even though the action components lie in the range $[-1, 1]$.
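
For reference, here is a minimal sketch of the setup described above, assuming PyTorch, a single hidden layer of 10 units, and tanh activations to keep outputs inside $[-1, 1]$; the batch size, learning rate, and number of training steps are illustrative assumptions, not values from the original setup:

```python
# Sketch of a 70 -> 10 -> 70 auto-encoder trained on uniform random actions in [-1, 1].
# Architecture details beyond the 10-neuron hidden layer are assumptions.
import torch
import torch.nn as nn

action_dim, latent_dim = 70, 10

encoder = nn.Sequential(nn.Linear(action_dim, latent_dim), nn.Tanh())
decoder = nn.Sequential(nn.Linear(latent_dim, action_dim), nn.Tanh())

params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(5000):
    # Uniform random action vectors in [-1, 1], as described in the question.
    actions = torch.rand(256, action_dim) * 2 - 1
    reconstruction = decoder(encoder(actions))
    loss = loss_fn(reconstruction, actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Average absolute per-component reconstruction error (the ~0.2 figure mentioned).
with torch.no_grad():
    test = torch.rand(1000, action_dim) * 2 - 1
    print((decoder(encoder(test)) - test).abs().mean().item())
```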

Leibniz
Can you provide more details about 1. how you're using and training auto-encoders and 2. what you mean by "it's not working"? – nbro May 02 '22 at 14:21

0 Answers