
I'm working on a problem that involves an RL agent with very large states. Each state consists of several pieces of information about the agent. The states are not images, so techniques like convolutional neural networks will not work here.

Are there any general solutions to reduce/compress the size of the states for reinforcement learning algorithms?

nbro

1 Answer


Compression will generally be lossy: some detailed features of the state will be dropped from the computation.

A common technique is a max-pooling function or layer, applied before feeding the state to the policy network (if this is deep RL).

Max-pooling is very lossy. You could instead use a classic lossless compression algorithm such as Zip or RAR, but using lossless compressors like these inside a model pipeline is awkward and extremely slow.

If lossy compression is acceptable, the common choices are max-pooling (which preserves high-contrast features) and average-pooling (which produces a blurred version of the data).
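As a minimal sketch of the pooling idea on a flat (non-image) state vector, assuming NumPy and a toy 12-dimensional state; note that, as the comments below point out, this only makes sense when neighboring features are comparable signals:

```python
import numpy as np

def pool_1d(state, window, mode="max"):
    """Downsample a 1-D state vector by pooling over fixed-size windows."""
    # Pad by repeating the last value so the length divides evenly.
    pad = (-len(state)) % window
    padded = np.pad(state, (0, pad), mode="edge")
    windows = padded.reshape(-1, window)
    if mode == "max":
        return windows.max(axis=1)   # keeps high-contrast features
    return windows.mean(axis=1)      # blurs/averages features

state = np.arange(12, dtype=float)   # toy 12-dimensional state
print(pool_1d(state, 4, "max"))      # → [ 3.  7. 11.]
print(pool_1d(state, 4, "mean"))     # → [1.5 5.5 9.5]
```

Either variant shrinks the state by a factor of `window`, at the cost of discarding within-window detail.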

To keep the data largely intact, TensorFlow can compress tensors while "only sacrificing a tiny fraction of model performance. It can compress any floating point tensor to a much smaller sequence of bits."
See: https://github.com/tensorflow/compression

Dee
    I think the OP is looking for lossy compression that works with state representations *in use* as input features, in which case zip etc will not work. However, it is not 100% clear from the question. In addition, pooling algorithms only work with dimensional patterns of identical signals (images, audio, financial series etc) - you should make that clear and it looks like the OP wants to exclude those solutions. – Neil Slater Mar 12 '21 at 07:48
    Exactly, the format of input data is in the form of tabular data, so I'm not sure max pooling or other signal processing solutions will work here. – Saeid Ghafouri Mar 12 '21 at 13:22
  • this may be relevant then: https://github.com/tensorflow/compression – Dee Mar 12 '21 at 14:00