Compression will be lossy; some detailed features of the state will be lost or distorted in downstream computation.
A common technique is a max-pooling function or layer (applied before feeding the state to the policy network, if the RL here is deep RL).
Max-pooling is very lossy. Classic lossless compression formats such as Zip or RAR preserve the data exactly, but they are awkward to integrate into a model pipeline and extremely slow there.
If lossy data is acceptable, the common options are max-pooling (which yields high-contrast data, keeping local peaks) and average-pooling (which yields blurred data, keeping local means).
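A minimal sketch of both pooling variants, using plain NumPy rather than a TensorFlow layer; `pool2d` is a hypothetical helper name, and trimming edges that don't divide evenly by the window size is a simplifying assumption:

```python
import numpy as np

def pool2d(x, k=2, mode="max"):
    """Downsample a 2D array over non-overlapping k x k windows."""
    h, w = x.shape
    # Trim so both dimensions divide evenly by k (simplifying assumption).
    x = x[:h - h % k, :w - w % k]
    blocks = x.reshape(x.shape[0] // k, k, x.shape[1] // k, k)
    if mode == "max":
        return blocks.max(axis=(1, 3))   # high-contrast: keeps local peaks
    return blocks.mean(axis=(1, 3))      # blurred: keeps local averages

state = np.array([[1., 2., 5., 6.],
                  [3., 4., 7., 8.],
                  [0., 0., 1., 1.],
                  [0., 4., 1., 1.]])

print(pool2d(state, mode="max"))   # [[4. 8.] [4. 1.]]
print(pool2d(state, mode="mean"))  # [[2.5 6.5] [1.  1. ]]
```

In a deep-RL setup the equivalent would usually be a `tf.keras.layers.MaxPooling2D` or `AveragePooling2D` layer as the first stage of the policy network.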
To keep the data largely intact, the TensorFlow Compression library can compress tensors while "only sacrificing a tiny fraction of model performance. It can compress any floating point tensor to a much smaller sequence of bits."
See: https://github.com/tensorflow/compression