The purpose of normalisation in neural networks, and in many other (but not all - decision trees are a notable exception) machine learning methods, is to make the parameter space better behaved for the optimiser that will be applied to it.
If a function approximator benefits from normalisation in supervised learning, it will also benefit from it in reinforcement learning. That is definitely the case for neural networks, which are by far the most common approximator used in deep reinforcement learning.
Unlike in supervised learning, you will not have a fixed dataset from which to compute a mean and standard deviation for scaling to the common $\mu = 0, \sigma = 1$. Instead, you will usually want to scale each observation component to a known range, such as $[-1, 1]$.
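As a minimal sketch of what that might look like (assuming you know, or can reasonably estimate, lower and upper bounds for each observation component - the specific bounds below are purely illustrative):

```python
import numpy as np

def scale_to_range(obs, low, high):
    """Linearly rescale an observation from [low, high] to [-1, 1]."""
    return 2.0 * (obs - low) / (high - low) - 1.0

# Hypothetical per-component bounds for a 4-dimensional observation
low = np.array([-2.4, -3.0, -0.21, -3.0])
high = np.array([2.4, 3.0, 0.21, 3.0])

obs = np.array([1.2, -0.5, 0.05, 2.0])
scaled = scale_to_range(obs, low, high)  # each component now lies in [-1, 1]
```

If the true bounds are not known in advance, you would have to pick conservative estimates, or clip values that fall outside the assumed range.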
You may also want to perform some basic feature engineering first, such as taking the log of a value, or raising it to some power - anything that makes the distribution of values you expect to see closer to a Normal distribution. Again, this is something you could do more easily in supervised learning, where you can inspect the data, but you may know enough about the feature in advance to make a good guess.
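For example (purely illustrative, with an assumed feature and assumed bounds), a strictly positive, heavily right-skewed value might be log-transformed before being scaled to $[-1, 1]$:

```python
import numpy as np

def log_then_scale(value, low, high):
    """Log-transform a positive, right-skewed feature, then rescale
    the result to [-1, 1] using the bounds mapped into log space."""
    log_value = np.log(value)
    log_low, log_high = np.log(low), np.log(high)
    return 2.0 * (log_value - log_low) / (log_high - log_low) - 1.0

# Hypothetical feature assumed to lie between 1 and 1e6 (e.g. a count or balance)
scaled = log_then_scale(5000.0, low=1.0, high=1e6)
```

The transform to use depends entirely on what you know about the feature; the log here is just one common choice for spanning several orders of magnitude.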