Dennis Soemers makes an important point: from a theoretical standpoint, this can be seen as a non-issue. However, what you bring up is a real practical issue with potential-based reward shaping (PBRS).
The issue is actually worse than you describe: it is more general than $s = s'$. Recall that the shaping term is $F(s, s') = \gamma P(s') - P(s)$. How the issue presents itself depends on the sign of your potential function. In your case the potential function appears to be positive: $P(s) > 0$ for all $s$. The issue (as you have found) is that an increase in potential (regardless of whether $s = s'$) might not be enough to overcome the multiplication by $\gamma$, so the PBRS term can still be negative. Specifically, the term is positive only when the fold-change in $P$ is large enough: $\frac{P(s')}{P(s)} > \frac{1}{\gamma}$.
The situation changes when the potential function is negative, i.e. if $P(s) < 0$ for all $s$. In this case you can actually get a positive PBRS signal even when the potential decreases! Now the PBRS term is negative only when the fold-change in $P$ is large enough (the same inequality as before, since dividing by a negative $P(s)$ flips the direction of the inequality).
To summarize, when $P > 0$, a decrease in potential will always lead to a negative PBRS term, but an increase must overcome a barrier due to $\gamma$ for the term to be positive. When $P < 0$, an increase in potential will always lead to a positive PBRS term, but a decrease must overcome a barrier due to $\gamma$ for the term to be negative.
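As a quick numerical sanity check, here is a minimal sketch of that asymmetry in Python (the potential values and $\gamma = 0.99$ are arbitrary choices for illustration):

```python
def pbrs_term(p_s, p_s_next, gamma=0.99):
    """Potential-based shaping term F(s, s') = gamma * P(s') - P(s)."""
    return gamma * p_s_next - p_s

# Positive potentials: a small *increase* in potential still yields a negative term,
# because the fold-change 100.5/100 falls short of 1/gamma (about 1.0101).
print(pbrs_term(100.0, 100.5))    # ~ -0.505

# Negative potentials: a small *decrease* in potential still yields a positive term.
print(pbrs_term(-100.0, -100.5))  # ~ +0.505
```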
The intuition behind PBRS is that increasing the potential should be rewarded and decreasing it should be penalized. However, whether this actually holds depends on 1) the sign of the potential function, 2) the fold-change in potential, and 3) the temporal resolution of your environment. For #3, if the resolution of your environment can be altered such that an action only brings you "partway" from $s$ to $s'$, then at some resolution you will run into one of the two problematic circumstances above (see the sketch below). Another issue is that PBRS is highly sensitive to, for example, adding a constant to the potential function: shifting $P$ by a constant $c$ shifts every shaping term by $\gamma c - c = -(1-\gamma)c$.
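To make the resolution and constant-offset points concrete, here is a small sketch with made-up numbers: a single transition that raises the potential from 100 to 102 is rewarded, but splitting the same improvement across two finer-grained steps penalizes both of them, and shifting the potential by a constant shifts every shaping term.

```python
gamma = 0.99

# One coarse step: P goes 100 -> 102 in a single transition.
print(gamma * 102 - 100)   # ~ +0.98, rewarded

# Two fine-grained steps covering the same improvement: 100 -> 101 -> 102.
print(gamma * 101 - 100)   # ~ -0.01, penalized
print(gamma * 102 - 101)   # ~ -0.02, penalized

# Adding a constant c to P shifts every shaping term by -(1 - gamma) * c.
c = 1000.0
print((gamma * (102 + c) - (100 + c)) - (gamma * 102 - 100))  # ~ -10.0
```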
Another related issue is that whether a fixed improvement in potential produces a positive or negative reward depends on how far you are from the "goal" state. Potential functions are often chosen to estimate how good a state is (after all, the best choice of potential function is the optimal value function). Say we choose $\gamma=0.99$ and the goal state has $P(s_{goal}) = 1000$. Then increasing the potential by one, from $P(s) = 900$ to $P(s') = 901$, yields a shaping term of $0.99 \cdot 901 - 900 = -8.01$. In contrast, increasing the potential by one from $P(s) = 90$ to $P(s') = 91$ yields a small positive term of $0.99 \cdot 91 - 90 = +0.09$. This is another issue: the sign of the PBRS term depends on the distance from the goal.
This paper has some interesting examples and outlines many of the issues above.
From my own experience, this is a large practical issue. The LunarLanderContinuous-v2 environment from OpenAI Gym includes a PBRS term in its reward, but it omits the multiplication by $\gamma$ (i.e., it uses $\gamma = 1$), presumably because the environment cannot know which discount factor the RL user will choose. This environment can be solved with DDPG, for example, without significant hyperparameter tuning. However, if you use $\gamma = 0.99$ in your RL formulation and edit the LunarLander code so that the PBRS term also uses $\gamma = 0.99$, then DDPG fails to solve the environment. So this is not a small computational issue; it has dramatic effects on training.
My solution has been to simply set $\gamma = 1$ in the PBRS term, even when using, say, $\gamma = 0.99$ in the RL formulation. This solves (or rather, circumvents) every issue above. While this loses the theoretical guarantee that adding the PBRS term does not affect the optimal policy, it can substantially help training. (And there are no optimality guarantees when using neural networks as function approximators anyway.)
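For concreteness, here is a minimal sketch of what this looks like as a Gym reward wrapper, assuming the classic Gym API (single observation from `reset`, 4-tuple from `step`). The `PBRSWrapper` name, the `potential_fn`, and the usage example at the bottom are all hypothetical/illustrative, not part of any existing library:

```python
import gym


class PBRSWrapper(gym.Wrapper):
    """Add a potential-based shaping term F(s, s') = shaping_gamma * P(s') - P(s) to the reward."""

    def __init__(self, env, potential_fn, shaping_gamma=1.0):
        super().__init__(env)
        self.potential_fn = potential_fn
        self.shaping_gamma = shaping_gamma
        self._prev_potential = None

    def reset(self, **kwargs):
        obs = self.env.reset(**kwargs)
        self._prev_potential = self.potential_fn(obs)
        return obs

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        potential = self.potential_fn(obs)
        # shaping_gamma = 1.0 reproduces the workaround above; setting it to your
        # RL discount factor (e.g. 0.99) gives the textbook PBRS term instead.
        reward = reward + self.shaping_gamma * potential - self._prev_potential
        self._prev_potential = potential
        return obs, reward, done, info


# Purely illustrative usage: LunarLanderContinuous-v2 already has its own built-in
# shaping, so this just layers an extra, hypothetical potential on top, using the
# negative distance to the origin (assuming obs[0], obs[1] are the x, y position).
env = PBRSWrapper(
    gym.make("LunarLanderContinuous-v2"),
    potential_fn=lambda obs: -(obs[0] ** 2 + obs[1] ** 2) ** 0.5,
    shaping_gamma=1.0,
)
```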
This solution also seems to be what most benchmark environments have adopted. For example, most MuJoCo environments use PBRS terms with no $\gamma$ (equivalent to $\gamma = 1$). Alternatively, the omission of $\gamma$ could be attributed to the fact that including it would require the environment to know a priori what value of $\gamma$ the RL user chose. While feeding this into an OpenAI gym environment is easy to do, it's not typically done.
Keep in mind that while the theory guarantees that the optimal policy won't change by adding the PBRS term, adding the term doesn't necessarily help you approach the optimal policy. Yet, the whole point of using PBRS at all is to help you approach a good policy. So, it's a bit of a paradox, and I was comfortable with sacrificing the theoretical guarantee of policy invariance if it meant I could actually get to a good policy in the first place.