Doing something like the dense, distance-based reward signal you propose is possible... but you have to do it very carefully. Implemented naively, it is likely to reinforce unwanted behaviour.
For example, the way I read the reward function you propose, it provides a positive reward for every step taken by the agent, with larger rewards for steps that bring you closer to the goal (except for steps moving back into the start state, which would have a reward of $0$). There does not appear to be any "compensation" with negative rewards for moves that take you back away from the goal; in fact, such steps still seem to carry positive rewards! This means that the optimal behaviour your agent can end up learning is to keep moving around in circles (somewhat close to the goal, but never quite stepping into the goal) forever, continuously racking up those positive rewards.
The idea of adding some extra (heuristic) rewards to speed up learning is referred to as "reward shaping". Naive approaches to reward shaping often end up unintentionally modifying the "true" objective, as highlighted above. The correct way to implement reward shaping, which provably does not modify the optimal policy, is Potential-Based Reward Shaping. The basic intuition behind this is that, if you use reward shaping to encourage "movement" in one "direction", you should also provide equivalent (taking into account discount factor $\gamma$) discouragement for subsequent "movement" in the other "direction".
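Concretely, the standard potential-based formulation works like this: you define a potential function $\Phi$ over states, and on every transition $s \to s'$ you add the shaping term

$$F(s, s') = \gamma \, \Phi(s') - \Phi(s)$$

on top of the environment's own reward. In a grid world, a natural choice is $\Phi(s) = -(\text{Manhattan distance from } s \text{ to the goal})$. Any bonus the agent collects for moving closer is then "paid back" if it later moves away again (exactly when $\gamma = 1$, approximately otherwise), because the terms telescope instead of accumulating.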
Now, there is this really cool paper named "Expressing Arbitrary Reward Functions as Potential-Based Advice" which proposes a method that can automatically convert reward shaping functions specified in the more "natural" or "intuitive" manner you used into (approximately) potential-based ones that are more likely to actually behave correctly. This is not entirely straightforward though: the approach involves learning an additional value function, whose predictions are used to implement the "conversion". So... in practice, in a simple grid world like yours, I think it's going to be simpler to just figure out the correct potential-based definition yourself than to try to learn it like this, but it's cool stuff nevertheless. A sketch of the "figure it out yourself" option follows below.
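To make that concrete, here is a minimal sketch of hand-crafted potential-based shaping for a grid world. It assumes states are `(row, col)` tuples and that the goal position is known to you (which it is, if you can compute distances to it for your original shaping idea); the names `GOAL`, `GAMMA`, `potential` and `shaped_reward` are just illustrative, not from any particular library.

```python
# Minimal sketch of hand-crafted potential-based reward shaping for a grid world.
# Assumes states are (row, col) tuples and the goal cell is known.

GOAL = (4, 4)   # hypothetical goal cell
GAMMA = 0.99    # discount factor, should match the one used by your learning algorithm

def potential(state):
    """Phi(s): negative Manhattan distance to the goal (higher = closer)."""
    row, col = state
    return -(abs(GOAL[0] - row) + abs(GOAL[1] - col))

def shaped_reward(env_reward, state, next_state):
    """Environment reward plus the shaping term gamma * Phi(s') - Phi(s)."""
    return env_reward + GAMMA * potential(next_state) - potential(state)

# Moving towards the goal earns a positive bonus...
print(shaped_reward(0.0, (0, 0), (0, 1)))   # roughly +1.07
# ...and moving straight back is penalised by a corresponding amount (up to
# discounting), so circling around no longer racks up unbounded reward.
print(shaped_reward(0.0, (0, 1), (0, 0)))   # roughly -0.92
```

You would simply feed `shaped_reward(...)` into your learning update wherever you currently use the raw environment reward; because the shaping is potential-based, the optimal policy of the underlying problem is provably unchanged (as long as the same $\gamma$ is used).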