Planning problems were among the first problems studied at the dawn of AI (e.g., Shakey the robot). Graph search (e.g., A*) and planning (e.g., GraphPlan) algorithms can be very efficient at generating a plan. For problem formulation, PDDL is the language of choice. Although planning problems in most cases have only discrete states and actions, the PDDL+ extension also covers continuous dimensions of the planning problem.
If a planning problem has non-deterministic state transitions, classical planning algorithms are not a well-suited solution method; (some form of) reinforcement learning is considered a better fit in this case. From this point of view, if any state transition in a planning problem has probability less than 1, classical planning methods (A*, GraphPlan, the FF planner, etc.) are not the right tool to solve the problem.
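To make the intuition concrete, here is a minimal sketch (the environment and its numbers are my own illustrative assumptions, not from the question): in a "slippery" 1-D world where the action "right" advances with probability 0.8 and otherwise leaves the agent in place, an open-loop plan of the kind a classical planner emits can miss the goal, while a closed-loop policy that reacts to the observed state keeps acting until the goal is reached.

```python
import random

def step(s, p_success=0.8, rng=random):
    # Stochastic transition: the action succeeds with probability p_success,
    # otherwise the state is unchanged.
    return s + 1 if rng.random() < p_success else s

def run_open_loop(n_steps, goal, rng):
    # Execute a fixed 4-action plan blindly, as a classical planner assumes.
    s = 0
    for _ in range(n_steps):
        s = step(s, rng=rng)
    return s == goal

def run_closed_loop(goal, rng, max_steps=100):
    # A policy: observe the state and keep acting until the goal is reached.
    s = 0
    for _ in range(max_steps):
        if s == goal:
            return True
        s = step(s, rng=rng)
    return s == goal

rng = random.Random(0)
open_successes = sum(run_open_loop(4, 4, rng) for _ in range(1000))
rng = random.Random(0)
closed_successes = sum(run_closed_loop(4, rng) for _ in range(1000))
# The open-loop plan succeeds only when all 4 steps succeed (~0.8^4 ≈ 41% of runs);
# the closed-loop policy essentially always reaches the goal.
print(open_successes, closed_successes)
```

This is exactly the gap the question is about: under stochastic transitions the useful output is a policy (state → action), not a sequence of actions, which is what MDP solvers and reinforcement learning produce.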
Looking at reinforcement learning examples, environments are sometimes used to showcase reinforcement learning algorithms even though they could be solved perfectly well by search/planning algorithms: they are fully deterministic, fully observable, and sometimes even have discrete action and state spaces.
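For instance (a made-up 4x4 grid with deterministic moves, chosen only to illustrate the point), such an environment is solved optimally by plain breadth-first search, with no learning at all:

```python
from collections import deque

def bfs_shortest_plan(start, goal, size=4, walls=frozenset()):
    # Deterministic, fully observable, discrete: uninformed graph search
    # is guaranteed to find a shortest plan.
    moves = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        (x, y), plan = frontier.popleft()
        if (x, y) == goal:
            return plan
        for name, (dx, dy) in moves.items():
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in walls and nxt not in visited):
                visited.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None  # goal unreachable

plan = bfs_shortest_plan((0, 0), (3, 3))
print(len(plan))  # 6, the Manhattan distance, since moves are deterministic
```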
Given an arbitrary fully deterministic planning problem, what characteristic(s) make reinforcement learning, rather than "classical planning" methods, better suited to solving the problem?