The following is from the book "Reinforcement learning and optimal control", by D. P. Bertsekas.
Chapter 2, page 52:
"The motivation for $l$-step lookahead is that for increasing values of $l$, one may require a less accurate approximation $\tilde{J}_{k+l}$ to obtain good performance. Otherwise expressed, for the same quality of cost function approximation, better performance maybe obtained as $l$ becomes larger. This makes intuitive sense, since in this case, the cost of more stages is treated exactly with optimization."
My question:
Why is it that for increasing values of $l$, one may require a less accurate approximation $\tilde{J}_{k+l}$ to obtain good performance?
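To make the setting concrete, here is a minimal toy sketch (my own construction, not from the book) of $l$-step lookahead: the first $l$ stage costs are minimized exactly, and an inexact approximation $\tilde{J}_{k+l}$ (here, the exact cost-to-go plus a fixed made-up error per state) is used at the end of the lookahead horizon. The dynamics, costs, and error values are arbitrary illustration choices.

```python
N = 6              # horizon
STATES = range(4)  # small deterministic system
CONTROLS = range(2)

def f(x, u):       # next state
    return (x + u + 1) % 4

def g(x, u):       # stage cost
    return (x - 2) ** 2 + u

# Exact DP recursion: J[k][x] = min_u g(x,u) + J[k+1][f(x,u)], J[N][x] = 0
J = [[0.0] * 4 for _ in range(N + 1)]
for k in range(N - 1, -1, -1):
    for x in STATES:
        J[k][x] = min(g(x, u) + J[k + 1][f(x, u)] for u in CONTROLS)

err = [0.0, 3.0, -1.0, 2.0]   # fixed (made-up) approximation error per state

def J_tilde(k, x):
    """Inaccurate cost-to-go approximation used at the lookahead boundary."""
    return J[k][x] + err[x]

def lookahead_cost(k, x, l):
    """Exact cost of the next l stages from (k, x), then J_tilde at the end."""
    if k == N:
        return 0.0
    if l == 0:
        return J_tilde(k, x)
    return min(g(x, u) + lookahead_cost(k + 1, f(x, u), l - 1)
               for u in CONTROLS)

def rollout(l, x0=0):
    """Apply the l-step lookahead policy from x0 and return its true cost."""
    x, total = x0, 0.0
    for k in range(N):
        steps = min(l, N - k)  # do not look past the horizon
        u = min(CONTROLS,
                key=lambda u: g(x, u)
                              + lookahead_cost(k + 1, f(x, u), steps - 1))
        total += g(x, u)
        x = f(x, u)
    return total

for l in (1, 2, N):
    print(f"l={l}: lookahead cost {rollout(l)}, optimal {J[0][0]}")
```

With $l = N$ the approximation is never consulted, so the policy is exactly optimal; for smaller $l$ the error in $\tilde{J}$ can degrade the decisions. The intuition the book appeals to is visible here: increasing $l$ pushes the inexact $\tilde{J}_{k+l}$ further into the future, so more of the cost is handled by exact minimization.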