> This is the maximum value achievable under any policy
The statement says that $Q^*(s,a)$ is the maximum state-action value achievable by any policy $\pi$, including the random policy and the optimal policy. This is also explained on slide 23:
- "An optimal value function is the maximum achievable value"
- "Once you have $Q^*$ we can act optimally"
- "Optimal value maximizes over all decisions"
Indeed, the optimal policy $\pi^*$ always achieves $Q^*$. This does not mean that the implicit policy (the one derived from it, $\pi^*(s)=\arg\max_a Q^*(s,a)$) is unimportant, nor that it cannot be computed from $Q^*$.
Think about this in terms of policy evaluation (or prediction, in classical RL terminology): you have the optimal action-value table, which you can use to compute the value (or return) that a policy achieves, as well as to derive the optimal policy by taking, in each state, the action that maximizes $Q^*$.
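For concreteness, here is a minimal sketch of deriving $V^*(s)=\max_a Q^*(s,a)$ and the greedy policy $\pi^*(s)=\arg\max_a Q^*(s,a)$ from a $Q^*$ table. The 3-state, 2-action MDP and its Q-values are hypothetical, chosen purely for illustration:

```python
import numpy as np

# Hypothetical Q* table for a toy MDP with 3 states and 2 actions.
# The numbers are made up for illustration only.
Q_star = np.array([
    [1.0, 0.5],   # Q*(s0, a0), Q*(s0, a1)
    [0.2, 0.8],   # Q*(s1, a0), Q*(s1, a1)
    [0.0, 0.3],   # Q*(s2, a0), Q*(s2, a1)
])

# Optimal state values: V*(s) = max_a Q*(s, a)
V_star = Q_star.max(axis=1)

# Greedy (optimal) policy derived from Q*: pi*(s) = argmax_a Q*(s, a)
pi_star = Q_star.argmax(axis=1)

print("V* per state:", V_star)    # [1.0, 0.8, 0.3]
print("pi* per state:", pi_star)  # [0, 1, 1]
```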