The feedback humans can give to align artificial intelligence is limited by the reaction time and processing speed of a finite number of people, currently fewer than $2^{33}$. As an artificial intelligence (or a growing number of them) grows in complexity, more and more possible actions must be checked for human compatibility. But human feedback cannot grow at the pace of the expanding machine, which necessarily weakens the coupling between natural and artificial intelligence.
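The bound above is easy to verify: $2^{33} \approx 8.59$ billion, just above the current world population. A minimal check, assuming a 2024 population estimate of roughly 8.1 billion:

```python
# Sketch: verify that world population is below 2^33.
# The population figure is an assumption (rough 2024 estimate), not from the text.
world_population = 8_100_000_000
bound = 2 ** 33

print(bound)                     # 8589934592
print(world_population < bound)  # True
```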
Does this make alignment impossible in practice?