The PSSH is often attacked via either Gödel's incompleteness theorems or Turing's proof that the halting problem is undecidable.
However, both attacks rest on an implicit assumption: that to be intelligent is to be able to decide undecidable questions. It's really not clear that this is so.
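To make "undecidable" concrete, here is a minimal sketch of Turing's diagonalization argument in Python. The names `halts` and `diagonal` are hypothetical, introduced only for illustration: if a total halting decider existed, the program below would halt exactly when it doesn't.

```python
# Sketch of Turing's diagonalization argument. `halts` is a hypothetical
# oracle; the argument shows no such total decider can exist.
def halts(prog, arg):
    """Assumed to return True iff prog(arg) eventually halts."""
    raise NotImplementedError("no such decider exists")

def diagonal(prog):
    # Do the opposite of whatever the oracle predicts for prog run on itself.
    if halts(prog, prog):
        while True:       # predicted to halt -> loop forever
            pass
    return "halted"       # predicted to loop -> halt immediately

# diagonal(diagonal) halts iff halts(diagonal, diagonal) returns False,
# i.e. iff diagonal(diagonal) does not halt -- a contradiction.
```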
Consider what Gödel's theorems say, in essence (restated slightly more formally after the list):
- "powerful" formal systems cannot prove, using only techniques from within the system, that they are self-consistent.
- There are statements that are true that cannot be proven within a given "powerful" formal system.
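One conventional way to state both theorems symbolically, under the usual hypotheses (my restatement, not the author's): F is a consistent, effectively axiomatized system that interprets basic arithmetic, G_F is its Gödel sentence, and Con(F) is the arithmetized statement "F is consistent."

```latex
% Standard statements of the two incompleteness theorems.
% F: consistent, effectively axiomatized, interprets basic arithmetic.
\begin{align*}
\text{First theorem:}  &\quad F \nvdash G_F \ \text{and} \ F \nvdash \lnot G_F,
  \ \text{although } G_F \text{ is true in } \mathbb{N};\\
\text{Second theorem:} &\quad F \nvdash \mathrm{Con}(F).
\end{align*}
```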
Suppose we grant both of those facts. The missing step in the argument is the following pair of claims:
- You need to be able to prove the consistency of your own reasoning system to be considered intelligent.
- You need to be able to correctly reason out a proof of every true statement to be considered intelligent.
The main problem is that, under this definition, humans probably do not count as intelligent! I certainly have no way to prove that my own reasoning is sound and self-consistent. Moreover, it is objectively not so: I frequently believe contradictory things at the same time.
Nor am I able to reason out proofs of all the statements that appear to be true, and it seems entirely plausible that I cannot do so because of the inherent limitations of the logical systems I'm reasoning with.
This yields a contradiction. The overall argument, then, is that the following four statements cannot all be true, so at least one of them must be false:
1. Gödel's theorems say symbol systems lack some important properties.
2. Intelligent things have the properties that Gödel says symbol systems lack.
3. Humans are intelligent.
4. Humans can't do the things Gödel says symbol systems can't do.
Some authors (like John Searle) might argue that the false premise is 4. Most modern AI researchers would argue that the false premise is 2. Since intelligence is a somewhat nebulous notion, which view is correct may depend on metaphysical assumptions, but most people agree on premises 1 and 3.