4

According to the Wikipedia page of the physical symbol system hypothesis (PSSH), this hypothesis seems to be a vividly debated topic in philosophy of AI. But, since it's about formal systems, shouldn't it be already disproven by Gödel's theorem?

My question arises specifically because the PSSH was elaborated in the 1950s, while Gödel's incompleteness theorems came decades earlier, so they were already well known at the time. In which way does the PSSH deal with this fact? How does it "escape" the theorems? Or, in other words, how can it attempt to explain intelligence given the deep limitations of such formal systems?

nbro
olinarr
  • I think this is why the basic definition of active intelligence is rooted in utility, not a specific quality or level of capability. Human-level intelligence may be distinct, but a process does not need human-level capability to demonstrate intelligence per se. Current learning algorithms demonstrate narrow intelligence in an increasing array of tasks. (Great question, btw!) – DukeZhou Mar 29 '19 at 16:40
  • There's an idea that this fundamental definition of intelligence is controversial, but the [game theorists](https://en.wikipedia.org/wiki/John_von_Neumann) and [computationalists](https://en.wikipedia.org/wiki/Computational_theory_of_mind) have the "heaviest hitters" by far, and it's the only defined model we fully understand, so the burden of proof is on the dissenters. – DukeZhou Mar 29 '19 at 18:26

3 Answers

3

The PSSH is often attacked via either Gödel's incompleteness theorems or Turing's proof of the undecidability of the halting problem.

However, both attacks have an implicit assumption: that to be intelligent is to be able to decide undecidable questions. It's really not clear that this is so.

Consider what Gödel's theorems say, in essence:

  1. "Powerful" formal systems cannot prove, using only techniques from within the system, that they are self-consistent.
  2. There are statements that are true but that cannot be proven within a given "powerful" formal system.

Suppose that we grant both of those facts. The missing steps in the argument are the following claims:

  1. You need to be able to prove the consistency of your own reasoning system to be considered intelligent.
  2. You need to be able to correctly reason out a proof of every true statement to be considered intelligent.

The main problem is that, under this definition, humans are probably not intelligent! I certainly have no way to prove that my own reasoning is sound and self-consistent. Moreover, it is objectively not so: I frequently believe contradictory things at the same time.

I also am not able to reason out proofs of all the statements that appear to be true, and it seems entirely plausible that I cannot do so because of the inherent limitations of the logical systems I'm reasoning with.

This gives us a contradiction. The overall argument shows that at least one of these four statements must be false:

  1. Gödel's theorems say symbol systems lack some important properties.
  2. Intelligent things have the properties that Gödel says symbol systems lack.
  3. Humans are intelligent.
  4. Humans can't do the things Gödel says symbol systems can't do.
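The clash among statements 2–4 can be made explicit in first-order form (the predicate names are mine, introduced purely for illustration; X stands for whatever property premise 1 says symbol systems lack):

```latex
\begin{align*}
  P_2 &: \forall a\,\bigl(\mathrm{Intelligent}(a) \rightarrow \mathrm{Has}(a, X)\bigr) \\
  P_3 &: \mathrm{Intelligent}(\mathrm{humans}) \\
  P_4 &: \neg\,\mathrm{Has}(\mathrm{humans}, X) \\
  P_2 \wedge P_3 &\vdash \mathrm{Has}(\mathrm{humans}, X)
    \quad \text{(modus ponens), contradicting } P_4.
\end{align*}
```

So anyone who accepts all four statements holds an inconsistent position, and the disagreements below amount to choosing which premise to drop.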

Some authors (like John Searle) might argue the false premise is 4. Most modern AI researchers would argue that the false premise is 2. Since intelligence is a bit nebulous, which view is correct may rely on metaphysical assumptions, but most people agree on premises 1 & 3.

nbro
John Doucette
  • Err.. do most professional scientists still agree on premise 3 given what happened this year? – user21820 Aug 19 '20 at 12:59
  • @user21820 What happened last year? Are you talking about how we dealt with covid, or is this a trivial but wrong guess? – nbro Feb 02 '21 at 01:26
  • For people interested in this topic, the AIMA book (3rd edition at least) contains a small section related to this: [26.1.2 The mathematical objection](https://cs.calvin.edu/courses/cs/344/kvlinden/resources/AIMA-3rd-edition.pdf#page=1041). – nbro Feb 02 '21 at 01:27
  • @nbro: Yes, the coronavirus pandemic response indeed... It's not just the systemic failure of many governments; a significant fraction of humans seem stupid in their behaviour; have you seen the studies on how many people believe this or that conspiracy theory, apparently with genuine conviction? As well as studies that show that their stupid behaviour is strongly correlated with their choice of news sources? – user21820 Feb 02 '21 at 03:51
  • The error is #4. It's an invalid implication: that humans can't solve everything doesn't imply they can't solve things that symbol systems can't solve. – yters Dec 11 '22 at 03:16
2

Although there seems to be an apt analogy between Gödel's theorems and the PSSH, there is nothing formal linking the two together.

More concretely, Gödel's theorems are about systems that decide certain "truths" about mathematics, but unless I am mistaken, the PSSH doesn't imply that the symbol system of the mind needs to decide such truths. Though we humans do implicitly decide facts about math, the PSSH gives no formal interpretation of how that might be done, so Gödel's theorems do not apply.

However, the answer above is still good under the assumption that the formal system we are talking about does indeed decide certain truths about math.

nbro
k.c. sayz 'k.c sayz'
  • 2,061
  • 10
  • 26
0

I think your conceptualization of this is a bit off. All the PSSH states is: "A physical symbol system has the necessary and sufficient means for general intelligent action."
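To make "physical symbol system" concrete, here is a toy sketch in Python (the rules are my own invented example, not anything from Newell and Simon): symbol structures are strings, and the system's processes are rewrite rules that produce new symbol structures from old ones.

```python
# A minimal "physical symbol system": symbol structures are strings,
# and the system's processes are rewrite rules applied to them.

RULES = [
    ("AB", "BA"),  # swap an adjacent A,B pair
    ("BB", "A"),   # two Bs collapse into a single A
]

def step(s: str):
    """Yield every string reachable from s by one rule application."""
    for lhs, rhs in RULES:
        i = s.find(lhs)
        while i != -1:
            yield s[:i] + rhs + s[i + len(lhs):]
            i = s.find(lhs, i + 1)

def reachable(start: str, max_steps: int = 5) -> set:
    """All symbol structures derivable from `start` in at most max_steps rewrites."""
    frontier, seen = {start}, {start}
    for _ in range(max_steps):
        frontier = {t for s in frontier for t in step(s)} - seen
        seen |= frontier
    return seen
```

For instance, `reachable("ABB")` contains `"BAB"` (swapping the A,B pair) and `"AA"` (collapsing the BB). Everything the system "does" is blind formal manipulation of tokens; the hypothesis claims that systems of this general kind are sufficient (and necessary) for intelligent action, not that they must decide every mathematical truth.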

Gödel's theorems state two basic things:

  1. A sufficiently powerful (and consistent) formal system cannot prove its own consistency.

  2. In any sufficiently powerful (consistent) formal system, there are true statements that cannot be proved within the system.

PSSH doesn't have too much to do with Gödel.

hisairnessag3