
An example is the halting problem, which no amount of exhaustive computation can solve, but which humans avoid trivially by becoming exhausted.

Humans typically give up on what seems like a lost cause after a certain point, whereas a computer will just keep chugging along.

  • Do flaws have utility?

Can inability be a strength, and will AGI require such limitations to achieve human-level intelligence? Humans are simply not capable of infinite loops, except arguably in cases of mental illness. Are there other problems, similar to the halting problem, where weakness is a benefit?
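The "giving up" workaround can be sketched in code: we cannot decide whether an arbitrary computation halts, but we can bound the effort we spend on it and quit when the budget runs out, much as a human does. This is a minimal illustrative sketch; the function names and the toy computations below are my own, not from any library.

```python
def run_with_budget(step, state, max_steps):
    """Advance `state` with `step` until it returns None (halted on its own)
    or the step budget is exhausted. Returns (halted, final_state)."""
    for _ in range(max_steps):
        nxt = step(state)
        if nxt is None:          # the computation halted by itself
            return True, state
        state = nxt
    return False, state          # budget exhausted: we "give up"

# A computation that never halts: bounce forever between 1 and 2.
never_halts = lambda n: 2 if n == 1 else 1

# A computation that halts: count down to zero, then signal completion.
counts_down = lambda n: None if n == 0 else n - 1

print(run_with_budget(counts_down, 5, 100))   # halts within the budget
print(run_with_budget(never_halts, 1, 100))   # budget exhausted, gives up
```

The budget does not solve the halting problem (a program might halt one step after we quit); it only trades completeness for guaranteed termination, which is exactly the "flaw" the question asks about.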

  • It's not clear to me why you claim that humans can solve the halting problem. If that were true, it would imply that there are functions that Turing machines cannot compute, so there are computers more powerful than Turing machines, which, as far as I know, has not yet been proven (see the Church-Turing thesis). – nbro Jun 05 '21 at 01:22
  • I agree with nbro; I think you need to make it a little clearer that you don't mean "give up" = "solve". Other than that, I think this could be an interesting question about utility and embodiment (at least that would be my take on what leads to an answer). – Neil Slater Jun 05 '21 at 08:04
  • Computers can also give up after a certain point. This is one way that algorithms are made to have guaranteed halting: by including a counter that ends the algorithm after a certain number of steps, if it has not halted before. A more physical analogy is running the computer off a battery: when it runs out of power, it stops through 'exhaustion' like a human. You said "give up what seem like a lost cause", which to me is very different from exhaustion; it is more that the human has guessed their procedure does not halt and therefore terminates early. I don't really see that as a flaw. – user7834 Jun 12 '21 at 17:17
  • @NeilSlater & nbro — good point. I shouldn't have used "solved", and have replaced it with "avoid". Basically, nature seems to produce that which is minimally optimal, and designs are often flawed but just adequate enough. Workarounds seem to be our holy grail in much of computer development. – DukeZhou Jun 17 '21 at 22:44
  • @user7834 I think that's a solid answer, because workarounds have high utility in applied computing, even when not optimal solutions. Wondering if there are other "flaws" that would be useful too #FeatureNotABug – DukeZhou Jun 18 '21 at 01:02
  • @DukeZhou Glad it's helpful. – user7834 Jun 18 '21 at 12:20
  • I think this post still contains wrong assumptions. 1. The fact that, sometimes, we're able to recognize an unsolvable or difficult problem is not a flaw, but rather the opposite. In fact, if there's a way to do what you want but in a smarter way (e.g. by spending less time), that would have more utility. Right? 2. We're not always able to tell whether a program will terminate. In some cases, that's easy for us, but we can't tell for sure in all cases too. Of course, that will happen if we run out of resources. – nbro Oct 16 '21 at 23:06
  • Anyway, to make this question answerable, you need to define what you mean by "flaw" by providing some concrete examples of what you really have in mind. Regarding the loops, [here](https://ai.stackexchange.com/q/2516/2444) you have another related post. Regarding this ambiguous claim "Humans are simply not capable of infinite loops, except arguably in cases of mental illness": if by 'a person capable of "loops"', you mean someone that continuously does the same "wrong" thing, of course, in practice, the loop will not be infinite because the person will eventually die. – nbro Oct 16 '21 at 23:06
  • But that's the same thing with computers. In theory (i.e. Turing machines), they can loop forever, but, in practice, this does not happen. If humans are computers, then, in theory, we may also be able to loop forever. – nbro Oct 16 '21 at 23:13

0 Answers