
I'm aware of AI systems that can play games and of neural networks that can identify pictures.

But are they really thinking? Do they think like humans? Do they have consciousness? Or are they just obeying a bunch of code?

For example, when an AI learns to play Pacman, is it really learning that it should not touch the ghosts, or is it just following a mechanical path that will make it win the game?

Rahul R
  • I'm not sure if I should close this post as too broad or not. Probably yes. However, maybe your questions boil down to the "simple" questions: "Can/do artificial intelligences really think like a human?" or "Can/do AIs have consciousness?". The fact that I don't really know what your question is suggests that I should close this post. – nbro Dec 07 '20 at 18:12

3 Answers


Do genetic algorithms and neural networks really think?

Genetic algorithms and neural networks are vastly different concepts. Neither of them thinks.

I'm aware of those AI programmes which can play games and neural networks which can identify pictures. But are they really thinking?

Depends on how you define "thinking", but I say "no".

Do they think like humans?

No.

Do they have consciousness?

No.

Or are they just obeying a bunch of code?

Yes. It is a machine. A program.


One example where most people realize how different it is are RNNs that generate text (example texts). They can produce syntactically correct text, but they "forget" about the actors in it; they don't connect the dots.
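
A crude way to see this failure mode, without an RNN at all, is a text generator with a very short context window: each step only looks at the last couple of words, so the output is locally plausible but quickly loses track of who is doing what. The sketch below uses a simple trigram model as a stand-in; a real RNN has a longer, learned memory, but the qualitative problem is similar.

```python
import random
from collections import defaultdict

def train_trigrams(text):
    """Map each pair of consecutive words to the words that may follow it."""
    model = defaultdict(list)
    words = text.split()
    for a, b, c in zip(words, words[1:], words[2:]):
        model[(a, b)].append(c)
    return model

def generate(model, seed, length=30):
    """Generate text that only ever 'remembers' the previous two words."""
    a, b = seed
    out = [a, b]
    for _ in range(length):
        followers = model.get((a, b))
        if not followers:
            break
        a, b = b, random.choice(followers)
        out.append(b)
    return " ".join(out)
```

Feed it a story with two characters and the generated text will happily swap their roles mid-sentence, because nothing outside the two-word window exists for the model.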

But then, on the other hand, would you say a bacterium thinks? An ant? A mouse? A dog? A chimpanzee? Do we "think"? Where exactly is the boundary, and why?

Martin Thoma

TL;DR Ignore the hype: current systems (in 2018) are very far removed from human-like "thinking", despite interesting and useful results. The state of the art for "thinking and behaving like a creature in general" has not reached the sophistication of insects, even though we have examples of narrow AIs that can beat the world's best at intellectual games.

There are some issues with the question as phrased: it is not a precise question, and it includes some common woolly logic that many people bring to discussions of AI. However, these issues come up so often that they are worth addressing here:

But are they really thinking?

Define "thinking". It is not easy.

Do they think like humans?

We don't fully know or understand how humans think.

Do they have consciousness?

We don't fully know or understand what consciousness is.

Or are they just obeying a bunch of code?

Are you just obeying a "bunch of code"? There is no reason to suspect that humans have a magic "something" that powers thought. All scientific evidence points to humans being sophisticated machines that follow the laws of physics and chemistry. However, the level of complexity is such that how brains, thinking and consciousness work is a very hard problem to solve. It is also possible that our lack of knowledge obscures some unknown property of the brain or of "thinking" that means it is more than applied physics and chemistry, but there is zero evidence for such a thing.

For example, when an AI learns to play Pacman, is it really learning that it should not touch the ghosts, or is it just following a mechanical path that will make it win the game?

This is an interesting question, and there is more to it than a simple yes/no answer:

Learning is not the same as thinking. We expect an intelligent entity to be able to learn when it receives new information. However, something that only learns, i.e. whose only feature is that it improves on some performance measure when fed experience, is only solving part of what it means to be intelligent or to think.

A human player starts with assumptions. A game like Pacman presents a simplified world that obeys many rules we are already familiar with. In the game there is a space (the screen) in which rules of distance apply. There are objects (Pacman, walls, pills, power pills, ghosts) that have coherent shapes and persistent, recognisable properties. Object persistence is a thing. The gameplay follows familiar concepts of time. All of these things are, at least initially, meaningless to a neural network.

A neural network generalises. One of the reasons for the success of deep learning is that neural networks can, within limits, learn rules such as "avoid the ghosts", or, in the case of a typical DQN agent, that a ghost getting closer to the player means there is low value in staying still or moving towards it, and high value in moving away towards an escape route. Not only that, but there is a good chance that a deep neural network will learn an internal representation that really does detect the ghosts and associate them with certain actions, and this can be inspected by generating "heat maps" of the network's response to different areas of an image.
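
As a rough sketch of that kind of inspection (an occlusion-sensitivity map, not any specific paper's method), assume a hypothetical `q_values(frame)` callable that returns a trained network's action values for a preprocessed frame. Blank out each region in turn and measure how much the outputs move; regions that change the values the most are the ones the network actually responds to.

```python
import numpy as np

def occlusion_heat_map(frame, q_values, patch=8, fill=0.5):
    """Occlusion-sensitivity map: how much the network's action values
    change when each region of the frame is blanked out.

    frame    -- 2D numpy array of pixels (a preprocessed game screen)
    q_values -- callable taking a frame and returning a 1D array of
                action values (a stand-in, not a real library API)
    """
    baseline = q_values(frame)
    h, w = frame.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, (h // patch) * patch, patch):
        for j in range(0, (w // patch) * patch, patch):
            occluded = frame.copy()
            occluded[i:i + patch, j:j + patch] = fill   # hide one patch
            # A large change in the outputs means this region matters.
            heat[i // patch, j // patch] = np.abs(q_values(occluded) - baseline).sum()
    return heat
```

If the network really has learned to detect the ghosts, the cells of the returned map that cover them should light up.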

Neural networks require far more experience than humans to learn to play well. This demonstrates that they are learning differently. Some of this may be due to innate human knowledge transferring to playing a game. However, a typical reinforcement-learning training scenario would need to demonstrate that ghosts are dangerous hundreds of times before the neural network is finally able to generalise well, whilst a human would learn that after only a few encounters, maybe just one. There is much research in this area, as making learning as efficient as possible is an important goal.
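
To see why so many repetitions are needed, here is a toy illustration (not the DQN itself): a single value estimate being nudged towards an observed penalty with a small learning rate, which is essentially what incremental temporal-difference style updates do, massively simplified. The learning rate and penalty below are made-up numbers.

```python
alpha = 0.01               # small learning rate, typical of stable training
q_touch_ghost = 0.0        # initial estimate of the value of touching a ghost
observed_return = -100.0   # penalty the agent actually receives

for encounter in range(1, 501):
    # Nudge the estimate a small step towards what was actually observed.
    q_touch_ghost += alpha * (observed_return - q_touch_ghost)
    if encounter in (1, 10, 100, 500):
        print(encounter, round(q_touch_ghost, 1))

# Prints roughly: 1 -1.0, 10 -9.6, 100 -63.4, 500 -99.3.
# Even after 100 "deaths" the estimate is still far from the true penalty.
```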

Neural networks are too direct and simple to possess internal world models. Raw generalisation is not the same as having the kind of rich internal model that you may be thinking of as "thinking". After training, the neural network is a function that maps a visual field of pixels to values and/or actions. There is no internal "narrative": even though the function is enough to behave correctly, it does not encode concepts such as space, object persistence or object traits, and it most definitely does not experience them.
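
To make "just a function" concrete, here is a minimal sketch with made-up sizes and random weights standing in for a trained network: the whole "agent" is a fixed mapping from pixels to an action index, with nothing resembling a narrative in between.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for trained parameters; in a real DQN these come from training.
W1 = rng.normal(size=(84 * 84, 64)) * 0.01
W2 = rng.normal(size=(64, 4)) * 0.01       # 4 actions: up, down, left, right

def policy(frame):
    """The entire 'agent': pixels in, action index out.
    No concepts of space, objects or time are represented explicitly."""
    x = frame.reshape(-1)                  # flatten the 84x84 visual field
    h = np.maximum(0.0, x @ W1)            # ReLU hidden layer
    q = h @ W2                             # one estimated value per action
    return int(np.argmax(q))               # act greedily on those values

frame = rng.random((84, 84))               # a fake game frame
print(policy(frame))                       # e.g. 3
```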

In some ways, asking if an artificial neural network can think is like asking if a small slice of your retina, or a tiny cube (less than 1 mm³) of brain tissue, can think. Or perhaps whether your walking reflex and innate sense of balance count as thinking. Current state-of-the-art efforts are at that kind of scale, in terms of both computing power and complexity.

Neil Slater

This is a really interesting question that can't be answered correctly, since we lack a common understanding or a universally valid definition of what "thinking" means. Still, I will try to give my humble opinion on it.

First of all, I would like to mention that consciousness might not exist in a binary fashion (you either possess it or you don't) but in a gradual one.

In my understanding, what neural networks do is learn a mapping, i.e. a function from the input space of their sensors to the output space of their actuators. If the neural network is playing Pacman, the input consists of the pixels of the game and the output consists of the actions the agent can take in the game. Since this process runs as a sequence of isolated steps (i.e. input $\rightarrow_{NN}$ output; input $\rightarrow_{NN}$ output; and so on), I would not go so far as to consider the neural network conscious.
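
A minimal sketch of that sequential session, with a placeholder `network()` standing in for the trained model and hypothetical `get_frame()`/`send_action()` hooks for the game: each step is an isolated function evaluation, and nothing persists between steps except the fixed weights.

```python
def network(pixels):
    # Placeholder for the trained mapping from sensor input to an action;
    # here it is just a stub that returns one of four actions.
    return sum(pixels) % 4

def play(get_frame, send_action, n_steps=1000):
    """Run the agent: input -> NN -> output, over and over."""
    for _ in range(n_steps):
        observation = get_frame()      # sensor input: the game's pixels
        action = network(observation)  # the learned mapping
        send_action(action)            # actuator output: a move in the game
```

The $\rightarrow_{NN}$ arrows in the text correspond to the `network(observation)` calls here.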

An interesting perspective on this question comes from Douglas Hofstadter in his book "Gödel, Escher, Bach". The main thesis of the book (as I understand it) is that self-reference is a crucial and necessary precondition for (self-)consciousness.

  • Why do you think the question cannot be answered correctly? – Martin Thoma Nov 08 '18 at 15:19
  • What I mean is what you wrote as "Depends on how you define 'thinking'". We do not have a common understanding or a universally valid definition of thinking. So we don't know what we're talking about when we say "thinking". Thus, we can't answer a question about "thinking". – adriculteur Nov 08 '18 at 15:25
  • Thank you! That is a good point and might be worth making clearer directly in your answer. I was close to downvoting it, because I interpreted your answer very differently. With that explanation, have your upvote :-) – Martin Thoma Nov 08 '18 at 15:37
  • @MartinThoma Thank you for your feedback. I edited my answer accordingly. – adriculteur Nov 08 '18 at 16:02