
Suppose I have a fairly advanced, conscious AI that can understand the basics of electronics and software structures.

Will it ever be able to understand that its consciousness is just some bits in memory and threads in an operating system?

Andy

2 Answers


This is a great question, elements of which I have also been pondering, though we are very far from being able to actually wrestle with it algorithmically. It raises all kinds of metaphysical questions (Kant himself showed that pure reason is not sufficient for all questions), but I'm going to avoid that rabbit hole and focus on the mechanics of your question.

  • Consciousness: This is distinct from self-awareness and, fundamentally, may be said to require only awareness of something.

Consciousness, most scientists argue, is not a universal property of all matter in the universe. Rather, consciousness is restricted to a subset of animals with relatively complex brains. The more scientists study animal behavior and brain anatomy, however, the more universal consciousness seems to be. A brain as complex as the human brain is definitely not necessary for consciousness.
Source: Scientific American "Does Self-Awareness Require a Complex Brain?"

Thus, an automaton that receives input may be said to be conscious, with the caveat that this idea is probably still considered radical. The key is distinguishing mere "consciousness" from much more complex concepts such as self-awareness.
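To make concrete how low that bar is, here is a minimal sketch (in Python; the states and symbols are purely illustrative) of an automaton that "receives input" in the sense above:

    # A trivial finite-state automaton that "receives input": it reacts
    # to each symbol by changing its internal state. Under the broad
    # definition quoted above, this already counts as awareness of
    # something, which is exactly why the idea still sounds radical.
    TRANSITIONS = {
        ("idle", "ping"): "alert",
        ("alert", "quiet"): "idle",
    }

    def run(symbols, state="idle"):
        for s in symbols:
            state = TRANSITIONS.get((state, s), state)  # react to input
        return state

    print(run(["ping", "ping", "quiet"]))  # -> idle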

  • Self-Awareness: the holy grail. This is the idea that a set of elements, such as a human organism, is aware of itself.

But this is sticky, because automata that use Machine Learning are "aware" of themselves in that they may modify their "thought" process and even their "physical" structure.
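As a minimal sketch of that weak sense of "awareness" (plain Python, no ML library; the data and numbers are illustrative), here is a one-parameter learner that reads its own error and rewrites its own weight:

    # A one-parameter "learner" that modifies its own internal state
    # (the weight w) in response to its own errors. This is the weak
    # sense in which an ML system is "aware" of itself: it can read
    # and rewrite the numbers that determine its own behavior.
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x
    w = 0.0  # the model's entire "thought" process, reduced to one number

    for _ in range(100):
        for x, y in data:
            error = w * x - y       # inspect its own mistake
            w -= 0.01 * error * x   # rewrite its own parameter

    print(round(w, 3))  # settles near 2.0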

But ML systems are certainly not self-aware in the human sense. A question might be: is this simply a function of these systems not being full Algorithmic General Intelligences, or is there more to it? If there is more to it, is it strictly a metaphysical question, or can an answer be derived through purely rational means? Even in the latter case, there is still the problem of subjectivity, as in: "Is the automaton truly self-aware, or is it just mimicking self-awareness?", which brings us back to the metaphysical question of "Is there a difference?".

However,

  • If there were a full Algorithmic General Intelligence that had consciousness equatable with human consciousness, that was aware, and was even able to work with the basic components of its corpus*, it would certainly be able to grasp that its consciousness is a function of the "bits and bytes", just as humans are aware that we are soft machines, and that our consciousness is a function of our bodies and minds.

*I intentionally use corpus because it relates both to text (which may be code, or even a string of bits in its most reduced form, per the concept of a Turing Machine) and also has an anatomical meaning, as in the body of an organism. Corpus comes from Latin, and the extension of its meaning to include matter-as-information is modern.
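To illustrate how literal the "bits and bytes" can be, here is a minimal sketch (Python; it assumes CPython, where id() happens to be a memory address) of a process enumerating parts of its own substrate:

    import sys
    import threading

    # A process inspecting its own substrate: the OS threads it runs on
    # and the location and size of its own objects in memory. This is a
    # very weak machine analogue of a mind noticing it runs on a body.
    state = {"answer": 42}

    for t in threading.enumerate():  # the threads "underneath" this code
        print("thread:", t.name)

    print("address of state:", hex(id(state)))  # where the bits live (CPython)
    print("size in bytes:", sys.getsizeof(state))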

DukeZhou
  • I don't understand how this sentence "_But this is sticky, because automata that use Machine Learning are "aware" of themselves in that they may modify their "thought" process and even their "physical" structure._" is true. Which programs that use ML are self-aware? According to which definition of self-awareness would this be true? – nbro Dec 09 '21 at 21:30

Machines will never be conscious.

Let's try this theoretical thought exercise. You memorize a whole bunch of shapes. Then you memorize the order the shapes are supposed to go in, so that if you see a bunch of shapes in a certain order, you "answer" by picking a bunch of shapes in another prescribed order. Now, did you just learn the meaning behind any language? Programs manipulate symbols this way. (Previously, people have either skirted this question or never had a satisfactory answer to it.)

The above is my reformulation of Searle's rejoinder to the Systems Reply to his Chinese Room Argument.
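A minimal sketch of that shape-shuffling in code (Python; the rule book is purely illustrative) makes the point visible: the program produces correct "answers" by rote lookup, with nothing that could be called understanding:

    # The thought exercise as code: map sequences of "shapes" to other
    # sequences by pure table lookup. The output can be perfectly
    # "correct" while the program attaches no meaning to any symbol.
    RULE_BOOK = {
        ("square", "circle"): ("triangle",),
        ("circle", "circle"): ("square", "star"),
    }

    def answer(shapes):
        return RULE_BOOK.get(tuple(shapes), ("?",))  # rote symbol shuffling

    print(answer(["square", "circle"]))  # -> ('triangle',)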

pixie

  • The giant elephant in the room is that we still don't know how human consciousness works. Counter-arguments generally follow the idea that if something exists in nature, it can probably be modeled or replicated with a different mechanism. The jury is still out on this theoretical subject. – DukeZhou Mar 11 '19 at 20:40
  • "If something exists in nature, it can probably be modeled or replicated with a different mechanism." I'd like to know what, if anything, backs up that assertion. That's not a counter-argument, because arguing via assertion is invalid. At least there is a thought experiment behind my position. – pixie Mar 15 '19 at 04:18
  • @pixie A person could argue that an airplane flies like a bird, so we have been able to replicate nature in the past. The big problem with this argument is that airplanes do not fly like birds. So we may be able to do something similar to nature, but not exactly the same. – nbro Nov 11 '19 at 21:21