Assume I have a fairly advanced AI with consciousness that can understand the basics of electronics and software structures.
Will it ever be able to understand that its consciousness is just some bits in memory and threads in an operating system?
This is a great question, elements of which I have also been pondering, though we are very far from being able to actually wrestle with it algorithmically. It raises all kinds of metaphysical issues (Kant himself showed that pure reason cannot settle every question), but I'm going to avoid that rabbit hole and focus on the mechanics of your question.
Consciousness, most scientists argue, is not a universal property of all matter in the universe. Rather, consciousness is restricted to a subset of animals with relatively complex brains. The more scientists study animal behavior and brain anatomy, however, the more universal consciousness seems to be. A brain as complex as the human brain is definitely not necessary for consciousness.
Source: Scientific American "Does Self-Awareness Require a Complex Brain?"
Thus, an automaton that receives input may be said to be conscious, with the caveat that this idea is probably still considered radical. The key is distinguishing mere "consciousness" from much more complex concepts such as self-awareness.
But this is sticky, because automata that use Machine Learning are "aware" of themselves in that they may modify their "thought" process and even their "physical" structure.
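To make that limited sense of "awareness" concrete, here is a minimal sketch (pure illustration: the ToyLearner class and everything in it are my own invention, not any real ML framework). A toy learner rewrites its own weights in response to feedback, and it can also report the raw bytes and operating-system threads that currently realize its "mind", which is about as far as the "bits in memory and threads in an operating system" framing from the question literally goes.

```python
import struct
import threading


class ToyLearner:
    """A trivially 'self-modifying' automaton: it rewrites its own weights."""

    def __init__(self):
        # Its entire "thought process", reduced to two floating-point numbers.
        self.weights = [0.0, 0.0]

    def predict(self, x1, x2):
        return self.weights[0] * x1 + self.weights[1] * x2

    def learn(self, x1, x2, target, rate=0.1):
        # Modify its own "thought process" based on the error it observes.
        error = target - self.predict(x1, x2)
        self.weights[0] += rate * error * x1
        self.weights[1] += rate * error * x2

    def inspect_self(self):
        # Report the raw bits that currently encode its "mind", plus the
        # operating-system threads the interpreter happens to be running.
        raw_bits = b"".join(struct.pack("d", w) for w in self.weights)
        return raw_bits.hex(), [t.name for t in threading.enumerate()]


learner = ToyLearner()
for _ in range(20):
    learner.learn(1.0, 2.0, target=5.0)

bits, threads = learner.inspect_self()
print("weights as raw bytes:", bits)
print("threads hosting this 'mind':", threads)
```

Whether that kind of self-inspection amounts to any understanding at all is, of course, exactly what is at issue.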
But ML systems are certainly not self-aware in the human sense. A question might be: is this simply a function of these systems not being full artificial general intelligences, or is there more to it? If there is more to it, is it strictly a metaphysical question, or can an answer be derived through purely rational means? Even in the latter case there is still the problem of subjectivity, as in: "Is the automaton truly self-aware, or is it just mimicking self-awareness?", which brings us back to the metaphysical question of "Is there a difference?".
However,
I intentionally use "corpus" because it both refers to text (which may be code, or even a string of bits in its most reduced form, per the concept of a Turing machine) and has an anatomical meaning, as in the body of an organism. Corpus comes from Latin, and the extension of its meaning to include matter-as-information is modern.
Machines will never be conscious.
Let's try a thought experiment. You memorize a whole bunch of shapes. Then you memorize the orders the shapes are supposed to go in, so that if you see a bunch of shapes in a certain order, you "answer" by picking another bunch of shapes in the proper order. Now, did you just learn the meaning behind any language? Programs manipulate symbols this way. (Previously, people have either skirted this question or never had a satisfactory answer to it.)
The above is my reformulation of Searle's rejoinder to the Systems Reply to his Chinese Room argument.
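Here is a minimal sketch of that kind of rote symbol shuffling (the shape strings and the rule table are invented purely for illustration): the program produces a "correct" reply by lookup alone, with no grasp of what any symbol means.

```python
# Pure symbol manipulation: map one string of shapes to another by rote rules.
# The program never "knows" what any shape stands for.
RULES = {
    "△□○": "○○△",   # arbitrary, memorized pairings; the content is irrelevant
    "□□△": "△○□",
    "○△△": "□△○",
}


def respond(shapes: str) -> str:
    """Return the memorized 'answer' for a sequence of shapes, if any."""
    return RULES.get(shapes, "?")  # no rule memorized for this input


print(respond("△□○"))  # prints ○○△: a fluent-looking reply, produced with zero understanding
```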