Artificial consciousness is a challenging theoretical and engineering objective. Once that major challenge is met, the computer's conscious awareness of itself would likely be a minor addition, since the conscious computer is just another object of which its consciousness can be aware.
A child can look in the mirror and recognize that moving their hands back and forth or making faces produces corresponding changes in the reflection. They recognize themselves. Later on they realize that exerting physical control over their own movement is much easier than exerting control over another person's hands or face.
Some learn that limited control of the faces and manual operations of others is possible if certain social and economic skills are mastered. They become employers, landlords, investors, activists, writers, directors, public figures, or entrepreneurs.
Anyone who has studied the cognitive sciences, or who has experienced the boundary between types of thought as a professional counselor or simply a deep listener, knows that the lines around consciousness are blurry. Consider these.
- Listening to speech
- Watching a scene
- Focusing on a game
- Presenting an idea
- Washing up for work
- Driving a car
- Choosing a purchase
Any one of these things can be done with or without certain kinds of consciousness, subconsciousness, impulse, or habit.
Subjectively, people report getting out of the car and not recalling having driven home. One can listen to someone talking, nod in affirmation, respond with, "Yeah, I understand," and even repeat what they said, and yet appear to have no memory of the content of the speech if queried in depth. One can read a paragraph and get to the end without comprehension.
In contrast, a person may mindfully wash up for work, considering the importance of hygiene and paying attention like a surgeon preparing for an operation, noticing the smell of the soap and even the chlorination of the city water.
Between those extremes, partial consciousness is also detectable by experiment and in personal experience. Consciousness most definitely requires attention functionality, which tentatively supervises the coordination of other brain-body sub-systems.
Once a biological or artificial system achieves the capacity to coordinate attentively, the objects and tasks toward which that attention can be directed can be interchanged. Consider these.
- Dialog
- Playing to win
- Detecting honesty or dishonesty
Now consider how similar or different these mental activities are when we compare self-directed or externally directed attention.
- One can talk to one's self or talk to another
- One can play both sides of a chess game or play against another
- One can scrutinize one's own motives or those of another
This illustrates why the self- part of self-consciousness is not the challenge in AI. It is the attentive (yet tentative) coordination that is difficult. Early microprocessors, designed for real-time control systems, included (and still include) exception signaling that simplistically models this tentativeness. For instance, while a subject is playing to win at a game, one might try to initiate dialog with them. Attention may shift when the two activities require the same sub-systems.
We tend to consider this switching of attention consciousness too. If we are the person trying to initiate dialog with the person playing to win, we might say, "Hello?" The question mark is because we are wondering if the player is conscious.
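One crude but concrete way to model that shift is the exception mechanism those early microprocessors used: a main task runs until another activity claims a shared sub-system and preempts it. A toy sketch, with all names invented for illustration:

```python
# Toy model of attention switching: the "speech" sub-system is shared by
# game-playing (planning a reply to a move) and dialog, so a pending
# dialog request raises an exception that preempts the game.

class AttentionInterrupt(Exception):
    """Raised when another activity claims a shared sub-system."""

pending_dialog = ["Hello?"]          # someone tries to start a conversation

def play_to_win(moves):
    log = []
    for move in moves:
        if pending_dialog:           # shared sub-system claimed elsewhere
            raise AttentionInterrupt(pending_dialog.pop(0))
        log.append(f"considered {move}")
    return log

try:
    play_to_win(["e4", "e5", "Nf3"])
except AttentionInterrupt as exc:
    response = f"attention shifted: heard {exc}"

print(response)
```

The exception here plays the role of the hardware interrupt: the game-playing task never decides to stop, it is preempted by the coordination layer.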
If one were to diminish the meaning of consciousness to the most basic criteria, one might say this.
"My neural net is intelligent in some small way because it is conscious of the disparity between my convergence criteria and the current behavior of the network as it is parametrized, so it is truly an example of artificial intelligence, albeit a primitive one."
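As a toy illustration of what that quoted claim amounts to in practice, a training loop can explicitly track the disparity between a convergence criterion and the network's current behavior. This is a minimal sketch, not any particular framework's API; the one-parameter "network" and all names are invented:

```python
import random

# A one-parameter "network" trained by gradient descent, with an explicit
# monitor of the disparity between the convergence criterion (tolerance)
# and the network's current behavior (loss).

def train(target=3.0, lr=0.1, tolerance=1e-6, max_steps=1000):
    w = random.uniform(-10, 10)          # the single parameter
    for step in range(max_steps):
        loss = (w - target) ** 2         # current behavior of the "network"
        disparity = loss - tolerance     # gap to the convergence criterion
        if disparity <= 0:               # "aware" of having converged
            return w, step
        w -= lr * 2 * (w - target)       # gradient step: d(loss)/dw = 2(w - t)
    return w, max_steps

w, steps = train()
print(f"converged to {w:.4f} in {steps} steps")
```

The "consciousness" in the quote is nothing more than the `disparity` variable being computed and acted upon each step, which is exactly why the quote sets such a low bar.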
There is nothing grossly incorrect about that statement. Some have called this "Narrow Intelligence." That characterization is slightly inaccurate, since there may be an astronomical number of possible applications of an arbitrarily deep artificial network that uses many of the most effective techniques available in its design.
The other problem with narrowness as a characterization is the inference that there are intelligent systems that are not narrow. Every intelligent system is narrow compared to a more intelligent system. Consider this thought experiment.
Hannah writes a paper on general intelligence with excellence, both in theoretical treatment and in writing skill. Many quote it and reference it. Hannah is now so successful in her AI career that she has the money and time to build a robotic system. She bases its design on her now famous paper and spares no expense.
To her surprise, the resulting robot is so adaptive that its adaptability exceeds even Hannah's. She names it Georgia Tech for fun because she lives near the university.
Georgia becomes a great friend. She learns at an incredible rate and is a surprisingly great housemate, cleaning better than Hannah thought humanly possible, which may be literally true.
Georgia applies to Georgia Tech, just down the bus line from Hannah's house, and studies artificial intelligence there. Upon achieving a PhD after just three years of study, Georgia sits with Hannah after a well-attended thesis publication party that Hannah graciously held for her.
After the last guest leaves, there is a moment of silence as Hannah realizes the true state of her household. She thinks, "Will Georgia now exceed me in her research?" Hannah finally, sheepishly asks, "In complete honesty, Georgia, do you think you are now a general intelligence like me?"
There is a pause. With a forced look of humility, Georgia replies, "By your definition of general intelligence, I am. You are no longer."
Whether this story becomes true in 2018, 3018, or never, the principle is clear. Georgia is just as able to analyze herself comparatively with Hannah as Hannah is similarly able. In the story, Georgia applies the definition created in Hannah's paper because Georgia is now able to conceive of many definitions of intelligence and chooses Hannah's as the most pertinent in the context of the conversation.
Now imagine this alteration to the story.
... She thinks, "At what level is Georgia thinking?" Hannah finally, sheepishly asks, "In complete honesty, Georgia, are you now as conscious as me?"
Georgia thinks through the memory of every use of the word conscious in her past studies — a thousand references in cognitive science, literature, law, neurology, genetics, brain surgery, treatment of brain injury, and addiction research. She pauses for a few microseconds to consider it all thoroughly, while at the same time sensing her roommate's body temperature, neurochemical balances, facial muscle motor trends, and body language.
Respectfully, she waits 3.941701 extra seconds, which she calculated as the delay that would minimize any humiliation to Hannah, whom she loves, and replies, "Conscious of what?"
In Georgia's reply may be a hypothesis of which Hannah may or may not be aware. For any given automata, $a, b, \ldots$, given consciousness, $C$, of a scenario $s$, we have a definition, $\Phi_c$, that can be applied to evaluate the aggregate of all aspects of consciousness of any of the automata, $x$, giving $\Phi_c(C_x(s))$. Georgia's (apparently already proven) hypothesis is this.
$\forall a \;\;\; \exists \;\;\; b, \, \epsilon>0 \;\; \ni \;\; \Phi_c(C_b(s)) > \Phi_c(C_a(s)) + \epsilon$
This is a mathematical way of saying that there can always be someone or something more conscious of a given scenario, whether or not she, he, or it is ever brought into existence. Changing the criterion of evaluation from consciousness to intelligence, we have this.
$\forall a \;\;\; \exists \;\;\; b, \, \epsilon>0 \;\; \ni \;\; \Phi_i(C_b(s)) > \Phi_i(C_a(s)) + \epsilon$
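The two hypotheses share one logical form: for any automaton $a$, some $b$ exceeds its score by a positive margin. On a finite sample of automata the predicate can be checked directly, though the claim itself concerns an unbounded space of possible automata. The scores below are made-up numbers for illustration:

```python
# Checking the shared form of the two hypotheses on a finite, invented
# sample: for automaton a, does some b exceed its score by a margin eps?

def exceeded(scores, a, eps=1e-9):
    """True if some automaton b scores strictly above scores[a] + eps."""
    return any(s > scores[a] + eps for b, s in scores.items() if b != a)

phi_c = {"a": 0.40, "b": 0.75, "georgia": 0.93}   # hypothetical Phi_c values

print(exceeded(phi_c, "a"))        # some b is more conscious of s
print(exceeded(phi_c, "georgia"))  # no sampled automaton exceeds Georgia
```

The hypothesis asserts that the second check would succeed too, if the space of automata were not limited to the ones sampled.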
One can only surmise that Hannah's paper defines general intelligence relative to whatever is the smartest thing around, which was once well-educated human beings. Thus Hannah's definition of intelligence is dynamic. Georgia applies the same formula to the new situation, where she is now the standard against which lesser intelligence is narrow.
Regarding the ability to confirm it, consciousness is actually easier to confirm than intelligence. Consider this thought experiment.
Jack is playing chess with Dylan using the new chess set that Jack bought. In spite of the aesthetic beauty of this new set, with its white onyx and black agate pieces, Dylan moves each piece with prowess and checkmates Jack. Jack wonders if Dylan is more intelligent than him and asks what would be a normal question under those conditions.
"Dylan, buddy, how long have you been playing chess?"
Regardless of the answer, and regardless of whether Dylan is a robot with a quantum processor running advanced AI or a human being, Dylan's intelligence cannot be reliably gauged. However, there is NO DOUBT that Dylan was conscious of the game play.
In the examples in the lists at the top of this answer, there is a particular set of requirements to qualify as consciousness. For the case of Jack and Dylan playing, a few things MUST be working in concert.
1. Visual recognition of the state of the board
2. Motor control of the arm and hand to move pieces
3. Tactile detection in finger and thumb tips
4. Hand-eye coordination
5. Grasp coordination
6. A model of how to physically move board pieces
7. A model of the rules of chess in memory
8. A model of how to win when playing it (or astronomical computational power to try every possible permutation that makes any sense)
9. An internal representation of the board state
10. Attention execution, visually and in terms of the objective of winning
11. Prioritization that decides, unrelated to survival odds or asset accumulation, whether to beat Jack in chess, do something else, or do nothing (non-deterministic if the ancient and commonplace notion of the causal autonomy of the soul is correct)
The topology of connections, referencing the requirements by their position in the list above, is as follows, and there may be more.
1 ⇄ 4 ⇄ 2
3 ⇄ 5 ⇄ 2
4 ⇄ 6 ⇄ 5
7 ⇄ 8 ⇄ 9
6 ⇄ 10 ⇄ 8
10 ⇄ 11
This is one of many integration topologies that support one of many types of things to which consciousness might apply.
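That topology can be written down as an undirected graph over the eleven numbered requirements and checked for integration, for instance whether every sub-system is reachable from attention (item 10). A minimal sketch:

```python
from collections import deque

# The connection topology above as an undirected graph. Node numbers
# refer to the eleven requirements listed for the chess example.
edges = [(1, 4), (4, 2), (3, 5), (5, 2), (4, 6), (6, 5),
         (7, 8), (8, 9), (6, 10), (10, 8), (10, 11)]

graph = {n: set() for n in range(1, 12)}
for u, v in edges:
    graph[u].add(v)
    graph[v].add(u)

def reachable(start):
    """Breadth-first search: all nodes reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Attention (10) can reach every sub-system in this topology.
print(sorted(reachable(10)))
```

That every node is reachable from node 10 is one concrete sense in which the sub-systems are "coordinated" rather than merely co-present.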
Whether looking in the mirror just to prepare for work or looking deeply, considering the ontological question, "Who am I?", each mix of consciousness, subconsciousness, impulse, and habit requires a specific topology of mental features. Each topology must be coordinated to form its specific embodiment of consciousness.
To address some of the other sub-questions: it is easy to make a machine that claims to be conscious. A digital voice recorder can be programmed to do it in five seconds by recording yourself saying so.
Getting a robot to read this answer or some other conception, consider it thoughtfully, and then construct the sentence from knowledge of the vocabulary and conventions of human speech to tell you its conclusion is an entirely different task. The development of such a robot may take 1,000 more years of AI research. Maybe ten. Maybe never.
The last question, switched from plural to singular, is, "If [an artificially intelligent device] is only [operating] on predefined rules, without consciousness, can we even call it intelligent?" The answer is necessarily dependent upon the definition $\Phi_i$ above, and, since neither $\Phi_c$ nor $\Phi_i$ has a standard definition within the AI community, one can't determine the cross-entropy or correlation. It is indeterminable.
Perhaps formal definitions of $\Phi_c$ and $\Phi_i$ can now be written and submitted to the IEEE or some standards body.