I was thinking about the following:
According to Sir Roger Penrose, "No computer has any awareness of what it does."
Now, some context for his statement:
Penrose's argument, summarized from his book (The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics), is that:
- We don’t understand physics well enough to use it to describe our brains.
- We don’t understand the mind well enough to create a framework capable of accommodating human consciousness.
- Since our minds are non-algorithmic and do not operate 'computationally', our intelligence therefore cannot be recreated by computers (Source).
Let's look at the latest controversy, started by a Google engineer who claimed that LaMDA
is self-aware. But then I came across Searle's Chinese Room argument and realized that LaMDA is not self-aware.
Then there are other self-learning AIs like OpenAI Five,
which was really good at playing Dota 2.
But how do we know whether this AI understands what it's doing? And if it doesn't, is there a possibility that a hypothetical future AGI might be aware of its actions?
As a note, I will add that the brain is still largely a mystery and there is no unanimous consensus on a single definition of consciousness; I think every definition out there is just a combination of thought and faith/belief. More recently, however, Penrose has suggested that there might be a connection between consciousness and quantum mechanics.