
Looking at some cases of machine-learning-based artificial intelligence, I often see them make critical mistakes when they face situations they have no experience with.

In our case, when we encounter a totally new problem, we acknowledge that we are not skilled enough to do the task and hand it to someone who is capable of doing it.

Would AI be able to self-examine objectively and determine if it is capable of doing the task?

If so, how would it be accomplished?

nbro

5 Answers


Several AI systems come up with a level of confidence in the solution they find. For example, neural networks can indicate how closely an input resembles the data they were trained on. Similarly, genetic algorithms work with evaluation functions that are used to select the best results; depending on how those functions are built, they can indicate how close the algorithm is to an optimal solution.
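As a rough illustration of the neural-network case, the softmax output of a classifier is often read as a confidence score. This is only a minimal sketch with made-up logits (and softmax confidence is a crude proxy that can be overconfident on unfamiliar inputs), not a reference to any specific model or library:

```python
import numpy as np

def softmax(logits):
    """Convert raw network outputs (logits) into probabilities."""
    z = logits - np.max(logits)   # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical raw outputs of a trained classifier for one input.
logits = np.array([2.1, 0.3, -1.0, 0.5])
probs = softmax(logits)

prediction = int(np.argmax(probs))   # class the network favours
confidence = float(np.max(probs))    # how strongly it favours it

print(f"predicted class {prediction} with confidence {confidence:.2f}")
```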

Whether such a result is acceptable or not will then depend on a threshold set beforehand. Is 50% confidence good enough? Maybe it's OK for OCR apps (spoiler: it's not), but is it for medical applications?
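A minimal sketch of that thresholding idea, assuming a confidence score in [0, 1] like the one above; the threshold values and domain names are purely illustrative, not recommendations:

```python
# Domain-dependent confidence thresholds (illustrative values only).
THRESHOLDS = {
    "ocr": 0.90,        # even OCR usually needs far more than 50%
    "medical": 0.99,    # high-stakes domains demand much stricter cut-offs
}

def decide(confidence: float, domain: str) -> str:
    """Act on the model's answer or hand the task to a human."""
    if confidence >= THRESHOLDS[domain]:
        return "act on the model's answer"
    return "defer to a human expert"

print(decide(0.50, "medical"))   # -> defer to a human expert
print(decide(0.95, "ocr"))       # -> act on the model's answer
```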

So yes, AI systems do currently have the capacity to determine whether they're performing well or not, but how acceptable that performance is depends on the domain of the problem, which stands outside of what is built into the AI itself.

Alpha

Would AI be able to self-examine objectively and determine if it is capable of doing the task?

Our ability to self-examine comes directly from the memory of our experiences; indeed, for this reason it can't be objective. In the same way, an AI could determine the heuristically optimal strategy to solve a problem if and only if it has some sort of memory of previous tasks, e.g. in speech recognition.
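One way to make "memory of previous tasks" concrete is a nearest-neighbour familiarity check: compare a new input against stored representations of past cases and treat a large distance as "I have not seen anything like this". The sketch below is a toy illustration under that assumption (the vectors and distance cut-off are invented), not a claim about how any real speech-recognition system works:

```python
import numpy as np

# "Memory": feature vectors of previously handled cases (toy data).
memory = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.1, 0.9, 0.3],
])

def is_familiar(x, memory, max_distance=0.5):
    """Return True if the new input is close to something in memory."""
    distances = np.linalg.norm(memory - x, axis=1)   # distance to each stored case
    return bool(distances.min() <= max_distance)

new_input = np.array([0.85, 0.15, 0.05])   # resembles stored cases
odd_input = np.array([5.0, -3.0, 2.0])     # nothing like past experience

print(is_familiar(new_input, memory))  # True  -> attempt the task
print(is_familiar(odd_input, memory))  # False -> admit inexperience, hand it off
```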

Science is constantly working to improve our understanding of these things. Trying to mimic the human brain seems to be a difficult problem at the moment, though we are able to replicate almost fully simpler organisms such as C. elegans, a roundworm.

Lovecraft

I would concur with the answer given to you by Lovecraft. One of the major problems with A.I. programmers is that they are always trying to push computers to do things that are designed for "mature" intelligent creatures who have prior experience and knowledge of solving problems, as if these things could be imparted without the A.I. first having to go through the necessary and vital "learn by trial and error" experience, for example in task examination, self-evaluation and risk assessment.

You have answered your own question, because these things can only be gained by "experience". However, the only way to surmount this is to expose a prototype A.I. to the main problems, help it to solve them, and then take its memory and use it as a template for other A.I.s.

Technically, A.I.s which have learned to solve prior problems could make their memories available to others on demand, so that an inexperienced A.I. could solve an issue without having acquired the needed skills itself.
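In current practice, the closest analogue to sharing "memories" is sharing learned parameters: one system's trained weights are saved and then loaded by another, which starts from that experience instead of learning from scratch. A minimal sketch, assuming plain NumPy arrays as the stored memory (real systems would use their framework's own save/load utilities, and the file name and shapes here are invented):

```python
import numpy as np

# The "experienced" A.I.'s learned parameters (toy example).
experienced_weights = {
    "layer1": np.random.randn(4, 8),
    "layer2": np.random.randn(8, 2),
}

# Make the memory available: write it to disk in a shareable format.
np.savez("shared_memory.npz", **experienced_weights)

# An "inexperienced" A.I. loads the template instead of starting from scratch.
loaded = np.load("shared_memory.npz")
new_agent_weights = {name: loaded[name] for name in loaded.files}

print(new_agent_weights["layer1"].shape)  # (4, 8) -- same knowledge, new agent
```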

However, I would like to add that mimicking intelligence is not in itself "intelligence". Many programmers fall into the trap of believing that to emulate something is qualitatively the same as the genuine article. This is a fallacy which implies that we only have to simulate intelligence without understanding the real mechanisms which create it.

This "copying" of sentience is done all the time and despite how good we have become in copying over the last few years, each new algorithm is just that: a simulation without genuine sentience or intelligence!

Engage

Would AI be able to self-examine objectively and determine if it is capable of doing the task?

A possible approach might be the one suggested and studied by J. Pitrat (one of the earliest AI researchers in France; his PhD on AI was published in the early 1960s and he is now a retired scientist). Read his Bootstrapping Artificial Intelligence blog and his book Artificial Beings: The Conscience of a Conscious Machine.

(I'm not able to summarize his ideas in a few words, even though I do know J. Pitrat and even meet him once in a while. Grossly speaking, he has a strong meta-knowledge approach combined with reflexive programming techniques. He has been working, alone, for more than 30 years on his CAIA system, which is very difficult to understand: even though he publishes his system as free software, CAIA is not user friendly, with a poorly documented command-line user interface. While I am enthusiastic about his work, I am unable to explore his system.)
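This is emphatically not a summary of CAIA or of Pitrat's actual techniques; purely as a toy illustration of the general idea of meta-knowledge about one's own competence, a system can keep an explicit record of which problem classes it has solved reliably and consult that record before accepting a task. All names and numbers below are hypothetical:

```python
# Toy meta-knowledge store: past success rates per problem class (made-up numbers).
meta_knowledge = {
    "chess_endgames": {"attempts": 120, "successes": 114},
    "protein_folding": {"attempts": 3, "successes": 0},
}

def can_i_do_this(problem_class, min_attempts=10, min_rate=0.8):
    """Consult the system's own record before accepting a task."""
    record = meta_knowledge.get(problem_class)
    if record is None or record["attempts"] < min_attempts:
        return False   # too little self-knowledge: decline the task
    return record["successes"] / record["attempts"] >= min_rate

print(can_i_do_this("chess_endgames"))   # True
print(can_i_do_this("protein_folding"))  # False -> hand the task to someone else
```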

But defining what "conscience" or "self-awareness" could precisely mean for some artificial intelligence system is a hard problem by itself. AFAIU, even for human intelligence, we don't exactly know what that really means or how it actually works. IMHO, there is no consensus on a definition of "conscience", "self-awareness" or "self-examination" (even when applied to humans).

But whatever approach is used, giving any kind of constructive answer to your question would require a lot of pages. J. Pitrat's books and blog are a better attempt than anything anyone could answer here. So your question is, IMHO, too broad.


It's not possible, as this is the distinction between AI and humans. Truly, science will never understand the subconscious; it's that little black box that no one can reverse engineer. This is why pursuing the singularity is a fool's dream to the extreme.

The reason machinery lacks this is the lack of a soul. Science cannot produce a soul, and this is why a machine cannot be self-aware. We can program fancy algorithms all day that mimic things, but they are emotionless; a machine cannot sit in judgement because it lacks real self-awareness, that is, human self-awareness. It's like trying to turn an orange into an apple.

Ben Madison