2

Post-singularity AI will surpass human intelligence. The evolution of AI could take any direction, some of which may not be preferable for humans. Is it possible to manage the evolution of a super-intelligent AI? If yes, how? One way I can think of is the following: instead of building a mobile AI, such as a humanoid, we could keep it immobile, in a box, like current supercomputers. It could then be used to solve problems in mathematics, theoretical science, etc.

akm
  • 171
  • 4

4 Answers

2

Assuming super-intelligence is possible, the answer is probably yes and no.

Yes in Kurzweil-like scenarios, where super-intelligence is an extension of human beings by technology (we are already there to some extent). Control then follows, as the super-intelligence depends on us. It would extend our capabilities, such as the speed and extent of our processing. Even then, control is debatable: a remote-controlled killing machine would be part of a super-intelligent organism, partially human-"controlled", partially autonomous.

No in "Future of Life Institute"-like scenarios, where super-intelligence is independent from humans. The thinking is simple: What can we hope to do facing someone way more intelligent? The usual parallel is to compare this scenario with the arrival of the "developed" conquistadors in early America. Gunpowder vs. mere raw strength and arrows.

Eric Platon
  • 1,490
  • 10
  • 21
  • You mean Kurzweil? What are "FLI-like scenarios"? – akm Mar 02 '17 at 10:59
  • 1
    Let me counter-argue in the case of non-human-like super-intelligence. Of course, they will be made by humans. For such an intelligence to do any harm, it must need resources, i.e. power, materials, etc. Can't we control it by controlling these resources? Can we think of any situation where it won't need resources to do harm? But then again, we may not be able to think of such a situation because we are of lower intelligence! :) – akm Mar 02 '17 at 11:04
  • @AmitMaurya That's the whole story. We may reach a point where we cannot wrap our heads around what is happening, either because of complexity or because of the sheer pace of change. It seems you have follow-up questions, and those would be terrific new entries on the site. – Eric Platon Mar 07 '17 at 21:52
1

Competition always gives better results. If machines try to improve themselves, we as human beings will certainly try to improve ourselves.

Rishi Raj
  • 11
  • 2
0

Without going into more detail at the moment (because I'm time-strapped), I strongly urge you to research the Control Problem.

My own personal view is that humans are more problematic than machines. Machines are at least rational.

To be more specific, I believe human "management" (read as "mis-management") of powerful AI is potentially more of a problem than super-intelligent AI left to its own devices.

Humans are known to abuse power, and history is filled with such examples. Machines, at least, have a clean slate in this regard.

DukeZhou
  • 6,237
  • 5
  • 25
  • 53
  • Thanks. The problem is that machines are created by humans. Problematic humans may create problematic machines. Even if machines are good initially, the way they evolve can be problematic. – akm Mar 02 '17 at 22:27
  • @AmitMaurya Could be problematic, but I see that as a factor of humans imposing poor ideas based on shoddy understanding. Details here: http://ai.stackexchange.com/a/2878/1671 – DukeZhou Mar 02 '17 at 22:30
  • 1
    You are very correct when you say "Humans are known to abuse power, and history is filled with such examples". Power corrupts. So what will happen with powerful AIs? "Leaving it on its own" may not be a good idea. – akm Mar 02 '17 at 22:56
  • @AmitMaurya Yes, but humans have emotions that cloud logic and lead to poor decisions. Machines are not so burdened. The only apocalyptic scenarios regarding AI that are completely sound, imho, are the ones involving a lack of intelligence (grey goo) or insufficient intelligence (HAL 9000, "I, Robot", etc.). HAL was smart, but he couldn't contextualize or reconcile contradictions. – DukeZhou Mar 03 '17 at 20:24
0

Yes, it is possible.

When humans were working on the first nuclear bomb, some experts of the time thought that once the reaction went super-critical, it would not stop and would devour the Earth. It was a plausible possibility given our understanding of nuclear energy at the time, and we didn't know for sure until we did it.

Some scientists synthesize black-hole-like environments in laboratories. Some experts think that if a certain point is accidentally crossed through ignorance or negligence, we may devour our planet with a self-made black hole.

The situation is the same with AI. Until we actually create a super-intelligent AI, we cannot say with certainty whether it will be controlled or controllable. Until that time comes, the answer to your question is: yes, it is possible, but that does not mean it will or will not happen that way.

Nomadyn
  • 49
  • 2