
Science fiction has frequently portrayed AI as a threat to the very existence of mankind. AI systems have been the antagonists in many works of fiction, from 2001: A Space Odyssey through to The Terminator and beyond.

The media seems to buy into this trope as well, and in recent years people like Elon Musk have warned us of the dangers of an impending AI revolution, stating that AI is more dangerous than nukes.

Apparently, experts think that we will be seeing this AI revolution within the next 100 years.

However, from my (albeit limited) study of AI, I get the impression that they are all wrong. I will outline my understanding below; please correct me if I am wrong:

  • Firstly, all of these claims seem to confuse Artificial Intelligence with Artificial Consciousness. AI is essentially a system for making intelligent decisions, whereas AC is more like the "self-aware" systems shown in science fiction.

  • Intelligence, and intelligent decision-making algorithms, are something we have been working with and refining since before computers existed, so moving them over to an artificial framework is fairly easy. Consciousness, however, is something we are still learning about. My guess is that we won't be able to re-create something artificially when we barely understand how it works in the natural world.

  • So my conclusion is that no AI system will be able to learn enough to start thinking for itself, and that all these warnings about AI are completely unjustified.

  • The real danger comes from AC, which we are a long, long way from realizing, because we are still a long way off from defining exactly what consciousness is, let alone understanding it.


So my question is: assuming that my understanding is correct, are any efforts being made by companies or organizations that work with AI to correct these popular misunderstandings in sci-fi, the media, and/or the public?

Or are the proponents of AI ambivalent towards this public fear-mongering?

I understand that the fear-mongering is going to remain popular for some time, as bad news sells better than good news. I am just wondering whether the general attitude from AI organizations is to ignore this popular misconception, or whether a concerted effort is being made to fight these AI myths (but unfortunately nobody in the media is listening or cares).

Jimmery
    Related book recommendation: Life 3.0, https://www.amazon.co.uk/Life-3-0-Being-Artificial-Intelligence/dp/024123719X (many other books are available on this subject, but I have read it, and think it covers this topic reasonably well) – Neil Slater Oct 16 '19 at 14:27
  • As written, the question is hard to answer, as it would require someone to gather/poll data from "AI organisations" or "proponents of AI" (whatever those are - do you have a working definition?). Any chance you could re-frame it? Either re-working or dropping the last two paragraphs would help. The paragraph starting "So my question is . . ." seems OK to me, because it doesn't ask us to figure out motivations and politics across multiple people and organisations, just to identify some public initiatives. – Neil Slater Oct 16 '19 at 14:36
  • Guns and bombs also don't think for themselves, yet they're dangerous. –  Oct 16 '19 at 16:14
  • @NeilSlater I apologize if this question is hard to answer, and I understand the points you are making. Really, all I was after was any information on the public initiatives (as you stated), and the preamble before "so my question is..." was there to give the question context (admittedly coloured by my own frustrations with the misconceptions surrounding AI). The final two paragraphs were there to add some weight and specificity to the question, but if you feel I should remove them I am more than happy to do so. – Jimmery Oct 17 '19 at 10:25
  • I feel that the last two paragraphs don't add specificity, and pose unanswerable questions about motivations and behaviour across a very broad set of people ("are the proponents of AI ambivalent") and organisations ("the general attitude from AI organizations"). No-one here can speak with authority about such broad groups. So yes please either remove or re-word those paragraphs. – Neil Slater Oct 17 '19 at 12:55
  • I advise you to carefully learn the strong arguments for AI risk. Your understanding of it is pretty poor (for one, consciousness is pretty much irrelevant to the argument), so you should probably not advocate for a given conclusion. Robert Miles does very clear introductory videos about it on YouTube, but understanding the detailed arguments will require some proper reading. – Veedrac Oct 23 '19 at 10:12
  • Of course, the general media's conception of AI risk is completely wrong, and can safely be ignored, but that should come as no surprise. – Veedrac Oct 23 '19 at 10:13
  • @Veedrac point taken - in order to try and educate myself on AI risks I have asked this question: https://ai.stackexchange.com/questions/16054/what-are-the-societal-risks-associated-with-ai – Jimmery Oct 24 '19 at 09:14
  • @Jimmery That question is like asking ‘what are the main risks of uranium?’ The people talking about radiation poisoning from poor handling and the people talking about nuclear armageddon are talking about completely different things. Given this subreddit is about current ML practice, it's unlikely to be a good source of info about AI risk. – Veedrac Oct 24 '19 at 09:58
  • @Veedrac if you want to supply any links to information that would help me out (instead of just being critical) please feel free... also, "subreddit"? you might be confusing your sites here... – Jimmery Oct 24 '19 at 10:13
  • @Jimmery lol whoops, blame /r/MachineLearning ;). I've already recommended Robert Miles' videos for introductory stuff (eg. [Why Would AI Want to do Bad Things? Instrumental Convergence](https://www.youtube.com/watch?v=ZeecOKBus3Q)). A decent book might be [The AI Does Not Hate You](https://www.amazon.com/Rationalists-Artificial-Intelligence-Geeks-World/dp/1474608779/ref=tmm_hrd_swatch_0?_encoding=UTF8&qid=&sr=), per Scott Aaronson's recommendation (having not read it myself). For in-depth stuff, idunno, there's no real tome that covers everything AFAIK. – Veedrac Oct 24 '19 at 11:31

1 Answer


Nothing.

It's in almost everyone's financial favor for things to stay that way. Having non-technical individuals associate AI with Terminators creates the perception that the field has greater capabilities than it does $\rightarrow$ this leads to grants, funding, etc.

Is there any negative? Yes. Misconceptions always have drawbacks. We see the creation of dumb ethics boards and such (cough cough, Elon Musk).

But if history has anything to say about this, then as the field gains popularity (which it is, dangerously quickly), information will spread, and eventually the misconceptions will be laid to rest.

Note that this answer is biased and based upon my own opinions.

mshlis