
I have trouble finding material (blog, papers) about this issue, so I'm posting here.

Taking a recent well-known example: Musk has tweeted and warned about the potential dangers of AI, saying it is "potentially more dangerous than nukes", referring to the issue of creating a superintelligence whose goals are not aligned with ours. This is often illustrated with the paperclip maximiser thought experiment. Let's call this first concern "AI alignment".

By contrast, in a recent podcast, his concerns seemed more related to getting politicians and decision makers to acknowledge and cooperate on the issue, to avoid potentially dangerous scenarios like an AI arms race. In a paper co-authored by Nick Bostrom, Racing to the Precipice: a Model of Artificial Intelligence Development, the authors argue that developing AGI in a competitive situation incentivises us to skimp on safety precautions, which makes it dangerous. Let's call this second concern "AI governance".

My question is about the relative importance of these two issues: AI alignment and AI governance.

It seems that most institutions trying to prevent such risks (MIRI, FHI, FLI, OpenAI, DeepMind and others) simply state their mission without arguing why one approach should be more pressing than the other.

How can one assess the relative importance of these two issues? And can you point me to any literature about this?

Douglas Daseeco
    Welcome to AI! Informally, most of the scholars I've been reading seem to suggest that the social impacts strongly outweigh the technical issues at present. Specifically, we know that automation is going to replace many, many jobs in the near future, and we know that strong forms of AI have more and more control over our behaviors (social media engagement strategies, for example). By contrast, apocalyptic scenarios involving AI are still hypothetical. (Doesn't mean the technical issue is not important, just less immediate.) – DukeZhou Sep 24 '18 at 21:04

1 Answer


Neither AI alignment nor AI governance is important yet. We are so far away from AGI that we don't even know what is missing.

We don't set up safety instructions for interstellar travel, so why should we do it for AGI? I can come up with a lot of dangers of that, too...

There are real dangers of AI, though, including societal issues:

  1. Blind trust: Trusting a machine even though it is not perfect, just because it is good in most cases and seems rational/objective. Example: Machine Bias
  2. Unemployment: AI has the potential to replace a lot of low-skill jobs with very few highly skilled jobs. For example, self-driving cars (< 100 people for one manufacturer, I guess) could replace all jobs in transportation (1,076,200 of 31,373,700 in Germany (source) - that is 3.4%!). See the CGP Grey video.
  3. Weapon Systems: The threat of lethal autonomous weapons is real. There is the danger of efficient slaughterbots and the danger of errors - similar to the Wech Baghtu wedding party airstrike or that one.

There are a couple of other problems, but I think they are less severe:

  • Better fakes: lyrebird.ai gives a good demo
  • Spam: I think it's easier to improve the spam filters, but humans might have a harder time
  • Data is dominance: Companies like Amazon will have a dominant position in the market as they were the first to acquire important customer information
  • Bubbles / false hopes: AI (or rather, machine learning) is often seen as a golden hammer. It is not.

Martin Thoma
    I don't think you can say `Neither AI alignment nor AI governance are important yet`. Here: https://youtu.be/yQE9KAbFhNY?t=287, Robert Miles makes the point that most experts in the field take those concerns seriously. Source: https://arxiv.org/pdf/1705.08807.pdf at page 13. – Benjamin Crouzier Apr 11 '19 at 13:25