5

Are there any real-world examples of unintentional "bad" AI behaviour? I'm not looking for hypothetical arguments about malicious AI (AI-in-a-box, the paperclip maximizer), but for actual instances in history where an AI's direct actions caused harm that was not intended behaviour.

Interpretations of "AI", "bad", and "unintentional" are left open for obvious reasons. Be free with your interpretation, but please use some common sense.

Ex: Microsoft Tay became racist not long after being hooked up to the internet, thanks to internet trolls "teaching" her bad things.

I can't think of any other instances, so the following examples are just hypothetical scenarios to demonstrate what I mean.

Ex: A self-driving car drove off-track after being presented with an adversarial example, crashing into people.

Ex: A surgery bot goes haywire and accidentally kills someone.

Ex: A weaponized drone targets civilians, contrary to its design.

k.c. sayz 'k.c sayz'
  • Define **bad**. – Rahul Aug 13 '17 at 18:37
  • question edited to address your problem. – k.c. sayz 'k.c sayz' Aug 13 '17 at 19:03
  • And that makes it even more broad. Does "bad" means unemployment or AI enslaving humans ? What exactly is it ? – Rahul Aug 13 '17 at 19:05
  • I dunno. The examples you provide suggest you have an idea of what to expect, so do you have any examples of adversarial AI behaviour in the wild under those interpretations? "Be free with your interpretation, but use common sense please." – k.c. sayz 'k.c sayz' Aug 13 '17 at 19:20
  • By "in the wild", you mean fully independent automata, not directed by humans? (i.e. partisan, adversarial investment algorithms are not "in the wild") I think the use of the idiom is good, but just wanted to make sure I'm understanding your meaning. – DukeZhou Aug 14 '17 at 22:36
  • question edited to address your problem. more specifically addressed to you: i'm just looking for instances that have "actually happened before", as opposed to being hypothetical (AI in a box) – k.c. sayz 'k.c sayz' Aug 15 '17 at 14:21
  • I believe any example of an overturned [AI sentence in court](https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/) would count. I don't know of any - but I'm quite scared to think how many innocent people may be in prison, because the AI judged based on ethnicity, social background, education level and other "risk assessment" factors, while downplaying the facts of that specific case. Knowing such algorithms, if 30 statistical factors say "he's the type of person that would do it" and 1 individual fact says "he didn't do it - he wasn't there", the AI will say "guilty." – SF. Aug 17 '17 at 12:10
  • the algorithm in question doesn't sentence or give verdicts in court. it provides a metric for risk assessment, which judges then use when sentencing people. although strictly speaking this example fits the criteria above, it's not really what i'm looking for. perhaps you also think this way even just a little, from the observation that you chose to comment here rather than post a full-fledged answer. – k.c. sayz 'k.c sayz' Aug 17 '17 at 17:57

2 Answers

5

There is the case of the Tesla accident where the car was on Autopilot and crashed into a truck, apparently because the vehicle mistook the lightly coloured truck for the sky, killing the driver: https://www.newscientist.com/article/2095740-tesla-driver-dies-in-first-fatal-autonomous-car-crash-in-us/

Having said that, it appears the car had repeatedly warned the driver to pay attention: http://uk.reuters.com/article/us-tesla-crash-idUKKBN19A2XC

redhqs
4

The infamous Flash Crash of 2010 may qualify.

It didn't involve Artificial General Intelligence (which is still hypothetical) or even "strong narrow AI" (such as AlphaGo), but it does involve algorithmic decision-making, which is a basic form of Artificial Intelligence.

Algorithmic trading already accounts for a significant percentage of all market activity, and I suspect that percentage will only increase.

From Business Insider: Algos could trigger the next stock market crash

DukeZhou