
How will morality questions be settled in the domain of self-driving cars?

For example

  1. If a dog is crossing the road, I'd expect the car to try to avoid it. But what if this leads to .00001% more risk for the driver? What is the 'risk cut-off'?

  2. What if a cockroach is crossing the road? Will the car have a list of animals okay to run over?

  3. What if a kid is crossing the street and avoiding it would kill the driver?

These questions seem to not really have an answer, yet self-driving cars are almost ready. What are they doing about all of this?

N00b101
  • The answers in the linked question do a pretty good job of answering this duplicate question. In short, though, a self-driving car will behave exactly the way it is told to. The programmer makes these decisions, not the software. The software will just follow these rules more predictably/reliably than the human would. The car's [fail-safe](https://en.wikipedia.org/wiki/Fail-safe) designs mean that if a self-driving car injures/kills someone, no one else could have possibly done better. – Tezra Oct 20 '17 at 17:51
  • I also answered this question here: https://ai.stackexchange.com/questions/4148/how-would-ai-prioritize-situational-ethics/4172#4172 – hisairnessag3 Oct 21 '17 at 04:31

4 Answers


As far as I know, there is still a huge debate about this topic. I would say that the main rule for every self-driving car is to avoid a crash whenever possible.

The question one should always ask is: in what situation would a crash really happen, and would a human react differently?

My answer is no. The point is that a human might try to avoid the child (case 3), but out of instinct rather than deliberate consideration, and might even harm others in the process.

Since a self-driving car will normally follow the traffic rules, a critical situation will most of the time arise from the other party, not the car. So I believe it is best to protect the driver at all costs.

The dog vs. child problem could be solved via advanced animal recognition (human vs. non-human).

Regarding (2): too small = not human, so the car will ignore it (a sketch of this rule follows below).
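To make the rule concrete, here is a minimal sketch of a "humans first, ignore the tiny, avoid the rest only when safe" policy. Everything in it (the `Obstacle` class, the size threshold, the risk cut-off) is a hypothetical illustration, not anything drawn from a real vehicle stack:

```python
# Toy sketch of the classification-based rule described above.
# All names and thresholds are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Obstacle:
    is_human: bool
    size_m: float          # longest dimension, in metres
    avoidance_risk: float  # estimated extra risk to the occupants, 0.0 to 1.0

MIN_SIZE_M = 0.3        # anything smaller is ignored (e.g. a cockroach)
MAX_DRIVER_RISK = 0.01  # assumed cut-off for swerving around non-human obstacles

def plan(obstacle: Obstacle) -> str:
    if obstacle.is_human:
        return "avoid"          # humans are always avoided
    if obstacle.size_m < MIN_SIZE_M:
        return "ignore"         # too small = not human
    if obstacle.avoidance_risk <= MAX_DRIVER_RISK:
        return "avoid"          # avoiding the animal adds negligible risk
    return "brake_only"         # otherwise protect the occupants first

if __name__ == "__main__":
    print(plan(Obstacle(is_human=True, size_m=1.2, avoidance_risk=0.20)))   # avoid
    print(plan(Obstacle(is_human=False, size_m=0.05, avoidance_risk=0.0)))  # ignore
    print(plan(Obstacle(is_human=False, size_m=0.80, avoidance_risk=0.05))) # brake_only
```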

ortusolis

I don't think these questions will need to be answered.

A self-driving car will almost certainly avoid a situation like the ones described well before a human would have, and hence would not have to choose.

For example, it would slow down as soon as it sees a child close to the road. It will identify and react to the fact that the child starts moving towards the road, and it will act before the situation requires the "drastic" scenarios that we invent.

If for some reason the car does have to choose, it can also make the impact/avoidance have the highest chance of not killing the occupant of the vehicle, considering how well designed cars are these days and that the speeds involved shouldn't be anything wild like 200 mph.

Not to mention that a network of cars, along with street cameras and sensors, would act together as a swarm intelligence, so the car ahead or a traffic camera can warn other cars that something they cannot see is a potential hazard. I can go on and on...

In my opinion, the bottom line is that a self-driving car will not road rage, drive at dangerous speeds in a residential area, get tired and fall asleep, text and drive, or drink and drive, etc. I cannot wait.

Rob

The core issue with this question rests in probability. Specifically:

What if a kid is crossing the street and avoiding it would kill the driver?

How does the AI know for certain that avoiding it would kill the driver?

and certainty rears its head re:

1) If a dog is crossing the road, I'd expect the car to try to avoid it. But what if this leads to .00001% more risk for the driver? What is the 'risk cut-off'?

There would likely be no "hard cutoff". Earlier fuzzy logic systems have been implemented in automotive gear shifting and anti-lock braking, and it was precisely the "fuzziness" that made them effective. Contemporary AI is far more sophisticated, and part of that sophistication rests in what might be thought of as dynamic thresholds for decision-making. Because certainty only exists in special, limited cases (such as solved games), estimation must be used.
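As a purely illustrative sketch (the weights, probabilities, and function names below are assumptions, not taken from any real system), the difference between a hard cut-off and estimation-based decision-making might look like this: instead of checking a fixed risk constant, each candidate maneuver is scored by its expected harm, so the effective "cut-off" shifts with the estimates.

```python
# Minimal sketch, purely illustrative: compare the *expected* harm of each
# candidate maneuver instead of applying a hard risk cut-off.
# The weights and probabilities are hypothetical.
DRIVER_WEIGHT = 1.0    # harm to the human occupant
ANIMAL_WEIGHT = 0.05   # harm to a non-human animal (assumed to weigh far less)

def expected_harm(p_driver_injury: float, p_animal_harm: float) -> float:
    """Weighted sum of the estimated probabilities of each bad outcome."""
    return DRIVER_WEIGHT * p_driver_injury + ANIMAL_WEIGHT * p_animal_harm

def choose_maneuver(estimates: dict) -> str:
    """Pick the maneuver with the lowest expected harm.

    estimates maps a maneuver name to (P(driver injury), P(animal harm)).
    """
    return min(estimates, key=lambda m: expected_harm(*estimates[m]))

if __name__ == "__main__":
    # A dog in the road: swerving adds a tiny extra risk to the driver.
    scenario = {
        "swerve": (1e-7, 0.05),     # tiny driver risk, dog is probably safe
        "brake_only": (0.0, 0.60),  # no driver risk, dog is probably hit
    }
    # Expected harms: swerve ~0.0025 vs. brake_only 0.03, so "swerve" wins here.
    print(choose_maneuver(scenario))
```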


Regarding the cockroach, it would likely be too small to warrant a response, although a swarm of cicadas might affect the car's sensing ability and prompt poor-visibility navigational protocols. In general, I'm sure pet-sized animals and bigger would be avoided: in the case of actual pets for humanitarian reasons, and for animals like deer, for reasons of driver risk (impaled by the antlers at the worst, and at the least potentially costly damage to the vehicle).

But I suspect the protocol for this would be braking, or swerving if there is a clear margin on either side of the animal (i.e. not a barrier, wall, or cliff) and the direction change is controllable (i.e. hitting the animal is likely to result in less harm than an actual crash, and certainly less risk to the human, except in the case of the deer's antlers). A sketch of that protocol follows.
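Here is a minimal sketch of that braking-vs-swerving protocol; the fields and thresholds are hypothetical, it simply restates the reasoning above in code:

```python
# Minimal sketch of the braking-vs-swerving protocol described above.
# All fields and logic are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Situation:
    clear_margin_left: bool    # no barrier, wall, or cliff on the left
    clear_margin_right: bool   # no barrier, wall, or cliff on the right
    swerve_controllable: bool  # the direction change stays within stable handling limits

def animal_response(s: Situation) -> str:
    """Swerve only when there is room and the maneuver is controllable; otherwise brake."""
    if s.swerve_controllable and (s.clear_margin_left or s.clear_margin_right):
        return "swerve_" + ("left" if s.clear_margin_left else "right")
    return "brake"

if __name__ == "__main__":
    print(animal_response(Situation(True, False, True)))   # swerve_left
    print(animal_response(Situation(False, False, True)))  # brake
```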

DukeZhou

As others have said, your question is, and will continue to be, a hot topic. I also agree that eventually self-driving cars will be able to handle your hypothetical situations better than many human drivers. I am not prepared to say when that "eventually" will eventuate.

However, I can also imagine some human drivers deliberately trying to cause self-driven cars to make poor decisions. For example, a "team" of three or more cars could easily confound a self-driven car's programming by co-ordinating their actions, especially once the actual program code used by the car is known. I'm thinking of situations where the self-driven car is boxed in by human-driven cars which indicate they are about to make a move and then do not, while others change speed and direction at the same time, or at slightly different times. Humans can be incredibly sneaky and unethical, and some are very good at finding exploits and weaknesses.

  • Hey, welcome to Artificial Intelligence Stack Exchange! On this site, especially since we have some measure of subjectivity in most questions here, we prefer if answers have some source or something to back them up. If you could [edit] your answer to include such a source that would be great. Thanks! – Mithical Oct 15 '17 at 09:04
  • Thanks for the clarification. The other answers all included opinions with no supporting source so I responded in like manner. – Lysistrata Oct 16 '17 at 05:20
  • I disagree; not to mention, what would a human do in the same situation? So what does your scenario prove? The biggest threat is that the self-driving cars are hacked to perform carnage in a swarm-like intelligent manner. NOW THAT is scary, because it would be staggeringly effective. – Rob Oct 20 '17 at 17:19
  • @Rob - I agree that a human would have greater difficulty. But self-drive cars will be a target for that kind of behaviour initially for a variety of reasons, e.g. malice, fun, etc. Hacking is also going to be a huge problem, no doubt about that at all IMO. – Lysistrata Oct 21 '17 at 20:57
  • @Lysistrata I see hacking as the #1 problem. You can be sure that the simple questions stated above will have use cases and tests to make sure that in normal operating conditions they will resolve appropriately. Anyone who has ever worked on class A systems (that is, systems where someone dies if they fail) will tell you this line of questioning is a non-issue; we know how to make systems that "fail safe". A hacked vehicle, on the other hand, will not follow spec/design and can be made to do bad things, and a swarm of them can do catastrophic-level bad things. – Rob Oct 24 '17 at 12:01