Questions tagged [asimovs-laws]

For questions about Asimov's Laws as they relate to real-world AI. To ask about something in Asimov's books themselves, see https://scifi.stackexchange.com or https://literature.stackexchange.com/ instead.

Isaac Asimov formulated the Three Laws of Robotics, which the robots in his Robot series are bound to obey.

The three laws are:

1. A robot must not harm a human being, or through inaction allow a human being to come to harm.

2. A robot must obey orders given by a human, as long as doing so does not conflict with the First Law.

3. A robot must protect its own existence, as long as doing so does not conflict with the First or Second Law.
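The precedence among the laws is strict and lexicographic: the First Law dominates the Second, which dominates the Third. As a minimal sketch only (not a real safety mechanism), that ordering can be expressed as tuple comparison over candidate actions; all names here are illustrative assumptions, and labeling which laws an action violates is itself the hard part, as several of the questions below discuss:

```python
# Toy illustration: Asimov's Three Laws as a strict priority ordering.
# Deciding these boolean labels is the actual unsolved problem; this
# only shows the precedence structure.
from dataclasses import dataclass


@dataclass
class Action:
    name: str
    harms_human: bool      # would violate the First Law
    disobeys_order: bool   # would violate the Second Law
    endangers_self: bool   # would violate the Third Law


def law_violations(action: Action) -> tuple[bool, bool, bool]:
    """Violations in priority order; lexicographically smaller is better."""
    return (action.harms_human, action.disobeys_order, action.endangers_self)


def choose_action(candidates: list[Action]) -> Action:
    # Tuple comparison encodes the hierarchy: any First Law violation
    # outweighs every combination of lower-law violations.
    return min(candidates, key=law_violations)


options = [
    Action("ignore the human in danger", harms_human=True,
           disobeys_order=False, endangers_self=False),
    Action("shield the human, damaging self", harms_human=False,
           disobeys_order=True, endangers_self=True),
]
print(choose_action(options).name)  # prints "shield the human, damaging self"
```

The lexicographic comparison means no amount of Second- or Third-Law compliance can offset a First-Law violation, which matches how the laws are presented in the stories.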

6 questions
20
votes
3 answers

Are Asimov's Laws flawed by design, or are they feasible in practice?

Isaac Asimov's famous Three Laws of Robotics originated in the context of Asimov's science fiction stories. In those stories, the three laws serve as a safety measure, in order to prevent untimely or manipulated situations from exploding in…
3442
  • 768
  • 4
  • 11
6
votes
4 answers

Is it possible to implement Asimov's Three Laws of Robotics?

Would it be possible to put Asimov's three Laws of Robotics into an AI? The three laws are: A robot (or, more accurately, an AI) cannot harm a human being, or through inaction allow a human being to be harmed. A robot must listen to instructions…
Mithical
  • 2,885
  • 5
  • 27
  • 39
5
votes
1 answer

Why is AI safety so much harder than Isaac Asimov's "Three Laws of Robotics"?

I understand that AI researchers are trying to create AI designs that allow for desired behavior without undesirable side-effects. A classic example of an attempt is Isaac Asimov's Three Laws of Robotics. This idea seems to have been debunked due to…
N00b101
  • 191
  • 1
  • 5
0
votes
1 answer

How does a robot protect its own existence?

What are the many ways that artificial intelligence robots protect their existence? Isaac Asimov's "Three Laws of Robotics" A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders…
jehovahsays
  • 111
  • 3
0
votes
0 answers

Is Law Zero of AI ethics actually a thing?

I was discussing ethics with ChatGPT and asked, "Can Law 0 solve this dilemma?", expecting an answer related to Asimov's laws of robotics. But I got this instead: "Law 0" is not a widely recognized concept in the field of AI ethics. However, some…
ker2x
  • 163
  • 7
-1
votes
1 answer

Would empathy in AI be a reliable tool/capacity, or contribute to a solution to avoid harm done to humans or to other versions of AI?

TL;DR Would providing AI with the capability of experiencing something as close as possible to subjective human experience, and thereby acquiring empathy, be a solution, or contribute to a solution, that seeks to prevent AI from causing…