
I was discussing ethics with ChatGPT and asked "can Law 0 solve this dilemma?", expecting an answer related to Asimov's laws of robotics. But I got this instead:

"Law 0" is not a widely recognized concept in the field of AI ethics. However, some experts have proposed additional ethical principles or "laws" to complement or expand upon the Three Laws of Robotics described by Isaac Asimov.

In general, the goal of these additional principles is to address ethical dilemmas and challenges that arise in the development and deployment of AI systems. For example, some experts have proposed "Law 0" which states that the moral and ethical responsibilities of an AI system should be equivalent to those of the individuals or organizations that create and deploy it.

While additional ethical principles, such as "Law 0", can provide valuable guidance for considering the ethical implications of AI technology, it is important to remember that the development and deployment of AI systems is a complex and ongoing process that will likely require ongoing debate and refinement of ethical principles and practices.

Ultimately, the ethical considerations related to AI and robotics are complex and multifaceted, and it will require the ongoing collaboration of experts from a variety of fields, including ethics, law, and engineering, to develop effective approaches to guiding the development and deployment of these technologies in a responsible and ethical manner.

I asked ChatGPT for an author, but it didn't know. I googled as well but didn't find any answer. Does this law perhaps exist under another name? Or is it an AI hallucination?

ker2x
  • I've never heard of this law. The zeroth law is not that one, but [`A robot may not harm humanity, or, by inaction, allow humanity to come to harm.`](https://en.wikipedia.org/wiki/Three_Laws_of_Robotics). So, either ChatGPT is referring to some other law (unlikely) or it's wrong (likely). We should never blindly trust these statistical AIs when it comes to facts and logical reasoning. In fact, the text that you quoted, which I assume was generated by ChatGPT, is very convincing. – nbro Feb 06 '23 at 11:47
  • Anyone who believes in anything without checking it in another source (many people) would be fooled. That's why ChatGPT is dangerous. ChatGPT is a statistical parrot. People need to understand this. – nbro Feb 06 '23 at 11:50
  • It is very convincing indeed; that's the scary part of ChatGPT. It can hallucinate with high precision :) – ker2x Feb 06 '23 at 13:51
  • I believe we should not spend much of our time using language with a thing that does not use language as we do, meaning what we say. We have assumptions about talking entities which do not hold at all in this case. – Jaume Oliver Lafont Feb 06 '23 at 15:22
  • Besides, a machine can produce text at a faster pace than humanity can check it. We cannot afford to have that among us. – Jaume Oliver Lafont Feb 06 '23 at 15:41
  • I just asked Google Bard and it used this post as a source ... – ker2x Jul 20 '23 at 12:17

0 Answers