
Earlier this month, Google released a set of principles governing their AI development initiatives. The stated principles are:

Objectives for AI Applications:

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles.

AI Applications not to be Pursued:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.
    SOURCE: Artificial Intelligence at Google: Our Principles

My questions are:

  • Are these guidelines sufficient?
  • Are there any "I, Robot" conflicts?
  • How much does this matter if other corporations and state agencies don't hew to similar guidelines?

1 Answer


Artificial Intelligence at Google — Our Principles

Objectives for AI Applications

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles.

AI Applications We Will Not Pursue

  • Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  • Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  • Technologies that gather or use information for surveillance violating internationally accepted norms.
  • Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Questions

Are these guidelines sufficient?

They are not defined precisely enough to serve as policy, not even in the paragraphs following each one, but they are presented as a vision composed of principles, which should not be expected to be as well defined as policy. Here is at least one caveat for each objective.

  1. Half the world would have been too poor to communicate globally had Linus Torvalds not somewhat antisocially insisted on certain things.
  2. Distributing money equally among homeless heroin addicts and entrepreneurs is considered fair by many, yet it is merely equal and perhaps monumentally unfair.
  3. National defense may be considered a matter of safety, but someone who survived Hiroshima or a Syrian refugee may find that definition tragic.
  4. Google cannot be fully accountable to people without revealing its confidential ranking methods, at which point they would be exploited and would have to be changed.
  5. Incorporating privacy design principles falls far short of not storing the data at all, so that it cannot be subpoenaed.
  6. The concepts of the singularity, solar panel sustainability, the climate crisis, the ample availability of fossil fuels within U.S. borders, and cell communications technology are all considered scientific excellence by the general public. That these lack the primary signs of rigor in mathematics, statistics, economics, and engineering does not occur to most people, so on what basis will Google judge? Will they seek corroboration between theoretical models and earlier ones, along with empirical validation, for every fact?
  7. Being made available, if Google is to remain a solvent corporation, will depend on the financial condition of the customer, which conflicts with fairness (#2), and no I, Robot precedence rules are established.

That segues into the next question. The non-pursuit items are skipped here, but they could be treated similarly.


Are there any "I, Robot" conflicts?

Asimov's Three Laws worked well for the robotic character R. Daneel Olivaw in the Foundation science fiction novel series. Daneel simulated human emotion, transcended human selfishness, and maintained the three laws. In the film adaptation of I, Robot, the three laws did not work out as well: VIKI decided that human freedom was in conflict with her reading of the laws.

Because the eleven dos and don'ts are not codified in precise policies, any two of them can come into conflict: that is (11 choose 2) = 55 potential pairwise conflicts, and far more once interactions among three or more items are considered. There is at least one that is already clear, mentioned above.
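
As a rough illustration, here is a minimal sketch (Python, treating each principle as an opaque label and counting only pairwise interactions) of where that count comes from:

    from itertools import combinations

    # Eleven principles in total: seven objectives plus four prohibitions.
    objectives = [f"objective {i}" for i in range(1, 8)]
    prohibitions = [f"prohibition {i}" for i in range(1, 5)]
    principles = objectives + prohibitions

    # Without precise policies, any unordered pair of principles is a
    # potential point of conflict.
    pairs = list(combinations(principles, 2))
    print(len(pairs))  # 55, i.e. C(11, 2) = 11 * 10 / 2

Counting interactions among three or more principles at once pushes the total far higher; the number of subsets of eleven items with at least two members is 2^11 - 1 - 11 = 2036.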


How much does this matter if other corporations and state agencies don't hew to similar guidelines?

In a world where information may have become more powerful than money, perhaps it matters quite a bit when an information giant takes a position.

Wikipedia may be the most questionable case, in that through its web service the evolution of language may have been democratized to a self-defeating extreme. One can create a word definition without peer review that gains more public authority than the definition in rigorously prepared and cross-correlated encyclopedias and dictionaries. Even in this artificial intelligence forum, where the education level is high, people define tags with Wikipedia links. I am tempted to endeavor to remove them all from here. :)

Douglas Daseeco