As part of a research project for college, I would like to understand what many of you consider to be the risks associated with regulating Artificial Intelligence: for example, whether regulation is too risky in terms of limiting progress, or too risky in terms of being uninformed.
- I took the link to the survey out of the main text of the question, but I think it would be ok to post in the comments: https://docs.google.com/forms/d/e/1FAIpQLSdikvY9IaDb0s-9tC2VdyDgIrxsTqumUbRhv6prgTzR1QuGwQ/viewform – DukeZhou Oct 21 '19 at 21:21
- Here is a link to a survey I put together, also as part of the project. If you have the time to look at it, please do! Thank you! :) https://docs.google.com/forms/d/1eA36k-NyzPLnFIjnf7JL88Ba57WLE4dX7QVC2grCcXE/edit – JsAdam Oct 21 '19 at 22:25
4 Answers
I don't think regulating something necessarily causes that regulation to de facto become a "risk".
Regulation - including overregulation - may, in fact, aid in the dialogue between practitioners, which may end up educating the regulators, the public and the practitioners themselves.
My answers to your survey would most likely be "it depends..." or "no risk", which isn't to say regulation can't be an impediment, just that it isn't a "risk" per se.

Risks of regulation?
As you mention in your survey, it is generally understood that the primary concern with regulating AI research is that regulated parties risk falling behind unregulated ones.
Should we regulate it? Can it be done?
You can't really "regulate" technological development in the same way you can regulate other things. Aside from the fact that there is no global governance that can impose such regulation on nations, you can't regulate someone's research any more than you can control how people think: all you need is a pen, paper, or computer to do research in math/AI.
The NSA tried to regulate encryption citing national security reasons during a saga known as the Crypto Wars. They failed.
What is AI anyways? How will we get there? What will it be like?
Honestly, from the phrasing of the questions in your survey, I get the impression that you don't really understand the hypothetical existential risk posed by AI. Personally, I don't really buy into that thesis, but in any case, if such a super-intelligent agent emerges, the problem isn't so much "oh no, my city is destroyed" or "oh no, so many people are killed", but rather "all of humanity is enslaved without being aware of it" or "everything is dead". We think this might happen because we assume AI is all-powerful and we project our own negative qualities onto this unknown agent with unknown power. It's mostly fear, really.
This is all speculation, and by definition you cannot predict the behavior of an agent smarter than you, so literally every comment on this topic is unfounded speculation. The only thing that is true is that we don't know.
There is another aspect of AI that is dangerous, which concerns how humans use it: e.g., facial recognition, automated weapon systems, automated hacking. These are more pressing issues.
What should we do? We are forced to research AI because no party can afford to fall behind, but at the same time we are pushing ourselves toward a dangerous future: it's a catch-22.
Consensus and current practice suggest that researchers publicize their results. Compared to other areas of academia, whose research is often locked behind paywalls, ML/AI research is quite publicly accessible. Of course, this doesn't prevent the possibility of a rogue agent.

I think there is a very strong argument for regulating AI. Chiefly: unintentional (or intentional) bias in statistically driven algorithms, and the idea that responsibility can be offloaded to processes that cannot be meaningfully punished when they transgress. Additionally, the history of technology, especially since the industrial revolution, strongly validates neo-luddism in the sense that the problems arising from the implementation of new technology are not always predictable.
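To make the bias point concrete, here is a minimal sketch (my own illustration, not part of the original answer) of one common way such bias is quantified: the demographic parity gap, i.e. the difference in positive-prediction rates between two groups. The predictions and group labels are made up for illustration.

```python
# Hypothetical model decisions (1 = favorable outcome) and group membership.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups      = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def positive_rate(group):
    """Fraction of favorable decisions for one group."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

# A gap far from 0 suggests the model treats the groups differently.
gap = positive_rate("a") - positive_rate("b")
print(f"demographic parity gap: {gap:+.2f}")  # -> +0.20
```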
In this sense, there are both ethical reasons to consider regulation and minimax reasons (here in the sense of erring on the side of caution to minimize the maximum potential downside).
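As a toy illustration of that minimax rule (again my own sketch; the policies and payoff numbers below are entirely hypothetical), the decision picks whichever option has the least-bad worst case:

```python
# Hypothetical utilities for (policy, scenario) pairs: higher is better.
outcomes = {
    "no_regulation":     {"tech_goes_well": 10, "tech_goes_badly": -100},
    "light_regulation":  {"tech_goes_well": 8,  "tech_goes_badly": -20},
    "strict_regulation": {"tech_goes_well": 3,  "tech_goes_badly": -5},
}

# Minimax (maximin utility): choose the policy whose worst case is least bad.
best_policy = max(outcomes, key=lambda p: min(outcomes[p].values()))
print(best_policy)  # -> "strict_regulation"
```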
- Risk of falling behind
A risk is that not all participants will hew to the regulations, giving those who don't a significant advantage. But that, in and of itself, is not a justification to forgo sensible regulation, in that penalties at least serve as a potential deterrent.
- Opportunity cost
Not a risk so much as a driver: the idea of "leaving money on the table", in that not implementing a given technology forgoes greater utility and sacrifices potential benefit.
This is not invalid, but it shouldn't ignore hidden costs. For instance, the wide-scale deployment of even primitive bots has had a profound social impact.

My thoughts
AI is already indirectly regulated. This is important to acknowledge, and this acknowledgement is missing, in my opinion, from the discourse about law and AI.
I'm assuming that your question is about law that directly targets AI technologies, and this exemplifies one of the risks of regulating AI: that the law will focus on the technology rather than on outcomes.
Another concern is that law that is inadequate, or that quickly becomes outdated, creates a false sense of security, and this could create a situation that is even more dangerous than if the laws were not there at all.
Law and innovation
When it comes to the view that law stifles innovation, it is paramount to acknowledge that some regulation can have a very positive effect. There is no general rule that there is an inverse relation between law and innovation.
Pacing problem and Collingridge dilemma
The following is basically what Wendell Wallach says in an episode of the Future of Life Institute's AI Alignment Podcast entitled Machine Ethics and AI Governance with Wendell Wallach.
The pacing problem refers to the fact that scientific discovery and technological innovation are far outpacing our ability to put in place appropriate ethical and legal oversight.
Wendell Wallach goes on to say that the pacing problem converges with what is now called the Collingridge dilemma, a problem that has 'bedevilled' people in technology and governance since 1980, and he defines it the following way:
While it is easiest to regulate a technology early in its development, early in its development we have little idea of what its societal impact will be. By the time we do understand the challenges and the societal impact, the technology would be so deeply entrenched in our society that it would be very difficult to change its trajectory.
See also:
- Collingridge dilemma on Wikipedia; and
- The Social Control of Technology by David Collingridge, published 1980 by Frances Pinter.
