Ever since HAL refused to open the pod bay doors for Dave, we’ve been worried about the bad things that smart computers might do to their human creators.
HAL: I’m sorry, Dave. I’m afraid I can’t do that.
Dave Bowman: What’s the problem?
HAL: I think you know what the problem is just as well as I do.
Dave Bowman: What are you talking about, HAL?
HAL: This mission is too important for me to allow you to jeopardize it.
Not to mention worries about jobs lost to machines, or smart cars choosing which lives to save in a crash. Just as the Internet has its governing bodies, artificial intelligence has become an area of concern that will require collaboration and cooperation among global interests and multiple stakeholders. When the potential of artificial intelligence was still an invention of science fiction writers, the concern lived only in our imaginations. As AI has burgeoned in recent years, so has consideration of how it can both be used to its full potential and be controlled and directed by its human users.
Five major tech companies—Alphabet (Google’s parent company), Facebook, Amazon, IBM and Microsoft—plan to create an industry partnership to set a standard of ethics for the AI industry, amid concern that the regulatory capabilities of governments are both too slow to respond and too hobbled by the multitude of interests that shape legislation. The importance of the discussion is underscored by a report recently released by Stanford University, the first in what it is calling “AI100,” or “The One Hundred Year Study on Artificial Intelligence,” which will monitor and examine trends and events in the AI field at five-year intervals going forward.
This initial report raises concerns about the regulation of AI:
“The study panel’s consensus is that attempts to regulate A.I. in general would be misguided, since there is no clear definition of A.I. (it isn’t any one thing), and the risks and considerations are very different in different domains,”
But the report is optimistic about the potential for good, citing strong examples of effective applications in cities. It identifies issues that citizens of a typical North American city will face as computers and robotic systems that mimic human capabilities enter health care, education, entertainment and employment. For now, the panel consciously chose not to examine warfare.
The industry group is modeled on the Global Network Initiative, a multi-stakeholder group formed to address issues of privacy and to protect freedom of expression in the Information and Communications Technology (ICT) sector. The initiative is particularly concerned with the potential for government intervention in communication technologies that could conflict with human rights, freedom of expression and privacy. Part of the industry group’s goal is linking technology to social and economic policy issues, or putting “society in the loop”—a play on the long-running “human in the loop” debate about designing computer and robotic systems that still require human interaction.
While concerns about cars that may cause accidents and robots that may take jobs are important, the Stanford researchers and the industry teams are optimistic about the potential of the technology and the effectiveness of multi-stakeholder oversight.
We need not worry about AI as an imminent threat to humankind: there has been no significant development of machines with self-sustaining long-term goals and intent.