The first Swiss Global Digital Summit was held this week in Geneva, drawing some of the biggest figures from the tech and business worlds. The goal of the summit was to establish a set of ethical guidelines to direct technological development, specifically where it concerns data usage and the rapid rise of artificial intelligence.
The representatives at the summit included the heads of Credit Suisse, UBS and Adecco, and high-level executives from Facebook, Google, Huawei and IBM. Following this inaugural session, the members will meet again at a signing ceremony during the World Economic Forum in Davos next January.
At Davos the Swiss Digital Initiative (SDI) will be launched in earnest, and the first order of business will be the unveiling of a number of projects aimed at fitting the rapid expansion of digital activity and artificial intelligence into an agreed-upon ethical framework. For now, the details of these projects remain unclear, though the Swiss version of thelocal.ch reported that future projects could be connected to the establishment of what they are calling a digital “transparency label.”
Microsoft president Brad Smith spoke at the summit and stressed that advancing an ethical framework needed to be done with a “sense of urgency.” Smith called for technology to be “guided by values, and that those values be translated into principles and that those principles be pursued by concrete steps."
"We are the first generation of people who have the power to build machines with the capability to make decisions that have in the past only been made by people," he told those gathered at the summit.
Smith added that he believes a level of transparency must be established moving forward “to ensure that the people who create technology, including at companies like the one I work for, remain accountable to the public at large."
The Bigger Perspective
There are two criticisms that come to mind immediately regarding this news. First, this is something that should have been started a long time ago. We are at a stage where AI is being groomed to take over fundamental roles in various parts of life, including roles that are ethically fraught.
As for data, we long ago descended into what can only be called a personal data crisis. It seems like every week a new transgression comes to light wherein it is revealed that the people who assured us they weren’t “being evil” — or at least weren’t exploiting the power and information they had — were in actuality doing just that.
It is baffling that after Snowden and Cambridge Analytica and the thousand other smaller examples of governments and big tech companies encroaching on citizen and user privacy, attempts at fighting back against such abuses are still portrayed as radical and dangerous. Last week I wrote about how government officials keep bringing up 9/11 with regard to the danger that cryptocurrency, and specifically private cryptocurrency, presents. This kind of hysteria is something we are very familiar with at Bytecoin. Ironically, all the 9/11 talk was cooked up in the aftermath of the Libra announcement, in which Facebook proposed what would amount to the most traceable currency that has ever existed. It will be interesting to see how the media and government react to David Chaum’s new Praxxis project once it gets off the ground.
Secondly, are these people really the ones who should be establishing ethical frameworks? Facebook, Google, Microsoft, Huawei and IBM? These are the same entities causing the problems we are all currently dealing with. Microsoft is currently bidding for a $10 billion Pentagon contract to develop the cloud infrastructure for the US military in addition to the work they are already doing developing military AI. And yet Brad Smith, Microsoft president, was chosen to deliver the keynote address at the summit.
If there is going to be a solid framework for both AI and personal data, it has to be comprehensive, and it has to be established by people who are not beholden to organizations and companies whose interests, monetary or otherwise, lie in exploiting ethical grey areas or acting in ways that are ethically reprehensible.
The Ethics of AI
In 1942 biochemist and author Isaac Asimov introduced The Three Laws of Robotics in a short story called “Runaround.” Asimov had developed the laws at the behest of his editor, but later said that he should not be praised for creating them because they are “obvious from the start, and everyone is aware of them subliminally.” The laws are as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Though Asimov’s Laws were put forth in a fictional context, they have gone on to shape the discourse on the ethics of AI.
Traditionally, the ethical considerations concerning artificial intelligence have been divided into two branches: roboethics and machine ethics. Roboethics deals with the moral responsibilities humans have when they develop artificially intelligent beings, and machine ethics deals with the moral responsibility of robots capable of making decisions.
As it stands now, machine ethics is not as pressing as roboethics. In the past, a potential line was drawn that would have prevented AI technology from replacing people in positions that require respect and care, such as those of a judge or a soldier. But now the introduction of AI into soldiering seems to be just a matter of time.
With the United States and Russia devoting resources to developing autonomous drone weapons, the AI arms race seems to be on. The danger that intelligent weaponry poses is the stuff of science fiction nightmares, and it has prompted people like Stephen Hawking, Max Tegmark and Noam Chomsky to lead efforts to halt its development.
Something miraculous would have to happen for the tide to be turned back. The Initiative is a good idea in theory, but its scope needs to be broadened to reflect the gravity of the problem.
Asimov said that whenever someone asked him whether the Three Laws would actually be used to govern the behavior of robots, he would always give the same answer.
He would say, “Yes, the Three Laws are the only way in which rational human beings can deal with robots—or with anything else,” but he would always remember “sadly that human beings are not always rational.”