Professor Walter Sinnott-Armstrong lectures on the ethics of Lethal Autonomous Weapons (LAW). Photo Credit: Duke Government Relations

February 15, 2019

The problem with artificial intelligence is that it is still fairly stupid. It is prone to bias and misunderstands context clues. It misreads street signs and mistakes humans for robots. But humans also succumb to biases and false positives. Humans find patterns where they don’t exist and miss patterns where they do. When it comes to the role of Congress in regulating America’s A.I. ecosystem, human and artificial intelligence must work together to check each other’s weaknesses.

The solution to this imbalance lies in understanding the strengths of both humans and machines, argued three Duke University professors in a briefing for Congressional staff on Feb. 15. Vincent Conitzer, Kimberly J. Jenkins University Professor of New Technologies; Walter Sinnott-Armstrong, Chauncey Stillman Professor of Practical Ethics in the Department of Philosophy and the Kenan Institute for Ethics; and Nita Farahany, Professor of Law and Philosophy, all spoke at the seminar.

Meant to highlight the central ethical questions of A.I. research, the lunchtime event coincided with President Trump’s recent Artificial Intelligence Executive Order (EO). Trump’s EO aims to educate workers in STEM fields, increase access to cloud computing systems, increase access to the data needed to build A.I. systems, and promote cooperation with friendly foreign powers. It did not set aside new funding for these priorities.

Professor Conitzer began the program with a problem: A.I.’s inability to avoid making a choice. He used a Winograd Schema, a sentence with an ambiguous word that can be resolved in two or more ways, to show how humans employ ambiguity. The English language does not assign genders to plural pronouns. So, if Google Translate renders the sentence “The men agreed with the women because they are right” in Spanish, it must translate “they” as either masculine or feminine.

Google Translate often defaults to masculine pronouns, possibly reflecting the data that trained the algorithm. Google’s artificially intelligent translation must make decisions that humans either avoid or resolve with context clues. That algorithmic bias, however, also highlights the trouble humans themselves have in resolving ambiguity, and it raises the question of what that means for the digital future. Seen as a kind of check and balance between human and artificial intelligence, it may be exactly A.I.’s failures that help expose some of our own.
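To see how such a default can arise, consider the deliberately simplified sketch below. It is not Google’s actual system, and the corpus counts are invented; it only illustrates how a model that learns from frequencies will resolve an ambiguity whichever way its training data leans.

```python
from collections import Counter

# Toy illustration only: a frequency-based "translator" must pick a gendered
# Spanish pronoun for the genderless English "they". The counts are invented;
# Google's real system is far more complex.
hypothetical_corpus = ["ellos"] * 800 + ["ellas"] * 200  # masculine vs. feminine "they"

def pick_pronoun(corpus):
    """Return whichever gendered pronoun appeared most often in the training data."""
    counts = Counter(corpus)
    return counts.most_common(1)[0][0]

# The ambiguity in "The men agreed with the women because they are right"
# is resolved not by context but by whatever the data happened to contain.
print(pick_pronoun(hypothetical_corpus))  # -> "ellos" (masculine wins by frequency)
```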

At present, A.I. systems do not embrace ambiguity the way humans can, nor do they understand context clues. In a way, the very precision with which artificially intelligent systems home in on a problem or trend may help humans better understand their own biases and logical fallacies.

A.I.’s future doesn’t only promise checked human bias and progress, however. Some popular literature, such as Oxford University philosophy professor Nick Bostrom’s book Superintelligence, asks what happens when artificial intelligence surpasses general human intelligence. Bostrom predicts a dire future in which A.I. reaches beyond human control and supersedes the wishes of humankind.

When asked about the possibility of a superintelligence-style A.I. takeover, Professor Conitzer redirected the conversation from long-term predictions to near-term threats. He felt the real threat to human liberty comes not from an A.I. acting autonomously, but from a human using A.I. systems to control other humans.

Conitzer noted that A.I. might enable large-scale surveillance and manipulation of societies, including through the use of ‘deep-fakes’ [A.I.-generated fake images and videos]. “It’s become incredibly easy to doctor images and video in realistic ways,” he warned.

The potential of A.I. to create society-wide effects has led some countries to lead from the top with national strategic plans, such as President Trump’s A.I. Executive Order.

In addition to plans, however, several G20 countries have committed significant resources to the research and development of A.I. China and the U.S. still lead in A.I. deployment and R&D, with the U.S. taking a private-sector-led approach and the Chinese a public-sector one.

The investments in federal R&D differ greatly. By fiscal year 2017, the U.S. had spent 2.5 billion USD on federally funded A.I. research. China plans to invest 70 billion USD in A.I. R&D by 2020 and a cumulative 150 billion USD by 2030.

One participant asked Farahany whether the private or the public sector should take the lead in funding A.I. research. “Because A.I. is still in such a nascent phase of its development,” Farahany responded, “and because we as a society are going to increasingly face ethical and legal dilemmas from its use and development, there is an important role for government in the field. They [federal research agencies] have a chance to be at the forefront of and to help spur greater innovation in A.I., and we should make sure they have the resources they need to do so.”

Professor Sinnott-Armstrong also touched on the role of government in providing definitions and direction for the use and deployment of A.I., particularly the ethically fraught use of lethal autonomous weapons systems. The Department of Defense has adopted fairly precise definitions of both artificial intelligence and lethal autonomous weapons, taking a top-down ethical approach rather than a bottom-up one. This intra-agency direction, Sinnott-Armstrong advised, may not work for government agencies that prefer a networked rather than a chain-of-command approach.

Because of their increased precision, lethal autonomous weapons systems may be more moral than traditional human-to-human engagement. Sinnott-Armstrong argued, though, that autonomous weapons systems should not be deployed by the government too quickly, and their performance could be improved by incorporating ethical oversight into the artificial intelligence. “When properly implemented, future autonomous and semi-autonomous weapons might be able to increase effectiveness and deterrence while also reducing mistakes and civilian deaths,” he suggested.

In an ironic twist, the professors noted that the advent of A.I. may require even more human discernment and analysis than is required now, in both military and non-military uses.

Several audience members asked about the ethical dilemmas of predictive policing, which uses large amounts of demographic data to identify crime trends. Predictive policing offers one of the most acute examples of how A.I. can either check human bias or accelerate it.

For example, an A.I. predictive policing system may, for a given scenario, predict an outcome correctly 80 percent of the time. But a high hit rate alone proves little. It is also true that 100 percent of people who drink water will die; the correlation is perfect, yet the relationship between drinking water and death is meaningless. Links can be technically true and conceptually unhelpful at the same time.

Similarly, an A.I. predictive policing system can correlate variables such as race, class and geography and still not understand their causal relationships.
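A quick numerical sketch makes the point concrete. The variables below are entirely hypothetical: a single hidden factor, labeled here as patrol intensity, drives both recorded incidents and arrests, so the two track each other almost perfectly even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: "patrol_intensity" is a hidden confounder. Neighborhoods
# that are patrolled more heavily generate both more recorded incidents and
# more arrests, so the two move together even though neither causes the other.
patrol_intensity = rng.uniform(0, 10, size=1_000)
recorded_incidents = 3 * patrol_intensity + rng.normal(0, 1, size=1_000)
arrests = 2 * patrol_intensity + rng.normal(0, 1, size=1_000)

correlation = np.corrcoef(recorded_incidents, arrests)[0, 1]
print(f"correlation between incidents and arrests: {correlation:.2f}")
# Prints a value near 1.0: a strong pattern, yet not a causal relationship.
```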

One trend resonated through the entire program: patterns alone do not bear objective truth. As each professor noted in their presentation, it is human analysis of a pattern that reveals truth. The problem with artificially intelligent programs is that many of them ‘learn’ from reams of human-generated data. If the data lacks quality, the results will too. Human and artificial intelligence have much to offer each other, but only if they ask the right questions.