The latest news and resources from Duke Government Relations.


The DC Digest - November 23, 2021

  • Lawmakers are in Recess, With a Long To-Do List Upon Their Return
  • President Biden Announces Kusnezov to Lead DHS Science & Technology Directorate
  • Higher Education Groups Request Clear Guidance from Department of Education on Resuming Student Loan Repayment

The Need for Transparency and Interpretability at the Intersection of AI and Criminal Justice

“No human can calculate patterns from large databases in their head. If we want humans to make data-driven decisions, machine learning can help with that,” said Cynthia Rudin, describing the opportunities that artificial intelligence (AI) presents for a wide range of issues, including criminal justice.

On November 15th, Rudin, Duke professor of computer science and recipient of the 2021 Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity, joined her colleague Brandon Garrett, the L. Neil Williams, Jr. Professor of Law and director of the Wilson Center for Science and Justice, for “The Equitable, the Ethical and the Technical: Artificial Intelligence’s Role in The U.S. Criminal Justice System.” The panel was moderated by Nita Farahany, the Robinson O. Everett Professor of Law and founding director of Duke Science & Society. The event drew attendees from numerous House and Senate congressional offices as well as the Departments of Transportation and Justice, the National Institutes of Health (NIH), the American Association for the Advancement of Science (AAAS) and the Duke community.

Rudin opened the conversation with a simple definition: “AI is when machines perform tasks that are typically something that a human would perform.” She also described machine learning as a type of “pattern-mining, where an algorithm is looking for patterns in data that can be useful.” For instance, an algorithm can analyze an individual’s criminal history for patterns that help predict whether that person is likely to commit a crime in the future.
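To make the pattern-mining idea concrete, here is a minimal sketch of the kind of risk prediction Rudin describes. Everything in it is hypothetical – the features, the data and the model are invented for illustration and do not reflect any system actually used in the justice system.

```python
# A minimal, hypothetical sketch of machine learning as "pattern-mining":
# fit a simple model to made-up criminal-history features, then use it to
# estimate the probability of a future re-arrest.
from sklearn.linear_model import LogisticRegression

# Each row: [age at first arrest, number of prior arrests, prior violent offense (0/1)]
X = [
    [18, 5, 1],
    [35, 1, 0],
    [22, 3, 0],
    [45, 0, 0],
    [19, 7, 1],
    [30, 2, 0],
]
# 1 = re-arrested within two years, 0 = not (hypothetical labels)
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Estimated probability of re-arrest for a new, hypothetical individual
print(model.predict_proba([[25, 4, 1]])[0][1])
```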

Garrett added that AI applications offer a potential check on human error – we can be biased, too lenient, too harsh, or “just inconsistent” – and these flaws can be exacerbated by time constraints and other factors. When it comes to AI in the criminal justice system, an important question is whether AI can provide “better information to inform better outcomes” and better approaches to the criminal system, especially given the presence of racial disparities.

However, applying AI tools to the criminal justice system should not be taken lightly. “There are a lot of issues that we need to take into account as we are designing AI tools for criminal justice,” said Farahany, “including issues like fairness and privacy, particularly with biometric data since you can’t change your biometrics, or transparency, which is related to due process.”

What does it mean for an algorithm to be fair? Rudin estimated that about “half the theoretical computer scientists in the world are working to define algorithmic fairness.” So, researchers like her are looking at different fairness definitions and trying to determine whether the risk prediction models being used in the justice system satisfy those definitions of fairness.
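As a concrete illustration of what one such definition looks like, the sketch below checks demographic parity – whether a model flags people as high risk at similar rates across two groups – on made-up predictions. The data and the “A”/“B” groups are hypothetical; real audits weigh many competing definitions, some of which are mathematically incompatible with one another.

```python
# Hypothetical check of one fairness definition: demographic parity,
# i.e. the model's positive ("high risk") rate should be similar across groups.
predictions = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = flagged high risk (made-up)
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

def positive_rate(group):
    """Fraction of people in the group that the model flags as high risk."""
    flagged = [p for p, g in zip(predictions, groups) if g == group]
    return sum(flagged) / len(flagged)

gap = abs(positive_rate("A") - positive_rate("B"))
print(f"rate A={positive_rate('A'):.2f}, rate B={positive_rate('B'):.2f}, gap={gap:.2f}")
# A small gap satisfies this particular definition; other definitions
# (e.g. equalized odds, calibration) can conflict with it.
```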

When it comes to facial recognition systems, there is “generally a tradeoff between privacy, fairness and accuracy,” Rudin stated. When software searches the general public’s pictures, it invades individual privacy; however, because the model collects pictures of everyone, it is also extremely accurate and unbiased. Similarly, Garrett noted that the federal government is a heavy user of facial recognition technologies and that no law regulates their use, pointing to the federal FACE database. “One would hope that the federal government would be a leader in thinking carefully about those issues and that hasn’t always been true,” he said, though he also praised the National Institute of Standards and Technology (NIST) and the Army Research Lab for their work in the space.

Throughout the conversation, the speakers emphasized the importance of transparency and interpretability, as opposed to “black box AI” models. 

“A black box predictive model,” said Rudin, “is a formula that is too complicated for any human to understand or it’s proprietary, which means nobody is allowed to understand its inner workings.” Likening the concept to a “secret sauce” formula, Rudin explained that many people believe that, because of its secretive nature, black box AI must be extremely accurate. However, she pointed out that such models have limitations and occasional inaccuracies, while interpretable models – those that are “understandable to humans” – can perform just as well.

“Interpretation also matters, because we want people like judges to know what they are doing,” explained Garrett, “and if they don’t know what something means, then they may be a lot less likely to rely on it.”
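One form an interpretable model can take is a simple points table that a judge could read directly. The sketch below is hypothetical – the factors, point values and threshold are invented for illustration, not taken from Rudin’s work or any deployed tool – but it shows why such a model’s reasoning is auditable in a way a black box’s is not.

```python
# Hypothetical interpretable scoring model: every factor and its points are
# visible, so anyone can see exactly why a score came out the way it did.
SCORECARD = {
    "prior_arrests_3_or_more": 2,
    "age_under_25": 1,
    "prior_violent_offense": 2,
}
HIGH_RISK_THRESHOLD = 3  # invented cutoff for illustration

def risk_score(person: dict) -> int:
    """Sum the points for every factor that applies to this person."""
    return sum(points for factor, points in SCORECARD.items() if person.get(factor))

person = {"prior_arrests_3_or_more": True, "age_under_25": True}
score = risk_score(person)
print(score, "high risk" if score >= HIGH_RISK_THRESHOLD else "lower risk")
# Unlike a black box, the reasoning is fully auditable: 2 + 1 = 3 points.
```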

In the discussion, Garrett also gave his thoughts about legislation currently being considered in Congress. He mentioned the recently introduced Justice in Forensic Algorithms Act, which seeks to allocate additional resources to NIST. Regarding the legal landscape of AI and criminal justice, he recommended that the federal government provide “resources for NIST to be doing vetting and auditing of these technologies, and they should not be black box, they should be interpretable and all of that information should be accessible to all of the sides – the judge, prosecution and defense – so that they can understand the results that these technologies are spitting out and so they can be explained to jurors and other fact finders.”

Posted 11/22/2021

The DC Digest - November 19, 2021

  • The House Passes the Build Back Better Bill
  • Higher Education Groups Urge Congress to Pass FY22 Bill Swiftly and Include Robust Funding for NIH and Several Other Agencies
  • Kvaal and Marten Testify Before Congress on Educational COVID-19 Relief Funds   
  • Tweet of the Week!

The Duke Digest - November 18, 2021

  • General Mark Milley's Day at Duke
  • Jennifer Lodge Announced as Duke's New Vice President for Research and Innovation
  • Out of This World - The Need for a Diplomatic Approach to Space Debris 
  • And Much More...

The DC Digest - November 16, 2021

  • President Biden Signs Infrastructure Bill Into Law; Congress Focuses on Social Spending and FY22 Reconciliation
  • Senate to Potentially Vote on Robert Bonnie's Nomination Today
  • Higher Education Community Urges a National Strategy and Policies to Increase International Student Enrollment
  • A Look Back on Veterans Day
  • General Milley's Visit to Duke 

The DC Digest - November 12, 2021

  • Biden Administration Begins Cancelling Student Loan Debt for Public Servants 
  • DHS Announces Fee Exemptions to Streamline Processing for Afghan Nationals
  • General Milley Visits Duke’s Campus Today
  • Tweet of the Week!

The Duke Digest - "Veterans Day Edition" - November 11, 2021

  • Celebrating the Duke Veteran Community in 2021 
  • Unpacking the Hiring & Bias Veterans Face Transitioning to the Workforce
  • General Dempsey & Peter Feaver Discuss COVID-19's Impact on Civil-Military Relations
  • And Much More...

How the Veteran Transitions Research Initiative Helps Inform Veteran Transition Success

Upon exiting the military, veterans encounter unique challenges when transitioning to the civilian workforce. Housed at Duke University’s Fuqua School of Business, the Veteran Transitions Research Initiative (VTRI) conducts research that can help inform initiatives focused on enhancing veterans’ transition to the workforce. The VTRI’s work has received national attention and participation from Microsoft, Amazon, the Call of Duty Endowment, LinkedIn and several U.S. universities.

Aaron C. Kay, the J. Rex Fuqua Professor of International Management at the Duke Fuqua School of Business; Sean Kelley, Duke Fuqua School of Business Faculty in Residence; and David Sherman, professor of social psychology at the University of California, Santa Barbara (UCSB), co-lead the VTRI’s research efforts, which center on issues related to veteran hiring and bias. Kelley’s experience serving in the U.S. Navy motivated him to break down barriers for his fellow veterans entering the civilian workforce.

In honor of Veterans Day and the important work the VTRI does for veterans, Kay answered five questions about his research on veteran hiring and bias:

What initially drove you to research veteran hiring and bias?

I’ve always studied issues related to discrimination, stereotyping and inequality from a social psychological lens. Much of my work has looked at those issues in the context of gender and also socio-economic status. Sean Kelley had taken an interest in my research on how wording choices in job advertisements can contribute to gender inequality in the applicant pool.

A few years later, he reached out to ask me what type of similar work there is on the psychological processes that affect how people treat military veterans in the workplace. Listening to Sean, it became clear to me this is a real social justice issue that needs attention. A post-doctoral researcher I was working with at the time, Steven Shepherd, and I started to research some of the ways people might unwittingly stereotype veterans. This research led to a publication showing that people view veterans as great fits for jobs that require a lot of doing, but less suited to jobs that require feeling and relating to others. And we were off and running.

What is an example of a unique challenge that veterans face as they transition to civilian careers?

I tend to think of this more from the perspective of the obstacles that veterans face that non-veterans do not. The most glaring is stereotypes, or preconceptions people hold about military veterans – in particular, the beliefs they hold about veterans that, while maybe positive and seemingly complimentary, are nonetheless stereotypes. When an average hirer or manager learns an applicant or an employee is a veteran, what immediately comes to mind regarding their strengths? What do they assume (or presume) about that person’s motives, interests and talents and, importantly, how do those presumptions affect the jobs they assign them to and where they get funneled? People tend to have a sense that negative beliefs are “stereotypes” and so they at least try to regulate them. But they are often unaware of the ways the positive or flattering preconceptions they hold about a group can also be restrictive, which, ironically, can make them even more problematic. Much of our research investigates what, specifically, these positive stereotypes look like and what effects they are having on employment outcomes.

What prompted you to start the Veterans Transitions Research Initiative and what are your plans for the future of the initiative?

There are many people doing wonderful, important research on psychological and social issues related to veteran transitions. But the topic is not mainstream among researchers – like me – who generally investigate social justice, social inequality, discrimination and stereotyping. The point of the VTRI is to inspire more people to take up this issue in their research.

We feel more minds are needed – the VTRI, while located at Duke, is co-directed with David Sherman, a social psychologist at UCSB, and Sean Kelley, an Executive in Residence at Fuqua – and the VTRI seeks to encourage more social scientists, especially in psychology and organizational behavior, to integrate this population of military veterans, and the issue of veteran transitions more generally, into their programs of research, and to test and develop their theories in this specific context. We are working hard on publishing highly visible work that makes this point and on bringing people together – researchers with a wide range of experience working with veterans as well as industry partners – to learn from and inspire one another.

How can employers better support their veteran and military-affiliated employees?

One way to approach it is to look to other examples of programmatic research on transitions that have succeeded in experimentally testing and implementing interventions at scale – for example, onboarding efforts with students from a wide range of backgrounds who are entering college. That research has found that messages centered on how challenges – such as feeling as though one doesn’t belong at the institution – are common, widely experienced, and get better over time can help students succeed in the new environment, particularly when the institution is otherwise committed to the success of all students.

Are there ways the federal government can support your research or benefit from the findings of this research?

We have given a talk on our research at the Department of Defense's Military-to-Civilian Transition Research Forum. This was a great experience as the forum brings together researchers from different disciplines – social psychology, industrial-organizational psychology, military psychology – as well as people from government and veteran-serving organizations. This forum would seem to be a great arena for the federal government to directly support and stimulate relevant research via funding grants.

By Saralyn Carcy, 11/10/21

The DC Digest - November 5, 2021

  • House Poised to Vote on Biden's Build Back Better Act
  • Higher Education Associations Request Visa Processing Flexibilities for Afghan Students and Scholars
  • Duke's Ginsburg Selected to Lead NIH All of Us Research Program
  • NASA Announces New Office Devoted to Technology and Policy
  • Tweet of the Week!


