Current debates around artificial intelligence promise much but define little. This past Wednesday, Oct. 10, the Defense Innovation Board held an open hearing on the best, most ethical way to incorporate AI into the battle space and into Department of Defense daily operations. One idea kept resurfacing: the meaning of intelligence.

Having committed to the newly fashionable field of machine learning, the Department of Defense faces the daunting task of optimizing, integrating and streamlining millions of users and apps and thousands of software systems, many of them mission-critical.

Mary “Missy” Cummings, a professor in Duke University’s Department of Mechanical Engineering and Materials Science, recently joined the board and quickly offered suggestions on how best to frame machine learning systems.

Drawing from her research on autonomous systems, she spoke about the fallacy of assuming that computers are free of human bias. Humans write the code, select the data, choose the testing facilities, recruit the participants and much else. Humans design the entire world in which an autonomous system is built.

But the DoD should take more from this example. It is not enough to say that machines carry their creators’ biases. Cummings rightly posited that patterns in code or statistics do not themselves constitute objective truth.

The most important part of any machine learning system will still be the human interacting with it. The next-generation warfighter will face not just kinetic threats on the battlefield, but also the heuristics of a machine meant to help them.