When I wrote about Anduril in 2018, the company made it clear that it did not make lethal weapons. Today, Anduril makes fighter jets, underwater drones, and other lethal weapons of war. Why did you change course?
We responded to what we saw, not only within the military but around the world. We want to be consistent about providing the best possible capabilities in the most ethical way possible. If we don't, someone else eventually will, and we believe we can do it best.
Did you have any soul-searching discussions before crossing that line?
There are constant internal debates about what to build and whether it is ethically aligned with our mission. I don't see much point in trying to draw our own lines when the government is actually drawing those lines. The government has given clear guidelines about what the military does. We follow the direction of the democratically elected government, which tells us its problems and how we can help.
What is the appropriate role for autonomous AI in warfare?
Fortunately, the US Department of Defense has done more work in this space than perhaps any other organization in the world, aside from the big generative AI foundation-model companies. We have clear rules of engagement that keep humans in the loop. We want to free humans from boring, dirty, and dangerous work and make decision-making more efficient, but always keep a human accountable at the end of the day. Regardless of how autonomy evolves over the next 5 or 10 years, that is the goal of every policy being put in place.
In a conflict, especially with weapons such as autonomous fighter jets where targets can appear instantly, there may be a temptation to attack without waiting for human intervention.
The autonomy program we're working on for the Fury aircraft (a fighter used by the US Navy and Marine Corps) is called CCA, Collaborative Combat Aircraft: a human in an aircraft controls and directs the robotic fighters and makes the decisions about what they do.
You are building a drone that will stay in the air and pounce when it finds a target. What do you think about that?
There is a category of drones known as loitering munitions: aircraft that can search for targets and then strike them kinetically, in the manner of a kamikaze attack. Again, a human is in charge of those decisions.
War is messy, and there is a real concern that these principles will be ignored once fighting begins.
War is fought by humans, and humans are flawed. We make mistakes. Even back when we were standing in lines firing muskets at each other, there was a process for adjudicating violations of the rules of engagement. I think that will persist. Do I think there will never be a case where an autonomous system is asked to do something that feels like a serious violation of ethical principles? Of course not, because humans are still the ones directing these systems. Do I think it is more ethical to wage dangerous, messy conflicts with robots that are more precise, more discriminating, and less likely to lead to escalation? Yes. To decide not to is to keep putting people at risk.
You are probably familiar with Eisenhower's farewell address warning about the dangers of a military-industrial complex that seeks only its own profit. Has that warning influenced your work?
It is one of the best speeches of all time, and I read it at least once a year. Eisenhower's point about the military-industrial complex was that the government and contractors like Lockheed Martin, Boeing, Northrop Grumman, and General Dynamics are not that different from one another. Senior executives rotate between these companies and the government, and those interrelationships make them centers of power. Anduril has pushed a more commercial approach that doesn't rely on that tightly knit incentive structure. We're saying, "Let's build things at the lowest cost, using off-the-shelf technology, and let's do it in a way where we take on a lot of the risk ourselves." That avoids some of the tensions Eisenhower identified.