September 21, 2025

When AI Becomes a Weapon, Who's the Adult in the Room?

Ethics

China just held the largest military parade in its history. Xi Jinping, Putin, and Kim Jong-un stood together on the same stage, three men who collectively control a significant portion of the world's nuclear arsenal, and the centerpiece wasn't tanks or troops. It was AI-powered weaponry. Hypersonic missiles guided by machine learning. Autonomous drones that can identify and engage targets without a human in the loop. Surveillance systems that can track a face through a city of twenty million people. I watched it from my living room, the same room where I build AI systems that help businesses run better, and I kept thinking: we are building the same underlying technology. The math is the same. The architectures are the same. The difference is what you point it at.

And that's the question nobody wants to answer honestly. When AI becomes a weapon, and it already has, whose job is it to draw the line?

Is it the AI companies? Should OpenAI, Anthropic, Google, and Meta be the ones deciding what their technology can and can't be used for? There's a case for it. They built it. They understand the capabilities better than any senator or general. They know exactly what these models can do when you take the guardrails off, because they've seen it in their red-team testing. Anthropic has an explicit Responsible Scaling Policy. OpenAI has its charter. Google has its AI Principles; remember when they declined to renew the Project Maven contract back in 2018 because their own employees revolted? That felt like a line in the sand. But here's the problem: AI companies are businesses. They have investors, revenue targets, and competitors breathing down their necks. The moment one company says "we won't do defense contracts," another one will. And the moment a foreign adversary deploys AI weapons unchecked, the moral high ground starts looking a lot less strategic. You can't run a company on principles if the company doesn't exist.

So maybe it's the government's job. Regulate it. Pass laws. Create an international treaty — a Geneva Convention for artificial intelligence. And in theory, that sounds right. Governments regulate nuclear weapons, chemical weapons, biological weapons. Why not AI weapons? Because governments are slow, and AI is fast. By the time Congress understands what a transformer architecture does, the technology will have moved three generations past whatever they're trying to regulate. The EU AI Act is the most ambitious attempt so far, and even that is already struggling to keep up. And let's be honest — governments aren't neutral actors here. The US Department of Defense is one of the biggest AI customers on the planet. You can't ask the government to regulate AI weapons while simultaneously asking the government to build AI weapons. That's not regulation, that's a conflict of interest wearing a suit.

Then there's us. The people. The engineers, the researchers, the builders. The ones who write the code that makes all of this possible. Do we have a responsibility? I think about the physicists who worked on the Manhattan Project, brilliant minds who built something extraordinary and then spent the rest of their lives grappling with what it was used for. Oppenheimer's "I am become Death" isn't just a quote; it's a warning from a man who understood too late that building something doesn't mean you control what it becomes. Are we in that moment right now? Are the people training these models the Oppenheimers of our generation? I don't know. But I know that "I just built the tool, I didn't choose how it was used" stopped being an acceptable answer somewhere around Hiroshima.

The honest truth is that I don't have a clean answer, and I'm suspicious of anyone who does. The companies can't be trusted to self-regulate indefinitely because capitalism doesn't work that way. The governments can't be trusted to regulate intelligently because they don't understand the technology and they have their own agendas. And individual engineers can refuse to work on military projects, but that just means someone else will — someone who might care less about the ethical implications. What I do know is this: the conversation needs to be louder. Much louder. We're spending all our airtime arguing about whether AI will take your customer service job while actual AI-powered weapons are being paraded through Beijing. The stakes aren't productivity. The stakes are existential. And we're treating this like a product launch.

I build AI because I believe it can make life better. I've seen it automate the tedious, accelerate the creative, and unlock things that weren't possible five years ago. I still believe that. But I'd be lying if I said that watching that parade didn't shake something loose in me. The same neural network architecture that powers the chatbot helping you write an email can be retrained to identify human targets from a drone feed. That's not science fiction. That's September 2025. So who's the adult in the room? Right now, nobody. And that should terrify every single one of us — whether you're building AI, regulating AI, or just using it to plan your dinner. Because the technology doesn't care about your intentions. It just does what it's told. And right now, not enough people are asking who's doing the telling.

-- Navin Prabhu (RealDesiMcCoy)