March 7, 2026 | 6 min read | AI Governance

Governance Lag: From Local Surveillance to Military AI

Mike Fanale | Systems & Strategy

AI did not begin with chatbots, and the governance problem did not begin with large language models. From surveillance systems at the local level to autonomous and semi-autonomous tools in defense, the real question is where human judgment ends and accountability begins.

The public conversation about AI often begins in the wrong era.

For many people, “AI” still means chatbots, image generators, and large language models. That is understandable. Consumer-facing tools made artificial intelligence visible in everyday life in a way older systems never did.

But high-consequence institutions did not begin using AI when LLMs became popular.

Long before generative AI became a public obsession, algorithmic systems were already embedded in defense, intelligence, logistics, simulation, and surveillance workflows. Anthropic itself has said its models are already used across national-security agencies for intelligence analysis, planning, simulation, and cyber-related work.

That matters because it changes the real question.

The issue is not whether AI should exist inside military or federal systems. In practice, that question was answered years ago. The more serious question is where automation stops supporting human judgment and starts displacing it.

That is the line that matters, because it is also the line where accountability begins to blur. The Department of Defense’s own doctrine reflects this concern by requiring autonomous and semi-autonomous weapon systems to allow appropriate levels of human judgment over the use of force, while its responsible AI guidance emphasizes systems that are traceable, reliable, governable, and attentive to privacy and civil liberties.

The same governance problem at different scales

The same governance problem now appears at multiple levels: in local surveillance systems, in federal data collection, and in military AI.

The scale changes. The pattern does not.

At the local level, the problem appears when surveillance systems collect, classify, retain, or infer more than the public meaningfully understands, long before transparency and oversight catch up. At the federal level, the same pattern becomes more consequential. AI systems already sit inside defense and intelligence workflows. As those systems become faster, more predictive, and more operationally significant, the public is left with a basic but increasingly urgent question: where does human judgment remain decisive, and where is accountability quietly handed off to automated systems?

What history already teaches

History offers both warning signs and guide rails.

One of the clearest warning stories came in 1983, when Soviet officer Stanislav Petrov received a computerized early-warning alert indicating an incoming U.S. missile strike. The system was wrong. Petrov did not treat the machine’s output as self-validating truth, and catastrophe was avoided. Whatever else that episode represents, it remains a durable lesson in what happens when a human being remains responsible for questioning a system that appears urgent, authoritative, and time-compressed.

A different warning came in 1991 at Dhahran, Saudi Arabia, during the Gulf War. A GAO review found that a Patriot missile battery failed to track and intercept an incoming Scud missile because of a software problem in its weapons-control computer. The tracking error worsened the longer the system operated continuously, and the patch meant to correct the problem did not arrive until after the incident. That is not a story about science-fiction AI. It is a story about software, operational assumptions, and field consequences converging under pressure. Once software sits inside a high-consequence chain, design assumptions and technical limits stop being abstract engineering concerns. They become governance concerns.

Those examples matter because they frame the real choice. Less human oversight does not simply mean faster decisions. It also means faster false positives, faster escalation, faster operationalization of software defects, and faster normalization of outputs that may be wrong. Real human oversight, by contrast, does not require paralysis. It requires bounded systems, documented thresholds, traceable accountability, and enough authority for a human being to question or override a machine before a bad output becomes a real-world consequence.

Doctrine is not blind, but doctrine is not enough

The Department of Defense is not blind to these risks.

Directive 3000.09 requires appropriate human judgment over the use of force, realistic testing, and alignment with Department of Defense responsible-AI principles. The department’s responsible-AI framework emphasizes traceability, reliability, governability, and privacy and civil liberties. Those are not cosmetic principles. They exist because the U.S. government already recognizes that speed and capability alone are not enough.

But doctrine does not eliminate pressure.

A January 2026 defense AI strategy memo frames speed as decisive, states that the risks of not moving fast enough outweigh the risks of imperfect alignment, and pushes procurement toward models free from usage-policy constraints for lawful military use. That language is revealing. It treats friction as a problem to be reduced. And once friction becomes the enemy, oversight starts to look like delay rather than protection.

That is where governance lag becomes dangerous.

In fast-moving systems, institutions are always tempted to remove whatever slows the loop: extra review, human confirmation, documentation, independent scrutiny, bounded-use clauses, or public explanation. Yet those are often the exact mechanisms that keep mistakes from becoming normalized practice. Oversight is easy to dismiss as drag right up until something goes wrong. In high-consequence environments, however, oversight is not drag. It is accountability.

Recent public-health and emergency-response debates have also shown how institutional urgency can compress timelines, harden narratives, and weaken trust when governance and communication fail to keep pace with complexity.

Surveillance, inference, and the limits of “lawful use”

The problem becomes even sharper when surveillance enters the picture.

Anthropic’s public explanation of its “mass domestic surveillance” concern is notable precisely because it is not science fiction. Its argument is simpler and more contemporary: powerful AI can aggregate commercially available information and turn disconnected inputs into a far more comprehensive picture of a person’s life at massive scale. In that setting, “lawful use” can become a weak safeguard if the law itself has not kept pace with what the technology now makes possible.

That is why legal baselines, while necessary, are not sufficient. Posse Comitatus still matters because it reflects a longstanding legal boundary against using the military as a domestic police force. But once inference power expands far beyond what older systems could do, the public-facing question is no longer only what data is collected. It is also what can be inferred, assembled, scored, predicted, and operationalized from data that may have been acquired through channels that appear lawful on paper.

This is the point at which debates over surveillance, automation, and military AI begin to converge. The issue is not whether these technologies are powerful. That is already obvious. The issue is whether governance keeps pace with the power to infer, classify, prioritize, and act.

What real human-AI cohesion would require

That is why the current debate should not be reduced to partisan shorthand.

It is not meaningfully understood as pro-AI versus anti-AI. It is not clarified by left-versus-right framing. And it is not solved by pretending that consumer-facing LLMs invented the problem.

The issue is governance.

Handing certain decisions over to machines entirely is reckless. Refusing to use technology that can legitimately extend human capability is also reckless. The serious position sits in the middle: use the tools, define the thresholds, preserve real human accountability, and build oversight mechanisms strong enough to survive institutional pressure.

That is what human-AI cohesion should mean.

It should not mean symbolic human involvement after the meaningful decision has already been shaped by the machine. It should not mean vague promises of responsibility without traceability, logs, override authority, or clear doctrine. And it should not mean asking the public to trust systems first and understand them later.

A governance model that supports speed rather than merely slowing it down would do four things.

First, it would define autonomy thresholds clearly. Systems can assist, recommend, prioritize, or simulate in many useful ways. But the line where authorization, escalation, or force become real must be explicit, documented, and reviewable. “Appropriate human judgment” is only meaningful if institutions make it operationally measurable.

Second, it would preserve traceable accountability. When a system shapes a high-consequence decision, responsibility cannot dissolve into vendor language, software opacity, or institutional fog. Someone must still own the judgment, and the process must be reconstructible after the fact. That is what traceability and governability are supposed to protect.

Third, it would treat testing and auditability as part of speed, not as obstacles to it. Real governance is not the opposite of operational tempo. It is what makes tempo sustainable. Systems that are bounded, tested, logged, and reviewable do not merely slow errors down; they keep errors from silently scaling. The Department of Defense’s responsible-AI guidance points in this direction by focusing on life-cycle testing, governability, and explicit use cases rather than on vague trust alone.

Fourth, it would align procurement with governance rather than treating governance as a downstream patch. This is where the recent Anthropic-Pentagon conflict was especially revealing. Anthropic’s public position was not that AI should be barred from defense use. It explicitly said the opposite: its models are already used in national-security contexts. Its refusal was narrower — mass domestic surveillance and fully autonomous weapons — grounded in the claim that today’s frontier models are not reliable enough for those uses and that AI makes certain forms of surveillance dangerously scalable. At the same time, Anthropic revised its Responsible Scaling Policy under competitive pressure, underscoring how fragile purely voluntary guardrails become when state incentives and market incentives intensify simultaneously. That is exactly why governance cannot depend on trust alone.
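To make the first three requirements more concrete, here is a minimal sketch, in Python, of what an “operationally measurable” human-judgment gate could look like. It is illustrative only: the class, threshold, and field names are assumptions for this post rather than any agency’s or vendor’s real interface, and a production design would add authentication, tamper-evident storage, and independent review.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class DecisionRecord:
    """Ties a machine recommendation to a named human decision so the judgment is reconstructible."""
    system_id: str
    recommendation: str
    model_confidence: float
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    human_approver: Optional[str] = None
    approved: bool = False


# The audit trail: every recommendation that reaches the gate is retained, approved or refused.
audit_log: list[DecisionRecord] = []

# Illustrative, documented threshold: below this confidence the system does not even recommend.
RECOMMEND_THRESHOLD = 0.75


def recommend(system_id: str, action: str, confidence: float) -> Optional[DecisionRecord]:
    """The system may only recommend; it cannot authorize anything on its own."""
    if confidence < RECOMMEND_THRESHOLD:
        return None
    return DecisionRecord(system_id=system_id, recommendation=action, model_confidence=confidence)


def authorize(record: DecisionRecord, approver: str, approve: bool) -> DecisionRecord:
    """Only a named human can turn a recommendation into an authorized action."""
    record.human_approver = approver
    record.approved = approve
    audit_log.append(record)  # logged whether approved or refused, so overrides stay visible
    return record


# Usage: the system recommends escalation; the duty officer declines; the refusal itself is auditable.
rec = recommend("sensor-fusion-01", "escalate to review", confidence=0.87)
if rec is not None:
    authorize(rec, approver="duty-officer-0412", approve=False)
```

The point of the sketch is structural rather than technical: the system can recommend but cannot authorize, the threshold is explicit and documented, the override authority is real, and a refusal is recorded just as durably as an approval.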

What a workable model looks like

International frameworks point in the same direction. NATO’s AI strategy emphasizes lawfulness, accountability, reliability, traceability, and governability. The U.K.’s defense AI strategy argues for adopting AI at pace and scale while still upholding democratic standards, values, and norms. Those documents do not frame oversight as the opposite of capability. They treat it as a condition of legitimate capability. That is the healthier model.

That model matters because it offers a real alternative to the false choice between speed and restraint.

Less human oversight can make a system appear efficient right up until it fails. Real human oversight, by contrast, does not mean rejecting automation. It means designing a system in which human beings remain morally and operationally responsible, where thresholds are explicit, where override authority is real, where logs and audits exist, and where faster systems do not quietly dissolve accountability in the name of inevitability.

Technology does not determine the future on its own.

Governance does.

The question is not whether artificial intelligence will continue advancing. It will. The question is whether oversight, transparency, and accountability will advance with it, or whether those systems will once again be built only after public trust has already been spent and preventable damage has already been done.

That is the governance question now sitting in plain view — locally, federally, and militarily.

And it will not become easier by pretending AI began with LLMs or that faster systems somehow erase the need for slower, accountable judgment.

Selected Sources

  • Anthropic statement on defense use, mass domestic surveillance, and fully autonomous weapons.
  • U.S. Department of Defense Directive 3000.09 on autonomous weapon systems.
  • U.S. Department of Defense responsible-AI guidance.
  • January 2026 defense AI strategy memo.
  • GAO review of the Patriot missile failure at Dhahran, Saudi Arabia (1991).
  • Historical accounts of the 1983 Stanislav Petrov early-warning incident.
  • NATO AI strategy.
  • U.K. Defence AI Strategy.
  • The Posse Comitatus Act.