AI Laws Need a Kill Switch

Originally published in National Review.

There is a better way to regulate this rapidly changing technology.

Every new parent learns quickly: Don’t overspend on newborn clothes. What looks roomy in the store becomes tight within weeks. The smarter approach is simple — buy what fits now, monitor growth, and adjust as needed. Lawmakers regulating artificial intelligence (AI) are taking the opposite path, assuming that their initial choices will fit the industry forever.

The risk of poor regulatory decisions is high, given the speed and significance of AI progress. Yet, across the country, state legislatures are rushing to pass AI laws built around brittle assumptions about how AI systems are developed, deployed, and used. These bills often include conflicting, rigid definitions, burdensome compliance obligations, and enforcement frameworks that reflect the current, fleeting moment in AI development. AI, like a newborn, is growing fast and unevenly. If you think it’s impressive now, just wait until it can walk. Although some of these laws may indeed provide a net benefit in the short run, their costs will likely be far greater as they inevitably stop operating as intended while remaining firmly on the books.

That mismatch carries real costs. AI development is changing along multiple dimensions at once. Model architectures are evolving. Today, we’re focused on large language models. Tomorrow, world models could well be the new “thing.” A few years from now, we may be relying on entirely new classes of AI systems. This fluidity and unpredictability characterize the whole AI ecosystem. Industry norms around safety, evaluation, and documentation are already changing faster than policymakers can keep up. Adoption patterns will presumably continue to vary by sector, income, and geography. For all these reasons and more, a statute calibrated to one configuration of the AI ecosystem can become misaligned with current needs even as its text remains unchanged.

Strict AI laws impose high costs on dynamic activities. To comply with rigid regulations, companies, researchers, and investors will necessarily distort behavior — allocating their resources, time, and attention to the path of least regulatory resistance. Developers may optimize for compliance rather than performance. Smaller, nascent firms will face high barriers to entry as today’s laws accumulate like back issues of TIME magazine in a hoarder’s garage. In turn, society will lose access to useful tools, as AI companies remove or withhold prohibited features. Meanwhile, we’ll be paying for regulators who enforce rules that no longer track actual risk. None of these outcomes requires bad intent. They are the predictable result of treating a dynamic technological system as static.

Most AI legislation is written as though the hardest work is passing the bill. My scan of bills from Washington to Virginia uncovered very few, if any, with robust provisions for mandatory, exacting retrospective review of regulations. Even fewer account for the possibility of their obsolescence by establishing timed sunset clauses.

Admittedly, sunset clauses are a blunt instrument. They terminate a law based on the passage of time rather than on evidence about whether the law is working, misfiring, or no longer necessary. A statute can expire even if it is well-calibrated — or persist until its sunset date even after its assumptions have collapsed. In fast-moving domains like AI, the calendar is a poor proxy for risk or social value.

There is a better way to design technology law: Every AI statute should include a mandatory “kill switch” tied to explicit, testable conditions.

A kill switch compels legislative humility: the admission that a law’s expiration date may arrive sooner than its authors expect. Kill switches are best thought of as built-in mechanisms that automatically narrow, suspend, or nullify a law when defined factual triggers are met. Those triggers might relate to measurable outcomes of AI models, market concentration, compliance burdens, adoption rates, or the development of alternative industry standards.

This approach forces a discipline that’s currently missing from AI policymaking. Legislators would need to specify what success looks like and what failure would require. And they would have to accept that not all regulatory bets pay off as conditions change.

A kill-switch requirement would also improve AI governance over time. A law that can turn itself off creates pressure on lawmakers to define its scope carefully and justify its costs. Such a provision also preserves democratic accountability by ensuring that continued enforcement depends on evidence, not inertia.

The alternative is what we see now: laws that age silently into technological irrelevance. Like too-small clothes kept in the dresser out of habit, they remain part of the wardrobe long after they stop serving their purpose.

Good parents do not try to predict exactly how their child will grow. Instead, they plan to adapt — buying what fits now and expecting to do so again in a matter of months, if not weeks. AI legislation should follow the same logic: Write rules that work now, measure whether they continue to work, and include mechanisms to change the rules when they no longer do.

If lawmakers want AI regulation that protects consumers without freezing innovation, they should stop treating permanence as a proxy for seriousness. The most responsible technology laws are the ones designed to be outgrown.