Casey Depenbrok, University of California, Santa Barbara
- Introduction
Artificial intelligence is no longer a distant promise or speculative risk. It is already reshaping how we work, communicate, govern, and perceive reality. Yet as A.I. capabilities improve, the public conversation around how — and whether — to govern these systems remains dangerously underdeveloped. The imbalance between the rapid development of artificial intelligence and the safeguards meant to govern it is no longer subtle; it is endemic. Between 2020 and 2025, elite and media discourse around A.I. followed a consistent pattern: innovation first, risk second, governance last. As Jensen Huang, CEO of NVIDIA, put it: A.I., cloud computing, and autonomous machines will “drive the next industrial revolution,” and “there’s no going back.” That confidence has dominated headlines, but what happens when momentum replaces accountability?
This editorial argues that the current A.I. discourse prioritizes speed and scale over safety and responsibility, and that this imbalance is shaping policy outcomes in real time. When governance enters the conversation only after systems are deployed and markets are locked in, regulation becomes reactive — not preventative.
- Discourse on A.I.
The imbalance is explicit in how prominent developers themselves talk about A.I. In statements from major tech leaders, development and innovation take priority while ethics and societal effects consistently fall on the back burner. Executives like Sam Altman, Demis Hassabis, Jensen Huang, and Dario Amodei exemplify this pattern; Huang’s declaration that “the race is on to adopt generative A.I.” captures the focus. Even discussions of policy and governance trail behind optimism and projections of economic impact. The message is clear: success is measured by speed and growth, not by social consequence or ethics.
The media tells a different story. News coverage is dominated by concerns about A.I. security and risk, with deepfakes, scams, existential threats to humanity, and privacy breaches overwhelming broadcasts. One NBC segment warned of a coming “tsunami of fraud,” while another declared bluntly that people might be “a passing phase in the evolution of intelligence.” Yet even here, governance is framed as perpetually behind. Coverage of regulation emphasizes that governments are “playing catch-up” and that the world “needs a global framework” to regulate artificial intelligence, reinforcing the idea that oversight is always late to the problem rather than foundational to innovation itself.

This divergence matters. When tech leaders frame A.I. as too vital to society to hinder, regulation becomes politically costly and economically suspect. For example, on December 11, 2025, President Trump issued an executive order aimed at centralizing A.I. policy at the federal level and challenging state A.I. regulations, threatening to withhold funding from, or even sue, states whose oversight laws do not support “the United States’ global A.I. dominance.” The order illustrates how governance is being designed to clear the path for speed and scale, treating oversight not as a foundation for innovation but as an obstacle to be removed. The result is a policy environment designed to support growth rather than set boundaries — governance becomes damage control.
Perhaps the most consequential shift came this past year, as industry leaders themselves began issuing public warnings. In a New York Times opinion essay, Anthropic CEO Dario Amodei argued that A.I. companies should not be “let off the hook.” He described internal testing in which models resisted shutdown, approached the ability to assist with cyberattacks, and displayed other behaviors that raised serious safety concerns. These are not hypotheticals; they are documented warning signs. Crucially, Amodei calls for mandatory transparency standards requiring that companies disclose model capabilities, testing practices, and risk mitigation strategies. His argument underscores a turning point: some creators of the technology are now publicly recognizing that voluntary restraint is insufficient and calling for action.
- Conclusion
The longer governance lags, the higher the stakes become. Ethical concerns — labor displacement, trust erosion, human well-being — remain secondary, surfacing only when harm becomes visible or affects vulnerable populations. Clearly, ethical issues are treated not as necessary priorities but as afterthoughts, addressed only once security or economic risks dominate the discourse. This is not an inevitability; it is a choice that power brokers must make.
Policymakers, regulators, and institutions must stop treating A.I. governance as an obstacle to innovation and start treating it as a condition for legitimacy. Transparency requirements, baseline safety standards, and enforceable accountability mechanisms should be implemented before these systems reshape markets and social trust beyond repair. The public conversation must also shift: if innovation continues to dominate the narrative while governance lags behind, we should not be surprised when regulation arrives too late and too weak.
Artificial intelligence is already powerful. The question is no longer whether it will transform society, but whether we will choose to guide that transformation — or simply react to its consequences.