Governing the AI transition: Lessons from the 1996 Telecommunications Act
More than 300 bills related to artificial intelligence (AI) have been introduced in the U.S. Congress, along with approximately 1,200 in state legislatures.
Legislating in the midst of a technology transition is both important and risky. It is important because protecting the public interest requires rules and expectations rather than an absence of rules that allows companies to act unilaterally in their own interest. It is risky because lawmakers tend to define tomorrow in terms of what is known today—a reality that inhibits the agility necessary in an environment of fast-moving innovation.
The last time Congress attempted to legislate in the midst of a technology transition was the Telecommunications Act of 1996, signed into law by President Clinton on Feb. 8, 1996. The new law updated the Communications Act of 1934. A 30-year look back can help inform today’s discussion about a national policy for the destabilizing effects of artificial intelligence.
Thirty years ago, the destabilizing event was the shift from analog to digital technology. The effect was to collapse long-established business categories and scramble market structures. In a prescient move that informs today, the new act did not seek to predict the path of technology but focused on the market structures that would determine that future.
To oversee this competitive focus, Congress empowered the Federal Communications Commission (FCC) to identify and address chokepoints that could thwart effective competition—many of which were controlled by the established companies. In the period following passage, the FCC conducted over 100 rulemakings and other actions to implement this mandate.
Today, as AI reshapes the economy and society, destabilizing technological forces have returned. The ’96 act is not merely a story about “telecom”—it is a case study in governing a technological transition. It is a story of determining what the government should regulate—technology or power—and of the importance of an expert agency to oversee the process.
The 1996 moment: When technology collapsed the categories
Before 1996, communications markets were regulated as separate categories. Telephone service, broadcasting, and cable operated under different regulations reflecting their different technologies and purposes.
Then digitization arrived. Suddenly, communications shifted from analog waveforms to digital bits. Once everything became the zeroes and ones of digital information that could be transmitted over a common architecture, decades-long distinctions began to collapse. Phone calls could travel not only over traditional telephone networks, but also over cable TV networks or through the air. Video could travel through the air or over wire and no longer required a single-purpose display terminal. The network no longer determined the service; software did.
The ’96 act was a recognition that digitization created the opportunity for competition through cross-entry. Let local phone companies enter video and long-distance. Let the cable companies offer voice. Let new companies into the market to compete across boundaries that technology was erasing.
At the time, it was an inspired bet. It also pioneered an insight that remains deeply relevant for AI policy today—that technology policy decisions are not just about the technology itself, but about how powerful actors strive for and achieve market dominance.
Convergence, consolidation, and chokepoints
One structural fact stands out 30 years after the Telecom Act—scale won. Over time, the market’s gravitational pull toward scale proved stronger than the statute’s aspiration toward competitive rivalry. The lesson of the ’96 act is not that competition policy always prevails, but that it requires eternal vigilance.
In broadcasting, concentration rose sharply and localism weakened. In telephony, the old AT&T—broken apart by the 1982 antitrust settlement—has, in key respects, reassembled itself. Local cable operators have rolled up into a couple of companies with national scope. Across the covered industries, financial and technological forces favored large firms that could bundle, leverage, and cross-subsidize.
The intervening years also transformed the nature of the FCC. Prior to the ’96 act, the FCC was a public-interest regulator overseeing the relatively settled activities of concentrated private power. The ’96 act shifted the agency from managing monopolies to promoting competition across converged markets. In many ways, it transformed the FCC from a cop on the monopoly beat to a referee adjudicating among battling interests. Were the interconnection fees just and reasonable? Was the transfer of broadcast licenses necessary to permit consolidation in the public interest? The ’96 act didn’t eliminate regulation; it simply redirected its priorities.
The platform era: Open networks, closed superstructures
The ’96 act did not address the newly emerging internet and its demonstration of the benefits of openness.
The riotous innovation of the early internet was a result of its openness. Companies like Google and Facebook originated as upstart competitors against more established enterprises. They prevailed thanks to open standards that enabled innovation to develop a better product, and open network access that allowed them to reach users unfettered. This openness powered extraordinary economic growth.
As platform companies grew, they built vertically integrated, closed ecosystems. These barriers protected their dominance against the very kind of upstart competition that had enabled their own rise. Building on this closed superstructure, they expanded through network effects and acquisition, and then leveraged that dominance through self-preferencing, tying, and other competition-suppressing tactics.
Crucially, the dominant platforms developed another tactic: the externalization of costs and risks. Just as industrial polluters externalized cleanup costs, the online platform companies transferred the social costs of privacy infringement, misinformation, polarization, effects on children, and other harms to the public at large. It was a predictable pattern of corporate behavior. Today, many of those same companies manifest the same behavior in their AI exploits.
This pattern of openness followed by closure set the stage for AI’s emergence.
Algorithms to AI: The business model of online platforms
AI did not arrive as a scientific miracle that happened to land on Silicon Valley’s doorstep. It emerged within the online platform economy because the business model of the platforms required it.
That online platform business model is fundamentally about prediction—predicting what users will click, what they will buy, what they will believe, and what will keep them engaged. The early machine learning systems of the online platform companies were essential to the recommendation systems, targeted advertising, ranking algorithms, and personalization engines of the online platform business model.
As opinion writers in The New York Times soberly observed, “Social media was the first contact between A.I. and humanity, and humanity lost.”
It is in this sense that the foundation models of AI are not a departure from the platform era. They are its logical continuation by many of the same companies in pursuit of prediction at a higher order of magnitude and an even bigger business opportunity. The platforms didn’t “discover” AI. Their business model demanded it, and their monopoly profits financed it.
The new stack: From last mile to models
The Telecommunications Act emerged when the “last mile” was the key chokepoint. Control over the physical connection to the customer made market power durable for telephone, cable, and broadcast companies.
AI presents a different architecture but the same structural threat. While telecom bottlenecks centered on control of physical access to consumers, AI chokepoints are economic, making them more subtle but no less exclusionary.
AI power is organized as a stack of interdependent layers that can act as bottlenecks.
At the bottom of the stack are the microprocessors that power the ever-more-complex algorithms of AI. Built atop those chips is the cloud layer of computational capacity. That compute powers the algorithmic models that, like the factories of the industrial era, transform inputs into economic outputs. Finally, at the top of the stack is the end deliverable: applications in which the power of the model is harnessed to address specific needs.
Each of these layers represents both an opportunity for new applications and a chokepoint through which dominant AI companies can control the ability of others to innovate. On the beneficial side, integration across these layers can produce efficiencies, performance gains, and improved safety. The risk arises when that integration is weaponized to foreclose third-party innovation. Controlling a layer’s capabilities is both an opportunity to deliver an application and, at the same time, a way to limit the activities of new companies with new application concepts.
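To make the stack-and-chokepoint idea concrete, here is a minimal sketch in Python (purely illustrative: the provider names are hypothetical and the concentration threshold is an assumption of ours, not anything drawn from the act or from current law) of how an overseer might represent the layers and flag those concentrated enough to act as bottlenecks:

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    """One layer of the AI stack, bottom to top."""
    name: str
    providers: set[str] = field(default_factory=set)  # firms active at this layer

def chokepoints(stack: list[Layer], max_providers: int = 2) -> list[str]:
    """Flag layers controlled by so few firms that access decisions made
    there can foreclose innovation in every layer above."""
    return [layer.name for layer in stack if len(layer.providers) <= max_providers]

# Hypothetical stack with made-up provider names.
stack = [
    Layer("chips", {"chipmaker_a"}),
    Layer("cloud_compute", {"cloud_x", "cloud_y"}),
    Layer("foundation_models", {"lab_1", "lab_2", "lab_3"}),
    Layer("applications", {f"app_{i}" for i in range(50)}),
]
print(chokepoints(stack))  # ['chips', 'cloud_compute']
```

The threshold here is arbitrary; the point of the sketch is that chokepoint analysis proceeds layer by layer rather than treating “the AI market” as a single whole.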
The parallel between the telecom companies’ control of infrastructure and the large AI companies’ control of the stack reinforces the need for policy to focus on the behavior of those who control each capability, and on whether that capability is openly available, even to potential competitors, so that anti-competitive bottlenecks are not created.
Understanding this stack structure reveals why the industry is now shifting strategy.
The coming shift: From models to applications
A crucial inflection point in the intelligence era is now underway. Foundation models are showing signs of commoditization: despite continued performance improvements, competition and open-source models are compressing margins.
Training and running frontier models is extraordinarily expensive. Margins on models are likely to compress as performance gaps narrow, open models intensify price pressure, and customers demand portability and lower costs.
When returns at the model layer are uncertain, it is only logical for firms to seek profits elsewhere. The stack layer where economic value ultimately resides is at the top, where applications meet specific needs. ChatGPT, itself an application built on OpenAI’s foundation model, is embedded in Microsoft’s products. Google’s Gemini is embedded in Gmail, Docs, and other Google products, ships as an application on Samsung devices, and serves as Apple’s primary generative AI partner. Anthropic’s Claude is an application that is being expanded to organize work and facilitate collaboration.
The ’96 act promised incumbents entry into new markets in exchange for opening their own. As their central business activity commoditized, firms pursued higher-margin activities—a pattern now repeating in AI. As long-distance phone service was commoditized by MCI, Sprint, and others, long-distance giant AT&T entered the local exchange market. As that market was invaded by the new “competitive local exchange carriers” (CLECs), incumbents began bundling new digital applications such as video, broadband, voice, and mobile into a single cost-saving package that locked in consumers.
These were entirely rational corporate decisions in response to the economic gravity of higher margins and customer stability. That gravitational pull also exists for the dominant AI companies. A move into AI applications is a response to the commoditization of models. By embedding proprietary AI applications into workflows, model owners create switching costs and customer “stickiness,” enabling them to move from a commoditizing activity to one with durable and defensible pricing power.
It is precisely in that transition—from model creation to application domination—that the risk of stack capture becomes acute.
The lesson for AI: Regulate behaviors
The lesson of the Telecom Act was to focus less on the specific applications of digital technology and more on promoting and protecting the marketplace competition that would drive faster, better, cheaper telecommunications services. It is a model that can still work 30 years later when it comes to AI oversight. Regulating what models do matters, but it is not the essential governance problem.
That problem is how to promote the diffusion of innovative AI applications. It is these applications that drive discovery and productivity in the domestic market and thus create opportunities internationally. China’s strategy of promoting diffusion of AI applications across its economy demonstrates the competitive importance of this approach.
Such a diffusion strategy starts with the recognition that competing with China begins with competition at home. Implementing such policy, however, necessitates confronting the concentrated power of those controlling the AI stack to deny access, discriminate, self-preference, bundle and tie services, foreclose through defaults and integration, and use dominance in one or more layers to capture the top and most valuable layer.
The decades since the Telecommunications Act have elevated the importance of policy that focuses on oversight of the behavior of dominant companies.
In telecom, the crucial issue was never whether voice and video converged. It was who controlled interconnection and who controlled access.
In AI, the lesson repeats. The crucial issue is who controls the essential inputs and who controls distribution—and whether they use that power to prevent competitors from building applications.
These lessons from telecom suggest a specific approach to AI governance.
A two-step framework: Non-discrimination first, then oversight
The ’96 act’s attention to chokepoints is thus a starting point for AI oversight. The central policy objective must be the prevention of anti-competitive control of essential capabilities. This argues for a two-step regulatory framework overseen by an independent expert agency with technical capacity and ongoing authority to adapt oversight as technology evolves.
Step 1: Ex ante non-discrimination
Make essential inputs accessible on fair, transparent and non-discriminatory terms. This includes compute access, model access and licensing, interoperability and portability, and distribution neutrality (especially defaults and bundling)—all, of course, subject to national security and safety protections.
This is not micromanagement. It is structural prevention: ensuring that markets remain contestable.
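As a rough sketch only (the checklist fields and pass/fail rule below are illustrative assumptions of ours, not statutory language or anything proposed in this article), step one can be thought of as a checkable test of a dominant provider’s access terms:

```python
from dataclasses import dataclass

@dataclass
class AccessTerms:
    """Hypothetical snapshot of how a dominant firm offers an essential
    input; the fields mirror the ex ante obligations listed above."""
    compute_access_open: bool      # compute sold on published, uniform terms
    model_licensing_open: bool     # model access and licensing available to rivals
    interoperable_portable: bool   # customer data and workloads can move elsewhere
    distribution_neutral: bool     # no self-preferencing defaults or bundling

def passes_step_one(terms: AccessTerms) -> bool:
    """Step 1 test: every essential input is offered on fair, transparent,
    and non-discriminatory terms; failing any one dimension fails the test."""
    return all([
        terms.compute_access_open,
        terms.model_licensing_open,
        terms.interoperable_portable,
        terms.distribution_neutral,
    ])

# Example: a firm that ties distribution to its own applications fails step one.
print(passes_step_one(AccessTerms(True, True, True, False)))  # False
```

The structure is the point: each obligation is independently checkable, and a failure on any single dimension defeats compliance.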
Step 2: Ex post governance
Once openness exists, it becomes possible to govern AI’s impacts through safety evaluation and auditing, civil rights protections, consumer protections, accountability mechanisms, and antitrust enforcement against exclusionary conduct.
Here, the observation of former Federal Trade Commission (FTC) Chair Lina Khan is important: “There is no AI exception to the laws on the books.” AI-enabled fraud is still fraud, AI-enabled discrimination is still discrimination, and AI-aided collusion is still a violation of the antitrust laws.
The important point is sequencing: Step two is impossible without step one. You cannot audit, govern, or hold accountable systems you cannot see, test, or access.
In this sense, non-discriminatory openness is not simply “competition policy.” It is governance infrastructure to ensure the AI marketplace remains open for innovation from many, not just a self-selected few.
The Telecom Act of ’96 is relevant to the policy challenges of AI not because AI is telecom, but because technological transitions produce bottlenecks that become the birthplace of durable private power.
In 1996, Congress sought to legislate a transition by prioritizing competition. Now, in the AI era, we face an even more consequential transition. The stakes are not merely market structure or consumer pricing. The stakes extend to labor markets, intellectual property, national security, democracy, and the control of the tools of knowledge itself.
AI governance begins with a simple lesson from history: Technological transitions are not just technology challenges—they are power problems.
If we fail to address the concentration of AI power, we risk consequences far beyond market structure.
Degraded competition undermines innovation and, with it, international competitiveness and national security. More fundamentally, when a small number of firms control information flows, infrastructure, and decision systems essential to society’s functioning, they threaten the democratic fabric itself.
