In Bob Woodward’s Plan of Attack, Colin Powell is quoted warning President Bush before the Iraq invasion: “You are going to be the proud owner of 25 million people. You’ll own all their hopes, aspirations, and problems. You’ll own it all.” It was a vivid encapsulation of what became known as the “Pottery Barn rule” - if you break it, you own it. It was a warning (ultimately ignored) about responsibility. Once the United States broke Iraq’s existing order, responsibility for the mess shifted, permanently, to the one who broke it.
Which brings me to artificial intelligence. Or rather - the generative language models we’ve colloquially accepted as artificial intelligence.
AI is not being introduced to industries so much as imposed on them, whether they are ready or not. The models are powerful, the hype is overwhelming, and the adoption curve is steep. The costs and consequences, however, are poorly understood. The analogy holds: when AI breaks workflows, industries, or even public trust, those who pushed it into place will own the fallout.
This is not an argument against AI adoption. The economic incentives are too strong, the competitive dynamics too unforgiving. But as Powell might have warned: once you deploy AI at scale, you have accepted responsibility not just for its benefits but for its failures. And unlike traditional software bugs, those failures are systemic.
Software or Infrastructure?
A recurring theme: what looks like a product is often better understood as infrastructure. Windows looked like a product, but it was infrastructure for the PC ecosystem. iOS looked like a product, but it became infrastructure for the mobile economy. AI sits in the same category. It looks like a tool - an app that generates text or images - but in practice it is a probabilistic reasoning layer inserted into workflows across industries.
That distinction matters because infrastructure failures have different implications. A bad product can be returned. Infrastructure failures cascade. If Copilot generates incorrect financial models in Excel, the problem isn’t a feature bug; it is an error embedded in a company’s decision-making. If an AI system used in healthcare produces misleading treatment recommendations, the liability doesn’t stop at the vendor’s terms of service. It flows through doctors, insurers, regulators, and ultimately the entire healthcare system.
That is ownership.
Who Owns the Errors?
AI vendors are clear about disclaiming responsibility. The terms of service for OpenAI, Anthropic, and Google make it explicit: outputs are not guaranteed to be correct, and liability is disclaimed.
But enterprises deploying AI can't hide behind those disclaimers. If a bank uses an LLM to generate investment recommendations, the bank owns the outcome in the eyes of customers and regulators. If a university deploys AI tutors and students fail, the institution - not the model vendor - faces the reputational hit.
This liability gap is the essence of the Pottery Barn problem. Vendors can disclaim, but enterprises simply cannot. The decision to adopt AI is a decision to absorb new categories of risk: hallucinations, bias, unexplainable outcomes. It's responsibility without full control.
Regulation by Deployment
The U.S. invasion of Iraq created a governance vacuum; AI deployment is creating a similar vacuum across multiple industries. Regulators are scrambling to issue frameworks: the EU’s AI Act, NIST’s AI Risk Management Framework, China’s draft rules.
But the reality is that companies deploying AI are acting as de facto regulators. A firm that uses AI in hiring implicitly decides what fairness looks like in employment. A startup embedding AI in diagnostics implicitly sets standards of care. These are governance decisions made through product deployment, not policy debate.
The precedent here is social media. For years, Facebook insisted it was a tech company, not a media company. That fiction collapsed once the consequences of unmoderated content became clear. At that point, Facebook was forced into an editorial role it never wanted. The same arc is playing out with AI. Enterprises that frame AI as “just a tool” will discover that they are shaping norms and rules for entire sectors.
They'll own it whether they like it or not.
The Incentive Problem
Why adopt AI if the risks are so high? The answer is obvious: competition. A newsroom that resists AI-assisted workflows risks higher costs. A consulting firm that avoids AI research assistants risks slower output. A hospital that declines AI diagnostics risks falling behind on efficiency metrics. The incentives mirror those of social platforms: grow faster, capture more value, and figure out the costs later.
The problem, again, is ownership. Once AI is embedded, you can't walk away when things break. Every misdiagnosis, every biased hiring outcome, every hallucinated legal memo becomes part of the institution’s responsibility. Investors may cheer adoption, but customers and regulators will hold the deploying organization accountable.
Trust as Collateral
Consumer tolerance of failure depends on context. People laugh at autocorrect mistakes, but they don’t laugh at incorrect tax filings or unsafe medical prescriptions. Once trust is broken at the institutional level, it rarely returns. Autonomous vehicles are a clear example. Statistically, self-driving systems may be safer, but every highly publicized crash reshapes public opinion. When trust collapses, the economic promise collapses with it.
AI faces the same trust cliff. Companies that choose (and it is a choice) not to prepare for inevitable errors will discover that customers don’t care about disclaimers or probabilistic reasoning. They care about outcomes. Trust, once broken, becomes nearly impossible to restore.
Owning Without Understanding
The most dangerous part of AI ownership: the opacity of the systems. Most enterprises are not training foundation models themselves; they are layering workflows on top of OpenAI, Anthropic, or Google. These models are black boxes. No CIO can fully explain why a model produced a particular output. But once deployed, responsibility for those outputs lies with the enterprise.
This is ownership without comprehension. It's one thing to take responsibility for a piece of software whose source code you control. It's another to accept responsibility for a probabilistic system with billions of parameters you cannot audit. Enterprises are putting themselves in the position Powell described: owners of outcomes they cannot fully control.
The Pottery Barn rule applies: if you break workflows, markets, or trust by deploying AI, you own the consequences. Vendors will disclaim, regulators will lag, but enterprises will bear the costs. That is the structural reality.
The lesson from Iraq is not only about hubris in foreign policy (though that lesson might prove valuable if it's ever absorbed); it's also about responsibility in disruption. Breaking is easy. Owning is hard. The companies that recognize this early - by building oversight systems, preparing for liability, and planning for failure - will be better positioned than those that assume AI adoption is simply another software upgrade.
Because in reality, adopting AI is like a regime change. Once you topple the old order, you own what comes next.