The Law of Shitty Moats
AI memory will be huge, but not a silver bullet. Moat diggin’ ain’t what it used to be.
Years ago, Andrew Chen published The Law of Shitty Clickthroughs, which states:
Over time, all marketing strategies result in shitty clickthrough rates.
Good ideas get copied, cheap channels get oversaturated, and audiences become desensitized. It's a Red Queen's race: to keep generating the same returns, you can't stay still. You have to keep discovering new good ideas. This is capitalism doing its thing.
Of course, some businesses do escape the rat race, to some extent. Apple doesn't really care what CPMs TikTok influencers are charging these days, because they have many structural competitive advantages (collectively, a "moat"1) that prevent their competition from being able to offer the same value they can at the same cost basis.
As startups grow and mature into large enterprises, executive teams increasingly become preoccupied with moats, for good reason. Their success is no longer a secret. Of course they want to innovate, but it’s easier to innovate when you’re not involved in cutthroat competition (n.b. Thiel’s Zero to One).
I would imagine OpenAI is in this situation now. ChatGPT is enormously important to OpenAI overall, so the questions become: how do we make sure it keeps growing? How can we put ourselves in a position to deliver value that no one else can? This is where memory obviously comes in. Which LLM product would you rather use: the one that already knows everything about you, or the new guy?
Predictably, tech pontificators (a group for whom I have much affection) have taken to Twitter and Substack to proclaim this the beginning of OpenAI's era of true dominance in consumer LLM interfaces. And to be honest, they're probably right, to some extent. But I don't think AI memory will be the silver bullet a lot of people are making it out to be, because I have a different model of how moats get created.
To understand why, I propose a new law, similar to the Law of Shitty Clickthroughs, which I will creatively name The Law of Shitty Moats:
Over time, as a source of competitive advantage becomes common knowledge, it becomes harder for new entrants to capitalize on it.
Basically, nature abhors a vacuum, and a moat creates a vacuum of competition. All the other players in the ecosystem (not just direct competitors but also suppliers, buyers, substitutes, etc.; shout out Porter's five forces) have zero interest in seeing one player dig themselves into an impenetrable position. If everyone can obviously see what you're doing, it's going to be harder (not impossible, but harder) to turn it into a big, long-term competitive advantage. In the early phases of a market's maturation, before the moat is constructed, the other players have relatively more power, and they will be able to do things to stop, slow, and/or mitigate it.
One great recent example is the AI system integration layer. You would imagine ChatGPT should have a much easier time building an integration ecosystem than Claude, because it operates at much bigger scale. In a world where the value of an integration ecosystem was less well understood, maybe OpenAI would have run away with it. But instead Anthropic released MCP very early in the game, an open standard for connecting sources of data to LLM systems, massively neutralizing ChatGPT's scale advantage. It's not hard to imagine similar open memory efforts taking off.
The concept of "alpha" from investing helps us understand what's going on here. Alpha is knowledge others don't have that you can use to profit. If you have real alpha, your actions probably look crazy to most onlookers. Billy Beane's Oakland A's looked ridiculous to most other managers; the other managers were wrong. OpenAI introducing memory is not crazy. It's obvious.
Switching costs are literally a textbook source of competitive advantage for software businesses; SAP, Adobe, Oracle, and others have been milking them since the 80s. But I would hypothesize that fewer modern businesses have been able to build as strong a moat on switching costs, because we are more sensitive to getting trapped in proprietary systems. This is hard to prove or disprove (there's no easy way to generate a counterfactual world where everything is the same except no one has any idea about switching costs), but a world where the Law of Shitty Moats is real is more aligned with basic economic principles than a world where it isn't.
To be clear, I’m not saying ChatGPT memory won’t be a popular feature, or that it won’t contribute towards OpenAI’s moat. All I’m saying is, other players in the ecosystem (including consumers) understand the idea of lock-in, and they don’t like it. So I don’t think this is game over for everyone competing with ChatGPT.
In fact, the real galaxy-brained take is this: if I were OpenAI, and if memory turns out to create as much value for users as it seems like it could, I'd want to make memory as open, accessible, and interoperable as possible (modulo the obvious privacy constraints).
Why? I think OpenAI would rather be the operating system than just the one killer app. To capture the full opportunity in front of them, they'll need to behave more like a platform than an aggregator, to use Ben Thompson's parlance. In other words, they'll ultimately make a bigger impact on the world, and I think also build a bigger business that's safer from regulatory scrutiny, if they are more like Microsoft and AWS and less like Facebook and Google.
Who knows though! Will be fascinating to watch it play out.
1. Moats are the source of a major cultural divide in tech. The two camps are:
A) Business nerds: typically fans of Buffett, Christensen, Helmer, etc. Often MBAs, either literally or in spirit. Financial models hate to see them coming. Obviously they love moats.
B) Technology nerds: enjoyers of Hacker News, O'Reilly books, Mastodon, etc. They love elegant architectures, and alternate between hating moats and denying they exist. Often heard saying, "the only real moat is to innovate and build a better product."
The business nerds are macro-right (moats are real, important, etc.) but often micro-wrong. They get strategy-drunk and make stupid decisions.
The technology nerds are macro-wrong but often micro-right (great products matter most, execution and innovation are decisive, and moats are often less impenetrable than they seem).