In 2017, Marty Cagan defined four foundational product risks: value, usability, feasibility and viability. “Do users want it, can they use it, can we deliver it, and is there a business case to support it?” Late in 2023, I advocated for formal assessment of an additional foundational product risk: ethicality. “Should we build it?” Now, I think it’s time we introduce another foundational product risk beginning with “A”: acceleration.
If you’re even remotely technology-adjacent you’ll have some understanding of what “acceleration risk” refers to. It’s when the rate of progress nukes products at a velocity far beyond business-as-usual market dynamics. It’s a radically shortened frontier-research-to-productisation pipeline.
It’s more than mere technological evolution. It’s when a year-old product is “sherlocked” by a month-old product because an emerging capability came to market. It’s search engines being bypassed by answer engines. It can manifest as startups building app launchers and meeting-notes tools being existentially threatened, and compelled to act, by a BigTech feature release. It’s what present as defensible moats turning out not to be moats at all. And it doesn’t have to be at the app layer. It can be something like the reverse engineering of embeddings developing into more than an academic capability and forcing adaptations. It’s, of course, related to the manic unfolding of AI capabilities.
Vedika Jain has a neat, quick-and-dirty framework for addressing this acceleration risk. The key questions (a rough scoring sketch follows the list):
- Commoditisability: could an OSS frontier model copy 80% of your value in <12 mo?
- Non-substitutable assets: do you own data/hardware others can’t scrape or buy?
- Complement vs. substitute: does AGI raise demand for you?
- Regulatory/trust moat: is there a legal or brand reason buyers would pick you?
- Capital gearing: do margins improve as compute gets cheaper?
- Network/brand: will users stay if a free clone appears tomorrow?
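To make the checklist concrete, here’s a minimal sketch of how you might run it as a crude scoring exercise. The questions are Jain’s; the yes/no encoding, weights and verdict thresholds are illustrative assumptions of mine, not part of her framework.

```python
# Illustrative scorer for the checklist above. The questions come from
# Vedika Jain's framework; the boolean encoding, weights and thresholds
# are my own assumptions, purely for demonstration.

CHECKLIST = [
    ("commoditisability", "Could an OSS frontier model copy 80% of your value in <12 months?", -1),
    ("non_substitutable_assets", "Do you own data/hardware others can't scrape or buy?", +1),
    ("complement_vs_substitute", "Does AGI raise demand for you?", +1),
    ("regulatory_trust_moat", "Is there a legal or brand reason buyers would pick you?", +1),
    ("capital_gearing", "Do margins improve as compute gets cheaper?", +1),
    ("network_brand", "Will users stay if a free clone appears tomorrow?", +1),
]

def acceleration_exposure(answers: dict) -> str:
    """Turn yes/no answers to the six questions into a crude verdict.

    `answers` maps each checklist key to True ("yes") or False ("no").
    A "yes" to commoditisability counts against you; a "yes" to the
    others counts for you. Thresholds are arbitrary illustrations.
    """
    score = sum(weight for key, _question, weight in CHECKLIST if answers[key])
    if score >= 3:
        return "positioned to benefit from acceleration"
    if score >= 1:
        return "exposed, but with some defensibility"
    return "vulnerable to acceleration"

# Example: a startup with proprietary data and improving margins, but whose
# core value an open-source frontier model could largely replicate.
print(acceleration_exposure({
    "commoditisability": True,
    "non_substitutable_assets": True,
    "complement_vs_substitute": False,
    "regulatory_trust_moat": False,
    "capital_gearing": True,
    "network_brand": False,
}))
```

The point isn’t the arithmetic; it’s that forcing a yes/no answer to each question surfaces where your exposure actually sits.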
There are other useful fictions starting to emerge, and they’re not all as reactive. Some are more abundance-orientated, like the directive to assume that AI capabilities six months from now will be an order of magnitude beyond today’s. But what I think is critically important is that they cannot be treated the same as the useful fictions we use to address the other risks.
Value, usability, feasibility and viability risks have half-lives that map tightly to the archetypal cases we make for them:
- The user case: the fundamental value proposition made to a core user
- The business case: the systems enabling value creation, delivery and capture
- The market case: the macro environment or competitive landscape’s latent upside
- The team case: the character and pedigree of the people involved
- The narrative case: the pull of a mission or vision
- The technological case: raw innovations or novel cross-applications of technology
The OG four risks can be “solved” for a period and then revisited at appropriate future points. Ethical risk, in contrast, has no half-life, no decay of its findings. It’s eternal, existing (for the most part) outside of time.
Acceleration risk, though, isn’t really about the formal mitigation of risk itself. Acceleration risk comes into play during limited historical epochs. Times when Lenin’s apocryphal dictum is all but true: “there are decades when nothing happens; and there are weeks when decades happen.” It’s about one’s exposure to acceleration. To understand the risk management vs. exposure dichotomy, consider another concept: luck.
People like Naval Ravikant, James Clear and Tim Ferriss have argued that one can accumulate more success (define that however you like) by systematically “creating a greater surface area for luck” in one’s life.
Most of the recommended approaches amount to setting up a specific interface with an environment which, if invoked, yields an outcome with a potentially massive upside. Think going to a party, sending a cold DM, escaping an established process and so on. Heck, one could frame venture deals as contractual luck engineering.
Conversely, one can also set up to minimise exposure to unluckiness. You can’t lose your car in a poker game if you don’t play poker (or, if you must play, you can bound the downside stakes).
Let’s look at the product I’ve been building, Subset. It’s an app for saving, sharing and searching web and app content. An obvious thing to build for such an app would be chat with one’s bookmarks. But that’s precisely the sort of thing that’s vulnerable to acceleration. Look at OpenAI merging their deep research and tool-using agent capabilities. Chat-with-bookmarks is a paradigm-constrained move in an epoch of paradigmatic change. Instead, we’re building primitives that we can choose to leverage in the paradigm we think is emerging.
In such epochs of acceleration, the Thielian approach of “calling your shot” carries slightly more weight than the Graham-ian approach of “calling your customers”. But in an acceleration epoch neither playbook should be run to de-risk; both should be run to engineer exposure and maximise the upside one can accrue from acceleration.
A hack for assessing your acceleration risk, or rather your exposure to acceleration, is as follows. Imagine you wake up to messages from a close colleague that [insert lab or model provider here] has just Dropped Something New. Your colleague doesn’t say what. Annoyingly, you can’t reach them or any of your team. In that moment: are you excited, or scared? If it’s the latter, you should be doubly scared. You’re probably vulnerable to acceleration and liable to lose at whatever ensemble of finite games you’re playing. If it’s the former, you should be doubly excited, because it means you’re positioned to gain from exposure to acceleration.
