The Age of AI and the Death of Switching Costs: A New Constitution for Business Strategy
As Switching Costs Decline, Usage Will Be the New Moat
I recently had a conversation with a public markets investor friend who has been toying with a radical career shift—leaving finance to pursue a PhD in Business Strategy.
Not an MBA. A PhD.
Naturally, this sparked debate. If you listed the top 100 operators or capital allocators alive today, how many have PhDs? Almost none. Yet my friend argued that the influence of academia on business strategy is far greater than most operators realize.
His evidence? Michael Porter.
Porter’s Five Forces framework underpins nearly all modern business strategy. Market positioning, competitive dynamics, and moats—fundamental to how we evaluate companies—are all derivatives of Porter’s thinking. His academic work has quietly shaped the playbooks of the world’s best investors and executives for decades.
But in the age of AI, my friend and I discussed, business strategy may need a new framework.
The Erosion of Switching Costs
We focused on one force in particular: switching costs—the friction that locks customers into a product and sustains a business’s moat. In traditional SaaS, switching costs are massive. Data migration is painful, integrations are sticky, and operational inertia keeps businesses on the same software for years.
But what happens when AI agents can seamlessly extract, transform, and migrate data in seconds?
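To make that concrete, here is a minimal sketch of the "transform" step such an agent would automate during a migration. The field names and schemas below are hypothetical, invented for illustration; they are not the actual Salesforce export format or any real target system's API.

```python
# Toy illustration: mapping exported CRM records onto a new system's schema.
# An AI agent could infer this mapping on the fly; field names are made up.

def migrate_contact(sf_record: dict) -> dict:
    """Map a Salesforce-style contact record onto a hypothetical target schema."""
    return {
        "full_name": f"{sf_record['FirstName']} {sf_record['LastName']}".strip(),
        "email": sf_record.get("Email", "").lower(),
        "account": sf_record.get("AccountName"),
    }

exported = [
    {"FirstName": "Ada", "LastName": "Lovelace",
     "Email": "ADA@EXAMPLE.COM", "AccountName": "Analytical Engines"},
]
migrated = [migrate_contact(r) for r in exported]
```

The hard part of switching was never this mapping logic; it was the human labor of discovering and executing it across thousands of fields. That labor is exactly what agents commoditize.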
The most important business lawsuit of the decade is brewing—one that could redefine software economics entirely.
Salesforce vs. AI: The Defining Lawsuit
Oracle, Salesforce, and other SaaS giants (particularly systems of record) have long relied on high switching costs to maintain their dominance. Salesforce, like many software providers, includes specific clauses in its terms of service to control how users interact with its platform. While these clauses don't explicitly mention "human-only" usage, they effectively restrict automated interactions. For instance, the Salesforce Program Agreement prohibits users from:
"reverse engineer[ing], disassembl[ing] or decompil[ing] any software or tangible objects embodying any Confidential Information."
It also states:
"You may not... use any robot, spider, other automatic device, or manual process to monitor or copy any of the material available through the Services."
This restriction can be interpreted to limit automated processes that attempt to extract or manipulate data in ways not intended by Salesforce.
Microsoft's Terms and Conditions for the Microsoft 365 Developer Program include provisions that restrict certain automated activities. Specifically, users are prohibited from:
"use[ing] the Services in any manner that could damage, disable, overburden, or impair any Microsoft service, or the network(s) connected to any Microsoft service, or interfere with any other party’s use and enjoyment of any Services."
Other companies have more explicit prohibitions against automated interactions. For example, Apsona's Terms of Service state:
"You must not use automated bots or other software to send more messages through our Platform than humanly possible."
This clearly restricts the use of automation tools that could mimic or exceed human usage patterns. But in a world of AI-augmented intelligence, how do we define what's "humanly possible"?
Legal precedents have addressed the enforceability of such licensing restrictions. In Bowers v. Baystate Technologies, Inc. (2003)1, the U.S. Court of Appeals for the Federal Circuit upheld a shrink-wrap license agreement that prohibited reverse engineering, ruling that such contractual agreements can supersede certain rights typically allowed under copyright law, such as fair use for reverse engineering.
Additionally, the case of Oracle USA, Inc. v. Rimini Street, Inc. highlighted the legal challenges surrounding automated data extraction. Oracle sued Rimini Street for, among other claims, unauthorized automated downloading of Oracle's software support materials. The court ruled in favor of Oracle, emphasizing that violating the terms of service through automated processes constituted copyright infringement and breach of contract2.
These examples illustrate that companies often include clauses in their licensing agreements to restrict automated interactions, and courts have, in some cases, upheld the enforceability of these provisions.
As AI agents like OpenAI’s Operator emerge, capable of autonomously navigating UIs, pulling records, and executing workflows across software platforms by essentially mimicking human interactions, the tension between technological advancement and contractual restrictions is likely to intensify, potentially sparking legal battles that reshape the software industry's landscape.
If courts uphold human-only licensing, SaaS businesses retain their moats. AI agents will be banned from automating switching costs away, ensuring that incumbent platforms remain buffered by classic switching cost forces.
If courts strike down these licensing restrictions, switching costs collapse to near-zero3.
Regardless, SaaS businesses will need to rethink defensibility, particularly in a world where AI can instantly migrate customers to the best product.
A New Playbook for Business Strategy
Business strategy has always been about moats—barriers that prevent disruption. Porter’s frameworks helped companies navigate industrial competition, but AI is dismantling traditional advantages at an unprecedented pace.
AI is also redefining software—not just how it’s built, but how it accrues value and maintains defensibility. In the past, software moats were built around proprietary data, distribution, or integrations, but in the age of AI-native applications, the real moat will come from usage throughput—the continuous interaction between users and software that refines and enhances its intelligence.
The Shift: From Static Software to Dynamic, Adaptive AI
Historically, software has been deterministic: developers wrote explicit instructions, and users interacted with it in predefined ways. AI, however, introduces probabilistic computing, where outputs are dynamically generated based on patterns inferred by analyzing a massive corpus of data. This means that the interface—how users interact with AI—becomes paramount, because every action trains the system to become better for that individual user.
Consider ChatGPT. Your experience—and the quality of its output—is only as good as the quality of your prompt. But what if software didn’t need to rely on explicit user input? What if applications could infer intent dynamically, adjusting and structuring workflows in real-time based on passive and active engagement?
Lessons from Social Media: Data Network Effects Through Usage
We’ve already seen a version of this in TikTok and Instagram, which sit atop a gargantuan database of content—hundreds of billions of photos and short-form videos. Unlike traditional systems-of-record software like Salesforce, where data is explicitly entered, TikTok and Instagram optimize their product experience through passive signals—likes, views, shares, comments, watch time—creating self-reinforcing data network effects.
Each micro-interaction refines their AI-powered recommender algorithms, which in turn further personalizes the user experience, making the product more valuable the more you use it. Crucially, the user isn’t explicitly entering unique data; they are merely using the product, and that usage alone creates compounding intelligence.
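This loop can be sketched in a few lines. The snippet below is a deliberately toy model of implicit-feedback personalization: passive signals update a per-user preference profile, which then reranks candidate content. The signal weights and decay factor are made-up assumptions, not any platform's actual algorithm.

```python
# Toy model: passive signals (views, likes, shares) update a user profile,
# and the profile alone reranks the next feed items. Weights are invented.
from collections import defaultdict

SIGNAL_WEIGHTS = {"view": 0.1, "like": 1.0, "share": 2.0}

def update_profile(profile, topic, signal, decay=0.95):
    for t in profile:
        profile[t] *= decay              # older interests gradually fade
    profile[topic] += SIGNAL_WEIGHTS[signal]
    return profile

def rank_feed(profile, candidates):
    # Highest-affinity topics surface first: usage alone reshapes the feed.
    return sorted(candidates, key=lambda c: profile.get(c["topic"], 0.0), reverse=True)

profile = defaultdict(float)
update_profile(profile, "cooking", "like")   # strong signal
update_profile(profile, "travel", "view")    # weak signal
feed = rank_feed(profile, [{"id": 1, "topic": "travel"},
                           {"id": 2, "topic": "cooking"}])
```

Note that the user never entered any structured data; two gestures were enough to reorder the feed. Compounded over billions of interactions, this is the data network effect.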
I pontificated on this dynamic in AI is Here: Musings on What it Could Mean. As I wrote:
Indeed, each day when we consume user generated content produced by our friends, and increasingly by creators and individuals we most likely will find interesting, we train sophisticated AI models with digital gestures. Each additional like, comment, scroll, click and view acts as a lever that further refines our own hyper-personalized call-and-response query. This query acts similarly to a perfect prompt that one might submit to ChatGPT.
Instead of natural language prompting the model, a user’s interaction with the interface causes the model to crawl the entirety of content published on the network and fetch the photos or videos that best meet the refined ‘call’. The model then displays that object within the inventory slot in the Feed.
Taking a step back, this interface is a major improvement over how media distribution operated prior to social media feeds. In that era, users had to actively query, in natural language, the types of media they wanted to consume online: at first by typing a URL directly into the address bar, and later via search engines.
In the age of search engines, Google rose to dominance because of, among other reasons, its ability to accurately fetch the supply of webpages on the Internet given the ‘call’ stipulated by the natural language of the search query.
Indeed, upon prompting the search engine, a user was shown a list of links with related images and short descriptions. Users relied upon — and still do — the search ranking, short text previews and brand recognition of publishers to choose which media to consume. From there, they then had to click the link and travel to the webpage that ultimately distributed the media.
This consumption experience requires quite a significant amount of time and energy from the user. Furthermore, each search doesn’t materially adjust the ‘call’ that the search engine uses to fetch supply specific to the user (it does so specific to the supply by ranking webpages against keywords), so each incremental query still largely relies upon the conditions stipulated by the prompt in the natural-language search. Instead of further training the model to personalize the call and response, search engines created artificial scarcity by monetizing the finite real estate of the search results via SEO products and keyword bidding.
But as users shifted their engagement from desktop to mobile, a few things happened.
One, the cost and energy required to produce media drastically decreased. Each mobile phone also doubled as a professional camera. When paired with an intuitive touch interface, this increased the production of media content by many orders of magnitude across the Internet. Similarly, the time and energy it took to consume media content greatly decreased. Apps require a click to open and then the simple and intuitive gestures of scrolling, tapping and sometimes typing to consume — a much easier experience than Search.
Secondly, the total time we spent engaging — both consuming and producing — media content increased substantially. Instead of sitting at your desktop, a user could be anywhere on their phone generating and consuming content.
Both the increase in daily time spent with digital media and the interface innovation of smartphones and Feeds meant that users informed distributors (social media networks) what they liked to consume in an exponentially more efficient and absolute manner.
This enabled each call-and-response query — what to show in the next inventory slot of the Feed — to have exceptional instructions that only got better with each incremental use.
Usage became the new moat.
The Future: AI Software as a Feed, Not a Static Interface
In an AI-first world, software will behave more like a feed, structuring the experience in real-time based on usage patterns. The most valuable applications will:
Minimize the gap between action and feedback—allowing AI to continuously refine its understanding of the user’s intent.
Foster rapid usage throughput, ensuring the software gains contextual depth over time.
Warp to the user’s Jobs-To-Be-Done (JTBD)—meaning the software shapes itself around how the user works, rather than requiring the user to conform to predefined workflows.
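As a minimal sketch of the third property, consider an interface that reorders itself around observed usage. Everything here is illustrative: a real AI-native application would learn far richer structure than action counts, but the principle, usage as the core input, is the same.

```python
# Toy sketch of software "warping" to the user: the app reorders its own
# actions by observed usage frequency instead of a fixed workflow.
from collections import Counter

class AdaptiveMenu:
    def __init__(self, actions):
        self.actions = list(actions)
        self.usage = Counter()

    def record(self, action):
        self.usage[action] += 1      # every interaction is a training signal

    def render(self):
        # Most-used actions float to the top; ties keep the default order
        # because Python's sort is stable.
        return sorted(self.actions, key=lambda a: -self.usage[a])

menu = AdaptiveMenu(["new_doc", "export", "share"])
for _ in range(3):
    menu.record("share")
menu.record("export")
```

After a handful of interactions, `menu.render()` already reflects this user's particular job to be done, without any explicit configuration.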
The winners will be those who design AI-native interfaces that make usage itself the core input for intelligence. Every founder or product leader should be asking: how can my customers offer me unique training data to improve my underlying model, and how can I deliver that personalized intelligence as seamlessly as a social media feed?
Just as TikTok optimizes for entertainment, AI software will optimize for creating a domain-specific genius that can generate workflows, decision-making, and automation—getting better, not because users feed it structured data, but because every interaction refines the AI’s understanding of the user’s Job To Be Done.
In this world, usage becomes the moat for products that explicitly design reinforcement and internal data network effects into their product. Shockingly, such products are currently few and far between.
The more such an AI-driven product is used, the better it gets at solving problems for that specific user, creating an insurmountable advantage over competitors who lack the same engagement depth. Memory becomes critical.
The future of software isn’t just intelligent—it’s adaptive, personalized, and shaped by how you use it, just like social media feeds are today.
https://en.wikipedia.org/wiki/Bowers_v._Baystate_Technologies%2C_Inc.
https://newmedialaw.proskauer.com/2018/01/24/ninth-circuit-issues-important-decision-on-software-licensing-practices-and-web-scraping/
For a savvy public investor, this is the trade of the century—a bet on whether software companies will be able to legally prevent AI from dismantling their moats.