Software Was Never Meant to Last Forever

By Iain

There is a particular kind of frustration that anyone who has worked inside a mid-sized organisation will recognise. You are eighteen months into a Salesforce implementation. The original scope was clean and reasonable. But somewhere around month four, somebody realised that your sales process doesn’t quite match the way Salesforce thinks a sales process should work.

So you customised it, and then you customised the customisation. And now you have a system that is technically Salesforce but behaves like something your team built from scratch, except that you are still paying Salesforce prices for the privilege of maintaining it, and every platform update threatens to break the fragile connective tissue holding it all together.

This is the central paradox of monolithic enterprise software, and it has been the same for twenty-five years. You either bend your business to fit the tool, flattening whatever operational advantage you had into a vendor’s default workflow. Or you bend the tool to fit your business and end up maintaining what is effectively bespoke software while remaining locked into a vendor relationship. Neither outcome is good, and both are expensive. And until very recently, both were better than the alternative, which was building everything yourself. That alternative is now changing in fundamental ways.

The two failure modes

I want to be specific about what’s wrong with the current model because the problems are different depending on which path you chose, and they compound in different ways.

The first failure mode is over-customisation. This is the Salesforce trap. You start with a platform that promises to handle your CRM, your pipeline management, your reporting, your automation. The platform works for about 60% of your actual needs. The remaining 40% gets filled in with custom objects, Apex triggers, third-party integrations, and workflow rules that somebody built three years ago and nobody fully understands anymore. Your “SaaS product” is now a custom application running on someone else’s infrastructure. You get the worst combination of outcomes.

Every vendor upgrade becomes a risk assessment exercise because any change might break your custom logic. You need specialists to manage the system, which defeats the purpose of buying off-the-shelf software in the first place. And you are still paying per-seat licensing fees for a product that barely resembles the one you originally subscribed to.

The second failure mode is over-conformity. This is what happens when you decide you won’t customise and instead reshape your operations around the software. Your sales team adopts the vendor’s idea of what a pipeline stage should look like. Your project managers use the tool’s default workflow even though it doesn’t match how your team actually delivers work. Your reporting reflects what the software can easily measure rather than what matters to your business.

The subtle problem here is that the things most worth protecting in any business tend to be the operational details that make you different from your competitors. When you flatten those into a vendor’s generic workflow, you erode the very things that give you an edge.

Both of these failure modes have been tolerated for a long time because the economics made them the least bad option. Building custom software was slow, expensive, and created its own maintenance burden. Buying and adapting monolithic SaaS was painful, but cheaper than starting from scratch. That trade-off was rational and it held for more than two decades. But it is breaking now, and the speed at which it is breaking has caught the market off guard.

The SaaSpocalypse is a thing, even if the name is silly

Jefferies traders coined the term “SaaSpocalypse” last week and it stuck because the numbers behind it are hard to argue with. The S&P North American software index dropped 15% in January, its worst month since October 2008. Roughly $300 billion in market value evaporated from software, financial data, and exchange stocks in the space of forty-eight hours. Hedge funds have shorted approximately $24 billion in software stocks in 2026 so far. Thomson Reuters fell nearly 16% in a single session. Xero dropped over 15% in Asia-Pacific markets. Even Microsoft, which you would think has enough diversification to weather this, is trading 21% below its annual high and has slipped to fourth in the most valuable company rankings behind Nvidia, Google, and Apple.

What makes this selloff different from previous tech corrections is that the fear isn’t about competition from other software companies. It is about the possibility that the entire category of enterprise SaaS might shrink. The per-seat licensing model that has powered the industry for 20+ years assumes that more employees using the software means more revenue.

If AI agents can handle the work that those employees were doing inside the software, companies need fewer seats. And if AI tooling can help companies build custom solutions in-house, they might not need the third-party software at all.

The question worth asking is whether this fear is justified or whether it is the kind of panic that markets periodically generate around new technology. I think the answer is somewhere in the middle, but leaning more toward justified than most incumbents would like to admit.

How AI is reshaping this

The AI angle in this story is often discussed in vague terms. People talk about “disruption” and “transformation” without being precise about what AI is actually doing to change the economics and the capabilities involved. I think there are four distinct roles AI is playing here and they are worth separating because they have different implications and different timelines.

Making it possible to leave

The first role is the most immediate one. AI is collapsing the cost and timeline of migrating away from monolithic systems.

Migration cost has always been the real moat protecting incumbent SaaS vendors. It was never the product quality or the feature set that kept you locked in. It was the fact that switching meant a multi-year project with consultants, data migration specialists, and the very real risk of breaking something important halfway through. The switching cost was so high that it functioned as a form of captivity. You stayed with Salesforce or Oracle or SAP because leaving would be worse than staying, even if staying was painful.

That calculus is changing faster than most people expected. On Palantir’s Q4 2025 earnings call, their CTO claimed that their platform can now complete complex SAP ERP migrations from ECC to S/4HANA in as little as two weeks, where that same work previously took years. Amazon used AI agent coordination through Amazon Q Developer to modernise thousands of legacy Java applications, completing upgrades in a fraction of the expected time. Genentech built agent ecosystems on AWS to decompose monolithic research processes into coordinated microservices.

If these examples are even directionally correct, the implication is significant. The lock-in argument for staying with your monolith weakens every time migration gets cheaper and faster. The barrier to exit that protected Salesforce’s recurring revenue was never the quality of the CRM. It was the terror of the migration project. That terror is becoming less justified.

Making it cheap to build

The second role is the one that feeds into the disposability thesis, and I think it is the most structurally disruptive of the four.

The cost of building purpose-specific software is falling toward zero. Coding platforms, AI agent builders, and tools like Claude Code and Cursor are turning what used to be six-month development projects into work that takes days. On most current AI agent platforms, building a functional agent takes between fifteen and sixty minutes. By 2026, roughly 40% of enterprise software is expected to be built using natural-language-driven development where prompts guide AI to generate working logic.

This changes the fundamental value proposition of SaaS. The entire model was built on the assumption that buying software from a vendor would be cheaper than building it yourself. That assumption held because building was expensive and required scarce technical talent. But when the build cost drops far enough, the comparison flips. Why would you pay per-seat licensing for a general-purpose tool that sort of fits your workflow when you can generate a purpose-built microservice that exactly fits your workflow, at negligible cost, in an afternoon?

And this is where the disposability idea comes in. Traditional software development follows a predictable path. You invest heavily in building, which means you need the software to last a long time, which means you need ongoing maintenance, which means you need vendor support contracts or internal engineering capacity, which creates its own form of lock-in. When the build cost drops far enough, that entire chain collapses. You don’t need software that lasts ten years if you can rebuild it in a day. You generate what you need, use it, and when the context changes, you generate a replacement rather than trying to fix what you already have.

This sounds radical, but it is already happening at the edges. The concept of “disposable software” has entered the vocabulary of people building with these tools: applications generated for a single use case and retired once that use case is finished, because the cost of creating them was so low that longevity was never part of the design. Think of it as software that behaves more like a document than like an asset. You produce it when you need it and discard it when you don’t.
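For a sense of scale, the artefact being described is closer to a one-off script than to a deployed product. Here is a contrived sketch in Python, assuming a hypothetical orders.csv, of the kind of thing you might generate for one question and then delete:

import csv
from collections import Counter

# A hypothetical one-off: which region drove last week's refunds?
# Generated for this single question, run once, then deleted.
with open("orders.csv", newline="") as f:
    refunds = Counter(
        row["region"]
        for row in csv.DictReader(f)
        if row["status"] == "refunded"
    )

for region, count in refunds.most_common():
    print(region, count)

Nothing about that script needs to survive the afternoon it was written in, which is the whole point.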

Moving the value from the application to the orchestration layer

The third role is where the argument moves past “cheaper replacement” and into territory that I think is more interesting and less well understood.

Here the value migrates away from the SaaS application itself and toward the intelligence that coordinates between applications. A monolithic platform tries to be the whole stack. It wants your sales data, your marketing data, your support data, and your analytics all living inside its walls so it can offer you an integrated experience. But that integrated experience only works within the boundaries of what that single vendor decided to build, and it only spans the domains that vendor chose to enter.

Agentic microservices break that constraint entirely. BCG found that early adopters of agentic AI reported 20% to 30% faster workflow cycles from agents that auto-resolve IT service tickets, reroute supplies to cover inventory shortages, and trigger procurement flows without human input. These gains didn’t come from any single tool getting smarter. They came from an orchestration layer that could act across system boundaries that no individual product controls. An AI agent that monitors your inventory system, detects a shortage, queries your supplier database, triggers a procurement flow, and updates your finance system is doing something that sits outside the remit of any single vendor because it spans multiple domains with different data models and access patterns.
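To make that concrete, here is a minimal sketch in Python of the shape of such an agent. Every client class is a hypothetical stand-in for a real system’s API, with stubbed data; the point is only that one small loop crosses four vendor boundaries:

from dataclasses import dataclass

@dataclass
class Shortage:
    sku: str
    units_needed: int

# Hypothetical clients. In practice each one wraps a real system's
# API; none of these names refer to an actual product.
class InventoryClient:
    def find_shortages(self) -> list[Shortage]:
        return [Shortage(sku="WIDGET-7", units_needed=40)]  # stubbed data

class SupplierClient:
    def best_quote(self, sku: str, units: int) -> dict:
        return {"sku": sku, "units": units, "price": 312.50}

class ProcurementClient:
    def raise_order(self, quote: dict) -> str:
        return "PO-0001"  # purchase order id from the procurement system

class FinanceClient:
    def record_commitment(self, po_id: str, amount: float) -> None:
        print(f"committed {amount} against {po_id}")

def replenishment_pass() -> None:
    """One pass of the agent: detect, source, order, record."""
    inventory, suppliers = InventoryClient(), SupplierClient()
    procurement, finance = ProcurementClient(), FinanceClient()
    for shortage in inventory.find_shortages():
        quote = suppliers.best_quote(shortage.sku, shortage.units_needed)
        po_id = procurement.raise_order(quote)
        finance.record_commitment(po_id, quote["price"])

replenishment_pass()

No single vendor owns that loop, and that is exactly what makes it valuable.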

The uncomfortable implication for SaaS vendors is that in this model, the individual tools are still there, still doing what they do. But the intelligence that makes them useful has moved outside of them and into the agent layer. Your product becomes a commoditisable component rather than a destination, and the value accrues to whoever controls the coordination.

The composable enterprise concept captures this well. In this model, SaaS applications become modular and interchangeable components that AI agents dynamically select, integrate, and swap as needed. Instead of one monolithic application trying to do everything, you have a collection of specialised microservices with an AI orchestration layer that coordinates between them. Unlike traditional integration approaches where engineers hard-wire connections between systems, agents orchestrate tools dynamically and learn from past actions to optimise future workflows. A Salesforce workflow is static, but an agentic workflow adapts.
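A rough sketch of the “modular and interchangeable” part: the orchestration layer owns a tool registry, each application is reduced to a capability behind a shared interface, and swapping vendors touches the registry rather than the callers. The adapters below are hypothetical placeholders, not real vendor APIs:

from typing import Protocol

class CrmTool(Protocol):
    def create_lead(self, name: str, email: str) -> str: ...

# Two interchangeable implementations of the same capability. The
# adapters are illustrative; real ones would call each vendor's API.
class SalesforceAdapter:
    def create_lead(self, name: str, email: str) -> str:
        return f"sf-lead:{email}"

class HubSpotAdapter:
    def create_lead(self, name: str, email: str) -> str:
        return f"hs-lead:{email}"

# The orchestration layer owns the registry, so the agent selects a
# tool at run time and a vendor swap is one line, not a migration.
registry: dict[str, CrmTool] = {"crm": SalesforceAdapter()}

def handle_signup(name: str, email: str) -> str:
    return registry["crm"].create_lead(name, email)

print(handle_signup("Ada", "ada@example.com"))
registry["crm"] = HubSpotAdapter()  # swap; handle_signup is unchanged
print(handle_signup("Ada", "ada@example.com"))

The individual vendors are still doing the work, but the thing that decides which of them does it lives in your layer, not theirs.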

The orchestration layer is too important to delegate

The previous section argues that value is migrating from individual applications to the intelligence that coordinates between them. This section is about who should own that intelligence, because right now the default answer is heading in the wrong direction.

Every major SaaS vendor is racing to embed AI into their product. Salesforce has Einstein. HubSpot has Breeze. Microsoft has Copilot. Monday.com has its AI assistant. Each of these systems is being built independently, optimised for the vendor’s own data model, trained on the vendor’s idea of what your workflow should look like, and shipped on the vendor’s timeline. If you use ten SaaS tools, you are about to have ten separate AI systems making decisions about different parts of your business with no awareness of each other. That is not an AI strategy; it is a confusing patchwork. And the decisions each one makes happen inside infrastructure you don’t control and can’t fully inspect, governed through contracts rather than architecture.

The alternative is treating the orchestration layer as something your organisation owns. With custom agentic microservices, you choose the models, the data boundaries, and the governance framework. You get a single coherent AI architecture that spans your entire operation rather than a collection of vendor AI bolt-ons.
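In practice, “owning the layer” can be as mundane as a policy object that lives in your repository rather than on a vendor’s settings page. The names below (model labels, system names, the bucket URI) are placeholder assumptions, but they show the three levers: model choice, data boundaries, and audit destination:

from dataclasses import dataclass, field

@dataclass
class OrchestrationPolicy:
    # Model choice per task, changed on your schedule, not a vendor's.
    models: dict[str, str] = field(default_factory=lambda: {
        "drafting": "large-general-model",  # placeholder model names
        "triage": "small-cheap-model",
    })
    # Data boundaries: which systems agents may read from and write to.
    readable: frozenset[str] = frozenset({"crm", "inventory", "finance"})
    writable: frozenset[str] = frozenset({"crm"})
    # Governance: every agent action is logged to storage you control.
    audit_log_uri: str = "s3://example-bucket/agent-audit/"

def may_write(policy: OrchestrationPolicy, system: str) -> bool:
    """A write gate the orchestration layer enforces before any agent acts."""
    return system in policy.writable

policy = OrchestrationPolicy()
print(may_write(policy, "finance"))  # False: readable but not writable

A vendor bolt-on makes each of those decisions for you, inside its own walls. A policy you own makes them explicit, reviewable, and changeable.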

The question that sophisticated buyers are starting to ask is not “which AI features does this SaaS product include” but “do I want ten vendors each running their own AI on fragments of my business, or do I want to own that layer myself.” The orchestration layer is where the competitive advantage lives. It is the thing that will encode your operational logic, your decision-making patterns, and your institutional knowledge. Delegating that to a collection of vendors is a strategic mistake that will get harder to unwind the longer you wait.

The counter-arguments

I should be honest about where this thesis is weakest because the bullish version of this story can veer into fantasy if you don’t account for reality.

Sticky infrastructure is a real constraint and it would be dishonest to pretend otherwise. ERP systems with decades of accumulated data, compliance records, and institutional knowledge don’t vanish overnight. The data gravity of established platforms, combined with organisational inertia, is a genuine barrier that will slow this transition considerably. And I’ve learnt never to underestimate that inertia.

The code quality concern is also legitimate. Experienced developers are warning that the flood of AI-generated code arriving in 2026 will create a significant cleanup effort in 2027, as teams hunt down bugs introduced by AI systems that are very good at producing code that works and less good at producing code that is maintainable and secure. If you are generating disposable microservices, this matters less, because you throw them away. If you are generating anything that needs to persist, it matters a lot.

Compliance and audit trails remain genuine obstacles. Regulated industries need accountability chains that disposable tooling doesn’t naturally provide. When an AI agent makes a decision that affects a financial transaction or a healthcare outcome, somebody needs to be able to explain why it made that decision and trace the logic back to its source. The governance infrastructure for agentic systems is immature.
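A minimal version of that accountability chain is an append-only decision record written before the agent acts. The schema here is illustrative, not any standard, and the action and model names are hypothetical:

import json
import time
import uuid

def record_decision(action: str, inputs: dict, rationale: str, model: str,
                    log_path: str = "agent_audit.jsonl") -> str:
    """Append one agent decision to an append-only log before acting."""
    entry = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "action": action,        # what the agent is about to do
        "inputs": inputs,        # the data the decision was based on
        "rationale": rationale,  # the explanation to trace back later
        "model": model,          # which model produced the decision
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["id"]

# Log first, then act, so every action has a traceable record.
decision_id = record_decision(
    action="raise_purchase_order",
    inputs={"sku": "WIDGET-7", "units": 40},
    rationale="inventory below reorder threshold",
    model="example-model-v1",
)

That is the easy part. The hard part, and the genuinely immature part, is proving that the rationale in the log faithfully reflects why the model actually acted.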

And the “BofA paradox” is worth considering. Bank of America pointed out that the current selloff relies on two mutually exclusive scenarios. Either there’s a bubble and AI investment returns are deteriorating to the point where growth is unsustainable, or AI is so powerful that it makes entire business models obsolete. Both of those things probably can’t be true at the same time. I think the resolution is that AI doesn’t need to be universally powerful to disrupt SaaS. It just needs to make custom-building cheaper than subscribing to and customising a monolith, and that bar is now considerably lower.

What this means if you run a mid-sized business

For medium-sized organisations, this shift arguably matters more than it does for the Fortune 500. You never had the budget for a massive Salesforce implementation. You were always dealing with the friction of tools that were designed for companies ten times your size, simplified into plans that stripped out the features you actually needed while charging you for ones you didn’t.

The opportunity here is to stop treating software procurement as a decade-long commitment. If build costs continue to fall, the sensible approach shifts from “which platform should we commit to for the next five years” toward “what do we need this quarter, and what is the fastest way to build it.” Your tools evolve as fast as your business does instead of lagging behind by whatever interval your vendor needs to ship their next release.

The risk, of course, is that you end up with a collection of loosely connected microservices that nobody fully understands, which is its own version of technical debt. The difference is that technical debt in disposable systems is bounded. If a microservice stops serving your needs, you retire it and build a replacement rather than spending months trying to fix something that was bolted onto a monolith you don’t control.

Where this goes

The monolith era wasn’t wrong; it was the best available option given the economics of the time. If building custom software costs a hundred times more than buying off-the-shelf, you buy off-the-shelf and live with the trade-offs. That arithmetic has been stable for more than twenty years, and it has supported a trillion-dollar industry.

But that arithmetic is changing quickly. The question isn’t whether custom AI microservices will replace all SaaS, because they clearly won’t in the near term. ERPs will survive, systems of record will survive, and anything with deep data gravity and regulatory entanglement will survive for years, possibly decades, given organisational inertia.

However, the mid-market and the application layer are now exposed. The tools that sit between you and your data, the ones that charge you per seat for the privilege of clicking through their interface, are the ones that face the most pressure. When an AI agent can interact with your data directly and build the workflow you need in the moment you need it, the interface layer becomes optional. And optional is a dangerous place to be when your entire revenue model is built on being necessary.

The market seems to agree. Whether it is overreacting or simply pricing in a future that hasn’t fully arrived is the kind of question that separates good investment analysis from speculation. But the structural argument is sound. Build costs are falling, migration costs are falling, agentic capabilities are rising, and the organisations doing the buying are starting to notice that they have options they didn’t have two years ago.

Software was never supposed to last forever. We just didn’t have a better alternative until now.
