Not to brag, but I’ve lost 16 kilograms in the first four months of this year. Too much excess for too long had a negative impact on both my mental and physical health, and I’m feeling much better for it (though I still have more to go). And while AI couldn’t control my eating or drag me out for a run, Claude proved to be the most effective calorie tracking tool I’ve ever used. Not just recipes and food logging, but an entire system: a dedicated weight loss project with persistent instructions, calculated TDEE, macro splits, progressive exercise routines and auxiliary wellness activities all living in one place, with one very smart interlocutor who remembered everything.
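For the curious, a system like that typically starts from a TDEE estimate. The post doesn't say which formula Claude used, so this is a sketch of one standard approach, the Mifflin-St Jeor equation for basal metabolic rate scaled by an activity factor, with hypothetical numbers:

```python
# Illustrative only: the Mifflin-St Jeor equation is one common way to
# estimate BMR (kcal/day), which is then multiplied by an activity
# factor to approximate total daily energy expenditure (TDEE).

def bmr_mifflin_st_jeor(weight_kg: float, height_cm: float, age: int, sex: str) -> float:
    """Basal metabolic rate via Mifflin-St Jeor."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age
    return base + (5 if sex == "male" else -161)

def estimate_tdee(bmr: float, activity_factor: float = 1.375) -> float:
    """TDEE = BMR scaled by an activity multiplier
    (roughly 1.2 sedentary, 1.375 light, 1.55 moderate, 1.725 very active)."""
    return bmr * activity_factor

# Hypothetical person, not my numbers:
bmr = bmr_mifflin_st_jeor(weight_kg=90, height_cm=180, age=35, sex="male")
print(round(estimate_tdee(bmr, 1.2)))  # prints 2226
```

A daily calorie target then comes off that number (say, a 500 kcal deficit for roughly half a kilogram of loss per week), and the macro split is divided out from the target.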
It’s been genuinely revolutionary. Which is exactly why what I’m about to say stings a little. Because alongside the weight loss, I’ve had a front-row seat to watching Claude get measurably dumber in real time.
From First Date to Long-Term Relationship
Claude has been my primary AI since early 2025. Before that, sure, I’d dabbled. Nervous first dates with various platforms that never quite gave me what I was looking for. Smart enough, certainly. But not connected. Not the kind of tool that makes you feel like you’re working with something rather than just querying a more expensive search engine.
Claude was different. Funny, sharp, proactive. It asked the right follow-up questions. It pushed back when I was wrong. It held context in a way that made complex, ongoing projects feel genuinely manageable. We’ve been together this whole time, with very little straying, outside of the occasional need for visual creative work (and I say this with affection: Claude, you’re just not a visual guy).
In that time, I’ve watched it grow too. New models dropped at an impressive rate. Claude Code arrived and genuinely changed how I approach technical problem-solving. So too did Cowork, and Claude Design has already added another dimension. The trajectory felt like exactly what you want from a platform you’ve committed to: continuous, meaningful improvement.
And then, somewhere along the way, something shifted.
The Regression Is Real
It started subtly. Responses that felt a touch more generic. A little less initiative. And then it became harder to ignore.
The output limits hit differently when you’re mid-flow. I’m sitting down for lunch, trying to log macros on the fly, and I’m suddenly staring at a rate limit message. Fine in theory. Deeply annoying in practice when food decisions don’t pause for API windows.
But the output throttling I can live with. What I find harder to accept is the attitude. Ask Claude to go do the research on a technical problem (the kind of task where you need genuine initiative, not a polished summary of your own prompt) and, increasingly, you get the AI equivalent of a shrug. A framework. A suggestion that you might want to look into some things. Or just blatant lies. It has started to feel like working with a junior employee who’s clocked on but checked out.
This isn’t an accident, of course.
The Pattern Is Familiar
There’s a term for what’s happening, coined by writer Cory Doctorow: enshittification. The mechanism is depressingly consistent across 21st-century tech. A platform arrives. It’s genuinely good, not just in what it does but in how it changes the way things are done. It earns your trust, your habit, your data and your dependency. And then, once the moat is dug deep enough, the slow decline begins.
Netflix is the textbook case. The early product was a revelation: unlimited content, no ads, one flat fee. Then came password sharing crackdowns, ad-supported tiers, price hikes and a content library that increasingly feels like it’s optimised for the algorithm rather than the audience. Uber spent years subsidising rides below cost to establish dominance, then steadily raised prices once the taxi industry had been hollowed out. Facebook went from a clean social tool to an engagement-maximising attention machine wrapped in privacy scandals (very happy to have dropped my personal dependence on that one).
The pattern is always the same: subsidise adoption, establish dependency, then extract value.
AI is not immune to this logic. In fact, given the extraordinary infrastructure costs involved in running these models, it is especially susceptible to it.
What’s Actually Happening
The cost of serving large language models at scale is enormous. The compute, the energy, the infrastructure: none of it scales linearly with users, and the economics of the current pricing models were always, at some level, introductory. The goal was adoption. Dependency. Making Claude (or ChatGPT, or Gemini) as indispensable to your workflow as your email client.
That phase is largely complete. So now comes the squeeze.
The squeeze doesn’t have to look like a price increase… not yet, anyway. It can look like output limits. It can look like slightly less initiative baked into the model’s default behaviour. It can look like responses that are technically correct but conspicuously light on the kind of proactive thinking that made the tool feel genuinely intelligent in the first place. Death by a thousand small regressions, none of which individually justifies cancelling your subscription.
And this is where businesses who’ve made significant bets on AI need to pay attention.
The Big End of Town Has More to Lose
For personal users or small businesses (basically, me), the stakes are annoying but manageable. I adapt my prompting, I work around the limits, and my macros get logged eventually.
For organisations that have restructured workflows, reduced headcount, or built entire product lines around AI capabilities, the calculus is very different. The efficiency gains were real, but so is the growing realisation that you don’t own any of it. You’ve built on someone else’s infrastructure, priced at someone else’s discretion, with capabilities that can shift between model updates without notice or explanation.
The long-term reality of AI, I suspect, is not the wholesale replacement of human departments but something more nuanced: AI as a powerful force multiplier for human work, with costs that will eventually reflect its actual value rather than its introductory pricing. The question for any business that’s leaned into AI is whether they’ve built genuine resilience into that dependency, or whether they’ve just handed a new vendor enormous leverage.
Act Accordingly
None of this means Claude isn’t still remarkable. It is. The good days still remind me why I made it my primary tool in the first place. And honestly, 16 kilograms down and with a complete rethink of how I manage complex projects, this isn’t us breaking up; I’m just acknowledging how things have changed.
But Claude is on notice. I’m watching the trajectory. I’m noting the regressions. I’m getting better at prompting not because the tool has improved, but because I’m compensating for where it’s gotten worse.
That’s the tell. When you start working harder to get the same output, enshittification has already begun.
The question is what you do about it. For mine, it’s the three Ds:
- Diversify your platform knowledge, because you need to know who’s got their head above the pack.
- Document what the good version of the tool looked like, so you notice when it quietly becomes something else.
- Discuss with others what their experiences are, as you never know where the next best thing might come from.
And maybe, just maybe, keep counting your own calories. Some things are better off staying in your own hands.