60 employees who can’t be productive without AI?
And this is progress?
Your point is well-taken, but this is also exactly why AI reliance is dangerous. Anyone who sees this should realize the precarity of relying on products that can just be locked away from you.
Like Gmail? Google drive? Slack?
I'm not defending AI, but I can come up with >10 products that would absolutely cripple the company I work at if the provider suddenly says "Soz, terms of service violation".
Vendor reliance is dangerous. That doesn't just apply to AI. If the company in OP's message had had both Claude and Gemini, they'd have been okay, so the problem isn't AI specifically - the problem is reliance on services that are critical for workflows, and providers being able to change their mind at a moment's notice.
In any case, leaving aside where the problem is, the idea that 60 employees can't use Natural Intelligence to do their jobs means there's something really wrong with that company...
My company is pivoting hard to Claude for everything, and besides the fact that it's irritating as fuck to use, it has me worried about shenanigans like in this article. For almost 50 years, they've had a "no reliance upon 3rd-party platforms for core functions" policy, but since they hired an AI apologist to the C-suite, all of that has gone out the window in a matter of months.
Got me thinking I should warm up my resume...
Don’t wait, start now. The job market is a nightmare and finding one that isn’t being consumed by incompetent C-level AI FOMO is getting harder every day. I work on life-saving medical equipment and AI is being pushed on us for things that could literally kill people if not done correctly. Why would anyone spend 30 minutes using AI and risking people’s lives when I can just write it myself in 5 or 10? Madness. Complete, society-scale madness. The people pushing AI have no fucking idea what they are doing or how engineering works. People are going to die.
I’ve been unemployed for going on 18 months. It’s awful and the market is the worst it’s been since I’ve been working (15 years or so).
Its ok tho, there's no recession, becuz stock marmket!!!!! 11!1!11!!!
Regardless of the fact that work has ground to a halt, the CEO will continue to claim productivity has never been higher since implementing AI.
This makes me so happy about my employer. I'm sysadmin for a newspaper.
We had an all-company test run 2 weeks ago to answer the question "What if we're hacked?"
Turns out we're able to produce a printed and online newspaper within a work day if NONE of our normal IT systems (hardware, software, e-mail, network) are accessible.
Everything we need has a redundancy that's kept completely physically separated from the network until the day it's needed.
Oh no! How did this happen? ...I mean, how exactly did this happen? Is there a tutorial on how other engineers at other companies can replicate this?
Just another form of vendor lock-in. If your business model is mostly/entirely dependent on an external party, that should be a well understood risk.
The only people winning are selling shovels
Dude, it's 2026. We don't sell shovels, we sell shovel subscriptions.
I am responsible for gathering information on AI to determine whether we should use it for our next project. The ask was to use it for a critical process task. Immediately in my head I was like "no, we are not using AI at all", but I obviously need quantifiable data. This is just another thing to add to my list of why using AI for core processes is one of the stupidest things you could ever do.
That's what happens when you are renting your very skills from a company. You'll hone nothing and you'll be happy.
but but ai better, ai future, we pay moni to all companies and buy ai or we will be left without any growth - pleaz buy all ai - ai good for making world better place because it makes billionaires richer and they will definitely use that to donate to charity
(*blinks twice* Elon Musk and Mark Zuckerberg told me to say that, I'm being held at gunpoint)
Aaaaaand example #99999... of why tech sovereignty is so important. The moment you start outsourcing your control, you become vulnerable to this exact kind of action by a company.
Everybody got sucked into the cloud "magic" for years, but now we are seeing the monster emerge more and more as proprietary technology enshittifies.
Luckily, there is a boom happening across the FOSS world, more and more people are finally waking up to the principles of software freedom and actual ownership.
May it continue to grow, as the corpos struggle and wither.
Or... taps mic... don't fucking rely on AI for your business! Play stupid games, win stupid prizes.
This has nothing to do with AI.
Don't rely on software or workflows or really anything that you can't easily switch if said company decides to stop doing business with you.
If you do, it better be a strategic partnership where something like this can't happen.
In this case, their workflows should have been AI provider agnostic or had a way to continue functioning if Claude went down.
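Even a thin abstraction layer makes "provider agnostic" survivable in practice. A minimal sketch of the idea (the provider names and the `complete(prompt)` interface here are hypothetical, not any real SDK; real clients for Anthropic, Google, or a local server would each get a small adapter exposing this same interface):

```python
# Sketch of a provider-agnostic completion call with ordered fallback.
# Nothing here talks to a real API; the two "providers" are simulated
# so the control flow is visible.

class ProviderError(Exception):
    """Raised when a provider is unavailable (ban, outage, rate limit)."""

def complete_with_fallback(prompt, providers):
    """Try each (name, complete) pair in order; return the first success."""
    errors = []
    for name, complete in providers:
        try:
            return name, complete(prompt)
        except ProviderError as exc:
            errors.append((name, str(exc)))  # record and try the next one
    raise RuntimeError(f"all providers failed: {errors}")

# Simulated providers: the first is "banned", the second still works.
def banned_provider(prompt):
    raise ProviderError("This organization has been disabled.")

def working_provider(prompt):
    return f"echo: {prompt}"

name, result = complete_with_fallback(
    "hello", [("claude", banned_provider), ("gemini", working_provider)]
)
print(name, result)  # → gemini echo: hello
```

The point isn't the ten lines of code, it's that the rest of the workflow only ever sees the wrapper, so swapping or adding a provider is a config change instead of a rewrite.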
This definitely has to do with AI. Because CEOs are losing their stupid minds over it. I agree with you in principle, but let's not lose sight of the fact that this specific technology is what CEOs are drooling over. Even in my company I had to tell the owner/CEO, "What problem are you trying to solve with AI?" His response was his mouth being open with a dumb look on his face.
So no business should rely on AI (or, to your point, any software) so heavily that it becomes detrimental to their business or workforce should that access be revoked.
Many commenters were quick to point out that he should never have coupled his company so closely with Claude to begin with, a reasonable critique by itself. However, it's worth noting that the story could have easily been the same if it had instead been Amazon Web Services, Azure, or an authentication provider like Okta.
You are so close, you almost got it!
This is the nightmare scenario for any team that built their whole workflow around a cloud API. No warning, no clear reason, no real support path. Just a Google form and 60 people sitting on their hands.
The uncomfortable truth is that "terms of service" at this scale is just "we can pull the rug whenever." Anthropic isn't unique here either. OpenAI, Google, all of them have the same opaque enforcement problem. It's a big part of why I've been building tools that run on local inference by default. Not because cloud is bad, but because your users shouldn't be one vague policy complaint away from a complete outage.
Local gives you continuity even when the upstream disappears.
This is true for any company using 3rd-party services. I worked for one that used a 3rd-party messaging service to send out MFA texts to users. That company was hacked and went offline, so we couldn't send any MFA codes... and of course, we had no plan B.
In business, always have a backup
60 employees were dead in the water, as reportedly their daily workflows rely on the AI assistant.
Is that a joke? 60 employees do not know how to do their job? This is not Anthropic's problem.
Just continue coding using the natural neural networks in the brains of those 60 employees until the problem has been resolved and/or another AI provider selected. It's not like Claude invented coding. Sure, it's a pretty useful tool. But it is possible to research obscure APIs and develop software manually.
Either they didn't pay, they found an exploit, or, more likely, someone at Claude was reviewing their conversations. Take note, any business that cares about IP or confidentiality.
I'll bring two theories to the table.
a) they got caught distilling for their own models
b) they re-sold their $200/mo plans as APIs
Ironically, this is a great case study to illustrate the value of Chinese models. They've released a number that are on par with Claude's latest models under "open weight" licenses that would allow you to run them yourselves if you wanted to, or to hire some other third party to provide API access. It wouldn't matter what the original company's "usage policy" is in that case.
There are a couple of Western open models that aren't bad either, but they tend to be aimed at a smaller and simpler use case than Claude.
What models exactly? And what kind of hardware do you need to run them? Also, are there any GitHub repos that replicate Claude projects?
The one currently making the headlines is Kimi K2.6; on the benchmarks it's just short of Opus 4.7. It's a trillion-parameter model so it won't run on desktop computers, but it's something a company could run on reasonably buildable servers for their own use.
For local use, I've been finding Qwen3.6's 35B parameter model to be uncannily good. Gemma4 is also good, that's one of the Western ones. These models won't do the sort of heavy lifting that Opus can do but you don't need that heavy lifting for all tasks.
https://bannedbyanthropic.com/
I believe the word is capricious. Everything cloud based is at the whim of someone else.
There are ways to mitigate against that, but ultimately if it's not yours...it's not yours.
You're going to see a lot more of this and other forms of fuckery as the VC money dries up.
https://www.wheresyoured.at/four-horsemen-of-the-aipocalypse/
Now this company can see which employees can actually still program, and which are just "AI Prompt Engineers".
Oh my God, my Eliza 2.0 chatbot is blocked. I'm experiencing withdrawals already, my productivity is down 76.8%.
Fucking hilarious that the "best" chatbot can't even manage a decent support chatbot...
That's one way to save costs.