AI has normalized a single 9 of availability, even for non-AI companies such as GitHub that have to rapidly adapt to AI-aided scale-ups in usage patterns. Understandably so: GPU capacity is pre-allocated months to years in advance, in large discrete chunks, to either inference or training, with a modest buffer that exists mainly so you can cannibalize experimental research jobs during spikes. It's just not financially viable to keep spades of reserve capacity, especially these days when supply chains are already under great strain and we're starting to be bottlenecked on chip production. And if they got around it by serving a quantized or otherwise ablated model (a common strategy in some instances), all the new users would be disappointed and it would damage trust.
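For anyone fuzzy on what a single 9 means in practice, the arithmetic is simple; a quick back-of-the-envelope sketch:

```python
# Downtime budget implied by N nines of availability (pure arithmetic).
MINUTES_PER_YEAR = 365 * 24 * 60

for nines in range(1, 6):
    availability = 1 - 10 ** -nines            # 1 nine = 90%, 2 nines = 99%, ...
    downtime_minutes = MINUTES_PER_YEAR * (1 - availability)
    print(f"{nines} nine(s): {availability:.3%} up, "
          f"~{downtime_minutes / 60:.1f} hours down per year")

# 1 nine  ~ 876 hours (about 36.5 days) of downtime a year
# 3 nines ~ 8.8 hours; 5 nines ~ 5.3 minutes
```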
Fewer 9's are a reasonable tradeoff for the ability to ship AI to everyone, I suppose. That's one way to prove the technology isn't reliable enough to be shipped into autonomous kill chains just yet lol.
Are employees from Anthropic botting this post now? This should be one of the most upvoted posts on this site, but it's nowhere in the first 3 pages.
Also remember: using Claude to code might make the company you're working for richer, but you are forgetting your skills (seen it first-hand), and you're not learning anything new. Professionally you are downgrading. Your next interview won't be testing your AI skills.
Depends on what you consider your "skills". You can always relearn syntax, but you're certainly not going to forget your experience building architectures and developing a maintainable codebase. LLMs only do the what for you, not the why (or you're using it wrong).
There are three sides to this depending on when you started working in this field.
For the people who started before the LLM craze: they won't lose their skills if they just focus on their original roles. The truth is, at most companies people are being assigned more than their original roles: backend developers tasked with frontend, devops, and QA, while the others are let go. This is happening right now. https://www.reddit.com/r/developersIndia/comments/1rinv3z/ju...

When this happens, they don't care, or don't have the mental capacity to care, about a codebase in a language they've never worked in before. People here talk about guiding the LLMs, but at most places they are too exhausted to carry that context and let Claude review Claude itself.
For the people who are starting right now: they're discouraged from all sides from writing code themselves. They'll never understand why an architecture is designed a certain way. Sure, ask the LLM to explain, but it's like learning to swim by reading a book. They have to blindly trust the code and keep pulling the lever like a slot machine, burning tokens, which makes these companies more money.
For the people who are yet to begin, sorry for having to start in a world where a few companies hold everyone's skills hostage.
The syntax argument is correct, but from what I am seeing, people _are_ using it wrong, i.e. they have started offloading most of their problem solving to the LLM first, not just using it to refine their ideas, but starting there.
That is a very real concern. I've had to chase engineers to ensure that they are not blindly accepting everything the LLM says, encouraging them to first form some sense of what the solution could be and then use the LLM to refine it further.
As more and more thinking is offloaded to LLMs, people lose their gut instinct about how their systems are designed.
> Your next interview won't be testing your AI skills
Not that I disagree with your overall point, but have you interviewed recently? 90% of companies I interacted with required (!) AI skills, and asked me to tell them exactly how I "leverage" it to increase my productivity.
> AI/LLM knowledge without programming knowledge can make a mess.
That makes sense.
> Programming knowledge without AI/LLM knowledge can also make a mess.
How? I'd imagine that most typically means continuing to program by hand. But even someone like that would probably know enough to not mindlessly let an LLM agent go to town.
> How? I'd imagine that most typically means continuing to program by hand.
I think the use of LLMs is assumed by that statement. The point is that even experienced programmers can get poor results if they're not aware of the tech's limitations and best practices. It doesn't mean you get poor results by default.
There is a lot of hype around the tech right now; plenty of it overblown, but a lot of it also perfectly warranted. It's not going to make you "ten times more productive" outside of maybe laying the very first building blocks on a green field; the infamous first 80% that only take 20% of the time anyway. But it does allow you to spend a lot more time designing and drafting, and a lot less time actually implementing, which, if you were spec-driven to begin with, has always been little more than a formality in the first place.
For me, the actual mental work never happened while writing code; it happened well in advance. My workflow hasn't changed that much; I'm just not the one who writes the code anymore, but I'm still very much the one who designs it.
Yes, I've seen many people become _too_ hands-off after an initial success with LLMs, and get bitten by not understanding the system.
Hirers, above, are more focused on the opposite side, though: engineers who try AI once, see a mess or hallucinations, and decide it's useless. There is some learning to figure out how to wield it.
Sorry, but focusing on the hand-coding part misses the whole picture and would derail the conversation. Comparisons like that are often dishonest.
Hiring someone who writes Rust with Claude but has never written anything in it themselves, never faced the edge cases, never made the wrong decisions, feels naive to me. At the end of the day it's still a next-token generator, an impressive one. It can hold context but can't relate to anything outside that context. Someone needs to take accountability.
It is the contrary! You learn to use a very powerful tool. This is a tool, like a text editor or a compiler.
But you focus more on the logic and the function instead of on syntax details and the whims of the computer languages used in concert.
The analogy from construction is being elevated from bricklayer to engineer. Or using variously shaped shovels and a wheelbarrow versus mechanized tools like excavators and dumpers for earthworks.
... of course, for those whose focus is on being a master bricklayer, which is noble (no pun intended, said with an agreeing straight face), bricklaying is a fine skill with beautiful outputs in its area of use. For them AI is really unnecessary. An existential threat, but unnecessary.
I agree with you, syntax details are not important but they haven't been important for a long time due to better editors and linters.
> But you focus more on the logic and the function instead of on syntax details and the whims of the computer languages used in concert.
This is exactly my point. I learned about logical mistakes when my first if/else broke. The only reason you or I can guide these into good logic is that we dealt with bad ones before all this. I use Claude myself a lot because it saves me time. But we're building a culture where no one ever reads the code; instead we're building black boxes.
Again, you could see it as the next step in abstraction, but not when everyone is this dependent on a few companies prepared to strip the world of its skills so they can sell them back.
I switched from OpenAI to Anthropic over the weekend due to the OpenAI fiasco.
I haven't been using the service long enough to comment on the quality of the responses/code generation, although the outages are really quite impactful.
I feel like half of my attempts to use Claude have been met with an error or outage, meanwhile the usage limits seem quite intense on Claude Code. I asked Claude to make a website to search a database. It took about 6 minutes for Claude to make it, and it used 60% of my 4-hour quota window. I wasn't able to refine it beyond asking for some basic font changes before I became limited. Under 30 minutes and my entire 4-hour window was used up.
Meanwhile with ChatGPT Codex, a multi-hour coding session would still have 20%+ available at the end of the 4/5 hour window.
I have been using Anthropic almost exclusively for a year, while trying other models, and this has literally never happened. I have NEVER experienced a downtime event. At most a random error in a chat, but that is immediately solved on the subsequent request. I use the desktop app, the mobile app, and the API with several apps in production that I monitor, and reliability has never been an issue.
I pay about $1500 per month on personal api use fyi.
I assume you're doing things with the API that aren't coding tasks that could be done with Claude Code? Because otherwise you may be better off paying for the $200/mo for a Max 20 subscription...
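To put rough numbers on that trade-off (using only the figures quoted in this thread; whether a subscription's limits can actually absorb an API-scale workload is the real question), a minimal sketch:

```python
# Hedged break-even sketch with the figures from this thread,
# not Anthropic's actual pricing.
api_spend_per_month = 1500.0   # what the commenter reports paying on the API
max_sub_per_month = 200.0      # the Max 20 subscription price cited above

# If the subscription could absorb at least this fraction of the API
# workload, it already pays for itself:
break_even_fraction = max_sub_per_month / api_spend_per_month
print(f"Sub pays for itself if it covers {break_even_fraction:.1%} of the workload")
# -> Sub pays for itself if it covers 13.3% of the workload
```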
I’ve had semi-regular downtime since I started using Claude about two months ago. I love it, but I find it less reliable than the alternatives. This is evidenced on their status page (regularly showing red bars).
You're not wrong; for sufficiently simple cases it's at a disadvantage. But once things get complicated, it wins by being the only thing you can get to work without going insane.
And yeah, any serious use completely assumes a Max sub.
Codex limits are weird; I can barely use up the limits of the basic subscription.
Switched to Claude Max just because I can combine both. I can say that since the weekend I have only had problems. When it works it's great, but I am seriously thinking of just cancelling this experiment.
Yeah, the influx of people is disrupting my work, but it brings me joy to witness OpenAI's decline in consumer support. So much for their Jony Ive product, whatever it was.
I must have missed something: why are people moving from OpenAI? Since they released gpt-5.3-codex I've been using it alongside Claude with opus-4.6, and Codex has always been better, more accurate, less prone to hallucinations. I can do more with a $20 OpenAI plan than with a Claude Max 100.
I cannot imagine how you can properly supervise an LLM agent if you can't effectively do the work yourself, maybe slightly slower. If the agent is going a significant amount faster than you could do it, you're probably not actually supervising it, and all kinds of weird crap could sneak in.
Like, I can see how it can be a bit quicker for generating some boilerplate, or iterating on some uninteresting API weirdness that's tedious to do by hand. But if you're fundamentally going so much faster with the agent than by hand, you're not properly supervising it.
So yeah, just go back to coding by hand. You should be doing that probably ~20% of the time anyhow, just to keep in practice.
I hope they improve their incident response comms in the future. 2.5 hours with nothing more than "We are continuing to investigate this issue" is pretty poor form.
Their past history of incident handling looks just as bad.
I was having an extended incognito chat with claude.ai, and then it stopped responding. I saved the transcript in a notepad and checked in another tab whether it was down. I wonder if the incognito session is gone, and whether by reposting the transcript I can resurrect it. I have done so with Gemini, but there it has markers like "Gemini said", which I do not see here. If anyone knows, I'd appreciate a solution.
No wonder. Its performance overall was noticeably worse, like it had regressed to coding models from 1.5 years ago. I try not to use Claude during peak US hours because it seems to struggle more with reasoning and correctness then than during off hours.
Never noticed it being outright down like this except for today (and yesterday); never had actual downtime except for a few failed requests that worked after a retry, which coincides with AWS datacenters going offline.
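For what it's worth, that "works after a retry" pattern is easy to make explicit. A minimal sketch with the Anthropic Python SDK (the model id is just an example; note the SDK also has a built-in max_retries option that covers much of this already):

```python
# Retry-with-backoff sketch around the Anthropic Python SDK.
import time
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask_with_retry(prompt: str, attempts: int = 3) -> str:
    for attempt in range(attempts):
        try:
            msg = client.messages.create(
                model="claude-sonnet-4-20250514",  # example model id; substitute your own
                max_tokens=1024,
                messages=[{"role": "user", "content": prompt}],
            )
            return msg.content[0].text
        except (anthropic.APIStatusError, anthropic.APIConnectionError):
            if attempt == attempts - 1:
                raise                      # give up after the last attempt
            time.sleep(2 ** attempt)       # 1s, 2s, ... exponential backoff
    raise RuntimeError("unreachable")
```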
> Two facilities in the United Arab Emirates sustained direct hits, while a third facility in Bahrain was damaged by a drone strike "in close proximity,"
Also to add context: AWS has contracts with the US military: "The Joint Warfighting Cloud Capability (JWCC) contract enables AWS to continue providing Department of Defense (DoD) customers with secure, reliable, and mission-critical cloud services." https://aws.amazon.com/federal/defense/jwcc/
Making them a target for retaliation ofc.
Friends in the Middle East have said that there have been a few missiles flying overhead; media coverage is possibly reduced as it is an ongoing operation.
Well, there have been pretty large deals going on in the UAE, especially when it comes to AI, since they can get any power capacity with a flick of their fingers for an unbeatable price, and latency doesn't really matter for AI since the first token usually takes seconds anyway. And it's not just AWS; it's the entire region.
They decohere much faster as the context grows. Which is fine, or not, depending on whether you consider yourself a software engineer amplifying your output by automating the boilerplate, or an LLM cornac.
AWS actually hosts the models. Security and isolation are part of the value proposition for people and organizations that need to care about that sort of stuff.
It also allows for consolidated billing, more control over usage, being able to switch between providers and models easily, and more.
I typically don’t use Bedrock, but when I have, it’s been fine. You can even use Claude Code with a Bedrock API key if you prefer:
https://docs.aws.amazon.com/bedrock/latest/userguide/what-is...
https://code.claude.com/docs/en/amazon-bedrock
(I am not affiliated with AWS in any way. I’m just a user stuck in their ecosystem!)
I’ve been using Claude Code w/ bedrock for the last few weeks and it’s been pretty seamless. Only real friction is authenticating with AWS prior to a session.
Bedrock runs all their stuff in-house and doesn’t send any data elsewhere or train on it, which is great for organizations that already have data-governance sign-off with AWS.
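For the curious, a hedged sketch of what calling Claude through Bedrock looks like with boto3, so requests stay inside your AWS account's governance boundary as described above. The model id and region are examples; use whatever is enabled in your account:

```python
# Sketch of invoking an Anthropic model on Amazon Bedrock via boto3.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # example region

body = {
    "anthropic_version": "bedrock-2023-05-31",  # version tag required for Anthropic models on Bedrock
    "max_tokens": 512,
    "messages": [{"role": "user", "content": "Summarize our incident runbook."}],
}

resp = bedrock.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model id
    body=json.dumps(body),
)

# The response body is a stream of JSON; the text lives in content[0].
print(json.loads(resp["body"].read())["content"][0]["text"])
```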
> I pay about $1500 per month on personal api use fyi.
Jk, but how though? Would it be possible to give an example? You don't have to go into details. Totally cool if you can't.
Dude... who's going to tell him?
https://news.ycombinator.com/item?id=47188697
https://news.ycombinator.com/item?id=47189650
We build systems that can fail in unpredictable ways, and without knowing deeply the system we built, it's hard to understand what's going on.
https://status.claude.com
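If you want to script a check against that page, and assuming it's backed by Atlassian Statuspage (which usually exposes a JSON summary endpoint; I haven't verified this for status.claude.com), something like:

```python
# Hedged status check; assumes a standard Statuspage JSON endpoint.
import requests

STATUS_URL = "https://status.claude.com/api/v2/status.json"  # assumed endpoint

def claude_status() -> str:
    data = requests.get(STATUS_URL, timeout=10).json()
    return data["status"]["description"]  # e.g. "All Systems Operational"

print(claude_status())
```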
More datacenters? I thought it was just one.
New hardware keeps on coming with large gains in performance.
"Do this"
"User wants me to [do complete opposite]"
Seems not to be as capable as a month ago.
(More seriously, I wonder if they'd consider using OpenAI or Gemini for this purpose.)
Only one 9 of availability means you are seriously unreliable.