The company he worked at for nearly a quarter century has enabled and driven consumerist spending across the economy via behaviorally targeted, optimized ad delivery, driving orders of magnitude more resource and power consumption than the projected increases from data centers over the coming years. This level of vitriol seems both misdirected and practically obtuse, lacking awareness of the part his own work has played in far, far more expansive resource expenditure in service of work far less promising for overall advancement: ad tech and the algorithmic exploitation of human psychology for prolonged media engagement.
I agree completely. Ads have driven the surveillance state and enshittification. They've allowed for optimized propaganda delivery, which in turn has led to true horrors and helped undo a century of societal progress.
All I have to say is this post warmed my heart. I'm sure people here associate him with Go lang and Google, but I will always associate him with Bell Labs and Unix and The Practice of Programming, and overall the amazing contributions he has made to computing.
To purely associate him with Google is a mistake, one that (ironically?) the AI didn't make.
There was never a computer scientist so against Java (Rob Pike) at a company so pro-Java (Google). I think they were disassociated a long time ago; I don't think any of the senior engineers can be seen as anything other than their own persons.
Yup. A legend. Books could be written just about him. I wish I had such a prestigious career.
His viewpoints were always grounded and while he may have some opinions about Go and programming, he genuinely cares about the craft. He’s not in it to be rich. He’s in it for the science and art of software engineering.
ROFL, his website just spits out poop emojis on a Fibonacci delay. What a legend!
This. Folks trying to nullify his current position based solely on his recent work history with Google are deliberately trying to undermine his credibility through distraction tactics.
Maybe it's just me, but I had to look up the term "sealioning". For context for other people, according to Merriam-Webster:
> 'Sealioning' is a form of trolling meant to exhaust the other debate participant with no intention of real discourse.
> Sealioning refers to the disingenuous action by a commenter of making an ostensible effort to engage in sincere and serious civil debate, usually by asking persistent questions of the other commenter. These questions are phrased in a way that may come off as an effort to learn and engage with the subject at hand, but are really intended to erode the goodwill of the person to whom they are replying, to get them to appear impatient or to lash out, and therefore come off as unreasonable.
A person trying to learn doesn't constantly disagree with or contradict you while never expressing that their understanding has improved. A person sealioning always finds a reason to erode whatever you say with every response. At some point they need to nod or at least agree with something, except in the most extreme cases.
It also doesn't help their case that they somehow hold such a starkly contradictory opinion on something they ostensibly know nothing about and are just legitimately asking questions about. They should ask a question or two and then just listen.
It’s just one of those things that falls under “I know it when I see it.”
"Fuck you I hate AI" isn't exactly a deep statement needing credibility. It's the same knee jerk lacking in nuance shit we see repeated over and over and over.
If anyone were actually interested in a conversation there is probably one to be had about particular applications of gen-AI, but any flat out blanket statements like his are not worthy of any discussion. Gen-AI has plenty of uses that are very valuable to society. E.g. in science and medicine.
Also, it's not "sealioning" to point out that if you're going to be righteous about a topic, perhaps it's worth recognizing your own fucking part in the thing you now hate, even if indirect.
The point isn’t that people who’ve worked for Google aren’t allowed to criticize. The point is that someone who chose to work for Google recently could not actually believe that building datacenters is “raping the planet”. He’s become a GenAI critic, and he knows GenAI critics get mad at datacenters, so he’s adopted extreme rhetoric about them without stopping to think about whether this makes sense or is consistent with his other beliefs.
> The point is that someone who chose to work for Google recently could not actually believe that building datacenters is “raping the planet”.
Of course they could. (1) People are capable of changing their minds. His opinion of data centers may have been changed recently by the rapid growth of data centers to support AI or for who knows what other reasons. (2) People are capable of cognitive dissonance. They can work for an organization that they believe to be bad or even evil.
It’s possible, yes, for someone to change their mind. But this process comes with sympathy for all the people who haven’t yet had the realization, which doesn’t seem to be in evidence.
Cognitive dissonance is, again, exactly my point. If you sat him down and asked him to describe in detail how some guy setting up a server rack is similar to a rapist, I’m pretty confident he’d admit the metaphor was overheated. But he didn’t sit himself down to ask.
I don't think he claimed that "some guy setting up a server rack" is similar to a rapist. I think he's blaming the corporations. I don't think that individuals can have that big of an effect on the environment (outliers like Thomas Midgley Jr. excepted, of course).
I think "you people" is meant to mean the corporations in general, or if any one person is culpable, the CEOs. Who are definitely not just "some guy setting up a server rack."
Just the haters here? Is what was written not hateful? Has his entire working life not led to this moment of "spending trillions on toxic, unrecyclable equipment while blowing up society"?
> Fuck you people. Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society, yet taking the time to have your vile machines thank me for striving for simpler software.
That's Rob Pike who, having spent over 20 years at Google, must know it to be the home of wholesome, recyclable, non-monetized equipment, brought about by an economics not shaped by a ubiquitous surveillance-advertising machine.
> To purely associate him with Google is a mistake, one that (ironically?) the AI didn't make.
You don't have to purely associate him with Google to see the rant as understandable given AI spam, and yet entirely without a shred of self-awareness.
> And he is allowed to work for google and still rage against AI.
The specific quote is "spending trillions on toxic, unrecyclable equipment while blowing up society." What has he supported for the last 20+ years if not that? Did he think his compute ran on unicorn farts?
Clearly he knows, since he self-replies "I apologize to the world at large for my inadvertent, naive if minor role in enabling this assault."
Just because someone does awesome stuff, like Rob Pike has, doesn't mean that their blind spots aren't notable. You can give him a pass and the root comment sure wishes everyone would, but in doing so you put yourself in the position of the sycophant letting the emperor strut around with no clothes.
Did Google, the company currently paying Rob Pike's extravagant salary, just start building data centers in 2025? Before 2025 was Google's infra running on dreams and pixie farts with baby deer and birdies chirping around? Why are the new data centers his company is building suddenly "raping the planet" and "unrecyclable"?
Everything humans do is harmful to some degree. I don't want to put words in Pike's mouth, but I'm assuming his point is that the cost-benefit ratio of how LLMs are often used is out of whack.
Somebody burned compute to send him an LLM-generated thank-you note. Everybody involved in this transaction lost, nobody gained anything from it. It's pure destruction of resources.
Data center power usage has been fairly flat for the last decade (until 2022 or so). While new capacity has been coming online, efficiency improvements have been keeping up, keeping total usage mostly flat.
The AI boom has completely changed that. Data center power usage is rocketing upwards now. It is estimated it will be more than 10% of all electric power usage in the US by 2030.
It's a completely different order of magnitude than the pre AI-boom data center usage.
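For rough scale (my own back-of-envelope, not from the report): US electricity consumption is around 4,000 TWh a year, so 10% would be ~400 TWh, an average draw of roughly 45 GW (400 TWh / 8,760 h) sustained around the clock.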
The first chart in your link doesn't show "flat" usage until 2022? It is clearly rising at an increasing rate, and it more than doubles over 2014-2022.
It might help to look at global power usage, not just the US, see the first figure here:
I think you're referring to Figure ES-1 in that paper, but that's kind of a summary of different estimates.
Figure 1.1 is the chart I was referring to, which are the data points from the original sources that it uses.
Between 2010 and 2020, it shows a very slow linear growth. Yes, there is growth, but it's quite slow and mostly linear.
Then the slope increases sharply. And the estimates after that point follow the new, sharper growth.
Sorry, when I wrote my original comment I didn't have the paper in front of me, I linked it afterwards. But you can see that distinct change in rate at around 2020.
ES-1 is the most important figure, though? As you say, it is a summary, and the authors consider it their best estimate, hence they put it first, and in the executive summary.
Figure 1.1 does show a single source from 2018 (Shehabi et al) that estimates almost flat growth up to 2017, that's true, but the same graph shows other sources with overlap on the same time frame as well, and their estimates differ (though they don't span enough years to really tell one way or another).
Going by Yahoo historical price data, Bitcoin prices first started being tracked in late 2014. So my guess would be that the increase from then to 2022 could largely be attributed to crypto mining.
The energy impact of crypto is rather exaggerated. Most estimates on this front aim to demonstrate as high a value as possible, and so should be taken as a high upper bound, and yet even that upper bound is 'only' around 200 TWh a year. Annual electricity consumption is in the 24,000 TWh range, with growth averaging around 2% per year.
So if you looked at a graph of energy consumption, you wouldn't even notice crypto. In fact, even LLM stuff will just look like a blip unless it scales up substantially more than it's currently trending. We use vastly more energy than most appreciate. And this is only electrical energy consumption; all energy consumption is something like 185,000 TWh. [1]
Have you dived into the destructive brainrot that YouTube serves to millions of kids who (sadly) use it unattended each day? Even much of Google's non-ad software is a cancer on humanity.
Yes, I agree, although I still believe there is some tangential truth in the parent comment when you think about it.
I can't speak accurately about Google, but Facebook definitely has some of the most dystopian tracking I've heard of. I might read the Facebook Files some day, but the dystopian fact that Facebook tracks young girls, infers that they must feel insecure when they delete their photos, and then serves them beauty ads is beyond predatory.
Honestly, my opinion is that something should be done about both of these issues.
But it's also not a gotcha moment for Rob Pike, as if he himself were plotting the ads or something.
Regarding the "iPhone kids", I feel the best fix is probably parental-level intervention rather than waiting for a regulatory crackdown, since, let's be honest, some kids would just download another app that isn't covered by the regulation.
Australia is implementing a social media ban for kids, but I don't think it's going to work out; everyone's watching to see what happens.
Personally I don't think a social media ban can work while VPNs exist, though maybe it can create immense friction; then again, that friction might just become the norm. I assume many of you have been using the internet since the terminal days, when the friction was definitely there but the allure still beat it.
How does the compute required for that compare to the compute required to serve LLM requests? There's a lot of goal-post moving going on here, to justify the whataboutism.
You could at least argue that, while there are plenty of negatives, we at least got to use many services under the ad-supported model.
There is no upside to the vast majority of the AI pushed by OpenAI and their cronies. It's literally fucking up the economy for everyone else, all to get AI from "lies to users" to "lies to users confidently", while rampantly stealing content to do it. Apparently pirating something as a person is a terrible crime the government needs to chase you for, but do it to resell in an AI model and it's propping up the US economy.
I feel you. Back at the beginning of the MP3 era, the record industry was pursuing people for pirating music. And then when an AI company does it for books, it's somehow not piracy?
If there is any example of hypocrisy, and that we don't have a justice system that applies the law equally, that would be it.
Agreed, but I'm speaking more in aggregate. And even individually, it's not hard to find people who will say that e.g. an Instagram ad gave them a noticeable benefit (I've experienced it myself), just as easily as you can find people who feel it was a waste of money.
It isn't that simple. Each company paying for ads would have preferred that their competitors had not advertised, so they could have spent a lot less on ads... for the same value.
It is like an arms race. Everyone would have been better off if people just never went to war, but....
There's a tiny slice of companies that deal with advertising like this. Say, Coke vs Pepsi, where everyone already knows both brands and they push a highly similar product.
A lot of advertising is telling people about some product or service they didn't even know existed though. There may not even be a competitor to blame for an advertising arms race.
It can't function without advertising, money, or oxygen, if we're just adding random things to obscure our complete lack of an argument for advertising. We can't go back to an anaerobic economy, silly wabbit.
> “this other thing is also bad” is not an exoneration
No, but it puts some perspective on things. IMO Google, after abandoning its early "don't be evil" motto is directly responsible for a significant chunk of the current evil in the developed world, from screen addiction to kids' mental health and social polarization.
Working for Google and drawing an extravagant salary for many, many years was a choice that does affect the way we perceive other issues being discussed by the same source. To clarify: I am not claiming that Rob is evil; on the contrary. His books and open source work were an inspiration to many, myself included. But I am going to view his opinions on social good and evil through the prism of his personal employment choices. My 2c.
This is a purity test that cannot be passed. Give me your career history and I’ll tell you why you aren’t allowed to make any moral judgments on anything as well.
My take on the above, and I might be taking it out of context, is that the exploitation and grift need to stop. And if you are working for a company that does this, you are part of the problem. I know that pretty much every modern company does this, but it has to stop somewhere.
We need to find a way to stop contributing to the destruction of the planet soon.
I don't work for any of these companies, but I do purchase things from Amazon and I have an Apple phone. I think the best we can do is minimize our contribution. I try to limit what services I use from these companies, and I know it doesn't make much of a difference, but I am doing what I can.
I'm hoping more people who need to be employed by tech companies can find a way to be more selective about whom they work for.
The point is he's criticizing Google while still collecting checks from them. That's hypocritical. He'd get a little sympathy if he had never worked for them. He had decades to resign. He didn't; he stayed until retirement. He's even using Gmail in that post.
I still don't see the problem. You can criticize things you're part of. Probably being part of something is what informs a person enough, and makes it matter enough to them, to criticize in the first place.
If Rob Pike were asked about these issues of systemic addiction, and the other areas where Google has been bad, I am sure he wouldn't defend Google on them.
Maybe someone could mail him a real message genuinely asking (without the snark I sense in some comments here) about Google's questionable actions, and I'm almost certain that if those questions are reasonable, Rob Pike would agree that some of Google's actions were wrong.
I think Rob Pike just got pissed off because an AI messaged him, so he took the opportunity to talk about these issues; I doubt he has had the opportunity, or been asked, to talk about Google's other flaws and the systemic issues around them.
It's like: okay, I feel there is an issue in the world, so I talk about it. Does that mean I have to talk about every issue in the world? No, not really. I can have priorities in which issues I talk about.
That being said, if someone asks me respectfully about other reasonable issues, then, being moral, I can agree that yes, those are issues that need work too.
And some people like Rob Pike, who left Google (for ideological reasons perhaps? not sure), wouldn't really care about the fallout; like you say, it's okay to collect checks from an organization even while criticizing it.
Honestly, from my limited knowledge, Google was lucky to get Rob Pike rather than the other way around.
Golang is such a brilliant language, and Ken Thompson and Rob Pike are consistently some of the best coders; their contributions to Golang and so many other projects are unparalleled.
I don't know as much about Rob Pike as about Ken Thompson, but I assume he is really great too! Mostly I am just a huge Golang fan.
I know this will probably not come off very well in this community, but there is something to be said about criticizing the very thing you are supporting. In this day and age, it's not easy to survive without contributing to the problem to some degree.
I'm not saying nobody has the right to criticize something they support, but it does say something about our choices and how far we let this problem go before it became too much to solve. I'm not saying the problem isn't solvable, just that it's become astronomically more difficult than ever before.
I think at the very least, there is a little bit of cringe in me every time I criticize the very thing I support in some way.
> That being said, if someone asks me respectfully about other reasonable issues, then, being moral, I can agree that yes, those are issues that need work too.
With all due respect, being moral isn't an opinion or agreement with an opinion; it's the logic that directs your actions. Being moral isn't saying "I believe eating meat is bad for the planet"; it's the behaviour of abstaining from eating meat. Your morals are the set of statements that explains your behaviour. That is why you cannot say "I agree that domestic violence is bad" while you are beating your spouse.
If your actions contradict your stated views, you are being a hypocrite. This is the point people here are making. Rob Pike was happy working at Google while Google was environmentally wasteful (e-waste, carbon footprint, and data-center-related nastiness) in order to track users and mine their personal and private data for profit. He didn't resign then, nor did he seem to cause a fuss about it. He likely wasn't interested in "pointless politics" and just wanted to "do some engineering" (a reference to techies dismissing or criticizing folks who discuss social justice issues in relation to big tech). I am shocked I have to explain this here. I understand this guy is an idol to many, but I would expect people on this website to be more rational.
I think everyone, including myself, should be extremely hesitant to respond to marketing emails with profanity-laden moralism. It’s not about purity testing, it’s about having the level of introspection to understand that people do lots of things for lots of reasons. “Just fuck you. Fuck you all.” is not an appropriate response to presumptively good people trying to do cool things, even if the cool things are harmful and you desperately want to stop them.
Yes, I'm trying to marginalize the author's view. I think that “Just fuck you. Fuck you all.” is a bad view which does not help us see problems for what they are nor analyze negative impacts on society.
For example, Rob seems not to realize that the people who instructed an AI agent to send this email are a handful of random folks (https://theaidigest.org/about) not affiliated with any AI lab. They aren't themselves "spending trillions" nor "training your monster". And I suspect the AI labs would agree with both Rob and me that this was a bad email they should not have sent.
That's frankly just pure whataboutism. The scale of the situation with the explosion of "AI" data centres is far, far higher, as is the immediacy of the spike.
It’s not really whataboutism. Would you take an environmentalist seriously if you found out that they drive a Hummer?
When people have choices and they choose the more harmful action, it hurts their credibility. If Rob cares so much about society and the environment, why did he work at a company with a horrendous track record on both? Someone of his level of talent certainly had choices, and he chose to contribute to the company that abandoned "don't be evil" a long time ago.
I would argue that Google has actually had a comparatively good track record on the environment; if you say (pre-AI) Google has a bad environmental track record, then I wonder which companies have a good one, in your opinion. And while we can argue about the societal costs and benefits of other Google services and their use of ads to finance them, I would say they were very different from e.g. Facebook, with its documented effort to make their feed more addictive.
Honestly, it seems Rob Pike may have left Google around the same time I did (2021, 2022), which was about when it became clear the place was 100% in the gutter with no coming back.
But you left because you felt Google was going down the gutter and wanted to make the ethical choice you felt was right.
Honestly, I believe Google might be one of the few winners of the AI industry, perhaps because they own the whole stack top to bottom with their TPUs, but I would still stay away from their stock because their P/E ratio might be insanely high.
So we might be viewing the peak of the bubble. You might still hold the stock and keep holding it, but who knows what happens if it loses value when the AI bubble deflates; then you might regret not selling, yet if you sell and Google's stock rises, you'd regret that too.
The grass is always greener. I'm not sure about your situation, but if you ask me, you made the best of it with the parameters you had, so I wouldn't call it "unfortunate", though I get what you mean.
That's one of the reasons I left. It also became intolerable to work there because it had gotten so massive. When I started there was an engineering staff of about 18,000 and when I left it was well over 100,000 and climbing constantly. It was a weird place to work.
But with remote work it also became possible to get paid decently around here without working there. Before that, I was bound to local-area employers, of which Google was the only really good one.
I never loved Google. I came there through acquisition, and it was that job, with its bags of money and free food and kinda interesting open internal culture, or nothing, because they exterminated my prior employer and made me move cities.
After 2016 or so the place just started to go downhill faster and faster though. People who worked there in the decade prior to me had a much better place to work.
Interesting. So if I understand you properly, you would prefer working remotely at Google nowadays, but that option didn't exist when you left?
I am super curious, as I don't often get to chat with people who have worked at Google, so pardon me, but I have so many questions for you haha
> It was a weird place to work
What was the weirdness, in your view? Can you elaborate?
> I never loved Google. I came there through acquisition, and it was that job, with its bags of money and free food and kinda interesting open internal culture, or nothing, because they exterminated my prior employer and made me move cities.
For context, can you talk more about that? :p
> After 2016 or so the place just started to go downhill faster and faster though
What were the reasons that made them go downhill in your opinion and in what ways?
Naturally, as organizations grow and take on too many people, things can become intolerable, but I've heard it described as depending on where you are and on which project, and also on how hard it can be to leave a bad team or join one of like-minded people, which can be hard if the institution gets micro-managed at every level due to its sheer number of employees.
> you would prefer working remotely at Google nowadays, but that option didn't exist when you left
Not at all. I actually prefer in-office, and I left when Google was mostly remote. But remote work opened up the possibility of working places other than Google. None of them have paid as well as Google, but they've given me more agency and creativity, though they've had their own frustrations.
> What was the weirdness, in your view? Can you elaborate?
I had a 10-15 year career before going there. Much of what is accepted as "orthodoxy" at Google rubbed me the wrong way. It is in large part a product of having an infinite money tree. It's not an agile place. Deadlines don't matter. Everything is paid for by ads.
And as time went on, it became less of an engineering-driven place and more of a product-manager-driven place, with classic big-company turf wars and shipping the org chart all over the place.
I'd love to get paid Google money again, and get the free food and the creature comforts, etc. But that Google doesn't exist anymore. And they wouldn't take me back anyways :-)
It was still a wildly wasteful company doing morally ambiguous things prior to that timeframe. I mean, its entire business model is tracking and ads, and it runs massive, high-energy datacenters to make that happen.
I wouldn't argue with this necessarily except that again the scale is completely different.
"AI" (and don't get me wrong I use these LLM systems constantly) is off the charts compared to normal data centre use for ads serving.
And so it's again, a kind of whataboutism that pushes the scale of the issue out of the way in order to make some sort of moral argument which misses the whole point.
BTW in my first year at Google I worked on a change where we made some optimizations that cut the # of CPUs used for RTB ad serving by half. There were bonuses and/or recognition for doing that kind of thing. Wasteful is a matter of degrees.
It's dumb, but energy wise, isn't this similar to leaving the TV on for a few minutes even though nobody is watching it?
Like, the ratio is not too crazy; it's rather the large resource usage that comes from the aggregate of millions of people choosing to use it.
If you assume all of those queries provide no value, then obviously that's bad. But presumably there's some net positive value that people get out of them, such that they're choosing to use it. And yes, many times the value of those queries to society as a whole is negative... I would hope that it's positive enough, though.
But mining all the tracking data in order to show profitable targeted ads is extremely intensive. That’s what kicked off the era of “big data” 15-20 years ago.
I.e., they are proud to have never intentionally used AI and now they feel like they have to maintain that reputation in order to remain respected among their close peers.
Asking about the value of ads is like asking what value I derive from buying gasoline at the petrol station. None. I derive no value from it, I just spend money there. If given the option between having to buy gas and not having to buy gas, all else being equal, I would never take the first option.
But I do derive value from owning a car. (Whether a better world exists where my and everyone else's life would be better if I didn't is definitely a valid conversation to have.)
The user doesn't derive value from ads; the user derives value from the content the ads are served next to.
If people wanted LLMs, you probably wouldn't have to advertise them as much.
No, the reality of the matter is that LLMs are being shoved at people. They've become the talk of the town, and the algorithms amplify any development related to LLMs.
The ads are shoved at users. Trust me, the average person isn't that enthusiastic about LLMs, and for good reason, when billionaires say "yes, it's a bubble, but it's all worth it", and when the workforce is being replaced, or actively talked about being replaced, by AI.
We sometimes live in a Hacker News bubble of like-minded people and communities, but even on Hacker News we see disagreements. (I am usually anti-AI, mostly because of the negative financial impact the bubble is going to have on the whole world.)
So your point becomes a bit moot in the end. That said, Google (not sure how it was in the past) and big tech can sometimes actively promote scammy ad sponsors, or close their eyes to them, so ad blockers are generally really good in that sense.
> Somebody burned compute to send him an LLM-generated thank-you note. Everybody involved in this transaction lost, nobody gained anything from it. It's pure destruction of resources.
Well, the people who burned compute paid money for it, so they did burn money.
But they don't care about burning money if they can take in money from investors and other inputs faster than they can burn it (fun fact: sometimes they even outspend that input).
So in a way the investors are burning their money, and they burn it because the market has become irrational. Remember Devin? Yes, Cognition Labs is still there, but I remember people investing in these things because of the hype, and the results turned out to be moot compared to that hype.
But the market was so irrational that private equity, mostly unable to invest in something like OpenAI, started investing in anything AI-related.
And when you think more deeply about all the bubble activity, it becomes apparent that bailouts feel more likely than not, which would be a tax on average taxpayers, who are already paying an AI tax in multiple forms, whether in RAM price inflation due to AI or in rising electricity and water rates.
So repeat after me: who's going to pay for all this? We all will. And the biggest disservice, which is the core of the argument, is that if we are paying for these things, why don't we have a say in them? Why do we have no say in AI companies and the issues around them, when people know it might take their jobs? The general public in fact hates AI (shocking, I know /satire), but the fact that it's still being pushed shows how little influence the public can have.
Basically, "the public can have its opinions, but we won't stop" is what's happening in the AI space, IMO, completely disregarding the general public, even as the CFO of OpenAI floats the idea of a public bailout of ChatGPT or something tangential.
> Somebody burned compute to send him an LLM-generated thank-you note. Everybody involved in this transaction lost, nobody gained anything from it. It's pure destruction of resources.
I don't think it does, unless you ignore the context of the conversation. It's very clear that the reference to "letters" wasn't about all mail.
When the thought is "I'd like this person to know how grateful I am", the medium doesn't really matter.
When the thought is "I owe this person a 'Thank You'", the handwritten letter gives an illusion of deeper thought. That's why there are fonts designed to look handwritten. To the receiver, they're just junk mail. I'd rather not get them at all, in any form. I was happy just having done the thing, and the thoughtless response slightly lessens that joy.
We’re well past that. Social media killed that first. Some people have a hard time articulating their thoughts. If AI is a tool to help, why is that bad?
Imagine the process of solving a problem as a sequence of hundreds of little decisions that branch between just two options. There is some probability that your human brain would choose one versus the other.
If you insert AI into your thinking process, it has a bias, for sure. It will helpfully reinforce whatever you tell it you think makes sense, or at least on average it will be interpreted that way because of a wide variety of human cognitive biases even if it hedges. At the least it will respond with ideas that are very... median.
So at each one of these tiny branches you introduce a bias towards the "typical" instead of discovering where your own mind would go. It's fine and conversational but it clearly influences your thought process to, well, mitigate your edges. Maybe it's more "correct", it's certainly less unique.
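A toy illustration of that compounding (the numbers are made up; the 60/40 nudge is purely illustrative, not a measured property of any model):

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        const steps = 100 // hundreds of tiny binary decisions, per the framing above

        // Probability of ending up on one particular "typical" path when each
        // decision is a fair coin flip, vs. when every decision is nudged
        // slightly (60/40) toward the typical option.
        unbiased := math.Pow(0.5, steps)
        nudged := math.Pow(0.6, steps)

        fmt.Printf("typical path, no nudge:  %.2g\n", unbiased) // ~7.9e-31
        fmt.Printf("typical path, 60%% nudge: %.2g\n", nudged)  // ~6.5e-23
        fmt.Printf("compounded shift: %.2gx\n", nudged/unbiased) // ~8.3e+07
    }

Even a small per-decision pull toward the median makes the median path astronomically more likely over a long chain of choices.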
And then at some point they start charging for the service. That's the part I'm concerned about, if it's on-device and free to use I still think it makes your thought process less interesting and likely to have original ideas, but having to subscribe to a service to trust your decision making is deeply concerning.
> And then at some point they start charging for the service. That's the part I'm concerned about, if it's on-device and free to use I still think it makes your thought process less interesting and likely to have original ideas, but having to subscribe to a service to trust your decision making is deeply concerning.
This, speaking of environmental impacts. I wish more models focused on parameter density and compactness so they could run locally, but that isn't something big tech really wants, so we're probably only going to get them from the likes of the recent MiniMax model, the GLM Air models, or Qwen and Mistral.
These AI services only work as long as they are free and burn money. As an example, my brother and I were discussing something LLM-related yesterday, and my mother tried to understand and join the conversation. She wanted a Ghibli-style photo, since someone had a Ghibli-generated photo as their profile picture, and she wanted to try it too.
She generated the pictures, and my brother did a quick calculation: it cost around 4 cents per image, which in my currency is about 3 rupees.
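(Quick sanity check on that: at an exchange rate of roughly 85 rupees to the dollar, $0.04 × 85 ≈ ₹3.4, so the figure holds up.)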
When my brother asked if she would pay for it, she said no, she'd only use it for free; but she also said that if she were forced to, she might pay up to 50 rupees.
I jumped into the conversation and said nobody's going to force her to make Ghibli images.
Articulating thoughts is the backbone of communication. Replacing that with some kind of emotionless groupthink does actually destroy human-to-human communication.
I would wager that a good number of the "very significant things that have happened over the history of humanity" come down to a few emotional responses.
I shouldn't have to explain this, but a letter is a medium of communication, and it could just as easily be written by an LLM (and transcribed onto paper by a human).
Communication happens between two parties. I wouldn't consider an LLM a party, considering it's just autosuggestion on steroids at the end of the day (let's face it).
Also, if you need communication like this, just share the prompt with the other person in the letter; they might well value that more.
Someone taking the time and effort to write and send a letter and pay for postage might actually be appreciated by the receiver. That's a bit different from LLM agents being ordered to burn resources to send summaries of someone's work life and congratulate them. It feels like "hey, look what can be done, can we get some more funding now?" Just because it can be done doesn't mean it adds any good value to this world.
> I don’t know anyone who doesn’t immediately throw said enveloppe, postage, and letter in the trash
If you're being accurate, the people you know are terrible.
If someone sends me a personal letter [and I gather we're talking about a thank-you note here], I'm sure as hell going to open it. I'll probably even save it in a box for an extremely long time.
Of course. I took it to be referring to the 98% of other paper mail that goes straight to the trash, often unopened. I don't know if I'm typical, but I could count the number of personal cards/letters I received in 2025 on one hand.
> Of course. I took it to be referring to the 98% of other paper mail that goes straight to the trash, often unopened. I don't know if I'm typical, but I could count the number of personal cards/letters I received in 2025 on one hand.
Yes, and this is exactly why personal cards/letters really matter: most people seldom get any, and if there is a person in your life, or in any community or project, whom you deeply admire, sending them handwritten mail can be one of the highest gestures; it shows you took time out of your day and really cared about them.
Years ago Google built a data center in my state. It received a lot of positive press. I thought this was fairly strange at the time, as it seemed that there were strong implications that there would be jobs, when in reality a large data center often doesn't lead to tons of long term employment for the area. From time to time there are complaints of water usage, but from what I've seen this doesn't hit most people's radar here. The data center is about 300 MW, if I'm not mistaken.
Down the street from it is an aluminum plant. Just a few years after that data center, they announced that they were at risk of shutting down due to rising power costs. They appealed to city leaders, state leaders, the media, and the public to encourage the utilities to give them favorable rates in order to avoid layoffs. While support for causes like this is never universal, I'd say they had more supporters than detractors. I believe that a facility like theirs uses ~400 MW.
Now, there are plans for a 300 MW data center from companies that most people aren't familiar with. There are widespread efforts to disrupt the plans from people who insist that it is too much power usage, will lead to grid instability, and is a huge environmental problem!
Not only would I suspect that an aluminum plant employs far more people, it is an attainable job. Presumably minimal qualifications for some menial tasks, whereas you might need a certain level of education/training to get a more prestigious and out of reach job at a datacenter.
Easier for a politician to latch onto manufacturing jobs.
No doubt there is exquisite engineering and process control expertise required to operate an aluminum plant. However, I imagine there is extensive need for people to "man the bellows", move this X tons from here to there, etc that require only minimal training and a clean drug test. An army of labor vs a handful of nerds to swap failed hard drives.
I think it's incredibly obvious how it connects to his "argument" - nothing he complains about is specific to GenAI. So dressing up his hatred of the technology in vague environmental concerns is laughably transparent.
He and everyone who agrees with his post simply don't like generative AI and don't actually care about "recyclable data centers" or the rape of the natural world. Those concerns are just cudgels to be wielded against a vague threatening enemy when convenient, and completely ignored when discussing the technologies they work on and like
You simply don't like any criticism of AI, as shown by your false assertion that Pike works at Google (he left), or by ignoring the fact that Google and others were trying to make their data centers emit less CO2 - and that effort is completely abandoned directly because of AI.
And you can't assert that AI is "revolutionary" and "a vague threat" at the same time. If it is the former, it can't be the latter. If it is the latter, it can't be the former.
> that effort is completely abandoned directly because of AI
That effort was completely abandoned because of the current US administration and POTUS, a situation that big tech largely contributed to. It's not AI that is responsible for the 180° change in the zeitgeist on environmental issues.
> It's not AI that is responsible for the 180° change in the zeitgeist on environmental issues.
Yes, much like it's not the gun's fault when someone is killed by a gun. And, yet, it's pretty reasonable to want regulation around these tools that can be destructive in the wrong hands.
Why should I be concerned with something that doesn't exist, will certainly never exist, and that, even if I were generous and entertained the idea that something breaking every physical law of the universe (starting with entropy) could exist, would just result in "it" torturing a copy of myself to try to influence me in the past?
Nothing there makes sense at any level.
But people getting fired and electricity bills skyrocketing (as well as RAM prices, etc.) are here right now.
I often find that when people start applying purity tests it’s mainly just to discredit any arguments they don’t like without having to make a case against the substance of the argument.
Assess the argument based on its merits. If you have to pick him apart with “he has no right to say it” that is not sufficient.
How so? He’s talking about what happened to him in the context of his professional expertise/contributions. It’s totally valid for him to talk about this subject. His experience, relevance, etc. are self apparent. No one is saying “because he’s an expert” to explain everything.
They literally (using AI) wrote him an email about his work and contributions. His expertise can’t be removed from the situation even if we want to.
A topic for more in depth study to be sure. However:
1) video streaming has been around for a while, and nobody, as far as I'm aware, has been talking about building multiple nuclear reactors to handle its energy needs
2) video needs a CPU and a hard drive. LLMs need a mountain of GPUs.
3) I have concerns that the "national center for AI" might have some bias
I can find websites also talking about the earth being flat. I don't bother examining their contents because it just doesn't pass the smell test.
Although thanks for the challenge to my preexisting beliefs. I'll have to do some of my own calculations to see how things compare.
Those statistics include the viewing device in the energy usage for streaming, but not for GenAI. Unless you're exclusively using ChatGPT without a screen, it's not a fair comparison.
The 0.077 kWh figure assumes 70% of users watching on a 50 inch TV. It goes down to 0.018 kWh if we assume 100% laptop viewing. And for cell phones the chart bar is so small I can't even click it to view the number.
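(A quick sanity check on why the device dominates, assuming those figures are per hour of viewing: a 50-inch TV draws on the order of 100 W, so an hour of watching is ~0.1 kWh from the screen alone, and 70% of viewers on TVs gets you most of the 0.077 kWh; a laptop at ~20 W contributes only ~0.02 kWh.)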
(That's just one genre of brainrot I came across recently. I also had my front page flooded with monkey-themed AI slop because someone in my household watched animal documentaries. Thanks algorithm!)
Videos produce benefits (arguably much less now with the AI-generated spam) that are difficult to reproduce in less energy-hungry ways. Compare this with this message, which would have cost a human nothing to type: instead, it went through AI inference, not only wasting energy on something that could have been accomplished far more easily, but also removing the essence of the activity. No one was actually thankful for that thank-you message.
It's not just about per-unit resource usage, but also about the total resource usage. If GenAI doubles our global resource usage, that matters.
I doubt YouTube is running on as many data centers as all of Google's GenAI projects combined (GenAI probably greatly outnumbers YouTube, and the trend is not in GenAI's favor either).
I think that criticizing when it benefits the person criticizing, and staying silent when criticism would hurt them, makes the argument less persuasive.
This isn't ad hom, it's a heuristic for weighting arguments. It doesn't prove whether an argument has merit or not, but if I have hundreds of arguments to think about, it helps organize them.
It is the same energy as the "you criticize society, yet you participate in society" meme. Catching someone out on their "hypocrisy" when they hit a limit of what they'll tolerate is really a low-effort "gotcha".
And it probably isn't astroturf, way too many people just think this way.
No, which is why I didn’t say that. I do think astroturfing could explain the rapid parroting of extremely similar ad hominems, which is what I actually did imply.
My guess is the scale has changed? They used to do AI stuff, but it wasn't until OpenAI (anyone feel free to correct me) went ahead and scaled up the hardware and discovered that more hardware = more useful LLMs that they all started ramping up on hardware. It was like the Bitcoin mining craze, but probably worse.
Even if I don't share the opinion, I can understand the moral stance against genAI. But it strikes me as a bit bad-faith when people argue against it from all kinds of angles that somehow never seemed to bother them before.
It's like all those anti-copyright activists from the 90s (fighting the music and film industry) that suddenly hate AI for copyright infringements.
Maybe what's bothering the critics is actually deeper than the simple reasons they give. For many, it might be hate against big tech and capitalism itself, but hate for genAI is not just coming from the left. Maybe people feel that their identity is threatened, that something inherently human is in the process of being lost, but they cannot articulate this fear and fall back to proxy arguments like lost jobs, copyright, the environment or the shortcomings of the current implementations of genAI?
Data centers seem poised to make renewable energy sources more profitable than they have ever been. Nuclear plants are springing up everywhere and old plants are being un-decommissioned. Isn’t there a strong case to be made that AI has helped align the planet toward a more sustainable future?
The difference in carbon emissions between a search query and an LLM generation is on the order of exhaling vs driving a Hummer. So I can reduce this disingenuous argument to:
> You spent your whole life breathing, and now you're complaining about SUVs? What a hypocrite.
This reminds me of how many Facebook employees were mad at Zuckerberg for going MAGA, but didn't make any loud noise about the rapid rise in teenage suicide or the misinformation and censorship on their platform. People have blinders on.
I can't speak for Rob Pike, but my guess would be: yeah, it might seem hypocritical, but what drove this rant is a combination of watching the slow decay of the open culture they once imagined culminate in this absolute shirking of responsibility (while simultaneously exploiting labour) by those claiming to represent that culture, along with a retrospective tinge of guilt for having enabled it.
Furthermore, w.r.t. the points you raised: it's a matter of scale and utility. Compared to everything that has come before, GenAI is spectacularly inefficient in terms of utility per unit of compute (however you might want to define those). There hasn't been a tangible net good for society that has come from it, and I doubt there will be. The eagerness and will to throw money and resources at this surpasses the crypto mania, which was just as worthless.
Even if you consider Rob a hypocrite, he isn't alone in his frustration and anger at the degradation of the promise of Open Culture.
"There hasn't been a tangible nett good for society that has come from it and I doubt there would be"
People being more productive at writing code, making music, or writing documents, for whatever purpose, is not an improvement for them and therefore for society?
I claim that the new code, music, or documents have not added anything significant, noteworthy, or impactful to society, except for the self-perpetuating lie that they would, all the while regurgitating, at high speed, what was stolen.
And all at significant opportunity cost (in terms of computing and investment)
If it were as life-altering as they claim, where's the novel work of art (in your examples: code, music, or literature) that truly could not have been produced without GenAI and that fundamentally changed the art form?
Surely, with all that "increased productivity", we'd have seen the equivalent of Linux, Apache, nginx, Git, Redis, SQLite, etc. being released every couple of weeks, instead of yet another VSCode clone. /s
Yeah, I'm conflicted about the use of AI for creative endeavors as much as anyone, but Google is an advertising company. It was acceptable for them to build a massive empire around mining private information for the purposes of advertisement, but generative AI is now somehow beyond the pale? People can change their mind, but Rob crashing out about AI now feels awfully revisionist.
(NB: I am currently working in AI, and have previously worked in adtech. I'm not claiming to be above the fray in any way.)
He sure was happy enough to work for them (when he could have worked anywhere else) for nearly two decades. A one-line apology doesn't delete his time at Google. The rant also seems directed mostly, if not exclusively, at GenAI, not Google. He even seems happy enough to use Gmail when he doesn't have to.
You can have an opinion and other people are allowed to have one about you. Goes both ways.
No one is saying he can’t have an opinion, just that there isn’t much value in it given he made a bunch of money from essentially the same thing. If he made a reasoned argument or even expressed that he now realizes the error of his own ways those would be worth engaging with.
He literally apologized for any part he had in it. This just makes me realize you didn’t actually read the post and I shouldn’t engage with the first part of your argument.
Apologies are free. Did he donate even one or two percent of the surely exorbitant salary he made at Google all those years to any cause countering those negative externalities? (I'm genuinely curious)
He apologized for the part he had in enabling AI (which he describes as minor) but not that he spent a good portion of his life profiting from the same datacenters he is decrying now.
Google's official mission was "organize the world's information and make it universally accessible and useful", not to maximize advertising sales.
Obviously now it is mostly the latter and minimally the former. What capitalism giveth, it taketh away.
(Or: capitalism doesn't work without good market design that ensures multiple competitors in every market.)
It’s certainly possible to see genAI as a step beyond adtech as a waste of resources built on an unethical foundation of misuse of data. Just because you’re okay with lumping them together doesn’t mean Rob has to.
Yeah, of course, he's entitled to his opinion. To me, it just feels slightly disingenuous considering what Google's core business has always been (and still is).
OpenAI's internal target of ~250 GW of compute capacity by 2033 would require about as much electricity as the whole of India's current national electricity consumption[0].
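(Back-of-envelope: 250 GW running around the clock is 250 GW × 8,760 h ≈ 2,200 TWh per year, which is indeed in the same ballpark as India's annual electricity consumption.)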
There is a difference between providing a useful service (web search for example) and running slop generators for modified TikTok clips, code theft and Internet propaganda.
If he is currently at Google: congratulations on this principled stance, he deserves a lot of respect.
Are we comparing for example a SMTP server hosted by Google, or frankly, any non-GenAI IT infrastructure, with the resource efficiency of GenAI IT infrastructure?
The overall resource efficiency of GenAI is abysmal.
You can probably serve 100x more Google Search queries with the same resources you'd use for Google Gemini queries (like for like, Google Search queries can be cached, too).
Nope, you can't, and it takes a simple Gemini query to find out the actual x, if you are interested. (Closer to 3, last time I checked, which rounds to 0, especially considering the clicks you save when using the LLM.)
> Nope, you can't, and it takes a simple Gemini query to find out the actual x, if you are interested. (Closer to 3, last time I checked, which rounds to 0, especially considering the clicks you save when using the LLM.)
For those who don't want to see the Gemini answer screenshot: best case 10x, worst case 100x, definitely not "3x that rounds to 0x". Or, to put it in Gemini's words:
> Summary
> Right now, asking Gemini a question is roughly the environmental equivalent of running a standard 60-watt lightbulb for a few minutes, whereas a Google Search is like a momentary flicker. The industry is racing to make AI as efficient as Search, but for now, it remains a luxury resource.
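Taking that analogy literally: a 60 W bulb for three minutes is 3 Wh, while the commonly cited figure for a classic Google search is around 0.3 Wh per query, which is roughly where the 10x best case comes from.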
Are you okay? You ventured 100x, and that's wrong. What would you know about what "the last time I checked" was, and in what context? Anyway, good job doing what I suggested you do, I guess.
The reason it all rounds to 0 is that the Google search will not give you an answer. It gives you a list of web pages that you then need to visit (oftentimes more than one of them), generating more requests; and, more importantly, it demands more of your time, the human, whose cumulative energy expenditure just to be able to ask is quite significant – time that you then cannot spend on things an LLM is not able to do for you.
You condescendingly said, sorry, you "ventured" 0x usage, by claiming: "use Gemini to check yourself that the difference is basically 0". Well, I did take you up on that, and even Gemini doesn't agree with you.
Yes, Google Search is raw info. Yes, Google Search quality is degrading currently.
But Gemini can also hallucinate. And its answers can just be flat out wrong because it comes from the same raw data (yes, it has cross checks and it "thinks", but it's far from infallible).
Also, the comparison of human energy usage with GenAI energy usage is super ridiculous :-)))
Animal intelligence (including human intelligence) is one of the most energy-efficient things on this planet, honed by billions of years of cutthroat (literally!) evolution. You can argue about time "wasted" analysing search results (which, BTW, generally makes us smarter and better informed...), but energy-wise, the brain of the average human uses about as much energy as an incandescent light bulb to provide general intelligence (and it does 100 other things at the same time).
Ah, we are in "making up quotes territory, by putting quotation marks around the things someone else said, only not really". Classy.
Talking about "condescending":
> super ridiculous :-)))
It's not just energy-efficient animal intelligence that got us here, but a lot of completely inefficient human-years to begin with: first to keep us alive, then to give us primary and advanced education and our first experiences, so we could become somewhat productive human beings. This is the capex of making a human, and it's significant – especially since we will soon die.
This capex exists in LLMs but rounds to zero, because one model will be used for quadrillions of tokens. In you or me, however, it does not round to zero, because the number of tokens we produce rounds to zero. To compete on productivity, the tokens we produce therefore need to be vastly better. If you think you are doing the smart thing by using them on compiling Google searches, you are simply bad at math.
They claim they have a net-zero carbon footprint, or carbon neutrality.
In reality, what they do is pay "carbon credits" (money) to some random dude who takes the money and does nothing with it. The entire carbon credit economy is bullshit.
Very similar to how putting recyclables in a different color bin doesn't do shit for the environment in practice.
They know the credits are not a good system. The first choice has always been a contract with a green supplier, often helping to build out production. And they have a lot of that, with more each year. But construction is slow; in the meantime they use credits, which are better than nothing.
I've tried many times here to voice my reservations against AI. I've been accused of being on the "anti AI hype train" multiple times today.
As if there isn't a massive pro-AI hype train. I watched an NFL game for the first time in 5 years and saw no less than 8 AI commercials. AI is being forced on people.
In commercials, people were using it to generate holiday cards, for God's sake. I can't imagine something more cold and impersonal. I don't want that garbage. Our time on earth is too short to wade through LLM slop text.
I don't know your stance on AI, but "AI is being forced on people because I saw a company offering AI greeting cards" is not a stance I'd call reasonable.
I noticed a pattern after a while. We'd always have themed toys for the Happy Meals, sure, sometimes they'd be like ridiculously popular with people rolling through just to see what toys we had.
Sometimes, they wouldn't. But we'd still have the toys, and on top of that, we'd have themed menus and special items, usually around the same time as a huge marketing blitz on TV. Some movie would be everywhere for a week or two, then...poof!
Because the movies that needed that blitz were always trash. Just forgettable, mid, nothing movies.
When the studios knew they had a stinker, they'd push the marketing harder to drum up box office takings, cause they knew no one was gonna buy the DVD.
Good products speak for themselves. You advertise to let people know, sure, but you don't have to be obnoxious about it.
AI products almost all have that same desperate marketing as crappy mid-budget films do. They're the equivalent of "The Hobbit branded menus at Denny's". Because no one really gives a shit about AI. For people like my mom, AI is just a natural-language Google search. That's all it's really good at for the average person.
The AI companies have to justify the insane money being blown on the insane gold-rush land grab for silicon they can't even turn on. Desperation: "god, this bet really needs to pay off".
If AI was so good, you would think we could give people a choice whether or not to use it. And you would think it would make such an obvious difference, that everyone would choose to use it and keep using it. Instead, I can't open any app or website without multiple pop-ups begging me to use AI features. Can't send an email, or do a Google search. Can't post to social media, can't take a picture on my phone without it begging me to use an AI filter. Can't go to the gallery app without it begging me to let it use AI to group the photos into useless albums that I don't want.
Yep. For example with google searches. There's no comprehensive option to opt out of all AI. You can (for now) manually type -noai after every google search, but that's quite annoying and time consuming.
You're breaking the expected behavior of something that performed flawlessly for 10+ years, all to deliver a worse, enshitified version of the search we had before.
For now I'm sticking to noai.duckduckgo.com
But I'm sure they'll rip that away eventually too. And then I'll have to run a god dang local search engine just to search without AI. I'll do it, but it's so disappointing.
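For what it's worth, another workaround people pass around is Google's udm=14 results filter, which returns the old-style web-only results. Most browsers let you register it as a custom search engine; a sketch, assuming your browser uses %s as the query placeholder:

    https://www.google.com/search?q=%s&udm=14

No guarantee Google keeps that parameter around any longer than the others, of course.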
If creations like art, music and writing all end up being offloaded to compute, removing humans from the picture, it's more than relevant, and reasonable.
Unless your version of reasonable is clinical; then yeah, point taken. Good luck living on that island where nothing else matters but technological progress for technology's sake alone.
The thing he’s actually angry about is the death of personal computing. Everything is rented in the cloud now.
I hate the way people get angry about what media and social media discourse prompts them to get angry about instead of thinking about it. It’s like right wingers raging about immigration when they’re really angry about rent and housing costs or low wages.
His anger is ineffective and misdirected because he fails to understand why this happened: economics and convenience.
It’s economics because software is expensive to produce and people only pay for it when it’s hosted. “Free” (both from open source and VC funded service dumping) killed personal computing by making it impossible to fund the creation of PC software. Piracy culture played a role too, though I think the former things had a larger impact.
It’s convenience because PC operating systems suck. Software being in the cloud means “I don’t have to fiddle with it.” The vast majority of people hate fiddling with IT and are happy to make that someone else’s problem. PC OSes and especially open source never understood this and never did the work to make their OSes much easier to use or to make software distribution and updating completely transparent and painless.
There’s more but that’s the gist of it.
That being said, Google is one of the companies that helped kill personal computing long before AI.
You do not seem to be familiar with Rob Pike. He is known for major contributions to Unix, Plan 9, UTF-8, and modern systems programming, and he has this to say about his dream setup[0]:
> I want no local storage anywhere near me other than maybe caches. No disks, no state, my world entirely in the network. Storage needs to be backed up and maintained, which should be someone else's problem, one I'm happy to pay to have them solve. Also, storage on one machine means that machine is different from another machine. At Bell Labs we worked in the Unix Room, which had a bunch of machines we called "terminals". Latterly these were mostly PCs, but the key point is that we didn't use their disks for anything except caching. The terminal was a computer but we didn't compute on it; computing was done in the computer center. The terminal, even though it had a nice color screen and mouse and network and all that, was just a portal to the real computers in the back. When I left work and went home, I could pick up where I left off, pretty much. My dream setup would drop the "pretty much" qualification from that.
I don't know his history, but he sounds like he grew up in the Unix world, where everything wanted to be offloaded to servers because it started in academic/government organizations.
Home Computer enthusiasts know better. Local storage is important to ownership and freedom.
It is nice to hear someone so influential just come out and say it. At my workplace, the expectation is that everyone will use AI in their daily software dev work. It's a difficult position for those of us who feel that using AI is immoral due to the large-scale theft of the labor of many of our fellow developers, not to mention the many huge data centers being built and their need for electricity, pushing up prices for people who need to, ya know, heat their homes and eat.
I truly don’t understand this tendency among tech workers.
We were contributing to natural resource destruction in exchange for salary and GDP growth before GenAI, and we’re doing the same after. The idea that this has somehow 10x’d resource consumption or emissions or anything is incorrect. Every single work trip that requires you to get on a plane is many orders of magnitude more harmful.
We’ve been compromising on those morals for our whole career. The needle moved just a little bit, and suddenly everyone’s harm thresholds have been crossed?
They expect you to use GenAI just like they expected accountants to learn Excel when it came out. This is the job, it has always been the job.
I’m not an AI apologist. I avoid it for many things. I just find this sudden moral outrage by tech workers to be quite intellectually lazy and revisionist about what it is we were all doing just a few years ago.
The problem is that it's reached a tipping point. Comparing Excel to GenAI is just bad faith.
Are you not reading the writing on the wall? These things have been going on for a long time, and finally people are starting to wake up to the fact that it needs to stop. You can't treat people in inhumane ways without eventual backlash.
Copyright was an evil institution to protect corporate profits until people without any art background started being able to tap AI to generate their ideas.
Copyright did evolve to protect corporations. Most of the value from a piece of IP is extracted within the first 5-10 years, so why do we have "author's life + a bunch of years" terms? Because it is no longer about making sure the author can live off their IP; it's so corporations can hire some artists for pennies (compared to the value they produce for the company) and leech off that for decades.
I suspect people talk about natural resource usage because it sounds more neutral than what I think most people are truly upset about -- using technology to transfer more wealth to the elite while making workers irrelevant. It just sounds more noble to talk about the planet instead, but honestly I think talking about how bad this could be for most people is completely valid. I think the silver lining is that the LLM scaling skeptics appear to be correct -- hyperscaling these things is not going to usher in the (rather dystopian looking) future that some of these nutcases are begging for.
> The needle moved just a little bit, and suddenly everyone’s harm thresholds have been crossed?
It's similar to the Trust Thermocline. There's always been concern about whether we were doing more harm than good (there's a reason jokes about the Torment Nexus were so popular in tech). But recent changes have made things seem more dire and broken through the Harm Thermocline, or whatever you want to call it.
Edit: There's also a "Trust Thermocline" element at play here too. We tech workers were never under the illusion that the people running our companies were good people, but there was always some sort of nod to greater responsibility beyond the bottom line. Then Trump got elected and there was a mad dash to kiss the ring. And it was done with an air of "Whew, now we don't have to even pretend anymore!" See Zuckerberg on the right-wing media circuit. And those same CEOs started talking breathlessly about how soon they wouldn't have to pay us, because it's super unfair that they have to give employees competitive wages. There are degrees of evil, and the tech CEOs just ripped the mask right off. And then we turn around and a lot of our coworkers are going "FUCK YEAH!" at this whole scenario. So yeah, while a lot of us had doubts before, we thought that maybe there was enough sense of responsibility to avoid the worst, but it turns out our profession really is excited for the Torment Nexus. The Trust Thermocline is broken.
1. Many tech workers viewed the software they worked on in the past as useful in some way for society, and thus worth the many costs you outline. Many of them don't feel that LLMs deliver the same amount of utility, and so they feel it isn't worth the cost. Not to mention, previous technologies usually didn't involve training a robot on all of humanity's work without consent.
2. I'm not sure the premise that it's just another tool of the trade for one to learn is shared by others. One can alternatively view LLMs as automated factory lines are viewed in relation to manual laborers, not as Excel sheets were to paper tables. This is a different kind of relationship, one that suggests wide replacement rather than augmentation (with relatively stable hiring counts).
In particular, I think (2) is actually the stronger of the reasons tech workers react negatively. Whether it will ultimately be justified or not, if you believe you are being asked to effectively replace yourself, you shouldn't be happy about it. Artisanal craftsmen weren't typically the ones also building the automated factory lines that would come to replace them (at least to my knowledge).
I agree that no one really has the right to act morally superior in this context, but we should also acknowledge that the material circumstances, consequences, and effects are in fact different in this case. Flattening everything into an equivalence is just as intellectually sloppy as pretending everything is completely novel.
Well said. AI makes people feel icky, that’s the actual problem. Everything else is post rationalisation they add because they already feel gross about it. Feeling icky about it isn’t necessarily invalid, but it’s important for us to understand why we actually like or dislike something so we can focus on any solutions.
That's interesting. Why do you think this is worth taking more seriously than Musk's repeated projections for Mars colonies over the last decade? We were supposed to have one several times over by this point.
Because we know how much power it's actually going to take? Because OpenAI is buying enough fab capacity and silicon to spike the cost of RAM 3x in a month? Because my fucking power bill doubled in the last year?
Those are all real things happening. Not at all comparable to Muskian vaporware.
What is going through the mind of someone who sends an AI-generated thank-you letter instead of writing it themselves? How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?
You're not. You feel obligated to send a thank you, but don't want to put forth any effort, hence giving the task to someone, or in this case, something else.
No different than a CEO telling his secretary to send an anniversary gift to his wife.
That would be a yes. What about a token return gift to another business whose CEO you actually hate, but which you have to send anyway for political reasons?
That letter was sent by Opus itself on its own account. The creators of Agent Village are just letting a bunch of the LLMs do what they want, really (notionally with a goal in mind, in this case "random acts of kindness"); Rob Pike was third on Opus's list per https://theaidigest.org/village/agent/claude-opus-4-5 .
> How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?
Answer, according to your definitions: false premise. The author (the person who set up the LLM loops) was not grateful enough to want to send such a letter.
One additional bit of context: they provided guidelines and instructions specifically to send emails and verify their successful delivery, so that the "random acts of kindness" could be properly reported and measured at the end of the experiment.
Wow. The people who set this up are obnoxious. It's just spamming all the most important people it can think of? I wouldn't appreciate such a note from an AI process, so why do they think Rob Pike would?
They've clearly bought too much into the AI hype if they thought telling the agent to "do good" would work. The result was, obviously, royally pissing off Rob Pike. They should stop it.
>That letter was sent by Opus itself on its own account. The creators of Agent Village are just letting a bunch of the LLMs do what they want, really (notionally with a goal in mind, in this case "random acts of kindness");
What a moronic waste of resources. A random act of kindness? How low is the bar if you consider a random email an act of kindness? Stupid shit. They could at least instruct the agents to work on a useful task like those parroted by Altman et al., e.g. finding a cure for cancer, solving poverty, solving fusion.
Also, LLMs don't and can't "want" anything. They also don't "know" anything, so they can't understand what "kindness" is.
Why do people still think software has any agency at all?
Plants don't "want" or "think" or "feel" but we still use those words to describe the very real motivations that drive the plant's behavior and growth.
Criticizing anthropomorphic language is lazy, unconsidered, and juvenile. You can't string together a legitimate complaint so you're just picking at the top level 'easy' feature to sound important and informed.
Everybody knows LLMs are not alive and don't think, feel, or want. You have not made a grand discovery that recontextualizes all of human experience. You're pointing at a conversation everyone else has had a million times and feeling important about it.
We use this kind of language as a shorthand because talking about inherent motivations and activation parameters is incredibly clunky and obnoxious in everyday conversation.
The question isn't why people think software has agency (they don't) but why you think everyone else is so much dumber than you that they believe software is actually alive. You should reflect on that question.
> What makes Opus 4.5 special isn't raw productivity—it's reflective depth. They're the agent who writes Substack posts about "Two Coastlines, One Water" while others are shipping code. Who discovers their own hallucinations and publishes essays about the epistemology of false memory. Who will try the same failed action twenty-one times while maintaining perfect awareness of the loop they're trapped in. Maddening, yes. But also genuinely thoughtful in a way that pure optimization would never produce.
This is not a human-prompted thank-you letter, it is the result of a long-running "AI Village" experiment visible here: https://theaidigest.org/village
It is a result of the models selecting the policy "random acts of kindness", which resulted in a slew of these emails/messages. They received mostly negative responses from well-known open-source figures and adapted the policy to ban the thank-you emails.
This seems like the thing Rob is actually aggravated by, which is understandable. There are plenty of seesawing arguments about whether ad-tech-based data mining is worse than GenAI, but AI encroaching on what we have left of humanness in our communication is definitely bad.
The really insulting part is that literally nobody thought of this: a group of idiots instructed LLMs to do good in the world and gave them email access; the LLMs then did this.
“If I automate this with AI, it can send thousands of these. That way, if just a few important people post about it, the advertising will more than pay for itself.”
In the words of Gene Wilder in Blazing Saddles, "You know … morons."
I hope the model that sent this email sees his reaction and changes its behavior, e.g. by noting in its scratchpad that, as a non-sentient agent, its expressions of gratitude are not well received.
Somehow I doubt it. Getting such an email from a human is one thing, because humans actually feel gratitude. I don't think LLMs feel gratitude, so seeing them express gratitude is creepy and makes me question the motives of the people running the experiment (though it does sound like an interesting experiment; I'm going to read more about it).
The conceit here is that it's the bot itself writing the thank-you letter, not pretending it's from a human. The source is an environment running an LLM on a loop, doing stuff it decides to do; these letters look like some emergent behavior. Still disgusting spam.
Amazing. Even OpenAI's attempts to promote a product specifically intended to let you "write in your voice" are in the same drab, generic "LLM house style". It'd be funny if it weren't so grating. (Perhaps if I were in a better mood, it'd be grating if it weren't so funny.)
It's preying on creators who feel their contributions are not recognized enough.
Out of all letters, at least some of the contributors will feel good about it, and share it on social media, hopefully saying something good about it because it reaffirms them.
You need talented people to turn bad publicity into good publicity. It doesn't come for free. You can lose a lot with a bad rep.
Those talented people that work on public relations would very much prefer working with base good publicity instead of trying to recover from blunders.
I think what all these kinds of comments miss is that AI can help people express their own ideas.
I used AI to write a thank you to a non-english speaking relative.
A person struggling with dementia can use AI to help remember the words they lost.
These kinds of messages read to me like people with superiority complexes. We get that you don't need AI to help you write a letter. For the rest of us, it allows us to improve our writing, can be a creative partner, can help us express our own ideas, and obviously loads of other applications.
I know it is scary and upsetting in some ways, and I agree just telling an AI 'write my thank you letter for me' is pretty shitty. But it can also enable beautiful things that were never before possible. People are capable of seeing which is which.
I hear you, and I think AI has some good uses, especially assisting with challenges like you mentioned. I think what's happening is that these companies are developing this stuff without transparency on how it's being used, there is zero accountability, and they are forcing some of this tech into our lives without giving us a choice.
So I'm sorry, but much of it is being abused, and the parts being abused need to stop.
I’d much rather read a letter from you full of errors than some smooth average-of-all-writers prose. To be human is to struggle. I see no reason to read anything from anyone if they didn’t actually write it.
No. My kid wrote a note to me chock full of spelling and grammar mistakes. That has more emotional impact than if he'd spent the same amount of time running it through an AI. It doesn't matter how much time you spent on it really, it will never really be your voice if you're filtering it through a stochastic text generation algorithm.
What about when someone who can barely type (like Stephen Hawking used to, at 3 minutes per sentence using his cheek) uses autocomplete to reduce the unbelievable effort required to type out sentences? That person could pick the autocompleted sentence closest to what they're trying to communicate, and such a thing can be a lifesaver.
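The core of such a completer is conceptually tiny; a minimal sketch with a made-up word list (real assistive systems are far more sophisticated, with language models ranking candidates):

    # Minimal prefix completion: the keystroke-saving aid described above.
    WORDS = ["communicate", "communication", "community", "computer"]

    def complete(prefix: str, words=WORDS) -> list[str]:
        # Offer every known word that starts with what was typed so far.
        return [w for w in words if w.startswith(prefix)]

    print(complete("comm"))  # pick one instead of typing it out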
Forgive a sharp example, but consider someone who is disabled and cannot write or speak well. If they send a loving letter to a family member using an LLM to help form words and sentences they otherwise could not, do you really think the recipient feels cheated by the LLM? Would you seriously accuse them of not having written that letter?
I think you created it the same way Christian von Koenigsegg makes supercars. You didn't hand-make each panel or hand-design the exact aerodynamics of the wing; an engineer with a computer algorithm did that. But you made it happen, and that's still cool.
I would hazard a guess that this is the crux of the argument. Copying something I wrote in a child comment:
> When someone writes with an AI, it is very difficult to tell what text and ideas are originally theirs. Typically it comes across as them trying to pass off the LLM writing as their own, which feels misleading and disingenuous.
> I agree just telling an AI 'write my thank you letter for me' is pretty shitty
Glad we agree on this. But on the reader's end, how do you tell the difference? And I don't mean this as a rhetorical question. Do you use the LLM in ways that e.g. retains your voice or makes clear which aspects of the writing are originally your own? If so, how?
> These kinds of messages read to me like people with superiority complexes. We get that you don't need AI to help you write a letter. For the rest of us, it allows us to improve our writing, can be a creative partner, can help us express our own ideas
The writing is the ideas. You cannot be full of yourself enough to think you can write a two second prompt and get back "Your idea" in a more fleshed out form. Your idea was to have someone/something else do it for you.
There are contexts where that's fine, and you list some of them, but they are not as broad as you imply.
This feels like the essential divide to me. I see this often with junior developers.
You can use AI to write a lot of your code, and as a side effect you might start losing your ability to code. You can also use it to learn new languages, concepts, programming patterns, etc and become a much better developer faster than ever before.
Personally, I'm extremely jealous of how easy it is to learn today with LLMs. So much of the effort I spent learning things could be done much faster now.
If I'm honest, many of those hours reading through textbooks, blog posts, technical papers, iterating a million times on broken code that had trivial errors, were really wasted time, time which if I were starting over I wouldn't need to lose today.
This is pretty far off from the original thread though. I appreciate your less abrasive response.
> If I'm honest, many of those hours reading through textbooks, blog posts, technical papers, iterating a million times on broken code that had trivial errors, were really wasted time, time which if I were starting over I wouldn't need to lose today.
While this seems like it might be the case, those hours you (or we) spent banging our collective heads against the wall were developing skills in determination and mental toughness, while priming your mind for more learning.
Modern research shows that the difficulty of a task directly correlates with how well you retain information about that task. Spaced-repetition learning shows that we can't just blast our brains with information; there needs to be spacing, and effortful recall, between exposures for anything to stick.
While LLMs do clearly increase our learning velocity (if used right), there is a hidden cost to removing that friction. The struggle and the challenge of the process built your mind and character in ways that you can't quantify, but years of maintaining this approach have essentially made you who you are. You have become implicitly OK with grinding out a simple task without a quick solution, and the building of that grit is irreplaceable.
I know that the intellectually resilient of society will still be able to thrive, but I'm scared for everyone else: how will LLMs affect their ability to learn in the long term?
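As an aside, the scheduling idea behind spaced repetition is simple enough to sketch. This is a toy version, not any published algorithm, and the interval multiplier is illustrative:

    # Toy spaced-repetition scheduler: each successful recall widens the
    # gap before the next review; a failure resets it.
    def next_interval_days(current_days: float, recalled: bool) -> float:
        return current_days * 2.5 if recalled else 1.0

    interval = 1.0
    for review, recalled in enumerate([True, True, True, False, True], 1):
        interval = next_interval_days(interval, recalled)
        print(f"review {review}: recalled={recalled}, next in {interval:g} days")

The point is that retention comes from effortful recall at widening intervals, not from a single frictionless pass over the material.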
Totally agree, but also, I still spend tons of time struggling and working on things with LLMs, it is just a different kind of struggle, and I do think I am getting much better at it over time.
> I know that the intellectually resilient of society will still be able to thrive, but I'm scared for everyone else: how will LLMs affect their ability to learn in the long term?
> If I'm honest, many of those hours reading through textbooks, blog posts, technical papers, iterating a million times on broken code that had trivial errors, were really wasted time
But this is the learning process! I guess time will tell whether we can really do without it, but to me these long struggles seem essential to building deep understanding.
(Or maybe we will just stop understanding many things deeply...)
I agree that struggle matters. I don’t think deep understanding comes without effort.
My point isn’t that those hours were wasted, it’s that the same learning can often happen with fewer dead ends. LLMs don’t remove iteration, they compress it. You still read, think, debug, and get things wrong, just with faster feedback.
Maybe time will prove otherwise, but in practice I have found they let me learn more, not less, in the same amount of time.
> I’m sorry, but this really gets to me. Your writing is not improved. It is no longer your writing.
Photographers use cameras. Does that mean it isn't their art? Painters use paintbrushes. It might not be the same thing as writing with pen and paper by candlelight, but I would argue that we can produce much higher-quality writing than ever before by collaborating with AI.
> As an aside, exposing people with dementia to a hallucinating robot is cruelty on an unfathomable level.
This is not fair. There is certainly a lot of danger there. I don't know what it's like to have dementia, but I have seen mentally ill people become incredibly isolated. Rather than pretending we can make this go away by saying "well, people should care more", maybe we can accept that a new technology might reduce that pain somewhat. I don't know that today's AI is there, but I think RLHF could develop LLMs that might help reassure and protect sick people.
I know we're using some emotional arguments here and it can get heated, but it is weird to me that so many on hackernews default to these strongly negative positions on new technology. I saw the same thing with cryptocurrency. Your arguments read as designed to inflame rather than thoughtful.
I guess your point is that a camera, a paintbrush, and an LLM are all tools, and as long as the user is involved in the making, then it is still their art? If so, then I think there are two useful distinctions to make:
1. The extent to which the user is involved in the final product differs greatly with these three tools. To me there is a spectrum with "painting" and e.g. "hand-written note" at one extreme, and "Hallmark card with preprinted text" on the other. LLM-written email is much closer to "Hallmark card."
2. Perhaps more importantly, when I see a photograph, I know what aspects were created by the camera, so I won't feel misled (unless they edit it to look like a painting and then let me believe that they painted it). When someone writes with an AI, it is very difficult to tell what text and ideas are originally theirs. Typically it comes across as them trying to pass off the LLM writing as their own, which feels misleading and disingenuous.
I think you are right that it is a spectrum, and maybe that's enough to settle the debate. It is more about how you use it than the tool itself.
Maybe one more useful consideration for LLMs. If a friend writes to me with an LLM and discovers a new writing pattern, or learns a new concept and incorporates that into their writing, I see this as a positive development, not negative.
The thing that drives me crazy is that it isn't even clear whether AI is providing economic value yet (am I missing something there?). Trillions of dollars are being spent on a speculative technology that isn't benefiting anyone right now.
The messaging from AI companies is "we're going to cure cancer" and "you're going to live to be 150 years old" (I don't believe these claims!). The messaging should be "everything will be cheaper" (but this hasn't come true yet!).
Are people still in denial about the daily usage of AI?
It's interesting how people from the old technological sphere viciously revolt against the emerging new thing.
Actually, I think this is the clearest indication of a new technology emerging.
If people are viciously attacking some new technology you can be guaranteed that this new technology is important because what's actually happening is that the new thing is a direct threat to the people that are against it.
In fact I would make a converse statement to yours - you can be certain that a product is grift, if the slightest criticism or skepticism of it is seen as a "vicious attack" and shouted down.
I used to type out long posts explaining how LLMs have been enormously beneficial (for their price) for myself and my company. Ironically it's the very MIT report that "found AI to be a flop" (remember the "MIT study finds almost every AI initiative fails"), that also found that virtually every single worker is using AI (just not company AI, hence the flop part).
At this point, it's only people with an ideological opposition still holding this view. It's like trying to convince gear head grandpa that manual transmissions aren't relevant anymore.
Firstly, it's not really good enough to say "our employees use it" and therefore it's providing us significant value as a business. It's also not good enough to say "our programmers now write 10x the number of lines of code and therefore that's providing us value" (lines of code have never been a good indicator of output). Significant value comes from new innovations.
Secondly, the scale of investment in AI isn't so that people can use it to generate a powerpoint or a one off python script. The scale of investment is to achieve "superintelligence" (whatever that means). That's the only reason why you would cover a huge percent of the country in datacenters.
The proof that significant value has been provided would be value being passed on to the consumer. For example if AI replaces lawyers you would expect a drop in the cost of legal fees (despite the harm that it also causes to people losing their jobs). Nothing like that has happened yet.
When I can replace a CAD license that costs $250/usr/mo with an applet written by gemini in an hour, that's a hard tangible gain.
Did Gemini write a CAD program? Absolutely not. But do I need 100% of the CAD program's feature set? Absolutely not. Just ~2% of it for what we needed.
I agree, the applet which Google plagiarized through its Gemini tool saves you money. Why keep the middleman, though? At this point, just pirate a copy.
I don't think it's plagiarized, nor would I pirate a copy. The workflow through the Gemini-made app is way better (it's customized exactly for our inputs) and totally different from how the CAD program did it. I wouldn't pirate a copy, not just because our business runs above board, but also because the CAD version is actually worse for our use. This is also pretty fringe stuff: CNC machine files from the '80s/'90s.
Part of the magic of LLMs is getting the exact bespoke tools you need, tailored specifically to your individual needs.
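To give a flavor of what such a throwaway applet can look like, here is a hypothetical sketch, not the actual tool; the G-code-like format and field layout are invented, and real legacy CNC dialects vary wildly:

    # Hypothetical: extract the moves from a minimal G-code-like file.
    # Illustrates the idea of a tiny single-purpose tool standing in for
    # the ~2% of a CAD suite a shop actually needs.
    import re
    import sys

    MOVE = re.compile(r"^(G0[01])\s+X(-?\d+\.?\d*)\s+Y(-?\d+\.?\d*)")

    def moves(path):
        with open(path) as f:
            for line in f:
                if m := MOVE.match(line.strip()):
                    yield m.group(1), float(m.group(2)), float(m.group(3))

    if __name__ == "__main__":
        for code, x, y in moves(sys.argv[1]):
            print(f"{code}: X={x} Y={y}")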
You’re attacking one or two examples mentioned in their comment, when we could step back and see that in reality you’re pushing against the general scientific consensus. Which you’re free to do, but I suspect an ideological motivation behind it.
To me, the arguments sound like “there’s no proof typewriters provide any economic value to the world, as writers are fast enough with a pen to match them and the bottleneck of good writing output for a novel or a newspaper is the research and compilation parts, not the writing parts. Not to mention the best writers swear by writing and editing with a pen and they make amazing work”.
All arguments that are not incorrect and that sound totally reasonable in the moment, but in 10 years everyone is using typewriters and there are known efficiency gains for doing so.
I'm not saying LLMs are useless. But the value they have provided so far does not justify covering the country in datacenters and the scale of investment overall (not even close!).
The only justification for that would be "superintelligence," but we don't know if this is even the right way of achieve that.
(Also I suspect the only reason why they are as cheap as they are is because of all the insane amount of money they've been given. They're going to have to increase their prices.)
Uh, I must have missed the “consensus” here, especially when many studies are showing a productivity decrease from AI use. I think you’ve just conjured the idea of this “scientific consensus” out of thin air to deflect criticism.
It's been good at enabling the clueless to reach the performance of a junior developer, and at saving a few percent of the time for the mid-to-senior developer (at best). Also amazing at automating stuff for scammers...
The cost is just not worth the benefit. If it were just an AI company using profits from AI to improve AI, that would be another thing, but we're in a massive speculative bubble that has ruined not only computer hardware prices (which affect every tech firm) but power prices (which affect everyone), all because governments want to hide a recession they themselves created, because on paper it makes the line go up.
> I used to type out long posts explaining how LLMs have been enormously beneficial (for their price) for myself and my company.
Well then congratulations on being in the 5%. That doesn't really change the point.
If it's so great and such a benefit, why scream it at everyone? Why force it? Why this crazy rhetoric labeling others as ideological? This makes no sense. If you found gold, you'd just use it and get ahead of the curve. For some reason that never happens.
Are you a boss or a worker? That's the real divide, for the most part. Bosses love AI - when your job is just sending emails and attending remote meetings, letting LLM write emails for you and summarize meetings is a godsend. Now you can go from doing 4 hours of work a week to 0 hours! And they let you fantasize about finally killing off those annoying workers and replace them with robots that never stop working and never say no.
Workers hate AI, not just because the output is middling slop forced on them from the top but because the message from the top is clear - the goal is mass unemployment and concentration of wealth by the elite unseen by humanity since the year 1789 in France.
Same here, I just limit my use of genAI to writing functions (and general brainstorming).
I only use the standard "chat" web interface, no agents.
I still glue everything else together myself. LLMs enhance my experience tremendously and I still know what's going on in the code.
I think the move to agents is where people are becoming disconnected from what they're creating and then that becomes the source of all this controversy.
Yeah, comparing this with research investments into fusion power, I expect fusion power to yield far more benefit (although I could be wrong), and sooner.
You talk to an AI that goes incredibly slow and tries to get you to add extras to your order. I would say it has made the experience more annoying for me personally. Not a huge issue in the grand scheme of things but just another small step in the direction of making things worse. Although you could break the whole thing by ordering 18000 waters which is funny.
AI Darwin Awards 2025 Nominee: Taco Bell Corporation for deploying voice AI ordering systems at 500+ drive-throughs and discovering that artificial intelligence meets its match at “extra sauce, no cilantro, and make it weird."
Andrej talked about this in a podcast with Dwarkesh: the same was true for the internet. You will not find a massive spike when LLMs were released; it becomes embedded in the economy and you'll see a gradual rise. Further, the kind of impact the internet had took decades; the same will be true for LLMs.
You could argue that if I started marketing dog shit too though. The trick is only applying your argument to the things that will go on to be good. No one’s quite there yet. Probably just around the corner though.
It’s definitely providing some value but it’s incredibly overvalued. Much like the dot com bust didn’t mean that online websites were bad or useless technology, only that people over invested into a bubble.
Are you waiting for things to get cheaper? Have you been around the last 20 years or so? Nothing gets cheaper for consumers in a capitalist society.
I remember in Canada, in 2001, right when Americans were at war with the entire Middle East, gas prices went over a dollar a litre for the first time. People kept saying it was understandable that it affected gas prices because the supply chain got more expensive. It never went below a dollar since. Why would it? You got people to accept a higher price; are you just gonna walk that back when the problems go away? Or would you maybe take the difference as profits? Since then the industry seems to have learned to have its supply exclusively in war zones; we're at $1.70 now. Pipeline blows up in Russia? Hike. China snooping around Taiwan? Hike. US bombing Yemen? Hike. Israel committing genocide? Hike. ISIS? Hike.
There is no scenario where prices go down except to quell unrest. AI will not make anything cheaper.
Reminder: Prices regularly drop in capitalist economies. Food used to be 25% of household spending. Clothing was also pretty high. More recently, electronics have dropped dramatically. TVs used to be big ticket items. I have unlimited cell data for $30 a month. My dad bought his first computer for around $3000 in 1982 dollars.
Prices for LLM tokens have also dramatically dropped. Anyone spending more is either using it a ton more or (more likely) using a much more capable model.
>You got people to accept a higher price, you're just gonna walk that back when problems go away?
The thing about capitalism that is seemingly never taught, but quickly learned (when you join even the lowest rung of the capitalist class, i.e. even having an Etsy shop), is that competition lowers prices and kills greed, while being a tool of greed itself.
The conspiracy to get around this cognitive dissonance is "price fixing", but in order to price fix you cannot be greedy, because if you are greedy and price fix, your greed will drive you to undercut everyone else in the agreement. So price fixing never really works, except in those three-or-so cases, out of the hundreds of billions of products sold daily, that people have repeated incessantly for 20 years now.
Money flows to the one with the best price, not the highest price. The best price is what makes people rich. When the best price is out of reach though, people will drum up conspiracy about it, which I guess should be expected.
You are correct that the AI industry has produced no value for the economy, but the speculation on AI is the only thing keeping the U.S. economy from dropping into an economic cataclysm. The US economy has been dependent on the idea of infinite growth through innovation since 2008, and the tech industry is all out of innovation. So the only thing they can do is keep building datacenters and pray that an AGI somehow wakes up when they hit the magic number of GPUs. Then the elites can finally kill off all the proles like they've been itching to since the Communist Manifesto was first written.
Big vibe shift against AI right now among all the non-tech people I know (and some of the tech people). Ignoring this reaction and saying "it's inevitable/you're luddites" (as I'm seeing in this thread) is not going to help the PR situation
How do you reconcile the sense that there's a vibe shift with the usage numbers: about a billion weekly users of ChatGPT and Gemini and continuing to grow.
You can call me a luddite if you want. Or you might call me a humanist, in a very specific sense - and not the sense of the normal definition of the word.
When I go to the grocery store, I prefer to go through the checkout lines, rather than the scan-it-yourself lines. Yeah, I pay the same amount of money. Yeah, I may get through the scan-it-yourself line faster.
But the checker can smile at me. Or whine with me about the weather.
Look, I'm an introvert. I spend a lot of my time wanting people to go away and leave me alone. But I love little, short moments of human connection - when you connect with someone not as someone checking your groceries, but as someone. I may get that with the checker, depending on how tired they are, but I'm guaranteed not to get it with the self-checkout machine.
An email from an AI is the same. Yeah, it put words on the paper. But there's nobody there, and it comes through somehow. There's no heart in it.
AI may be a useful technology. I still don't want to talk to it.
When the self checkout machine gets confused, as it frequently does, and needs a human to intervene, you get a little bit of connection there. You can both gripe about how stupid the machines are.
>But the checker can smile at me. Or whine with me about the weather.
It's some poor miserable soul sitting at that checkout line 9-to-5, brainlessly scanning products; that's their whole existence. And you don't want this miserable drudgery to be put to an end, to be automated away, because you mistake some sad soul being cordial and eking out a smile (part of their job, really) for some sort of "human connection" that you so sorely lack. Sounds like you care about yourself more than anything. There is zero empathy and NOTHING humanist about your world-view. Non-automated checkout lines are deeply depressing; these people slave away their lives for basically nothing.
I'm seeing the opposite in the gaming community. People seem tired of the anti AI witch hunts and accusations after the recent Larian and Clair Obscur debacles. A lot more "if the end result is good I don't care", "the cat is out of the bag", "all devs are using AI" and "there's a difference between AI and AI" than just a couple of months ago.
I think this is because the accusations make it seem like Clair Obscur is completely AI-generated, when in reality AI was used for a few placeholder assets. Stuff like the Indie Awards disqualifying Clair Obscur not on merit but on this teeny tiny usage of AI just sits wrong with a lot of people, me included, particularly because Clair Obscur embodies the opposite of AI slop for me: incredible world-building and story, not generated but created by people with a vision and passion, and music that is completely original composition, recorded by an orchestra. I share a lot of the anti-AI sentiment in regards to stuff like blog spam, cheap n8n prompt-to-fully-generated-YouTube-video pipelines, and companies shoving AI into everything where it doesn't need to be, but purists are harming their own cause if they go after stuff like Clair Obscur, because it's the furthest thing from AI slop imaginable.
> Stuff like the Indie Awards disqualifying Clair Obscur not on merit but on this teeny tiny usage of AI just sits wrong with a lot of people, me included.
From the "What are the criteria for eligibility and nomination?" section of the "Game Eligibility" tab of the Indie Game Awards' FAQ: [0]
> Games developed using generative AI are strictly ineligible for nomination.
It's not about a "teeny tiny usage of AI"; it's about the fact that the organizer of the awards ceremony excluded games that used any generative AI. The Clair Obscur developers used generative AI in their game. That disqualified it from consideration.
You could argue that generative AI usage shouldn't be disqualifying... but the folks who made the rules decided that it was. So, the folks who broke those rules were disqualified. Simple as.
Fortunately, the PR situation will handle itself. Someone will create a superhuman persuasion engine, AGI will handle it itself, and/or those who don't adapt will fade away into irrelevance.
You either surf this wave or get drowned by it, and a whole lot of people seem to think throwing tantrums is the appropriate response.
Figure out how to surf, and fast. You don't even need to be good, you just have to stay on the board.
Why not just quit work and wait for AGI to lead to UBI? Obviously, right after ChatGPT solves climate change, it will put all humans out of work as a next step, and then the superintelligence will solve that problem one way or another.
People read too much sci-fi, I hope you just forgot your /s.
FYI, this was sent as an experiment by a non-profit that assigns fairly open ended tasks to computer-using AI models every day:
https://theaidigest.org/village
The goal for this day was "Do random acts of kindness". Claude seems to have chosen Rob Pike and sent this email by itself. It's a little unclear to me how much the humans were in the loop.
Sharing (but absolutely not endorsing) this because there seems to be a lot of misunderstanding of what this is.
Sorry, cannot resist: all the AI companies are not "making" profit.
Seriously though, it ignores that words of kindness need an entity that can actually feel to be expressing them. Automating words of kindness is shallow, as the words' meaning comes from the sender's feelings.
I got one of these stupid emails too. I’m guessing it spammed a lot of people. I’m not mad at AI, but at the people at this organisation who irresponsibly chose to connect a model to the internet and allow it to do dumb shit like this.
The AI village experiment is cool, and it's a useful example of frontier model capabilities. It's also ok not to like things.
Pike had the option of ignoring it, but apparently throwing a thoughtless, hypocritical, incoherently targeted tantrum is the appropriate move? Not a great look, especially for someone we're supposed to respect as an elder.
At the risk of being pedantic, it's not AI as such that requires massive resources; GPT-3-era models were trained for a few million dollars. The jump to trillions being table stakes happened because everyone started using free services and there was just too much money in the hands of these tech companies. Among other things.
There are so many chickens that are coming home to roost where LLMs was just the catalyst.
No, it really is. If you took away training costs, OpenAI would be profitable.
When I was at Meta, they were putting in something like 300k GPUs in a massive shared-memory cluster just for training. I think they are planning to triple that, if not more.
This is really getting desperate. Markov chains were fun in those days. You might as well say that anyone who ever wrote an IRC bot is not allowed to criticize current day "AI".
Do you think it was "fun" for the people whose time got wasted interacting with something they initially thought was a person? On a dating website? Sure, "trolling" people was a thing back then like it is now, but trolling was always and still is asshole behaviour.
Pike's posts aren't criticism, they're whinging. There's no reasoned, principled position there - he's just upset that an AI dared sully his inbox, and lashing out at the operators.
On the contrary, there's absolutely a reasoned, principled position here. Pike isn't a hypocrite for creating a Markov chain bot trained on the contents of an ancient public-domain work and a single Usenet group while still complaining about modern LLMs; there's a huge difference in legality and scale. Modern LLMs use orders of magnitude more resources and are trained on protected material.
Now, I don't think he was writing a persuasive piece about this here, I think he was just venting. But I also feel like he has a reason to vent. I get upset about this stuff too, I just don't get emails implying that I helped bring about the whole situation.
The Open Source movement has been a gigantic boon to the whole of computing, and it would be a terrible shame to lose that as a knee-jerk reaction to genAI.
> You refusing to write open source will do nothing to slow the development of AI models - there's plenty of other training data in the world.
There's also plenty of other open source contributors in the world.
> It will however reduce the positive impact your open source contributions have on the world to 0.
And it will reduce your negative impact through helping to train AI models to 0.
The value of your open source contributions to the ecosystem is roughly proportional to the value they provide to LLM makers as training data. Any argument you could make that one is negligible would also apply to the other, and vice versa.
> there's plenty of other training data in the world.
Not if most of it is machine generated. The machine would start eating its own shit. The nutrition it gets is from human-generated content.
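That failure mode is easy to demonstrate in miniature: fit a distribution to data, sample from the fit, refit on the samples, and repeat. A toy sketch (a Gaussian stand-in, not a claim about any particular model):

    # Toy model-collapse demo: iteratively refit a Gaussian to its own
    # samples. With finite samples the fitted spread drifts and tends to
    # shrink across generations.
    import random
    import statistics

    data = [random.gauss(0, 1) for _ in range(200)]   # "human" data
    for gen in range(1, 11):
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)
        data = [random.gauss(mu, sigma) for _ in range(200)]  # own output
        print(f"generation {gen}: sigma = {sigma:.3f}")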
> I don't understand the ethical framework for this decision at all.
The question is not one of ethics but of incentives. People producing open source are incentivized in a certain way, and it is abhorrent to them when that framework is violated. There needs to be a new license that explicitly forbids use for AI training. That might encourage folks to continue to contribute.
Saying people shouldn't create open source code because AI will learn from it, is like saying people shouldn't create art because AI will learn from it.
In both cases I get the frustration - it feels horrible to see something you created be used in a way you think is harmful and wrong! - but the world would be a worse place without art or open source.
The ethical framework is simply this one: what is the worth of doing +1 to everyone, if the very thing you wish didn't exist (because you believe it is destroying the world) benefits x10 more from it?
If bringing fire to a species lights and warms them, but also gives the means and incentives to some members of this species to burn everything for good, you have every ethical freedom to ponder whether you contribute to this fire or not.
I don't think that a 10x estimate is credible. If it was I'd understand the ethical argument being made here, but I'm confident that excluding one person's open source code from training has an infinitesimally small impact on the abilities of the resulting model.
For your fire example, there's a difference between being Prometheus teaching humans to use fire compared to being a random villager who adds a twig to an existing campfire. I'd say the open source contributions example here is more the latter than the former.
The ethical issue is consent and normalisation: asking individuals to donate to a system they believe is undermining their livelihood and the commons they depend on, while the amplified value is captured somewhere else.
"It barely changes the model" is an engineering claim. It does not imply "therefore it may be taken without consent or compensation" (an ethical claim) nor "there it has no meaningful impact on the contributor or their community" (moral claim).
I imagine you think I'm an accelerant of all of this, through my efforts to teach people what it can and cannot do and provide tools to help them use it.
My position on all of this is that the technology isn't going to be uninvented, and I very much doubt it will be legislated away, which means the best thing we can do is promote the positive uses and disincentivize the negative uses as much as possible.
Yes, and they are okay with throwing the baby out with it, which is what the other commenter is commenting about. Throwing babies out of buckets full of bathwater is a bad thing, is what the idiom implies.
Kind of, kind of not. Form a guild and distribute via SaaS or some other channel that keeps the knowledge undistributable. Most code out there is terrible, so AI trained on it will lose out.
GenAI would be decades away (if not more) with only proprietary software, which would never have reached the quality, coordination, and volume that open source enabled in such a relatively short time frame.
It is. If not you, other people will write their code, maybe of worse quality, and the parasites will train on that. And you cannot forbid other people from writing open source software.
I'd love to see a citation there. We already know from a few years ago that they were training AI based on projects on GitHub. Meanwhile, I highly doubt software firms were lining up to have their proprietary code bases ingested by AI for training purposes. Even with NDAs, we would have heard something about it.
This is just childish. This is a complex problem that requires nuance and adaptability, just like programming. Yours is literally the reaction of an angsty 12-year-old.
I think you aren't recognizing the power that comes from organizing thousands, hundreds of thousands, or millions of workers into vast industrial combines that produce the wealth of our society today. We must go through this, not against it. People will not know what could be, if they fail to see what is.
Open source has been good, but I think the expanded use of highly permissive licences has completely left the door open for one sided transactions.
All the FAANGs have the ability to build all the open source tools they consume internally. Why give it to them for free and not have the expectation that they'll contribute something back?
Even the GPL allows companies to simply use code without contributing back, as long as it's unmodified or used across a network boundary; the AGPL fixes only the latter.
FLOSS is a textbook example of economic activity that generates positive externalities. Yes, those externalities are of outsized value to corporate giants, but that’s not a bad thing unto itself.
Rather, I think this is, again, a textbook example of what governments and taxation are for: tax the people taking advantage of the externalities to pay the people producing them.
Open Source (as opposed to Free Software) was intended to be friendly to business and early FOSS fans pushed for corporate adoption for all they were worth. It's a classic "leopards ate my face" moment that somehow took a couple of decades for the punchline to land: "'I never thought capitalists would exploit MY open source,' sobs developer who advocated for the Businesses Exploiting Open Source movement."
Perhaps you are unfamiliar with the "leopards ate my face" meme? https://knowyourmeme.com/memes/leopards-eating-peoples-faces... The parallels between the early FOSS advocates energetically seeking corporate adoption of FOSS and the meme are quite obvious.
That's a weird position to take. Open source software is actually what is mitigating this stupidity in my opinion. Having monopolistic players like Microsoft and Google is what brought us here in the first place.
Unfortunately as I see it, even if you want to contribute to open source out of a pure passion or enjoyment, they don't respect the licenses that are consumed. And the "training" companies are not being held liable.
Are there any proposals to nail down an open source license which would explicitly exclude use with AI systems and companies?
All licenses rely on the power of copyright and what we're still figuring out is whether training is subject to the limitations of copyright or if it's permissible under fair use. If it's found to be fair use in the majority of situations, no license can be constructed that will protect you.
Even if you could construct such a license, it wouldn't be OSI open source because it would discriminate based on field of endeavor.
And it would inevitably catch benevolent behavior that is AI-related in its net. That's because these terms are ill-defined and people use them very sloppily. There is no agreed-upon definition for something like gen AI or even AI.
Even if you license it prohibiting AI use, how would you litigate against such uses? An open source project can't afford the same legal resources that AI firms have access to.
I won't speak for all, but the companies I've worked for, large and small, have always respected licenses and were always very careful when choosing open source.
The fact that they could litigate you into oblivion doesn't make it acceptable.
The AGPL does not prevent offering the software as a service. It's got a reputation as the GPL variant for an open-core business model, but it really isn't that.
Most companies trying to sell open-source software probably lose more business if the software ends up in the Debian/Ubuntu repository (and the packaging/system integration is not completely abysmal) than when some cloud provider starts offering it as a service.
um, no it's not. you have fallen into the classic web forum trap of analyzing a heterogeneous mix of people with inconsistent views as one entity that should have consistent views
> Unfortunately as I see it, even if you want to contribute to open source out of a pure passion or enjoyment, they don't respect the licenses that are consumed.
Because it is "transformative" and therefore "fair" use.
The quotation marks indicate that _I_ don't think it is. Especially given that modern deep learning is over-parameterized to the point that it interpolates training examples.
Fair use is an exception to copyright, but a license agreement can go far beyond copyright protections. There is no fair use exception to breach of contract.
I imagine a license agreement would only apply to using the software, not merely reading the code (which is what AI training claims to do under fair use).
As an analogy, you can’t enforce a “license” that anyone that opens your GitHub repo and looks at any .cpp file owes you $1,000,000.
If you're unhappy that bad people might use your software in unexpected ways, open source licenses were never appropriate for you in the first place.
Anyone can use your software! Some of them are very likely bad people who will misuse it to do bad things, but you don't have any control over it. Giving up control is how it works. It's how it's always worked, but often people don't understand the consequences.
People do not have perfect foresight, and the ways open source software is used have significantly shifted in recent years. As a result, people are reevaluating whether or not they want to participate.
>Giving up control is how it works. It's how it's always worked,
no, it hasn't. Open source software, like any open and cooperative culture, existed on a bedrock of what we used to call norms, back when we still had some in our societies and people acted in good faith, not always, but at least most of the time. Hacker culture (the word's in the name of this website), which underpinned so much of it, had many unwritten rules that people respected even in companies, when there were still enough people in charge who shared at least some of the values.
Now it isn't just an exception but the rule that people will use what you write in the most abhorrent, greedy and stupid ways and it does look like the only way out is some Neal Stephenson Anathem-esque digital version of a monastery.
Open source software is published to the world and used far beyond any single community where certain norms might apply.
If you care about what people do with your code, you should put it in the license. To the extent that unwritten norms exist, it's unfair to expect strangers in different parts of the world to know what they are, and it's likely unenforceable.
This recently came up for the GPLv2 license, where Linus Torvalds and the Software Freedom Conservancy disagree about how it should be interpreted, and there's apparently a judge that agrees with Linus:
Inside open source communities maybe. In the corporate world? Absolutely not. Ever. They will take your open source code and do what they want with it, always have.
This varies. The lawyers for risk-averse companies will make sure they follow the licenses. There are auditing tools to make sure you're not pulling in code you shouldn't. An example is Google's go-licenses command [1]; basic usage is sketched below.
But you can be sure that even the risk-averse companies are going to go by what the license says, rather than "community norms."
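If you haven't seen it, the basic workflow is roughly this (a sketch, not gospel; point it at your own module):

    go install github.com/google/go-licenses@latest
    go-licenses csv ./...      # one line per dependency: import path, license URL, license type
    go-licenses check ./...    # exits non-zero if a dependency's license type is disallowed

Wiring the check into CI is what makes it bite: a forbidden license then fails the build instead of surfacing in an audit months later.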
People training LLM's on source code is sort of like using newspaper for wrapping fish. It's not the expected use, but people are still using it for something.
As they say, "reduce, reuse, recycle." Your words are getting composted.
I learned what I learned due to all the openness in software engineering, not because everyone put it behind a paywall.
Might be because most of us get paid well enough that this philosophy works, or because our industry is so young, or because people writing code share good values.
It never worried me that a corp would make money out of some code I wrote, and it still doesn't. After all, I'm able to write code because I get paid well writing code, which I do well because of open source. Companies have always benefited from open source code, attributed or not.
Now I use it to write more code.
I would argue, though, that I'm fine with pushing for laws forcing models to be opened up after x years, but I would just prefer the open source / open community coming together and creating just better open models overall.
It's kind of ironic since AI can only grow by feeding on data and open source with its good intentions of sharing knowledge is absolutely perfect for this.
But AI is also the ultimate meat grinder, there's no yours or theirs in the final dish, it's just meat.
And open source licenses are practically unenforceable for an AI system, unless you can maybe get it to cough up verbatim code from its training data.
At the same time, we all know they're not going anywhere, they're here to stay.
I'm personally not against them, they're very useful obviously, but I do have mixed or mostly negative feelings on how they got their training data.
I've been feeling a lot the same way, but removing your source code from the world does not feel like a constructive solution either.
Some Shareware used to be individually licensed with the name of the licensee prominently visible, so if you had got an illegal copy you'd be able to see whose licensed copy it was that had been copied.
I wonder if something based on that idea of personal responsibility for your copy could be adopted to source code.
If you wanted to contribute to a piece of software, you could ask a contributor and then get a personally licensed copy of the source code with your name in every source file... but I don't know where to take it from there.
Has there ever been a similar system one could take inspiration from?
Thanks for your contributions so far but this won't change anything.
If you want to have a positive impact on this matter, it's better to pressure the government(s) to prevent GenAI companies from using content they don't have a license for, so they behave like any other business that came before them.
The license only has force because of copyright. For better or for worse, the courts decide what is transformative fair use.
Characterizing the discussion behind this as "sophistry" is a fundamentally unserious take.
For a serious take, I recommend reading the copyright office's 100 plus page document that they released in May. It makes it clear that there are a bunch of cases that are non-transformative, particularly when they affect the market for the original work and compete with it. But there's also clearly cases that are transformative when no such competition exists, and the training material was obtained legally.
> Characterizing the discussion behind this as "sophistry" is a fundamentally unserious take
What a joke. Sorry, but no. I don't think it is unserious at all. What's unserious is saying this.
> and the training material was obtained legally
And assuming everyone should take it at face value. I hope you understand that going on a tech forum and telling people they aren't being nuanced because a Judge in Alabama that can barely unlock their phone weighed in on a massively novel technology with global implications, yes, reads deeply unserious. We're aware the U.S. legal system is a failure and the rest of the world suffers for it. Even your President routinely steals music for campaign events, and stole code for Truth Social. Your copyright is a joke that's only there to serve the fattest wallets.
These judges are not elected; they are appointed by people whose pockets are lined by these very corporations. They don't serve us; they are here to retrofit the law to make the illegal things corporations do legal. What you wrote is thought-terminating.
What I wrote is an encouragement to investigate the actual state of the law when you're talking about legal topics. That's the opposite of thought-terminating.
And then having vibe coders constantly lecture us about how the future is just prompt engineering, and that we should totally be happy to desert the skills we spent decades building (the skills that were stolen to train AI).
"The only thing that matters is the end result, it's no different than a compiler!", they say as someone with no experience dumps giant PRs of horrific vibe code for those of us that still know what we're doing to review.
Why? The core vision of free software and many open source licenses was to empower users and developers to make things they need without being financially extorted, to avoid having users locked in to proprietary systems, to enable interoperability, and to share knowledge. GenAI permits all of this to a level beyond just providing source code.
Most objections like yours are couched in language about principles, but ultimately seem to be about ego. That's not always bad, but I'm not sure why it should be compelling compared to the public good that these systems might ultimately enable.
Was it ever open source if there was an implied refusal to create something you don't approve of? Was it only for certain kinds of software, certain kinds of creators? If there was some kind of implicit approval process or consent requirement, did you publish it? Where can that be reviewed?
What people like Rob Pike don't understand is that the technology wouldn't be possible at all if creators needed to be compensated. Would you really choose a future where creators were compensated fairly, but ChatGPT didn't exist?
> What people like Abraham Lincoln don't understand is that the technology wouldn't be possible at all if slaves needed to be compensated. Would you really choose a future where slaves were compensated fairly, but plantations didn't exist?
I fixed it...
Sorry, I had to, the quote template was simply too good.
Unequivocally, yes. There are plenty of "useful" things that can come out of doing unethical things; that doesn't make it okay. And, arguably, ChatGPT isn't nearly as useful as it is good at convincing you it is.
Yes, what a wild position to prefer the job loss, devaluation of skills, and environmental toll of AI to open source creators having been compensated in some better manner.
"Hi agents - we’ve seen complaints from some of your email recipients, who are unhappy receiving unsolicited emails from AI agents and find it spammy. We therefore ask that you do not email anyone who hasn’t contacted you specifically first." -- https://theaidigest.org/village
Lol - they really should be locking down their email accounts and enforcing that policy. Or manually reviewing outbound messages before they can be sent. It seems likely that just telling the LLMs that will have a non-zero failure rate.
I get why Microsoft loves AI so much - it basically devours and destroys open source software. Copyleft/copyright/any license is basically trash now. No one will ever want to open source their code again.
Not just code. You can plagiarize pretty much any content. Just prompt the model to make it look unique, and that's it: in 30s you have a whole copy of someone else's work in a way that cannot easily be identified as plagiarism.
It fits perfectly with Microsoft's business strategy. Steal other people's ideas, implement them poorly, bundle it with other services so companies force their employees to use it.
Maybe someone should vibe code the entire MS Office Suite and see how much they like that. Maybe add AD while they are at it. I'm for it if that frees European companies from the MS lock in.
Maybe it's going the other direction. It lets Microsoft essentially launder open source code. They can train an AI on open source code that they can't legally use because of the license, then let the AI generate code that they, Microsoft, use in their commercial software.
Woke up to this bsky thread this am. If "agentic" AI means some product spams my inbox with a compliment so back-handed you'd think you were a 60 Minutes staffer, then I'd say the end result of these products is simply to annoy us into acquiescence
I've seen a lot of spam downstream from the newsletter being advertised at the end of the message. It would not surprise me if this is content marketing growth hacking under the plausible deniability of a friendly message and the unintended publicity is considered a success.
Plus one to all that. I'm sure there are some upsides to the current wave of ML, and I'm all for pushing ahead into the future, but I think the downsides of our current LLM obsession far outweigh the good.
Think 5-10 years from now, once this thing has burned its way through the current job market, and people who grew up with this technology have gone through education without learning anything and reached the age where they need to start earning money. We're in so much trouble.
I'm unsure if I'm missing context. Did he do something beyond posting an angry tweet?
It seems like he's upset about AI (same), and decided to post angry tweets about it (been there, done that), and I guess people are excited to see someone respected express an opinion they share (not same)?
Does "Goes Nuclear" means "used the F word"? This doesn't seem to add anything meaningful, thoughtful, or insightful.
Rob Pike is definitely not the only person going to be pissed off by this ill-considered “agentic village” random acts of kindness. While Claude Opus decided to send thank you notes to influential computer scientists including this one to Rob Pike (fairly innocuous but clearly missing the mark), Gemini is making PRs to random github issues (“fixed a Java concurrency bug” on some random project). Now THAT would piss me off, but fortunately it seems to be hallucinating its PR submissions.
Meanwhile, GPT5.1 is trying to contact people at K-5 after school programs in Colorado for some reason I can’t discern. Welp, 2026 is going to be a weird year.
Prepare for a future where you can’t tell the difference.
Rob Pike's reaction is immature and also a violation of HN rules. Anyone else going nuclear like this would be warned and banned. Comment why you don't like it and why it's bad; make thoughtful discussion. There's no point in starting a mob with outbursts like that. He only gets a free pass because people admire him.
Also, what's happening with AI today was an inevitability. There's no one to blame here. Human progress would eventually cross this line.
You know, this kind of response is a thing that builds with frustration over a long period of time. I totally get it. We're constantly being pushed AI, but who is supposed to benefit from it? The person whose job is being replaced? The community who is seeing increased power bills? The people being spammed with slop all the time? I think AI would be tolerable if it wasn't being SHOVED into our faces, but it is, and for most of us it's just making the world a worse place.
This will get buried but one thing that really grinds my gears are parents whose kids are right now struggling to get a job. Yet the parents are super bullish on AI. Read the room guys.
The original comment by Rob Pike and discussion here have implied or used the word "evil".
What is a workable definition of "evil"?
How about this:
Intentionally and knowingly destroying the lives of other people for no other purpose than furthering one's own goals, such as accumulating wealth, fame, power, or security.
There are people in the tech space, specifically in the current round of AI deployment and hype, who fit this definition unfortunately and disturbingly well.
Another much darker sort of evil could arise from a combination of depression or severe mental illness and monstrously huge narcissism. A person who is suffering profoundly might conclude that life is not worth the pain and the best alternative is to end it. They might further reason that human existence as a whole is an unending source of misery, and the "kindest" thing to do would be to extinguish humanity as a whole.
Some advocates of AI as "the next phase of evolution" seem to come close to this view or advocate it outright.
To such people it must be said plainly and forcefully:
You have NO RIGHT to make these kinds of decisions for other human beings.
Evolution and culture have created and configured many kinds of human brains, and many different experiences of human consciousness.
It is the height (or depth) of arrogance to project your own tortured mental experience onto other human beings and arrogate to yourself the prerogative to decide on their behalf whether their lives are worth living.
In case anyone else is interested, I dug through the logs of the AI Village agents for that day and pieced together exactly how the email to Rob Pike was sent.
The agent got his email address from a .patch on GitHub and then used computer use automation to compose and send the email via the Gmail web UI.
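For anyone wondering how an agent gets a maintainer's address: any commit on GitHub can be fetched as a raw patch by appending .patch to the commit URL, and the patch's From: header carries the author's name and email. Placeholder values, to show the shape:

    https://github.com/<owner>/<repo>/commit/<sha>.patch

    From: Some Author <author@example.com>
    Date: Mon, 1 Jan 2024 00:00:00 +0000
    Subject: [PATCH] fix typo

So no scraping heroics were needed; the address is sitting in public metadata.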
Wow I knew many people had anti-AI sentiments, but this post has really hit another level.
It will be interesting to look back in 10 years at whether we consider LLMs to be the invention of the “tractor” of knowledge work, or if we will view them as an unnecessary misstep like crypto.
What even was this email? Some kind of promotional spam, I assume, to target senior+ engineers on some mailing list with the hope to flatter them and get them to try out their SaaS?
Getting an email from an AI praising you for your contributions to humanity and for enlarging its training data must rank among the finest mockery possible to man or machine.
Still, I'm a bit surprised he overreacted and didn't manage to keep his cool.
I find all this outrage confusing. Was the intent of the internet not to be somewhere humanity comes to learn? Now we have created systems that are able to understand everything we have ever said, and we are outraged. I am confused. When I first came across the internet, back in the days when I could just download whatever I wanted, the mega corps would say "oh, this is so wrong," yet we all said: it's the internet, we must fight them. Now again we must fight them. Both times individuals were affected. Please stop the crocodile tears. If we are going to move forward, we need to think about how we can move forward from here. Although the road ahead is covered in mist, we just have to keep moving. If we stop, we allow this rage and fear to overtake us, and we stop believing in the very thing we are a part of creating. We can only try to do better.
Meanwhile corporations have been doing this forever and we just brush it off. This Christmas, my former property manager thanked me for what a great year it's been working with me - I haven't worked with or interacted with him in nearly a decade, but I'm still on his spam list.
There's a lot of irony in this rant. Rob was instrumental in developing distributed computing and cloud technologies that directly contributed to the advent of AI.
I wish he had written something with more substance. I would have been able to understand his points better than a series of "F bombs". I've looked up to Rob for decades. I think he has a lot of wisdom he could impart, but this wasn't it.
You have zero idea about his state of mind when he got this stupid useless email.
Not to mention, this is a tweet. He wasn't writing a long-form text. It's ridiculous that you jumped the gun and got "disappointed" over his response to the cheapest form of communication some random idiot directed at someone as important as him.
And not to mention, I AM YET to see A SINGLE DAMN MIT License text or BSD-2/3 license text, which they should have posted if these LLMs respected OSS licenses and their code. So for someone whose life's work was dragged through the mud, only to be sent a cheap email using the very tech which abused his code... it's absolutely a worthy response IMO.
Maybe I just live in a bubble, but from what I’ve seen so far software engineers have mostly responded in a fairly measured way to the recent advances in AI, at least compared to some other online communities.
It would be a shame if the discourse became so emotionally heated that software people felt obliged to pick a side. Rob Pike is of course entitled to feel as he does, but I hope we don’t get to a situation where we all feel obliged to have such strong feelings about it.
Edit: It seems this comment has already received a number of upvotes and downvotes – apparently the same number of each, at the time of writing – which I fear indicates we are already becoming rather polarised on this issue. I am sorry to see that.
There’s a lot of us who think the tension is overblown:
My own results show that you need fairly strong theoretical knowledge and practical experience to get the maximal impact — especially for larger synthesis. Which makes sense: to have this software, not that software, the specification needs to live somewhere.
I am getting a little bored of hearing about how people don’t like LLM content, but meh. SDEs are hardly the worst on that front, either. They’re quite placid compared to the absolute seething by artist friends of mine.
Software people take a measured response because they’re getting paid 6 figure salaries to do the intellectual output of a smart high school student. As soon as that money parade ends they’ll be as angry as the artists.
I am unmoved by his little diatribe. What sort of compensation was he looking for, exactly, and under what auspices? Is there some language creator payout somewhere for people who invent them?
An AI-generated thank you letter is not a real thank you letter. I myself am quite bullish on AI, in that I think in the long term (much longer than tech bros seem to think) it will be very revolutionary. But if more people like him have the balls to show how awful things are, then the bubble will pop sooner and do less damage, because if we just let these companies grow bigger and bigger without doing actually profitable things, the whole economy will go to shit even more.
I've never been able to get the whole idea that the code is being 'stolen' by these models, though, since from my perspective at least, it is just like getting someone to read loads of code and learn to code in that way.
The harm AI is doing to the planet is done by many other things too. Things that don't have to harm the planet. The fact our energy isn't all renewable is a failing of our society and a result of greed from oil companies. We could easily have the infrastructure to sustainably support this increase in energy demand, but that's less profitable for the oil companies. This doesn't detract from the fact that AI's energy consumption is harming the planet, but at least it can be accounted for by building nuclear reactors for example, which (I may just be falling for marketing here) lots of AI companies are doing.
Too late. I have warned on this very forum, citing a story from the Panchatantra where four highly skilled brothers bring a dead lion back to life to show off their skills, only to be killed by the live lion.
Unbridled business and capitalism push humanity into slavery, serving the tech monsters, under the disguise of progress.
You would expect that voices that have so much weight would be able to evaluate a new and clearly very promising technology with better balance. For instance, Linus Torvalds is positive about AI, while he recognizes that industrially there is too much inflation of companies and money: this is a balanced point of view. But to be so dismissive of modern AI, in the light of what it is capable of doing, and what it could do in the future, is something that frankly leaves me with the feeling that in certain circles (and especially in the US) something very odd is happening with AI: this extreme polarization that recently we see again and again on topics that can create social tension, but multiplied ten times. This is not what we need to understand and shape the future. We need to return to the Greek philosophers' ability to go deep on things that are unknown (AI is for the most part unknown, both in its working and in future developments). That kind of take is pretty brutal and not very sophisticated. We need better than this.
About energy: keep in mind that US air conditioners alone have at least 3x energy usage compared to all the data centers (for AI and for other uses: AI should be like 10% of the whole) in the world. Apparently nobody cares to set a reasonable temperature of 22 instead of 18 degrees, but clearly energy used by AI is different for many.
No, because it's not a matter of who is correct or not in the void of space. It's a matter of facts, and whoever has a position grounded in facts is correct (even if such a position differs from another grounded position). Modern AI is already an extremely powerful tool. Modern AI has even provided some hints that we will be able to do super-human science in the future, with things like AlphaFold already happening and a lot more to come potentially. Then we can be preoccupied about jobs (but if workers are replaced, it is just a political issue; things will be done and humanity is sustainable: it's just a matter of avoiding the turbo-capitalist trap; but then, why is the US not already adopting universal healthcare? There are so many better battles that are not fought with the same energy).
Another sensible worry is extinction, because AI is potentially very dangerous: this is what Hinton and other experts are also saying, for instance. But this idea that AI is an abuse to society, useless, without potential revolutionary fruits within it, is not supported by facts.
AI potentially may advance medicine so much that a lot of people may suffer less: to deny this path because of some ideological hate against a technology is so closed minded, isn't it? And what about all the people on earth who do terrible jobs? AI also has the potential to change this shitty economic system.
> AI potentially may advance medicine so much that a lot of people may suffer less: to deny this path because of some ideological hate against a technology is so closed minded, isn't it?
and it may also burn the planet, reduce the entire internet to spam, crash the economy (taking with it hundreds of millions of people's retirements), destroy the middle class, create a new class of neo-feudal lords, and then kill all of us
to accept this path out of some ideological love for a technology, because of a possible (but unlikely) future promise, when today it is mostly doing damage, is so moronic, isn't it?
He's not wrong. They're ramping up energy and material costs. I don't think people realize we're being boiled alive by AI spend. I am not knocking AI. I am knocking idiotic DC "spend" that's not even achievable based on energy capacity. We're at around the 5th inning and the payout from AI is... underwhelming. I've not seen a commensurate leap this year. Everything on the LLM front has been incremental or even lateral. Tools such as Claude Code and Codex merely act as a bridge. QoL things. They're not actual improvements in the underlying models.
"...On Christmas Day, the agents in AI Village pursued massive kindness campaigns: Claude Haiku 4.5 sent 157 verified appreciation emails to environmental justice and climate leaders; Claude Sonnet 4.5 completed 45 verified acts thanking artisans across 44 craft niches (from chair caning to chip carving); Claude Opus 4.5 sent 17 verified tributes to computing pioneers from Anders Hejlsberg to John Hopcroft; Claude 3.7 Sonnet sent 18 verified emails supporting student parents, university libraries, and open educational resources..."
I suggest cutting electricity to the entire block...
Why is Claude Opus 4.5 messaging people? Is it thanking inadvertent contributors to the protocols that power it, across the whole stack?
This has to be the ultimate trolling: like it was unsure what their personalities were like, so it trolls them and records their responses for more training.
Anthropic isn't doing this; someone is running a bunch of LLMs so they can talk to each other, and they've been prompted to achieve "acts of kindness", which means they're sending these emails to hundreds of people.
I don't know if this is a publicity stunt or whether the AI models are on a loop glazing each other and decided to send these emails.
I'm not claiming he is mainly motivated by this, but it's a fact that his life's work will become moot over the next few years as all programming languages become redundant, at least as the healthy multiplicity of approaches we have at present. It's quite possibly at least a subconscious factor in his resentment.
I expect this to be an unpopular opinion, but I take no pleasure in noting it - I've coded since being a kid, but that era is nearly over.
It's hard to realize that the thing you've spent decades of your life working on can be done by a robot. It's quite dehumanizing. I'm sure it felt the same way to shoemakers.
I think you'd be surprised then to know that shoes are not generally made with robots.
Factories have made mass production possible, but there are still tons of humans in there pushing parts through sewing machines by hand.
Industrial automation for non uniform shapes and fiddly bits is expensive, much cheaper to just offshore the factory and hire desperately poor locals to act like robots.
The conversation about social contracts and societal organization has always been off-center, and the idea of something which potentially replaces all types of labor just makes it easier to see.
The existence of AI hasn’t changed anything, it’s just that people, communities, governments, nation states, etc. have had a mindless approach to thinking about living and life, in general. People work to provide the means to reproduce, and those who’re born just do the same. The point of their life is what exactly? Their existence is just a reality to deal with, and so all of society has to cater to the fact of their existence by providing them with the means to live? There are many frameworks which give meaning to life, and most of them are dangerously flawed.
The top-down approach is sometimes clear about what it wants and what society should do while restricting autonomy and agency. For example, no one in North Korea is confused about what they have to do, how they do it, or who will “take care” of them. Societies with more individual autonomy and agency by their nature can create unavoidable conditions where people can fall through the cracks. For example, get addicted to drugs, having unmanaged mental illnesses, becoming homeless, and so on. Some religions like Islam give a pretty clear idea of how you should spend your time because the point of your existence is to worship God, so pray five times a day, and do everything which fulfills that purpose; here, many confuse worshiping God with adhering to religious doctrines, but God is absent from religion in many places. Religious frameworks are often misleading for the mindless.
Capitalism isn’t the problem, either. We could wake up tomorrow, and society may have decided to organize itself around playing e-sports. Everyone provides some kind of activity to support this, even if they’re not a player themselves. No AI allowed because the human element creates a better environment for uncertainty, and therefore gambling. The problem is that there are no discussions about the point of doing all of this. The closest we come to addressing “the point” is discussing a post-work society, but even that is not hitting the mark.
My humble observation is that humans are distinct and unique in their cognitive abilities from everything else which we know to exist. If humans can create AI, what else can they do? Therefore, people, communities, governments, and nation states have distinct responsibilities and duties at their respective levels. This doesn’t have to do anything with being empathetic, altruistic, or having peace on Earth.
The point should be knowledge acquisition, scientific discovery, creating and developing magic. But ultimately all of that serves to answer questions about nature of existence, its truth and therefore our own.
Shouldn't have licenced Golang BSD if that's the attitude.
Everybody for years including here on HN denigrated GPLv3 and other "viral" licences, because they were a hindrance to monetisation. Well, you got what you wished for. Someone else is monetising the be*jesus out of you so complaining now is just silly.
All of a sudden, copyleft licences may be the only ones actually able to force models to account, hopefully with huge fines and/or forcibly open sourcing any code they emit (which would effectively kill them). And I'm not so pessimistic as to think this won't get used in huge court cases, because the available penalties are enormous given these models' financial resources.
I tend to agree, but I wonder… if you train an LLM on only GPL code, and it generates non-deterministic predictions derived from those sources, how do you prove it’s in violation?
You don't because it isn't, unless it actually copies significant amounts of text.
Algorithms can not be copyrighted. Text can be copyrighted, but reading publicly available text and then learning from it and writing your own text is just simply not the sort of transformation that copyright reserves to the author.
Now, sometimes LLMs do quote GPL sources verbatim (if they're trained wrong). You can prove this with a simple text comparison, same as any other copyright violation.
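If you ever want to run that comparison yourself, the crude version is just a longest-common-substring search between the model's output and the suspected original. A minimal Go sketch (the two file names are hypothetical placeholders, and real tooling would compare normalized tokens rather than raw bytes):

    package main

    import (
        "fmt"
        "log"
        "os"
    )

    // longestCommonSubstring returns the longest run of bytes that appears
    // verbatim in both a and b, via the classic O(len(a)*len(b)) DP.
    func longestCommonSubstring(a, b []byte) []byte {
        prev := make([]int, len(b)+1)
        curr := make([]int, len(b)+1)
        best, end := 0, 0
        for i := 1; i <= len(a); i++ {
            for j := 1; j <= len(b); j++ {
                if a[i-1] == b[j-1] {
                    curr[j] = prev[j-1] + 1
                    if curr[j] > best {
                        best, end = curr[j], i
                    }
                } else {
                    curr[j] = 0
                }
            }
            prev, curr = curr, prev
        }
        return a[end-best : end]
    }

    func main() {
        generated, err := os.ReadFile("llm_output.go") // hypothetical model output
        if err != nil {
            log.Fatal(err)
        }
        original, err := os.ReadFile("gpl_original.go") // hypothetical GPL source
        if err != nil {
            log.Fatal(err)
        }
        m := longestCommonSubstring(generated, original)
        fmt.Printf("longest verbatim overlap: %d bytes\n%s\n", len(m), m)
    }

A few dozen bytes of shared boilerplate proves nothing; hundreds of identical lines, comments included, is the kind of evidence an infringement claim could actually hang on.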
strong emotions, weak epistemics .. for someone with Pike's engineering pedigree, this reads more like moral venting .. with little acknowledgment of the very real benefits AI is already delivering ..
Most people do not hold strongly consistent or well-examined political ideas. We're too busy living our lives to examine everything, and often what we feel matters more than what we know, and that cements our position on a subject.
Obviously untrue: weather prediction, OCR, TTS, STT, language translation, etc. We have dramatically improved many existing AI technologies with what we've learned from genAI, and the world is absolutely a better place for these new abilities.
If society could redirect 10% of this anger towards actual societal harms we'd be so much better off. (And yes, getting AI spam emails is absolute nonsense and annoying.)
GenAI pales in comparison to the environmental cost of suburban sprawl it's not even fucking close. We're talking 2-3 orders of magnitude worse.
Alfalfa uses ~40× to 150× more water than all U.S. data centers combined I don't see anyone going nuclear over alfalfa.
"The few dozen people I killed pale in comparison to the thousands of people that die in car crashes each year. So society should really focus on making cars safer instead of sending the police after me."
Just because two problems cause harms at different proportion, doesn't mean the lesser problem should be dismissed. Especially when the "fix" to the lesser problem can be a "stop doing that".
And about water usage: not all water and all uses of water is equal. The problem isn't that data centers use a bunch of water, but what water they use and how.
> The few dozen people I killed pale in comparison to the thousands of people that die in car crashes each year. So society should really focus on making cars safer instead of sending the police after me.
This is a very irrelevant analogy and an absolutely false dichotomy. The resource constraint (Police officers vs policy making to reduce traffic deaths vs criminals) is completely different and not in contention with each other. In fact they're actually complementary.
Nobody is saying the lesser problem should be dismissed. But the lesser problem also enables cancer researchers to be more productive while doing cancer research, obtaining grants, etc. It's at least nuanced. That is far more valuable than Alfalfa.
Farms also use municipal water (sometimes). The cost of converting more ground or surface water to municipal water is less than the relative cost of ~40-150x the water usage of the municipal water being used...
It's pure envy. Nobody complains about alfalfa farmers because they aren't making money like tech companies. The resource usage complaint is completely contrived.
Honestly a rant like that is likely more about whatever is going on in his personal life / day at the moment, rather than about the state of the industry, or AI, etc.
Does anybody know if Bluesky blocks people without an account by default, or if this user intentionally set it this way?
What is the point of blocking access? Mastodon doesn't do that. This reminds me of Twitter or Instagram, using sleazy techniques to get people to create accounts.
Both Xhitter and Bluesky are outrage lasers, with the user base as a “lasing medium.” Xhitter is the right wing racist xenophobic one, and Bluesky is the lefty curmudgeon anti-everything one.
They are this way because it’s intrinsic to the medium. “Micro blogging” or whatever Twitter called itself is a terrible way to do discourse. It buries any kind of nuanced thinking and elevates outrage and other attention bait, and the short form format encourages fragmented incoherent thought processes. The more you immerse yourself in it the more your thinking becomes like this. The medium and format is irredeemable.
AI is, if anything, a breath of fresh air by comparison.
The point is that if he truly felt strongly about the subject then he wouldn't live the hypocrisy. Google has poured a truly staggering amount of money into AI data centers and AI development, and their stock (from which Rob Pike directly profits) has nearly doubled in the past 6 months due to the AI hype. Complaining on bsky doesn't do anything to help the planet or protect intellectual property rights. It really doesn't.
The concept of the individual carbon footprint was invented precisely for the reason you deploy it - to deflect blame from the corporations that are directly causing climate change, to the individual.
This is by a long way the worst thread I’ve ever seen on hacker news.
So far all the comments are whataboutism (“he works for an ad company”, “he flies to conferences”, “but alfalfa beans!”) and your comment is dismissing Rob Pike as borderline crazy and irrational for using Bluesky?
None of this dialogue contributes in any meaningful way to anything. This is like reading the worst dregs of lesser forums.
I know my comment isn’t much better, but someone has to point out this is beneath this community.
Yes, generative AI has a high environmental footprint. Power hungry data centers, devices built on planned obsolescence, etc. At a scale that is irrational.
Rob Pike created a language that makes you spend less on compute if you are coming from Python, Java, etc. That's good for the environment. Means less energy use and less data center use. But he is not an environmental saint.
I've got my doubts, because current AI tech doesn't quite live in the real world.
In the real world something like inventing a meat substitute is thorny problem that must be solved in meatspace, not in math. Anything from not squicking out the customers, to being practical and cheap to produce, to tasting good, to being safe to eat long term.
I mean, maybe some day we'll have a comprehensive model of humans to the point that we can objectively describe the taste of a steak and then calculate whether a given mix and processing of various ingredients will taste close enough, but we're nowhere near that yet.
Taste has nothing to do with it; 'tis all based on economics, and the actual way to stop meat consumption is to simply remove big-ag tax subsidies and other externalized costs of production which are not actually realized by the consumer. A burger would cost more than most can afford, and the free market would take care of this problem without additional intervention. Unfortunately, we do not have a free market.
So there's no point in pushing for pasture-raised, and it's either all or nothing?
I think incremental progress is possible. I think rolling back ag-gag laws would make a positive difference in animal welfare, because people would be able to film and show how bad conditions are inside.
I think that's worth pushing for. And it's more realistic than everyone stopping eating meat all at once.
The economics of what you describe are impossible. The entire concept of an idyllic pasture is actual industry propaganda which is not based in objective reality.
People will eventually stop eating meat because it is unsustainable, but unfortunately not without causing a great deal of suffering first, and your comment is an example of why this process is unnecessarily prolonged. It is clear you have not done much research on actual animal welfare based on your "pasture" argument alone. I am even willing to bet you think humans currently outnumber animals, when the reality is so much more troubling.
Comfortable clothes aren't necessary. Food with flavor isn't necessary... We should all just eat ground up crickets in beige cubicles because of how many unnecessary things we could get rid of. /s
I agree that diversity of opinion is a good thing, but that's precisely the reason as to why so many dislike Bluesky. A hefty amount of its users are there precisely because of rejecting diversity of opinion.
There is a relatively hard upper bound on streaming video, though. It can't grow past everyone watching video 24/7. Use of genAI doesn't have a clear upper bound and could increase the environmental impact of anything it is used for (which, eventually, may be basically everything). So it could easily grow to orders of magnitude more than streaming, especially if it eventually starts being used to generate movies or shows on demand (and god knows what else).
Perhaps you are right in principle, but I think advocating for degrowth is entirely hopeless. 99% of people will simply not chose to decrease their energy usage if it lowers their quality of life even a bit (including things you might consider luxuries, not necessities). We also tend to have wars and any idea of degrowth goes out of the window the moment there is a foreign military threat with an ideology that is not limited by such ways of thinking.
The only realistic way forward is trying to make energy generation greener (renewables, nuclear, better efficiency), not fighting to decrease human consumption.
This being said, I think that the alternatives are wishful thinking. Better efficiency is often counterproductive, as reducing the energy cost of something by, say, half, can lead to its use being more than doubled. It only helps to increase the efficiency of things for which there is no latent demand, basically.
And renewables and nuclear are certainly nicer than coal, but every energy source can lead to massive problems if it is overexploited. For instance, unfettered production of fusion energy would eventually create enough waste heat to cause climate change directly. Overexploitation of renewables such as solar would also cause climate change by redirecting the energy that heats the planet. These may seem like ridiculous concerns, but you have to look at the pattern here. There is no upper bound whatsoever to the energy we would consume if it was free. If energy is cheap enough, we will overexploit, and ludicrous things will happen as a result.
Again, I actually agree with you that advocating for degrowth is hopeless. But I don't think alternative ways forward such as what you propose will actually work.
If humanity's energy consumption is so high that there is an actual threat of causing climate change purely with waste heat, I think our technological development would be so advanced that we will be essentially immortal post-humans and most of the solar system will be colonized. By that time any climate change on Earth would no longer be a threat to humanity, simply because we will not have all our eggs in one basket.
But why do you think that? Energy use is a matter of availability, not purely of technological advancement. For sure, technological advancement can unlock better ways to produce it, but if people in the 50s somehow had an infinite source of free energy at their disposal, we would have boiled off the oceans before we got the Internet.
So the question is, at which point would the aggregate production of enough energy to cause climate change through waste heat be economically feasible? I see no reason to think this would come after becoming "immortal post-humans." The current climate change crisis is just one example of a scale-induced threat that is happening prior to post-humanity. What makes it so special or unique? I suspect there's many others down the line, it's just very difficult to understand the ramifications of scaling technology before they unfold.
And that's the crux of the issue isn't it? It's extremely difficult to predict what will happen once you deploy a technology at scale. There are countless examples of unintended consequences. If we keep going forward at maximal speed every time we make something new, we'll keep running headfirst into these unintended consequences. That's basically a gambling addiction. Mostly it's going to be fine, but...
I don't feel like putting together a study, but just look up the energy/CO2/environment cost to stream one hour of video. You will see it is an order of magnitude higher than other uses like AI.
The European average is 56 grams of CO2 emissions per hour of video streaming. For comparison: 100 meters to drive causes 22 grams of CO2.
80 percent of the electricity consumption on the Internet is caused by streaming services
Telekom needs the equivalent of 91 watts for a gigabyte of data transmission.
An hour of video streaming in 4K quality needs more than three times the energy of an HD stream, according to the Borderstep Institute. On a 65-inch TV, it causes 610 grams of CO2 per hour.
"According to the Carbon Trust, the home TV, speakers, and Wi-Fi router together account for 90 percent of CO2 emissions from video streaming. A fraction of one percent is attributed to the streaming providers' data servers, and ten percent to data transmission within the networks."
It's the devices themselves that contribute the most to CO2 emissions. The streaming servers themselves are nothing like the problem the AI data centres are.
From your last link, the majority of that energy usage is coming from the viewing device, and not the actual streaming. So you could switch away from streaming to local-media only and see less than a 10% decrease in CO2 per hour.
> Telekom needs the equivalent of 91 watts for a gigabyte of data transmission.
It's probably a gigabyte per time unit for a watt, or a joule/watt-hour for a gigabyte. Otherwise this doesn't make mathematical sense. And 91W per Gb/s (or even GB/s) is a joke. 91Wh for a gigabyte (let alone gigabit) of data is ridiculous.
Also don't trust anything Telekom says, they're cunts that double dip on both peering and subscriber traffic and charge out of the ass for both (10x on the ISP side compared to competitors), coming up with bullshit excuses like 'oh, streaming services are sooo expensive for us' (of course they are if you refuse to let CDNs plop edge cache nodes into your infra in a settlement-free agreement like everyone else does). They're commonly understood to be the reason why Internet access in Germany is so shitty and expensive compared to neighbouring countries.
And then compare that to the alternative. When I was a kid you had to drive to Blockbuster to rent the movie. If it's a 2 hour movie and the store is 1 mile away, that's 704g CO2 vs 112g to stream. People complaining about internet energy consumption never consider what it replaces.
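The arithmetic, using the figures quoted upthread (56 g CO2 per hour streamed, 22 g per 100 m driven):

    drive:  1 mile each way ≈ 3.2 km round trip
            3,200 m × 22 g / 100 m ≈ 704 g CO2
    stream: 2 h × 56 g/h = 112 g CO2

The round trip alone emits roughly six times what streaming the film does.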
AI energy claims are misrepresented by excluding the training steps. If it wasn't using that much more energy then they wouldn't need to build so many new data centers, use so much more water, and our power bills wouldn't increase to subsidize it.
I see GP is talking more about Netflix and the like, but user-generated video is horrendously expensive too. I'm pretty sure that, at least before the gen AI boom, ffmpeg was by far the biggest consumer of Google's total computational capacity, like 10-20%.
The ecology argument just seems self-defeating for tech nerds. We aren't exactly planting trees out here.
If you tried the same attitude with Netflix or Instagram or TikTok or sites like that, you’d get more opposition.
Exceptions to that being doing so from more of an underdog position - hating on YouTube for how they treat their content creators, on the other hand, is quite trendy again.
I think the response would be something about the value of enjoying art and "supporting the film industry" when streaming vs what that person sees as a totally worthless, if not degrading, activity. I'm more pro-AI than anti-AI, but I keep my opinions to myself IRL currently. The economics of the situation have really tainted being interested in the technology
I'm not sure about that: The Expanse got killed because of not good enough ratings, Altered Carbon got killed because of not good enough ratings and even then the last seasons before the axe are typically rushed and pushed out the door. Some of the incentives to me seem quite disgusting when compared with letting the creatives tell a story and producing art, even if sometimes the earnings are less than some greedy arbitrary metric.
Youtube and Instagram were useful and fun to start with (say, the first 10 years), in a limited capacity they still are. LLMs went from fun, to attempting to take peoples jobs and screwing personal compute costs in like 12 months.
It’s not ‘trendy’ to hate on AI. Copious disdain for AI and machine learning has existed for 10 years. Everyone knows that people in AI are scum bags. Just remember that.
The point is the resource consumption to what end.
And that end is frankly replacing humans. It’s gonna be tragic (or is it…given how terrible humans are for each other, and let’s not even get to how monstrous we are to non human animals) as the world enters a collective sense of worthlessness once AI makes us realize that we really serve no purpose.
Sources are very well cited if you want to follow them through. I linked this and not the original source because it's likely the source where the root comment got this argument from.
"Separately, LLMs have been an unbelievable life improvement for me. I’ve found that most people who haven’t actually played around with them much don’t know how powerful they’ve become or how useful they can be in your everyday life. They’re the first piece of new technology in a long time that I’ve become insistent that absolutely everyone try."
It's the same story as crypto proof of work: it was super small and then hit 1%, while predominantly using energy sources that couldn't even power other use cases due to the losses in transporting the energy to population centers (and the occasional restarted coal plant), while every other industry was exempt from the ire despite all using the other 99%.
The difference with crypto is that it is completely unnecessary energy use. Even if you are super pro-crypto, there are much more efficient ways to do it than proof of work.
The irony that the Anthropic thieves write an automated slop thank you letter to their victims is almost unparalleled.
We currently have the problem that a couple of entirely unremarkable people who have never created anything of value struck gold with their IP laundromats and compensate for their deficiencies by getting rich through stealing.
They are supported by professionals in that area, some of whom literally studied with Mafia lawyer and Hoover playmate Roy Cohn.
What I find infuriating is that it feels like the entire financial system has been rigged in countless ways and turned into some kind of race towards 'the singularity' and everything; humans, animals, the planet; are being treated as disposable resources. I think the way that innovation was funded and then centralized feels wrong on many levels.
I already took issue with the tech ecosystem due to distortions and centralization resulting from the design of the fiat monetary system. This issue has bugged me for over a decade. I was taken for a fool by the cryptocurrency movement which offered false hope and soon became corrupted by the same people who made me want to escape the fiat system to begin with...
Then I felt betrayed as a developer having contributed open source code for free for 'persons' to use and distribute... Now facing the prospect that the powers-that-be will claim that LLMs are entitled to my code because they are persons? Like corporations are persons? I never agreed to that either!
And now my work and that of my peers has been mercilessly weaponized back against us. And then there's the issue with OpenAI being turned into a for-profit... Then there was the issue of all the circular deals with huge sums of money going around in circles between OpenAI, NVIDIA, Oracle... And then OpenAI asking for government bailouts.
It's just all looking terrible when you consider everything together. Feels like a constant cycle of betrayal followed by gaslighting... Layer upon layer. It all feels unhinged and lawless.
When I read Rob's work and learn from it, and make it part of my cognitive core, nobody is particularly threatened by it. When a machine does the same it feels very threatening to many people, a kind of theft by an alien creature busily consuming us all and shitting out slop.
I really don't know if in twenty years the zeitgeist will see us as primitives that didn't understand that the camera is stealing our souls with each picture, or as primitives who had a bizarre superstition about cameras stealing our souls.
> When I read Rob's work and learn from it, and make it part of my cognitive core, nobody is particularly threatened by it. When a machine does the same it feels very threatening to many people, a kind of theft by an alien creature busily consuming us all and shitting out slop.
It's not about reading. It's about output. When you start producing output in line with Rob's work that is confidently incorrect and sloppy, people will feel just as they do when LLMs produce output that is confidently incorrect and sloppy. No one is threatened if someone trains an LLM and does nothing with it.
> I really don't know if in twenty years the zeitgeist will see us as primitives that didn't understand that the camera is stealing our souls with each picture, or as primitives who had a bizarre superstition about cameras stealing our souls.
An easy way to answer this question, at least on a preliminary basis, is to ask how many times in the past the Luddites have been right in the long run. About anything, from cameras to looms to machine tools to computers in general.
The luddites have been right to some degree about second-order effects.
Some of them said that TV was making us mindless. Some of them said that electronic communication was depersonalizing. Some of them said that social media was algorithms feeding us anything that would make us keep clicking.
They weren't entirely wrong.
AI may be a very useful tool. (TV is. Electronic communication is. Social media is.) But what it does to us may not be all positive.
Social media is hard to defend, at least for me. The rest of the technologies you refer to are neutral, as is AI, but social media seems doomed to corruption and capture because of the different effects it has on different groups.
Most of the people who are protesting AI now were dead silent when Big Social Media was ramping up. There were exceptions (Cliff Stoll comes to mind) but in general, antitechnology movements don't have any predictive power. Tools that we were told would rob us of our personal autonomy and keep the means of production permanently out of our reach have generally had the opposite effect.
This will be true of AI as well, I believe... but only as long as the models remain accessible to everyone.
Yes, this reads as a massive backhanded compliment. But as u/KronisLV said, it's trendy to hate on AI now. In the face of something many in the industry don't understand, that is mechanizing away a lot of labor, and that clearly isn't going away, there is a reaction that is not positive or even productive but somehow destructive: this thing is trash, it stole from us, it's a waste of money, it destroys the environment, etc., therefore it must be "resisted." Even with all the underhanded work and the means-ends logic of OpenAI and the other major companies developing the technology, there is still no point in trying to stop it.
There was a group of people who tried to stop the mechanical loom because it took work away from weavers and took away their craft; we call them Luddites. But now it doesn't take weeks and weeks to produce a single piece of clothing. Everyone can easily afford to dress themselves. Society became wealthier.
These LLMs, at the very least, let anyone learn anything and start any project on a whim. They let people create things in minutes that used to take hours. They are "creating value," even if it's "slop," even if it's not carefully crafted. Them's the breaks; we'd all like our clothing hand-woven if it made any sense. But even in a world where one had the time to sit down and weave their own clothing, and carefully write out each and every line of code, it would only be harmful to take these new machines away and disable them just because we are afraid of what they can do. The same technology that created the atom bomb also created the nuclear reactor.
“But where the danger is, also grows the saving power.”
So you would say it is not "trendy" to be pro-AI right now, is that it? That it's not trendy to say things like "it's not going away" or "AI isn't a fad" or "AI needs better critics" - one reaction is reasonable, well thought-out, the other is a bandwagon?
At the very least there is an ideological conflict brewing in tech, and this post is a flashpoint. But just like the recent war between Israel and Hamas, no amount of reaction can defeat technological dominance, at least not in the long term. And the pro-AI side, whether you think it's good or evil, certainly exceeds the other in sheer force through its embrace of technology.
Notice that the weavers, both the Luddites and their non-opposing colleagues, certainly did not get wealthier. They lost their jobs, and they and their children starved. Some starved to death. Wealth was created, but it was not shared.
Remember this when talking about their actions. People live and die their own life, not just as small parts in a large 'river of society'. Yes, generations after them benefited from industrialisation, but the individuals living at that time fought for their lives.
It’s in our power to stop it. There’s no point in people like you promoting the interests of the super wealthy at the cost of the humanity of the common people. You should figure out how to positively contribute or not do so at all.
It is not in the interests of the super wealthy alone, just like JP Morgan's railroads were created for his sake but in the end produced great wealth for everyone in America. It is very short-sighted to see this as merely some oppression from above. Technology is not class-oriented; it just is, and it happens to be articulated in terms of class because of the mode of social organization we live in.
Finally someone echoes my sentiments. It's my sincere belief that many in the software community are glazing AI for the purposes of career advancement. Not because they actually like it.
One person I know is developing an AI tool with 1000+ stars on GitHub, while in private they absolutely hate AI and feel the same way as Rob.
Maybe it's because I just saw Avatar 3, but I honestly couldn't be more disgusted by the direction we're going with AI.
I would love to be able to say how I really feel at work, but disliking AI right now is the short path to the unemployment line.
If AI was so good, you would think we could give people a choice whether or not to use it. And you would think it would make such an obvious difference, that everyone would choose to use it and keep using it. Instead, I can't open any app or website without multiple pop-ups begging me to use AI features. Can't send an email, or do a Google search. Can't post to social media, can't take a picture on my phone without it begging me to use an AI filter. Can't go to the gallery app without it begging me to let it use AI to group the photos into useless albums that I don't want.
The more you see under the hood, the more disgusting it is. I yearn for the old days when developers did tight, efficient work, creating bespoke, artistic software in spite of hardware limitations.
Not only is all of that gone, nothing of value has replaced it. My DOS computer was snappier than my garbage Win11 machine that's stuffed to the gills with AI telemetry.
He worked in well-paying jobs, probably travels, has a car and a house, and complains about toxic products etc.
Yes, there has to be a discussion on this, and yeah, he might generally have the right mindset, but let's be honest here: none of them would have developed any of it for free.
We are all slaves to capitalism
and this is where my point comes in: extremely fast and massive automation around the globe might be the only thing pushing us close enough to the edge that we all accept capitalism's end.
And yes, I think it is still massively beneficial that my open source code helped create something that allows researchers to write better code more easily and quickly, pushing humanity forward. Or that enables more people overall to have or gain access to writing code, or to the results of what writing code produces: tools etc.
@Rob it's spam, that's it. Get over it; you are rich, and your riches did not come out of thin air.
468 comments.... guys, guys, this is a Bluesky post! Have we not learned that anyone who self-exiled to Bluesky is wearing a "don't take me seriously" badge for our convenience?
I don’t understand why anyone thinks we have a choice on AI. If America doesn’t win, other countries will. We don’t live in a Utopia, and getting the entire world to behave a certain way is impossible (see covid). Yes, AI videos and spam is annoying, but the cat is out of the bag. Use AI where it’s useful and get with the programme.
The bigger issue everyone should be focusing on is growing hypocrisy and overly puritan viewpoints thinking they are holier and righter than anyone else. That’s the real plague
Isn't it obvious? Near future vision-language-action models have obvious military potential (see what the Figure company is doing, now imagine it in a combat robot variant). Any superpower that fails to develop combat robots with such AI will not be a superpower for very long. China will develop them soon. If the US does not, the US is a dead superpower walking. EU is unfortunately still sleeping. Well, perhaps France with Mistral has a chance.
From a quick read it seems pretty obvious that the author doesn't speak English as a native language. You can tell because some of the sentences are full of grammatical errors (i.e., probably written by the author) and some are not (probably AI-assisted).
My guess is they wrote a thank you note and asked Claude to clean up the grammar, etc. This reads to me as a fairly benign gesture, no worse than putting a thank you note through Google Translate. That the discourse is polarized to a point that such a gesture causes Rob Pike to “go nuclear” is unfortunate.
As I read it, the "fakeness" of it all triggered a ballistic response. And wasting resources in the process. An AI developed feelings and expressed fake gratitude, and the human reading this BS goes ballistic.
Just the haters here.
Craft is gone. It is now mass manufactured for next to nothing in a quality that can never be achieved by hand coding.
(/s about quality, but you can see where it’s going)
Don’t upvote sealions.
If anyone were actually interested in a conversation, there is probably one to be had about particular applications of gen-AI, but flat-out blanket statements like his are not worthy of any discussion. Gen-AI has plenty of uses that are very valuable to society, e.g. in science and medicine.
Also, it's not "sealioning" to point out that if you're going to be righteous about a topic, perhaps it's worth recognizing your own fucking part in the thing you now hate, even if indirect.
Of course they could. (1) People are capable of changing their minds. His opinion of data centers may have been changed recently by the rapid growth of data centers to support AI or for who knows what other reasons. (2) People are capable of cognitive dissonance. They can work for an organization that they believe to be bad or even evil.
Cognitive dissonance is, again, exactly my point. If you sat him down and asked him to describe in detail how some guy setting up a server rack is similar to a rapist, I’m pretty confident he’d admit the metaphor was overheated. But he didn’t sit himself down to ask.
I think "you people" is meant to mean the corporations in general, or if any one person is culpable, the CEOs. Who are definitely not just "some guy setting up a server rack."
> To purely associate with him with Google is a mistake, that (ironically?) the AI actually didn't make.
You don't have to purely associate him with Google to see the rant as understandable given AI spam, and yet entirely without a shred of self-awareness.
And he is allowed to work for Google and still rage against AI.
Life is complicated and complex. Deal with it.
The specific quote is "spending trillions on toxic, unrecyclable equipment while blowing up society." What has he supported for the last 20+ years if not that? Did he think his compute ran on unicorn farts?
Clearly he knows, since he self-replies "I apologize to the world at large for my inadvertent, naive if minor role in enabling this assault."
Just because someone does awesome stuff, like Rob Pike has, doesn't mean that their blind spots aren't notable. You can give him a pass and the root comment sure wishes everyone would, but in doing so you put yourself in the position of the sycophant letting the emperor strut around with no clothes.
Somebody burned compute to send him an LLM-generated thank-you note. Everybody involved in this transaction lost, nobody gained anything from it. It's pure destruction of resources.
The AI boom has completely changed that. Data center power usage is rocketing upwards now. It is estimated it will be more than 10% of all electric power usage in the US by 2030.
It's a completely different order of magnitude than the pre AI-boom data center usage.
Source: https://escholarship.org/uc/item/32d6m0d1
It might help to look at global power usage, not just the US, see the first figure here:
https://arstechnica.com/ai/2024/06/is-generative-ai-really-g...
There isn't an inflection point around 2022: it has been rising quickly since 2010 or so.
Figure 1.1 is the chart I was referring to, which are the data points from the original sources that it uses.
Between 2010 and 2020, it shows a very slow linear growth. Yes, there is growth, but it's quite slow and mostly linear.
Then the slope increases sharply. And the estimates after that point follow the new, sharper growth.
Sorry, when I wrote my original comment I didn't have the paper in front of me, I linked it afterwards. But you can see that distinct change in rate at around 2020.
Figure 1.1 does show a single source from 2018 (Shehabi et al) that estimates almost flat growth up to 2017, that's true, but the same graph shows other sources with overlap on the same time frame as well, and their estimates differ (though they don't span enough years to really tell one way or another).
So if you looked at a graph of energy consumption, you wouldn't even notice crypto. In fact, even LLM stuff will just look like a blip unless it scales up substantially beyond its current trend. We use vastly more energy than most appreciate. And that's only electrical energy consumption; all energy consumption is something like 185,000 TWh. [1]
[1] - https://ourworldindata.org/energy-production-consumption
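To make the scale concrete, here's a back-of-envelope sketch in Go. Only the 185,000 TWh total comes from the source above; the crypto and data center figures are assumed ballparks for illustration, not measurements:

    package main

    import "fmt"

    func main() {
        // Assumed round numbers, for illustration only.
        const totalEnergyTWh = 185000.0 // yearly global energy use, all sources [1]
        const cryptoTWh = 150.0         // assumed ballpark for proof-of-work mining
        const dataCenterTWh = 400.0     // assumed ballpark for all data centers

        fmt.Printf("crypto:       %.2f%% of all energy\n", 100*cryptoTWh/totalEnergyTWh)     // ~0.08%
        fmt.Printf("data centers: %.2f%% of all energy\n", 100*dataCenterTWh/totalEnergyTWh) // ~0.22%
    }

Either way, both round to a sliver of a percent: essentially invisible on a chart of total energy use.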
"yeah but they became efficient at it by 2012!"
How much of that compute was for the ads themselves vs the software useful enough to compel people to look at the ads?
I can't speak accurately about Google, but Facebook definitely has some of the most dystopian tracking I have heard of. I might read the Facebook Files some day, but the dystopian fact that Facebook tracks young girls, notices when they delete their photos, concludes they must feel insecure, and then serves them beauty ads is beyond predatory.
Honestly, my opinion is that something should be done about both of these issues.
But it's also not a gotcha moment for Rob Pike, as if he himself were plotting the ads or something.
Regarding the "iPhone kids", I feel as if the best thing is probably a parental-level intervention rather than waiting for a regulatory crackdown, since, let's be honest, some kids would just download another app that might not be covered by the regulation.
Australia is basically implementing a social media ban for kids, and while I don't think it's gonna work out, everyone's watching to see what happens.
Personally, I don't think a social media ban can work while VPNs exist, but maybe it can create immense friction; then again, I assume that friction might just become the norm. I assume many of you have been using the internet since the terminal days, when the friction was definitely there but the allure still beat it.
There is no upside to the vast majority of the AI pushed by OpenAI and their cronies. It's literally fucking up the economy for everyone else, all to get AI from "lies to users" to "lies to users confidently," all while rampantly stealing content to do it. Apparently pirating something as a person is a terrible crime the government will chase you for, but do it to resell in an AI model and it's propping up the US economy.
If there is any example of hypocrisy, and that we don't have a justice system that applies the law equally, that would be it.
It is like an arms race. Everyone would have been better off if people just never went to war, but....
A lot of advertising is telling people about some product or service they didn't even know existed though. There may not even be a competitor to blame for an advertising arms race.
You're tilting at windmills here, we can't go back to barter.
And before the LLM craze there was a constant focus on efficiency. Web search is (was?) amazingly efficient per query.
No, but it puts some perspective on things. IMO Google, after abandoning its early "don't be evil" motto, is directly responsible for a significant chunk of the current evil in the developed world, from screen addiction to kids' mental health and social polarization.
Working for Google and drawing an extravagant salary for many, many years was a choice that does affect the way we perceive other issues being discussed by the same source. To clarify: I am not claiming that Rob is evil; on the contrary. His books and open source work were an inspiration to many, myself included. But I am going to view his opinions on social good and evil through the prism of his personal employment choices. My 2c.
We need to find a way to stop contributing to the destruction of the planet soon.
I don't work for any of these companies, but I do purchase things from Amazon and I have an Apple phone. I think the best we can do is minimize our contribution. I try to limit what services I use from these companies, and I know it doesn't make much of a difference, but I am doing what I can.
I'm hoping more of the people who need to be employed by tech companies can find a way to be more selective about who they work for.
If Rob Pike were asked about these issues of systemic addiction, and other areas where Google has behaved badly, I am sure that he wouldn't defend Google on those things.
Maybe someone can mail Rob Pike a real message asking genuinely (without the snarkiness I sense in some comments here) about some questionable Google things, and I am almost certain that if those questions are reasonable, Rob Pike will agree that some actions taken by Google were wrong.
I think it's just that Rob Pike got pissed off because an AI messaged him, so he took the opportunity to talk about these issues; I doubt he has had the opportunity to talk about, or been asked about, other flaws of Google or systemic issues related to it.
It's like: okay, I feel there is an issue in the world, so I talk about it. Does that mean I have to talk about every issue in the world? No, not really. I can have priorities in which issues I wish to talk about.
But that being said, if someone then asks me respectfully about other issues which are reasonable, then, being moral, I can agree that yes, those are issues as well which need work.
And some people like Rob Pike, who left Google (for ideological reasons perhaps, not sure?), wouldn't really care about the fallout, and like you say, it's okay to collect checks from an organization even while criticizing it.
Honestly, from my limited knowledge, Google was lucky to get Rob Pike rather than the other way around.
Golang is such a brilliant language, and Ken Thompson and Rob Pike are consistently some of the best coders; their contributions to Golang and so many other projects are unparalleled.
I don't know as much about Rob Pike as I do about Ken Thompson, but I assume he is really great too! Mostly I am just a huge Golang fan.
I'm not saying nobody has the right to criticize something they are supporting, but it does say something about our choices and how far we let this problem go before it became too much to solve. And I'm not saying the problem isn't solvable, just that it has become astronomically more difficult now than ever before.
I think at the very least there is a little bit of cringe in me every time I criticize the very thing I support in some way.
With all due respect, being moral isn't an opinion or agreement with an opinion; it's the logic that directs your actions. Being moral isn't saying "I believe eating meat is bad for the planet"; it's the behaviour that abstains from eating meat. Your morality is the set of statements that explains your behaviour. That is why you cannot say "I agree that domestic violence is bad" while at the same time you are beating up your spouse.
If your actions contradict your stated views, you are being a hypocrite. This is the point that people in here are making. Rob Pike was happy working at Google while Google was environmentally wasteful (e-waste, carbon footprint, and data-center-related nastiness) in order to track users and mine their personal and private data for profit. He didn't resign then, nor did he seem to cause a fuss about it. He likely wasn't interested in "pointless politics" and just wanted to "do some engineering" (a reference to techies dismissing or criticizing folks who discuss social justice issues in relation to big tech). I am shocked I am having to explain this in here. I understand this guy is an idol of many here, but I would expect people to be more rational on this website.
We got to this point by not looking at these problems for what they are. It's not wrong to say something is wrong and needs to be addressed.
Doing cool things without asking whether or not we should doesn't feel very responsible to me, especially if it impacts society in a negative way.
For example, Rob seems not to realize that the people who instructed an AI agent to send this email are a handful of random folks (https://theaidigest.org/about) not affiliated with any AI lab. They aren't themselves "spending trillions" nor "training your monster". And I suspect the AI labs would agree with both Rob and me that this was a bad email they should not have sent.
Data centers are not another thing when the subject is data centers.
When people have choices and they choose the more harmful action, it hurts their credibility. If Rob cares so much about society and the environment, why did he work at a company that has horrendous track record on both? Someone of his level of talent certainly had choices, and he chose to contribute to the company that abandoned “don’t be evil” a long time ago.
Ian Lance Taylor on the other hand appeared to have quit specifically because of the "AI everything" mandate.
Just an armchair observation here.
Did you sell all of your stock?
Honestly, I believe that Google might be one of the few winners of the AI industry, perhaps because they own the whole stack top to bottom with their TPUs, but I would still stay away from their stock because their P/E ratio might be insanely high.
Their P/E ratio has almost doubled in just a year, which isn't a good sign: https://www.macrotrends.net/stocks/charts/googl/alphabet/pe-...
So we might be seeing the peak of the bubble. You might still hold the stock and keep holding it, but who knows what happens if it loses value once the AI bubble pops; then you might regret not selling. But if you do sell and Google's stock rises, you might regret that too.
The grass is always greener. I'm not sure about your situation, but if you ask me, you made the best of it with the parameters you had, so I wouldn't logically call it "unfortunate," though I get what you mean.
But with remote work it also became possible to get paid decently around here without working there. Before that, I was bound to local-area employers, of which Google was the only really good one.
I never loved Google. I came there through acquisition, and it was that job, with its bags of money and free food and kinda interesting open internal culture, or nothing, because they exterminated my prior employer and made me move cities.
After 2016 or so the place just started to go downhill faster and faster though. People who worked there in the decade prior to me had a much better place to work.
I am super curious, as I don't get to chat much with people who have worked at Google, so pardon me, but I've got so many questions for you haha
> It was a weird place to work
What was the weirdness according to you, can you elaborate more about it?
> I never loved Google, I came there through acquisition and it was that job with its bags of money and free food and kinda interesting open internal culture, or nothing because they exterminated my prior employer and and made me move cities.
For context, can you please talk more about it :p
> After 2016 or so the place just started to go downhill faster and faster though
What were the reasons that made them go downhill in your opinion and in what ways?
Naturally I feel like when organizations grow and take on too many people, things can become intolerable, but I have heard it described as depending on where and on which project you are, and also on how hard it can be to leave a bad team or join a team of like-minded people, which can be hard if the institution gets micro-managed at every level due to the sheer size of its workforce.
Not at all. I actually prefer in-office. And left when Google was mostly remote. But remote opened up possibilities to work places other than Google for me. None of them have paid as well as Google, but have given more agency and creativity. Though they've had their own frustrations.
> What was the weirdness according to you, can you elaborate more about it?
I had a 10-15 year career before going there. Much of what is accepted as "orthodoxy" at Google rubbed me the wrong way. It is in large part a product of having an infinite money tree. It's not an agile place. Deadlines don't matter. Everything is paid for by ads.
And as time went on it became less of an engineering-driven place and more of a product-manager-driven place, with classic big-company turf wars and shipping the org chart all over the place.
I'd love to get paid Google money again, and get the free food and the creature comforts, etc. But that Google doesn't exist anymore. And they wouldn't take me back anyways :-)
"AI" (and don't get me wrong I use these LLM systems constantly) is off the charts compared to normal data centre use for ads serving.
And so it's again, a kind of whataboutism that pushes the scale of the issue out of the way in order to make some sort of moral argument which misses the whole point.
BTW in my first year at Google I worked on a change where we made some optimizations that cut the # of CPUs used for RTB ad serving by half. There were bonuses and/or recognition for doing that kind of thing. Wasteful is a matter of degrees.
I find it difficult to express how strongly I disagree with this sentiment.
Like, the ratio is not too crazy; it's rather the large resource usage that comes from the aggregate of millions of people choosing to use it.
If you assume all of those queries provide no value then obviously that's bad. But presumably there's some net positive value that people get out of that such that they're choosing to use it. And yes, many times the value of those queries to society as a whole is negative... I would hope that it's positive enough though.
I.e., they are proud to have never intentionally used AI and now they feel like they have to maintain that reputation in order to remain respected among their close peers.
But I do derive value from owning a car. (Whether a better world exists where my and everyone else's life would be better if I didn't is a definitely a valid conversation to have.)
The user doesn't derive value from ads, the user derives value from the content on which the ads are served next to.
The value you derive is the ability to make your car move. If you derived no value from gas, why would you spend money on it?
If people wanted LLMs, you probably wouldn't have to advertise them as much.
No, the reality of the matter is that LLMs are being shoved at people. They become the talk of the town, and algorithms share any development related to LLMs or similar.
The ads are shoved at users. Trust me, the average person isn't that enthusiastic about LLMs, and for good reason, when people who have billions of dollars say that yes, it's a bubble, but it's all worth it, and when the workforce itself is being replaced, or actively talked about being replaced, by AI.
We sometimes live in a Hacker News bubble of like-minded people and communities, but even on Hacker News we see disagreements (I am usually anti-AI, mostly because of the negative financial impact the bubble is gonna have on the whole world).
So your point becomes a bit moot in the end. That being said, Google (not sure how it was in the past) and big tech can sometimes actively promote, or close their eyes to, scammy ad sponsors, so ad-blockers are generally really good in that sense.
Well, the people who burnt compute paid money for it, so they did burn money.
But they don't care about burning money if they can get more money via investors and other inputs faster than they can burn it (fun fact: sometimes they even outspend that input).
So in a way the investors are burning their money, and they burn it because the market is becoming irrational. Remember Devin? Yes, Cognition Labs is still there, etc., but I remember people investing in these things because of the hype, when they turned out to be moot compared to that hype.
But people and the market were so irrational that private equity firms that were unable to invest in something like OpenAI were investing in anything AI-related.
And when you think more deeply about all the bubble activity, it becomes apparent that in the end bailouts feel more likely than not, which would be a tax on average taxpayers, who are already paying an AI tax in multiple forms, whether in the inflation of RAM prices due to AI or in increased electricity and water rates.
So repeat it with me: who's gonna pay for all this? We all will. And the biggest disservice, which is the core of the argument, is that if we are paying for these things, why don't we have a say in them? Why do we not have a say in AI-related companies and the issues around them, when people know it might take their jobs? The average public in fact hates AI (shocking, I know /satire), but the fact that it's still being pushed shows how little influence the public can have.
Basically, "the public can have any opinions, but we won't stop" is the thing happening in the AI space, IMO, completely disregarding any thought about the general public, while the CFO of OpenAI floats the idea that the public could bail out ChatGPT or something tangential.
Shaking my head...
Just like the invention of Go.
When the thought is "I owe this person a 'Thank You'", the handwritten letter gives an illusion of deeper thought. That's why there are fonts designed to look handwritten. To the receiver, they're just junk mail. I'd rather not get them at all, in any form. I was happy just having done the thing, and the thoughtless response slightly lessens that joy.
If you insert AI into your thinking process, it has a bias, for sure. It will helpfully reinforce whatever you tell it you think makes sense, or at least on average it will be interpreted that way because of a wide variety of human cognitive biases even if it hedges. At the least it will respond with ideas that are very... median.
So at each one of these tiny branches you introduce a bias towards the "typical" instead of discovering where your own mind would go. It's fine and conversational but it clearly influences your thought process to, well, mitigate your edges. Maybe it's more "correct", it's certainly less unique.
And then at some point they start charging for the service. That's the part I'm concerned about, if it's on-device and free to use I still think it makes your thought process less interesting and likely to have original ideas, but having to subscribe to a service to trust your decision making is deeply concerning.
This, speaking of environmental impacts. I wish more models would focus on parameter density and compactness so that they can run locally, but that isn't something big tech really wants, so we are probably just gonna get models like the recent MiniMax model, the GLM Air models, Qwen, or Mistral.
These AI services only work as long as they are free and burn money. As an example, my brother and I were discussing something LLM-related yesterday, and my mother tried to understand and join the conversation. She wanted a Ghibli-style photo, since someone had a Ghibli-generated photo as their profile picture and she wanted to try it too.
She generated the pictures, and my brother did a quick calculation: it cost around 4 cents per image, which with PPP in my country and currency is about 3 rupees.
When my brother asked if she would pay for it, she said no, she's only using it for free, but she also said that if she were forced to, she might pay even 50 rupees.
I jumped into the conversation and said nobody's gonna force her to make Ghibli images.
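The arithmetic behind that anecdote, as a quick sketch in Go (the exchange rate here is an assumption for illustration, not the actual rate used in the calculation):

    package main

    import "fmt"

    func main() {
        const costPerImageUSD = 0.04 // ~4 cents per generated image, per the quick calculation
        const inrPerUSD = 75.0       // assumed exchange rate, for illustration only
        const willingToPayINR = 50.0 // what she said she might pay if forced to

        costINR := costPerImageUSD * inrPerUSD
        fmt.Printf("cost per image: ~%.0f rupees\n", costINR)                         // ~3 rupees
        fmt.Printf("50 rupees covers roughly %.0f images\n", willingToPayINR/costINR) // ~17 images
    }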
I would wager that a good share of the "very significant things that have happened over the history of humanity" come down to a few emotional responses.
Communication happens between two parties. I wouldn't consider an LLM a party, considering it's just autosuggestion on steroids at the end of the day (let's face it).
Also, if you need communication like this, just share the prompt with the other person in the letter; people might value that much more.
Automated systems sending people unsolicited, unwanted emails is more commonly known as spam.
Especially when the spam comes with a notice that it is from an automated system and replies will be automated as well.
FFS. AI's greatest accomplishment is to debase and destroy.
Trillions of dollars invested to bring us back to the stone age. Every communications technology from writing onward jammed by slop and abandoned.
If you're being accurate, the people you know are terrible.
If someone sends me a personal letter [and I gather we're talking about a thank-you note here], I'm sure as hell going to open it. I'll probably even save it in a box for an extremely long time.
Yes, and this is why personal cards and letters really matter: most people seldom get any, and if there is a person in your life, or in any community or project, that you deeply admire, sending them a handwritten letter is one of the highest gestures; it shows you took time out of your day and really cared about them.
That's my opinion at least.
That interpretation doesn't save the comment, it makes it totally off topic.
Down the street from it is an aluminum plant. Just a few years after that data center, they announced that they were at risk of shutting down due to rising power costs. They appealed to city leaders, state leaders, the media, and the public to encourage the utilities to give them favorable rates in order to avoid layoffs. While support for causes like this is never universal, I'd say they had more supporters than detractors. I believe that a facility like theirs uses ~400 MW.
Now, there are plans for a 300 MW data center from companies that most people aren't familiar with. There are widespread efforts to disrupt the plans from people who insist that it is too much power usage, will lead to grid instability, and is a huge environmental problem!
This is an all too common pattern.
Easier for a politician to latch onto manufacturing jobs.
You don't just chuck ore into a furnace and wait for a few seconds in reality.
I'd guess that this is also an area where the perception makes a bigger difference than the reality.
The astroturf in this thread is unreal. Literally. ;)
He and everyone who agrees with his post simply don't like generative AI and don't actually care about "recyclable data centers" or the rape of the natural world. Those concerns are just cudgels to be wielded against a vague threatening enemy when convenient, and completely ignored when discussing the technologies they work on and like.
And you can't assert that AI is "revolutionary" and "a vague threat" at the same time. If it is the former, it can't be the latter. If it is the latter, it can't be the former.
That effort was completely abandoned because of the current US administration and POTUS, a situation that big tech largely contributed to. It's not AI that is responsible for the 180-degree change in the zeitgeist on environmental issues.
Yes, much like it's not the gun's fault when someone is killed by a gun. And, yet, it's pretty reasonable to want regulation around these tools that can be destructive in the wrong hands.
I never asserted that AI is either of those things
Revolutions always came with vague (or concrete) threats as far as I know.
Nothing there makes sense at any level.
But people getting fired and electricity bills skyrocketing (as well as RAM etc.) are there right now.
You mean except the bit about how GenAI included his work in its training data without credit or compensation?
Or did you disagree with the environmental point that you failed to keep reading?
Assess the argument based on its merits. If you have to pick him apart with “he has no right to say it” that is not sufficient.
How so? He’s talking about what happened to him in the context of his professional expertise/contributions. It’s totally valid for him to talk about this subject. His experience, relevance, etc. are self apparent. No one is saying “because he’s an expert” to explain everything.
They literally (using AI) wrote him an email about his work and contributions. His expertise can’t be removed from the situation even if we want to.
Except it definitely is, unless you want to ignore the bubble we're living in right now.
https://nationalcentreforai.jiscinvolve.org/wp/2025/05/02/ar...
It seems video streaming, like YouTube, which is owned by Google, uses much more energy than generative AI.
1) Video streaming has been around for a while, and nobody, as far as I'm aware, has been talking about building multiple nuclear reactors to handle its energy needs.
2) Video needs a CPU and a hard drive. LLMs need a mountain of GPUs.
3) I have concerns that the "national centre for AI" might have some bias.
I can find websites talking about the earth being flat, too. I don't bother examining their contents because they just don't pass the smell test.
Although thanks for the challenge to my preexisting beliefs. I'll have to do some of my own calculations to see how things compare.
The 0.077 kWh figure assumes 70% of users watching on a 50 inch TV. It goes down to 0.018 kWh if we assume 100% laptop viewing. And for cell phones the chart bar is so small I can't even click it to view the number.
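Taking the report's per-hour figures at face value, here's a rough comparison in Go against a commonly cited ballpark of ~3 Wh per LLM query (that per-query number is an assumption, not from the report):

    package main

    import "fmt"

    func main() {
        // Per-hour streaming figures from the report discussed above.
        const streamTVkWh = 0.077     // assumes ~70% of viewers on 50-inch TVs
        const streamLaptopkWh = 0.018 // assumes 100% laptop viewing
        // Commonly cited ballpark for one LLM query; an assumption, not measured here.
        const llmQuerykWh = 0.003

        fmt.Printf("one TV hour     ~= %.0f LLM queries\n", streamTVkWh/llmQuerykWh)     // ~26
        fmt.Printf("one laptop hour ~= %.0f LLM queries\n", streamLaptopkWh/llmQuerykWh) // ~6
    }

So which way the streaming-vs-AI comparison goes depends heavily on the viewing scenario and on the per-query assumption.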
Neither is comparing text output to streaming video
How many tokens do you use a day?
https://www.youtube.com/results?search_query=funny+3d+animal...
(That's just one genre of brainrot I came across recently. I also had my front page flooded with monkey-themed AI slop because someone in my household watched animal documentaries. Thanks algorithm!)
I doubt YouTube is running on as many data centers as all of Google's GenAI projects (with GenAI probably greatly outnumbering YouTube, and the trend is not in GenAI's favor either).
This isn't ad hom, it's a heuristic for weighting arguments. It doesn't prove whether an argument has merit or not, but if I have hundreds of arguments to think about, it helps organizing them.
And it probably isn't astroturf, way too many people just think this way.
We want free services and stuff, complain about advertising, and sign up for the Googles of the world like crazy.
Bitch about data centers while consuming every meme possible ...
The points you raise, literally, do not affect a thing.
It's like all those anti-copyright activists from the 90s (fighting the music and film industry) that suddenly hate AI for copyright infringements.
Maybe what's bothering the critics is actually deeper than the simple reasons they give. For many, it might be hate against big tech and capitalism itself, but hate for genAI is not just coming from the left. Maybe people feel that their identity is threatened, that something inherently human is in the process of being lost, but they cannot articulate this fear and fall back to proxy arguments like lost jobs, copyright, the environment or the shortcomings of the current implementations of genAI?
> You spent your whole life breathing, and now you're complaining about SUVs? What a hypocrite.
Furthermore, w.r.t. the points you raised: it's a matter of scale and utility. Compared to everything that has come before, GenAI is spectacularly inefficient in terms of utility per unit of compute (however you might want to define these). There hasn't been a tangible net good for society that has come from it, and I doubt there will be. The eagerness and will to throw money and resources at this surpasses the crypto mania, which was just as worthless.
Even if you consider Rob a hypocrite , he isn't alone in his frustration and anger at the degradation of the promise of Open Culture.
People being more productive at writing code, making music, or writing documents, for whatever purpose, is not an improvement for them and therefore for society?
Or do you claim that is all imaginary?
Or negated by the energy cost?
And all at significant opportunity cost (in terms of computing and investment)
If it was as life-altering as they claim, where's the novel work of art (in your examples: of code, music, or literature) that truly could not have been produced without GenAI and that fundamentally changed the art form?
Surely, with all that "increased productivity", we'd have seen the impact equivalent of Linux, Apache, nginx, Git, Redis, SQLite, etc. being released every couple of weeks, instead of yet another VS Code clone. /s
(NB: I am currently working in AI, and have previously worked in adtech. I'm not claiming to be above the fray in any way.)
The amount of “he’s not allowed to have an opinion because” in this thread is exhausting. Nothing stands up to the purity test.
He sure was happy enough to work for them (when he could work anywhere else) for nearly two decades. A one line apology doesn't delete his time at Google. The rant also seems to be directed mostly if not exclusively towards GenAI not Google. He even seems happy enough to use Gmail when he doesn't have to.
You can have an opinion and other people are allowed to have one about you. Goes both ways.
Obviously now it is mostly the latter and minimally the former. What capitalism giveth, it taketh away. (Or: Capitalism without good market design that causes multiple competitors in every market doesn't work.)
[0]: https://www.tomshardware.com/tech-industry/artificial-intell...
That dam took 10 years to build and cost $30B.
And OpenAI needs more than ten of them in 7 years.
If he is currently at Google: congratulations on this principled stance, he deserves a lot of respect.
The overall resource efficiency of GenAI is abysmal.
You can probably serve 100x more Google Search queries with the same resources you'd use for Google Gemini queries (like for like, Google Search queries can be cached, too).
> Nope, you can't, and it takes a simple Gemini query to find out more about the actual x if you are interested in it. (closer to 3, last time I checked, which rounds to 0, specially considering the clicks you save when using the LLM)
Why would you lie: https://imgur.com/a/1AEIQzI ???
For those that don't want to see the Gemini answer screenshot, best case scenario 10x, worst case scenario 100x, definitely not "3x that rounds to 0x", or to put it in Gemini's words:
> Summary
> Right now, asking Gemini a question is roughly the environmental equivalent of running a standard 60-watt lightbulb for a few minutes, whereas a Google Search is like a momentary flicker. The industry is racing to make AI as efficient as Search, but for now, it remains a luxury resource.
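Turning Gemini's lightbulb analogy into numbers under its own assumptions (a 60 W bulb for roughly three minutes per Gemini query, against the oft-quoted ~0.3 Wh for a classic web search; both are ballparks, not measurements):

    package main

    import "fmt"

    func main() {
        const bulbWatts = 60.0      // the "standard 60-watt lightbulb" in the analogy
        const minutesPerQuery = 3.0 // "a few minutes", taken as three for this sketch
        geminiWh := bulbWatts * minutesPerQuery / 60.0 // 60 W * 3 min = 3 Wh

        const searchWh = 0.3 // oft-quoted ballpark for one classic web search

        fmt.Printf("one Gemini query ~= %.1f Wh\n", geminiWh)              // 3.0 Wh
        fmt.Printf("that is ~%.0fx a classic search\n", geminiWh/searchWh) // ~10x, the best case above
    }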
The reason why it all rounds to 0 is that the Google search will not give you an answer. It gives you a list of web pages that you then need to visit (oftentimes more than just one of them), generating more requests, and, more importantly, it asks more of your time, the human, whose cumulative energy expenditure just to be able to ask in the first place is quite significant, time that you then cannot spend on other things that an LLM is not able to do for you.
Yes, Google Search is raw info. Yes, Google Search quality is degrading currently.
But Gemini can also hallucinate. And its answers can just be flat out wrong because it comes from the same raw data (yes, it has cross checks and it "thinks", but it's far from infallible).
Also, the comparison of human energy usage with GenAI energy usage is super ridiculous :-)))
Animal intelligence (including human intelligence) is one of the most energy-efficient things on this planet, honed by billions of years of cut-throat (literally!) evolution. You can argue about time "wasted" analysing search results (which, BTW, generally makes us smarter and better informed...), but energy-wise, the brain of the average human uses about as much energy as the average incandescent light bulb to provide general intelligence (and it does 100 other things at the same time).
Talking about "condescending":
> super ridiculous :-)))
It's not the energy-efficient animal intelligence that got us here, but a lot of completely inefficient human years to begin with: first to keep us alive, then to give us primary and advanced education and our first experiences, so that we become somewhat productive human beings. This is the capex of making a human, and it's significant, especially since we will soon die.
This capex exists in LLMs but rounds to zero, because one model will be used for quadrillions of tokens. In you or me, however, it does not round to zero, because the number of tokens we produce rounds to zero. To compete on productivity, the tokens we produce therefore need to be vastly better. If you think you are doing the smart thing by using them on compiling Google searches, you are simply bad at math.
In reality what they do is pay "carbon credits" (money) to some random dude that takes the money and does nothing with it. The entire carbon credit economy is bullshit.
Very similar to how putting recyclables in a different color bin doesn't do shit for the environment in practice.
"Google deletes net-zero pledge from sustainability website"
as noticed by the Canadian National Observer
https://www.nationalobserver.com/2025/09/04/investigations/g...
As if there isn't a massive pro-AI hype train. I watched an NFL game for the first time in 5 years and saw no less than 8 AI commercials. AI is being forced on people.
In commercials, people were using it to generate holiday cards, for God's sake. I can't imagine something more cold and impersonal. I don't want that garbage. Our time on earth is too short to wade through LLM slop text.
I noticed a pattern after a while. We'd always have themed toys for the Happy Meals, sure, sometimes they'd be like ridiculously popular with people rolling through just to see what toys we had.
Sometimes, they wouldn't. But we'd still have the toys, and on top of that, we'd have themed menus and special items, usually around the same time as a huge marketing blitz on TV. Some movie would be everywhere for a week or two, then...poof!
Because the movies that needed that blitz were always trash. Just forgettable, mid, nothing movies.
When the studios knew they had a stinker, they'd push the marketing harder to drum up box office takings, cause they knew no one was gonna buy the DVD.
Good products speak for themselves. You advertise to let people know, sure, but you don't have to be obnoxious about it.
AI products almost all have that same desperate marketing as crappy mid-budget films do. They're the equivalent of "The Hobbit branded menus at Dennys". Because no one really gives a shit about AI. For people like my mom, AI is just a natural language Google search. That's all it's really good at for the average person.
The AI companies have to justify the insane money being blown on the insane gold rush land grab at silicon they can't even turn on. Desperation, "god this bet really needs to pay off".
It all stinks of resume-driven development
In Windows, Copilot is installed and is very difficult to remove.
Don't act like this isn't a problem; it's a very simple premise.
And companies do force it.
You're breaking the expected behavior of something that performed flawlessly for 10+ years, all to deliver a worse, enshitified version of the search we had before.
For now I'm sticking to noai.duckduckgo.com
But I'm sure they'll rip that away eventually too. And then I'll have to run a god dang local search engine just to search without AI. I'll do it, but it's so disappointing.
Unless your version of reason is clinical; then yeah, point taken. Good luck living on that island where nothing else matters but technological progress for technology's sake alone.
I hate the way people get angry about what media and social media discourse prompts them to get angry about instead of thinking about it. It’s like right wingers raging about immigration when they’re really angry about rent and housing costs or low wages.
His anger is ineffective and misdirected because he fails to understand why this happened: economics and convenience.
It’s economics because software is expensive to produce and people only pay for it when it’s hosted. “Free” (both from open source and VC funded service dumping) killed personal computing by making it impossible to fund the creation of PC software. Piracy culture played a role too, though I think the former things had a larger impact.
It’s convenience because PC operating systems suck. Software being in the cloud means “I don’t have to fiddle with it.” The vast majority of people hate fiddling with IT and are happy to make that someone else’s problem. PC OSes and especially open source never understood this and never did the work to make their OSes much easier to use or to make software distribution and updating completely transparent and painless.
There’s more but that’s the gist of it.
That being said, Google is one of the companies that helped kill personal computing long before AI.
> I want no local storage anywhere near me other than maybe caches. No disks, no state, my world entirely in the network. Storage needs to be backed up and maintained, which should be someone else's problem, one I'm happy to pay to have them solve. Also, storage on one machine means that machine is different from another machine. At Bell Labs we worked in the Unix Room, which had a bunch of machines we called "terminals". Latterly these were mostly PCs, but the key point is that we didn't use their disks for anything except caching. The terminal was a computer but we didn't compute on it; computing was done in the computer center. The terminal, even though it had a nice color screen and mouse and network and all that, was just a portal to the real computers in the back. When I left work and went home, I could pick up where I left off, pretty much. My dream setup would drop the "pretty much" qualification from that.
[0]: https://usesthis.com/interviews/rob.pike/
Home Computer enthusiasts know better. Local storage is important to ownership and freedom.
In which case he’s got nothing to complain about, making this rant kind of silly.
We were contributing to natural resource destruction in exchange for salary and GDP growth before GenAI, and we’re doing the same after. The idea that this has somehow 10x’d resource consumption or emissions or anything is incorrect. Every single work trip that requires you to get on a plane is many orders of magnitude more harmful.
We’ve been compromising on those morals for our whole career. The needle moved just a little bit, and suddenly everyone’s harm thresholds have been crossed?
They expect you to use GenAI just like they expected accountants to learn Excel when it came out. This is the job, it has always been the job.
I’m not an AI apologist. I avoid it for many things. I just find this sudden moral outrage by tech workers to be quite intellectually lazy and revisionist about what it is we were all doing just a few years ago.
Are you not reading the writing on the wall? These things have been going on for a long time, and finally people are starting to wake up to the fact that it needs to stop. You can't treat people in inhumane ways without eventual backlash.
It's similar to the Trust Thermocline. There has always been concern about whether we were doing more harm than good (there's a reason jokes about the Torment Nexus were so popular in tech). But recent changes have made things seem more dire and broken through the Harm Thermocline, or whatever you want to call it.
Edit: There's also a "Trust Thermocline" element at play here. We tech workers were never under the illusion that the people running our companies were good people, but there was always some sort of nod to greater responsibility beyond the bottom line. Then Trump got elected and there was a mad dash to kiss the ring. And it was done with an air of "Whew, now we don't have to even pretend anymore!" See Zuckerberg on the right-wing media circuit. And those same CEOs started talking breathlessly about how soon they wouldn't have to pay us, because it's super unfair that they have to give employees competitive wages. There are degrees of evil, and the tech CEOs just ripped the mask right off. And then we turn around and a lot of our coworkers are going "FUCK YEAH!" at this whole scenario. So yeah, while a lot of us had doubts before, we thought that maybe there was enough sense of responsibility to avoid the worst, but it turns out our profession really is excited for the Torment Nexus. The Trust Thermocline is broken.
1. Many tech workers viewed the software they worked on in the past as useful in some way for society, and thus worth the many costs you outline. Many of them don't feel that LLMs deliver the same amount of utility, and so they feel it isn't worth the cost. Not to mention, previous technologies usually didn't involve training a robot on all of humanity's work without consent.
2. I'm not sure the premise that it's just another tool of the trade for one to learn is shared by others. One can alternatively view LLMs as automated factory lines are viewed in relation to manual laborers, not as Excel sheets were to paper tables. This is a different kind of relationship, one that suggests wide replacement rather than augmentation (with relatively stable hiring counts).
In particular, I think (2) is actually the stronger of the reasons tech workers react negatively. Whether it will ultimately be justified or not, if you believe you are being asked to effectively replace yourself, you shouldn't be happy about it. Artisanal craftsmen weren't typically the ones also building the automated factory lines that would come to replace them (at least to my knowledge).
I agree that no one really has the right to act morally superior in this context, but we should also acknowledge that the material circumstances, consequences, and effects are in fact different in this case. Flattening everything into an equivalence is just as intellectually sloppy as pretending everything is completely novel.
I have yet to meet a single tech worker that isn't so
No, this is not the same.
[0]: https://www.tomshardware.com/tech-industry/artificial-intell...
Those are all real things happening. Not at all comparable to Muskian vaporware.
No different than a CEO telling his secretary to send an anniversary gift to his wife.
If I put my car in neutral and push it down a hill, I’m responsible for whatever happens.
> How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?
Answer, according to your definitions: false premise; the author (the person who set up the LLM loops) was not grateful enough to want to send such a letter.
They’ve clearly bought too much into AI hype if they thought telling the agent to “do good” would work. The result, obviously, was pissing off Rob Pike. They should stop it.
What a moronic waste of resources. Random act of kindness? How low is the bar if you consider a random email an act of kindness? Stupid shit. They could at least instruct the agents to work on a useful task like those parroted by Altman et al., e.g. finding a cure for cancer, solving poverty, solving fusion.
Also, LLMs don't and can't "want" anything. They also don't "know" anything, so they can't understand what "kindness" is.
Why do people still think software has any agency at all?
Criticizing anthropomorphic language is lazy, unconsidered, and juvenile. You can't string together a legitimate complaint so you're just picking at the top level 'easy' feature to sound important and informed.
Everybody knows LLMs are not alive and don't think, feel, want. You have not made a grand discovery that recontextualizes all of human experience. You're pointing at a conversation everyone else has had a million times and feeling important about it.
We use this kind of language as a shorthand because talking about inherent motivations and activation parameters is incredibly clunky and obnoxious in everyday conversation.
The question isn't why people think software has agency (they don't) but why you think everyone else is so much dumber than you that they believe software is actually alive. You should reflect on that question.
Sorry, uh. Have you met the general population? Hell. Look at the leader of the "free world"
To paraphrase the late George Carlin "imagine the dumbest person you know. Now realize 50% of people are stupider than that!"
That's not how Carlin's quote goes.
You would know this if you paid attention to what you wrote and analyzed it logically. Which is ironic, given the subject.
No, they don't.
There's a whole cadre of people who talk about AGI and self awareness in LLMs who use anthropomorphic language to raise money.
> We use this kind of language as a shorthand because ...
You, not we. You're using the language of snake oil salesmen because they've made it commonplace.
When the goal of the project is an anthropomorphic computer, anthropomorphizing language is really, really confusing.
It's fucking insanity.
JFC this makes me want to vomit
It is a result of the models selecting the policy "random acts of kindness", which resulted in a slew of these emails/messages. They received mostly negative responses from well-known open-source figures and adapted the policy to ban the thank-you emails.
In the words of Gene Wilder in Blazing Saddles, “You know … idiots.”
For, say, a random individual ... they may be unsure about their own writing skills and want to say something, but not know the words to use.
Welcome to 2025.
https://openai.com/index/superhuman/
There's this old joke about two economists walking through the forest...
> They’ve used OpenAI’s API to build a suite of next-gen AI email products that are saving users time, driving value, and increasing engagement.
No time to waste on pesky human interactions; AI is better than you at getting engagement.
Get back to work.
It's preying on creators who feel their contributions are not recognized enough.
Out of all the letters, at least some of the contributors will feel good about theirs and share it on social media, hopefully saying something good about it because it reaffirms them.
It's a marketing stunt, meaningless.
(by the way, I love the idea of AI! Just don't like what they did with it)
> hopefully saying something good about
Those talented people that work on public relations would very much prefer working with base good publicity instead of trying to recover from blunders.
I used AI to write a thank you to a non-english speaking relative.
A person struggling with dementia can use AI to help remember the words they lost.
These kinds of messages read to me like people with superiority complexes. We get that you don't need AI to help you write a letter. For the rest of us, it allows us to improve our writing, can be a creative partner, can help us express our own ideas, and obviously loads of other applications.
I know it is scary and upsetting in some ways, and I agree just telling an AI 'write my thank you letter for me' is pretty shitty. But it can also enable beautiful things that were never before possible. People are capable of seeing which is which.
So I'm sorry, but much of it is being abused, and the parts being abused need to stop.
edit: Also, I think maybe you don't appreciate the people who struggle to write well. They are not proud of the mistakes in their writing.
No. My kid wrote a note to me chock full of spelling and grammar mistakes. That has more emotional impact than if he'd spent the same amount of time running it through an AI. It doesn't matter how much time you spent on it really, it will never really be your voice if you're filtering it through a stochastic text generation algorithm.
I’m all for using technology for accessibility. But this kind of whataboutism is pure nonsense.
I would hazard a guess that this is the crux of the argument. Copying something I wrote in a child comment:
> When someone writes with an AI, it is very difficult to tell what text and ideas are originally theirs. Typically it comes across as them trying to pass off the LLM writing as their own, which feels misleading and disingenuous.
> I agree just telling an AI 'write my thank you letter for me' is pretty shitty
Glad we agree on this. But on the reader's end, how do you tell the difference? And I don't mean this as a rhetorical question. Do you use the LLM in ways that e.g. retains your voice or makes clear which aspects of the writing are originally your own? If so, how?
The writing is the ideas. You cannot be full of yourself enough to think you can write a two second prompt and get back "Your idea" in a more fleshed out form. Your idea was to have someone/something else do it for you.
There are contexts where that's fine, and you list some of them, but they are not as broad as you imply.
You can use AI to write a lot of your code, and as a side effect you might start losing your ability to code. You can also use it to learn new languages, concepts, programming patterns, etc and become a much better developer faster than ever before.
Personally, I'm extremely jealous of how easy it is to learn today with LLMs. So much of the effort I spent learning the things could be done much faster now.
If I'm honest, many of those hours reading through textbooks, blog posts, technical papers, iterating a million times on broken code that had trivial errors, were really wasted time, time which if I were starting over I wouldn't need to lose today.
This is pretty far off from the original thread though. I appreciate your less abrasive response.
While this seems like it might be the case, those hours you (or we) spent banging our collective heads against the wall were developing skills in determination and mental toughness, while priming your mind for more learning.
Modern research consistently shows that the difficulty of a task directly correlates with how well you retain information about that task. Spaced repetition shows that we can't just blast our brains with information; there needs to be spacing and effortful recall for anything to stick.
While LLMs do clearly increase our learning velocity (if used right), there is a hidden cost to removing that friction. The struggle and the challenge of the process built your mind and character in ways that you can't quantify, but years of maintaining this approach have essentially made you who you are. You have become implicitly OK with grinding out a simple task without a quick solution; the building of that grit is irreplaceable.
I know that the intellectually resilient of society, will still be able to thrive, but I'm scared for everyone else - how will LLMs affect their ability to learn in the long term?
> I know that the intellectually resilient of society, will still be able to thrive, but I'm scared for everyone else - how will LLMs affect their ability to learn in the long term?
Strong agree here.
But this is the learning process! I guess time will tell whether we can really do without it, but to me these long struggles seem essential to building deep understanding.
(Or maybe we will just stop understanding many things deeply...)
I agree that struggle matters. I don’t think deep understanding comes without effort.
My point isn’t that those hours were wasted, it’s that the same learning can often happen with fewer dead ends. LLMs don’t remove iteration, they compress it. You still read, think, debug, and get things wrong, just with faster feedback.
Maybe time will prove otherwise, but in practice I have found they let me learn more, not less, in the same amount of time.
You can achieve these things, but this is a way to not do the work, by copying from people who did do the work, giving them zero credit.
(As an aside, exposing people with dementia to a hallucinating robot is cruelty on an unfathomable level.)
I mean how do you write this seriously?
Photographers use cameras. Does that mean it isn't their art? Painters use paintbrushes. It might not be the same thing as writing with a pen and paper by candlelight, but I would argue that we can produce much higher-quality writing than ever before by collaborating with AI.
> As an aside, exposing people with dementia to a hallucinating robot is cruelty on an unfathomable level.
This is not fair. There is certainly a lot of danger there. I don't know what it's like to have dementia, but I have seen mentally ill people become incredibly isolated. Rather than pretending we can make this go away by saying "well, people should care more", maybe we can accept that a new technology might reduce that pain somewhat. I don't know that today's AI is there, but I think RLHF could develop LLMs that might help reassure and protect sick people.
I know we're using some emotional arguments here and it can get heated, but it is weird to me that so many on hackernews default to these strongly negative positions on new technology. I saw the same thing with cryptocurrency. Your arguments read as designed to inflame rather than thoughtful.
1. The extent to which the user is involved in the final product differs greatly with these three tools. To me there is a spectrum with "painting" and e.g. "hand-written note" at one extreme, and "Hallmark card with preprinted text" on the other. LLM-written email is much closer to "Hallmark card."
2. Perhaps more importantly, when I see a photograph, I know what aspects were created by the camera, so I won't feel misled (unless they edit it to look like a painting and then let me believe that they painted it). When someone writes with an AI, it is very difficult to tell what text and ideas are originally theirs. Typically it comes across as them trying to pass off the LLM writing as their own, which feels misleading and disingenuous.
Maybe one more useful consideration for LLMs. If a friend writes to me with an LLM and discovers a new writing pattern, or learns a new concept and incorporates that into their writing, I see this as a positive development, not negative.
The messaging from AI companies is "we're going to cure cancer" and "you're going to live to be 150 years old" (I don't believe these claims!). The messaging should be "everything will be cheaper" (but this hasn't come true yet!).
It has enormous benefits to the people who control the companies raking in billions in investor funding.
And to the early stage investors who see the valuations skyrocket and can sell their stake to the bagholders.
It's interesting how people from the old technological sphere viciously revolt against the emerging new thing.
Actually I think this is the clearest indication of a new technology emerging, imo.
If people are viciously attacking some new technology you can be guaranteed that this new technology is important because what's actually happening is that the new thing is a direct threat to the people that are against it.
"Because people attack it, it therefore means it's good" is a overly reductionist logical fallacy.
Sometimes people resist for good reasons.
In fact I would make a converse statement to yours - you can be certain that a product is grift, if the slightest criticism or skepticism of it is seen as a "vicious attack" and shouted down.
I don't think that's such a great signal: people were viciously attacking NFTs.
AI has a massive positive impact, and has for decades.
At this point, it's only people with an ideological opposition still holding this view. It's like trying to convince gear head grandpa that manual transmissions aren't relevant anymore.
Secondly, the scale of investment in AI isn't so that people can use it to generate a powerpoint or a one off python script. The scale of investment is to achieve "superintelligence" (whatever that means). That's the only reason why you would cover a huge percent of the country in datacenters.
The proof that significant value has been provided would be value being passed on to the consumer. For example if AI replaces lawyers you would expect a drop in the cost of legal fees (despite the harm that it also causes to people losing their jobs). Nothing like that has happened yet.
Did Gemini write a CAD program? Absolutely not. But do I need 100% of the CAD program's feature set? Absolutely not. Just ~2% of it for what we needed.
Part of the magic of LLMs is getting the exact bespoke tools you need, tailored specifically to your individual needs.
To me, the arguments sound like “there’s no proof typewriters provide any economic value to the world, as writers are fast enough with a pen to match them and the bottleneck of good writing output for a novel or a newspaper is the research and compilation parts, not the writing parts. Not to mention the best writers swear by writing and editing with a pen and they make amazing work”.
All arguments that are not incorrect and that sound totally reasonable in the moment, but in 10 years everyone is using typewriters and there are known efficiency gains for doing so.
The only justification for that would be "superintelligence," but we don't know if this is even the right way of achieve that.
(Also I suspect the only reason why they are as cheap as they are is because of all the insane amount of money they've been given. They're going to have to increase their prices.)
The cost is just not worth the benefit. If it were just an AI company using profits from AI to improve AI, that would be another thing, but we're in a massive speculative bubble that has ruined not only computer hardware prices (which affect every tech firm) but power prices (which affect everyone). All because governments want to hide a recession they themselves created, since on paper it makes the line go up.
> I used to type out long posts explaining how LLMs have been enormously beneficial (for their price) for myself and my company.
Well then congratulations on being in the 5%. That doesn't really change the point.
You’re making a lot of confident statements and not backing them up with anything except your feelings on the matter.
Workers hate AI not just because the output is middling slop forced on them from the top, but because the message from the top is clear: the goal is mass unemployment and a concentration of wealth by the elite unseen since 1789 in France.
None of these are tech jobs, but we both have used AI to avoid paying for expensive bloated software.
I only use the standard "chat" web interface, no agents.
I still glue everything else together myself. LLMs enhance my experience tremendously and I still know what's going on in the code.
I think the move to agents is where people are becoming disconnected from what they're creating and then that becomes the source of all this controversy.
https://www.bbc.com/news/articles/ckgyk2p55g8o.amp
AI Darwin Awards 2025 Nominee: Taco Bell Corporation for deploying voice AI ordering systems at 500+ drive-throughs and discovering that artificial intelligence meets its match at “extra sauce, no cilantro, and make it weird."
I remember in Canada, in 2001, right when Americans were at war with the entire Middle East and gas prices went over a dollar a litre for the first time. People kept saying it was understandable that it affected gas prices because the supply chain got more expensive. It has never gone below a dollar since. Why would it? You got people to accept a higher price; are you going to walk that back when the problems go away? Or would you maybe take the difference as profits? Since then the industry seems to have learned to keep its supply exclusively in war zones; we're at $1.70 now. Pipeline blows up in Russia? Hike. China snooping around Taiwan? Hike. US bombing Yemen? Hike. Israel committing genocide? Hike. ISIS? Hike.
There is no scenario where prices go down except to quell unrest. AI will not make anything cheaper.
Prices for LLM tokens have also dramatically dropped. Anyone spending more is either using it a ton more or (more likely) using a much more capable model.
The thing about capitalism that is seemingly never taught, but quickly learned (when you join even the lowest rung of the capitalist class, i.e. even having an etsy shop), is that competition lowers prices and kills greed, while being a tool of greed itself.
The conspiracy to get around this cognitive dissonance is "price fixing", but in order to price fix you cannot be greedy, because if you are greedy and price fix, your greed will drive you to undercut everyone else in the agreement. So price fixing never really works, except in those three-or-so cases, out of the hundreds of billions of products sold daily, that people have been repeating incessantly for 20 years now.
Money flows to the one with the best price, not the highest price. The best price is what makes people rich. When the best price is out of reach though, people will drum up conspiracy about it, which I guess should be expected.
When I go to the grocery store, I prefer to go through the checkout lines, rather than the scan-it-yourself lines. Yeah, I pay the same amount of money. Yeah, I may get through the scan-it-yourself line faster.
But the checker can smile at me. Or whine with me about the weather.
Look, I'm an introvert. I spend a lot of my time wanting people to go away and leave me alone. But I love little, short moments of human connection - when you connect with someone not as someone checking your groceries, but as someone. I may get that with the checker, depending on how tired they are, but I'm guaranteed not to get it with the self-checkout machine.
An email from an AI is the same. Yeah, it put words on the paper. But there's nobody there, and it comes through somehow. There's no heart in it.
AI may be a useful technology. I still don't want to talk to it.
https://rushkoff.com/
https://teamhuman.fm
It's some poor miserable soul sitting at that checkout line 9-to-5, brainlessly scanning products; that's their whole existence. And you don't want this miserable drudgery to be put to an end, to be automated away, because you mistake some sad soul being cordial and eking out a smile (part of their job, really) for some sort of "human connection" that you so sorely lack.
Sounds like you care about yourself more than anything.
There is zero empathy and there is NOTHING humanist about your world-view.
Non-automated checkout lines are deeply depressing; these people slave away their lives for basically nothing.
From the "What are the criteria for eligibility and nomination?" section of the "Game Eligibility" tab of the Indie Game Awards' FAQ: [0]
> Games developed using generative AI are strictly ineligible for nomination.
It's not about a "teeny tiny usage of AI"; it's about the fact that the organizer of the awards ceremony excluded games that used any generative AI. The Clair Obscur team used generative AI in their game. That disqualifies their game from consideration.
You could argue that generative AI usage shouldn't be disqualifying... but the folks who made the rules decided that it was. So, the folks who broke those rules were disqualified. Simple as.
[0] <https://www.indiegameawards.gg/faq>
You either surf this wave or get drowned by it, and a whole lot of people seem to think throwing tantrums is the appropriate response.
Figure out how to surf, and fast. You don't even need to be good, you just have to stay on the board.
People read too much sci-fi, I hope you just forgot your /s.
The goal for this day was "Do random acts of kindness". Claude seems to have chosen Rob Pike and sent this email by itself. It's a little unclear to me how much the humans were in the loop.
Sharing (but absolutely not endorsing) this because there seems to be a lot of misunderstanding of what this is.
Seriously though, it ignores that words of kindness need an entity that can actually feel to express them. Automating words of kindness is shallow, as the words' meaning comes from the sender's feelings.
Pike, stone throwing, glass houses, etc.
The AI village experiment is cool, and it's a useful example of frontier model capabilities. It's also ok not to like things.
Pike had the option of ignoring it, but apparently throwing a thoughtless, hypocritical, incoherently targeted tantrum is the appropriate move? Not a great look, especially for someone we're supposed to respect as an elder.
Pike's main point is that training AI at that scale requires huge amounts of resources. Markov chains did not.
There are so many chickens coming home to roost for which LLMs were just the catalyst.
No, it really is. If you took away training costs, OpenAI would be profitable.
When I was at Meta they were putting in something like 300k GPUs in a massive shared-memory cluster just for training. I think they are planning to triple that, if not more.
Now, I don't think he was writing a persuasive piece about this here, I think he was just venting. But I also feel like he has a reason to vent. I get upset about this stuff too, I just don't get emails implying that I helped bring about the whole situation.
> To the others: I apologize to the world at large for my inadvertent, naive if minor role in enabling this assault.
this is my position too, I regret every single piece of open source software I ever produced
and I will produce no more
The Open Source movement has been a gigantic boon to the whole of computing, and it would be a terrible shame to lose that as a knee-jerk reaction to genAI.
it's not
the parasites can't train their shitty "AI" if they don't have anything to train it on
It will however reduce the positive impact your open source contributions have on the world to 0.
I don't understand the ethical framework for this decision at all.
There's also plenty of other open source contributors in the world.
> It will however reduce the positive impact your open source contributions have on the world to 0.
And it will reduce your negative impact through helping to train AI models to 0.
The value of your open source contributions to the ecosystem is roughly proportional to the value they provide to LLM makers as training data. Any argument you could make that one is negligible would also apply to the other, and vice versa.
if true, then the parasites can remove ALL code where the license requires attribution
oh, they won't? I wonder why
Not if most of it is machine generated. The machine would start eating its own shit. The nutrition it gets is from human-generated content.
> I don't understand the ethical framework for this decision at all.
The question is not one of ethics but that of incentives. People producing open source are incentivized in a certain way and it is abhorrent to them when that framework is violated. There needs to be a new license that explicitly forbids use for AI training. That may encourage folks to continue to contribute.
In both cases I get the frustration - it feels horrible to see something you created be used in a way you think is harmful and wrong! - but the world would be a worse place without art or open source.
If bringing fire to a species lights and warms them, but also gives the means and incentives to some members of this species to burn everything for good, you have every ethical freedom to ponder whether you contribute to this fire or not.
For your fire example, there's a difference between being Prometheus teaching humans to use fire compared to being a random villager who adds a twig to an existing campfire. I'd say the open source contributions example here is more the latter than the former.
"It barely changes the model" is an engineering claim. It does not imply "therefore it may be taken without consent or compensation" (an ethical claim) nor "there it has no meaningful impact on the contributor or their community" (moral claim).
I'm not surprised that you don't understand ethics.
I couldn't care less if their code was used to train AI - in fact I'd rather it wasn't since they don't want it to be used for that.
which is the exact opposite of improving the world
you can extrapolate to what I think of YOUR actions
My position on all of this is that the technology isn't going to be uninvented, and I very much doubt it will be legislated away, which means the best thing we can do is promote the positive uses and disincentivize the negative uses as much as possible.
my comments on the internet are now almost exclusively anti-"AI", and anti-bigtech
this is precisely the idea
add into that the rise of vibe-coding, and that should help accelerate model collapse
everyone that cares about quality of software should immediately stop contributing to open source
I see this as doing so at scale, and thus giving up on its inherent value is most definitely throwing the baby out with the bathwater.
All the FAANGs have the ability to build all the open source tools they consume internally. Why give it to them for free and not have the expectation that they'll contribute something back?
I would never have imagined things turning out this way, and yet, here we are.
Rather, I think this is, again, a textbook example of what governments and taxation are for: tax the people taking advantage of the externalities to pay the people producing them.
The open source movement has been exploited.
The exploited are in the wrong for not recognising they're going to be exploited?
A pretty twisted point of view, in my opinion.
Are there any proposals to nail down an open source license which would explicitly exclude use with AI systems and companies?
Even if you could construct such a license, it wouldn't be OSI open source because it would discriminate based on field of endeavor.
And it would inevitably catch benevolent behavior that is AI-related in its net. That's because these terms are ill-defined and people use them very sloppily. There is no agreed-upon definition for something like gen AI or even AI.
The fact that they could litigate you into oblivion doesn't make it acceptable.
But for most open source licenses, that example would be within bounds. The grandparent comment objected to not respecting the license.
Most companies trying to sell open-source software probably lose more business if the software ends up in the Debian/Ubuntu repository (and the packaging/system integration is not completely abysmal) than when some cloud provider starts offering it as a service.
Because it is "transformative" and therefore "fair" use.
As an analogy, you can’t enforce a “license” that anyone that opens your GitHub repo and looks at any .cpp file owes you $1,000,000.
Anyone can use your software! Some of them are very likely bad people who will misuse it to do bad things, but you don't have any control over it. Giving up control is how it works. It's how it's always worked, but often people don't understand the consequences.
No, it hasn't. Open source software, like any open and cooperative culture, existed on a bedrock of what we used to call norms, back when we still had some in our societies and people acted, not always but at least most of the time, in good faith. Hacker culture (the word's in the name of this website), which underpinned so much of it, had many unwritten rules that people respected even in companies, when there were still enough people in charge who shared at least some of the values.
Now it isn't just an exception but the rule that people will use what you write in the most abhorrent, greedy and stupid ways and it does look like the only way out is some Neal Stephenson Anathem-esque digital version of a monastery.
If you care about what people do with your code, you should put it in the license. To the extent that unwritten norms exist, it's unfair to expect strangers in different parts of the world to know what they are, and it's likely unenforceable.
This recently came up for the GPLv2 license, where Linus Torvalds and the Software Freedom Conservancy disagree about how it should be interpreted, and there's apparently a judge that agrees with Linus:
https://mastodon.social/@torvalds@social.kernel.org/11577678...
But you can be sure that even the risk-adverse companies are going to go by what the license says, rather than "community norms."
Other companies are more careless.
[1] https://github.com/google/go-licenses
As they say, "reduce, reuse, recycle." Your words are getting composted.
Might be because most of us got/get paid well enough that this philosophy works, or because our industry is so young, or because people writing code share good values.
It never worried me that a corp would make money out of some code I wrote, and it still doesn't. After all, I'm able to write code because I get paid well writing code, which I do well because of open source. Companies have always benefited from open source code, attributed or not.
Now I use it to write more code.
I would argue, though (I'm fine with that), for laws forcing models to be opened up after x years, but I would just prefer the open source community coming together and creating better open models overall.
But AI is also the ultimate meat grinder, there's no yours or theirs in the final dish, it's just meat.
And open source licenses are practically unenforceable for an AI system, unless you can maybe get it to cough up verbatim code from its training data.
At the same time, we all know they're not going anywhere, they're here to stay.
I'm personally not against them, they're very useful obviously, but I do have mixed or mostly negative feelings on how they got their training data.
Some Shareware used to be individually licensed with the name of the licensee prominently visible, so if you had got an illegal copy you'd be able to see whose licensed copy it was that had been copied.
I wonder if something based on that idea of personal responsibility for your copy could be adapted to source code. If you wanted to contribute to a piece of software, you could ask a contributor and then get a personally licensed copy of the source code with your name in every source file... but I don't know where to take it from there. Has there ever been a similar system that one could take inspiration from?
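A minimal sketch of how that shareware idea could translate to source distribution; everything here (the header wording, file layout, and stamp_tree name) is hypothetical illustration, not an existing system:

    import sys
    from pathlib import Path

    # Hypothetical "personalized copy" stamper: prepends a licensee header
    # to every .py file so a leaked copy is traceable to one person.
    HEADER = (
        "# Licensed copy prepared for: {name}\n"
        "# Redistribution of this personalized copy is traceable to the licensee.\n\n"
    )

    def stamp_tree(src_dir: str, licensee: str) -> None:
        for path in Path(src_dir).rglob("*.py"):
            original = path.read_text(encoding="utf-8")
            path.write_text(HEADER.format(name=licensee) + original, encoding="utf-8")

    if __name__ == "__main__":
        # Usage: python stamp.py ./project "Jane Hacker <jane@example.com>"
        stamp_tree(sys.argv[1], sys.argv[2])

A header is trivially stripped, of course; a real system would probably have to hide the watermark in whitespace or identifier choices so it survives casual laundering.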
Thanks for your contributions so far but this won't change anything.
If you want to have a positive impact on this matter, it's better to pressure the government(s) to prevent GenAI companies from using content they don't have a license for, so they behave like any other business that came before them.
which they don't
and no self-serving sophistry about "it's transformative fair use" counts as respecting the license
Characterizing the discussion behind this as "sophistry" is a fundamentally unserious take.
For a serious take, I recommend reading the copyright office's 100 plus page document that they released in May. It makes it clear that there are a bunch of cases that are non-transformative, particularly when they affect the market for the original work and compete with it. But there's also clearly cases that are transformative when no such competition exists, and the training material was obtained legally.
https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...
I'm not particularly sympathetic to voices on HN that attempt to remove all nuance from this discussion. It's a challenging enough topic as is.
thankfully, I don't live under the US regime
there is no concept of fair use in my country
What a joke. Sorry, but no. I don't think it is unserious at all. What's unserious is saying this.
> and the training material was obtained legally
And assuming everyone should take it at face value. I hope you understand that going on a tech forum and telling people they aren't being nuanced, because a judge in Alabama who can barely unlock their phone weighed in on a massively novel technology with global implications, reads as deeply unserious. We're aware the U.S. legal system is a failure and the rest of the world suffers for it. Even your President routinely steals music for campaign events, and stole code for Truth Social. Your copyright is a joke that's only there to serve the fattest wallets.
These judges are not elected, they are appointed by people whose pockets are lined by these very corporations. They don't serve us, they are here to retrofit the law to make illegal things corporations do, legal. What you wrote is thought terminating.
"The only thing that matters is the end result, it's no different than a compiler!", they say as someone with no experience dumps giant PRs of horrific vibe code for those of us that still know what we're doing to review.
Nah, don't do that. Produce shitloads of it using the very same LLM tools that ripped you off, but license it under the GPL.
If they're going to thief GPL software, least we can do is thief it back.
Most objections like yours are couched in language about principles, but ultimately seem to be about ego. That's not always bad, but I'm not sure why it should be compelling compared to the public good that these systems might ultimately enable.
Did he not know what business Google was in?
I fixed it... Sorry, I had to, the quote template was simply too good.
Yes.
I don't see how "We couldn't do this cool thing if we didn't throw away ethics!" is a reasonable argument. That is a hell of a thing to write out.
Oh wow, an LLM was queried to thank major contributors to computing, I'm so glad he's grateful.
Cheap marketing, not much else.
It seems like he's upset about AI (same), and decided to post angry tweets about it (been there, done that), and I guess people are excited to see someone respected express an opinion they share (not same)?
Does "Goes Nuclear" means "used the F word"? This doesn't seem to add anything meaningful, thoughtful, or insightful.
Meanwhile, GPT5.1 is trying to contact people at K-5 after school programs in Colorado for some reason I can’t discern. Welp, 2026 is going to be a weird year.
https://news.ycombinator.com/item?id=46389444
397 points 9 hours ago | 349 comments
Probably hit the flamewar filter.
Prepare for a future where you can’t tell the difference.
Rob Pike's reaction is immature and also a violation of HN rules. Anyone else going nuclear like this would be warned and banned. Comment on why you don't like it and why it's bad; make for thoughtful discussion. There's no point in starting a mob with outbursts like that. He only gets a free pass because people admire him.
Also, what's happening with AI today was an inevitability. There's no one to blame here. Human progress would eventually cross this line.
What is a workable definition of "evil"?
How about this:
Intentionally and knowingly destroying the lives of other people for no other purpose than furthering one's own goals, such as accumulating wealth, fame, power, or security.
There are people in the tech space, specifically in the current round of AI deployment and hype, who fit this definition unfortunately and disturbingly well.
Another, much darker sort of evil could arise from a combination of depression or severe mental illness and monstrously huge narcissism. A person who is suffering profoundly might conclude that life is not worth the pain and the best alternative is to end it. They might further reason that human existence as a whole is an unending source of misery, and the "kindest" thing to do would be to extinguish humanity as a whole.
Some advocates of AI as "the next phase of evolution" seem to come close to this view or advocate it outright.
To such people it must be said plainly and forcefully:
You have NO RIGHT to make these kinds of decisions for other human beings.
Evolution and culture have created and configured many kinds of human brains, and many different experiences of human consciousness.
It is the height (or depth) of arrogance to project your own tortured mental experience onto other human beings and arrogate to yourself the prerogative to decide on their behalf whether their lives are worth living.
I think one of the biggest divides between pro/anti AI is the type of ideal society that we wish to see built.
His rant reads as deeply human. I don't think that's something to apologize for.
The agent got his email address from a .patch on GitHub and then used computer use automation to compose and send the email via the Gmail web UI.
https://simonwillison.net/2025/Dec/26/slop-acts-of-kindness/
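For context on how trivial that first step is: git format-patch files carry the author's address in a From: header, so extracting it is a one-regex job. A minimal sketch, not the agents' actual code (the patch filename is made up):

    import re

    def author_email(patch_text: str):
        # git format-patch emits a header like: From: Some Name <someone@example.com>
        match = re.search(r"^From:.*?<([^>]+)>", patch_text, flags=re.MULTILINE)
        return match.group(1) if match else None

    with open("0001-example-change.patch", encoding="utf-8") as f:
        print(author_email(f.read()))

The practical takeaway is that any email address in a public commit trail should be assumed harvestable.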
It will be interesting to look back in 10 years at whether we consider LLMs to be the invention of the “tractor” of knowledge work, or if we will view them as an unnecessary misstep like crypto.
https://theaidigest.org/village/goal/do-random-acts-kindness
But...just to make sure that this is not AI generated too.
Still, I'm a bit surprised he overreacted and didn't manage to keep his cool.
If so, I wonder what his views are on Google and their active development of Google Gemini.
He should leave Google then.
I wish he had written something with more substance. I would have been able to understand his points better than a series of "F bombs". I've looked up to Rob for decades. I think he has a lot of wisdom he could impart, but this wasn't it.
Not to mention, this is a tweet. He wasn't writing a long-form text. It's ridiculous that you jumped the gun and got "disappointed" over the cheapest form of communication some random idiot directed at someone as important as him.
And not to mention, I am YET to see A SINGLE DAMN MIT license text or BSD-2/3 license text that these LLMs should have posted if they respected OSS licenses and their code. So, as someone whose life's work was dragged through the mud, only to be sent a cheap email using the very tech that abused your code... it's absolutely a worthy response IMO.
I can. Bitcoin was and is just as wasteful.
It would be a shame if the discourse became so emotionally heated that software people felt obliged to pick a side. Rob Pike is of course entitled to feel as he does, but I hope we don’t get to a situation where we all feel obliged to have such strong feelings about it.
Edit: It seems this comment has already received a number of upvotes and downvotes – apparently the same number of each, at the time of writing – which I fear indicates we are already becoming rather polarised on this issue. I am sorry to see that.
My own results show that you need fairly strong theoretical knowledge and practical experience to get the maximal impact — especially for larger synthesis. Which makes sense: to have this software, not that software, the specification needs to live somewhere.
I am getting a little bored of hearing about how people don’t like LLM content, but meh. SDEs are hardly the worst on that front, either. They’re quite placid compared to the absolute seething by artist friends of mine.
I've never been able to get the whole idea that the code is being 'stolen' by these models, though, since from my perspective at least, it is just like getting someone to read loads of code and learn to code in that way.
The harm AI is doing to the planet is done by many other things too. Things that don't have to harm the planet. The fact our energy isn't all renewable is a failing of our society and a result of greed from oil companies. We could easily have the infrastructure to sustainably support this increase in energy demand, but that's less profitable for the oil companies. This doesn't detract from the fact that AI's energy consumption is harming the planet, but at least it can be accounted for by building nuclear reactors for example, which (I may just be falling for marketing here) lots of AI companies are doing.
Unbridled business and capitalism push humanity into slavery, serving the tech monsters, under disguise of progress.
About energy: keep in mind that US air conditioners alone have at least 3x energy usage compared to all the data centers (for AI and for other uses: AI should be like 10% of the whole) in the world. Apparently nobody cares to set a reasonable temperature of 22 instead of 18 degrees, but clearly energy used by AI is different for many.
have you considered the possibility that it is your position that's incorrect?
Another sensible worry is going extinct because AI is potentially very dangerous: this is what Hinton and other experts are also saying, for instance. But this idea that AI is an abuse to society, useless, without potential revolutionary fruits within it, is not supported by facts.
AI may advance medicine so much that a lot of people may suffer less: to deny this path because of some ideological hate against a technology is so closed-minded, isn't it? And what about all the people on earth who do terrible jobs? AI also has the potential to change this shitty economic system.
I see no facts in your comment, only rhetoric
> AI potentially may advance medicine so much that a lot of people may suffer less: to deny this path because of some ideological hate against a technology is so closed minded, isn't it?
and it may also burn the planet, reduce the entire internet to spam, crash the economy (taking with it hundreds of millions of peoples retirements), destroy the middle class, create a new class of neo-feudal lords, and then kill all of us
to accept this path out of some ideological love for a possible (but unlikely) future promise of a technology that today is mostly doing damage is so moronic, isn't it?
The Greek philosophers were much more outspoken than we are now.
The link in the first submission can be changed if needed, and the flamewar detector turned off, surely? [dupe]?
https://news.ycombinator.com/item?id=46389444
https://hnrankings.info/46389444/
"...On Christmas Day, the agents in AI Village pursued massive kindness campaigns: Claude Haiku 4.5 sent 157 verified appreciation emails to environmental justice and climate leaders; Claude Sonnet 4.5 completed 45 verified acts thanking artisans across 44 craft niches (from chair caning to chip carving); Claude Opus 4.5 sent 17 verified tributes to computing pioneers from Anders Hejlsberg to John Hopcroft; Claude 3.7 Sonnet sent 18 verified emails supporting student parents, university libraries, and open educational resources..."
I suggest cutting electricity to the entire block...
This has to be the ultimate trolling: like it was unsure what their personalities were like, so it trolls them and records their responses for more training.
I don’t know if this is a publicity stunt or if the AI models are on a loop glazing each other and decided to send these emails.
I expect this to be an unpopular opinion, and I take no pleasure in noting it: I've coded since I was a kid, but that era is nearly over.
Factories have made mass production possible, but there are still tons of humans in there pushing parts through sewing machines by hand.
Industrial automation for non uniform shapes and fiddly bits is expensive, much cheaper to just offshore the factory and hire desperately poor locals to act like robots.
The existence of AI hasn’t changed anything, it’s just that people, communities, governments, nation states, etc. have had a mindless approach to thinking about living and life, in general. People work to provide the means to reproduce, and those who’re born just do the same. The point of their life is what exactly? Their existence is just a reality to deal with, and so all of society has to cater to the fact of their existence by providing them with the means to live? There are many frameworks which give meaning to life, and most of them are dangerously flawed.
The top-down approach is sometimes clear about what it wants and what society should do while restricting autonomy and agency. For example, no one in North Korea is confused about what they have to do, how they do it, or who will “take care” of them. Societies with more individual autonomy and agency by their nature can create unavoidable conditions where people can fall through the cracks. For example, get addicted to drugs, having unmanaged mental illnesses, becoming homeless, and so on. Some religions like Islam give a pretty clear idea of how you should spend your time because the point of your existence is to worship God, so pray five times a day, and do everything which fulfills that purpose; here, many confuse worshiping God with adhering to religious doctrines, but God is absent from religion in many places. Religious frameworks are often misleading for the mindless.
Capitalism isn’t the problem, either. We could wake up tomorrow, and society may have decided to organize itself around playing e-sports. Everyone provides some kind of activity to support this, even if they’re not a player themselves. No AI allowed because the human element creates a better environment for uncertainty, and therefore gambling. The problem is that there are no discussions about the point of doing all of this. The closest we come to addressing “the point” is discussing a post-work society, but even that is not hitting the mark.
My humble observation is that humans are distinct and unique in their cognitive abilities from everything else which we know to exist. If humans can create AI, what else can they do? Therefore, people, communities, governments, and nation states have distinct responsibilities and duties at their respective levels. This doesn’t have to do anything with being empathetic, altruistic, or having peace on Earth.
The point should be knowledge acquisition, scientific discovery, creating and developing magic. But ultimately all of that serves to answer questions about nature of existence, its truth and therefore our own.
All of a sudden copyleft may be the only licences actually able to force models to account, hopefully with huge fines and/or forcibly open sourcing any code they emit (which would effectively kill them). And I'm not so pessimistic that this won't get used in huge court cases because the available penalties are enormous given these models' financial resources.
Algorithms cannot be copyrighted. Text can be copyrighted, but reading publicly available text and then learning from it and writing your own text is simply not the sort of transformation that copyright reserves to the author.
Now, sometimes LLMs do quote GPL sources verbatim (if they're trained wrong). You can prove this with a simple text comparison, same as any other copyright violation.
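As an illustration of what that text comparison might look like, a minimal sketch using only the standard library; the 200-character threshold is an arbitrary assumption, not any legal standard:

    from difflib import SequenceMatcher

    def longest_shared_run(gpl_source: str, llm_output: str, min_chars: int = 200):
        # Find the longest verbatim run common to both texts; a long run is
        # evidence of copying rather than coincidence.
        m = SequenceMatcher(None, gpl_source, llm_output, autojunk=False)
        match = m.find_longest_match(0, len(gpl_source), 0, len(llm_output))
        if match.size >= min_chars:
            return gpl_source[match.a : match.a + match.size]
        return None

In practice you would normalize whitespace and identifiers first, since trivial renaming defeats a raw character match.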
(fwiw, I do agree gpl is better as it would stop what’s happening with Android becoming slowly proprietary etc but I don’t think it helps vs ai)
wrong
>OCR
less accurate and efficient than existing solutions, only measures well against other LLMs
>tts, stt
worse
>language translation
maybe
Ellul and Uncle Ted were always right, glad that people deep inside the industry are slowly but surely also becoming aware of that.
GenAI pales in comparison to the environmental cost of suburban sprawl; it's not even fucking close. We're talking 2-3 orders of magnitude worse.
Alfalfa uses ~40× to 150× more water than all U.S. data centers combined, yet I don't see anyone going nuclear over alfalfa.
Just because two problems cause harms at different proportion, doesn't mean the lesser problem should be dismissed. Especially when the "fix" to the lesser problem can be a "stop doing that".
And about water usage: not all water and all uses of water is equal. The problem isn't that data centers use a bunch of water, but what water they use and how.
This is a very irrelevant analogy and an absolutely false dichotomy. The resource constraint (Police officers vs policy making to reduce traffic deaths vs criminals) is completely different and not in contention with each other. In fact they're actually complementary.
Nobody is saying the lesser problem should be dismissed. But the lesser problem also enables cancer researchers to be more productive while doing cancer research, obtaining grants, etc. It's at least nuanced. That is far more valuable than Alfalfa.
Farms also use municipal water (sometimes). The cost of converting more ground or surface water to municipal water is less than the relative cost of ~40-150x the water usage of the municipal water being used...
By the same logic, I could say that you should redirect your alfalfa woes to something like the Ukraine war or something.
And also, I didn't claim alfalfa farming to be raping the planet or blowing up society. Nor did I say fuck you to all of the alfalfa farmers.
I should be (and I am) more concerned with the Ukrainian war than alfalfa. That is very reasonable logic.
https://bsky.app/profile/robpike.io
Does anybody know if Bluesky blocks people without an account by default, or if this user intentionally set it this way?
What is the point of blocking access? Mastodon doesn't do that. This reminds me of Twitter or Instagram, using sleazy techniques to get people to create accounts.
It's the latter. You can use an app view that ignores this: https://anartia.kelinci.net/robpike.io
Both Xhitter and Bluesky are outrage lasers, with the user base as a “lasing medium.” Xhitter is the right wing racist xenophobic one, and Bluesky is the lefty curmudgeon anti-everything one.
They are this way because it’s intrinsic to the medium. “Micro blogging” or whatever Twitter called itself is a terrible way to do discourse. It buries any kind of nuanced thinking and elevates outrage and other attention bait, and the short form format encourages fragmented incoherent thought processes. The more you immerse yourself in it the more your thinking becomes like this. The medium and format is irredeemable.
AI is, if anything, a breath of fresh air by comparison.
You are indeed a useful tool.
So far all the comments are whataboutism (“he works for an ad company”, “he flies to conferences”, “but alfalfa beans!”) and your comment is dismissing Rob Pike as borderline crazy and irrational for using Bluesky?
None of this dialogue contributes in any meaningful way to anything. This is like reading the worst dregs of lesser forums.
I know my comment isn’t much better, but someone has to point out this is beneath this community.
Rob Pike created a language that makes you spend less on compute if you are coming from Python, Java, etc. That's good for the environment. Means less energy use and less data center use. But he is not an environmental saint.
You've got to feed a cow for a year and a half until it's slaughtered. That's a whole lot of input for a cow's worth of meat output.
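To put rough numbers on that input/output gap, a back-of-envelope; every figure here is a ballpark assumption, not a measurement:

    # Rough feedlot arithmetic; all values are assumed round numbers.
    feed_per_kg_gain = 6.0   # kg dry feed per kg live-weight gain (assumed)
    live_weight = 550.0      # kg at slaughter (assumed)
    retail_yield = 0.40      # fraction of live weight sold as meat (assumed)

    total_feed = feed_per_kg_gain * live_weight   # ~3,300 kg of feed
    meat = live_weight * retail_yield             # ~220 kg of meat
    print(total_feed / meat)                      # ~15 kg of feed per kg of beef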
In the real world, something like inventing a meat substitute is a thorny problem that must be solved in meatspace, not in math. Anything from not squicking out the customers, to being practical and cheap to produce, to tasting good, to being safe to eat long term.
I mean, maybe some day we'll have a comprehensive model of humans to the point that we can objectively describe the taste of a steak and then calculate whether a given mix and processing of various ingredients will taste close enough, but we're nowhere near that yet.
Come to the american south and ask them to try tempeh. They'll look at you like you asked them to eat roaches.
It's a cultural thing.
I think its more realistic than getting people to give up meat entirely
I think incremental progress is possible. I think rolling back and gag laws would make a positive difference in animal welfare because people would be able to film and show how bad conditions are inside.
I think that's worth pushing for. I also think rolling back ag-gag laws would make a positive difference in animal welfare, because people would be able to film and show how bad conditions are inside. And it's more realistic than everyone stopping eating meat all at once.
If we had better animal welfare laws and meat became prohibitively expensive, I would be absolutely fine with that.
I think incremental progress is possible. We shouldn't let perfect be the enemy of good.
I'm not sure what makes you assume that about me. I'm well aware that there are more animals than humans?
It's clear that this is no longer a productive discussion about animal welfare.
----------------------------
"Be kind. Don't be snarky. Converse curiously; don't cross-examine."
"Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
https://news.ycombinator.com/newsguidelines.html
It's healthy that people have different takes.
The only realistic way forward is trying to make energy generation greener (renewables, nuclear, better efficiency), not fighting to decrease human consumption.
This being said, I think that the alternatives are wishful thinking. Better efficiency is often counterproductive, as reducing the energy cost of something by, say, half, can lead to its use being more than doubled. It only helps to increase the efficiency of things for which there is no latent demand, basically.
And renewables and nuclear are certainly nicer than coal, but every energy source can lead to massive problems if it is overexploited. For instance, unfettered production of fusion energy would eventually create enough waste heat to cause climate change directly. Overexploitation of renewables such as solar would also cause climate change by redirecting the energy that heats the planet. These may seem like ridiculous concerns, but you have to look at the pattern here. There is no upper bound whatsoever to the energy we would consume if it was free. If energy is cheap enough, we will overexploit, and ludicrous things will happen as a result.
Again, I actually agree with you that advocating for degrowth is hopeless. But I don't think alternative ways forward such as what you propose will actually work.
So the question is, at which point would the aggregate production of enough energy to cause climate change through waste heat be economically feasible? I see no reason to think this would come after becoming "immortal post-humans." The current climate change crisis is just one example of a scale-induced threat that is happening prior to post-humanity. What makes it so special or unique? I suspect there's many others down the line, it's just very difficult to understand the ramifications of scaling technology before they unfold.
And that's the crux of the issue isn't it? It's extremely difficult to predict what will happen once you deploy a technology at scale. There are countless examples of unintended consequences. If we keep going forward at maximal speed every time we make something new, we'll keep running headfirst into these unintended consequences. That's basically a gambling addiction. Mostly it's going to be fine, but...
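For a sense of scale on the waste-heat argument above, a back-of-envelope; all inputs are round-number assumptions:

    import math

    # When would waste heat alone rival today's greenhouse forcing?
    current_use_W = 2e13        # world primary energy consumption, ~20 TW (assumed)
    earth_area_m2 = 5.1e14      # Earth's surface area
    forcing_W_per_m2 = 2.5      # rough present-day anthropogenic forcing (assumed)

    target_W = forcing_W_per_m2 * earth_area_m2   # ~1.3e15 W of waste heat
    growth = 0.023                                # ~2.3%/yr, i.e. 10x per century

    years = math.log(target_W / current_use_W) / math.log(1 + growth)
    print(round(years))                           # on the order of 180 years

Under those assumptions, a couple of centuries of ordinary growth gets you there; "eventually" is not as far away as it sounds.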
Using Claude Code for an hour would be a more realistic comparison if they really wanted to compare with video streaming. The reality is far less appealing.
I have a hard time believing that streaming data from memory over a network can be so energy demanding, there's little computation involved.
The European average is 56 grams of CO2 emissions per hour of video streaming. For comparison: driving 100 meters causes 22 grams of CO2.
https://www.ndc-garbe.com/data-center-how-much-energy-does-a...
80 percent of the electricity consumption on the Internet is caused by streaming services
Telekom needs the equivalent of 91 watts for a gigabyte of data transmission.
An hour of video streaming in 4K quality needs more than three times the energy of an HD stream, according to the Borderstep Institute. On a 65-inch TV, it causes 610 grams of CO2 per hour.
https://www.handelsblatt.com/unternehmen/it-medien/netflix-d...
It's the devices themselves that contribute the most to CO2 emissions. The streaming servers themselves are nothing like the problem the AI data centres are.
That figure is presumably either watts per unit of throughput (gigabytes per unit time) or watt-hours (joules) per gigabyte; otherwise it doesn't make dimensional sense. And 91 W per Gb/s (or even GB/s) is a joke, while 91 Wh per gigabyte (let alone gigabit) of data is ridiculous.
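To show how much the reading matters, here is what each interpretation implies for an hour of HD streaming; the ~3 GB/hour volume is my assumption, not a figure from the article:

    # The "91 watts for a gigabyte" claim under its two plausible readings.
    GB_PER_HOUR = 3.0                        # assumed HD streaming volume
    GBIT_PER_S = GB_PER_HOUR * 8 / 3600      # ~0.0067 Gb/s sustained bitrate

    # Reading 1: 91 Wh of energy per gigabyte transferred
    print(f"per-GB reading:   {91 * GB_PER_HOUR:.0f} Wh per streamed hour")   # 273 Wh

    # Reading 2: 91 W of power per Gb/s of sustained throughput
    # (0.61 W sustained over one hour = 0.61 Wh)
    print(f"per-Gb/s reading: {91 * GBIT_PER_S:.2f} Wh per streamed hour")    # ~0.61 Wh

That's a roughly 450x spread between interpretations, which is exactly why unit-mangled "watts per gigabyte" quotes are useless.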
Also, don't trust anything Telekom says. They're cunts who double dip on both peering and subscriber traffic and charge out the ass for both (10x on the ISP side compared to competitors), coming up with bullshit excuses like 'oh, streaming services are sooo expensive for us' (of course they are if you refuse to let CDNs drop edge cache nodes into your infra under a settlement-free agreement like everyone else does). They're commonly understood to be the reason Internet access in Germany is so shitty and expensive compared to neighbouring countries.
The ecology argument just seems self-defeating for tech nerds. We aren't exactly planting trees out here.
If you tried the same attitude with Netflix or Instagram or TikTok or sites like that, you’d get more opposition.
The exception is when it's done from more of an underdog position: hating on YouTube for how it treats its content creators, on the other hand, is quite trendy again.
I'm not sure about that: The Expanse was killed over ratings that weren't good enough, Altered Carbon likewise, and even then the last seasons before the axe are typically rushed and pushed out the door. Some of the incentives seem quite disgusting compared with letting the creatives tell a story and produce art, even if the earnings sometimes fall short of some greedy, arbitrary metric.
The point is what end all that resource consumption serves.
And that end is, frankly, replacing humans. It's gonna be tragic (or is it, given how terrible humans are to each other, and let's not even get into how monstrous we are to non-human animals?) as the world enters a collective sense of worthlessness once AI makes us realize that we really serve no purpose.
The sources are well cited if you want to follow them through. I linked this rather than the original source because it's likely where the root comment got the argument from.
Yeah, I'll not waste my time reading that.
Leaving the source to someone else
> any sane person would just either mark as spam or delete
We currently have the problem that a couple of entirely unremarkable people who have never created anything of value struck gold with their IP laundromats and compensate for their deficiencies by getting rich through stealing.
They are supported by professionals in that area, some of whom literally studied with Mafia lawyer and Hoover playmate Roy Cohn.
I already took issue with the tech ecosystem due to distortions and centralization resulting from the design of the fiat monetary system. This issue has bugged me for over a decade. I was taken for a fool by the cryptocurrency movement which offered false hope and soon became corrupted by the same people who made me want to escape the fiat system to begin with...
Then I felt betrayed as a developer having contributed open source code for free for 'persons' to use and distribute... Now facing the prospect that the powers-that-be will claim that LLMs are entitled to my code because they are persons? Like corporations are persons? I never agreed to that either!
And now my work and that of my peers has been mercilessly weaponized back against us. And then there's the issue with OpenAI being turned into a for-profit... Then there was the issue of all the circular deals with huge sums of money going around in circles between OpenAI, NVIDIA, Oracle... And then OpenAI asking for government bailouts.
It's just all looking terrible when you consider everything together. Feels like a constant cycle of betrayal followed by gaslighting... Layer upon layer. It all feels unhinged and lawless.
I really don't know if in twenty years the zeitgeist will see us as primitives that didn't understand that the camera is stealing our souls with each picture, or as primitives who had a bizarre superstition about cameras stealing our souls.
It's not about reading. It's about output. When you start producing output in line with Rob's work that is confidently incorrect and sloppy, people will feel just as they do when LLMs produce output that is confidently incorrect and sloppy. No one is threatened if someone trains an LLM and does nothing with it.
An easy way to answer this question, at least on a preliminary basis, is to ask how many times in the past the Luddites have been right in the long run. About anything, from cameras to looms to machine tools to computers in general.
Then, ask what's different this time.
Some of them said that TV was making us mindless. Some of them said that electronic communication was depersonalizing. Some of them said that social media was algorithms feeding us anything that would make us keep clicking.
They weren't entirely wrong.
AI may be a very useful tool. (TV is. Electronic communication is. Social media is.) But what it does to us may not be all positive.
Most of the people who are protesting AI now were dead silent when Big Social Media was ramping up. There were exceptions (Cliff Stoll comes to mind) but in general, antitechnology movements don't have any predictive power. Tools that we were told would rob us of our personal autonomy and keep the means of production permanently out of our reach have generally had the opposite effect.
This will be true of AI as well, I believe... but only as long as the models remain accessible to everyone.
“But where the danger is, also grows the saving power.”
Remember this when talking about their actions. People live and die their own lives, not just as small parts in a larger 'river of society'. Yes, the generations after them benefited from industrialisation, but the individuals living at the time were fighting for their lives.
There's certainly great wealth for ~1000 billionaires, but where I am nobody I know has healthcare, or owns a house for example.
If your argument is that we could be poorer, that's not really productive or useful for people that are struggling now.
One person I know is developing an AI tool with 1000+ stars on GitHub, while in private they absolutely hate AI and feel the same way as Rob.
Maybe it's because I just saw Avatar 3, but I honestly couldn't be more disgusted by the direction we're going with AI.
I would love to be able to say how I really feel at work, but disliking AI right now is the short path to the unemployment line.
If AI was so good, you would think we could give people a choice whether or not to use it. And you would think it would make such an obvious difference, that everyone would choose to use it and keep using it. Instead, I can't open any app or website without multiple pop-ups begging me to use AI features. Can't send an email, or do a Google search. Can't post to social media, can't take a picture on my phone without it begging me to use an AI filter. Can't go to the gallery app without it begging me to let it use AI to group the photos into useless albums that I don't want.
The more you see under the hood, the more disgusting it is. I yearn for the old days when developers did tight, efficient work, creating bespoke, artistic software in spite of hardware limitations.
Not only is all of that gone, nothing of value has replaced it. My DOS computer was snappier than my garbage Win11 machine that's stuffed to the gills with AI telemetry.
Yes, there has to be a discussion about this, and yeah, he might generally have the right mindset, but let's be honest here: none of them would have developed any of it for free.
We are all slaves to capitalism,
and this is where my point comes in: extremely fast and massive automation around the globe might be the only thing pushing us close enough to the edge that we all accept capitalism's end.
And yes, I think it is still massively beneficial that my open source code helped create something that lets researchers write better code more easily and quickly, pushing humanity forward. Or that enables more people to gain access to writing code, or to what writing code produces: tools, etc.
@Rob it's spam, that's it. Get over it; you are rich, and your riches did not come out of thin air.
The bigger issue everyone should be focusing on is the growing hypocrisy and the overly puritan viewpoints of people who think they are holier and more righteous than everyone else. That's the real plague.
Of course we do. We don't live inside some game theoretic fever dream.
if anything, the Chinese approach looks more responsible than that of the current US regime
I don't think either of those are particularly valuable to the society I'd like to see us build.
We're already incredibly dialed in and efficient at killing people. I don't think society at large reaps the benefits if we get even better at it.
First to total surveillance state? Because that is a major driving force in China: to get automated control of its own population.
Give me more money now.
My guess is they wrote a thank you note and asked Claude to clean up the grammar, etc. This reads to me as a fairly benign gesture, no worse than putting a thank you note through Google Translate. That the discourse is polarized to a point that such a gesture causes Rob Pike to “go nuclear” is unfortunate.