Feels too self-congratulatory when he claims to be correct about self-driving in the Waymo case. The bar he set is so broad and ambiguous that probably nothing Waymo did would qualify as self-driving to him. So he thinks humans are intervening once every 1-2 miles to train the Waymo; we're not even sure that is true. I heard from friends that it was 100+ miles, but let us say Waymo comes out and says it is 1,000 miles.
Then I bet Rodney can just fiddle with the goalposts and say that 3.26 trillion miles were driven in the US in 2024, so a human intervening every 1,000 miles would mean 3.26 billion interventions, and that this is clearly not self-driving. In fact, until Waymo disables the Internet on all cars and proves it never needs any intervention ever, Rodney can claim he's right; even then, the car not stopping exactly where Rodney wanted it to might be proof that self-driving doesn't work.
The "next big thing after deep learning" prediction is clearly false. LLMs are deep learning, scaled up; we are not in any sense looking past deep learning. Rodney, I bet, wanted it to be symbolic AI, but that is most likely a dead end, and the bitter lesson actually holds. In fact we have been riding this deep learning wave since AlexNet in 2012. OpenAI talked about scaling since 2016, and during that time the naysayers could be very confident and claim we needed something more, but OpenAI went ahead, proved out the scaling hypothesis, and passed the language Turing test. We haven't needed anything more except scale, and reasoning has also turned out to be similar: just an LLM trained to reason, no symbolic merger, not even a search step, it seems.
Waymo cars can drive. Everything from the (limited) public literature to riding them personally has me totally persuaded that they can drive.
DeepMind RL/MCTS can succeed in fairly open-ended settings like StarCraft and shit.
Brain/DeepMind still knocks hard. They under-invested in LLMs and remain kind of half-hearted around it because they think it’s a dumbass sideshow because it is a dumbass sideshow.
They train on TPUs, which cost less than chips priced like a rapper's rhodium sunglasses, and they fixed the structural limits of TF2 and PyTorch via the JAX ecosystem.
If I ever get interested in making some money again Google is the only FAANG outfit I’d look at.
I can tell you as someone who crosses paths almost every day with a Waymo car, they absolutely do work. I would describe their driving behavior as very safe and overly cautious. I'm far more concerned about humans behind the wheel.
Agreed, Waymo cars can drive. But I don't believe that, say, when a city bus stops on a narrow street near a school crosswalk, the decision to edge out and around it is made on board the car, as I saw recently. The "car" made the right decision, drove it perfectly, and was safe at all times, but I just don't think anyone but a human in a call center said yes to that.
I think that, if it were true that Waymo cars require human intervention every 1-2 miles (thus requiring 1 operator for every, say, 1-2 cars, probably constantly paying attention while the car is in motion), then it would be fair to say that the cars are not really self driving.
However, if the real number is something like an intervention every 20 or 100 miles, and so an operator is likely passively monitoring dozens of cars, and the cars themselves ask for operator assistance rather than the operator actively monitoring them, then I would agree with you that Waymo has really achieved full self driving and his predictions on the basic viability have turned out wrong.
I have no idea though which is the case. I would be very interested if there are any reliable resources pointing one way or the other.
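In the meantime, the operator-load intuition is easy to sanity-check with a toy model (a minimal sketch; every number below is an assumption, not a reported Waymo figure):

    # Toy model of remote-operator load; all inputs are assumptions.
    avg_speed_mph = 15        # assumed average urban speed
    handle_time_min = 5.0     # assumed operator minutes per intervention

    for miles_per_intervention in (1.5, 20, 100):
        interventions_per_hr = avg_speed_mph / miles_per_intervention
        operator_min_per_car_hr = interventions_per_hr * handle_time_min
        cars_per_operator = 60 / operator_min_per_car_hr
        print(f"1 intervention per {miles_per_intervention} mi -> "
              f"~{cars_per_operator:.0f} cars per operator")

Under these assumptions, an intervention every 1-2 miles means roughly one operator per car, while every 20-100 miles means one operator per 16-80 cars, which is exactly the line between "human-assisted" and "passively supervised".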
I disagree that regular interventions every two trips, where you have no control over pickup or dropoff points, count as full self driving.
But that definition doesn’t even matter. The key factor is whether the additional overhead, whatever percentage it is, makes economic sense for the operator or the customer. And it seems pretty clear the economics aren’t there yet.
Waymo is the best driver I’ve ridden with. Yes it has limited coverage. Maybe humans are intervening, but unless someone can prove that humans are intervening multiple times per ride, “self driving” is here, IMO, as of 2024.
In what sense is self-driving “here” if the economics alone prove that it can’t get “here”? It’s not just limited coverage, it’s practically non-existent coverage, both nationally and globally, with no evidence that the system can generalize, profitably, outside the limited areas it’s currently in.
It's covering significant areas of 3 major metros, and the core of one minor, with testing deployments in several other major metros. Considering the top 10 metros are >70% of the US ridehail market, that seems like a long way beyond "non-existent" coverage nationally.
You’re narrowing the market for self-driving to the ridehail market in the top 10 US metros. That’s kinda moving the goal posts, my friend, and completely ignoring the promises made by self-driving companies.
The promise has been that self-driving would replace driving in general because it’d be safer, more economical, etc. The promise has been that you’d be able to send your autonomous car from city to city without a driver present, possibly to pick up your child from school, and bring them back home.
In that sense, yes, Waymo is nonexistent. As the article author points out, lifetime miles for “self-driving” vehicles (70M) accounts for less than 1% of daily driving miles in the US (9B).
Even if we suspend that perspective, and look at the ride-hailing market, in 2018 Uber/Lyft accounted for ~1-2% of miles driven in the top 10 US metros. [1] So, Waymo is a tiny part of a tiny market in a single nation in the world.
Self-driving isn’t “here” in any meaningful sense and it won’t be in the near-term. If it were, we’d see Alphabet pouring much more of its war chest into Waymo to capture what stands to be a multi-trillion dollar market. But they’re not, so clearly they see the same risks that Brooks is highlighting.
There are, optimistically, significantly fewer than 10k Waymos operating today. There are a bit fewer than 300M registered vehicles in the US.
If the entire US automotive production were devoted solely to Waymos, it'd still take years to produce enough vehicles to drive any meaningful percentage of the daily road miles in the US.
I think that's a bit of a silly standard to set for hopefully obvious reasons.
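To put rough numbers on why (the 9B daily-miles figure comes from the article; everything else here is assumed):

    # Rough numbers behind the scaling argument; all assumptions except
    # the 9B miles/day figure, which the article cites.
    daily_us_miles = 9e9
    miles_per_robotaxi_day = 300   # assumed heavy per-car utilization
    share = 0.10                   # a "meaningful" slice of daily miles

    fleet_needed = daily_us_miles * share / miles_per_robotaxi_day
    for cars_per_year in (1e4, 1e5, 1e6):   # assumed production/retrofit rates
        print(f"{fleet_needed:,.0f} cars at {cars_per_year:,.0f}/yr "
              f"-> {fleet_needed / cars_per_year:,.0f} years")

Even with heavily utilized robotaxis, 10% of daily miles needs ~3 million cars, which is decades at plausible retrofit rates.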
> ..is a tiny part of a tiny market in a single nation in the world.
The calculator was a small device made in one tiny market in one nation in the world. Now we've all got a couple of hardware ones in our desk drawers, and a couple of software ones on each smartphone.
If a self-driving car can perform 'well' (Your Definition May Vary - YDMV) in NY/Chicago/etc., then it can perform equally 'well' in London, Paris, Berlin, Brussels, etc. It's just that the EU has stricter rules/regulations while the US is more relaxed (thus innovation happens 'there' and not 'here' in the EU).
When 'you guys' (US) nail self-driving, it will only be a matter of time till we (EU) allow it to cross the pond. I see this as a hockey-stick graph. We are still in the blade phase.
If you had read the F-ing article, which you clearly did not, you would see that you are committing the sin of exponentiation: assuming that all tech advances exponentially because microprocessor development did (for a while).
Development of this technology appears to be logarithmic, not exponential.
He's committing the "sin" of monotonicity, not exponentiation. You could quibble about whether progress is currently exponential, but Waymo has started limited deployments in 2-3 cities in 2024 and wide deployments in at least SF (its second city after Phoenix). I don't think you can reasonably say its progress is logarithmic at this point - maybe linear or quadratic.
Speaking for one of those metro areas I'm familiar with: maybe in SF city limits specifically (where they're still at half of Uber's share), but that's 10% of the population of the Bay Area metro. I'm very much looking forward to the day when I can take a robocab from where I live near Google to the airport - preferably, much cheaper than today's absurd Uber rates - but today it's just not present in the lives of 95+% of Bay Area residents.
> preferably, much cheaper than today's absurd Uber rates
I just want to highlight that the only mechanism by which this eventually produces cheaper rates is removing the need to pay a human driver.
I’m not one to forestall technological progress, but there are a huge number of people already living on the margins who will lose one of their few remaining options for income as this expands. AI will inevitably create jobs, but it’s hard to see how it will—in the short term at least—do anything to help the enormous numbers of people who are going to be put out of work.
I’m not saying we should stop the inevitable forward march of technology. But at the same time it’s hard for me to “very much look forward to” the flip side of being able to take robocabs everywhere.
People living on the margins is fundamentally a social problem, and we all know how amenable those are to technical solutions.
Let's say AV development stops tomorrow though. Is continuing to grind workers down under the boot of the gig economy really a preferred solution here or just a way to avoid the difficult political discussion we need to have either way?
I'm not sure how I could have been more clear that I'm not suggesting we stop development on robotaxis or anything related to AI.
All I'm asking is that we take a moment to reflect on the people who won't be winners. Which is going to be a hell of a lot of people. And right now there is absolutely zero plan for what to do when these folks have one of the few remaining opportunities taken away from them.
As awful as the gig economy has been it's better than the "no economy" we're about to drive them to.
This is orthogonal. You're living in a society with no social safety net, one which leaves people with minimal options, and you're arguing for keeping at least those minimal options. Yes, that's better than nothing, but there are much better solutions.
The US is one of the richest countries in the world, with all that wealth going to a few people. "Give everyone else a few scraps too!" is better than having nothing, but redistributing the wealth is better.
But this is the society we live in now. We don’t live in one where we take care of those whose jobs have been displaced.
I wish we did. But we don’t. So it’s hard for me to feel quite as excited these days for the next thing that will make the world worse for so many people, even if it is a technological marvel.
Just between trucking and rideshare drivers we’re talking over 10 million people. Maybe this will be the straw that breaks the camel’s back and finally gets us to take better care of our neighbors.
Yeah, but it doesn't work to campaign, on the one hand, against taking rideshare jobs away from people on an online forum, and on the other to say "that's the society we live in now". If you're going to be defeatist, just accept those jobs might go away. If not, campaign for wealth redistribution and social safety nets.
Public transit has a fundamentally local impact. It takes away some jobs but also provides a lot of jobs for a wide variety of skills and skill levels. It simultaneously provides an enormous number of benefits to nearby populations, including increased safety and reduced traffic.
Self-driving cars will be disruptive globally. So far they primarily drive employment in a small set of the technology industry. Yes, there are manufacturing jobs involved, but those are overwhelmingly going to be jobs that were already building human-operated vehicles. Self-driving cars will save many lives, but not as many as public transit does (proportionally per user). And it is blindingly obvious they will make traffic worse.
Waymo's current operational area in the bay runs from Sunnyvale to Fisherman's Wharf. I don't know how many people that is, but I'm pretty comfortable calling it a big chunk of the bay.
They don't run to SFO because SF hasn't approved them for airport service.
I just opened the Waymo app and its service certainly doesn't extend to Sunnyvale. I just recently had an experience where I got a Waymo to drive me to a Caltrain station so I can actually get to Sunnyvale.
The public area is SF to Daly City. The employee-only area runs down the rest of the peninsula. Both of them together are the operational area.
Waymo's app only shows the areas accessible to you. Different users can have different accessible areas, though in the Bay Area it's currently just the two divisions I'm aware of.
Why would you count the employee-only area? For that categorization to exist, it must mean it's either unreliable for customers or too expensive because there are too many human drivers in the loop. Either way it would not be considered an area served by self-driving, imo.
There are alternative possibilities, like "we don't have enough vehicles to serve this area appropriately", "we don't have statistical power to ensure this area meets safety standards even though it looks fine", or "there are missing features (like freeways) that would make public service uncompetitive in this area", or simply "the CPUC hasn't approved a fare area expansion".
It's an area they're operating legally, so it's part of their operational area. It's not part of their public service area, which I'd call that instead.
I wish! In Palo Alto the cars have been driving around for more than a decade and you still can't hail one. Lately I see them much less often than I used to, actually. I don't think occasional internal-only testing qualifies as "operational".
Where's the economic proof of impossibility? As far as I know Waymo has not published any official numbers, and any third party unit profitability analysis is going to be so sensitive to assumptions about e.g. exact depreciation schedules and utilization percentages that the error bars would inevitably be straddling both sides of the break-even line.
> with no evidence that the system can generalize, profitably, outside the limited areas it’s currently in
That argument doesn't seem horribly compelling given the regular expansions to new areas.
Analyzing Alphabet’s capital allocation decisions gives you all the evidence necessary.
It's safe to assume that a company's ownership makes the decisions that they believe will maximize the value of the company. Therefore, we can look at Alphabet's capital allocation decisions, with respect to Waymo, to see what they think about Waymo's opportunity.
In the past five years, Alphabet has spent >$100B to buy back their stock and retained ~$100B in cash. In 2024, they issued their first dividend to investors and authorized up to $70B more in stock buybacks.
Over that same time period they’ve invested <$5B in Waymo, and committed to investing $5B more over the next few years (no timeline was given).
This tells us that Alphabet believes their money is better spent buying back their stock, paying back their investors, or sitting in the bank, when compared to investing more in Waymo.
Either they believe Waymo’s opportunity is too small (unlikely) to warrant further investment, or when adjusted for the remaining risk/uncertainty (research, technology, product, market, execution, etc) they feel the venture needs to be de-risked further before investing more.
Isn’t there a point of diminishing returns? Let’s assume they hand over $70B to Waymo today. Can Waymo even allocate that?
I view the bottlenecks as two things. Producing the vehicles and establishing new markets.
My understanding of the process with the vehicles is they acquire them then begin a lengthy process of retrofitting them. It seems the only way to improve (read: speed up) this process is to have a tightly integrated manufacturing partner. Does $70B buy that? I’m not sure.
Next, to establish new markets… you need to secure people and real estate. Money is essential but this isn’t a problem you can simply wave money at. You need to get boots on the ground, scout out locations meeting requirements, and begin the fuzzy process of hiring.
I think Alphabet will allocate money as the operation scales. If they can prove viability in a few more markets the levers to open faster production of vehicles will be pulled.
This is just a quirk of the modern stock market capitalist system. Yes, stock buybacks are more lucrative than almost anything other than a blitz-scaling B2B SaaS. But for the good of society, I would prefer if Alphabet spent their money developing new technologies and not on stock buybacks/dividends. If they think every tech is a waste of money, then give it to charity, not stock buybacks. That said, Alphabet does develop new technologies regularly. Their track record before 2012 is stellar, their track record now is good (AlphaFold, Waymo, TensorFlow, TPU, etc.), and they are nowhere close to being the worst offender on stock buybacks (I'm looking at you, Apple), but we should move away from stock-price-over-everything as a mentality and force companies to use their profits for the common good.
> Alphabet has to buy back their stock because of the massive amount of stock comp they award.
Wait, really? They're a publicly traded company; don't they just need to issue new stock (the opposite of buying it back) to employees, who can then choose to sell it in the public market?
That's a very hand wavy argument. How about starting here:
> Mario Herger: Waymo is using around four NVIDIA H100 GPUs at a unit price of $10,000 per vehicle to cover the necessary computing requirements. The five lidars, 29 cameras, 4 radars – adds another $40,000 - $50,000. This would put the cost of a current Waymo robotaxi at around $150,000
There are definitely some numbers out there that allow us to estimate within some standard deviations how unprofitable Waymo is
(That quote doesn't seem credible. It seems quite unlikely that Waymo would use H100s -- for one, they operate cars that predate the H100 release. And H100s sure as hell don't cost just $10k either.)
You're not even making a handwavy argument. Sure, it might sound like a lot of money, but in terms of unit profitability it could mean anything at all depending on the other parameters. What really matters is a) how long a period that investment is depreciated over; b) what utilization the car gets (or alternatively, how much revenue it generates); c) how much lower the operating costs are due to not needing to pay a driver.
Like, if the car is depreciated over 5 years, it's basically guaranteed to be unit profitable. While if it has to be depreciated over just a year, it probably isn't.
Do you know what those numbers actually are? I don't.
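No one outside Waymo does, but a toy model shows how the sign flips on the depreciation schedule alone (a sketch; every input is a guess except the $150k vehicle cost quoted upthread):

    # Toy unit-economics model; the point is the sensitivity, not the answer.
    vehicle_cost = 150_000       # figure quoted upthread
    revenue_per_mile = 2.00      # assumed gross fare
    cost_per_mile = 1.00         # assumed energy, maintenance, remote ops, insurance
    miles_per_year = 70_000      # assumed utilization

    margin_per_year = (revenue_per_mile - cost_per_mile) * miles_per_year
    for dep_years in (1, 3, 5):
        profit = margin_per_year - vehicle_cost / dep_years
        print(f"depreciated over {dep_years} yr: ${profit:+,.0f} per car per year")

With these made-up inputs the same car loses $80k/year on a 1-year schedule and makes $40k/year on a 5-year schedule, which is why the error bars straddle break-even.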
It's here in the product/research sense, which is the hardest bar to cross. Making it cheaper takes time, but we have generally reduced the cost of everything by orders of magnitude when manufacturing ramps up, and I don't think self-driving hardware (sensors, etc.) will be any different.
It’s not even here in the product/research sense. First, as the author points out, it’s better characterized as operator-assisted semi-autonomous driving in limited locations. That’s great but far from autonomous driving.
Secondly, if we throw a dart on a map: 1) what are the chances Waymo can deploy there, 2) how much money would they have to invest to deploy, and 3) how long would it take?
Waymo is nowhere near a turn-key system where they can set up in any city without investing in the infrastructure underlying Waymo's system. See [1], which details the amount of manual work and coordination with local officials that Waymo has to do per city.
And that’s just to deploy an operator-assisted semi-autonomous vehicle in the US. EU, China, and India aren’t even on the roadmap yet. These locations will take many more billions worth of investment.
Not to mention Waymo hasn’t even addressed long-haul trucking, an industry ripe for automation that makes cold, calculated, rational business decisions based on economics. Waymo had a brief foray in the industry and then gave up. Because they haven’t solved autonomous driving yet and it’s not even on the horizon.
Whereas we can drop most humans in any of these locations and they’ll mostly figure it out within the week.
Far more than lowering the cost, there are fundamental technological problems that remain unsolved.
> So he think humans are intervening once every 1-2 miles to train the Waymo
Just to make sure we're applying our rubric fairly and universally: Has anyone else been in an Uber where you wished you were able to intervene in the driving a few times, or at least apply RLHF to the driver?
(In other words: Waymo may be imperfect to the point where corrections are sometimes warranted; that doesn't mean it isn't already driving at a superhuman level, for most humans. Just because there is no way for remote advisors to provide better decisions for human drivers doesn't mean that human-driven cars would not benefit from that, if it were available.)
To apply this benchmark, you'd have to believe that Waymo is paying operators to improve the quality of the ride, not to make the ride possible at all. That is, you'd have to believe that the fully autonomous car works and gets you to your destination safely and in a timely manner (at the level of a median professional human driver), but Waymo decided that's not good enough and hired operators to improve beyond that. This seems very unlikely to me, and some of the (few) examples I've seen online were about correcting significant failures, such as waiting behind a parked truck indefinitely (as if it were stopped at a red light) or looping around aimlessly in a parking lot.
You'd also have to believe that when you wished to change how your Uber driver drove, you'd actually have improved things rather than worsened them.
Let's suppose Waymo's fully automated stuff has tenfold-fewer fatal collisions than a human. There's no way to avoid the fatal accidents a human causes, and the solution to Waymos getting stuck sometimes is simple. The point is that the Waymo can actually be described as superior to a human driver, and the fact that its errors can be corrected with review is a feature and not a bug - they optimize for those kinds of errors rather than unrecoverable ones.
Nonsense. If you spoke about self-driving cars a few decades ago you would have understood it to have meant that you could go to a dealer and buy a car that would drive itself, wherever you might be, without your input as a driver.
No-one would have equated the phrase "we'll have self-driving cars" with "some taxis in a few US cities".
That's how all innovation works. Ford never said people asked for a faster horse, but the theory holds. It doesn't matter what benchmarks you set, the market finds an interesting way to satisfy people's needs.
I agree. Waymo sells 150k+ rides every week according to Alphabet's Q3 2024 earnings announcement. Yes, they need human assistance once in a while. I know of plenty of other automation that needs to be tickled or rebooted periodically to work, that most would still say works automatically.
Maybe he has a very narrow or strict definition of ‘driverless’. That would explain the “not in this half of the century”-sentiment. I mean, it’s 25 years!
Your objection to him claiming a win on self driving is that you think that we can still define cars as self driving even when humans are operating them? Ok I disagree. If humans are operating them then they simply are not self driving by any sensible definition.
Human interventions are some non-zero number in current self-driving cars and will likely be that way for a while. Does this mean self-driving is a scam and in fact it is just a human driving, and that these are actually ADAS? Maybe in some pedantic sense you are right, but then your definition is not useful, since it lumps cruise control/lane-keeping ADAS and Waymos in the same category. Waymo is genuinely, qualitatively a big improvement over any ADAS/self-driving system that we have seen. I suspect Rodney did not predict even Waymos to be possible, but gave himself enough leeway that he can pedantically argue that Waymos are just ADAS and that his prediction was right.
This is not about crashes. By all accounts, the Waymo cars are mostly fully self-driving; I believe even the article author agrees with that. This includes crash avoidance, to the extent that they can.
The remote operation seems to be more about navigational issues and reading the road conditions. Things like accidentally looping, or not knowing how to proceed with an unexpected obstacle. Things that don't really happen to human drivers, even the greenest of new drivers.
Ok, but crashes are much worse than navigational issues or accidentally looping. It’s only status quo bias that makes us think driving is more solved if you get the accidental looping fixed before the crashing.
Only true up to some extent. If a car can't get you anywhere, then crashing is almost irrelevant: you won't use it, because there's nothing to be gained from that. A car looping around in a parking lot is extremely safe, but completely useless.
Some of them are scams, yes. For stuff like Waymo, it definitely doesn’t match the hype at the time he made the original predictions. As pointed out above, there were people in 2016 claiming we’d be buying cars without steering wheels that could go between any two points connected by roads by now.
Yeah, I think semi-autonomous vehicles are a huge milestone and should be celebrated, but the jump from semi-autonomous to fully-autonomous will, I think, feel noticeably different. It will be a moment after which future generations have trouble imagining a world where drunk or tired driving was ever even an issue.
The future is here, just unevenly distributed. There are already people that don't have that issue, thanks to technology. That technology might be Waymo and not driving in the first place, or the technology might be smartphones and the Internet, which enables Uber/Lyft to operate. Some of them might use older technologies like concrete which enables people to live more densely and not have to drive to get to the nearest liquor establishment.
You can make exactly the opposite argument as well: You think that we can still define cars as human-driven even when they have self-driving features (e.g. lane keeping). If the car is self-driving in even the smallest way, then they simply are not human-operated by any sensible definition.
> when he claims to be correct about self driving in the Waymo case. The bar he set is so broad and ambiguous, that probably anything Waymo did, would not qualify as self driving to him
Honestly, back in 2012 or something I was convinced that we would have autonomous driving by now, and by autonomous driving I definitely didn't mean "one company is able to offer autonomous taxi rides in a very limited number of places with remote operator supervision". The marketing pitch has always been something like "the car you'll buy will autonomously drive you to whatever destination you ask for, and you'll be just a passenger in your own car", and we definitely aren't there at all when all we have is Waymo.
The Waymo criticisms are absurd to the point of dishonesty. He criticizes a Waymo for... not pulling out fast enough around a truck, or for human criminals vandalizing them? Oh no, once some Waymos did a weird thing where they honked for a while! And a couple times they got stuck over a few million miles! This is an amazingly lame waste of space, and the fact that he does his best to only talk about Tesla instead of Waymo emphasizes how weak his arguments are, particularly in comparison to his earliest predictions. (Obviously only the best self-driving car matters to whether self-driving cars have been created.)
"Nothing ever happens"... until it does, and it seems Brooks's prediction roundups can now be conveniently replaced with a little rock on it with "nothing in AI ever works" written on it without anything of value being lost.
It’s interesting that in my reading of the post I felt like he hardly talked about Tesla at all.
He calls out that Tesla FSD has been “next year” for 11 years, but then the vast majority of the self-driving car section is about Cruise and Waymo. He also minorly mentions Tesla’s promise of a robotaxi service and how it is unlikely to be materially different than Cruise/Waymo. The amount of space allocated to each made sense as I read it.
For the meat of the issue: I can regularly drive places without someone else intervening. If someone else had to intervene in my driving once every 100 miles, even once every 1,000 miles, most would probably say I shouldn't have a license.
Yes, getting stuck behind a parked car or similar scenario is a critical flaw. It seems simple and non-important because it is not dangerous, but it means the drive would not be completed without a human. If I couldn’t drive to work because there was a parked car on my home street, again, people would question whether I should be on the road, and I’d probably be fired.
Interesting, that wasn't my takeaway from the article at all!
Direct quote from the article:
> Then I will weave them together to explain how it is still pretty much business as usual, and I mean that in a good way, with steady progress on both the science and engineering of AI.
There are some extremely emotional defences of Waymo in this comment thread. I don't quite understand why. Are they somehow immune to constructive criticism among the SV crowd?
> That being said, we are not on the verge of replacing and eliminating humans in either white collar jobs or blue collar jobs.
Tell that to someone laid off when replaced by some "AI" system.
> Waymo not autonomous enough
It's not clear how often Waymo cars need remote attention, but it's not every 1-2 miles. Customers would notice the vehicle being stopped and stuck during the wait for customer service. There are many videos of people driving in Waymos for hours without any sign of a situation that required remote intervention.
Tesla and Baidu do use remote drivers.
The situations where Waymo cars get stuck are now somewhat obscure cases. Yesterday, the new mayor of SF had two limos double-parked, and a Waymo got stuck behind that. A Waymo got stuck in a parade that hadn't been listed on Muni's street closure list.
> Flying cars
Probably at the 2028 Olympics in Los Angeles. They won't be cost-effective, but it will be a cool demo.
EHang recently put solid-state batteries into their flying car and got 48 minutes of flight time, instead of their previous 25 minutes. EHang is basically a scaled-up quadrotor drone, with 16 motors and props. EHang has been flying for years, but not for very long per recharge. Better batteries will help a lot.
> Tell that to someone laid off when replaced by some "AI" system.
What are some good examples? I am very skeptical of anyone losing their jobs to AI. People are getting laid off for various reasons:
- Companies are replacing American tech jobs with foreigners
- Many companies hired more devs than they need
- Companies hired many devs during the pandemic and don't need them anymore
Some companies may claim they are replacing devs with AI. I take it with a grain of salt. I believe some devs were probably replaced by AI, but not a large amount.
I think there may be a lot more layoffs in the future, but AI will probably account for a very small fraction of those.
> I believe some devs were probably replaced by AI, but not a large amount.
I'm not even sold on the idea that there were any. The media likes to blame AI for the developer layoffs because it makes a much more exciting story than interest rates and arcane tax code changes.
But the fact is that we don't need more than the Section 174 changes and the end of ZIRP to explain what's happened in tech. Federal economic policy was set up to direct massive amounts of investment into software development. Now it's not. That's a real, quantifiable impact that can readily explain what we've seen in a way that the current productivity gains from these tools simply can't.
Now, I'll definitely accept that many companies are attributing their layoffs to AI, but that's for much the same reason that the media laps the story up: it's a far better line to feed investors than that the financial environment has changed for the worse.
But I see so many devs typing here saying how vital AI is to their writing code efficiently and quickly now. If that is true then you need way less devs. Are people just sitting idle at their desks? I do see quite a bit of tech layoffs for sure. Are you saying devs aren't part of the workers being laid off?
> In 2024: At least 95,667 workers at U.S.-based tech companies have lost their jobs so far in the year, according to a Crunchbase News tally.
> Are you saying devs aren't part of the workers being laid off?
No, they are saying that the reason for the layoffs is not AI, it is financial changes making devs too expensive.
> If that is true then you need way less devs.
This does not follow. First of all, companies take a long time to measure dev output, it's not like you can look at a burn down chart over two sprints and decide to fire half the team because it seems they're working twice as fast. So any productivity gains will show up as layoffs only after a long time.
Secondly, dev productivity is very rarely significantly bounded by how long boilerplate takes to write. Having a more efficient way to write boilerplate, even massively more efficient, say 8h down to 1h, will only marginally improve your overall throughput, at least at the senior level: all that does is free you to think more about the complex issues you needed to solve. So if the task would have previously taken you 10 days, of which one day was spent on boilerplate, it may now take you, say, 8-9 days, because you've saved one day on boilerplate, plus some more minor gains here and there. So far from firing 7 out of every 8 devs, the 8h-to-1h boilerplate solution might allow you to fire 1 dev in a team of 10.
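That reasoning is just Amdahl's law applied to developer time; a minimal sketch of the arithmetic:

    # Amdahl's law: overall speedup when only a fraction of the work accelerates.
    def overall_speedup(f, s):
        """f: fraction of total time spent on the accelerated task,
           s: speedup on that task alone."""
        return 1 / ((1 - f) + f / s)

    # 1 day of boilerplate in a 10-day task, boilerplate 8x faster (8h -> 1h):
    print(overall_speedup(f=0.1, s=8))   # ~1.10, i.e. ~10% more throughput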
> But I see so many devs typing here saying how vital AI is to their writing code efficiently and quickly now. If that is true then you need way less devs.
Sure, in the same sense that editors and compilers mean you need way less devs.
Induced demand means we'll need more devs than we have right now since every dev can produce more value (anyone using Cursor for a longer while should be able to confirm that easily).
The problem is different in the meantime: nobody wants to be paying for training of those new devs. Juniors don’t have the experience to call LLM’s bullshit and seniors don’t get paid to teach them since LLMs replaced interns churning out boilerplate.
BLS reports ~1.9 million software developer jobs and predicts 17% growth through 2033. Crunchbase is talking about "tech workers" not developers. And they don't even say that tech employment is down. I predict that when BLS publishes their preliminary job numbers for 2024 it will be at least 1.85 million, not 1.9 million as suggested by your Crunchbase News. I would lay 2:1 odds that it will be higher than 2023's number.
Why would Jevons' paradox not apply to human labor?
I am not sure what I expect for software developers, besides that the nature of the work will change; it is still too early to say exactly how. We certainly cannot extrapolate linearly or exponentially from the past few years.
> Are you saying devs aren't part of the workers being laid off?
Of course not. The Section 174 changes are really only relevant to software devs—the conversation in the months leading up to them kicking in was all about how it would kill software jobs. But then when it happened the media latched onto this idea that it was the result of automation, with zero evidence besides the timing.
Since the timing also coincided with a gigantically important change to the tax code and a rapid increase in interest rates, both of which were predicted to kill software jobs, I'm suggesting that blaming AI is silly—we have a proximate cause already that is much more probable.
> But I see so many devs typing here saying how vital AI is to their writing code efficiently and quickly now. If that is true then you need way less devs
The same can be said for GitHub and open-source dependency management tools like npm, and I'd argue those had an even bigger impact back then. And did you see what happened afterwards? Where were the mass layoffs? The number of software developers is actually much higher than before that era.
It just isn’t true that AI has made developers more efficient. Some might claim such on this site, but the vast majority of developers aren’t using it, or they find it to be a drag on their productivity (because for most tasks the median software engineer has to do, it actually can’t help), and the ones that do use it are (unknowingly maybe) exaggerating its impact.
Devs are getting laid off, yes. AI is not the reason. Executive/shareholder priorities are the reason.
I was thinking about this. I think we have an overcorrection right now. People get laid off because of expected performance of AI, not real performance. With copywriting and software development we have three options:
1. leaders notice they were wrong, start to increase human headcount again
2. human work is seen as boutique and premium, used for marketing and market placement
3. we just accept the sub-par quality of AI and go with it (quite likely with copywriting I guess)
I'd like to compare it with cinema and Netflix. There was a time when lots of stuff was mindless shit, but there was still a place for A24, and it took the world by storm. What's gonna happen? No one knows.
But anyway, I figure that 90% of "laid off because of AI" is just regular lay-offs with a nice-sounding reason. You don't lose anything by saying that and only gain in stakeholder trust.
If you look up business analyst type jobs on JP Morgan website they are still hiring a ton right now.
What you actually notice is how many are being outsourced to other countries outside the US.
I think the main process at work is 1% actual AI automation and a huge amount of return to the office in the US while offshoring the remote work under the cover of "AI".
I imagine there aren't really layoffs, but slowing/stopping of hiring as you get more productivity out of existing devs. I imagine in the future, lots of companies will just let their employee base slowly attrition away.
Yeah, the AgentForce thing is a classic example. Internal leaks say Salesforce is using it as cover for more regular (cost cutting based) layoffs. People who've actually evaluated AgentForce don't think it's ready for prime time. It's more smoke and mirrors (and lots of marketing).
I think what Waymo's achieved is really impressive, and I like the way they've rolled out (carefully), but there's a lot of non evidence based defense of them in this comment thread. YouTube videos of people driving for hours are textbook survivorship bias. (What about all the videos people made but didn't upload because their drive didn't go perfectly?)
Nobody knows how many times operators intervene, because Waymo hasn't said. It's literally impossible to deduce.
Which means I also agree his estimate could also be wildly wrong too.
Solid state batteries. Prototypes work, but high-volume manufacturing doesn't work yet. The major battery manufacturers are all trying to get this to production.
Early versions will probably be expensive.
Maybe a 2x improvement in kWh/kg. Much less risk of fire or thermal runaway.
Charging in < 10 mins.
The one thing I'm curious about with solid-state batteries is whether there's a path towards incremental improvements in energy density like we've seen with lithium batteries.
It would be unfortunate if we get solid-state batteries that have the great features you describe but are limited to 2x or so the energy density. Twice the energy density opens a lot of doors for technology improvements and innovation, but it's still limiting for really cool things like humanoid robotics and large-scale battery-powered aircraft.
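For scale, here's a rough comparison using commonly cited ballpark energy densities (my assumptions, not figures from this thread):

    # Approximate, commonly cited energy densities; all values assumed.
    li_ion_wh_kg = 250                      # good EV pack today
    solid_state_wh_kg = 2 * li_ion_wh_kg    # the hoped-for 2x
    gasoline_wh_kg = 12_800                 # ~46 MJ/kg
    useful_gasoline_wh_kg = gasoline_wh_kg * 0.30   # after typical ICE losses

    print(solid_state_wh_kg)                           # 500 Wh/kg
    print(useful_gasoline_wh_kg / solid_state_wh_kg)   # ~7.7x gap still remains

Even after doubling, batteries trail gasoline's useful energy per kilogram by the better part of an order of magnitude, which is why aircraft stay hard.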
Somebody may come up with a new battery chemistry. There are many people trying. There are constraints other than energy density - charge rate, discharge rate, safety, lifetime, cooling, etc. Lithium-air batteries have an energy density which potentially approaches that of gasoline, but decades of work have not produced anything usable.[1]
There are, of course, small startups promising usable lithium-air batteries Real Soon Now.[2]
The big issues with hydrogen are volume and form factor. Hydrogen needs to be cryogenic or high pressure, and both work best with big spheroid-like tanks, which don't naturally integrate into the wings where fuel is currently stored.
There are now a few large flow batteries. Here's one that's 400 megawatt-hours.[1] Round trip efficiency is poor and the installation is bulky, but storage is just tanks of liquid that are constantly recycled.
Good example of everything that can go wrong with a prediction market if left unchecked. Don't like that Waymo broke your prediction? Fine, just move your goalposts. Like that a prediction came true but on the wrong timeframe? Just move the goalposts.
Glad Polymarket (and other related markets) exist so they can put actual goal posts in place with mechanisms that require certain outcomes in order to finalize on a prediction result.
> Glad Polymarket (and other related markets) exist so
Polymarket is a great way to incentivize people to make their predictions happen, with all clandestine tools at their disposal, which is definitely not what you want for your society generally.
It seems to me that the redefined flying cars for extremely wealthy people did happen? eVTOLs are being sold/delivered to the general public. Certainly still pretty rare, as I've never seen one in real life. I'd love to have one but would probably hate a world where everyone has them.
Not really wanting to have this argument a second time in a week (seriously- just look at my past comments instead of replying here as I said all I care to say https://news.ycombinator.com/item?id=42588699), but he is totally wrong about LLMs just looking up answers in their weights- they can correctly answer questions about totally fabricated new scenarios, such as solving simple physics questions that require tracking the location of objects and reporting where they will likely end up based on modeling the interactions involved. If you absolutely must reply that I am wrong at least try it yourself first in a recent model like GPT-4o and post the prompt you tried.
Kobe Bryant basically commuted by helicopter, when it was convenient. It may have even taken off and landed at his house, but probably not exactly at all of his destinations. Is a “flying car” fundamentally that much different?
I think the difference is that a helicopter is extremely technical to fly, requiring complex and expensive training, while an eVTOL is supposed to be extremely simple to fly. Also, an eVTOL is in principle really cheap to make if you just consider the materials and construction costs - probably eventually much cheaper than a car.
I was curious so I looked up how much you can buy the cheapest new helicopters for, and they are cheaper than an eVTOL right now- the XE composite is $68k new, and things like that can be ~25k used. I'm shocked one can in principle own a working helicopter for less than the price of a 7 year old Toyota Camry.
Nothing that flies in the air is that safe for its passengers or its surroundings - not without restrictions placed on it and having a maintenance schedule that most people would not be comfortable following.
Most components are safety critical in ways that their failure can lead to an outright crash or feeding the pilot false information leading him to make a fatal mistake. Most cars can be run relatively safely even with major mechanical issues, but something as 'simple' as a broken heater on a pitot tube (or any other component) can lead to a crash.
Then there's the issue of weather - altitude, temperature, humidity, and wind speed can create an environment that makes flying either impossible, unsafe, or extremely unpleasant - imagine flying into an eddy that stalls out the aircraft, making your ass drop a few feet.
Flying's a nice hobby, and I have great respect to people who can make a career out of it, but I'd definitely not get into these auto-piloted eVTOLs, nor should people who don't know what they are doing.
Edit: Also unlike helicopters, which can autorotate, and fixed wing aircraft, that can glide, eVTOLs just drop out of the sky.
I would expect eVTOLs to be capable of greater redundancy than a helicopter or fixed-wing aircraft - with no single point of failure that could make them drop from the sky. It would add little weight to have two or more independent electrical and motor systems, each capable of making a semi-controlled landing on its own but coordinating to provide the full rated lift. Marketing materials claim the Blackfly has triple redundancy. I suppose one could have software logic glitches that cause all modular systems to respond inappropriately to conditions in unison.
eVTOLs are going to be much more expensive to build than helicopters because they have far more stringent weight/strength requirements due to low battery energy density (relative to aviation fuel).
The idea is to have far cheaper operating costs. Electric motors are far more efficient than ICE, so you should have much cheaper energy costs. Electric motors are also simpler than ICE so you should have cheaper maintenance with less required downtime compared to helicopters.
Of course, most of this is still being tested and worked on. But we are getting closer to having these get certified (FAA just released the SFAR for eVTOL, the first one since the 1940s).
But I'm sure running costs (aviation fuel), hanger costs, maintenance costs, cost to maintain pilot license are far more expensive, compared to driving a car.
I'm talking about buying the absolute cheapest possible used experimental helicopter - homemade by a stranger from a cheap kit. I would posit that if I were willing to take that risk - probably buying a model with known design and reliability issues to save money - I'd also just park it in the backyard, skip the maintenance, and run it on the cheapest pump gas I can find!
The ones I'm seeing in the 20k range are mostly the "Mini 500." Wikipedia suggests that maybe as few as 100 were built, with 16 fatalities thus far (or is it 9- which it says in a different part of the article?). But some people argue all of those involved "pilot error."
I suppose choosing to fly the absolute cheapest homemade experimental aircraft kit notorious for a high fatality rate is technically a type of pilot error?
Can you imagine thousands of flying cars flying low over urban areas?
Skill level needed for "driving" would increase by a lot, noise levels would be abysmal, security implications would be severe (be they intentional or mechanical in nature), privacy implications would result in nobody wanting to have windows.
This is all more-or-less true for drones as well, but their weight is comparable to a toddler, not to a polar bear. I firmly believe they'll never reach mass usage, but not because they're impossible to make.
I had a friend who used to (still does) fly RC helicopters; that requires quite a bit of skill. Meanwhile, I think anybody can fly a DJI drone. I think that's what will transform "flying" when anybody, not just a highly skilled pilot, can "drive" a flying car (assuming it can be as safe as a normal car... which somehow I doubt)
Yeah, as an NLP researcher I was reading the post with interest until I found that gross oversimplification about LLMs, which has been repeatedly proved wrong. Now I don't trust the comments and predictions on the other fields I know much less about.
I always have a definitional problem with predictions. Whether a specific prediction is right or wrong is moot unless it helps us understand the big picture and the trends.
Take, for example, the prediction about "robots can autonomously navigate all US households". Why all? From the business POV, 80% of the market is "all" in a practical sense, and most people will consider navigation around the home "solved" if robots can do it for the majority of households with virtually no intervention. Hilarious situations will arise that amuse the folks; videos of clumsy robots will flood the internet instead of cats and dogs, but for the business side, it's lucrative enough to produce and sell them en masse.
Another question of interest is: what is the trend? What will the approximate cost of such a robot be? How many US households will adopt such a robot by when, as they adopted washing machines and dishwashers? Will we see linear adoption or rather logistic adoption? These are more interesting questions than just whether I'm right or wrong.
> Their imaginations were definitely encouraged by exponentialism, but in fact all they knew was that when they went from smallish to largish networks following the architectural diagram above, the performance got much better. So the inherent reasoning was that if more made things better then more more would make things more better. Alas for them it appears that this is probably not the case.
I recommend reading Richard Hamming's "The Art of Doing Science and Engineering." Early in the book he presents a simple model of knowledge growth that always leads to an s-curve. The trouble is that on the left, an s-curve looks exponential. We still don't know where we are on the curve with any of these technologies. It is very possible we've already passed the exponential growth phase with some of them. If so, we will need new technologies to move forward to the next s-curve.
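A minimal numerical sketch of that point, assuming a standard unit-rate logistic:

    # Far left of its midpoint, a logistic curve is nearly indistinguishable
    # from an exponential with the same rate.
    import math

    def logistic(t, L=1.0, k=1.0, t0=0.0):
        return L / (1 + math.exp(-k * (t - t0)))

    for t in (-6, -4, -2):
        ratio = logistic(t) / math.exp(t)   # exponential e^t, same rate k=1
        print(f"t={t}: logistic/exponential = {ratio:.3f}")
    # -> 0.998, 0.982, 0.881: the curves only separate near the midpoint

So an observer sampling the early part of the curve has no way, from the data alone, to tell whether they're riding an exponential or approaching a plateau.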
In reading this I come to wonder if the current advances in "AI" are going to follow the Self Driving Car model. Turns out the 80% is relatively easy to do, but the remaining 20% to get it right is REALLY hard.
Agree, that is why the agent hype is going to bust. Agent means giving AI control. That means critical failure modes and the need of human to constantly oversee agent working.
> Systems which do require remote operations assistance to get full reliability cut into that economic advantage and have a higher burden on their ROI calculations
Technically true but I'm not convinced it matters that much. The reason autonomation took over in manufacturing was not that they could fire the operator entirely, but that one operator could man 8 machines simultaneously instead of just one.
All that verbiage about robotaxis and not a single mention about China, which by all accounts is well ahead of the US in deploying them out on the road. (With a distinctly mixed track record, it must be said, but still.)
I like Rodney Brooks, but I find the way he does these predictions very obtuse and subject to a lot of self-congratulatory interpretation. Highlighting something marked "NET2021" in green and then saying he was right: does something related happening in 2024 mean he predicted it right or wrong, or is everything subject to arbitrary interpretation? Where are the bold predictions? Sounds like a lot of fairly obvious predictions with a lot of wiggle room in determining whether they were right or wrong.
NET2021 means that he predicted that the event would take place on or after 2021, so happening in 2024 satisfies that. Keep in mind these are six-year-old predictions.
Are you wishing that he had tighter confidence intervals?
If the predictions are meant to be bold, then yes. If they're meant to be fairly obvious, then no.
For example, saying that flying cars will be in widespread use NET 2025 is not much of a prediction. I think we can all say that if flying cars will be in widespread use, it will happen No Earlier Than 2025. It could happen in 2060, and that NET 2025 prediction would still be true. He could mark it green in 2026 and say he was right, that, yes, there are no flying cars, and so mark his scorecard another point in the correct column. But is that really a prediction?
A bolder prediction would be, say "Within 1-2 yrs of XX".
So what is Rodney Brooks really trying to predict and say? I'd rather read about what the necessary gating conditions are for something significant and prediction-worthy to occur, or what the intractable problems are that would make something not be possible within a predicted time, rather than reading about him complain about how much overhype and media sensation there is in the AI and robotics (and space) fields. Yes, there is, but that's not much of a prediction or statement either, as it's fairly obvious.
There's also a bit of an undercurrent of complaint in this long article about how the not-as-sexy or hyped work he has done for all those years has gone relatively unrewarded and "undeserving types" are getting all the attention (and money). And as such, many of the predictions and commentary on them read more as rant than as prediction.
Presumably you read the section where Brooks highlights all the forecasts executives were making in 2017? His NET predictions act as a sort of counter-prediction to those types of blind optimistic, overly confident assertions.
In that context, I’d say his predictions are neither obvious nor lacking boldness when we have influential people running around claiming that AGI is here today, AI agents will enter the workforce this year, and we should be prepared for AI-enabled layoffs.
The NET estimation is supposed to be a counter to the irrational exuberance of media and PR. E.g. musk says they'll get humans to Mars in 2020, and the counter is "I don't think that will happen until at least 2030".
"NET2021" means "no earlier than 2021". So, if nothing even arguably similar happened until 2024, that sounds like a very correct prediction.
Whether that's worth congratulating him about depends on how obvious it was, but I think you really need to measure "fairly obvious" at the time the prediction is made, not seven years later. A lot of things that seem "fairly obvious" now weren't obvious at all then.
For me, these predictions reflect an awareness of how progress has happened historically, but that alone will not lead to any breakthrough. I am not in the skeptic camp, so I still like hype cycles; they create an environment for people to push the boundaries and sometimes help untested ideas and things get explored. That might not happen without a hype cycle. I am in the camp of people who are as positive as George Bernard Shaw in these two quotes:
1. A life spent making mistakes is not only more honorable, but more useful than a life spent doing nothing.
2. The reasonable person adapts themselves to the world: the unreasonable one persists in trying to adapt the world to themself. Therefore all progress depends on the unreasonable person. (Changed man to person as I feel it should be gender neutral)
In hindsight, when we look back, everything looks like we anticipated it, so predictions are no different: some pan out, some don't. My feeling after reading the prediction scorecard is that you need the right balance between the risk-averse (who are either doubtful or lack faith that things will happen quickly enough) and risk-takers (who are extremely positive) for anything good to happen. Both help humanity move forward and are a necessary part of nature.
It is possible AGI might replace humans in the short term and then new kinds of work emerge and humans again find something different. There is always disruption with big changes, and some survive and some can't; even if nothing much happens, it's worth trying, as said in quote 1.
> The billionaire founders of both Virgin Galactic and Blue Origin had faith in the systems they had created. They both personally flew on the first operational flights of their sub-orbital launch systems. They went way beyond simply talking about how great their technology was, they believed in it, and flew in it.
> Let’s hope this tradition continues. Let’s hope the billionaire founder/CEO of SpaceX will be onboard the first crewed flight of Starship to Mars, and that it happens sooner than I expect. We can all cheer for that.
> Individually owned cars can go underground onto a pallet and be whisked underground to another location in a city at more than 100mph.
I'm curious where this idea even came from; I'm not sure who the customer would be. It's a little disappointing he doesn't mention maglev trains in a discussion about future rapid transit: I'd much rather ride a smooth maglev across town than an underground pallet system.
I don't have a pulse on how far self-driving has come from a tech standpoint, but from an outsider's perspective I'd say it is "achieved" when I can order a self-driving car from an app in all of the top 10 most populated cities in the US (since that's where it is being developed) with as much consistency as Uber/Lyft. The real final boss for self-driving will be the government red tape that companies will need to get through. I doubt local governments will be as laissez-faire with self-driving as they were with Uber operating as an illegal taxi company.
The final boss will be the first big lawsuit against a manufacturer for liability after someone is killed by a driverless car.
Of course, then we will eventually see infrastructure become even more hostile to non-drivers and people will have to sue their own governments for the right to exist in public without paying transport companies. Strong Towns tried to warn us
There was, but the reality is that the modern US regulatory environment demonstrably doesn't care whether cars labeled as self-driving are actually capable of driving safely, and has shown no interest in regulating them. And that was BEFORE we popularly elected a group of charlatans, hacks, and grifters who have all made "the courts will bend over backwards for us and our wealth" a huge part of America.
Like, Reagan's instructions to the regulatory agencies to basically stand down was only just beginning to be undone after 40 years, and we immediately elected the people promising to slam hard in the other direction.
America will be a regulatory free for all for business for decades.
It is valuable to make predictions about the world, evaluate those predictions, and reflect on the quality of the predictions and what biases skewed those predictions. The key is to refine how one looks at the world.
I don't see that in this article. Largely, I see the author trying to argue that he was right in 2018 rather than taking a step back to accurately evaluate his predictions.
Does it drive anyone else crazy when an author posts 15,000 words (yes, there are that many in this article) when 1,500 would have more than communicated the relevant information? The length of this article is almost comical.
It's long, so I'm skimming a little and... flying cars. If you don't know why we don't have flying cars, you're not a good engineer.
It really doesn't matter what prestigious lab you ran, as that apparently didn't impart the ability to think critically about engineering problems.
[Hint: Flying takes 10x the energy of driving, and the cost/weight/volume of 1 MJ hasn't changed in close to a hundred years. Flying cars require a 10x energy breakthrough.]
The article is responding to claims by CEOs of car companies, industry and business press, and other hype sources that keep predicting flying cars next year or so. It's predicting that, against this hype, it will not come to pass. Not sure why you've worded your comment in such a way as if the article was hyping up flying cars.
Not to mention that, since we do have helicopters, the engineering challenge of flying cars is almost entirely unrelated to energy costs (at least for the super rich: the equivalent of, say, a Rolls-Royce, not of a Toyota). The thing stopping flying cars from existing is that it is extremely hard to make an easy-to-pilot flying vehicle, given the numerous degrees of freedom (and potential catastrophic failure modes) and the significantly higher unpredictability and variance of the medium (air vs. road surface).
Plus, there is the major problem of noise pollution, which reaches extreme levels for somewhat fundamental reasons (you have to displace a whole lot of air to fly, which is very close to having to create sound waves).
So, overall, the energy problem is already solved: we already have point-to-point flying vehicles usable, and occasionally used, in urban areas, namely helicopters. Making them safe when operated by a very lightly trained pilot, and quiet enough not to wake up a neighborhood, are the real issues, and they would persist even if we had mini fusion reactors.
I'm not sure that this disproves my original point that self driving cars and flying cars don't belong in the same list because they are fundamentally different engineering problems.
Not quite. It's about 3x. It also depends on whether you're talking fixed wing or rotary wings.
A modern car might easily have 130 kW or more, and that's what a Cessna 172 has (around 180 hp). (Sure, a plane cruises at the higher end of that, while a car only uses that much to accelerate and cruises at the lower end of the range - still not a factor of 10x.)
As another datapoint, a Diamond DA40 does around 28 miles per gallon (< 9 litres per 100 km) at 60% power cruise.
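Taking those quoted figures at face value, here's a quick back-of-the-envelope check; the 35 mpg car figure is my assumption, and this only covers efficient fixed-wing cruise (a hovering VTOL "flying car" would fare far worse):

    # Rough energy-per-mile comparison using the figures quoted above.
    GASOLINE_MJ_PER_GALLON = 121.7  # approximate lower heating value

    da40_mpg = 28   # Diamond DA40 at 60% power cruise (from the comment above)
    car_mpg = 35    # assumed typical sedan highway economy

    plane_mj_per_mile = GASOLINE_MJ_PER_GALLON / da40_mpg  # ~4.3 MJ/mile
    car_mj_per_mile = GASOLINE_MJ_PER_GALLON / car_mpg     # ~3.5 MJ/mile
    print(f"ratio: {plane_mj_per_mile / car_mj_per_mile:.2f}x")  # 1.25x, nowhere near 10x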
The article is not optimistic on flying cars. The prediction is that an expensive flying car could be purchased no earlier than 2036, and notes a strong possibility that it won’t even happen by 2050. Plus states that minor success (aka 0.1% of car sales are flying cars) isn’t going to happen in his lifetime.
The author also expands on this:
> Don’t hold your breath. They are not here. They are not coming soon.
> Nothing has changed. Billions of dollars have been spent on this fantasy of personal flying cars. It is just that, a fantasy, largely fueled by spending by billionaires.
It’s worth actually reading the article before trashing someone’s career and engineering skills!
Engineering is about focusing on what matters. There's no point in talking about flying cars: they will exist when portable fusion exists, so just talk about that.
On reading the negative commentary here on Rodney Brooks's post, I'm realizing that besides being a rambling article, it also assumes too much background from the reader. It isn't really understandable without knowing something about the author and about the business of robots.
Disclaimer: I worked for years building robots, several of these years with Rod. I assure you, when it comes to robotics and AI, he knows what he's talking about.
Here's my perspective. Also, he wrote his original predictions six years ago in a blog post [1], which is the basis for this latest post. If you don't have the time to read the old post, I provide a short summary from it about autonomous driving below, too.
1. Rod is not just an MIT professor emeritus and a past director of CSAIL. He has co-founded multiple robotics companies, one of which, iRobot, made loads of money selling tens of millions of consumer-grade autonomous robots cleaning floors in people's homes.
Making money selling autonomous robots is a very, very difficult thing. Roomba was a true milestone. Before then, the only civilian, commercially successful mass-produced robots were the programmable industrial arms that are still used in auto manufacturing. If the author sounds self-important, maybe that's why.
Yeah, he can get a little snarky sometimes when self-important CEOs run around with VC money in their pockets making tall claims and never being held accountable. That's just his style. Try to look beyond it. You might learn a thing or two.
2. The entire purpose of his annual "predictions" posts starting with [1] was to counter the hype and salesmanship about AI and robotics that's wasting billions of investment dollars and polluting the media landscape.
About autonomous cars, he believes that the core technology was demonstrated in the 1980s, but that instead of using it, we have squandered the decades since then. For autonomous robots, the interaction with their surroundings is critical to success. We could have enhanced our road and communications infrastructure to enable autonomous cars. Instead, we have chosen to give money to slick salesmen to chase the mirage of placing "intelligent" cars on existing roads, continuing to neglect our civil infrastructure.
[1] https://rodneybrooks.com/my-dated-predictions/
If you took a transcript of a conversation with Claude 3.6 Sonnet, and sent it back in time even five years ago (just before the GPT-3 paper was published), nobody would believe it was real. They would say that it was fake, or that it was witchcraft. And whoever believed it was real would instantly acknowledge that the Turing test had been passed. This refusal to update beliefs on new evidence is very tiresome.
Similarly if you could let a person from five years ago have a spoken conversation with ChatGPT Advanced Voice mode or Gemini Live. For me five years ago, the only giveaways that the voice on the other end might not be human would have been its abilities to answer questions instantaneously about almost any subject and to speak many different languages.
The NotebookLM “podcasters” would have been equally convincing to me.
> The level of hype about AI, Machine Learning and Robotics completely distorts people’s understanding of reality. It distorts where VC money goes, always to something that promises impossibly large payoffs–it seems it is better to have an untested idea that would have an enormous payoff than a tested idea which can get to a sustainable business, but does not change the world for ever.
One thing to remember is that there is more than one target audience for these claims. VCs, for example, seem to operate on a rough principle of five tech companies: four return 0x and one returns 10x, for an average of 2x per investment. If you only promise 5x, then with four failures at 0x and one success at 5x, the average return is 1x per investment (not worth the risk). You may say "yes, my company is only 2x, but it is guaranteed!", but they all sell this idea. VCs could be infinitely good at predicting success and great companies, but they work from partial information. Essentially, companies have to promise the 10x, and the VCs assume they are likely incorrect anyway, in order to balance the risk profile.
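A minimal sketch of that arithmetic, using the counts and multiples from the comment above:

    # Expected value per investment when most of a VC portfolio returns zero.
    def average_multiple(n_companies, n_winners, winner_multiple):
        # Losers are assumed to return 0x.
        return n_winners * winner_multiple / n_companies

    print(average_multiple(5, 1, 10))  # 2.0x -> the fund math works
    print(average_multiple(5, 1, 5))   # 1.0x -> capital merely returned; not worth the risk

Which is why a credible promise of a guaranteed 2x loses to an incredible promise of 10x.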
I do have a fundamental problem with this "infinite growth" model that almost everything seems based on.
> There is steady growth in sales but my prediction of 30% of US car sales being electric by 2027 now seems wildly optimistic. We need two doublings to get there in three years and the doubling rate seems more like one doubling in four to five years.
Even one doubling in 4-5 years might be too much (see the sanity check below). There are fundamental issues to be addressed:
1. What do we do about crashed EVs? They are dangerous to store and dangerous to dismantle. There have been quite a few EV fires at places like Copart now. There is little to no value in crashed EVs because they are so dangerous, which pushes insurance up because they cannot recover these funds.
2. Most car dealerships in the UK refuse to accept EVs for trade-in, because they sit on the forecourt until they eventually die. Those who can afford EVs typically get them on finance while the batteries provide the fullest range. Nobody I know is buying 10-year-old EVs with no available replacement batteries. Commercial fleets are also not buying any more EVs, as they essentially get no money back after using them for 3 years or so.
3. The electrical grid cannot scale to handle EVs. With every Western country decarbonising their electrical grid in favour of renewable energy, they have zero ability to respond to increased load.
The truth is, when they push to remove fossil fuel vehicles, they simply want to take your personal transport from you. There is no plan for everybody to maintain personal mobility, it'll be a privilege reserved for the rich. You'll be priced out and put onto public transport, where there will be regular strikes because the government is broke and wages cannot increase - because who knew, infinite growth is a terrible investment model.
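On the doubling arithmetic referenced above, a quick sanity check under the quoted assumptions (a ~7.5% starting share implied by "two doublings to get to 30%", and one doubling per 4-5 years):

    import math

    current_share = 0.075          # implied: two doublings short of 30%
    target_share = 0.30
    years_available = 3            # to 2027
    doubling_years = 4.5           # midpoint of "four to five years"

    doublings_needed = math.log2(target_share / current_share)   # 2.0
    years_needed = doublings_needed * doubling_years             # ~9 years
    share_at_deadline = current_share * 2 ** (years_available / doubling_years)

    print(f"years needed at observed rate: {years_needed:.0f}")      # ~9
    print(f"projected share by deadline: {share_at_deadline:.1%}")   # ~11.9%

So even granting steady doubling, the 30%-by-2027 prediction lands closer to 2033.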
> The other thing that has gotten over hyped in 2024 is humanoid robots.
> The visual appearance of a robot makes a promise about what it can do and how smart it is.
The real sin is not HRI issues, it's that we simply cannot justify them. What job is a humanoid robot supposed to do? Who is going to be buying tens of thousands of the first unit? What is the killer application? What will a humanoid robot do that it is not cheaper/more effective to do with a real human, or cannot be done better with a specialised robot?
Anything you can think of that has a humanoid robot performing a single physical action repeatedly is wrong. It would need to be a series of tasks that keeps the robot highly busy, where the nature of the work is somewhat unpredictable (otherwise use a dedicated robot). After all, humans are successful not because we do one thing well, but because we do many ill-defined things well enough. This kind of generalisation is probably harder than all other AI problems, and likely requires massive advances in real-time learning, embodiment and intrinsic motivation.
What we need are sub-problems for robots, like the smart vacuum: domains where robots are slowly but surely introduced into complex environments in which they can safely and incrementally improve. Trying to crack self-driving 1+ tonne high-speed death machines on your first attempt is insanity.
>>> [self driving cars are remote controlled] in all cases so far deployed, humans monitoring those cars from a remote location, and occasionally sending control inputs to the cars.
Wait, what now?
I have never heard this, but coming from a former director of CSAIL I am going to take it as a statement of fact, and proof that basically every AI company is flat-out lying.
I mean the difference between remote piloting a drone that has some autonomous flying features (which they do to handle lag etc) and remote driving a car is … semantics?
But yeah it’s just moving jobs from one location to another.
Note that even the examples he gives are related to things like an operator telling the car to overtake a stopped truck instead of waiting for it to start again. So occasional high level decisions, not minute-to-minute or even second-to-second interactions like you have when flying a drone.
This is more like telling your units to go somewhere in a video game, and they mostly do it right, but occasionally you have to take a look and help them because they got stuck in a particularly narrow corridor or something.
I don't know the motivation behind making robotics and AI predictions, as these things have been done to death since the 70s, but I know people who bet on high inflation made a killing in financial futures.
> It distorts where VC money goes, always to something that promises impossibly large payoffs–it seems it is better to have an untested idea that would have an enormous payoff than a tested idea which can get to a sustainable business
But this is the whole point of VC investing. It is not normal distribution investing.
What a weird writer: lots of interesting things to talk about, but this very long essay keeps circling back to the author's self-obsession with his own prowess, drawing out huge expositions and bullet lists on how good he is at predicting things. Call it a self-referential appeal to authority.
Another perspective is that this is a person who takes great care to examine and re-evaluate his reasoning, and makes an effort to explain the logic behind it, which can be helpful if you are trying to figure out whether you agree or disagree.
Quite an unreadable web page, and it somehow rationalises an "everything before me" and "everything after me" view of technology and prediction. An unfortunate understanding of reality, really.
It’s like being in the back seat of Niki Lauda’s car.
https://www.youtube.com/watch?v=hVZ8NyV4pXU
I have no idea, though, how frequent the remote interventions actually are. I would be very interested if there are any reliable sources pointing one way or the other.
But that definition doesn’t even matter. The key factor is whether the additional overhead, whatever percentage it is, makes economic sense for the operator or the customer. And it seems pretty clear the economics aren’t there yet.
The promise has been that self-driving would replace driving in general because it’d be safer, more economical, etc. The promise has been that you’d be able to send your autonomous car from city to city without a driver present, possibly to pick up your child from school, and bring them back home.
In that sense, yes, Waymo is nonexistent. As the article author points out, lifetime miles for “self-driving” vehicles (70M) amount to less than 1% of daily driving miles in the US (9B).
Even if we suspend that perspective, and look at the ride-hailing market, in 2018 Uber/Lyft accounted for ~1-2% of miles driven in the top 10 US metros. [1] So, Waymo is a tiny part of a tiny market in a single nation in the world.
Self-driving isn’t “here” in any meaningful sense and it won’t be in the near-term. If it were, we’d see Alphabet pouring much more of its war chest into Waymo to capture what stands to be a multi-trillion dollar market. But they’re not, so clearly they see the same risks that Brooks is highlighting.
[1]: https://drive.google.com/file/d/1FIUskVkj9lsAnWJQ6kLhAhNoVLj...
I think that's a bit of a silly standard to set for hopefully obvious reasons.
The calculator was a small device made in one tiny market in one nation in the world. Now we've all got a couple of hardware ones in our desk drawers, and a couple of software ones on each smartphone.
If a driving car can perform 'well' (Your Definition May Vary - YDMV) in NY/Chicago/etc. then it can perform equally 'well' in London, Paris, Berlin, Brussels, etc. It's just that EU has stricter rules/regulations while US is more relaxed (thus innovation happens 'there' and not 'here' in the EU).
When 'you guys' (US) nail self-driving, it will only be a matter of time til we (EU) allow it to cross the pond. I see this as a hockey-stick graph. We are still on the eraser/blade phase.
Development of this technology appears to be logarithmic, not exponential.
I just want to highlight that the only mechanism by which this eventually produces cheaper rates is by removing the need to pay a human driver.
I’m not one to forestall technological progress, but there are a huge number of people already living on the margins who will lose one of their few remaining options for income as this expands. AI will inevitably create jobs, but it’s hard to see how it will—in the short term at least—do anything to help the enormous numbers of people who are going to be put out of work.
I’m not saying we should stop the inevitable forward march of technology. But at the same time it’s hard for me to “very much look forward to” the flip side of being able to take robocabs everywhere.
Let's say AV development stops tomorrow though. Is continuing to grind workers down under the boot of the gig economy really a preferred solution here or just a way to avoid the difficult political discussion we need to have either way?
All I'm asking is that we take a moment to reflect on the people who won't be winners. Which is going to be a hell of a lot of people. And right now there is absolutely zero plan for what to do when these folks have one of the few remaining opportunities taken away from them.
As awful as the gig economy has been it's better than the "no economy" we're about to drive them to.
The US is one of the richest countries in the world, with all that wealth going to a few people. "Give everyone else a few scraps too!" is better than having nothing, but redistributing the wealth is better.
But this is the society we live in now. We don’t live in one where we take care of those whose jobs have been displaced.
I wish we did. But we don’t. So it’s hard for me to feel quite as excited these days for the next thing that will make the world worse for so many people, even if it is a technological marvel.
Just between trucking and rideshare drivers we’re talking over 10 million people. Maybe this will be the straw that breaks the camel’s back and finally gets us to take better care of our neighbors.
This is just coming from using what we already know how to do better.
Self-driving cars will be disruptive globally. So far they primarily drive employment in a small part of the technology industry. Yes, there are manufacturing jobs involved, but those are overwhelmingly jobs that already existed building human-operated vehicles. Self-driving cars will save many lives, but not as many as public transit does (proportionally, per user). And it is blindingly obvious they will make traffic worse.
You haven’t paid attention to how VC companies work.
They don't run to SFO because SF hasn't approved them for airport service.
Waymo's app only shows the areas accessible to you. Different users can have different accessible areas, though in the Bay area it's currently just the two divisions I'm aware of.
It's an area they're operating legally, so it's part of their operational area. It's not part of their public service area, which I'd call that instead.
> with no evidence that the system can generalize, profitably, outside the limited areas it’s currently in
That argument doesn't seem horribly compelling given the regular expansions to new areas.
It’s safe to assume that a company’s ownership takes the decisions that they believe will maximize the value of their company. Therefore, we can look at Alphabet’s capital allocation decisions, with respect to Waymo, to see what they think about Waymo’s opportunity.
In the past five years, Alphabet has spent >$100B to buyback their stock; retained ~100B in cash. In 2024, they issued their first dividend to investors and authorized up to $70B more in stock buybacks.
Over that same time period they’ve invested <$5B in Waymo, and committed to investing $5B more over the next few years (no timeline was given).
This tells us that Alphabet believes their money is better spent buying back their stock, paying back their investors, or sitting in the bank, when compared to investing more in Waymo.
Either they believe Waymo’s opportunity is too small (unlikely) to warrant further investment, or when adjusted for the remaining risk/uncertainty (research, technology, product, market, execution, etc) they feel the venture needs to be de-risked further before investing more.
I view the bottlenecks as two things. Producing the vehicles and establishing new markets.
My understanding of the process with the vehicles is they acquire them then begin a lengthy process of retrofitting them. It seems the only way to improve (read: speed up) this process is to have a tightly integrated manufacturing partner. Does $70B buy that? I’m not sure.
Next, to establish new markets… you need to secure people and real estate. Money is essential but this isn’t a problem you can simply wave money at. You need to get boots on the ground, scout out locations meeting requirements, and begin the fuzzy process of hiring.
I think Alphabet will allocate money as the operation scales. If they can prove viability in a few more markets the levers to open faster production of vehicles will be pulled.
Within the context of the original discussion around whether self-driving is here, today, or not, I think we can definitively see it’s not here.
Since Alphabet buybacks mostly just offset employee stock compensation, the main thing they are getting for this money is employees.
Alphabet has to buy back their stock because of the massive amount of stock comp they award.
Wait, really? They're a publicly traded company; don't they just need to issue new stock (the opposite of buying it back) to employees, who can then choose to sell it in the public market?
> Mario Herger: Waymo is using around four NVIDIA H100 GPUs at a unit price of $10,000 per vehicle to cover the necessary computing requirements. The five lidars, 29 cameras, 4 radars – adds another $40,000 - $50,000. This would put the cost of a current Waymo robotaxi at around $150,000
There are definitely some numbers out there that allow us to estimate, within some standard deviations, how unprofitable Waymo is.
You're not even making a handwavy argument. Sure, it might sound like a lot of money, but in terms of unit profitability it could mean anything at all depending on the other parameters. What really matters is a) how long a period that investment is depreciated over; b) what utilization the car gets (or alternatively, how much revenue it generates); c) how much lower the operating costs are due to not needing to pay a driver.
Like, if the car is depreciated over 5 years, it's basically guaranteed to be unit profitable. While if it has to be depreciated over just a year, it probably isn't.
Do you know what those numbers actually are? I don't.
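To make the parameter-dependence concrete, here is a toy unit-economics model. Only the $150k vehicle cost comes from the quote above; utilization, fare, and operating cost are made-up placeholders:

    # Toy robotaxi unit economics; all inputs except vehicle cost are guesses.
    def annual_profit(vehicle_cost, depreciation_years,
                      miles_per_year, revenue_per_mile, opex_per_mile):
        depreciation = vehicle_cost / depreciation_years
        margin = (revenue_per_mile - opex_per_mile) * miles_per_year
        return margin - depreciation

    for years in (1, 3, 5):
        profit = annual_profit(150_000, years, miles_per_year=50_000,
                               revenue_per_mile=2.00, opex_per_mile=0.75)
        print(f"depreciated over {years}y: ${profit:,.0f}/year")
    # 1y: -$87,500   3y: +$12,500   5y: +$32,500

Same hardware bill, opposite conclusions, purely from the depreciation window.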
Secondly, if we throw a dart at a map: 1) what are the chances Waymo can deploy there, 2) how much money would they have to invest to deploy, and 3) how long would it take?
Waymo is nowhere near a turn-key system that they can set up in any city without investing in the infrastructure underlying Waymo’s system. See [1], which details the amount of manual work and coordination with local officials that Waymo has to do per city.
And that’s just to deploy an operator-assisted semi-autonomous vehicle in the US. EU, China, and India aren’t even on the roadmap yet. These locations will take many more billions worth of investment.
Not to mention Waymo hasn’t even addressed long-haul trucking, an industry ripe for automation that makes cold, calculated, rational business decisions based on economics. Waymo had a brief foray in the industry and then gave up. Because they haven’t solved autonomous driving yet and it’s not even on the horizon.
Whereas we can drop most humans in any of these locations and they’ll mostly figure it out within the week.
Far more than lowering the cost, there are fundamental technological problems that remain unsolved.
[1]: https://waymo.com/blog/2020/09/the-waymo-driver-handbook-map...
> First driverless "taxi" service in a major US city, with dedicated pick up and drop off points, and restrictions on weather and time of day.
However, his analysis this year is that "This is unlikely to happen in the first half of this century."
The prediction is clear. The evaluation is dishonest.
Just to make sure we're applying our rubric fairly and universally: Has anyone else been in an Uber where you wished you were able to intervene in the driving a few times, or at least apply RLHF to the driver?
(In other words: Waymo may be imperfect to the point where corrections are sometimes warranted; that doesn't mean it isn't already driving at a superhuman level, for most humans. Just because there is no way for remote advisors to provide better decisions for human drivers doesn't mean that human-driven cars wouldn't benefit from that, if it were available.)
You'd also have to believe that when you wished to change how your Uber driver drove, you'd actually have improved things rather than worsened them.
No-one would have equated the phrase "we'll have self-driving cars" with "some taxis in a few US cities".
Maybe he has a very narrow or strict definition of ‘driverless’. That would explain the “not in this half of the century”-sentiment. I mean, it’s 25 years!
Human driving isn't a solved problem either; the difference is that when a human driver needs an intervention, there is no one to provide it, and the car just crashes.
The remote operation seems to be more about navigational issues and reading the road conditions. Things like accidentally looping, or not knowing how to proceed with an unexpected obstacle. Things that don't really happen to human drivers, even the greenest of new drivers.
Honestly, back in 2012 or so I was convinced that we would have autonomous driving by now, and by autonomous driving I definitely didn't mean "one company is able to offer autonomous taxi rides in a very limited number of places, with remote operator supervision". The marketing pitch has always been something like "the car you buy will autonomously drive you to whatever destination you ask for, and you'll be just a passenger in your own car", and we definitely aren't there at all when all we have is Waymo.
"Nothing ever happens"... until it does, and it seems Brooks's prediction roundups can now be conveniently replaced with a little rock on it with "nothing in AI ever works" written on it without anything of value being lost.
He calls out that Tesla FSD has been “next year” for 11 years, but then the vast majority of the self-driving car section is about Cruise and Waymo. He also minorly mentions Tesla’s promise of a robotaxi service and how it is unlikely to be materially different than Cruise/Waymo. The amount of space allocated to each made sense as I read it.
For the meat of the issue: I can regularly drive places without someone else intervening. If someone else had to intervene in my driving once every 100 miles, or even once every 1,000 miles, most would probably say I shouldn't have a license.
Yes, getting stuck behind a parked car or similar scenario is a critical flaw. It seems simple and non-important because it is not dangerous, but it means the drive would not be completed without a human. If I couldn’t drive to work because there was a parked car on my home street, again, people would question whether I should be on the road, and I’d probably be fired.
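One way to see why the per-mile rate dominates: if interventions arrive independently at a fixed rate per mile (a simplifying Poisson-style assumption, with illustrative rates), the chance of finishing even a short trip untouched falls off fast:

    import math

    def p_clean_trip(trip_miles, miles_per_intervention):
        # Probability of zero interventions on a trip, Poisson assumption.
        return math.exp(-trip_miles / miles_per_intervention)

    for mpi in (2, 20, 100, 1000):
        print(f"1 per {mpi:>4} mi -> P(clean 10-mile trip) = {p_clean_trip(10, mpi):.3f}")
    # 0.007, 0.607, 0.905, 0.990

At one intervention per 1-2 miles essentially no trip completes unaided; at one per 1,000 miles, 99% of 10-mile trips do.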
Direct quote from the article:
> Then I will weave them together to explain how it is still pretty much business as usual, and I mean that in a good way, with steady progress on both the science and engineering of AI.
There are some extremely emotional defences of Waymo in this comment thread. I don't quite understand why. Are they somehow immune to constructive criticism in the SV crowd?
Tell that to someone laid off when replaced by some "AI" system.
> Waymo not autonomous enough
It's not clear how often Waymo cars need remote attention, but it's not every 1-2 miles. Customers would notice the vehicle being stopped and stuck during the wait for customer service. There are many videos of people driving in Waymos for hours without any sign of a situation that required remote intervention.
Tesla and Baidu do use remote drivers.
The situations where Waymo cars get stuck are now somewhat obscure cases. Yesterday, the new mayor of SF had two limos double-parked, and a Waymo got stuck behind that. A Waymo got stuck in a parade that hadn't been listed on Muni's street closure list.
> Flying cars
Probably at the 2028 Olympics in Los Angeles. They won't be cost-effective, but it will be a cool demo. EHang recently put solid-state batteries into their flying car and got 48 minutes of flight time, instead of their previous 25 minutes. EHang is basically a scaled-up quadrotor drone, with 16 motors and props. EHang has been flying for years, but not for very long per recharge. Better batteries will help a lot.
[1] https://aerospaceamerica.aiaa.org/electric-air-taxi-flights-...
Some companies may claim they are replacing devs with AI. I take it with a grain of salt. I believe some devs were probably replaced by AI, but not a large amount.
I think there may be a lot more layoffs in the future, but AI will probably account for a very small fraction of those.
I'm not even sold on the idea that there were any. The media likes to blame AI for the developer layoffs because it makes a much more exciting story than interest rates and arcane tax code changes.
But the fact is that we don't need more than the Section 174 changes and the end of ZIRP to explain what's happened in tech. Federal economic policy was set up to direct massive amounts of investment into software development. Now it's not. That's a real, quantifiable impact that can readily explain what we've seen in a way that the current productivity gains from these tools simply can't.
Now, I'll definitely accept that many companies are attributing their layoffs to AI, but that's for much the same reason that the media laps the story up: it's a far better line to feed investors than that the financial environment has changed for the worse.
>In 2024: At least 95,667 workers at U.S.-based tech companies have lost their jobs so far in the year, according to a Crunchbase News tally.
No, they are saying that the reason for the layoffs is not AI, it is financial changes making devs too expensive.
> If that is true then you need way less devs.
This does not follow. First of all, companies take a long time to measure dev output, it's not like you can look at a burn down chart over two sprints and decide to fire half the team because it seems they're working twice as fast. So any productivity gains will show up as layoffs only after a long time.
Secondly, dev productivity is very rarely significantly bounded by how long boilerplate takes to write. Having a more efficient way to write boilerplate, even massively more efficient, say 8h down to 1h, will only marginally improve your overall throughput, at least at the senior level: all that does is free you to think more about the complex issues you needed to solve. So if the task would have previously taken you 10 days, of which one day was spent on boilerplate, it may now take you, say, 8-9 days, because you've saved one day on boilerplate, plus some more minor gains here and there. So far from firing 7 out of every 8 devs, the 8h-to-1h boilerplate solution might allow you to fire 1 dev in a team of 10.
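That is essentially Amdahl's law applied to developer time; a sketch with the numbers from the comment above:

    # Overall speedup when only the boilerplate fraction of a task gets faster.
    def overall_speedup(total_days, boilerplate_days, boilerplate_speedup):
        new_total = (total_days - boilerplate_days) + boilerplate_days / boilerplate_speedup
        return total_days / new_total

    # 10-day task, 1 day of boilerplate, boilerplate written 8x faster:
    print(f"{overall_speedup(10, 1, 8):.2f}x")  # ~1.10x overall, not 8x

An 8x tool on a 10% slice buys roughly a 10% throughput gain, which is about one dev in ten, not seven in eight.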
Sure, in the same sense that editors and compilers mean you need way less devs.
The problem is different in the meantime: nobody wants to pay for training those new devs. Juniors don't have the experience to call an LLM's bullshit, and seniors don't get paid to teach them, since LLMs have replaced the interns churning out boilerplate.
BLS reports ~1.9 million software developer jobs and predicts 17% growth through 2033. Crunchbase is talking about "tech workers" not developers. And they don't even say that tech employment is down. I predict that when BLS publishes their preliminary job numbers for 2024 it will be at least 1.85 million, not 1.9 million as suggested by your Crunchbase News. I would lay 2:1 odds that it will be higher than 2023's number.
I am not sure what I expect for software developers, other than that the nature of the work will change, and it is still too early to say exactly how. We certainly cannot extrapolate linearly or exponentially from the past few years.
Of course not. The Section 174 changes are really only relevant to software devs—the conversation in the months leading up to them kicking in was all about how it would kill software jobs. But then when it happened the media latched onto this idea that it was the result of automation, with zero evidence besides the timing.
Since the timing also coincided with a gigantically important change to the tax code and a rapid increase in interest rates, both of which were predicted to kill software jobs, I'm suggesting that blaming AI is silly—we have a proximate cause already that is much more probable.
The same can be said for GitHub and open-source dependency management tools like npm, and I'd argue those had an even bigger impact then. And did you see what happened afterwards? Where were the mass layoffs back then? The number of software developers is actually much higher than before that era.
Devs are getting laid off, yes. AI is not the reason. Executive/shareholder priorities are the reason.
A few ways this could go:
1. Leaders notice they were wrong and start to increase human headcount again.
2. Human work is seen as boutique and premium, used for marketing and market placement.
3. We just accept the sub-par quality of AI and go with it (quite likely with copywriting, I guess).
I'd compare it to cinema and Netflix. There was a time when lots of stuff was mindless shit, but there was still a place for A24, and it took the world by storm. What's gonna happen? No one knows.
But anyway, I figure that 90% of "laid off because of AI" is just regular layoffs with a nice-sounding reason. You don't lose anything by saying it, and you only gain in stakeholder trust.
If you look up business analyst type jobs on the JP Morgan website, they are still hiring a ton right now.
What you actually notice is how many are being outsourced to other countries outside the US.
I think the main process at work is 1% actual AI automation and a huge amount of return to the office in the US while offshoring the remote work under the cover of "AI".
Nobody knows how many times operators intervene, because Waymo hasn't said. It's literally impossible to deduce.
Which means I agree his estimate could be wildly wrong too.
Maybe a 2x improvement in kWh/kg. Much less risk of fire or thermal runaway. Charging in < 10 mins.
It would be unfortunate if we get solid-state batteries that have the great features you describe but are limited to a 2x or so improvement in energy density. Twice the energy density opens a lot of doors for technology improvements and innovation, but it's still limiting for really cool things like humanoid robotics and large-scale battery-powered aircraft.
There are, of course, small startups promising usable lithium-air batteries Real Soon Now.[2]
[1] https://en.wikipedia.org/wiki/Lithium%E2%80%93air_battery
[2] https://airenergyllc.com/
1. Solid state batteries. Likely to be expensive, but promise better energy density.
2. Some really good grid storage battery. Likely made with iron or molten salt or something like that. Dirt cheap, but horrible energy density.
3. Continued Lithium ion battery improvements, e.g. cheaper, more durable etc.
[1] https://newatlas.com/energy/worlds-largest-flow-battery-grid...
Glad Polymarket (and other related markets) exist so they can put actual goal posts in place with mechanisms that require certain outcomes in order to finalize on a prediction result.
Polymarket is a great way to incentivize people to make their predictions happen, with all clandestine tools at their disposal, which is definitely not what you want for your society generally.
Not really wanting to have this argument a second time in a week (seriously, just look at my past comments instead of replying here, as I said all I care to say: https://news.ycombinator.com/item?id=42588699), but he is totally wrong about LLMs just looking up answers in their weights: they can correctly answer questions about totally fabricated new scenarios, such as solving simple physics questions that require tracking the locations of objects and reporting where they will likely end up, based on modeling the interactions involved. If you absolutely must reply that I am wrong, at least try it yourself first in a recent model like GPT-4o and post the prompt you tried.
I was curious so I looked up how much you can buy the cheapest new helicopters for, and they are cheaper than an eVTOL right now- the XE composite is $68k new, and things like that can be ~25k used. I'm shocked one can in principle own a working helicopter for less than the price of a 7 year old Toyota Camry.
Most components are safety-critical: their failure can lead to an outright crash, or to feeding the pilot false information that leads to a fatal mistake. Most cars can be run relatively safely even with major mechanical issues, but something as 'simple' as a broken heater on a pitot tube (or any other component) can lead to a crash.
Then there's the issue of weather: altitude, temperature, humidity, and wind speed can create an environment that makes flying impossible, unsafe, or extremely unpleasant. Imagine flying into an eddy that stalls the aircraft, making your ass drop a few feet.
Flying's a nice hobby, and I have great respect for people who can make a career out of it, but I'd definitely not get into these auto-piloted eVTOLs, nor should people who don't know what they are doing.
Edit: Also, unlike helicopters, which can autorotate, and fixed-wing aircraft, which can glide, eVTOLs just drop out of the sky.
The idea is to have far cheaper operating costs. Electric motors are far more efficient than ICE, so you should have much cheaper energy costs. Electric motors are also simpler than ICE so you should have cheaper maintenance with less required downtime compared to helicopters.
Of course, most of this is still being tested and worked on. But we are getting closer to having these get certified (FAA just released the SFAR for eVTOL, the first one since the 1940s).
The ones I'm seeing in the 20k range are mostly the "Mini 500." Wikipedia suggests that maybe as few as 100 were built, with 16 fatalities thus far (or is it 9, as it says in a different part of the article?). But some people argue all of those involved "pilot error."
I suppose choosing to fly the absolute cheapest homemade experimental aircraft kit notorious for a high fatality rate is technically a type of pilot error?
Skill level needed for "driving" would increase by a lot, noise levels would be abysmal, security implications would be severe (be they intentional or mechanical in nature), and the privacy implications would result in nobody wanting to have windows.
This is all more or less true for drones as well, but their weight is comparable to a toddler's, not a polar bear's. I firmly believe they'll never reach mass usage, but not because they're impossible to make.
Take, for example, the prediction about "robots can autonomously navigate all US households". Why all? From a business POV, 80% of the market is "all" in a practical sense, and most people will consider navigation around the home "solved" if robots can do it for the majority of households with virtually no intervention. Hilarious situations will arise that amuse people; videos of clumsy robots will flood the internet instead of cats and dogs, but for the business side it's lucrative enough to produce and sell them en masse. Another question of interest is the trend: what will the approximate cost of such a robot be? How many US households will adopt one by which date, as they adopted washing machines and dishwashers? Will we see linear adoption or logistic adoption? These are more interesting questions than just whether I'm right or wrong.
I recommend reading Richard Hamming's "The Art of Doing Science and Engineering." Early in the book he presents a simple model of knowledge growth that always leads to an s-curve. The trouble is that, on the left, an s-curve looks exponential. We still don't know where we are on the curve with any of these technologies. It is very possible we've already passed the exponential growth phase with some of them. If so, we will need new technologies to move forward to the next s-curve.
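A quick numerical illustration of Hamming's point, with arbitrary parameters of my choosing: early in its life a logistic (s-curve) is numerically indistinguishable from the exponential that approximates it:

    import math

    def logistic(t, rate=1.0, midpoint=10.0):
        return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))

    def early_exponential(t, rate=1.0, midpoint=10.0):
        # The logistic's own small-t approximation.
        return math.exp(rate * (t - midpoint))

    for t in range(0, 8):
        l, e = logistic(t), early_exponential(t)
        print(f"t={t}: logistic={l:.5f} exp={e:.5f} ratio={l/e:.3f}")
    # the ratio stays near 1.000 until the curve approaches its ceiling

You can't tell from the left half of the data which curve you're on; only the ceiling reveals it.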
Technically true but I'm not convinced it matters that much. The reason autonomation took over in manufacturing was not that they could fire the operator entirely, but that one operator could man 8 machines simultaneously instead of just one.
Are you wishing that he had tighter confidence intervals?
For example, saying that flying cars will be in widespread use NET 2025 is not much of a prediction. We can all say that if flying cars are ever in widespread use, it will happen no earlier than 2025. It could happen in 2060, and that NET 2025 prediction would still be true. He could mark it green in 2026, say he was right that there are no flying cars yet, and mark his scorecard another point in the correct column. But is that really a prediction?
A bolder prediction would be, say "Within 1-2 yrs of XX".
So what is Rodney Brooks really trying to predict and say? I'd rather read about what the necessary gating conditions are for something significant and prediction-worthy to occur, or what the intractable problems are that would make something not be possible within a predicted time, rather than reading about him complain about how much overhype and media sensation there is in the AI and robotics (and space) fields. Yes, there is, but that's not much of a prediction or statement either, as it's fairly obvious.
How much money has been burned on robo-taxis that could have been spent on incubators for kids.
Rodney Brooks Predictions Scorecard - https://news.ycombinator.com/item?id=34477124 - Jan 2023 (41 comments)
Predictions Scorecard, 2021 January 01 - https://news.ycombinator.com/item?id=25706436 - Jan 2021 (12 comments)
Predictions Scorecard - https://news.ycombinator.com/item?id=18889719 - Jan 2019 (4 comments)
Where I live (in suburban Virginia), we can now get items from the local Walmart grocery via DroneUp, which kind of blows my mind.
Predict the future, Mr. Brooks!
You are not predicting, just daydreaming.
It seems to me we’re at the very least close to this, unless you hold unproven beliefs about grey matter vs silicon.