We've been impacted by this. I migrated our services to Python 3.14 so we could attach profilers during runtime.
A couple of services looked like they had a memory leak. Memory was continuously increasing over time. Thanks to Python 3.14, we were able to use memray to understand what was going on. Those services were recreating HTTP clients (aiohttp) for every inbound request, and memory allocated by the downstream SSL lib was growing faster than it was being released.
We ended up rolling back to 3.13, which fixed the issue. I'll try again with 3.14.5.
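The application-side fix for that pattern is the standard one: build the client once and share it. A minimal sketch, where `Client` is a hypothetical stand-in for `aiohttp.ClientSession` (which is what actually allocates the connection pool and SSL context):

```python
class Client:
    """Hypothetical stand-in for aiohttp.ClientSession.

    Each real session builds its own connection pool and SSL context,
    so constructing one per request multiplies SSL allocations.
    """
    instances = 0

    def __init__(self):
        Client.instances += 1


def handle_leaky(request):
    Client()  # anti-pattern: a fresh client (and SSL context) per inbound request


shared = Client()  # fix: one client created at startup, reused for every request

def handle(request):
    pass  # would issue requests through `shared`


for r in range(100):
    handle_leaky(r)
    handle(r)
print(Client.instances)  # 101: the one shared client plus 100 per-request ones
```

With the real library, the shared session should also be closed exactly once on shutdown (`await session.close()`).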
If you are using "httpx", it's likely caused by a reference cycle. I made a PR to fix it but the maintainers haven't applied it. :-( https://github.com/encode/httpx/pull/3733
The reference cycle httpx creates is kind of a worst-case scenario for the incremental GC issue. Both the generational GC (3.13 and older) and the incremental GC are triggered by the net number of new "container" objects (objects that hold references to other objects, like lists, as opposed to ints and floats). The short summary is that with the incremental GC you need to create more container objects before a collection triggers. In the case of the httpx reference cycle, you have a relatively small number of container objects hanging on to a lot of memory, due to the SSL context data (which is a big memory hog).
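A minimal stdlib sketch of that shape, using a hypothetical `SSLContextLike` stand-in: a couple of container objects in a cycle pin a large non-container buffer, which stays resident until a cycle collection actually runs.

```python
import gc


class SSLContextLike:
    """Hypothetical stand-in for an SSL context: few references, lots of memory."""

    def __init__(self):
        # a big non-container allocation; the GC does not track bytearrays
        self.buffer = bytearray(10 * 1024 * 1024)  # ~10 MB


def make_cycle():
    a, b = {}, {}
    a["peer"], b["peer"] = b, a   # two container objects forming a cycle
    a["ctx"] = SSLContextLike()   # the cycle pins the ~10 MB buffer


print(gc.get_threshold())  # e.g. (700, 10, 10): collection is driven by container counts
gc.disable()               # make sure no automatic collection runs during the demo
make_cycle()
# Only a handful of new container objects exist, so an automatic collection
# could be far off, yet ~10 MB is already unreachable. An explicit collect
# reclaims the cycle (the two dicts plus the instance):
unreachable = gc.collect()
gc.enable()
print(unreachable >= 3)  # True
```

Because the buffer itself is not a tracked container, allocating many of these cycles adds memory much faster than it adds to the counters that trigger a collection.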
Reverting to the generational GC was the wise thing to do, even though it's a bit scary to do in a bugfix release. The incremental GC works for most people, but in the minority of cases where it doesn't, it uses quite a lot more memory. I'm pretty sure that with some additional tuning the incremental GC would be fine too, but it just didn't get that tuning. The generational GC has literal decades of real-world use (Guido merged my patch in June 2000, and Tim Peters did a bunch of tuning after that to optimize it).
On profilers: built-in profiling is only coming in 3.15, so are you referring to remote exec? It's a great feature I'm very excited about, though at the same time I'm afraid the company won't allow the ptrace capability in prod.
Yes. Remote exec lets me attach profilers (e.g. memray) directly to a running process. I'm also excited about the upcoming statistical (CPU) profiler in 3.15.
"Python 3.14 shipped with a new incremental garbage collector. However, we’ve had a number of reports of significant memory pressure in production environments.
We’ve decided to revert it in both 3.14 and 3.15, and go back to the generational GC from 3.13."
The main benefit of Python to me is that, while slow, it's predictable. I do think they're going to get a lot more resistance to adding JITs, moving GCs, etc.; it will become Java with a million knobs to tune. If people want a JIT'd Python, just use PyPy, right?
Java lost almost all those knobs a while ago (I mean, they're still there, but you're better off relying on the defaults). The modern GCs have one or at most two knobs remaining, and even that will become unnecessary next year. As for predictability, you get maximum pause times of well under 1 ms for heaps up to 16 TB.
Well, they never made the jump to Python 3. But shipping 2.7 interpreters in 2024 was quite an achievement on its own. So their users already know this pain. And from my experience in academia, Python 2.7 and Java 8 will probably be used for another 20 years before the last machine running that stuff burns out.
I'm currently in a .NET shop, so this isn't an issue for me, but it makes me wonder if Python will eventually adopt the concept of LTS releases. This could have been avoided as an issue if the new GC had landed in a non-LTS release.
Yeah, it seems like a miss. I guess the thinking was that it wasn't developer-facing, just an internal optimization. But of course any change to garbage collection will change the memory and CPU dynamics of the process in a material way.
.NET seems to have regularly changed its garbage collector over the years, and I don't remember any similar surprises in production. I wonder why they have had a better experience?
I thought that by now dynamic garbage collection was a known quantity, so that making changes, outside of outright bugs, is fairly safe and predictable?
One thing Microsoft does really well is eating its own dogfood, and Microsoft feeds a ton of .NET dogs.
So any change to the GC starts with Microsoft's massive .NET codebase, and they get extremely good telemetry about any downsides and might be able to fix them in time.
There has been almost no dogfooding on Windows development since version 8, the TypeScript team would rather rewrite their compiler in Go, and Azure has plenty of Go, Rust, and Java projects alongside .NET.
Oh, they really don't dogfood Windows development any longer, regardless of the incentives.
I have my WinRT 8, UAP 8.1, UWP 10, Project Reunion, .NET Native, C++/CX, C++/WinRT, XAML Islands, XAML Direct, WinUI 2.0, WinUI 3.0, WinAppSDK and whatnot scars to prove how they aren't dogfooding any of it in any meaningful manner.
Heck, they keep talking about C++ support in WinUI 3, as if the team hadn't left the project and isn't now playing with Rust instead.
They managed to turn plenty of early WinRT advocates into their harshest critics, who no longer believe anything else they put out, like this new Windows K2 project.
I like my programming-language flame wars just as much as the next guy, but Go is a really easy language to get started with while also being very fast. It's not just luck.
What? If you're talking about web development, .NET is just about the same as Go. It's 100% Java-style OOP writing, but the result is the same: a very performant API server.
Sure, Rust is a completely different beast with a different target system.
Actually, there's a change in .NET 9 to how it handles the heap and GC, which caused major issues for us.
I'll confess the reason it hit us so hard is that our code quality was so low and so wasteful with allocations; the new version just didn't hide the problem as well as previous versions did.
I remember working on the Windows Update back-end at Microsoft around 2005. We had a problem where it would freeze up periodically, and not surprisingly that turned out to be caused by the GC. But we noticed it before shipping, and we just tweaked some GC parameters.
So I think it was not a big problem for .NET because it gave you enough control over the GC, and because people tested their code before putting it in production.
All these issues were known from previous attempts at removing the GIL. But if Instagram/Meta wants it, everyone stands at attention, and the obvious problems get rediscovered years later. Kind of like in geopolitics.
I hope Meta switches Instagram to PHP/Hack so they leave Python alone.
In the world of AI-written code, Python just doesn't make sense. We converted about 100k lines to Go over the last few months, and the performance is life-changing. Curious whether we'll see global Python adoption fall by 75% or more in the next few years.
With a similar amount of experience in both languages, I found Go much easier to read. I've always been a bit miffed as to why Python is seen as easy to read by experienced developers. I get that the syntax is good for short code or for people with little experience, but in my experience those readability benefits went away quickly with time or complexity.
Why are you miffed about it? I legitimately hate reading golang with a passion and find Python to be pretty intuitive, outside of the occasional overly ambitious list comprehension. I worked in a golang shop for several years, so it's not just a familiarity thing either.
We are just different. That's not something to be mad about.
In my opinion, most interpreted languages today tend to produce very dense code: fancy call chains and interleaving closures. If you're looking for a subtle bug, those are hard to reason about; you have to know the details of a lot of different APIs.
Go is verbose partly for that reason, but a silly loop is a silly loop. The constraints are clear; you only have to follow the logic.
Python is a garbage language. Dynamic types are a disaster for maintaining large codebases and we waste enormous amounts of compute running large systems with it.
No, we should use one of the many modern languages that handle certain projects far better, including Kotlin, Go, or Java. The only things Python is best in class at are scripting and serving as a harness for high-performance C++ or Fortran.
Any language that uses error codes instead of exceptions is a non-starter for me. Produces code that craps all over the happy path.
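As an illustration of the complaint, here is the same validation written both ways in Python (a sketch; the function names are made up):

```python
# error-code style: every call site must check and propagate a (value, error) pair
def parse_port_ec(s):
    try:
        port = int(s)
    except ValueError:
        return None, "not an integer"
    if not 0 < port < 65536:
        return None, "out of range"
    return port, None


# exception style: the happy path reads straight through; errors propagate on their own
def parse_port(s):
    port = int(s)  # raises ValueError on bad input
    if not 0 < port < 65536:
        raise ValueError("port out of range")
    return port


print(parse_port_ec("8080"))  # (8080, None)
print(parse_port("8080"))     # 8080
```

The error-code version forces an `if err` branch at every caller, which is exactly the "crapping all over the happy path" being described; the exception version defers all of that to whichever frame actually wants to handle the failure.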
Python has a different problem: it is slow as f---. I did a microbenchmark comparison against 5 other languages in preparation for my Python replacement language. Outside of dictionary lookups, it is 50-600 times slower than C, depending on the workload.
Go, Rust, etc. are fine. They land at 1.25-3x slower than C. But I prefer the readability of Python, minus its dynamic nature.
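The exact ratios vary wildly with the workload, but the kind of microbenchmark being described is easy to reproduce with the stdlib (a sketch, not the commenter's actual benchmark; compare the same loop compiled in C or Go):

```python
import timeit


def sum_of_squares(n):
    """A pure-Python inner loop: the kind of code where CPython lags compiled languages most."""
    total = 0
    for i in range(n):
        total += i * i
    return total


# time 10 runs of the interpreted loop; the equivalent C loop typically
# finishes in a small fraction of this
elapsed = timeit.timeit(lambda: sum_of_squares(100_000), number=10)
print(f"{elapsed:.4f}s for 10 runs of sum_of_squares(100_000)")
```

Dictionary lookups fare much better because they dispatch into optimized C inside the interpreter, which is why they are called out as the exception.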
I think we'll eventually be generating machine code directly. But until then we should be using code that our team can actually read and understand. If you know Go, then that works for you. Not everyone does.
Doubt it. LLMs will always be more expensive per token than compilers, and high-level languages need fewer tokens than machine code. Also, type systems, warnings, and the overlap with natural language in names are very useful.
Nothing about the performance characteristics of Python changed with AI, so why would you use Python over golang if performance is a requirement/bottleneck? Trying to understand the reasoning, as to me golang and Python are equally simple to write and understand.
Regardless of whether golang and python are actually equally simple, python certainly has the reputation of being easier to write and read than almost any other language. That is a big part of its popularity.
Python is not really simple, though: the semantics are actually quite bonkers. It just has "simple"-looking syntax, but that only helps for trivial programs where the bonkers semantics don't get in the way.
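Two classic examples of that "simple syntax, surprising semantics" gap:

```python
# late-binding closures: every lambda sees the loop variable's final value
fns = [lambda: i for i in range(3)]
print([f() for f in fns])  # [2, 2, 2], not [0, 1, 2]


# mutable default arguments: the same list object is shared across all calls
def append(item, bucket=[]):
    bucket.append(item)
    return bucket


print(append(1))  # [1]
print(append(2))  # [1, 2] -- the default list remembered the first call
```

Both snippets look obvious at a glance and do something other than what a newcomer would predict, which is the gap being described.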
For personal projects, yes. For code going into production, you still need human code review, and that has to happen in a language that the humans you've hired are comfortable with. One day, we'll all be YOLOing vibe code straight into production, but that day is not today.
Sounds like the right move to me.
Lately, they seem to be working on CRIU support, various heuristics, multi-stage in-process bytecode compilation...
Java is a mess; they are working hard to avoid fixing their issues (which nobody else has, so fixes are available).
Compared to Python's, all of them are beyond perfect. And 99.9% of the time you don't even need anything but the defaults.
PyPy doesn't have the support it needs and is stuck on 3.11.
Not to mention that there are differences in ecosystem, familiarity, and ergonomics that may make a team want to stick with Python.
“Just use Go” is not really actionable advice in most cases.
People parrot "use OpenJDK" without understanding that it is mostly Oracle employees working on it.
And if you dislike Oracle, the other minor contributors are Red Hat, IBM, SAP, Microsoft, Alibaba, Azul, ... which for many HNers are the same.
Jython went EOL with Python 2 going EOL.
https://en.wikipedia.org/wiki/Guido_van_Rossum
https://devguide.python.org/versions/
Windows development is not a case of "we are not dogfooding"; it's that the incentives are misaligned with what customers want.
The .NET team's incentives are aligned with customer wants: provide a language that is highly performant and easy enough to write.
Go is, essentially, nearly perfect at what it does - even if the language itself leaves much to be desired and would ideally be much safer.
Microsoft should up their game. They have a few research languages in development.
They've always been great with languages. Hopefully, they rise to the occasion.
Now we're stuck with it in anything CNCF related.
https://github.com/python/cpython/pull/117120
Free-threading actually uses its own, separate GC: https://labs.quansight.org/blog/free-threaded-gc-3-14
You are free to switch languages, but you still need to understand them.
> (Mocking) Yes, that's why we should go back to Y with even worse static analysis.
Sure
Also, even if it looks like that to you, there are still people who write code with their own hands.