I'm a little sad to see YJIT go down in favour of a traditional design. (Yes, YJIT isn't "deprecated" but the people working on it are now mainly working on something else. That's hardly a great place to be in.)
I'm personally quite interested in trying out an LBBV JIT for JavaScript, following in Chevalier-Boisvert's Higgs engine's footsteps. The note about a traditional method JIT giving more code for the compiler to work with does ring very true, but I'd just like to see more variety in the compiler and JIT world.
Like: perhaps it would be possible to combine (say) an LBBV with a SoN compiler, such that the LBBV takes care of the quick, basic compilation and leaves behind enough data that a SoN compiler can be put to work on the whole-method result once the whole method has been deemed hot enough? This is perhaps a totally ridiculous idea, but it's the kind of idea that will never get explored in a world with only traditional method JITs.

Also, what's the long-term plan for YJIT?
ZJIT exists because it's a more traditional design and there's hope more people will have an easier time contributing[0]. Given that, it seems YJIT will become unnecessary if ZJIT succeeds.

0: https://railsatscale.com/2025-05-14-merge-zjit/
Earnestly: why are you annoyed? I tried to make it clear that you don't have to make any changes. If you want, you can try ZJIT (which should not be anything other than a one character change), but you don't have to.
Because now YJIT is deprecated and at some point in a year or two I will have to have my team go through and switch everything from YJIT to ZJIT. So it's creating random tech debt busywork that will suck up dev resources that could better be used building features.
In isolation, having to switch from YJIT to ZJIT isn't that bad, but this same type of churn happens across so much of the frameworks and technologies that my company uses that in aggregate it becomes quite an annoyance.

With any luck, the performance in the next year or two will be enough to make it a happy change. "Damn, free money" etc
Non-ruby dev here. Can someone explain the side exit thing for me?
> This meant that the code we were running had to continue to have the same preconditions (expected types, no method redefinitions, etc) or the JIT would safely abort. Now, we can side-exit and use this feature liberally.
> For example, we gracefully handle the phase transition from integer to string; a guard instruction fails and transfers control to the interpreter.
> (example showing add of two strings omitted)
What is the difference between the JIT safely aborting and the JIT returning control to the interpreter? Or does a JIT abort mean the entire app aborts? (I presumed JIT aborting meant continuing on in the interpreter anyway.)
(Also, why would you want the code that uses the incorrect types to succeed? Isn't aborting the whole unit of execution the right answer here, anyway?)
Dynamic languages allow a range of types to flow through a function. JITs add tracing and attempt to specialize the function based on the types observed at runtime. It is possible that the function is later called with different types than the ones the JIT observed and compiled code for. To handle this, JITs use stubs and guards: the generated code checks the incoming types before running the specialized path. If a type doesn't match, it can call a stub to generate the correct machine code, or simply fall back into the interpreter's slow path.
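To make the guard-and-side-exit idea concrete, here's a minimal sketch in plain Ruby rather than generated machine code. `interpret_add` and `jitted_add` are made-up names standing in for the interpreter's generic path and the specialized code:

    # Generic slow path: what the interpreter does for any operand types.
    def interpret_add(a, b)
      a + b  # full dynamic dispatch
    end

    # "Compiled" fast path, specialized on the assumption of Integers.
    def jitted_add(a, b)
      if a.is_a?(Integer) && b.is_a?(Integer)  # guard
        a + b                                  # fast path (a raw machine add in a real JIT)
      else
        interpret_add(a, b)                    # side exit back to the interpreter
      end
    end

    jitted_add(1, 2)      # => 3, stays on the fast path
    jitted_add("a", "b")  # => "ab", the guard fails and we side-exit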
An example might be the plus operator. Many languages allow integers, floats, strings, and more on either side of the operator. The JIT will likely see mostly integers and optimize the function for integer math. If you later call the plus operator with two Point instances, execution falls back to the interpreter.
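For instance, Ruby lets any object define its own + (a hypothetical Point class here, not from the article), which is exactly why the specialized integer path needs a guard:

    # Hypothetical Point with its own + implementation.
    Point = Struct.new(:x, :y) do
      def +(other)
        Point.new(x + other.x, y + other.y)
      end
    end

    1 + 2                              # Integer#+, can specialize to a machine add
    Point.new(1, 2) + Point.new(3, 4)  # Point#+, needs generic method dispatch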
In this case, we used to abort (i.e. abort(); intentionally crash the entire process) but now we jump into the interpreter to handle the dynamic behavior.
If someone writes dynamic Ruby code to add two objects, it should succeed in both the integer and string cases. The JIT just wants to optimize whatever the common case is.
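A tiny illustration (my own sketch, not the example omitted from the quote) of the integer-to-string phase transition the post describes:

    def add(a, b)
      a + b
    end

    add(1, 2)      # => 3    (the hot case the JIT specializes for)
    add("1", "2")  # => "12" (triggers the guard failure and side exit)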
I guess I’m confused why an actual add instruction is emitted rather than whatever overloaded operation takes place when the + symbol (or overloaded add vtable entry) is called (like it would in other OOP languages).
If all you're doing is summing small integers (frequently the case), it's much preferable to optimize that to be fast and then skip the very dynamic method lookup (the slower, less common case).
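To show what's being skipped, here's a rough Ruby-level sketch of the generic dispatch the specialized path avoids; `generic_add` is a made-up name, and the real lookup happens inside the VM rather than via `Object#method`:

    # Generic dispatch: look up + on the receiver on every call.
    def generic_add(a, b)
      a.method(:+).call(b)  # roughly what "a + b" implies when types are unknown
    end

    generic_add(1, 2)  # => 3, but only after a method-table lookup

Once the JIT has proven both operands are small integers (and that Integer#+ hasn't been redefined), it can emit a single guarded machine add instead.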
I recently rewrote one of my Rails apps in Rust. I used Claude 4.5 Opus heavily and it was very fast.
One thing that's struck me with the new code is that it's so easy to follow compared to Rails. It's like two different extremes on the implicit-explicit spectrum. Yet it's not like I have tons more boilerplate code now; I think I have maybe 10 or 20% more SLOC than before.
I'll probably do this with my other Rails apps as well.
You're comparing a language with a framework. A better comparison would be Rust (with your choice of libraries) vs Ruby (with your choice of libraries).
If you want to compare with Rails, you need to compare against batteries-included Rust frameworks with equivalent batteries and conventions.
Ruby/Rails is awesome, but it's a bit too magical sometimes, and the lack of types by default doesn't help either.