I'm using oxc_traverse and friends to implement on-the-fly JS instrumentation for https://github.com/antithesishq/bombadil and it has been awesome. That in combination with boa_engine lets me build a statically linked executable rather than a hodgepodge of Node tools to shell out to. Respect to the tools that came before but this is way nicer for distribution. Good times for web tech IMO.
It always comes as a surprise to me how the same group of people who go out of their way to shave off the last milliseconds or microseconds in their tooling care so little about the performance of the code they ship to browsers.
TBH I don't know how to do that work. If I'm in the backend it's very easy for me. I can think about allocations, I can think about threading, concurrency, etc, so easily. In browser land I'm probably picking up some confusing framework, I don't have any of the straightforward ways to reason about performance at the language level, etc.
Maybe one day we can use wasm or whatever and I can write fast code for the frontend, but not today, and it's a bit unsurprising that others face similar issues.
Also, if I'm building a CLI, maybe I think that 1ms matters. But someone browsing my webpage one time ever? That might matter a lot less to me, you're not "browsing in a hot loop".
People shaving off the last milliseconds or microseconds in their tooling aren't the same people shipping slow code to browsers. Say thanks to POs, PMs, stakeholders, etc.
I've never met a single person obsessed with performance who goes only halfway. You either have a performance junkie or a slob who is fine with 20-minute compile times.
The vite plus idea is that you'll pay for visual tools. What's odd to me is it makes their paid product kind of a bet against their open product. If their open platform were as powerful as it should be, it would be easy to use it to recreate the kinds of experiences they propose to sell.
The paradox gains another layer when you consider that their whole mission is to build tools for the JavaScript ecosystem, yet by moving to Rust they are betting that JS-the-language is so broken that it cannot even host its own tools. And because JS is still a stronger language for building UIs in than Rust, their business strategy now makes them hard-committed to their bet that JS tools in JS are a dead end.
You say this like this is the basic requirement for a language. But languages make tradeoffs that make them more appropriate for some domains and not others. There's no shade if a language isn't ideal for developer tools, just like there's no shade if a language isn't perfect for web frontends, web backends, embedded development, safety critical code (think pacemakers), mobile development, neural networks and on and on.
Seriously, go to https://astral.sh and scroll down to "Linting the CPython code base from scratch". It would be easy to look at that and conclude that Python's best days are behind it because it's so slow. In reality Python is an even better language at its core domains now that its developer tools have been rewritten in Rust. It's the same excellent language, but now developers can iterate faster.
It's the same with JavaScript. Just because it's not the best language for linters and formatters doesn't mean it's broken.
I don't see the idea as being visual tools; I've never even heard anybody talk about it like that. The plan is to target enterprise customers with advanced features. I feel like you should go watch some interviews where they talk about their plan; Evan You was recently on a few podcasts mentioning it.
Also, the paradox isn't really there. The JS ecosystem largely gave up on JS tools a long time ago. Pretty much all major build tools are migrating to native, or have already migrated at least partially. This has been going on for the last 4 years or so.
But the key to all of this is that most of these tools still support JS plugins. Rolldown/Vite is compatible with Rollup JS plugins, and OXLint has an ESLint-compatible API (it's in preview atm). So it's not really even a bet at all.
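To make the compatibility point concrete, here is a minimal sketch of the Rollup-style plugin shape that Rolldown advertises compatibility with; the `toyBanner` plugin and its banner text are made up for illustration.

```typescript
// Minimal Rollup-style plugin shape: a plain object with a `name`
// and optional hooks such as `transform(code, id)`.
interface TransformResult {
  code: string;
  map?: null;
}

interface Plugin {
  name: string;
  transform?(code: string, id: string): TransformResult | null;
}

// Hypothetical plugin that prepends a banner comment to JS modules.
const toyBanner: Plugin = {
  name: "toy-banner",
  transform(code, id) {
    if (!id.endsWith(".js")) return null; // ignore non-JS assets
    return { code: `/* bundled: ${id} */\n${code}`, map: null };
  },
};
```

Because plugins stay plain JS/TS objects like this, a native bundler can invoke them through a binding layer without users rewriting anything in Rust.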
Another example is the TypeScript compiler being rewritten in Go instead of self-hosting. It's an admission that the language is not performant enough and, moreover, can never be enough for building its own tooling. It might be that the tooling situation is the problem, not the language itself, though. I do see hopeful signs that the JavaScript ecosystem is continuing to evolve, like the recent release of MicroQuickJS by Bellard, or Bun, which is fast(er) and really fun to use.
I don't think that's necessarily a bad thing, though. JavaScript isn't performant enough for its own tooling, but that's just one class of program that can be written. There are plenty of other classes of program where JavaScript is perfectly fast enough, and the ease of e.g. writing plugins or having a fast feedback loop outweighs the benefits of other languages.
I quite like Roc's philosophy here: https://www.roc-lang.org/faq#self-hosted-compiler. The developers of the language want to build a language that has a high performance compiler, but they don't want to build a language that one would use to build a high performance compiler (because that imposes a whole bunch of constraints when it comes to things like handling memory). In my head, JavaScript is very similar. If you need a high performance compiler, maybe look elsewhere? If you need the sort of fast development loop you can get by having a high performance compiler, then JS is just the right thing.
True, I agree. It's a good thing to accept a language's limitations and areas of suitability, without any judgement about whether the language is good for all purposes - which is likely not a good goal for a language to have anyway. I like that example of Roc, how it's explicitly planned to be not self-hosting. It makes sense to use different languages to suit the context, as all tools have particular strengths and weaknesses.
Off topic but I wonder if this applies to human languages, whether some are more suited for particular purposes - like German to express rigorous scientific thinking with compound words created just-in-time; Spanish for romantic lyrical situations; or Chinese for dense ideographs. People say languages can expand or limit not only what you can express but what you can think. That's certainly true of programming languages.
In the beginning, yes, but VCs want to cash out eventually. Look at MongoDB, Redis, and whatnot, which did everything to get money at a certain point. For VCs, open source is a vehicle to get relevant in a space you would never be relevant in without doing open source.
I thought oxfmt would just be a faster drop-in replacement for "biome format"... It wasn't.
Let this be a warning: running oxfmt without any arguments recursively scans the directory tree from the current directory for all *.js and *.ts files and silently reformats them.
Thanks to that, I got a few of my Allman-formatted JavaScript files I care about messed up with no option to format them back from K&R style.
> running oxfmt without any arguments recursively scans directory tree from the current directory for all .js and .ts files and silently reformats them
I've got to say this is what I would have expected and wanted to happen. I'd say it is wise not to run tools designed to edit files on files you don't have a backup of (e.g. in Git) without doing a dry run or a small-scope experiment first.
While I can get behind things such as "use version control," "use backups", etc. this is definitely not what I'd expect from a program run without arguments, especially when it will go and change stuff.
What? The very first page of documentation tells you this. The help screen clearly shows a `--check` argument. This is a formatter and uses the same arguments as many others - in particular Prettier, the most popular formatter in the ecosystem.
How were you not expecting this? Did you not bother to read anything before installing and running this command on a sensitive codebase?
Not taking a position, but the design of rm strengthens the argument that recursing by default, without flags, isn't OK. rm requires an explicit -r flag before it will recurse into directories.
Let's say 100k files is ~300k syscalls (open, read, close), at ~1-2us per syscall. That's 300-600ms of syscalls. Then assume 10kb per file; that's 1GB of source, easily read in a fraction of a second when the cache is warm (and it will be, from scanning the dir). That's maybe 600ms used up, with plenty left to parse and analyze 100k things in 2s.
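Working through the parent's estimate (all numbers are the assumed round figures above):

```typescript
// Back-of-envelope check of the estimate above.
const files = 100_000;
const syscallsPerFile = 3;    // roughly open + read + close
const usPerSyscall = 1.5;     // middle of the ~1-2us range
const syscallMs = (files * syscallsPerFile * usPerSyscall) / 1_000;

const bytesPerFile = 10_000;  // assumed 10kb average file
const totalGB = (files * bytesPerFile) / 1e9;

console.log(syscallMs); // 450 (ms spent in syscalls)
console.log(totalGB);   // 1 (GB of source to read)
```

So roughly half a second of syscall overhead plus one warm-cache gigabyte of reads, leaving most of a 2s budget for actual parsing and analysis.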
I’m assuming they meant 100kloc rather than 100,000 files of arbitrary size (how could we even tell how impressive that is without knowing how big the files are?)
arena allocation is a big part of it, but also oxc benefits from not having to support the same breadth of legacy transforms that swc accumulated over time. swc has a lot of surface area from being the go-to babel replacement -- oxc could design the AST shape from scratch with allocation patterns in mind. the self-hosting trap (writing js tooling in js) set a performance ceiling for so long that when you finally drop down to Rust and rethink the data layout, the gains feel almost unfair
One thing worth noting: beyond raw parse speed, oxc's AST is designed to be allocation-friendly with arena allocation. SWC uses a more traditional approach. In practice this means oxc scales better when you're doing multiple passes (lint + transform + codegen) on the same file because you avoid a ton of intermediate allocations.
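As an illustration of the general idea only (not oxc's actual layout), an arena can hand out nodes as indices into preallocated buffers, so allocation is just bumping a counter and the whole tree is freed at once:

```typescript
// Toy index-based arena: nodes are rows in preallocated typed arrays
// rather than individually heap-allocated objects.
class NodeArena {
  private kind: Uint8Array;
  private start: Uint32Array;
  private end: Uint32Array;
  private len = 0;

  constructor(capacity: number) {
    this.kind = new Uint8Array(capacity);
    this.start = new Uint32Array(capacity);
    this.end = new Uint32Array(capacity);
  }

  // Allocation is a counter bump; a "node" is just its index.
  alloc(kind: number, start: number, end: number): number {
    const id = this.len++;
    this.kind[id] = kind;
    this.start[id] = start;
    this.end[id] = end;
    return id;
  }

  kindOf(id: number): number {
    return this.kind[id];
  }

  // Dropping the whole AST is O(1): no per-node frees or GC walk.
  reset(): void {
    this.len = 0;
  }
}
```

Multiple passes over the same file then touch dense, cache-friendly buffers instead of scattering pointer-chasing allocations across the heap.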
We switched a CI pipeline from babel to SWC last year and got roughly 8x improvement. Tried oxc's transformer more recently on the same codebase and it shaved off another 30-40% on top of SWC. The wins compound when you have thousands of files and the GC pressure from all those AST nodes starts to matter.
I wonder why it took so long for someone to make something this fast when this much performance was always on the table.
Crazy accomplishment!
I am fully aware of it; there have been many 'excited' posts in HN history about various programming languages, with the related 'rewrite X in Y', but the remark still stands.
We've had many languages that are faster and are not C/C++.
Compare Go (esbuild) to webpack (JS): it's easily over 100x faster.
For a dev, time matters, but it's relative: waiting 50s for a webpack build compared to 50ms with a Go toolchain is life-changing.
But for a dev, whether they wait 50ms or 20ms does not matter. At all.
So the conclusion is that JavaScript devs like hype, flooded into Rust, and built tooling for JS in Rust. They could have used any other compiled language and gotten nearly the same performance computer-time-wise, or exactly the same human-time-wise.
I believe it goes back a few years to originally being just oxlint, with Void Zero created more recently to fund the project. One of the big obstacles I can imagine is that it needs extensive plugin support to cover all the modern framework flavours like React (JSX), Vue, and Svelte, plus backwards compatibility with old linting rules (in the case of oxlint, as opposed to oxc, which I imagine was a by-product).
* You need to have a clean architecture, so you're starting "almost from scratch"
* Knowledge about performance (for Rust and for build tools in general) is necessary
* Enough reason to do so: lack of perf in the competition and users feeling friction
* Time and money (still have to pay bills, right?)
It takes a good programmer to write it, and most good programmers avoid JavaScript unless forced to use it for their day job. In that case, there is no incentive to speed up the part of the job that isn't writing JavaScript.
Some of us already have all the speed we need with Java and .NET tooling, don't waste our time rewriting stuff, and don't need to bother with the borrow checker, even if it isn't a big deal to write affine-types-compliant code.
And we can always reach for Scala or F# if we feel like getting creative with type systems.
They are talking about pnpm (which they said would be the uv equivalent for node, though I disagree given that what pnpm brings on top of npm is way less than the difference between uv and the status quo in Python).
You can find a comparison with `bun build` on Bun's homepage. It hasn't been updated in a little while, but I haven't heard that the relative difference between Bun and Rolldown has changed much in the time since (both have gotten faster).
Bundling 10,000 React components (Linux x64, Hetzner):

    Bundler     Version           Time
    ─────────────────────────────────────
    Bun         v1.3.0          269.1 ms
    Rolldown    v1.0.0-beta.42  494.9 ms
    esbuild     v0.25.10        571.9 ms
    Farm        v1.0.5        1,608.0 ms
    Rspack      v1.5.8        2,137.0 ms
Oxc is not a JavaScript runtime environment; it's a collection of build tools for JavaScript. The tools output JavaScript code, not native binaries. You separately need a runtime environment like Deno (or a browser, depending on what kind of code it is) to actually run that code.
Deno is a native implementation of a standard library; it doesn't have a language implementation of its own, it just bundles the one from Chrome (V8).
This is a set of linting tools and a typestripper, a program that removes the type annotations from TypeScript to turn it into pure JavaScript (and turns JSX into document.whateverMakeElement calls). It still doesn't have anything to actually run the program.
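A deliberately naive sketch of what "stripping types" means; real typestrippers (SWC, oxc) use a full parser, and this toy regex breaks on object literals, string contents, generics, and basically everything interesting:

```typescript
// Toy type stripper: removes simple `: Type` annotations only.
// NOT how a real tool works -- purely to show the input/output shape.
function stripSimpleAnnotations(src: string): string {
  return src.replace(/:\s*[A-Za-z_$][\w$.]*/g, "");
}

const input = "function add(a: number, b: number): number { return a + b; }";
console.log(stripSimpleAnnotations(input));
// → "function add(a, b) { return a + b; }"
```

The output is plain JavaScript: valid input to any JS engine, but you still need that engine to run it.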
I'm going to call it: a Rust implementation of JavaScript runtime (and TypeScript compiler) will eventually overtake the official TypeScript compiler now being rewritten in Go.
Nothing, but it will happen anyway. Maybe improved memory safety and security, at least as a plausible excuse to get funding for it. Perhaps also improved enthusiasm of developers, since they seem to enjoy the newness of Rust over working with an existing C++ codebase. Well there are probably many actual advantages to "rewrite it in Rust". I'm not in support or against it, just making an observation that the cultural trend seems to be moving that way.
If you want native binaries from typescript, check my project: https://tsonic.org/
Currently it uses .Net and NativeAOT, but adding support for the Rust backend/ecosystem over the next couple of months. TypeScript for GPU kernels, soon. :)
No, it is a suite of tools to handle TypeScript (and JavaScript as its subset). So far it's a parser, a tool that strips TypeScript declarations and produces JS (like SWC), a linter, and a set of code transformation tools/interfaces, as far as I can tell.
Too slow. Different people implemented the linter, bundler, and TS compiler in JS. That means three different parsers and ASTs, which is inefficient. These guys want one grand unified compiler to rule them all.
I've played with all of these various formatters/linters in my workflow. I tend to save often and then have them format my code as I type.
I hate to say it, but biome just works better for me. I found the ox stuff to do weird things to my code when it was in weird edge case states as I was writing it. I'd move something around partially correct, hit save to format it and then it would make everything weird. biome isn't perfect, but has fewer of those issues. I suspect that it is hard to even test for this because it is mostly unintended side effects.
ultracite makes it easy to try these projects out and switch between them.
For the love of god, please stop naming Rust projects with "corrosion" and "oxidation" and the cute puns related to Rust, because they are currently overplayed.
I said nothing about the rs prefix. But making oxide, ferrous, Fe2O3 or whatever your whole shtick tells me nothing about your package, and the pun space is so very crowded at this point that it just makes for a bad naming scheme.
Oxc is not the first Rust-based product on the market that handles JS, there is also SWC which is now reasonably mature. I maintain a reasonably large frontend project (in the 10s of thousands of components) and SWC has been our default for years. SWC has made sure that there is actually a very decent support for JS in the Rust ecosystem.
I'd say my biggest concern is that the same engineers who use JS as their main language are usually not as adept with Rust and may experience difficulties maintaining and extending their toolchain, e.g. writing custom linting rules. But most engineers seem to be interested in learning so I haven't seen my concern materialize.
It's not like JS isn't already implemented in a language that's a lot more similar to Rust anyhow though. When the browser or Node or whatever other runtime you're using is already in a different language out of necessity, is it really that weird for the tooling to also optimize for the out-of-the-box experience rather than people hacking on them?
Even as someone who writes Rust professionally, I also wouldn't necessarily expect every Rust engineer to be super comfortable jumping into the codebase of the compiler or linter or whatever to be able to hack on it easily because there's a lot of domain knowledge in compilers and interpreters and language tooling, and most people won't end up needing experience with implementing them. Honestly, I'd be pretty strongly against a project I work on switching to a custom fork of a linting tool because a teammate decided they wanted to add extra rules for it or something, so I don't see it as a huge loss that it might end up being something people will need to spend personal time on if they want to explore.
The goal is for Vite to transition to tooling built on Oxc. They’ve been experimenting with Rolldown for a while now (also by voidzero and uses oxc) - https://vite.dev/guide/rolldown
oxidation is a chemical process where a substance loses electrons, often by reacting with oxygen, causing it to change. What does it have to do with JavaScript?
Not to discredit OP's work of course.
It just takes someone with poor empathy towards their users to ship slow software that they don't use themselves.
But to be fair, besides the usual patterns like tree-shaking and DCE, "runtime performance" is really tricky to measure or optimize for.
Evan Wallace proved it by building esbuild. This is no longer a bet.
> If their open platform were as powerful as it should be, it would be easy to use it to recreate the kinds of experiences they propose to sell.
You would be surprised to know that tech companies may find it cheaper to pay money rather than spend developer bandwidth on stuff beyond their core competency.
Dropbox was also considered trivially implementable, but end users rarely try to re-invent it.
Doesn’t look super interesting to me tbh.
Try git reset --hard, that should work.
It is bad ux.
A power user can just pass the right params. Besides, it is not that hard to support a "--yolo" parameter for that use case.
`oxfmt` should have done the same and `oxfmt .`, with the desired dir ".", should have been the required usage.
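A sketch of the safer argument handling being argued for (hypothetical behavior and flag names, not oxfmt's actual CLI): refuse to run with no paths, and support a dry-run flag.

```typescript
// Hypothetical safer CLI defaults: an explicit path is required, and
// --check gives a dry run. Names here are made up for illustration.
type Args =
  | { ok: true; paths: string[]; write: boolean }
  | { ok: false; error: string };

function parseArgs(argv: string[]): Args {
  const write = !argv.includes("--check");
  const paths = argv.filter((a) => !a.startsWith("--"));
  if (paths.length === 0) {
    // Mirror rm's design: never touch the tree unless asked explicitly.
    return { ok: false, error: "no paths given; pass '.' to format the current tree" };
  }
  return { ok: true, paths, write };
}
```

With this shape, running the tool bare prints an error instead of silently rewriting files, while `tool . --check` scans without writing.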
It's blisteringly fast
C is fine but old
> But for a dev waiting 50ms or 20ms does not matter. At all.
It absolutely does:
https://mail.python.org/pipermail/python-dev/2018-May/153296...
https://news.ycombinator.com/item?id=16978932.
Nonsense.
In other words, does it treat comments as syntactic units, or as something that can be ignored since they are not needed by the "next stage"?
The reason to find out what the comments are is of course to make it easy to remove them.
But I guess it wouldn't be an apples-to-apples comparison, because Bun can also run TypeScript directly.