jxl-rs (https://github.com/libjxl/jxl-rs) is the underlying implementation. It's relatively new, but Rust certainly calms security fears. This library wasn't really an option the last time this came around in Chromium.
No, memory safety is not security. Rust's memory guarantees eliminate some issues, but they also create a dangerous overconfidence: devs treat the compiler as a security audit and skip the hard work of threat modeling.
A vigilant C programmer who manually validates everything and uses every available tool at their disposal is less risky than a complacent Rust programmer who blindly trusts the language.
Didn't Google refuse to add JpegXL because they claimed there wasn't enough interest? I don't think they refused out of security concerns, but maybe I'm misremembering that.
Google argued that adding a format that largely duplicated what AVIF already provided (I know JpegXL does support a bit more, but from most users' perspectives they're largely right), while being written in an unsafe language, wasn't worth the increase in attack surface.
And it really was the right move at the time, imo. JXL, however, now has better implementations and better momentum in the wider ecosystem; it's not just yet another image format that gets put into Chrome and de facto becomes a standard.
I can confirm.
I found multiple problems in the "official" cjxl encoder back in 2023, unlike the webp2 (cwp2) implementation, where I could not find any bug or error.
If the encoder has obvious problems it is not a big deal, but it doesn't bode well for the decoder.
Google refused to merge JpegXL as a strategy play to promote AVIF, which was in use by other teams (I think Photos?). Internally, Chrome engineers were supportive of JXL but were overridden by leadership.
Do you have actual sources for this? Because the other people commenting about how the newer library removes most of the concerns explains this better than an unsubstantiated speculation about promoting AVIF.
I’ve recently compared WebP and AVIF with the reference encoders (and rav1e for lossy AVIF), and for similar quality, WebP is almost instant while AVIF takes more than 20 seconds (1MP image).
JXL is not yet widely supported, so I cannot really use it (videogame maps), but I hope its performance is similar to WebP with better quality, for the future.
You have to adjust the CPU-used (speed) parameter, not just quality, for AVIF. Though it can indeed be slow, it should not be that slow, especially for a 1MP image. The defaults usually use a higher-effort CPU setting for some reason. I have modest infrastructure that generates 2MP AVIF in a hundred ms or so.
I tested both WebP and AVIF with maximum CPU usage/effort. I have not tried the faster settings because I wanted the highest quality for small size, but for similar quality WebP blew AVIF out of the water.
I also have both compiled with -O3 and -march=znver2 in GCC (same for rav1e's RUSTFLAGS) through my Gentoo profile.
Maximum CPU effort between those two libs is not really comparable, though. Quality is subjective, and it sounds like WebP worked best for you! Just saying, there is little benefit in using the max CPU settings for AVIF. That's like comparing the max settings on zip vs xz!
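To make that concrete, here's a rough sketch of how the two effort knobs differ. It shells out to the cwebp and avifenc CLI tools; the exact flag names and ranges can vary between versions, and the file names are placeholders, so treat it as illustrative rather than definitive:

```python
# Rough sketch: compare the "effort" knob, not just quality.
# Assumes cwebp and avifenc are installed; flags may differ by version.
import subprocess
import time

SRC = "map_tile.png"  # placeholder input image

def timed(cmd):
    """Run an encoder command and return elapsed seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

# WebP: -m is the method/effort knob (0 = fastest, 6 = slowest/best).
webp_s = timed(["cwebp", "-q", "80", "-m", "6", SRC, "-o", "out.webp"])

# AVIF: -s/--speed is the effort knob (0 = slowest/best, 10 = fastest).
# Leaving it at the slowest setting is what turns encodes into tens of seconds.
avif_slow = timed(["avifenc", "-s", "0", SRC, "out_slow.avif"])
avif_fast = timed(["avifenc", "-s", "6", SRC, "out_fast.avif"])

print(f"webp -m 6: {webp_s:.2f}s, avif -s 0: {avif_slow:.2f}s, avif -s 6: {avif_fast:.2f}s")
```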
I've been hearing about fights over JpegXL and WebP (and AVIF?) for years, but don't know much about it.
From a quick look at various "benchmarks", JpegXL seems to just be flat out better than WebP in both compression speed and size, so why has there been such reluctance from Chromium to adopt it? Are there WebP benefits I'm missing?
My only experience with WebP has been downloading what is nominally a `.png` file but then being told "WebP is not supported" by some software when I try to open it.
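For what it's worth, you can tell what such a file actually is by peeking at its first bytes rather than trusting the extension. A minimal sketch (the signature constants come from the respective format specs; the helper name is just illustrative):

```python
# Minimal format sniffing by magic bytes, since the extension can lie.
def sniff(path: str) -> str:
    with open(path, "rb") as f:
        head = f.read(16)
    if head.startswith(b"\x89PNG\r\n\x1a\n"):
        return "png"
    if head.startswith(b"\xff\xd8\xff"):
        return "jpeg"
    if head[:4] == b"RIFF" and head[8:12] == b"WEBP":
        return "webp"
    if head.startswith(b"\xff\x0a") or head.startswith(
        b"\x00\x00\x00\x0cJXL \r\n\x87\n"
    ):
        return "jxl"  # bare codestream or ISO BMFF container
    return "unknown"

print(sniff("downloaded.png"))  # often reports "webp" for files like the one above
```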
Most of the code in WebP and AVIF is shared with VP8/AV1, which means if your browser supports contemporary video codecs then it also gets pretty good lossy image codecs for free. JPEG-XL is a separate codebase, so it's far more effort to implement and merely providing better compression might not be worth it absent other considerations. The continued widespread use of JPEG is evidence that many web publishers don't care that much about squeezing out a few bytes.
Also, from a security perspective the reference implementation of JPEG-XL isn't great. It's over a hundred kLoC of C++, and given the public support for memory safety by both Google and Mozilla, it would be extremely embarrassing if a security vulnerability in libjxl led to a zero-click zero-day in either Chrome or Firefox.
The timing is probably a sign that Chrome considers the Rust implementation of JPEG-XL to be mature enough (or at least heading in that direction) to start kicking the tires.
> The continued widespread use of JPEG is evidence that many web publishers don't care that much about squeezing out a few bytes.
I agree with the second part (useless hero images at the top of every post demonstrate it), but not necessarily the first. JPEG is supported pretty much everywhere images are, and it’s the de facto default format for pictures. Most people won’t even know what format they’re using, let alone that they could compress it or use another one. In the words of Hank Hill:
> Do I look like I know what a JPEG is? I just want a picture of a god dang hot dog.
https://www.youtube.com/watch?v=EvKTOHVGNbg
I'm not (only) talking about the general population, but major sites. As a quick sanity check, the following sites are serving images with the `image/jpeg` content type:
* CNN (cnn.com): News-related photos on their front page
* Reddit (www.reddit.com): User-provided images uploaded to their internal image hosting
* Amazon (amazon.com): Product categories on the front page (product images are in WebP)
I wouldn't expect to see a lot of WebP on personal homepages or old-style forums, but if bandwidth costs were a meaningful budget line item then I would expect to see ~100% adoption of WebP or AVIF for any image that gets recompressed by a publishing pipeline.
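If anyone wants to repeat the sanity check, here's a rough sketch of it (the URL is a placeholder, and the Accept header is just an assumption about how a CDN might negotiate formats):

```python
# Sketch: ask the server what it actually serves for a given image URL.
import requests

def served_type(url: str) -> str:
    # Advertise modern-format support so content negotiation, if any, kicks in.
    resp = requests.get(
        url,
        headers={"Accept": "image/avif,image/webp,image/jxl,image/*"},
        stream=True,
        timeout=10,
    )
    return resp.headers.get("Content-Type", "unknown")

print(served_type("https://example.com/front-page-photo.jpg"))  # placeholder URL
```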
And more importantly, an additional format is a commitment to maintain support forever, not only for you, but for future people who implement a web browser.
I can completely see why the default answer to "should we add x" should be no unless there is a really good reason.
> various "benchmarks" JpegXL seems just be flat out better than WebP
The decode speed benchmarks are misleading. WebP has been hardware accelerated since 2013 in Android and 2020 in Apple devices. Due to existing hardware capabilities, real users will _always_ experience better performance and battery life with webp.
JXL is more about future-proofing. Bit depth, Wide gamut HDR, Progressive decoding, Animation, Transparency, etc.
JXL flat out beats AVIF (the image codec, not videos) today. AVIF also pretty much doesn't have hardware decoding in modern phones yet. It makes sense to invest NOW in JXL rather than in AVIF.
For what people use today, unfortunately, there is no significant case for beating WebP's existing momentum. The size vs perceptual quality tradeoffs are not significantly different. For users, things will get worse (worse decode speed and battery life due to lack of hardware decode) before they get better. That can take many years, because more features in JXL also means translating them into hardware die space will take more time. Just the software side of things is only now picking up.
But for what we all need – it's really necessary to start the JXL journey now.
It was an issue with the main JPEGXL library being unmaintained and possibly open to security flaws. Some people got together and wrote a new one in Rust, which then became an acceptable choice for a secure browser.
Google created WebP, and that is why they are giving it unjustified preferential treatment and have been trying to unreasonably force it down the throat of the internet.
WebP gave me alpha transparency with lossy images, which came in handy at the time. It was also not bogged down by patents and licensing. Plus, like others said, if you support VP8 video, you pretty much already have a WebP codec; same with AV1 and AVIF.
A better argument might be that Chrome protects their own work versus that of a research group in Google Switzerland. However, as others mentioned, another unsafe binary parser in a browser is hardly worth the security implications.
Which only strengthens my argument: WebP seemed like some ad-hoc pet project in Chrome, and that ended like most unsafe binary parsers do, with critical vulnerabilities.
You're getting downvoted, but you're not wrong. If anyone else had come up with it, it would have been ignored completely. I don't think it's as bad as some people make it out to be, but it's not really that compelling for end users, either. As other folks in the thread have pointed out, WebP is basically the static image format that you get “for free” when you've already got a VP8 video decoder.
The funny thing is all the places where Google's own ecosystem has ignored WebP. E.g., the Go x/image module has a WebP decoder (the stdlib proper has none), but all of the encoders you'll find are CGo bindings to libwebp.
Yes, but it's not recommended - it does not have inter-frame compression, so it is significantly less efficient than just having a regular video file and slapping 'gif' on it.
What, isn't this the cue for someone to explain that it's ironic that WebP is really a video format that makes a bad image format, and now we have symmetry: JpegXL is a good image format that makes a bad video format? :-D
(I don't know if any of this is true, but it sounds funny...)
Thanks, but just like with WebP, I'll try to stick to regular JPEGs whenever possible. Not all programs I use accept these formats, and for a common user JPEG + PNG should mostly cover all needs. Maybe add GIF to the list for simple animations, while more complex ones can be videos instead of images.
Unfortunately being universal implies way more than just having good browser support. There are quite a few image processing programs without webp or jpeg-xl support. I'm using Windows 11 and the default image viewer can't even open webp... Also, keep in mind that due to subscription models there are many people stuck with older Photoshop versions too.
Webp can be really annoying once you hit certain encoding edge cases.
One customer of mine (fashion) has over 700k images in their DAM, and about 0.5% cannot be converted to webp at all using libwebp. They can without problem be converted to jpeg, png, and avif.
Just out of curiosity, what's the problem libwebp has with them? I wasn't aware of cases where any image format would just cross its arms and refuse point blank like that.
We have never been able to resolve it better than knowing this:
Certain pixel colour combinations in the source image appear to trip the algorithm to such a degree that the encoder will only produce a black image.
We know this because we have been able to encode the images by (in pure frustration) brute force: manually moving a black square to different locations on the source image and then trying to encode again. Suddenly it will work.
Images are pretty much always exported from Adobe, often smaller than 3000x3000 pixels. Images from the same camera, same size, same photo session, and same export batch will work, and then suddenly one out of a few hundred may come out black, and only the WebP one, not the other formats; the rest of the photos work for all formats.
A more mathematically inclined colleague tried to have a look at the implementation once, but was unable to figure it out because they could apparently not find a good written spec on how the encoder is supposed to work.
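A rough sketch of what that brute-force retry loop could look like, using Pillow; the patch size, stride, quality setting, and the "solid black output" heuristic are assumptions for illustration, not the exact procedure described above:

```python
# Sketch of the brute-force workaround: retry the WebP encode with a small
# black square pasted at different positions until the output is no longer black.
import io
from PIL import Image

def encodes_black(img: Image.Image) -> bool:
    """Encode to WebP in memory and check whether the result is solid black."""
    buf = io.BytesIO()
    img.save(buf, "WEBP", quality=85)
    buf.seek(0)
    decoded = Image.open(buf).convert("L")
    return decoded.getextrema() == (0, 0)

def encode_with_workaround(path: str, out_path: str, patch=64, stride=256):
    img = Image.open(path).convert("RGB")
    if not encodes_black(img):
        img.save(out_path, "WEBP", quality=85)
        return
    # Move a black square across the image until the encoder behaves.
    for y in range(0, img.height - patch, stride):
        for x in range(0, img.width - patch, stride):
            candidate = img.copy()
            candidate.paste((0, 0, 0), (x, y, x + patch, y + patch))
            if not encodes_black(candidate):
                candidate.save(out_path, "WEBP", quality=85)
                return
    raise RuntimeError("no patch position produced a usable encode")
```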
"JPEG XL" is a little bit of a misnomer as it's not just "JPEG with more bits". It supports lossless encoding of existing content at a smaller file size than PNG and allows you to transcode existing JPEGs recoverably for a 20% space savings, the lossy encoding doesn't look nearly as ugly and artifacted as JPEG, it supports wide gamut and HDR, and delivers images progressively so you get a decent preview with as little as 15% of the image loaded with no additional client-side effort (from https://jpegxl.info/).
It is at least a very good transcoding target for the web, but it genuinely replaces many other formats in a way where the original source file can more or less be regenerated.
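For anyone curious about the recoverable JPEG transcoding, here's a minimal sketch using the cjxl/djxl reference tools (assuming they're on PATH; the --lossless_jpeg flag is the default in recent versions, and the file names are placeholders):

```python
# Sketch: losslessly recompress an existing JPEG into JXL, then verify the
# original JPEG can be reconstructed bit for bit.
import filecmp
import subprocess

subprocess.run(["cjxl", "photo.jpg", "photo.jxl", "--lossless_jpeg=1"], check=True)
subprocess.run(["djxl", "photo.jxl", "roundtrip.jpg"], check=True)

print("bit-identical:", filecmp.cmp("photo.jpg", "roundtrip.jpg", shallow=False))
```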
There's odd cases where it still has uses. When I was a teacher, some of the gamifying tools don't allow video embeds without a subscription, but I wanted to make some "what 3D operation is shown here" questions with various tools in Blender. GIF sizes were pretty comparable to video with largely static, less-than-a-second loops, and likely had slightly higher quality with care used to reduce color palette usage.
But I fully realize, there are vanishingly few cases with similar constraints.
Unfortunately, with Chromium dropping support for manifest-v2 extensions, and through that dropping proper support for uBlock Origin, I'm moving away from it. Not that that's easy, of course...
An oldie-but-goodie article with charts comparing WebP, JPEG XL, AVIF, JPEG, etc. AVIF is SLOW.
The issue was the use of C++ instead of Rust or WUFFS (that Chromium uses for a lot of formats).
https://blog.cloudflare.com/uncovering-the-hidden-webp-vulne...
Browser support for WebP is excellent now. The last browser to add it was Safari 14, on September 16, 2020: https://caniuse.com/webp
It got into Windows 10 1809 in October 2018, and into macOS Big Sur in November 2020.
Wikipedia has a great list of popular software that supports it: https://en.wikipedia.org/wiki/WebP#Graphics_software
https://web.dev/articles/replace-gifs-with-videos