I actually have an implementation of that too, since I was fascinated by the twisting cloth example, but I need to figure out how best to incorporate it, or whether it's better as a standalone experiment.
I'm super frustrated by the state of 3D on web right now as an app developer. I wish we just had Vulkan on the web ...
Right now your options are basically having a GLES renderer that you can restrict to WebGL2 (so no compute shaders, etc. and other things that make desktop OpenGL acceptable for writing a modern renderer) or having to abstract over Vulkan/WebGPU yourself, which are similar but different enough to increase your code complexity considerably.
There's abstractions like wgpu and bgfx you can commit to, and of course you can just use game engine middleware and have it all done for you, but overall things fall short of just being able to "write once, run anywhere" a renderer, sadly.
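To make the "no compute shaders in WebGL2" point concrete, here's an illustrative sketch (not from the thread): a tiny WGSL compute shader of the kind WebGPU supports but WebGL2 simply cannot express. The buffer layout and names are made up for the example.

```javascript
// Hypothetical WGSL compute shader: doubles every element of a storage buffer.
// Bindings and names are illustrative, not from any real project.
const doubleWgsl = `
@group(0) @binding(0) var<storage, read_write> data: array<f32>;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id: vec3<u32>) {
  data[id.x] = data[id.x] * 2.0;
}
`;

// In a browser you'd compile this with device.createShaderModule({ code: doubleWgsl })
// and dispatch it from a compute pass. Under WebGL2, the closest equivalent is
// abusing fragment shaders with render-to-texture, or transform feedback.
console.log(doubleWgsl.includes("@compute")); // true
```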
> your options are basically having a GLES renderer that you can restrict to WebGL2 (so no compute shaders, etc. and other things that make desktop OpenGL acceptable for writing a modern renderer) or having to abstract over Vulkan/WebGPU yourself
I don't understand this complaint. What's worse about using WebGPU over using GLES? Seems like a strict improvement. You can use WebGPU anywhere, you're not required to "abstract" over Vulkan. If you're talking about using it outside of the web, you just choose wgpu or Dawn as your implementation, it's the same API and even the same implementation as you'd get in a browser.
Other than the fact that WebGPU sucks compared to modern Vulkan + extensions, there's nothing stopping you from just using WebGPU even in a native-only project, with no further abstraction.
What's holding it back? Is there something to be done to make it less frustrating? What would make "The Hardware-Accelerated Web" a breeze to use for developers?
Thanks! Never been easier to start than right now. This physics engine is a bit opaque in terms of how it works, but I recently wrote about a global illumination approach that uses surfels - I break it down into small manageable pieces, with plenty of interactive visualizations, and it's also in WebGPU! If you have some time, maybe take a look at that and start taking it apart: https://juretriglav.si/surfel-based-global-illumination-on-t...
It can definitely be achieved with Claude. Even with no experience in graphics programming, I've been able to replicate the results of several papers related to fluid simulation.
Awesome work! What always prevents me from implementing more solvers is the amount of math required. While the implementation itself usually seems simple, understanding the different optimization strategies for each solver gets confusing.
It's really impressive that the author was able to implement rendering papers and physics sim papers with such regularity. It really is a feat. Makes me curious to see what their background is.
Can you elaborate on what you mean? It could be a matter of perspective. For a stack of blocks, each 1 meter high, the stack can get quite tall, and your expectations of how it should look IRL might not be accurate, since you've likely never watched a large tower of blocks get knocked over from that vantage point. Especially if the masses of the objects are strange (super light for their size, or super heavy).
I know that in older games the recommendation was to keep gravity low (~6 m/s^2, IIRC) to help with simulation stability and make things look better; that might contribute to your sense that things are floaty.
I don't find the examples in the git repo to be especially floaty, but I work with a lot of simulators so I might just be used to it.
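A quick back-of-the-envelope sketch of why low gravity reads as "floaty": under constant gravity g, the time to fall from height h is t = sqrt(2h / g), so dropping g from ~9.81 to ~6 m/s^2 stretches every fall by roughly 28%. The function and numbers here are just for illustration.

```javascript
// Time for an object to fall from rest through height h (meters)
// under constant gravity g (m/s^2): t = sqrt(2h / g).
function fallTime(h, g) {
  return Math.sqrt((2 * h) / g);
}

const tEarth = fallTime(2, 9.81); // ~0.64 s for a 2 m drop
const tLowG = fallTime(2, 6.0);   // ~0.82 s for the same drop
console.log(tEarth.toFixed(2), tLowG.toFixed(2)); // "0.64 0.82"
```

That extra fraction of a second on every tumble is small per object, but across a whole collapsing stack it's very noticeable.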
I aspire to build cool stuff like this in WebGPU.
Very excited for the future of the web.