I think this is a very impressive implementation in WebGL.
However, I believe the same approach could easily get 2x to 4x the frame rate (or 2x to 4x the battery life) if it used compute shaders, but those aren't available in WebGL. So I'd count this as an example of why WebGL will not replace "proper" desktop OpenGL anytime soon.
Also, it appears to be reflecting by the same amount everywhere, which makes it look more like glue than like glass. Typically, glass shows Fresnel reflection, meaning that it reflects more strongly the shallower the angle between the incoming light and the surface. That's why glass bottles usually reflect at the edges (which curve away from you) but are fully refractive in the center (which faces you).
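For anyone curious, the cheap way to get that angle-dependent reflectance is Schlick's approximation. A minimal Python sketch (the 0.04 base reflectance is a commonly assumed value for glass in air, not something from this demo):

```python
def schlick_fresnel(cos_theta, f0=0.04):
    """Schlick's approximation of Fresnel reflectance.

    cos_theta: cosine of the angle between the view direction and the
    surface normal (1.0 = surface facing you, 0.0 = grazing/edge-on).
    f0: reflectance at normal incidence (~0.04 for glass in air).
    """
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Facing the camera (bottle center): almost pure refraction.
print(round(schlick_fresnel(1.0), 3))  # → 0.04
# Grazing angle (bottle edge): almost pure reflection.
print(round(schlick_fresnel(0.0), 3))  # → 1.0
```

In a fragment shader this is one `pow()` away from a dot product of the view vector and the normal, so the "same amount everywhere" look is not a cost issue.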
So the hard part of this isn't computing the reflection color (that can be done with an environment cubemap and some math expressions), but sorting the transparency. This demo uses "depth peeling", which is a fancy term for rendering the model several times: each pass uses the depth buffer from the previous pass to reject everything at or in front of the layer already extracted, so each pass "peels off" the next-nearest layer of surfaces, and the layers are then blended together in depth order.
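The blending step of that technique can be sketched in a few lines. This is a toy single-channel Python version of front-to-back "under" compositing of peeled layers (the colors and alphas are made up for illustration):

```python
def composite_front_to_back(layers):
    """Blend depth-peeled layers front to back.

    layers: list of (color, alpha) tuples, nearest surface first --
    the order in which depth peeling produces them.
    Uses the "under" operator: each new layer only contributes
    through the transparency accumulated so far.
    """
    color, alpha = 0.0, 0.0  # single channel for simplicity
    for c, a in layers:
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
    return color, alpha

# Two semi-transparent glass layers in front of an opaque back wall.
result = composite_front_to_back([(0.9, 0.25), (0.6, 0.25), (0.2, 1.0)])
```

Because each layer's contribution is scaled by the transparency accumulated so far, the layers must arrive depth-sorted, which is exactly why the multi-pass peeling is needed in the first place.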
The whole purpose of this trick is to abuse the rasterizer! I struggle to see how compute shaders would make this faster.
The problem is security, as usual. You can use OpenGL to exploit a host system, so every interaction has to be whitelisted and checked. And compute shaders, as far as I know, can be crafted to bring your PC to a grinding halt.
WebGPU will be a lovely security nightmare. The underlying hardware is inherently insecure: it's fast, not safe. Safety is simply not a feature of graphics operations.
Yep. Browsers could protect against this much better by profiling fragment shader instructions and data buffer sizes, and by requiring the render loop to have a delay. But then we get into managed-GL territory and it's no longer really GLES; it's a library that gives you some features of GLES.
If I buy and download a game from Steam, I trust it to not contain malware. That's why I allow that game to run with very little protection.
If I visit a random website, I have to be prepared for the worst. And if I visit any website that finances itself with ads, I can assume that they will behave like offensive attackers trying whatever they can to collect more information about me in ways that I do not want.
As such, WebGL needs a very strong sandbox around it, to prevent rogue websites from causing harm. For desktop games, that trust issue does not arise, because they have a different business model.
That's why in my opinion, desktop OpenGL will always remain faster than WebGL.
WebGL does have a strong sandbox, just like HTML/CSS/ES do. But desktop OpenGL has a strong sandbox too (by API design, process control, device memory paging protection, etc.), so I'm not sure why you think the threat models are any different: it's just as bad if a game secretly hacks your computer as if a website does. Desktop OpenGL development is dying anyway, so it doesn't really matter how fast it is compared to WebGL in the future, but I don't believe it is faster than WebGL due to any sandbox differences. Whatever speed differences exist are due to features, design, and the speed of the host language.
Maybe yes, but then that's only market share lost to DirectX. I don't see the triple-A video game market shrinking anytime soon.
As for the sandbox, Microsoft recently disabled GPU virtualization because, apparently, sandboxing a GPU is really difficult. For video games, that sandbox is not used in practice: you can access pretty much all the GPU memory that you want.
As for the different threat model, a video game tends to have a clear distributor. A website is more anonymous and, hence, inherently less trustworthy.
You know that I know that you know that this isn't really true, lol.
> Yet the linked document has been updated just a few weeks ago.
But that's just because they come from the same WebGL spec repo. The 2.0-compute spec indeed hasn't been updated for a year. I imagine it was halted because WebGL 2 never saw great adoption to begin with. Though even Apple / WebKit is finally adding it, because I think they realized that WebGPU is going to take who knows how many more years to pan out. What a disappointment.
Uh... it does have Fresnel reflection. You can see that the curved surfaces that don't face the camera reflect more of the mostly white environment; it's even clearer when you adjust the reflectionFactor.
In the past, I have supervised cross-platform Unity and UE4 game projects. I even found and fixed a mobile GPU heat death bug :) So that 2x to 4x is just my personal experience.
Some things, like GPU pixel shaders, tend to be magically slower on WebGL, even if you send the same raw HLSL/GLSL source code. The reason appears to be that, due to security concerns, the same shader source code is compiled differently by WebGL than by desktop OpenGL.
Also, many smart tricks like DMA from the SSD bus to the GPU are inherently very unsafe, so WebGL sandboxing simply doesn't allow them. The result is that you need to copy the data twice instead of once, and if memory bandwidth is the bottleneck (it usually is for triple-A titles), that extra copy can easily wreck performance.
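Back-of-the-envelope, with hypothetical numbers (the asset size and bandwidth figures below are made up purely for illustration):

```python
# Hypothetical: a 4 GB asset bundle and 16 GB/s of usable memory bandwidth.
asset_gb = 4.0
bandwidth_gbps = 16.0

# Direct DMA: SSD -> GPU memory, the data crosses the bus once.
direct_s = asset_gb / bandwidth_gbps
# Sandboxed path: SSD -> system RAM -> GPU memory, two copies.
staged_s = 2 * asset_gb / bandwidth_gbps

print(direct_s, staged_s)  # → 0.25 0.5
```

When the bus is the bottleneck, the extra hop simply doubles the transfer time, regardless of how fast the shaders themselves are.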
In my humble experience, this is exactly the main reason why WebGL hasn't taken off for web games the way Flash did.
Common consumers don't get why a graphics card that plays their game collection just fine struggles with an Amiga 500-style game in their browser.
Wait, are you saying Flash was fast compared to WebGL??
Which Amiga 500 like browser games are you thinking of? Would you link to one that demonstrates WebGL dramatically underperforming compared to a desktop?
This seems quite exaggerated to me. @fxtentacle gave some specific reasons, but even the 2x-4x estimate seems overstated for the average shader, and pixel shaders are only a small fraction of a typical game's run time. The 2x-4x claim is relative and lacks specifics. It could happen, especially with tiny shaders, but on average I don't believe it, and it's easy to verify using ShaderToy, for example. (It's also plenty easy to see, just by visiting ShaderToy, that typical WebGL shader perf is fine.)
There are far bigger reasons consumers don't go to individual websites for their games than the difference in perf between WebGL and desktop OpenGL. Just to mention two that can each separately account for it: 1) asset loading in the browser over the internet every time you play is awful, and 2) a website is not a distribution channel -- most people making games don't also have the capacity to market, publish, and host their own games, and most consumers are already looking for games on Steam and other app stores. Throw in browser UI restrictions and the lack of game-controller support compared to native apps on top of that. It's really easy to see that WebGL game adoption has nothing to do with perf.
Hahaha, I appreciate the humor. If it was so great, why didn't Unreal 4 support it? And when did web games on 3D Flash ever have high adoption? Your earlier claim was that WebGL isn't used as much as Flash was, but the only Flash games that saw high adoption on the web were 2D and had nothing to do with UE3's Flash Player support.
“Why did Adobe decide to EOL Flash Player and select the end of 2020 date?
Open standards such as HTML5, WebGL, and WebAssembly have continually matured over the years and serve as viable alternatives for Flash content. Also, the major browser vendors are integrating these open standards into their browsers and deprecating most other plug-ins (like Adobe Flash Player). By announcing our business decision in 2017, with three years’ advance notice, we believed that would allow sufficient time for developers, designers, businesses, and other parties to migrate existing Flash content as needed to new, open standards”
Compute shaders in WebGL would be great. But why would they improve the performance in this case? Right now it's implemented as a fragment shader, which to me seems appropriate here.