Hacker News | Karliss's comments

I doubt it will be a problem in practice.

Regular variadic arguments aren't used very often in C++, with the exception of printf-like functions. They're not rare enough for the majority of C++ programmers to be unaware of them, but they are much rarer than in Python. The main reason people know about them at all is printf. The "new", C-compatible form has been supported since the first ISO-standardized version of C++, if not longer. There hasn't been a good reason to use the "old" form for a very long time, which means the amount of C++ code using the deprecated form is very low.

Being deprecated means that most compilers and linters will likely add a warning or code-fix suggestion. So any maintained project that was accidentally using the C-incompatible form will quickly fix it; there's no good reason not to.

As for projects that for some reason target an ancient pre-ISO-standard C++ version, they wouldn't have upgraded to a newer standard anyway. So even if a new standard removed the old form completely, it wouldn't have helped those projects.

So no, you don't need to know the old form to read C++ code. And in the very unlikely case you encounter it, accessing the variadic arguments works the same way in both forms, through the special va_list/va_arg calls. So if you only know the "new" form you should have a pretty good idea of what's going on. You might look up in a reference what the deal with the missing comma is, but other than that it shouldn't be a major problem for reading code. This is hardly going to be the biggest obstacle when dealing with code bases that old.


The article answers this near the very beginning.

> Now, don't get me wrong. Trigonometry is convenient and necessary for data input and for feeding the larger algorithm. What's wrong is when angles and trigonometry suddenly emerge deep in the internals of a 3D engine or algorithm out of nowhere.

In most cases it is perfectly fine to store and clamp your first-person camera angles as angles (unless you are working on a 6DoF game). That's surface-level input data, not the deep internals of a 3D engine. You process your input, convert it to the relevant vectors/matrices, and only then forget about angles. You will have at most a few dozen such interactive inputs from the user, with well-defined ranges and behavior. It's not a problem from an edge-case-handling perspective, nor from a performance one.

The point isn't to avoid trig for the sake of avoiding it at all costs. It's about not introducing it in situations where it's unnecessary and redundant.


Ah you're right! Then I believe the author and I are indeed on the same page.

0.001 mm is 1 micron, not 10.

More like the opposite. Point cloud data captured by various means has existed for a long time, with the raw data visualized more or less just like this. And sci-fi movies/games use the raw-visualization look as something futuristic and computer-tech-like, just like wireframe on a black background (although that one is getting partially downgraded to retro-sci-fi status, since drawing a 3D wireframe isn't hard anymore). It started when any 3D computer graphics, even basic wireframe, was futuristic and not every movie could afford it, with some faking it by analog means. Any good sci-fi author takes inspiration from real-world technology and extrapolates from it, often before the technology is widely recognized by the general population. Once something becomes a consumer product used beyond researchers and trained professionals, the visuals tend to get more polished and you lose some of the raw, purely functional, engineering style.

Except it kind of fails at that too. The window corners seem to be based either on those squircle things or on some other varying-radius curve that eases into the sides much more gradually than a proper circle. The window buttons (close, minimize) and the round toolbar buttons anchored to the top-right corner are based on proper circles. Attempting to center a circle in a varying-curvature corner results in varying spacing between the circle and the corner, which defeats the whole point of giving different windows different corner sizes (not calling it a radius, because they are not circles).

When the top-right corner contains a search field instead of a rounded button, that also seems to use a varying-curvature shape instead of a capsule with proper circles at the ends. It still results in varying spacing between the window corner and the toolbar content.

And that's just the two top corners. Attempts to align the top corners result in an even bigger mismatch with the rest of the window content. For example, Calculator: it has a grid of round buttons. While the window corners might match the top bar (as well as they can, given the different shapes), the main calculator buttons don't match the corners at all.

A similar problem affects many of the popups that have something like a confirmation button anchored to the bottom-right corner.

The rounded scrollbar handle is not aligned with the bottom-left corner size either; instead it awkwardly gets cut off by a different amount in each program.

Menus also have this disease. The non-circular corner curve of the overall menu shape extends way past the corner of the item highlight, resulting in varying spacing and making it feel almost like the whole menu has bulged out instead of having flat sides.


Exactly!

And to the OC you're replying to: the window close/minimize/resize buttons were already equidistant from the window edge on macOS 15, and probably earlier.

Here is a screenshot (Safari in the background, TextEdit in front): https://pasteboard.co/OeMBTDKGsTx9.png

In macOS 26 it's only weirder because, as you say, due to the squircle window corners we now have this constantly varying distance to the edge.

EDIT: I "get" Apple's fascination with squircles, but why did they make the radius so big? Probably no one would have complained if they had just changed the current ~15-20px rounded corners into ~15-20px squircles, but they went 50px+ on toolbared windows.


Not on a phone right now, but you have to type in the sample text above and press "check". It's a bad UI choice to show the bars before any text has been entered, and to separate the bars from the input field with additional text.


It is already way beyond dual layer. The 4.8 TB is achieved using 301 layers.


There goes my hope for non-cloud backup. I was thinking 1 TB doesn't quite make it, or at least that I would need dozens of these.


The capacity per device is irrelevant.

What matters is the capacity per volume, per mass and per dollar.

The capacities per volume and per mass for these glass slabs are already very competitive; they are about the same as for the best tape cartridges currently available. The capacity per volume is about twice as good as for the best HDDs, and the capacity per mass is much better than that, because HDDs are very heavy.

If such optical storage were not as expensive as it currently is, it would already be much better than any cloud storage. The slow writing speed is similar to that of file downloads or uploads over the Internet. Reading can be done much faster than writing, because it uses ordinary lasers and a video camera rather than the very expensive femtosecond-pulse lasers used for writing.


Bought a ferry ticket; it told me I was close enough to board. The usual popup for advancing time to the departure time didn't show up. I decided to speed up time and wait manually, but it just kept going without boarding until 2am, when being out of a hotel triggered a game over.


Hmm, that's not good. Where was the ferry from and to?


One of the two terminals in Cherbourg, France, the city you reach by traveling from Dublin to France, but not the one I arrived at. It only had routes to England.


Similar thing here. Was on a ferry from Ireland to France and got a penalty for not being in a hotel. How is a 20-hour ferry not considered overnight transport?


It's not that different from "real 3D" renderers. Especially in deferred rendering pipelines: the rasterizer creates a bunch of buffers for the depth map, normal map, color, etc., but the main shaders run on those 2D buffers. That's the beauty of it: the parts operating on 3D triangles are kept simple, and the expensive lighting shaders run once on flat 2D images with zero overdraw. The shaders don't care whether the normal-map buffer came from 3D geometry that was rasterized just now, was prerendered some time ago, or is a mix of the two. And even in forward rendering pipelines, the fragment shader operates on implicit 2D pixels created from the "real 3D" data by the vertex shaders and rasterizer.

The way I look at it: if the input and the math in the shader work with 3D vectors, it's a 3D shader. Whether there is also a 3D rasterizer is a separate question.

Modern 3D games exploit this in many different ways. Prerendering a 3D model from multiple views might sound like cheating, but the use of impostors is a real technique used by proper 3D engines.


There's a GBDK demo that actually does something similar (spinning 2D impostors). It does not handle the lighting, though, which is the impressive part here.

https://github.com/gbdk-2020/gbdk-2020/tree/develop/gbdk-lib...

Unfortunately, the 2D impostor approach has pretty significant difficulties with arbitrary 3D rotation. The GBDK impostor rotation demo needs a 256 KB cart just to handle 64 rotation frames in a circle for a single object. Expanding that to full 3D views and rotations gets quite prohibitive.

Haven't tried downloading RGBDS to compile this yet, but I suspect the final file is probably similar, pushing the upper limits of GB cart sizes.


Well, Cannon Fodder for the GBC is 1 MB, and the rest, such as Metal Gear and Alone in the Dark, are pretty big for the hardware.

