I played a lot of Civ 1, Colonization and Civ 2. The first time I tried Civ 3 I lost a city to some culture or religious influence and ragequit (I was also working my first job at that point, so I didn't have as much time to spare).
Played a bit of Civ 4 and 5 (or 6?), but I was never really as hooked on them.
It's on that list of things I would've loved to do with infinite time. Especially as it actually had a hotseat multiplayer mode that would be awesome to put in a networked context (IIRC it might've been a hack enabled with a hex editor, but it was fun).
I actually wrote some stuff directly because I was young, poor and stupid.
First year in uni my Windows laptop broke, so I had to lug around a heavy, underpowered second-hand PPC PowerBook, and I wrote some application I needed myself because I didn't want anything "bloated".
Font handling, shared-memory backbuffers, the network API, etc., as I wrote in another comment. It's an API designed to solve over-the-wire graphics in the late-80s/early-90s era using the idioms of that time; already by the year 2000 the problems it solved (scarce rasterization power) no longer existed, nor is it even a suitable API surface (even less so 25 years later).
Not to mention that the complexity of X11 shoots through the roof once shared buffers come into play.
X11 was OK for its time, but fundamentally it's a really outdated design, solving 80s/90s problems over the network the way you solved them back then.
It is INCREDIBLY outdated and forces all graphics to flow through a crappy 80s era network protocol even when there is no network. It is the recurrent laryngeal nerve of graphics technology.
Most CPUs have both signed (arithmetic) and unsigned (logical) right-shift instructions (left shift is the same for both), so yes, it works (you can test this in C by casting a signed value to unsigned before shifting). The biggest caveat is that arithmetically right-shifting -1 still produces -1 instead of 0, but that's usually fine for old-school fixed-point math since -1 is close enough to 0.
It's not ideal, the api is kind of low-yet-high-level and that brings some complications.
Move backpressure handling onto the task producer and use a SharedArrayBuffer between the producer and worker, where the worker atomically updates a work-count or current work item ID in that SharedArrayBuffer that the producer can read (atomically) to determine how far along the worker has gotten.
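A minimal single-file sketch of that scheme (the names `completed`, `MAX_IN_FLIGHT`, etc. are my own; in real code the worker half would run in a `worker_threads` Worker and tasks would travel via `postMessage`, with only the counter living in the SharedArrayBuffer):

```javascript
const MAX_IN_FLIGHT = 4;

// One Int32 of shared memory: the count of tasks the worker has finished.
const sab = new SharedArrayBuffer(4);
const completed = new Int32Array(sab); // both sides would view this same memory

let produced = 0;
function tryProduce(queue) {
  // Backpressure check: how far ahead of the worker are we?
  if (produced - Atomics.load(completed, 0) >= MAX_IN_FLIGHT) return false;
  queue.push({ id: produced++ });      // stand-in for postMessage() to the worker
  return true;
}

function workerStep(queue) {
  const task = queue.shift();
  if (task) Atomics.add(completed, 0, 1); // worker side: mark one task done
}
```

The point of the atomics is that the producer never has to wait for a message round-trip to learn how far along the worker is; it just reads the counter.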
No, they're threads as far as the OS is concerned (they map to OS threads) and actually _do_ share a physical process and memory (that's how SharedArrayBuffer works).
However, apart from "plain" memory shared via SharedArrayBuffer, no objects are directly shared (in Node/V8 each worker lives in its own so-called Isolate, IIRC), so from a logical standpoint they're kind of like separate processes.
The underlying reason is that in JavaScript objects are by default open to modification, i.e.:
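For example (plain JS, nothing V8-specific):

```javascript
const point = { x: 1 };
point.y = 2;             // any code can add properties at any time
delete point.x;          // ...or remove them
Object.prototype.z = 3;  // ...or patch the prototype that every object sees
console.log(point.z);    // 3, resolved via the prototype chain
```

Sharing objects like this across threads would mean every property access could race with a shape change from another thread.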
To get sane performance out of JS, the runtime does a ton of tricks under the hood; the bad news is that making those thread-safe is either slow (think Python's GIL) or heavily exploitable in a multithreaded scenario.
If you've done multithreaded C/C++ work and touched upon Erlang, the JS Worker design is the logical conclusion: message passing works for small packets (work orders, via structured cloning), whilst shipping large data can be problematic with cloning.
This is why SharedArrayBuffers allow no-copy sharing: the plain memory arrays they expose don't offer any security surprises in terms of code execution (Spectre-style attacks are another story), and they also allow for work subdivision if needed.
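The copy-vs-share difference can be shown without spinning up an actual worker (a sketch; in a real worker setup the clone happens inside `postMessage`):

```javascript
// Structured cloning (what postMessage does to a plain typed array): deep copy.
const buf = new Uint8Array([1, 2, 3]);
const copy = structuredClone(buf);
copy[0] = 99;                 // mutating the clone leaves the original alone

// SharedArrayBuffer: two views alias the same memory, zero copies.
const sab = new SharedArrayBuffer(4);
const viewA = new Int32Array(sab);
const viewB = new Int32Array(sab);
Atomics.store(viewA, 0, 42);  // a write through one view...
Atomics.load(viewB, 0);       // ...is visible through the other, no copy made
```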
Optimizing the wrong thing. They probably wanted to shave customer-support costs via lower call volumes, but the people who need support were probably hanging onto their calls anyway, since nobody who can fix things themselves calls support. So: no savings, AND reduced customer satisfaction.