It's hard to convey to today's generation, who think Ivy Bridge to Haswell was a big jump or whatever, how awesome the 286 -> 386 -> 486 changes were to personal computing. It felt almost like going from an NES to a Super Nintendo to an N64. The improvements were astounding.
It wasn't a big jump, but it was a jump. Ivy Bridge lacks the instruction set required to run RHEL 10 [1].
The minimum supported microarchitecture level is x86-64-v3 and Ivy Bridge lacks AVX2 instructions.
Agreed, it's not those, it's the fact that we went from JS being a little sprinkling of dynamism on a document to an entire build process with massive numbers of dependencies and browser shims. The web feels like a mistake as a platform...
I remember trying to run a game, Rise of the Triad, which was built on an improved Wolfenstein engine iirc, and having it struggle on my 386 unless I made the viewport as small as possible. At which point it told me to buy a 486... well, I did eventually, so I guess it worked.
Had the same experience with Doom II. Got it to run surprisingly well on a brand new Tandy 486DX2 + 4MB RAM, though I seem to recall having issues with SoundBlaster compatibility.
The 286's protected mode did not allow for a 32-bit flat address space and was half-baked in other ways, e.g. there was no built-in way to return the CPU to real mode short of a slow and fiddly CPU reset.
It was architecturally a 16-bit CPU, so a flat 32-bit address space would be a non sequitur. If you wanted flat 32-bit addressing, there was a contemporary chip that could do it with virtual memory: the Motorola 68010 plus the optional external MMU. (Or, if you were willing to jump through some hoops, even a 68000... see the Sun-1.)
Protected mode on the 286 allowed 24-bit addressing, enabling access to 16 MB of memory, but lacked demand-paged virtual memory and required a CPU reset to return to real mode. The 386 introduced virtual memory through paging, 32-bit addressing for 4 GB of memory, and virtual 8086 mode for running multiple 8086 programs simultaneously without compromising security.
An MMU is pretty much necessary for robust multitasking. Without one, you are at the whim of how well software behaves, and it is harder for developers to write well-behaved software in the first place. That also assumes good intentions from programmers, since an MMU is necessary for memory protection (and thus security).
While emulating an FPU results in a huge performance penalty, an FPU is only required in certain domains. In the world of IBM PCs, it was also possible to upgrade your system with an FPU after the fact; I don't recall seeing this option outside the IBM-compatible world. While I have seen socketed MMUs on other systems, I don't know whether they were intended as upgrade options.
By the way, "the i486SX was a microprocessor originally released by Intel in 1991. It was a modified Intel i486DX microprocessor with its floating-point unit (FPU) disabled." (https://en.wikipedia.org/wiki/I486SX)
That's an advancement, but it's a matter of speed and simplicity. An MMU is a huge before-and-after; it's still the biggest separator of CPUs today. The most important detail in understanding a CPU is whether it has an MMU.
I remember a 120MHz Pentium Linux box arriving at a cottage in Crete, where, with the aid of a 56k USRobotics modem, we (my wife and I) worked remotely in 1995-6. She had a Mac SE/30 for her tourist guidebook work. She later upgraded to a 6100 PowerMac "pizza-box", various iMacs, G3/G4/G5, whereas I saved a quad-200MHz PentiumPro monster (Compaq Professional Workstation 8000, tricked up to 3GB RAM) from the skip. I regret taking that to the recycling centre many years later.
There are two ways of constructing software: one way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.
That's cute, but pure fantasy, imo. There's no non-trivial code that's actually used and maintained that's going to be immune to obvious deficiencies that get overlooked. Look at this [0], for example. Simple as can be, obvious defect in a critical area, left out in open source for ages, a type of bug an LLM would be liable to generate in a heartbeat and a reviewer could easily miss. We are human after all. [1]
The sheer amount of dedication is awe-inspiring. In the 80s, at Imperial College, London, Theoretical Physics Group, we came across some correspondence intended for one of the professors, I think. This person had evidently "translated" his native Spanish into English, laboriously, via some dictionary we thought. We spent many tea breaks puzzling over phrases such as
"Is well-knew the refran: of the said to made it has a good road."
"The language assumes contiguous memory, column-major order, and no hidden aliasing between arrays."
One out of three ain't bad.
"Column-major order", that's the one. That just says that there is an implied order of elements of an array. This enables efficient mapping of an array of the language definition to storage with a linear address space. It does not require storage to be organised in any particular way.
"Contiguous memory". Not really, the language definition does not even use the word "memory". The language is carefully defined so that you can implement it efficiently on a linear address space in the presence of caches (no accident that IBM machines in the 80s were pushing caches while CRAY was pushing vector processors). The term "contiguous" in the language definition just means a collection whose parts are not separated by other data objects.
"no hidden aliasing between arrays". This is a crude mis-statement of the actual rule of argument association across the subroutine/function caller-callee boundary. The rule takes pages to describe fully. A language that still has the EQUIVALENCE statement (although marked obsolescent) cannot be said to disallow aliasing. It is still quite hard to find a compiler that will reliably diagnose inadvertent aliasing at runtime. The actual rule in Fortran says something like "thou shalt not cause any effect on (dummy name) A through a reference to some other name, unless (some-clause-x or y or z)".
It is not suited to cases where the machine state is experienced by the user at all times (games, hardware control, real-time transactions). It is very suited to cases where the machine is expected to act like an SSBN, disappearing from view until a large floating-point calculation is ready.
Suddenly, it was possible to imagine running advanced software on a PC, and not have to spend 25,000 USD on a workstation.