The scenario was about the first fusion (hydrogen) bomb test causing a runaway "ignition" of the atmosphere. It was never considered likely, but they still did the math to make certain it couldn't happen.
Since we're talking about defining our own processor, that means we need to define one with cheaper traps.
Expanding on what I wrote above about "bits of hardware acceleration", maybe adding a few primitives to the instruction set that make page table walking easier would help.
And with a trusted compiler architecture you don't need to keep the ISA stable between iterations, since it's assumed that all code gets compiled at the last minute for the current ISA.
Taking this to an extreme, the whole idea of a TLB sounds like hardware protection too?
As a thought experiment, imagine an extremely simple ISA and memory interface where you would do address translation or even cache management in software if you needed it... the different cache tiers could just be different NUMA zones that you manage yourself.
You might end up with something that looks more like a GPU or super-ultra-hyper-threading to get throughput masking the latency of software-defined memory addressing and caching?
In TempleOS, everything runs in ring 0, but that's not the same as doing protection in software (which would require disallowing any native code not produced by some trusted translator). It simply means there's no protection at all.
That's because CS in real/V86 mode is actually a writable data segment. Most protection checks work exactly the same in any mode, but the "is this a code segment?" check is only done when CS is loaded in protected mode, and not on any subsequent code fetch.
Using a non-standard mechanism of loading CS (LOADALL or RSM), it's possible to have a writable CS in protected mode too, at least on these older processors.
There's actually a slight difference in the access rights byte that gets loaded into the hidden part of a segment register (aka "descriptor cache") between real and protected mode. I first noticed this on the 80286, and it looks to be the same on the 386:
- In protected mode, the byte always matches the one from the GDT/LDT entry: bit 4 (code/data segment vs. system) must be set (the segment load instruction won't allow it otherwise), and bit 0 (accessed) is set automatically (and written back to memory).
- In real and V86 mode, both of these bits are clear. So in V86 mode the value is 0xE2 instead of the "correct" 0xF3 for a ring 3 data segment, and similarly in real mode it's 0x82 (ring 0).
The hardware seems to simply ignore these bits, but they still exist in the register, unlike other "useless" bits. For example, LDT only has bit 7 (present), and GDT/IDT/TSS have no access rights byte at all - they're always assumed to be present, and the access rights byte reads as 0xFF. At least on the 286 that was the case, I've read that on the Pentium you can even mark GDT as not-present, and then get a triple fault on any access to it.
Keeping these bits, and having them different between modes might have been an intentional choice, making it possible to determine (by ICE monitor software) in what mode a segment got loaded. Maybe even the two other possible combinations (where bit4 != bit0) have some use to mark a "special" segment type that is never set by hardware?
Seems to me from reading the deleted text file[1] like the author[2] used an LLM to get feedback on how to improve their own code. That isn't at all what "vibe-coding" usually means, and I say that as a complete AI hater myself.
Or do you have some "smoking gun" evidence?
I think the setup screen is a really nice touch and not something an AI would come up with.
“ Some of this firmware code was written with AI assistance. It currently contains an IPC re-entry, and possibly other bugs that could cause the RP2350 to crash under certain circumstances. ”
Seems like an admission they’ve not really read the code either.
I guess the question is: would I trust this with a ~35-year old hard drive that I have to baby lest it finally die? Well, I'd rather wait for the (slightly more) inevitable kinks to be ironed out.
This has nothing to do with this project. It's an adapter for connecting (very) old hard drives which only support CHS to a modern computer via USB, so that you can copy the data off them.
I mean, I’m sure there _are_ drive adapters without CHS support; in my sample size of 1, a cheap no-name adapter bought from Amazon a few years ago, it works just fine (I’m assuming the very early IDE drive I used didn’t use LBA, but I don’t have it anymore).
For what it's worth, the adapter is one of those half-red half-black vertical-insertion ones with a cursed USB-A to USB-A cable, connections for SATA and PATA (2.5 and 3.5”) and a sliding “Molex” connector for the 3.5” PATA drive. Not a quality item…
It may very well depend on how the HDD was set up originally.
Before HDDs had Integrated Drive Electronics (IDE), you had to add an interface card to one of your ISA slots and connect the HDD to it.
Each HDD model had a factory-documented CHS geometry, but it didn't take long for this to stop representing the actual physical layout on the platters.
Either way, you set the jumpers on the interface card for that particular geometry according to the documentation; that was the only reliable way to address the sectors correctly, if it could be done at all.
With the arrival of IDE, the interface card was no longer needed because that logic moved inside the HDD itself, and motherboards started shipping with built-in HDD connectors, not just floppy ones.
You would set the HDD geometry in BIOS and all seemed OK until CMOS corruption occurred from something like a power surge, when the setting reverted to default.
IDE drives got smarter in tandem with the BIOS's advances, and eventually the default "automatic" BIOS setting was smart enough to correctly pick up the effective geometry, whether or not the HDD was flexible enough to have been commissioned with something other than its "native" geometry. By this time almost all new HDDs were, but the older PIO-0 HDDs were still the most abundant, and for people moving a drive to a new motherboard this made it work seamlessly most of the time.
Once LBA came out, it was layered on top of that, but you still had fixed CHS options in the BIOS if you wanted them; otherwise CHS was handled automatically as established. It was never exactly the same under every BIOS, though.
With a lesser USB adapter, it may or may not pick up the geometry correctly, depending on how the data appears. And the outcome can still vary: detecting and using the structure found on the HDD, or re-partitioning and formatting a HDD that still contains its previous data, with the layout & structure coming out the same, or not quite. Setting up a completely zeroed HDD may not end up with the same CHS as any of that either; in that situation the factory geometry prevails, since there is nothing but zeros to autodetect a data structure from.
To avoid this I still like to zero (at least the first 100 MB of) the HDD first, remove it from all power for at least an hour to let the internal capacitors that store any non-default geometry discharge, then partition & format on a vintage Intel-based PC with a highly compatible BIOS.
That drive will then be more compatible with anything it connects to, but if a zeroed HDD was commissioned from USB to start with, it can still be just fine.
Booting is another thing to layer on; then you may or may not have to pay attention to sector alignment in addition to geometry. Even if plain storage use works, other obstacles to booting can come up.
Now it does seem like something like a Pi in the process would not be any more helpful for accessing data from a properly established old HDD, but it may be one of the only ways to set up an old HDD using something other than that HDD's inbuilt default. But that was to be avoided back then too: it was better to have the HDD set up as default rather than with a unique "custom" CHS, which amounts to "weird", and after that it may not be possible to connect it to anything else and have it recognized.
Unless you can manually set a matching CHS in the BIOS, which a USB adapter won't let you do the way a BIOS will. A Pi could substitute for that, but it was never really a good idea; mainly it was useful for setting a non-default CHS on one drive to match the default CHS of an established drive when both are plugged into the same motherboard.
If I were being very skeptical, I would say it's possible the coder didn't even know that USB adapters exist, prompted his AI to come up with one, and this is the first untested draft.
Thanks for trying to educate the young whippersnappers about hard drives, but a lot of this rambling seems entirely off-topic.
>Unless you can manually set CHS in BIOS to match, which a USB adapter won't let you do anything you want like BIOS. A Pi could substitute for that but it was never really a good idea, mainly useful to set a non-default CHS on one drive to match the default CHS on an established drive when both are plugged into the same motherboard.
USB hard drives act as SCSI block devices; they don't have a CHS geometry, and sectors are addressed by a single number (LBA = Logical Block Address).
Again: the purpose of this device is to connect an OLD HARD DISK to a MODERN COMPUTER. Not the other way around! If you plug it in and try to boot from it with a BIOS / UEFI CSM that supports this, it will make up a CHS geometry based on the total number of sectors, instead of using the (real or translated) one that the drive actually uses and reports in its "IDENTIFY DEVICE" response. Because it's connected over USB and behaves like any other USB mass storage device.
That may well lead to problems when booting DOS from a drive that was formatted in some other machine, because the MBR will not use the same geometry. But that's not what this is for.
>If I was being very skeptical, I would say it's possible the coder didn't even know that USB adapters exist.
From the second paragraph in the readme: «« While cheap, modern adapters usually only work with newer "LBA" type drives, ATAboy works all the way back to the earliest CHS only, PIO Mode 0, ATA disks. »»
There were a lot of "non-standard" elements to work around over time, since there wasn't actually a real standard. More info can always help, from many viewpoints and experiences, and still not cover it all; young or old, the more people who have something to give as well as something to learn, the better :)
The CHS values are still present in the partition table along with LBA equivalent, SCSI or not. Only with MBR layout though, not GPT. Some systems have never paid attention to CHS, some have never stopped. Like different forms of DOS.
Which is why a proper USB adapter is intended to just work with a PIO-0 HDD, and usually does, unless the data layout on the old drive is so uncommon that it would be a show-stopper even when connected to a vintage ATA motherboard. That would be the case when the old HDD was set up, for whatever reason, with a unique non-mainstream "custom" CHS that has to be manually set in the BIOS to match.
Then you had "drive overlays" which can get even more challenging when you're connecting old HDDs to newer PCs so often it makes you blue in the face :)
This code in the fallback path (when no constant-time @min/@max is available) will only work if the subtraction doesn't overflow. Or is this not a problem for some reason?
It makes no difference whether they're signed or unsigned. Unless the subtraction checks for overflow, or is done in a wider integer type than the numbers being compared, the high bit will not always indicate which number is smaller.
e.g.
0x8000_0000 < 0x0000_0001 for signed numbers
0x8000_0000 - 0x0000_0001 = 0x7fff_ffff, high bit clear
Yes, looking at the source code on GitHub now cleared that up!
Didn't see it mentioned in the article though, maybe I missed it. If not, I think that this detail would be a good thing to include, both since it's a common mistake that others with less experience might make, and to get ahead of nitpicky comments like mine :)
Funny that you would be arguing for that (unless I misunderstood the intention), given your many other posts about how C is a horrible broken unsafe language that should not be used by anyone ever. I tend to agree with that, btw, even if not so much with the "memory safety" hysteria.
Should every program, now and in the future, be forced to depend on libc, just because it's "grandfathered in"?
IMO, Linux is superior because you are in fact free to ignore libc, and directly interface with the kernel. Which is of course also written in C, but that's still one less layer of crud. Syscalls returning error codes directly instead of putting them into a thread-local variable would be one example of that.
Should a hypothetical future OS written in Rust (or Ada, ALGOL-68, BLISS, ...) implement its own libc and force userspace applications to go through it, just because that's "proper"?
In traditional UNIX, there is no libc per se, there is the stable OS API set of functions and that's it.
When C was standardised, a subset of the UNIX API became the ISO C standard library, aka libc. When it was proven that wasn't enough for portable C code, the remaining UNIX API surface became POSIX.
Outside UNIX, libc is indeed a thing, because many of those OSes aren't even written in C, just like your language list example. In those OSes libc ships with the C compiler, not the OS per se, as you can check by diving into VMS documentation from before it became OpenVMS, or IBM and Unisys systems, where libc is also irrelevant if you're using PL/I, NEWP, whatever.
Also on Windows, you are not supposed to depend on libc unless you are writing portable C code, there isn't one libc to start with. Just like everyone else, each compiler ships their own C runtime library, and nowadays there is also universal C runtime as well, plenty of libc choices.
If not writing portable C code, you aren't supposed to use memset(), rather FillMemory().
Same applies to other non-UNIX OSes, you would not be calling memset(), rather the OS API for such service.
I don't think GP is arguing that's the best way to design an OS, just that interfacing with non-Linux Unixes is best done via libc, because that's the stable public interface.
With the WWW, from here on out and especially in multimedia WWW applications, frames are your friend. Use them always. Get good at framing. That is wisdom from Gary.
The problem most website designers have is that they do not recognize that the WWW, at its core, is framed. Pages are frames. As we want to better link pages, then we must frame these pages. Since you are not framing pages, then my pages, or anybody else's pages will interfere with your code (even when the people tell you that it can be locked - that is a lie). Sections in a single html page cannot be locked. Pages read in frames can be.
Therefore, the solution to this specific technical problem, and every technical problem that you will have in the future with multimedia, is framing.
Frames securely mediate, by design. Secure multi-mediation is the future of all webbing.