With Intel VMX virtualization, instruction execution is handled by the CPU, but a lot of software still has to deal with HW peripheral emulation.
QEMU uses KVM (Intel VMX, etc.) but implements HW peripherals (display, network, disk, etc.) faithfully matching real HW, and provides a full BIOS (SeaBIOS) or UEFI firmware (EDK II) to deal with the boot process.
Over time, Linux (and Windows) have been extended to support novel paravirtualized “peripherals” (e.g., virtio) designed for high emulation performance rather than modeled on a real HW product.
Firecracker basically skips all the “real” peripheral emulation and the full BIOS/UEFI firmware; instead, it implements just enough to boot modern Linux directly. It is also written in Rust instead of C. It will never support DOS, Windows 95, or probably anything else.
This minimal “microVM” boot path lets it start booting Linux very quickly (sub-second), where a traditional QEMU VM might take 2-5 seconds. Some people are emboldened to effectively move back from containers to running applications in a VM…
Instead of the VM being long-lived, it is really just for running a single app.
I think Kata Containers has had this idea for much longer, but Firecracker provides a more efficient implementation of it.
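To give a sense of how little is involved, here is a rough sketch of a minimal Firecracker VM definition written out from Python: just a kernel image, boot args, one root drive, and the machine size; no BIOS and no emulated legacy devices. The paths, sizes, and file names below are placeholder assumptions, not anything from a real deployment.

```python
import json

# Rough sketch of a minimal Firecracker microVM config (the JSON shape used
# with `firecracker --config-file ...`). No BIOS/UEFI: just a kernel image,
# boot args, and a root filesystem. Paths and sizes below are placeholders.
vm_config = {
    "boot-source": {
        "kernel_image_path": "/var/vm/vmlinux",    # uncompressed kernel image
        "boot_args": "console=ttyS0 reboot=k panic=1 pci=off",
    },
    "drives": [
        {
            "drive_id": "rootfs",
            "path_on_host": "/var/vm/rootfs.ext4",  # root filesystem image
            "is_root_device": True,
            "is_read_only": False,
        }
    ],
    "machine-config": {"vcpu_count": 1, "mem_size_mib": 128},
}

with open("vm_config.json", "w") as f:
    json.dump(vm_config, f, indent=2)

# Then (assuming you have built a kernel and rootfs):
#   firecracker --api-sock /tmp/fc.sock --config-file vm_config.json
```

Most of the remaining boot time is the guest kernel itself, which is why the sub-second numbers are achievable at all.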
Thank you very much for the detail there. I assume you would also know how a Docker container compares to Firecracker in terms of boot time. I understand that a container and a VM are not the same thing, but I'm just curious.
Good questions — yes, Containarium relies heavily on *user namespaces*. Here’s how it works:
- We enable `security.nesting=true` on unprivileged LXC containers, so Docker can run inside (rootless).
- *User namespace isolation* ensures that even if a user is “root” inside the container, they are mapped to an unprivileged UID on the host (e.g., UID 100000), preventing access to host files or devices.
This setup allows developers to run Docker and do almost anything inside their sandbox, while keeping the host safe.
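To make the UID mapping concrete, here is a tiny sketch (not Containarium code) of how a single `/proc/<pid>/uid_map` entry translates an in-namespace UID to a host UID. The 100000 offset matches the example above; the helper name and range are just illustrative.

```python
def host_uid(container_uid: int, uid_map_line: str) -> int:
    """Translate a UID inside the user namespace to the host UID using one
    /proc/<pid>/uid_map entry: '<inside start> <outside start> <count>'."""
    inside, outside, count = (int(x) for x in uid_map_line.split())
    if not (inside <= container_uid < inside + count):
        raise ValueError("UID not covered by this mapping entry")
    return outside + (container_uid - inside)

# Example: root (UID 0) inside the container maps to unprivileged host UID 100000.
print(host_uid(0, "0 100000 65536"))   # -> 100000
print(host_uid(33, "0 100000 65536"))  # -> 100033 (e.g. www-data inside)
```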
I’ve seen too many embedded drivers written by well-known companies fail to use spinlocks for data shared with an ISR.
At one point, I found serious bugs (crashing our product) that had existed for over 15 years. (And that was 10 years ago).
Rust may not be perfect, but it gives me hope that some classes of stupidity will either be avoided or made visible (like every function being unsafe because the author was a complete idiot).
I remember my first PC had a hard drive of around 20 MB. It was vast at the time; I had so much software to keep me busy for a while that I felt overwhelmed. Among it was Windows 3.0, taking up probably a whopping 3 MB.
My first PC (my parents splurged on a high-end machine, for the time, of course) only had 8 MiB of memory and a 1 GiB HDD. It ran Windows 95 and Encarta just fine.
Seems like this might be a data entry error in the DOL LCA filings. We are presenting the data as is, but we can flag it for removal. It does seem like a clear error, where an extra digit was entered or the hourly rate field was filled in incorrectly.
And to clarify, this is a separate page, and separate data from the Levels.fyi total compensation data. I also doubt the US Department of Labor was reporting wages in Rupees lol, but it does seem like an error. Will take a look, and see if we can add some sort of outlier removal.
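Not speaking for how Levels.fyi would actually do it, but a simple sketch of the kind of outlier flagging that catches an extra-digit entry is an IQR rule over the reported wages; the numbers and parameter below are made up for illustration.

```python
import statistics

def flag_outliers(wages, k=1.5):
    """Flag values far outside the interquartile range (IQR rule).
    `wages` is a list of annual base salaries; `k` controls strictness."""
    q1, _, q3 = statistics.quantiles(wages, n=4)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [w for w in wages if w < low or w > high]

# A typo'd extra digit (3,000,000 instead of 300,000) stands out immediately.
sample = [150_000, 160_000, 175_000, 180_000, 190_000,
          200_000, 210_000, 225_000, 250_000, 3_000_000]
print(flag_outliers(sample))  # -> [3000000]
```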
It wouldn't completely prevent data loss, but most "normal" people would be a lot better off if they simply copied their important files to an external HDD or flash drive on a regular basis.
Flash drives are less than ideal for backups. I think when they are stored cold, i.e. unpowered, flash memory only retains data for a couple of years. Spinning hard drives are way more reliable for this use case.
That's true. But if they are stored unpowered for a couple of years, then you clearly aren't doing regular backups. OTOH, it doesn't seem unlikely that the average person would leave a disk gathering dust, so advising people to use a regular HDD is probably the best approach.
> if they are stored unpowered for a couple of years, then you clearly aren't doing regular backups
I am doing regular backups, yet I have a few backup disks that have been unpowered for years. They are older, progressively smaller backup HDDs I keep for extra redundancy.
Every 2-4 years I get a larger backup drive and clone my previous backup drive to the new one. This way, when the backup drive fails (which happened around 2013 because I was unfortunate enough to get a notoriously unreliable 3TB Seagate), I don’t lose much data, if any, because most of the new stuff is still on the computers and the old stuff is left on these older backup drives.
I do basically the same, but instead of keeping everything around I just keep the last two drives in rotation at the same time: one kept at home and one kept at work. One of them failed recently while I was performing a backup, so I just got a new (and larger) drive and synced it with the other backup drive before continuing as usual.
This is it: the odds of both the main drive and the external failing at the same time are low enough for most people, as long as you're regularly backing up and therefore catching it if the external has failed.
As long as it is not automatic, this is probably the only workable solution: a pair of USB disks that are rotated and manually copied to. Note to self: mark backup days in my calendar.
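For the "manually copied to" part, here is a rough sketch of what such a manual backup run could look like; the source directory and the two mount points are hypothetical, and it assumes rsync is installed.

```python
import datetime
import subprocess
import sys
from pathlib import Path

# Hypothetical paths: important files live in ~/important, and the two
# rotated USB disks mount at /mnt/backup-a and /mnt/backup-b.
SOURCE = Path.home() / "important"
DISKS = [Path("/mnt/backup-a"), Path("/mnt/backup-b")]

def run_backup() -> None:
    # Use whichever of the two rotated disks is currently plugged in/mounted.
    targets = [d for d in DISKS if d.is_mount()]
    if not targets:
        sys.exit("Neither backup disk is mounted; plug one in first.")
    dest = targets[0]
    # Mirror the source directory; --delete keeps the copy an exact mirror.
    # Because the run is manual, a damaged source only propagates on the
    # next run you start yourself, not silently in the background.
    subprocess.run(
        ["rsync", "-a", "--delete", f"{SOURCE}/", str(dest / "important")],
        check=True,
    )
    # Leave a timestamp so you can see at a glance when each disk was last used.
    (dest / "last-backup.txt").write_text(datetime.date.today().isoformat() + "\n")

if __name__ == "__main__":
    run_backup()
```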
I got to talk to Patrick about this and also someone who was working on the customized connectors.
They had to make some Gen 5-capable custom adapters that plugged into one of the front-of-chassis NVMe bay connectors (thus disabling that bank of 4 drive bays), and then routed that signal all the way back to the rear, using Nvidia's custom edge plug on the card.
Normally, if you're deploying the 800 Gbps ConnectX-8 cards, you have the card in one slot, then a little adapter board right next to it, with a cable NVIDIA makes. But for this, it had to be fully custom.
Would be neat to hear more from STH about exactly how that cable was made, because it seems like hand-building PCIe cables can be a bit tricky!