
This isn't just down; this discussion suggests it's been barely holding on, and there's a non-zero chance it goes away or changes in some significant way going forward.


Yeah, I'm more of a `--wet-run`/`-w` fan myself. But it does depend on how serious/annoying the opposite is.


I've done that, but I hate the term "wet run."

I use "live run" now, which I think gets the point across without being sort of uncomfortable.


--with-danger

--make-it-so

--do-the-thing

--go-nuts

--safety-off

So many fun options.


I'm a fan of --safety-off. It gives off an 'aim away from face' or 'mishandle me and I'll blow a chunk out of your DB' vibe.


I find it important to include system information in here as well, so that copy-pasting an invocation from system A to system B won't run.

For example, our database restore script has a parameter `--yes-delete-all-data-in`, and it needs to be parameterized with the PostgreSQL cluster name. So a command with `--yes-delete-all-data-in=pg-accounting` works on exactly one system and fails everywhere else.
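A minimal sketch of that kind of guard in shell (all names are made up; a real script would read the local cluster name from the target's config rather than hard-coding it):

```shell
#!/bin/sh
# Hypothetical guard: the destructive flag must name the cluster this script
# is actually pointed at, so an invocation copy-pasted from system A
# fails fast on system B.
LOCAL_CLUSTER="pg-accounting"   # in real life, read from the target system

check_guard() {
  confirmed=""
  for arg in "$@"; do
    case "$arg" in
      --yes-delete-all-data-in=*) confirmed="${arg#*=}" ;;
    esac
  done
  if [ "$confirmed" != "$LOCAL_CLUSTER" ]; then
    echo "refusing: pass --yes-delete-all-data-in=$LOCAL_CLUSTER" >&2
    return 1
  fi
  echo "guard ok: restoring over $LOCAL_CLUSTER"
}

check_guard --yes-delete-all-data-in=pg-accounting       # matches this system
check_guard --yes-delete-all-data-in=pg-staging || true  # copied from elsewhere: refused
```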


It's in the UI, not the command line, but I like Chromium's thisisunsafe.


I've done a few --execute --i-know-what-im-doing flags for some of the more dangerous scripts.


May I recommend --I-take-responsibility-for-the-outcome-of-proceeding and require a capital I?


--commit is solid too


    --moisten


Moist run is the way.


When you do release it, do you know yet if you plan on releasing the full change history? Or would you start with a snapshot at the ~release date?


For an opposite datapoint: I had no issues with either game that I noticed. Denver area.


It's also a good way to learn about UEFI for people most familiar with Go.


I get that for a boot/root drive, but not for building a self-hosted storage system. I'm not talking about the cost of SATA SSD vs NVMe; I haven't seen a lot of board+enclosure options that take enough M.2 disks.


I've noticed there aren't a lot of reasonable home/SMB M.2 NVMe NAS options for main boards and enclosures.

SATA SSD still seems like the way you have to go for a 5-to-8-drive system (boot disk + 4+ drives in RAID 6).


It seems like it's rare to find M.2 with the sort of things you'd want in a NAS (PLP, reasonably high DWPD, good controllers, etc.), and you've also got to contend with managing heat in a way I never saw with 2.5" or 3.5" drives. I would imagine the sort of people doing NVMe for NAS/SAN/servers are all probably using U.2 or U.3 (I know I do).


I've been doing my home NASes in M.2 NVMe for years now, with 12 disks on one and 22 disks on another (backup is still HDD though):

DWPD: Between the random TeamGroup drives in the main NAS and the WD Red Pro HDDs in the backup, the write limits are actually about the same. With the bonus that reads are unlimited on the SSDs, so things like scheduled ZFS scrubs don't count as 100 TB of usage across the pool each time.

Heat: Actually easier to manage than the HDDs. The drives are smaller (so denser for the same wattage), but their peak wattage is lower than the idle spinning wattage of the HDDs, and there isn't a large physical buffer between the hot parts and the airflow. My normal case airflow keeps them under 60 C during sustained raw benchmarking of all the drives, and more like under 40 C given ZFS doesn't like to go past 8 GB/s in this setup anyway. If you select $600 top-end SSDs with high-wattage controllers shipping with heatsinks you might have more of a problem; otherwise it's like 100 W max for the 22 drives and easy enough to cool.

PLP: More problematic if this is part of your use case, as NVMe drives with PLP will typically lead you straight into enterprise pricing. Personally my use case is more "on demand large file access" with extremely low churn data regularly backed up for the long term and I'm not at a loss if I have an issue and need to roll back to yesterday's data, but others who use things more as an active drive may have different considerations.

The biggest downsides I ran across were:

- Loading up all of the lanes on a modern consumer board works in theory but can be buggy as hell in practice: anything from boot becoming EXTREMELY long, to sometimes just not working at all, to PCIe errors during operation. A used Epyc in a normal PC case is the way to go instead.

- It costs more, obviously

- Not using a chassis designed for massive numbers of drives with hot-swap access can make installation and troubleshooting quite a pain.

The biggest upsides (other than the obvious ones) I ran across were:

- No spinup drain on the PSU

- No need to worry about drive powersaving/idling <- pairs with -> whole solution is quiet enough to sit in my living room without hearing drive whine.

- I don't look like a struggling fool trying to move a full chassis around :)


It's also quite difficult to find a 2280 M.2 SATA SSD. I had an old laptop that only took 2280 M.2 SATA SSDs.

It's always one of the two: M.2 but PCIe/NVMe, or SATA but not M.2.


FWIW, SATA and NVMe are mutually incompatible concepts for a single device: SATA drives use AHCI to wrap ATA commands in a SCSI-shaped queuing mechanism (command lists) over the SATA bus, while NVMe (M.2/U.2/add-in) drives speak the NVMe protocol (multiple queues) over PCIe.


For a drive, yes, SATA and NVMe are mutually exclusive. The M.2 slot can provide both options. But if you have a machine with a M.2 slot that's only wired for SATA but not PCIe, your choices for drives to put in that slot have been quite limited for a long time.


There were even M.2 PCIe-connected AHCI drives - both not-SATA and not-NVMe. The Samsung SM951 was one. You can find them on eBay but not much elsewhere.


At least the Samsung and SanDisk PCIe AHCI M.2 drives were only for PC OEMs and were not officially sold as retail products. There were gray-market resellers, but overall it was a niche and short-lived format. Especially because any system that shipped with a PCIe M.2 slot could gain NVMe capability if the OEM deigned to release an appropriate UEFI firmware update.


When it comes to ready-made home/SMB-grade NASes, plenty of options have popped up in the last year or two: Terramaster F8, Flashstor 6 or 12, BeeLink ME mini N150 (6x NVMe). It's just QNAP and Synology who seem uninterested.


Probably because QNAP and Synology's pricing is rent-seeking built on a per-drive-bay pricing model.


How well does buying PCIe-to-M.2 adapters work for a custom NAS? Slot-wise you should be able to get 16 M.2 devices per motherboard with, for example, a Supermicro consumer board.


The difficulty with PCIe-to-M.2 adapters is that you usually can't bifurcate below x4, and active PCIe switches got very expensive after PCIe 3.0.

Used multiport SATA HBA cards are inexpensive on eBay. Multiport NVMe cards are either passive (relying on bifurcation, giving you four x4 links from an x16 slot) or active and very expensive.

I don't see how you get to 16 M.2 devices on a consumer socket without a lot of expense.


Not to mention, the physical x16 slot may be running in x8 mode if you're using a video card.


PCIe switches got very expensive after Broadcom bought PLX.


I don't think there are any consumer boards which support this?

In practice you can put 4 drives in the x16 slot intended for a GPU, 1 drive each in any remaining PCIe slots, plus whatever is available onboard. 8 should be doable, but I doubt you can go beyond 12.

I know there are some $2000 PCIe cards with onboard switches so you can stick 8 NVMe drives on there - even with an x1 upstream connection - but at that point you're better off going for a Threadripper board.


Can you point to a specific motherboard? 16 separate PCIe links of any width sounds rather high for a consumer platform.



A few generations old, and HEDT, which isn't exactly consumer, but ok. I see one for $100 on eBay, so that's not awful either.

Even that gives you one M.2 slot, and 8/8/8/16 on the x16 slots, if you have the right CPU. Assuming those can all bifurcate down to x4 (which is most common), that gets you 10 M.2 slots out of the 40 lanes. That's more than you'd get on a modern desktop board, but it's not 16 either.
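The lane arithmetic spelled out (slot widths as described; assuming x4 per drive):

```shell
#!/bin/sh
# 8/8/8/16 slots, bifurcated down to x4 per M.2 drive.
total=0
for width in 8 8 8 16; do
  total=$((total + width))
done
echo "$total lanes -> $((total / 4)) M.2 drives at x4 each"
```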

For home use, you're in a tricky spot; can't get it in one box, so horizontal scaling seems like a good avenue. But in order to do horizontal scaling, you probably need high speed networking, and if you take lanes for that, you don't have many lanes left for storage. Anyway, I don't think there's much simple software to scale out storage over multiple nodes; there's stuff out there, but it's not simple and it's not really targeted towards a small node count. But, if you don't really need high speed, a big array of spinning disks is still approachable.


That's a workstation board, not a regular consumer board, and it is over 5 years old by now - it has even been discontinued by Supermicro.

Building a new system with that in 2025 would be a bit silly.


I don't know if you consider it "reasonable", but the Gigabyte Aorus TRX boards even from 6 years ago came with a free PCIe expansion card that held 8 M.2 sticks, up to 32 TB, on a consumer board. It's E-ATX, of course, so quite a bit bigger than an appliance NAS, and the socket is for a Threadripper, more suitable for a hypervisor than a NAS, but if you're willing to blow five to ten grand and be severely overprovisioned, you can build a hell of a rig.


Are you sure? I've seen plenty of motherboards bundle a PCIe riser to passively bifurcate the PCIe slot to support four M.2 drives in an x16 slot or two in an x8 slot, but doing eight M.2 drives in one PCIe slot would either require a PCIe switch that would be too expensive for a free bundled card, or require PCIe bifurcation down to two lanes per link, which I don't think any workstation CPUs have ever supported. And 32TB is possible with just four M.2 SSDs.


If you want to go big on capacity, which is something you usually want in a NAS, M.2 becomes super expensive.


They mention GCS FUSE. We've had nothing but performance and stability problems with it.

We treat it as a best-effort alternative for when native GCS access isn't possible.


FUSE-based filesystems in general shouldn't be treated as production-ready, in my experience.

They're wonderful for low-volume, low-performance, low-reliability operations (browsing, copying, integrating with legacy systems that don't permit native access), but beyond that they consume huge resources and do odd things when the backend is not in its most ideal state.
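Part of why gcsfuse is so tempting is that mounting is a one-liner (`gcsfuse <bucket> <dir>` is the documented basic invocation). A best-effort sketch, with placeholder names and a fallback message standing in for whatever native-access path you'd actually take:

```shell
#!/bin/sh
# Best-effort mount of a GCS bucket via gcsfuse; bucket name and mountpoint
# are placeholders. Treat mount failure as routine, not fatal.
BUCKET="my-bucket"
MNT="${TMPDIR:-/tmp}/gcs-demo"
mkdir -p "$MNT"

if command -v gcsfuse >/dev/null 2>&1 && gcsfuse "$BUCKET" "$MNT"; then
  echo "mounted $BUCKET at $MNT"
else
  echo "gcsfuse unavailable or mount failed; using a native GCS client instead"
fi
```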


I started rewriting gcsfuse using https://github.com/hanwen/go-fuse instead of https://github.com/jacobsa/fuse and found it rock-solid. FUSE has come a long way in the last few years, including things like passthrough.

Honestly, I'd give FUSE a second chance; you'd be surprised at how useful it can be -- after all, it's literally running in userland, so you don't need to do anything funky with privileges. However, if I were starting afresh on a similar project, I'd probably be looking at using 9p2000.L instead.


I think it's possible to write a solid FUSE filesystem. Not as performant as in-kernel, but it could easily not be the bottleneck, depending on the backend.

I commented, though, because GCP highlights it in a few places as a component for AI workloads. I'm curious if anyone is using it in an important application and is happy with it.


AWS Lambda uses FUSE and that’s one of the largest prod systems in the world.


An option exists, but they prefer you use the block storage API.


No, as in Lambda itself uses FUSE as an implementation detail of their container filesystem.


It seems there were some major issues, but AWS has developed around them and optimised for its needs (https://www.madebymikal.com/on-demand-container-loading-in-a...).

Fair, but it's far from advice I'm willing to give people (other CTOs).


Is the current policy completely flexible? 2 days? (Or am I misreading and it's currently 100% in office?)


I've found QEMU's microvm to be faster at boot while having nicer tooling and a cleaner upgrade path if you need more features. Aside from hype, I'm actually not sure why anyone would still use Firecracker.
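For reference, a direct-kernel microvm boot looks roughly like this (the machine options follow QEMU's microvm documentation; `vmlinux` and `rootfs.img` are placeholder paths). The block only assembles and prints the command rather than launching anything:

```shell
#!/bin/sh
# Sketch of a minimal microvm launch. microvm skips firmware/legacy PC
# emulation: direct kernel boot plus virtio devices.
QEMU_CMD="qemu-system-x86_64 \
  -M microvm,x-option-roms=off,rtc=off \
  -nodefaults -no-user-config -nographic \
  -kernel vmlinux -append 'console=ttyS0 root=/dev/vda' \
  -drive id=root,file=rootfs.img,format=raw,if=none \
  -device virtio-blk-device,drive=root \
  -serial stdio"
echo "$QEMU_CMD"
```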


Mainly because of the much larger attack surface of QEMU.


I can't quantify how much of that surface is also reduced by the microvm machine type vs. other parts of QEMU vs. Firecracker... but fair point.

