It is harder and harder to publish SaaS building blocks as open source, as too many companies do not contribute back their changes (they are sometimes called "freeriders"), leading core developers to abandon open source to protect their business model. This post is a summary of the discussions that happened between the Garage core developers, and of their strategy to make sure that everyone shares their improvements, by making it legally mandatory that future versions be published under the same open license.
As we say in data analytics, "garbage in, garbage out". These rankings, based on internet users' perceived insecurity, have been called out many times for their lack of relevance and their bias.
The first bias is "perception", which is influenced by how the media choose to communicate and by who owns them - in France, mostly right-wing billionaires. There is no correlation with actual, real statistics.
The second bias is in who answers the survey - often people already concerned about insecurity, which is a favorite topic of the right.
Finally, there is no control over how many times you can vote, and some people have demonstrated that, with very little knowledge, you can completely change the results by sending thousands of votes [1].
The fact that Nantes is deemed highly insecure in France is also a consequence of this city being socialist-run, and of it being the place where police killed a man named Steve during a party. So these attacks on Nantes being dangerous can also be interpreted as a backlash[2].
Please Hacker News, you're better than this, don't fall into this trap...
It seems the correlation between the article title ("Getting the Grid to Net Zero") and the subject actually discussed (maintaining power-grid stability in the presence of inverters) is very weak.
Don't get me wrong: the article is very interesting, and I really learnt something. I discovered "system inertia", I was not aware of the stability issues linked to inverters, and I did not know about grid-forming & grid-following inverters, nor about the research on finding the minimal share of grid-forming inverters needed to keep a power grid stable when a given power plant fails. All of these topics are very interesting.
But making a connection between inverters and ecology through the term "net zero" seems either off-topic, misleading, or irrelevant.

First, because the term "net zero" is heavily criticized: it means carbon is still emitted, but companies pay for carbon credits (which, for many reasons, do not at all compensate for the carbon emitted [1]). Here, building solar panels, wind turbines & batteries emits CO2, and their lifespan is relatively short (at most 10 years for batteries, ~25 years for wind turbines & solar panels, compared to hundreds of years for a dam[7]).

Second, because climate change is not the only ecological concern: there are worrying questions about mineral resource extraction, like lithium[2], which is heavily used in batteries. More generally, we are already extracting the whole Mendeleev periodic table[3]: we have no alternative mineral resources for batteries or other technologies, so the only solution is to extract, produce & consume less.

Third, if your only goal is to reduce carbon dioxide equivalent (eqCO2), you should advertise nuclear power plants as the solution. Depending on the study, they produce the same amount of eqCO2 as a wind turbine without batteries, or less[4]. Of course, eqCO2 is often not the only important subject here (being renewable/sustainable also matters, and uranium is a limited resource).

And finally, the growing use of renewable energy has not led to a worldwide energy transition, but to an addition. Achieving a transition will require far more than technology[5], something that is also not discussed here.
Speaking about solutions to pack a higher percentage of intermittent renewable energy sources (IRES)[6] into a power grid with the help of batteries and inverters would have been more accurate, in my opinion. Maybe "Why were we not able to achieve 100% renewable energy before?" if you want to be catchy - and even that is not perfect, as it still hides the fact that you rely on a lot of batteries, which are far from renewable.
In conclusion, I would say we should be careful, when engineers (here, the IEEE) discuss specific technologies (here, power-grid inverters), not to draw conclusions too quickly (such as a positive environmental impact), as they are far from obvious. I know they want to be read, and I know a title must be catchy to attract readers, but that is not an excuse, as illustrated above.
First, there is no proof that "LFP grid scale batteries" last longer than regular batteries today, as your question may imply.
It seems the first "grid scale batteries" were derived from EV batteries, and are designed to last one or two decades[1].
Basically, we are discussing battery ageing here, which is a complex problem[2].
According to the different studies I found on the topic, specifically those mentioning "large-scale" installations like the ones discussed here, the answer is consistently, and disappointingly, the same: between 10 and 20 years[3][5]. More precisely:
From [3]:
> To address the global effort to decrease carbon emissions, many consumers, corporations, and energy providers are adopting the use of electric vehicles and stationary energy storage systems paired with renewable electricity generation. These systems often utilize large-format lithium-ion batteries [...]. Real-world battery lifetime is evaluated by simulating residential energy storage and commercial frequency containment reserve systems in several U.S. climate regions. Predicted lifetime across cell types varies from 7 years to 20+ years, though all cells are predicted to have at least 10 year life in certain conditions.
From [5]:
> In the 2020 report, calendar life for both LFP and NMC Li-ion systems was stated as 10 years. The 2022 report takes additional information from long-term laboratory work (Saft, 2021) and product data into account (Baxter, 2021b) to establish new calendar lives of 16 years for LFP and 13 years for NMC. The calendar life is unchanged for 2030.
I also claim that batteries are not renewable. One might argue that, if we could recycle batteries like we recycle regular glass, they could be considered renewable. However, today there are two industrialized processes, pyrometallurgical and hydrometallurgical processing, that are not satisfying, as they "require high energy, and/or complex wet-chemistry steps"[4]. A third family of processes being explored, called "direct recycling"[4], also has severe drawbacks but is at least more promising.
Which makes me think: we are, at the very least, making huge bets on the future here, as we risk 1) having huge amounts of aged batteries in one or two decades, and 2) having no more mineral resources to extract.
Thanks! It seems that, after your own research, your statement of "at most 10 years for batteries" should really be "at most 20(+?) years"? To be conservative, perhaps 16 years - but still, that's a 60% delta. Also, it is interesting that in [5] the estimated LFP battery life went from 10 years in the 2020 analysis to 16 years in the 2022 report.
The kernel documentation defines some tag conventions; one of them is "Suggested-by".
Its definition:
> A Suggested-by: tag indicates that the patch idea is suggested by the person named and ensures credit to the person for the idea.
> Please note that this tag should not be added without the reporter's permission, especially if the idea was not posted in a public forum.
> That said, if we diligently credit our idea reporters, they will, hopefully, be inspired to help us again in the future.
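For reference, these tags live in the trailer block at the end of a commit message. A hypothetical example (subject line and names invented for illustration):

```
subsys: fix out-of-bounds read in parse loop

<commit body explaining the problem and the fix>

Suggested-by: Jane Doe <jane@example.org>
Signed-off-by: John Maintainer <maintainer@example.org>
```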
It could have been more appropriate to the situation: I think it conveys better the idea that you found a solution to a problem, but that, because you are not familiar with the project, the exact syntax of your patch was not kept.
I haven't compared the patches but a comment below [1] says:
> The only difference between the patch that was accepted and the one that was proposed is where the fix is. In one case it's in an ifdef outside of an if. In the other it's in an inner if statement. That's it. This is a difference in style not a technical difference in the patch at all.
It sounds like the author did quite a bit more than "suggest" the patch idea. They debugged the issue and wrote an entire patch which was accepted with one small change.
That works, but it probably works better for things that are not reported as security issues, where speed and accuracy matter more than form. The kernel maintainers did the right thing by crediting the person as 'Reported-by'. Your 'Suggested-by' would make things a bit better, but it clearly isn't what the OP is looking for: they want to be labelled a 'kernel contributor' based on a minuscule patch.
They should be listed as a kernel contributor based on a minuscule patch.
Lines of code modified is a notoriously bad signal for estimating the significance of software engineering contributions. And at the end of the day, credit is damn near free to give out and volunteer projects ought to let it run like water.
That's true, but in this case the contribution really is minuscule. If you look at the actual exchange between the kernel maintainer and the OP, it is at a minimum misrepresented in the blog post, and on top of that the kernel maintainer put in a bunch of work as well, including code changes. The OP makes it seem as though something major is at stake here, and I just don't see it: it's a four-line bug fix for a very old issue on a non-mainstream platform. That doesn't get your name posted next to Torvalds and Cox. I do support the 'Suggested-by' tag, though; that would be a nice middle ground.
Finally: if you want 'Kernel contributor' on your CV then the last thing you want to do is to mail security patches to that particular mailing list, especially ones that still need work.
> it's a four line bug fix for a very old issue on a non-mainstream platform
This thinking is what commercial software market dominance is made of. People unwilling to make the software move from the 99% case to the 100% case because those with the authority to hand out credit can't even be bothered to do that. Meanwhile, corporations just pay their people for scutwork, including the unsexy kind like making the software work correctly on a corner case architecture, and while credit isn't given, money is.
If anything, credit should be given even more freely for fixing old problems. "How the hell is this bug 6 years old and still here" is a common criticism of open source software.
It's hardly putting somebody's name "next to" Torvalds to note that they isolated a buffer overrun and contributed a correction for it.
I'm all for the 'Suggested-by' tag, but still note that the maintainer did not act in any way differently from how maintainers of the Linux kernel have acted for pretty much as long as the kernel has existed. Security holes get plugged, credit is secondary to that, and drama over that credit shows a deep misunderstanding of how the Linux kernel has historically dealt with drive-by patches, especially small ones.
Well, they're not going to send in any more work that requires this standard of debugging, so the kernel will remain insecure in those ways. That isn't great, but perhaps we'll each just have our own preferred kernel flavors like how a bunch of us would use Con Kolivas's alternate scheduler back in the day.
And that way, with forks present, attribution is required for copyright and hence for copyleft.
3 days ago, I installed Haiku on bare metal: an old PC from ~2004. I was not aware that a new version was planned at that time, but the upgrade was completely smooth.
My idea when I installed Haiku was to make my own version of the "old computer challenge"[1], with an emphasis on using GUI apps.
Similarly to @probono (a FOSS dev), I also found Haiku "shockingly good"[2] at being a lightweight, responsive, easy-to-use desktop OS.
After some patching, I was even able to compile Tectonic[3], a modern LaTeX engine written in Rust, and Quaternion, a Matrix client supporting E2EE[4]. All that running on a single-core Athlon 64 with 1.5GB of RAM.
I posted some screenshots in a Mastodon thread if you are curious[5] (but my posts are in French, sorry :/). And of course this comment is posted from Haiku!
Around 2001, I ran the contemporary BeOS demo on a Pentium MMX 200 MHz machine with 32 MB of RAM. Even with those limitations, the thing screamed. I believe it was a live CD you downloaded and burned.
I am absolutely not surprised it works well on Athlon 64.
Ahh, the memories! (266 MHz PII, 64 MB RAM ... maybe upgraded to 384 MB RAM by the time I was quad-booting Debian, Win2k, BeOS and QNX)
Maybe I ran BeOS slightly before a demo CD was available, or maybe I just didn't risk burning a coaster. (Remember those days where you had to worry about your OS not being able to feed the CD burner as fast as it was writing?) When I demoed BeOS around 2000, it was on a floppy (I repurposed a free AoL floppy from a few years earlier... by that time AoL was mailing free CDs instead of free floppies). The demo floppy allowed one to format a BeFS partition on the drive, and I think even put the kernel on the drive, but kept the bootloader on the floppy to encourage purchase.
I woke up one morning to see the floppy drive light on, and apparently a BeOS kernel or userspace driver bug caused it to spin the floppy continuously all night without moving the read/write head. I popped out the floppy and pulled back the dust guard to discover a thin stripe where the magnetic media had been polished off of the floppy. The drive didn't read any floppy correctly after that; presumably the read/write head was covered in magnetic media dust.
I don't remember how, but I eventually found instructions for copying the bootloader off of the downloaded floppy image and getting GRUB to find it, so I didn't need to put my replacement floppy drive at risk.
I remember having the exact same experience on slightly less powerful hardware with that BeOS demo. I remember throwing everything at it and it just kept on going like it was no big deal and me constantly going "wow, wow, wow" haha! It was such a bummer going back to Windows after experiencing that.
A Pentium 75 MHz was enough for the BeOS demo. It was almost like using QNX. I believe I tried BeOS on some 486es too, but even if I didn't, it at least screamed, and burned, as you said, even on a Pentium 75 MHz. The only limitation of the 'demo' was that usable space was locked to something like 512MB of user space, if I'm not wrong. Please do correct me on this.
BeOS never (officially) supported 486-class processors; I can't recall whether it actually uses Pentium instructions and won't run at all, or whether it's just super slow on a 486. I think it is actually compiled for Pentium.
> 3 days ago, I installed Haiku on bare metal: an old PC from ~2004. I was not aware that a new version was planned at that time, but the upgrade was completely smooth.
If there is one thing to say about Haiku, their slow and steady approach has resulted in a remarkably solid Kernel and base system. It is extremely light and has a well-built and consistent environment. I've always hoped more engineers would hop on the bandwagon to accelerate development, but what the team has achieved is notable in comparison to other alternative/"hobby" OSes.
It also runs really well on old netbooks - it's revitalised my Asus EEE 701 4G (even though the screen resolution is below the official minimum requirement), it fits comfortably on the internal 4GB SSD, and even the wifi works!
I had a single core Athlon 64 that I upgraded to a dual core back in the day. That was my primary PC until I got a Ryzen 2400G several years ago. All are really great CPUs for many years after they're made. Next up might be a Zen 5 APU. I'm on a slow upgrade cycle...
I believe Linux should be even faster, right? It probably only lacks a lightweight and responsive DE, and a distro with sane defaults, e.g. without gazillions of random processes running at startup. But compilation, or anything compute-heavy, should be faster under Linux?
Linux is a huge OS by the standards of BeOS and Haiku, with an early-1970s design and layers and layers of legacy cruft between the kernel and the user.
The kernel is not huge though. Even a modern Linux kernel runs on really, really resource limited hardware ( eg. embedded ). As said above, it is all the other crap that takes up memory and slows it down ( and makes it useful of course ).
It is not the Linux kernel that makes the Linux Desktop so much heavier than Haiku though.
Saying that, the first machine I tried (and failed) to install Slackware 1 on was I think a 486 with 8MB of RAM, and I am not sure 21st century Linux will fit on that...
I switch between Haiku and Q4OS on the same netbook, and they are both very responsive. The Linux distro does indeed have some performance advantages. However I haven't tried beta4 yet.
It is possible that musl-based distros such as Alpine could somehow compete, as they have a much smaller code footprint to execute, but "normal" glibc ones would hardly match Haiku's speed. That doesn't necessarily make Linux inferior; it's just the price to pay for decades of development by thousands of developers, and for being portable to a huge number of platforms. The upside is that we (Linux users) have a lot more software and supported hardware than Haiku, as of today.
There's a built-in NFSv4 client, but I think it may have fallen a bit behind NFSv4's evolution; I recall hearing you had to turn some feature off in order to get it to connect to a standard exported volume from Linux.
SMB is supported by fusesmb, which is available as a package.
Just a note to say that if you are using Matrix and want your conversations to be indexed by search engines ("Google-searchable"), you can deploy matrix-static[1] or you can use the live instance hosted by the Matrix foundation[2].
I think an interesting comparison, for both Linen and Matrix, would be between these two approaches: Linen's natively indexed conversations and this Matrix "static" client. I would be especially interested in what additional features Linen provides in terms of indexing compared to this static client.
(heads up that we're about to replace matrix-static, which powers view.matrix.org, with https://github.com/matrix-org/matrix-public-archive - which is a way better public archive interface for Matrix, built on Hydrogen)
Hi, I would like to mention that some work on a Rust SMTP server has already been done in the Kannader project[1] (disclaimer: I have not contributed to it, but I know the maintainer).
I also work on a Rust IMAP server that is far from being as feature complete as yours. I also chose your `mail-parser` library to parse RFC822/5822, but we observed that in many cases, we did not have enough information to build some BODY/BODYSTRUCTURE responses. We also discovered that line count and many details are not very obvious on IMAP, did you run some tests to compare your IMAP server outputs to existing servers? Or, more generally, what is your approach to ensure compatibility / integration with the existing email ecosystem?
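For what it's worth, here is a minimal Python sketch of the kind of cross-server comparison I have in mind: fetching the same message's BODYSTRUCTURE from two servers and diffing the normalized responses (hostnames and credentials are placeholders, and whitespace normalization is a simplifying assumption - real responses can also differ in legitimate ways, such as optional extension data):

```python
import imaplib

def fetch_bodystructure(host, user, password, mailbox="INBOX", msg="1"):
    """Fetch the BODYSTRUCTURE of one message from an IMAP server."""
    conn = imaplib.IMAP4_SSL(host)
    try:
        conn.login(user, password)
        conn.select(mailbox, readonly=True)
        _status, data = conn.fetch(msg, "(BODYSTRUCTURE)")
        return data[0]
    finally:
        conn.logout()

def normalize(raw):
    """Collapse whitespace so responses from different servers compare cleanly."""
    if isinstance(raw, bytes):
        raw = raw.decode("utf-8", errors="replace")
    return " ".join(raw.split())

# Usage (placeholder hosts): compare our server against a reference server
# such as a stock Dovecot installation serving the same message.
# ours = fetch_bodystructure("imap.ours.example", "user", "secret")
# ref  = fetch_bodystructure("imap.reference.example", "user", "secret")
# assert normalize(ours) == normalize(ref)
```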
In any case, congratulations on your project - we will follow it closely! I have experienced first-hand how big these protocols have become with all their extensions; this is impressive work!
You own the servers. This is a tool to build your own object-storage cluster. For example, you can get 3 old desktop PCs, install Linux on them, download and launch Garage on them, configure your 3 instances in a single cluster, then send data to this cluster. Your data will be spread and duplicated on the 3 machines. If one machine fails or is offline, you can still access and write data to the cluster.
Then, how does Garage achieve 'different geographical locations'? I only have my house to put my server(s). That's one of the main reasons I'm using cloud storage. Or is the point that I can arrange those servers abroad myself, independent of the software solution (S3 etc)?
Garage is designed for self-hosting by collectives of system administrators, what we could call "inter-hosting". Basically you ask your friends to put a server box at their home and achieve redundancy that way.
The content is currently stored in plaintext on the disk by Garage, so you have to encrypt the data yourself. For example, you can configure your server to encrypt at rest the partition that contains your `data_dir` and your `meta_dir` or build/use applications that supports client-side encryption such as rclone with its crypt module[0] or Nextcloud with its end-to-end encryption module[1].
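As an illustration, a client-side-encrypted setup with rclone's crypt module could look roughly like this (the endpoint, bucket name and keys are placeholders to adapt to your cluster, and the password must be the obscured form produced by `rclone config` or `rclone obscure`, not the plaintext):

```ini
# ~/.config/rclone/rclone.conf (illustrative values)
[garage]
type = s3
provider = Other
endpoint = http://localhost:3900
access_key_id = <your Garage access key>
secret_access_key = <your Garage secret key>

[garage-crypt]
type = crypt
remote = garage:my-bucket
password = <obscured password from `rclone obscure`>
```

With this in place, something like `rclone copy somedir garage-crypt:` uploads encrypted objects, so Garage only ever sees ciphertext.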
Our software is published under the AGPLv3 license and comes with no guarantee, like any other FOSS project (if you do not pay for support). We consider our software to be of "public beta" quality, meaning we think it works well, at least for us.
On the plus side, it survived the Hacker News hug of death. Indeed, the website we linked is hosted on our own Garage cluster, made of old Lenovo ThinkCentre M83 machines (with an Intel Pentium G3420 and 8GB of RAM each), and the cluster seems fine. We also host more than 100k objects in our Matrix (a chat service) bucket.
On the minus side, this is the first time we have had so much coverage, so our software has not yet been tested by thousands of people. It is possible that, in the near future, some edge cases we never triggered will be reported. This is the reason most people wait until an application reaches a certain level of adoption before using it; in other words, they don't want to pay "the early adopter cost".
Another benefit compared to MinIO is that we have "flexible topologies".
Due to our design choices, you can add and remove nodes without any constraint on the number of nodes or the size of their storage. So you do not have to overprovision your cluster, as recommended by MinIO[0].
Additionally - and we have planned a full blog post on this subject - adding or removing a node in the cluster does not lead to a full rebalance of the cluster.
To understand why, I must explain how it works traditionally and how we improved on existing work.
When you initialize the cluster, we split it into partitions, then assign partitions to nodes (see Maglev[1]). Later, based on its hash, each piece of data is stored in its corresponding partition. When a node is added or removed, traditional approaches rerun the whole algorithm and come up with a totally different partition assignment. Instead, we try to compute a new partition distribution that minimizes assignment changes, which in the end minimizes the number of partitions moved.
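Garage's actual layout algorithm is more involved (it also balances load across nodes and zones), but the minimal-movement idea can be sketched with rendezvous (highest-random-weight) hashing - not Garage's exact scheme, just an illustration with the same property: adding a node only moves the partitions that the new node wins, nothing is shuffled between the existing nodes.

```python
import hashlib

def owner(partition, nodes):
    """Rendezvous hashing: a partition belongs to the node with the highest
    hash score. Scores are independent per (partition, node) pair, so adding
    or removing one node only reassigns the partitions it wins or loses."""
    def score(node):
        digest = hashlib.sha256(f"{partition}:{node}".encode()).hexdigest()
        return int(digest, 16)
    return max(sorted(nodes), key=score)

# Assignment of 256 partitions before and after adding node "n4".
before = {p: owner(p, ["n1", "n2", "n3"]) for p in range(256)}
after = {p: owner(p, ["n1", "n2", "n3", "n4"]) for p in range(256)}

# Every moved partition went to the new node; the expected movement is
# about 1/4 of the partitions, instead of a full reshuffle.
moved = [p for p in range(256) if before[p] != after[p]]
```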
On the drawback side, Garage does not implement erasure coding (which is also the cause of many of MinIO's limitations) and duplicates data 3 times, which is less efficient. Garage also implements fewer S3 endpoints than MinIO (for example, we do not support versioning); the full list is available in our documentation[2].