Arch wiki is far better than most man pages. I've referred to Arch for my own non-Arch systems and when building Yocto systems. Most Arch info applies.
In the ancient days I used TLDP to learn about Linux stuff. Arch wiki is now the best doc. The actual shipped documentation on most Linux stuff is usually terrible.
GNU coreutils have man pages that are correct and at least list all the flags, but they suffer from GNU jargonisms and usually lack any concise overview or examples section. Most man pages are a very short description of what the program does plus an alphabetic list of flags. For something as versatile and important as dd, the description reads only "Copy a file, converting and formatting according to the operands", and not even one example of a full dd command is given. Yes, you can figure it out from the man page, but it's like an 80s reference manual, not good documentation.
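To illustrate what's missing: writing an image to a USB stick is usually a one-liner along these lines (a minimal sketch of my own; the image name and the /dev/sdX device path are placeholders, not real values):

    dd if=disk.img of=/dev/sdX bs=4M status=progress conv=fsync

That's exactly the kind of example the dd man page could put right under the description.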
man pages for util-linux are my go-to example for bad documentation. Dense, require a lot of implicit knowledge of concepts, make references to 90s or 80s technology that are now neither relevant nor understandable to most users.
Plenty of other projects have typical documentation written by engineers for other engineers who already know the subject. man pipewire leaves you completely in the dark as to what the thing even does.
Credit to systemd, that documentation is actually comprehensive and useful.
Proton is amazing, and it's really three different subprojects, each of which deserves a lot of credit.
First is Wine itself, with its implementation of Win32 APIs. I ran some games through Wine even twenty years ago but it was certainly not always possible, and usually not even easy.
Second is DXVK, which fills the main gap in Wine, namely Direct3D compatibility. Wine has long had its own implementation of the D3D libraries, but it was not as performant and, more importantly, it was never quite complete. You'd run into all sorts of problems because the Wine implementation differed from the native Windows D3D, and that was enough to break many games. DXVK translates D3D calls to Vulkan with excellent performance and basically solves the problem of D3D on Linux.
Then there are the parts original to Proton itself. It applies targeted, high-quality patches to Wine and DXVK to improve game compatibility, brings in a few other modules, and most importantly glues it all together so it works seamlessly and with excellent UX. From the first release of Proton until recently, running Windows games through Steam took just a couple of extra clicks to enable Proton for that game. And now even that isn't necessary: Proton is enabled by default, so you run a game just by downloading and launching it, the same exact process as on Windows.
Yes. The unique point of ReactOS is driver compatibility. Wine is pretty great for Win32 API, Proton completes it with excellent D3D support through DXVK, and with these projects a lot of Windows userspace can run fine on Linux. Wine doesn't do anything for driver compatibility, which is where ReactOS was supposed to fill in, running any driver written for Windows 2000 or XP.
But by now, as I also wrote in the other thread on this, ReactOS should be seen as something more like GNU Hurd. An exercise in kernel development and reverse engineering, a project that clearly requires a high level of technical skill, but long past the window of opportunity for actual adoption. If Hurd had been usable by say 1995, when Linux just got started on portability, it would have had a chance. If ReactOS had been usable ten years ago, it would also have had a chance at adoption, but now it's firmly in the "purely for engineering" space.
"ReactOS should be seen as something more like GNU Hurd. An exercise in kernel development and reverse engineering, a project that clearly requires a high level of technical skill, but long past the window of opportunity for actual adoption."
I understand your angle, or rather the attempt to fit them into the same picture somehow. However, the differences between them far surpass the similarities. There was no meaningful user base for Unix/Hurd to speak of, compared to the NT kernel. There's no real basis to assert the "kernel development" argument for both, as one was indeed a research project whereas the other is a clean-room engineering march toward replicating an existing kernel. What ReactOS needs to succeed is to become more stable and complete (on the whole, not just the kernel). Once it can do that, covering the later Windows capabilities will be just a nice-to-have. Considering all the criticism that the current version of Windows receives, switching to a stable and functional ReactOS, at least for individual use, becomes a no-brainer. Comparatively, there's nothing similar the Hurd kernel can do to get to where Linux is now.
Hurd was not a research project initially. It was a project to develop an actual, usable kernel for the GNU system, and it was supposed to be a free, copyleft replacement for the Unix kernel. ReactOS was similarly a project to make a usable and useful NT-compatible kernel, also as a free and copyleft replacement.
The key difference is that Hurd was not beholden to a particular architecture, it was free to do most things its own way as long as POSIX compatibility was achieved. ReactOS is more rigid in that it aims for compatibility with the NT implementation, including bugs, quirks and all, instead of a standard.
Both are long irrelevant to their original goals. Hurd because Linux is the dominant free Unix-like kernel (with the BSD kernel a distant second), ReactOS because the kernel it targets became a retrocomputing thing before ReactOS could reach a beta stage. And in the case of ReactOS, the secondary "whole system" goal is also irrelevant now because dozens of modern Linux distributions provide a better desktop experience than Windows 2000. Hell, Haiku is a better desktop experience.
"And in the case of ReactOS, the secondary «whole system» goal is also irrelevant now because dozens of modern Linux distributions provide a better desktop experience than Windows 2000. Hell, Haiku is a better desktop experience."
Yet there are still too many desktop users who, despite the wishful thinking or the blaming, still haven't switched to either Linux or Haiku. No matter how good Haiku or Linux distributions are, their incompatibility with existing Windows software simply disqualifies them as options for those desktop users. I bet we'll see people switching to ReactOS once it gets just stable enough, long before it gets as polished as either Haiku or any given quality Linux distribution.
No, people will never be switching to ReactOS. For some of the same reasons they don't switch to Linux, but stronger.
ReactOS aims to be a system that runs Windows software and looks like Windows. But, it runs software that's compatible with WinXP (because they target the 5.1 kernel) and it looks like Windows 2000 because that's the look they're trying to recreate. Plenty of modern software people want to run doesn't run on XP. Steam doesn't run on XP. A perfectly working ReactOS would already be incompatible with what current Windows users expect.
UI wise there is the same issue. Someone used to Windows 10 or 11 would find a transition to Windows 2000 more jarring than to say Linux Mint. ReactOS is no longer a "get the UI you know" proposition, it's now "get the UI of a system from twenty five years ago, if you even used it then".
"UI wise there is the same issue. Someone used to Windows 10 or 11 would find a transition to Windows 2000 more jarring than to say Linux Mint. ReactOS is no longer a «get the UI you know» proposition, it's now «get the UI of a system from twenty five years ago, if you even used it then»." "A perfectly working ReactOS would already be incompatible with what current Windows users expect."
That look and feel is the easy part; it can be addressed if it's really an issue. The hard parts are compatibility (there are still many missing pieces) and stability (the still-defective parts). The targeted kernel matters, of course, but that is not set in stone. In fact, Windows Vista+ functionality has already been added and written about here: https://reactos.org/blogs/investigating-wddm - although doing it properly would mean rewriting the kernel and bumping it to NT version 6.0.
I'm sure there will indeed be many users who will find various ReactOS aspects jarring for as long as there are still defects, a lack of polish, or dysfunction at the application and kernel (driver) level. However, considering the vast pool of Windows desktop users, it's reasonable to expect ReactOS to cover the limited needs of enough users at some point, which should draw attention and turn it into testing, polish, and funding to address anything still lacking, which should then further feed the adoption and improvement loop.
"No, people will never be switching to ReactOS. For some of the same reasons they don't switch to Linux, but stronger."
To me, this makes sense maybe for the corporate world. The reasons that made them stick with Windows have less to do with familiarity or application compatibility (given that a lot of corporate infrastructure is in web applications). Yes, there must be something else that governs corporate decisions, something to do with the way corporations function, and that will most likely prevent a switch to ReactOS just as it did with Linux-based distributions. But this is exactly why I intentionally specified "for individual use" when I said "switching to a stable and functional ReactOS, at least for individual use, becomes a no-brainer". For individual use, the reason that prevented people from switching to Linux is well known, and ReactOS's reason to exist was aimed exactly at that.
> There was no meaningful user base for Unix/Hurd to speak of, compared to the NT kernel.
Sure, but that user base also already has a way of using the NT kernel: Windows. The point is that both Hurd and ReactOS are trying to solve an interesting technical problem but lack any real reason to be used over the alternatives, which already solve enough of the practical problems for most users.
While I think better Linux integration and improving WINE is probably better time spent... I do think there's some opportunity for ReactOS, but I feel it would have to at LEAST get to pretty complete Windows 7 compatibility (without bug fixes since)... that seems to be the last Windows version most people remember relatively fondly, and a point before they really split-brained a lot of the configuration and settings.
With the contempt for a lot of the Win10/11 features, there's some chance it could see adoption, if that's an actual goal. But the effort is huge, and it would need to be sufficient for wide desktop installs sooner rather than later.
I think a couple of the Linux + WINE UI options, where the underlying OS is Linux and Wine is the UI/desktop layer on top (not too dissimilar from DOS/Win9x), might also gain some traction... not to mention distros that smooth out the use of WINE for new users.
Worth mentioning a lot of WINE is reused in ReactOS, so that effort is still useful and not fully duplicated.
> I do think there's some opportunity for ReactOS, but I feel it would have to at LEAST get to pretty complete Windows 7 compatibility
That's not going to happen in any way that matters. If ReactOS ever reaches Win7 compatibility, that would be at a time when Win7 is long forgotten.
The project has had a target of Windows 2000 compatibility, later changed to XP (which is a relatively minor upgrade kernel-wise). Now, as of 2026, ReactOS has limited USB 2.0 support and wholly lacks critical XP-level support like Wi-Fi, NTFS, or multicore CPUs. Development on the project has never been fast, but somewhere around 2018 it dropped even more; just looking at the commit history, there's now half the activity of a decade ago. So at current rates, it's another 5+ years away from beta-level support of NT 5.0.
ReactOS actually reaching decent Win2K/XP compatibility is a long shot but still possible. Upgrading to Win7 compatibility before Win7 itself is three plus decades old, no.
Maybe posts like this will move the needle. If I could withstand OS programming (or debugging, or...) I'd probably work on ReactOS. I did self-host it, which I didn't expect to work, so at least I know the toolchain works!
Basically, if you do the math, it means a whole generation got tired of being on the project and moved on to something else, and there is no new blood to make up for that.
The history of most FOSS projects after they've been running for a while.
ReactOS has been very slow to develop, and probably missed the point where it could make an impact. It's still mostly impossible to run on real hardware, and their beta goal (version 0.5 which supports USB, wifi and is at least minimally useful on supported hardware) is still years away. But I never had the impression that gaming was a particularly important focus of the project.
ReactOS is mostly about the reimplementation of an older NT kernel, with a focus on driver compatibility. Their ultimate goal is to be a drop-in replacement for Windows XP such that any driver written for XP would work. That's much more relevant to industrial applications where some device is controlled by an ancient computer because the vendor originally provided drivers for NT 5.0 or 5.1 which don't work on anything modern.
> But I never had the impression that gaming was a particularly important focus of the project.
> ReactOS is mostly about the reimplementation of an older NT kernel, with a focus on driver compatibility. Their ultimate goal is to be a drop-in replacement for Windows XP such that any driver written for XP would work. That's much more relevant to industrial applications where some device is controlled by an ancient computer because the vendor originally provided drivers for NT 5.0 or 5.1 which don't work on anything modern.
Fifteen years ago, they could have focused on both the industrial and consumer use cases. There were a lot of people who really didn't want to leave Windows XP in 2010-11, even just for their personal use.
Admittedly, FLOSS wasn't nearly as big of a thing back then as it is now. A larger share of GNU/Linux and BSD installs were on servers at the time, so it was a community mainly focused on commercial and industrial applications. Maybe that's what drove their focus.
It functionally is a project from fifteen to twenty years ago. Development activity was somewhat slow but steady, and it largely fizzled out around 2018, I think. The project tried to get political and financial support from the Russian government but failed to secure it, Aleksey Bragin transitioned to working in the crypto space, and of course with every year the number of potential users dependent on Windows 2000/XP is decreasing.
I think by now ReactOS is best viewed as an enthusiast research / challenge project with no practical use, like GNU Hurd. Just as Hurd is interesting in terms of how kernels can be done, but isn't a viable candidate for practical use, ReactOS is now in the same category. Very interesting as an exercise in reimplementing NT from scratch using clean room techniques but no longer a system that has a shot at gaining any adoption.
> That's much more relevant to industrial applications where some device is controlled by an ancient computer because the vendor originally provided drivers for NT 5.0 or 5.1 which don't work on anything modern.
In most of those applications, you just leave the computer be and don't touch it. In some cases (especially medical devices) you may not even be allowed to touch it for legal/compliance reasons. If the hardware dies, you most likely find the exact same machine (or something equivalent) and run the same OS - there are many scenarios where replacing the computer with something modern is not viable (lack of the correct I/O interfaces, computer is too fast, etc.)
If there were software bugs which could impact operations, they probably would have arisen during the first few years when there was a support contract. As for security issues - you lock down access and disconnect from any network with public internet access.
All that assumes that ReactOS is a perfect drop-in replacement for whatever version of Windows you are replacing, and that is probably not a good assumption.
In my experience, things like ReactOS would have been more useful in parts of the world with, let's say, a less thorough approach to compliance.
A factory has a CNC machine delivered fifteen years ago that's been run by the same computer all along. The computer eventually gives up the ghost, the original IT guy who got the vendor's drivers and installed them on that computer with an FCKGW copy of WinXP is long gone. Asking the current IT guy, the easiest solution (in a hypothetical timeline where a usable ReactOS exists) is to take the cheapest computer available, install ReactOS, throw in drivers from the original vendor CD at the bottom of some shelf and call it a day.
We might have to agree to disagree here, but I think the scenario where the IT guy uses XP and "finds" a license for it is the approach I would take if I were put in this situation. If the vendor of the CNC machine certified/tested their machine against Windows XP and does not offer any support for newer operating systems, I would be very reluctant to use anything else - whether it's another version of Windows that could accept the same drivers, or an open source clone. Again, I'm assuming that ReactOS manages to be a perfect clone, which may or may not be the case in practice.
Kryptos K4 seems to me like a potential candidate for AI systems to solve if they're capable of actual innovation. So far I find LLMs to be useful tools if carefully guided, but more like an IDE's refactoring feature on steroids than an actual thinking system.
LLMs know (as in, have training data on) everything about Kryptos. The first three messages, how they were solved including the failed attempts, years of Usenet / forum messages and papers about K4, the official clues; they know about the World Clock in Berlin, including things published in German; and they can certainly write Python scripts that would replicate any viable pen-and-paper technique in milliseconds, and so on.
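For instance, here's a rough sketch (my own illustration, not anything from the Kryptos literature) of the kind of script an LLM can trivially produce: a Vigenere decryption over a KRYPTOS-keyed alphabet, roughly the flavor of technique used on the earlier sections, tried against a few candidate keys pulled from the published clues and earlier solutions. The candidate keys and the ciphertext placeholder are assumptions for illustration only.

    from string import ascii_uppercase

    def keyed_alphabet(keyword: str) -> str:
        # Alphabet starting with the keyword's unique letters, then the rest of A-Z.
        seen = []
        for c in keyword.upper() + ascii_uppercase:
            if c not in seen:
                seen.append(c)
        return "".join(seen)

    def vigenere_decrypt(ciphertext: str, key: str, alphabet: str) -> str:
        out, ki = [], 0
        for c in ciphertext:
            if c not in alphabet:  # pass through anything outside the alphabet
                out.append(c)
                continue
            shift = alphabet.index(key[ki % len(key)])
            out.append(alphabet[(alphabet.index(c) - shift) % len(alphabet)])
            ki += 1
        return "".join(out)

    alpha = keyed_alphabet("KRYPTOS")
    k4 = "OBKR..."  # paste the full 97-character K4 ciphertext here
    # Candidate keys are illustrative guesses drawn from the clues and earlier keys.
    for key in ["BERLIN", "CLOCK", "PALIMPSEST", "ABSCISSA"]:
        print(key, vigenere_decrypt(k4, key, alpha))

The hard part was never writing this kind of code; it's coming up with a genuinely new idea about what transformation K4 actually uses.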
Yet as far as I know (though I don't actively follow K4 work), LLMs haven't produced any ideas or code useful to solving K4, let alone a solution.
Yeah, you would suspect that the individual elements of solving K4 exist in some LLM, but so far the LLM slop answers are just very confident and very wrong.
My biggest complaint is that the users aren’t skeptical. They don’t even ask the LLM to verify if the answer it just generated matches the known hints from the puzzle artist. Beyond that, they don’t ask it to verify whether the decryption method actually yields the plaintext it confidently spit out.
I’m super impressed with Claude Code, though. For my use case, planning and building iOS app prototypes, it is amazing.
Technically there has only been one fatal accident in space: the Soyuz 11 failure, which killed its crew of three. That occurred above the Kármán line; all other spaceflight-related fatalities were at much lower altitudes or on the ground.
Surely AGI would be matching humans on most tasks. To me, surpassing humans on all cognitive tasks sounds like superintelligence, while AGI "only" needs to perform most, but not necessarily all, cognitive tasks at the level of a human highly capable at that task.
Personally I could accept "most" provided that the failures were near misses as opposed to total face plants. I also wouldn't include "incompatible" tasks in the metric at all (but using that to game the metric can't be permitted either). For example the typical human only has so much working memory, so tasks which overwhelm that aren't "failed" so much as "incompatible". I'm not sure exactly what that looks like for ML but I expect the category will exist. A task that utilizes adversarial inputs might be an example of such.
Thanks, I'll see about an on-page zoom. On my 1440p display the whole table fits even with the side drawer open, and with my webdev inexperience I didn't even think of zoom controls other than the browser's as an option.
I love your slide puzzle too. Very cool with different hint levels, where you can have just the element symbol or the full name as well. Surely trivial for chemists but not so for me.
Thanks! My brother is a chemical engineer so he's about the only one who can come close to solving the puzzles. :)
Also really like how the Timeline fades out the elements to filter by year.
Hmmm... I wonder if it's a HiDPI scaling thing? I tried the site on a couple of browsers (Safari, Chromium) and even on a 4K monitor it only fits Hydrogen to Nitrogen.
StackOverflow was great because it's not like a support forum or a mailing list. It's more like a repository of knowledge. It's been extremely helpful to me when arriving from Google, and I've gotten a couple of useful responses to my own questions as well. Awesome resource.
Where SO started failing in my opinion is when the "no duplicate questions" rule started to be interpreted as "it's a duplicate if the same or very similar question has ever been answered on the site". That caused too many questions to have outdated answers as the tech changes, best practices change and so on. C# questions have answers that were current for .NET Core 1.0 and should be modified. I have little webdev experience but I know JS has changed rapidly and significantly, so 2012 answers to JS questions are likely not good now.
This echoes my own experience. The very few times I attempted to post a question it was later flagged as duplicate, pointing to some other question which matched the keywords but didn't at all match the actual use case or problem. I don't know if this was the result of an automated process or zealous users, but it certainly put me off ever trying to engage with the community there.
> Where SO started failing in my opinion is when the "no duplicate questions" rule started to be interpreted as "it's a duplicate if the same or very similar question has ever been answered on the site".
What else could it mean? The entire point is that if you search for the question, you should always find the best version of that question. That only works by identifying it and routing all the others there.
> That caused too many questions to have outdated answers as the tech changes
You are, generally, supposed to put the new answer on the old question. (And make sure the question isn't written in a way that excludes new approaches. Limitations to use a specific library are generally not useful in the long term.)
Of course, working with some libraries and frameworks is practically like working in a different language; those get their own tags, and a question about doing it without that framework is considered distinct as long as everyone is doing their jobs properly. The meta site exists so that that kind of thing can be hashed out and agreed upon.
> C# questions have answers that were current for .NET Core 1.0 and should be modified.
No; they should be supplemented. The old answers didn't become wrong as long as the system is backwards-compatible.
The problem is mainly technical: Stack Overflow lacked a system to deprecate old answers, and for far too long the preferred sort was purely based on score. But this also roped in a social problem: high scores attract more upvotes naturally, and most users are heavily biased against downvoting anything. In short, Reddit effects.
> You are, generally, supposed to put the new answer on the old question
If you're asking the question, you don't know the new answer.
If you're not asking the question, you don't know the answer needs updating as it is 15 years old and has an accepted answer, and you didn't see the new question as it was marked as a dupe.
Even if you add the updated answer, it will have no votes and so has a difficult battle to be noticed with the accepted answer, and all the other answers that have gathered votes over the years.
I remain somewhat skeptical of LLM utility given my experience using them, but an LLM capable of validating my ideas OR telling me I have no clue, in a manner I could trust, is one of those features I'd really like and would happily use a paid plan for.
I have various ideas. From small scale stuff (how to refactor a module I'm working on) to large scale (would it be possible to do this thing, in a field I only have a basic understanding of). I'd love talking to an LLM that has expert level knowledge and can support me like current LLMs tend to ("good thinking, this idea works because...") but also offer blunt critical assessment when I'm wrong (ideally like "no, this would not work because you fundamentally misunderstand X, and even if step 1 worked here, the subsequent problem Y applies").
LLMs seem very eager to latch onto anything you suggest is a good idea, even if subtly implied in the prompt, and the threshold for how bad an idea has to be for the LLM to push back is quite high.
Have you tried actually asking for a detailed critique with a breakdown of the reasoning and pushback on unrealistic expectations? I've done that a few times for projects and got just what you're after as a response. The pushback worked just fine.
I have something like that in my system prompt. While it improves the responses, the model is still a psychopathic sycophant. It's really hard to balance between it going way too hard in the wrong direction and being overly nice.
The latter can be really subtle too. If you're asking things you don't already know the answer to it's really difficult to determine if it's placating you. They're not optimized for responding with objective truth, they're optimized for human preference. It always takes the easiest path and it's easy for a sycophant to not look like a sycophant.
I mean literally the whole premise of you asking it not to engage in sycophancy is it being sycophantic. Sycophancy is their nature.
> I mean literally the whole premise of you asking it not to engage in sycophancy is it being sycophantic.
That's so meta it applies to everything though. You go to a business advisor to get business advice - are they being sycophantic because you expect them to do their work? You go to a gym trainer to push you with specific exercise routine - are they being sycophantic because you asked for help with exercise?
It's ultimately a trust issue and understanding motivations.
If I am talking to a salesperson, I understand their motivation is to sell me the product. I assume they know the product reasonably well, but I also assume they have no interest in helping me find a good product. They want me to buy their product specifically and will not recommend a competitor. With any other professional, I also understand the likely motivations and how they should factor into my trust.
For more developed personal relationships of course there are people I know and trust. There are people I trust to have my best interests at heart. There are people I trust to be honest with me, to say unpleasant things if needed. This is also a gradient, someone I trust to give honest feedback on my code may not be the same person I trust to be honest about my personal qualities.
With LLMs, the issue is I don't understand how they work. Some people say nobody understands LLMs, but I certainly know I don't understand them in detail. The understanding I have isn't nearly enough for me to trust LLM responses to nontrivial questions.
Fair... but I think you're also overgeneralizing.
Think about how these models are trained. They are initially trained as text-completion machines, right? Then, to turn them into chatbots, we optimize for human-preferred output, given that there is no mathematical metric for "output in the form of a conversation that's natural for humans".
The whole point of LLMs is to follow your instructions. That's how they're trained. An LLM will never laugh at your question, ignore it, or do anything else that humans may naturally do, unless it is explicitly trained for that response (e.g. safety[0]).
So that's where the generalization of the more meta comment breaks down. Humans learning to converse aren't optimizing for the preference of the person they're talking to. They don't just follow orders, and when people do, we call them things like robots or NPCs.
I go to a business advisor because of their expertise and because I have trust in them that they aren't going to butter me up. But if I go to buy a used car that salesman is going to try to get me. The way they do that may in fact be to make me think they aren't buttering me up.
Are they being sycophantic? Possibly. There are "yes men". But generally I'd say no. Sycophancy is on the extreme end, despite many of its features being common and normal. The LLM is trained to be a "yes man" and will always be a "yes man".
tldr:
Denpok from Silicon Valley is a sycophant and his sycophancy leads to him feigning non-sycophancy in this scene
https://www.youtube.com/watch?v=XAeEpbtHDPw
[0] This is also why jailbreaking is not that complicated. Safety mechanisms are more like patches, and they're in an unsteady equilibrium, because the models are explicitly trained to be sycophantic.