I switched completely from Windows to Linux more than 20 years ago, after a few years of dual-booting.
The moment when I could ditch Windows was when I got several video-related programs working on Linux, e.g. a DVD player and a program that could use my TV tuner. For all other applications I had already switched to Linux earlier. Those other applications included MS Office, which at that time I continued to use, but under CrossOver on Linux, where it worked much better than on the contemporaneous Windows XP (!!). The switch to Linux was not free as in beer, because I was using some programs that I had purchased, e.g. MS Office Professional and CrossOver (an improved version of Wine, guaranteed to work with certain commercial programs). I made the switch not to save money, but to be able to do things that are awkward or impossible on Windows.
I do all the things that you mention, and many others, on various Linux desktops and laptops. I do not doubt that there are Linux distributions where combining very different kinds of applications can be difficult. However, there certainly also exist distributions without such problems.
For instance, I use Gentoo Linux precisely because it allows extreme customization: I really can combine any kinds of applications with minimal problems, even in most cases where they stupidly insist on using dynamic libraries of a certain version, with each application wanting a different one.
As another example, I use XFCE as my desktop environment, because it provides only the strictly necessary functions and allows me to easily combine otherwise conflicting applications, e.g. Gnome applications with KDE applications.
XFCE is actually a great example of the problem with Linux
Its Wayland support is utterly broken right now and getting very little attention. The major distros are about to put X11 in the grave, and then XFCE will die (or more likely it'll live on in some weird offshoot distro).
That's not really an acceptable situation for a consumer product.
now obviously xfce is not one of the main DEs pushed by the distros, but its plight is a symptom of a couple of problems that plague linux
Compatibility is important. MSFT, for all their faults, puts a shitload of effort into making sure that even old ass software keeps working. They're not perfect (especially in the last few years) but they're miles ahead of linux here. As a user, I shouldn't ever have to know or care about wayland or pipewire or whatever other nonsense, but that's not the case. I have to know just so I can find software that works with my system.
X11 is not gone yet, XFCE is working on wayland support, and in addition to that, there are projects that work on allowing X11 window managers to run on wayland.
XFCE just has not updated.
If this is a "linux problem", then what about Android and iOS? It's a million times worse there, but somehow that's a perfectly good situation for a consumer product?
My parents, well over 80 years old, have been using Linux (more precisely Gentoo Linux) for many years, but they have no idea what "Linux" is.
Obviously, I installed all the software on their computers and have kept it up to date.
However, after that, they have just used the computers for reading and editing documents or e-mail messages, for browsing the Internet, and for watching movies or listening to music, much the same as they would have done with any other operating system. When they had a more unusual need, I had to find and install an appropriate program and teach them how to use it.
They had the advantage of having a "consultant" to solve any problem, but none of the problems that they have encountered were problems that they would not also encounter on Windows. Actually, on Linux, when you have a problem you can be pretty certain that someone competent can find a solution, in the worst case by reading the source code when no better documentation exists. On Windows, I have encountered far worse problems than on Linux, where whole IT support departments scratched their heads and could not understand what was happening, for weeks, and sometimes forever.
By far the main advantage of Windows over Linux in ease of use is that it comes preinstalled on most computers. I have installed Windows professionally, and it has frequently been far more difficult than installing Linux on the same hardware, but normal people are shielded from such experiences.
Most modern Linux distributions have one great advantage in ease of use over Windows: the software package manager. Whenever you need some application, you just search for an appropriate package and install it quickly and freely. Such package managers for free software existed decades before app stores (e.g. FreeBSD already had one more than 30 years ago), and they remain better than any app store, requiring neither an invasive account nor mandatory payments.
>> They had the advantage of having a "consultant" to solve any problem, but none of the problems that they have encountered were problems that they would not also encounter on Windows.
I drew a hard "no family tech support" line decades ago, and the difference then is that they can at least find a Windows tech-support consultant. What happens if an octogenarian phones Geek Squad and says they're running Variant <X> of Linux?
When you create LibreOffice documents and want to send them to others, who may not be LibreOffice users, the normal procedure is to export your documents as PDF files, which ensures that anyone can use them.
Less frequently, you may want to export your documents to MS formats, if you want them to be editable, but that is much less foolproof than exporting to PDF.
I have worked for many years in companies where almost everybody was using MS Office, while I preferred to use LibreOffice (nowadays Excel remains better than any alternative, but I actually prefer LibreOffice Writer to MS Word, because I think that the latter has regressed dramatically during the last two decades). Despite that, my coworkers were not even aware that I was using LibreOffice, as all the documentation I generated was in PDF format.
Product documentation in any serious company should be in PDF format anyway, not in word processor formats that cannot be used by anyone who does not have an appropriate editor or viewer. Even using MS Office is not a guarantee that you can use any MS Office document file, as I have seen cases where recent MS Office versions could not open some ancient MS Office files that could be opened by other tools, e.g. imported into LibreOffice.
PDF is THE choice for cross-platform presentation and printing, but a real PITA for collaboration; funnily enough, that is one of the places where the web version of Word is pretty decent. A lot of industries live in Word/Office, and "generate PDF" is a pretty small part of their workflow. Also remember that printing to PDF without an expensive purchase was not a thing for a long time; I've only stopped using the Win2PDF license I bought 25 years ago on my most recent computers!
Any font looks much better on a monitor with a higher resolution, and the size of the fonts should not vary with the resolution of the monitor. A 4k monitor always provides more legible text than a 2560x1440 monitor.
The size of the fonts used by your documents is specified in typographic points, e.g. 12 points or 14 points. This corresponds to a fixed physical size on the screen, regardless of the screen resolution. The increased resolution only makes the letters more beautiful, not smaller.
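To see the arithmetic concretely, here is a small sketch (the DPI values are merely illustrative):

    #include <stdio.h>

    /* A typographic point is 1/72 inch, so a glyph's height in pixels is
     * points / 72 * dpi, while its physical height stays the same. */
    int main(void) {
        const double points = 12.0;
        const double dpis[] = { 96.0, 163.0, 216.0 }; /* example densities */
        for (int i = 0; i < 3; i++)
            printf("12 pt at %3.0f DPI -> %4.1f px, always %.3f in tall\n",
                   dpis[i], points / 72.0 * dpis[i], points / 72.0);
        return 0;
    }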
If your fonts become smaller on a monitor with a higher resolution, then you are holding it wrong, i.e. your operating system is badly configured and does not know the correct dots-per-inch value for your monitor, so it uses a DPI value that corresponds to obsolete VGA monitors.
A decent operating system should configure the right DPI automatically, because the monitor reports its physical dimensions in its EDID data when it is initialized, from which the correct DPI follows.
Despite this, for some weird reason many operating systems do not use the information read from the monitor to configure the graphics interface automatically, so it must still be configured manually by the user. Even worse, the corresponding setting is frequently well hidden, so it is difficult to discover.
In any case, these endless discussions about fonts being too small on high-resolution monitors have been caused only by some incompetent morons who, for inexplicable reasons, have been in charge of the display settings of the popular operating systems. The user may have reasons to override the true DPI value of the monitor, but by default the OS should always have used the value provided by the monitor EDID; then you would never have seen any change in font sizes when substituting monitors with different resolutions (except when even more incompetent Web designers specify sizes in pixels instead of length units; allowing pixels besides length units for the sizes of graphic elements was a huge mistake, but when this was decided several decades ago, most computers did not have GPUs yet, so there were concerns about software rasterization speed).
I used to work in my mom and dad's print shop when I was a kid. 6 picas in an inch, 12 points in a pica, and by the time you go home your hands smell like hypo. That should give you an idea of how old I am.
For a kid I was passably good at setting up headlines for paste-up, but I never had to be the one who used an X-Acto Knife.
I'll die on the hill that 2K is better than 4K if your livelihood depends on staring at a screen at a distance of 60cm for upwards of 10 hours a day, sometimes longer.
I also think you missed my point about the anti-aliasing. For various reasons I still use Windows, and some of my favorite monospace fonts only exist in the .FON format. I can emulate the X-Windows experience of using the misc-fixed-medium family and it works just fine for my needs.
I agree that on monitors with insufficient resolution, ancient bitmap fonts can be sharper, because they are free of the artifacts caused by a mismatch between the shapes of the letters and the pixel grid.
Your problem is precisely that you use monitors with too low a resolution. On monitors with a high enough resolution, you approach the quality of printed paper and can use monospace fonts that are more beautiful than any bitmap font, without being able to perceive the pixels.
The only problem is that big monitors also need a higher resolution, and the combination of big size with high resolution can be expensive.
While at sizes of 27" or 32" 4k monitors can be quite cheap, I believe that at such sizes a 5k resolution is the minimum for good text rendering, and 5k monitors remain expensive.
No, it is a poor pixel density when compared with a printed book, which should be the standard for judging any kind of display used for text.
At the sizes of 27" or 32", which are comfortable for working with a computer, 5k is the minimum resolution that is not too bad when compared with a book or with the acuity of typical human vision.
For a bigger monitor, a 4k resolution is perfectly fine for watching movies or for playing games, but it is not acceptable for working with text.
More accurately, you have spotted not a Linux user in general, but a user of certain Linux distributions, which in my opinion have inadequate display configuration settings.
I also use only Linux on all my desktops and laptops, and for at least the last 12 or 13 years I have not used any display with a resolution below 4k.
Despite that, I have never encountered any problems with "scaling", because on Linux I have never used any kind of "scaling" (unlike on Windows, which has font "scaling").
In the kind of Linux that I have been using, I only set an appropriate dots-per-inch value for the monitor. That means there is no "scaling", which would reduce graphic quality; instead, all programs render fonts and other graphic elements at an appropriate size, using the display resolution in the right way.
I configure dots-per-inch values that do not match the actual DPI values of the monitors, but values that ensure that the on-screen size is slightly larger than the on-paper size, because I sit farther from the monitor than I would hold a paper or a book in my hand. That is, I set higher DPI values than the real ones, so that any rendering program will believe that the screen is smaller than it really is, and will therefore render e.g. a 12-point font at a slightly bigger size than 12 points, and an A4 page bigger on screen than an A4 sheet of paper. For instance, I use 216 DPI for a 27-inch 4k Dell UltraSharp monitor.
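For anyone checking those numbers, a minimal sketch of the computation (216 is the value from above; the real density follows from the panel geometry):

    #include <math.h>
    #include <stdio.h>

    /* Real pixel density = diagonal resolution in pixels / diagonal inches. */
    int main(void) {
        const double w = 3840.0, h = 2160.0; /* 4k resolution */
        const double diag = 27.0;            /* 27-inch panel */
        double real_dpi = sqrt(w * w + h * h) / diag;            /* ~163 DPI */
        printf("real DPI: %.1f\n", real_dpi);
        printf("magnification at 216 DPI: %.2fx\n", 216.0 / real_dpi); /* ~1.32x */
        return 0;
    }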
While Americans very frequently complain that the Chinese state subsidizes various industries, I am astonished that they do not see any similarity with the fact that I have never heard of any really big investment project in the USA, e.g. the building of a new big factory or new company headquarters, that was done otherwise than after receiving very substantial tax reductions of various kinds from the local government of the place chosen for the project. In many parts of Europe such tax reductions would be illegal, being considered a form of state aid to a private company.
And yet virtually all European lawmakers get $ from governments threatening to cut jobs.
Many countries actively lose money on those jobs; Serbia is an example. They go to extreme lengths to underbid the competition for Stellantis factories and end up with a net negative impact.
If you can't survive without taxpayers paying the bills, just die ffs.
I do not see how it can be claimed that "LLMs are a spectacular demolition of that premise", because LLMs must be trained on an amount of text far greater than that to which a human is exposed.
I learned one foreign language just by being exposed to it almost daily, by watching movies in that language, without using any additional aids, like a dictionary or a grammar (because none were available where I lived; this was before the Internet). However, I was helped in guessing the meanings of the words and the grammar of the language not only by seeing what the characters of the movie were doing, correlated with the spoken phrases, but also by the fact that I knew a couple of languages with many similarities to the language of the movies I was watching.
In any case, the amount of spoken language to which I was exposed over the year or so it took to become fluent was many orders of magnitude less than what is used to train an LLM.
I do not know whether any innate knowledge of grammar was involved, but certainly the knowledge of the grammar of other languages helped tremendously in reducing the amount of exposure needed, because after seeing only a few examples I could guess the generally applicable grammar rules.
There is no doubt that the way an LLM learns is much dumber than the way a human learns, which is why it must be compensated for by a much bigger amount of training data.
The current inefficiency of LLM training has already caused serious problems for a great number of people, who either had to give up buying various kinds of electronic devices or had to accept devices of much worse quality than they had desired and planned, because the prices of DRAM modules and of big SSDs have skyrocketed, due to the hoarding of memory devices by the rich, who hope to become richer by using LLMs. Seeing this, I believe it has been proven beyond doubt that the way LLMs learn is, for now, not good enough, and it is certainly not a positive achievement, as more people have been hurt by it than have benefited from it.
Presumably one would want to use Ed448 in order to achieve, for session key establishment or for digital signing, a level of security comparable to using AES with a 256-bit key for encryption.
Ed25519 has a level of security only comparable with AES with a 128-bit key.
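(The usual accounting behind such equivalences: Pollard's rho on an elliptic-curve group of order about 2^n costs about 2^(n/2) group operations, so Curve25519's ~2^252 group gives ~2^126, and Ed448's ~2^446 group gives ~2^223.)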
Nowadays many prefer to use AES or similar ciphers with a 256-bit key for encryption, to guard against possible future advances, like the development of quantum computers. In such cases, Ed25519 remains the component with the lowest resistance against brute force, but it is less common to use something better, because of the increased computational cost of session establishment.
> Presumably one would want to use Ed448 in order to achieve, for session key establishment or for digital signing, a level of security comparable to using AES with a 256-bit key for encryption.
Ed448 is an instantiation of EdDSA (the Edwards-curve Digital Signature Algorithm) over the edwards448 "Goldilocks" curve; the curve is defined in RFC 7748 and the signature scheme in RFC 8032.
Key establishment would use X448 (formerly "Curve448") for Diffie-Hellman, although ECDH over Edwards448 is also (strictly speaking) possible.
Using Ed448 for key exchange is a TypeError.
But that's neither here nor there. I was asking about real world applications that need Ed448 specifically, not a vague question of how cryptography works.
> Ed25519 has a level of security only comparable with AES with a 128-bit key.
No. The whole notion of "security levels" is a military meme that doesn't actually meaningfully matter the way people talk about it.
There are about 2^252 possible Ed25519 public keys. Recovering a secret key with Pollard's rho takes about 2^126 computations (each requiring a scalar multiplication), and that's why people assign it the same "security level" as AES-128; but the only meaningful difference between the algorithms (besides their performance footprint) is security against multi-user attacks.
With a 256-bit AES key, you can have 2^40 users each choose 2^50 keys and still have a probability of key reuse below 2^-32.
With 128-bit AES keys, you don't have that guarantee. 2^90 keys is well beyond the birthday bound of a 128-bit function, which means the probability of two users choosing the same key is far higher than 2^-32. (It's actually higher than 50% at 2^90 keys out of 2^128.)
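To make the arithmetic explicit, using the standard birthday approximation p ≈ 1 - exp(-n^2 / 2^(b+1)) for n keys of b bits: with n = 2^90 and b = 256, n^2 / 2^(b+1) = 2^(180-257) = 2^-77, comfortably below 2^-32; with b = 128, the same ratio is 2^(180-129) = 2^51, so a collision is essentially certain.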
However, despite the "security level" claims, Ed25519 has 2^252 keys. The multi-user security of Ed25519 (and X25519) is meaningfully on par with AES-256.
As things stand today, the 128-bit symmetric cryptography "security level" is unbreakable. You would need to run the entire Bitcoin mining network for on the order of a billion years to brute force an AES-128 key.
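Rough numbers behind that claim, assuming the network sustains on the order of 2^69 hashes per second (roughly its recent rate): 2^128 / 2^69 = 2^59 seconds, and a year is about 2^25 seconds, so the search takes on the order of 2^34 years, i.e. billions of years.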
> Nowadays many prefer to use AES or similar ciphers with a 256-bit key for encryption, to guard against possible future advances, like the development of quantum computers.
This is a common misunderstanding. So common that I once made the same mistake.
Grover's attack requires a quantum circuit size of 2^106.
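For context (my gloss, not GP's words): Grover's algorithm gives only a quadratic speedup, so attacking a 128-bit key needs on the order of sqrt(2^128) = 2^64 sequential evaluations of a reversible AES circuit, each itself a sizable circuit; multiplying the two factors is how total circuit-size figures like the 2^106 above arise.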
> In such cases, Ed25519 remains the component with the lowest resistance against brute force, but it is less common to use something better, because of the increased computational cost of session establishment.
I do not understand what this sentence is trying to say.
It should be noted that iOS/macOS are likely not vulnerable, because for them the Dolby decoder has been compiled the way any C/C++ program should be compiled by default everywhere, i.e. with bounds checking enabled.
Unfortunately, all C/C++ compilers omit bounds checking by default, but any decent compiler has options for enabling bounds checking and other run-time checks suitable for catching all the undesirable behaviors that are undefined in the C/C++ standards. The default should be to enable such options globally for any program, and to disable them selectively only in those parts of the code where benchmarks have demonstrated that they prevent the program from reaching the target performance and code analysis has concluded that the erroneous behavior cannot happen.
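As a minimal sketch of what such options catch (the flags shown are standard in current GCC and Clang; the program is deliberately contrived):

    #include <stdlib.h>

    /* Compile with run-time checks enabled, e.g.:
     *   cc -g -fsanitize=address,undefined demo.c
     * The out-of-bounds store below then aborts with a diagnostic
     * instead of silently corrupting adjacent memory. */
    int main(void) {
        int *a = malloc(4 * sizeof *a);
        if (!a) return 1;
        a[4] = 42;  /* one element past the end: caught by AddressSanitizer */
        free(a);
        return 0;
    }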
The claim that C/C++ are unsafe programming languages is only true in small part, because most of the unsafety is caused by the compiler options that are chosen as defaults by tradition, not intrinsically by the language. The C/C++ standards fail to define a safe behavior for many situations, but they also do not prevent a compiler from implementing the safe behavior; e.g. the fact that the standard does not require bounds checking for accesses to arrays and structures does not mean that a compiler must not implement such checking.
When a C/C++ program is compiled with safe compilation options instead of the default options, it becomes quite safe, as most errors that would be caught by a "safer" language are also caught at run time in the C/C++ program.
> When a C/C++ program is compiled with safe compilation options instead of the default options, it becomes quite safe, as most errors that would be caught by a "safer" language are also caught at run time in the C/C++ program.
Sean Baxter has provided quite a number of crazy examples that, even if they wanted to (and there is no sign they do), C++ couldn't fix without major language changes.
Bounds checking in more places by default, catching some kinds of initialization screw-ups: these are all nice enough in some sense - indeed, in this particular case maybe they close the vulnerability - but they're band-aids; the pig is gone, dad. https://www.youtube.com/watch?v=1XIcS63jA3w
That's a lot of words, but how is that even possible?
Pointers and arrays are basically interchangeable in C, and you have to do that constantly in any large program. Even the blog post has a malloc in it.
Once you start passing around a pointer to the middle of an array, all size info is lost.
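A minimal sketch of that objection: after the array decays, the callee sees only a bare pointer, so any bound has to travel separately or it is simply gone.

    #include <stdio.h>

    /* The callee receives only an int*, whatever the caller had;
     * the length must be passed alongside or it is lost entirely. */
    static int sum(const int *p, size_t n) {
        int s = 0;
        for (size_t i = 0; i < n; i++)
            s += p[i];
        return s;
    }

    int main(void) {
        int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        /* Pointer into the middle: only the caller knows that 4 elements
         * starting at a + 2 are still in bounds. */
        printf("%d\n", sum(a + 2, 4));
        return 0;
    }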
Are you talking about -fsanitize=address? It's too slow to be used in production
I believe GP is talking about -fbounds-safety [0, 1]. From my understanding, this will cause the compiler to emit an error if it can't figure out how to bounds-check a pointer access at either compile time or run time. You then need to either add appropriate annotations to provide the missing information, or restructure the code to satisfy the compiler.
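If I read the proposal correctly, the annotations look roughly like this (a hypothetical sketch; the extension is experimental and its syntax may change):

    #include <stddef.h>

    /* Under -fbounds-safety, __counted_by ties a pointer to its element
     * count, so the compiler can bounds-check accesses it cannot prove
     * safe statically (and reject code it cannot check at all). */
    void fill(int *__counted_by(n) p, size_t n, int value) {
        for (size_t i = 0; i < n; i++)
            p[i] = value;   /* checked against n where necessary */
    }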
As implemented in the most popular compilers "-fsanitize=address" is indeed slow.
However, for the majority of the code of a program, enabling this and all the other sanitize options has a negligible effect on overall performance.
Like I have said, sanitize options should be disabled only in performance-critical sections that have been identified as such by profiling, not by guessing, and only after examining those sections thoroughly, to be certain that the undefined behavior cannot be triggered.
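For the selective disabling, both Clang and recent GCC support a per-function attribute, so the rest of the program keeps its checks; a sketch:

    /* A hot loop identified by profiling and audited by hand can be
     * exempted from AddressSanitizer while everything else stays checked.
     * (Clang spelling shown; GCC also accepts no_sanitize_address.) */
    __attribute__((no_sanitize("address")))
    long dot(const long *a, const long *b, unsigned n) {
        long s = 0;
        for (unsigned i = 0; i < n; i++)
            s += a[i] * b[i];
        return s;
    }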
Currently, the sanitize options are significantly slower than they would be in an optimized implementation, because there is a vicious circle: application developers do not enable such options in production because they believe they are slow, and compiler developers do not make the effort to improve their speed because they believe application developers will not enable them in production anyway.
However, these problems are not inherent to the language or the compiler; they are caused by a bad historical tradition of neglecting the correctness of a program whenever cheating can improve performance in the best case (which is the only case demonstrated to potential customers), even if that makes the worst case catastrophic.
Even Rust is not immune to bad traditions, e.g. disabling integer overflow checking in release builds, as opposed to debug builds (though Cargo can re-enable it with overflow-checks = true in the release profile).