As a user, I like Wayland. X11 was a security disaster. Wayland is much better about tearing.

What scares me, though, are all the responsibilities passed to compositors, because what ends up happening is that each compositor may reimplement what should be common functionality in annoying ways. This is especially true for input handling, like key remapping. This ultimately fragments the Linux desktop experience even harder than before.


Huh. The "security" preventing me from doing things I want to do is a major reason I dislike Wayland :/. (e.g. automation & scripting / input events, clipboard, ...)

It also has noticeable mouse lag for me, I really hope this isn't due to avoiding tearing.


With great power comes great responsibility :)

That's a nice quip, but what does it mean in this case? If you remove "insecure" or "dangerous" features that people actually need from software, what you achieve is people using other software, and thus you have failed your responsibility?

Win32 has managed to do this without any API change; all the existing APIs work. The same approach would've worked for X11.

What it does is simple: all the functions that deal with window handles or events simply do not work on ones you don't have access to. For example, EnumWindows, which lets you walk through the tree of windows, simply does not see the ones the process has no access to; SetWindowsHookEx, which allows you to intercept and modify messages meant for other windows, simply doesn't fire for messages you're not supposed to access.

Granted, outside of UWP apps, enforcement is rather lax (for legacy reasons; the security is there, just not enforced), but for apps running as admin, or UWP apps, the sandboxing is rather solid.


Indeed, this is the right approach.

Moreover, it is possible to choose as the default policy that no program may access a window it did not open, but then there must exist a very simple method for the user to specify when access is permitted, e.g. by clicking a set of windows to grant access to them.
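The filtered-enumeration behavior described in this subthread, plus the click-to-grant default-deny policy, can be sketched as a toy model. This is plain Python, not real Win32; all names here are illustrative:

```python
# Toy model of access-filtered window enumeration: walking the window
# tree never errors out, windows the caller may not access simply
# don't appear. Default policy is deny; the user can grant access
# per (app, window) pair, e.g. by clicking a window.

from dataclasses import dataclass, field

@dataclass
class Window:
    handle: int
    title: str
    owner: str                     # the app that created the window

@dataclass
class Desktop:
    windows: list = field(default_factory=list)
    grants: set = field(default_factory=set)   # (app, handle) pairs

    def grant(self, app: str, handle: int) -> None:
        """User clicked a window to allow `app` to access it."""
        self.grants.add((app, handle))

    def enum_windows(self, caller: str):
        """Like EnumWindows in spirit: iterates all windows, but ones
        the caller lacks access to are silently skipped, not errors."""
        for w in self.windows:
            if w.owner == caller or (caller, w.handle) in self.grants:
                yield w

desk = Desktop(windows=[Window(1, "Editor", "app_a"),
                        Window(2, "Bank", "app_b")])

before = [w.title for w in desk.enum_windows("app_a")]  # only its own
desk.grant("app_a", 2)                                  # user opts in
after = [w.title for w in desk.enum_windows("app_a")]   # now both
```

The key point is that existing "walk all windows" code keeps working unmodified; it just observes a smaller world.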


> X11 was a security disaster.

This only matters if you compare properly sandboxed apps; otherwise an app that runs with your uid can still do harm and, practically, indirectly compromise the whole system.

Are most flatpaks _properly_ sandboxed? Of course not.


And X11 always had a mechanism for isolating clients as well, i.e. trusted and untrusted clients. Nobody used it because it was irrelevant before sandboxing.

IMHO the security advantage of Wayland is mostly a myth, and probably the same is true regarding tearing. The latter is probably more an issue with drivers and defaults.

On my desktop computers and on most of my laptops I have never experienced tearing in X11, at least during the last 25 years, using mostly NVIDIA GPUs, but also Intel GPUs and AMD GPUs.

I have experienced tearing only once, on a laptop about 10 years ago, which used NVIDIA Optimus, i.e. an NVIDIA GPU without direct video output, which used the Intel GPU to provide outputs. NVIDIA Optimus was a known source of problems in Linux and unlike with any separate NVIDIA GPU, which always worked out-of-the-box without any problems for me, with that NVIDIA Optimus I had to fiddle with the settings for a couple of days until I solved all problems, including the tearing problem.

Perhaps Wayland never had tearing problems, but I have used X11 for several decades on a variety of desktops and laptops and tearing has almost never been a problem.

However, most of the time I have used only NVIDIA or Intel GPUs for display and it seems that most complaints about tearing have been about AMD. I have always used and I am still using AMD GPUs too, but I use those for computations, not connected to monitors, so I do not know if they could have tearing problems.


A security disaster? How so?

Well, it allowed local users to actually use their computers for computing instead of just safely consuming "apps" -- obviously that needed to go.

Letting any GUI application capture all input and take full control of the desktop completely defeats the point of sandboxing and X11 does exactly that.

> Defeats the point of sandboxing

Sandboxing defeats the point of said applications. If you want your computer to have no functionality, check out Figma. A clickable prototype sounds like precisely the security the world needs right now.


So accordingly, ActiveX was a brilliant idea and any web page should be able to execute code in the kernel context, otherwise no meaningful functionality can be provided

The whole problem with Wayland is this mistaken, absurd belief that the security standards of a desktop are equivalent to those of a website.

Yawn, X11 (and similar "unsecure" desktop environments) existed for half a century and the sky hasn't fallen. I'm tired of that "will somebody think of the children/grandparents" scare mongering.

It hasn't, but Windows has had its fair share of keyloggers, RATs, and so on, and I think we can all agree that anti-virus software is an inherently flawed concept.

The only thing keeping those away from Linux was its market share. With npm malware on the rise, this is no longer enough of a protection.


Keyloggers for example.

Linux has always been a system where the existence of malware was ignored, especially on the desktop, contrary to other OSes (tooling included). But for a couple of years now, slow movements trying to correct this colossal mistake can be observed (I observe them).

Whether this is the best way to do it or not, I won't get into. I simply welcome most of the advancements on this matter in Linux, after such an absence of worrying, keeping my fingers crossed that the needed tooling arrives in time (ten years behind Windows, I think).


So the security, um, hack here is that someone has unauthorized access to your machine. It's not related to X11. If you run untrusted code, that's it... who cares about X11?

Why did you use the term "untrusted code"? It sounds as if you were putting all the weight on the user's shoulders.

Two years ago, trusted code like xz-utils [0] had seven months of freedom in the infected systems.

[0] https://news.ycombinator.com/item?id=39891607

> its not related to x11

Ideally one wants to detect malware as early as possible, and to restrict what it can do from the beginning, until it is noticed.

In this case Wayland, intentionally or not, is more restrictive than X11 regarding access to the screen and keyboard.

I know, I know, the community's reply will be a couple more downvotes and "that already existed", "you could use, bla bla bla", and this is how Linux is ten years (at minimum) behind Windows in tooling for this matter ¯\_(ツ)_/¯


I'd really like to get more clarification on offline mode and privacy. The github issues related to privacy did not leave a good feeling, despite being initially excited. Is offline mode a thing yet? I want to use this, but I don't want my code to leave my device.


You can sometimes find the serial lines if you are careful. Otherwise you can use flashrom to store the output and read it back out after each failure. It is much easier to just poke around and find the serial port if you can, either from schematics (it seems the author has these) or by hand with a lot of patience or board scrying.


Possibly. Usually this is handled by the embedded controller, and I'm not sure if that was reversed or not. You may be able to tristate the GPIO line that tells the CPU that a pin means PROCHOT, which would allow you to ignore the EC's attempts to do this.


> I certainly want to get rid of gpg from my life if I can

I see this sentiment a lot, but you later hint at the problem. Any "replacement" needs to solve for secure key distribution. Signing isn't hard, you can use a lot of different things other than gpg to sign something with a key securely. If that part of gpg is broken, it's a bug, it can/should be fixed.

The real challenge is distributing the key so someone else can verify the signature, and almost every way to do that is fundamentally flawed, introduces a risk of operational errors or is annoying (web of trust, trust on first use, central authority, in-person, etc). I'm not convinced the right answer here is "invent a new one and the ecosystem around it".


It's not like GPG solves for secure key distribution. GPG keyservers are a mess, and you can't trust their contents anyways unless you have an out of band way to validate the public key. Basically nobody is using web-of-trust for this in the way that GPG envisioned.

This is why basically every modern usage of GPG either doesn't rely on key distribution (because you already know what key you want to trust via a pre-established channel) or devolves to the other party serving up their pubkey over HTTPS on their website.


Yes, I'm not saying that the web of trust ever worked. "Pre-established channels" are the other mechanisms I mentioned, like a central authority (HTTPS) or TOFU (just trust the first key you get). All of these have issues that any alternative must also solve for.
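Of the mechanisms mentioned here, TOFU is simple enough to sketch; this is roughly the idea behind SSH's known_hosts file. A minimal illustration, assuming a local JSON pin store (the file name and function names are made up):

```python
# Trust-on-first-use (TOFU) key pinning: the first public key seen
# for an identity is stored ("pinned"); later keys must match it.
# The weakness is exactly the first contact, which is trusted blindly.

import hashlib
import json
import pathlib
import tempfile

# Hypothetical pin store; a tempdir keeps this sketch self-contained.
PIN_FILE = pathlib.Path(tempfile.mkdtemp()) / "pins.json"

def fingerprint(pubkey: bytes) -> str:
    return hashlib.sha256(pubkey).hexdigest()

def check_key(identity: str, pubkey: bytes) -> str:
    """'pinned' on first use, 'ok' on a match; raises on a mismatch
    (which could be a MITM, or just an unannounced key rotation)."""
    pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
    fp = fingerprint(pubkey)
    if identity not in pins:
        pins[identity] = fp
        PIN_FILE.write_text(json.dumps(pins))
        return "pinned"            # first use: trusted blindly
    if pins[identity] == fp:
        return "ok"                # same key as before
    raise ValueError(f"key for {identity} changed; possible MITM")
```

Note how little code this is, and how none of it helps with the actual hard case: deciding whether the very first key you saw was the right one.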


So if we need a pre-established channel anyways, why would people recommending a replacement for GPG workflows need to solve for secure key distribution?

This is a bit like looking at electric cars and saying ~"well you can't claim to be a viable replacement for gas cars until you can solve flight"


A lot of people are using PGP for things that don’t require any kind of key distribution. If you’re just using it to encrypt files (even between pointwise parties), you can probably just switch to age.

(We’re also long past the point where key distribution has been a significant component of the PGP ecosystem. The PGP web of trust and original key servers have been dead and buried for years.)


This is not the first time I see "secure key distribution" mentioned in HN+(GPG alternatives) context and I'm a bit puzzled.

What do you mean? Web of Trust? Keyservers? A combination of both? Under what use case?


I'm assuming they mean the old way of signing each other's keys.

As a practical implementation of "six degrees of Kevin Bacon", you could get an organic trust chain to random people.

Or at least, more realistically, to a few nerds. I think I signed 3-4 people's keys.

The process had - as they say - a low WAF.


> As a practical implementation of "six degrees of Kevin Bacon", you could get an organic trust chain to random people.

GPG is terrible at that.

0. Alice's GPG trusts Alice's key tautologically.
1. Alice's GPG can trust Bob's key because it can see Alice's signature on it.
2. Alice's GPG can trust Carol's key because Alice has Bob's key, and Carol's key is signed by Bob.

After that, things break. GPG has no tools for finding longer paths like Alice -> Bob -> ??? -> signature on some .tar.gz.

I'm in the "strong set", I can find a path to damn near anything, but only with a lot of effort.

The good way used to be the path finder, some random website maintained by some random guy that disappeared years ago. The bad way is downloading a .tar.gz, checking the signature, fetching the key, then fetching every key that signed it, in the hopes somebody you know signed one of those, and so on.

And GPG is terrible at dealing with that, it hates having tens of thousands of keys in your keyring from such experiments.

GPG never grew into the modern era. It was made for persons who mostly know each other directly. Addressing the problem of finding a way to verify the keys of random free software developers isn't something it ever did well.
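The path finding described above is, mechanically, just a shortest-path search over the "who signed whose key" graph, which GPG never shipped a tool for. A sketch with toy data (the names and the `signed_by` map are purely illustrative):

```python
# A trust path through a web of trust is a shortest path in the
# signature graph. BFS from the target back toward keys you already
# trust finds the shortest chain of signatures, if one exists.

from collections import deque

# signed_by[key] = set of keys whose owners signed that key
signed_by = {
    "bob":   {"alice"},
    "carol": {"bob"},
    "dave":  {"carol", "mallory"},
}

def trust_path(start, target):
    """Shortest chain of signatures from start's key to target's,
    or None if the target is unreachable from start."""
    queue = deque([[target]])
    seen = {target}
    while queue:
        path = queue.popleft()
        key = path[-1]
        if key == start:
            return path[::-1]      # report it start -> target
        for signer in signed_by.get(key, ()):
            if signer not in seen:
                seen.add(signer)
                queue.append(path + [signer])
    return None

path = trust_path("alice", "dave")
```

The search itself is trivial; the pain the commenter describes is assembling `signed_by` in the first place, which means recursively fetching thousands of keys into a keyring that GPG handles poorly.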


What's funny about this is that the whole idea of the "web of trust" was (and, as you demonstrate, is) literally PGP punting on this problem. That's how they talked about it at the time, in the 90s, when the concept was introduced! But now the precise mechanics of that punt have become a critically important PGP feature.


I don't think it punted so much as it never had that as an intended use case.

I vaguely recall the PGP manuals talking about scenarios like a woman secretly communicating with her lover, or Bob introducing Carol to Alice, and people reading fingerprints over the phone. I don't think long trust chains and the use case of finding a trust path to some random software maintainer on the other side of the planet were part of the intended design.

I think to the extent the Web of Trust was supposed to work, it was assumed you'd have some familiarity with everyone along the chain, and work through it step by step. Alice would know Bob, who'd introduce his friend Carol, who'd introduce her friend Dave.


In a signature context, you probably want someone else to know that "you" signed it (I can think of other cases, but that's the usual one). The way to do that requires them to know that the key which signed the data belongs to you. My only point is that this is actually the hard part, which any "replacement" crypto system needs to solve for, and that solving that is hard (none of the methods are particularly good).


> The way to do that requires them to know that the key which signed the data belongs to you.

This is something S/MIME does and I wouldn't say it doesn't do so well. You can start from mailbox validation and that already beats everything PGP has to offer in terms of ownership validation. If you do identity validation or it's a national PKI issuing the certificate (like in some countries) it's a very strong guarantee of ownership. Coughing baby (PGP) vs hydrogen bomb level of difference.

It much more sounds to me like an excuse to use PGP when it doesn't even remotely offer what you want from a replacement.


I think it should be mostly ad-hoc methods:

If you have a website, put your keys on a dedicated page and direct people there.

If you are in an org, there can be whatever kind of centralised repo.

Add the hashes to your email signature and/or profile bios.

There might be a nice uniform solution using DNS and derived keys, like certificate chains? I am not sure, but I think it might not be necessary.


4800MHz single rank ok?


There's a great postmortem about what might have been a similar SEU (single event upset, i.e. a bit flip) here: https://www.atsb.gov.au/sites/default/files/media/3532398/ao...


The StarMax series (and the 4400) seemed to be about as close to CHRP as we got. My off-brand StarMax clone (PowerCity) had a PS/2 and an ISA port. It ran BeOS well, and had a quirk where I could hear a tight loop on the speaker.


AFAIK most StarMax systems that were released (a prototype exists of a CHRP StarMax model) are based on the Tanzania / LPX-40 design, which is mostly a traditional PCI PowerMac[1], albeit with oddities like support for PC style floppy drives. PS/2 is handled by the CudaLite microcontroller which presents it to the OS as ADB devices for example. I've not heard of a version with ISA slots, although I assume you could just have a PCI to ISA bridge chip, even if MacOS presumably wouldn't do anything with it.

[1] https://cdn.preterhuman.net/texts/computing/apple_hardware_d...


Right, I think those were the closest we got to the CHRP standard, as they moved the platform toward PC-style floppies, PS/2, ATX PSU and even more generic "platform" stuff than most clones. I'm fairly sure I had an ISA slot, I do remember trying to get a bargain bin NE2K card working in mine under linux (it didn't work). Definitely did nothing under OS 8/9.

The powercity models were interesting, because they came out after Apple revoked Motorola's clone license. A German company, ComJet, bought up the boards and sold unlicensed clones cheap. Case was slightly different, but otherwise they corresponded to StarMax models (fairly certain they were identical but may have been last revision boards).


Kinda sorta. The systems that the "MacOS on CHRP" thing ran on had a very strange-looking device tree, with some bizarre combination of PC and Mac peripherals.

  Apple Cobra Open Firmware CHRP 1.1 B3 built on 08/18/97 at 13:04:24
  Copyright Apple Computer 1994,1996,1997
  Copyright IBM Corporation 1996
  All rights reserved.
   ok
  0 > dev / ls 
  ff82ec18: /cpus
  ff82ee08:   /PowerPC,604e@0
  ff82f600: /chosen
  ff82f750: /memory@0
  ff82f8d8: /memory-controller@fec00000
  ff82f9d8: /openprom
  ff82fab8: /rom@ff000000
  ff82ff48:   /boot-rom@fff00000
  ff830060: /options
  ff830828: /aliases
  ff830c78: /packages
  ff830d00:   /deblocker
  ff8314c8:   /disk-label
  ff832090:   /obp-tftp
  ff835db8:   /mac-parts
  ff836578:   /mac-files
  ff837de0:   /fat-files
  ff839700:   /iso-9660-files
  ff83a148:   /bootinfo-loader
  ff83b7d0:   /xcoff-loader
  ff83c060:   /pe-loader
  ff83c7d0:   /elf-loader
  ff83da18:   /terminal-emulator
  ff83dab0: /rtas
  ff83dc70: /pci@80000000
  ff83ff38:   /isa@b
  ff8414e0:     /nvram@i74
  ff841ad0:     /rtc@i70
  ff842500:     /parallel@i378
  ff842988:     /serial@i3f8
  ff843020:     /serial@i2f8
  ff8436b8:     /sound@i534
  ff850288:     /8042@i60
  ff8515f8:       /keyboard@0
  ff854b88:       /mouse@1
  ff8554c0:     /fdc@i3f0
  ff858730:       /disk@1
  ff85bac0:     /op-panel@i808
  ff85bba0:     /pwr-mgmt@i82a
  ff85bed8:     /timer@i40
  ff85c070:     /interrupt-controller@i20
  ff85c250:     /dma-controller@i0
  ff85c738:   /pci-ide@b,1
  ff85d028:     /ide@0
  ff85db78:     /ide@1
  ff85e6c8:       /cdrom@0
  ff862e60:   /mac-io@d
  ff863468:     /scsi@10000
  ff865298:       /disk
  ff8660c8:       /tape
  ff8671b8:     /adb@11000
  ff867cb0:       /keyboard@2
  ff8685a0:       /mouse@3
  ff8687c0:     /escc-legacy@12000
  ff8689b8:       /ch-a@12002
  ff868b08:       /ch-b@12000
  ff868c58:     /escc@13000
  ff868e40:       /ch-a@13020
  ff869500:       /ch-b@13000
  ff869bc0:     /via@16000
  ff869cb0:     /interrupt-controller@40000
  ff869e70:   /cirrus@e
  ff86e2c8:   /pci1022,2000@f
   ok
  0 >
Refer to the "Macintosh Technology in the Common Hardware Reference Platform" book for more information, if you're curious about the Mac IO pieces.

The Motorola Yellowknife board seems remarkably similar to this system, as well as the IBM Long Trail system (albeit with Long Trail using a VLSI Golden Gate versus a MPC106 memory controller). Both of them use W83C553 southbridges and PC87307 Super I/O controllers.

The architecture is kind of weird, but the schematics on NXP's website can probably elucidate a bit more on the system's design.


I really cannot say Uber's use of Go is particularly idiomatic to me, having started writing Go more than a decade ago now. It just strikes me as overwrought, and I've worked on big services.


UEFI itself is way too complex, has way too much attack surface (I'm surprised this didn't abuse some poorly written SMI handler), and provides too little value to exist. Secure Boot then goes on to treat that place as a root of trust, which is a security architecture mistake, but works OK in this case. This all could be a lot better.

