For someone who claims to take a principled stance on these sorts of things, it feels very unprincipled to leverage the risk that others take in e.g. carrying a cell phone.
Consider that there are two components here: one is that Stallman is uncomfortable with the risk of carrying a tracking device (aka cell phone) around with him. The other is that he wants to make it known that people shouldn't carry cell phones because of that tracking; part of his platform is advocating for and against things like this.
If he was merely worried about the risk, and was just out to protect himself, then using someone else's cell phone (which would be at hand regardless of whether or not he used it) would be a perfectly reasonable, pragmatic thing to do. Transferring the risk, as you say.
But using someone else's cell phone is a violation of the principle. How can I take his advocacy seriously if he freely admits that we need cell phones out in the world, because otherwise it's too inconvenient even for him to go about his business?
He does leverage the risk that others take, but those others are also the people who collectively build society so as to require taking that risk. It's kind of tit-for-tat in a way.
>How can I take his advocacy seriously
You could just listen to what he has to say and consider whether or not it's true. His personal behaviour at the end of the day has little bearing on that. "He doesn't even do XYZ therefore I won't believe him" feels more like a rationalization one comes up with because one doesn't want to believe him in the first place.
It's not setting an example if you shift responsibility to someone else.
Setting an example would be just doing without the things he doesn't agree with. Need to make a call but only other people's cell phones are available? Well, you don't make the call. Need wifi but no open networks are available? Well, you don't get wifi. Is this even more inconvenient than the already-inconvenient use of other people's cell phones or wifi logins? Absolutely. But it's actually sticking to your principles.
I really want to disagree with this, and have more faith in humanity, but I suspect you are more or less right. Even if it's 1,000 or even 10,000 or 100,000 cameras returned, it'll likely amount to a nothingburger for Amazon.
To make a real statement here, we'd probably need several million returns in the US alone. (A quick search suggests more than 20M installs in the US.)
It is far, far, far, far more likely that this sort of mass surveillance capability will be used for bad purposes (even by law enforcement) than it will be used to find an escaped child murderer. (Hell, I am convinced that this sort of thing is already more frequently used for bad purposes than good.)
Also like, how many escaped child murderers are there per year in the US? Like... one? I don't think that's worth pervasive mass surveillance, though I would understand how a parent whose kid had been abducted might believe it would be.
I'm that guy following at 3 or 4 car lengths in heavy traffic, and people are constantly funneling in front of me, all to go exactly the same speed they'd be going if they were behind me.
If I ran a company like that (I'm very glad I don't; it would stress me out and I'd hate it), I would immediately fire anyone who did something like that. Anyone who would so blatantly steal a communal resource from their peers is an untrustworthy scumbag. Why would I trust them with integral parts of my business if I can't even trust them to not walk away with $20 worth of shared pizza?
> The detail about needing to reinstall Windows NT just to add a second CPU shows how tightly coupled OS and hardware were — there was no abstraction layer pretending otherwise.
In this case there was: you needed to reinstall to go from uniprocessor to SMP because NT shipped with two HALs (Hardware Abstraction Layers), one supporting just a single processor and one supporting more than one.
The SMP one had all the code for things like CPU synchronization and interrupt routing, while the UP one did not.
If they'd packed everything into one HAL, single-processor systems would have had to take the performance hit of all the synchronization code even though it wasn't necessary. Memory usage would have been higher too. I expect you could probably run the SMP HAL on a UP system (unless Microsoft added code to prevent it), but you wouldn't really want to, as it would be slower and require more RAM.
So it wasn't that those abstraction layers didn't exist back then. It was that abstraction layers can be expensive. This is still true today, of course, but we have the cycles and memory to spare, more or less, which was very much not the case then.
> If they'd packed everything into one HAL, single-processor systems would have to take the performance hit of all the synchronization code even though it wasn't necessary. Memory usage would be higher too.
Linux also used to be like this, but these days it has a unified MP/UP kernel; on single-CPU systems (or if you pass nosmp), the extra code is patched away at boot time. It wouldn't have been an unheard-of technique at the time.
I actually would love this to be built into a language/compiler. A lot of the time I'm building a single-threaded program, but I'm using libraries written by other people. These libraries don't know whether they're being incorporated into a single-threaded program or not. So they either take the performance penalty of assuming multi-threaded (the approach of std::shared_ptr), or they give callers the choice by providing two implementations (Rust's Arc and Rc). But the latter doesn't actually work, because this needs to be a global setting, not a decision made at each local call site. It won't work if such a library is a transitive dependency.
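In C you can at least approximate the global setting with the preprocessor. A minimal sketch, assuming everything (transitive dependencies included) is compiled from source in the same build; the APP_SINGLE_THREADED flag and the refcount_t helpers are hypothetical names, not any real library's API:

```c
#include <stdatomic.h>

/* One build-wide switch decides whether reference counts are plain
 * ints or atomics, for every library compiled with this header.
 * (Hypothetical sketch: APP_SINGLE_THREADED is not a real flag.) */
#ifdef APP_SINGLE_THREADED
typedef int refcount_t;                       /* plain int: no atomic ops */
static inline void ref_inc(refcount_t *c) { ++*c; }
static inline int  ref_dec(refcount_t *c) { return --*c; }
#else
typedef atomic_int refcount_t;                /* atomic RMW on every inc/dec */
static inline void ref_inc(refcount_t *c) { atomic_fetch_add(c, 1); }
static inline int  ref_dec(refcount_t *c) { return atomic_fetch_sub(c, 1) - 1; }
#endif
```

The catch is that every dependency has to be rebuilt with the same flag, which is exactly why it wants to be a language/compiler feature rather than a per-project convention.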
Glibc has a bunch of tests throughout its codebase that check whether any threads besides the main one have been started. I don't really know how effective they are from a performance perspective. (In principle, turning fgetc into getc_unlocked, for instance, could be quite beneficial.) Microsoft used to have a single-threaded C runtime, but it was done away with some time ago, I'm guessing because they started putting things into the platform that would start and manage random threads outside the programmer's control.
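For what it's worth, glibc has exposed that same check to applications since 2.32, as __libc_single_threaded in <sys/single_threaded.h>. A rough sketch of the fgetc-to-getc_unlocked idea (next_char is just an illustrative helper, not a glibc function):

```c
#include <stdio.h>
#include <sys/single_threaded.h>   /* declares __libc_single_threaded */

int next_char(FILE *f)
{
    /* Nonzero while glibc believes the process is single-threaded. */
    if (__libc_single_threaded)
        return getc_unlocked(f);   /* skip the per-stream lock */
    return getc(f);
}
```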
Yes, but an SMP kernel on a UP system would be slightly but noticeably slower. And a UP kernel on an SMP system wouldn't use the second processor, and, rarely, wouldn't boot.
But even in the era of LILO you could switch kernels pretty easily.
They could have shipped both HALs. Or made it easy to switch which one was in use without reinstalling.
CDs were around and hard drives weren’t that small at the time. (Or maybe the really early SMP versions predated widespread availability of CD-ROMs, but I remember dealing with this nonsense and reinstalling from an MSDN CD set.)
With NT4, I'm pretty sure both HALs were on the CD-ROM (unless you had an exotic system with a custom HAL, which came with its own install media). Keep in mind that your use case applied to approximately nobody: you either had an SMP system or you didn't.
It was really not that rare to want to move a disk from one system to another. Except that there was an obnoxiously high chance that Windows would refuse to boot.
Yeah, remember the primary disk controller was set in the registry. (And on an SMP system probably some specific SCSI.) You could fix that, but easier to reinstall. Of all the things to bitch about, this one seems strained.
A big component of coding is typing. If you aren't doing the typing, then, unless you are dictating code for someone else to type out verbatim, you are not coding.
I do believe directing an LLM to write code, and then reviewing and refining that code with the LLM, is a skill that has value -- a ton of value! -- but I do not think it is coding.
It's more like super-technical product management, or like a tech lead pair programming with a junior, but in a sort of mentorship way where they direct and nudge the junior and stay as hands-off as possible.
It's not coding, and once that's the sum total of what you do, you are no longer a coder.
You can get defensive and call this gatekeeping, but I think it's just the new reality. There's no shame in admitting that you've moved to a stage of your life where you build software but your role in it isn't as a coder anymore. Just as there's no shame in moving into management, if that's what you enjoy and are effective at it.
(If presenting credentials is important to you, as you've done, I've been doing this since 1989, when I was 8 years old. I've gone down to embedded devices, up through desktop software, up to large distributed systems. Coding is my passion, and has been for most of my life.)
I love building things too, but for me, the journey is a big part of what brings me joy. Herding an LLM doesn't give me joy like writing code does. And the finished project doesn't feel the same when my involvement is limited to prompting an LLM and reviewing its output.
If I had an LLM generate a piece of artwork for me, I wouldn't call myself an artist, no matter how many hours I spent conversing with the LLM in order to refine the image. So I wouldn't call myself a coder if my process was to get an LLM to write most/all the code for me. Not saying the output of either doesn't have value, but I am absolutely fine gatekeeping in this way: you are not an artist/coder if this is how you build your product. You're an artistic director, a technical product manager, something of that nature.
That said, I never derived joy from every single second of coding; there were and are plenty of parts to it that I find tedious or frustrating. I do appreciate being able to let an LLM loose on some of those parts.
But sparing use is starting to work only for hobby projects. I'm not sure I could get away with taking the time to write most of it manually when LLMs might make coworkers more "productive". Even if I can convince myself my code is still "better" than theirs, that's not what companies value.
No they don't. One team meticulously documents and specs out what the original code does, and then a completely independent team, one that has never seen the original source code, implements it from that spec.