Hacker News | 20k's comments

This programming model seems like the wrong one, and I think it's based on some faulty assumptions.

>Another advantage of this approach is that it prevents divergence by construction. Divergence occurs when lanes within a warp take different branches. Because thread::spawn() maps one closure to one warp, every lane in that warp runs the same code. There is no way to express divergent branching within a single std::thread, so divergence cannot occur

This is extremely problematic - being able to write divergent code between lanes is good. Virtually all high performance GPGPU code I've ever written contains divergent code paths!

>The worst case is that a workload only uses one lane per warp and the remaining lanes sit idle. But idle lanes are strictly better than divergent lanes: idle lanes waste capacity while divergent lanes serialize execution

This is where I think it falls apart a bit, and we need to dig into GPU architecture to find out why. A lot of people think that GPUs are a bunch of executing threads that are grouped into warps executing in lockstep. This is an overly restrictive model of how they work that misses a lot of the reality
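To make the cost argument concrete, here's a toy Python model (not real GPU code — just the textbook masking behaviour) of how a lockstep warp pays for a divergent branch:

```python
# Toy model of the classic lockstep view of a warp: a branch is executed
# by masking lanes off, so a divergent warp steps through BOTH sides.
def issue_slots(lane_takes_branch, then_cost, else_cost):
    """Count instruction issue slots for one warp executing
    `if cond: then_side else: else_side` under lane masking.
    lane_takes_branch: list of bools, one per lane."""
    slots = 0
    if any(lane_takes_branch):        # some lane takes the then-side:
        slots += then_cost            # the whole warp steps through it, others masked
    if not all(lane_takes_branch):    # some lane takes the else-side
        slots += else_cost
    return slots

print(issue_slots([True] * 32, 10, 10))                  # 10 (uniform warp)
print(issue_slots([True] * 16 + [False] * 16, 10, 10))   # 20 (divergent: serialized)
print(issue_slots([False] * 32, 10, 10))                 # 10 (uniform the other way)
```

Under this model a warp with idle masked-off lanes issues only one side, while a genuinely divergent warp pays for both — which is the serialization the article is worried about. The argument below is that this model itself is outdated.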

GPUs are a collection of threads that are broken up into local work groups. These share fast on-chip shared memory (and L1 cache), which can be used for fast intra-work-group communication. Work groups are split up into subgroups - which map to warps - that can communicate extra fast

This is the first problem with this model: it neglects the local work group execution unit. To get adequate performance, you have to set this value much higher than the size of a warp - at least 64 for a 32-wide warp, and in general 128-256 is a better size. Different warps in a local work group make true independent progress, so if you don't take this into account in Rust, it's a bad time and you'll run into races. To get good performance and cache management, these warps need to be executing the same code. Trying to have a task per warp is a really bad move for performance
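To illustrate the race that independent warp progress creates, here's a deliberately simplified Python sketch (names and structure are made up for illustration; a real kernel would put a workgroup barrier such as `__syncthreads()` between the two phases):

```python
# Toy illustration (not real GPU code): two "warps" in one work group
# communicating through work-group-shared storage. Warps progress
# independently, so reading a neighbour warp's slot before a barrier
# is a race.
shared = [None, None]                 # one slot per warp

def write_phase(warp_id):
    shared[warp_id] = warp_id * 100   # each warp publishes its result

def read_phase(warp_id):
    return shared[1 - warp_id]        # read the neighbour warp's result

# No barrier: warp 0 races ahead and reads before warp 1 has written.
write_phase(0)
unsafe = read_phase(0)                # -> None (neighbour hasn't written)

# With a barrier, all warps finish the write phase before any read.
shared = [None, None]
write_phase(0)
write_phase(1)                        # "barrier": both writes complete
safe = read_phase(0)                  # -> 100
print(unsafe, safe)                   # None 100
```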

>Each warp has its own program counter, its own register file, and can execute independently from other warps

The second problem: it used to be true that all threads in a warp would execute in lockstep, with strict on/off masks for thread divergence, but this is no longer true for modern GPUs - the above is just wrong. On a modern GPU, each *thread* has its own program counter and call stack, and can independently make forward progress. Divergent threads can have a better throughput than you'd expect on a modern GPU, as they get more capable at handling this. Divergence isn't bad, it's just something you have to manage - and hardware architectures are rapidly improving here

Say we have two warps, both running the same code, where half of each warp splits at a divergence point. Modern GPUs will go: huh, it sure would be cool if we just shifted the threads about to produce two non divergent warps, and bam divergence solved at the hardware level. But notice that to get this hardware acceleration, we need to actually use the GPU programming model to its fullest
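As a software illustration of that regrouping idea (purely a sketch — how and whether hardware actually does this is vendor-specific and not publicly documented in detail):

```python
# Repacking sketch: given several warps whose lanes diverge on the same
# branch, regroup lanes by branch direction so each new warp is uniform.
WARP = 4  # tiny warp size for illustration

def repack(warps, cond):
    """warps: list of lists of lane ids; cond(lane) -> branch direction."""
    taken = [lane for w in warps for lane in w if cond(lane)]
    not_taken = [lane for w in warps for lane in w if not cond(lane)]
    regroup = lambda lanes: [lanes[i:i + WARP] for i in range(0, len(lanes), WARP)]
    return regroup(taken) + regroup(not_taken)

# Two warps, each half-divergent on `lane % 2 == 0`:
warps = [[0, 1, 2, 3], [4, 5, 6, 7]]
new_warps = repack(warps, lambda lane: lane % 2 == 0)
print(new_warps)  # [[0, 2, 4, 6], [1, 3, 5, 7]] -- both uniform now
```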

The key mistake is to assume that the warp model is always going to stick rigidly to being strictly wide SIMD units with a funny programming model, but GPUs already ditched that concept a while back, around the Volta era. As time goes on this model will only diverge further from how GPUs actually work under the hood, which seems like an error. Right now, even with just the local work group problems, I'd guess you're leaving ~50% of your performance on the table, which seems like a bit of a problem when the entire reason to use a GPU is performance!


> Modern GPUs will go: huh, it sure would be cool if we just shifted the threads about to produce two non divergent warps, and bam divergence solved at the hardware level

Could you kindly share a source for this? Shader Execution Reordering (SER) is available for Ray tracing, but it is not a general-purpose feature that can be used in generic compute shaders.

>Divergent threads can have a better throughput than you'd expect on a modern GPU, as they get more capable at handling this. Divergence isn't bad, it's just something you have to manage - and hardware architectures are rapidly improving here

I would strongly advise against this. GPUs are highly efficient when neighboring threads within a warp access neighboring data and follow largely the same code path. Even across warps, data locality is highly desirable.


>I would strongly advise against this. GPUs are highly efficient when neighboring threads within a warp access neighboring data and follow largely the same code path. Even across warps, data locality is highly desirable.

It's a bit like saying that writing code at all is bad, though. Divergence isn't desirable, but neither is running any code at all - sometimes you need it to solve a problem

Not supporting divergence at all is a huge mistake IMO. It isn't good, but sometimes it's necessary

>Could you kindly share a source for this? Shader Execution Reordering (SER) is available for Ray tracing, but it is not a general-purpose feature that can be used in generic compute shaders.

https://docs.nvidia.com/cuda/cuda-programming-guide/03-advan...

My understanding is that this is fully transparent to the programmer - it's just more advanced scheduling for threads. SER is something different entirely

Nvidia are a bit vague here, so you have to go digging into patents if you want more information on how it works


>The second problem: it used to be true that all threads in a warp would execute in lockstep, with strict on/off masks for thread divergence, but this is no longer true for modern GPUs - the above is just wrong. On a modern GPU, each thread has its own program counter and call stack, and can independently make forward progress. Divergent threads can have a better throughput than you'd expect on a modern GPU, as they get more capable at handling this. Divergence isn't bad, it's just something you have to manage - and hardware architectures are rapidly improving here

I haven't found any evidence of the individual program counter thing being true beyond one niche application: running mutexes for a single vector lane, which is not a performance optimization at all. In fact, you are serializing execution in the worst way possible.

From a hardware design perspective it is completely impractical to implement independent instruction pointers other than maybe as a performance counter. Each instruction pointer requires its own read port on the instruction memory and adding 32, 64 or 128 read ports to SRAM is prohibitively expensive, but even if you had those ports, divergence would still lead to some lanes finishing earlier than others.

What you're probably referring to is a scheduler trick that Nvidia has implemented where they split a streaming processor thread with divergence into two masked streaming processor threads without divergence. This doesn't fundamentally change anything about divergence being bad, you will still get worse performance than if you had figured out a way to avoid divergence. The read port limitations still apply.
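That trick can be sketched as follows (illustrative Python only): one divergent warp becomes two warps with complementary lane masks, each internally convergent — note that the total active-lane work is unchanged, so the cost of divergence is restructured rather than eliminated:

```python
# Sketch of splitting one divergent warp into two masked, convergent warps.
def split_divergent(branch_taken):
    """branch_taken[i]: True if lane i takes the branch.
    Returns the lane masks of the two replacement warps."""
    then_mask = list(branch_taken)
    else_mask = [not t for t in branch_taken]
    return then_mask, else_mask

mask = [True, True, False, False]          # a half-divergent 4-lane warp
t_mask, e_mask = split_divergent(mask)
print(t_mask, e_mask)
# Every lane is covered exactly once across the two masked warps:
assert all(a != b for a, b in zip(t_mask, e_mask))
```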


Threads have individual program counters according to Nvidia, and have done for nearly 10 years

https://docs.nvidia.com/cuda/cuda-programming-guide/03-advan...

> the GPU maintains execution state per thread, including a program counter and call stack, and can yield execution at a per-thread granularity

Divergence isn't good, but sometimes it's necessary - not supporting it in a programming model is a mistake. There are some problems you simply can't solve without it, and in some cases you absolutely will get better performance by using divergence

People often try to avoid divergence by writing an algorithm that does effectively what Pascal and earlier GPUs did, which is unconditionally doing all the work on every thread. That will give worse performance than just having a branch, because of the better hardware scheduling these days
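The trade-off can be sketched with toy operation counts (illustrative numbers only, assuming a branch whose expensive side is rarely taken):

```python
# "Branchless"/predicated code pays for both sides on every element;
# a real branch pays only for the side taken. Costs are toy op counts.
THEN_COST, ELSE_COST = 10, 1

def branchless_cost(conds):
    # compute both sides everywhere, then select the result
    return len(conds) * (THEN_COST + ELSE_COST)

def branching_cost(conds):
    return sum(THEN_COST if c else ELSE_COST for c in conds)

conds = [False] * 31 + [True]      # the expensive case is rare
print(branchless_cost(conds))      # 352
print(branching_cost(conds))       # 41
```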


Python is by far the slowest of the mainstream programming languages - for CPU-bound work it's often an order of magnitude slower than the alternatives

One of the reasons Mercurial lost the DVCS battle is its performance - even the Mercurial folks admitted that was at least in part because of Python


>I don’t really understand the, “more, better, faster,” cachet to be honest. Writing the code hasn’t been the bottle neck to developing software for a long time. It’s usually the thinking that takes most of the time and if that goes away well… I dunno, that’s weird. I will understand it even less.

This is what I've always found confusing as well about this push for AI. The act of typing isn't the hard part - it's understanding what's going on, and why you're doing it. Using AI to generate code is only faster if you try to skip that step - which leads to an inevitable disaster


> The act of typing isn't the hard part - it's understanding what's going on, and why you're doing it. Using AI to generate code is only faster if you try to skip that step - which leads to an inevitable disaster

It’s more than just typing though. A simple example: remembering the exact incantation of CSS classes to style something that you can easily describe in plain English.

Yes, you could look them up or maybe even memorize them. But there’s no way you can make wholesale changes to a layout faster than a machine.

It lowers the cost for experimentation. A whole series of “what if this was…” questions can be answered with an implementation in minutes, not a whole afternoon spent on one idea that you then feel a sunk cost to keep.


> It’s more than just typing though. A simple example: remembering the exact incantation of CSS classes to style something that you can easily describe in plain English.

Do that enough and you won't know enough about your codebase to recognise errors in the LLM output.


imo a question is, do you still need to understand the codebase? What if that process changes and the language you’re reading is a natural one instead of code?

> What if that process changes and the language you’re reading is a natural one instead of code?

Okay, when that happens, then sure, you don't need to understand the codebase.

I have not seen any evidence that that is currently the case, so my observation that "Continue letting the LLM write your code for you, and soon you won't be able to spot errors in its output" is still applicable today.

When the situation changes, then we can ask if it is really that important to understand the code. Until that happens, you still need to understand the code.


The same logic applies to your statement:

> Do that enough and you won't know enough about your codebase to recognise errors in the LLM output.

Okay, when that happens, then sure, you'll have a problem.

I have not seen any evidence that that is currently the case, i.e. I have no problems correcting LLM output when needed.

When the situation changes, then we can talk about pulling back on LLM usage.

And the crucial point is: me.

I'm not saying that everyone who uses an LLM to generate code will avoid falling into "not able to use LLM generated code".

I now generate 90% of the code with an LLM and I see no issues so far. Just implementing features faster. Fixing bugs faster.


> The same logic applies to your statement:

>> Do that enough and you won't know enough about your codebase to recognise errors in the LLM output.

> Okay, when that happens, then sure, you'll have a problem.

It's not exactly the same: how will you know that you are missing errors due to lack of knowledge?

> I now generate 90% of the code with an LLM and I see no issues so far.

Well, that's my point, innit? "I see no errors" is exactly the same observable outcome as "missing the errors that are generated".


You do have a point, but as the sibling comment pointed out, the negative eventuality you are describing also has not happened for many devs.

I quite enjoy being much more of an architect than I could be for 90% of my career so far (24 years in total). I have coded my fingers and eyes out, and I spot idiocies in LLM output ranging from the trivially easy to those needing an hour of careful review.

So, I don't see the "soon" in your statement happening, ahem, anytime soon for me, and for many others.


> I have coded my fingers and eyes out, and I spot idiocies in LLM output ranging from the trivially easy to those needing an hour of careful review.

This is exactly the opposite experience of sibling, who reports not seeing any issues in the generated code.

You report spotting idiocies, he reports seeing nothing, and you are both making the same argument :-/


What happens when your LLM of choice goes on an infinite loop failing to solve a problem?

What happens when your LLM provider goes down during an incident?

What happens when you have an incident on a distributed system so complex that no LLM can maintain a good enough understanding of the system as a whole in a single session to spot the problem?

What happens when the LLM providers stop offering loss leader subscriptions?


AFAIK everything I use has timeouts, retries, and some way of throwing up its hands and turning things back to me.

I use several providers interchangeably.

I stay away from overly complex distributed systems and use the simplest thing possible.

I plan to wait for some guys in China to train a model on traces that I can run locally, benefitting from their national “diffusion” strategy and lack of access to bleeding-edge chips.

I’m not worried.


> What if that process changes and the language you’re reading is a natural one instead of code?

Natural language is not a good way to specify computer systems. This is a lesson we seem doomed to forget again and again. It's the curse of our profession: nobody wants to learn anything if it gets in the way of the latest fad. There's already a historical problem in software engineering: the people asking for stuff use plain language, and there's a need to convert it to a formal spec, and this takes time and is error prone. But it seems we are introducing a whole new layer of lossy interpretation to the whole mess, and we're doing this happily and open eyed because fuck the lessons of software engineering.

I could see LLMs being used to check/analyze natural language requirements and help turn them into formal requirements though.


> But it seems we are introducing a whole new layer of lossy interpretation to the whole mess (...)

I recommend you get acquainted with LLMs and code assistants, because a few of your assertions are outright wrong. Take for example any of the mainstream spec-driven development frameworks. All they do is walk you through the SRS process using a set of system prompts to generate a set of documents featuring usecases, functional requirements, and refined tasks in the form of an actionable plan.

Then you feed that plan to a LLM assistant and your feature is implemented.

I seriously recommend you check it out. This process is far more structured and thought through than any feature work that your average SDE ever does.


> I recommend you get acquainted with LLMs and code assistants

I use them daily, thanks for your condescension.

> I could see LLMs being used to check/analyze natural language requirements and help turn them into formal requirements though.

Did you read this part of my comment?

> Take for example any of the mainstream spec-driven development frameworks. All they do is walk you through the SRS process using a set of system prompts to generate a set of documents featuring usecases, functional requirements, and refined tasks in the form of an actionable plan.

I'm not criticizing spec-driven development frameworks, but how battle-tested are they? Does it remove the inherent ambiguity in natural language? And do you believe this is how most people are vibe-coding, anyway?


> Did you read this part of my comment?

Yes, and your comment contrasts heavily with the reality of using LLMs as code assistants, as conveyed in comments such as "a whole new layer of lossy interpretation". This is profoundly wrong, even if you use LLMs naively.

I repeat: LLM assistants have been used to walk users through software requirements specification processes that not only document exactly what usecases and functional requirements your project must adhere to, but also create tasks and implement them.

The deliverable is both a thorough documentation of all requirements considered up until that point and the actual features being delivered.

To drive the point home, even Microsoft of all companies provides this sort of framework. This isn't an arcane, obscure tool. This is as mainstream as it can be.

> I'm not criticizing spec-driven development frameworks, but how battle-tested are they?

I really recommend you get acquainted with this class of tools, because your question is in the "not even wrong" territory. Again, the purpose of these tools is to walk developers through a software requirements specification process. All these frameworks do is put together system prompts to help you write down exactly what you want to do, break it down into tasks, and then resume the regular plan+agent execution flow.

What do you think "battle tested" means in this topic? Check if writing requirements specifications is something worth pursuing?

I repeat: LLM assistants lower the cost of formal approaches to the software development lifecycle by orders of magnitude, to the point where you can drive each and every single task with a formal SRS doc. This isn't theoretical, it's months-old stuff. The focus right now is to remove human intervention from the SRS process as well, with the help of agents.


> Yes, and your comment contrasts heavily with the reality of using LLMs as code assistants, as conveyed in comments such as "a whole new layer of lossy interpretation". This is profoundly wrong, even if you use LLMs naively.

Most people, when told they sound condescending, try to reframe their argument in order to remove this and become more convincing.

Sadly, you chose to double down instead. Not worth pursuing.

>This isn't theoretical, it's months-old stuff

Hahaha! "Months old stuff"!

Disengaging from this conversation. Over and out.


That's a bold assertion without any proof.

It also means you're so helpless as a developer that you could never debug another person's code - because how would you recognize the errors if you haven't made them yourself?


This is not correct. CSS is the style rules for all rendering situations of that HTML, not just your single requirement that it "looks about right" in your narrow set of test cases.

Nobody writing production CSS for a serious web page can avoid rewriting it. Nobody is memorizing anything. It's deeply intertwined with the requirements as they change. You will eventually be forced to review every line of it carefully as each new test is added or when the HTML is changed. No AI is doing that level of testing or has the training data to provide those answers.

It sounds like you're better off not using a web page at all if this bothers you. This isn't a deficiency of CSS. It's the main feature. It's designed to provide tools that can cover all cases.

If you only have one rendering case, you want an image. If you want to skip the code, you can just not write code. Create a mockup of images and hand it off to your web devs.


Eh, I've written so much CSS, and I hate it so much, that I use AI to write it now - not because it's faster or better at doing so, but just so I don't have to do it.

So AI is good for CSS? That’s fine, I always hated CSS.

> It lowers the cost for experimentation. A whole series of “what if this was…”

Anecdotal, but I've noticed that while this is true, it also adds the danger of not knowing when to stop.

Early on I would take forever trying to get something exactly as it is in my head. Which meant I would spend more time in one sitting than if I had built it by hand like before.

Now I try to time box with the mindset "good enough".


> But there’s no way you can make wholesale changes to a layout faster than a machine.

You lost me here. I can make changes very quickly once I understand both the problem and the solution I want to go with. Modifying text is quite easy. I spend very little time doing it as a developer.


Don't worry. In a few years we'll be like the COBOL programmers who still understand how things work, our brains haven't atrophied, and we make good money fixing the giant messes created by others.

Sounds awful. I'm not interested in fixing giant messes. I'll just be tinkering away making little things (at scale) where the scope is very constrained and the fixing isn't needed.

People can do their vibecoding to make weird rehackings of stuff I did, almost always to make it more mainstream, limited, and boring, and usually to some mainstream acclaim. And they can flame out, not my problem.

I'm not fixing anybody's giant mess. I'm doing the equivalent of simply refusing to give up COBOL. To stop me, people will have to EOL a huge amount of working useful stuff for no good reason and replace it with untrustworthy garbage.

I am aware this is exactly the plan on so many levels. Bring it. I don't think it's going to be popular, or rather: I think only at this historical moment can you get away with that and not immediately be called on it, as a charlatan.

When our grandest celebrity charlatans go in the bin, the time for vibecoding will truly be over.


AI doesn't just type code for you. It can assist with almost every part of software development: design, bug hunting, code review, prototyping, testing.

It can even create a giant ball of mud ten times faster than you can.

A Luddite farm worker can assist in all those things too - the question is, can they assist in a useful manner?

Not only can it, but it does.

Just as I was reading this, Claude implemented drag & drop of images out of SumatraPDF.

I asked:

> implement dragging out images; if we initiate drag action and the element under cursor is an image, allow dragging out the image and dropping on other applications

Then it didn't quite work, so I followed up with:

> I'm testing it by trying to drop onto a web application that accepts dropped images from the file system, but it doesn't work for that

Here's the result: https://github.com/sumatrapdfreader/sumatrapdf/commit/58d9a4...

It took me less than 15 mins, with testing.

Now you tell me:

1. Can a farm worker do that?

2. Can you improve this code in a meaningful way? If you were doing a code review, what would you ask to be changed?

3. How long would it take you to type this code?

Here's what I think: No. No. Much longer.


Why is it using a temp file? Is there really no more elegant way to pass around pointers to images than spilling to disk?

Of course there is, but slop generators be slopping

What is it, o wise person stingy with the information?

I admire you for what you've created wrt Sumatra. It's an excellent piece of software. But, as a matter of principle, I refuse to knowingly contribute to codebases using AI to generate code, including drive-by hints, suggestions, etc.

You, or rather Claude, are not the first to solve this problem and there are examples of better solutions out there. Since you're willing to let Claude regurgitate other people's work, feel free to look it up yourself or have Claude do it for you.


The code is really bad, so I'd have a lot to say about it in a review. Couldn't do it in 15 minutes, though.

It has always seemed to me like lootbox behavior - highly addictive for the dopamine hit you get.

"This is what I've always found confusing as well about this push for AI."

I think it's a few things converging. One is that software developers have become more expensive for US corporations for several reasons and blaming layoffs on a third party is for some reason more palatable to a lot of people.

Another is that a lot of decision makers are pretty mediocre thinkers and know very little about the people they rule over, so they actually believe that machines will be able to automate what software developers do rather than what these decision makers do.

Then there's the ever-present allure of the promise that middle managers will somehow wrestle control over software crafts from the nerds, i.e. what has underpinned low-code business solutions for ages and always, always comes with very expensive consultants, commonly software developers, on the side.


> This is what I've always found confusing as well about this push for AI.

They want you to pay for their tokens at their casino and rack up a 5-6 figure bill.


> This is what I've always found confusing as well about this push for AI. The act of typing isn't the hard part - it's understanding what's going on, and why you're doing it.

This is a very superficial and simplistic analysis of the whole domain. Programmers don't "type". They apply changes to the code. Pressing buttons on a keyboard is not the bottleneck. If it was, code completion and templating would have been a revolutionary, world-changing development in the field.

The difficult part is understanding what to do and how to do it, and why. It turns out LLMs can handle all these types of task. You are onboarding onto a new project? Hit an LLM assistant with /explain. You want to implement a feature that matches a specific requirement? You hit your LLM assistant with /plan followed by apply. You want to cover some code with tests? You hit your LLM assistant with /tests.

In the end you review the result, and do with it whatever you want. Some even feel confident enough to YOLO the output of the LLM.

So while you still try to navigate through files, others already have features out.


> If that was the case, code completion and templating would have been a revolutionary, world changing development in the field.

And yet this is "AI is world changing, look at how fast it can change code!"

> So while you still try to navigate through files, others already have features out.

Your argument is "it can also read code faster too" - but it doesn't have the same tacit knowledge within the codebase. Documentation and comments can be wrong sometimes. Names are poorly chosen

That's the thing about reviews: the implementor doesn't know what's needed for the feature, but the reviewer now needs to. The latter can't trust the former anymore.

/Explain is constantly wrong. /Plan is constantly over-engineered. /Tests are constantly fragile.

The only benefit AI has produced for existing codebases is that people now care a lot more about getting documentation right and about adding little snippets/how-tos called "skills" or whatever.


There's a good reason everyone calls them microslop these days. The sooner we're all able to ditch this crappy company, the better - they're actively holding back the tech industry at this point

Yea, I'm in the process of converting our complete ETL infrastructure from SSIS/SQL Server to Python/PostgreSQL. Next step is Office 365, which will be more difficult, but doable since we are a small company anyway.
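For a flavour of what the hand-rolled Python side of such a migration looks like, here's a minimal, hypothetical sketch - sqlite3 stands in for PostgreSQL here (with psycopg you'd swap the connection and use %s placeholders), and all table/column names are made up:

```python
# Minimal ETL step: extract rows, clean/transform them, load into SQL.
import sqlite3

def run_etl(src_rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS sales (region TEXT, total REAL)")
    # transform: normalise region names, drop rows with a missing amount
    cleaned = [(r["region"].strip().upper(), float(r["amount"]))
               for r in src_rows if r.get("amount") is not None]
    conn.executemany("INSERT INTO sales VALUES (?, ?)", cleaned)
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0]

conn = sqlite3.connect(":memory:")
rows = [{"region": " west ", "amount": "10.5"},
        {"region": "east", "amount": None}]   # malformed row gets dropped
n = run_etl(rows, conn)
print(n)  # 1
```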

Apple suite (Pages, Numbers etc) works well, has good mobility to mobile, and is free.

Apple Mail and Apple Calendar are fine to replace Exchange, as is Thunderbird (see 1), but Mail is more turnkey (1-click pairing with MS Exchange)

You can downgrade your O365 licenses to Exchange Plan 1 and keep your email hosting at a tiny fraction of the price of full 365 suite.

(1): Beware: Thunderbird has an open and unsolved bug that randomly deletes all your emails - kind of like a 1d1,000,000 dice roll.


Are you converting the SSIS automatically somehow or rewriting it?

They have been holding back the tech industry for decades now.

To be fair, the tech industry has been holding itself back for decades now too, since lots of people seemingly have a somewhat low price for going from FOSS evangelist to wearing a "Microsoft <3 Open Source" t-shirt.

that's just a byproduct of "job creators" holding the keys to a comfortable life over everyone's head.

i don't think it's fair to conflate the tech industry's self-owns with microsoft's damages. microsoft has for decades poured untold resources and money into capturing everything they possibly could to sustain themselves - honestly what i'd call cultural and software vendor lock-in. we're only just now seeing the gaming industry take its first real footsteps towards non-windows targets, but for the most part the decades of evangelizing microsoft apis and bankrolling schools and education systems to carry courses for their way of doing things makes that a particularly uphill battle that's going to take a lot more time. people have built entire careers out of the microsoft way in multiple industries. pure microsoft houses are still everywhere at many orgs, so many of them don't even recognize that there is another path. there are plenty of infra/dbadmin/devops people who are still pure windows. there are multiple points where microsoft did have the best-in-class solution for something, but these days you'd be hard pressed not to go another way if you were starting from scratch. problem is, such a lift and shift is really hard to do for orgs that have spent decades being a microsoft shop.

in a roundabout way, this sort of translates to real long lasting impact/damage to me. microsoft has always been such a force over history that it caused a massive rift in computing. no matter how much they embrace linux and claim to not fight the uphill battle of open source anymore, that modus operandi of locking people into their suite of things still exists on so many fronts and is in some ways more in your face than it's ever been. there's no benefit of the doubt to give here, i just have a hard time choosing microsoft for... well anything.


Microsoft has been trying to kill everything in computers that's not-Microsoft, for as long as I've been alive. Their actual power comes and goes, strengthens and weakens, but it's been a continuous background threat to personal computing since the first day something other than Microsoft tried to get traction in the industry.

Looking at the rest of the tech industry in 2026 that might be a blessing.

What does this even mean? It's like throwing around the word 'bloat'.

We can explain it to you, but we can't understand it for you.

Explanation: Microslop is a power hungry, greedy and frankly evil corporation whose only goal is complete financial domination of the government, business, and personal tech industries. They actively promote making regressive software, increasing complexity, and hiding straightforward processes behind an information veil.

Example: Go to learn.microsoft.com and try to actually learn HOW to do anything. You'll read 35 pages of text talking about the concept of working with a specific microslop product but not 1 single explicit example of HOW to accomplish a specific task.

Example: Windows 11

Example: Copilot

The whole company is run by backassward tech hicks and digital yokels who can't think past a dime on the floor for a dollar in customer satisfaction, and somehow they run the majority of non-server space or personal device tech on the planet.


Funny enough, I do read the Copilot Studio and Dynamics Customer Service and Power Platform documentation and understand it. But reading documentation from any vendor is a skill. Don’t throw me in front of Google or Oracle documentation and expect me to understand it off the bat.

And of course companies in the US are wanting to make money/capture markets. They’re not a charity. None of that has any relation to holding back the industry. Unless you wish to explain how they hold back all FOSS projects.

You don’t need to be rude in your replies. This is HN, not reddit.


Outside of work, I don't use Windows very often, if at all. I have a 2017 laptop that Microsoft made, and it is so damn sluggish for absolutely no reason - it's VERY VERY vanilla, mind you.

Apple also holds back the tech industry in many ways. All companies seem willing to put profits before progress.

active directory and excel run the world.

what is apple doing that is similar?


How is active directory and excel holding the tech industry back?

Apple is holding the tech industry back by forbidding any browser on iOS except Safari and then refusing to implement any APIs that would allow web applications to compete with their app store. Apple is choosing profit over progress.


i remember years and years ago learning some posix/shell syntax and working in terminal. felt like my love for windows unraveled in real time. these days using windows... feel like i gotta take a shower after. like many, i was just raised on windows; it was the household operating system. i had like 20 years of general computer usage under my belt on windows before i finally felt a mac trackpad for the first time. that hardware experience alone was the first pillar kicked out from under my "windows is the best" philosophies. then i got into coding, then i tripped and fell out of hourly boeing slave labor into a sql job (lost 55% yearly income, no regrets yo). then i started discovering the open source world, and learned just how much computing goes on outside of the world of windows and how many insanely bright minds are out there contributing to... not microsoft. now i have linux and macos machines everywhere. i still haven't found the bottom, but the last 6-7 years or so have been a really rich journey.

currently have a 32bit win xp env spun up in 86box just to compile a project in some omega old visual studio dotnet 7 and the service pack update at the time (don't ask). it is seriously _wild_ being in there, feels like stepping into a time machine. nostalgia aside, the OS is for the most part... quiet. doesn't bother you, everything is kind of exactly where you expect it to be, no noise in my start menu, there isn't some omega bing network callstack in my explorer, no prompts to o365 my life up.

it feels kinda sad, what an era that was. it's just more annoying to do any meaningful work in windows these days.

im currently working with c/cpp the idiot way (nothing about my story is ever conventional sigh), by picking a legacy project from like 22 years ago. this has forced me to step back into old redhat 7.1+icc5, old windows xp + dotnet7 like i explained above, and im definitely taking the most unpragmatic approach ever diving in here.. but there's one thing that absolutely sticks out to me: microsoft has always tried to capitalize on everything. tool? money. vendor lock. os? money. vendor lock. entire industries/education system capture? lotta money. lotta vendor lock. lotta generational knowledge lock.

they are lucky people are still using github. they've tried to poke the bear a few times and they're slowly but surely enshittifying the place, but i'm just kinda losing any reverence for microsoft altogether. microsoft has been big for a hot minute now, they have their eras. you can feel when things are driven by smart visionary engineers working behind the scenes, and you can tell when things are in pure microservice, get-rich-or-die-trying slop mode. yea, microsoft has.. always been vendor-lock aggro and kinda hostile, but the current era microsoft is by far the grossest it's ever been. see: microsoft teams (inb4 "i use teams every day, i dont have a problem with it")

im aware people smarter than me can write diatribes on why windows is the best at x thing, but im only informed by my own experience of having to use all three (linux/macos/windows) for my professional work life: i grew up thinking windows was the best.. now im like mostly confident that windows is actually the worst lol. by a pretty damn decent margin. i was gaslit for ages


Yeah. I felt similarly when I moved to Linux. Microsoft seemed to make people dumber. I do actually use both Linux and Windows (Win10 only), largely for testing various things, including java-related software. But every time I use Windows, I am annoyed at how slow everything is compared to Linux. (I should mention that I compile almost everything from source on Linux, so most of the default Linux stack I don't use; many linux distributions also suck by default, so I have to uncripple the software stack. I also use versioned appdirs similar to how GoboLinux does, but in a more free form.)

Microsoft has spent most of its life as a corporate bureaucracy that produces sales-and-marketing content, some of which happens to moonlight as software.

> feel like i gotta take a shower after

I run Crossover and I feel like I gotta take a shower after. Just knowing there's a folder called drive_c on my Mac is the stuff of nightmares.


Quantum computing, and the generic term 'quantum', is gearing up to be the next speculative investment hype bubble after AI, so prepare for a lot of these kinds of articles

nah. governments around the world are hoovering up traffic today with the hope of a "cheap" (by nation-state standards) quantum computer. Some of the secrets sent today are "evergreen" (i.e. still relevant 10+ years into the future), amongst a whole lot of cruft. There is massive incentive to hide the technology to keep your peers transmitting with vulnerable encryption as long as possible.

For sure, that or just ensuring they have laws in place that grant them access to the unencrypted data we are sending to CDNs operating in their jurisdiction (when necessary for national security reasons).

At least it's time bound: hope to have this job done by 2029!

This seems like a good way to weed out models: ask them to include the term capybara in their commit messages


Contracts aren't for handling errors. That blog post is extremely out of date, and doesn't reflect the current state of contracts

Modern C++ contracts are being sold as purely a debugging facility. You can't rely on contracts to catch problems the way you would an assert, and that's an intentional part of the design of contracts


>abuse like bots, scraping

10/10, I've got no notes


I definitely get this. The thing that gives me hope is that you only need to poison a very small % of content to damage AI models pretty significantly. It helps combat the mass scraping, because a significant chunk of the data they get will be useless, and it's very difficult to filter it by hand


The asymmetry is what makes this very interesting. The cost to inject poison is basically zero for the site owner, but the cost to detect and filter it at scale is significant for the scraper. That math gets a lot worse for them as more sites adopt it. It doesn't solve the problem, but it changes the economics.


Everyone, including the US, acknowledges that the US killed a whole bunch of kids


I'm not sure why the other reply here was flagged and killed. The US absolutely has NOT acknowledged that they killed school children. The DoW and other government officials have only publicly stated that an investigation is taking place.


I don't know about the US internal propaganda, but international media seems pretty certain on this war crime


This is incorrect. The US government (via Secretary Hegseth) has only confirmed that they are investigating the incident.

What the US has NOT confirmed:

- that they are responsible for the bombing

- who hit the school

- whether the school was an intended target of US strikes

- whether it was struck intentionally

- that it was mistaken for a military site

- any casualty count

- whether there were civilians or children in the casualty count

The US has explicitly DENIED:

- That they deliberately target civilian targets

These are the facts about what the US has actually confirmed. We are all entitled to our opinion of what happened. But we should be able to acknowledge that they are just that: opinions. We don't actually know what happened. And I find it scary and dangerous that so many people, on hacker news and elsewhere, are acting like they do.

Sources:

- https://www.war.gov/News/Transcripts/Transcript/Article/4421...

- https://www.war.gov/News/Transcripts/Transcript/Article/4434...


Constant lies, incompetence and corruption. Why would anyone trust anything they have to say or any investigation they might conduct?

