budro's comments

But they're not willing. And they'll often just cause more damage/loss than they provide in benefit to the company. Even the most benign drug which people claim is fine is one that makes you lazy and slow. Why would you employ someone so addicted to it that they can't abstain for a few weeks?

"Piss tests" filter out a lot of anti-social behavior.


>But they're not willing.

This is bullshit.

>And they'll often just cause more damage/loss than they provide in benefit to the company.

That's for the company to decide.

>Even the most benign drug which people claim is fine is one that makes you lazy and slow.

If you need someone hyper to do it, then hire the meth freaks.

>Why would you employ someone so addicted to

You seem to have misread my comment. I haven't deleted it, go back and check. I didn't say "companies should be forced to hire them". I promise that's not what you saw. But no need to take my word for it, move your eyes upward about 2 inches and check it out.

>"Piss tests" filter out a lot of anti-social behavior.

No, it literally causes people to steal copper wiring and pipes, turning salvageable houses into wrecks and causing millions in property damage and, in some cases, deaths. All piss tests actually do is let companies reject the borderline cases, the people whose drug use was invisible enough that only a test could detect it. And I don't know how you can't see this; you're being irrational.


I think it mostly comes down to the standard library guiding you down this path explicitly. The C stdlib is quite outdated and is full of bad design that affects both performance and ergonomics. It certainly doesn't guide you down the path of smart design.

Zig _the language_ barely does any of the heavy lifting on this front. The allocator and I/O stories are both just stdlib interfaces. Really the language just exists to facilitate the great toolchain and stdlib. In my experience the stdlib makes all the right choices, and the only time it doesn't is when an API was quickly created to get things working and hasn't been revisited since.

A great case study of the stdlib being almost perfect is SinglyLinkedList [1]. Many other languages implement it as a container, but Zig has opted to make it intrusive: the node is embedded in your element. This might confuse a beginner who would expect SinglyLinkedList(T) instead, but it has implications for allocation, and it turns out that embedding the node gives you a more powerful API. And of course all operations are defined with performance in mind: prepend is given to you since it's cheap, but if you want append you have to implement it yourself (it's a one-liner, but its cost is now visible to the reader).
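
To make the intrusive shape concrete, here's the pattern roughly transliterated into Rust. This is a sketch of the idea only, not Zig's actual API; Hook, List, and Task are my names:

    use std::ptr::NonNull;

    // The hook plays the role of Zig's node: a bare link field you embed in
    // your own struct, so putting an element on the list allocates nothing.
    #[derive(Default)]
    struct Hook {
        next: Option<NonNull<Hook>>,
    }

    struct List {
        first: Option<NonNull<Hook>>,
    }

    impl List {
        // prepend is O(1), which is why it's the operation the API hands you;
        // an O(n) append is left for the caller to write if they want it.
        fn prepend(&mut self, hook: &mut Hook) {
            hook.next = self.first;
            self.first = Some(NonNull::from(hook));
        }
    }

    struct Task {
        _id: u32,
        hook: Hook, // the link lives inside the element, not around it
    }

    fn main() {
        let mut list = List { first: None };
        let mut a = Task { _id: 1, hook: Hook::default() };
        let mut b = Task { _id: 2, hook: Hook::default() };
        list.prepend(&mut a.hook);
        list.prepend(&mut b.hook);

        // Traversal chases the embedded links; getting back from a Hook to
        // its containing Task would use the usual container_of/offset_of trick.
        let mut cur = list.first;
        let mut count = 0;
        while let Some(hook) = cur {
            count += 1;
            cur = unsafe { hook.as_ref() }.next;
        }
        assert_eq!(count, 2);
    }

The payoff is that an element can join a list without the list ever touching an allocator, which is exactly the kind of decision a container-style SinglyLinkedList(T) would have made for you.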

Little decisions like this add up to make the language feel great to use, and make the stdlib a genuinely impressive resource for learning new things.

[1] https://ziglang.org/documentation/master/std/#std.SinglyLink...


Antithesis talks about some very particular use cases, but they're not the first to explore this. SQLite has something similar in the form of their `testcase` macro. [1]
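
Roughly, the idea is an assert-like macro that, instead of claiming a condition never holds, proves the condition was actually exercised by the test suite. Sketched here in Rust rather than SQLite's actual C macro, with a made-up coverage sink so it runs:

    // Hypothetical coverage sink: SQLite's real macro is C and reports into
    // its own coverage machinery; this just makes the sketch runnable.
    mod coverage {
        pub fn hit(file: &str, line: u32) {
            eprintln!("boundary exercised at {file}:{line}");
        }
    }

    // In a real setup this arm would only exist in coverage builds and
    // compile to nothing otherwise.
    macro_rules! testcase {
        ($cond:expr) => {
            if $cond {
                crate::coverage::hit(file!(), line!());
            }
        };
    }

    fn lookup(items: &[u32], i: usize) -> Option<u32> {
        // Prove the test suite hits the off-by-one boundary, not just the
        // easy in-range and way-out-of-range cases.
        testcase!(i == items.len());
        items.get(i).copied()
    }

    fn main() {
        let items = [10, 20, 30];
        assert_eq!(lookup(&items, 3), None); // exercises the boundary
    }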

What's most shocking about their whole website is how it shows just how unserious the software industry at large is about testing. It's not surprising when a good chunk of the programmers I know are vehemently opposed to "adding too many assertions", let alone something like this.

[1] https://www.sqlite.org/testing.html#testcase


I think what the article gets at, but doesn't quite deliver on, is similar to this great take from Casey Muratori [1] about how programming with a learning-based mindset means that AI is inherently not useful to you.

I personally find AI code gen most useful for one-off throwaway code where I have zero intent to learn. I imagine this means that the opposite end of the spectrum where learning is maximized is one where the AI doesn't generate any code for me.

I'm sure there are some people for whom the "AI-Driven Engineering" approach would be beneficial, but at least for me, replacing those AI coding blocks with just writing the code myself is much more enjoyable, and thus more sustainable for actually delivering something at the end.

[1] https://youtu.be/apREl0KmTdQ?t=4751 (relevant section is about 5 minutes long)


Interesting take.

I think it boils down to personal preference: some people want to use AI and others don't. I also learn when coding with my AI agent; I learn to use the tool more effectively. As someone who has been coding for 10 years, I find more pleasure in AI-assisted coding.

But aside from taste, the product and the business don't care about what I like. It's about shipping quality updates more quickly. And while there might be some tension in saying this, I'm convinced that I can do that much more quickly with AI-assisted coding.


"learning is maximized is one where the AI doesn't generate any code for me"

Obviously you have to work to learn, but to me this is a bit like saying learning is maximized when you never talk to anyone or ask for help — too strong.


I don't think it was that strong of an over-generalization. AI doesn't seem to help out in the same way a human would. My teammates will push back and ask for proof of effort (a PR, some typedefs, a diagram, etc.). And sometimes they'll even know how to solve my problem since they have experience with the codebase.

On the other hand you have AI which, out of the box, seems content to go along with anything and will happily write code for me. And I've never seen it have a single insight on the level of my teammates'. All of which is to say, AI doesn't really feel like something you can properly "ask" in that sense. It's especially far from that when it's just generating code and nothing else.


Type hints seem fantastic for when you're in maintenance mode and want to add sanity back to a system via automated tooling.

However for new projects I find that I'd much rather pick technologies that start me off with a sanity floor which is higher than Python's sanity ceiling. At this point I don't want to touch a dynamically typed language ever again.


Vibes are totally different though. People go childfree by choice, whereas men go MGTOW because there __is no other choice__ (that they find easy or preferable). I've heard it described before as "men sent their own way (MSTOW)", which is fitting since one usually identifies with the label after many unsuccessful relationships.


I work on embedded appliance software at my job. A few comments:

It's quite easy to find yourself with non-trivial boot times for some unfortunate reasons. At least in my org, the software as a whole is RAM/ROM constrained rather than speed constrained. Even this close to bare metal, devs tend to write over-abstracted code riddled with inefficiencies. And of course most people don't profile the application at all. This is a symptom of the software being under-tested, imo. I have personally written tooling to integration test the whole application for a few appliances, and for one appliance, initializing the application 56 times took over 1 second. On a modern machine it should take milliseconds. After profiling I found that 99%+ of our time was spent servicing a subscription tied to all events that only needed to subscribe to one or two.

Along with that there are other reasons for long apparent boot up times:

- Waiting for other boards to connect and talk to each other. Your UI can't do anything until it knows the state from the main control.

- Randomized delays to prevent current surges after a blackout. You'll see this on ACs or other appliances that might have hundreds of identical units in a building.

- Waiting for flash memory to be readable

All of this adds up to seconds of boot time. Yet ultimately none of it matters to the business people, because we're an appliance company, NOT a software company. Our software is mostly incidental to having a functioning product, and boot times could go way higher before the business worried. Though recently, yes, we have entered the data market, hence the push for smart features. Word to the wise: avoid any appliance with Android in it if you don't like the idea of forced connected features!

I unfortunately don't have any solutions to most of the problems presented in the article. All I can do is continue to try writing bullet-proof software and push back against forced connected features.


I'm sorry for shitting on your job, but it seems like the solution is to bring back the buttons we had in the 80s and 90s and drop all the software garbage? I don't want a UI, I want an On button.


It's fine, I take every opportunity I can to shit on my own job lol.

Ironically, what you're looking for can be found in the lowest-end and the highest-end products. Low end means few features, so you can get away with just a knob and maybe a few LEDs. Look at Hotpoint (GE's low-end brand) or a low-end LG washer [0].

High end usually forgoes a flashy UI as well since it's about the style and being a centerpiece.

The mid end is where it gets weird, because features justify the extra cost, and to make those features available you need an LCD screen and more buttons.

In all these categories you'll run into software, though. It's cheaper than an electro-mechanical solution. We only fall back to the old ways when required for safety/compliance.

[0] https://www.homedepot.com/p/324433017


You can probably find the ones you like, with simple interfaces and a few physical buttons, for an additional price. Simple things are now premium.


> And this is where Rust differs; Rust will reduce the likelihood of bugs from happening in the first place. I'm not saying Zig doesn't also do that, Rust just does it more.

I'd argue that it does this a LOT more. When it comes to spatial safety, Rust and Zig are on par. However, in terms of temporal safety, I think Zig lags far behind Rust... along with basically every other compiled systems language.

If you had to come up with a single good reason to use Rust over Zig, it would definitely be temporal safety. A majority of applications don't need this though, or can be designed such that they don't.
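
To make "temporal safety" concrete, here's a minimal, deliberately broken Rust sketch: the compiler rejects it outright, whereas the equivalent pointer use in Zig or C compiles fine and becomes a use-after-free you can only hope to catch at runtime:

    fn main() {
        let dangling: &String;
        {
            let s = String::from("freed too soon");
            dangling = &s; // borrow of `s` escapes the block
        } // `s` is dropped here while still borrowed
        // error[E0597]: `s` does not live long enough
        println!("{dangling}");
    }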

One of my main issues with Rust is that it makes it easy to write spaghetti code that uses reference counting all over the place. I don't recommend Rust to people until they become an "N+1 programmer" capable of what Casey Muratori calls "grouped element thinking" [1]. At that point, most of that programmer's problems are solved, and suddenly the Zig vs Rust debate becomes much more nuanced.

[1] https://www.youtube.com/watch?v=xt1KNDmOYqA


> until they become an "N+1 programmer" capable of what Casey Muratori calls "grouped element thinking"

Thank you for this great link.


> When it comes to spatial safety

Only at the cost of runtime checks and/or misusable allocator passing.


> Only at the cost of runtime checks

And those checks are well worth paying for, enough that Rust also has them! I'm sure Rust would have even worse ergonomics if it insisted on omitting runtime bounds/unwrap checks. The performance cost is low enough that every systems programming language should have spatial safety. To illustrate, Google found that hardening libc++ by adding bounds checks led to only a 0.30% performance impact on their services [1]. It's not hard to imagine a similar runtime cost for Zig and Rust. Both languages try to optimize the checks away when the operation is known safe, although maybe Rust is more successful there thanks to the borrow checker.
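
A minimal sketch of what those checks look like in Rust (Zig's behavior in its safe build modes is analogous):

    fn main() {
        let buf = [1u8, 2, 3];
        // An index the compiler can't see through (argc-dependent).
        let i: usize = std::env::args().count() + 2;

        if i < buf.len() {
            // Indexing is still written as a checked access, but inside
            // this branch the compiler can prove it's in range and is
            // free to drop the check.
            println!("{}", buf[i]);
        }

        // Out of range? Plain `buf[i]` would panic instead of reading past
        // the array; the fallible API makes the runtime check explicit.
        match buf.get(i) {
            Some(x) => println!("got {x}"),
            None => println!("index {i} is out of bounds, caught safely"),
        }
    }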

> and/or misusable allocator passing.

Could you clarify how allocator misuse might lead to a spatial safety violation?

[1] https://security.googleblog.com/2024/11/retrofitting-spatial...


I've done a couple of projects in Rust, and I can appreciate this sentiment.

I don't think my code was pretty, and I know it wasn't idiomatic. But I have some experience writing concurrent code, so I understand the reasoning behind Rust's borrowing semantics, why lifetimes are important, and the difference between boxed values, Rc, and Arc; I additionally understand the semantics of semaphores, mutexes, spinlocks, and critical sections.

(As an aside, I don't know if I'm an N+1 programmer. My code works long term, requires very little maintenance, and generally stays out of the way. Just the way I like it.)

But I keep seeing these recurring themes in posts from Rustaceans along the lines of "the compiler tells you what's wrong" and "if it compiles, it's correct!"

These kinds of statements worry me. Not necessarily because I think their code is going to blow something up, but because I think a sizeable portion of the community does not really understand the guarantees Rust provides, or in what contexts they are valid. I mean, sure, you might not have a data race, but you sure as hell can code yourself into a race condition or a deadlock, and from what I understand from HNers, these situations are not all that uncommon in Rust crates. I'm also led to believe that the panic-handling situation in some crates isn't ideal.

You shouldn't haphazardly Arc<Mutex<T>> all the things just because that gives you the Send + Sync you think you're looking for (or the compiler is looking for). You should understand what the hell you're doing, be able to tell whether those things are necessary, and understand the ramifications of using those abstractions.
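
A minimal sketch of exactly that gap, using nothing beyond std: this compiles cleanly and still locks up at runtime:

    use std::sync::{Arc, Mutex};

    fn main() {
        let counter = Arc::new(Mutex::new(0));

        let a = counter.lock().unwrap();
        // Send + Sync are satisfied and the borrow checker has no
        // complaints, but std's Mutex is not re-entrant: taking the same
        // lock again on this thread deadlocks (or panics; the
        // documentation allows either).
        let b = counter.lock().unwrap();
        println!("{} {}", *a, *b);
    }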

I think there's a lot of good going on in the Rust ecosystem, and I wish I had more work in that area, but there's a lot of blind advocacy that assumes the Rust approach is the best, even though the language has serious ergonomic limitations in certain low-level scenarios. My whole career I've had to focus on scope and on objects that outlive their scope, but now every other word I hear from the Rust community is "Lifetimes" and "Ownership", as if these concepts were completely unfamiliar to developers before, and as if the Rust Way(tm) is not only THE WAY but THE ONLY RIGHT WAY.

I don't want to go too far off the rails, because I find the ecosystem intriguing, and I see tremendous value there. Whether those approaches stand the test of time isn't a given (although let's face it, they probably will because they are sound and they are gaining developer mindshare, which is the most important factor).

I just worry about ergonomics. Coding is not supposed to be easy, so please don't misunderstand, but coding, especially for those who do it day in and day out for decades, should not be a chore. There are definitely some QOL improvements that could be realized in the near term, although I'm not sure what those would look like.


Most of the human-made slop you'll see is going to be, at least on its surface, high quality and well produced since that's what goes viral and gets shared. To that end, I agree with you.

It is worth noting, though, that the other 99% of human-made slop never reaches you, since it never gets popular; hard-to-filter human-made slop can seem over-represented purely through survivorship bias.

