
It is, because of the baked-in asymmetry. "I couldn't be bothered to write it, but you have to read it". Unless your expectation is that I'm going to have my chatbot summarize the messages from your chatbot, in which case, maybe we should just both ride off into the sunset.

Silicone doesn't require plasticizers (because it's elastic on its own) or fire retardants (because it doesn't burn easily). The material itself is also considered biologically inert and is less affected by temperature, solvents, etc. So it's usually the best choice for stuff like that. The reason it's not as common is that it's more expensive and not as durable. It has relatively poor abrasion and cut resistance.

But then, I wouldn't worry about headphones at all. You probably sleep on a mattress made from polyurethane foam that contains plasticizers and fire retardants in much greater quantities. The same goes for your car seats, and they off-gas a lot more when parked in the sun. You'd probably need to eat 1,000 earbuds to match that.


> If they do like it, what's the issue?

Are you serious? However goofy that sounds, they paid for a specific fantasy. They would not have paid if you advertised the service as "talk dirty with a random dude in India". If the reason they paid for this service is that they were promised a specific person, that's fraud. As simple as that.

Your judgment about whether the services are equivalent doesn't matter. If I pay you for Gucci socks, and you intentionally send me cheaper HZBZZYXY socks from Amazon instead, that's fraud even if they're still socks.


> If I pay you for Gucci socks, and you intentionally send me cheaper HZBZZYXY socks

The difference is the product is 'blessed' by the official seller: would you feel defrauded if Gucci sent you the Gucci-branded socks you ordered, but you later discovered they were made by the HZBZZYXY factory in Guangdong rather than by an Italian master sock-craftsman?


That question falls entirely under the legal concept of false advertising. Depends on what Gucci proclaimed.

Jamtarians pride themselves on such deeds.

Eh. It's different, but framing it as uniquely challenging seems silly. There are very few other jobs where you don't need to deliver any specific, measurable results for months or years. And your "career-ending" outcome is that you go and get a cozy industry job in the same field because you already have a degree. Now, you might have difficulty adjusting to that because they will want you to get stuff done.

You can read the actual bill here: https://legiscan.com/MT/text/SB212/id/3212152/Montana-2025-S...

In essence, it doesn't really mandate anything; it says you should have a plan, and only for "critical infrastructure facilities":

"Section 4. Infrastructure controlled by critical artificial intelligence system. (1) When critical infrastructure facilities are controlled in whole or in part by a critical artificial intelligence system, the deployer shall develop a risk management policy after deploying the system that is reasonable and considers guidance and standards in the latest version of the artificial intelligence risk management framework from the national institute of standards and technology, the ISO/IEC 4200 artificial intelligence standard from the international organization for standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems. A plan prepared under federal requirements constitutes compliance with this section."

So it's essentially lip service to AI safety, probably to quell some objections to a bill that otherwise limits regulation of tech platforms.


I did read it. The point is there are no findings that justify the regulation in light of the grant of rights in the same bill. The only WHEREAS that approaches the level of a finding amounts to "many are saying..."

If you're running your open source project or other hobby endeavor, you can do it however you want. People will either adapt to your style or leave. The same, with some caveats, applies to running your own company (the caveats being lawsuits and needless drama if you take it too far).

But if you're a line employee for a corporation, this is the wrong approach, for two reasons. First, you will encounter many people who misinterpret directness as hostility, simply because your feelings toward another person are hard to convey in a chat message unless you include all that social-glue small talk. And if people on average think you're a jerk, they will either avoid you or reflexively push back.

But second... you're not that brilliant. Every now and then, the thing you think is wrong isn't actually wrong; you just don't understand why your solution was rejected beforehand. Maybe there are business requirements you don't know about; maybe things break in a different way if you make the change. Asking "hey, help me understand why this thing is the way it is" is often a better opener than "yo dude, your thing is broken, here's what you need to do, fix it now".


Anthropic, maybe, but what is the philosophical niche of OpenAI? Their only consistent philosophical position about AI is "let's make more money".

I think OpenAI is more of an aesthetic. Very... Apple-like, polished, with an eye towards making really cool stuff. And aesthetics are a type of philosophy.

This is less noble than how Anthropic presents itself but still much more attractive to many than xAI.


The feeling on the street is that Anthropic IS the Apple of the AIs.

Come now, surely Anthropic is a premium Linux distribution.

And Apple a premium Unix derivative?

To a researcher, the aesthetic is more like Bell Labs, with many research teams working with some autonomy, which is why the public naming of model releases appears chaotic. Very different to the top-down approach of Apple.

> aesthetics are a type of philosophy.

What philosophy is that?


It's literally called aesthetics; the philosophical discipline is the original meaning of the word - https://en.wikipedia.org/wiki/Aesthetics

Properly, focusing on aesthetics as an ethic would be practicing the philosophy of aestheticism - https://en.wikipedia.org/wiki/Aestheticism



"You can use my model to kill others if Dario won't do it sir"

> In practice this still doesn't mean 50 % of white collar can't be automated though.

Let me ask you this, though: if we wanted to, what percentage of white collar jobs could have been automated or eliminated prior to LLMs?

Meta has nearly 80k employees to basically run two websites and three mobile apps. There were 18k people working at LinkedIn! Many big tech companies are massive job programs with some product on the side. Administrative business partners, program managers, tech writers, "stewards", "champions", "advocates", 10-layer-deep reporting chains... engineers writing cafe menu apps and pet programming languages... a team working on in-house typefaces... the list goes on.

I can see AI producing shifts in the industry by reducing demand for meaningful work, but I doubt the outcome here is mass unemployment. There's an endless supply of bs jobs as long as the money is flowing.


Meta has 80k employees to run the world's most massive engine of commerce through advertising and matching consumers to products.

They build generative AI tools so people can make ads more easily.

They have some of the most sophisticated tracking out there. They have shadow profiles on nearly everyone. Have you visited a website? You have a shadow profile even if you don't have a Facebook account. They know who your friends are based on who you are near. They know what stores you visit when.

Large fractions of their staff are making imperceptible changes to ads tracking and feed ranking that are making billions of dollars of marginal revenue.

What draws you in as a consumer is a tiny tip of the iceberg of what they actually do.


So like parent said, mostly bs jobs that would improve the product if removed </s>

Totally fair! I think my point might be that this is more malice than incompetence.

There are many reasons why we are seeing cuts economically, but the fact that it is possible to make such large cuts is because there were way too many people working at these companies. They had so much cheap money that they over-hired, now money isn't so cheap and they need to reduce headcount. AI need not enter the conversation to get to that point.

This is unfair and dismissive of many roles. Coordination in a massive, technically complex company that has to adhere to laws and regulations is a critical role. I don't get why people shit on certain roles (I'm a SWE). Our PgMs reduce friction and help us be more productive and focused. Technical writers produce customer-facing content and code, and have nothing to do with supporting internal bureaucracy. There are arguments against this in Bullshit Jobs but do you think companies pay PgMs or HR employees hundreds of thousands of dollars a year out of the goodness of their own hearts? Or maybe they actually help the business?

It's also because as you increase organisational complexity, you need to manage it somehow, which generally means hiring more people to do that. And then you need to hire people to manage those new managers. Ad infinitum. The increased complexity begets more complexity.

It sort of reminds me of The Collapse of Complex Societies by Joseph Tainter. These companies are their own microcosms of a complex society and I bet we will see mass layoffs in the future, not from AI but from those companies collapsing into a more sustainable state.


You realize that the reason you need to manage this organizational complexity is largely because the organization is so huge?...

The reality is that you could run LinkedIn with far, far fewer people. You probably need fewer than 100 for core engineering, and likely fewer than 1,000 overall if you include compliance, sales, and so on - especially since a lot of overseas compliance work is outsourced to consulting firms; it's not like you have a team of lawyers in every country in the world.

Before there was so much money in the system, we used to run companies that way. Two decades ago, I worked for a company that had tens of millions of users, maintained its own complex nationwide infra (no AWS back then), and had 400 full-time employees. That made coordination problems a lot easier too. We didn't need ten layers of people and project management because there just weren't that many of us.


When doubling the number of employees can triple your revenue, you do it.

Keeping a website running with high uptime is not the goal. Maximizing revenue and profit is. The extra people aren't waste, they're what drive the incremental imperceptible changes that make these companies profitable.


This seems like a just-so story.

You can see it happen in reverse with X/Twitter.

Did reducing waste affect the user experience or uptime of Twitter? Not really.

But advertising revenues plummeted, because those extra employees were mostly not about the user experience or keeping the website up, they were about servicing the advertisers that brought the company revenue.


I thought advertising revenues plummeted mostly for content/optics/PR reasons, not ad-buyer-facing feature reasons.

Content moderation was an ad-buyer-facing feature. I really see no evidence Musk actually understood that. When he took it away he was all like surprised Pikachu face that advertisers left.

And how much revenue did that company bring in compared to something like Meta?

Maybe there's a correlation there?


I think the person you're replying to is perfectly aware of the correlation, considering it was a primary feature of their comment.

Not really? The main point of their comment is that companies could be much smaller based on their experience at a much smaller company.

I'm implying that big companies couldn't make as much money as they do without all the employees they have.


Their last para seems to acknowledge the correlation, but flips your assumed causal direction. I.e. they seem to be implying that the excess money causes the complexity.

Almost every critique of the axiom of infinity is philosophical. I don't think you can just say "the axiom is sound, so what's your point". And you don't even get to claim that because of Gödel's incompleteness theorem.

The axioms were not handed to us from above. They were a product of a thought process anchored to intuition about the real world. The outcomes of that process can be argued about. This includes the belief that the outcomes are wrong even if we can't point to any obvious paradox.


We built LLMs so that you can express your ideas in English and no longer need to code.

Also, English is really too verbose and imprecise for coding, so we developed a programming language you can use instead.

Now, this gives me a business idea: are you tired of using CodeSpeak? Just explain your idea to our product in English and we'll generate CodeSpeak for you.


I'm sure that this time the language will be simple and English-like enough that execs can use it directly, similarly to COBOL and SQL.

The idea is this would be a kind of IL for natural language queries. Then the main LLM isn't dependent on quirks of English.

No joke. I'm 100% sure that if it's successful, we'll see a CC skill for writing CodeSpeak specs.

Yeah. It's hard to express and understand nested structures in natural language, yet they are easy in high-level programming languages. E.g. "the dog of the first son of my neighbour" vs "me.neighbour.sons[0].dog", or "sunny and hot, or rainy but not cold" vs "(sunny && hot) || (rainy && !cold)".
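A tiny Python sketch of the same point (all the names here are made up for illustration): the nested-access and boolean examples are unambiguous one-liners in code, while the English versions need careful parsing.

```python
# "The dog of the first son of my neighbour", as plain nested data.
me = {
    "neighbour": {
        "sons": [{"dog": "Rex"}, {"dog": "Fido"}],
    }
}

# One unambiguous chain instead of a stacked genitive in English.
dog = me["neighbour"]["sons"][0]["dog"]  # "Rex"

# "Sunny and hot, or rainy but not cold" - the grouping is explicit in code.
def nice_day(sunny: bool, hot: bool, rainy: bool, cold: bool) -> bool:
    return (sunny and hot) or (rainy and not cold)
```

In English, "rainy but not cold" could arguably also be read as "(sunny or rainy) and not cold"; the parentheses in code remove that ambiguity entirely.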

In the past, math was expressed in natural language; mathematical notation exists because natural language isn't precise enough.


Did you mean AbstractNeighborDispatcherFactory?

relevant Dijkstra https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...

"In order to make machines significantly easier to use, it has been proposed (to try) to design machines that we could instruct in our native tongues. this would, admittedly, make the machines much more complicated, but, it was argued, by letting the machine carry a larger share of the burden, life would become easier for us. It sounds sensible provided you blame the obligation to use a formal symbolism as the source of your difficulties. But is the argument valid? I doubt."


Damn, I am the product A-GAIN?

COBOL?

That seems like it could lead to imprecise outcomes, so I've started a business that defines a spec to output the correct English to input to your product.

sssssh! if this catches on we can keep our jobs! (j/k, mostly)

I'm really glad random HN commenters know it better than someone who built a language that has been used in thousands of products.

Standard appeal to accomplishment; past success does not guarantee future success... especially on this joke comment.

Kotlin is generally considered a bit of a dud in the modern programming language space.

I reckon this comment from 6 years ago predicts Kotlin's fate https://news.ycombinator.com/item?id=24197817 I consider it prophetic.

My gut says Kotlin is great for individual developer experience. But I've never seen credible reports on the total cost of ownership, e.g., hiring Kotlin engineers or swapping them out on a team.


It's a blessing when you're in the native Android / React Native / Flutter space.

Even Nobel Prize winners made huge mistakes after the prize.

Somewhere Dijkstra is laughing his ass off.
