I clicked the link expecting a discussion of what would cause a Python interpreter to crash, but found only a generic list of low-level problems that could afflict any piece of software, plus the statement that the headline was an interview question at YouTube ten years ago.
You're not contradicting the post you're replying to; you're describing the common workarounds that approximate exactly-once semantics by combining at-least-once delivery from the message queue with external deduplication storage on the consumer side.
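A minimal sketch of that workaround (the names and the in-memory set are illustrative, not a real queue client API; a real system would use a durable store):

```python
# Approximating exactly-once on top of at-least-once delivery:
# the consumer remembers which message IDs it has already processed
# (in real systems: a database table, not an in-memory set) and
# silently drops redeliveries.
processed_ids = set()  # stand-in for durable external storage

def handle(message_id, payload, apply_effect):
    if message_id in processed_ids:
        return False               # duplicate redelivery: skip it
    apply_effect(payload)          # the actual side effect
    processed_ids.add(message_id)  # record success afterwards
    return True
```

Note the crash window between the effect and the marker: that gap is why this only approximates exactly-once, and why the effect should ideally be idempotent or share a transaction with the marker.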
Even under the AI-maximalist assumption that human workers and LLMs are going to be interchangeable, I'm having a hard time seeing the logic.
LLMs are not going to go on parental leave, hold various protected statuses, need worker protections or visas, or have a compensation structure. In other words, there's no synergy at all in having them be managed the same way as actual human resources. (And I'm sorry for using the term human resources unironically -- that just follows from the AI-maximalist assumption.)
There's maybe some synergy in workforce planning, but if HR were doing that then there's already something broken in the business. HR is supposed to contribute legal expertise first, cultural and team-dynamics expertise second, and process expertise not at all.
Your scientific take is useful in the case where selection bias is unavoidable and needs to be corrected for.
This case is not like that: if the insurance company wants to dispute the 90% false denial rate, it would be trivial for them to take a random sample of _all_ cases, put those through the appeal process, and publish the resulting number free of selection bias.
As long as that doesn't happen, the most logical conclusion for us outside observers is: the number is probably not so much lower than 90% that it makes a difference.
The insurance company may well have already done that; this claim is being put forward by someone who is suing them and looking for reasons that the AI bot is bad. The article is silent on what the company's response to the accusation was, and, realistically, we'd expect appealed denials to have a very high rate of error whether the denials were made by bots or by humans. Few people indeed are going to waste time arguing a hopeless case against an insurance company -- this is classic selection bias.
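To see how selection alone can manufacture a high number, here is a back-of-the-envelope calculation; all three probabilities are made up for illustration, not data from the case:

```python
# Assumed, illustrative numbers: only 20% of all denials are wrong,
# but wrongly denied people appeal far more often than correctly
# denied ones do.
p_wrong = 0.20          # share of denials that are actually erroneous
p_appeal_wrong = 0.80   # appeal probability after a wrong denial
p_appeal_right = 0.05   # appeal probability after a correct denial

# Share of wrong denials *among appealed cases* (Bayes-style):
share = (p_wrong * p_appeal_wrong) / (
    p_wrong * p_appeal_wrong + (1 - p_wrong) * p_appeal_right
)
print(round(share, 2))  # 0.8: selection makes a 20% error rate look like 80%
```

Push the appeal probabilities further apart and the appealed-sample figure climbs toward 90% with no change at all in the underlying error rate.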
What do you think the claim approval rate is? Less than 10%?
It stands to reason that the overwhelming majority of approved claims were approved correctly. Unless the approval rate is well under 15%, it’s impossible to have the claimed “90% error rate”.
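The arithmetic behind that: if approvals are correct, errors can only come from the denied fraction of cases, so the overall error rate is capped at one minus the approval rate. A sketch with placeholder rates:

```python
def overall_error_rate(approval_rate, false_denial_rate, false_approval_rate=0.0):
    """Errors as a share of *all* cases, approved and denied."""
    denial_rate = 1 - approval_rate
    return (denial_rate * false_denial_rate
            + approval_rate * false_approval_rate)

# Even if every single denial were wrong, a 15% approval rate caps
# the overall error rate at 85%, short of the claimed 90%.
print(overall_error_rate(0.15, 1.0))  # ~0.85
```

Hitting a 90% overall error rate with zero false approvals would require denying at least 90% of all claims, every denial being wrong.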
It's clear from the quoted paragraph that by "error rate" they actually meant "false denial rate". Those are also the words I used in the comment you are replying to.
Did you comment because you take issue with misuse of the term error rate, or because you think that correct approvals make up for incorrect denials, and that therefore overall error rate is a useful metric?
Are you making the claim that small numbers are always irrelevant? In that case it's worth reading up on the greenhouse effect. It's unfortunately very true that water vapour and CO2 have an outsize effect on the planet's heat balance.
You mix that claim with an implication about people who don't know that number off the top of their heads, but who:
- work on sustainability for the feel-good factor
- are independently wealthy otherwise.
I don't understand exactly what you're implying but it doesn't seem very generalizable. Society can face arbitrarily complicated threats, to counter which we can create incentives even for those who don't know the details.
How to get political agreement on the important threats and the best (effective / fair / ...) incentives is an open problem.
That wasn't my point entirely, but admittedly it was part of it. I have not heard a good reason for why a trace gas has such an effect -- bearing in mind that 0.04% is total CO2, and human-produced CO2 is a tiny fraction of that (I can't seem to find an agreed-on percentage for that one, but I'm happy to be enlightened). The other point I was making is that when I have asked that basic question of people who are seemingly frightened of the effects of CO2, I have got massively inflated guesstimates, e.g. 50%. I just think that if you are going to be scared and try to scare others, you should know at least that.
Gotcha, I see. You're not wrong that a combination of scientific illiteracy and political orthodoxy can give terrible outcomes. So some default skepticism is quite wise.
If you want to understand the effect of the trace gases from first principles, you can read more about it under "Effective temperature of the Earth" at https://en.m.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law . It also links to the Wikipedia page about the greenhouse effect, which you'll appreciate more after reading about the Stefan Boltzmann law first.
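As a taste of what that derivation gives, the radiative-balance arithmetic fits in a few lines (these are standard textbook values; the exact solar constant and albedo vary slightly between sources):

```python
# Radiative balance: absorbed sunlight = thermal radiation
#   S * (1 - albedo) * pi*R^2 = sigma * T^4 * 4*pi*R^2
# => T = (S * (1 - albedo) / (4 * sigma)) ** 0.25
S = 1361.0               # solar constant, W/m^2
albedo = 0.30            # fraction of sunlight reflected back to space
sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

T_eff = (S * (1 - albedo) / (4 * sigma)) ** 0.25
print(round(T_eff))  # ~255 K; the observed ~288 K surface mean leaves
                     # roughly 33 K to be explained by greenhouse gases
```

That 33 K gap is the greenhouse effect, and it's the trace gases (plus the water vapour they help keep in the air) that produce it, despite their tiny concentrations.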
OK, will read, much appreciated. Just have to deal with my fence that blew down in the high winds we had in the UK yesterday - climate change no doubt (joke). Thank you.
It's the opposite in democratic countries such as France: while every other person or entity is free to do anything except what's explicitly regulated, the government has only those powers that are explicitly granted to it by laws and regulations.
The comment you replied to was referring to the EU regulating major tech powers through e.g. GDPR and DSA.
I disagree, because one does not get to the bottleneck being _the efficiency of extracting memory from one process and transferring it to another_ without major fizzbuzz-specific optimizations first.
For example, there's a clever bit representation to get base-10 carries to happen natively.
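The actual representation in that entry is considerably more elaborate, but the core idea can be shown on a single digit: adding a bias of 6 before the addition makes the hardware's binary carry out of a 4-bit nibble coincide exactly with a decimal carry. This sketch is mine, not the contest code:

```python
def bcd_digit_add(a, b):
    """Add two decimal digits stored BCD-style in a nibble."""
    t = a + b + 6          # +6: decimal sums >= 10 now overflow the nibble
    if t & 0x10:           # binary carry out of the low nibble...
        return t & 0x1F    # ...is exactly the decimal carry; digit is correct
    return t - 6           # no decimal carry: undo the bias

# 7 + 5 = 12 -> 0x12 (carry bit set, digit 2); 3 + 4 -> 0x07
```

Applied across a whole machine word of packed digits, the same bias trick lets a single binary addition ripple many base-10 carries at once, which is the kind of thing "base-10 carries happening natively" refers to.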
The initial competition requirements are not particularly vague on this point: measuring throughput with `<program> | pv > /dev/null` is prescribed, and they also say
> Architecture specific optimizations / assembly is also allowed. This is not a real contest - I just want to see how people push fizz buzz to its limit - even if it only works in special circumstances/platforms.
Thank you for the link, that was incredibly interesting. For other people following the link: don't miss going into the comment section and finding the original designer of the mechanism complimenting the video and then sharing his own original model -- in Legos!
Slightly exaggerated paraphrase: "You YouTube kids with your fancy 3D printers, here at Ball Aerospace & Technology we didn't need all that, we had Legos"
> some of the filings will fly up to the bar, against the force of gravity. It takes energy to lift these filings. Where does that energy come from?
In this example the energy comes from whatever is holding the bar magnet itself up against gravity. If that support is not doing any work (i.e., not adding energy; say it's a rope or a spring), then it's not just the filings that move: the bar magnet and the filings move towards each other (with the filings covering the bulk of the distance), and the end result is that some of the bar magnet's gravitational energy is transferred to the filings.
I think this only seems like a tricky question with an informal idea of what "energy" is. It doesn't have the same interpretation problem as what the linked article is about.