If you perform nearly any work at all in a given week you're entitled to your salary, and they can't fire you. They might be able to take away the $15/day stipend from your pay, and there are obvious additional negatives (6 months with limited context and practice of your craft will reduce your performance when you get back too), but that 2-week cap is a lawsuit waiting to happen unless they also forbid you from doing any work while on jury duty.
As I say, grand jury duty is often not every day, you can always take your PTO, and there are always nights and weekends. A company can always keep paying your base salary but, as you say, there could be longer-term consequences.
Of course there is. Raw machine code is the gold standard, and everything else is an attempt to achieve _something_ at the cost of performance, C included, and that's even when considering whole-program optimization and ignoring the overhead introduced by libraries. Other languages with better semantics frequently outperform C (slightly) because the compiler is able to assume more things about the data and instructions being manipulated, generating tighter optimizations.
I was talking about building code, not run-time. But regarding run-time, no other language outperforms C in practice. Your argument about "better semantics" has a grain of truth in it, but it does not apply to any existing language I know of - at least not to Rust, which in practice is still for the most part slower than C.
On their own merits, people choose SMS-based 2FA, "2FA" which lets you into an account without a password, perf-critical CLI tools written in Python, externalizing the cost of hacks to random people who aren't even your own customers, eating an extra 100 calories per day, and a whole host of other problematic behaviors.
Maybe Ada's bad, but programmer preference isn't a strong enough argument. It's just as likely that newer software is buggier and less safe, or that this otherwise isn't an apples-to-apples comparison.
I made no judgement about whether Ada is subjectively "bad" or not. I used it for a single side project many years ago, and didn't like it.
But my anecdotal experience aside, it is plain to see that developers had the opportunity to continue with Ada and largely did not once they were no longer required to use it.
So, it is exceedingly unlikely that some conspiracy against C++, motivated by mustache-twirling Ada gurus, is afoot. And even if that were true, knocking C++ down several pegs will not make people go back to Ada.
C#, Rust, and Go all exist and are all immensely more popular than Ada. If there were to be a sudden exodus of C++ developers, these languages would likely be the main beneficiaries.
My original point, that C++ isn't what's standing in the way of Ada being popular, still stands.
It's probably just a higher rate of autonomous vehicles needing stop signs and buses identified at that moment, and cognitive bias causes you to only remember when that happens when you recently performed an update. /s
>It's probably just a higher rate of autonomous vehicles needing stop signs and buses identified at that moment
I can't tell whether you're serious, but in case you are: this theory immediately falls apart when you realize Waymo operates at night but there aren't any night photos.
My assumption is that CF has something like a SVM that it's feeding a bunch of datapoints into for bot detection. Go over some threshold and you end up in the CAPTCHA jail.
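To make the guess concrete, here's a purely illustrative sketch of threshold-based scoring. The feature names, weights, and threshold are all invented; Cloudflare's actual signals and model are not public, and the real thing is presumably a trained classifier rather than hand-set weights.

```python
# Illustrative only: hand-invented signals and weights standing in for
# whatever datapoints CF actually feeds its bot-detection model.
FEATURE_WEIGHTS = {
    "headless_user_agent": 3.0,
    "missing_accept_language": 1.5,
    "tls_fingerprint_mismatch": 2.5,
}

CAPTCHA_THRESHOLD = 4.0  # assumed cutoff for "CAPTCHA jail"

def bot_score(signals: dict) -> float:
    """Sum weighted datapoints into a single suspicion score."""
    return sum(FEATURE_WEIGHTS.get(name, 0.0) * value
               for name, value in signals.items())

def needs_captcha(signals: dict) -> bool:
    return bot_score(signals) > CAPTCHA_THRESHOLD

# A headless UA plus a TLS mismatch (3.0 + 2.5 = 5.5) clears the bar:
print(needs_captcha({"headless_user_agent": 1,
                     "tls_fingerprint_mismatch": 1}))  # True
```

The point is just the shape: many weak signals accumulate, and crossing an opaque threshold flips you into the challenge flow.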
I'm certain the User-Agent is part of it: a very reliable way I can trigger the CF stuff is using this plugin with the wrong browser selected [1].
I mostly agree, but it's more appropriate to weigh contributions against an FTE's output rather than their input. If I have a $10m/yr feature I'm fleshing out now and a few more lined up afterward, it's often not worth the time to properly handle any minor $300k/yr boondoggle. Comparing against an FTE's fully loaded cost only makes sense when you can actually hire to fix it, and that's trickier than it sounds: hiring takes time away from the core team producing those actually valuable features, and large-team overhead tends to slow progress even after onboarding. Plus, even if you could hire to fix it, wouldn't you want them to work on those more valuable features first?
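The input-vs-output distinction above can be put in numbers. The feature and boondoggle figures come from the comment; the fully loaded FTE cost is an assumption for illustration.

```python
# Opportunity-cost arithmetic for "weigh against output, not input".
FEATURE_VALUE = 10_000_000   # $/yr for the feature being built (from the comment)
BOONDOGGLE_COST = 300_000    # $/yr lost to the minor issue (from the comment)
FTE_COST = 250_000           # $/yr fully loaded (assumed figure)

# Measured against an FTE's *input* (their cost), the fix looks worthwhile:
print(BOONDOGGLE_COST > FTE_COST)      # True

# Measured against an FTE's *output* (the roadmap they'd displace), it doesn't:
print(BOONDOGGLE_COST > FEATURE_VALUE) # False
```

Same $300k/yr problem, opposite conclusions, depending on which denominator you pick.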
They were running a big kubernetes infrastructure to handle all of these RPC calls.
That takes a lot of engineer hours to set up and maintain. This architecture didn't just happen, it took a lot of FTE hours to get it working and keep it that way.
Kube is trivial to run. You flip a few switches on GKE/EKS and write a few simple configs. It doesn't take very many engineer hours, and infrastructure these days is trivial to operate. As an example, I run a datacenter cluster myself for a micro-SaaS in the process of SOC2 Type 2 compliance. The infra itself is pretty reliable: I ran some power-kill sims before I traveled and it came back A+. With GKE/EKS this is even easier.
Over the years of running these, I think the key is to keep the cluster config manual and then just deploy your YAMLs from a repo, with hydration of secrets or whatever.
The cost is not just tokens; you need an actual human contributor looking into the issue, prompting, checking output, validating, deploying, and so on. The actual AI ROI is difficult to compute. If $300k didn't matter without AI, it probably still doesn't matter with AI.
That reminds me of one of the easiest big wins I've had in my career. SystemD was causing issues, so I slapped in Gentoo with the real-time kernel patch. Peak latency (practically speaking, the only core metric we cared about -- some control loop doing a bunch of expensive math and interacting with real hardware) went down 5000x.
That specific advice isn't terribly transferable (you might choose to hack up SystemD or some other components instead, maybe even the problem definition itself), but the general idea of measuring and tuning the system running your code is solid.
What do you think is causing the issue? We are having the same kind of problem: core isolation, nohz_full, core pinning, but I am still getting interrupted by NMIs.
Details depend, but the root cause is basically the same every time: your hardware is designed to do something other than what you want it to do. It might be close enough that you want to give it a shot anyway (often works, often doesn't), but solutions can be outside of the realm of what's suitable for a "prod-ready" service.
If you're experiencing NMIs, the solution is simple if you don't care about the consequences: find them and remove them (ideally starting by finding what's generating them and verifying you don't need it). Disable the NMI watchdog, disable the PMU, disable PCIe Error Reporting (probably check dmesg and friends first to ensure your hardware is behaving correctly, and fix that if not), disable anything related to NMIs at the BIOS/UEFI/IPMI/BMC layers, register a kernel module to swallow any you missed in your crusade, and patch the do_nmi() implementation with something sane for your use case in your custom kernel (there be dragons here; those NMIs obviously exist for a reason). It's probably easier to start from the ground up, adding a minimal set of software for your system to run, than to trim an existing system back down, but either option is fine.
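Before and after each of those steps, it's worth verifying whether NMIs are actually still firing on your isolated cores. One cheap check is to diff the NMI row of /proc/interrupts over time. A minimal sketch, with sample text standing in for the real file so it runs anywhere:

```python
# Sketch: extract per-CPU NMI counts from /proc/interrupts-style text.
# Sample two snapshots a few seconds apart; a nonzero delta on an
# "isolated" core means something is still generating NMIs there.
SAMPLE = """\
           CPU0       CPU1       CPU2       CPU3
  0:         33          0          0          0   IO-APIC    2-edge      timer
NMI:        412          7          0          3   Non-maskable interrupts
LOC:     912345     812344     712343     612342   Local timer interrupts
"""

def nmi_counts(interrupts_text: str) -> list:
    """Return the per-CPU counts from the NMI row, or [] if absent."""
    for line in interrupts_text.splitlines():
        if line.startswith("NMI:"):
            counts = []
            for field in line.split()[1:]:
                if not field.isdigit():
                    break  # stop at the trailing description text
                counts.append(int(field))
            return counts
    return []

print(nmi_counts(SAMPLE))  # [412, 7, 0, 3]
```

In real use you'd read open("/proc/interrupts").read() instead of SAMPLE; the column layout is stable enough for this kind of quick check.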
Are you experiencing NMIs though? You might want to take a peek at hwlatdetect and check for SMIs or other driver/firmware issues, fixing those as you find them.
It's probably also worth double-checking that you don't have any hard or soft IRQs being scheduled on your "isolated" core, that no RCU housekeeping is happening, etc. Make sure you pre-fault all the memory your software uses, no other core maps memory or changes page tables, power scaling is disabled (at least the deep C-states), you're not running workloads prone to thermal issues (1000W+ in a single chip is a lot of power, and it doesn't take much full-throttle AVX512 to heat it up), you don't have automatic updates of anything (especially not microcode or timekeeping), etc.
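On the pre-faulting point specifically: the idea is to touch every page of your working set once, before the latency-critical loop starts, so page faults happen up front instead of mid-loop. A minimal sketch in Python (in a real C deployment you'd follow this with mlockall(2) using MCL_CURRENT | MCL_FUTURE to keep the pages resident; that needs privileges and is omitted here):

```python
# Sketch of pre-faulting a buffer: one write per page forces the kernel
# to map it now, rather than on first touch inside the hot loop.
import mmap

PAGE = mmap.PAGESIZE  # typically 4096 bytes

def prefault(size_bytes: int) -> bytearray:
    buf = bytearray(size_bytes)
    for offset in range(0, size_bytes, PAGE):
        buf[offset] = 0  # touch each page once
    return buf

buf = prefault(16 * 1024 * 1024)  # 16 MiB working set, mapped up front
```

The same touch-it-early principle applies to anything lazily initialized: file mappings, thread stacks, allocator arenas.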
Also, generally speaking, your hardware can't actually multiplex most workloads without side effects. Abstractions letting you pretend otherwise are making compromises somewhere. Are devices you don't care about creating interrupts? That's a problem. Are programs you don't care about causing cache flushes? That's a problem. And so on. Strip the system back down to the bare minimum necessary to do whatever it is you want to do.
As to what SystemD is doing in particular? I dunno, probably something with timer updates, microcode updates, configuring thermals and power management some way I don't like, etc. I took the easy route and just installed something sufficiently minimalish and washed my hands of it. We went from major problems to zero problems instantly and never had to worry about DMA latency again.
In the academic circles I frequent, it's not true. Any one journal might reject the good stuff, but it doesn't take more than a few submissions to find a journal that recognizes it, and the cost of producing the research is so high that, with the current career incentives, it'd be ridiculous not to keep submitting. That does mean journal "quality" matters less than you might think, but I don't think anyone's surprised by that notion either.
Errors the other direction are more common. I'll state that as an easily verified fact, but people like fun stories, so here's an example:
One professor I worked with had me write up a bunch of case studies of some math technique, tried to convince me it was worth a paper, paid somebody else to typeset my work, and told me to compensate him if I wanted my name on the "paper." I didn't, really; it was beneath any real mathematician. But there now exists some journal with a bastardized, plagiarized version of my work, with some other unrelated author tacked on, available for the world to see [0], and it's worth calling out that nothing about the "paper" is journal-worthy. It's far too easy to find a home for academic slop, and I saw that in every field I spent any serious amount of time in.
Not all software can be sufficiently insulated from external changes, but almost all software I care about can be. My normal update cadence is every 2-3 years, and that's only because of a quirk in my package manager making it annoying for shiny new tools to coexist with tools requiring old dependencies. The most important software I use hasn't changed in a decade (i.e., those updates were no-ops), save for me updating some configurations and user scripts once in a while. I imagine that if I were older, the 18-year effective update cycle would happen naturally as well.
My gut reaction is that the software you're describing relies heavily on external integrations. Is that correct?
There's a sort of mixing of units happening here, and I think it's causing some confusion. Here's an example (greatly simplified) scenario highlighting a flaw in your rationale:
1. Energy at your normal usage costs $1000/yr.
2. You can spend $20k now to have access to equivalent energy output for the next 40 years before it degrades to unusability.
3. Next year, somebody invents a flux capacitor bringing all energy costs for everyone down to $1/yr.
If you don't buy the thing, you spend $1039 over the next 40 years. If you buy the thing you spend $20k, and it's hit its expected lifespan, so you don't recoup any further benefits.
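The toy scenario above, in numbers (all figures are from the scenario as stated: $1000/yr energy cost today, a $20k system with a 40-year life, and a hypothetical breakthrough next year dropping everyone's cost to $1/yr):

```python
GRID_NOW = 1000        # $/yr at today's rates
GRID_AFTER = 1         # $/yr once the flux capacitor ships
SYSTEM_COST = 20_000   # upfront cost of the system
YEARS = 40             # system lifetime

# Don't buy: pay one year at today's rate, then 39 cheap years.
no_buy = GRID_NOW * 1 + GRID_AFTER * (YEARS - 1)

# Buy: the system covers all 40 years, then it's dead.
buy = SYSTEM_COST

print(no_buy, buy)  # 1039 20000
```

Deliberately extreme, but it makes the unit-mixing visible: the $20k is competing against the *future* price of the alternative, not today's.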
The real world has inflation, wars, more sane invention deltas, and all sorts of complications, but the general idea still holds. If you expect tech to improve quickly enough and are relying on long-term payoffs, it can absolutely be worth delaying your purchase.
If you predict massive improvements in solar/battery/etc tech, investing now only makes sense if those improvements turn out not to be massive enough, if you expect sufficiently bad changes to the alternatives, and so on. I.e., you're playing the odds on some particular view of how the world will progress, and your argument needs to reflect that. It's not inherently true that just because solar pays off now it will in the future.