Re displacing freelance translation, yes - it can displace the 95% of cases where 95% accuracy is enough. As you mention though, for diplomatic translations, court proceedings, pacemaker manuals etc. you're still going to need a human at least checking every line, since the cost of any mistake is so high
This is going to be the concrete block which finally breaks the back of the academic peer review system, i.e. it's going to be a DDoS attack on a system which didn't even handle the load before LLMs.
Maybe we'll need to go back to some sort of proof-of-work system, i.e. only accepting physical mailed copies of manuscripts, possibly hand-written...
I tried Prism, but it's actually a lot more work than just using Claude Code. The latter allows you to "vibe code" your paper with no manual interaction, while Prism actually requires you to review every change.
I actually think Prism promotes a much more responsible approach to AI writing than "copying from chatgpt" or the like.
Or it makes gatekeepers even more important than before. Every submission to a journal will be desk-rejected, unless it is vouched for by someone one of the editors trusts. And people won't even look at a new paper, unless it's vouched for by someone / published in a venue they trust.
That will just create a market for hand-writers. Good thing the economy is doing very well right now, so there aren't that many desperate people who will do it en masse and for peanuts.
LaTeX was one of the last bastions against AI slop. Sadly it's now fallen too. Is there any standardised non-AI disclaimer format which is gaining use?
A blog post looking at developments in the translation market and projecting them onto the future of software engineering. TLDR: software is already bifurcating into low-grade consumer slop where AI lowers expectations, but serious B2B and enterprise software is diverging and (should be) getting better as a result of AI use
One whole technique not mentioned in the paper or comments is bitslicing. For non-branching code (e.g. symmetric ciphers) it's guaranteed constant-time and it would be a remarkable compiler indeed which could introduce optimizations and timing variations to bit-sliced code...
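To make the idea concrete, here's a minimal toy sketch (a hypothetical illustration, not code from any real cipher): pack one bit from each of up to 32 independent instances into a machine word, then express all the logic purely as bitwise operations. There's nothing for a branch predictor or a data-dependent lookup to leak:

```javascript
// Bitslicing sketch: 32 independent 1-bit "lanes" packed into one
// 32-bit integer. A full adder built only from XOR/AND/OR runs in
// constant time regardless of the data in any lane -- no branches,
// no table lookups.
function bitslicedFullAdder(a, b, cin) {
  const sum = a ^ b ^ cin;                 // per-lane sum bit
  const cout = (a & b) | (cin & (a ^ b));  // per-lane carry-out bit
  return { sum, cout };
}

// 32 parallel 1-bit additions in a single call:
const { sum, cout } = bitslicedFullAdder(0b1010, 0b0110, 0b0000);
// sum = 0b1100, cout = 0b0010
```

A real bitsliced cipher is just many layers of this: every S-box becomes a fixed boolean circuit evaluated lane-parallel with bitwise ops.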
The link I provided is about random delays being inferior to setting a high water mark, yes.
I'm not just echoing the argument made by the link, though. I'm adding to it.
I don't think the "pad runtime up to a high-water mark" strategy will actually help, given how much jitter your measurements of that mark will pick up from the normal operation of the machine.
If you filter the jitter out when measuring, you'll end up capping too low, so some values will be above it. For a visualization, let's pretend that you capped the runtime at 80% of what it actually takes in the real world:
function biased() {
  // Runtimes below the 0.8 floor get padded up to it;
  // anything above the floor still leaks through.
  return Math.max(0.8, Math.random());
}

let samples = [];
for (let i = 0; i < 1000; i++) {
  samples.push(biased());
}
// Now plot `samples`
Alternatively, let's say you cap it sufficiently high that there's always some slack time at the end.
Will the kernel switch away to another process on the same machine?
If so, will the time between "the kernel has switched to another process since we're really idling" to "the kernel has swapped back to our process" be measurable?
It's better to make sure your algorithm is actually constant-time, even if that means fighting with compilers and hardware vendors' decisions.
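As a concrete example of what "actually constant-time" means at the algorithm level (a hypothetical sketch; in Node.js you'd use the built-in crypto.timingSafeEqual instead): accumulate differences with XOR/OR rather than returning early, so the runtime doesn't depend on where the inputs first differ:

```javascript
// Constant-time byte-array comparison sketch. A naive loop that
// returns on the first mismatch leaks the position of the first
// differing byte through timing; this version always scans every byte.
function constantTimeEqual(a, b) {
  if (a.length !== b.length) return false; // lengths assumed public
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a[i] ^ b[i]; // stays 0 only if every byte matches
  }
  return diff === 0;
}
```

The remaining battle, as above, is convincing the compiler/JIT not to "optimize" the early exit back in.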
A quick rule-of-thumb: if 'military-grade encryption' is mentioned, the author likely has no domain knowledge.
Further, the article claims that SPNs are used in RSA... which is completely wrong and indicates no domain knowledge.
The article has completely misinterpreted the paper. The paper is written in Chinese but with an English abstract - the article seems to have just pulled keywords out.
I wonder whether an LLM hallucination is at play somewhere?
And yet they got themselves into a demographic death spiral