Hacker News | i2km's comments

> but the Chinese play long games

And yet they got themselves into a demographic death spiral


Re displacing freelance translation, yes - it can displace the 95% of cases where 95% accuracy is enough. As you mention, though, for diplomatic translations, court proceedings, pacemaker manuals, etc., you're still going to need a human at least checking every line, since the cost of any mistake is so high.


This is going to be the concrete block which finally breaks the back of the academic peer review system, i.e. it's going to be a DDoS attack on a system which didn't even handle the load before LLMs.

Maybe we'll need to go back to some sort of proof-of-work system, i.e. only accepting physical mailed copies of manuscripts, possibly hand-written...


I tried Prism, but it's actually a lot more work than just using Claude Code. The latter lets you "vibe code" your paper with no manual interaction, while Prism actually requires you to review every change.

I actually think Prism promotes a much more responsible approach to AI writing than "copying from chatgpt" or the likes.


> This is going to be the concrete block which finally breaks the back of the academic peer review system

Exactly, and I think this is good news. Let's break it so we can fix it at last. Nothing will happen until a real crisis emerges.


There are problems with the medical system, therefore we should set hospitals on fire to motivate people to make them better.


Disrupting a system without good proposals for its replacement sounds like a recipe for disaster.



Very myopic comment.


Maybe OpenAI will sell you 'Lens', which will assist with sorting through the submissions and narrowing down the papers worth reviewing.


Or it makes gatekeepers even more important than before. Every submission to a journal will be desk-rejected, unless it is vouched for by someone one of the editors trusts. And people won't even look at a new paper, unless it's vouched for by someone / published in a venue they trust.


Overleaf basically already has the same thing


That will just create a market for hand-writers. Good thing the economy is doing very well right now, so there aren't that many desperate people who would do it en masse and for peanuts.


Handwriting is super easy to fake with plotters.


Is there something out there to simulate the non-uniformity and errors of real handwriting?


> i.e. only accepting physical mailed copies of manuscripts, possibly hand-written...

And you think Indians will not hand-write the output of LLMs?

Not that I have a better suggestion myself...


LaTeX was one of the last bastions against AI slop. Sadly it's now fallen too. Is there any standardised non-AI disclaimer format which is gaining use?


A blog post looking at developments in the translation market and projecting them onto the future of software engineering. TLDR: software is already bifurcating into low-grade consumer slop where AI lowers expectations, but serious B2B and enterprise software is diverging and (should be) getting better as a result of AI use


One technique not mentioned in the paper or comments is bitslicing. For non-branching code (e.g. symmetric ciphers) it's guaranteed constant-time, and it would be a remarkable compiler indeed which could introduce optimizations and timing variations into bitsliced code...
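A minimal sketch of the idea in JavaScript (a toy 1-bit function, not a real cipher; all names are mine): pack one bit from each of 32 independent instances into a 32-bit word, and a single pass of bitwise ops then evaluates all 32 instances at once, with no data-dependent branches or table lookups to leak timing.

```javascript
// Bitsliced evaluation of the toy function f(x, y, z) = (x & y) ^ z.
// Each 32-bit word holds one input bit from 32 independent instances.
function bitslicedF(xs, ys, zs) {
  // `>>> 0` keeps the result as an unsigned 32-bit value.
  return ((xs & ys) ^ zs) >>> 0;
}

// Scalar reference for a single instance (x, y, z are 0 or 1).
function scalarF(x, y, z) {
  return (x & y) ^ z;
}

// Pack bit i of each instance into the lane words, run the sliced
// version once, and check every lane against the scalar version.
let xs = 0, ys = 0, zs = 0;
const inputs = [];
for (let i = 0; i < 32; i++) {
  const x = i & 1, y = (i >> 1) & 1, z = (i >> 2) & 1;
  inputs.push([x, y, z]);
  xs |= x << i; ys |= y << i; zs |= z << i;
}
const out = bitslicedF(xs >>> 0, ys >>> 0, zs >>> 0);
for (let i = 0; i < 32; i++) {
  const [x, y, z] = inputs[i];
  console.assert(((out >>> i) & 1) === scalarF(x, y, z));
}
```

A real bitsliced cipher expresses its S-boxes as boolean circuits the same way; the throughput cost per instance is often surprisingly small since you get 32 (or 64) instances per pass.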


The author of the paper knows about bitslicing [1], so not mentioning it seems deliberate.

My guess is that bitslicing only gets you so far.

[1]: https://bearssl.org/constanttime.html#bitslicing


Actually, the link you provide seems to support the parent comment's suggestion, rather than detract from it.

The previous comment was suggesting making sure that every code path takes the same amount of time, not adding a random delay (which doesn't work).

And while I agree that power-analysis attacks etc. are still going to apply, the overarching context here is just timing analysis.


The link I provided is about random delays being inferior to setting a high water mark, yes.

I'm not just echoing the argument made by the link, though. I'm adding to it.

I don't think the "cap runtime to a minimum value" strategy will actually help, due to how much jitter your cap measurements will experience from the normal operation of the machine.

If you filter it out when measuring, you'll end up capping too low, so some values will be above it. For a visualization, let's pretend that you capped the runtime at 80% of what it actually takes in the real world:

  // Simulate padding every runtime up to a cap set at only 80% of the
  // true maximum: samples below 0.8 are raised to the cap, but the top
  // 20% still exceed it and leak through.
  function biased() {
    return Math.max(0.8, Math.random());
  }
  let samples = [];
  for (let i = 0; i < 1000; i++) {
    samples.push(biased());
  }
  // Now plot `samples`
Alternatively, let's say you cap it sufficiently high that there's always some slack time at the end.

Will the kernel switch away to another process on the same machine?

If so, will the time between "the kernel has switched to another process since we're really idling" to "the kernel has swapped back to our process" be measurable?

It's better to make sure your algorithm really is constant-time, even if that means fighting with compilers and hardware vendors' decisions.


1984 could only ever have been written by an Englishman


You get the hell out and emigrate. I did so last year. It's not going to get better, chap.


Where did you go?


A quick rule-of-thumb: if 'military-grade encryption' is mentioned, the author likely has no domain knowledge.

Further, the article claims that SPNs are used in RSA... which is completely wrong and indicates no domain knowledge.

The article has completely misinterpreted the paper. The paper is written in Chinese but with an English abstract; the article seems to have just pulled keywords out.

I wonder whether an LLM hallucination is at play somewhere?

The article does not mention AES.

