Hacker News | pmarreck's comments

Can quantum computing do even basic math yet? I think this was the holdup. Or perhaps I'm missing the point.

This is a good question, and currently the answer is no. Quantum computers can only run very short, simple algorithms right now, because the qubits they're built out of are noisy. You need a lot of error correction, which the community is working on.

The thing is, unlike ordinary computers, quantum computers can factor numbers about as easily as they can multiply them. So as soon as they can multiply two large integers, they'll also be able to factor the result and break RSA encryption based on keys of that size.
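As a rough illustration (a purely classical sketch, not real quantum code): in Shor's algorithm, everything except the order-finding step is ordinary arithmetic. Brute-forcing the order below stands in for the quantum part, which is exactly the piece a quantum computer would speed up.

```python
from math import gcd

def order(a, n):
    """Multiplicative order of a mod n, found by brute force here.
    This is the one step Shor's algorithm accelerates on quantum hardware."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    """Classical skeleton of Shor's algorithm: aside from order-finding,
    it's just gcds and modular exponentiation."""
    g = gcd(a, n)
    if g != 1:
        return g, n // g          # lucky: a already shares a factor with n
    r = order(a, n)
    if r % 2 == 1:
        return None               # odd order: retry with a different a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None               # trivial square root: retry with a different a
    return gcd(y - 1, n), gcd(y + 1, n)

print(shor_classical(15, 7))      # → (3, 5)
```

The multiplication-level arithmetic (gcd, `pow`) is all a quantum computer needs on top of order-finding, which is why "can multiply large integers" and "can break RSA at that key size" arrive together.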

This blog post gives a good sense of the state of the art and what progress might look like:

Why haven't quantum computers factored 21 yet? https://algassert.com/post/2500


And isn't the response already known in the validation process?

I don't understand your question. Can you elaborate?

"Replication of Quantum Factorisation Records with an 8-bit Home Computer, an Abacus, and a Dog"

What are you trying to say here? This makes no sense.

That's a famous paper that debunks a lot of the marketing announcements in this space. Basically no quantum computer has truly factorized 15, let alone 21.

I think "basic math" here means arithmetic or similar. Solutions exist, but current machines are noisy:

V. Vedral, A. Barenco, and A. Ekert, Quantum networks for elementary arithmetic operations, Physical Review A 54, 147 (1996).


> I think this was the holdup

It isn't...


[flagged]


This comment is wildly inappropriate and violates the community guidelines here. I suggest you delete it.

It doesn't do basic math ... just the hard ones :)

This one has a bit of... an Art Deco flavor, perhaps, doesn't it?

That is usually configurable at the terminal level; for example, both wezterm and ghostty have config options to control this behavior.

> That is usually configurable at the terminal level

And if you use Emacs, it's configurable at the buffer level. [1] This lets me build a version of Iosevka where `~=` and `!=` both become ligaturized but in different major modes, avoiding any confusion.

[1]: https://github.com/mickeynp/ligature.el


Good to know. I’ve been using ghostty and generally not a fan of the code ligatures (or just too stubborn to adapt!).

I'm not either. It may look "cool" visually, but when working with code that contains them it feels odd: the ligature reads as a single character even though it isn't, and it breaks the flow.

I didn't like them either. Thankfully, they can easily be disabled. See my config: https://github.com/pmxi/dotfiles/blob/e779c5921fbe308fad0c95...

Since most of the commenters here dislike ligatures, I must present a counterpoint, to reduce the sampling bias.

Some people like ligatures, some people do not like them, but this does not matter, because any decent text editor or terminal emulator has a setting to enable or disable ligatures.

Any good programming font should therefore include ligatures, which keeps both kinds of users happy: those who like them can enable them, and those who dislike them can turn them off.

I strongly dislike the straitjacket that ASCII forces on programming languages. It is the root cause of many of the ambiguous grammars that complicate parsing and increase the probability of bugs, and it has also forced the replacement of traditional mathematical symbols with less appropriate characters.

Using Unicode for source programs is the best solution, but when having to use legacy programming languages in a professional setting, where the use of a custom preprocessor would be frowned upon, using fonts with ligatures is still an improvement over ASCII.


A coding font is supposed to help you distinguish between characters, not confuse them for each other. Also, ASCII ligatures usually look worse than the proper Unicode character they are supposed to emulate. The often indecisive form they take (glyphs rearranged to resemble a different character, but still composed of original glyph shapes; weird proportions and spacing due to the font maintaining the column width of the separate ASCII code points) creates a strong uncanny valley effect. I wouldn't mind having "≤", "≠" or "⇒" tokens in my source code, but half-measures just don't cut it.

No need to rely on app-specific configs. You can disable ligatures globally in your fontconfig. For example, this disables them for the Cascadia Code font:

  <match target="font">
    <test name="family" compare="eq" ignore-blanks="true">
      <string>Cascadia Code</string>
    </test>
    <edit name="fontfeatures" mode="append">
      <string>liga off</string>
      <string>dlig off</string>
    </edit>
  </match>

Here is someone disabling ligatures for Noto Sans Mono: https://blahg.josefsipek.net/?p=610

Berkeley Mono was the first time I bought a font.

It's so good. Perfect even. And they have a really neat customization tool.

I've been using it for a few years now and they actually still occasionally release a new version of it. Haven't gotten tired of it yet.

The only complaint I have about it is that I had to do a hacky workaround to get my Nix setups to pull it in since it's proprietary.

I even forked their "Machine Report" tool (which presumes Debian) to make it work on Linux/NixOS by applying a "polyfill": https://github.com/pmarreck/usgc-machine-report-nixos-editio...


Hope they enjoy working on Java code... Forever... With 3-month release cycles, no CD... LOL

Yeah but they're not owed the telemetry. It's a privilege (if you convince your users to interface with your service via their client), not a right.

These techniques were all the rage on early Macintosh things

why would anyone actually interested in scientific research come to this, since it literally undermines the whole practice of science?


Publish or perish: academia requires PhDs to publish or be fired. It's made entire fields into echo chambers prone to political influence.


so a perverse incentive, basically. shocker. got it


This is where most reasonable people would say “OK, fine”

CLEARLY, a lot of developers are not reasonable


It is entirely reasonable for a project to require you to attest that the thing you are contributing is your own work.

The unreasonable ones are the ones with the oppositional-defiant “You can’t tell me I can’t use an LLM!” reaction.


It IS their own work.

The simplest refutation of your point of view is, who or what is responsible if the work submission is wrong?

It will always be the person’s, never the computer’s. Conveniently, AI always acts as if it has no skin in the game… because it literally and figuratively doesn’t… so for people to treat it like it does, should be penalized


If it’s the output of an LLM, it’s not their own work.


Who prompted the LLM?

Who vetted the output?

Who ensured there was adequate test coverage?

Who insisted on a certain design?

Who is to blame if it's bad code? That is the same entity that is responsible, and the same entity that "did it"

tl;dr your stance is full of poop, my dude


“I looked up the topic on Wikipedia and I highlighted the text and I selected copy and I selected paste so I don’t see how this is plagiarism.”

That’s what you sound like.


You sound like someone who has literally zero understanding as to why that is a ridiculous comparison.

There are a thousand and one ways that I participate when building something with LLM assistance. Everything from ORIGINATING AN IDEA TO BEGIN WITH, to working on a thorough spec for it, to ensuring tests are actually valid, to asking for specific designs like hexagonal design, to specific things like benchmarks... literally ALL OF THE INITIATIVE IS MINE, AND ALL OF THE SUCCESS/FAILURE CONSEQUENCES ARE MINE, AND THAT IS ULTIMATELY ALL THAT MATTERS

Please head towards a different career if you now have a stupid and contrived excuse not to continue working with the machines, because you sound like a whining child

And you're not answering the question, because you know it would end your point: WHO OR WHAT IS RESPONSIBLE IF THE CODE SUCCEEDS OR FAILS?


I started working in the industry when you were able to buy a Lisp Machine new and have been studying AI even longer, and I’ve been very successful in it. I not only know what I’m talking about, I have the experience to back it up.

You sound like someone who’s deeply in denial about exactly how the LLM plagiarism machines work. You really do sound like a student defending themselves against a plagiarism charge by asserting that since they did the work of choosing the text to put into their essay and massaging the grammar so it fit, nobody should care where it came from.


By that definition, every single human who wrote a paper after reading a source document is a “plagiarism machine”

and I’m 53 and well remember Symbolics from freshman year at Cornell, in fact my application essay to it was about fuzzy logic (AI-tangential) and probably got me in, so I too am quite familiar

i’m also quite good at debate. the flaw in your logic is that plagiarism requires accountability and no machine can be accountable, only the human that used it, ergo, it is still the work of the human, because the human values, the human vets, the human initiates, and the human gains or loses based on the combined output, end of story; accelerated thought is still thought, and anyway, if a machine can replicate thought, then it wasn’t particularly original to begin with


and your stance is not your own if you got the LLM to stand for you. ;-P

human prompting != human production


You not realizing how ridiculous this is, is exactly why half of all devs are about to get left behind.

Like, this should be enshrined as the quintessential “they simply, obstinately, perilously, refused to get it” moment.

Shortly, no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.


> Shortly, no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.

Well that day doesn't appear to be coming any time soon. Even after years of supposed improvements, LLMs make mistakes so frequently that you can't trust anything they put out, which completely negates any time savings from not writing the code.


Sorry, but this is user error.

1) Most people still don't use TDD, which absolutely solves much of this.

2) Most people end up leaning too heavily on the LLM, which, well, blows up in their face.

3) Most people don't follow best practices or designs, which the LLM absolutely does NOT know about NOR does it default to.

4) Most people ask it to do too much and then get disappointed when it screws up.

Perfect example:

> you can't trust anything they put out

Yeah, that screams "missing TDD that you vetted" to me. I have yet to see it fail to honestly pass a test that I've vetted (at least in the past 2 months). Learn how to be a good dev first.
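To make the TDD point concrete, here's a minimal sketch (hypothetical function and names): the human writes and vets the tests first, and any generated code is accepted only once those assertions pass.

```python
# Hypothetical TDD loop: the human writes and vets the tests first;
# generated code is judged solely against them.
def slugify(title: str) -> str:
    # Candidate implementation (hand-written or LLM-generated; either way,
    # the vetted tests below are the contract it must satisfy).
    return "-".join(title.lower().split())

# Human-vetted tests, written before the implementation existed.
assert slugify("Hello World") == "hello-world"
assert slugify("  Many   Spaces  ") == "many-spaces"
```

The point isn't that the tests are clever; it's that they were reviewed by a human, so "the LLM made the tests pass" actually means something.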


> no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.

No one is going to care about anyone’s painstaking avoidance of chlorofluorocarbons if it takes ten times as long to style your hair with imperceptibly less ozone hole damage.


This is a non-argument. All of the cloud LLMs are going to move to things like micro-nuclear power. And the scientific advances AI might enable may also help offset downstream problems from the carbon footprint.


I wasn't gesturing to the energy/environmental impacts of AI.

