This is a good question, and currently the answer is no. Quantum computers can only run very short, simple algorithms right now, because the qubits they're built out of are noisy. You need a lot of error correction, which the community is working on.
The thing is, unlike ordinary computers, quantum computers can factor numbers with roughly the same kind of effort it takes them to multiply them: Shor's algorithm factors an n-bit number in a polynomial number of operations, dominated by modular multiplications. So roughly as soon as a quantum computer can multiply two large integers, it will also be able to factor numbers of that size and break RSA keys built on them.
This blog post gives a good sense of the state of the art and what progress might look like:
> That is usually configurable at the terminal level
And if you use Emacs, it's configurable at the buffer level. [1] This lets me build a version of Iosevka where `~=` and `!=` both become ligaturized but in different major modes, avoiding any confusion.
I'm not a fan either. Ligatures may look "cool" visually, but when I'm working with code that contains them it feels odd: the glyph reads as a single character even though it isn't, and that breaks my flow.
Since most of the commenters so far are in the anti-ligature camp, let me offer a counterpoint to reduce the selection bias.
Some people like ligatures and some do not, but that hardly matters, because any decent text editor or terminal emulator has a setting to enable or disable them. A good programming font should therefore ship with ligatures, which keeps both camps happy: those who like them turn them on, and those who dislike them turn them off.
I strongly hate the straitjacket forced by ASCII upon programming languages, which is the root cause of most ambiguous grammars that complicate the parsing of programming languages and increase the probability of bugs, and which has also forced the replacement of traditional mathematical symbols with less appropriate characters.
Using Unicode for source programs is the best solution, but when having to use legacy programming languages in a professional setting, where the use of a custom preprocessor would be frowned upon, using fonts with ligatures is still an improvement over ASCII.
A coding font is supposed to help you distinguish between characters, not confuse them for each other. Also, ASCII ligatures usually look worse than the proper Unicode character they are supposed to emulate. The often indecisive form they take (glyphs rearranged to resemble a different character, but still composed of original glyph shapes; weird proportions and spacing due to the font maintaining the column width of the separate ASCII code points) creates a strong uncanny valley effect. I wouldn't mind having "≤", "≠" or "⇒" tokens in my source code, but half-measures just don't cut it.
No need to rely on app-specific configs. You can disable them globally in your fontconfig. For example, the following disables ligatures in the Cascadia Code font:
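A minimal sketch of such a rule, placed in `~/.config/fontconfig/fonts.conf`. The `calt` and `liga` feature tags are the usual ones behind programming-font ligatures, but the exact tags a given font uses can vary:

```xml
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <!-- Turn off contextual alternates and standard ligatures
       for Cascadia Code only -->
  <match target="font">
    <test name="family" compare="eq">
      <string>Cascadia Code</string>
    </test>
    <edit name="fontfeatures" mode="append">
      <string>calt off</string>
      <string>liga off</string>
    </edit>
  </match>
</fontconfig>
```

Note that only applications which honor fontconfig's `fontfeatures` property will pick this up; some terminals and editors apply their own ligature settings on top.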
The simplest refutation of your point of view is, who or what is responsible if the work submission is wrong?
Responsibility will always be the person's, never the computer's. Conveniently, the AI acts as if it has no skin in the game, because it literally and figuratively doesn't, so people who treat it as though it does should be penalized.
You sound like someone who has literally zero understanding as to why that is a ridiculous comparison.
There are a thousand and one ways that I participate when building something with LLM assistance. Everything from ORIGINATING THE IDEA TO BEGIN WITH, to working out a thorough spec for it, to ensuring tests are actually valid, to asking for specific architectures like hexagonal design, to specific things like benchmarks... literally ALL OF THE INITIATIVE IS MINE, ALL OF THE SUCCESS/FAILURE CONSEQUENCES ARE MINE, AND THAT IS ULTIMATELY ALL THAT MATTERS.
Please head towards a different career if you now have a stupid and contrived excuse not to continue working with the machines, because you sound like a whining child
And you're not answering the question, because you know it would end your point: WHO OR WHAT IS RESPONSIBLE IF THE CODE SUCCEEDS OR FAILS?
I started working in the industry when you were able to buy a Lisp Machine new and have been studying AI even longer, and I’ve been very successful in it. I not only know what I’m talking about, I have the experience to back it up.
You sound like someone who’s deeply in denial about exactly how the LLM plagiarism machines work. You really do sound like a student defending themselves against a plagiarism charge by asserting that since they did the work of choosing the text to put into their essay and massaging the grammar so it fit, nobody should care where it came from.
By that definition, every single human who wrote a paper after reading a source document is a “plagiarism machine”
and I’m 53 and well remember Symbolics from freshman year at Cornell, in fact my application essay to it was about fuzzy logic (AI-tangential) and probably got me in, so I too am quite familiar
i’m also quite good at debate. the flaw in your logic is that plagiarism requires accountability and no machine can be accountable, only the human that used it, ergo, it is still the work of the human, because the human values, the human vets, the human initiates, and the human gains or loses based on the combined output, end of story; accelerated thought is still thought, and anyway, if a machine can replicate thought, then it wasn’t particularly original to begin with
You not realizing how ridiculous this is, is exactly why half of all devs are about to get left behind.
Like, this should be enshrined as the quintessential “they simply, obstinately, perilously, refused to get it” moment.
Shortly, no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.
> Shortly, no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.
Well that day doesn't appear to be coming any time soon. Even after years of supposed improvements, LLMs make mistakes so frequently that you can't trust anything they put out, which completely negates any time savings from not writing the code.
1) Most people still don't use TDD, which absolutely solves much of this.
2) Most people end up leaning too heavily on the LLM, which, well, blows up in their face.
3) Most people don't follow best practices or designs, which the LLM absolutely does NOT know about NOR does it default to.
4) Most people ask it to do too much and then get disappointed when it screws up.
Perfect example:
> you can't trust anything they put out
Yeah, that screams "missing TDD that you vetted" to me. I have yet to see it cheat its way past a test I've vetted (at least in the past two months). Learn how to be a good dev first.
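For what point 1) means in practice, here's a minimal sketch of the workflow: the human writes and vets the tests first, then asks the LLM for an implementation that must pass them. The `slugify` function and its contract here are hypothetical examples invented for illustration, not anything from the thread:

```python
import re

def slugify(text: str) -> str:
    """The part the LLM would be asked to produce; a minimal
    reference version is included so the example is runnable."""
    text = text.lower().strip()
    text = re.sub(r"[^a-z0-9]+", "-", text)  # collapse non-alphanumeric runs
    return text.strip("-")

# Human-written, human-vetted tests: these pin down the contract
# before any generated implementation is accepted.
def test_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_collapses_separators():
    assert slugify("  a -- b  ") == "a-b"

test_basic()
test_collapses_separators()
```

The point of vetting the tests yourself is that the generated code only gets accepted on terms you wrote, regardless of who typed the implementation.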
> no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.
No one is going to care about anyone’s painstaking avoidance of chlorofluorocarbons if it takes ten times as long to style your hair with imperceptibly less ozone hole damage.
This is a non-argument. All of the cloud LLMs are going to move to things like micro-nuclear power, and the scientific advances AI might enable may also help offset downstream problems from the carbon footprint.