
> I wonder if there is a more general solution that can make models spend more compute on making important choices, while making generation of the "obvious" tokens cheaper and faster.

I think speculative decoding counts as a (perhaps crude) way of implementing this?
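For context, a minimal greedy sketch of the idea (the `draft_model`/`target_model` callables are invented stand-ins, not a real API): a cheap model generates the "obvious" tokens one by one, and the expensive model only verifies them in a single batched pass.

```python
# Toy sketch of speculative decoding (greedy variant). The
# draft_model/target_model callables are hypothetical stand-ins;
# real implementations accept/reject probabilistically so the
# output distribution exactly matches the target model's.

def speculative_step(target_model, draft_model, prefix, k=4):
    """Draft k cheap tokens, then verify them all in one target pass."""
    # 1. The small model greedily proposes k "obvious" tokens.
    ctx = list(prefix)
    draft = []
    for _ in range(k):
        tok = draft_model(ctx)       # cheap forward pass per token
        draft.append(tok)
        ctx.append(tok)

    # 2. The large model scores all k positions in a single batched
    #    pass -- this is where the compute saving comes from.
    verified = target_model(prefix, draft)   # target's choice at each position

    # 3. Keep drafted tokens up to the first disagreement; at the
    #    mismatch, take the target's token instead, so the result is
    #    identical to what the target alone would have produced.
    accepted = []
    for proposed, actual in zip(draft, verified):
        accepted.append(actual)
        if proposed != actual:
            break
    return list(prefix) + accepted
```

So the "important" (hard-to-predict) tokens still cost a full target-model step, while runs of easy tokens get amortized into one verification pass.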


Source of that image though...?

An AI prompt in June 2025.

The tools were mostly already known, no? (I wish they had a "present" tool which allowed the model to copy-paste from files/context/etc., showing the user some content without forcing it through the model.)
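Something like this hypothetical tool definition is what I mean (names and schema invented for illustration, not a real Anthropic API): the model only passes a reference, and the client slices and renders the content itself, so it never costs output tokens.

```python
# Hypothetical "present" tool -- invented names, just illustrating the
# idea: the client fetches and renders the slice itself, so the content
# never passes through the model's output tokens.

PRESENT_TOOL = {
    "name": "present",
    "description": "Show the user a verbatim slice of a file or "
                   "context item without generating it as tokens.",
    "input_schema": {
        "type": "object",
        "properties": {
            "source": {"type": "string", "description": "File path or context-item id"},
            "start_line": {"type": "integer"},
            "end_line": {"type": "integer"},
        },
        "required": ["source"],
    },
}

def handle_present(args, render=print):
    """Client-side handler: read the slice and show it directly."""
    with open(args["source"]) as f:
        lines = f.readlines()
    start = args.get("start_line", 1) - 1
    end = args.get("end_line", len(lines))
    render("".join(lines[start:end]))
    return "presented"    # only this tiny ack returns to the model
```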

Yeah, in fact one thing Claude is freaking great at is decompilation.

If you can download it client-side, you can likely place a copy in a folder and ask Claude:

‘Decompile the app in this folder to answer further questions on how it works. As an example first question, explain what happens when a user does X.’

I do this with obscure video games where I want a guide on how some mechanics work. E.g. https://pastes.io/jagged-all-69136 as the result of a session.

It can ruin some games, but despite the possibility of hallucinations I find it waaay more reliable than random internet answers.

Works for apps too. Obfuscation doesn’t seem to stop it.


Whoa, when did they come out with JA3?

> This is a production-grade agentic system that happens to live in your terminal.

You read the code?

> The ink/ directory — roughly 50 files — is not the popular npm ink package. Anthropic built their own React-based terminal rendering engine from scratch.

Interesting


Maybe the first sentence of the "Final Thoughts" section is man-made. The rest is LLM slop.


To be fair, they display it reasonably prominently on GitHub when you are logged in. Given that, I feel the post title falls under the clickbait category. I was fully aware of the Copilot opt-out change, but still clicked due to the phrasing of the title.

Do yourself a favor and upgrade your history search with fzf shell integration (or similar): https://youtu.be/u-qLj4YBry0?t=223 / https://junegunn.github.io/fzf/shell-integration/

Or use atuin[1]

[1]: https://atuin.sh/


Opera was by far the best browser for a while, for sure. Sad they couldn't keep up :/


It wasn't about keeping up. It was 100% about Google putting billions into advertising and abusing their dominance. Besides legit stuff like paying millions, or more likely billions, for billboards and spots on TV/radio/etc., there were monopoly "ads" on the google.com, gmail.com, and youtube.com homepages. And of course the classic of blocking features based on user agent alone, lying to people that they needed to use Chrome to access a product or a feature. They just needed to manipulate the masses, and now almost everyone uses a browser from an advertising company, and they can keep pulling the rug.

My impression is that they are a compression expert, not a color expert. Makes sense they chose uniform flat colors :D


The model is great. The UX is ~~horrible~~ annoying...


Don't get me started. For every half-decent choice, there's a multitude of insane choices. After all this time they still don't have side-by-side review.

Equally annoying, the break from VSCode is horrible: having to use a separate registry, not having basic settings sync, the delay behind mainline VSCode updates.

Then, it's just plain buggier than the others. The agent terminal semi-regularly just doesn't work, it doesn't like listing directories in the @ mentions, the SSH plugin crashes every other time it tries to connect, and undoing agent work sometimes undoes edits I made in unrelated files. Sometimes updates just regress performance hard for seemingly no reason.

I also noticed the token use is wildly less efficient than CC's or Codex's these days. After almost no time at all it's up to 100,000 tokens, and they're charging $1 per request for Sonnet. Side by side, Cursor spent $17 in the same time CC spent $4. Which is bizarre to me, since they advertise that their indexing and semantic search is more token-efficient.

The autocomplete model was the only reason I stayed as long as I did. I wish there were a VSCode equivalent.


Well, the UI as a whole is OK to me (except the parts which are way too volatile). I was talking about the UX of the autocomplete model. The model is very often spot on and fast, but it's impossible to properly configure it to be less in your face, making it basically useless for day-to-day development.


To be fair, is "with RL" really "just"?

They should have disclosed it, though. If they didn't, it's a bad look for sure.


Could you explain how much improvement RL + fine-tuning has given Composer 2.0 over Kimi K2.5? I don't fully grasp the work Cursor has done on the model here.
