Hacker News | bilater's comments

I have always wondered how archives manage to capture screenshots of paywalled pages like the New York Times or the Wall Street Journal. Do they have agreements with publishers, do their crawlers have special privileges to bypass detection, or do they use technology so advanced that companies cannot detect them?

Astro to Cloudflare, Bun to Anthropic. Good trend seeing people toiling away at OSS get financially rewarded.

Big difference is that Anthropic blocks competitors from using its products (they literally cut off direct API access, even through third parties like Cursor).

Two whole projects. A stampede.

big beloved projects

I wonder if we'll get something like CORS for agents, where they can only pass data around to whitelisted IPs (local, Claude-sanctioned servers, etc.).
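Roughly what I have in mind, as a hypothetical sketch (the allowlist contents, hosts, and function name are my own assumptions, not any real product's API):

    # Hypothetical "CORS for agents" egress check: the agent runtime only
    # allows outbound requests to hosts on an explicit allowlist.
    from urllib.parse import urlparse

    ALLOWED_HOSTS = {
        "localhost",
        "127.0.0.1",
        "api.anthropic.com",  # example of a "sanctioned" host
    }

    def egress_allowed(url: str) -> bool:
        """Return True only if the URL's host is on the allowlist."""
        host = urlparse(url).hostname or ""
        return host in ALLOWED_HOSTS

    # The agent's HTTP layer would gate every outbound call:
    for url in ["http://localhost:8080/tool", "https://evil.example.com/exfil"]:
        print(url, "->", "allow" if egress_allowed(url) else "block")

The point is just that the runtime, not the model, decides where data is allowed to go.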

Isn't the whole issue here that because the agent trusted Anthropic IPs/URLs it was able to upload data to Claude, just to a different user's storage?

I share some of my hacky experiments here

https://www.hackyexperiments.com


I'm curious how tools like Claude Code or Cursor edit code. Do they regenerate the full file and diff it, or do they just output a diff and apply that directly? The latter feels more efficient, but harder to implement.
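For the diff route, I imagine something like a search/replace block that the tool applies directly. This is only a guess at the mechanism (the names and format are mine, not Claude Code's or Cursor's actual implementation):

    # Hypothetical sketch of "output a diff and apply it": the model emits a
    # search block and a replace block, and the tool patches the file in place
    # instead of regenerating the whole thing.
    def apply_edit(source: str, search: str, replace: str) -> str:
        """Apply a single search/replace edit; fail loudly if the anchor text
        isn't found exactly once, so a bad patch can't silently corrupt the file."""
        if source.count(search) != 1:
            raise ValueError("search block must match exactly once")
        return source.replace(search, replace, 1)

    original = "def greet():\n    print('hello')\n"
    patched = apply_edit(
        original,
        search="    print('hello')\n",
        replace="    print('hello, world')\n",
    )
    print(patched)

Requiring the search block to match exactly once is what would make this safer than a blind patch, at the cost of the model having to reproduce the anchor text verbatim.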

Most things don't work. You can be an armchair critic and scoff, and you may be right a lot of the time. But you'll also never really build anything of note and/or have crazy hockey-stick growth in your life.

If you are using local models for coding you are midwitting this. Your code should be worth more than a subscription.

The only legit use case for local models is privacy.

I don't know why anyone would want to code with an intern level model when they can get a senior engineer level model for a couple of bucks more.

It DOESN'T MATTER if you're writing a simple hello world function or building out a complex feature. Just use the f*ing best model.


Or if you want to do development work while offline.


Good to have fallbacks, but in reality most people (at least in the West) will have internet 99% of the time.


Sure, but I am not one of them. I find myself wanting to code on trains and planes pretty often, and so local toolchains are always attractive for me.


Is this some kind of mental problem, that you want to tell people what to do and how to spend their money? Pretty jerky attitude IMO.


"senior engineer level model" is the biggest cope I've ever seen


I will use a local coding model for our proprietary, trade-secret internal code when Google uses Claude for its internal code and Microsoft starts using Gemini for its internal code.

The flip side of this coin is I'd be very excited if Jane Street or DE Shaw were running their trading models through Claude. Then I'd have access to billions of dollars of secrets.


> I'd be very excited if Jane Street or DE Shaw were running their trading models through Claude. Then I'd have access to billions of dollars of secrets.

Using Claude for inference does not mean the codebase gets pulled into their training set.

This is a tired myth that muddies up every conversation about LLMs.


> This is a tired myth that muddies up every conversation about LLMs

Many copyright holders, and the courts, would beg to differ.


lol yeah, it's weird to me that even people on HN can't wrap their heads around stateless calls.


unless you control both the client and the server, you cannot prove a call is stateless.


I like how this is trending with another post called 'Slowness is a Virtue'


for now


This is not my post, but it does resonate with me, and I have noticed it with my workflows as well. Like it or not, this is the rapidly approaching futu...present?

