I have always wondered how archives manage to capture screenshots of paywalled pages like the New York Times or the Wall Street Journal. Do they have agreements with publishers, do their crawlers have special privileges to bypass detection, or do they use technology so advanced that companies cannot detect them?
The big difference is that Anthropic blocks competitors from using its products (they literally cut off direct API access, and even access through third parties like Cursor).
Isn't the whole issue here that, because the agent trusted Anthropic IPs/URLs, it was able to upload data to Claude, just to a different user's storage?
I'm curious how tools like Claude Code or Cursor edit code. Do they regenerate the full file and diff it, or do they just output a diff and apply that directly? The latter feels more efficient, but harder to implement.
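For what it's worth, a lot of these tools seem to do something closer to the second approach: the model emits a targeted edit (often a search/replace style block) and the harness applies it to the file. Here's a minimal Python sketch of that application step, assuming a hypothetical SEARCH/REPLACE format; apply_edit and the sample snippet are made up for illustration, not any specific tool's actual implementation.

    # Sketch: apply a model-emitted edit to existing source text instead of
    # regenerating the whole file. The format and fallback behaviour here are assumptions.

    def apply_edit(source: str, search: str, replace: str) -> str:
        """Swap the first exact occurrence of `search` for `replace`."""
        if search not in source:
            # Stale context: the edit no longer matches the file. A real tool
            # might fuzzy-match, ask the model to retry, or regenerate the file.
            raise ValueError("search block not found")
        return source.replace(search, replace, 1)

    original = "def greet(name):\n    print('hi ' + name)\n"
    edited = apply_edit(
        original,
        search="    print('hi ' + name)\n",
        replace="    print(f'hello, {name}!')\n",
    )
    print(edited)

The upside is that only the changed region has to be generated; the downside is exactly the failure mode above, where the model's view of the file has drifted from what's on disk.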
Most things don't work. You can be an armchair critic and scoff, and you may be right a lot of the time. But you'll also never really build anything of note and/or have crazy hockey-stick growth in your life.
I will use a local coding model for our proprietary, trade-secret internal code when Google starts using Claude for its internal code and Microsoft starts using Gemini for theirs.
The flip side of this coin is that I'd be very excited if Jane Street or DE Shaw were running their trading models through Claude. Then I'd have access to billions of dollars of secrets.
> I'd be very excited if Jane Street or DE Shaw were running their trading models through Claude. Then I'd have access to billions of dollars of secrets.
Using Claude for inference does not mean the codebase gets pulled into their training set.
This is a tired myth that muddies up every conversation about LLMs.
This is not my post, but it does resonate with me, and I have noticed it with my workflows as well. Like it or not, this is the rapidly approaching futu...present?