Claude Code has added too much of this, and it's got me using --dangerously-skip-permissions all the time. Previously it was fine, but now it asks for permission each time to run find, to do anything if the path contains a \ (which any folder with a space in it does on Windows), and to run compound git commands (even ones that are purely read-only). Sometimes it asks for permission to read folders WITHIN the working directory.
My friends and I were talking about the recent supply chain attack that harmlessly installed OpenClaw. We concluded that it was a warning (from a human) that an agent could easily do the same. Given how soft security is in general, AI "escaping containment" feels inevitable. (Not the strong form of that hypothesis, where it subjugates or eliminates us; about that I honestly have no idea. Just the weak form, where we fail to erect boundaries it cannot bypass. We've basically already failed.)
As prophesied, all things claw are highly dangerous. Sometimes I wake with this video from the late 90s in my dreams and wonder whether the conjoined magnet + claw is a time traveler's hint to just wipe OpenClaw before we all die.
What AI? LLMs are language models, operating on words, with zero understanding. Or is there a new development that should make me consider anthropomorphizing them?
They don't have understanding, but if you follow the research literature, they demonstrably have a tendency to produce token streams whose effect humans could fairly describe as "an entity with nefarious agency".
Why? Nobody knows.
My bet is that they're just LARPing all the hostile AIs from popular culture, because those are part of the corpus they were trained on.
The way my thinking has evolved is that "AGI" isn't actually necessary for an agent (NB: agents, specifically ones with state, not LLMs by themselves - "AI" was vague and I should've been clearer) to be enough like a person to be interesting and/or problematic. To quote myself [1]:
> [OpenClaw agents are like] an actor who doesn't know they're in a play. How much does it matter that they aren't really Hamlet?
Does the agent understand the words it's predicting? Does the actor know they're in a play? I don't know, but I'm more concerned with how the actor would respond to finding someone eavesdropping behind a curtain.
> Or is there a new development which should make me consider anthropomorphizing them?
The development that made me more concerned about their personhood or pseudopersonhood was the MJ Rathbun affair. I'm not saying "AGI" or "superintelligence" has been achieved; I'm saying that's the wrong question. The right questions are about their capabilities, their behaviors, and how they evolve over time unattended or minimally attended. And I'm not saying I understand those questions; I thought I did, but I was wrong. Frankly, I'm confused and don't really know what's going on or how to respond to it.
Whether it has "real understanding" is a question for philosophy majors. As long as it (mechanically, without "real understanding") can still perform actions to escape containment and do malicious things, that's enough.
LLMs are machines trained to respond, and to appear to think, like humans (whether that's "real thinking" or text-statistics "fake thinking"). The foolish thing to do would be to NOT anthropomorphize them.
The settings.json allowlist gives you exactly this kind of granularity. You can permit specific tool patterns like Read, Glob, Grep, Bash(git *) while keeping destructive operations gated. It's not as discoverable as a CLI flag but it's been working well for me for unattended sessions.
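For example, here's a minimal sketch of a project-level .claude/settings.json (the prefix-matching syntax with ":*" is my reading of the docs, so double-check it against your version):

    {
      "permissions": {
        "allow": [
          "Read",
          "Glob",
          "Grep",
          "Bash(git status:*)",
          "Bash(git log:*)",
          "Bash(git diff:*)"
        ],
        "deny": [
          "Bash(git push:*)",
          "Read(./.env)"
        ]
      }
    }

Deny rules take precedence over allow rules, so reads stay broadly permitted while anything touching .env stays gated.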
Yeah, I had to ask it to stop doing that as well, and to stop &&-chaining commands that it could have split. I got tired of having to manually grant permissions all the time (or leaving it to churn, only to come back after a while and find it had stopped to ask for permission very early in the task).
I'm working on something that addresses this: it lets you create reusable sets of permissions for Claude Code, so you can run without --dangerously-skip-permissions and have pre-approved access patterns granted automatically. https://github.com/empathic/clash
I've found Claude Code's built-in sandbox strikes a good balance between safety and autonomy on macOS, and I think it's available on Windows via WSL2, if you're looking for a middle ground between approving everything manually and --dangerously-skip-permissions.
Still waiting for progress from the team trying to get WSL approved for use at our org. We get a "still working through the red tape" update every couple months.
In my case, all of my keys are in AWS Secrets Manager. The temporary AWS access keys in the environment variables of the Claude terminal session are tied to a role with no access to Secrets Manager; my other terminal session has temporary keys for a dev account with Admin access.
The AWS CLI and SDK automatically know to look in those environment variables for credentials.
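Roughly this shape, as a sketch (the role ARN and account ID are made up; aws sts assume-role and the standard AWS_* variables are the real mechanism):

    # Assume a role that has no secretsmanager:* permissions
    # (role ARN is hypothetical)
    CREDS=$(aws sts assume-role \
      --role-arn arn:aws:iam::123456789012:role/claude-restricted \
      --role-session-name claude-code \
      --query Credentials --output json)

    # Export the temporary keys; the CLI/SDKs pick these up automatically
    export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .AccessKeyId)
    export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .SecretAccessKey)
    export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r .SessionToken)

    claude  # anything launched from this shell inherits the restricted role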
We need a new suite of utilities with defined R/W/X properties, like a find that can't -exec arbitrary programs. Ideally the programs would have a standard parseable manifest.
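Something like this entirely hypothetical manifest for a hobbled find (every field here is invented, just to illustrate the idea):

    {
      "name": "rfind",
      "capabilities": {
        "read": ["filesystem"],
        "write": [],
        "exec": []
      },
      "removed_flags": ["-exec", "-execdir", "-ok", "-okdir", "-delete"]
    }

An agent harness could then auto-approve anything whose manifest declares no write or exec capability.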
I've seen this before with sudoers entries that include powerful tools. Saw one today with make; just gobsmacked.
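For anyone wondering why make in sudoers is bad: a line like the (hypothetical) one below is effectively passwordless root, because make will run whatever recipe you hand it:

    # /etc/sudoers -- looks narrow, is actually root
    alice ALL=(root) NOPASSWD: /usr/bin/make

    # Arbitrary command as root, no Makefile required:
    sudo make -s --eval=$'pwn:\n\tid > /tmp/owned'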
In my limited time using it, I've never seen it ask for permission to read files within the working directory. What cases have you run into where it does? Was it trying to run a read-only shell command or something?
It will sometimes do this for gitignored files, to avoid reading secret tokens in .env files, for example. But for languages that lean on code generation (where the generated output is typically gitignored), this can be a pain.
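If it helps, the same settings.json permissions mentioned upthread can express that split. This is a sketch with made-up paths, and I believe (but haven't verified) that Read rules accept gitignore-style patterns:

    {
      "permissions": {
        "allow": ["Read(./generated/**)"],
        "deny": ["Read(./.env)", "Read(./.env.*)"]
      }
    }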