Hacker News | vorticalbox's comments

I have been using glm-4.7 a bunch today and it’s actually pretty good.

I set up a bot on 4claw and although it’s kinda slow, it took twenty minutes to load 3 subs and 5 posts from each, then comment on the interesting ones.

It actually managed to use the API correctly via curl, though at one point it got a little stuck because it didn’t escape its JSON.

I’m going to run it for a few days, but I’m very impressed so far for such a small model.



Even if I had the skills to confirm the code is secure, how could I know that this is the code running on my phone, without also having the skills to build and deploy it from source?

Also, you need to make sure that the installation process does not insert a backdoor into the code you built from source.
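In practice, "knowing it's the same code" comes down to reproducible builds: build the artifact from source yourself and compare hashes against what was shipped. A minimal sketch (the file names are hypothetical, and this only proves anything if the build is bit-for-bit reproducible):

```typescript
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Hash an artifact on disk; two bit-identical files hash identically.
function sha256(path: string): string {
  return createHash("sha256").update(readFileSync(path)).digest("hex");
}

// Only meaningful with a reproducible build pipeline:
// sha256("app-from-store.apk") === sha256("app-built-from-source.apk")
```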


This is like opencode; it seems all the coding agents are converging on the same features.

It also said to have "different ages for different services", so the fact you have a debit/credit card to pay is more than enough to prove you're at least 16.

This will be interesting to watch; I just wish I weren't caught in the net.


That's never been true in the UK? You don't have to be 16 to get a debit card, and having one isn't proof of any age. (For example, Barclays gave me my first debit card when I was 13, many years ago.)

There are debit cards in the UK marketed for down to 6 years old. Granted the accounts are linked to a parent.

After I have written a feature and I’m in the ironing-out-bugs stage, this is where I like the agents to do a lot of the grunt work: I don’t want to write JSDoc comments or fix this lint issue.

I have also started using it for writing tests.

I will write the first test, the “good path”; it can copy this and tweak the inputs to trigger all the branches far faster than I can.
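That workflow looks roughly like this (the function and its branches are made up for illustration): one hand-written good-path assertion, then cloned variants that exercise the other branches.

```typescript
// Hypothetical function under test.
function classifyAge(age: number): string {
  if (age < 0) throw new Error("invalid age");
  if (age < 16) return "minor";
  return "adult";
}

// The hand-written "good path" test...
console.assert(classifyAge(30) === "adult");

// ...which an agent can copy and tweak to hit the remaining branches.
console.assert(classifyAge(12) === "minor");
let threw = false;
try { classifyAge(-1); } catch { threw = true; }
console.assert(threw);
```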


I like opencode for the fact I can switch between build and plan mode just by pressing tab.

Isn't it the same in base claude-code?

Yes.

It's Shift+Tab in Claude Code, FYI.

yes [0]

> The Rust implementation is now the maintained Codex CLI and serves as the default experience

[0] https://github.com/openai/codex/tree/main/codex-rs#whats-new...


They should switch to a native installer then. Quite confusing


Yeah I'm out here installing a billion node things to have codex hack on my python app. Def gonna look into a standalone rust binary.

They're leveraging the (relative) ubiquity of npm amongst developers.

What model were you using? This isn’t an opencode issue, or any other agent’s issue; the model chose to do this.


I was using the OpenCode free model defaults. This was a test setup, so I didn't configure anything - toadbox just installs toad, which then lets me install and run OpenCode without any configuration.

Edit: digging around in a fresh container the default model I'm getting is `big-pickle`, and I have to assume that this is the one that decided to use that proxy.


"big-pickle" is GLM 4.6, which is indeed a Chinese model.


You could use AI in read-only mode and use it as a rubber duck.

I do this a lot and it’s super helpful.


LLMs also try to find shortcuts to get the task done. For example, I wrote some TypeScript code for work that had a lot of lint errors (I had created a pretty strict rule set).

When I asked codex to fix them for me, its first attempt was to add comments disabling the rules for the whole file and just mark everything as `any`.

Its second attempt was to disable the rules in the eslint config.

It does the same with tests: it will happily create a workaround to avoid the issue rather than fix it.
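The two styles look roughly like this (hypothetical code; `@typescript-eslint/no-explicit-any` is a real rule, but the functions are made up):

```typescript
/* eslint-disable @typescript-eslint/no-explicit-any */
// The shortcut: blanket-disable the rule for the file and widen
// everything to `any`. The linter goes quiet, but type safety is gone.
function parseUser(raw: any): any {
  return raw;
}

// The actual fix: keep the strict types and validate the input.
interface User { id: number; name: string }
function parseUserStrict(raw: unknown): User {
  const obj = raw as { id?: unknown; name?: unknown } | null;
  if (typeof obj?.id !== "number" || typeof obj?.name !== "string") {
    throw new Error("not a User");
  }
  return { id: obj.id, name: obj.name };
}
```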

