Hacker News | AlexC04's comments

to directly answer this bit:

> Feels like a fundamental bottleneck for production agent systems, so would love to compare how you're thinking about the latency vs accuracy tradeoff.

I'm really not focusing on latency right now. My short term goal is to prove the thesis that `ail` can improve same-model performance on SWEBench Pro vs. their own published results.

Can I run swebp with GLM-4.6 and get a score better than their published `68.20` (https://www.swebench.com/)?

The argument is that the latency right now just isn't the part we should worry about. If we're reducing the time to code something from ~6 weeks to 1 hour... then does it really matter that we add another 30 minutes of tool calls if we get it 100% right vs. 80% right?

Make it work -> Make it right -> make it fast.

I'm still on the first one tbh :rofl-emoji:


so - my approach is still being built and I'm still very hand-wavy about how it is going to come together, but effectively I'm building pipelines of prompts. Rather than running our LLM sequences as long-running sessions where the entire context gets loaded on every turn (a recipe for rot), we unlock the ability to introduce a thinking layer between each step of the process.

So before each turn is sent into the LLM we (potentially) run a local process to assemble a bespoke context of only what is required for that specific turn.

If a tool call is not going to be needed on the prompt, we don't include it in the system prompt on that round.
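A rough sketch of what that per-turn assembly could look like (everything here is hypothetical scaffolding for illustration, not the actual `ail` API):

```python
# Hypothetical sketch: instead of replaying the whole session, each turn
# gets a bespoke context built by a local process before the LLM call.

def assemble_context(turn, all_tools, history):
    """Pick only what this specific turn needs."""
    # Only include tools a planning step decided this turn will use.
    tools = [t for t in all_tools if t["name"] in turn["needed_tools"]]
    # Only include the prior messages this turn actually depends on.
    relevant = [m for m in history if m["id"] in turn["depends_on"]]
    return {"system": turn["system_prompt"], "tools": tools, "messages": relevant}

history = [{"id": 1, "text": "user asked for a refactor"},
           {"id": 2, "text": "agent listed affected files"}]
turn = {"system_prompt": "You are editing one file.",
        "needed_tools": ["read_file"], "depends_on": [2]}
all_tools = [{"name": "read_file"}, {"name": "run_tests"}, {"name": "web_search"}]

ctx = assemble_context(turn, all_tools, history)
print(len(ctx["tools"]), len(ctx["messages"]))  # 1 1
```

The point of the toy example: the two unneeded tools and the irrelevant history item never enter the context at all, so they can't contribute to rot.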

I'm still formalizing the spec at the moment and think I'm about six months to a year out before I have a full human ready UI running.

This is the foundational paper I'm basing the tool on: https://github.com/AlexChesser/ail/blob/main/docs/blog/the-y... while the spec starts here: https://github.com/AlexChesser/ail/blob/main/spec/core/s01-p...

Essentially I'm trying to build an artificial neocortex and frontal lobe to provide a complete layer of Executive Function that operates on top of our agents - like Claude Code (or whatever else).

I'm basing the roadmap on about 100 years of cognitive science. We've legitimately had names for all these failure modes (in humans) since the 1960s. We have observations of what we're witnessing in agents going back to 1848.

We have the roadmap from Psychology.


this is a pretty important piece, and the research backs you up. Moving that context out of your system prompt dynamically is going to help reduce the lost-in-the-middle effect. Context rots almost immediately. I've got a project being built to address this directly as well, but I'm still in very early days.

Keep it up! You're on the right track.

Hong, K., & Chroma Research Team. (2025). Context rot: How increasing input tokens impacts LLM performance. Chroma Research. https://research.trychroma.com/context-rot

Liu, N. F., Lin, K., Hewitt, J., Paranjape, A., Bevilacqua, M., Petroni, F., & Liang, P. (2024). Lost in the middle: How language models use long contexts. Transactions of the Association for Computational Linguistics, 12, 157–173. https://doi.org/10.1162/tacl_a_00638


Hey! This looks a lot like what I'm working on, from a slightly different angle. I think you're on the right track. In fact, cortex as a name is perfect, since you're effectively building the executive-function layer for search and selection. I also think Rust is the right language to go with.

I'm going to do a deeper read of your work in a bit. I'd love it if you took a look at my theory of artificial cognition, The YAML of the Mind (https://alexchesser.medium.com/the-yaml-of-the-mind-8a4f945a...), dropped in to the `ail` project, and let me know what you think.

I just have to get the kids to school and I'll pop back into cortex later


Hey folks, I wrote this. If you're interested in the concepts or pressure testing the ideas a little deeper, please feel free to comment here or reach out directly.

I appreciate that it's pretty long; feel free to point your LLMs at it.


this is really exciting and dovetails really closely with the project I'm working on.

I'm writing a language spec for an LLM runner that has the ability to chain prompts and hooks into workflows.

https://github.com/AlexChesser/ail

I'm writing the tool as proof of the spec. Still very much a pre-alpha phase, but I do have a working POC in that I can specify a series of prompts in my YAML language and execute the chain of commands in a local agent.

One of the "key steps" that I plan on designing is specifically an invocation interceptor. My underlying theory is that we would take whatever random series of prose that our human minds come up with and pass it through a prompt refinement engine:

> Clean up the following prompt in order to convert the user's intent
> into a structured prompt optimized for working with an LLM.
> Be sure to follow appropriate modern standards based on current
> prompt engineering research. For example, limit the use of persona
> assignment in order to reduce hallucinations.
> If the user is asking for multiple actions, break the prompt
> into appropriate steps (etc...)

That interceptor would then forward the well structured intent-parsed prompt to the LLM. I could really see a step where we say "take the crap I just said and turn it into CodeSpeak"
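As a toy illustration, the interceptor could just be a wrapper that runs the refinement prompt first and forwards the result (the function names and the `call_llm` client here are made up for the sketch; the stub lets it run without a real model):

```python
# Hypothetical interceptor sketch: rewrite the raw human prompt before
# it ever reaches the main model. `call_llm` is a stand-in for whatever
# client the runner actually uses.

REFINE_INSTRUCTIONS = (
    "Clean up the following prompt to convert the user's intent into a "
    "structured prompt optimized for an LLM. Limit persona assignment; "
    "if multiple actions are requested, break them into numbered steps."
)

def intercept(raw_prompt, call_llm):
    # Pass 1: turn loose human prose into a structured prompt.
    refined = call_llm(system=REFINE_INSTRUCTIONS, user=raw_prompt)
    # Pass 2: send the refined prompt on to do the actual work.
    return call_llm(system="You are a coding agent.", user=refined)

# Stubbed client so the sketch is runnable:
def fake_llm(system, user):
    return f"[{system[:20]}...] {user}"

print(intercept("uh, fix the login bug and also add tests?", fake_llm))
```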

What a fantastic tool. I'll definitely do a deep dive into this.


that's a bit of a meta discussion and it'd probably reveal some super interesting things about how tech culture has changed in the last ~15 years.

I've been on HN since 2010 (lost the password to my first account, alexc04) and I recall a time when it felt like every second article on the front page was a bold, directive pronouncement or something just aggressively certain of its own correctness.

Like "STOP USING BASH" or "JQUERY IS STUPID" - not in all caps of course, but it created an unpleasant air and tone (IMO; again, this is like 16 years ago now, so I may have some memory degradation)

Things like donglegate got real traction here among the anti-woke crew. There have been times when the Venn diagram of 4chan and Hacker News felt like it had a lot more overlap. I've even bowed out of discussion for years at a time or developed an avoidance reaction to HN's toxic discussion culture.

IMO it has been a LOT better in more recent years, but I also don't dive as deep as I used to.

ANYWAYS - my point is I would be really interested to see a sentiment analysis of HN headlines over the years to try and map out cultural epochs of the community.

When has HN swayed more into the toxic and how has it swayed back and forth as a pendulum over time? (or even has it?)

I wonder what other people's perspective is of how the culture here has changed over time. I truly think it feels a lot more supportive than it used to.


> This isn’t a bubble inflating. It’s capital and intelligence relocating.

Is this GPT? This kind of "not X but Y" pattern is a real code smell for slop and can cause someone to immediately bounce out of your writing. The pattern can feel really hard-hitting when you're reading what GPT put in front of you, but others are REALLY starting to reject it.

https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

https://saigaddam.medium.com/it-isnt-just-x-it-s-y-54cb403d6...

https://www.blakestockton.com/dont-write-like-ai-1-101-negat...

Like it or not this can lead people to immediately reject your entire message, shut down and stop reading.


I agree completely and I don't think you can.

Even people recording short form video are doing it. They're reading out their chat-gpt-psychosis induced fever dreams using scripts written by chatGPT.

The 7-part tweets that build to a slop crescendo are doing my head in.

The solution might be some combination of:

1. leave the social media sites where the slop is irredeemable.
2. unfollow everyone, reset your algorithm.
3. be aggressive about who you add back in. Make sure they're humans having high-quality discussions.
4. be aggressive about who you block. Lower the bar on blocking: one and done. No chances, no wait-and-see.
5. move to smaller communities of real humans.

None of this has worked for me yet. I'm still swimming in a vast sea of slopity slop slop. Dead internet theory appears to be playing out in front of us.

edit: On threads I've been trying to use their `dear algo` feature pretty aggressively but it doesn't work very well. I've asked it to remove some types of comments and it seems to just add more of them.


If I could have one of these cards in my own computer, do you think it would be possible to replace Claude Code?

1. Assume it's running a better model, even a dedicated coding model. High-scoring, but obviously not Opus 4.5.
2. Instead of the standard send-receive paradigm, we set up a pipeline of agents, each of whom parses the output of the previous.

At 17k tokens/sec running locally, you could effectively spin up tasks like "you are an agent who adds semicolons to the end of each line in JavaScript"; with some sort of dedicated software in the style of Claude Code, you could load an array of 20 agents, each with a role to play in improving outputs.

take user input and gather context from codebase -> rewrite what you think the human asked you in the form of an LLM-optimized instructional prompt -> examine the prompt for uncertainties and gaps in your understanding or ability to execute -> <assume more steps as relevant> -> execute the work

Could you effectively set up something that is configurable to the individual developer - a folder of system prompts that every request loops through?

Do you really need the best model if you can pass your responses through a medium-tier model that engages in rapid self-improvement 30 times in a row before your Claude server has returned its first-shot response?
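The chain described above could be sketched as a simple fold over an ordered list of stage prompts (all names here are hypothetical; `stub_model` stands in for the local high-throughput model):

```python
# Sketch of the agent pipeline: each "agent" is just a narrow system
# prompt applied to the previous stage's output.

PIPELINE = [
    "Rewrite the user's request as an LLM-optimized instructional prompt.",
    "List uncertainties or gaps in the request; resolve what you can.",
    "Execute the work described in the prompt.",
]

def run_pipeline(user_input, run_model):
    text = user_input
    for stage_prompt in PIPELINE:
        # Each stage parses and transforms the previous stage's output.
        text = run_model(system=stage_prompt, user=text)
    return text

# Stub so the sketch runs without a model:
def stub_model(system, user):
    return user + " | processed"

result = run_pipeline("add semicolons to this JS file", stub_model)
print(result)  # add semicolons to this JS file | processed | processed | processed
```

With per-stage latency near zero, making the pipeline a per-developer config (a folder of system prompts, as suggested above) is just a matter of loading `PIPELINE` from disk.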


Models can't improve themselves with their own (model) input, they need to be grounded in truth and reality.


But at some point the model is large enough to accomplish any task a human could specify. For software development, I think we're pretty much at that point with the latest Anthropic/Google/OpenAI models. We have no idea where token pricing is headed, but the consensus seems to be that it will only get more expensive. If Taalas can offer the same functionality we have with frontier models today at 1/10 of the cost and 10x the speed, then they're going to take over a large part of the market.


I think so. The last few months have shown us that it isn't necessarily the models themselves that provide good results, but the tooling / harness around them. Codex, Opus, GLM 5, Kimi 2.5, etc. each have their quirks. Use a harness like opencode and give the model the right amount of context, and they'll all perform well; you'll get a correct answer every time.

So in my opinion, in a scenario like this where the token output is near-instant but you're running a lower-tier model, good tooling can overcome the gap to a frontier cloud model.


It's 2.5kW so it likely won't sit in your computer (quite beyond what a desktop could provide in power alone to a single card, let alone cool). It's 8.5cm^2 which is a beast of a single die.

Basically logistically it's going to need to be in a data centre.

It's ideal for small-context, high-throughput work. Perhaps parsing huge piles of text, like if you had the entire Epstein files as text.

I think Claude code benefits from larger context to keep your entire project in view and deep reasoning.

What this would certainly replace is when Claude dispatches to Haiku for manual NLP tasks.


> It's 2.5kW so it likely won't sit in your computer (quite beyond what a desktop could provide in power alone to a single card, let alone cool). It's 8.5cm^2 which is a beast of a single die.

I wonder how you cool a 3x3cm die that outputs 2.5 kW of heat. In the article they mention that the traditional setup requires water cooling, but surely this does as well, right?


Can't imagine what else could manage that roughly 2.9 W/mm².
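For what it's worth, the power density implied by the figures quoted above (2.5 kW card power, 8.5 cm² die) works out like this:

```python
# Back-of-envelope power-density check using the numbers from the
# comment being replied to (2.5 kW, 8.5 cm^2 die).
power_w = 2500
die_area_mm2 = 8.5 * 100  # 8.5 cm^2 -> 850 mm^2
density = power_w / die_area_mm2
print(f"{density:.2f} W/mm^2")  # 2.94 W/mm^2
```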

It does make you wonder: if the copy is misleading about something so simple, how much else could be puffery?

Maybe they mean that a standard liquid cooling system will work?

