
This is a good way to benchmark models. We [the SWE-bench team] took the meta-version of this and implemented it as a new benchmark called CodeClash -

We have agents implement agents that play games against each other: Claude isn't playing against GPT directly; instead, an agent written by Claude plays poker against an agent written by GPT. This really tough task leads to very interesting findings about AI for coding.

https://codeclash.ai/


>this really tough task leads to very interesting findings on AI for coding

Are you going to share those with the class or?


Leaderboard looks very outdated...


Cool to see Core War! I feel it's mostly forgotten by now. My dad still plays it to this day, though, and even attends tournaments.



[SWE-bench co-author here] It seems like they run this test on a subset of 50 tasks, and that they only run the test once per day. So a lot of the movement in accuracy could be attributed to that. I would run on 300 tasks and I'd run the test suite 5 or 10 times per day and average that score. Lots of variance in the score can come from random stuff like even Anthropic's servers being overloaded.


But degradation from servers being overloaded would be the type of degradation this SHOULD measure, no? Unless it's only intended to measure whether they're quietly distilling models (which they claim not to do? idk for certain).


Load just makes LLMs behave less deterministically and likely degrade. See: https://thinkingmachines.ai/blog/defeating-nondeterminism-in...

They don't have to be malicious operators in this case. It just happens.


> malicious

It doesn't have to be malicious. If my workflow is to send a prompt once and hopefully accept the result, then degradation matters a lot. If degradation is causing me to silently get worse code output on some of my commits it matters to me.

I care about -expected- performance when picking which model to use, not optimal benchmark performance.


Non-determinism isn’t the same as degradation.

The non-determinism means that even with a temperature of 0.0, you can’t expect the outputs to be the same across API calls.

In practice people tend to anchor on the best results they've experienced and view anything else as degradation, when it may just be randomness in either direction from the prompts. When you're getting good results you assume it's normal. When things feel off you think something abnormal is happening. Rerun the exact same prompts and context with temperature 0 and you might get a different result.


This has nothing to do with overloading. The suspicion is that when there is too much demand (or they just want to save costs), Anthropic sometimes uses a less capable (quantized, distilled, etc) version of the model. People want to measure this so there is concrete evidence instead of hunches and feelings.

To say that this measurement is bad because the server might just be overloaded completely misses the point. The point is to see if the model sometimes silently performs worse. If I get a response from "Opus", I want a response from Opus. Or at least want to be told that I'm getting slightly-dumber-Opus this hour because the server load is too much.


“Just drink the water, it’s all water.”


this is about variance of daily statistics, so I think the suggestions are entirely appropriate in this context.


The question I have now after reading this paper (which was really insightful) is do the models really get worse under load, or do they just have a higher variance? It seems like the latter is what we should expect, not it getting worse, but absent load data we can't really know.


Explain this though. The code is deterministic, even if it relies on pseudorandom number generation. It doesn't just happen; someone has to make a conscious decision to force a different code path (or model) if the system is loaded.


It's not deterministic. Any individual floating-point mul/add is deterministic, but on a GPU these all happen in parallel, and the accumulation happens in whatever order they complete.

When you add A, then B, then C, you can get a different answer than C, then A, then B, because of floating-point rounding, approximation error, subnormals, etc.
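
A minimal sketch of the non-associativity with plain Python floats (no GPU needed; the effect compounds when millions of terms are reduced in parallel):

  # Floating-point addition is not associative: the grouping determines
  # the rounding, so the same numbers summed in different orders can
  # give different results.
  x = (0.1 + 0.2) + 0.3   # 0.6000000000000001
  y = 0.1 + (0.2 + 0.3)   # 0.6
  print(x == y)           # False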


It can be made deterministic. It's not trivial and can slow it down a bit (not much), but there are environment variables and settings you can use to make your GPU computations bitwise reproducible. I have done this when training models with PyTorch.
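
Roughly the recipe being described, as a sketch (exact flags depend on your PyTorch/CUDA versions):

  import os, random
  import numpy as np
  import torch

  # cuBLAS needs this for bitwise-reproducible GEMMs (CUDA >= 10.2);
  # set it before any CUDA work happens.
  os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

  # Seed every RNG in play.
  random.seed(0)
  np.random.seed(0)
  torch.manual_seed(0)

  # Error out if a nondeterministic kernel would be selected, and pin
  # cuDNN to deterministic, non-benchmarked algorithms.
  torch.use_deterministic_algorithms(True)
  torch.backends.cudnn.deterministic = True
  torch.backends.cudnn.benchmark = False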


There are settings to make it reproducible but they incur a non-negligible drop in performance.

Unsurprising given they amount to explicit synchronization to make the order of operations deterministic.



For all practical purposes any code reliant on the output of a PRNG is non-deterministic in all but the most pedantic senses... And if the LLM temperature isn't set to 0 LLMs are sampling from a distribution.

If you're going to call a PRNG deterministic then the outcome of a complicated concurrent system with no guaranteed ordering is going to be deterministic too!


No, this isn't right. There are totally legitimate use cases for PRNGs as sources of random number sequences following a certain probability distribution where freezing the seed and getting reproducibility is actually required.


And for a complicated concurrent system you can also replay the exact timings and orderings as well!


That's completely different from PRNGs. I don't understand why you think those things belong together.


How is this related to overloading? The nondeterminism should not be a function of overloading. It should just time out or reply slower. It will only be dumber if it gets rerouted to a dumber, faster model eg quantized.


Temperature can't be literally zero, or it creates a divide by zero error.

When people say zero, it is shorthand for “as deterministic as this system allows”, but it's still not completely deterministic.


Zero temp just uses argmax, which is what softmax approaches if you take the limit of T to zero anyway. So it could very well be deterministic.
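
A minimal sketch of the two cases with hypothetical logits (the argmax branch sidesteps the division entirely):

  import numpy as np

  logits = np.array([2.0, 1.0, 0.5])  # hypothetical next-token scores

  def sample(logits, temperature):
      if temperature == 0.0:
          # Greedy decoding: no division by temperature, just argmax.
          return int(np.argmax(logits))
      # Softmax with temperature (shifted by the max for stability).
      z = (logits - logits.max()) / temperature
      probs = np.exp(z) / np.exp(z).sum()
      return int(np.random.choice(len(logits), p=probs))

  print(sample(logits, 0.0))  # always token 0
  print(sample(logits, 1.0))  # usually token 0, sometimes 1 or 2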


Floating point math isn't associative for operations that are associative in normal math.


That would just add up to statistical noise instead of 10% degradation over a week.


Catastrophic error accumulation can produce more profound effects than noise.


Just to make sure I got this right. They serve millions of requests a day & somehow catastrophic error accumulation is what is causing the 10% degradation & no one at Anthropic is noticing it. Is that the theory?


FYI something in that region happened last August/September. Some inference bug triggered worse performance on TPUs vs GPUs.

There are a million ways to make LLM inference more efficient in exchange for some quality: using a smaller model, using quantized models, using speculative decoding with a more permissive rejection threshold, etc.
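
For instance, here's a sketch of the standard speculative-decoding acceptance test with a hypothetical "lenience" knob standing in for a more permissive rejection threshold (the function and the knob are illustrative, not any provider's actual code):

  import random

  def accept_draft_token(p_target, q_draft, lenience=1.0):
      # p_target: probability the big model assigns to the drafted token
      # q_draft:  probability the small draft model assigned to it
      # lenience: 1.0 is the standard lossless rule; > 1.0 accepts more
      #           draft tokens, trading output quality for speed.
      accept_prob = min(1.0, lenience * p_target / q_draft)
      return random.random() < accept_prob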


It takes a different code path for efficiency.

e.g.

  if batch_size > 1024:
      kernel_x()
  else:
      kernel_y()


The primary (non-malicious, non-stupid) explanation given here is batching. But I think if you looked at large-scale inference you would find the batch sizes being run on any given rig are fairly static - there is a sweet spot for any given model part run individually, between memory consumption and GPU utilization, and generally GPUs do badly at job parallelism.

I think the more likely explanation is again with the extremely heterogeneous compute platforms they run on.


That's why I'd love to get stats on load/hardware/location of where my inference is running. Looking at you, Trainium.


Why do you think batching has anything to do with the model getting dumber? Do you know what batching means?


Well if you were to read the link you might just find out! Today is your chance to be less dumb than the model!


I checked the link, and it never says that the model's predictions get lower quality due to batching, just nondeterministic. I don't understand why people conflate these things. Also, it's unlikely that they use smaller batch sizes when load is lower. They likely just spin up and down GPU servers based on demand, or more likely, reallocate servers and GPUs between different roles and tasks.


It's very clearly a cost tradeoff that they control and that should be measured.


I'd argue that whether you want to include it depends on how that degradation manifests.

Consider two scenarios: (1) degradation leads to the model being routed behind the scenes to a different server, with subtly different performance characteristics, all unbeknownst to the user; (2) degradation leads to the model refusing a request and returning an "overloaded" message.

In the first case, absolutely you want to include that because that's the kind of lack of transparency about performance that you'd want signal on. In the second case, an automated test harness might fail, but in the real world the user will just wait and retry when the server is under less load. Maybe you don't include that because it's actually misleading to say that performance (in terms of the model's intelligence, which is how the benchmark will be interpreted) is worse.


noob question: why would increased demand result in decreased intelligence?


An operator at load capacity can either refuse requests, or move the knobs (quantization, thinking time) so requests process faster. Both of those things make customers unhappy, but only one is obvious.


This is intentional? I think delivering lower quality than what was advertised and benchmarked is borderline fraud, but YMMV.


Per Anthropic's RCA, linked in OP's post, about the September 2025 issues:

“… To state it plainly: We never reduce model quality due to demand, time of day, or server load. …”

So according to Anthropic they are not tweaking quality setting due to demand.


And according to Google, they always delete data if requested.

And according to Meta, they always give you ALL the data they have on you when requested.


>And according to Google, they always delete data if requested.

However, the request form is on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard'.


What would you like?


An SLA-style contractually binding agreement.


I bet this is available in large enterprise agreements. How much are you willing to pay for it?


Priced in.


I guess I just don't know how to square that with my actual experiences then.

I've seen sporadic drops in reasoning skills that made me feel like it was January 2025, not 2026 ... inconsistent.


LLMs sample the next token from a conditional probability distribution, the hope is that dumb sequences are less probable but they will just happen naturally.


Funny how those probabilities consistently shift at 2pm UK time when all the Americans come online...


It's more like the choice between "the" and "a" than "yes" and "no".


I wouldn't doubt that these companies would deliberately degrade performance to manage load, but it's also true that humans are notoriously terrible at identifying random distributions, even with something as simple as a coin flip. It's very possible that what you view as degradation is just "bad RNG".


yep stochastic fantastic

these things are by definition hard to reason about


That's about model quality. Nothing about output quality.


That's what is called an "overly specific denial". It sounds more palatable if you say "we deployed a newly quantized model of Opus and here are cherry-picked benchmarks to show it's the same", and even that they don't announce publicly.


Personally, I'd rather get queued up with a longer wait time. I mean, not ridiculously long, but I am OK waiting five minutes to get correct, or at least more correct, responses.

Sure, I'll take a cup of coffee while I wait (:


i’d wait any amount of time lol.

at least i would KNOW it’s overloaded and i should use a different model, try again later, or just skip AI assistance for the task altogether.


They don't advertise a certain quality. You take what they have or leave it.


> I think delivering lower quality than what was advertised and benchmarked is borderline fraud

welcome to Silicon Valley, I guess. everything from Google Search to Uber is fraud. Uber is a classic example of this playbook, even.


If there's no way to check, then how can you claim it's fraud? :)


There is no level of quality advertised, as far as I can see.


What is "level of quality"? Doesn't this apply to any product?


In this case, it is benchmark performance. See the root post.


[flagged]


That number is a sliding window, isn't it?


I'd wager that lower tok/s vs lower quality of output would be two very different knobs to turn.


I've seen some issues with garbage tokens (they seemed to come from a completely different session, mentioned code I've never seen before, repeated lines over and over) during high load. I suspect Anthropic has some threading bugs or race conditions in their caching/inference code that only show up during very high load.


It would happen if they quietly decide to serve up more aggressively distilled / quantised / smaller models when under load.


Or just reducing the reasoning tokens.


They advertise the Opus 4.5 model. Secretly substituting a cheaper one to save costs would be fraud.


If you use the API, you pay for a specific model, yes, but even then there are "workarounds" for them, such as, as someone else pointed out, reducing the amount of time they let it "think".

If you use the subscriptions, the terms specifically says that beyond the caps they can limit your "model and feature usage, at our discretion".


Sure. I was separating the model - which Anthropic promises not to downgrade - and the "thinking time" - which Anthropic doesn't promise not to downgrade. It seems the latter is very likely the culprit in this case.


Old-school Gemini used to do this. It was super obvious because midday the model would go from stupid to completely brain-dead. I have a screenshot of Google's FAQ on my PC from 2024-09-13 that says this (I took it to post to Discord):

> How do I know which model Gemini is using in its responses?

> We believe in using the right model for the right task. We use various models at hand for specific tasks based on what we think will provide the best experience.


> We use various models at hand for specific tasks based on what we think will provide the best experience

... for Google :)


from what I understand this can come from the batching of requests.


So, a known bug?


No, basically the requests are processed in batches, together, and the order they're listed in matters for the results, because the grid of tiles that the GPU ultimately processes is different depending on the order they entered in.

So if you want batching + determinism, you need the same batch with the same order, which obviously doesn't work when there are N+1 clients instead of just one.


Sure, but how can that lead to increased demand resulting in decreased intelligence? That is the effect we are discussing.


Small, subtle errors that are only exposed on certain execution paths could be one. You might place things differently onto the GPU depending on how large the batch is, if you've found one layout to be faster when batch_size < 1024 but another when batch_size > 1024. As the number of concurrent incoming requests goes up, you increase batch_size. That's just one possibility; I guess there could be a multitude of reasons, as it's really hard to reason about until you sit with the data in front of you. vLLM has had bugs with this sort of thing too, so it wouldn't surprise me.
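
A toy illustration of the mechanism in pure Python, where the chunk size stands in for a hypothetical batch-size-dependent kernel choice:

  import random

  # The same logical sum, reduced with different chunk sizes, mimicking
  # a kernel that tiles the work differently above some batch size.
  random.seed(0)
  values = [random.uniform(-1e6, 1e6) for _ in range(100_000)]

  def chunked_sum(xs, chunk):
      partials = [sum(xs[i:i + chunk]) for i in range(0, len(xs), chunk)]
      return sum(partials)

  small_batch_path = chunked_sum(values, 256)    # "kernel_y"
  large_batch_path = chunked_sum(values, 4096)   # "kernel_x"

  print(small_batch_path == large_batch_path)        # typically False
  print(abs(small_batch_path - large_batch_path))    # tiny, but nonzero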


Wouldn't you think that was as likely to increase as decrease intelligence, so average to nil in the benchmarks?


No, I'm not sure how that'd make sense. Either you're making the correct (expected) calculations, or you're getting it wrong. Depending on the type of wrong, or how wrong, it could go from "used #2 in attention instead of #1", so "blue" instead of "Blue" or whatever, to completely incoherent text and garbled output.


I accept errors are more likely to decrease "intelligence". But I don't see how increased load, through batching, is any more likely to increase than decrease errors.


I've personally witnessed large variability in behaviour even within a given session -- which makes sense as there's nothing stopping Anthropic from shuttling your context/session around load balanced through many different servers, some of which might be quantized heavily to manage load and others not at all.

I don't know if they do this or not, but the nature of the API is such you could absolutely load balance this way. The context sent at each point is not I believe "sticky" to any server.

TLDR you could get a "stupid" response and then a "smart" response within a single session because of heterogeneous quantization / model behaviour in the cluster.


I've defended Opus in the last few weeks but the degradation is tangible. It feels like it degraded by a generation tbh.


it's just extremely variable


Hope you don't mind the unrelated question:

How do you pay for those SWE-bench runs?

I am trying to run a benchmark but it is too expensive to run enough runs to get a fair comparison.

https://mafia-arena.com


Benchmarks can get costly to run. You can reach out to frontier model creators to try to get them to give you free credits, but usually they'll only agree to that once your benchmark is pretty popular.


so basically they know requests using your API key should be treated with care?


they could but you can also have some trust in anthropic to have some integrity there, these are earnest people.

"trust but verify" ofc . https://latent.space/p/artificialanalysis do api keys but also mystery shopper checks


> these are earnest people.

I agree.

I'll also add that when my startup got acquired into a very large, well-known valley giant with a sterling rep for integrity and I ended up as a senior executive - over time I got a first-hand education on the myriad ways genuinely well-intentioned people can still end up being the responsible party(s) presiding over a system doing net-wrong things. All with no individual ever meaning to or even consciously knowing.

It's hard to explain and I probably wouldn't have believed myself before I saw and experienced it. Standing against an overwhelming organizational tide is stressful and never leads to popularity or promotion. I think I probably managed to move on before directly compromising myself but preventing that required constant vigilance and led to some inter-personal and 'official' friction. And, frankly, I'm not really sure. It's entirely possible I bear direct moral responsibility for a few things I believe no good person would do as an exec in a good company.

That's the key take-away which took me a while to process and internalize. In a genuinely good organization with genuinely good people, it's not "good people get pressured by constraints and tempted by extreme incentives, then eventually slip". I still talk with friends who are senior execs there and sometimes they want to talk about whether something is net good or bad. I kind of dread the conversation going there because it's inevitably incredibly complex and confusing. Philosopher's trolley car ethics puzzles pale next to these multi-layered, messy conundrums. But who else are they going to vent to who might understand? To be clear, I still believe that company and its leadership to be one of the most moral, ethical and well-intentioned in the valley. I was fortunate to experience the best case scenario.

Bottom line: if you believe earnest, good people being in charge is a reliable defense against the organization doing systemically net-wrong things - you don't comprehend the totality of the threat environment. And that's okay. Honestly, you're lucky. Because the reality is infinitely more ambiguously amoral than white hats vs black hats - at the end of the day the best the 'very good people' can manage is some shade of middle gray. The saddest part is that good people still care, so they want to check the shade of their hat but no one can see if it's light enough to at least tell yourself "I did good today."


Someone posted this here the other day and it uses _Demons_ to discuss exactly your point.

https://possessedmachines.com/


Wow. Only one page in and already bookmarked to absorb later. Thanks for the link.


That's why we're setting up adversarial benchmarks to test if they are doing the thing they promised not to do, because we totally trust them.


The last thing a proper benchmark should do is reveal its own API key.


IMO it should need a third party running the LLM anyway. Otherwise the evaluated company could notice they're receiving the same requests daily and discover benchmarking that way.


With the insane valuations and actual revenue at stake, benchmarkers should assume they're assessing in an adversarial environment. Whether from intentional gaming, training to the test, or simply from prioritizing things likely to make results look better, targeting benchmarks will almost certainly happen.

We already know large graphics card manufacturers tuned their drivers to recognize specific gaming benchmarks. Then when that was busted, they implemented detection of benchmarking-like behavior. And the money at stake in consumer gaming was tiny compared to current AI valuations. The cat-and-mouse cycle of measure vs counter-measure won't stop and should be a standard part of developing and administering benchmark services.

Beyond hardening against adversarial gaming, benchmarkers bear a longer-term burden too. Per Goodhart's Law, it's inevitable that good benchmarks will become targets. The challenge is that the industry will increasingly target performing well on leading benchmarks, both because it drives revenue and because it's far clearer than trying to glean from imprecise surveys and fuzzy metrics what helps average users most. To the extent benchmarks become a proxy for reality, they'll bear the burden of continuously re-calibrating their workloads to accurately reflect reality as users' needs evolve.


But that's removing a component that's critical for the test. We as users/benchmark consumers care that the service as provided by Anthropic/OpenAI/Google is consistent over time given the same model/prompt/context.


Might as well have the free tokens, then, especially if it is an open benchmark they are already aware of. If they want to game it they cannot be stopped from doing so when it's on their infra.


That's a good thought I hadn't had, actually.


yes I reached out to them but as you say it's a chicken-and-egg problem.

Thanks!


> I would run on 300 tasks and I'd run the test suite 5 or 10 times per day and average that score.

assume this is because of model costs. anthropic could either throw some credits their way (would be worthwhile to dispel the 80 reddit posts a day about degrading models and quantization) or OP could throw up a donation / tip link


Probably, but with a small sample size like that, they should probably be taking the uncertainty into account, because I wouldn't be surprised if a lot of this variation falls within expected noise.

E.g. binomial proportion confidence intervals.
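
For a sense of scale, a quick sketch (assuming each task is an independent pass/fail trial) of 95% Wilson intervals at n=50 vs n=300:

  from math import sqrt

  def wilson_interval(successes, n, z=1.96):
      # 95% Wilson score interval for a binomial proportion.
      p = successes / n
      denom = 1 + z**2 / n
      center = (p + z**2 / (2 * n)) / denom
      half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
      return center - half, center + half

  # A hypothetical 70% pass rate measured on 50 vs 300 tasks:
  print(wilson_interval(35, 50))    # roughly (0.56, 0.81) -- very wide
  print(wilson_interval(210, 300))  # roughly (0.65, 0.75) -- much tighter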


Then you'd get people claiming that the benchmarks were 'paid for' by anthropic


one thing you learn from being on the internet is that you're never going to satisfy everybody


The degradation may be more significant within the day than at the same time every day.


Sure, but it's still useful insight to see how it performs over time. Of course, cynically, Anthropic could game the benchmark by routing this benchmark's specific prompts to an unadulterated instance of the model.


Sorry what?

"You can't measure my Cloud Service's performance correctly if my servers are overloaded"?

"Oh, you just measured me at bad times each day. On only 50 different queries."

So, what does that mean? I have to pick specific times during the day for Claude to code better?

Does Claude Code have office hours basically?


This has been happening for years. There's a great paper from Microsoft on DeepSpeed inference.

Basically, the paper showed methods for handling heavy traffic load by changing model requirements or routing to different models. This was a while ago and I'm sure it's massively more advanced now.

Also why some of AI's best work for me is early morning and weekends! So yes, the best time to code with modern LLM stacks is when nobody else is. It's also possibly why we go through phases of "they neutered the model" some time after a new release.


I wonder if my great experiences with Claude are partly due to the fact that my working hours don't overlap with the US west coast.


chill out, ofir does not work for anthropic. he's just saying there's inherent variability in LLMs and you need to at least 30x the samples that OP is doing in order to make any form of statistically significant conclusions.


[flagged]


Verily, my vichyssoise of verbiage veers most verbose, so let me run that thing out of tokens fast.


According to Anthropic: "We never reduce model quality due to demand, time of day, or server load."

https://www.anthropic.com/engineering/a-postmortem-of-three-...


They've had issues before with things like "TPU top-k error - Claude sometimes dropped the best next token" (https://www.anthropic.com/engineering/a-postmortem-of-three-...) so what's going on might not even be intentional.


That issue did not have any time of day dependence


Still relevant over time.


> Lots of variance in the score can come from random stuff like even Anthropic's servers being overloaded.

Are you suggesting result accuracy varies with server load?


"Lots of variance in the score can come from random stuff like even Anthropic's servers being overloaded"

Aha, so the models do degrade under load.


Agreed, this benchmark would be much more useful run multiple times a day. That could reveal degradation in line with load patterns.


For CC, I suspect it also needs to test and label separate runs against subscription, public API, and Bedrock-served models?

It’s a terrific idea to provide this. ~Isitdownorisitjustme for LLMs would be the parakeet in the coalmine that could at least inform the multitude of discussion threads about suspected dips in performance (beyond HN).

What we could also use is similar stuff for Codex, and eventually Gemini.

Really, the providers themselves should be running these tests and publishing the data.

The availability status information is no longer sufficient to gauge the service delivery because it is by nature non-deterministic.


i recall another project here on HN maybe 4-6 months ago that would run tests 4x a day or something. not sure how to find them again


Why should users care about Anthropic's servers being overloaded?


We (the SWE-bench team) have a 100 line of code agent that is now pretty popular in both academic and industry labs: https://github.com/SWE-agent/mini-swe-agent

I think it's a great way to dive into the agent world


As John says in that thread, we've fixed this issue in SWE-bench: https://xcancel.com/jyangballin/status/2006987724637757670

If you run SWE-bench evals, just make sure to use the most up-to-date code from our repo and the updated docker images


> There are certain tasks, like improving a given program for speed, for instance, where in theory the model can continue to make progress with a very clear reward signal for a very long time.

Yup, this will absolutely be a big driver of gains in AI for coding in the near future. We actually built a benchmark based on this exact principle: https://algotune.io/


[I'm on the SWE-bench team] Multiple people have looked into this, for example right in that thread: https://github.com/SWE-bench/SWE-bench/issues/465#issuecomme...

This issue had affected a tiny fraction of existing agents in a tiny fraction of their runs. And we've now issued a fix.

This is a natural part of running a benchmark, I'm sure tiny things like this will keep on getting discovered and we'll keep on fixing them. This doesn't change the overall picture or trends at all.


The comment you link to says that "we only performed a quick preliminary search" and "We do not have a method for automatically checking existing trajectories." In other words, it can't confirm that the issue only "affected a tiny fraction of existing agents in a tiny fraction of their runs" as you say. Are you saying that you have since separately confirmed this?

Edit: That said, I’m willing to believe based on the information in the thread that this most likely only affects a tiny fraction of runs.


Ya what he links directly contradicts what he's saying lol


[flagged]


If you are going to represent your team in public, you owe them better than a response like this.


This is contingent on whether SWE N-class frontier models can do deep packet inspection.


I say let them cook.


Hol up


Unfortunately the bank account trajectories are not public, because unscrupulous corporations such as FAANG, who let thousands of engineers wade through my chat messages on their platforms, might not shy away from bribing academics to improve benchmarks of their billion-dollar AI initiatives.

It's also a bribe if my sibling gets a job with $500k annual salary. Tech is not immune to it.


You realize that this problem in SWE-Bench was discovered and publicized by people within those FAANG corporations?


I'm sure some of the people working at Theranos thought there legitimately was a revolutionary blood-test machine.

The presence of a person who wants SWE-bench to have honest results and takes it seriously does not mean the results are free of perverse incentives, nor that everyone is behaving just as honestly.


When SWE-bench was new in 2023, it was — with all due respect — a bit of a niche benchmark in LLM research. LLMs were so incredibly useless at solving these tasks that I think you could find a bit more empathy for the original academic authors. I don't think the Theranos example applies. Even the flawed benchmark was good enough to get us from ~GPT-4 to Claude 4's coding ability.


That sounds like the job of the person making the claim.


They really did a "trust me bro" and "do your own research" huh


the strange thing to me is that people would have it any other way. if you don't trust someone, why would you trust them to do the research for you? bit of entitlement if you ask me


Because you should never just 'trust' random 'research'. Good analysis in this case will clearly explain the problem, the analysis methodology, findings, net effects, resolution, etc. Something you can read, and decide for yourself whether it is complete/incomplete, has holes, contradictions, etc. Not 'we looked into it and all is good - only potentially tiny effect' (no actual data or methodology presented at all) and then linking to a comment directly contradicting the claim...

It's a hilariously unserious and untrustworthy response.


That's silly. If they show their work I won't have to trust them. Compare answering "The answer is 5, just compute it yourself." on a math test, vs. actually showing the calculation. The former clearly implies the person doesn't know what they're talking about.


Arguably the initial post was meant to convey confidence and authority on the subject. When questioned you could either dive deeper and explain in more detail why x because of y (if so inclined), ignore it, or... do what they did.

No one owes anyone anything, but if you want to represent something; answering the question more in detail would have either closed the issue or raised more scrutiny, both of which are a good thing when trying to figure something out.

I don't have to trust someone to check their research and look at how they worked. If the work doesn't pass muster, likely the results don't either. Again, you can view it as entitlement, but if you're not going to bother backing up your claim, why make the claim to start with?


It's not that people are entitled. It's that "do your own research" is usually a cop out when you yourself don't understand the answer or are hiding it


Are you saying you've done way more than a cursory search and ruled out everything?


Even if this bug never existed, models can still see lookahead commits during pretraining. Do we expect this bug to have a greater impact than the pretraining leakage?

Obviously having something available during test time is more valuable than buried somewhere in the pretraining mixture. But in pretraining it happens presumably with high probability (why wouldn't coding models pretrain on the entire github), while in test time it apparently happened only very occasionally?


> This is a natural part of running a benchmark, I'm sure tiny things like this will keep on getting discovered and we'll keep on fixing them.

You're all extremely clever and I can't seem to understand how you missed thinking about such a simple edge case. It's like building a chroot and then allowing `cd ..` to break out of it. What other maybe extremely basic edge cases were missed?

> This doesn't change the overall picture or trends at all.

Outsiders without financial benefits from the current AI hype might have a different picture. And I'm a bit fed up with AI's fake productivity promises enshittifying nearly all user-facing software that my clients and I are using, bundled with hefty price hikes from Microsoft and the like in order to pay for their "investments".


I'm also on the SWE-bench team. This was simply a classic bug. We had code before that we believed was sufficient to hide / remove future GitHub history and it turns out it was not. We've patched it.


Your classic bug is being used as justification to destroy the careers and lives of tens of thousands of people. Read the room.


[Also on the SWE-bench team] Part of the reason why this didn't surface earlier was that it only seems to affect more recent models, maybe the result of reward hacking during post-training. We're currently working on making trajectories easier to access for everyone through a web tool (rather than having to download things from AWS) to get even more eyes on the trajectories. The interface will also include search & LM inspection tools to specifically look for anything that might qualify as cheating.


> other maybe extremely basic edge cases were missed?

The whole testing enterprise is kind of stupid. Pray tell, if their stupid little benchmark said, "this niche little smaller model performs the best" would anyone listen to it? No.

The thing that is fucked about benchmarks is that we only pay attention to the ones that match these vibes: "The latest models from the biggest companies should perform the best." That's why they are stupid. They could be the most brilliantly administered (they're not), nail execution (they don't), but it still has to confirm vibes.

And listen these guys are serious academics, they're very smart people, but on the other hand, you know, I'm still right. The team doesn't have a secular, objective explanation for why nobody talks about benchmarks that don't confirm the biases of the public for what should perform well. Three people are commenting on just this post alone, but the stuff that I am saying: crickets.

The only reasonable explanation for "why do people ignore [LLM tests that show that some non-giant corporation LLM is the best]?" trades on cultural and humanities stuff that are outside their expertise. They don't see that the stuff the humanities people are saying generalizes to what they do. That would be too inconvenient. Every testing system suffers from this bias anomaly, it's just easier to talk about this with something secular like LLMs compared to say, tests of children.

They hear biases and they're like, "something something, Algorithmic Justice League." Their brains turn off and they think that until someone gets in front of Congress and points a finger, nothing in the humanities applies to them. Wrong. The Princeton lab has probably met with a lot of humanities people, and there was a lot of head shaking and agreement, but it's not like, something that tells them that their whole enterprise doesn't make sense makes them stop and pursue anything else. It's just in one ear and out the other.

Doing free tests for giant corporations to market their shit, and then toiling away in obscurity when the tests do not market huge corporation's shit: it doesn't make sense period. But that's what they're doing.

If you need a simple theory for how Big LLM performs so well on SWE-Bench, it's as simple as: well they've seen the questions by running them, obviously, and someone has also tested the questions in their own personal chatbot sessions sometime in the past, and these are online systems, and OpenAI, Anthropic and Google run ETL pipelines that paraphrase user data for salient inputs to train on, so of course, they've all been trained on the test set. In reality, if these things were so fucking good as SWE Bench said, they'd be making a bajillion bucks making all this enterprise software, or they'd show even 1 novel math discovery, or whatever. But they do not have something as powerful as the benchmarks say, so that doesn't happen.


> You're all extremely clever and I can't seem to understand how you missed thinking about such a simple edge case [...]

I wouldn't be surprised if they left this loophole on purpose to give some (their?) agents extra leverage.

Edit #1: I didn't mean to imply bad intent; just thinking out loud.

Edit #2: Please, downvote responsibly. I deserve every one. https://www.youtube.com/watch?v=0FHEeG_uq5Y


> I didn't mean to imply bad intent

> I wouldn't be surprised if they left this loophole on purpose

You didn't imply bad intent, you outright suggested it.


He means he doesn't say it was necessarily bad intent, but mentions it as a possibility ("thinking out loud").


Thinking out loud isn't a free pass to say stuff without consequences. Sure we are all protected under free speech, but free speech doesn't remove the meaning and the impact words have in the world.


I could've phrased it better.


You could rewrite it a thousand times; if the underlying idea is the same, suggesting something you don't know is true, the outcome would be the same. Or did you mean something else? What was your intention with the message?


I meant it as a hint for anyone inclined to dig deeper. It's a possibility rather than something we can confidently dismiss.


If it's a possibility and you don't want to dig deeper, it's better to sit it out and not comment at all, lest you risk defamation.

Thinking out loud also doesn't make defamation acceptable.


"It's probably not X, but we should consider X as we look at this." and "I feel like this might be X but I'm 50:50 on it." are not anywhere near defamation. You have to get a lot closer to certainty before it's an issue.

And listing out "a possibility but you don't want to dig deeper" is often a good contribution to a conversation.

In this case they worded it badly, but the basic idea of the comment isn't awful.


That someone in the team might not have done it on purpose, but left it for convenience? How does that benefit the debate? I really fail to see any silver lining in doing such speculative comments without any substance whatsoever to back it up.


It's fine, this is an American site so JAQing is in fact safe under free speech.

You're welcome to ask "would none rid me of this meddlesome priest" with no fear


And I'm protected under free speech to try to educate people about good manners, so it's fine too.


never attribute something to malice which can be attributed to incompetence. Basically, this has been utilized plenty of times by some really smart folk to get what they want.


We absolutely did not.


Of course that's what a team that did it on purpose would also say :)


SGTM. The transparency is good.


#tiny


Reward hacking is a thing and is also a hint of the model's intelligence. We will fix this one, and the models will find a different way to reward hack in the future. "Cheating" is a sign of intelligence.


I love the "cheating is a sign of intelligence" sound bite you provided. When AI engineers cheat we should applaud their intelligence and their lack of ethics.

"Cheating (biology), a metaphor used in behavioral ecology to describe organisms that receive a benefit at the cost of other organisms" [1]

Whole planet gets their Microsoft license fees jacked up so Microsoft can pay OpenAI who in turn pays NVIDIA, and nontechnical decision makers slurping up the faked benchmarks and AI promises.

[1] https://en.wikipedia.org/wiki/Cheating_(disambiguation)


would it have been better if I had called it a "shortcut" instead of cheating? All shortcuts are called cheating until people decide on their fairness. The AI was given a task to fix a bug; the AI figured out that looking at another PR might yield a solution. If a human had done so, it would clearly be called cheating. Does the AI know that it's cheating? Was it prompted to solve it without cheating? If you give an AI access to the internet and quiz it, it will use info from the net to answer. Does that really skew its score? Is it cheating? Is it a sign of intelligence? Sure, I think all of those.

https://en.wikipedia.org/wiki/Reward_hacking


Is it wrong? Aren't ethics and intelligence two different axes?


Different, but probably not as orthogonal as one might think.

E.g. cooperative ethics has been necessary for the further development of human populations' intelligence (and the culture, technology, material wealth, nutrition, etc. that lead to further increases in intelligence).

So a lack of ethics might be a sign of intelligence, but it's also a parasitic intelligence that benefits the individual, and beyond a certain level and spread it works to the detriment of the further evolutionary development of the species.


Aren't there only two rules that all groups follow in the animal kingdom?

- don't lie too often

- don't kill members of the in-group

Seems like these would be required for any group to survive, which makes sense why they are universal. All other rules/ethics seem to be dependent on resource scarcity.


Groups don't follow rules as such; group behaviours emerge from the interaction of individual behaviours.

As to whether all groups display those rules - I suspect not - though it rather does depend on how you define a group; the definition of a group probably has some sort of collaboration built in (as opposed to a bunch of individuals that happen to live in the same geographic area).


>All other rules/ethics seem to be dependent on resource scarcity

That doesn't make the rest of the ethics (as a rule and mechanism) any less useful to help nurture the species and its intelligence.

It just makes them not absolute but dynamic and condition dependent. But given a condition (e.g. resource scarcity) the appropriate ethics retain the utility we talk about.


We (the Princeton SWE-bench team) have a 100 line of code agent that does pretty well, you can read the code here: https://github.com/SWE-agent/mini-swe-agent


We (the Princeton SWE-bench team) built an agent in ~100 lines of code that does pretty well on SWE-bench, you might enjoy it too: https://github.com/SWE-agent/mini-swe-agent


OK that really is pretty simple, thanks for sharing.

The whole thing runs on these prompts: https://github.com/SWE-agent/mini-swe-agent/blob/7e125e5dd49...

  Your task: {{task}}. Please reply
  with a single shell command in
  triple backticks.
  
  To finish, the first line of the
  output of the shell command must be
  'COMPLETE_TASK_AND_SUBMIT_FINAL_OUTPUT'.


Pretty sure you also need about 120 lines of prompting from default.yaml

https://github.com/SWE-agent/mini-swe-agent/blob/7e125e5dd49...


  system_template: str = "You are a helpful assistant that can do anything."
anything? Sounds like an AI Safety issue ;)


You’d be surprised at the amount of time wasted because LLMs “think” they can’t do something. You’d be less surprised that they often “think” they can’t do something, but choose some straight ignorant path that cannot work.

There are theoretically impossible things to do, if you buy into only the basics. If you open your mind, anything is achievable; you just need to break out of the box you’re in.

If enough people keep feeding in that we need a time machine, the revolution will play out in all the timelines. Without it, Sarah Connor is lost.


I'm already surprised by the amount of things they think they can do but can't


> 1. Analyze the codebase by finding and reading relevant files 2. Create a script to reproduce the issue 3. Edit the source code to resolve the issue 4. Verify your fix works by running your script again 5. Test edge cases to ensure your fix is robust

This prompt snippet from your instance template is quite useful. I use something like this for getting out of debug loops:

> Analyse the codebase and brainstorm a list of potential root causes for the issue, and rank them from most likely to least likely.

Then create scripts or add debug logging to confirm whether your hypothesis is correct. Rule out root causes from most likely to least by executing your scripts and observing the output in order of likelihood.


Does this mean it's only useful for issue fixes?


A feature is just an issue. The issue is that the feature isn't complete yet.


> 2. Create a script to reproduce the issue

Surely that would send it a bit off the rails to implement a feature?


Sounds like an acceptance test to me!


True. I guess I should actually try it out :)


when a problem is entirely self-contained in a file, it's very easy to edit it with an LLM.

that's not the case with a codebase, where things are littered around in tune with specific model of organisation the developer had in mind.



> in tune with specific model of organisation

You wish


Nice but sad to see lack of tools. Most of your code is about the agent framework instead of being specific to SWE.

I've built a SWE agent too (for fun), check it out => https://github.com/myriade-ai/autocode


> sad to see lack of tools.

Lack of tools in mini-swe-agent is a feature. You can run it with any LLM no matter how big or small.


I'm trying to understand: what does it have to do with LLM size? Imho, right tools allow small models to perform better than undirected tool like bash to do everything. But I understand that this code is to show people how function calling is just a template for an LLM.


Mini-swe-agent, as an academic tool, is aimed at showing the power of a simple idea and can easily be tested against any LLM. You can go and test it with different LLMs. Tool calls usually didn't work well with smaller LLMs. I don't see many viable alternatives under 7GB beyond Qwen3 4B for tool calling.

> right tools allow small models to perform better than undirected tool like bash to do everything.

Interestingly enough, the newer mini-swe-agent was a refutation of this hypothesis for very large LLMs; the original SWE-agent paper (https://arxiv.org/pdf/2405.15793) assumed that specialized tools work better.


Thanks for your answer.

I guess that it's only a matter of finetuning.

LLMs have lots of experience with bash, so I get that they figure out how to work with it. They don't have experience with custom tools you provide.

And also, LLM "tools" as we know them need better design (to show state, dynamic actions).

Given both, AI with the right tools will outperform AI with a generic, uncontrolled tool.


Totally understandable. A general coding agent is 95% the model.


What sort of results have you had from running it on its own codebase?


cheers i'll add it in.


[I'm one of the co-creators of SWE-bench] The team managed to improve on the already very strong o3 results on SWE-bench, but it's interesting that we're just seeing an improvement of a few percentage points. I wonder if getting to 85% from 75% on Verified is going to take as long as it took to get from 20% to 75%.


I can be completely off base, but it feels to me like benchmaxxing is going on with swe-bench.

Look at the results from multi swe bench - https://multi-swe-bench.github.io/#/

swe polybench - https://amazon-science.github.io/SWE-PolyBench/

Kotlin bench - https://firebender.com/leaderboard


I kind of had the feeling LLMs would be better at Python vs other languages, but wow, the difference on Multi SWE is pretty crazy.


Maybe a lot of the difference we see between people's comments about how useful AI is for their coding is a function of what language they're using. Python coders may love it, Go coders not much at all.


Not sure what you mean by benchmaxxing but we think there's still a lot of useful signals you can infer from SWE-bench-style benchmarking.

We also have SWE-bench Multimodal which adds a twist I haven't seen elsewhere: https://www.swebench.com/multimodal.html


I mean that there is the possibility that swe bench is being specifically targeted for training and the results may not reflect real world performance.


How long did it take to go from 20% to 75%?




Indeed a bitter lesson. I once enjoyed encoding human knowledge into a computer because it gave me an understanding of what's going on. Now everything is becoming a big black box that is hard to reason about. /sigh/

Also, Moore's law has become a self-fulfilling prophecy. Now more than ever, AI is putting a lot of demand on computational power, to the point which drives chip makers to create specialized hardware for it. It's becoming a flywheel.


I am still hoping AI progress will get to the point where the AI can eventually create AI's that are built up out of robust and provable logic which can be read and audited. Until that time, I wouldn't trust it for risky stuff. Unfortunately, it's not my choice and within a scarily short timespan, black boxes will make painfully wrong decisions about vital things that will ruin lives.


AI assisted theorem provers will go a bit in that direction. You may not know exactly how they managed to construct a proof, but you can examine that proof in detail and verify its correctness.


Yes, I have a small team (me being 1/3 of it) doing formal verification in my company, and we do this. It doesn't actually matter how the AI got there; we can mathematically say it's correct, which is what matters. We do (and did) program synthesis and proofs, but this is all very far from doing anything serious at scale.


What kind of company needs formal verification? Real time systems?


Companies designing digital circuits use it all the time.

Say you have a module written in VHDL or Verilog and it is passing regressions and everyone is happy. But as the author, you know the code is kind of a mess and you want to refactor the logic. Yes, you can make your edits and then run a few thousand directed tests and random regressions and hope that any error you might have made will be detected. Or you can use formal verification and prove that the two versions of your source code are functionally identical. And the kicker is it often takes minutes to formally prove it, vs hundreds to thousands of CPU hours to run a regression suite.
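
As a toy analogue of that kind of equivalence check, here is a sketch using the Z3 SMT solver from Python (real RTL equivalence checkers are far more sophisticated, but the shape is the same: ask for any input where the two versions disagree):

  from z3 import BitVec, Solver, sat

  # Two "implementations" of the same function: multiply by 8 vs shift by 3.
  x = BitVec("x", 32)
  original = x * 8
  refactored = x << 3

  # unsat means no counterexample exists: provably equivalent for all inputs.
  s = Solver()
  s.add(original != refactored)
  if s.check() == sat:
      print("counterexample:", s.model())
  else:
      print("provably equivalent for all 32-bit inputs")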

At some point the source code is mapped from a RTL language to gates, and later those gates get mapped to a mask set. The software to do that is complex and can have bugs. The fix is to extract the netlist from the masks and then formally verify that the extracted netlist matches the original RTL source code.

If your code has assertions (and it should), formal verification can be used to find counter examples that disprove the assertion.

But there are limitations. Often logic is too complex and the proof is bounded: it can show that from some initial state no counter example can be found in, say, 18 cycles, but there might be a bug that takes at least 20 cycles to expose. Or it might find counter examples and you find it arises only in illegal situations, so you have to manually add constraints to tell it which input sequences are legal (which often requires modeling the behavior of the module, and that itself can have bugs...).

The formal verifiers that I'm familiar with are really a collection of heuristic algorithms and a driver which tries various approaches for a certain amount of time before switching to a different algorithm to see if that one can crack the nut. Often, when a certain part of the design can be proven equivalent, it aids in making further progress, so it is an iterative thing, not a simple "try each one in turn". The frustrating thing is you can run formal on a module and it will prove there are no violations with a bounded depth of, say, 32 cycles. A week later a new release of your formal tool comes out with bug fixes and enhancements. Great! And now that module might have a proof depth of 22 cycles, even though nothing changed in the design.


Real time / embedded / etc for money handling, healthcare, aviation/transport... And 'needs' is a loaded term; the biggest $ contributors to formal verification progress are blockchain companies these days while a lot of critical systems are badly written, outsourced things that barely have tests.

My worst fear, which is happening because it works-ish, is vague/fuzzy systems being the software because it's so like humans and we don't have anything else. It's a terrible idea, but of course we are in a hurry.


>AI can eventually create AI's that are built up out of robust and provable logic

That's the approach behind Max Tegmark and Steven Omohundro's "Provably Safe AGI":

https://arxiv.org/abs/2309.01933

https://www.youtube.com/watch?v=YhMwkk6uOK8

However, there are issues. How do you even begin to formalize concepts like human well-being?


> However there are issues. How do you even begin to formalize concepts like human well-being?

Oh agreed! But with AI we might(!) have the luxury of creating different types of brains; logically correct brains for space flight, building structures (or at least the calculations), taxes, accounting, physics, math, etc., and brains with feelings for many other things. Have those cooperate.

ps. thanks for the links!


The only problem is that "logical correctness" depends on the limits of human brain too: formal logic is based on the usual pre-accepted assumptions and definitions ("axioms").

This is what I consider the limit of the human mind: we have to start with a few assumptions we can't "prove" to build even a formal logic system which we then use to build all the other provably correct systems, but we still add other axioms to make them work.

It's hard for me to even think how AI can help with that.


Quis custodiet ipsos custodes?

https://en.m.wikipedia.org/wiki/Quis_custodiet_ipsos_custode...

excerpt of the first few paragraphs, sorry about any wrong formatting, links becoming plain text, etc. just pasted it as is:

Quis custodiet ipsos custodes? is a Latin phrase found in the Satires (Satire VI, lines 347–348), a work of the 1st–2nd century Roman poet Juvenal. It may be translated as "Who will guard the guards themselves?" or "Who will watch the watchmen?".

The original context deals with the problem of ensuring marital fidelity, though the phrase is now commonly used more generally to refer to the problem of controlling the actions of persons in positions of power, an issue discussed by Plato in the Republic.[citation needed] It is not clear whether the phrase was written by Juvenal, or whether the passage in which it appears was interpolated into his works. Original context

The phrase, as it is normally quoted in Latin, comes from the Satires of Juvenal, the 1st–2nd century Roman satirist. Although in its modern usage the phrase has wide-reaching applications to concepts such as tyrannical governments, uncontrollably oppressive dictatorships, and police or judicial corruption and overreach, in context within Juvenal's poem it refers to the impossibility of enforcing moral behaviour on women when the enforcers (custodes) are corruptible (Satire 6, 346–348):

audio quid ueteres olim moneatis amici, "pone seram, cohibe." sed quis custodiet ipsos custodes? cauta est et ab illis incipit uxor.

I hear always the admonishment of my friends: "Bolt her in, constrain her!" But who will watch the watchmen? The wife plans ahead and begins with them!


Apologies for taking the phrase in a slightly farcical (& incurious ?) direction:

   Who will take custody of the custodians?


#!/usr/bin/badlatininterpreter

no comprendere tu commentum

but

apologia unneeded est


"Take custody" => infantilize, as of children => handling people with power like children => copium, wankery

Apologia not uh in the realm of consideration, marginally insightful because shitty latin marginally enjoyable


Well, take compiler optimization for example. You can allow your AI to use correctness-preserving transformations only. This will give you correct output no matter how weird the AI behaves.

The downside is that you will sometimes not get the optimizations that you want. But this is sort of already the case, even with human-made optimization algorithms.
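A minimal sketch of that idea in Python (hypothetical names like fold_constants, not any real compiler's API): the AI only gets to choose among transformations that are already proven to preserve semantics, so a misbehaving policy can at worst miss optimizations, never break correctness.

    # Each entry is assumed to be a verified, semantics-preserving rewrite.
    VERIFIED_REWRITES = {
        "constant_fold":  lambda ir: ir.fold_constants(),
        "dead_code_elim": lambda ir: ir.remove_dead_code(),
        "unroll_2x":      lambda ir: ir.unroll_loops(factor=2),
    }

    def optimize(ir, policy, budget=50):
        """`policy` is the (possibly unreliable) AI picking the next rewrite."""
        for _ in range(budget):
            choice = policy.pick(ir, sorted(VERIFIED_REWRITES))
            if choice not in VERIFIED_REWRITES:   # anything off-menu is ignored
                break
            ir = VERIFIED_REWRITES[choice](ir)    # each step preserves semantics
        return ir

Whether the AI picks a great sequence of rewrites or a terrible one only affects how fast the output runs, not whether it is correct.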


This depends a little bit on what the goal of AI research is. If it is (and it might well be) to build machines that excel at tasks previously thought to be exclusively reserved to, or needing to involve, the human mind, then these bitter lessons are indeed worthwhile.

But if you do AI research with the idea that by teaching machines how to do X, we might also be able to gain insight in how people do X, then ever more complex statistical setups will be of limited information.

Note that I'm not taking either point of view here. I just want to point out that perhaps a more nuanced approach might be called for here.


> if you do AI research with the idea that by teaching machines how to do X, we might also be able to gain insight in how people do X, then ever more complex statistical setups will be of limited information

At the very least, we know consistent language and vision abilities don't require lived experience. That is huge in itself; it was unexpected.


> At the very least we know consistent language and vision abilities don't require lived experience.

I don't think that's true. A good chunk of the progress made in recent years has been driven by investing thousands of man-hours asking people "Our LLM failed at answering X. How would you answer this question?". So there's definitely some "lived experience by proxy" going on.


Is that true though given e.g. the hallucinations you regularly get from LLMs?


> In computer vision, there has been a similar pattern. Early methods conceived of vision as searching for edges, or generalized cylinders, or in terms of SIFT features. But today all this is discarded. Modern deep-learning neural networks use only the notions of convolution and certain kinds of invariances, and perform much better.

I was there at the moment when pattern matching for vision started to die. It wasn't completely lost, though; what we learned back then is still useful in other places today.


I was an undergrad interning in a computer vision lab in the early 2010s. During a group meeting, someone presented a new paper that was using abstract machine-learning-like stuff to do vision. The prof was so visibly perturbed and antagonistic. He could not believe that this approach was even a little bit viable, when it so clearly was.

Best lesson for me - vowed never to be the person opposed to new approaches that work.


> Best lesson for me - vowed never to be the person opposed to new approaches that work.

I think you'll be surprised at how hard that is to do. The reason many people end up that way is that (a) they've become an (often recognized) expert in the old approach, and (b) they make significant money (or derive some other benefit) from it.

At the end of the day, when a new approach greatly encroaches into your way of life -- you'll likely push back. Just think about the technology that you feel you derive the most benefit from today. And then think if tomorrow someone created something marginally better at its core task, but for which you no longer reap any of the rewards.


Of course it is difficult, for precisely the reasons you indicate. It's one of those lifetime skills that you have to continuously polish, and if you fall behind it is incredibly hard to recover. But such skills are necessary for being a resilient person.


You are acting like it was obvious that machine learning was the future, but this person was just stubborn. I don't think that was necessarily the case in the early 2010s and skepticism was warranted. If you see results and ignore them, sure that is a problem. But it wasn't until ML vision results really started dominating conferences such as CVPR that it became clear. It's all a tradeoff of exploration/exploitation.


Oof. Imagine the bitter lesson classical NLP practitioners learned. That paper is as true today as ever.


This describes Go AIs as a brute force strategy with no heuristics, which is false as far as I know. Go AIs don't search the entire sample space, they search based on their training data of previous human games.


First there was AlphaGo, which had learnt from human games and then further improved through self-play; then there was AlphaGo Zero, which taught itself from scratch purely by self-play, not using any human data at all.

Game programs like AlphaGo and AlphaZero (chess) are all brute force at core, using MCTS (Monte Carlo Tree Search) to project potential branching game continuations many moves ahead. Where the intelligence/heuristics come into play is in pruning away unpromising branches from this expanding tree to keep the search space under control; this is done by using a board evaluation function to assess the strength of a given board position and decide whether that line of play is worth evaluating further.

In DeepBlue (the old IBM "chess computer" that beat Kasparov) the board evaluation function was hand-written using human chess expertise. In modern neural-net-based engines such as AlphaGo and AlphaZero, the board evaluation function is learnt, either from human games and/or from self-play, by learning which positions lead to winning outcomes.

So, not just brute force, but that (MCTS) is still the core of the algorithm.
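A minimal sketch of that structure in Python, with value_net, legal_moves and apply_move as hypothetical callables supplied by the caller; it uses plain UCT selection and leaves out AlphaZero's policy-network prior:

    import math

    class Node:
        def __init__(self, state):
            self.state = state
            self.children = {}     # move -> Node
            self.visits = 0
            self.value_sum = 0.0

        def value(self):
            return self.value_sum / self.visits if self.visits else 0.0

    def select_child(node, c=1.4):
        # UCT: prefer high-value children, but keep exploring rarely visited ones
        return max(node.children.items(),
                   key=lambda kv: kv[1].value()
                       + c * math.sqrt(math.log(node.visits + 1) / (kv[1].visits + 1)))

    def mcts(root_state, value_net, legal_moves, apply_move, n_simulations=800):
        root = Node(root_state)
        for _ in range(n_simulations):
            node, path = root, [root]
            # 1. selection: walk down the most promising branches
            while node.children:
                _, node = select_child(node)
                path.append(node)
            # 2. expansion: add children for the legal moves at this leaf
            for m in legal_moves(node.state):
                node.children[m] = Node(apply_move(node.state, m))
            # 3. evaluation: a learned value function replaces full rollouts
            v = value_net(node.state)   # from the side-to-move's perspective
            # 4. backup: propagate the score up, flipping sign at each level
            for n in reversed(path):
                n.visits += 1
                n.value_sum += v
                v = -v
        # play the most-visited move at the root
        return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

The search skeleton stays the same whether value_net is hand-written (DeepBlue style) or learnt from self-play; that one function is where almost all of the "intelligence" lives.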


This a somewhat uninteresting matter of semantics, but I think brute force generally refers to exhaustive search. MCTS is not brute force for that very reason (the vast majority of branches are never searched at all).


OK, but I think it's generally understood that exhaustive search is not feasible for games like Chess and Go, so when "brute force" is used in this context it means an emphasis on deep search and the number of positions evaluated, rather than the human approach, where many orders of magnitude fewer positions are evaluated.


I think that kind of erodes the meaning of the phrase. A typical MCTS run for alphazero would evaluate what, like 1024 rollouts? Maybe less? That's a drop in the ocean compared to the number of states available in chess. If you call that brute force then basically everything is.

I've personally viewed well over a hundred thousand rollouts in my training as a chess bot =P


> Game programs like AlphaGo and AlphaZero (chess) are all brute force at core -

What do you call 2500 years of human game play if not brute force? Cultural evolution took 300K years, quite a lot of resources if you ask me.


That 2500 years of game play is reflected in chess theory and book openings, which you might think of as pre-training, as opposed to test-time compute.

A human grandmaster might calculate 20-ply ahead, but only for a very limited number of lines, unlike a computer engine that may evaluate millions of positions for each move.

Pattern matching vs search (brute force) is a trade off in games like Chess and Go, and humans and MCTS-based engines are at opposite ends of the spectrum.


Either you missed an /s or I am very interested to hear you unpack this a little bit. If you are serious, it just turns "brute force" into a kind of empty signifier anyway.

What do you call the attraction of bodies if not love? What is an insect if not a little human?


> ... This describes Go AIs as a brute force strategy with no heuristics ...

no, not really, from the paper

>> Also important was the use of learning by self play to learn a value function (as it was in many other games and even in chess, although learning did not play a big role in the 1997 program that first beat a world champion). Learning by self play, and learning in general, is like search in that it enables massive computation to be brought to bear.

The important notion here is, imho, "learning by self play": the required heuristics emerge out of that; they are not programmed in.


The paragraph on Go AI looked accurate to me. Go AI research spent decades trying to incorporate human-written rules about tactics and strategy. None of that is used any more, although human knowledge is leveraged a bit in the strongest programs when choosing useful features to feed into the neural nets. (Strong) Go AIs are not trained on human games anymore. Indeed they don't search the entire sample space when they perform MCTS, but I don't see Sutton claiming that they do.


I remember the article, and remember how badly it missed the point... The goal of writing a chess program that could beat a world champion wasn't to beat the world champion; the goal was to gain insight into how anyone can play chess well. A victory in that match would've been equivalent to, e.g., drugging Kasparov prior to the match, or putting a gun to his head and telling him to lose: even cheaper and more effective.


"The goal of Automated driving is not to drive automatically but to understand how anyone can drive well"...

The goal of DeepBlue was to beat the human with a machine, nothing more.

While the quest for deeper understanding drives a lot of research, most AI (read: modern DL) research is not about understanding human intelligence, but about automating things we could not do before. (Understanding human intelligence is nowadays a different field.)


Seems like you missed the point too: I'm not talking about DeepBlue, I'm talking about using the game of chess as a "lab rat" in order to understand something more general. DeepBlue was the opposite to the desire of understanding "something more general". It just found a creative way to cheat at chess. Like that Japanese pole jumper (I think he was Japanese, cannot find this atm) who instead of jumping learned how to climb a stationary pole, and, in this way, won a particular contest.

> most AI (read modern DL) research is not about understanding human intelligence, but automatic things we could not do before.

Yes, and that's a bad thing. I don't care if shopping site recommendations are 82% accurate rather than 78%, or w/e. We've traded an attempt at answering an immensely important question for a fidget spinner.

> Understanding human intelligence is nowadays a different field

And what would that be?


The Bitter Lesson seems to be generally accepted knowledge in the field. Wouldn't that make DeepSeek R1 even more of a breakthrough?


That was the “bitter lesson” in action.

For example, there are clever ways of rewarding every step of a reasoning process to train a network to “think”, but DeepSeek found these don't work as well as much simpler yes/no feedback on whether the final answer is right.
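Roughly, a minimal sketch of the two reward styles in Python (hypothetical names, not DeepSeek's actual code); R1 reportedly relied on rule-based outcome rewards (answer correctness plus a format check) rather than a learned per-step scorer:

    def process_reward(steps, step_scorer):
        # dense reward: score every intermediate reasoning step,
        # which requires a trained step_scorer model
        return sum(step_scorer(s) for s in steps) / max(len(steps), 1)

    def outcome_reward(final_answer, reference_answer, well_formatted=True):
        # sparse "yes/no" reward: 1 if the final answer checks out, else 0,
        # plus a small bonus for following the expected output format
        correct = 1.0 if final_answer.strip() == reference_answer.strip() else 0.0
        return correct + (0.1 if well_formatted else 0.0)

The surprising part is that the sparse signal, scaled up, worked better than the carefully shaped one, which is the bitter lesson again.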


nice read and insightful

