The vast majority of AI companies I talk to seem to evaluate models mostly based on vibes.
At my company, we use a mix of offline and online evals. I’m primarily interested in search agents, so I’m fortunate that information retrieval is a well-developed research field with clear metrics, methodology, and benchmarks. For most teams, I recommend shipping early/dogfooding internally, collecting real traces, and then hand-curating a golden dataset from those traces.
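To make "clear metrics" concrete: once you have a hand-curated golden dataset of (query, relevant doc ids) pairs, standard IR metrics like MRR and recall@k are a few lines of code. This is a minimal sketch; `golden_set` and `search` are hypothetical placeholders for your own dataset and retrieval function.

```python
# Minimal sketch of scoring a search system against a hand-curated golden
# dataset. golden_set: list of (query, set_of_relevant_doc_ids);
# search(query) -> ranked list of doc ids. Both are hypothetical names.

def mrr(golden_set, search, k=10):
    """Mean reciprocal rank of the first relevant doc in the top-k results."""
    total = 0.0
    for query, relevant_ids in golden_set:
        results = search(query)[:k]
        for rank, doc_id in enumerate(results, start=1):
            if doc_id in relevant_ids:
                total += 1.0 / rank
                break
    return total / len(golden_set)

def recall_at_k(golden_set, search, k=10):
    """Fraction of relevant docs retrieved in the top-k, averaged per query."""
    total = 0.0
    for query, relevant_ids in golden_set:
        retrieved = set(search(query)[:k])
        total += len(retrieved & set(relevant_ids)) / len(relevant_ids)
    return total / len(golden_set)

# Toy usage with a stubbed search function:
golden = [("q1", {"a"}), ("q2", {"b", "c"})]
stub_search = lambda q: {"q1": ["x", "a"], "q2": ["b", "y", "c"]}[q]
print(mrr(golden, stub_search))          # q1: 1/2, q2: 1/1 -> 0.75
print(recall_at_k(golden, stub_search))  # q1: 1/1, q2: 2/2 -> 1.0
```

The same loop works for online evals if you log real queries plus human relevance judgments from your traces.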
Many people run simple ablation experiments where they swap out the model and see which one performs best. That approach is reasonable, but I prefer a more rigorous setup.
If you only swap the model, some models may appear to perform better simply because they happen to work well with your prompt or harness. To avoid that bias, I use GEPA to optimize the prompt for each model/tool/harness combination I’m evaluating.
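The fairer ablation described above can be sketched as: pick the best prompt per model first, then compare models head-to-head. In practice the per-model prompt search would be done by an optimizer like GEPA; here a tiny grid search over candidate prompts stands in for it, and `evaluate` is a hypothetical stand-in for running your eval suite.

```python
# Sketch of the fairer ablation: optimize the prompt per model before
# comparing models, so no model is penalized for disliking a shared prompt.
# evaluate(model, prompt) -> score is a hypothetical eval-suite hook.

def best_prompt_per_model(models, candidate_prompts, evaluate):
    best = {}
    for model in models:
        # Score every candidate prompt for this model and keep the best.
        scored = [(evaluate(model, p), p) for p in candidate_prompts]
        best[model] = max(scored)  # (score, prompt) pair
    return best

# Toy usage: each model prefers a different prompt, so evaluating both
# with a single shared prompt would bias the comparison.
scores = {
    ("model-a", "terse"): 0.70, ("model-a", "verbose"): 0.55,
    ("model-b", "terse"): 0.60, ("model-b", "verbose"): 0.80,
}
best = best_prompt_per_model(
    ["model-a", "model-b"],
    ["terse", "verbose"],
    evaluate=lambda m, p: scores[(m, p)],
)
print(best)  # {'model-a': (0.7, 'terse'), 'model-b': (0.8, 'verbose')}
```

With a shared prompt (say, "terse"), model-a would look better; with per-model prompts, model-b wins, which is the bias the grid search removes.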
Ah, interesting – yeah only swapping out the model isn't super insightful since models perform differently given different prompts. I'm going to look into GEPA, thanks!