I’ve interviewed many developers who pass a code test, get hired, and then can’t design a basic interface for an object. Or take forever getting through a large feature. Or can’t communicate clearly. Or go down the rabbit hole and implement something really complex and shiny but irrelevant. (Irrelevant isn’t always bad, but it often exposes a weakness in interpreting user needs.)
Maybe 1 in 10 engineers who pass a code test end up being actually good software engineers. The yield definitely varies by role.
Wait, if code tests aren’t perfectly predictive, then how do interviewers get feedback about failures (or even successes) of the screening process? They don’t, because recruiters and hiring managers tend to keep that to themselves.
The story is way, way more complicated than this post lets on.
I know of at least a couple instances where the person failed the code test and then (according to LinkedIn) got a solid job somewhere else.
Triplebyte actually has some hard data on this trend. Many candidates bomb a few initial screens, do well on later screens, and then burn out or even withdraw from later on-sites. I imagine there’s high variance. I’m sure any experienced recruiter has observed the same pattern... bombing one code test is typically not definitive.
>bombing one code test is typically not definitive
How many times would you say someone has to bomb Triplebyte (and/or similar) to make it “definitive”?
And after said number of failures, what exactly is “definitive”? What is the conclusion you would draw? What is the conclusion the candidate should draw?
When I work with students, they're able to get _somewhere_ after several tries, even if they're tuned out for the first 5-7 times or so.
Definitive is a relative term. It's the point at which you give up. If you give up early, that might be a good thing or a bad thing for you. It's a choice you have to make yourself.
There's also the Google result that hires who had one bad interview score typically outperformed others; the hypothesis was that somebody at Google had been willing to fight for them. Having a supportive manager / peer is IMO a much greater predictor of success than passing code tests.
Triplebyte has posted that onsites typically have a 20-30% pass rate. That means a totally average developer can expect to do 3-5 onsites per offer, and there can be significant variance in that number depending on any number of factors, including what the candidate ate for breakfast. There’s very little information conveyed by the number of technical interviews someone has failed.
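For what it's worth, the 3-5 figure falls straight out of the math: if you model each onsite as an independent pass/fail with probability p, the number of onsites per offer is geometric with mean 1/p and a long tail. A quick sketch (the independence assumption is mine; real interviews obviously aren't independent coin flips):

    # Model each onsite as an independent pass/fail with probability p.
    # (Simplifying assumption: real interview outcomes are correlated.)
    for p in (0.2, 0.3):
        mean_onsites = 1 / p          # mean of a geometric distribution
        p_fail_5 = (1 - p) ** 5       # chance of failing 5 onsites in a row
        print(f"pass rate {p:.0%}: ~{mean_onsites:.1f} onsites per offer, "
              f"{p_fail_5:.0%} chance of needing more than 5")

At a 20% pass rate that's roughly a 33% chance of needing more than 5 onsites, so even a perfectly average candidate racks up rejections without that saying much about them.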
Yup. It just means the normal distribution applies to software engineers too, which makes perfect sense. A handful will be so god-awful you'll wonder how they even have a 10-year career in the first place. A handful will be in the first couple of years of their career and you'd think they'd been doing it for 10. Most are average.
But here's the kicker... in software engineering, average means bad. Average code is not good code; only amazing code is good code. So all these questions and tests trying to figure out who's a good engineer are kind of rubbish. Mostly they will surface average engineers. But companies keep thinking "we only hire the best" all the while the trash fire slowly burns.