Hacker News

1) I absolutely would NOT do the "write a test; write just enough code to pass that test; write another test..." thing for this. My strong inclination would be to write a fairly complete set of tests for is-in-set, and then focus on making that function work.
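The "complete set of tests, then make the function work" approach could look something like this sketch. The `is_in_set` name comes from the comment above; its signature and semantics here (membership of a value in a collection of inclusive ranges) are an assumption for illustration only:

```python
# Sketch of "write a fairly complete set of tests for is-in-set up front,
# then focus on making the function pass them" -- as opposed to the strict
# one-test-at-a-time TDD loop. The signature is a hypothetical example.

def is_in_set(value, ranges):
    """Return True if value falls within any (lo, hi) inclusive range."""
    return any(lo <= value <= hi for lo, hi in ranges)

def test_is_in_set():
    # The whole suite is written as one batch before (or alongside)
    # the implementation, covering boundaries, gaps, and edge cases:
    ranges = [(0, 5), (10, 20)]
    assert is_in_set(0, ranges)        # lower boundary of first range
    assert is_in_set(5, ranges)        # upper boundary of first range
    assert is_in_set(15, ranges)       # interior of second range
    assert not is_in_set(7, ranges)    # gap between the ranges
    assert not is_in_set(-1, ranges)   # below all ranges
    assert not is_in_set(21, ranges)   # above all ranges
    assert not is_in_set(3, [])        # empty set contains nothing

test_is_in_set()
```

The point of contention isn't the tests themselves but the ordering discipline: here the suite is designed as a whole, rather than grown one failing test at a time.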

The latter is what I’d expect most developers who like a test-first approach to do. I don’t see anything wrong with it, either. I just don’t think it’s the same as what TDD advocates are promoting.

I think it's pretty telling that when TDD people talk about tests that are hard to write, they mean easy tests in hard-to-reach areas of your code. I've never heard one discuss what to do when the actual computations are hard to verify (i.e. 4 & 5 above), and when I've brought it up to them, the typical response is "Wow, guess it sucks to be you."
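A hypothetical example of the kind of hard-to-verify computation alluded to above: a numerical routine where no hand-computed expected value exists, so a test can at best assert coarse bounds rather than the answer itself. The integrand and bounds here are my own illustration, not from the thread:

```python
import math

def integrate(f, a, b, n=1000):
    """Trapezoidal approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    total += sum(f(a + i * h) for i in range(1, n))
    return total * h

# exp(-x^2) has no elementary antiderivative, so there is no exact
# expected value to write into a test before implementing. The best a
# test can do is sanity-check coarse properties: on [0, 1] the integrand
# lies between 1/e and 1, so the integral must too.
result = integrate(lambda x: math.exp(-x * x), 0.0, 1.0)
assert 1 / math.e < result < 1.0
```

Such property checks catch gross breakage but can't confirm the computation is actually right, which is precisely the gap the comment is pointing at.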

Indeed. At this point, I’m openly sceptical of TDD advocacy and consider much of it to be somewhere between well-intentioned naïveté and snake oil. There’s nothing wrong with automated unit testing, nor with writing those unit tests before/with the implementation rather than afterwards. Many projects benefit from these techniques. But TDD implies much more than that, and it’s the extra parts — or rather, the idea that the extra parts are universally applicable and superior to other methods — that I tend to challenge.

Thus I object to the original suggestion in this thread, which was that a developer probably doesn’t know what they are doing just because they can’t articulate a test case according to the critic’s preferred rules. I think those rules are inadequate for many of the real-world problems that software developers work on.



I almost feel like we should come up with a "How the hell would you test this?" challenge for TDD advocates. At least, my impression is that it's mostly naïveté rather than snake oil.



