I kind of wish I had the time to actually do it right now and see how it works. But here's how I imagine it going:
1) Establish tests for the is-in-set function. You're absolutely right that the most obvious way to do this meaningfully is to reimplement the function. A better approach would be to find some way to leverage an existing "known good" implementation for the test. Maybe a graphics file of the Mandelbrot set that we can test against?
2) Establish tests that, given an arbitrary (and quick!) is-in-set function, we write out the correct chunk of graphics (file?) for it.
3) Profit.
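To make step 1 concrete, here's a minimal sketch (Python is my choice, not the original's, and `max_iter` is an assumed cutoff). Besides reimplementing the function or diffing against a known-good image, a third cheap option is to spot-check points whose membership is known analytically, which keeps the test independent of the implementation:

```python
# Escape-time membership test for the Mandelbrot set.
# max_iter is a tunable cutoff (an assumption, not from the original
# post): any point still bounded after max_iter steps is treated as
# "in the set", which is the usual approximation.

def is_in_set(c: complex, max_iter: int = 1000) -> bool:
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:  # once |z| exceeds 2, the orbit provably escapes
            return False
    return True

# Spot checks against points whose status is known analytically,
# so the test doesn't have to reimplement anything:
assert is_in_set(0j)             # fixed point at the origin
assert is_in_set(-1 + 0j)        # period-2 cycle: 0, -1, 0, -1, ...
assert is_in_set(-2 + 0j)        # orbit lands on the fixed point z = 2
assert not is_in_set(2 + 0j)     # escapes immediately
assert not is_in_set(0.26 + 0j)  # just outside the cardioid cusp at 0.25
```

Of course, a handful of hand-picked points only catches gross errors; the known-good-image comparison is still what you'd want for real coverage.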
Observations:
1) I absolutely would NOT do the "write a test; write just enough code to pass that test; write another test..." thing for this. My strong inclination would be to write a fairly complete set of tests for is-in-set, and then focus on making that function work.
2) There's really no significant design going on here. I'd be using the exact same overall design I used for my first Mandelbrot program, back in the early 90s. (And of course, that design is dead obvious.)
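Step 2 above could be sketched like this (Python again; the ASCII grid and the unit-disc stand-in predicate are my assumptions, chosen so the expected chunk of output can be written out by hand). The point is that the rendering layer takes the predicate as a parameter, so it can be tested with something trivial:

```python
# Render a window of the complex plane as rows of '#' (in set) and
# '.' (out), using whatever membership predicate is injected.

def render(is_in_set, width, height, x0, y0, x1, y1):
    rows = []
    for j in range(height):
        y = y0 + (y1 - y0) * j / (height - 1)
        row = ''.join(
            '#' if is_in_set(complex(x0 + (x1 - x0) * i / (width - 1), y))
            else '.'
            for i in range(width)
        )
        rows.append(row)
    return rows

# With a hand-checkable stand-in predicate (the unit disc), the
# expected chunk can be written directly into the test:
in_unit_disc = lambda c: abs(c) <= 1.0
grid = render(in_unit_disc, 5, 5, -2.0, -2.0, 2.0, 2.0)
assert grid == [
    '.....',
    '..#..',
    '.###.',
    '..#..',
    '.....',
]
```

Swapping the real is-in-set function in afterwards changes nothing about the rendering tests, which is exactly the separation step 2 was after.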
In my mind, the world of software breaks down something like this:
1) Exhaustive tests are easy to write.
2) Tests are easy to write.
3) Tests are a pain to write.
4) Tests are incredibly hard to write.
5) Tests are impossible to write.
I think it's pretty telling that when TDD people talk about tests that are hard to write, they mean easy tests in hard-to-get-at areas of your code. I've never heard one discuss what to do if the actual computations are hard to verify (i.e. 4 and 5 above), and when I've brought it up to them, the typical response is "Wow, guess it sucks to be you."
1) I absolutely would NOT do the "write a test; write just enough code to pass that test; write another test..." thing for this. My strong inclination would be to write a fairly complete set of tests for is-in-set, and then focus on making that function work.
The latter is what I’d expect most developers who like a test-first approach to do. I don’t see anything wrong with it, either. I just don’t think it’s the same as what TDD advocates are promoting.
I think it's pretty telling that when TDD people talk about tests that are hard to write, they mean easy tests in hard-to-get-at areas of your code. I've never heard one discuss what to do if the actual computations are hard to verify (i.e. 4 and 5 above), and when I've brought it up to them, the typical response is "Wow, guess it sucks to be you."
Indeed. At this point, I’m openly sceptical of TDD advocacy and consider much of it to be somewhere between well-intentioned naïveté and snake oil. There’s nothing wrong with automated unit testing, nor with writing those unit tests before/with the implementation rather than afterwards. Many projects benefit from these techniques. But TDD implies much more than that, and it’s the extra parts — or rather, the idea that the extra parts are universally applicable and superior to other methods — that I tend to challenge.
Thus I object to the original suggestion in this thread, which was that a developer probably doesn’t know what they are doing just because they can’t articulate a test case according to the critic’s preferred rules. I think those rules are inadequate for many of the real world problems that software developers work on.
I almost feel like we should come up with a "How the hell would you test this?" challenge for TDD advocates. At least, my impression is it is mostly naïveté rather than snake oil.