Test-Driven Design: not for the early stages of development (tbray.org)
35 points by yungchin on June 24, 2009 | 11 comments


mjd also wrote on this:

"There must be a specification," a couple of people said. "Just write the tests to check the requirements in the specification." There was an amazing disconnect here between what I was asking and what they were answering. http://use.perl.org/comments.pl?sid=42511&cid=67765

I think TDD is a great idea - if "you know what you are doing". It solidifies your interfaces, which is great, provided you won't want to change them.

It's the dogmatism of TDD that scares me. When "heresy" is applied to a methodology, it makes me think that a theory of reality has been favoured over reality. But hey, maybe that leaves the field clear for the reality-seekers.


I'm not sure I fully understood everything you were saying here.

But I would mention that unit tests give you a nice picture of what will break when you DO change your interfaces. The list of broken tests is your new task set.

This is a significant benefit.
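To make that concrete, here's a minimal JUnit sketch (the Parser class and its parse method are made up for illustration):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;
    import java.util.Arrays;
    import java.util.List;

    // Hypothetical interface under test: split a comma-separated string.
    class Parser {
        List<String> parse(String input) {
            return Arrays.asList(input.split(","));
        }
    }

    public class ParserTest {
        @Test
        public void parsesSimpleInput() {
            // If parse(String) later becomes parse(Reader), this test
            // breaks immediately and lands on the "new task set".
            assertEquals(3, new Parser().parse("a,b,c").size());
        }
    }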


I agree with you that tests will tell you when you break your interfaces. My position is that in some cases you don't want that.

mjd explains it well. Here's my attempt at elaboration:

It only applies to a particular kind of project, at a particular stage. Imagine you are designing a project in which you change the interfaces at the drop of a hat: daily, or more often. In this scenario, the "new task set" of broken tests is extra work that increases the viscosity of your code. It's a distraction from your actual exploratory task of trying out different interfaces. You are setting out to change interfaces because you are experimenting with what they should be: that's your actual task. This only applies when you need extremely fluid code, and when it doesn't really matter if there are lots of bugs. Think of a poet daydreaming, an artist sketching a very rough outline, or an inventor's first prototype, made with wax and string.


I thought of another way to say this (not that you need it):

Sometimes you write code to throw away. James Gosling has said he does this. Fred Brooks said "build one to throw away". Novelists write drafts, artists draw sketches and sculptors carve studies (I was amazed to see that Michelangelo did several studies for his sculpture of David - full size, in marble!)

In contrast, tests are an investment in code. They make the code better, more reliable, more correct - more tested. They make it harder to throw away the code.


Not much to add to that; it's pretty much exactly the way I feel things ought to be. I.e., be sensible and use your head: if you've got things figured out, spend some time putting together some good tests; they'll save you in the long run. If, on the other hand, you're still exploring, writing tests along with the code is likely to be a waste of time if you end up throwing things out a few times.


As long as your code can be unit tested, it's okay to test it later.

But what if the "exploratory" code becomes hard or almost impossible to test (due to hard-wired dependencies and the like)? Refactor it? But then you have no tests acting as a safety net during the refactoring.
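To illustrate the kind of trap I mean (all the names here are made up): exploratory code tends to construct its own dependencies, and that's exactly what makes it hard to test after the fact.

    // Hypothetical dependency.
    class Database {
        private final String host;
        Database(String host) { this.host = host; }
        String lookupName(int userId) { return "user-" + userId; }
    }

    // Hard to unit-test: the dependency is hard-wired inside,
    // so a test can't substitute a fake.
    class ReportService {
        String buildReport(int userId) {
            Database db = new Database("prod-host");
            return "Report for " + db.lookupName(userId);
        }
    }

    // One way out: inject the dependency, so a test can pass a stub.
    class TestableReportService {
        private final Database db;
        TestableReportService(Database db) { this.db = db; }
        String buildReport(int userId) {
            return "Report for " + db.lookupName(userId);
        }
    }

The refactoring itself is small, but doing it without any tests in place is exactly the unnerving part.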


I agree with and understand this part:

> ... when I’m getting started, I never know what X and Y are.

> ... once you’re into maintenance mode,

But how does one know when maintenance mode starts? In the build something/release/add feature/release/add feature cycle, it is easy to rationalize that I am not in maintenance mode until it is too late.

Of course, as davidw said above, no advice is better than "use your head" :)


I think migrating from prototype to maintenance mode is a continuum. For some reason maintenance mode seems to begin (IMHO) somewhere around 60-80% of the work for version 1.0. At that point it's too risky not to be testing routinely, and things should have settled enough that you can depend on some solidified interfaces.

I like this approach to TDD. It feels more efficient than starting with failing test cases on day 1.


My last job was quite an eye-opener as far as TDD was concerned. I was a C++ developer on my team who also coded in Java as the need arose. The Java team lead would absolutely INSIST that we write the JUnit tests before coding the actual implementation (no such headache on the C++ side). It was a major pain, because time we should have spent doing "real work" was used up writing tests.

We came to appreciate it, however, when we got into the maintenance phase of the project. A quick run of the tests would give us confidence that nothing we did in the current release had broken what already existed. We had a bunch of test harnesses for each Java module in the system. It was the coolest thing I had seen!
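For anyone who hasn't worked this way, the rhythm looked roughly like this (the module and numbers are made up):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Step 1: write the test first. It fails (in fact it won't even
    // compile) because Discounts doesn't exist yet.
    public class DiscountsTest {
        @Test
        public void tenPercentOffAtOrAboveOneHundred() {
            assertEquals(90.0, Discounts.apply(100.0), 0.001);
        }
    }

    // Step 2: write just enough implementation to make it pass.
    class Discounts {
        static double apply(double price) {
            return price >= 100.0 ? price * 0.9 : price;
        }
    }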


I tried to have a crack at the testing issue as it applies to startups in my last blog post: http://unfeatureddocuments.com/content/unit-tests-headlights... Loosely put, my feeling is that, for a startup, you need to focus testing on the parts of the code you believe will still be in use six months from now.


I used to start with one chunk of prototype code in main() and factor out functions and classes as I realized they made sense. Now I make assertions as I go and name it testMain(). Sometimes there's just one assertion of the final result at the end, because even that helps.
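Something like this, if it helps to picture it (wordCount is just a stand-in for whatever the prototype actually does):

    public class Scratch {
        static int wordCount(String s) {
            return s.trim().isEmpty() ? 0 : s.trim().split("\\s+").length;
        }

        public static void main(String[] args) {
            testMain();
        }

        static void testMain() {
            // Assertions accumulate as the prototype evolves
            // (run with java -ea so they actually fire).
            assert wordCount("") == 0;
            assert wordCount("one two three") == 3;
            // Sometimes just one assertion of the final result:
            assert wordCount("hello world") == 2;
            System.out.println("all assertions passed");
        }
    }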



