
You can't have it both ways, in my opinion. High test coverage == testing every possible path == looking at implementation details. If you are testing an algorithm (and games are full of these), you want it to be 100% accurate, so you don't have much choice.


Tests should be based on the specification. If I want to change some internal implementation detail I should only have to verify that the current tests pass.

If, for example, a game contains a sort somewhere in the renderer, I can replace the quicksort with a mergesort as long as the renderer interface still tests OK. The new sort algorithm may have new special-case paths (even vs. odd number of items, for example), but that's not a concern of the renderer's public interface. I may, however, have introduced a bug with an odd number of items; the old code was 100% covered and now it isn't. So there is a potential problem, and the drop to 99% has actually helped spot it.
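A minimal sketch of the point (all names here are hypothetical): the test pins down the renderer's observable contract, e.g. that sprites come out back-to-front, not which sort algorithm produces that order, so the sort can be swapped freely.

```python
def quicksort(items, key):
    # Implementation detail; could be swapped for a mergesort without
    # touching the test below. (A stand-in using sorted() for brevity.)
    return sorted(items, key=key)

class Renderer:
    def draw_order(self, sprites):
        # Public contract: sprites are drawn farthest-first (descending z).
        return quicksort(sprites, key=lambda s: -s["z"])

def test_draw_order_is_back_to_front():
    r = Renderer()
    sprites = [{"id": "a", "z": 1}, {"id": "b", "z": 3}, {"id": "c", "z": 2}]
    ordered = r.draw_order(sprites)
    assert [s["id"] for s in ordered] == ["b", "c", "a"]

test_draw_order_is_back_to_front()
```

This test keeps passing across the quicksort-to-mergesort swap, which is exactly why it says nothing about the new algorithm's odd/even code paths.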

If the sorting is a private implementation detail of the renderer, then there is nowhere to test it except by adding a new test to the renderer component, purely because the sorting algo needs it for a code path. This is BAD.

The proper action here is NOT to add tests to the renderer component to exercise the sorting code path, but to make the sorting visible and testable in isolation via its own public interface.
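For instance (a hypothetical sketch), pull the sort out behind its own public interface so its special cases, like odd vs. even item counts, get tested directly instead of bloating the renderer's test suite:

```python
def merge_sort(items, key=lambda x: x):
    """Public, independently testable sort."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid], key)
    right = merge_sort(items[mid:], key)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if key(left[i]) <= key(right[j]):
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

# The sort's own tests can now cover its code paths exhaustively:
assert merge_sort([]) == []
assert merge_sort([3, 1, 2]) == [1, 2, 3]        # odd item count
assert merge_sort([4, 3, 1, 2]) == [1, 2, 3, 4]  # even item count
```

The renderer then depends only on this public interface, and its own tests go back to checking the rendering contract, not sort internals.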

So one of the positive things about requiring coverage is that if you do it right, it will lead to smaller and more decoupled modules of code.

The bad thing is that if you do it wrong you will have your God classes and a bunch of tests coupled tightly to them.


This is sort of the conclusion I was coming to. I am glad to hear someone else express it :)



