One thing I like to say when people bring up test coverage: "covered" only means the code was executed, not that it was correct, so coverage is at best a weak statement of quality. Uncovered code, on the other hand, is completely unverified, so bugs are more likely to be hiding there. That said, if the uncovered code is trivial, it may not be worth the maintenance cost of another test just for it.
So when a team claims 100% code coverage, that's usually just a signal that they care about testing and code quality, and that is why their code tends to be less buggy, not because the 100% figure itself made it so.
In practice I only use a coverage tool to find important places with no coverage at all, and only AFTER I've written tests against the code's intended behavior/spec from an outside perspective. It's a secondary check once you think you're already done: you stay focused on what correct inputs and outputs look like, then use the tool to patch up the small spots you missed.