They’ll be forced to work on it when the bugs in the new system are uncovered.
If the system is simple enough, someone might take enough time to understand and verify the test suite, to the point where they can keep adding regression tests to it and maybe mostly call it done.
They probably won’t do this though (based on the situation the company was in in the first place), and people will have Claude fix it and write tests that no one verified. And in a while the test suite will be so full of tests that reimplement the code instead of testing it that it will be mostly useless.
Then someone else will come in and vibe code a replacement that won’t have the bugs the current system does but will have a whole new set.
And the cycle will continue.
The same cycle that I’ve seen in the bottom 80% of companies I’ve worked for, just faster.
I have a crash reporting system that sends me crash information in a text file: the callstack of the crashed thread, basic OS info, and the logs of that execution.
The way I (well, Claude) fixed this bug: I said "analyze crash report: <paste crash report>" and it did it in under a minute.
Recently I've fixed at least 30 bugs with this process (you can view the recent checkins).
Those are crashes that I found hard to fix, even though by human standards I'm both an expert developer and an expert Windows API developer.
But I'm not an autistic machine that can connect the dots between how every Windows API works, the callstack, and the information from the log, all in under a minute.
There's nothing I love more than unfounded arrogance.
How about you try to make a change to the SumatraPDF codebase?
Let's see how good of an engineer you are when you actually have to write a line of C++ in a complex codebase, as opposed to commenting on a check-in with an explanation of the issue and a fix.
Fixing a bug is in the wheelhouse of AI to the extent that the fix can be verified — since there is a clear objective function. The real question is whether there are unintended side effects (e.g., new bugs that get introduced) or whether the test cases are comprehensive enough to determine whether the fix worked.