Journals employ copy editors to address just those sorts of mistakes; why should we not hold software to the same standard as academic language? More importantly, these software best practices aren't mere "grammatical mistakes"; they exist because well-organized, well-tested code has fewer bugs and is easier for third parties to verify. Third-parties validating that the code underlying an academic paper executes as expected is no different than third-parties replicating the results of a physical experiment. You can be damn sure that an experimental methodology error invalidates a paper, and you can be damn sure that poor documentation of the methodology dramatically reduces the paper's value and reliability. Code is no different. It's just been the wild west because programming is a relatively new and immature field, so most academics have never been taught coding as a discipline nor been held to rigorous standards in their own work. Is it annoying that they now have to learn how to use these tools properly? I'm sure it is. That doesn't mean it isn't a standard we should aim for, nor that we shouldn't teach the relevant skills to current science students so that they are better prepared when they become researchers themselves.
> Third-parties validating that the code underlying an academic paper executes as expected is no different than third-parties replicating the results of a physical experiment.
First, it's not no different--it's completely different. Third parties have always constructed their own apparatus to reproduce an experiment. They don't go to the original author's lab to perform the experiment!
Second, a lot of scientific code won't run at all outside the environment it was developed in.
If it's HPC code, it's very likely that the code makes assumptions about the HPC cluster that will cause it to break on a different cluster. If it's experiment control / data-acquisition code, you'll almost certainly need the exact same peripherals for the program to do anything at all sensible.
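To make the HPC point concrete, here is a minimal sketch of the kind of environment assumptions that commonly get baked into cluster code. Every specific in it (the path, the `SLURM_NTASKS` reliance, the 192 GiB node size) is hypothetical, chosen only to illustrate the pattern:

```python
import os

# Hypothetical example: a scratch path that only exists on the
# original author's cluster.
SCRATCH = "/gpfs/cluster42/scratch/alice/run_outputs"

def chunk_bytes():
    """Partition work assuming 192 GiB nodes and a SLURM scheduler."""
    # Relies on a SLURM-exported variable; on a PBS or LSF cluster
    # (or a laptop) this raises KeyError immediately.
    n_tasks = int(os.environ["SLURM_NTASKS"])
    # Hardcoded node memory size: wrong on any differently sized node.
    return (192 * 2**30) // n_tasks

# Simulate running under SLURM with 8 tasks:
os.environ["SLURM_NTASKS"] = "8"
print(chunk_bytes())  # 192 GiB split 8 ways
```

None of these assumptions is visible in the paper's methods section, which is exactly why "the code runs bit-for-bit on my machine" says so little about portability.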
I see a lot of people here on HN vastly overestimating the value of bit-for-bit reproducibility of one implementation, and vastly underestimating the value of having a diversity of implementations to test an idea.
I’m glad someone else feels this way. It’s an expectation that scientists can share their work with other scientists using language. Scientists aren’t always the best writers, but there are standards there. Writing good code is also a form of communication. It baffles me that there are absolutely no equivalent standards for it.
I agree with your overall point, but I just want to point out that many (most?) journals don't employ copy editors, and those that do overlook many errors, especially in the methods sections of papers.