Sharing code is not the full answer, especially in the case of
Monte Carlo simulations in physics, because that kind of algorithm
is hard to test: What is the testing oracle? What is the
specification? But sharing code is part of the answer.
Setting up a culture where it is unacceptable to submit a paper
without open-sourcing the code, along with suitable tests (for simple
edge cases) and scripts that make reproducing the software
simulations easy, is good scientific 'hygiene'. See for example [1,
2] for efforts towards reproducible software submissions in computer
science.
Reproducibility is the very essence of the scientific method.
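Even without a full specification, a Monte Carlo code can be tested against a statistical oracle: run the simulation on a problem with a known analytic answer and check that the estimate falls within a few standard errors of it. A minimal sketch, assuming a hypothetical pi estimator (the function name and tolerances are illustrative, not from any particular codebase):

```python
import math
import random

def estimate_pi(n_samples, seed=0):
    """Monte Carlo estimate of pi via the quarter-circle hit ratio."""
    rng = random.Random(seed)  # fixed seed for a reproducible test run
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return 4.0 * hits / n_samples

# Statistical oracle: each draw is Bernoulli with p = pi/4, so the
# estimate's standard error is 4 * sqrt(p * (1 - p) / n). Requiring
# agreement within 5 standard errors gives a test that fails with
# negligible probability if the estimator is correct.
n = 100_000
p = math.pi / 4
stderr = 4.0 * math.sqrt(p * (1.0 - p) / n)
assert abs(estimate_pi(n) - math.pi) < 5.0 * stderr
```

Simple edge cases can be checked deterministically alongside this: for instance, the estimate is always between 0 and 4 regardless of sample count or seed.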
[1] http://evaluate.inf.usi.ch/artifacts
[2] http://www.artifact-eval.org/