
I've been writing tools like this in Python for a very long time and haven't run into anything close to what you're describing. But if it did happen to me and I was annoyed enough to do something about it, I'd use a tool (e.g. pip-compile) that generates a requirements.txt file pinning every library and transitive dependency to an exact version. If something breaks one day, you can stick the script in a directory with a venv and just run it from there. There's no need to resort to compiled languages like Go and Rust, which will very likely give you a large raft of other, unrelated problems (slower development velocity, comparatively limited libraries, etc.). Going through all of that just to have a binary in hand doesn't seem remotely worth it from my perspective (YMMV).
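A minimal sketch of that workflow, assuming pip-tools is available; the dependency names and script name are placeholders:

```shell
# List only your direct, loosely-versioned deps by hand.
printf 'requests\nclick\n' > requirements.in

# Give the tool its own interpreter so the system python can't drift under it.
python3 -m venv .venv

# These steps need network access, so they're shown but not run here:
# .venv/bin/pip install pip-tools
# .venv/bin/pip-compile requirements.in   # writes requirements.txt with every
#                                         # transitive dep pinned to an exact version
# .venv/bin/pip install -r requirements.txt
# .venv/bin/python mytool.py
```

The point is that requirements.txt (unlike requirements.in) is a complete, reproducible snapshot, so the same venv can be rebuilt years later.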


The OCI container is the binary. Docker is a tool that takes any existing runtime and turns it into a static binary you can feed to any future Linux(ish) kernel.
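A rough sketch of that framing: freeze a Python script plus its pinned deps into an image, and the image becomes the artifact you ship. The file names here are placeholders, and the build/run step needs a Docker daemon, so it's shown commented out:

```shell
cat > Dockerfile <<'EOF'
# Pin the runtime itself, not just the libraries.
FROM python:3.12-slim
COPY requirements.txt mytool.py /app/
RUN pip install --no-cache-dir -r /app/requirements.txt
ENTRYPOINT ["python", "/app/mytool.py"]
EOF
# docker build -t mytool . && docker run --rm mytool
```

Nothing on the host matters anymore: the interpreter, the deps, and the script all travel together.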

I, on the other hand, have had plenty of situations where some system somewhere is managed with a config management tool (such as Puppet) which is told "make this application with these dependencies", and the application, something like "manage Elasticsearch index rotation", uses the system python and does pip install to pull in all the things. This idiom seems fine at first, but it ultimately ends up a hideous mess: it worked one day and didn't work a year later, because the system python changed, or the dependencies got tangled, or the pip on that system python is no longer compatible with modern package distribution.

You can (correctly) say "don't do that", but the same idiom in a shell script will almost certainly work across decades regardless of whether "sh" is bash or ash or ksh, to some degree of "works" (again, you can _correctly_ say that no shell script ever works correctly).
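For illustration, the kind of idiom I mean, written against POSIX sh only so any of those shells can run it unchanged (the dependency names are just examples):

```shell
#!/bin/sh
# Portable "check deps, then run" idiom: only POSIX constructs,
# so it behaves the same under bash, ash, ksh, dash, etc.
set -eu
for dep in grep sed; do
  command -v "$dep" >/dev/null 2>&1 || { echo "missing: $dep" >&2; exit 1; }
done
echo "deps ok"
```

There's no interpreter version or package index in the loop, which is exactly why it keeps working.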



