Not sure what you are trying to say. Are you trying to say that the CVE publication happened too early? If so, the timeline tells a different story:
2016-12-08 First contact with Ansible security team
2016-12-09 First contact with Red Hat security team ([email protected])
2016-12-09 Submitted PoC and description to [email protected]
2016-12-13 Ansible confirms issue and severity
2016-12-15 Ansible informs us of intent to disclose after holidays
2017-01-05 Ansible informs us of disclosure date and fix versions
2017-01-09 Ansible issues fixed version
I always preferred using plain SSH and Shell for automatically configuring servers.
That's why I had a look at Ansible, but despite the simple concept, it seemed too bloated to me: too many layers around a simple idea. I never expected to be proven right, though, let alone that this complexity would translate into six scary security holes in a row.
To propose an alternative, this is how I'm currently doing it. I'd love to hear what others think about it, or whether others have tried similar things:
Note: This works best for a relatively small and/or heterogeneous set of servers/VMs. So it's not really the full-blown Ansible use case. But then, how many people really have billions of users and a large set of homogeneous servers at their hands? I bet the "long tail" of web applications doesn't need more than a few servers. Or, if your hardware is strong enough, just two servers, where the first one takes all the load and the second one is a failover system on stand-by.
- Use a plain shell and plain shell commands, which has the advantage that you can quickly try them interactively in any VM.
- Use "set -eu" so the script halts on error instead of blindly executing all following commands.
- Use appropriate commands, e.g. "install" instead of "cat+chmod+chown".
- Use "--backup=numbered" to keep backups of previous versions of files, if you want.
- Use only idempotent commands, i.e. commands that you can run multiple times without changing anything of significance.
- Prefer portable commands, but use distro-specific code as needed. By the time you switch to another distro you'll have a new script anyway, because you'll have a different system with different requirements.
- Write everything down in the style of "copy&paste" documentation (does this count as "literate programming"?), but put all your remarks in shell comments. Then automate the "paste" part by making the file executable.
Example file (executable file "myserver_config"):
#!/bin/sh
tail -n +3 "$0" | ssh -p 1234 root@myserver_ip ; exit  # send everything from line 3 on to the server, run it there, then stop
set -eu
# Install packages
...
# Configure Foo
...
# Configure Bar
...
# Configure Nginx
#
# Configure Nginx so that the foo fits into the bar
# and boggles the foobar.
install --backup=numbered -o root -g root -m 600 /dev/stdin /etc/nginx/nginx.conf <<'EOF'
error_log /var/log/nginx/error.log;
events {
...
}
http {
access_log /var/log/nginx/access.log;
...
}
EOF
# Do more stuff
...
Edit this file and redeploy by simply executing it:
./myserver_config
The script may seem hard to read, but with syntax highlighting it's a breeze. And you can easily convert it to HTML, AsciiDoc or Markdown if you need "real" documentation for your customer or other project members. It's just a few lines of code in Python, or whatever language you prefer.
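As a sketch of such a converter, here is one in plain shell to stay consistent with the rest of the setup (the function name and sample input are hypothetical, and a real converter would want fenced code blocks and smarter heredoc handling):

```shell
#!/bin/sh
# Hypothetical sketch: convert a "myserver_config"-style script to Markdown.
# Comment lines become prose paragraphs; everything else becomes indented code.
to_markdown() {
  while IFS= read -r line; do
    case "$line" in
      "# "*) printf '%s\n' "${line#"# "}" ;;  # "# text" -> prose
      "#"*)  printf '\n' ;;                   # bare "#" line -> blank line
      *)     printf '    %s\n' "$line" ;;     # command -> indented code
    esac
  done
}

# prints "Install packages" followed by "    apt-get install -y nginx"
printf '# Install packages\napt-get install -y nginx\n' | to_markdown
```

Feeding it `tail -n +3 myserver_config` would skip the shebang and ssh lines, so only the remote part of the script lands in the documentation.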
A couple of reasons spring to mind as to why Ansible is an improvement on the eminently workable approach you outline here.
One is idempotency - the fact that it is not easy to run the above script on a bunch of hosts just to 'fix up' a config that was somehow broken, or to add or amend a directive in httpd.conf, that type of thing. With Ansible, you could re-run the playbook and it would only change what needed to be changed to bring everything in line with the playbook, i.e. just the directives in httpd.conf, and then reconfigure httpd to bring in the changed config.
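As an illustration of that kind of targeted fix-up (the host group, path and directive here are hypothetical), a minimal playbook using the stock `lineinfile` module and a handler, so httpd is only reloaded when the file actually changed:

```yaml
# Hypothetical minimal playbook: amend one directive, reload only on change.
- hosts: webservers
  tasks:
    - name: Ensure KeepAlive is enabled
      lineinfile:
        path: /etc/httpd/conf/httpd.conf
        regexp: '^KeepAlive '
        line: 'KeepAlive On'
      notify: reload httpd
  handlers:
    - name: reload httpd
      service:
        name: httpd
        state: reloaded
```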
The other is inventory: such a change as I outlined could, with a one-liner playbook command, be run against all Dev hosts only, or Dev and UAT hosts only, or Red Hat hosts only, etc. When you're managing a lot of hosts this is invaluable - even just for checking some config or other by running a shell command against some set of hosts.
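For instance (the group names and inventory are hypothetical), Ansible's ad-hoc mode runs a single module against a host pattern taken from the inventory, and `--limit` restricts a playbook run the same way:

```shell
# Hypothetical invocations against inventory groups:
ansible dev -m shell -a 'grep -c MaxClients /etc/httpd/conf/httpd.conf'
ansible 'dev:uat' -m setup                  # gather facts from dev and uat
ansible-playbook site.yml --limit redhat    # run the playbook on one group only
```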
Ansible's idempotency is an illusion. Removing an item from a playbook will practically cause it to be "forgotten" by Ansible, but left on machines where it was deployed previously. While there are mechanisms to remove stale items (state=absent), in practice it takes Spock-level discipline to do this properly. Playbooks are also not tied to a particular Ansible version, meaning just upgrading Ansible can cause changes even though your definitions are still the same. In my experience Ansible promises consistent environments, but consistently fails to deliver.
> One is idempotency - the fact that it is not easy to run the above script on a bunch of hosts just to 'fix up' a config that was somehow broken, or to add or amend a directive in httpd.conf, that type of thing
In my case, I simply overwrite all config files and restart all services. This is not "efficient" (though still sub-second), but I believe it is idempotent in every sense.
When you write `install`, is this another script that you have installed on the server beforehand or is it a function/alias or something? [EDIT: Duh, didn't realize this was actually a standalone program (standard but non-POSIX AFAICT) I haven't used.]
> While I use here documents a bunch too this is the first time I've seen this idiom
I came up with this trick to avoid nested here documents, which would make syntax highlighting unusable.
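For comparison, nested here documents do work in plain sh - the outer and inner delimiters just have to differ - but most syntax highlighters give up on them. A minimal local demo of the nesting (no ssh involved):

```shell
# Nested here documents: the outer document is a script fed to "sh -s",
# and that script itself contains an inner here document for "cat".
# prints: hello from the inner document
sh -s <<'OUTER'
cat <<'INNER'
hello from the inner document
INNER
OUTER
```

The `tail -n +3 "$0" | ssh …` trick sidesteps exactly this: the remote script is the file itself rather than a document nested inside another document.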
> When you write `install`, is this another script that you have installed on the server beforehand or is it a function/alias or something?
The "install" tool is very common on almost all Unix systems. It is usually called by a Makefile on "make install", but of course it can be used by anyone.
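A quick local demo (file paths are just examples): install copies a file and sets its permissions in one step, which would otherwise take a cp plus chmod (plus chown when you add -o/-g, which requires root):

```shell
# "install" = copy + set mode (+ owner/group with -o/-g, root only) in one step.
printf 'hello\n' > /tmp/demo_src.txt
install -m 600 /tmp/demo_src.txt /tmp/demo_dst.txt
ls -l /tmp/demo_dst.txt   # mode is -rw-------
```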
In my experience with Ansible, Salt and Chef, their key selling point compared to shell scripts is composability and abstracting away the underlying distribution. The latter is so leaky it's not very useful. Composability is possible, but the building blocks lack any opinionated structure, so everyone re-invents the wheel anyway.
Today I wouldn't use any of those anymore and, as you suggested, would use shell scripts in an immutable-infrastructure fashion (i.e., reinstall the system when changes are needed).
Sure, but we ansible users are already accustomed to that. :/
More seriously, ansible is terrible and I have a long list of complaints, but it is still better than everything else at the moment. Being able to run playbooks from anywhere without a special controller host is very important, and seems to have been missed by most competitors.
> To propose an alternative, this is how I'm currently doing it. I'd love to hear what others think about it
Different tools for different scales. Ansible exists as an improvement over ad-hoc shell scripts, simple/ad-hoc inventories, copy-pasting etc.
I think it's very unfair to all the contributors to highlight some security issues and say "I told you so", and propose a shell script as an alternative.
EDIT: I see your edits, acknowledging more complex environments...