
Shipping applications. Also, building.

Let's say I have a web app, and in production it runs in a Docker container. Cool! A bit of port forwarding with nginx or your favorite flavor of X and it's good to go.

When you want to upgrade, you run your integration tests against the new container. To "ship", you just, well, change the port forward once you've confirmed it's good.

If you want to move it to an entirely new server, that's easy too: just pull your image, create your container, then point your IP information at the new server.
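A minimal sketch of that cut-over, assuming a hypothetical "myapp" image, an old container named "web-v1", and nginx proxying to a local port (all names and ports are illustrative):

    # start the new build on a spare host port alongside the old one
    docker run -d --name web-v2 -p 8081:8080 myapp:v2
    # run your checks against it before sending real traffic
    curl -fs http://localhost:8081/health
    # repoint nginx at the new port and reload without dropping connections
    sed -i 's|proxy_pass http://127.0.0.1:8080;|proxy_pass http://127.0.0.1:8081;|' \
        /etc/nginx/conf.d/myapp.conf
    nginx -s reload
    # retire the old container once traffic has drained
    docker stop web-v1

Since the image is the artifact you tested, the container serving traffic is byte-for-byte what passed the integration tests.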

Docker is about containing things: not necessarily for security, but to make a build self-contained. Give it no dependencies outside the container, other than being able to run Docker.



Can you explain that further? I don't get it at all.

No application is that simple. You need services that persist between deployments. Like a database.

So you run the database in its own container and have the containers talk.
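For concreteness, that step might look something like this (hypothetical image names; --link was how Docker wired containers together at the time):

    # run the database in its own container
    docker run -d --name db postgres
    # run the app linked to it; Docker injects DB_* environment
    # variables into the app container so it can find the database
    docker run -d --name web --link db:db myapp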

Now you want to upgrade the database. How do you do that?

So you have some persistent storage that can be attached to the containers.

Now you create a new container that has all the exact knowledge (in the form of ad-hoc scripts?) that its job is to take the attached volume, upgrade the data there, and then run the new version of the database?
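Something like this, I suppose (a sketch with hypothetical names and tags):

    # old database keeps its data on a host directory via a volume
    docker run -d --name db-old -v /srv/pgdata:/var/lib/postgresql/data postgres:9.2
    docker stop db-old
    # one-off container runs the ad-hoc upgrade script against the same volume
    docker run --rm -v /srv/pgdata:/var/lib/postgresql/data migrate-image ./upgrade.sh
    # new database version starts on the upgraded data
    docker run -d --name db-new -v /srv/pgdata:/var/lib/postgresql/data postgres:9.3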

How is that more convenient or in any way better than Chef or Puppet?


Data migrations are tricky no matter what technology you apply to automate them.

Also, there is no real animosity or conflict between Chef/Puppet and Docker. While there's overlap, there's nothing preventing you from taking the best of both technologies and integrating them. In fact, there are projects (like deis.io) which attempt to do just that.

I have written about how to do basic data migrations with Docker. You can find the link in my profile.


First off, sorry: I didn't mean to imply that there's any animosity or conflict between Chef/Puppet and Docker. I don't think there is.

Also, I completely agree that data migrations are tricky; that's exactly why I want a tool that helps me there as much as possible.

I should have asked: given that I'm familiar with (or can learn) Chef/Puppet, what advantage do I get from (also) using Docker? Or what advantage will I get down the road from (also) using Docker?

For example, resource limiting comes to mind: giving the processes in the container different I/O priorities, memory allowances, etc. But that's all just cgroup settings; I can do that for individual processes without creating a container (via Docker).
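In Docker those knobs surface directly as run flags; with an illustrative image name, something like:

    # cap memory and assign relative CPU shares; both are just cgroup settings underneath
    docker run -d -m 512m -c 512 myapp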

I just don't see any situation where a Docker container is more useful than a Chef/Puppet recipe. And for any "complicated" setup, I feel the advanced features of Chef/Puppet that let me fine-tune each deployment more than make up for the ease of use of Docker.

Yet there's a huge amount of buzz around Docker and containerization in general. What am I missing?


> Yet there's a huge amount of buzz around Docker and containerization in general. What am I missing?

Chef/Puppet don't solve the problem of running multiple apps with conflicting dependencies on the same machine. A Docker image is kind of like a more efficient virtual machine, in that containers are isolated from each other. Maybe you're running 15 containers on one machine, each running a different version of Rails, or whatever.
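For instance (hypothetical image tags), two apps pinned to incompatible stacks can sit side by side because each image bundles its own interpreter and libraries:

    # versions never collide; each container sees only its own filesystem
    docker run -d --name legacy-app myapp:rails-2.3
    docker run -d --name modern-app otherapp:rails-4.0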

Chef/Puppet let you automate the setup of a machine so you can duplicate it; a Docker image basically is a machine that you just copy around, and like VMs, containers are their own little worlds (for the most part).
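Copying that "machine" around really is a one-liner (assuming SSH access to the target Docker host):

    # serialize the image to a tarball and load it on another host
    docker save myimage | ssh otherhost docker load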

That's my understanding anyway.


Ansible author here (http://github.com/ansible/ansible).

My view on this, basically, is that the role of automation tools in the Docker realm is going to be exactly what it is for most of the people who like to 'treat cloud like cloud', i.e. immutable services.

The config management tools (whether Ansible, Puppet, Chef, whatever) are a great way to define the contents of your container and have a more efficient description of it.
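One hedged sketch of that division of labor: apply a manifest inside a base container, then snapshot the result as an image (names and paths are hypothetical; assumes the base image has the config tool installed):

    # run the manifest once inside a throwaway container
    CID=$(docker run -d base-image puppet apply /manifests/app.pp)
    docker wait "$CID"
    # commit the configured filesystem as a reusable image
    docker commit "$CID" myorg/app:configured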

You'll also continue to use management software to set up the underlying environment.

And you might use some to orchestrate controls on top, but the set of management services for running Docker at a wider scale is still very new and growing.

I'm keeping an eye on things like Shipyard, but I expect we'll see more companies emerge in this space providing management software on top.

Is Docker right for everyone? Probably not. However, I like how it is sort of (in a way) taking the lightweight Vagrant-style model and showing an avenue by which software developed that way can be brought into production, and the filesystem stuff is pretty clever.


I'd like to see more innovation around Ansible + Docker; that combination could be particularly compelling. Do you have any ideas on what it could look like?


> Yet there's a huge amount of buzz around Docker and containerization in general. What am I missing?

Hi, I'm the creator of Docker. You are not the only one asking this question :) Here's a discussion on the topic between me and the author of a fairly critical blog post titled "docker vs reality: 0-1", which should give you an idea of his state of mind :) I left a comment at the end.

http://www.krisbuytaert.be/blog/docker-vs-reality-0-1


Thanks for this link, Solomon!


I think they can co-exist: the market and the use cases are really broad and there's a lot of space for a wide variety of tooling.

I wrote a blog post about Docker and Configuration Management that elaborates on this:

http://kartar.net/2013/11/docker-and-configuration-managemen...

And I wrote a second blog post about using Puppet and Docker together, discussing how they might interact:

http://kartar.net/2013/12/building-puppet-apps-inside-docker...


> Yet there's a huge amount of buzz around Docker and containerization in general. What am I missing?

Dependency management without having the high(er) cost of full VMs.


This is a good point. You can keep the database storage in another container, or linked into the root filesystem.

How do you upgrade the database usually? You can also do it the same way.
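For example, the classic dump-and-restore flow works across containers too (a sketch with hypothetical container names; assumes the standard postgres client tools are in the images):

    # dump from the old database container and restore into the new one
    docker exec db-old pg_dumpall -U postgres > backup.sql
    docker exec -i db-new psql -U postgres < backup.sql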

Not saying it really solves all the problems, just that it solves some of them.

Lots of people go years without upgrading the database that "shipped" with their application! Also, I know of many enterprise applications that literally ship an entire _server_ as their deployment method. You literally buy a server! Containers seem better than that.

Containers definitely aren't for everything. I wouldn't use them in your situation at all, but they work well for many other things.


I've been running into this line of thought quite a bit as I explore containers, but nowhere is it addressed how contained applications talk to each other. An app seldom lives on its own: it will integrate databases, API calls into other systems, etc. How are those configured reliably and correctly in such a transient environment? Are they also containers? If so, how are they discovered? Dynamic configuration generation, or does the application have to know how to discover them at runtime?

What about in a dev environment? It seems that configuration on a single local host would look vastly different, despite running the same code.


If you read through the redis service example it might answer some of your questions:

http://docs.docker.io/en/latest/examples/running_redis_servi...

It shows how a container can be "linked" to another container with an alias, and that alias then causes environment variables within the container to point to the correct IP address and port:

    DB_NAME=/violet_wolf/db
    DB_PORT_6379_TCP_PORT=6379
    DB_PORT=tcp://172.17.0.33:6379
    DB_PORT_6379_TCP=tcp://172.17.0.33:6379
    DB_PORT_6379_TCP_ADDR=172.17.0.33
    DB_PORT_6379_TCP_PROTO=tcp
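
A process inside the linked container can then pick the address up from those variables, e.g. (assuming redis-cli is available in the image):

    # ping the linked redis using the injected environment variables
    redis-cli -h "$DB_PORT_6379_TCP_ADDR" -p "$DB_PORT_6379_TCP_PORT" ping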


Cool, thanks for the link.



