They demonstrated an easy way to deploy a Go application using Docker (https://docker.com/) and Google App Engine.
The Docker Engine container comprises just the application and its dependencies. It runs as an isolated process in userspace on the host operating system, sharing the kernel with other containers. Thus, it enjoys the resource isolation and allocation benefits of VMs but is much more portable and efficient.
This seems strange to me. What is the point of installing Go on (presumably) your production servers? Seems like you could just as easily cross-compile a release binary and put THAT in the Docker image.
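A minimal sketch of that approach (file and image names here are hypothetical, not from the article): cross-compile the binary on the host with something like `CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o app .`, then copy only the binary into a bare image.

```dockerfile
# Hypothetical Dockerfile: ship only the prebuilt static binary,
# no Go toolchain inside the image.
FROM scratch
COPY app /app
ENTRYPOINT ["/app"]
```

Because a static Go binary has no runtime dependencies, `FROM scratch` is enough; the resulting image is a few megabytes instead of the hundreds needed for a full Go build environment.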
I find this to be somewhat wacky. Go applications are portable by design. There are no dependencies in the first place. As long as your web servers don't clash on ports or directories, you can deploy as many as you want, built with a dozen different versions of Go.
I think it's easy to miss the point of Docker, which is to create a standard unit for deployment, just like a shipping container is the standard unit of shipping... which is quite pronounced in their graphics and explanations. Docker helps make it all uniform. You also get cgroups and all the fun stuff that brings, along with a level of isolation and configuration.
I'll concede that my first reaction was "somebody who has finished the docker tutorials should easily be able to make a Go container, what's the big deal?". But, this is targeted towards Go people and not necessarily Docker-Go people.
I was considering asking about the point of using Docker with Go binaries on Twitter last night, my thinking similar to yours. Hours later an article on HN shows how to do it.
The only thing I could come up with is that maybe you have additional dependencies outside your binary that are required to make the environment run?
I've been looking for a good reason to use Docker, but I'm not seeing the benefit given my Go-based environments. If I need fine-grained container-like control, FreeBSD Jails have been working just fine for me.
It's certainly valid that if your Go binary needs a specific version of Mongo or something, it could be useful to wrap that up in a container... But just installing Go in a container so you can build your Go code from scratch when you deploy is silly. Just like no one builds Mongo from scratch when they deploy it.
Docker seems to give applications the portability of a binary, so packaging a binary in Docker seems a bit convoluted. However, in a shared hosting environment (e.g. CoreOS), there are advantages to being able to run different Docker containers together, independent of what is on them.
It really isn't "significant" -- it's just stringing together docker containers and the Google Cloud as a demonstration of how quickly someone can deploy a Go service. It's like the old Rails scaffold.
Like Rails scaffold, this isn't enough on its own. There are many concerns in putting something on the open web and making it ready for production use. But what is really slick about this is that it shows how low the barrier to entry is for containerizing and deploying apps very, very quickly.
Google Cloud is just one example. dotCloud, tutum, orchardup, and others are out there. I'm sure a self-hosted, possibly open-source, solution is on the way, a la OpenStack.
The benefits of containers over clouds are widely documented -- but as a developer, they mean FAST phoenix servers, which let me iterate on deployment orchestration, configuration management, etc.
If you take a look at the example's source at: https://github.com/golang/example/blob/master/outyet/main.go
You will notice that they import "net/http", which implements an HTTP server - that's all; see https://godoc.org/net/http for more details. It is not necessary, though possible, to use something like nginx or Apache in front of a Go web application.