I’ve transitioned a lot of my production workload into a kubernetes cluster, and we’re shortly going to begin moving over some of our product too. I’ve been using fairly vanilla Docker for some of my local development toolset, though.
Once upon a time, I ran postgresql natively on my Mac, first by installing the official packages and later with `brew install postgresql`. Later still, I started running my local postgresql in a Docker container. The same was roughly true for all of my other requirements, such as memcached or redis or rabbitmq.
If I just `docker run ...` these prerequisites, I now have to make sure they're still running at any given point in time. While moving to Docker brings me a tremendous ease of provisioning that I cannot ignore, this aspect is a regression; I can no longer use something like launchd to keep the service running if it fails. I could, and have, used `docker-compose` to create services in each project that provide the necessary software, but this has often led me to running two or three copies of, say, postgresql at a time if I don't remember to shut them down when I'm done.
And really, that’s more than a little overkill. My local databases aren’t performance tweaked. They aren’t (generally) set up with any custom configuration, and to whatever degree they are, they’re fairly generic tweaks that I apply to all of my databases. For dev, all that really matters is that I happen to have a postgres process running and the software has access to it. Why run three copies?
A few assumptions I make:
- I prefer my VM to be at `172.31.255.254`, as this private subnet is practically never used. By anyone. Ever.
- `~/src` is shared into the VM at `~core/src` and is thus available to containers with the appropriate volume mounts.
- Kubernetes services are NodePorts. I replace the thousands digit with `30`; that is, the standard postgres port `5432` comes out on `30432`.
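That port convention is easy to express as a tiny shell helper (the function name is mine, not part of the setup; it assumes the usual 4-digit service ports):

```shell
# Map a standard 4-digit service port to its NodePort by replacing
# the thousands digit with "30". Helper name is hypothetical.
nodeport() {
  echo $((30000 + $1 % 1000))
}

nodeport 5432   # postgres -> 30432
nodeport 3306   # mysql    -> 30306
nodeport 6379   # redis    -> 30379
```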
My Vagrantfile is based on a great setup by Josh Butts over at offers.com. I added setup for etcd, fleet, and flannel, plus units for all of the kubernetes components. You can find my setup here.

`vagrant up` and you're good to go. Following Josh's setup, you can expose your services to the world by adding port forwards to the Vagrantfile.
The four most common services I use are mysql, postgresql, rabbitmq, and redis. Since I don’t apply any tweaks on my dev box, I’m using the generic, official Docker images.
$ kubectl create -f <service>/rc.yaml
$ kubectl create -f <service>/service.yaml
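As a concrete sketch, the service.yaml for postgres might look like the following; the metadata and selector names are my assumptions, and only the NodePort convention comes from this setup:

```shell
# Hypothetical service.yaml for postgres. The nodePort follows the
# "replace the thousands digit with 30" convention described above.
mkdir -p postgres
cat > postgres/service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: NodePort
  selector:
    name: postgres
  ports:
  - port: 5432
    nodePort: 30432
EOF
```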
The services store their state in `/home/core/data/<service>`. If I wind up with data I need, this is what I back up. You might be tempted to share this back out, but POSIX does stupid things sometimes and, chiefly, I've found it really messes with postgresql.
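Backing that up can be as simple as a tarball per service, run inside the VM; a minimal sketch (the helper and archive naming are mine):

```shell
# Archive one service's state directory. DATA_DIR is where this setup
# keeps state; the helper and archive name are hypothetical.
DATA_DIR="${DATA_DIR:-/home/core/data}"

backup_service() {
  tar czf "$1-backup.tar.gz" -C "$DATA_DIR" "$1"
}

# e.g. backup_service postgres   -> writes postgres-backup.tar.gz
```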
My example kubes are here.
Because this is based on Josh's setup, Docker is exposed outside the VM, and I can use `docker run ...` and other commands locally on my Mac with the correct environment, without necessarily having to rely on kubernetes.

$ export DOCKER_HOST=tcp://172.31.255.254:2375
$ docker run ...
or
$ docker-compose up
Generally, I use `docker-compose` while developing, then test it by running it in my local kubelet before pushing it out to our production kubelets.
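This is where the shared services pay off: a project's docker-compose.yml can point at the cluster's postgres NodePort instead of declaring its own database container. A sketch, using the compose v1 syntax of the era (the app name, credentials, and database are made up):

```shell
# Hypothetical docker-compose.yml that reuses the shared postgres
# at the VM's NodePort rather than running a per-project copy.
cat > docker-compose.yml <<'EOF'
web:
  build: .
  environment:
    DATABASE_URL: postgres://postgres@172.31.255.254:30432/myapp_dev
EOF
```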