Running Crons in Docker with Supervisord

Recently I've had an interesting conversation in #docker on Freenode with a guy that's been trying to get crons working inside his Docker container. I hadn't yet had a chance to look at that, and so we took off on a late-night debug session exchanging Dockerfiles via Pastebin. He has a bunch of other stuff going on, but at the core, he's just running an Apache webserver instance and then wants to run some crons in that container as well. I took his Dockerfile and related scripts, and pared them down to the bare minimum, commenting out everything that wasn't related directly to getting Apache and cron to work. You can take a look at what I came up with:

FROM ubuntu:14.04
MAINTAINER curtisz <>

# we install stuff this way to keep it all on one layer
# (which reduces the overall size of our image)
RUN apt-get update -y && \
	apt-get install -y \
		cron \
		apache2 \
		supervisor && \
	rm -rf /var/lib/apt/lists/*

# apache stuff
RUN mkdir -p /var/lock/apache2 /var/run/apache2 /etc/supervisor/conf.d/
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
# empty out the default index file
RUN echo "" > /var/www/html/index.html

# cron job which will run hourly
# (remember that COPY is better than ADD for plain files or directories)
COPY ./crons /etc/cron.hourly/
RUN chmod +x /etc/cron.hourly/crons
# test crons added via crontab
RUN echo "*/1 * * * * uptime >> /var/www/html/index.html" | crontab -
RUN (crontab -l ; echo "*/2 * * * * free >> /var/www/html/index.html") 2>&1 | crontab -

# supervisord config file
COPY ./supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 80
WORKDIR /var/www/html/
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]

You can tell what's going on here; it's pretty straightforward. To get this working, we need to install apache2, supervisor, and of course, cron. The next few lines are configuration options for Apache. Then finally we get to the test crons. I'm dropping a simple cron shell script into /etc/cron.hourly in the container and making it executable, then creating two new crons via crontab. Please note how I add the second cron with crontab - while also listing the previously-added crons with crontab -l. If you don't do this, whatever you pipe into crontab - will overwrite whatever's already in there. My crons are just stupid simple crons for dumping something easy into Apache's index.html file so we can prove they're running. My little crons file looks like this:

#!/bin/sh
ps aux | grep apache > /var/www/html/index.html

Some things to notice about this file... First, it starts with a typical shell script shebang (#!/bin/sh). Second, it must be executable (hence the chmod +x in the Dockerfile). Lastly, its filename can only contain alphanumeric characters, underscores, and hyphens. It cannot contain a dot, which means a name ending in .sh won't work.
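The dot restriction comes from run-parts, which is what actually executes the scripts in /etc/cron.hourly: it silently skips any filename containing characters outside its allowed set. Here's a quick simulation of that filter (the pattern below is my simplification of the default Debian run-parts namespace; the exact rule can vary with run-parts options):

```shell
# run-parts only executes files whose names match roughly ^[A-Za-z0-9_-]+$
# simulate that filter over a few candidate filenames
result=""
for f in crons crons.sh my-job my_cron.conf; do
  case "$f" in
    *[!A-Za-z0-9_-]*) result="$result $f:skipped" ;;
    *)                result="$result $f:ok" ;;
  esac
done
echo "$result"
```

So "crons" and "my-job" would run, while anything with a dot in the name is quietly ignored -- a classic head-scratcher when your cron script never fires.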

Next, we're adding the config file for supervisord, exposing the port, changing our working directory, and specifying our CMD to start supervisord. Speaking of which, this is what our supervisord.conf file looks like:

[supervisord]
; keep supervisord in the foreground so the container doesn't exit
nodaemon=true
logfile = /var/log/supervisord.log
logfile_maxbytes = 50MB

[program:cron]
command=cron -f

[program:apache2]
command=/usr/sbin/apache2ctl -D FOREGROUND

Pretty standard fare. And so let's build our container and start it:

docker build -t="crontest" .
docker run -it --name="crontest" -p 8080:80 crontest:latest

And there you have it! Give your crons a few minutes to execute, then fire up a browser on your localhost and point it to http://localhost:8080/index.html. You should see the output of our test crons there at the tail of the file. Refresh to see more.

That's all there is to it! Previously, I've used the cron system available on the Docker host, which certainly has its benefits. First and foremost of which is not having to use supervisord inside a Docker container. Since Docker is just a fancy way to run a process, you want to avoid loading up your container with a bunch of cruft. It's not a VM! But when you can't avoid it and you really need to run crons alongside your containerized processes, it's no sweat to get it going.
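For reference, the host-side alternative I mention boils down to an ordinary crontab entry on the Docker host that shells into the running container. The schedule, container name, and script path below are placeholders:

```shell
# a host crontab line that runs our cron script inside an
# already-running container every five minutes (names are illustrative)
entry='*/5 * * * * docker exec crontest /etc/cron.hourly/crons'
echo "$entry"
```

Pipe a line like that into crontab - on the host (remembering the crontab -l append trick from above) and you get scheduled jobs with no supervisord in the container at all.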

Using a Private Docker v2 Registry with Nginx-Proxy

Today in #docker on Freenode, someone was having trouble with their v1 Docker registry. I think I jinxed it when I said it was "extremely easy" to get a v2 registry running behind an Nginx proxy. It turned into a nightmare, and I'm sharing the process to help anyone else who might need to debug problems with a similar setup.

So I had previously spent time getting a private registry to work behind the jwilder/nginx-proxy image, which is a great reverse proxy for docker containers. There is a lot of movement with Docker and especially the ecosystem of orchestration applications that exist around it. These days, plenty of stuff out there does what the nginx-proxy image does. Personally I like Consul and Serf from Hashicorp. Incidentally, they also make a hell of a nice application -- Vault -- that solves most of the problem with sharing sensitive configuration information. Anyways, for single-host-multiple-container environments, I still prefer proxying with the nginx-proxy image. I use it to front for all my web applications, and our Gitlab installation, so it only made sense to front my v2 registry with the same proxy.

The way the nginx-proxy image works is that it is bound to listen on tcp/80 and tcp/443 and uses the host machine's /var/run/docker.sock to listen for docker events, and then other containers are started with a VIRTUAL_HOST environment variable. When that happens, the docker-gen utility creates an Nginx template and starts routing requests to the container. You can also mount htpasswd files into the proxy and SSL certs to manage authentication and HTTPS connections. It also supports custom directives and custom templates.

So anyways, I was pretty sure the guy wouldn't be able to get support for his problem, because v1 has been officially deprecated on Docker Hub, and it's no longer the primary registry endpoint the Docker client looks for when attempting to connect. Support for v2 was introduced in Docker 1.6, and as of version 1.9, the client prefers v2 registry endpoints over v1. So the time has come to upgrade.

Let's get started. First, pull the images we need to work with:

docker pull jwilder/nginx-proxy
docker pull registry:2.2

It's important to note that registry:latest does not point to the latest version of the registry -- the latest tag points to v1! We need to make sure we're pulling v2. The docker registry is actually part of the "distribution" repository. You can and should check there for the latest version of the v2 registry image and its documentation. The Docker documentation is mostly very good and correct, so make sure you read it and prefer that information over mine.

To configure the v2 registry, we need to create a minimal config.yml file. I usually keep all my stuff for docker under /var/docker/<container>, so sudo sh -c "mkdir -p /var/docker/registry && cd /var/docker/registry/ && vim config.yml" and put this into it:

version: 0.1
log:
    level: info
    formatter: json
    fields:
        service: registry
        environment: staging
        source: registry
http:
    addr: :5000
    secret: biglongsecretwhatever
storage:
    filesystem:
        rootdirectory: /var/lib/registry

This is the bare minimum you'll need to get your registry going. You should change http.secret to something long and random. For example you can use a bash one-liner like cat /dev/urandom | tr -dc a-zA-Z0-9 | head -c36 to get a random string. Now save this yaml file. Finally, mkdir /var/docker/registry/lib to create a directory for storing our registry images on the local host. I like to use AWS S3, and if this interests you, take a look at my last post on this subject for instructions.
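For instance, here's that urandom one-liner wrapped in a variable assignment with a quick sanity check (I've swapped cat for a bounded head -c read, and head -c36 for cut, so the pipe drains cleanly; same idea either way):

```shell
# generate a 36-character alphanumeric http.secret;
# 4096 random bytes is far more than enough raw input
secret=$(head -c 4096 /dev/urandom | tr -dc 'a-zA-Z0-9' | cut -c1-36)
echo "http.secret candidate: $secret"
```

Paste the resulting string into the secret field of config.yml.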

We've got our v2 registry primed to run, but we won't run it quite yet.

Let's get the nginx-proxy image going. First mkdir /var/docker/nginx-proxy && cd /var/docker/nginx-proxy to get into our base host directory. Use mkdir vhost.d to create a directory for storing our special Nginx directives.

Now, if you want to have nginx-proxy handle your SSL certificates and authentication -- and we do, since docker will complain about a registry running without HTTPS -- you'll want to mkdir htpasswd && mkdir certs at this time as well. For illustrative purposes, let's say our registry domain is registry.example.com and we've already pointed DNS at the host. So we'll name our SSL certificate registry.example.com.crt and our key registry.example.com.key and drop both of those files into the /var/docker/nginx-proxy/certs directory.

With HTTP authentication, we can just do some real basic stuff. We don't need anything fancy, since the docker client supports basic authentication. You can use htpasswd (from the apache2-utils package on Linux Mint or Ubuntu) to generate authentication information. Save this information in /var/docker/nginx-proxy/htpasswd/, named the same way we named our SSL data. For future reference, you can also store a default certificate and key for HTTPS requests that arrive at your nginx-proxy which aren't routable to one of your containers.
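As a sketch, if apache2-utils isn't handy, openssl can emit the same Apache MD5 (apr1) hash format that htpasswd produces. The username and password here are placeholders:

```shell
# generate one htpasswd-format line without apache2-utils;
# "registryuser" and "s3cret" are placeholder credentials
entry="registryuser:$(openssl passwd -apr1 s3cret)"
echo "$entry"
```

Drop that line into the htpasswd file named for your registry's domain.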

We need a couple more files dropped into /var/docker/nginx-proxy/vhost.d. The first is named after our virtual host (in our example, vhost.d/registry.example.com) and looks like this:

client_max_body_size 0;
chunked_transfer_encoding on;

location /v2/ {
  # Do not allow connections from docker 1.5 and earlier
  # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
  if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
    return 404;
  }
  add_header Docker-Distribution-Api-Version "registry/2.0";
  #more_set_headers     'Content-Type: application/json; charset=utf-8';
  include               vhost.d/docker-registry.conf;
}

location /v1/_ping {
  auth_basic off;
  include               vhost.d/docker-registry.conf;
  add_header X-Ping     "inside /v1/_ping";
}

location /v1/users {
  auth_basic off;
  include               vhost.d/docker-registry.conf;
  add_header X-Users    "inside /v1/users";
}

These directives do a few things. First, they lift the limit Nginx places on request body size -- since you're going to be uploading huge layers to your v2 registry, that limit has to go (a value of 0 disables it). The user-agent check blocks access by client versions 1.5 and below (which only speak the v1 registry protocol anyway). And of particular interest, the location directives for the v1 endpoints work around docker client bugs that caused connections to the v2 registry to fail with a 404.
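If you're curious whether that user-agent pattern does what you expect, you can exercise it outside of Nginx with GNU grep's PCRE mode (-P, needed for the (?!...) lookahead); the sample agent strings below are illustrative:

```shell
# the same pattern the "if" directive uses
ua_block='^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$'
blocked() { printf '%s' "$1" | grep -Pq "$ua_block"; }

blocked "docker/1.5.0" && r1=blocked || r1=allowed
blocked "docker/1.9.1 go/go1.4.3" && r2=blocked || r2=allowed
blocked "Go 1.1 package http" && r3=blocked || r3=allowed
echo "$r1 $r2 $r3"
```

A 1.5 client and a pre-1.6 "Go" ping get the 404; a modern client sails through.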

The second file is /var/docker/nginx-proxy/vhost.d/docker-registry.conf and looks like this:

proxy_pass                ;
proxy_set_header  Host              $http_host;   # required for docker client's sake
proxy_set_header  X-Real-IP         $remote_addr; # pass on real client's IP
proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
proxy_set_header  X-Forwarded-Proto $scheme;
proxy_read_timeout                  900;

This forwards the IP address of your clients to the registry for logging purposes, as well as providing a couple of required headers.

Now that we've configured our v2 registry and nginx-proxy, let's start them up! We'll begin with the nginx-proxy container (shoutout to Arthur for noticing I'd forgotten to mount vhost.d in this next command):

docker run -d \
  --name "nginx-proxy" \
  --restart "always" \
  -p 80:80 \
  -p 443:443 \
  -v /var/docker/nginx-proxy/certs:/etc/nginx/certs:ro \
  -v /var/docker/nginx-proxy/htpasswd:/etc/nginx/htpasswd:ro \
  -v /var/docker/nginx-proxy/vhost.d:/etc/nginx/vhost.d \
  -v /var/run/docker.sock:/tmp/docker.sock \
  jwilder/nginx-proxy

Next, let's start our v2 registry container:

docker run -d \
  --name="registry" \
  --restart="always" \
  -v "/var/docker/registry/config.yml:/etc/docker/registry/config.yml" \
  -v "/var/docker/registry/lib:/var/lib/registry" \
  -e "VIRTUAL_HOST=registry.example.com" \
  registry:2.2

Notice here that we're not binding our v2 registry port to any port on the host interface. We want to "bind" that port to the nginx-proxy container instead. You may be wondering how the nginx-proxy container knows that it should route inbound HTTPS requests to port 5000 on the registry container. The answer is that it routes to whatever port your container has exposed, either via EXPOSE in the Dockerfile or --expose in the docker run command. In our example, the v2 registry image has EXPOSE 5000 in its Dockerfile.

This brings me to one last pain point I had with the nginx-proxy image. As wonderful as it is, it's not obvious from the documentation that it can only proxy HTTP-based protocols. So for example you can't have the nginx-proxy image front for your SMTP server or your FTP server. Also, something else that took me a while to understand... Let's say you want to run a cAdvisor container to expose some metrics for your Prometheus server. The documentation wants you to make that thing listen on tcp/8080. That's totally fine, but if you want to run this container behind your nginx-proxy container, you'll want to not bind that port to the host. And since cAdvisor exposes port 8080 in its Dockerfile, you can simply start it with -e "VIRTUAL_HOST=cadvisor.example.com" (substituting your own domain) and it will then be available at https://cadvisor.example.com/, served from behind your nginx-proxy. The nginx-proxy container gets an inbound HTTPS request on tcp/443, then routes the connection on the backend to tcp/8080 on your cAdvisor container. No --link necessary!

A piece of advice when debugging problems: Start with docker logs nginx-proxy and check if the problem is with your container or if it's with your nginx-proxy. Also you can use docker exec -it nginx-proxy /bin/bash to drop into a shell on your nginx-proxy container and poke around. You should check the /etc/nginx/conf.d/default.conf to make sure your container is being properly exposed to the nginx-proxy and its docker-gen utility.

Good luck!

Publish/Subscribe: The Five Ws (and of course, the How)

A friend of mine recently asked me about the publish/subscribe ("pubsub") programming pattern. As this is something I use in almost every project, I thought I'd be able to find a decent tutorial for him online. Something that would be helpful to someone familiar with programming, but not familiar with this pattern. As it happens, most of the pubsub documentation or tutorials out there are specific to their use in one situation: APIs. That's all fine and dandy, but the pubsub pattern is so much more powerful and applicable in so many more situations than that niche. I am pretty sure it's my favorite programming pattern of them all. Not so hard to believe once you know that I write JavaScript for a living. But, I think JavaScript programmers are more sensitive to the applicability of the pubsub pattern than programmers in other languages, because JavaScript is asynchronous.

So what exactly does that mean?

If you've ever written any JavaScript, you have probably gotten yourself in trouble with its asynchronous nature. For example, you've probably thought about doing something like this:

// read data from a file ...
var data = readDataFromFile('/path/to/file.txt');
// ... then do something with the data
console.log('your name is: ' + data.username);

But you can't do that. JavaScript's asynchronous nature means the (totally made-up) function readDataFromFile('/path/to/file.txt') returns immediately, before the file has actually been read. The real work is handed off, and its result arrives later via the event queue -- after console.log('your name is: ' + data.username); has already executed, when data holds nothing useful. This happens because all the code in the current scope is executed in a batch, and then JavaScript grabs the next thing on the event queue and executes everything in that callback's scope, and so on. Asynchronous results land on the event queue as they complete, instead of blocking the caller the way they would in synchronous code in languages like Python or C. You could write a lot of JavaScript before you encounter this behavior, though; usually you trigger the event queue when making calls out to external resources, for example via AJAX or file I/O. Well, in order to work around this limitation in JavaScript, we use callbacks. Callbacks are functions that we pass as parameters to other functions, to be executed after all the processing inside the first function is done. That sounds stupid, so let me illustrate. Here's an example of proper asynchronous JavaScript using callbacks:

// read data from a file ...
fs.readFile('/path/to/file.txt', function( err, data ) {
    //                           ^^^^^^^^^^^^^^^^^^^^^
    // ... then do something with the data *in a callback*
    if (err) {
        throw new Error('problem doing stuff: ' + err);
    }
    console.log('your name is: ' + data.username);
});

Now you're probably asking yourself why I'm even talking about asynchronous programming in JavaScript. What does that have to do with the pubsub pattern? Well, after a while, you fall into what's called "callback hell" when you chain callbacks with callbacks with callbacks. Take a look at what should be a simple operation: Reading from a file, making a change to the data, then writing those changes to a file:

var filename = '/path/to/file.txt';
var savefile = '/path/to/new/file.txt';

fs.stat(filename, function(err, stat) {
    // inside first callback
    if (err) throw new Error('could not stat file: '+err);
    if (!stat.isFile()) throw new Error('path is not a file!');
    fs.readFile(filename, function(err, data) {
        // inside second callback
        if (err) throw new Error('could not read file: '+err);
        modifyUsername(data.username, function(name) {
            // inside third callback
            fs.stat(savefile, function(err, stat) {
                // inside fourth callback
                if (err) throw new Error('could not stat file: '+err);
                if (!stat.isFile()) throw new Error('path is not a file!');
                fs.writeFile(savefile, name, function(err) {
                    // i want to kill myself ..............
                    // and look at all the close brackets and parentheses!
                    // how embarrassing!
                });
            });
        });
    });
});
// yikes. are you sure you closed up all your functions properly?

"Holy shit. JavaScript sucks!" they will say. And so. Even the most faithful will pause to think.

Now, if only there were a way to write a blob of callback code, and then create a "trigger" that we could pull when we were ready to execute the callback blob, we could clean this mess right up. ... Well, that's right! You've figured out that my beloved pubsub can swoop in and save the day.

Here is what super-simplified pubsub calls look like:

// when you have a blob of code you want to run later:
subscribe(  "this-can-be-any-string",    functionToCall    );
// then when you're ready to execute the blob of code:
publish(  "this-can-be-any-string",   [  array, of, parameters  ]);

The first parameter to the subscribe() function is a string we use to "index" the function we want to call; we store the function under this string as a label. Later, when we call the publish() function, we reference the stored function using the same string ("this-can-be-any-string"), and the second parameter to publish() is an array of arguments to pass to the function being executed. In the above two lines, the result logically looks like this:

functionToCall(array, of, parameters);

Take a look at the pubsub object. It's very easy to read:

// here is our pubsub object
// this is how we enabled the pattern
var $pubsub = (function() {
    var cache = {};
    function _flush() { cache = {}; }
    function _pub( topic, args, scope ) {
        if (cache[topic]) {
            var current = cache[topic];
            for (var i=0; i<current.length; i++) {
                current[i].apply(scope || this, args || []);
            }
        }
    }
    function _sub( topic, callback ) {
        if (!cache[topic]) {
            cache[topic] = [];
        }
        cache[topic].push(callback);
    }
    return {
        flush: _flush,
        pub: _pub,
        sub: _sub
    };
})();

Now take a look at the refactored code, which uses pubsub to escape callback hell by "subscribing" some functions to events which we later "publish":

// convenience function to DRY up our calls to fs.stat()
var fileStat = function( filename, callback ) {
    fs.stat(filename, function(err, stat) {
        if (err) throw new Error('could not stat file: '+err);
        if (!stat.isFile()) throw new Error('path is not a file!');
        typeof(callback) === 'function' && callback();
    });
};

// now begins our list of discrete functions to execute in a specific order
// (note the $pubsub.pub calls within each function!)
var fileRead = function( filename ) {
    fileStat(filename, function() {
        fs.readFile(filename, function(err, data) {
            if (err) throw new Error('could not read file: '+err);
            $pubsub.pub('/username/modify', [data]);
        });
    });
};
var fileWrite = function( filename, data ) {
    fileStat(filename, function() {
        fs.writeFile(filename, data, function(err) {
            if (err) throw new Error('could not write file: '+err);
            $pubsub.pub('/continue/process'); // hand off to whatever comes next
        });
    });
};
var modifyUsername = function( username ) {
    var newUsername = username + '_modified';
    $pubsub.pub('/file/write', ['/path/to/new/file.txt', newUsername]);
};
var moreStuff = function() {
    // do more stuff after everything else
};

// set up our subscriptions
$pubsub.sub('/file/read', fileRead);
$pubsub.sub('/file/write', fileWrite);
$pubsub.sub('/username/modify', modifyUsername);
$pubsub.sub('/continue/process', moreStuff);

// kick off the whole process with this "publish" statement
$pubsub.pub('/file/read', ['/path/to/file.txt']);

This looks a lot nicer, doesn't it? It's a bit more typing, but once you grok what's happening here in this post, you will never want to go back to that awful callback hell. So I've spent the last few minutes answering the 5 Ws of the publish/subscribe pattern by showing you how to do it. If you're not totally clear on what's happening here, start reading from the top, and hand-copy the code into your IDE. Taking a closer look by hand-copying the code is something that always helps things sink in. Before you know it, you'll grok the pubsub pattern and be using it to hit all kinds of nails.

Meet DOSBox, the kickass... Debugger?

One of the members of an ARG I play recently started talking about an old piece of equipment he'd purchased, which supposedly had been used by phone repair technicians to do their work. The equipment in question is an Itronix T5000, which has an in-built modem, speedy 486 processor, and 640KB of RAM. Kilobytes, folks. This was the 90s. You know, incidentally, I fondly remember having 640KB of RAM in my very first computer, and having to juggle peripherals.

Anyways, unfortunately for our friend, when he powered the device on, this is what he saw:

Our friend Mister Argent managed to offload all of the device's files to USB using an available restore feature; he just didn't know how to proceed. Fortunately for him, when it comes to binaries from the 90s, I'm your guy. Now, it's been a few years since my last encounter with something of this nature, and I hadn't realized that most of the tools we used to use for reversing no longer function on today's platforms. A simple strings run over the files gave me nothing useful -- interesting, sure, but nothing to solve our primary predicament. There is a PASSWORD.DAT file in the collection, but it's clearly not plaintext, and in this department I am nothing more than a hobbyist, definitely no cypherpunk. I would need to reverse this binary to get anywhere. It didn't take me long, however, to remember the only game I spend time playing these days -- which also happens to be a binary from the 90s -- and more importantly, the platform I use to play it: DOSBox.

If you've never heard of DOSBox, it is basically what you're thinking it is after my description above: An emulator for DOS applications. The thing about DOSBox that makes it special -- besides being the key to many glorious, wonderful games from the 90s that you couldn't otherwise play -- is that the creator has built in a very useful debugger.

I'm a die-hard Linux Mint user, since I can't stand Ubuntu's Unity UI almost as much as I can't stand Mark Shuttleworth. One of the nice things about Linux Mint -- besides the fact that it hasn't immediately jumped into the systemd assimilation chamber -- is that it uses Ubuntu as a base, and therefore has its repositories available for consumption. DOSBox is available in Ubuntu's default repository (and probably in other default distro repos), but if you want to use the debugger, you've got to compile it with a special option, which means building from source. On LM/Ubuntu, you're going to need a few things in order to compile. If you're adventurous, you probably already have build-essential, autoconf, and automake. If not:

sudo apt-get install build-essential autoconf automake

Either way, you're going to need to get the DOSBox dependencies:

sudo apt-get build-dep dosbox

When you're compiling DOSBox for its debugger, you need a curses library. You'll need to install one to continue. Thanks to this answer on Stack Overflow, resolving this on Linux Mint/Ubuntu is a cinch:

sudo apt-get install lib32ncurses5-dev

Next, download and extract the source (sorry for the SourceForge link):

wget ""
tar -xvf dosbox-0.74.tar.gz

Next we'll build our awesome DOSBox debugger, but something you should know here is that DOSBox actually comes with two "levels" of debugging capability. Compiling with --enable-debug will get you most of the debugging features, but there are a few important ones you'll want the convenience of by compiling with --enable-debug=heavy. Most importantly, this "heavy" debugger enables the heavycpu command, which is a hardcore CPU logger that makes following code a lot easier:

cd dosbox-0.74

At this point, we're going to need to modify the source a little bit to prevent some errors in the actual compilation. Thanks to this helpful post by the DOSBox author, we know exactly what we need to change in the source to prevent the error. Let's create a little patch file to do the work for us. Create a new file in the dosbox-0.74 directory:

vim ./dosbox-0.74.patch

Paste the following contents into the editor:

diff -rupN dosbox-0.74/include/dos_inc.h dosbox-0.74.patched/include/dos_inc.h
--- dosbox-0.74/include/dos_inc.h	2010-05-10 10:43:54.000000000 -0700
+++ dosbox-0.74.patched/include/dos_inc.h	2015-07-07 14:52:42.057078234 -0700
@@ -28,6 +28,8 @@
 #include "mem.h"
+#include <stddef.h>
 #ifdef _MSC_VER
 #pragma pack (1)
diff -rupN dosbox-0.74/src/cpu/cpu.cpp dosbox-0.74.patched/src/cpu/cpu.cpp
--- dosbox-0.74/src/cpu/cpu.cpp	2010-05-12 02:57:31.000000000 -0700
+++ dosbox-0.74.patched/src/cpu/cpu.cpp	2015-07-07 14:52:23.641077942 -0700
@@ -30,6 +30,7 @@
 #include "paging.h"
 #include "lazyflags.h"
 #include "support.h"
+#include <stddef.h>
 Bitu DEBUG_EnableDebugger(void);
 extern void GFX_SetTitle(Bit32s cycles ,Bits frameskip,bool paused);
diff -rupN dosbox-0.74/src/dos/dos.cpp dosbox-0.74.patched/src/dos/dos.cpp
--- dosbox-0.74/src/dos/dos.cpp	2010-05-10 10:43:54.000000000 -0700
+++ dosbox-0.74.patched/src/dos/dos.cpp	2015-07-07 14:52:11.929077757 -0700
@@ -31,6 +31,7 @@
 #include "setup.h"
 #include "support.h"
 #include "serialport.h"
+#include <stddef.h>
 DOS_Block dos;
 DOS_InfoBlock dos_infoblock;
diff -rupN dosbox-0.74/src/ints/ems.cpp dosbox-0.74.patched/src/ints/ems.cpp
--- dosbox-0.74/src/ints/ems.cpp	2010-05-10 10:43:54.000000000 -0700
+++ dosbox-0.74.patched/src/ints/ems.cpp	2015-07-07 14:51:59.081077554 -0700
@@ -32,6 +32,7 @@
 #include "setup.h"
 #include "support.h"
 #include "cpu.h"
+#include <stddef.h>
 #define EMM_PAGEFRAME	0xE000
 #define EMM_PAGEFRAME4K	((EMM_PAGEFRAME*16)/4096)

Now save the file and exit, and apply the patch you have just created:

patch -p1 < ./dosbox-0.74.patch

Finally, compile DOSBox:

./configure --enable-debug=heavy
make

I've already got DOSBox installed, and so I chose not to install over it with my debugger-enabled version. But if you don't care, go ahead and place your newly-built debugging DOSBox version into your executables directory:

sudo make install

Awesome! We've got a bitchin' debugger! The second part of this story will cover the password discovery process using our fresh-from-source DOSBox debugger.

Docker Registry v2: Adventures in Ambiguity

All I need is a private Docker registry that I can host myself.

If you're anything like me, you've been excitedly awaiting the release of the v2.0 Docker Registry. Version 1 was not very good. The company behind Docker is in no hurry to bite the hand that feeds them, and so development of the registry has been spotty at best. Among other things, the documentation is not great and the registry has no built-in authentication protocol. I understand that it's much better for business to get people frustrated with setting up their own private registry and then point them at your hosted services, where it's very easy to write a check to have someone else take care of this mess for you. But I am not in the habit of writing checks, and my check would probably bounce anyways.

The documentation available for deploying a v2 registry is specific to one situation. It is a set of instructions for using Compose (yet another Docker technology with a seemingly nebulous purpose at this time) to get both a v1 and a v2 registry working behind an Nginx proxy. But I am using the Nginx Docker reverse proxy by Jason Wilder, so I don't need to bring in an external Nginx server. Nor do I need to answer requests for a v1 registry, as I am not using any Docker clients earlier than version 1.6.0.

All I need is a private Docker registry that I can host myself.

So... What we need to do is rip out all the extra stuff so that we're left with what we need. We don't need a v1 registry, so ignore all that. We're using jwilder/nginx-proxy to proxy our inbound requests, so ignore the instructions about pulling in the Nginx server. The average docker user right now doesn't really know what Compose is for or what it does -- though it will reduce complexity for most of us some day in the future -- so just ignore all the cruft about Compose. We're left with something close to what we're looking for.

First, clone the Distribution repository from Github and change into that directory:

git clone https://github.com/docker/distribution.git && cd distribution

We'll build our registry server from this repository:

docker build -t=registryv2 .

Now our container will build and should be listed in the output of docker images. We're almost there, but first we need to configure the registry. The future may include something I've talked about before called the docker vault, which is a cryptographically-secure, in-container, ephemeral storage mechanism which holds our sensitive configuration data. But for the moment, we don't have access to the vault because it doesn't exist. Here, we're going to have to rely on storing configuration data in a file, and then mount a volume on the host which exposes our config file to the container.

The v2 registry currently reads configuration data from cmd/registry/config.yml, so we need to map a directory on the host to this directory in the container. If you're trying to configure a v2 registry, just totally forget about everything related to configuring a v1 registry. The new configuration options are in the documentation. I'll include my own sanitized debug mode configuration file so you've got a sanity check reference:

version: 0.1
log:
    level: debug
    fields:
        service: registry
        environment: staging
storage:
    filesystem:
        rootdirectory: /registry
    cache:
        layerinfo: inmemory
http:
    addr: :5000
    secret: somerandomstring
    debug:
        addr: localhost:5001

This is all pretty self-explanatory if you've played with running any version of a docker registry. I'm running this on tcp/5000 during development for testing purposes. We also have a debug server listening on tcp/5001 (on local loopback only) in case we need to connect and get some verbose debug information.

Now for production mode configuration, I use a different setup, which is more like something you'd expect to see out there in the real world:

version: 0.1
log:
    level: info
    fields:
        service: registry
        environment: staging
storage:
    s3:
        accesskey: AKIA0Z6307DRPWJ5VH03F
        secretkey: OgP2Yhk1ZjFFf+aYokvnqI3qTlenCxSW2nbb9zpB
        region: us-west-1
        encrypt: false
        secure: true
        v4auth: true
        chunksize: 5242880
        rootdirectory: /registryv2
    cache:
        layerinfo: redis
http:
    addr: :443
    secret: ZpAedwVDFHK7mkNFFKSP8OQY
    debug:
        addr: localhost:5001
redis:
    addr: localhost:6379
    db: 0

Here, we're setting our registry to use an AWS S3 backend (that configuration data is of course dummy data, but feel free to try it, leet). We're also using a Redis container for caching, which speeds things up considerably. Again, the jwilder/nginx-proxy container auto-detects which port is exposed on a container, and I want this registry to listen on HTTPS (tcp/443), so I've changed its listen port appropriately.
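One small detail worth verifying in that config: chunksize is exactly 5 MiB, which is the minimum part size S3 accepts for multipart uploads.

```shell
# 5 MiB in bytes -- the smallest multipart part size S3 allows
chunk=$((5 * 1024 * 1024))
echo "$chunk"
```

Set it any lower and large layer pushes to the S3 backend will fail partway through.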

So, because our registry is going to use Redis for caching, we need to spin up a Redis instance and link it to our registry container. Real quick, let's pull that Redis image:

docker pull sameersbn/redis

Now run it:

docker run -d --name="registryv2-redis" --restart="always" sameersbn/redis

And when I run my registry container, it looks something like this:

docker run -d \
  --name="registryv2" \
  --restart="always" \
  --link registryv2-redis:redis \
  -v /var/docker/registryv2/config:/go/src/ \
  registryv2

Finally, it's not clear in the documentation, but to create a repository on your new v2 registry, you've got to tag your images correctly. Let's suppose for a second you have just created an image called myContainer and you'd like to create that on your new v2 registry:

docker tag myContainer:latest registry.example.com/mycontainer:latest

This command tags your container not only with the latest tag, but also specifies exactly which registry you want to use for your new repository. Now you can push this image to your new v2 registry. Enjoy!
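If the shape of that tag seems magical, here's a rough sketch of how a fully-qualified image reference breaks down (the domain is an illustrative placeholder; the real client treats the first path component as a registry host only when it contains a dot or a colon, or is "localhost"):

```shell
# split a fully-qualified image reference into its parts
ref="registry.example.com/mycontainer:latest"
registry=${ref%%/*}      # everything before the first slash
remainder=${ref#*/}      # repository:tag
repo=${remainder%%:*}
tag=${remainder##*:}
echo "$registry $repo $tag"
```

That leading hostname is the whole trick: it's what steers the push away from Docker Hub and toward your own v2 registry.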