Potential risk using Docker Registry

Some people recommend using your own private registry for your images instead of the public one. The real problem arises when you do not have full control of the images. Let me show an example.

Chain of images

Do you know how many images you store every time you pull an unknown image? You can use this tool to get a graph like mine:

Image chain tree (even though I removed lots of old images last week)

The truth is that when you are preparing your own image you don’t think too much about the ones in the middle, do you? But will you notice when a layer in the middle changes?

1. Create an image `harmless`

$ docker pull busybox
$ docker run busybox touch /harmless
$ docker commit fbea5a2af7b4 martes13/harmless:1.0
$ docker push martes13/harmless:1.0

2. Let it be consumed

$ docker build .
Sending build context to Docker daemon 2.048 kB
Sending build context to Docker daemon
Step 0 : FROM martes13/harmless:1.0
---> 177b857f26b2
Step 1 : RUN touch /helloworld
---> Using cache
---> 3956484611c5
Step 2 : CMD /bin/sh
---> Running in 22425198de95
---> 62577fd6e477
Removing intermediate container 22425198de95
Successfully built 62577fd6e477
$ docker run -it 62577fd6e477
/ # ls
[..] harmless    helloworld [..]
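For reference, the Dockerfile driving this build can be read straight off the Step 0–2 lines above; written out (via a heredoc here) it is just:

```shell
# Reconstructed from the "Step 0/1/2" lines of the build output above.
cat > Dockerfile <<'EOF'
FROM martes13/harmless:1.0
RUN touch /helloworld
CMD /bin/sh
EOF
cat Dockerfile
```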

3. Transform `harmless` into `harmful`

$ docker pull martes13/harmless:1.0
$ docker run martes13/harmless:1.0 touch /harmful
$ docker commit 4dd3248e115a martes13/harmless:1.0
$ docker push martes13/harmless:1.0

4. Let all consumers progressively update

$ docker build .
$ docker run -it 7924ddc5e198
/ # ls
[..]        harmful     harmless    helloworld    [..]

What happened?

Maybe you did not notice, but we updated an old version (1.0) to point at a new image. You might think you are using something well-tested that will not change, but it did. So if an attacker manages to steal your account, full of widely used images, he will be able to modify every version and spread something evil to all your fans.


Well, official images are out of scope here: their signatures are verified, so they “cannot” be changed.

And in the newest version, 1.6 (we currently use 1.5), there is a new feature that allows pulling with a digest check (sha256), so you will be able to tell if anything changed. We will have image:tag or image@digest. Sounds good.

Until then, I propose two solutions:

1. Create your own Private Registry and ensure that the root images do not change (read my last post)

2. Run unit tests when a new image is built. Since everything in your image is a file, just ensure that nothing unexpected is there (a diff against the last build?). If something changed, halt the pipeline until a human QA verifies it.
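A minimal sketch of that check, with two plain directories standing in for two exports of the image (the file names follow the example above; `changes.txt` is my own choice):

```shell
# Simulate two exports of the "same" image: the trusted old one and a
# rebuilt one that silently gained an extra file.
mkdir -p old_image new_image
echo data > old_image/harmless
echo data > new_image/harmless
echo evil > new_image/harmful   # the unexpected file

# Fingerprint every file in each tree, then diff the manifests.
manifest() { (cd "$1" && find . -type f -exec md5sum {} + | sort); }
manifest old_image > old.sum
manifest new_image > new.sum

if diff old.sum new.sum > changes.txt; then
  echo "OK: nothing unexpected"
else
  echo "Image contents changed -- halting until human QA verifies:"
  cat changes.txt
fi
```

In a real pipeline the two trees would come from `docker export` of the previous and the freshly built image.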

PS: Today Docker got $95M of investment…


Practical Security inside a Dockerized Network

Imagine you have a bash shell on a machine and you realize it’s a Docker container. How would you proceed to go deeper?

Step 1. Understand your container

Usually you won’t be able to use 0-days against the kernel, but you won’t find extra security measures either (except when running as a non-root user). Attack vectors and methodologies haven’t changed at all, but this context is a bit special.

You are in a container which can disappear at any moment. It’s probably stateless and has a single responsibility. The operating system is quite safe by default because it runs with the minimum capabilities needed to operate. Forget about leaving a bind shell (or a reverse one) and coming back later: they only need to patch, build, push, pull and deploy. Maybe 10 minutes? Time runs against you.

As soon as you obtain a bash shell (somehow), you must reverse-engineer the definition of this container. There are two elements to guess:

Dockerfile: instructions write files, so you only need to diff against the original image. Since there is a public registry, trying a basic base image (e.g. busybox, ubuntu, debian, fedora, …) could be enough.

$ find / -type f ! -path "/sys*" ! -path "/proc*" -exec md5sum {} +

docker arguments: these are the keys for pivoting. Try to rebuild the original “docker run …” command line.

Volumes:
$ mount
$ df

Links:
$ ip addr
$ cat /etc/hosts

Environment (+ ports):
$ env
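The checks above can be bundled into a one-shot recon script (every command comes straight from the checklist; `recon.txt` is just a scratch file):

```shell
# Dump everything needed to rebuild the original "docker run ..." line.
{
  for cmd in mount df "cat /etc/hosts" env; do
    echo "=== $cmd ==="
    $cmd 2>/dev/null || echo "(not available)"
  done
} > recon.txt
cat recon.txt
```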

Step 2. Find the weakest point

The most common vulnerability, now and forever, will be default and wrong configurations: human mistakes. Understand the platform and try to extract sensitive data using the designed flow. It’s probably so loose that you will be able to dump the full database.

I have seen several docker related projects use this line:

$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock ubuntu

(or exposed as a host:port, with or without SSL). Install socat to connect to the socket:

$ apt-get install socat
$ socat TCP4-LISTEN:1337,fork UNIX-CONNECT:/var/run/docker.sock &

Then read the docs: https://docs.docker.com/reference/api/. You have several options: for example, you could export other containers as raw data, or create a new container that mounts the host’s root directory as a volume. In this scenario you can fully compromise the machine and, most importantly, you will gather a lot of critical information to use against the internal network.
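As a sketch, a couple of Remote API calls illustrate both options; port 1337 matches the socat line above, `<id>` is a placeholder, and depending on the API version the `Binds` entry may belong in the subsequent `/start` call rather than in `/create`:

```shell
# Assumes the socat forwarder above is listening on 127.0.0.1:1337.
API=http://127.0.0.1:1337

# Enumerate every container on the host:
curl -s "$API/containers/json?all=1"

# Export another container's filesystem as a raw tarball:
curl -s "$API/containers/<id>/export" > container.tar

# Create a container with the host's root mounted inside it:
curl -s -X POST -H "Content-Type: application/json" \
  -d '{"Image":"busybox","Cmd":["/bin/sh"],"HostConfig":{"Binds":["/:/host"]}}' \
  "$API/containers/create"
```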

Another, far more complex vector is to take advantage of ephemeral processes that may be running on our compromised machine and open new vulnerabilities into other machines. For example, a Jenkins job uses our compromised development machine to build and push a new image to the registry. You can use tools like `inotify` to detect this process and, if there is a shared volume (data-container pattern), change the code before it’s pushed.

Step 3. Complementary software

In real production environments it is not common to execute raw docker commands to start processes; complementary tools are used instead, and they are quickly identified because they don’t expect you there.

Docker-compose sets all the linked containers, with their ports, as environment variables. However, tools like this are going to be hell for an intruder: it’s pretty easy to set volumes as read-only, limit CPU, memory, etc.
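As a sketch, those restrictions map directly onto `docker run` flags (a read-only volume, a memory cap, reduced CPU shares, dropped capabilities); the image is the one from the first article, while the path and the limits are made up:

```shell
# Hypothetical hardened launch: a read-only volume, a memory cap,
# reduced CPU shares, and all Linux capabilities dropped.
docker run -d \
  -v /srv/app/config:/config:ro \
  -m 512m \
  -c 256 \
  --cap-drop ALL \
  martes13/harmless:1.0
```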

Supervisord is a process control system that runs as PID 1 and manages the sub-processes in the container. If you are lucky enough to have sufficient access, you will find the logs of all running applications in /var/log/supervisor/. If the app was running directly as the command/entrypoint, I don’t think you can obtain this information.

Docker-registry is a private repository and is, without a doubt, your goal. Maybe you can scan the whole internal network for the default port 5000, or try to guess its backend (S3?) and retrieve the images, or access the Redis instance used as a cache, etc. The registry doesn’t provide auth directly: it has to be added by extra tools (nginx, …) or by what is called an Index Auth service. So if you find the HTTP/S API, start with GET /v1/search and dump everything. You will also be able to push something.
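Against a v1 registry (the current protocol at the time of writing), the dump can start like this; the address and the repository name are hypothetical:

```shell
# Hypothetical internal registry found on the default port.
REGISTRY=http://10.0.0.5:5000

# Dump every repository the registry knows about:
curl -s "$REGISTRY/v1/search?q="

# Then enumerate the tags of any interesting repository:
curl -s "$REGISTRY/v1/repositories/martes13/harmless/tags"
```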

An example of what I’m talking about: http://blog.programster.org/2015/03/17/run-your-own-private-docker-registry/ . It’s protected from the outside (SSL + firewall), but once you are inside one container you have free access to every image.

Consul provides domain resolution and a key/value store. It has a security model against MITM and other attacks, as well as an ACL feature. However, ACLs are disabled by default. (It’s another great tool that deserves a mention.)

There are a bunch of new tools appearing that use Docker to optimize a specific kind of task. They tend to have some kind of security in mind, but it must be enabled and properly configured. Quick-and-dirty feature development, insufficient QA work, running alpha or beta versions, etc. are the keys to finding a bogus system.

Bonus. Denial of Service

If you want to crash the machine directly (affecting more than just your container), probably the easiest option is:

$ dd if=/dev/zero of=/fileOrPartition (or /mounted_volume_file)

Docker is able to limit CPU, memory, swap, … but it still doesn’t let you limit disk usage. You can probably fill the disk, and something will crash.


This article was quite simple, but I don’t like it when people talk about Docker security from a purely theoretical viewpoint. Hope you enjoyed it. (And yes, obtaining a shell in a container is NOT trivial.)

More advice here: https://github.com/GDSSecurity/Docker-Secure-Deployment-Guidelines
