
Docker, an advanced chroot utility

Chasm: just as Docker creates a chasm between two sets of software

SECURITY WARNING

Before installing Docker and containers with services on your Linux system, make sure you read and understand the risks mentioned on the Docker and iptables page. In particular, Docker makes all of your containers visible to the entire world through your Internet connection. This is great if you indeed want to share that service with the rest of the world, but it is very dangerous while you are still working on that container's service, since it could have security issues that need patching. Docker documents a way to prevent that behavior by adding the following rule to your firewall:

iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP

This means that unless the source IP address matches 192.168.1.0/24, access is refused. Replace the `eth0` interface name with the name of the interface you use for your external Ethernet connection. During development, you should always have such a rule.

That rule did not work at all for me, because my local network includes many other computers on my LAN and the rule blocked them all, so it was not a useful idea in my case.

Instead, I created my own entries based on some other characteristics. That includes the following lines in my firewall file:

*filter
:DOCKER-USER - [0:0]

-A DOCKER-USER -j early_forward
-A DOCKER-USER -i eno1 -p tcp -m conntrack --ctorigdstport 80 --ctdir ORIGINAL -j DROP
-A DOCKER-USER -i eno1 -p tcp -m conntrack --ctorigdstport 8080 --ctdir ORIGINAL -j DROP
-A DOCKER-USER -i eno1 -p tcp -m conntrack --ctorigdstport 8081 --ctdir ORIGINAL -j DROP

My early_forward chain holds the rules that allow my LAN computers to have their traffic forwarded as expected, so my LAN continues to work.

Then I have three rules that block ports 80, 8080, and 8081 from Docker.

Docker adds its own rules, which appear after these (although not within the DOCKER-USER chain) and open ports for whatever services you install in your Docker containers.

Note that the only ports you have to block are ports that Docker shares and that you have otherwise open on your main server. If Docker opens port 5000 and your firewall does not allow connections to port 5000 from the outside, then you are already safe. On my end I run Apache, so I block the usual HTTP ports from Docker.
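
If you want to double check which ports Docker actually publishes on your machine, you can list the containers with their port mappings and look at the NAT rules Docker installed. This is a quick sketch; the output depends on the containers you run:

docker ps --format '{{.Names}}: {{.Ports}}'
sudo iptables -t nat -L DOCKER -n -v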

Docker

As we help various customers, we encounter new technologies.

In the old days, we used chroot to create a separate environment where you could have your own installation and prevent the software within that environment from accessing everything else on your computer. This is particularly useful for publicly facing services like Apache, Bind, etc.

However, chroot is difficult to set up properly. You need to have all the necessary libraries installed within the chroot, for example.

Docker takes that technology to the next level by hiding all the complicated setup; it takes care of it for you. It creates an environment where you can install just about any software automatically. Docker then takes care of running the software and of proxying the network ports so everything works as expected.

Once you have created such an environment, you can save the image and share it with your peers so they can run the exact same software (in particular, with the exact same versions, whether or not they are on the bleeding edge). In other words, this is a bit like the snap installer, only it runs in an environment separate from your main Linux/macOS installation.
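
For example, assuming you have a container named my-container whose current state you want to share (the names here are just examples), a minimal sketch looks like this:

# freeze the current state of the container as a new image
docker commit my-container my-image:1.0

# export the image to a tarball you can copy to a peer
docker save -o my-image-1.0.tar my-image:1.0

# on the other machine, load the image back
docker load -i my-image-1.0.tar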

Getting Started

First, you want to install docker on your computer with:

sudo apt-get install docker docker-compose docker-containerd docker.io

You may not need all of the features offered by all of these packages, but this way you get the full functionality. It is likely going to install many additional Python packages, since docker-compose is written in Python (Docker itself is written in Go).

Now, to use Docker, it is much simpler if you add yourself to the docker group. Run this command:

sudo usermod -a -G docker alexis

Replace "alexis" with your user name. If you use docker as different users, you want to repeat this command for each user.

IMPORTANT NOTE: adding yourself to a group does not add you to that group in your current login session. The easiest way to make it effective is to log out and back in. There are other methods to get your new group taken into account, but they are hacks. Just log out and back in; it's the fastest and cleanest way.
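
Once you are logged back in, you can verify that the group change took effect, for example:

id -nG | grep docker     # the docker group should now be listed
docker ps                # should work without sudo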

The fact that you can have the exact version of certain software in your image makes it very likely to work on most systems out there. This is pretty powerful. The creation takes time, but once the image is ready, deployment is very easy. You can just use the command:

docker pull <image/name>

This command downloads (pulls) the image to your computer and makes it possible to run it with the docker run command. For example, there is a project you can use to replicate the Amazon AWS stack functionality (so you can develop on your computer and not waste bandwidth/space on a real Amazon system). It is called localstack. You can get the image with:

docker pull localstack/localstack

The run command is a little more involved as you have to map all the ports that the image offers (assuming the image has a set of services). There may be other parameters that are required.

docker run -d -p 4567-4584:4567-4584 -p 8080:8080 --name localstack localstack/localstack

The -d option means that docker runs that service in the background (detached). The -p option is used to map the ports. Usually you keep the ports one to one with the original, but you can change them if you need to (i.e. when there is a conflict between different services).

The --name option allows you to give a specific name to the instance. For example, if you use localstack just for the S3 functionality, you could name that container "s3".

Once running, you should start seeing the ports opened by the services and ready for you to consume. Docker will make use of your network stack and firewall to properly redirect the ports.

Now when you reboot, Docker should restart everything as if you had not rebooted your computer. Some images, though, may not work that well; localstack is one of them. I still have problems where it does not auto-restart.
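
If a container does not come back after a reboot, you can check and change its restart policy. For example (a sketch, using the localstack container from above):

# show the current restart policy
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' localstack

# restart this container automatically unless you explicitly stop it
docker update --restart unless-stopped localstack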

To see what is currently running, use the ps command:

docker ps -a

The -a option shows everything (all containers, including stopped ones).

To start and stop docker services you can use those two commands:

docker start localstack
docker stop localstack

The stop command is useful if you want to upgrade an image. To do so, the easiest way I've found is to remove the existing environment and restart it with the run command:

docker stop localstack
docker rm localstack
docker run -d -p ... (see the run command above)
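
Note that docker run reuses the image already downloaded on your computer; to actually get a newer version, pull it first. A sketch of the full upgrade sequence, using localstack and the run options shown above as an example:

docker pull localstack/localstack
docker stop localstack
docker rm localstack
docker run -d -p 4567-4584:4567-4584 -p 8080:8080 --name localstack localstack/localstack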

Note: if you need to remove an image instead, you can use the rmi command:

docker rmi localstack/localstack

You can get more help using the help command and option like so:

docker help
docker run --help

Most of the files docker deals with are saved under:

/var/lib/docker

Note that the contents are protected. You'll need to have root permissions (sudo) to be able to list the files found under /var/lib/docker.

To debug and run commands from within the Docker itself, use exec as the docker command and bash or sh as the shell:

docker exec -it localstack /bin/bash
docker exec -it ac8f0dbe9087 sh

The name after the "-it" options is the name passed to "--name" above, or the container ID (SHA256 hash). It is required to attach to a running container.
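
If you do not remember the name or the ID, docker ps lists both; for example:

docker ps --format '{{.ID}}  {{.Names}}'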

Determining the List of Processes Running

Chances are, your docker does not include your usual Unix tools: ps, top, htop, etc.

As a result, you are likely to have difficulty listing the processes that are currently running. One way I use is to list the commands found in the /proc folder like so:

for p in /proc/*/cmdline; do cat $p; echo; done

The cmdline files do not have a "\n" at the end, which is why I add the extra echo command.
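
Note also that the arguments within cmdline are separated by NUL characters, so they come out glued together. A small variation (a sketch, assuming tr is available in the container) converts the NULs to spaces:

for p in /proc/[0-9]*/cmdline; do tr '\0' ' ' < $p; echo; done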

Debugging a Docker

As shown above, you can execute a command inside a Docker. When executing a shell such as sh or bash, you can then have a look at the contents of the Docker file system.

Another useful way to debug what is going on is to look at the logs. This is done with the logs command like so:

docker logs localstack | less

That command displays the content of the logs of that Docker. If you are writing an application which is going to be run in a Docker, you may want to consider writing the logs directly to stderr and stdout.
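
You can also follow the logs live while you test; for example:

# show the last 100 lines and keep following the output
docker logs -f --tail 100 localstack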

For tools that write to a log file, such as Nginx, you can also use a great Unix trick as follows:

root@123456789ab:/# ls -l /var/log/nginx/
total 0
lrwxrwxrwx 1 root root 11 Jun  9 16:57 access.log -> /dev/stdout
lrwxrwxrwx 1 root root 11 Jun  9 16:57 error.log -> /dev/stderr

This makes Nginx write its logs to stderr and stdout. The usual Nginx logrotate.d/nginx configuration should not be included in your Docker.
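
If you build your own Nginx image, you can create those symbolic links directly from the Dockerfile; this is essentially what the official nginx image does:

RUN ln -sf /dev/stdout /var/log/nginx/access.log \
 && ln -sf /dev/stderr /var/log/nginx/error.log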

Creating Your Own Dockerfile

Here is an example of a Dockerfile for a service written in the Go language:

FROM golang:alpine AS build-env
RUN apk --no-cache add build-base git bzr mercurial gcc curl
WORKDIR /go/src/github.com/username/project-name
ADD . .
RUN cd cmd/command-name/ && go get && go build -i -o appname

FROM alpine
LABEL maintainer="contact@m2osw.com"

WORKDIR /app
RUN apk --no-cache add dependency-1 dependency-2
COPY --from=build-env /go/src/github.com/username/project-name/cmd/command-name/appname /app
ENTRYPOINT ["./appname"]
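
You can then build and run the resulting image from the directory that contains the Dockerfile (the tag used here, appname, is just an example):

docker build -t appname .
docker run -d --name appname appname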

If you want to use docker-compose and create multiple versions of the Dockerfile, you use the build section as follows:

services:
  my-service-1:
    image: service-1:latest
    build:
      context: .
      dockerfile: Dockerfile-service-1

  my-service-2:
    image: service-2:latest
    build:
      context: .
      dockerfile: Dockerfile-service-2

This example shows two services that are created within the same project, one called service-1 and the other called service-2.
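
With such a file in place, you build and start both services with docker-compose; for example:

docker-compose build     # builds both images from their respective Dockerfiles
docker-compose up -d     # creates and starts both services in the background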

Adding a Dynamically Linked Binary

Whenever you create a tool with C/C++ or a similar language that makes use of dynamically linked libraries, you will have to include all of those libraries in the Docker image, otherwise you will get an error saying "not found":

sh: error: <name> not found

The binary itself will certainly be found, but if any of its dependencies is not present within the Docker, it won't work. This is because the chroot-like environment prevents access to anything from the outside, so your regular /usr/lib is ignored.
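
You can list those dependencies yourself with ldd before deciding what to copy in; for example (the binary name is just an example):

ldd ./my-tool

Each line of the output shows a shared library and the path it resolves to on your host; all of those files must also exist inside the Docker.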

So what you need to do is look for all the dependencies, recursively. This can be done automatically with the Dockerize tool, which is a Python script. Install it with:

pip install git+https://github.com/larsks/dockerize

Note that the pip command must be used WITHOUT the "sudo". It will work just fine as long as you never used sudo to run pip.

If you don't yet have pip, you can install it with your standard installer. Under a Debian/Ubuntu computer do:

sudo apt install python-pip

Now that we have dockerize installed, we can run it to create the necessary Docker environment:

mkdir -p tool/usr/bin
cd tool
cp .../path/to/tool/my-tool usr/bin/.
dockerize -o . -n usr/bin/my-tool

Now your tool folder includes new directories with libraries. It also adds an etc directory, which may be problematic for you. On my end, I deleted it: the defaults I get work, and I did not want to smash my Docker's existing passwd and group files. I actually only kept the lib, lib64, and usr directories. You'll want to run tests and see what works best for you.

You may want to drop the tool/... extra path. I have it because it is part of a test. My ADD instruction goes something like this:

ADD tool /

since the lib, lib64, and usr must be installed at the root point.

Source: Post with detailed explanation about running binaries in your Dockers.
Source: Dockerize on github

It Stopped Working?!

Today I tried running Docker again. It worked just fine yesterday, but somehow I just couldn't get it to work today. It was telling me that a domain name was unreachable (raw.githubusercontent.com). Here is an example of the error I got when trying to create a new Docker:

WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.11/main/x86_64/APKINDEX.tar.gz: temporary error (try again later)

I looked into this for a little while. The fact is that the day before, I had installed an FTP server (only accessible on my machine) and had to add one rule to my firewall. I had that rule on my old server, but it was commented out on the new server.

When I add a rule to my server, it wreaks havoc in Docker because reloading the firewall removes all the iptables rules that Docker uses to forward packets between its containers, your host, and the Internet. To allow Docker to access the Internet again, restart it:

sudo systemctl restart docker

Now the firewall rules are back in place and the Docker creation should work again.

IMPORTANT NOTE: If you made changes to your firewall, those new rules may also be blocking some of your traffic. If you still get the same errors after restarting docker, take a closer look at your firewall and make sure your new rules are not in the way.

Source: dl-cdn.alpinelinux.org errors