
Docker, an advanced chroot utility

As we help various customers, we encounter new technologies.

In the old days, we used chroot to create a separate environment where you could have your own installation and prevent the software inside that environment from accessing everything else on your computer. This is particularly useful for publicly facing services.

However, chroot is difficult to set up properly. For example, you need to make all the necessary libraries available inside the new environment.
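
Just getting a shell to run inside a chroot means copying the shell and every library it links against by hand. A rough sketch (/srv/jail is only an example path, and library locations vary by distribution):

mkdir -p /srv/jail/bin /srv/jail/lib /srv/jail/lib64
cp /bin/bash /srv/jail/bin/
ldd /bin/bash                     # lists the libraries you must also copy under /srv/jail
sudo chroot /srv/jail /bin/bash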

Docker takes that technology to the next level by hiding all the complicated setup; it takes care of it for you. It creates an environment where you can install just about any software automatically. Docker then takes care of running the software and of proxying network ports so everything works as expected.

Once you have created such an environment, you can save the image and share it with your peers so they can run the exact same software (with the exact same versions, whether bleeding edge or not). In other words, this is a bit like the snap installer, only it runs in a separate environment from your main Linux/macOS installation.
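
As an illustration (the image and container names here are only placeholders), Docker lets you snapshot a container as an image and move that image around as a plain file:

docker commit my-container my-image:1.0       # snapshot a container as a new image
docker save -o my-image.tar my-image:1.0      # export the image to a tarball you can share
docker load -i my-image.tar                   # import it on another machine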

First, you want to install docker on your computer with:

sudo apt-get install docker docker-compose docker-containerd docker.io

You may not need all of the features offered by all of these packages, but that way you get all the functionality. The installation is likely to pull in many additional Python packages, because the classic docker-compose tool is written in Python (Docker itself is written in Go).
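
Once the packages are installed, you can check that the client and the daemon are working with:

docker --version
sudo docker info        # sudo is needed until you are in the docker group (see below)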

To use Docker, it is much simpler if you add yourself to the docker group. Run this command:

sudo usermod -a -G docker alexis

Replace "alexis" with your user name. If several users need to run Docker, repeat this command for each of them.

IMPORTANT NOTE: adding yourself to a group does not take effect in your current login session. The easiest way to make it effective is to log out and back in. There are other methods to have your new group taken into account, but they are hacks. Just log out and back in; it's the fastest and cleanest way.
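
After logging back in, you can verify that the new group is active and that the daemon accepts your connection:

id -nG                    # "docker" should now appear in the list
docker run hello-world    # downloads a tiny test image and runs it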

Because the image contains the exact versions of the software it needs, it is very likely to run on most systems out there. This is pretty powerful. Creating an image takes time, but once it is ready, deployment is very easy. You can just use the command:

docker pull <image/name>

This command downloads (pulls) the image onto your computer and makes it possible to run it with the docker run command. For example, there is a project you can use to replicate the Amazon stack functionality (so you can develop on your computer and not waste bandwidth/space on a real Amazon system). It is called localstack. You can get the image with:

docker pull localstack/localstack
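
You can list the images now available on your computer with:

docker images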

The run command is a little more involved as you have to map all the ports that the image offers (assuming the image has a set of services). There may be other parameters that are required.

docker run -d -p 4567-4584:4567-4584 -p 8080:8080 --name localstack localstack/localstack

The -d option means that docker runs the service in the background (detached). The -p option maps the ports. You usually keep the ports one to one with the original, but you can change them if you need to (e.g. to avoid a conflict between services).
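
For example, if port 8080 is already used by another service on your computer, you could map the container's port 8080 to 9090 on the host instead (everything else stays the same):

docker run -d -p 4567-4584:4567-4584 -p 9090:8080 --name localstack localstack/localstack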

The --name option allows you to give a specific name to the instance. For example, if you use localstack just for the S3 functionality, you could name that container "s3".

Once running, you should start seeing the ports opened by the services and ready for you to consume. Docker will make use of your network stack and firewall to properly redirect the ports.
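
You can check what was actually exposed with:

docker port localstack    # shows the host to container port mappings
docker logs localstack    # shows the service output, handy to confirm the startup completed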

When you reboot, Docker should restart everything as if you had not rebooted your computer. Some images, though, may not handle that well; localstack is one of them, and I still have problems where it does not auto-restart.
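
If a container does not come back on its own after a reboot, you can give it an explicit restart policy (shown here for the localstack container created above):

docker update --restart unless-stopped localstack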

To see what is currently running, use the ps command:

docker ps -a

The -a option shows everything (all), including stopped containers.

To start and stop Docker containers, you can use these two commands:

docker start localstack
docker stop localstack

The stop command is useful if you want to upgrade an image. The easiest way I have found is to remove the existing container, pull the new image, and recreate it with the run command:

docker stop localstack
docker rm localstack
docker pull localstack/localstack
docker run -d -p ... (see the run command above)

You can get more help using the help command and the --help option, like so:

docker help
docker run --help

Most of the files docker deals with are saved under:

/var/lib/docker

Note that the contents are protected. You'll need to have root permissions (sudo) to be able to list the files found under /var/lib/docker.
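
For example, you can peek at that directory and get a summary of how much space Docker is using:

sudo ls /var/lib/docker
docker system df          # disk space used by images, containers and volumes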

To debug and run commands from within the container itself, use exec with bash as the command:

docker exec -it localstack /bin/bash

The name after the -it options is the name you passed to --name above. The -i and -t options give you an interactive terminal inside the container.
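
You do not have to start a full shell; exec can also run a single command directly. For example:

docker exec localstack ls /tmp    # runs one command inside the container and prints its output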

Enjoy!
