Docker

Why use Docker?

One of the big problems in software development is that the environment where we develop our code is different from the environment where QA tests the code, which in turn is different from the environment where the application runs in production.

This difference can be as simple as having different operating systems, e.g. a developer might use Windows, Linux, or Mac on her desktop while production runs on a Linux box. Even if they used the same operating system, the dependencies and libraries installed on these machines might differ. There are various ways to reduce the risk that arises from these differences; one of the latest and best is the use of some kind of virtualization.

Docker is the most popular system providing this kind of virtualization.

  • One of the big problems: Developers, Testers, and Production all have different environments.

  • Dependency hell.

  • On-boarding new developers is hard.

  • Makes it easy to have the exact same setup for development, testing, and production.

  • Makes it easy for people to set up their development environment.

What is Docker?

Docker is usually called a light-weight Virtual Server.

If you are familiar with VirtualBox or VMware you know they allow you to have a full copy of any operating system running on top of your main operating system. You can even run multiple of these guest operating systems on a single host operating system. There is full separation and you can install anything on each one of the guests.

However this is very resource intensive as each one of the guest Virtual Servers needs to run its own Operating system taking up a lot of memory and using a lot of CPU cycles.

Docker also allows us to run several guests on the same host, but if the host and the guest are the same type (e.g. both are Linux) then the guest uses the same kernel as the host and thus requires a lot fewer additional resources. It has certain limitations, such as running one service per container, but these limitations are usually irrelevant in a real-life environment.

The act of packaging a whole application inside a Virtual Server is called containerization. There are several solutions to do this, but Docker got so popular that it is now the de-facto standard.

  • A light-weight Virtual Server.

  • De-facto standard for containerization.

Docker container vs. image

  • image
  • container

In the world of Docker an image is a fixed version of an installed application while a container is a running copy of the image.

A Docker image is similar to an ISO image in the world of VirtualBox from which we can create an installation and then run it. The Docker container in this analogy would be similar to the virtual hard-disk of an already installed Virtual Server.

You can download Docker images and you can build your own images based on already existing ones by installing more software on them or copying files to them. An image is frozen on the disk of your host computer.

When you run a Docker image, Docker creates a copy of it that we start to call a container: a running instance of the image. You can still install more applications on a container, but usually that is done only during development time.

Some people also apply the class-instance analogy to the image-container pair. I am not sure how close that is, but that too is just an approximation.

container = runtime of an image

  • It is like instance = the runtime of a class
  • Or Virtual Machine = the running instance of an ISO file.
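
A minimal sketch of the difference on the command line (all of these commands are covered in detail later):

docker pull hello-world     # download the image; it sits frozen on the disk
docker run hello-world      # create a container from the image and run it
docker images               # list the images stored on this machine
docker ps -a                # list the containers, including the stopped ones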

Install Docker

On modern Linux systems you could use apt or yum to install Docker, but then you'd get a rather old and outdated version of Docker. The recommendation of the Docker development team is to download Docker directly from their web-site.

So this is what I recommend as well.

For MS Windows, and I think also for Mac, there are two versions: "Docker for Windows" and "Docker for Mac" for the modern systems, and the "Docker Toolbox" for older versions of MS Windows and Apple Mac, or in case the modern version cannot be installed.

On Linux, you might also need to follow the "Linux post installation instructions".
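
These post-installation steps typically include something along these lines (a sketch; follow the official instructions for your distribution):

sudo groupadd docker             # the group may already exist
sudo usermod -aG docker $USER    # run docker without sudo (log out and back in afterwards)
sudo systemctl enable docker     # start the Docker daemon on boot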

Docker on Windows

  • cmd

The majority of our work with Docker will be on the command line. There probably is some GUI as well, but I have never searched for one and I have never tried one. In any case I think it is very important that you too, even if you are using MS Windows on your computer, make yourself familiar with the command line instructions of Docker. After all, that's how you'll be able to talk to other people or search for help.

If you are running Docker on top of MS Windows you'll need to open the Command Prompt window to access the command line of Windows. So go to Start/Run or click on the "Windows" button on your keyboard and type in "cmd". When you run this application you'll get a black window where you can type in commands.

That's what we are going to use.

  • Run the cmd

Docker on Linux and macOS

  • terminal

On Linux and macOS we are going to use the terminal to enter the Docker commands.

  • Run the terminal

Docker --version

  • --version

Show the version of your Docker installation.

docker --version

Output:

Docker version 19.03.6, build 369ce74a3c

Docker version

  • version

Show a lot more details about Docker:

docker version

Output:

Client:
 Version:           19.03.6
 API version:       1.40
 Go version:        go1.12.10
 Git commit:        369ce74a3c
 Built:             Fri Feb 28 23:26:00 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.6
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.10
  Git commit:       369ce74a3c
  Built:            Wed Feb 19 01:04:38 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.3.3-0ubuntu1~19.10.1
  GitCommit:
 runc:
  Version:          spec: 1.0.1-dev
  GitCommit:
 docker-init:
  Version:          0.18.0
  GitCommit:

Docker info

  • info

Display some system-wide information.

docker info

Docker help on CLI

  • help

Any of the following will show the list of available commands:

docker
docker help
docker --help

Get help for the various commands on the command-line.

docker run --help
docker build --help
docker help run
docker help build

Docker: desktop - host - daemon - client

  • Docker Desktop: a GUI application that helps especially on Windows and macOS.
  • Docker host (on Windows and macOS it is a Virtual Machine, on Linux it is native).
  • Docker daemon runs in the Docker host.
  • Docker client runs on the host OS (Linux, Windows, macOS).

Docker Daemon

To launch the Docker daemon from the command line:

  • macOS: open -a Docker or launch the Docker daemon via the Application icon.
  • Linux: sudo service docker start.
  • Windows: Run Docker Desktop.

Docker Registry

A Docker registry is a place where we can store reusable Docker images. There are several public or semi-public Docker registries and you can also run your own private registry in your organization. The best-known registry, Docker Hub, is maintained by Docker itself. The major cloud providers run their own registries, tightly integrated with their other cloud services.
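
For example, an image on Docker Hub is referred to by its plain name, while an image in any other registry is prefixed by the registry host (the private registry below is just a placeholder):

docker pull ubuntu:23.04                            # pulled from Docker Hub, the default registry
docker pull registry.example.com/myteam/myapp:1.0   # pulled from a hypothetical private registry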

Docker: Hello World

$ docker run hello-world


Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
78445dd45222: Pull complete
Digest: sha256:c5515758d4c5e1e838e9cd307f6c6a0d620b5e07e6f927b07d05f6d12a1ac8d7
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

After Hello World

no running containers, but there is one on the disk:

$ docker container ls -a -s
$ docker ps -as
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                     PORTS               NAMES               SIZE
f6239f10a6ad        hello-world         "/hello"            8 seconds ago       Exited (0) 7 seconds ago                       lucid_snyder        0 B (virtual 1.84 kB)
  • I keep forgetting what that -s does, so I run:
$ docker ps --help

There is also an image

$ docker images
$ docker image ls
$ docker image list
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
hello-world         latest              48b5124b2768        6 weeks ago         1.84 kB

Hello World again

This time it will be faster as the image is already on the disk. A new container is created, run, and exited.

$ docker run hello-world
...

$ docker ps -as

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES               SIZE
42bbb5394617        hello-world         "/hello"            16 minutes ago      Exited (0) 16 minutes ago                       blissful_knuth      0 B (virtual 1.84 kB)
f6239f10a6ad        hello-world         "/hello"            21 minutes ago      Exited (0) 21 minutes ago                       lucid_snyder        0 B (virtual 1.84 kB)

Remove Docker container

$ docker rm 42bbb5394617
$ docker container rm blissful_knuth

Using the "CONTAINER ID" or the "NAMES" from the list given by ps.

Remove Docker image

$ docker rmi hello-world
$ docker image rm hello-world

Docker busybox

  • busybox
  • run

busybox is a very small image with some essential Linux tools.

docker run busybox echo hello world

docker run busybox echo something else

Run Interactive

  • -it
docker run -it busybox

# pwd
# ls -l
# uptime
# echo hello world

Docker List containers

  • ps
docker container ls
docker container ls -a
docker ps                # list the running containers
docker ps -a             # list all the containers

Remove containers

docker container rm CONTAINER_ID   (or CONTAINER_NAME)
docker rm CONTAINER_ID             (or CONTAINER_NAME)

Remove all the containers with docker prune

  • prune
docker container prune
docker container prune -f
docker container prune --force

Remove all Docker containers (old way)

  • rm
  • -aq

Remove all the docker containers:

docker ps -aq
docker rm $(docker ps -aq)

Run and remove container

docker run --rm busybox echo hello world

docker container ls -a      # the container was not left around

Run container mount external disk

docker run --rm "-v%CD%:/opt" -w /opt -it busybox
  • -w to set the default workdir inside the container
  • -v to mount an external folder to an internal folder
  • %CD% is the current folder in Windows (outside)
  • /opt is the folder inside
  • We need double-quotes around it as the current working directory on Windows can contain spaces and other special characters.
  • --rm to remove the container at the end of the run
  • -it interactive mode
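
For comparison, on Linux or macOS the same command would use $(pwd) for the current directory (shown again in a later section):

docker run --rm -v "$(pwd):/opt" -w /opt -it busybox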

List and remove images

  • images
  • rmi
docker image ls
docker image rm busybox

Docker remove all the images - prune images

docker image prune
docker image prune -a
docker image prune -a -f

Exercise 1

  • Install Docker.
  • Run hello-world
  • Run busybox.
  • Basically execute all the above commands.
  • Check what other interesting commands you can find in docker.

Create Docker Image Manually

Create your own Docker image

There are two ways to create a Docker image on your computer:

  • Run a container, install stuff, stop it, commit it.
  • Create Dockerfile, run docker build.

We will do both, but we start with the first one. It is less common, but it is more fun and it is easier to understand.

Besides, while you are researching a topic you will follow the path of the first technique; only once you have found the way to accomplish the task will you formalize it in a Dockerfile.

Docker Hub search for images

Before we start creating our own Docker images, let's take a moment to explore what images already exist. After all, our goal is to solve problems, not just to create random Docker images, so if there is already an image that serves our purposes then it is better to use that.

Moreover, every Docker image we are going to build is going to be based on an already existing image.

If you visit the Docker hub and start searching you'll see there are tons of low-level images that provide the basics of one of the Linux distributions. For example, search for Ubuntu and you'll see there is an official Ubuntu image.

There are also application level images. For example there are images for Wordpress, there are images that already include a certain database (e.g. PostgreSQL), and other images that contain a programming language, for example Python.

In addition you can find the source of all the official images in a GitHub repository. Once you get familiar with the basics of Docker, it might be useful to review them.

Download image

  • pull

Earlier when we used the run command to run a Docker image, the first thing it did was to download the image. Subsequent run commands only executed the image.

You can actually download images without running them using the pull command. You only need to provide the name and the tag of the image.

docker pull ubuntu:23.04

Use Ubuntu to run a single command

Earlier we saw how to use the busybox image, but most people are not familiar with busybox, so let's see a few examples using the Ubuntu image. We are still not building our own image at this point, just playing with a ready-made one.

The run command allows us to launch a Docker container based on an image. The --rm flag tells Docker to remove the container from our hard disk once it has stopped running. It is a good idea to do so if you don't need the container any more. Otherwise the dead containers would just accumulate and take up disk space.

After the name of the image we can also provide the command line that needs to be executed. In this case echo hello is just a simple shell command.

Running this will print hello and then the container will exit and be removed from the disk as if it never existed.

docker run --rm ubuntu:23.04 echo hello

Use Ubuntu interactively

The next best thing we can do is to launch our Docker container in an interactive mode. This time we provide the -i and -t flags together as -it to allow for interactive mode. We don't use the --rm flag as we want the container to stay around even after we exit, so we can re-launch the same container.

By default Docker will give a name to each container consisting of two random words, e.g. foo_bar, but that makes it harder for us to re-launch the container. So in this case we used the --name flag to give it a name. The brilliant name I selected was ubu.

Once we execute the command we will see the prompt as it is displayed by the container. Now we can execute various Linux commands. For example we can try to run htop.

If you try to do this you'll quickly find out that this image did not come with the htop command. Don't worry though, we can easily install it. We run the apt-get update command to download the list of all the available packages. (We don't need to use sudo as we are running as user root inside the Docker container.)

Once we have downloaded the up-to-date list of packages, we can run apt-get install htop to download and install the htop package and then we can try to use it.

At this point feel free to play around with the command line. For example use the which command to check if certain programs or certain programming languages were installed.

Once you have had enough you can press Ctrl-D or type in exit to leave the container. Once you leave it the container stops running. You can then use the docker ps -a command to see the exited Docker container called ubu.

docker run -it --name ubu ubuntu:23.04

# htop
# apt-get update
# apt-get install htop
# htop
# which perl
# which python
# which python3

# exit
docker ps -a

Rerun (restart) stopped instance

  • container
  • start

It is great that we have a container on the disk which is not running any more, but what can we do with it?

Probably the most important thing is that we can re-launch it. Type in docker container start -i ubu and you'll be back inside the same container. How do you know it is the same? You can run htop and see it is installed. If you were running a new container you'd have no htop installed as that's not part of the Ubuntu image as it was downloaded from the Docker hub.

docker container start -i ubu
docker container start -i CONTAINER

Create file in container

Let's do another small experiment. If you have left it again, let's get back to the ubu container by running docker container start -i ubu and then let's create a file. Nothing major, just a file called welcome.txt with the content "hello" in it. (You can use the echo command with redirection to accomplish this.)

Then type in exit to leave the container and to let it stop. You already know how to list all the containers so you can verify it still exists but it does not run any more.

# echo hello > welcome.txt
# exit
  • Create a file inside the container and then leave the container. It is stopped now again.

Create image from container

  • commit

Now that we have a container with all the necessary packages installed (htop) and all the necessary files created (welcome.txt), let's convert this container into an image. Let's freeze this container so it will become reusable, either by me or by others.

We can do this by running the docker container commit command. It needs to get the name of the container to use as the base and the name and tag of the new image. Here I used the name "myubu".

Once we have the new image we can list it using the docker images command and we can run it with the regular run command.

docker container commit CONTAINER IMAGE
docker container commit ubu myubu:1.00
docker images
docker container run --rm -it myubu:1.00

Docker create image manually

We went over this step-by-step earlier, but let's have all the steps in one place:

  • Run container interactively (based on Ubuntu 23.04, call it ubu)
docker run -it --name ubu ubuntu:23.04
  • Install packages, copy or create files, etc..., exit.
# apt-get update
# apt-get install htop
# echo "Hello World" > welcome.txt
# exit
  • Create the image called myubu from the container called ubu
docker container commit ubu myubu:1.00
  • Run a container based on the new image called myubu:
docker container run --rm -it myubu:1.00

Check the history!

docker history ubuntu:23.04
docker history myubu:1.00

Docker create image manually - placeholders

  • Run container
docker run -it --name NAME BASE_CONTAINER:BASE_TAG
  • install stuff in the container, copy stuff to the container, exit

  • Create image from container

docker container commit CONTAINER NEW_IMAGE_NAME:TAG
  • Better yet, create the image under your Docker Hub username:
docker container commit CONTAINER USERNAME/NEW_IMAGE_NAME:TAG
  • To verify, run a container based on the new image (with or without username):
docker container run --rm -it NEW_IMAGE_NAME:TAG
docker container run --rm -it USERNAME/NEW_IMAGE_NAME:TAG

Dockerfile

Docker: Empty Ubuntu

  • FROM

Creating new Docker images manually, as we saw earlier, is fun as you can experiment a lot and you get immediate feedback as you install more things in the running container. However this is not easily reproducible. For one thing, it happens to me a lot that I install a package and later I find out I probably don't need it.

Once you have a good grasp of what you really need in that image, you can create a set of instructions in a Dockerfile that will create the image for you.

There are many advantages to this approach.

  • You get an instant description of what is really in the image.
  • It can be reproducible, so you or someone else can later rebuild the same image.
  • As it is a plain text file you can put it in version control and track the changes.
  • It is very small, compared to the size of the image.

In this very first example we will create a new image that is based on and is identical to the Ubuntu 23.04 image, without any extras.

For this we create a new directory and in that directory we create a file called Dockerfile with a single line in it: FROM ubuntu:23.04. Every Dockerfile must declare the image it is based on. We don't have any more commands in the Dockerfile so we don't add anything to this image.

cd into the directory where we have this file and run docker build -t mydocker . (The dot at the end of the command is important.)

This will create an image called mydocker using the Dockerfile in the current directory and using all the context of this directory (indicated by the dot). We'll discuss the "context" in a bit, for now it only contains the Dockerfile. That's why we created this in a new empty directory.

Once the image is created we can use it exactly as we used the original Ubuntu image.

FROM ubuntu:23.04
$ docker build -t mydocker .
$ docker run -it --rm mydocker

Docker: Ubuntu Hello World

  • CMD

Having an image identical to some already existing image does not give us a lot of value, so let's make another small step. Let's add another instruction to the Dockerfile called CMD. The content of the CMD line is not executed when we build the image. Whatever comes after CMD will be executed when we start the Docker container based on this image, unless we override it.

In our case we just use the shell command echo to print "hello world" to the screen.

FROM ubuntu:23.04
CMD echo hello world
$ docker build -t mydocker .

Once the image is ready we can run it and it will print out "hello world" as expected. You could distribute this image to show that you made it! Well, we have not seen yet how to distribute the image, but aside from that everything is fine.

$ docker run --rm mydocker
hello world

The user of this new image can provide her own command on the command line, either another echo command or something totally different such as the pwd below.

docker run --rm mydocker echo Other text
Other text
docker run --rm mydocker pwd
/

If we try to run it in interactive mode by supplying the -it flags, we'll find out that Docker still runs our CMD and exits. In order to really get into the interactive shell of this container we need to override the default CMD by a call to bash.

docker run -it --rm mydocker bash

Docker: Ubuntu htop

  • RUN

Previously we created a Docker image manually after we installed htop and created a file in a Docker container. Let's do the same now using a Dockerfile.

FROM ubuntu:23.04

RUN apt-get update
RUN apt-get install -y htop
RUN echo "Hello World" > welcome.txt
docker build -t mydocker .
docker run --rm -it mydocker

Docker COPY welcome file

  • COPY
FROM ubuntu:23.04

RUN apt-get update
RUN apt-get install -y htop
COPY welcome.txt .
docker build -t mydocker .
docker run --rm -it mydocker

Docker curl

In the previous example we saw that we can use commands other than echo, but what if we would like to use curl for example? It will fail because curl is not installed in the image.

  • Let's try to use curl
$ docker run  --rm ubuntu:23.04 curl https://code-maven.com/
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"curl\": executable file not found in $PATH": unknown.

To install it we need to add RUN commands. (Yeah, I know the name might be a bit confusing, but think about it from the point of view of the build process. We would like to run something during the build process.)

In order to install the curl package in Ubuntu we need to execute the apt-get install -y curl command. (The -y flag is there so apt-get will not ask for confirmation.) However before we can execute that command we need to download the list of all the available packages. We do that by running (here is that word!) the apt-get update command.

We don't have a CMD as we don't have a default operation.

FROM ubuntu:23.04

RUN apt-get update
RUN apt-get install -y curl

We can build the image the same way we built the previous one.

$ docker build -t mydocker .

Then we can run it. This time it will execute the curl command.

$ docker run  --rm mydocker curl https://code-maven.com/

Docker image as Curl command

In the previous example we installed "curl" in the Docker image so we could use it in the container, but we did not make any special arrangement for curl. Using that image it is equally easy to run any other Linux command available in the image. What if we would like to make executing curl the default behavior just as we had with echo?

We could include something like CMD curl https://code-maven.com in the Dockerfile, but then it would default to download the given page.

We could use CMD curl and hope to pass the URL to the docker run command, but the parameters given on the command-line will override everything we have in CMD.

However there is another tool called ENTRYPOINT. It is very similar to CMD, but in certain situations it allows the addition of parameters instead of the overwriting of parameters.

FROM ubuntu:23.04

RUN apt-get update
RUN apt-get install -y curl

ENTRYPOINT ["curl"]
$ docker build -t mydocker .
  • Run alone will execute curl without parameters:
$ docker run --rm  mydocker
curl: try 'curl --help' or 'curl --manual' for more information
  • Supply the URL and that's it:
$ docker run --rm  mydocker https://code-maven.com/

Docker: ENTRYPOINT vs CMD

FROM ubuntu:23.04
RUN apt-get update
RUN apt-get install -y curl

ENTRYPOINT ["curl"]   # fixed part
CMD ["--silent", "https://httpbin.org/get"]  # replacable part

By default if you run a container based on this image, Docker will execute a command which is a combination of the ENTRYPOINT + CMD.

However, on the command-line where you call docker run, you can provide a replacement for the CMD part.

  • Build the image:
$ docker build -t mydocker .
  • Running container this way will execute curl --silent https://httpbin.org/get
$ docker run --rm  mydocker
  • The user can replace the CMD part, so if we run this command, docker will execute curl https://szabgab.com/
$ docker run --rm  mydocker https://szabgab.com/

Docker and environment variables with ENV

  • ENV
  • --env
FROM ubuntu:22.10

ENV FIRST Foo

CMD echo $FIRST $SECOND

We can declare environment variables and give them values inside the Docker file using the ENV keyword.

When running docker we can override these and provide other environment variables using the --env command-line parameter.

docker build -t mydocker .
$ docker run --rm mydocker
Foo
$ docker run --rm --env SECOND=Bar mydocker
Foo Bar
$ docker run --rm --env SECOND=Bar --env FIRST=Peti mydocker
Peti Bar

Docker: Mounting external host directory as a volume (Linux, macOS hosts)

$ docker run -it --rm -v $(pwd):/opt/ mydocker

# cd /opt
# ls -l

The -v HOST:CONTAINER option will mount the HOST directory of the host operating system onto the CONTAINER directory in the Docker container.

Docker: Mounting external host directory as a volume (Windows hosts)

In CMD window:

> docker run -it --rm -v %cd%:/opt/ mydocker

In Power Shell:

> docker run -it --rm -v ${PWD}:/opt/ mydocker

In any case the path to the folder must only contain [a-zA-Z0-9_.-] characters, so no spaces, no Hebrew characters, etc.

Docker build passing command-line arguments

We can define build-time parameters in the Dockerfile and then allow the user who builds the image to pass in values on the command line of docker build.

FROM ubuntu:22.04

ARG TEXT=foo

RUN echo "Hello" > welcome.txt
RUN echo $TEXT >> welcome.txt

docker build -t mydocker --build-arg TEXT=Bar .
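
To verify the result we can read the file back from a container based on the freshly built image (a quick check, not part of the original example):

docker run --rm mydocker cat welcome.txt
Hello
Bar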

Exercise 2

Pick your favorite distribution (Ubuntu, Debian, CentOS, Fedora, etc.) and use it as the base of your application.

  • Compile the most recent release of Python from source code (you will need to install some prerequisites).

  • Add a Python based application using MongoDB or PostgreSQL or MySQL - whatever you like.

  • Prepare it for distribution.

  • Install NodeJS, express, create a small web app (hello world would suffice) and prepare it for distribution.

  • Create a system of two Flask (or Express) applications that provide APIs and a third command-line application that accesses those APIs.

Copy file from stopped container

  • copy
  • cp

At this point we could get back to the container and verify that the file is still there, and I encourage you to do that, but we would like to do something else as well. We would like to be able to see the file even though the container does not run any more. We can do this by running the docker container cp command.

The command gets two parameters: the first one contains the name of the container and the path to the file inside the container we would like to copy. The second parameter is the name or location of the file outside the container. The dot . just means: copy the file here with the same name as it had inside.

The cp command looks similar to the cp command of Linux.

docker container cp CONTAINER_NAME:FILE .
docker container cp ubu:welcome.txt .
  • On your host OS you can copy files from the container to the external filesystem.

Docker with cron

Docker with crontab

  • cron
  • cron -f
  • -d

Sometimes you might want to distribute an application that can be scheduled to run at a certain interval. The scheduler of Unix/Linux is called cron or crontab (cron table). The name is derived from the Greek word chronos.

In order to set up a so-called cron-job you need to install the crontab file in the Docker image and then you need to tell your Docker image to start the cron service when the container starts and to just keep waiting for it to work, so as not to quit.

First we prepare a file that looks like a crontab file. We won't go into the details of the crontab format; you can read about it in the linked Wikipedia entry. Suffice it to say that the 5 stars tell the cron service to run the job every minute.

The job itself is not very interesting. It runs the "date" command and redirects the output to a file called /opt/dates.txt appending to it every time. We only use this to see that the cronjob indeed works.

* * * * * date >> /opt/dates.txt
FROM ubuntu:23.04
RUN apt-get update && \
    apt-get install -y cron

COPY crontab.txt /opt
RUN crontab /opt/crontab.txt

CMD ["cron", "-f"]

We have the usual command to create the Docker image.

docker build -t mydocker .

When we run the container we also include the -d flag that tells Docker to detach the container from the current terminal and run it in the background. This is important so the container won't occupy your terminal and that you will be able to close the terminal while the container keeps running.

docker run -d --rm --name chronos mydocker

Wait some time to allow the cron-job to run at least once (you might need to wait up to a whole minute), and then you can copy the "dates.txt" file from the container to the disk of the host operating system. If you look into the file you'll see the output of the date command. If you copy it again later on you'll see multiple entries of the date command.

Meaning that the cron-job worked as expected.

docker container cp chronos:/opt/dates.txt .

If you'd like to stop the Docker container you can use the stop command. It will take 10-20 seconds to stop the container. It will also immediately remove the container as we started it with the --rm flag.

docker container stop chronos

Docker with crontab with tail

  • cron
  • tail

In the previous example we used the -f flag of cron to make it stay in the foreground. This was enough for Docker to keep the container running. However there might be other commands that do not have such a flag and would automatically become a daemon, just as if we ran cron without any flags.

A way to overcome this problem is to create a process that will run forever. One way to accomplish this is to create an empty file and then run tail -f on that file. The tail command is supposed to display the content of the file as it grows, but the file does not change so this command will just wait there.

Enough for the Docker container to keep running.

As you can see the name of the file does not matter.

FROM ubuntu:20.04
RUN apt-get update && \
    apt-get install -y cron

COPY crontab.txt /opt
RUN crontab /opt/crontab.txt

RUN touch /opt/jumanji.txt
CMD ["cron", "&&", "tail", "-f", "/opt/jumanji.txt"]
docker build -t mydocker .
docker run -d --rm --name chronos mydocker
docker container cp chronos:/opt/dates.txt .
docker container stop chronos

Commands

Dockerfile commands

There is only a handful of commands that you can use in a Dockerfile. We are going to cover them one-by-one briefly.

Docker FROM

  • FROM

  • Declare the base-image.

  • This is how we start all the Dockerfiles (though we could put some ARG instructions before it).

  • FROM

Docker COPY

  • COPY

  • COPY

  • COPY from host to image

Docker ARG
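
ARG declares a build-time variable that the user can override with --build-arg; we saw it in the "Docker build passing command-line arguments" section. A minimal recap:

ARG TEXT=foo
RUN echo $TEXT >> welcome.txt

docker build -t mydocker --build-arg TEXT=Bar .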

Docker ADD

  • ADD

  • ADD

  • ADD is like COPY but it can do more magic (it can download files from the internet and it automatically unpacks local tar archives)

Docker RUN

Execute some command during the creation of the Docker image.

RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y some-package

Docker CMD

CMD - the default command when the container starts

In Debian it is bash.

The CMD only runs when we run the container!

Docker ENTRYPOINT
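
ENTRYPOINT sets the fixed part of the command, while CMD provides default parameters that the user can replace on the docker run command line (see the ENTRYPOINT vs CMD section above). A minimal recap:

ENTRYPOINT ["curl"]
CMD ["--help"]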

Docker ENV
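
ENV declares environment variables inside the image; at run time they can be overridden with --env (see the ENV section above). A minimal recap:

ENV FIRST Foo

docker run --rm --env FIRST=Bar mydocker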

Docker WORKDIR
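
WORKDIR sets the directory inside the image in which subsequent RUN, CMD, ENTRYPOINT, and COPY instructions are executed and in which the container starts; we used it in several of the Dockerfiles above. A minimal sketch:

WORKDIR /opt
COPY requirements.txt .
# the file ends up as /opt/requirements.txt and the container starts in /opt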

Docker upload and publish

docker tag ID szabgab/name:1.01
docker login --username=szabgab
docker push szabgab/name:1.01

Docker exec, stop and kill

  • exec
docker run --rm -d -it debian
docker ps
docker stop ID

docker exec  0ca23b8a9802 echo hello
docker exec -it 0ca23b8a9802 bash

docker kill ID    if it does not want to stop

Dockerfile

Dockerfile
FROM debian
RUN apt-get update
RUN apt-get install -y htop
RUN apt-get install -y curl
docker build -t exp1 .
docker images
docker history exp1
docker run --rm -it exp1

Simple docker commands

Empty state, no images:

no running containers

$ docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

no local containers at all (including not running, and showing the size)

$ docker ps -as

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

not even images

$ docker images

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

Commands

docker ps
docker ps -as    list all the containers available locally  (incl the size)
docker images    list images

Passing command to the Docker Container

$ docker run mydocker ls -l /

$ docker run mydocker perl -v

This is perl 5, version 22, subversion 2 (v5.22.2) built for x86_64-linux-gnu-thread-multi
....

$ docker run mydocker python -V
container_linux.go:247: starting container process caused "exec: \"python\": executable file not found in $PATH"
docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"python\": executable file not found in $PATH".
ERRO[0001] error getting events from daemon: net/http: request canceled

Run container as a daemon - attach detach

  • attach

  • -d

  • exec

  • Run as a daemon in the background and name it 'test'

docker run -d --rm -it --name test busybox
  • Check if it is running using docker ps

Run things on it

docker exec CONTAINER command   (eg ls, cat ...)
  • Attach to it:
docker container attach test
  • Detach from the container and keep it running by pressing
Ctrl-p Ctrl-q

Run container as a daemon

  • inspect
  • logs

In this example we create a Docker image based on busybox and a tiny bit of shell scripting. Specifically we'll run an infinite while-loop and every second we'll print the current date and time.

The first thing we need to do is to build the image.

FROM busybox
CMD while true; do (date; sleep 1); done
docker build -t mydocker .
docker run -d --rm --name test mydocker
docker inspect test
docker container logs test
docker container attach test
Ctrl-p Ctrl-q

Inspect low-level information about Docker

  • inspect
docker inspect CONTAINER_ID

Copy console output of container (logs)

  • logs
docker logs CONTAINER_ID
  • .dockerignore

docker build . will send over all the content of the current directory to the docker daemon. You usually don't want that, so add a file called .dockerignore to the root of your project:

.git/
temp/

Docker Compose

Install Docker compose

Docker Compose allows us to configure several Docker images to be launched together so they can work together nicely. For example one image holds the database server, another one holds the message-passing system, and the third holds the code of the web application.

Docker compose can run them all at once and can also define the network connectivity between them.

There are several ways to install Docker Compose. One of them, if you are familiar with Python, is using pip.

pip install docker-compose

Docker compose

  • up
  • attach
  • exec

The configuration of Docker Compose is stored in a YAML file, usually called docker-compose.yml. The following is a very simple example that defines a single Docker container, cleverly named "one", which is based on the Ubuntu 20.04 image. Normally, in order for a Docker container to keep running you need to execute some command in it that will keep running. We can achieve the same by configuring the stdin_open and the tty parameters. (They are the same as providing -it on the command line of docker.)

{% embed include file="src/examples/interactive-shell/docker-compose.yml)

In order to launch the Docker containers we need to cd into the directory where we have the docker-compose.yml file and then type in docker-compose up. This will download the image if necessary and launch the Docker container.

cd examples/interactive-shell
$ docker-compose up

In another terminal, but in the same directory you can run one-off commands on the running container:

$ docker exec interactive-shell_one_1 hostname

You can also attach to it:

$ docker attach interactive-shell_one_1

However, when you exit, it will shut down the container.

Docker compose 1st example

Provide a command-line override for CMD/ENTRYPOINT in the Compose YAML file.

version: '3.7'
services:
  one:
    build:
        context: .
        dockerfile: Dockerfile
    entrypoint:
        - tail
        - -f
        - /dev/null
  two:
    build:
        context: .
        dockerfile: Dockerfile
    entrypoint: tail -f /dev/null
FROM centos:7
RUN yum install -y less vim which
WORKDIR /opt
cd examples/compose1
docker-compose up
  • This builds the image based on the Dockerfile and then launches two containers

docker compose - keep running two ways

version: '3.7'
services:
  one:
    build:
        context: .
        dockerfile: Dockerfile1
  two:
    build:
        context: .
        dockerfile: Dockerfile2
    stdin_open: true
    tty: true


FROM centos:7
RUN yum install -y less vim which net-tools
CMD tail -f /dev/null
FROM centos:7
RUN yum install -y less vim which
WORKDIR /opt

attach to either one of them:

docker-compose exec one sh
ping compose1_one_1

Docker Compose

It keeps and reuses the same containers unless you remove them with:

docker-compose rm
docker-compose up --build

yum install -y net-tools
ifconfig
route -n
ping one
version: '3.7'
services:
  one:
    build:
        context: .
        dockerfile: Dockerfile1
    stdin_open: true
    tty: true
  two:
    image: centos:7
    stdin_open: true
    tty: true
FROM centos:7
RUN yum install -y less vim which net-tools
version: '3.7'
services:
  one:
    image: centos:7
    entrypoint:
        - bash
    stdin_open: true
    tty: true
  two:
    image: centos:7
    stdin_open: true
    tty: true


Docker Compose Redis server and client

version: '3.8'
services:
  client:
    build: .
    volumes:
    - .:/opt
    links:
    - redis
    command: tail -f /dev/null
  redis:
    image: redis:latest
FROM ubuntu:23.04
RUN apt-get update && \
    apt-get install -y curl && \
    apt-get install -y redis-tools

Start the docker containers

docker-compose up -d

Connect to the docker container which has the redis client:

docker exec -it redis_client_1 bash

Try the following commands in the Docker container:

redis-cli -h redis get name
(nil)

redis-cli -h redis set name Foobar
OK


redis-cli -h redis get name
"Foobar"

We provide the hostname redis because that's the name of the service. We don't have to provide the port, but if you really want to then try this:

redis-cli -h redis -p 6379 get name

Docker Compose Solr server and curl as client

version: '3.8'
services:
  client:
    build: .
    volumes:
    - .:/opt
    links:
    - solr
    command: tail -f /dev/null
  solr:
    image: solr:latest
FROM ubuntu:22.04
RUN apt-get update && \
    apt-get install -y curl

Start the docker containers

docker-compose up -d

Connect to the docker container which has curl installed:

docker exec -it solr_client_1 bash
curl http://solr:8983/solr/

TBD:

curl --request POST \
--url http://solr:8983/api/collections \
--header 'Content-Type: application/json' \
--data '{
  "create": {
    "name": "techproducts",
    "numShards": 1,
    "replicationFactor": 0
  }
}'

Docker Compose MongoDB server

In one container we start the MongoDB server; the other one will be the client, which is built based on the Dockerfile.

version: '3.8'
services:
  client:
    build: .
    volumes:
    - .:/opt
    links:
    - mongodb
    command: tail -f /dev/null
  mongodb:
    image: mongo:latest

The Dockerfile is also based on the official mongodb image as that made it easy to have mongosh already installed.

FROM mongo:latest

Start the two containers:

docker-compose up -d

Connect to the client container:

docker exec -it mongodb_client_1 bash

Start the mongodb client and connect to the server running in the other container

mongosh mongodb://mongodb:27017

Docker Compose PostgreSQL server

In one container we start the PostgreSQL server; the other one will be the client, which is built based on the Dockerfile.

version: '3.8'
services:
  client:
    build: .
    volumes:
    - .:/opt
    links:
    - postgres
    command: tail -f /dev/null
  postgres:
    image: postgres:latest
    environment:
      POSTGRES_USER: username
      POSTGRES_PASSWORD: password
      POSTGRES_DB: mydb
    volumes:
      - postgres-database-data:/var/lib/postgresql/data/

volumes:
  postgres-database-data:

The Dockerfile is built on top of a plain Ubuntu image

FROM ubuntu:23.04
RUN apt-get update && \
    apt-get install -y curl && \
    apt-get install -y inetutils-ping && \
    apt-get install -y postgresql-client && \
    echo DONE

Start the two containers:

docker-compose up -d

Connect to the client container:

$ docker exec -it postgresql_client_1 bash
# psql -h postgres --username username -d mydb

It will ask for a password:

Password for user username:

type in password

psql (14.5 (Ubuntu 14.5-0ubuntu0.22.04.1), server 15.1 (Debian 15.1-1.pgdg110+1))
WARNING: psql major version 14, server major version 15.
         Some psql features might not work.
Type "help" for help.

mydb=#

Alternatively, once inside the client docker container we can put the password of the database in an environment variable and then we can run a command that will not wait for any input.

export PGPASSWORD=password
echo "SELECT CURRENT_TIME" | psql -h postgres -U username mydb

Docker Compose for Perl DBD::Pg (PostgreSQL)

These files were used to set up a local development environment for the Perl module DBD::Pg

git clone https://github.com/bucardo/dbdpg.git
cd dbdpg

Put the two files in that folder:

FROM perl:5.36
RUN apt-get update && \
    apt-get install -y libaspell-dev
RUN cpanm --notest DBI Perl::Critic Text::SpellChecker Devel::Cover


# docker build -t dbdpg .
# docker run -it --workdir /opt -v$(pwd):/opt --name dbdpg dbdpg bash
# docker container start -i dbdpg


version: '3.8'
services:
  client:
    build: .
    volumes:
    - .:/opt
    links:
    - mypostgres
    command: tail -f /dev/null
    working_dir: /opt
    environment:
      AUTHOR_TESTING: 1
      RELEASE_TESTING: 1
      DBI_DSN: "dbi:Pg:dbname=test_db;host=mypostgres"
      DBI_PASS: secret
      DBI_USER: test_user
  mypostgres:
    image: postgres:15.2
    environment:
      POSTGRES_USER: test_user
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: test_db
    volumes:
      - database-data:/var/lib/postgresql/data/

volumes:
  database-data:

In one terminal start Docker compose:

docker-compose up

In another terminal connect to the client container

docker exec -it dbdpg_client_1 bash

Now you can run the tests:

perl Makefile.PL
make
AUTHOR_TESTING=1
RELEASE_TESTING=1
make test

And you can also generate test coverage report:

cover -test

Docker Compose MySQL server

version: '3.8'
services:
  client:
    build: .
    volumes:
    - .:/opt
    links:
    - mysql
    command: tail -f /dev/null
  mysql:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: secret

FROM ubuntu:22.04
RUN apt-get update && \
    apt-get install -y curl && \
    apt-get install -y inetutils-ping && \
    apt-get install -y mysql-client && \
    echo DONE

docker-compose up -d
docker exec -it mysql_client_1  bash
ping mysql
# mysql -h mysql --password=secret
mysql> SELECT CURRENT_TIMESTAMP;
mysql> exit
# echo "SELECT CURRENT_TIMESTAMP" | mysql -h mysql --password=secret

Python with Docker

Python CLI in Docker - curl.py

This is a command line script, a very basic implementation of curl in Python. In order to run this we need Python and the requests package to be installed.

#!/usr/bin/python3

import requests
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('url',                      help='The url to fetch')
parser.add_argument('-I',  action='store_true', help='Show headers only')
args = parser.parse_args()

res = requests.get(args.url)
if args.I:
    for k in res.headers.keys():
        print(f"{k} = {res.headers[k]}")
    exit()

print(res.text)

Python CLI in Docker - Dockerfile

FROM python:3.8
RUN pip3 install requests
COPY curl.py .
ENTRYPOINT ["python3", "curl.py"]
$ docker build -t mydocker .

$ docker run --rm mydocker https://httpbin.org/get

This is a simple implementation of a curl-like script in Python, wrapped in a Docker container. First build the image and then you can run the script.

Docker: Python Development mode with mounted directory

FROM python:3.8
RUN pip3 install requests
# COPY curl.py .
# ENTRYPOINT ["python3", "curl.py"]
$ docker build -t mydocker .

$ docker run --rm -v $(pwd):/opt/ mydocker python /opt/curl.py  https://httpbin.org/get
  • --rm to remove the container when we stopped it.

  • -v $(pwd):/opt/ to map the current directory on the host system to the /opt directory inside the container

  • mydocker is the name of the image

  • After that we can run any Python program.

  • You can edit the file on your host system (with your IDE) and run it on the command line of the Docker container.

  • This works on Linux and Mac OSX. On Windows you might need to spell out the current working directory yourself.

Flask application

In this simple Flask web application we have 3 files: app.py, a template, and the requirements.txt.

from flask import Flask, request, render_template
app = Flask(__name__)

@app.route("/")
def hello():
    return render_template('echo.html')

@app.route("/echo", methods=['POST'])
def echo():
    return render_template('echo.html', text=request.form['text'])

{% embed include file="src/examples/flask-development/templates/echo.html)

flask
pytest

Flask development

FROM python:3.8

COPY requirements.txt /opt/
RUN pip3 install -r /opt/requirements.txt

WORKDIR /opt

ENV FLASK_APP=app
ENV FLASK_DEBUG=1
CMD ["flask", "run", "--host", "0.0.0.0", "--port", "5000"]

$ docker build -t mydocker .
$ docker run -it --name dev --rm -p5001:5000 -v $(pwd):/opt/  mydocker
  • -it to be in interactive mode so we can see the log on the command line and we can easily stop the development container.

  • --name dev we set the name of the container to be dev in case we would like to access it.

  • --rm remove the container after it is finished.

  • -p5001:5000 map port 5001 of the host computer to port 5000 of the container.

  • -v $(pwd):/opt/ map the current working directory of the host to /opt in the container.

  • Access via http://localhost:5001/

Docker: Flask + uwsgi

FROM ubuntu:20.04
RUN apt-get update                           && \
    apt-get upgrade -y                       && \
    apt-get install -y python3               && \
    apt-get install -y python3-pip           && \
    DEBIAN_FRONTEND="noninteractive"   apt-get install -y uwsgi                && \
    apt-get install -y uwsgi-plugin-python3  && \
    echo done

# The DEBIAN_FRONTEND config needed for tzdata installation

COPY requirements.txt .
RUN pip3 install -r requirements.txt
RUN rm -f requirements.txt


COPY . /opt/
COPY uwsgi.ini /etc/uwsgi/apps-enabled/

WORKDIR /opt

CMD service uwsgi start; tail -F /var/log/uwsgi/app/uwsgi.log

{% embed include file="src/examples/flask-uwsgi/uwsgi.ini)

Flask with Redis

from flask import Flask, request, render_template
import redis

app = Flask(__name__)

red = redis.Redis(host='redis', port=6379, db=0)

@app.route("/")
def main():
    return render_template('red.html')

@app.route("/save", methods=['POST'])
def save():
    field = request.form['field']
    value = request.form['value']
    ret = red.set(field, value)
    app.logger.debug(ret)
    new_value = red.get(field)
    return render_template('red.html', saved=1, value=new_value)

@app.route("/get", methods=['POST'])
def get():
    field = request.form['field']
    value = red.get(field)
    if value is None:
        return render_template('red.html', field=field, value="Not defined yet")
    str_value = value.decode('utf-8')
    return render_template('red.html', field=field, value=str_value)

@app.route("/keys", methods=['GET'])
def keys():
    all_keys = red.keys("*")
    return render_template('red.html', fields=all_keys)

<h1>Flask + Redis</h1>

<div>
<a href="/">home</a> <a href="/keys">keys</a>
</div>

Type in a key and a value and save it to Redis.
<form action="/save" method="POST">
<input name="field">
<input name="value">
<input type="submit" value="Save">
</form>

Type in a key and fetch the value from Redis.
<form action="/get" method="POST">
<input name="field">
<input type="submit" value="Get">
</form>

{% if saved %}
<b>saved</b>
{{ value.decode('utf8') }}
{% endif %}

{% if field %}
  The value of <b>{{ field }}</b> is <b>{{ value }}</b>
{% endif %}


{% if fields %}
  <h2>Keys</h2>
  <ul>
  {% for field in fields %}
     <li>{{ field.decode('utf8') }}</li>
  {% endfor %}
  </ul>
{% endif %}

flask
pytest
redis

Docker compose Flask and Redis

pip install docker-compose
version: '3.8'
services:
  web:
    build: .
    ports:
    - "5001:5000"
    volumes:
    - .:/opt
    links:
    - redis
  redis:
    image: redis:6.0.8
FROM python:3.8

COPY requirements.txt /opt/
RUN pip3 install -r /opt/requirements.txt

WORKDIR /opt

ENV FLASK_APP=app
ENV FLASK_DEBUG=1
CMD  ["flask", "run", "--host", "0.0.0.0", "--port", "5000"]

docker-compose up
  • http://localhost:5001/

Python Flask and MongoDB

from flask import Flask, request, render_template, abort, redirect, url_for
import pymongo
import datetime

app = Flask(__name__)

config = {
    "username": "root",
    "password": "Secret",
    "server": "mongo",
}

connector = "mongodb://{}:{}@{}".format(config["username"], config["password"], config["server"])
client = pymongo.MongoClient(connector)
db = client["demo"]

@app.route("/")
def main():
    return render_template('main.html')


@app.route("/save", methods=['POST'])
def save():
    entry = {
        "name": request.form['name'],
        "email": request.form['email'],
        "id": request.form['idnum'],
        "when": datetime.datetime.now(),
    }
    res = db.people.insert_one(entry)
    db.people.create_index("id", unique=True)
    return render_template('main.html')

@app.route("/list", methods=['GET'])
def list_people():
    count = db.people.count_documents({})
    people = db.people.find({})
    return render_template('list.html', count=count, people=people)

@app.route("/person/<idnum>", methods=['GET'])
def person(idnum):
    person = db.people.find_one({ 'id': idnum })
    if not person:
        abort(404)
    return render_template('person.html', person=person)


@app.errorhandler(404)
def not_found(error):
    app.logger.info(error)
    return render_template('404.html'), 404

@app.route("/get", methods=['POST'])
def get():
    name = request.form['name']
    doc = db.people.find_one({'name' : {'$regex': name}})
    if doc:
        app.logger.info(doc)
        return redirect(url_for('person', idnum=doc["id"]) )
    return render_template('main.html', error="Could not find that person")
flask
pytest
PyMongo
{% include 'incl/header.html' %}

<h2>Add a person</h2>
<form action="/save" method="POST">
  <table>
    <tr><td>Name: </td><td><input name="name"></td></tr>
    <tr><td>Email: </td><td><input name="email"></td></tr>
    <tr><td>ID: </td><td><input name="idnum"></td></tr>
  </table>
<input type="submit" value="Save">
</form>

<h2>List people by name</h2>
<form action="/get" method="POST">
Name: <input name="name">
<input type="submit" value="Get">
</form>

{% if error %}
<div style="color:red;">
  {{ error }}
</div>
{% endif %}

{% include 'incl/footer.html' %}
{% include 'incl/header.html' %}

<h2>List People</h2>
Total: {{ count }}

<ul>
{% for person in people %}
  <li><a href="/person/{{ person.id }}">{{ person.name }}</a></li>
{% endfor %}
</ul>

{% include 'incl/footer.html' %}
{% include 'incl/header.html' %}

<h2>{{ person.name }}</h2>

Email: {{ person.email }}<br>
ID: {{ person.id }}<br>
Date:: {{ person.when }}<br>

{% include 'incl/footer.html' %}
{% include 'incl/header.html' %}

<h2>Not Found</h2>

{% include 'incl/footer.html' %}
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Title</title>
</head>
<body>
<h1>Flask + MongoDB</h1>
<a href="/">home</a> <a href="/list">list</a>

<hr>
v1
</body>
</html>

Docker Compose Python Flask and MongoDB

pip install docker-compose
FROM python:3.8

COPY requirements.txt /opt/
RUN pip3 install -r /opt/requirements.txt

WORKDIR /opt
#COPY . .

ENV FLASK_APP=app
ENV FLASK_DEBUG=1
CMD  ["flask", "run", "--host", "0.0.0.0", "--port", "5000"]

version: '3.8'
services:
  web:
    build: .
    ports:
    - "5001:5000"
    volumes:
    - .:/opt
    links:
    - mongo
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: Secret
    volumes:
      - mongo-data:/data/db
      - mongo-configdb:/data/configdb
volumes:
  mongo-data:
  mongo-configdb:
docker-compose up
  • http://localhost:5001/

Python, Flask and Pulsar

docker run -it -p 6650:6650 -p 8080:8080  apachepulsar/pulsar:2.4.1 bin/pulsar standalone
docker build -t mydocker .
docker run --rm -it mydocker bash
version: '3.7'
services:
  web:
    build: .
    ports:
    - "5001:5000"
    volumes:
    - .:/opt
    links:
    - pulsar
  pulsar:
    image: apachepulsar/pulsar:2.5.2
    container_name: my-pulsar
    expose:
       - 8080
       - 6650
    command: >
      /bin/bash -c
      "bin/apply-config-from-env.py conf/standalone.conf
      && bin/pulsar standalone"
#  dashboard:
#    image: apachepulsar/pulsar-dashboard
#    depends_on:
#      - pulsar
#    ports:
#      - "5002:80"
#    environment:
#      - SERVICE_URL=http://pulsar:8080
FROM ubuntu:20.04
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y python3

RUN apt-get install -y python3-pip

COPY requirements.txt /opt/
RUN pip3 install -r /opt/requirements.txt

WORKDIR /opt
CMD FLASK_APP=app FLASK_DEBUG=1 flask run --host 0.0.0.0 --port 5000
import pulsar

client = pulsar.Client('pulsar://localhost:6650')
consumer = client.subscribe('my-topic', subscription_name='my-sub')

# while True:
#     msg = consumer.receive()
#     print("Received message: '{}'".format(msg.data()))
#     consumer.acknowledge(msg)

client.close()

Python and Pulsar

import pulsar
import time
from mytools import get_logger, topic

def receive():
    logger = get_logger('pulsar')
    logger.info('Consumer starting')
    time.sleep(20)
    logger.info('Consumer really starting')

    try:
        client = pulsar.Client('pulsar://my-pulsar:6650')
        consumer = client.subscribe(topic, 'my-subscription')
    except Exception:
        logger.exception("Consumer could not connect to pulsar")
    logger.info("Consumer connected")


    while True:
        msg = consumer.receive()
        try:
            logger.info("Received: {}: {}".format(msg.data(), msg.message_id()))
            consumer.acknowledge(msg)
        except Exception as err:
            logger.error(f"Exception {err}")


receive()
version: '3.7'
services:
  producer:
    build: .
    volumes:
      - .:/opt
    links:
      - pulsar
    #command: python producer.py
    command: tail -f /dev/null
  consumer:
    build: .
    volumes:
      - .:/opt
    links:
      - pulsar
    command: tail -f /dev/null
    #command: python consumer.py
  pulsar:
    image: apachepulsar/pulsar:2.5.2
    container_name: my-pulsar
    expose:
       - 8080
       - 6650
    command: >
      /bin/bash -c
      "bin/apply-config-from-env.py conf/standalone.conf
      && bin/pulsar standalone"
FROM python:3.8
COPY requirements.txt /opt/
RUN pip3 install -r /opt/requirements.txt

WORKDIR /opt
First
Second
Third
Fourth
import logging
import os

def get_logger(name):
    log_file = name + '.log'
    log_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)-10s - %(message)s')

    logger = logging.getLogger(__name__)
    logger.setLevel(logging.INFO)

    sh = logging.StreamHandler()
    sh.setLevel(logging.INFO)
    sh.setFormatter( log_format )
    logger.addHandler(sh)

    #if os.path.exists(log_file):
    #    os.unlink(log_file)
    fh = logging.FileHandler(log_file)
    fh.setLevel(logging.INFO)
    fh.setFormatter( log_format )
    logger.addHandler(fh)

    return logger

topic = 'text'
import pulsar
import time
from mytools import get_logger, topic


def send():
    logger = get_logger('pulsar')
    logger.info("Producer starting")
    time.sleep(20)
    logger.info("Producer really starting")

    filename = 'input.txt'

    try:
        client = pulsar.Client('pulsar://my-pulsar:6650')
        producer = client.create_producer(topic)
    except Exception:
        logger.exception("Producer could not connect to pulsar")
    logger.info("Producer connected")

    with open(filename) as fh:
        for row in fh:
            logger.info(f"Sending {row}")
            producer.send(row.encode('utf-8'))
            time.sleep(1)

send()
pulsar-client

Run:

docker-compose up

and then check the pulsar.log file
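
Because the compose file above keeps both containers alive with tail -f /dev/null, one way to start the scripts by hand (using the service names producer and consumer from that file) is:

docker-compose exec producer python producer.py
docker-compose exec consumer python consumer.py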

Docker: Flask + uwsgi + nginx

Using https://hub.docker.com/r/tiangolo/uwsgi-nginx-flask/

docker build -t myapp .
docker run -it --rm -p5001:80 myapp

Perl with Docker

Docker: Perl Hello World

FROM ubuntu:20.04
CMD perl -E 'say "Hello from Perl"'
$ docker build -t mydocker .
$ docker run -it --rm mydocker
Hello from Perl

Docker: Perl Hello World in script

FROM ubuntu:20.04
COPY hello_world.pl /opt/
CMD  perl /opt/hello_world.pl
use 5.010;
use strict;
use warnings;

say 'Hello World from Perl script';

$ docker build -t mydocker .
$ docker run -it --rm mydocker
Hello World from Perl script

Docker: Perl with I/O

FROM ubuntu:20.04
COPY greetings.pl /opt/
CMD  perl /opt/greetings.pl
use 5.010;
use strict;
use warnings;

print "What is your name? ";
my $name = <STDIN>;
chomp $name;
say "Hello $name, how are you today?";
$ docker build -t mydocker .

We need to tell Docker that this is an interactive process

docker run -it --rm mydocker

What is your name? Foo
Hello Foo, how are you today?

Docker Perl Dancer hello world app

Developing Perl code in Docker

$ docker run -v /Users/gabor/work/mydocker:/opt/  mydocker perl /opt/hw.pl
  • Mount a directory of the host OS to a directory in the Docker container.
  • Run the code

Install Perl Modules

Install a Perl module using apt-get

FROM ubuntu:20.04
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y libtest-www-mechanize-perl
use 5.010;
use strict;
use warnings;

use WWW::Mechanize;

my ($url) = @ARGV;
die "Usage: $0 URL\n" if not $url;

my $w = WWW::Mechanize->new;

$w->get($url);
say $w->content;

$ docker run -v /Users/gabor/work/mydocker:/opt/  mydocker perl /opt/get.pl
Usage: /opt/get.pl URL
docker run -v /Users/gabor/work/mydocker:/opt/  mydocker perl /opt/get.pl http://perlmaven.com/

Docker networking

Docker network list

docker network list
NETWORK ID          NAME                DRIVER              SCOPE
234aa213ed9a        bridge              bridge              local
63a0fd629d21        host                host                local
37a165457dad        none                null                local
docker network create abc      creates a bridge called abc
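
Containers attached to the same user-defined bridge can reach each other by name. A minimal sketch, assuming the nginx image and the arbitrary container names web and box:

docker network create abc
docker run -d --name web --network abc nginx
docker run --rm -it --name box --network abc ubuntu:20.04 bash

# inside the box container:
apt-get update && apt-get install -y curl
curl http://web/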

PostgreSQL

Run PostgreSQL in Docker

docker run --name pg1 -e POSTGRES_PASSWORD=secret -d postgres
docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS               NAMES
8bfa343f4f3e        postgres            "docker-entrypoint.s…"   About a minute ago   Up About a minute   5432/tcp            pg1
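
The postgres image creates a default superuser called postgres, so a quick way to get a database prompt is to run psql inside the same container:

docker exec -it pg1 psql -U postgres

# inside psql:
\l
\q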

Docker MongoDB

MongoDB in Docker

Start the Docker container

docker run -it -d --name mymongo mongo:latest

Connect to the Docker container

docker exec -it mymongo bash

Inside the container start the client and send commands

mongosh
db.students.insertOne({"name": "foo", "grades" : { "A" : 1, "B" : 2, "c" : 3, "d" : 4 }});
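
To verify the insert from the host, mongosh can also be called non-interactively with --eval (the document above went into the default test database):

docker exec -it mymongo mongosh --eval 'db.students.find()'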

Deploy

Stand-alone Application to deploy

  • A stand-alone Docker image that exposes a single port
from flask import Flask, request, jsonify
import os
import datetime

version = 1

app = Flask(__name__)

filename = 'counter.txt'
dirname = os.environ.get('COUNTER_DIR')
if dirname:
    filename = os.path.join(dirname, filename)

@app.route("/", methods=['GET'])
def main():
    now = datetime.datetime.now()

    counter = 0
    if os.path.exists(filename):
        with open(filename) as fh:
            counter = int(fh.read())
    counter += 1
    with open(filename, 'w') as fh:
        fh.write(str(counter))

    return f'''
        <html>
          <head><title>Demo v{version}</title></head>
          <body>Demo v{version} at {now} count: {counter}</body>
        </html>
    '''

import app


def test_app():
    web = app.app.test_client()

    rv = web.get('/')
    assert rv.status == '200 OK'
    assert b'<body>Demo' in rv.data
flask
FROM python:3.7

COPY requirements.txt /opt/
RUN pip3 install -r /opt/requirements.txt


COPY app.py /opt/

WORKDIR /opt
CMD ["flask", "run", "--host", "0.0.0.0", "--port", "5000"]

__pycache__
.git
.pytest_cache
counter.txt

Locally

docker build -t flasker .
docker run --rm  -p5000:5000 flasker
http://localhost:5000/

Digital Ocean

Deployment on Digital Ocean

  • Digital Ocean
  • Go to Marketplace, search for Docker
  • Click on Create Docker Droplet
  • Basic $5/month
  • New York is fine
  • Select SSH key
ssh root@remotehost mkdir /data
DOCKER_HOST=ssh://root@remotehost ./deploy.sh
docker build -t flasker .
docker container stop flask --time 0
docker container rm flask
docker run -d --name flask -v/data:/data --env COUNTER_DIR=/data --restart unless-stopped  -p5000:5000 flasker

  • We are going to use the /data directory on the host system as our data volume

  • We use the -d flag to run the container in the background, as a daemon

  • We use --restart unless-stopped to tell Docker to restart the container after a reboot (a way to verify these settings is sketched after this list)

  • We create a volume on the disk

  • restart policy
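
A sketch of how to verify these settings on the server, using docker inspect with Go-template formatting against the flask container created by deploy.sh:

docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' flask
docker inspect -f '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ end }}' flask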

Multi-container Application to deploy

  • A multi-container Docker app using Docker Compose

  • Create Droplet based on Docker

  • ssh to it, apt-get update, apt-get dist-upgrade, reboot

  • DOCKER_HOST="ssh://user@remotehost" docker-compose up -d

Re-deploy

  • DOCKER_HOST="ssh://user@remotehost" docker-compose build web
  • DOCKER_HOST="ssh://user@remotehost" docker-compose up -d web

Digital Ocean with Docker compose

Linode

Appendix

Links

Companies using Docker in Israel

I know that most of the readers of these slides are from around the world, but I run most of my courses in Israel, so I have a special interest in knowing which companies use Docker and what job titles the people who use it have. At some point I might create similar pages for other countries as well.

Docker Toolbox

Legacy system

Docker Resources

Docker Whalesay

Go to Docker Hub, search for whalesay, and note that among the many hits there is one called docker/whalesay. We'll use that one.

$ docker run docker/whalesay cowsay hello world

Unable to find image 'docker/whalesay:latest' locally
latest: Pulling from docker/whalesay
e190868d63f8: Pull complete
909cd34c6fd7: Pull complete
0b9bfabab7c1: Pull complete
a3ed95caeb02: Pull complete
00bf65475aba: Pull complete
c57b6bcc83e3: Pull complete
8978f6879e2f: Pull complete
8eed3712d2cf: Pull complete
Digest: sha256:178598e51a26abbc958b8a2e48825c90bc22e641de3d31e18aaf55f3258ba93b
Status: Downloaded newer image for docker/whalesay:latest
 _____________
< hello world >
 -------------
    \
     \
      \
                    ##        .
              ## ## ##       ==
           ## ## ## ##      ===
       /""""""""""""""""___/ ===
  ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
       \______ o          __/
        \    \        __/
          \____\______/

Docker ps after whalesay

$ docker ps -as

CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS                      PORTS               NAMES
59c99df0177a        docker/whalesay     "cowsay hello world"   36 minutes ago      Exited (0) 23 minutes ago                       loving_wescoff      0 B (virtual 247 MB)
f6239f10a6ad        hello-world         "/hello"               About an hour ago   Exited (0) 58 minutes ago                       lucid_snyder        0 B (virtual 1.84 kB)
$ docker images

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
hello-world         latest              48b5124b2768        6 weeks ago         1.84 kB
docker/whalesay     latest              6b362a9f73eb        21 months ago       247 MB

Docker whale (create Docker image)

Create Dockerfile with the following content:

FROM docker/whalesay:latest
RUN apt-get -y update && apt-get install -y fortunes
CMD /usr/games/fortune -a | cowsay
$ docker build -t docker-whale .
...
$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
docker-whale        latest              d5cf6bf32c0f        24 seconds ago      277 MB
hello-world         latest              48b5124b2768        6 weeks ago         1.84 kB
docker/whalesay     latest              6b362a9f73eb        21 months ago       247 MB

The command docker ps -a shows nothing new.

Run Docker whale

$ docker run docker-whale

Volumes

docker run --mount source=myvol,target=/data --rm -it busybox

docker volume ls --format "{{.Driver}}  {{.Name}} {{.Mountpoint}}"

docker volume create myvol
    Creates /var/lib/docker/volumes/myvol

docker volume ls
docker volume inspect myvol  # Returns a JSON with information about the volume
docker volume rm myvol


docker-compose up
docker-compose rm

docker system df

  • Show docker disk usage
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              6                   2                   3.464GB             1.579GB (45%)
Containers          4                   0                   71.02kB             71.02kB (100%)
Local Volumes       2                   2                   638.3MB             0B (0%)
Build Cache         0                   0                   0B                  0B

docker system prune

  • Remove all the unused data
  • See the flags
--all
--volumes
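
For example, to also remove unused images and anonymous volumes (be careful, this deletes data):

docker system prune --all --volumes
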
version: '3.7'
services:
  box:
    image: ubuntu:latest
    volumes:
        - .:/localdir
        - my-data:/mydata
    entrypoint:
        - tail
        - -f
        - /dev/null


volumes:
  my-data:

Docker history

Each Docker image is built up of layer upon layer.

The docker history command can show you these layers.

docker history IMAGE

Here you can see that the Ubuntu image we have downloaded from the Docker Hub has 5 layers.

$ docker history ubuntu:20.04
IMAGE               CREATED             CREATED BY                                      SIZE
74435f89ab78        4 weeks ago         /bin/sh -c #(nop)  CMD ["/bin/bash"]            0B
<missing>           4 weeks ago         /bin/sh -c mkdir -p /run/systemd && echo 'do…   7B
<missing>           4 weeks ago         /bin/sh -c set -xe   && echo '#!/bin/sh' > /…   811B
<missing>           4 weeks ago         /bin/sh -c [ -z "$(apt-get indextargets)" ]     1.01MB
<missing>           4 weeks ago         /bin/sh -c #(nop) ADD file:b2342c7e6665d5ff3…   72.8MB

Docker history - multiple layers

If you run the same command on the mydocker image we have just created you can see that it has 2 more layers. Each RUN command created a layer.

Layers are created separately, so having multiple layers makes our development process faster. However, having too many layers is not recommended, so once in a while we merge the RUN instructions together and rebuild the image to have fewer layers. We'll talk about this later.

docker history mydocker
IMAGE               CREATED             CREATED BY                                      SIZE
77dfbe63aa19        2 hours ago         /bin/sh -c apt-get install -y curl              16.2MB
05d6b032b3c6        2 hours ago         /bin/sh -c apt-get update                       22.3MB
74435f89ab78        4 weeks ago         /bin/sh -c #(nop)  CMD ["/bin/bash"]            0B
<missing>           4 weeks ago         /bin/sh -c mkdir -p /run/systemd && echo 'do…   7B
<missing>           4 weeks ago         /bin/sh -c set -xe   && echo '#!/bin/sh' > /…   811B
<missing>           4 weeks ago         /bin/sh -c [ -z "$(apt-get indextargets)" ]     1.01MB
<missing>           4 weeks ago         /bin/sh -c #(nop) ADD file:b2342c7e6665d5ff3…   72.8MB

Kubernetes

Install Minikube locally

Install kubectl

kubectl version --client

Commands

Start Minikube

Once installed we can easily start the Minikube service:

minikube start

Stop Minikube

At the end, when we are done with experimenting or development, we can stop Minikube:

minikube stop

Minikube status

  • On the command line we can get quick status information about Minikube
minikube status

Minikube dashboard

minikube dashboard

Kubectl list pods

kubectl get pods
kubectl get pods -o wide

Simple Kubernetes YAML file

apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: testpod
    image: alpine:3.5
    command: ["ping", "8.8.8.8"]

Kubernetes: Install (apply) YAML configuration file

kubectl apply -f pod.yaml
kubectl get pods
kubectl logs demo
kubectl delete -f pod.yaml

Other Kubernetes

kubectl get po -A
kubectl cluster-info

List deployments

kubectl get deployments.apps
kubectl get deployment NAME -o yaml

  • A "pod" is an abstraction of K8s of a Docker container.
  • A "deployment" is a blueprint for a "pod" and we can tell k8s how many of the same pod we would like to have.
  • "Service" public port mapped to a pod, communication between pods, load balancer for pod replications
  • Database cannot be replicated by a "Service" because they have state.
  • "StatefulSet"
  • "Ingress" to handle the requests from the external world
  • "ConfigMap"
  • "Volumes" - to store persistent data

On each node there are 3 processes:

  • container runtime (Docker)
  • Kubelet (interacts with both the container runtime and the machine itself)
  • KubeProxy forwards the requests

The Master node runs 4 processes:

  • API Server (cluster gateway) - acts as a gatekeeper for authentication
  • Scheduler - decides where to run the next pod based on the available resources (then tells the Kubelet on the node to run the pod)
  • Controller Manager
  • etcd - a key-value store, the brain of the cluster

A Kubernetes cluster can have several master nodes.

Add autocomplete

kubectl delete deployments.apps hello-minikube

Single container Python app in Kubernetes

from flask import Flask, request
app = Flask(__name__)

VERSION = "1.00"

@app.route("/")
def main():
    return f'''
     VERSION {VERSION}<br>
     <form action="/echo" method="GET">
         <input name="text">
         <input type="submit" value="Echo">
     </form>
     '''

@app.route("/echo")
def echo():
    return "You said: " + request.args.get('text', '')

If we have Python and Flask installed we can run the application locally:

FLASK_APP=echo_get flask run
  • We can build a Docker image based on this Dockerfile: (myflask is just an arbitrary name)
FROM python:3.7
RUN pip install flask
ENV FLASK_APP echo_get
WORKDIR /opt
COPY  echo_get.py .
CMD ["flask", "run", "--host", "0.0.0.0"]
docker build -t myflask:latest -f Dockerfile_echo_get .
docker build -t myflask:1.00 -f Dockerfile_echo_get .

and then we can run it: (tryflask is just an arbitrary name)

docker run --rm -it -p5000:5000 --name tryflask myflask

Add the docker image to the local Minikube docker registry:

minikube image load myflask:latest

List the images there

minikube image list
apiVersion: v1
kind: Pod
metadata:
  name: echo-get
spec:
  containers:
    - name: echo-get-container
      image: myflask
      imagePullPolicy: Never
      ports:
      - protocol: TCP
        containerPort: 5000
kubectl apply -f echo_get.yaml

Kubernetes resources

Create a Kubernetes deployment based on a Docker image:

kubectl create deployment nginx-depl --image nginx

Check it:

kubectl get deployments.apps
kubectl get pod
kubectl get replicasets.apps
kubectl edit deployments.apps nginx-depl

kubectl exec -it POD -- bash

Usually we'll deal with deployments and not directly with pods or replicasets.

eval $(minikube docker-env)

To see the STDOUT of the container (pod): (using the correct name of your pod)

kubectl logs echo-get-5b44b98785-qwjjv

To access the web application: (using the IP address from the previous output)

minikube ssh
curl http://172.18.0.5:5000
minikube service echo-get

service share port

kubectl port-forward echo-get-5b44b98785-qwjjv 5000:5000

tail the stdout

kubectl logs -f echo-get-5b44b98785-qwjjv

TODO mount external disk

minikube mount $(pwd):/external
minikube ssh
kubectl config get-contexts

kubectl config current-context
kubectl config use-context minikube
kubectl config view
kubectl config use-context do-nyc1-k8s-1-21-2-do-2-nyc1-1626880181820
kubectl get nodes
kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.4 --port=8080
kubectl get deployments
kubectl get pods
kubectl expose pod hello-minikube --type=NodePort
minikube service hello-minikube --url
minikube service hello-minikube
kubectl describe svc hello-minikube

kubectl delete pods hello-minikube
kubectl delete service hello-minikube
apiVersion: v1
kind: Service
metadata:
  name: echo-get-service
  labels:
    run: echo-get-service
spec:
  ports:
  - port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    run: echo-get-deployment
  type: NodePort
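
Assuming the Service above is saved in a file called echo_get_service.yaml (an arbitrary file name), we can apply it and ask Minikube for its URL:

kubectl apply -f echo_get_service.yaml
kubectl get services
minikube service echo-get-service --url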

Kubernetes on Digital Ocean

Digital Ocean Docker registry

doctl registry create szabgab-demo

The URL of the registry is then going to be:

registry.digitalocean.com/szabgab-demo

In order to login to the Docker registry:

sudo snap connect doctl:dot-docker
doctl registry login

Locally build the docker image (as earlier) so we can try it:

docker build -t myflask:1.00 -f Dockerfile_echo_get .
  • Tag it to the Docker registry of Digital Ocean
docker tag myflask:1.00 registry.digitalocean.com/szabgab-demo/myflask:1.00

Then push to the registry

docker push registry.digitalocean.com/szabgab-demo/myflask:1.00

There is a web-based integration between the Kubernetes cluster and the Docker registry. See this explanation: How to Use Your Private DigitalOcean Container Registry with Docker and Kubernetes

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-get-deployment
spec:
  selector:
    matchLabels:
      run: echo-get-pod
  replicas: 2
  template:
    metadata:
      labels:
        run: echo-get-pod
    spec:
      containers:
        - name: echo-get-container
          #image: python:3.9
          #command: ["tail", "-f", "/etc/passwd"]
          #myflask:1.00
          image: registry.digitalocean.com/szabgab-demo/myflask:1.00
          imagePullPolicy: Always
          #imagePullPolicy: Never
          ports:
            - containerPort: 5000

kubectl apply -f deploy_echo_get.yaml
  • "ssh" to the docker container running in the pod on Kubernetes.
kubectl exec -it echo-get-deployment-bb5bd946-p6k6m -- bash

add load balancers

apiVersion: v1
kind: Service
metadata:
  name: echo-get-loadbalancer
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-size-slug: "lb-small"
spec:
  type: LoadBalancer
  selector:
    run: echo-get-pod
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 5000
kubectl apply -f load_balancer.yaml
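
Provisioning the Digital Ocean load balancer takes a few minutes. One way to watch for the external IP and then try it (EXTERNAL-IP is a placeholder for the address that shows up):

kubectl get service echo-get-loadbalancer --watch
curl http://EXTERNAL-IP/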

Kubernetes hierarchy

  • Clusters
  • Namespaces (inside a cluster)
  • Users
  • Contexts - a context is a Cluster/Namespace/User triplet (I think)

In the same cluster create 2 namespaces:

  • development
  • production
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "development",
    "labels": {
      "name": "development"
    }
  }
}
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "production",
    "labels": {
      "name": "production"
    }
  }
}
kubectl apply -f development-namespace.yml
kubectl apply -f production-namespace.yml
kubectl get namespaces --show-labels

We still only have the main minikube context:

kubectl config get-contexts

List the available clusters, users and namespaces:

kubectl config get-clusters
kubectl config get-users
kubectl get namespaces --show-labels

Create context for each environment:

kubectl config set-context development --namespace=development --cluster=minikube --user=minikube
kubectl config set-context production --namespace=production --cluster=minikube --user=minikube

Switch to context:

kubectl config use-context development

Alternatively we could add --context development at the end of the individual commands.
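
For example, the same command can be pointed at either namespace without switching contexts:

kubectl get pods --context development
kubectl get pods --context production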

get the current context

kubectl config current-context

kubectl apply -f development-config-map.yml --context development

Make sure the right configuration is used in each namespace

kubectl apply -k base
kubectl delete -k base

kubectl kustomize base/

Open Source

Python Flask

  • Flask
  • Has a file called CONTRIBUTING.rst
$ git clone https://github.com/pallets/flask.git
$ cd flask
$ docker run -it --name flask-dev -w /opt -v$(pwd):/opt python:3.11 bash

# python -m pip install -U pip setuptools wheel
# pip install -r requirements/dev.txt && pip install -e .
# pytest
$ docker start -i flask-dev

Python requests

R data.table

$ git clone git@github.com:Rdatatable/data.table.git
$ cd data.table
$ docker run -it --name data-table --workdir /opt -v$(pwd):/opt r-base:4.2.3 bash

# apt-get update
# apt-get install -y pandoc curl libcurl4-gnutls-dev texlive-latex-base texlive-fonts-extra texlive-latex-recommended texlive-fonts-recommended
# Rscript -e 'install.packages(c("knitr", "rmarkdown", "pandoc", "curl", "bit64", "bit", "xts", "nanotime", "zoo", "R.utils", "markdown"))'
# R CMD build .

Check the version number in the name of the generated tar.gz file:

# ls -l
# R CMD check data.table_1.14.9.tar.gz

This works without having to check the version number manually:

# R CMD check $(ls -1 data.table_*)
$ docker container start -i data-table

R yaml

PHP Twig

$ git clone git@github.com:twigphp/Twig.git
$ docker run -it --rm --workdir /opt -v$(pwd):/opt ubuntu:22.10 bash

# apt-get update
# DEBIAN_FRONTEND=noninteractive TZ=Etc/UTC apt-get -y install tzdata
# apt-get install -y php-cli composer php-curl phpunit

# composer install
# phpunit

Plagiarism-checker-Python

$ git clone https://github.com/Kalebu/Plagiarism-checker-Python
$ cd Plagiarism-checker-Python
$ docker run -it --name plagiarism-checker-python-dev -w /opt -v %cd%:/opt python:latest bash
$ pip3 install -r requirements.txt

running the project

$ python3 app.py
('john.txt', 'juma.txt', 0.5465972177348937)
('fatma.txt', 'john.txt', 0.14806887549598566)
('fatma.txt', 'juma.txt', 0.18643448370323362)

Cosmo-Tech

$ git clone https://github.com/Cosmo-Tech/CosmoTech-Acceleration-Library.git
$ cd CosmoTech-Acceleration-Library

For Windows: CMD:

$ docker run -it --name cosmotech-acceleration-library-dev -w /opt -v %cd%:/opt python:3.11 bash

PowerShell:

$ docker run -it --name cosmotech-acceleration-library-dev -w /opt -v ${PWD}:/opt python:3.11 bash

For Linux:

$ docker run -it --name cosmotech-acceleration-library-dev -w /opt -v $(pwd):/opt python:3.11 bash
# pip install -r requirements.txt
# pip install pytest
# pytest
$ docker container start -i cosmotech-acceleration-library-dev

mobility

$ git clone https://github.com/mobility-team/mobility.git
$ cd mobility

For Windows: CMD:

$ docker run -it --name mobility-dev -w /opt -v %cd%:/opt python:3.9 bash

PowerShell:

$ docker run -it --name mobility-dev -w /opt -v ${PWD}:/opt python:3.9 bash

For Linux:

$ docker run -it --name mobility-dev -w /opt -v $(pwd):/opt python:3.9 bash
# pip install -r requirements.txt && pip install -e .
# pip install pytest
# pytest
$ docker container start -i mobility-dev

PHX

$ git clone https://github.com/PH-Tools/PHX.git
$ cd PHX

For Windows: CMD:

$ docker run -it --name phx-dev -w /opt -v %cd%:/opt python:3.7 bash

PowerShell:

$ docker run -it --name phx-dev -w /opt -v ${PWD}:/opt python:3.7 bash

For Linux:

$ docker run -it --name phx-dev -w /opt -v $(pwd):/opt python:3.7 bash
# pip install -r dev-requirements.txt && pip install -e .
# pytest
$ docker container start -i phx-dev

cybrid-api-id-python

$ git clone https://github.com/Cybrid-app/cybrid-api-id-python.git
$ cd cybrid-api-id-python

For Windows: CMD:

$ docker run -it --name cybrid-api-id-python-dev -w /opt -v %cd%:/opt python:3.11 bash

PowerShell:

$ docker run -it --name cybrid-api-id-python-dev -w /opt  -v ${PWD}:/opt python:3.11 bash

For Linux:

$ docker run -it --name cybrid-api-id-python-dev -w /opt -v $(pwd):/opt python:3.11 bash
# pip install -r requirements.txt && pip install pytest
# pytest
$ docker container start -i cybrid-api-id-python-dev

pymx2

$ git clone https://github.com/vpaeder/pymx2.git
$ cd pymx2

For Windows: CMD:

$ docker run -it --name pymx2-dev -w /opt -v %cd%:/opt python:3.11 bash

PowerShell:

$ docker run -it --name pymx2-dev -w /opt  -v ${PWD}:/opt python:3.11 bash

For Linux:

$ docker run -it --name pymx2-dev -w /opt -v $(pwd):/opt python:3.11 bash
# python setup.py install
# python -m unittest
$ docker container start -i pymx2-dev

TOML Kit

Steps to run tests on a docker container:

git clone --recurse-submodules https://github.com/sdispater/tomlkit.git
cd tomlkit
docker run -it --name toml -w /opt -v$(pwd):/opt python:3.11 bash
pip install poetry
pip install pytest
poetry install
poetry run pytest -q tests

Dialogy

Steps to run tests on a docker container:

git clone https://github.com/skit-ai/dialogy.git
cd dialogy
docker run -it --name dialogy -w /opt -v$(pwd):/opt python:3.11 bash
pip install poetry
poetry install
make install
make test

from Readme.md -> Contributors

git clone git@github.com:skit-ai/dialogy.git
cd dialogy
docker run -it --name dialogy_test -w /opt -v <working directory>\dialogy:/opt python:3.11 bash
# Activate your virtualenv, you can also let poetry take care of it.
$ pip install poetry   # a PR was opened to add this command: https://github.com/skit-ai/dialogy/pull/194
$ poetry install
$ make test

Teiphy

Steps to run tests on a docker container:

git clone https://github.com/jjmccollum/teiphy.git
cd teiphy
docker run -it --name teiphy -w /opt -v$(pwd):/opt python:3.11 bash
pip install poetry
poetry install
poetry run pytest

Python Automation Framework

Steps to run tests on a docker container:

git clone https://github.com/mreiche/python-automation-framework.git
cd python-automation-framework
docker run -it --name python-automation-framework -w /opt -v$(pwd):/opt python:3.11 bash
pip install pytest
pip install -r requirements.txt
PYTHONPATH="." pytest --numprocesses=4 --cov=paf test

Python Bitcoinlib

Steps to run tests on a docker container:

git clone https://github.com/petertodd/python-bitcoinlib.git
cd python-bitcoinlib
docker run -it --name python-bitcoinlib -w /opt -v$(pwd):/opt python:3.11 bash
python3 -m unittest discover

Overloaded Iterables

Steps to run tests on a docker container:

git clone https://github.com/Arkiralor/overloaded_iterables.git
cd overloaded_iterables
docker run -it --name overloaded_iterables -w /opt -v$(pwd):/opt python:3.11 bash
chmod +x scripts/*
sh scripts/run_tests.sh

xapi-python

Steps to run tests on a docker container:

git clone https://github.com/pawelkn/xapi-python.git
cd xapi-python
docker run -it --name xapi-python -w /opt -v$(pwd):/opt python:3.11 bash
python3 -m unittest discover tests

nats-python

Steps to run tests on a docker container:

git clone https://github.com/Gr1N/nats-python.git
cd nats-python
docker run -it --name nats-python -w /opt -v$(pwd):/opt python:3.11 bash
pip install poetry
poetry install
make install
make test

capella

  • capella
  • I have added CONTRIBUTING.rst
$ git clone https://github.com/AlexSpradling/Capella.git
$ cd Capella
$ docker run -it --name capella-dev -w /opt -v %cd%:/opt python:3.11 bash

# pip install Capella
# cd capella
# python main.py
$ docker start -i capella-dev

Renormalizer

https://github.com/shuaigroup/Renormalizer

$ git clone https://github.com/shuaigroup/Renormalizer.git
$ cd Renormalizer
$ docker run -it --name Renormalizer-dev -w /opt -v %cd%:/opt python:latest bash
$ pip install renormalizer
$ pip install --upgrade pip
$ pip install qutip
$ pip install recommonmark
$ pip install Yaml8
$ pip install -r requirements.txt
$ pytest

Python toml_tools

toml_tools has instructions in its contributors file

git clone https://github.com/JamesParrott/toml_tools
cd toml_tools
docker run -it --name toml_tools_test -w /opt -v <working directory>\toml_tools:/opt python:3.11 bash
$ pip install --upgrade pip
$ pip install tox
$ tox -e py

Python penn

penn

There is no contribution file, but the README explains how to clone and run the project. I added a contribution.md file with the following instructions:

git clone https://github.com/interactiveaudiolab/penn
cd penn
docker run -it --name penn_test -w /opt -v <working directory>\penn:/opt python:3.11 bash
$ pip install -r requirements.txt && pip install -e .

Python nbt-structure-utils

nbt-structure-utils

There is no contribution file. I opened a PR adding one with the following instructions:

git clone https://github.com/BenBenBenB/nbt-structure-utils
cd nbt-structure-utils
docker run -it --name nbt_test -w /opt -v <working directory>\nbt-structure-utils:/opt python:3.11 bash
$ pip install poetry
$ poetry install

Python sanic-restful

sanic-restful

There is no contribution file. I opened a PR adding one with the following instructions:

git clone https://github.com/linzhiming0826/sanic-restful
cd sanic-restful
docker run -it --name sanic-restful_test -w /opt -v <working directory>\sanic-restful:/opt python:3.11 bash

Decommissioned Docker slides

Installing Python in Docker

This is a simple Ubuntu-based Docker image into which we have installed python3. We build it, run it in interactive mode, and then inside we can run python3.

FROM ubuntu:20.04
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y python3
$ docker build -t mydocker1 .
$ docker run --rm -it mydocker1
# which python3

Installing Python in Docker - one layer

The same as earlier, but now we merged the 3 RUN commands into one, so we have fewer layers in the history.

FROM ubuntu:20.04
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y python3
$ docker build -t mydocker2 .

Docker history

$ docker history mydocker1


$ docker history mydocker2


Distribute command-line script and include command

FROM ubuntu:20.04
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y python3

RUN apt-get install -y python3-pip
RUN pip3 install requests

COPY curl.py /opt/

ENTRYPOINT ["/opt/curl.py"]

$ docker build -t mydocker .


$ docker run --rm   mydocker https://code-maven.com/slides