
Docker - Pre-built container image registry

Just say no to :latest:

Dockerfile linters:

Colima - Docker Desktop alternative - See Thoughtworks Technology Radar 27:

Colima is becoming a popular open alternative to Docker Desktop. It provisions the Docker container runtime in a Lima VM, configures the Docker CLI on macOS, and handles port forwarding and volume mounts. Colima uses containerd as its runtime, which is also the runtime on most managed platforms - improving the important dev-prod parity.




How to Get Started with Docker -

Best practices


Containers provide consistency between environments (eg local development machine vs production cloud). They fix "It works on my machine" problems.

Avoid issues due to different programming language or database versions. Avoid having to install and configure specific development environments per project. On your local machine, each project's environment is isolated.

You can run different versions of the same app locally side by side, each with a different MySQL version, for example.
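As a sketch of that side-by-side setup (the service names, versions, ports and password here are invented for illustration), a compose file could run two isolated MySQL versions on different host ports:

```yaml
services:
  db-legacy:
    image: mysql:5.7
    ports:
      - '3306:3306'   # host port 3306 -> legacy project
    environment:
      MYSQL_ROOT_PASSWORD: example
  db-current:
    image: mysql:8.0
    ports:
      - '3307:3306'   # host port 3307 -> current project
    environment:
      MYSQL_ROOT_PASSWORD: example
```

Each app then points at its own host port, with no version conflict on the machine.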

No need to install an OS (eg Linux/Windows), thus no need to patch/upgrade it when there are security vulnerabilities.

When you want to deploy new code you simply create a new image and deploy it; no need to individually configure/patch/update each server's app.

Can be easily replicated, ie deploy multiple copies.

Containers are ephemeral, short-lived. If they die we just replace them.


  1. Build Image (package the app)
  2. Ship Image (to the cloud runtimes or local machine)
  3. Run Image (execute the app)

What is a container?

Container = App Code + Runtime + Libraries/Dependencies/Binaries + Configuration Files

What is a container?

What are Linux containers?

Why we have containers -

Containers let us write code (a Dockerfile) to describe the computer an app needs to run on. Choose an operating system, install any runtimes and libraries needed, and populate the file system. This reduces many of the app’s expectations to one: a container runtime.
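A minimal sketch of such a Dockerfile (the base image, package and file names are illustrative): choose an OS, install a runtime, populate the filesystem.

```dockerfile
FROM debian:bookworm-slim          # choose an operating system
RUN apt-get update && \
    apt-get install -y python3     # install the runtime the app needs
COPY . /app                        # populate the filesystem with the app
CMD ["python3", "/app/main.py"]    # how to start the app
```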

Containers vs virtual machines

  • Containers
    • They all share the same OS kernel (the host OS)
    • Do not virtualize hardware, they run in isolated processes
  • Virtual Machines
    • Each VM has a complete copy of the operating system (guest OS)
    • Abstraction of physical hardware

Thus, containers are lightweight and more efficient, and they can boot faster.

You can run multiple containers in parallel, whereas to run multiple virtual machines in parallel you need a beefy host machine.

Containers are also easy to share amongst team members, and changes are easy to replicate across the team, whereas with a shared virtual machine it's difficult to distribute one person's changes to the rest of the team.

Each VM needs to have an OS installed, and when there are security vulnerabilities we need to upgrade/patch the OS.

Container engine/runtime

Similar role as hypervisors with virtual machines.


Container vs image


  • Image: read-only (ie immutable) template with instructions for creating a Docker container.
  • Container: a runnable instance of an image.

Multiple instances of the same image can be created.


What is a container?

A container is a sandboxed process on your machine that is isolated from all other processes on the host machine. You can create, start, stop, move, or delete a container.

What is a container image?

When running a container, it uses an isolated filesystem. This custom filesystem is provided by a container image. Since the image contains the container’s filesystem, it must contain everything needed to run an application - all dependencies, configuration, scripts, binaries, etc. The image also contains other configuration for the container, such as environment variables, a default command to run, and other metadata.


Images are a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.

A container is a runtime instance of a docker image. A container will always run the same, regardless of the infrastructure. Containers isolate software from its environment and ensure that it works uniformly despite differences for instance between development and staging.



Dockerfile ——docker build——> Image ——docker run——> Container

create →
start →
stop ←
remove ←
run →→←
run --rm →→←←


run vs start:

run = create + start

You can create N clones of the same image.


List commands: docker help

Command help: docker <command> --help, eg docker run --help

Display system-wide information: docker info

Version: docker version


Build an image from a Dockerfile: docker build --tag <tagname> . or docker build -t <tagname> .

List images: docker images or docker image ls

Remove image: docker image rm <image-id> or docker rmi <image-id>

Prune (remove) all unused images: docker image prune [-a]


See this tutorial

Pull image from registry: docker pull alpine:latest

Push image to registry (Docker Hub):

  • If we are logged in to Docker Desktop: docker push <repo-name>:<tag-name>
  • If we are not logged in to Docker Desktop: docker push <DockerHub-username>/<repo-name>:<tag-name>


Create a new container from an image: docker create

docker run = create + start

Create and run a new container from an image

Run a container: docker run <image> or docker run -d <image>

docker run options:

  • -d/--detach: run in the background, this way we can keep using the terminal session
  • --name: assign a name to reference the container, eg --name myapp
  • -e/--env: pass environment variables, eg -e SOME_VAR=xyz
  • -p/--publish: publish a container's port to the host, eg -p 5433:5432 or -p 80:8080
  • --rm: automatically remove the container when it exits

List running containers: docker ps

List all containers: docker ps --all or docker ps -a

Start a container: docker start <container-id> or docker start <container-name>

Stop a running container: docker stop <container-id> or docker stop <container-name> (get the id/name with docker ps)

Remove a container: docker rm <container-id> or docker container rm <container-id>

Open a shell inside a running container: docker exec -it <container_name> sh

Display container logs: docker logs -f <container_name> or docker container logs -f <container_name>

Dockerfile workflow

On a directory with a Dockerfile run:

  • Build: docker build --tag <imagename> .
    • Doing docker images (or docker image ls) should show the image now
  • Run: docker run <image-name> [--rm]
    • Doing docker ps (if running) or docker ps -a (if stopped) should show the container and its ID, name etc.
  • Stop container: docker container stop <container-id> and docker container rm <container-id>
    • Afterwards use docker start <container-id> or docker start <container-name> to start it again
  • Delete image: docker image rm <image-id> (get the id with docker images or docker image ls)

docker-compose workflow

docker-compose up, down, stop and start difference -

Start with up:

docker-compose up -d
docker-compose -f docker-compose.yml up

Connect to a container (use docker ps to get the name or id):

docker exec -it <container-id> /bin/sh
docker exec -it <container-name> /bin/sh

If the image includes bash, this also works:

docker exec -it <container-id> bash
docker exec -it <container-name> bash

To exit run exit.

Stop services:

docker-compose stop

Shut down:

Stops containers and removes containers, networks, volumes, and images created by up (volumes are only removed with the -v/--volumes flag, and images only with --rmi).

docker-compose down


docker system prune → prune everything except volumes

Remove dangling images (images with <none> in docker image ls): docker image prune


Dockerfile

Set of instructions (list of commands) to create a container image.

A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.


You might create your own images or you might only use those created by others and published in a registry. To build your own image, you create a Dockerfile with a simple syntax for defining the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only those layers which have changed are rebuilt. This is part of what makes images so lightweight, small, and fast, when compared to other virtualization technologies.
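Layer caching is why instruction order matters; a common sketch (a Node.js app and its manifest files are assumed here) copies the dependency manifest before the source, so editing source code only rebuilds the final layers:

```dockerfile
FROM node:20
WORKDIR /code

# These layers are rebuilt only when the dependency manifests change
COPY package.json package-lock.json ./
RUN npm install

# Source edits invalidate only the layers from here down
COPY . .
CMD ["node", "src/server.js"]
```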




Best practices:

Node.js Dockerfile


# Base image from a verified image that has Node.js, npm and yarn installed
FROM node:12.16.3

# Create the 'code' directory and use it as the working directory, ie all following commands
# run in this directory
WORKDIR /code

# Set an environment variable. Will be accessible to any process running inside the image
# (the variable and value here are just an example)
ENV NODE_ENV=production

COPY package.json /code/package.json

RUN npm install

# Copy everything in our current local directory into the image's 'code' directory
# Use a .dockerignore file to exclude files and directories
COPY . /code

# Command run when the container starts
CMD [ "node", "src/server.js" ]
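Since COPY . /code copies the whole build context, a .dockerignore file next to the Dockerfile keeps unwanted files out of the image. A minimal sketch (the entries are typical examples, not from the original):

```
# .dockerignore - paths excluded from the build context
node_modules
npm-debug.log
.git
.env
```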

Python Dockerfile

# Base image
FROM python:3.10

# Copy everything in the current dir to the 'app' dir of the filesystem of the container
COPY . /app

# Directory in which the next commands are run
WORKDIR /app

# Run shell commands (upgrade pip and install flask)
RUN pip install --upgrade pip
RUN pip install flask

# Set an environment variable
ENV FLASK_ENV=production

# App/executable that will run when the container is run from the image
# (the entry script name is an assumption)
ENTRYPOINT ["python", "app.py"]


COPY is preferred because it's more transparent than ADD. COPY only supports the basic copying of local files into the container, while ADD has some features (like local-only tar extraction and remote URL support) that are not immediately obvious. Consequently, the best use for ADD is local tar file auto-extraction into the image, as in ADD rootfs.tar.xz /.
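A sketch of the distinction (file names are illustrative):

```dockerfile
# COPY: plain, predictable copy from the build context into the image
COPY requirements.txt /app/requirements.txt

# ADD: same syntax, but a local tar archive is auto-extracted at the destination
ADD rootfs.tar.xz /
```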


FROM scratch


Multi-stage builds

Example: -
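A minimal multi-stage sketch (a Go app is assumed purely for illustration): the first stage compiles with the full toolchain, and only the binary is copied into a tiny final image (here FROM scratch), so build tools never ship to production.

```dockerfile
# Stage 1: build stage with the full Go toolchain
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app

# Stage 2: minimal runtime image; only the compiled binary is copied in
FROM scratch
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
```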


Containers are started and stopped as required (ie they have a lifecycle). Volumes provide persistent data storage to containers, independent of their lifecycle. Volumes can be shared by many containers, and they avoid increasing the container size.





Oh My Zsh plugin:

Docker Compose best practices for dev and prod:

Example from

version: '2'

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: web
    ports:
      - '8080:80'

  db:
    image: mongo:3.6.1
    container_name: db
    volumes:
      - mongodb:/data/db
      - mongodb_config:/data/configdb
    ports:
      - 27017:27017
    command: mongod

volumes:
  mongodb:
  mongodb_config:


Docker Desktop

Docker.raw (macOS):