How to containerize an application with Docker
I remember a university project developed with a classmate: we were using the same code, yet the program behaved differently on our computers.
Unable to determine which machine was running the application correctly, we decided to create a Docker container and run our project inside it.
That choice allowed us to keep developing and deliver the assignment on time, without wasting hours trying to identify the cause of the discrepancies between the two environments.
Only later did we discover that the issue was caused by a different version of a system library.
From that experience, I realized how much containerization can simplify application development and deployment, making projects more consistent and reliable.
So here I am, with this article, to share the fundamental principles of getting started with Docker and making the most of it in your projects.
Basic concepts of Docker
Let’s start with the theory to understand, step by step, which elements are necessary to containerize an application with Docker.
Docker
Docker is an open source platform that allows you to develop, deploy and run applications quickly and efficiently.
Compared to traditional virtual machines, Docker containers are much lighter and more efficient in performance and resources, as they share the host operating system kernel instead of emulating an entire operating system.
Docker also allows applications to be isolated from the underlying infrastructure: you don’t have to worry about where you deploy, as the container ensures consistency across all environments in which it runs.
Last but not least, Docker makes it easier to scale applications, allowing you to easily manage intense workloads.
Dockerfile
A Dockerfile is a text file containing the instructions Docker follows to build an image, from which containers are then started. Every Dockerfile begins with the `FROM` instruction, which defines the base image to use. These can be lightweight operating systems, such as Alpine, or more specific images optimized for the project at hand. You can easily find base images on Docker Hub.
Next, the Dockerfile includes a series of instructions that allow you to customize the base image, creating a Docker container tailored to your needs. The most commonly used instructions are:
- `RUN`: executes commands within the image during the build phase.
- `COPY`: copies files and directories from the build context into the image.
- `WORKDIR`: sets the working directory for subsequent instructions.
- `EXPOSE`: documents the ports the application will listen on.
- `ENTRYPOINT`: defines the executable that receives the arguments passed to the container.
- `CMD`: sets the default command executed when the Docker container starts.
Thanks to the Dockerfile, you can automate image creation and ensure Docker containers are reproducible, making development more efficient and scalable.
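To see how these instructions fit together, here is a minimal sketch of a Dockerfile for a hypothetical Node.js service — the file names, port, and start command are assumptions for illustration, not part of the tutorial below:

```dockerfile
# Base image: official Node.js runtime on Alpine Linux
FROM node:20-alpine

# Working directory for all subsequent instructions
WORKDIR /app

# Copy the dependency manifests and install packages at build time
COPY package.json package-lock.json ./
RUN npm ci

# Copy the application source into the image
COPY . .

# Document the port the application listens on
EXPOSE 3000

# Default command executed when a container starts
CMD ["node", "server.js"]
```

Note how `COPY` of the manifests is separated from `COPY . .`: each instruction produces a cached layer, so dependencies are reinstalled only when the manifests change.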
Image
Docker images are complete packages containing everything needed to run an application inside a Docker container: code, runtime, libraries, system tools, and related configurations. They are created from a Dockerfile and can be distributed and executed portably across any environment that supports Docker, ensuring consistency and reliability between development, testing, and production.
Container
A Docker container is a running instance of a Docker image. While the image serves as a static package containing everything needed to start an application, the container provides the isolated environment where the application actually runs.
Docker containers allow applications and their dependencies to be separated from the host operating system, ensuring consistency and reliability. This way, the software behaves the same way regardless of the environment in which it is deployed: local development, on-premise servers, or cloud infrastructure.
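The image/container distinction is visible directly in the CLI. Assuming Docker is installed and running, a session like the following (the container names are arbitrary) illustrates it — images are static packages, containers are live instances:

```
# List the images available locally (static packages)
docker images

# Start two independent containers from the same image
docker run -d --name web1 nginx
docker run -d --name web2 nginx

# List the running containers (live instances)
docker ps

# Stopping and removing a container does not affect the image it came from
docker stop web1
docker rm web1
```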
Docker hands-on tutorial
- Clone the following repository of a simple HTML page:

```shell
git clone https://gist.github.com/685b4002511a2cc87e5718e0634c9ac1.git
```

- Enter the project folder:

```shell
cd 685b4002511a2cc87e5718e0634c9ac1
```

- Create a file named `Dockerfile` with the following content:

```dockerfile
FROM nginx
COPY index.html /usr/share/nginx/html/index.html
CMD sh -c "echo '##### Hello from Intré Docker image example #####' && nginx -g 'daemon off;'"
```

- Build the Docker image:

```shell
docker build -t nginx-intre-example .
```

- Run a container from the new image:

```shell
docker run -d -p 8080:80 nginx-intre-example
```

Note: multiple containers can be started from the same image as long as the host port is changed (e.g. using `-p 8081:80`, `-p 8082:80`, etc.) to avoid conflicts.

- Open a browser tab and visit http://localhost:8080 to see the HTML page served by the running container.
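Once the container is up, you can verify it and clean up from the terminal. This assumes the steps above completed successfully; `<container-id>` stands for the ID or name printed by `docker ps`:

```
# Check that nginx responds from inside the container
curl http://localhost:8080

# Find the container's ID or name
docker ps

# Inspect its output (includes the echo line from CMD)
docker logs <container-id>

# Stop and remove the container when done
docker stop <container-id>
docker rm <container-id>
```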
Conclusion
Encapsulating an application within a Docker container allows projects to be shared consistently, without worrying about differences between the environments where they run — it works the same way everywhere.
This approach simplifies the transition from development to production, ensuring consistency across environments: what works locally will also work on the server.
Moreover, testing new features or fixing bugs becomes faster and more efficient, thanks to the ability to spin up a clean, isolated environment in seconds.
Docker has transformed the way we deploy applications, making processes more stable, scalable, and faster. Today, mastering it is an essential skill for every developer.