Hi! I’m Mykhailo Khomenko, a DevOps Engineer at P2H. In this article, I’ll give an introduction to DevOps and show you how to get started with DevOps practices using containers and Docker.
Companies are constantly seeking ways to work more efficiently and reduce costs. This is where DevOps comes into play as a methodology that increases the speed and quality of software development.
If you’re new to DevOps, you probably have plenty of questions. And since DevOps is such a huge topic, it would be impossible for one article to answer them all. In this post, I’d like to give you a general understanding of the what, why, and how of DevOps from a technical perspective, and perhaps help you see where you need to go next to get from DevOps theory to practice.
Another goal is to draw the attention of developers, QA, and other IT professionals to Docker and its potential applications.
Let’s start with the basics.
What Is DevOps?
DevOps is a method of developing and deploying software that bridges the gap between software development and operational processes (hence the name). DevOps practices encompass software development, testing, deployment, monitoring, and support.
As a methodology, DevOps implies a shift in the approach to development, not just in terms of tools but also in people’s mindsets: managers, analysts, developers, QA, infrastructure teams, and clients. This is why DevOps can be described not just as a set of practices but as a specific culture.
While a culture shift is imperative if you want to succeed with DevOps, in the rest of this article, I’ll focus on what goes on under the waterline: how DevOps fits into your development process and the technical knowledge you need to get started.
Why Do You Need DevOps?
The main goal of DevOps is to combine software development with efficient and reliable software delivery. More practically, it aims to optimize the following tasks:
Process automation
DevOps aims to automate as many processes as possible in the overall software development process to reduce time spent on routine tasks and minimize human error.
Configuration management
DevOps uses version control systems, making it possible to track changes and restore previous code versions when needed.
Testing
DevOps ensures regular software testing at different stages of development. This makes it easier to detect and rectify errors early on and ensure high product quality.
Monitoring
DevOps enables software monitoring both during development and after release to pinpoint and address availability, performance, and security issues.
Team collaboration
Overall, DevOps strives to build a culture of cooperation between developers, testers, and system administrators. This helps reduce error resolution time, improve communication, and ensure more efficient teamwork.
Where to Start: Using Containers
Let’s start our journey into DevOps implementation with a look at containers.
Note: Containers are useful for deploying and running software on any device or environment. They isolate applications from the operating system and dependencies, which allows for more efficient application management, reduced deployment time, and independence from a particular operating environment. All of this helps teams align their work around unified (standard) tools and approaches.
Containers vs Virtual Machines
Containers and virtual machines (VMs) are both technologies that let you run applications isolated from other applications and the host operating system. However, some differences make containers more practical than virtual machines in many situations.
Containers use fewer resources and less system overhead
Each VM runs its own full copy of an operating system, while containers share the operating system kernel with the host. This means containers speed up application development, testing, and deployment.
Containers are easier and faster to create and deploy
They contain only the application and its dependencies, whereas a virtual machine needs to include a complete operating system.
Containers are usually smaller and take up less disk space
Because they don’t bundle a full operating system, container images are typically much smaller than virtual machine images.
Why Docker?
Running a simple search about Docker brings up a whole new set of concepts you may or may not be familiar with.
In fact, this is where many people decide they don’t need Docker and return to their good old local web server. But in the longer term, this actually creates more problems than it solves:
Execution inconsistencies:
Without a unified system where everyone uses the same tools, code is unlikely to work the same way across teams, leading to “It works on my machine, figure it out yourselves.”
Version management headaches:
Suppose you need to run several websites locally that require different PHP and MySQL versions. With a single local server, you would have to switch versions globally for all of them, and it becomes impossible to run these sites side by side with different configurations.
Code transfer difficulties:
How do you run your application on another machine or server? Or share it with another developer? Both situations are challenging without a standard, shared environment.
The good news is that you don’t need to know all the intricacies of Docker to start using it and get more collaborative processes up and running. It’s enough to learn the basic operations, and if you want, you can delve deeper later.
The second piece of good news is that running Docker on Windows doesn’t take much effort. All you need to do is download Docker Desktop and choose to use WSL2 during installation, which allows running native Linux applications on Windows.
Setting Up Containers in Docker: A Hands-on Example
As an example, let’s run a local web server with NGINX. For this, we need to describe to Docker Desktop what exactly we want to run. We’ll create a folder called nginx, and within it, a file named compose-dev.yaml with the following content:
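A minimal version of such a file could look like this (a sketch; the service name, image, and port mapping follow the description below):

```yaml
services:
  nginx:          # the only service we need for this example
    image: nginx  # no tag specified, so the latest image is pulled
    ports:
      - "8080:80" # localhost port 8080 maps to port 80 inside the container
```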
Under the services directive, we list the containers we want to run. The container names can be anything, but it makes sense to name them according to their purpose. In this case, we have only one container, which is nginx.
If the registry and tag are omitted, the latest version of the image will be pulled from Docker Hub, the most popular image registry. Let’s also mention another directive: ports. It allows us to access container ports (containers run in their own network) through ports on the local machine (localhost). On the left side is the local machine’s port, and on the right side is the container’s port. Thus, accessing localhost:8080 corresponds to the container process listening on port 80, the web server.
Next, create a new Dev Environment in Docker Desktop. Under “Choose source,” select “Local directory” and specify the directory where the compose-dev.yaml file is located.
Docker Desktop will download the latest NGINX image from Docker Hub (or find the image on your computer if it was previously downloaded) and start a container based on it. By clicking on the running container, we can see its logs. These are updated in real time, so if we try, for example, to open a non-existent path (localhost:8080/nosuchpath), this will immediately show up in the NGINX logs:
When containers run not on the developer’s machine but in a working environment such as a demo or production, their logs are usually collected by other containers and sent to specialized databases. This lets you view and search logs and get alerts based on specific log messages.
Now let’s try running a slightly more complex example. To keep things simple, we’ll delete the current Dev Environment and create a new one based on a different version of compose-dev.yaml:
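As a sketch, a compose-dev.yaml along these lines would match the description below; it follows the compose example from the WordPress page on Docker Hub, with the database name, user, and password chosen here purely for illustration:

```yaml
services:
  mysql:
    image: mysql                     # no tag, so the latest MySQL version is used
    environment:
      MYSQL_ROOT_PASSWORD: "1"       # root password set to 1, as mentioned above
      MYSQL_DATABASE: wordpress      # illustrative database name
      MYSQL_USER: wordpress          # illustrative user
      MYSQL_PASSWORD: wordpress      # illustrative password
    volumes:
      - mysql_data:/var/lib/mysql    # keep the database outside the container

  wordpress:
    image: wordpress
    depends_on:
      - mysql                        # start WordPress after MySQL
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: mysql       # the MySQL service name from this file
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
    volumes:
      - wordpress_data:/var/www/html # keep WordPress files outside the container

volumes:
  mysql_data:
  wordpress_data:
```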
This file is a slightly modified version of the compose example from the WordPress page on Docker Hub: the MySQL root password is set to 1, and the latest version of MySQL is used.
Storing logins and passwords in plain text is generally not recommended. They are typically kept in secure storage (such as GitLab CI/CD variables, HashiCorp Vault, etc.) and injected during deployment. At the very least, secrets should be stored in a separate file, as we will show later, but for now, let’s leave things as they are.
Compared to the previous file, this is a bit more complex:
It adds the depends_on directive to start WordPress after MySQL.
Docker won’t wait for MySQL to start fully; it will just start one container and then the other.
MySQL root credentials, another user, and the database are defined as environment variables.
Note that the same database name, username, and password must be specified for both containers; only the variable names differ. The variables used are described on the WordPress page on Docker Hub.
It adds volumes.
This allows us to store files outside the container by mapping them. Without volumes, changes made inside the containers will be lost. That’s how containers work; changes inside them are not persistent. Here, we specify that changes made in the WordPress files and MySQL database will be preserved even after stopping or restarting.
An important note: the MySQL host is specified with WORDPRESS_DB_HOST: mysql. Here, mysql is the service name for MySQL within the compose-dev.yaml file. If we named the service db, we would need to change both WORDPRESS_DB_HOST and depends_on to db.
Now let’s run the Dev Environment based on the updated file.
Open localhost:8080 in a browser and you’ll see the WordPress initialization window:
Initialize WordPress by entering the necessary information, including the login and password for the admin panel.
After making changes in the Pages section of WordPress and restarting the container in Dev Environments, we can see that the changes have been preserved.
Now let’s take our example further: we’ll separate all credentials into a separate file and add a phpMyAdmin container. Multiple Dev Environments can coexist, but port 8080 can only be used by one of them; for the others, the localhost port needs to be changed. For simplicity, let’s delete the current Dev Environment and create a new one using the following compose-dev.yaml configuration:
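A possible version of this configuration, assuming the official phpmyadmin image (with its PMA_HOST variable) and a local.env file sitting next to compose-dev.yaml, might look like this:

```yaml
services:
  mysql:
    image: mysql
    env_file:
      - local.env                    # credentials moved out of this file
    volumes:
      - mysql_data:/var/lib/mysql

  wordpress:
    image: wordpress
    depends_on:
      - mysql
    ports:
      - "8080:80"
    env_file:
      - local.env
    volumes:
      - wordpress_data:/var/www/html

  phpmyadmin:
    image: phpmyadmin                # web console for the database
    depends_on:
      - mysql
    ports:
      - "8081:80"                    # available at localhost:8081
    environment:
      PMA_HOST: mysql                # point phpMyAdmin at the MySQL service

volumes:
  mysql_data:
  wordpress_data:
```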
We can see that the phpMyAdmin container has been added, and the environment variables are read from the local.env file, which contains the following content:
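The exact values don’t matter; a local.env consistent with the variables used above could look like this (all credentials are placeholders):

```
MYSQL_ROOT_PASSWORD=1
MYSQL_DATABASE=wordpress
MYSQL_USER=wordpress
MYSQL_PASSWORD=wordpress
WORDPRESS_DB_HOST=mysql
WORDPRESS_DB_NAME=wordpress
WORDPRESS_DB_USER=wordpress
WORDPRESS_DB_PASSWORD=wordpress
```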
Such a file should not be stored in the repository; it’s better to add it to .gitignore. This way, it will be read locally, and during deployment it can be generated automatically, for example from GitLab secrets.
After launching Dev Environments and initializing WordPress at localhost:8080, we can see the WordPress database and tables in the phpMyAdmin console at localhost:8081:
In the examples above, we used the default WordPress setup (from the image). However, we’d like to use our own customized version and be able to modify it.
Let’s download WordPress from the official website and create a separate folder; let’s call it demo-wordpress. Extract the WordPress archive into this folder. Then make changes inside the wordpress/ folder: remove the default plugins from wordpress/wp-content/plugins and add the classic-editor plugin.
We will continue to use the same compose-dev.yaml file, but with two small changes:
We’ll remove the use of volumes in the wordpress container by deleting the volumes section.
We’ll specify our custom image, mywordpress:v1, as the image for WordPress (one way to build this image is sketched after this list).
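One simple way to build such an image, assuming the official wordpress image as the base and the demo-wordpress folder layout described above, is a small Dockerfile like this:

```dockerfile
# Dockerfile placed in the demo-wordpress/ folder, next to wordpress/
FROM wordpress:latest

# Replace the default WordPress files with our customized copy
# (default plugins removed, classic-editor added)
COPY wordpress/ /var/www/html/
```

The image can then be built locally with docker build -t mywordpress:v1 . so that compose-dev.yaml can reference it by that tag.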
After pushing changes to the repository, the pipeline will say hello to us :).
About the Author
Mykhailo Khomenko is a DevOps Engineer at P2H, specializing in streamlining operations and optimizing system performance through cloud computing, automation, and CI/CD. With a focus on efficiency and reliability, he implements cutting-edge solutions to drive innovation within the organization.