
DevSecOps: why DevOps is not enough

18 min read
Illustration: the DevOps infinity loop on a screen, with people interacting with gears and charts, symbolizing teamwork, automation, and collaborative development.
Quick summary

Hello everyone! My name is Aleksandr, and I am a DevOps engineer at P2H. I’d like to tell you what DevSecOps is and what the role and mission of this specialist in a company are. You’ll also learn why, in modern-day IT, a plain DevOps engineer is not enough anymore. This information will be of great use to anyone who plans to master this profession, as well as to those who already work in it and want to reinforce their knowledge with hands-on cases.

Background

First of all, we are going to dive into the history of DevOps. In order to understand the core meaning of this profession, we’ll start off with the software development life cycle (SDLC).

Long before “the age of DevOps”, the traditional approach was the Waterfall (cascade) model. It represents a development cycle as a straight line consisting of several steps:

  • Collecting requirements from the client;
  • Designing the software in compliance with the requirements;
  • Development;
  • Testing the chosen solution;
  • Deploying the software into production;
  • Post-release software maintenance.
Chart with waterfall model of the software development life cycle

Before Agile and DevOps, walking the long way from the first step to the fifth would normally take anywhere from several months to several years. These days, Waterfall is still in use, although much more seldom than the SDLC combined with the Agile approach that emerged after it. The latter turned the straight line into a circle, which brings flexibility and changes the very process of development, while the path between the steps shortens substantially.

Donut chart with the software development cycle steps.

At the same time, even with these models there used to be a wall of misunderstanding around technical and administrative issues between Development and Operations: the former stood for change, while the latter stood for stability.

The wall of confusion between Development and Operations.

So, DevOps as a culture with a certain scope of corresponding practices emerged to bridge the gap between the two.

DevOps and DevSecOps: a phenomenon, a person, or a process?

DevOps is a culture and a set of practices aimed primarily at shortening the distance between making a change to a system and putting that change into operation, all while ensuring the highest quality. A culture, however, is not a job title: the specialist we call a DevOps engineer is simply someone who has mastered this set of techniques. Ironically, the name of the profession doesn’t reflect this at all; the title “DevOps engineer” just stuck historically :)

Bearing the scope of techniques in mind, this role could be broken down into the following conventional directions:

  • Infrastructure Engineer: in charge of design and interaction of physical or cloud infrastructure units and networks.
  • Cloud Engineer: works on deployment, management, and optimization of infrastructure and services using Cloud technologies, such as AWS, Azure, and GCP. 
  • Automation Engineer: focuses on the development and maintenance of automation scripts and tools. 
  • Release Engineer: responsible for the release process, including software testing, packaging, and deployment, which ensures a smooth and controlled release in different environments. 
  • Configuration Management Engineer: works with configuration management tools such as Puppet, Chef, and Ansible, which helps keep the infrastructure and application configuration consistent and reliable.
  • Continuous Integration/Continuous Delivery (CI/CD) Engineer: specializes in CI/CD pipeline creation and maintenance, building processes automation, and software testing and deployment.
  • Monitoring Engineer: responsible for implementation and maintenance of monitoring and logging systems; tracks performance, health, and availability of the infrastructure and the program. 
  • Security Engineer (~ DevSecOps): works on the integration of security practices into the development process, including scanning for vulnerabilities, security testing, and ensuring compliance with industry standards and regulations.

All these directions can normally be found in a single position, the DevOps engineer. This happens essentially because the roles above and their respective responsibilities may overlap or differ depending on the company.

DevSecOps is one of the constituents of DevOps, formed at the crossroads of DevOps (Development and Operations) on the one hand and Security (operations security) on the other. This role focuses on implementing and improving security practices in the work process.

Patrick Star holding a toolbox labeled 'DevOps' with a confused expression, symbolizing unpreparedness for the complexity of DevOps tasks.

What a DevSecOps engineer does in practice to improve the DevOps workflow:

  • Implements security scanning with the help of SCA, SAST, and DAST methods;
  • Collects reports and analyzes them, and can also submit them to a vulnerability tracking system (this, however, is an advanced level);
  • Creates alerts and sets up pipelines;
  • Stays in constant touch with the developers to resolve detected issues.
Diagram: the DevSecOps infinity loop. Plan: threat landscape and change impact analysis. Code: pre-commit hooks and SAST (Static Application Security Testing). Build: software component analysis and SAST. Test: authentication and SQL injection testing, DAST (Dynamic Application Security Testing). Release: access and configuration management. Deploy: chaos engineering and penetration testing. Monitor: log collection, SIEM (Security Information and Event Management), RASP (Runtime Application Self-Protection). Respond: blocking attacks and rollback.
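To make the pipeline side of this work less abstract, here is a minimal Python sketch of a “security gate” job that decides whether a build may proceed based on aggregated findings. The finding layout, IDs, and severity names are invented for illustration, not taken from any real scanner’s output:

```python
# A minimal "security gate" sketch: read aggregated findings and decide
# whether the pipeline may proceed. The report layout and severity names
# are invented for this example.

SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(findings, fail_at="high"):
    """Return (passed, blocking): blocking holds non-false-positive
    findings at or above the fail_at severity."""
    threshold = SEVERITY_ORDER[fail_at]
    blocking = [
        f for f in findings
        if SEVERITY_ORDER[f["severity"]] >= threshold
        and not f.get("false_positive", False)
    ]
    return (len(blocking) == 0, blocking)

# Findings as they might arrive from SCA, SAST, and DAST jobs.
findings = [
    {"id": "CVE-2021-0001", "severity": "high", "tool": "sca"},
    {"id": "B303", "severity": "medium", "tool": "sast"},
    {"id": "XSS-42", "severity": "high", "tool": "dast", "false_positive": True},
]

passed, blocking = gate(findings, fail_at="high")
print(passed, [f["id"] for f in blocking])  # False ['CVE-2021-0001']
```

In a real pipeline, this decision would run as a separate job after the scanners, and the false-positive flags would come from a triage system rather than from the report itself.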

DevSecOps Practices Implementation

Before we sort out DevSecOps tasks, let us ask ourselves a few questions. They will help you determine what you are going to work with as an engineer, as well as your further steps:

  • A vulnerable component has no known fix: what would you do in such a situation? Ignore it, or write your own patch?
  • If there is no fix or patch (legacy code): do you opt for creating a patch to keep the component maintained? Have you considered the cost of maintenance?
  • A fix already exists, but you are determined to ignore it anyway: how would you then assess the risks? Where would you keep the fixed version?
  • An update breaks the program: a vulnerable component has a fixed version with a patch, but updating it may crash your program. Do you update anyway?
  • Direct vs. transitive dependencies (a dependency of a dependency of a dependency): how do you treat each?
  • Analysis of false positives: who should do it, the security team or the developers?
  • Do you have a mechanism for handling vulnerabilities in your own packages?
  • When would you find it convenient to use virtual patching (ModSecurity and CRS)?
  • How would you handle a large number of issues? An initial scan can turn up a couple of hundred issues, or hundreds of thousands of OAST issues. Would you manage only the highest-priority ones, medium-priority ones, or all of them?
  • How often would you re-analyze false positives? How often would you come back to the ignored issues, if any?

Types of Security Scanning

Let us take a closer look at the following types of security scanning:

SCA stands for Software Component Analysis

As a rule, programs use a lot of third-party libraries as dependencies. With the help of SCA it is possible to scan these libraries and find vulnerable ones. This analysis is easy to start, as it is quite straightforward and, unlike SAST, returns fewer false positives. However, it has its own downsides: for instance, it relies on file and package checksums to detect vulnerabilities, so it won’t work for internal components or components unknown to the vendor.

Below are examples of SCA tools for different programming languages:

  • JavaScript: RetireJS / npm@6 (npm audit);
  • Python: Safety;
  • Ruby on Rails: Bundler Audit;
  • PHP: Composer (limited);
  • Java: OWASP Dependency-Check.
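To show the idea behind these tools, here is a simplified Python sketch of what an SCA scan does at its core: match pinned dependencies against a vulnerability database. The database entries and advisory IDs are invented; real tools query curated databases and, as noted above, also rely on checksums:

```python
# Illustrative sketch of the core of an SCA scan: match the project's
# pinned dependencies against a vulnerability database. The database
# entries below are invented for the example.

def parse_requirements(text):
    """Parse 'name==version' lines into a dict (simplified)."""
    deps = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, version = line.split("==", 1)
            deps[name.lower()] = version
    return deps

VULN_DB = [  # invented entries for illustration
    {"package": "requests", "affected": ["2.5.0"], "advisory": "EXAMPLE-001"},
    {"package": "flask", "affected": ["0.12.0"], "advisory": "EXAMPLE-002"},
]

def sca_scan(deps):
    """Return advisories whose package/version pair matches a dependency."""
    return [v["advisory"] for v in VULN_DB
            if deps.get(v["package"]) in v["affected"]]

reqs = "requests==2.5.0\nflask==2.0.0\n"
print(sca_scan(parse_requirements(reqs)))  # ['EXAMPLE-001']
```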

SAST stands for Static Application Security Testing

This method analyzes source code, binary code, and byte code for security vulnerabilities without running the code itself. Static analysis in the broad sense comprises SAST and SCA, as well as linting and secret scanning.

SAST is designed to analyze the source code of a program. Its advantage is that a SAST scan is easy to start, and with a correct approach the setup is also quite straightforward. Besides that, SAST tools provide quick feedback to the team and can be tuned with custom rules to reduce false positives. On top of that, the majority of them support multiple languages and come with a lot of additional free tools.

Now the downsides. By its nature, static analysis tends to have a high rate of false positives, which demands a lot of extra triage time. It cannot find runtime or business-logic errors, and many of its tools do not meet the requirements of current security standards. Among the worst downsides of some tools is the inability to suppress false positives locally.

Tools for Static Application Security Testing in different programming languages:

  • JavaScript: nodejsscan;
  • Python: Bandit;
  • Ruby on Rails: Brakeman;
  • PHP: phpcs-security-audit / RIPS;
  • Java: Find-Sec-Bugs / SpotBugs.
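A typical follow-up step in CI is post-processing the SAST report. The Python sketch below mimics the shape of Bandit’s JSON output (a “results” list with “issue_severity” fields); treat the exact field names as an assumption to verify against your Bandit version:

```python
import json
from collections import Counter

# Sample report mimicking the shape of Bandit's JSON output; the exact
# field names are an assumption to check against your Bandit version.
sample_report = json.dumps({
    "results": [
        {"test_id": "B602", "issue_severity": "HIGH", "filename": "app.py"},
        {"test_id": "B101", "issue_severity": "LOW", "filename": "tests.py"},
        {"test_id": "B608", "issue_severity": "MEDIUM", "filename": "db.py"},
    ]
})

def summarize(report_text):
    """Count findings per severity level."""
    report = json.loads(report_text)
    return Counter(r["issue_severity"] for r in report["results"])

counts = summarize(sample_report)
print(dict(counts))  # {'HIGH': 1, 'LOW': 1, 'MEDIUM': 1}

# A simple policy: any HIGH finding fails the build.
build_failed = counts.get("HIGH", 0) > 0
print("fail build:", build_failed)  # fail build: True
```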

DAST stands for Dynamic Application Security Testing

This method runs an application to find security vulnerabilities. As the name suggests, it checks a running application dynamically. DAST is useful for:

  • Applications (ZAP);
  • Configurations (molecule/knife);
  • Infrastructure as code (ansible/molecule);
  • Docker (Docker benchmark security).

A strong aspect of this type is that it is easy to start even without a deep knowledge of the programming language. 

One of the disadvantages is that its coverage tends to be insufficient, since it struggles with heavy JavaScript frameworks. Many DAST tools do not integrate well with today’s CI/CD pipelines, and they also require a certain amount of manual support. At this point, ZAP could be of great help: it is an open-source vulnerability scanner able to find widespread security issues such as those from the OWASP Top 10.
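To give a feel for what “dynamic” means here, below is a tiny Python sketch of a single dynamic check: validating security headers on a response. Real DAST tools like ZAP crawl the application and run hundreds of such checks; the header list and the simulated response here are illustrative only:

```python
# A tiny taste of dynamic testing: probe a response's security headers.
# Real DAST tools crawl the running app and run many such checks; this
# sketch only validates headers on a single (simulated) response.

REQUIRED_HEADERS = {
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "Strict-Transport-Security",
}

def missing_security_headers(headers):
    """Return required security headers absent from a response,
    compared case-insensitively, sorted for stable output."""
    present = {h.lower() for h in headers}
    return sorted(h for h in REQUIRED_HEADERS if h.lower() not in present)

# Simulated response headers; in a real scan these would come from an
# HTTP request to the running application.
response_headers = {
    "Content-Type": "text/html",
    "X-Content-Type-Options": "nosniff",
}
print(missing_security_headers(response_headers))
# ['Content-Security-Policy', 'Strict-Transport-Security']
```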

Infrastructure as code and its security

There are a lot of materials about Infrastructure as code. Let us then without any further ado have a look at the link between DevSecOps and Ansible. 

Ansible is an automation tool. You give it a list of servers, and it applies all the necessary settings you have prepared beforehand. For instance, every time you set up a server you have to install Docker, configure the required firewall, and set up customer access. All of this can be automated with the help of Ansible.
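A sketch of what such a playbook might look like (the host group, port, and user name are invented for this example; the modules are standard Ansible ones):

```yaml
# Hypothetical playbook: install Docker, open only the needed firewall
# port, and create a customer account. Hosts, ports, and user names are
# made up; apt/ufw/user are standard Ansible modules.
- hosts: webservers
  become: true
  tasks:
    - name: Install Docker
      ansible.builtin.apt:
        name: docker.io
        state: present
    - name: Allow HTTPS through the firewall
      community.general.ufw:
        rule: allow
        port: "443"
        proto: tcp
    - name: Create a customer user with shell access
      ansible.builtin.user:
        name: customer1
        groups: docker
        shell: /bin/bash
```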

DevSecOps processes need Ansible to harden operating systems and servers, making them resistant to potential attacks in compliance with various standards, for example PCI, HIPAA, etc. In addition, because it is easy to build reproducible systems with Ansible, it allows faster testing of security patches as well as patching without downtime.

Security hardening is, as the name suggests, the process of strengthening system security to make the system resistant to potential attacks and vulnerabilities. You can see an example of such hardening with the help of Ansible here. It works in the following way: you pick an Ansible role and run this role on your server. This makes the most sense, and is therefore most often used, on servers with high security requirements, e.g. payment services.

Compliance as code: testing of the hardening

For certain businesses, compliance can be of the greatest importance, so they implement special security standards, for example PCI DSS, HIPAA, GDPR, FedRAMP, etc. In this case, after the servers are hardened, they must be tested for compliance with the security standards. This can easily be carried out with compliance tools.

This approach is called Compliance as Code, meaning that code and automation tools are used to define, implement, and test compliance with security standards for IT systems and infrastructure. Let us take a look at one of the tools. InSpec is an open-source testing framework designed to automate compliance and security testing of IT infrastructure and applications. An example of its usage for Linux can be seen here.

Vulnerability management: working with detected vulnerabilities

So now you have a lot of reports on your system’s vulnerabilities. What is to be done about them?

Vulnerability management exists for exactly this purpose. It includes detecting, assessing, prioritizing, and reducing the number of vulnerabilities in software systems and networks, and it involves constant, active monitoring of potential security threats and the elimination of these vulnerabilities.

How to start the vulnerability management process:

  • Implement this approach slowly and gradually
  • Start with the tools that ensure a low rate of false positive results (SCA, Baseline Scan, etc)
  • Always be in touch with the Product Owner, the developers, and QA in order to eliminate all the issues.

The advantage of such a comprehensive approach is that you can implement security programs along with business compliance, which enables you to rank vulnerabilities by risk. Vulnerability management also levels up your infrastructure and lets you understand how effective your program is. No doubt, having a vulnerability management system and metrics makes your work more valuable in the eyes of the customer and is a good indicator of professionalism. In case of any kind of audit, it will serve as a source of all the necessary data. On top of that, it helps create metrics for the whole organization at different levels of detail.

What tools can we use when managing vulnerabilities? The first one worth considering is DefectDojo: it is capable of handling vulnerability reports in different formats and aggregating them in one system. Here you can see which scanners you can integrate it with.

To sum up, a step-by-step vulnerability management process looks as follows:

  • Find vulnerabilities with the help of tools (consider the level of detail you need);
  • Aggregate the search results into one system using different tools;
  • Triage the found issues for false positives;
  • Eliminate the issues by putting them into a bug tracker for fixing;
  • Draw conclusions for the future: create metrics/dashboards for stakeholders (number of high/medium/low severity vulnerabilities, list of top vulnerabilities across business departments, mean time to find (MTTF) and to fix, etc.).
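The metrics step can be sketched in a few lines of Python. The finding records below are invented for illustration; in practice they would come from an aggregator such as DefectDojo:

```python
from datetime import date

# Sketch of the metrics step: aggregate findings into stakeholder
# numbers. The record layout and dates are invented for illustration.
findings = [
    {"severity": "high",   "found": date(2024, 1, 1), "fixed": date(2024, 1, 11)},
    {"severity": "medium", "found": date(2024, 1, 5), "fixed": date(2024, 1, 9)},
    {"severity": "high",   "found": date(2024, 2, 1), "fixed": None},  # still open
]

def severity_counts(items):
    """Number of findings per severity level."""
    counts = {}
    for f in items:
        counts[f["severity"]] = counts.get(f["severity"], 0) + 1
    return counts

def mean_time_to_remediate(items):
    """Average days from detection to fix, over closed findings only."""
    closed = [(f["fixed"] - f["found"]).days for f in items if f["fixed"]]
    return sum(closed) / len(closed) if closed else None

print(severity_counts(findings))          # {'high': 2, 'medium': 1}
print(mean_time_to_remediate(findings))   # 7.0
```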

DSOMM (DevSecOps Maturity Model)

In order to focus the attention of DevOps specialists on current security issues, the OWASP community has offered DSOMM (the DevSecOps Maturity Model). It helps harden DevOps strategies and, at the same time, shows how to apply them, priority by priority, to strengthen security.

The DevSecOps Maturity Model is represented by four axes: Dynamic Depth, Static Depth, Intensity, and Consolidation.

This model has four axes and five levels of maturity, where each axis determines maturity along its own dimension:

  • Static depth: how deeply static code analysis is carried out within the DevSecOps CI chain;
  • Dynamic depth: how deep the dynamic testing with security tools goes;
  • Intensity: how often security scanning is carried out in the CI pipeline;
  • Consolidation: how the results are processed.

Let us focus on the first two levels.

Level 1:

  • Static depth — initializing a static analysis tool as it is, with no changes in the tool and its settings;
  • Dynamic depth — initializing DAST tools as they are, with default settings;
  • Intensity — scanning is carried out in the main branch of the repository, even with a low frequency;
  • Consolidation — optional, can be skipped.

Level 1 to Level 2 transition may take a while and may require close cooperation with the development team. 

Level 2:

  • Static depth — initializing SAST and secret scanning tools, with slight changes to the configuration;
  • Dynamic depth — initializing DAST tools with slight changes;
  • Intensity — try to perform scanning as frequently as possible;
  • Consolidation — try saving the tool results on the CI server.

You can find more details on every step here, and you can even try building your own matrix.

Choosing a tool for work

Since we do a lot of scanning, we have to find the tools that match our needs best. When choosing one, it makes sense to build a comparison table with the following criteria in mind:

  1. Support for the language and tool stack used.
  2. Free and open-source software (FOSS) or paid.
  3. Whether the tool allows you to ignore a found issue locally: if developers can mark a false positive right in the code, they don’t need a separate resource to flag it, which saves a lot of time on eliminating the issue.
  4. Whether the tool gives you control over the scope and depth of scanning.
  5. Whether the build can be stopped based on the scanning results (preferable).
  6. Whether the tool can store results as a file in a machine-readable format (JSON, CSV, XML) or provides an API. This helps exchange results with other systems in the organization.

This is an example of a comparison table entry, e.g. for the SCA scanner Safety:

  • Language: Python;
  • FOSS/Paid: Free;
  • Ignore issues: Yes;
  • Severity levels: No;
  • Fail on issues: Yes;
  • Results in a file: Yes.
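Such a table is easy to turn into code. The sketch below encodes Safety’s row from above plus one invented competitor, and filters candidates by must-have criteria:

```python
# Sketch of the tool-selection step: encode each candidate's comparison
# row and filter by must-have criteria. Only the Safety row reflects the
# table above; "HypotheticalScanner" is invented for contrast.

tools = [
    {"name": "Safety", "language": "Python", "foss": True,
     "ignore_issues": True, "severity": False, "fail_on_issues": True,
     "results_to_file": True},
    {"name": "HypotheticalScanner", "language": "Python", "foss": False,
     "ignore_issues": False, "severity": True, "fail_on_issues": True,
     "results_to_file": False},
]

MUST_HAVE = ("foss", "ignore_issues", "fail_on_issues", "results_to_file")

def shortlist(candidates, language):
    """Names of tools for the given language that satisfy every must-have."""
    return [t["name"] for t in candidates
            if t["language"] == language and all(t[k] for k in MUST_HAVE)]

print(shortlist(tools, "Python"))  # ['Safety']
```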

Before I proceed to the practical part, let me share some so-called commandments: rules necessary for successful DevSecOps:

  • Be in touch and coordinate your steps with the developers, QA, and Operations team. 
  • Do not stop the pipeline if you haven’t yet reached Level 3 or 4 of DevSecOps maturity.
  • If the work of a tool takes more than 10 minutes then do not use it in CI/CD pipeline.
  • Use separate jobs for each tool or scanning.
  • Implement scanning gradually (incrementally); otherwise, you are bound to drown in false positive results.
  • Do not buy tools without API or CLI.
  • It’s better to buy tools that can be used under per-user licensing. 
  • Make sure that the tool is capable of performing incremental/basic scanning.
  • Feel confident to create your own custom rules of scanning. 
  • Strive to do everything through code – Everything As Code (EaC).
  • False positive vulnerability scanning results should also be in the form of code as this helps control the scope of scanning.
  • Add documentation references to the pipeline – in such a way you’ll be able to share your team’s expertise with other teams.

Conclusion

I’d like to briefly and simply sum up all the above:

  • DevSecOps is a separate role and a separate specialist in a company (in a perfect world). 
  • Before you start work, it’s worth asking yourself a lot of questions, as well as relying on DevSecOps solutions and practices that have already proven effective.
  • Different types of scanning (SCA, SAST, DAST): start with a simple one, include it in the pipeline, and do everything gradually.
  • Implementation of DevSecOps: Adding scanning is not really difficult but what is more important is who is going to be in charge of the process – from scanning to processing of the results and vulnerability fixing.
