As technology delivery teams have transitioned to a mode where they run the services they build, the needs, skillsets and expectations of those team members have changed as well.
In a previous world, security was handled only by specialty teams through cumbersome processes and was often thought of as red tape. In today's world, where DevOps principles are widely used, security is an upfront requirement and needs to be ingrained in the delivery process, writes Michael Stahnke, VP of Platform, CircleCI.
This is required because we're shipping more often, and when the people building the technology are the people operating it, there are no traditional handoff points or external quality gates to pass through.
In most cases, the lack of red tape and gates is seen as freeing — but it also creates room to accidentally skip steps that matter to the quality of your delivery. The good news, though, is that by building security testing and checks into your delivery practices, the right thing is the easy thing and thus simply gets done.
Some people think the practice of automating security validation within CI/CD is some special breed of DevOps called DevSecOps. I’m not a big fan of that nomenclature because security is one important aspect of quality software delivery. There are many others — and they all have a significant impact on the delivery process and user experience. To figure out what to put in your delivery process via CI/CD pipelines, you need to know what you have and what your threat models are. Here’s what experience has taught me.
Create a digital asset registry
A good first step, this can be something as simple as a flat file or spreadsheet that keeps track of services, libraries, accounts and tools. For each item, record who is responsible for rotating credentials, who the primary contact is, and what the contract terms are (simple ones like expiration dates or number of seats purchased).
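A registry entry doesn't need to be sophisticated. As a minimal sketch, assuming a simple in-memory structure (all field names, assets and contacts here are hypothetical), it might look like:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AssetEntry:
    """One row of a digital asset registry (hypothetical schema)."""
    name: str               # service, library, account or tool
    owner: str              # who is responsible for rotating credentials
    contact: str            # primary contact for the item
    contract_expires: date  # simple contract term: expiration date
    seats: int = 0          # seats purchased, if applicable

def expiring_soon(registry, today, within_days=30):
    """Return entries whose contract expires within the given window."""
    return [e for e in registry
            if 0 <= (e.contract_expires - today).days <= within_days]

registry = [
    AssetEntry("logging-saas", "platform-team", "alice@example.com",
               date(2024, 7, 1), seats=25),
    AssetEntry("tls-cert-vendor", "sre-team", "bob@example.com",
               date(2025, 1, 15)),
]
```

Even something this small gives you a single place to ask "who owns this, and when does it lapse?" — and a query like `expiring_soon` can run in a scheduled pipeline.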
Once you have this, you have some accountability for the supply chain you’re building on top of. This is critical and often overlooked. Engineers want to jump into what they’re building, not what they chose to work with to build upon. However, from a security standpoint it’s at least as likely that a library or tool you’re using has a security issue as that your developers have created a security concern.
With a digital asset registry in hand, figure out your validation procedures. That can mean running a pipeline periodically, always applying the latest updates, or subscribing to security notification mailing lists. If any of those validation methods fit well into your CI system, put them there. For example, many firms will automatically test against the latest versions of their third-party libraries. That way they know the latest security updates are validated and ready to go at a moment's notice, or simply picked up when the next deployment happens.
Another option for supply chain validation is to lock or pin every library, package and tool you use in your build and production systems to a specific version, with complete determinism and reproducibility as the goal. This seems like a good idea until you really try to accomplish it. While it is very possible to do, it's complex and requires quantifying a lot of moving parts.
How many packages are in an OS image? Hundreds to thousands. How many packages are in a software library? Dozens to hundreds, quite likely, depending on the ecosystem. If you run `docker pull`, do you get the latest version? If you do it 10 minutes later, is it the same version?
These types of concerns are huge challenges in our complex software world, where software is built on top of others' release processes. In most cases, it's easier and more efficient to have validation running regularly that pulls in the latest changes, or to have a pipeline run when a change is detected in a key component. Either way, you need to know that your software works with the latest versions and that the latest versions of all the building blocks are secure.
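Whichever side of the pin-versus-latest trade-off you land on, it helps to know which dependencies are actually pinned. A rough sketch, assuming a pip-style requirements format (the example entries are made up), that flags floating versions:

```python
import re

def unpinned(requirement_lines):
    """Return requirement lines that are not pinned to an exact version.

    Treats 'pkg==1.2.3' as pinned; anything else ('pkg', 'pkg>=1.0',
    'pkg~=2.1') counts as floating. Comments and blanks are skipped.
    """
    floating = []
    for line in requirement_lines:
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        if not re.match(r"^[A-Za-z0-9._-]+\s*==\s*[\w.]+$", line):
            floating.append(line)
    return floating

reqs = [
    "requests==2.31.0",   # pinned: reproducible
    "flask>=2.0",         # floating: silently picks up new releases
    "pyyaml",             # floating: completely unpinned
]
```

Running a check like this in CI turns the pinning policy from a convention into an enforced property of the build.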
Then you get into validation of the software your team is authoring. Write your tests. Security testing is a lot like every other type of testing. There are tests that work like unit tests (fuzzers, static analysis tools) and more system-level tests (port probing, OWASP Top 10 checks, pen testing tools, and more). Spend time evaluating the entry points and threat vectors of what you're authoring. This is also a good time to bring in security specialists to evaluate or help author the tests you'll be running on a very regular basis.
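As a tiny example of the fuzz-style end of that spectrum, here is a sketch (the `sanitize_filename` function and its rules are hypothetical) that hammers an input sanitizer with random strings and checks invariants rather than specific outputs:

```python
import random
import string

def sanitize_filename(name: str) -> str:
    """Hypothetical sanitizer: strip path separators, nulls and traversal."""
    cleaned = name.replace("/", "_").replace("\\", "_").replace("\0", "")
    while ".." in cleaned:
        cleaned = cleaned.replace("..", ".")
    return cleaned or "unnamed"

def fuzz_sanitizer(trials=1000, seed=42):
    """Throw random inputs at the sanitizer; assert security invariants."""
    rng = random.Random(seed)
    alphabet = string.printable + "\0"
    for _ in range(trials):
        raw = "".join(rng.choice(alphabet)
                      for _ in range(rng.randint(0, 40)))
        out = sanitize_filename(raw)
        # Invariants: no separators, no traversal, never empty.
        assert "/" not in out and "\\" not in out
        assert ".." not in out
        assert out
    return trials
```

The point is the shape of the test: you assert properties that must hold for *any* input, which is exactly what a fuzzer automates at larger scale.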
At this point, it's good to recognize that this testing is a software component in and of itself and will require maintenance and focus from developers to keep operating at its best. This is where leaders need to show an understanding that processes, too, require maintenance.
Automate for deployment's sake
From the validation of the supply chain to the software being authored by the first party, you move into the world of operations and the infrastructure that is running the software. First and foremost, you need automation to ensure that deployments and updates can happen easily, quickly and consistently. The consistency portion of automation is often overlooked, but provides huge amounts of value when different team members are supporting an emergency deployment or update.
From here, you have some type of artifact to be built. In many cases it’s a container image, but it could be an operating system package, a library to install with a software runtime environment or simply a tarball. Here is where you can validate that final artifact.
Scan for threats
Tools like Snyk can help you scan the final artifact for vulnerabilities. Did you already check for vulnerabilities earlier? Yes. Should you do it again after building an artifact? Also yes. Earlier it was more about the building blocks and environment you start with; starting from something clean is ideal. Since then you've added first-party code (and probably more third-party code). You've probably also modified the container or bundle you're building. Vulnerability scanning, dependency scanning, and other non-runtime scans are ideal at this stage. They are also normally quite fast.
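Conceptually, this kind of scan cross-references the components in your artifact against known advisories. A toy sketch of the idea (the advisory data and component versions are made up; real tools like Snyk maintain these databases for you):

```python
def parse_version(v):
    """Turn '1.2.3' into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical advisory feed: package -> first fixed version.
ADVISORIES = {
    "openssl": "3.0.8",
    "libxml2": "2.10.4",
}

def scan_artifact(components):
    """Report components older than the first fixed version."""
    findings = []
    for name, version in components.items():
        fixed = ADVISORIES.get(name)
        if fixed and parse_version(version) < parse_version(fixed):
            findings.append(
                f"{name} {version} is vulnerable; upgrade to {fixed}")
    return findings

# Components discovered in a (hypothetical) built image.
image_components = {"openssl": "3.0.2", "libxml2": "2.10.4",
                    "zlib": "1.2.13"}
```

Because it only inspects metadata, a scan like this can run on every build without meaningfully slowing the pipeline down.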
Your automation should integrate with the wide variety of security tools available to manage secrets, keys, one-time-use items, and other credentials. With the advent of these cloud-based tools, great security practices with real-world protection can be set up and stay secure with minimal maintenance over a long period of time.
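One low-tech piece of that puzzle: never bake credentials into the artifact; inject them at runtime and fail fast when they are missing. A minimal sketch, assuming environment-variable injection (the variable names are placeholders):

```python
import os

def require_secret(var_name):
    """Fetch a secret from the environment, failing loudly if absent.

    Keeping secrets out of code and images means the artifact itself
    can be scanned and shared without leaking credentials.
    """
    value = os.environ.get(var_name)
    if not value:
        raise RuntimeError(f"missing required secret: {var_name}")
    return value
```

Failing at startup, rather than at first use, turns a misconfigured deployment into an obvious, immediate error instead of a latent one.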
Validate in production
Once you have your software in production — maybe it’s running in a container, or many containers, or on an application platform or some serverless deployment — the next step is to validate it as it’s running. This phase of the software lifecycle should always include testing.
What if an operator leaves a port open, or changes permissions on something to debug and forgets to change it back? These are the tests that should run. For many years people said they didn't test in production, but they did. They usually just called it monitoring. Either way, you have a state you want to see, and you check for it. If it's not the state you expect, it's a defect in some way. When a defect is found, start back at supply chain validation and push through. That's the power of automating those security checkpoints and being able to repeat them quite quickly.
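The open-port example can be checked directly from a test. A sketch (the host and allowlist are illustrative) that probes a set of ports and flags anything open that isn't expected:

```python
import socket

def port_is_open(host, port, timeout=0.5):
    """Return True if a TCP connect to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def unexpected_open_ports(host, ports_to_probe, allowed):
    """Probe ports and return the open ones not on the allowlist."""
    return sorted(p for p in ports_to_probe
                  if p not in allowed and port_is_open(host, p))
```

Run on a schedule against production hosts, a check like this catches the forgotten debug port before an attacker does; the desired state is the allowlist, and anything else is a defect.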
To recap:
- Start with ownership: lay a foundation by cataloguing your software supply chain assets and areas of concern, then validate those.
- Your process validates your base operating environment, e.g. OS, Docker Images, libraries, etc.
- Your process validates the software your team authors.
- Your process validates the artifact that contains all of those things.
- Finally, production validation.
Validation is continuous if you want to succeed at weaving security into modern software delivery practice. It's also a practice that requires maintenance and revisiting as the team learns more about existing threats, bugs found along the way, or new and better ways to continuously integrate. Happy building.