Unpacking the Power of Policy at Scale in Anchore

Generating a software bill of materials (SBOM) is starting to become common practice. Is your organization using them to their full potential? Here are a few questions Anchore can help you answer with SBOMs and the power of our policy engine:

  • How far off are we from meeting the security requirements that Iron Bank, NIST, CIS, and DISA put out around container images?
  • How can I standardize the way our developers build container images to improve security without disrupting the development team’s output?
  • How can I best prioritize this endless list of security issues for my container images?
  • I’m new to containers. Where do I start on securing them?

If any of those questions still need answering at your organization and you have five minutes, you’re in the right place. Let’s dive in.

If you’re reading this you probably already know that Anchore creates developer tools to generate SBOMs, and has been building them since 2016. Beyond just SBOM generation, Anchore truly shines when it comes to its policy capabilities. Every company operates differently — some need to meet strict compliance standards while others are focused on refining their software development practices for enhanced security. No matter where you’re at in your container security journey today, Anchore’s policy framework can help improve your security practices.

Anchore Enterprise takes a tailored approach to policy and enforcement: whether you’re a healthcare provider abiding by stringent regulations or a startup eager to fortify its digital defenses, Anchore has you covered. Our granular controls allow teams to craft policies that align perfectly with their security goals.

Exporting Policy Reports with Ease

Anchore also has a nifty command line tool called anchorectl that allows you to grab SBOMs and policy results related to those SBOMs. There are a lot of cool things you can do with a little bit of scripting and all the data that Anchore Enterprise stores. We are going to cover one example in this blog.

Once Anchore has created and stored an SBOM for a container image, you can quickly get policy results related to that image. The following anchorectl command evaluates an image against the docker-cis-benchmark policy bundle:

anchorectl image details <image-id> -p docker-cis-benchmark

That command will return the policy result in a few seconds. Let’s say your organization develops 100 images and you want to meet the CIS benchmark standard. You wouldn’t want to assess each of these images individually; that sounds exhausting.

To solve this problem, we have created a script that can iterate over any number of images, merge the results into a single policy report, and export that into a csv file. This allows you to make strategic decisions about how you can most effectively move towards compliance with the CIS benchmark (or any standard).
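To illustrate the idea, here is a minimal sketch of what such a script could look like. This is not the actual policy-report.py: the anchorectl output flag and the JSON field names (evaluations, gate, action) are assumptions for illustration only.

```python
import csv
import json
import subprocess

def merge_findings(image, evaluation):
    """Flatten one image's policy evaluation into CSV-ready rows."""
    return [{"image": image, **finding} for finding in evaluation.get("evaluations", [])]

def build_report(images, policy="docker-cis-benchmark", out_path="aggregated_report.csv"):
    """Evaluate each image against a policy bundle and merge results into one CSV."""
    rows = []
    for image in images:
        # Assumes anchorectl is installed and configured; flags may differ by version
        result = subprocess.run(
            ["anchorectl", "image", "details", image, "-p", policy, "-o", "json"],
            capture_output=True, text=True, check=True,
        )
        rows.extend(merge_findings(image, json.loads(result.stdout)))
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=sorted({k for r in rows for k in r}))
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)
```

Pointing build_report() at your own image list produces one merged CSV that you can sort and filter to prioritize fixes.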

In this example, we ran the script against 30 images in our Anchore deployment. Now we can take a holistic look at how far off we are from CIS compliance. Here are a few metrics that stand out:

  • 26 of the 30 images are running as ‘root’
  • 46.9% of our total vulnerabilities have fixes available (4,978 of 10,611)
  • ADD instructions are being used in 70% of our images
  • Health checks are missing in 80% of our images
  • 14 secrets (all from the same application team)
  • 1 malware hit (Cryptominer Casey is at it again)

As a security team member, I didn’t write any of this code myself, which means I need to work with my developer colleagues on the product and application teams to clear up these security issues. Usually this means an email that educates my colleagues on how to utilize health checks, prefer COPY over ADD in Dockerfiles, declare a non-privileged user instead of root, and upgrade packages with fixes available (e.g., via Dependabot). Finally, I would prioritize investigating how that malware made its way into that image.

This example illustrates how storing SBOMs and applying policy rules against them at scale can streamline your path to your container security goals.

Visualizing Your Aggregated Policy Reports

While this raw data is useful in and of itself, there are times when you may want to visualize the data in a way that is easier to understand.  While Anchore Enterprise does provide some dashboarding capabilities, it is not and does not aim to be a versatile dashboarding tool. This is where utilizing an observability vendor comes in handy.

In this example, I’ll be using New Relic as they provide a free tier that you can sign up for and begin using immediately. However, other providers such as Datadog and Grafana would also work quite well for this use case. 

Importing your Data

  1. Download the tsv-to-json.py script
  2. Save the data produced by the policy-report.py script as a TSV file
    • We use TABs as a separator because commas are used in many of the items contained in the report.
  3. Run the tsv-to-json.py script against the TSV file:
python3 tsv-to-json.py aggregated_output.tsv > test.json
  4. Sign up for a New Relic account here
  5. Find your New Relic Account ID and License Key
    • Your New Relic Account ID can be seen in your browser’s address bar upon logging in to New Relic, and your New Relic License Key can be found on the right hand side of the screen upon initial login to your New Relic account.
  6. Use curl to push the data to New Relic:
gzip -c test.json | curl \
-X POST \
-H "Content-Type: application/json" \
-H "Api-Key: <YOUR_NEWRELIC_LICENSE_KEY>" \
-H "Content-Encoding: gzip" \
https://insights-collector.newrelic.com/v1/accounts/<YOUR_NEWRELIC_ACCOUNT_ID>/events \
--data-binary @-

Visualizing Your Data

New Relic uses the New Relic Query Language (NRQL) to perform queries and render charts based on the resulting data set.  The tsv-to-json.py script you ran earlier converted your TSV file into a JSON file compatible with New Relic’s event data type.  You can think of each collection of events as a table in a SQL database.  The tsv-to-json.py script will automatically create an event type for you, combining the string “Anchore” with a timestamp.

To create a dashboard in New Relic containing charts, you’ll need to write some NRQL queries.  Here is a quick example:

FROM Anchore1698686488 SELECT count(*) FACET severity

This query will count the total number of entries in the event type named Anchore1698686488 and group them by the associated vulnerability’s severity. You can experiment with creating your own, or start by importing a template we have created for you here.

Wrap-Up

The security data that your tools create is only as good as the insights you are able to derive from it. In this blog post, we covered a way to help security practitioners turn a mountain of security data into actionable and prioritized security insights that can help your organization improve its security posture and meet compliance standards more quickly. That said, this workflow does assume you are already an Anchore Enterprise customer.

Looking to learn more about how to utilize a policy-based security posture to meet DoD compliance standards like cATO or CMMC? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories. Get caught up with the content below:

Automated Policy Enforcement for CMMC with Anchore Enterprise

The Cyber Maturity Model Certification (CMMC) is an important program to harden the cybersecurity posture of the defense industrial base. Its purpose is to validate that appropriate safeguards are in place to protect controlled unclassified information (CUI). Many of the organizations that are required to comply with CMMC are Anchore customers. They have the responsibility to protect the sensitive but unclassified data of US military and government agencies as they support the various missions of the United States.

CMMC 2.0 Levels

  • Level 1 Foundation: Safeguard federal contract information (FCI); not critical to national security.
  • Level 2 Advanced:  This maps directly to NIST Special Publication (SP) 800-171. Its primary goal is to ensure that government contractors are properly protecting controlled unclassified information (CUI).
  • Level 3 Expert: This maps directly to NIST Special Publication (SP) 800-172. Its primary goal is to go beyond the base-level security requirements defined in NIST 800-171. NIST 800-172 provides security requirements that specifically defend against advanced persistent threats (APTs).

This is of critical importance as these organizations leverage commonplace DevOps tooling to build their software. Additionally, these large organizations may be working with smaller subcontractors or suppliers who are building software in tandem or partnership.

For example, imagine a mega-defense contractor working alongside a small mom-and-pop shop to develop software for a classified government system. This raises several questions:

  1. How can my company, as a mega-defense contractor, validate that software built by my partner is not using blacklisted software packages?
  2. How can my company validate software supplied to me is free of malware?
  3. How can I validate that the software supplied to me is in compliance with licensing standards and vulnerability compliance thresholds of my security team?
  4. How do I validate that the software I’m supplying is compliant not only against NIST 800-171 and CMMC, but against the compliance standards of my government end user (Such as NIST 800-53 or NIST 800-161)?

Validating Security between DevSecOps Pipelines and Software Supply Chain

At any major or small contractor alike, everyone has taken steps to build internal DevSecOps (DSO) pipelines. However, the defense industrial base (DIB) commonly involves daily relationships in which smaller defense contractors supply software to a larger defense contractor for a program or DSO pipeline that consumes and implements that software. With Anchore Enterprise, we can now validate whether that supplied software is compliant with CMMC controls as specified in NIST 800-171.

Looking to learn more about how to achieve CMMC Level 2 or NIST 800-171 compliance? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories.

Which Controls does Anchore Enterprise Automate?

3.1.7 – Restrict Non-Privileged Users and Log Privileged Actions

Related NIST 800-53 Controls: AC-6 (10)

Description: Prevent non-privileged users from executing privileged functions and capture the execution of such functions in audit logs. 

Implementation: Anchore Enterprise can scan container manifests to determine whether the user is being given root privileges and apply an automated policy to prevent non-compliant containers from entering a runtime environment. This prevents a scenario where any privileged functions can be utilized in a runtime environment.

3.4.1 – Maintain Baseline Configurations & Inventories

Related NIST 800-53 Controls: CM-2(1), CM-8(1), CM-6

Description: Establish and maintain baseline configurations and inventories of organizational systems (including hardware, software, firmware, and documentation) throughout the respective system development life cycles.

Implementation: Anchore Enterprise provides a centralized inventory of all containers and their associated manifests at each stage of the development pipeline. All manifests, images and containers are automatically added to the central tracking inventory so that a complete list of all artifacts of the build pipeline can be tracked at any moment in time.

3.4.2 – Enforce Security Configurations

Related NIST 800-53 Controls: CM-2 (1) & CM-8(1) & CM-6

Description: Establish and enforce security configuration settings for information technology products employed in organizational systems.

Implementation: Anchore Enterprise scans all container manifest files for security configurations and publishes found vulnerabilities to a centralized database that can be used for monitoring, ad-hoc reporting, alerting and/or automated policy enforcement.

3.4.3 – Monitor and Log System Changes with Approval Process

Related NIST 800-53 Controls: CM-3

Description: Track, review, approve or disapprove, and log changes to organizational systems.

Implementation: Anchore Enterprise provides a centralized dashboard that tracks all changes to applications which makes scheduled reviews simple. It also provides an automated controller that can apply policy-based decision making to either automatically approve or reject changes to applications based on security rules.

3.4.4 – Run Security Analysis on All System Changes

Related NIST 800-53 Controls: CM-4

Description: Analyze the security impact of changes prior to implementation.

Implementation: Anchore Enterprise can scan changes to applications for security vulnerabilities during the build pipeline to determine the security impact of the changes.

3.4.6 – Apply Principle of Least Functionality

Related NIST 800-53 Controls: CM-7

Description: Employ the principle of least functionality by configuring organizational systems to provide only essential capabilities.

Implementation: Anchore Enterprise can scan all applications to ensure that they are uniformly applying the principle of least functionality to individual applications. If an application does not meet this standard then Anchore Enterprise can be configured to prevent an application from being deployed to a production environment.

3.4.7 – Limit Use of Nonessential Programs, Ports, and Services

Related NIST 800-53 Controls: CM-7(1), CM-7(2)

Description: Restrict, disable, or prevent the use of nonessential programs, functions, ports, protocols, and services.

Implementation: Anchore Enterprise can be configured as a gating agent that will scan for specific security violations and prevent these applications from being deployed until the violations are remediated.

3.4.8 – Implement Blacklisting and Whitelisting Software Policies

Related NIST 800-53 Controls: CM-7(4), CM-7(5)

Description: Apply deny-by-exception (blacklisting) policy to prevent the use of unauthorized software or deny-all, permit-by-exception (whitelisting) policy to allow the execution of authorized software.

Implementation: Anchore Enterprise can be configured as a gating agent that will apply a security policy to all scanned software. The policies can be configured in a black- or white-listing manner.

3.4.9 – Control and Monitor User-Installed Software

Related NIST 800-53 Controls: CM-11

Description: Control and monitor user-installed software.

Implementation: Anchore Enterprise scans all software in the development pipeline and records all user-installed software. The scans can be monitored in the provided dashboard. User-installed software can be controlled (allowed or denied) via the gating agent.

3.5.10 – Store and Transmit Only Cryptographically-Protected Passwords

Related NIST 800-53 Controls: IA-5(1)

Description: Store and transmit only cryptographically-protected passwords.

Implementation: Anchore Enterprise can scan for plain-text secrets in build artifacts and prevent exposed secrets from being promoted to the next environment until the violation is remediated. This prevents unauthorized storage or transmission of unencrypted passwords or secrets. See screenshot below to see this protection in action.

3.11.2 – Scan for Vulnerabilities

Related NIST 800-53 Controls: RA-5, RA-5(5)

Description: Scan for vulnerabilities in organizational systems and applications periodically and when new vulnerabilities affecting those systems and applications are identified.

Implementation: Anchore Enterprise is designed to scan all systems and applications for vulnerabilities continuously and alert when any changes introduce new vulnerabilities. See screenshot below to see this protection in action.

3.11.3 – Remediate Vulnerabilities Respective to Risk Assessments

Related NIST 800-53 Controls: RA-5, RA-5(5)

Description: Remediate vulnerabilities in accordance with risk assessments.

Implementation: Anchore Enterprise can be tuned to allow or deny changes based on a risk scoring system.

3.12.2 – Implement Plans to Address System Vulnerabilities

Related NIST 800-53 Controls: CA-5

Description: Develop and implement plans of action designed to correct deficiencies and reduce or eliminate vulnerabilities in organizational systems.

Implementation: Anchore Enterprise automates the process of ensuring all software and systems are in compliance with the security policy of the organization. 

3.13.4 – Block Unauthorized Information Transfer via Shared Resources

Related NIST 800-53 Controls: SC-4

Description: Prevent unauthorized and unintended information transfer via shared system resources.

Implementation: Anchore Enterprise can be configured as a gating agent that will scan for unauthorized and unintended information transfer and prevent violations from being transferred between shared system resources until the violations are remediated.

3.13.8 – Use Cryptography to Safeguard CUI During Transmission

Related NIST 800-53 Controls: SC-8

Description: Transmission Confidentiality and Integrity: Implement cryptographic mechanisms to prevent unauthorized disclosure of CUI during transmission unless otherwise protected by alternative physical safeguards.

Implementation: Anchore Enterprise can be configured as a gating agent that will scan for CUI and prevent violations of organization defined policies regarding CUI from being disclosed between systems.

3.14.5 – Periodically Scan Systems and Real-time Scan External Files

Related NIST 800-53 Controls: SI-2

Description: Perform periodic scans of organizational systems and real-time scans of files from external sources as files are downloaded, opened, or executed.

Implementation: Anchore Enterprise can be configured to scan all external dependencies that are built into software and provide information about relevant security vulnerabilities in the software development pipeline. See screenshot below to see this protection in action.

Wrap-Up

In a world increasingly defined by software solutions, the cybersecurity posture of defense-related industries stands paramount. The CMMC, a framework with its varying levels of compliance, underscores the commitment of the defense industrial base to fortify its cyber defenses. 

As a multitude of organizations, ranging from the largest defense contractors to smaller mom-and-pop shops, work in tandem to support U.S. missions, the intricacies of maintaining cybersecurity standards grow. The questions posed exemplify the necessity to validate software integrity, especially in complex collaborations. 

Anchore Enterprise solves these problems by automating software supply chain security best practices. It not only automates a myriad of crucial controls, ranging from user privilege restrictions to vulnerability scanning, but it also empowers organizations to meet and exceed the benchmarks set by CMMC and NIST. 

In essence, as defense entities navigate the nuanced web of software development and partnerships, tools like Anchore Enterprise are indispensable in safeguarding the nation’s interests, ensuring the integrity of software supply chains, and championing the highest levels of cybersecurity.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

The Power of Policy-as-Code for the Public Sector

As the public sector and businesses face unprecedented security challenges in light of software supply chain breaches and the move to remote, and now hybrid, work, the time for policy-as-code is now.

Here’s a look at the current and future states of policy-as-code and the potential it holds for security and compliance in the public sector:

What is Policy-as-Code?

Policy-as-code is the practice of writing code to manage the policies you create for container security and other related security concerns. Your IT staff can automate those policies to support policy compliance throughout your DevSecOps toolchain and production systems. Programmers express policies as code in a high-level language and store them in text files.
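As a toy illustration (a hypothetical rule format, not any particular vendor’s schema), a policy stored as code in a version-controlled text file might look like:

```python
# policy.py -- a deliberately simple, hypothetical policy-as-code file.
# Real frameworks use richer schemas, but the idea is the same: rules live
# in plain text, so they can be diffed, reviewed, and automated.
MAX_CRITICAL_VULNS = 0
ALLOWED_BASE_IMAGES = {"registry.example.com/hardened/ubi9"}

def is_compliant(image):
    """image is a dict like {"base": ..., "critical_vulns": ...}."""
    return (
        image["base"] in ALLOWED_BASE_IMAGES
        and image["critical_vulns"] <= MAX_CRITICAL_VULNS
    )
```

Because the policy is just a text file, a change to ALLOWED_BASE_IMAGES shows up as an ordinary diff in a pull request rather than an undocumented GUI tweak.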

Your agency is most likely getting exposure to policy-as-code through cloud services providers (CSPs). Amazon Web Services (AWS) offers policy-as-code via the AWS Cloud Development Kit. Microsoft Azure supports policy-as-code through Azure Policy, a service that provides both built-in and user-defined policies across categories that map the various Azure services such as Compute, Storage, and Azure Kubernetes Services (AKS).

Benefits of Policy-as-Code

Here are some benefits your agency can realize from policy-as-code:

  • Capturing information and logic about your security and compliance policies as code removes the risk of “oral history,” where sysadmins may or may not pass down policy information to their successors during a contract transition.
  • When you render security and compliance policies as code in plain text files, you can use various DevSecOps and cloud management tools to automate the deployment of policies into your systems.
  • Guardrails for your automated systems: as your agency moves to the cloud, your number of automated systems only grows. A responsible growth strategy is to protect your automated systems from performing dangerous actions, and policy-as-code is a suitable method for verifying their activities.
  • A longer-term goal would be to manage your compliance and security policies in your version control system of choice with all the benefits of history, diffs, and pull requests for managing software code.
  • You can now test policies with automated tools in your DevSecOps toolchain.

Public Sector Policy Challenges

As your agency moves to the cloud, it faces new challenges with policy compliance while adjusting to novel ways of managing and securing IT infrastructure:

Keeping Pace with Government-Wide Compliance & Cloud Initiatives

FedRAMP compliance has become a domain specialty unto itself. While the United States federal government maintains control over the policies behind FedRAMP and any future updates and changes, FedRAMP compliance has become its own industry, with specialized consultants and toolsets that promise to get an agency’s cloud application through the FedRAMP approval process.

As government cloud initiatives such as Cloud Smart become more important, the more your agency can automate the management and testing of security policies, the better. Automation reduces human error because it does away with the manual and tedious management and testing of security policies.

Automating Cloud Migration and Management

Large cloud initiatives bring with them the automation of cloud migration and management. Cloud-native development projects that accompany cloud initiatives need to consider continuous compliance and security solutions to protect their software containers.

Maintaining Continuous Transparency and Accountability

Continuous transparency is fundamental to FedRAMP and other government compliance programs. Automation and reporting are two fundamental building blocks. The stakes for reporting are only going to increase as the mandates of the Executive Order on Improving the Nation’s Cybersecurity become reality for agencies.

Achieving continuous transparency and accountability requires that an enterprise have the right tools, processes, and frameworks in place to monitor, report, and manage employee behaviors throughout the application delivery life cycle.

Securing the Agency Software Supply Chain

Government agencies are multi-vendor environments with heterogeneous IT infrastructure, including cloud services, proprietary tools, and open source technologies. The recent release of the Container Platform SRG is going to drive more requirements for the automation of container security across Department of Defense (DoD) projects.

Looking to learn more about how to utilize a policy-based security posture to meet DoD compliance standards like cATO or CMMC? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories. Get caught up with the content below:

Policy-as-Code: Current and Future States

The future of policy-as-code in government could go in two directions. The same technology principles that apply to technology and security policies can also render any government policy as code. An example is the work that 18F is prototyping for SNAP (Supplemental Nutrition Assistance Program) food stamp program eligibility.

Policy-as-code can also serve as another automation tool for FedRAMP and Security Technical Implementation Guide (STIG) testing as more agencies move their systems to the cloud. Look for the backend tools that can make this happen gradually to improve over the next few years.

Managing Cultural and Procurement Barriers

Compliance and security are integral elements of federal agency working life, whether it’s the DoD supporting warfighters worldwide or civilian government agencies managing constituent data to serve the American public better.

The concept of policy-as-code brings to mind being able to modify policy bundles on the fly and pushing changes into your DevSecOps toolchain via automation. While theoretically possible with policy-as-code in a DevSecOps toolchain, the reality is much different. Industry standards and CISO directives govern policy management at a much slower and measured cadence than the current technology stack enables.

APIs also enable you to integrate your policy-as-code solution into third-party tools such as Splunk and other operational support systems that your organization may already use as standards.

Automation

It’s best to avoid manual intervention for managing and testing compliance policies. Automation should be a top requirement for any policy-as-code solution, especially if your agency is pursuing FedRAMP or NIST certification for its cloud applications.

Enterprise Reporting

Internal and external compliance auditors bring with them varying degrees of reporting requirements. It’s essential to have a policy-as-code solution that can support a full range of reporting requirements that your auditors and other stakeholders may present to your team.

Enterprise reporting requirements range from customizable GUI reporting dashboards to APIs that enable your developers to integrate policy-as-code tools into your DevSecOps team’s toolchain.

Vendor Backing and Support

As your programs venture into policy compliance, failing a compliance audit can be a costly mistake. You want to choose a policy-as-code solution for your enterprise compliance requirements with a vendor behind it for technical support, service level agreements (SLAs), software updates, and security patches.

Vendor backing also matters for technical support: policy-as-code isn’t a technology to support using only your own internal IT staff (at least in the beginning).

With policy-as-code being a newer technology option, a fee-based solution backed by a vendor also gets you access to their product management. As a customer, you want a vendor that will give you access to their product roadmap so you can see what’s coming.

Interested to see how the preeminent DoD Software Factory Platform used a policy-based approach to software supply chain security in order to achieve a cATO and allow any DoD programs that built on their platform to do the same? Read our case study or watch our on-demand webinar with Major Camdon Cady.

Enforcing the DoD Container Image and Deployment Guide with Anchore Federal

The latest version of the DoD Container Image and Deployment Guide details technical and security requirements for container image creation and deployment within a DoD production environment. Sections 2 and 3 of the guide include security practices that teams must follow to limit the footprint of security flaws during the container image build process. These sections also discuss best security practices and correlate them to the corresponding security control family with Risk Management Framework (RMF) commonly used by cybersecurity teams across DoD.

Anchore Federal is a container scanning solution used to validate DoD compliance and security standards, such as continuous authorization to operate (cATO), across images, as explained in the DoD Container Hardening Process Guide. Anchore’s policy-first approach places policy where it belongs: at the forefront of the development lifecycle, to assess compliance and security issues in a shift-left approach. Scanning policies within Anchore are fully customizable based on specific mission needs, providing more in-depth insight into compliance irregularities that may exist within a container image. This level of granularity is achieved through specific security gates and triggers that generate automated alerts, allowing teams to enforce the best practices discussed in Section 2 of the Container Image and Deployment Guide as your developers build.

Anchore Federal uses a specific DoD scanning policy that enforces a wide array of gates and triggers that provide insight into the DoD Container Image and Deployment Guide’s security practices. For example, you can configure the Dockerfile gate and its corresponding triggers to monitor for security issues such as privileged access. You can also configure the Dockerfile gate to detect unauthorized exposed ports, validate that images are built from approved base images, and check for unauthorized disclosure of secrets and sensitive files, among others.
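To make the gate-and-trigger idea concrete, here is a deliberately simplified sketch of rule evaluation. The rule fields below are modeled loosely on Anchore’s gate/trigger/action concepts but are not the real policy bundle schema; consult the Anchore documentation for the actual JSON format.

```python
# Hypothetical, simplified Dockerfile-gate rules; real Anchore policy
# bundles are JSON documents with a richer schema.
RULES = [
    {"gate": "dockerfile", "trigger": "effective_user", "users": ["root"], "action": "stop"},
    {"gate": "dockerfile", "trigger": "exposed_ports", "ports": [22], "action": "warn"},
]

def evaluate_dockerfile(instructions, rules=RULES):
    """Return the actions fired for a list of (instruction, value) pairs."""
    fired = []
    for rule in rules:
        if rule["trigger"] == "effective_user":
            users = [v for k, v in instructions if k == "USER"]
            last_user = users[-1] if users else "root"  # images default to root
            if last_user in rule["users"]:
                fired.append(rule["action"])
        elif rule["trigger"] == "exposed_ports":
            exposed = [int(v) for k, v in instructions if k == "EXPOSE"]
            if any(p in rule["ports"] for p in exposed):
                fired.append(rule["action"])
    return fired
```

An image with no USER instruction trips the effective_user rule (it runs as root by default), while one that exposes port 22 raises only a warning; the "stop" action is what a gating agent uses to block promotion.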

Anchore Federal’s DoD scanning policy is already enabled to validate the detailed list of best practices in Section 2 of the Container Image and Deployment Guide.

Looking to learn more about how to achieve container hardening at DoD levels of security? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories.

Next Steps

Anchore Federal is a battle-tested solution that has been deployed to secure DoD’s most critical workloads. Anchore Federal exists to provide cleared professional services and software to DoD mission partners and the US Intelligence Community in building their DevSecOps environments. Learn more about how Anchore Federal supports DoD missions.

Benefits of Static Image Inspection and Policy Enforcement

In this post, I will dive deeper into the key benefits of a comprehensive container image inspection and policy-as-code framework.
A couple of key terms:

  • Comprehensive Container Image Inspection: Complete analysis of a container image to identify its entire contents: OS & non-OS packages, libraries, licenses, binaries, credentials, secrets, and metadata. Importantly: storing this information in a Software Bill of Materials (SBOM) for later use.
  • Policy-as-Code Framework: a structure and language for policy rule creation, management, and enforcement represented as code. Importantly: This allows for software development best practices to be adopted such as version control, automation, and testing.

What Exactly Comes from a Complete Static Image Inspection?

A deeper understanding. Container images are complex and require a complete analysis to fully understand all of their contents. The picture above shows all of the useful data an inspection can uncover. Some examples are:

  • Ports specified via the EXPOSE instruction
  • Base image / Linux distribution
  • Username or UID to use when running the container
  • Any environment variables set via the ENV instruction
  • Secrets or keys (ex. AWS credentials, API keys) in the container image filesystem
  • Custom configurations for applications (ex. httpd.conf for Apache HTTP Server)

In short, a deeper insight into what exactly is inside of container images allows teams to make better decisions on what configurations and security standards they would prefer their production software to have.
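As a rough sketch of what such an inspection surfaces from the Dockerfile alone (a real analysis, like Anchore's, also walks the image filesystem for packages, secrets, and configs), consider:

```python
# Simplified sketch of pulling the static facts listed above out of a
# Dockerfile. This toy parser ignores line continuations and the older
# "ENV key value" form; it is for illustration, not production use.

def inspect_dockerfile(text):
    facts = {"base_image": None, "user": None, "exposed_ports": [], "env": {}}
    for line in text.splitlines():
        parts = line.strip().split(None, 1)
        if len(parts) != 2:
            continue  # skip blanks, comments, and bare instructions
        instruction, args = parts[0].upper(), parts[1]
        if instruction == "FROM":
            facts["base_image"] = args
        elif instruction == "USER":
            facts["user"] = args
        elif instruction == "EXPOSE":
            facts["exposed_ports"].extend(args.split())
        elif instruction == "ENV" and "=" in args:
            key, _, value = args.partition("=")
            facts["env"][key] = value
    return facts

facts = inspect_dockerfile(
    "FROM ubuntu:22.04\nENV APP_MODE=prod\nEXPOSE 8080\nUSER app\n"
)
# facts["base_image"] == "ubuntu:22.04"; facts["exposed_ports"] == ["8080"]
```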

How to Use the Above Data in Context?

While we can likely agree that access to the above data for container images is a good thing from a visibility perspective, how can we use it effectively to produce higher-quality software? The answer is through policy management.

Policy management allows us to create and edit the rules we would like to enforce. Oftentimes these rules fall into one of three buckets: security, compliance, or best practice. Typically, a policy author creates sets of rules and describes the circumstances under which certain behaviors or properties are allowed. Unfortunately, authors are often restricted to setting policy rules with a GUI or even a Word document, which makes rules difficult to transfer, repeat, version, or test. Policy-as-code solves this by representing policies in human-readable text files, allowing them to adopt software practices such as version control, automation, and testing. Importantly, a policy-as-code framework includes a mechanism to enforce the rules created.

With containers, standardizing on a common set of best practices for software vulnerabilities, package usage, secrets management, Dockerfiles, and the like is an excellent place to start. Some examples of policy rules are:

  • Should all Dockerfiles have an effective USER instruction? Yes. If undefined, warn me.
  • Should the FROM instruction only reference a set of “trusted” base images? Yes. If not from the approved list, fail this policy evaluation.
  • Are AWS keys ever allowed inside of the container image filesystem? No. If they are found, fail this policy evaluation.
  • Are containers coming from DockerHub allowed in production? No. If they attempt to be used, fail this policy evaluation.

The above examples demonstrate how the Dockerfile analysis and secrets found during the image inspection can prove extremely useful when creating policy. Most importantly, all of these policy rules are created to map to information available prior to running a container.
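The four example rules above can be sketched as checks over that inspection output. Everything here (the field names, rule identifiers, and the trusted-base list) is hypothetical glue for illustration:

```python
# The four example policy rules above, expressed as checks over image
# inspection output. Field names and the trusted-base list are
# hypothetical; Anchore's real policy engine expresses these as
# gate/trigger rules rather than Python.

TRUSTED_BASES = {"registry.internal/ubi9", "registry.internal/alpine"}

def check_image(facts):
    findings = []
    if not facts.get("user"):
        findings.append(("dockerfile-user-undefined", "warn"))
    if facts.get("base_image") not in TRUSTED_BASES:
        findings.append(("untrusted-base-image", "fail"))
    if any("aws_access_key" in secret for secret in facts.get("secrets", [])):
        findings.append(("aws-keys-in-filesystem", "fail"))
    if facts.get("registry") == "docker.io":
        findings.append(("dockerhub-image", "fail"))
    return findings
```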

Integrating Policy Enforcement

With policy rules clearly defined as code and shared across multiple teams, the enforcement component can freely be integrated into the Continuous Integration / Continuous Delivery workflow. The concept of “shifting left” is important to follow here: the more testing and checks teams incorporate earlier in their software development pipelines, the less costly it is when changes need to be made. Simply put, prevention is better than a cure.

Integration as Part of a CI Pipeline

Incorporating container image inspection and policy rule enforcement into new or existing CI pipelines immediately adds security and compliance requirements as part of the build, blocking important security risks from ever making their way into production environments. For example, if a policy rule explicitly disallows a root user defined in the Dockerfile, failing the build pipeline of a non-compliant image before it is pushed to a production registry is a fundamental quality gate to implement. Developers will typically be forced to remediate the issue that caused the build failure and modify their commit to reflect compliant changes.

Below depicts how this process works with Anchore:

Anchore provides an API endpoint where the CI pipeline can send an image for analysis and policy evaluation. This provides simple integration into any workflow, agnostic of the CI system being used. When the policy evaluation is complete, Anchore returns a PASS or FAIL output based on the policy rules defined. From this, the user can choose whether or not to fail the build pipeline.
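A minimal sketch of that CI-side gate follows. The endpoint path and the final_action field are assumptions modeled loosely on Anchore's image check API, not a verified contract; the pure decision function is the portable part any CI system can reuse:

```python
# Hedged sketch of a CI gate step. The endpoint shape and "final_action"
# field are assumptions; consult the Anchore Enterprise API reference
# for the real contract.
import json
import urllib.request

def fetch_evaluation(api_base, image_digest):
    # Assumed endpoint shape: GET <api_base>/images/<digest>/check
    with urllib.request.urlopen(f"{api_base}/images/{image_digest}/check") as resp:
        return json.load(resp)

def gate_decision(evaluation, fail_build=True):
    """Map a PASS/FAIL policy evaluation to a CI step exit code."""
    status = str(evaluation.get("final_action", "FAIL")).upper()
    if status == "PASS" or not fail_build:
        return 0
    return 1  # a non-zero exit fails the pipeline step in most CI systems
```

Whether a FAIL evaluation actually breaks the build stays in the user's hands via the fail_build flag, matching the "the user can choose" behavior described above.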

Integration with Kubernetes Deployments

Adding an admission controller to gate execution of container images in Kubernetes in accordance with policy standards can be a critical method to validate what containers are allowed to run on your cluster. Very simply: admit the containers I trust, reject the ones I don’t. Some examples of this are:

  • Reject an image if it is being pulled directly from DockerHub.
  • Reject an image if it has high or critical CVEs that have fixes available.

This integration allows Kubernetes operators to enforce policy and security gates for any pod that is requested on their clusters before they even get scheduled.
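The admit-or-reject decision can be sketched as follows. This is a toy stand-in for what the real admission controller does by consulting Anchore's policy evaluation; the registry parsing and vulnerability fields are simplified assumptions:

```python
# Toy stand-in for the admission decision; the real Anchore Kubernetes
# Admission Controller asks Anchore Enterprise for a policy evaluation.
# Registry parsing and vulnerability fields here are simplified assumptions.

BLOCKED_REGISTRIES = {"docker.io"}

def registry_of(image_ref):
    # The first path component counts as a registry only when it looks
    # like a hostname; otherwise Docker's default registry is implied.
    first, sep, _ = image_ref.partition("/")
    if sep and ("." in first or ":" in first):
        return first
    return "docker.io"

def admit(image_ref, vulns):
    """Admit the containers we trust, reject the ones we don't."""
    if registry_of(image_ref) in BLOCKED_REGISTRIES:
        return False, "image pulled directly from DockerHub"
    if any(v["severity"] in ("High", "Critical") and v.get("fix_available")
           for v in vulns):
        return False, "high or critical CVEs with fixes available"
    return True, "admitted"
```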

Below depicts how this process works with Anchore and the Anchore Kubernetes Admission Controller:

The key takeaway from both of these points of integration is that they are occurring before ever running a container image. Anchore provides users with a full suite of policy checks which can be mapped to any detail uncovered during the image inspection. When discussing this with customers, we often hear, “I would like to scan my container images for vulnerabilities.” While this is a good first step to take, it is the tip of the iceberg when it comes to what is available inside of a container image.

Conclusion

With immutable infrastructure, once a container image artifact is created, it does not change. To make changes to the software, good practice tells us to build a new container image, push it to a container registry, kill the existing container, and start a new one. As explained above, an inspection gathers a wealth of useful static information about a container, so another good practice is to use this information as soon as it is available and wherever it makes sense in the development workflow. The more policies that can be created and enforced as code, the faster and more effectively IT organizations will be able to deliver secure software to their end customers.

Looking to learn more about how to utilize a policy-based security posture to meet DoD compliance standards like cATO or CMMC? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories. Get caught up with the content below:

A Policy Based Approach to Container Security & Compliance

At Anchore, we take a preventative, policy-based compliance approach, specific to organizational needs. Our philosophy of scanning and evaluating Docker images against user-defined policies as early as possible in the development lifecycle greatly reduces the chance of vulnerable, non-compliant images making their way into trusted container registries and production environments.

But what do we mean by ‘policy-based compliance’? And what are some of the best practices organizations can adopt to help achieve their own compliance needs? In this post, we will first define compliance and then cover a few steps development teams can take to help to bolster their container security.

An Example of Compliance

Before we define ‘policy-based compliance’, it helps to gain a solid understanding of what compliance means in the world of software development. Generally speaking, compliance is a set of standards for recommended security controls laid out by a particular agency or industry that an application must adhere to. An example of such an agency is the National Institute of Standards and Technology, or NIST. NIST is a non-regulatory government agency that develops technology, metrics, and standards to drive innovation and economic competitiveness at U.S.-based organizations in the science and technology industry. Companies providing products and services to the federal government are oftentimes required to meet the security mandates set by NIST. An example of one of these documents is NIST SP 800-218, the Secure Software Development Framework (SSDF), which specifies the security controls necessary to ensure a software development environment is secure and produces secure code.

What do we mean by ‘Policy-based’?

Now that we have a definition and an example, we can begin to discuss the role policy plays in achieving compliance. In short, policy-based compliance means adhering to a set of compliance requirements via customizable rules defined by a user. In some cases, security software tools will contain a policy engine that allows development teams to create rules that correspond to a particular security concern addressed in a compliance publication.

How can Organizations Achieve Compliance in Containerized Environments?

Here at Anchore, our focus is helping organizations secure their container environments by scanning and analyzing container images. Oftentimes, our customers come to us to help them achieve certain compliance requirements they have, and we can often point them to our policy engine. Anchore policies are user-defined checks that are evaluated against an analyzed image. A best practice for implementing these checks is through a step in CI/CD. By adding an Anchore image scanning step in a CI tool like Jenkins or Gitlab, development teams can create an added layer of governance to their build pipeline.
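As one hedged example, a GitLab CI job for such a governance step might look like the snippet below. The anchorectl subcommands and flags shown are assumptions based on its documented usage; verify them against your installed anchorectl version and Anchore Enterprise deployment:

```yaml
# Hypothetical GitLab CI job; subcommands and flags are assumptions to
# verify against your anchorectl version.
anchore-scan:
  stage: test
  variables:
    IMAGE: "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
  script:
    # Submit the image to Anchore Enterprise and wait for analysis.
    - anchorectl image add --wait "$IMAGE"
    # Evaluate the active policy; a failed evaluation fails this job.
    - anchorectl image check --detail --fail-based-on-results "$IMAGE"
```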

Complete Approach to Image Scanning

Vulnerability scanning

Adding image scanning against a list of CVEs to a build pipeline allows developers to be proactive about security, as they get a near-immediate feedback loop on potentially vulnerable images. Anchore image scanning will identify any known vulnerabilities in the container images, enforcing a shift-left paradigm in the development lifecycle. Once vulnerabilities have been identified, reports can be generated listing information about the CVEs and vulnerable packages within the images. In addition, Anchore can be configured to send webhooks to specified endpoints when new CVEs are published that impact an image that has been previously scanned. At Anchore, we’ve seen integrations with Slack or JIRA to alert teams or file tickets automatically when vulnerabilities are discovered.

Adding governance

Once an image has been analyzed and its content has been discovered, categorized, and processed, the resulting data can be evaluated against a user-defined set of rules to give a final pass or fail recommendation for the image. It is typically at this stage that security and DevOps teams want to add a layer of control to the images being scanned in order to make decisions on which images should be promoted into production environments.

Anchore policy bundles (structured as JSON documents) are the unit of policy definition and evaluation. A user may create multiple policy bundles; however, only one can be marked as ‘active’ for evaluation. A bundle is made up of a set of rules used to evaluate an image. These rules can define checks against an image for things such as:

  • Security vulnerabilities
  • Package whitelists and blacklists
  • Configuration file contents
  • Presence of credentials in an image
  • Image manifest changes
  • Exposed ports

Anchore policies return a pass or fail decision result.
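An abbreviated example of what a bundle can look like is shown below. The field, gate, and trigger names follow Anchore's documented bundle format as best we can represent it here; consult the policy documentation for the authoritative schema:

```json
{
  "id": "example-bundle",
  "version": "1_0",
  "name": "Example policy bundle",
  "policies": [
    {
      "id": "default-policy",
      "name": "Default policy",
      "version": "1_0",
      "rules": [
        {
          "id": "rule-1",
          "gate": "dockerfile",
          "trigger": "effective_user",
          "action": "STOP",
          "params": [
            {"name": "users", "value": "root"},
            {"name": "type", "value": "blacklist"}
          ]
        },
        {
          "id": "rule-2",
          "gate": "vulnerabilities",
          "trigger": "package",
          "action": "STOP",
          "params": [
            {"name": "severity", "value": "high"},
            {"name": "severity_comparison", "value": ">="}
          ]
        }
      ]
    }
  ],
  "mappings": [],
  "whitelists": []
}
```

Here the first rule stops images whose effective user is root, and the second stops images with high-severity (or worse) vulnerabilities.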

Putting it Together with Compliance

Given the variance of compliance needs across different enterprises, a flexible and robust policy engine becomes a necessity for organizations needing to adhere to one or many sets of standards. In addition, managing and securing container images in CI/CD environments can be challenging without the proper workflow. With Anchore, however, development and security teams can harden their container security posture by adding an image scanning step to their CI, reporting back on CVEs, and fine-tuning policies to meet compliance requirements. With compliance checks in place, only container images that meet the standards laid out by a particular agency or industry will be allowed to make their way into production-ready environments.

Conclusion

Taking a policy-based compliance approach is a multi-team effort. Developers, testers, and security engineers should be in constant collaboration on policy creation, CI workflows, and notifications and alerts. With all of these aspects in check, compliance can simply become part of application testing and overall quality and product development. Most importantly, it allows organizations to create and ship products with a much higher level of confidence, knowing that the appropriate methods and tooling are in place to meet industry-specific compliance requirements.

Interested to see how the preeminent DoD Software Factory Platform used a policy-based approach to software supply chain security in order to achieve a cATO and allow any DoD programs that built on their platform to do the same? Read our case study or watch our on-demand webinar with Major Camdon Cady.