Omer Dahan – Mend https://www.mend.io

How to Manage Risk Effectively in Cloud-Native Environments
https://www.mend.io/blog/how-to-manage-risk-effectively-in-cloud-native-environments/
Wed, 25 Jan 2023

We’ve all got our heads in the cloud, or if not yet, we’re well on our way there.

In other words, the process of digital transformation is happening at such a pace that almost all organizations will soon be working in the cloud and using cloud-native technology. Analyst Gartner has predicted that by 2025, over 95% of new digital workloads will be deployed on cloud-native platforms. This represents a 30% growth from 2021.

Any shift in working practices necessitates a change in risk management to handle the new environment in which people work. This is particularly important when moving to a cloud-native environment. That’s because the cloud comes with its own risks and security considerations that should be met, and the shift to cloud-native development directly impacts many aspects of your organization’s security procedures. Consequently, we need to completely rethink our approach to security in cloud-native environments. In this blog, let’s take a look at why these environments are vulnerable, what two of the main risks are, and what you can do about them.

Why are cloud-native environments vulnerable?

Cloud-native environments are accessible to many different users, which is both their strength and their weakness. Subject to the complexity of the environment, the security measures in place, and any compliance requirements, a developer can set up a cloud environment simply by going to a cloud services provider like Amazon, registering an account, and spinning up an environment. Then they can start to work, with the permissions set by the cloud service provider and within the parameters of the organization’s policies. This is great for collaboration, but it also creates a very open environment in which malicious actors can find or create vulnerabilities to exploit, or insert damaging code.

Moreover, the amount of development, and the speed at which it can happen in the cloud, is a benefit that simultaneously poses a challenge. The more software and applications in the cloud, and the more data, products, and services generated and housed there, the greater the potential for vulnerabilities, and for those vulnerabilities to be exploited. New code, new applications, new technologies, and new environments create new possibilities for attack. Alongside the velocity of development, they require constant vigilance and an agile mindset and methodology to stay ahead of flaws and malicious activity. Implementing proper security measures and regular vulnerability assessments can mitigate those risks.

Risk 1. The visibility challenge

Perhaps the biggest risk of cloud-native environments is a lack of visibility. By their very nature, these environments are huge and can seem to be amorphous. The volume of code, software, components, and dependencies in any cloud environment is far bigger than on-premises. Plus, cloud environments are distributed systems with cloud-native applications based on microservices. Components, particularly from open source software, come from a large variety of sources and can be linked with or dependent upon others in an increasingly complex network of relationships, both within and across different cloud environments. As a result, the attack surface of cloud-native environments continually expands, leading to the emergence of new attack vectors and vulnerabilities that threat actors can seek to exploit.

Furthermore, the cloud introduces a shared responsibility model that requires its multiple users to be accountable for security and for taking sensible steps to avoid any breaches. However, as it’s a more “open” platform in this respect, it lays itself open to more errors or lapses of judgment. Arguably, the security of any given cloud environment is only as good as its weakest parts and users. This can be tricky to police, especially as there’s so much to try and keep an eye on. It’s no surprise, therefore, that without taking the right approach to security, it’s easy for some anomalies, vulnerabilities, and threats to get overlooked. And so, the first challenge is to gain accurate visibility into your cumulative assets.

Risk 2. Security posture and the issue of containers

The second challenge is to establish an accurate and comprehensive security posture for your cloud-native environment. As we’ve seen, there are a lot of different risk factors that can impact this environment, so it’s not always possible to scan every open source component. Even if you can detect them all, there is more to address: different language-specific files that you should have the capability to scan, hidden malware, and API keys that developers forget to remove. You also need accurate visibility into your container images.

In fact, when it comes to container security, vulnerability management takes a huge turn. Container images have a lot of different layers and components, and it’s important to assess all of them based on the accurate runtime environment. That’s because an open source component that’s running may have a very high severity score, objectively, but when it’s put into the cloud-native environment, running on top of a different base layer, the context changes, and its impact is lowered. Failure to take this into account can lead to false positives that obfuscate visibility and cause security inefficiencies or errors.

As a result, when organizations move their workloads to the cloud, they must re-evaluate their application deployment process. Your security posture — the way your security operates, the tools you use, and the way they work — must consider such a variety of components to scan and the changes they can undergo within the context in which you use them.  Only with this context-based vulnerability management approach can you gain a clear and accurate assessment of actual risk, and an accurate risk prioritization mechanism.

How to achieve a risk prioritization mechanism

To do this successfully, you need to tackle three main challenges. The first is to gain accurate visibility into your assets. The second is to gain accurate visibility into your container images across the different layers and map all the different components.

The third is to assess each component based on the contextual environment in which it’s running. This gives you a robust combination of the right visibility, a map of all the different components, and an assessment of the accurate risk posture based on actual usage and the vendor’s security advisories. Consequently, a vulnerability that is, or seems, severe may be less so in a different context and may prove to be a false positive in a particular use case. You can then prioritize your activity to address and remediate genuine threats.

Context-based vulnerability management in action

For instance, while a Common Vulnerability Scoring System (CVSS) base score addresses the theoretical exploitability of a CVE (Common Vulnerabilities and Exposures entry) and classifies 60% of vulnerabilities as having high or critical severity, only 2.5% of vulnerabilities have actual exploits in the wild that are available for attackers to use. Unfortunately, relying purely on a public database like the NVD (National Vulnerability Database) isn’t very accurate. There’s often a four to five-week delay before the NVD is updated, which can cause many false positives to arise. So CVEs that appear more severe than they really are, or that have been fixed elsewhere, can still be identified as threats, and time and resources are wasted on mitigating them when the risk they pose is small to none. To minimize false positives in cases where the NVD’s report of a CVE’s severity conflicts with the vendor’s report, the vendor score should take precedence. In many cases, the vendor score will be lower because the context in which the vendor used the component reduced the risk of the CVE through means other than fixing the vulnerable code. Put simply, the right context can help eliminate false positives, though finding it requires reconciling multiple sources and some research.
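The vendor-over-NVD precedence described above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical field names and example scores, not an integration with any real scanner or database.

```python
def effective_severity(nvd_score, vendor_score=None):
    """Prefer the vendor advisory score when one exists, since the
    vendor knows the context in which the component is actually used;
    otherwise fall back to the NVD base score."""
    return vendor_score if vendor_score is not None else nvd_score

def prioritize(findings):
    """Sort findings by effective severity, highest first, so genuine
    threats are remediated before likely false positives."""
    return sorted(
        findings,
        key=lambda f: effective_severity(f["nvd"], f.get("vendor")),
        reverse=True,
    )

findings = [
    # Vendor context lowers the risk of this "critical" CVE.
    {"cve": "CVE-2023-0001", "nvd": 9.8, "vendor": 3.1},
    # No vendor advisory yet, so the NVD score stands.
    {"cve": "CVE-2023-0002", "nvd": 7.5},
]
print([f["cve"] for f in prioritize(findings)])
# → ['CVE-2023-0002', 'CVE-2023-0001']
```

Note how the CVE with the lower NVD score ends up first in the queue once the vendor’s context is taken into account.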

As an example, in a standard software development lifecycle (SDLC), the developer may use an open source software package that is assigned an NVD rating, based on the potential security exposures of a specific vulnerability. But when the developer builds their container image and moves to deploy to the production environment, they use a Universal Base Image (UBI) from Red Hat, as defined by their security admin. Consequently, the CVE score changes at each step of the application lifecycle, from high to negligible. The main reason is that each open source vendor has its own security advisory, addresses issues itself, and aims to provide the most secure open source base image. This gives us a new scoring mechanism based on the vendor’s own analysis of the vulnerability’s impact on that image in the context in which it’s used.

Using such vendor-supplied ratings that factor in the usage context reduces the number of false alarms and false positives you need to mitigate. This increases the visibility of vulnerabilities and reduces your risk exposure, which are the keys to minimizing the attack surface and managing risk.

Best practices for cloud-native security

So, generally, what are the best tactics and strategies for mitigating risks in cloud-native environments?

The key is to be proactive rather than reactive. Don’t wait for problems and crises; take preventative measures as early as possible in the SDLC. Detect potential threats in your pipeline before they reach storage. Shift security as far left as possible, because if you catch issues all the way left, in the repository at the code level, they never make it to the cloud.

Also, when you’re scanning, get the full picture, or as much of it as you can. The first place to implement security controls is the CI pipeline, which is as far left as you can go. Introduce as many security gates as possible: one when you’re building software and applications in the pipeline, and another directly in your image registry, because not all images are created from scratch. Some are imported from open source communities, and developers may use images that aren’t allowed or import images directly into the registry.

Another step is right before you deploy. Scan all the artifacts produced during the creation of the container image. Scan the container and all the objects therein – anything originating from sources, code, and the build on the CI/CD system.
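As a rough sketch of such a pre-deployment gate, the snippet below blocks deployment when a scan report contains findings at or above a severity threshold. The report structure, field names, and severity labels are hypothetical assumptions; adapt them to whatever format your scanner actually emits.

```python
import sys

# Map severity labels to a comparable rank (assumed labels).
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(report: dict, threshold: str = "high") -> bool:
    """Return True if the image may be deployed, i.e. no finding
    meets or exceeds the severity threshold."""
    limit = SEVERITY_RANK[threshold]
    blockers = [
        f for f in report.get("findings", [])
        if SEVERITY_RANK.get(f["severity"], 0) >= limit
    ]
    for f in blockers:
        # Surface each blocking finding in the CI log.
        print(f"BLOCKED by {f['id']} ({f['severity']})", file=sys.stderr)
    return not blockers

# Example report, shaped the way a CI scanner might emit it.
report = {"findings": [{"id": "CVE-2023-0001", "severity": "critical"}]}
print("deploy" if gate(report) else "block")  # → block
```

In a real pipeline the report would be read from the scanner’s output file, and a failed gate would return a nonzero exit code to stop the deployment stage.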

Furthermore, assess the security posture of each container you deploy on your cluster – from vulnerability management to static analysis, and also the runtime configuration. Check not only what’s running, but also how to run it effectively and securely.

With all these considerations in mind, and with the measures in place to address them, you lay the foundations of a sound and robust security strategy for your cloud-native environment.

What Threatens Kubernetes Security and What Can You Do About It?
https://www.mend.io/blog/what-threatens-kubernetes-security-and-what-can-you-do-about-it/
Wed, 11 Jan 2023

Kubernetes adoption has rocketed into ubiquity. At this point, 96 percent of organizations are either using Kubernetes for container orchestration or evaluating its use, according to the latest Cloud Native Computing Foundation annual survey. But that doesn’t mean Kubernetes is without security risks. The flexibility of container applications and their capability to carry discrete components that interact over the network present security challenges. In particular, these include:

  • Compromise of container images or containers as a whole with vulnerabilities, configuration defects, and embedded malware
  • Misuse of a container to attack other containers, the host operating system, or other hosts
  • Registry risks, such as insecure connections to registries, stale images in registries, and inadequate authentication and authorization restrictions
  • Orchestrator risks arising from unbounded or unauthorized access, poorly separated inter-container network traffic, mixing of workload sensitivity levels, and orchestrator node trust
  • Vulnerabilities within the runtime software, insecure container runtime configurations, application vulnerabilities, and rogue containers
  • Host operating system risks, such as a large attack surface, shared kernel, operating system component vulnerabilities, improper access, and host OS file system damage

How do malicious actors attack Kubernetes?

Organizations that deploy Kubernetes might leave open ports exposed to the outside world. Knowing that, threat actors use tools (such as Metasploit) and simple how-to guides to scan all open ports for vulnerabilities, and then gain access.

Depending on how a deployed Kubernetes cluster is configured, malicious actors can leverage specific vulnerabilities or misconfigurations. They can find a cluster that is exposed in the wild, exploit its misconfiguration to execute an attack, and from that point on access the configuration of the cluster itself.

There are various ways that an attacker could attempt to compromise your Kubernetes cluster and the applications running on it.


Securing Kubernetes environments

There are three general areas of focus to start with: 

  • Align cluster configurations with security best practices, such as the U.S. Center for Internet Security (CIS) or National Security Agency (NSA) guidelines and standards.  To do this, start by assessing the security posture of the deployment file for your cluster (infrastructure as code file). 
  • Ensure that the applications running on your cluster contain no known vulnerabilities and enforce runtime security policies, making it harder for an attacker to abuse those containers. It’s important to gain control and visibility over the security posture of applications running in your environment and assess the risk that hides in both image contents and configuration.
  • Enforce credentials protection.
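The first point above can be sketched programmatically. The snippet below walks a pod spec (already parsed from its YAML deployment file into a dict) and flags a few fields that CIS-style hardening guidance commonly warns about. These checks are a small illustrative subset, not a full benchmark.

```python
def check_pod_spec(pod_spec: dict) -> list[str]:
    """Flag a few common Kubernetes misconfigurations in a pod spec."""
    issues = []
    for c in pod_spec.get("containers", []):
        sec = c.get("securityContext", {})
        if sec.get("privileged"):
            issues.append(f"{c['name']}: privileged container")
        if not sec.get("runAsNonRoot"):
            issues.append(f"{c['name']}: may run as root (runAsNonRoot not set)")
        # Kubernetes permits privilege escalation unless explicitly disabled.
        if sec.get("allowPrivilegeEscalation", True):
            issues.append(f"{c['name']}: privilege escalation allowed")
    return issues

spec = {
    "containers": [{
        "name": "web",
        "image": "nginx:1.25",
        "securityContext": {"privileged": True},
    }]
}
for issue in check_pod_spec(spec):
    print(issue)
```

The same idea scales up: run checks like these against every infrastructure-as-code file in the repository, before anything reaches a live cluster.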

We have found that a defense-in-depth approach is the best way to introduce the necessary security barriers. This technique for securing cloud-native systems is called “The 4C Security Model” because it is divided into four layers: cloud, cluster, container, and code. Addressing all four layers ensures comprehensive security coverage from development to deployment.

It’s also imperative to address the overly permissive defaults associated with Kubernetes objects (roles and subjects, such as service accounts). You need visibility into your role-based access control posture within your Kubernetes clusters and a clear view of your Kubernetes environment, highlighting the most pressing issues that require immediate attention.

Finally, don’t forget automation. Open source software tools such as Kubescape help DevOps teams automate risk management and compliance with policy-driven automation that replaces onerous manual processes and prioritizes remediation in your environment.

Risks Faced by Cloud-Native Apps and How to Assess Them
https://www.mend.io/blog/risks-faced-by-cloud-native-apps-and-how-to-assess-them/
Tue, 06 Dec 2022

Modern organizations are adopting a cloud-native approach to their application development. While this approach provides many benefits, it also confronts organizations with several challenges, including the need to secure the application with a completely different approach. In this blog, we will discuss how software is changing and how organizations should think about securing it.

How do cloud-native apps differ from traditional apps?

Gartner estimates 90 percent of global organizations will be running containerized applications in production in the coming five years. This implies a significant shift in how organizations build software, from traditional applications to containerized ones on the cloud.

Applications that used to be one main monolithic application running on physical servers have become, with the rise of the cloud-native approach, a large set of small and independent code logic units called microservices that are assembled into an application and deployed on the cloud via several layers such as containers and clusters. Those services are implemented either in-house or via third parties such as open source projects and external providers.

One of the main driving forces of this transition is the need to ship code faster, which has resulted in the rise of the agile software development process. To support an agile methodology, many DevOps practices, technologies, and tools have been adopted in recent years to underpin the new way of designing and building cloud-based applications. Moreover, developers are responsible for large parts of the software development life cycle (SDLC), which makes them responsible for the security aspects within it as well.

The agile development process has also brought challenges from a software security perspective. Breaking software into microservices deployed on cloud infrastructure has made traditional analysis tools, such as application security testing (AST), less comprehensive in the face of the multi-layered nature of cloud-based applications. It has also created knowledge gaps between the development life-cycle personas (developers, DevOps, and SecOps) in the transitions between development stages: from code to build, to test, to deployment, and to runtime.

The challenges of cloud-native applications

The transition to cloud-based applications brings several benefits to the way software is built. It allows developers to ship code faster and more efficiently by using third-party and open source software, and manage their code better with the benefits of Git. It allows organizations to benefit business-wise thanks to improved scalability, flexibility, and resiliency; to deliver products faster to market; and to lower operational costs.

Along with those benefits, cloud-based applications also raise some challenges. These applications are built on many layers and their development process is spread over multiple stages, which makes it difficult to manage and govern the development workflow. There is also a massive number of tools and technologies that function as enablers for cloud-native application development.

The unique value proposition of cloud-native architectures also brings many layers of complexity, such as the source code, third-party code, dependencies, software artifacts from the build process, and Infrastructure as Code, which deploys live infrastructure and provisions cloud resources such as clusters. Therefore, one of the main challenges for cloud-native applications is security.

A security approach should also address the multiple workflows that exist in the SDLC, such as the integrated development environment (IDE), testing flows, deployment flows, configuration management, and cloud resources, by integrating directly with native environments such as IDEs, Git repositories, CI/CD flows and image registries.

In order to overcome the security challenge, an organization should adopt multi-layered tools and security approaches to efficiently manage its security posture.

Assessing the risk

To assess the risk of cloud-based applications, organizations need to adopt a holistic approach that provides the following:

  • Visibility into the entire SDLC by detecting and presenting the application’s security posture. This allows both developers and security admins to own the security responsibility.
  • A context-based risk approach that enables security admins to handle the security posture more efficiently, and prioritize what risks should be handled. 
  • A governance framework that enables security admins to manage and automate the security process.
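One way to picture the context-based risk approach from the second bullet is to discount each finding’s base severity when no exploit exists in the wild, or when the vulnerable component is never reachable at runtime. The weights and field names below are illustrative assumptions, not a standard formula.

```python
def risk_score(finding: dict) -> float:
    """Contextual risk: the base CVSS score, discounted when there is
    no known exploit or the vulnerable code never executes at runtime."""
    score = finding["cvss"]
    if not finding.get("exploit_in_wild", False):
        score *= 0.5   # theoretical exposure only
    if not finding.get("reachable_at_runtime", True):
        score *= 0.2   # vulnerable code path never loads
    return round(score, 2)

findings = [
    {"id": "CVE-A", "cvss": 9.8, "exploit_in_wild": False, "reachable_at_runtime": False},
    {"id": "CVE-B", "cvss": 6.5, "exploit_in_wild": True,  "reachable_at_runtime": True},
]
ranked = sorted(findings, key=risk_score, reverse=True)
print([f["id"] for f in ranked])  # the exploitable medium outranks the unreachable critical
```

With this kind of weighting, a nominally critical but unreachable CVE drops to the bottom of the queue, which is exactly the prioritization security admins need.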

Furthermore, assessing the risk is just the first step. While efficient assessment helps a lot in reducing the massive load of security threats, it is mandatory to provide a framework, with workflow automation, to help security admins remediate and mitigate security breaches.
