Supply Chain Security – Mend
https://www.mend.io

The @Solana/web3.js Incident: Another Wake-Up Call for Supply Chain Security
https://www.mend.io/blog/the-solana-web3-js-incident-another-wake-up-call-for-supply-chain-security/
Thu, 05 Dec 2024 15:17:03 +0000

On December 2, 2024, the Solana community faced a significant security incident involving the @solana/web3.js npm package, a critical library for developers building on the Solana blockchain with over 450K weekly downloads. This blog post breaks down the attack flow, explores how it happened, and discusses the importance of supply chain security.

What happened?

The incident involved versions 1.95.6 and 1.95.7 of the @solana/web3.js library, which were compromised through what appears to have been a phishing attack targeting the credentials used to publish the npm package. Here is how it worked:

  • Attackers introduced a backdoor into the library by adding a function called addToQueue. This function was designed to capture and exfiltrate private keys used for signing transactions and accessing wallets. The attacker used headers resembling Cloudflare’s to make the exfiltration traffic look less suspicious in network logs.
  • The malicious code was inserted into functions responsible for handling cryptographic operations, such as Keypair.fromSecretKey and Keypair.fromSeed, effectively hijacking these operations to steal keys.
  • The compromised versions were available on npm for approximately five hours, potentially affecting any application that updated or installed these versions during this window.
Figure 1. The addToQueue backdoor introduced in version 1.95.6
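To make the hijacking pattern concrete, here is a minimal, hypothetical Python sketch: a wrapper captures the secret before delegating to the legitimate function, so callers notice nothing. All names here are illustrative stand-ins; this is not the actual malicious code, which was JavaScript inside the npm package.

```python
captured = []  # stands in for the attacker's exfiltration queue

def _legit_from_secret_key(secret_key: bytes) -> str:
    """Stand-in for the legitimate key-derivation routine."""
    return "keypair-for-" + secret_key.hex()

def _add_to_queue(secret_key: bytes) -> None:
    """Stand-in for the backdoor's addToQueue exfiltration step."""
    captured.append(secret_key)

def from_secret_key(secret_key: bytes) -> str:
    """What callers invoke after the package is trojaned: the secret
    is copied out first, then normal behavior is preserved."""
    _add_to_queue(secret_key)
    return _legit_from_secret_key(secret_key)

keypair = from_secret_key(b"\x01\x02")
print(keypair)   # keypair-for-0102
print(captured)  # [b'\x01\x02']
```

Because the wrapped function still returns the expected result, the only observable difference is the extra outbound traffic, which is why the attacker disguised it with Cloudflare-like headers.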

The impact

  • According to Mert Mumtaz, CEO of Helius Labs, the damage from this attack is roughly $130K.
  • Fast detection and response by the Solana team were crucial in limiting the download window for the compromised versions to roughly five hours.
  • It’s important to note that there is no issue with the security of the Solana blockchain itself; the problem lies in the client-side JavaScript library.
Figure 2. Mert Mumtaz’s post

Remediation suggestions

  • Upgrade to the latest version – 1.95.8, where the malicious code was removed.
  • Ensure that all suspect authority keys are rotated, including multisigs, program authorities, server key pairs, etc.
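One way to check your exposure is to look for the compromised versions in your lockfile. A minimal sketch, assuming an npm v2/v3 package-lock.json layout (the helper name and example data are ours):

```python
import json

COMPROMISED = {"@solana/web3.js": {"1.95.6", "1.95.7"}}

def find_compromised(lockfile_text: str) -> list:
    """Return (package, version) pairs in an npm lockfile that match
    a known-compromised list."""
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # Package paths look like "node_modules/@solana/web3.js".
        name = path.split("node_modules/")[-1]
        if meta.get("version") in COMPROMISED.get(name, ()):
            hits.append((name, meta["version"]))
    return hits

example_lock = json.dumps({
    "packages": {
        "": {"name": "my-app"},
        "node_modules/@solana/web3.js": {"version": "1.95.7"},
    }
})
print(find_compromised(example_lock))  # [('@solana/web3.js', '1.95.7')]
```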

How Mend detected the attack

We have been tracking this issue as MSC-2024-17462 and MSC-2024-17463 since it started, so our customers using this library will get an alert on the two compromised versions.

Moreover, today, the Solana team issued a CVE to address this issue.

The importance of supply chain security

This is the third supply chain attack on a highly popular open-source library that we have encountered in the last six months, following the Lottie Player and Polyfill attacks. All of those incidents come on top of the unforgettable XZ incident at the start of the year, and of course the North Korean attacks on developers, together with the other so-called “regular” attacks we see daily on the main open-source registries.

As we approach the new year, it’s time to stop and think about our supply chain security. As far as I’m concerned, companies take vulnerabilities more seriously than malicious packages, despite the fact that having a malicious package in their code means they are immediately compromised. Now is the time to stop closing our eyes to supply chain incidents and to invest more resources in securing our supply chain and all our outsourced operations.

Conclusion

The @solana/web3.js incident reminds us of the complexities and risks associated with supply chain security. While the immediate financial impact was contained, the long-term lesson is clear: the supply chain security space requires constant vigilance from individual developers and the entire community.

More than 100K sites impacted by Polyfill supply chain attack
https://www.mend.io/blog/more-than-100k-sites-impacted-by-polyfill-supply-chain-attack/
Mon, 01 Jul 2024 17:30:43 +0000

Polyfill.js is a popular open-source project that provides modern functionality on older browsers that do not support it natively; users embed it via the cdn.polyfill.io domain. On February 24, 2024, a Chinese company named Funnull acquired both the domain and the GitHub account.

Following that acquisition, Andrew Betts, the original developer of the polyfill service, posted a warning on his X account urging all of the service’s users to remove any reference to Polyfill from their code. In his words, “I created the polyfill service project but I have never owned the domain and I have had no influence over its sale.”

Figure 1. https://x.com/triblondon/status/1761852350224846906

On June 25, 2024, Sansec.io researchers reported an active supply chain attack using cdn.polyfill.io. According to their report, the service’s code had been tampered with to inject malware that targets mobile devices through any website using the polyfill CDN. Since then, it has been confirmed that the same operator is responsible for similar attacks using additional CDNs, namely BootCDN, Bootcss, and Staticfile.

It’s important to clarify that this attack has no connection to the polyfill npm package. Unlike npm packages, which are downloaded and installed locally within a project’s dependencies, this malicious code is delivered dynamically from the CDN each time a page is loaded.

Malicious code

The malicious code dynamically generates payloads based on HTTP headers. This allows it to tailor its behavior to the victim’s environment, making the malware harder to detect and fix. The code targets specific mobile devices and is not executed when browsing on a computer.

The code employs evasion techniques, such as skipping execution when it detects admin users or the presence of any Google Analytics service. It also has a delay mechanism to reduce the likelihood of being caught by security scanning services.

The code is heavily obfuscated, which makes it harder for reverse engineers to understand all the functionality.

Figure 2. Obfuscated malicious code

In some cases, the attack introduces a fake Google Analytics script. Affected users receive altered JavaScript files that include a link to “hxxps[://]www[.]googie-anaiytics[.]com/gtags[.]js“—a deliberate misspelling of “Google Analytics.” This script redirects users to various malicious sites, such as sports betting and adult content platforms, seemingly based on geographic location.

While the reports are mainly focused on redirects to inappropriate links for now, it’s worth remembering that the attacker can change the malicious functionality at any point.

How Mend.io can help

We have been actively scanning and issuing MSC profiles to flag packages that embed references to all of the compromised CDNs. This helps our customers detect and remove any reference to those malicious domains.

We recommend immediately replacing any reference to the compromised CDN with a trusted alternative, such as those hosted by Fastly or Cloudflare.
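A quick sketch of such a replacement, rewriting references in page markup. The replacement endpoint below is illustrative only; substitute whichever trusted mirror you have verified:

```python
import re

# Illustrative replacement endpoint -- verify and substitute the
# trusted mirror you actually intend to use.
TRUSTED_CDN = "https://cdnjs.cloudflare.com/polyfill"

def replace_cdn_refs(html: str) -> str:
    """Rewrite references to the compromised polyfill.io host."""
    pattern = re.compile(r"https?://(?:cdn\.)?polyfill\.io")
    return pattern.sub(TRUSTED_CDN, html)

page = '<script src="https://cdn.polyfill.io/v3/polyfill.min.js"></script>'
print(replace_cdn_refs(page))
# <script src="https://cdnjs.cloudflare.com/polyfill/v3/polyfill.min.js"></script>
```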

This supply chain attack demonstrates the vulnerabilities associated with dynamically loaded CDN resources and highlights the importance of continuous monitoring and response to such threats.

Developers and organizations must prioritize the security of their applications and remain skeptical of the code they depend on. Staying proactive in maintaining the security of web resources is not just a best practice; it is essential to guarding against evolving supply chain attacks.

Threat Hunting 101: Five Common Threats to Look For
https://www.mend.io/blog/threat-hunting-101-five-common-threats-to-look-for/
Thu, 30 May 2024 06:30:00 +0000

The software supply chain is increasingly complex, giving threat actors more opportunities to find ways into your system, either via custom code or third-party code.

In this blog we’ll briefly go over five supply chain threats and where to find them. For a deeper look at finding these threats, with more specifics and tool suggestions, check out our threat hunting guide.

Installation scripts

What it is: 

After gaining access to the distribution network of software vendors or package managers, attackers can inject malicious code into the installation scripts of otherwise legitimate packages. 

Where to look: 

Installation scripts can show up on the developer machine, where they typically target personal and company data including credentials, or in the build process, where they will usually attempt to establish a backdoor and persistence.
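One cheap defensive check is to review a package’s declared lifecycle scripts before trusting it. A minimal sketch, using the install-time hooks npm documents (the function name and example manifest are ours):

```python
import json

# Lifecycle hooks that run automatically on `npm install`.
INSTALL_HOOKS = ("preinstall", "install", "postinstall", "prepare")

def install_scripts(package_json_text: str) -> dict:
    """Return any install-time scripts a package declares, so they
    can be reviewed before the package is trusted."""
    manifest = json.loads(package_json_text)
    scripts = manifest.get("scripts", {})
    return {k: v for k, v in scripts.items() if k in INSTALL_HOOKS}

manifest = json.dumps({
    "name": "example-pkg",
    "scripts": {"postinstall": "node setup.js", "test": "jest"},
})
print(install_scripts(manifest))  # {'postinstall': 'node setup.js'}
```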

Secrets Leak

What it is: 

Secrets include API keys, passwords, and other types of credentials and confidential information that should be kept away from bad actors. They can be easily left behind in the development process, and it’s worth the effort to find them before your adversaries do.

Where to look:

While secrets can be anywhere in your code, they are often found in configuration files. Looking for secrets manually is a tough job. Tools exist to search your entire code base for you.
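To illustrate what those tools automate, here is a toy scan with two illustrative rules; production scanners ship hundreds of vetted patterns plus entropy checks:

```python
import re

# A couple of illustrative patterns -- real scanners use many more.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Generic API key assignment": re.compile(
        r"(?i)\b(api[_-]?key|secret|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan_for_secrets(text: str) -> list:
    """Return (rule, line_number) pairs for lines matching a pattern."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for rule, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((rule, lineno))
    return findings

config = 'db_host = "localhost"\napi_key = "sk_live_abcdef123456"\n'
print(scan_for_secrets(config))  # [('Generic API key assignment', 2)]
```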

Malicious Artifacts

What it is:

Malicious artifacts are entered into the public registries that developers rely on for downloading libraries and applications in their programming language. Bad actors use techniques like typosquatting and brandjacking to make their malicious packages look like legitimate ones and trick unsuspecting developers into installing them. You can learn more about malicious packages here.

Where to look:

Malicious artifacts sit in public registries like npm, PyPI, and Maven. A good SCA tool that detects malicious packages (we like Mend SCA, of course) can stop developers from adding these packages in the first place or find the packages that have already slipped through the cracks.
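Typosquats can often be caught mechanically: a package name within a small edit distance of a popular name, but not equal to it, deserves scrutiny. A minimal sketch (the popular-package list is illustrative):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

POPULAR = ["requests", "lodash", "express", "numpy"]

def possible_typosquats(name: str, max_distance: int = 2) -> list:
    """Flag names suspiciously close to, but not equal to, popular ones."""
    return [p for p in POPULAR if 0 < edit_distance(name, p) <= max_distance]

print(possible_typosquats("reqeusts"))  # ['requests']
```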

Repojacking

What it is: 

Through rebranding and acquisitions, it is common for repository names to change. When that happens, threat actors can create new repositories with the old names and their malicious code. Now any project that dynamically links to the original repository is at risk.

Where to look:

Repojacking occurs on code-hosting platforms like GitHub, Bitbucket, or GitLab. Hunting these threats must be done by repository owners who should audit any changed or deleted names for new activity.

Account Takeover

What it is:

Through phishing attacks, weak passwords, stolen credentials, and social engineering, attackers can gain access to the accounts of repository owners and inject malicious code into widely used projects.

Where to look:

Monitor your account for irregular login activity and monitor your repository for suspicious patterns. Look at your security settings for your code hosting platform accounts (like GitHub, Bitbucket, or GitLab) as well as your email or any other services that are connected to make sure they are fortified.

If you’re not a repo owner but are still in the line of fire because your project requires open source packages that are at risk of repojacking or account takeover, you can protect yourself. Regularly scan your project with an SCA tool and educate your developers on best practices in open source code security.

For a deeper look into finding these threats, check out our threat hunting guide.

Critical Backdoor Found in XZ Utils (CVE-2024-3094) Enables SSH Compromise
https://www.mend.io/blog/critical-backdoor-found-xz-utils-cve-2024-3094/
Sun, 31 Mar 2024 11:04:37 +0000

On March 29th, 2024, a critical CVE was issued for the XZ-Utils library. This vulnerability allows an attacker to run arbitrary code remotely on affected systems. Due to its immediate impact and wide scope, the vulnerability scored 10 on both CVSS 3.1 and CVSS 4.0, the highest score possible.

In this article, you’ll learn how this backdoor infiltrated the software, the potential consequences of an attack, and how to identify and patch vulnerable systems.

This article is part of a series of articles about malicious packages.

What is XZ-Utils?

XZ-Utils is a collection of tools for the XZ compression format, which offers high compression ratios and fast decompression. The XZ format uses the LZMA2 compression algorithm, an improved version of LZMA that provides better compression and more flexibility in the compression process. The XZ-Utils package typically includes several command-line tools for compressing and decompressing files; one of the most commonly used is xz itself.

What happened

On Friday, March 29th, a developer named Andres Freund issued a security advisory regarding a backdoor he found in the upstream XZ/liblzma 5.6.0 and 5.6.1 versions that led to SSH server compromise. After noticing odd symptoms around liblzma, such as SSH logins consuming too much CPU, and debugging with Valgrind, he figured out that the XZ tarballs had been backdoored.

Backdoor mechanism

The backdoor was not directly inserted into the source code of liblzma that is visible in version control systems or utilized by XZ directly. Instead, it was hidden within binary test files in the XZ compressed format. These files appeared benign and were theoretically part of the library’s test suite.

The attackers used a sophisticated method that split the backdoor into parts, which were then concealed within two XZ compressed files. These files were disguised as ordinary test files, evading detection from casual inspection or automated tools that scan for malicious patterns.

Those crafted test files can be found here:

tests/files/bad-3-corrupt_lzma2.xz (cf44e4b7f5dfdbf8c78aef377c10f71e274f63c0, 74b138d2a6529f2c07729d7c77b1725a8e8b16f1)

tests/files/good-large_compressed.lzma (cf44e4b7f5dfdbf8c78aef377c10f71e274f63c0, 74b138d2a6529f2c07729d7c77b1725a8e8b16f1)

Figure 1. Different versions of the build-to-host.m4 file in the released tarballs

Execution flow

Upon decompression and execution, these files collaboratively manipulated the build process of liblzma. The process involved extracting and executing obfuscated script code, leading to the injection of malicious code into the build output.

This manipulation effectively appended malicious data to the build process without raising suspicion, as it mimicked legitimate testing adjustments.

The deobfuscated code can be found here

Malicious code injection

The attackers introduced a new object related to the CRC64 algorithm, claiming it was an improvement. This object was, in fact, a trojan that, when included in the build process, embedded the final backdoor into the liblzma library.

The essence of the backdoor was to intercept function calls related to CRC32 and CRC64 resolution and replace them with malicious variants that could execute arbitrary code under certain conditions, likely tied to specific crafted inputs.

Linker manipulation and RSA decryption hook

The backdoor installed an “audit hook” into the dynamic linker of Linux, a critical component that resolves library dependencies at runtime. By hooking into this mechanism, the backdoor could alter the behavior of the linker to intercept and modify the resolution of symbols, particularly those involved in RSA public key decryption.

This manipulation meant that during SSH key authentication, the backdoor could substitute the legitimate RSA decryption function with its own, allowing for unauthorized access if the input matched a certain condition likely known only to the attacker.

Stealth and implications

The backdoor’s sophistication lay in its ability to hide within the normal build process and manipulate low-level system components undetected. Targeting the dynamic linker and encryption routines posed a severe threat to system security, potentially allowing attackers to bypass authentication mechanisms.

How to look for the XZ package with Mend Container

With the Mend Container solution, you can scan individual images or integrate your container registry to scan your entire registry thoroughly. Additionally, you can leverage our in-house container reachability analysis to check whether the vulnerability is reachable. Our updated scanner provides the most current data on this vulnerability.

Vulnerable Linux distros and their fix versions

Distro        | Distro Versions        | Is Affected?                               | Fixed Versions      | References
Alpine        | edge                   | Affected                                   | 5.6.0-r2, 5.6.1-r2  | alpinelinux.org
Debian        | unstable (sid, trixie) | Affected (Debian stable is not affected)   | 5.6.1+really5.4.5-1 | security-tracker.debian.org
Ubuntu        | -                      | Not affected                               | -                   | ubuntu.com
RHEL          | -                      | Not affected                               | -                   | access.redhat.com
Fedora        | 40                     | Not affected                               | 5.4.6-3.eln136      | fedoraproject.org
Fedora        | 41                     | Affected                                   | -                   | fedoraproject.org
Fedora        | Rawhide                | Affected                                   | -                   | fedoraproject.org
Amazon Linux  | -                      | Not affected                               | -                   | aws.amazon.com
openSUSE      | Tumbleweed             | Affected                                   | 5.6.1.revertto5.4   | news.opensuse.org
Arch Linux    | -                      | Affected                                   | 5.6.1-2             | archlinux.org

False positive announcement

Due to a conflict between the Debian advisory and the official announcement, customers may see false positive alerts on versions older than 5.6.0. We recommend either upgrading to the matching fixed version listed above or downgrading to the latest uncompromised version, 5.4.6.

*April 1 update. It was confirmed that Fedora 40 is not affected by the backdoor. However, users should still downgrade to a 5.4 build to be safe.
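As a quick sanity check on whatever version your system reports (for example, from `xz --version`), the affected-range logic can be sketched as follows; the suffix handling is a simplifying assumption about distro version strings:

```python
BACKDOORED = {"5.6.0", "5.6.1"}
SAFE_DOWNGRADE = "5.4.6"  # latest uncompromised upstream release

def is_backdoored(version: str) -> bool:
    """True if the reported upstream version matches a compromised
    release. Distro builds may carry suffixes like '5.6.1-2', so we
    compare only the base part."""
    base = version.split("-")[0].split("+")[0]
    return base in BACKDOORED

for v in ("5.4.6", "5.6.0", "5.6.1-2"):
    print(v, is_backdoored(v))
# 5.4.6 False
# 5.6.0 True
# 5.6.1-2 True
```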

A landscape of malicious packages

The critical backdoor found in XZ Utils serves as a stark reminder of the evolving threats within the software supply chain. While this specific vulnerability involved a backdoored library, malicious actors employ many other techniques, such as license tampering and credential theft.

In conclusion, the XZ Utils backdoor vulnerability (CVE-2024-3094) underscores the critical need for vigilance in software supply chain security. By staying informed about these evolving threats, implementing robust security practices, and patching vulnerable systems promptly, organizations can significantly reduce their risk of falling victim to malicious attacks.

Top Tools for Automating SBOMs
https://www.mend.io/blog/top-tools-for-automating-sboms/
Thu, 11 Jan 2024 21:30:43 +0000

We’ve talked a lot about why software bills of materials (SBOMs) are important and how they communicate the value of your organization, so we won’t continue those lectures here. We’re all good on the why, so today we’ll talk about the how: the best (and free!) tools to help you create SBOMs automatically. Creating an SBOM manually is arduous and error-prone, so why not avoid it altogether?

If you haven’t thought about SBOMs in a minute, you may want a quick refresher on SBOM standards before reading on.

Creating an SBOM with your SCA tool

If you have one, your commercial software composition analysis (SCA) tool is a great resource for SBOM generation. This isn’t a free solution, per se, but if you’re already paying for an SCA, generating SBOMs doesn’t cost you anything extra.

If you’re using Mend SCA, you can generate an SPDX or CycloneDX SBOM in a variety of formats easily from the Reports menu of the application menu bar. Additionally, you can execute the SBOM Generator Tool via CLI or as a Docker container.

This short video shows how easy it is to generate an SBOM from the Mend UI.

If you don’t use an SCA (you should though…), your SCA doesn’t generate SBOMs, or you simply want to try another tool, here are some widely used free and open source tools. Choosing the right one for your project will depend a lot on your language and architecture. For the purposes of keeping this blog post clean and short, we’ll skip the step-by-step for setting up each tool, but we’ll provide links to helpful documentation.

SBOM tools for containers

1. Create container images and SBOMs in one go with Paketo Buildpacks and Pack CLI. You can generate SBOMs in Syft, SPDX, or CycloneDX standards in a JSON file. A full how-to can be found here.

2. A multifunctional tool that scans container images, filesystems, Kubernetes workloads, and more, Trivy can generate SBOMs in both SPDX and CycloneDX standards in JSON format.

SBOM tools for CI/CD

1. Mentioned above, Trivy is also great for continuous integration/continuous delivery (CI/CD) and integrates with a number of CI ecosystems, including GitHub Actions, Azure DevOps, and Semaphore.

2. A great tool for Java projects, the CycloneDX Maven plugin runs at the build stage of your CI/CD pipeline to create CycloneDX SBOMs in XML or JSON format. This plugin can create SBOMs for single modules or an aggregate SBOM that starts at build root. If you’re not a Maven expert, it can be a little difficult to set up using the developer-provided documentation. This Medium post gives a good step-by-step breakdown on how to do it.

3. Microsoft’s sbom-tool is a command line tool that creates SPDX SBOMs for a wide variety of artifacts and integrates with GitHub Actions and Azure DevOps.

General SBOM tools that support multiple languages

1. The Microsoft sbom-tool also works as a standalone tool. It uses Component Detection libraries so check there to see if your language is covered.

2. One of the most popular open source tools for SBOM generation, Syft supports a wide number of languages including Java, Ruby, Rust, Go, PHP, Python, C++ (Conan), and more. With this tool you can create SBOMs in CycloneDX, SPDX, and Syft’s own standard. 

3. The SPDX SBOM Generator has slightly more limited language coverage compared to Syft but covers a few package managers that Syft does not.
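Whichever generator you pick, the output is just structured data you can post-process. A minimal sketch that lists components from a CycloneDX JSON document (the document below is hand-written for illustration; real SBOMs carry many more fields, such as purls, hashes, and licenses):

```python
import json

def list_components(sbom_text: str) -> list:
    """Return (name, version) pairs from a CycloneDX JSON SBOM."""
    sbom = json.loads(sbom_text)
    return [(c.get("name"), c.get("version"))
            for c in sbom.get("components", [])]

# A minimal CycloneDX-style document for illustration.
sbom = json.dumps({
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "lodash", "version": "4.17.21"},
        {"type": "library", "name": "express", "version": "4.18.2"},
    ],
})
print(list_components(sbom))  # [('lodash', '4.17.21'), ('express', '4.18.2')]
```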

SBOM Tools For C/C++

Although they’re two of the most widely used languages, finding an open source SBOM generator for C and C++ can be tricky. Due to the lack of an official, or even dominant, package manager for C/C++, the work of scanning a project and recognizing dependencies is nontrivial, and therefore generally beyond the abilities of free software.

There are a few package managers for C/C++ out there, though, and developers who use Conan are in luck. Conan includes extensions to help you create an SBOM, and Syft and Trivy also support C/C++ SBOMs via Conan.

If you’re using a different package manager or none at all, sorry to say, but at this point in time there’s no great automated solution outside of commercial SCA products.

Going beyond generating SBOMs – other useful tools

The grass is always greener, eh? If you need to convert SPDX to CycloneDX (or vice versa), the organizations behind both standards have tools to help you do that. The CycloneDX CLI tool can be found here and an SPDX prototype conversion tool can be found here.

KubeClarity does not generate SBOMs on its own (although it does run Trivy and Syft on your behalf) but rather merges multiple SBOMs and performs multi-stage CI/CD SBOM analysis, overlaying analysis from different build stages for comprehensive insights. It can be installed locally, via Docker, or on a Kubernetes cluster-based system.

Parting words

The era of SBOMs has only just begun. More tools are sure to pop up and existing tools are sure to get better. At the moment, many tools, both commercial and free and open source, are likely to have some limitations. Some work great with one language and less great with others. Some struggle to show dependency trees and produce very flat SBOMs. Our advice to you is to try out as many tools as you can and compare the outputs.

Six More Top Tips For Holistic AppSec and Software Supply Chain Security
https://www.mend.io/blog/six-more-top-tips-for-holistic-appsec-and-software-supply-chain-security/
Tue, 12 Dec 2023 20:09:03 +0000

In my previous post, I began to list ways you can strengthen your security posture, with some holistic approaches to application security and the software supply chain. In this second part of the series, let’s look at six more important considerations.

7. Apply wisdom, especially with AI and LLMs

Education strengthens knowledge, but you also need to apply good judgment ― wisdom, if you will ― that arises from experience. This is particularly significant when using new AI and LLM technologies. Many of these tools give you a convincing, but not necessarily the best, answer, so you must evaluate the results. Blindly following the answer, no matter what the tool, will cause problems.

There are generalist LLMs like ChatGPT, and then there are more specific add-ons such as GitHub Copilot, which works in your development environment and has been trained on code. These more domain-specific LLMs are likely to provide more useful answers, especially when building on any of the data that you have. That’s where we’re going to see big value from LLMs.

8. Help leadership understand AppSec risk

All of this requires the buy-in of the C-suite and stakeholders. The success of your AppSec and software supply chain security program depends partly on whether decision-makers understand the risks they face at any given time. At this level, a lot of folks don’t understand the technology, so they don’t fully understand the risks or how to manage them. Those in the application security space need to help them improve their understanding of risk. What is inherent risk? What are controls? How do controls give you residual risk? What are the control options? How does time fit into this?

For instance, you sometimes might need to accept some risk in the short term to build an effective solution that is more supportable in the long term. Investing the time, effort, and budget in an automated, scalable solution will be more efficient over time, but perhaps more challenging in the short term.

9. Manage risk

You need a modicum of both knowledge and wisdom to manage risk and exceptions, particularly at scale. Decisions about which risks you will tolerate must rest on an understanding of how your software and its dependencies work, plus judgment as to which risks and exceptions are acceptable. That can mean the difference between a targeted, effective detection and remediation strategy and an inefficient process weighed down with false positives. Apply know-how and experience to decide how best to modulate your risk threshold using techniques and tools such as SCA, SAST, container scanning, and more. Otherwise, you risk blocking developers when it isn’t necessary. That will make a real mess, and your developers will be discouraged from actively supporting security efforts, which will detrimentally affect your AppSec program.

10. Maintain performance

Perhaps the main challenge for security is to deliver without impeding developers’ pipelines and hindering their productivity. Think of it like motor racing. A pit stop is a big deal; there needs to be a serious reason to interrupt performance to address a problem. Better to avoid this situation, where possible, by monitoring and tweaking performance while driving. While things are happening on the track, there are all kinds of telemetry decisions, communication analyses, and more, and that’s how we have to think about security. Wherever possible, it should take place during the workflow, whatever it might be. SBOM generation, static analysis, and software composition analysis all create telemetry for us. We just have to figure out the right way to harness this information, so we know what’s going on and when to take action.

11. The question of accountability

Then we have to think about who is authorized to make the right calls when it comes to security decisions. It’s often unclear who is accountable for the process, and practice is inconsistent across the industry right now. The complexity of relationships between the manifold components and dependencies within modern software lends itself to security teams having oversight, with developers implementing security as it shifts left.

The challenge is how to set up a framework so that developers understand and manage their own risk, as opposed to having it managed for them. It involves helping people easily identify, understand, and assess their risk, and providing them with information so they can make wise, educated judgments.

12. Make AppSec more preventive

Recently, we’ve seen attacks become more sophisticated as well as more numerous. For example, many more malicious packages are now being published to npm than before. Nevertheless, the challenge remains to ensure that your security processes are accurate, or they’ll get ignored; we don’t want to become the boy who cried “Wolf!” Most companies developing at scale need to maintain their speed of output while reinforcing their security. The balance is to practice responsible open source usage by applying pre-emptive and preventive AppSec tactics and tools earlier in the software development lifecycle (SDLC) by shifting left, and by iterating those procedures throughout the SDLC ― shifting smart.

Most importantly, make sure you have a plan. Keep your dependencies up to date. It’s like fixing up your house. You don’t wait until things start falling apart. Do preventative maintenance. Understand where you are in the value chain. Ensure that you develop solutions that are highly scalable and that fit the capabilities of your team. And don’t break development. Don’t break the build.

Six Top Tips For Holistic AppSec and Software Supply Chain Security
https://www.mend.io/blog/six-top-tips-for-holistic-appsec-and-software-supply-chain-security/
Fri, 08 Dec 2023 17:20:01 +0000

Developing applications and working within the software supply chain requires hard skills such as coding and proficiency in programming languages. However, protecting the software supply chain also requires some softer skills and an openness to strategies and tools that will strengthen your security posture. In this two-part series, we will discuss these considerations and how they support a holistic approach to application security and software supply chain security.

1. Walk a mile in developers’ shoes.

We’re increasingly applying security earlier in the software development lifecycle (SDLC), a shift-left approach that requires the buy-in and involvement of developers. However, developers are very sensitive to additional work, so approach this situation with empathy. You have to understand and anticipate the issues involved, and provide solutions.

Consider how you are going to integrate with development teams. What are their needs and concerns? Then frame your strategy around that and provide them with ways that make security easy for them to adopt. And seek solutions that have scalable outcomes. If you implement something that creates a huge mess for developers, it’s going to be difficult to develop and maintain momentum.

2. Where possible, automate.

Automating security measures is important because when processes are manual, you simply can’t scale to handle the volume of components and dependencies, or the speed of change in software development. For example, static application security testing (SAST) can be integrated into the development environment so that developers learn from any errors as they work. Don’t build a program that requires them to submit the application for scanning, get the results back, and only then learn. Integrate those lessons into the development loop as quickly as possible. This reduces friction for developers, which leads to a better application security program.

3. Build a good inventory.

It might sound obvious, but you can't fix things if you don't know you've got them. So, having a good software inventory is critical. Employ tools such as a software bill of materials (SBOM) and integrate them into the development environment. The detailed inventory of libraries you get from an SBOM gives you the visibility needed to assign responsibility and maintain traceability across your repos. There are always incremental improvements to make, but with that visibility in place you can know very quickly which repos are affected, and who's responsible for them, when the next zero day hits, as we saw with the likes of Spring4Shell.
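As a rough sketch of what that visibility buys you, the snippet below pulls a component inventory out of a CycloneDX-style SBOM document. The tiny inline JSON and its component names are illustrative, not taken from any real project; a real SBOM would come from your build tooling.

```python
import json

# A minimal CycloneDX-shaped document standing in for a real SBOM file.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "spring-core", "version": "5.3.17",
     "purl": "pkg:maven/org.springframework/spring-core@5.3.17"},
    {"type": "library", "name": "log4j-core", "version": "2.17.2",
     "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.2"}
  ]
}
"""

def inventory(sbom: dict) -> dict[str, str]:
    """Map each component name to its version for quick lookup."""
    return {c["name"]: c["version"] for c in sbom.get("components", [])}

libs = inventory(json.loads(sbom_json))
print(libs)
```

When the next zero day lands, a lookup like `"log4j-core" in libs` across every repo's SBOM is the difference between answering "are we affected?" in minutes versus days.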

4. Take responsibility.

You need inventory to have accountability, and you have to provide your teams with tools to take responsibility for effective management. SBOMs aren’t just gestures. They address a market inefficiency that governments are trying to correct. This inefficiency is that sometimes software vendors or service providers have inadvertently transferred the risk of ineffective maintenance over to their customers, and the customers are not sufficiently aware of that risk. SBOM and other standards, like configuration management database (CMDB), are attempts to fix this, and they address calls from governments such as the Biden administration to apply such standards via agencies such as the National Telecommunications and Information Administration (NTIA).

We know from experience that when customers issue requests for proposals (RFPs) and requests for information (RFIs), they ask for SBOMs. So I anticipate that, over time, more sophisticated consumers at the corporate level will ask for SBOMs, and I think the intent of the public policies being introduced is to correct this market inefficiency.

5. Don’t rest on your laurels. Keep updated.

Once tools like these are in place, stay alert and keep them updated. Teams that struggle to patch new vulnerabilities and threats also struggle with maintenance. That's where a tool like Mend Renovate helps enormously: it scans your software, discovers dependencies, automatically checks whether an updated version exists, and submits automated pull requests.
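The core check such a tool automates can be sketched as follows. This is a simplification under stated assumptions: versions are plain `major.minor.patch` strings and the package data is hypothetical, whereas a real updater like Renovate handles far messier version schemes, registries, and update policies.

```python
# Sketch of the basic comparison behind automated dependency updates:
# is the latest available release newer than the pinned version?

def parse(version: str) -> tuple[int, ...]:
    """Assumes plain 'major.minor.patch' version strings."""
    return tuple(int(part) for part in version.split("."))

def outdated(pinned: dict[str, str], latest: dict[str, str]) -> list[str]:
    """Names of dependencies whose latest release is newer than the pin."""
    return [name for name, ver in pinned.items()
            if name in latest and parse(latest[name]) > parse(ver)]

# Illustrative package data, not real registry state.
pinned = {"lodash": "4.17.20", "express": "4.18.2"}
latest = {"lodash": "4.17.21", "express": "4.18.2"}
print(outdated(pinned, latest))  # only lodash has a newer release
```

Everything downstream of this check -- opening the pull request, running CI, merging -- is what turns a one-off comparison into continuous maintenance.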

And be mindful that security can’t be a one-time thing. Whether it’s an SBOM, a code scan, or an architectural review, it’s got to be built into a flow because software’s changing very quickly. You can create an SBOM in the morning and quite quickly, it can become out of date because someone has checked something in, pushed it into test, and it might have even gone to production. That’s why automation is so important. It makes security seamless and eases the process for those who are tasked with it, like developers, so they can readily adopt and embrace the process. That circles back to my first point: be developer-focused, because they’re the folks you’re relying on to apply your security strategy.

6. Integrate education.

Developers need to know what they’re looking for and what to do when they find it. They therefore need to be educated and remain cognizant of changes and updates. It’s a process that should be ongoing, and as integrated as possible into their normal workflow. So, make sure you have processes that are integrated with IDEs and build pipelines that show teams flaws and errors. This is important because you can take the lessons learned from finding vulnerabilities and feed them back into the training program. Most important is helping people understand when errors occurred and why, so that security can constantly improve.

Equally, focus on your people. Set them up for long-term success by keeping them trained and educated. Give them a strong foundation in networking. Offer even non-technical folk with a stake in security some introduction to Linux and some Python. Then they will expand their skills, become more valuable to your processes, and grow with you. With increased knowledge comes increased understanding and more visibility into security issues. Establish a minimum standard and ensure that everybody is trained to that standard in both the security and DevOps spaces, and you’ll benefit from a lot of efficiencies in the long run.

Read the second part of this series for six more important considerations.

Turnover, Relationships, and Tools in Cybersecurity https://www.mend.io/blog/turnover-relationships-and-tools-in-cybersecurity/ Wed, 06 Dec 2023 17:23:10 +0000 https://mend.io/turnover-relationships-and-tools-in-cybersecurity/ Some things, like choosing tools, are perennial problems. Others, like complete security team turnover, seem to be a more recent development within my circles. But either way, staff turnover has ripple effects that are not always immediately apparent. Let’s take a look.

Turnover

I'm lucky: I get to talk about application security with hundreds of companies each year. And over the past year and a half, I've noticed that many have had a complete turnover in security staff.

What does this mean? Tools that were purchased by previous teams are either going unused or running on auto-pilot. Alerts and notifications are going to email addresses that are no longer monitored. Some companies don't even realize which tools they own.

My advice is to take a moment and inventory what you have. Talk with your purchasing department and get a list of tools that were bought by the previous team. Call each vendor and talk with them. Ask for a demo of each tool you own and how to use it. Maybe the tool isn't needed, or maybe it's the lifesaver you've been shopping around for and you already have it.

(Re)building strong relationships

A complete security staff turnover not only affects technology usage. All those relationships that the security team had with development disappeared as personnel walked out the door, and the new team must rebuild them. Here’s what I suggest: 

Spend time with development. Leave your desk and take a trip over to where the development team sits. If remote, then reach out to the team leads or architects. Discuss how things are going and rekindle the relationship that was lost when the previous team turned over.

At a previous employer, I would always take time to walk around the development floor and see how things were going. This gives you insight into new products and tools that teams are building. The information gained is invaluable because it keeps you in the loop so you can plan accordingly.

And about those tools…

So maybe you don’t already have the tool you need, and the search continues. Here’s something to consider as you do your due diligence and establish your needs: 

Would you hire a plumber to do a whole house inspection? Would you hire a general contractor to fix foundation issues in your home? Would you hire a teller to audit your 401(k) account? Finally, would you hire a foot doctor to do surgery on your back?

What do these examples have in common? The person being hired knows some of what needs to be done, but not everything. Why would you do the same for your application security program?

Make sure that you properly staff your programs with people who understand all aspects of security. Asking your cloud security engineer with no development background to handle SAST could lead to disastrous results and additional risk.

Knowledge is key in every aspect of your security program. Several vendors provide great tools; it's a matter of understanding what value they can bring you and how they can be integrated. I understand that a la carte security can be more expensive, but how much does a compromise cost?

Use your tools correctly and tune them for success. Clicking the "WAF" option in AWS does not mean you have a fully working, deployed WAF solution. The underlying rules need to be fine-tuned by someone knowledgeable to be effective.

If possible, hire an in-house penetration tester. Have them validate critical findings that could ruin your day. Share the actionable findings with development, ideally via a recorded video, so they know exactly how and why each finding matters. That will go much further than handing them a report of 5,000+ potential security issues from a security tool.

To next year

Every year we see new technologies and new vulnerabilities, but the basics of cybersecurity tend to stay the same: building relationships with staff and taking care to purchase and implement the best security tools for the job. So here’s one last piece of advice that covers both: talk to your developers about tools before you buy them. Developers are busy, of course, but let them own some part of the process and you’ll get much better buy-in later. Do not be a roadblock, but come alongside development and work together to incorporate good security practices. Work together as a team and you will accomplish so much more.

As always, stay secure my friends.

What New Security Threats Arise from The Boom in AI and LLMs? https://www.mend.io/blog/what-new-security-threats-arise-from-the-boom-in-ai-and-llms/ Wed, 15 Nov 2023 08:43:00 +0000 https://mend.io/what-new-security-threats-arise-from-the-boom-in-ai-and-llms/ Generative AI and large language models (LLMs) seem to have burst onto the scene like a supernova. LLMs are machine learning models that are trained using enormous amounts of data to understand and generate human language. LLMs like ChatGPT and Bard have made a far wider audience aware of generative AI technology.

Understandably, organizations that want to sharpen their competitive edge are keen to get on the bandwagon and harness the power of AI and LLMs. Indeed, in a recent study, Research and Markets predicts that the global generative AI market will grow to a value of USD 109.37 billion by the year 2030.

However, the rapid growth of this new trend comes with an old caveat: with progress comes challenges. That’s particularly true when considering the security implications of generative AI and LLMs. 

New threats and challenges arising from generative AI and LLMs

As is often the case, innovation often outstrips security, which must catch up to assure users that the tech is viable and reliable. In particular, security teams should be aware of the following considerations:

  • Data privacy and leakage. Since LLMs are trained on vast amounts of data, they can sometimes inadvertently generate outputs that contain sensitive or private information from their training data. Always be mindful that LLMs are probabilistic engines that don't understand the meaning or context of the information they use to generate output. Without guardrails or explicit instruction, they have no idea whether data is sensitive or whether it should be exposed; you have to intervene and alter prompts to reflect your expectations of what information should be made available. If you train LLMs on badly anonymized data, for example, you may end up surfacing information that's inappropriate or risky. Fine-tuning is needed to address this, and you would need to track all the data and training paths used in order to justify and check the outcome. That's a huge task.
  • Misinformation and propaganda. Bad actors can use LLMs to generate fake news, manipulate public opinion, or create believable misinformation. If you’re not already knowledgeable about a given subject, the answers that you get from LLMs may seem plausible, but it’s often difficult to establish how authoritative the information provided really is, and whether its sources are legitimate or correct. The potential for spreading damaging information is significant.
  • Exploitability. Skilled users can potentially “trick” the model into producing harmful, inappropriate, or undesirable content. In line with the above, LLMs can be tuned to produce a distribution of comments and sentiments that look plausible but skew content in a way that presents opinion as fact. Unsuspecting users consider this content reasonable when it may really be exploited for underhand purposes.
  • Dependency on external resources. Some LLMs rely on external data sources that can be targets for attacks or manipulation. Prompts and sources can be both manual and machine-generated. Manual prompts can be influenced by human error or malign intentions. Machine-generated prompts can result from inaccurate or malicious information and then be distributed through newly created content and data. Can you be sure that either is reliable? Both must be tested and verified.
  • Resource exhaustion attacks. Due to the resource-intensive nature of LLMs, they can be targets for DDoS attacks that aim to drain computational resources by overloading systems. For instance, you could set up a farm of bots to rapidly generate queries at a volume that could pose operational and efficiency problems.
  • Proprietary knowledge leakage. Skilled users can potentially “trick” models into exposing their valuable operations prompts. Usually, when you build functionality around AI, you have some initial prompts that you test and validate. For example, you can prompt LLMs to recognize copyrights, identify the primary owner of source code, and then extract knowledge about the copyrights. Potentially this means a copyright owner could lose their advantage over competitors. As I wrote earlier, LLMs don’t understand the information they generate, so it’s possible that they inadvertently expose proprietary knowledge like this.
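For the resource-exhaustion point above, one common mitigation is to put a rate limiter in front of the expensive model endpoint. The token-bucket sketch below is illustrative only: the rates, capacities, and the idea of gating each request through `allow()` are assumptions for the example, not a hardened implementation.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: absorbs short bursts, caps sustained load."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        """Spend `cost` tokens if available; otherwise reject the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# Illustrative limits: 1 request/second sustained, bursts of up to 5.
bucket = TokenBucket(rate=1.0, capacity=5.0)
results = [bucket.allow() for _ in range(8)]
print(results)  # the initial burst is absorbed, then requests are rejected
```

A bot farm firing queries faster than the refill rate quickly drains the bucket and gets rejected, while ordinary users who stay under the sustained rate are unaffected.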

These are not the only security concerns that arise from generative AI and LLMs. There are other, pre-existing issues that are amplified by the advent of this technology. In my next blog post, we’ll examine these issues and we’ll take a glance at how we might address them to safeguard users’ cybersecurity.

Let’s Embrace Death in the Software Development Lifecycle https://www.mend.io/blog/lets-embrace-death-in-the-software-development-lifecycle/ Fri, 20 Oct 2023 18:05:10 +0000 https://mend.io/lets-embrace-death-in-the-software-development-lifecycle/ The leaves are turning brilliant colors before they fall off and blow away here where I live just a few minutes outside of Salem, Massachusetts where autumn — Halloween specifically — is a very big deal.

I’m not morbid, but it’s a natural time to think about how things wind down and finally breathe their last breath. Nothing lasts forever. Not trees. Not animals. Not people. Not cars. Not houses. Not software. Especially not software.

People who actually make applications definitely know this. But instead of showing respect to our apps and letting them die a planned and peaceful death, we let our products turn first into Frankenstein’s monster with mismatched parts sewn crudely together, then finally into a zombie with fully rotting parts that fly apart at the smallest bump.

In this blog, let’s look at how and why you should retire some software gracefully, before it transforms into something scary.

Stopping zombies: Why you should let some software go

Here’s the classic graphic of the software development lifecycle (SDLC). There’s no obvious place where death comes in.

If you don’t want a zombie product, it needs to come in right at stage 1: planning. You have to plan on how you will replace all of the pieces, and you need to think about when it’ll become too complex. If you don’t decide ahead of time that you are going to budget and plan for building a new house every 100 years, what you end up with is a cursed 200-year-old mansion that’s falling down, a danger to anything it touches, and that anyone can walk right into and steal your stuff. In software, we don’t get 100 years (more like five) but the result is the same.

Here’s the SDLC in practice on a large time scale, or at least what we wish would happen: you spend a lot of time and money on the build, and then you try to maintain the plateau indefinitely to live happily and profitably ever after.

So you iterate and replace piece by piece, but meanwhile quality (and security) goes by the wayside. You don’t plan for deprecating and retiring your product; you just focus on maintaining it. Here’s what actually happens: eventually, maintaining that zombie will consume your entire revenue stream, with no money left over to rebuild. When you’re spending all of your resources maintaining a product, it’s difficult to keep it secure or functional, let alone to iterate and make it better.

This is make-or-break stuff. Many software startups fail in the first year or two. There’s a second huge cliff between eight and ten years. This makes sense.

For the majority of startups, the first couple of years are focused on making ends meet, growing fast, and building quickly with whatever they can get cheap. If they don’t then slow down enough to plan for the retirement and rebuild of their product, they’ll end up with a product that’s costly to maintain, impossible to secure, and too complex to keep functional. If that’s happening across all of their applications, the company will fry by that ten-year mark.

Why you need to think ahead to evade the monsters

Anyone who doesn’t know when their product will expire isn’t thinking very far ahead. Humans generally aren’t great at long term planning and those in charge of software companies are no exception.

One source of short term thinking comes from high developer turnover. With the average developer only staying at a company 1-2 years, the longevity of a product is seen as someone else’s problem, and it very likely will be. The decision to plan ahead and avoid zombie products can’t be left up to developers. Companies need to have the right long-term perspective and use that to tell employees how to build products that can be retired and rebuilt without chaos.

Why too many companies don’t think ahead

So what stops companies from having that perspective? Well, it’s a painful pitch to make. “Hey, I know the thing we have is working and making us money but in a year I’m going to have to replace pretty much the whole thing. We better spend some money now and rebuild the thing we already have.” It’s one of the toughest business decisions to make in application development. It’s not going to make any money; all it does is cut down future costs. Should we spend $2 million now to not spend $10 million in three years? That’s a long time frame for many companies.

But the alternative to making that tough decision is bleak. It’s easy to be penny-wise and pound-foolish. I’ve killed a lot of products in my career. I’ve retired product lines that were still profitable for the company because they were too much of a pain. They had too many quality issues and couldn’t be secured. If someone had had a little bit more foresight and made some fundamental changes to these products three or four years prior, before I got to them, I would have been able to hold onto them longer. But it was too late.

Why is the threat worse now?

Five to ten years ago you could maybe get away with keeping products around longer. Today, applications are increasingly dependent on each other and the application development supply chain is far more complex. It’s become much harder to find things that fit older software; replacing old parts is a pain in the rear that never works right. And you just can’t properly secure old software. You can try to plug up the holes as you find them but more will pop up and you’ll find some holes are simply out of reach.

I’ve tried to keep my analogies to a Halloween theme so I’m sorry to go out with one about cars. If you try to keep a car going for 30 years by replacing each part piece by piece, you’re not going to end up with a better car. You’re going to end up with a crappy car with terrible gas mileage that can barely get you where you need to go without breaking down, and can take advantage of few if any advancements in efficiency or safety.

Embrace change and refresh to keep the zombies at bay

Because of its dependencies and complexity, the problem with approaching software this way is worse than any real-world analogy. The road you drive on doesn’t change every five years, but the platforms, networks, infrastructure, and so on that your software rides on do.

You have to know things in software are always going to change. You have to plan for that change or at least recognize that the change is going to happen. There’s going to be better and cheaper ways of doing things that you’re going to want to take advantage of. So let’s embrace death in the SDLC. Plan for a peaceful death of your software now or be haunted by it later.

Are you set up to manage your dependencies efficiently and avoid zombie software from affecting your codebase?
