Quality > Quantity: How to Get the Most Out of SAST
https://www.mend.io/blog/quality-quantity-how-to-get-the-most-out-of-sast/
Thu, 01 Feb 2024

Static Application Security Testing (SAST) has a bit of a bad reputation. SAST tools can produce an overwhelming number of alerts, and security teams, often coming from networking backgrounds, don't always fully understand the alerts that they pass on to developers for fixes. This can sour the relationship between the teams, as developers often perceive this work as pointless and as holding them back from their primary responsibilities, like new features.

It doesn’t have to be that way. In this blog, I want to pass on some simple advice that I’ve seen work well for improving security posture while maintaining warm relations between security and developers.

How does SAST work?

SAST works based on coding rules to find common flaws and weaknesses that may lead to vulnerabilities. These flaws are typically defined by the Common Weakness Enumeration (CWE) framework. The flaws themselves are not vulnerabilities, but they are evidence of poor code quality. Think of SAST like a spelling or grammar check that detects where code is believed to be written poorly. The key word there is “believed”; the flaw may or may not be something that will ever be a problem, and the only way to know is to look into it. Although SAST tools are built for the purpose of detecting weaknesses that lead to security issues, they can usually also address code quality beyond security concerns to some degree.
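To make that concrete, here is an illustrative sketch (not taken from any particular SAST tool) of the kind of weakness a rule for CWE-89, SQL injection, typically flags, alongside the parameterized version most scanners and reviewers want to see instead:

    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str) -> list:
        # A typical CWE-89 rule flags this pattern: untrusted input is
        # concatenated directly into the SQL statement.
        query = "SELECT id, email FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str) -> list:
        # The parameterized version keeps data out of the query structure,
        # which is what the rule is really asking for.
        return conn.execute(
            "SELECT id, email FROM users WHERE name = ?", (username,)
        ).fetchall()

Whether the unsafe version is ever actually exploitable depends on where username comes from, which is exactly the "believed to be a problem" judgment call described above.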

Solid strategies for addressing SAST alerts without overwhelming developers

There's no way around it: even the best SAST tools in existence create a lot of alerts. Perhaps more than with any other kind of tool, your approach to SAST alerts has to include diplomacy. I've written about this before, and I cannot stress enough how quickly you will lose developers' trust and cooperation by haphazardly issuing tickets based on alerts from security tools. If you want to scan wide, go ahead. Just leave your developers out of it. Remember, not all alerts are created equal, and not all alerts are vulnerabilities.

Here are two great strategies for addressing SAST alerts that I’ve seen used in the field with success:

Stop the bleeding

With this approach, you get a baseline of SAST alerts at the start — but you do not address them. Instead, your goal is to not add any new weaknesses or code quality issues. You will only have developers address new alerts as they come in. Then, over time, you work down your backlog, perhaps using the second strategy.
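A minimal sketch of how this can be wired up, assuming your SAST tool can export findings as JSON with a rule ID, file, and line (the field names below are hypothetical): keep a baseline export from day one and only surface findings that aren't in it.

    import json

    def finding_key(finding: dict) -> tuple:
        # The identity of a finding; these field names are hypothetical and
        # depend on what your SAST tool actually exports.
        return (finding["rule_id"], finding["file"], finding["line"])

    def new_findings(baseline_path: str, current_path: str) -> list:
        with open(baseline_path) as f:
            baseline = {finding_key(x) for x in json.load(f)}
        with open(current_path) as f:
            current = json.load(f)
        # Only findings absent from the frozen baseline go to developers.
        return [x for x in current if finding_key(x) not in baseline]

    if __name__ == "__main__":
        for finding in new_findings("baseline.json", "latest_scan.json"):
            print(f'NEW: {finding["rule_id"]} at {finding["file"]}:{finding["line"]}')

In practice you would match findings more loosely than exact line numbers, since unrelated edits shift code around, but the principle is the same: the baseline stays frozen while only new findings reach developers.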

The trickle 

With this approach, you take one kind of CWE (SQL injection, cross-site scripting, and deserialization of untrusted data are common first picks) and address only those alerts. When those are done, you address the next CWE, working through the most critical kinds of CWEs first.
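If your tool's export includes a CWE identifier, the trickle is essentially a filter over the same backlog, one CWE campaign at a time. A hedged sketch, again with hypothetical field names:

    import json

    # Work through the most critical CWEs first, one campaign at a time.
    CAMPAIGN_ORDER = ["CWE-89", "CWE-79", "CWE-502"]  # SQLi, XSS, deserialization

    def backlog_for_campaign(findings_path: str, campaign_index: int) -> list:
        with open(findings_path) as f:
            findings = json.load(f)
        target = CAMPAIGN_ORDER[campaign_index]
        # "cwe" is a hypothetical field name; use whatever your export provides.
        return [x for x in findings if x.get("cwe") == target]

When a campaign's backlog hits zero, move to the next CWE and announce it to the team ahead of time, which is where the warning below matters.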

A word of warning from someone who has been there: make sure your developers know the plan ahead of time. If you don’t let them know that next month they’ll be getting a whole new set of alerts to address, you may inadvertently hurt morale.

Picking a good SAST tool

Finding a tool that decreases the number of alerts while increasing the quality of those alerts will help you immensely.

Look for a tool that does the following:

Can be customized to your needs

Just as a spelling or grammar checker can be taught to recognize, say, "Quakenbush" as a legitimate proper noun, a good SAST tool can be customized to your projects so you end up with higher-quality alerts.

Has data flow consolidation

Many SAST tools will make an alert for every data flow, even when that means hundreds of alerts generated for just a few pieces of code that need addressing. Having an impressively large number of alerts can be good when you’re making the case for more resources for your security team, but that benefit is quickly diminished when developers are tasked with actually addressing them.
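As a rough illustration of why consolidation matters, the sketch below groups per-data-flow findings by the sink where the tainted data is actually used. Hundreds of flows frequently collapse into a handful of locations that need a fix; the field names are hypothetical.

    from collections import defaultdict

    def consolidate_by_sink(findings: list) -> dict:
        # Each finding represents one data flow; many flows usually end at the
        # same sink, i.e. the line where the tainted data is finally used.
        grouped = defaultdict(list)
        for finding in findings:
            sink = (finding["rule_id"], finding["sink_file"], finding["sink_line"])
            grouped[sink].append(finding)
        return grouped

    # A tool with data flow consolidation reports one alert per sink
    # (len(grouped)) rather than one alert per flow (len(findings)).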

Encourages developer training

Some SAST tools are integrated with education programs like Secure Code Warrior or other just-in-time training for your developers. No tool or configuration will reduce alerts like your devs writing secure, high quality code in the first place.

High-quality alerts in disguise

One last piece of advice: pay attention to context, especially application context. SAST has no knowledge of runtime or deployment environments, so it's up to you to decide whether an alert is an actual vulnerability. Sometimes, what looks like a low-quality alert that doesn't matter can have huge implications down the line. For instance, a path traversal CWE in a command line tool may not be a real security vulnerability, but it is still a code quality issue. And if that command line tool is ever deployed as part of a cloud application, it becomes a real potential security vulnerability.

In parting

The purpose of any application security tool should be to actually improve your application security. Make sure you wield them wisely. Asking developers to fix things that aren’t important has no positive impact on your security posture, and will likely have a negative impact in the long run, as developers begin ignoring the security team and their own duty to code securely.

Six More Top Tips For Holistic AppSec and Software Supply Chain Security
https://www.mend.io/blog/six-more-top-tips-for-holistic-appsec-and-software-supply-chain-security/
Tue, 12 Dec 2023

In my previous post, I began to list the ways you can strengthen your security posture, with some holistic approaches to application security and the software supply chain. In this second part of the series, let's look at six more important considerations.

7. Apply wisdom, especially with AI and LLMs

Education strengthens knowledge, but you also need to apply the good judgment (wisdom, if you will) that arises from experience. This is particularly significant when using new AI and LLM technologies. Many of these tools give you a convincing, but not necessarily the best, answer, so you must evaluate the results. Blindly following the answer, no matter what tool produced it, will cause problems.

There are generalist LLMs like ChatGPT, and then there are more specific add-ons such as GitHub Copilot, which works in your development environment and has been trained on code. These more domain-specific LLMs are likely to provide more useful answers, especially when building on any of the data that you have. That’s where we’re going to see big value from LLMs.

8. Help leadership understand AppSec risk

All of this requires the buy-in of the C-suite and stakeholders. The success of your AppSec and software supply chain security program depends partly on whether decision-makers understand the risks they face at any given time. At this level, a lot of folks don't understand the technology, so they don't fully understand the risks or how to manage them. Those in the application security space need to help them improve their understanding of risk. What is inherent risk? What are controls? How do controls give you residual risk? What are the control options? How does time fit into this?

For instance, you sometimes might need to accept some risk in the short term to build an effective solution that is more supportable in the long term. Investing the time, effort, and budget in an automated, scalable solution will be more efficient over time, but perhaps more challenging in the short term.

9. Manage risk

You need a modicum of both knowledge and wisdom to manage risk and exceptions, particularly at scale. Decisions about which risks you will tolerate rest on an understanding of how your software and its dependencies work, plus judgment about which risks and exceptions are acceptable. It could mean the difference between operating a targeted and effective detection and remediation strategy and running an inefficient process weighed down with false positives. You need to apply know-how and experience to decide how best to modulate your risk threshold using techniques and tools such as SCA, SAST, container scanning, and more. Otherwise, you risk blocking developers when it isn't necessary. That will make a real mess, and your developers will be discouraged from actively supporting security efforts, which will detrimentally affect your AppSec program.

10. Maintain performance

Perhaps the main challenge for security is to deliver without impeding developers' pipelines and hindering their productivity. Think of it like motor racing. A pit stop is a big deal; there needs to be a serious reason why performance must be interrupted to address a problem. It's better to avoid that situation, where possible, by monitoring and tweaking performance while driving. While the car is on the track, the team is constantly analyzing telemetry and communications, and that's how we have to think about security. Wherever possible, it should take place during the workflow, whatever that might be. SBOM generation, static analysis, and software composition analysis all create telemetry for us. We just have to figure out the right way to harness this information, so we know what's going on and when to take action.

11. The question of accountability

Then we have to think about who is authorized to make the right calls when it comes to security decisions. It's often unclear who is accountable for the process, and practice is inconsistent across the industry right now. The complexity of the relationships between the manifold components and dependencies within modern software lends itself to security teams having oversight, with developers implementing security as it shifts left.

The challenge is how to set up a framework so that developers understand and manage their own risk, as opposed to someone managing it for them. It involves helping people easily identify, understand, and assess their risk, and providing them with information so they can make wise, educated judgments.

12. Make AppSec more preventive

Recently, we've seen attacks become more sophisticated as well as more numerous. For example, many more malicious packages are now being published to npm than before. Nevertheless, your security processes have to stay accurate, or they'll be ignored; we don't want to become the boy who cried "Wolf!" Most companies developing at scale need to maintain their speed of output while reinforcing their security. The balance is to practice responsible open source usage by applying pre-emptive and preventive AppSec tactics and tools earlier in the software development lifecycle (SDLC), shifting left, and then iterating those procedures throughout the SDLC: shifting smart.

Most importantly, make sure you have a plan. Keep your dependencies up to date. It’s like fixing up your house. You don’t wait until things start falling apart. Do preventative maintenance. Understand where you are in the value chain. Ensure that you develop solutions that are highly scalable and that fit the capabilities of your team. And don’t break development. Don’t break the build.

Six Top Tips For Holistic AppSec and Software Supply Chain Security
https://www.mend.io/blog/six-top-tips-for-holistic-appsec-and-software-supply-chain-security/
Fri, 08 Dec 2023

Developing applications and working within the software supply chain requires hard skills such as coding and proficiency in programming languages. However, protecting the software supply chain also requires some softer skills and an openness to strategies and tools that will strengthen your security posture. In this two-part series, we will discuss these considerations and how they support a holistic approach to application security and software supply chain security.

1. Walk a mile in developers’ shoes.

We’re increasingly applying security earlier in the software development lifecycle (SDLC), a shift-left approach that requires the buy-in and involvement of developers. However, developers are very sensitive to additional work, so approach this situation with empathy. You have to understand and anticipate the issues involved, and provide solutions.

Consider how you are going to integrate with development teams. What are their needs and concerns? Then frame your strategy around that and provide them with ways that make security easy for them to adopt. And seek solutions that have scalable outcomes. If you implement something that creates a huge mess for developers, it’s going to be difficult to develop and maintain momentum.

2. Where possible, automate.

Automating security measures is important because when processes are manual, you simply can't scale to handle the volume of components and dependencies, as well as the speed of change and software development. For example, static application security testing (SAST) can be integrated into the development environment so that you can help developers learn from any errors as they work. Don't build a program that requires them to submit the application for scanning, get the results back, and only then learn. Integrate those lessons into the development loop as quickly as possible. Doing so reduces friction for developers, and that leads to a better application security program.

3. Build a good inventory.

It might sound obvious, but you can't fix things if you don't know you've got them. So, having a good software inventory is critical. Employ tools such as a software bill of materials (SBOM) and integrate it into the development environment. The detailed inventory of libraries you get from an SBOM gives you the visibility needed to align responsibility and maintain traceability across your repos. There are always incremental improvements to make, but it's an exciting place to be, because with that visibility we can know very quickly which repos are affected and who's responsible when the next zero day lands, as we saw with the likes of Spring4Shell.
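As a small illustration of the visibility an SBOM gives you, here is a hedged sketch that reads a CycloneDX-style JSON SBOM and lists each component with its version and declared licenses; the exact fields depend on how the SBOM was generated.

    import json

    def list_components(sbom_path: str) -> None:
        # CycloneDX JSON keeps components in a top-level "components" array;
        # each license entry may carry an SPDX "id" or a free-form "name".
        with open(sbom_path) as f:
            sbom = json.load(f)
        for component in sbom.get("components", []):
            licenses = []
            for entry in component.get("licenses", []):
                lic = entry.get("license", {})
                licenses.append(lic.get("id") or lic.get("name") or "unknown")
            print(f'{component.get("name")} {component.get("version", "?")}: '
                  f'{", ".join(licenses) or "no license declared"}')

    if __name__ == "__main__":
        list_components("bom.json")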

4. Take responsibility.

You need inventory to have accountability, and you have to provide your teams with tools to take responsibility for effective management. SBOMs aren’t just gestures. They address a market inefficiency that governments are trying to correct. This inefficiency is that sometimes software vendors or service providers have inadvertently transferred the risk of ineffective maintenance over to their customers, and the customers are not sufficiently aware of that risk. SBOM and other standards, like configuration management database (CMDB), are attempts to fix this, and they address calls from governments such as the Biden administration to apply such standards via agencies such as the National Telecommunications and Information Administration (NTIA).

We know from experience that when customers issue requests for proposals and RFIs, they ask for SBOMs. So, I anticipate that over time, more sophisticated consumers at the corporate level will ask for SBOMs, and I think that the intent of the public policies being introduced is to correct the market inefficiency.

5. Don’t rest on your laurels. Keep updated.

Once a tool like this is in place, stay alert and keep it updated. Teams that struggle to patch new vulnerabilities and threats also struggle with maintenance. That’s where a tool like Mend Renovate helps enormously, scanning your software, discovering dependencies, automatically checking to see if an updated version exists, and submitting automated pull requests.

And be mindful that security can’t be a one-time thing. Whether it’s an SBOM, a code scan, or an architectural review, it’s got to be built into a flow because software’s changing very quickly. You can create an SBOM in the morning and quite quickly, it can become out of date because someone has checked something in, pushed it into test, and it might have even gone to production. That’s why automation is so important. It makes security seamless and eases the process for those who are tasked with it, like developers, so they can readily adopt and embrace the process. That circles back to my first point: be developer-focused, because they’re the folks you’re relying on to apply your security strategy.

6. Integrate education.

Developers need to know what they’re looking for and what to do when they find it. They therefore need to be educated and remain cognizant of changes and updates. It’s a process that should be ongoing, and as integrated as possible into their normal workflow. So, make sure you have processes that are integrated with IDEs and build pipelines that show teams flaws and errors. This is important because you can take the lessons learned from finding vulnerabilities and feed them back into the training program. Most important is helping people understand when errors occurred and why, so that security can constantly improve.

Equally, focus on your people. Set them up for long-term success by keeping them trained and educated. Give them a strong foundation in networking. Offer even non-technical folk with a stake in security some introduction to Linux and some Python. Then they will expand their skills, become more valuable to your processes, and grow with you. With increased knowledge comes increased understanding and more visibility into security issues. Establish a minimum standard and ensure that everybody is trained to that standard in both the security and DevOps spaces, and you’ll benefit from a lot of efficiencies in the long run.

Read the second part of this series for six more important considerations.

]]>
How Software Supply Chain Security Regulation Will Develop, and What Will It Look Like?
https://www.mend.io/blog/how-software-supply-chain-security-regulation-will-develop-and-what-will-it-look-like/
Tue, 12 Sep 2023

The escalation of international legislative interest in regulating the software supply chain has led to an increasing likelihood that tools such as software bills of materials (SBOMs) and AppSec solutions will become essential for companies doing business in the public sector or in highly regulated industries. However, the process of building and enforcing effective regulations presents challenges as well. As regulations grow in importance, what effect will that have on software supply chain regulation? And let's not forget the topic that affects everything tech these days: AI.

Influencing regulations 

Regulations are not birthed solely in government or legal circles. There’s a combination of influences that drive the adoption of regulation. For example, as liability shifts towards software producers, this won’t go unnoticed by the insurance industry. We can anticipate that insurers will intervene to ensure that organizations properly disclose the extent of their cybersecurity measures. Otherwise, claims will be denied and insurance will be withheld.

There’s also a business motivation for regulation. Companies will want to demonstrate to public and private customers that they are responsible corporate citizens and reliable business partners. There may be initial costs, but over time they will reap the benefits of having software supply chain security that their competitors don’t. Then, everyone else will have to catch up and good AppSec will become standard, especially as regulations will increasingly require companies to fulfill certain security criteria. Partly this is a cultural shift that embraces the value of trustworthiness in driving revenue; it’s also an acceptance of the fact that building a robust security framework that meets regulatory guidelines will ultimately be a successful marketplace differentiator.

SBOMs and due diligence 

Under regulations and cybersecurity strategies from the U.S., the EU, and as far afield as Australasia, SBOMs are likely to play an increasingly important role in the due diligence process. Nevertheless, opinion is divided as to how much emphasis should be put on SBOMs alone. On the one hand, encouraging or compelling vendors to produce SBOMs makes them accountable for what's in their software. It makes them more responsible for managing their own software supply chain risk, and it makes them more reliable.

However, the information that SBOMs provide is only valuable if the buyer knows what to do with it. SBOMs are one element of an evolving security posture based on trustworthiness, transparency, and disclosure. At present, that responsibility is migrating towards the producer, and we can expect to reach an equilibrium between what the seller is entitled to understand about the buyer's intended use, especially in sophisticated environments, and the buyer's ability to use data about pedigree and provenance, not just SBOM data.

Importantly, as part of every procurement process, sellers should know what’s in their tools and buyers should know that what they’re getting is secure.

What’s next for software supply chain security regulation?

Assuming that current trends in regulation continue, we can expect SBOMs to proliferate, along with increased adoption of existing technologies that manage security risks. But what’s next?

The answer lies in bringing together and applying other guidelines, methodologies, and verification practices. For example, NIST provides us with the Secure Software Development Framework (SSDF), a set of fundamental, well-established, secure software development practices that should be integrated with every software development lifecycle (SDLC) implementation, such as doing static analysis, defining vulnerabilities, using the right compiler settings, and other processes you can implement and decisions you can make while you’re building your software to make it more secure. Hand-in-hand with this, and with the use of SBOMs, you can verify your security measures by applying standards such as the Cybersecurity Maturity Model Certification (CMMC) to attest to your commitment to security best practices.

These processes and tools will continue to develop to satisfy regulations like FARs and DFARs and will inevitably evolve as the use of software components, dependencies, and the attack surface continues to expand. What’s likely is that all these tools and practices will coalesce into regulated standard operating software supply chain security procedures.

How does AI figure in these considerations?

In the next year or so, we can expect software supply chain security to include AI — and that technology will certainly be regulated. The EU has just gone through the first step of a three-step process of passing an AI statute. The UK is working on one, and there are hearings in the U.S. Congress. In February 2023, the American Bar Association (ABA) issued a statement of principles around AI, which includes some considerations about the use of AI in software development.

Perhaps the most challenging thing when considering including AI in this discussion is defining it clearly, because it’s difficult to regulate what you can’t define. This is where NIST may play an important role with its AI management framework, currently a set of voluntary guidelines that could pave the way for workable and transparent regulations.

The main concern is that heavy regulation could stifle innovation, particularly when competing with companies in countries with fewer limitations. The most important thing is to focus on the outcome, which is to improve security and accountability without stifling innovation. If nothing else, developing regulations encourages us to start thinking about what we can do to improve software supply chain security organically, to drive us to do better due diligence ourselves, and to be more responsible and accountable before we need to feel the force of law.

]]>
Why Legal Regulation Shifts Responsibility for Software Supply Chain Security to Vendors
https://www.mend.io/blog/why-legal-regulation-shifts-responsibility-for-software-supply-chain-security-to-vendors/
Thu, 07 Sep 2023

In the face of increasingly impactful malicious attacks, governments of leading economies have turned their attention to software supply chain security. Regulations like the EU's Digital Operational Resilience Act (DORA) for financial institutions and the Cyber Resilience Act (CRA) for software and hardware providers, Australia's 2023-2030 cybersecurity strategy, and the U.S. government's national cybersecurity strategy have turned a spotlight on the regulation of the software supply chain and could create significant change in application and software security, particularly when it comes to liability.

The importance of protecting customers

We all sign end-user license agreements (EULAs) but customers vary hugely in terms of sophistication. There are government and military customers of software that have big security budgets. There are big corporations, Fortune 500 banks, and regulated industries, and then there are community hospitals and public school systems and small businesses, and sometimes they’re all users of the same software. 

Sophisticated customers understand what security issues they face and have the budget, resources, and tools to implement vulnerability detection, protection, and response. Those without the capability or budget to address the issue of vulnerable software risk losing their entire operations. Recently, a major U.S. hospital network, Prospect Medical Holdings, fell victim to a ransomware-based cyberattack that disrupted 16 hospitals in California, Connecticut, Pennsylvania, and Rhode Island, and 166 outpatient centers and clinics, leading to some hospitals having to stop operations and divert patients to other facilities. The attack was so serious that radiology, diagnostic, and heart health facilities were forced to close, and the FBI is investigating. 

This kind of attack is not unprecedented, and it illustrates why it’s so important for vendors to help protect those customers who have fewer security capabilities or less knowledge with which to mitigate risks.

Challenges with regulation

Formalized regulation is one way of addressing this problem, but it has issues of its own. We know how to innovate breathtakingly quickly. What we don’t know how to do is respond as quickly with regulation and appropriate checks and balances that don’t hobble innovation. This isn’t necessarily a new quandary. Typically, innovation in information and communication technology has outpaced regulation, and the latter has always had to play catch-up. 

The current debate about AI is a perfect example. Entrepreneurs are leading the way with incredible innovation, but the risks are only now beginning to be understood, and the power of technology is overwhelming.

The same thing is happening with software supply chain security. The legal and regulatory system is beginning to come up with some tentative innovations that could change the way we contract and deploy technology, in a new definition of what constitutes due diligence. Every one of these transactions is governed by a contract or a statute, and the nature of those contracts is changing dramatically, pushing the burden onto developers and sellers in addition to the old classic caveat emptor burden on the buyer to be aware of what they're getting.

From “caveat emptor” to “caveat venditor”

We've reached a stage of maturity and complexity in the software and applications industry that requires this new level of due diligence. As the software supply chain becomes more complicated, allowing the risk of purchase to fall entirely on the buyer is no longer acceptable or fair. We've seen this pattern before with manufactured goods like cars, household appliances, and electrical hardware. Initially, the notion of caveat emptor, buyer beware, was king. It was the buyer's responsibility to check that an intended purchase did what was expected and worked properly, but as these items became more complex, it was clear that buyers and end users didn't always have the expertise to make these judgments. They needed to be protected by warranties, because only the manufacturers and vendors had the know-how to suitably assess and guarantee their products. Established in law, caveat emptor shifted to caveat venditor, seller beware.

The concept of warranties has become increasingly specific, and granular warranties have been incorporated into U.S. federal acquisition regulations, defense acquisition regulations, and all the state acquisition regulations that mirror them. Now the contracts at every level of government reflect a general acceptance that there are some things the buyer doesn't necessarily have the expertise to know. This statutory shift in federal contracting has now begun to enter the marketplace for software and applications, shining a light on elements of trustworthiness in contractual transactions within this market. Concepts of disclosure, candor, and clarity about the provenance of software components and dependencies increasingly call for tools like software bills of materials (SBOMs) to be part of software contracts.

What does this new due diligence look like? 

Although they're not yet fully part of the statutory scheme, SBOMs are mentioned in this year's White House cybersecurity strategy as an option, and the likelihood is that they'll be required in any serious software transaction in the near future.

What is also evolving is a new sense of a much more balanced relationship between the buyer and the seller, especially with sophisticated products, and in sensitive environments like critical infrastructures and national security environments. There’s much more tolerance for spreading the pain so that all the burden doesn’t fall on the buyer. So, what exactly is the seller’s responsibility?

Increasingly, we’re seeing efforts to ensure that there’s explicit language where the seller is expected to know about their own supply chain and fully disclose that to the buyer just as the buyer should explain to a seller how they intend to use the product, what the context is, and what some of the risks are. Then, the seller has the opportunity to say whether the product would be used appropriately and if the intended use might be risky. This process provides the vendor with the opportunity to raise certain legitimate disclaimers to ensure that the buyer fully understands where there’s a risk that the vendor can’t cover. It leads to more open and transparent transactions.

The bottom line is that it shifts the risk balance from the side of the buyer to more shared responsibility with the seller. In the U.S. that’s going to be reflected in federal acquisition regulations (FARs), the Defense Federal Acquisition Regulation Supplement (DFARS), and state acquisition regulations supporting contracting. More balanced due diligence will become part of the contracting environment for all software and applications adopted by the public sector, and we anticipate that the private sector will follow this lead.

]]>
What You Should Know About Open Source License Compliance for M&A Activity
https://www.mend.io/blog/what-you-should-know-about-open-source-license-compliance-for-ma-activity/
Thu, 18 May 2023

Companies are increasingly concerned about the security risks lurking within open source code, especially as they navigate complex mergers and acquisitions. Each piece of open source software is governed by a license, a legal contract dictating how the software can be used. While this might sound straightforward, the reality is a complex web of over 200 different licenses. To ensure legal compliance and mitigate risks, companies must meticulously catalog and understand the licenses embedded within their software, especially during M&A deals.

It’s not just about security; it’s about survival.

In this article, you will learn how to navigate the complex world of open source licenses, understanding their implications for software development, especially during mergers and acquisitions, to protect your business from legal and security risks.

This article is part of a series of articles about Open Source License Compliance

Types of open source licenses

Given the nature of open source, it’s unsurprising that licensing rules are also open. Anyone can create an open source license that suits them — that’s why there are so many out there. In general, however, open source licenses can be divided into two main categories: copyleft and permissive. Permissive licenses, like Apache 2.0, are typically referred to as “anything goes” because they place minimal to no restrictions on how others can use these components. Given the lack of restrictions, there is relatively little compliance risk with this category.

Copyleft licenses, such as the GNU General Public License (GPL) family of licenses, are a different story. This category of licenses gives other people the right to use, modify, and share the work, as long as the reciprocity of the obligation is maintained. In layman’s terms, if you’re using a component with this kind of license, you have to make your source code available for use by others.

Risk of copyleft licenses

We’ll use GPL as our example, but these same principles apply to other copyleft licenses.

GPL requires the release of a component's full source code along with broad rights to modify and distribute that source code. When you build software using GPL-licensed components, you have to release your work under a GPL-compatible license, regardless of the percentage of GPL-licensed code being used.

Nobody wants to do that, because anyone, including competitors, can learn how your software is structured and designed. Then they can modify your program, perhaps building out new high-demand functionality that supersedes your product. That’s a big risk.

How do we reduce this risk?

The GPL requirement to share source code applies to all derivative works. However, it doesn't extend to programs that are separate and independent from the GPL-licensed work. Unfortunately, there's no clear legal definition of when proprietary software is separate and independent. Nevertheless, you can reduce your exposure by:

  • Using a dynamic rather than a static link between the GPL module and the custom software.
  • Using separate names for the GPL and proprietary modules.
  • Including separate copyright notices for GPL and proprietary modules.
  • Pricing the software with and without the GPL module, where practical.
  • Providing the GPL and proprietary modules as separate downloads.
  • Providing them as separate executables, with separate documentation.

Creating a company policy for the use of components

Every company needs a clear policy regarding how or if certain open source license components can be used. Using GPL as our example, at a minimum, a policy should cover the following: 

  • Will you permit any GPL components in products that get distributed?
  • Can they only be used in the backend tools and not part of the distributed product?
  • If these components are allowed in deliverables, which GPL versions will you prohibit? 

Licensing should be treated just like security: manage by exception, because it's easier to maintain "block" lists than "allow" lists. Imagine trying to approve hundreds of thousands of components with over 200 different license types, instead of just blocking a few unacceptable license types.
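To make "manage by exception" concrete, a deny-list check can be a few lines of code rather than a large approval workflow. A hedged sketch, with an illustrative deny list and hypothetical component fields (for example, pulled from an SBOM or SCA export):

    # Illustrative deny list; adjust it to match your own policy.
    DENIED_LICENSES = {"GPL-2.0-only", "GPL-3.0-only", "AGPL-3.0-only"}

    def check_policy(components: list) -> list:
        # components is assumed to be a list of {"name": ..., "license": ...}
        # entries, e.g. extracted from an SBOM or SCA report.
        violations = []
        for c in components:
            if c.get("license") in DENIED_LICENSES:
                violations.append(f'{c["name"]}: {c["license"]} is not allowed')
        return violations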

How to ensure license compliance with GPL components

Achieving license compliance with GPL components is essential to avoid legal pitfalls. Two primary steps are crucial: creating a compliant notice file and making all source code accessible. A notice file, often referred to as a header file, must be updated with each new release to accurately reflect the open source license terms. This typically includes a copyright notice, disclaimers, and the specific license text. By diligently following these steps, developers can maintain GPL compliance and protect their software projects.
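A minimal sketch of generating a notice file from a simple component inventory (the inventory format here is hypothetical; in practice it would come from an SBOM or SCA export):

    import json

    def write_notice_file(inventory_path: str, notice_path: str) -> None:
        # inventory.json is assumed to be a list of objects with "name",
        # "version", "license", "copyright", and "license_text" fields.
        with open(inventory_path) as f:
            components = json.load(f)
        with open(notice_path, "w") as out:
            out.write("THIRD-PARTY NOTICES\n\n")
            for c in components:
                out.write(f'{c["name"]} {c["version"]} - {c["license"]}\n')
                out.write(c.get("copyright", "") + "\n\n")
                out.write(c.get("license_text", "") + "\n")
                out.write("-" * 60 + "\n")

Regenerating this file as part of every release keeps the notice in step with the components you actually ship.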

Technical due diligence: Top tips

Due diligence is crucial for accurately valuing a company and determining potential risks. It's often conducted during major corporate events like M&A and IPOs.

It involves assessing a company’s technology-related aspects like its products, software, systems, and practices. Our focus here is on third-party software use. You need to keep a record of any third-party and open-source components that you use. Here’s how to prepare and avoid any unpleasant surprises:

  • Prohibit people from downloading any software without first reviewing its licenses.
  • Know the intended use of the software.
  • Categorize software by its license type, along with its permitted and prohibited uses.
  • Document which version you’re using, including a link to the license text.
  • Be aware of any license restrictions.
  • Document if you have access to the proprietary program.
  • Ensure the source code version is available for download for any GPL components.

Be prepared. Use an SBOM

The main lesson is to get your open source license policies and controls ready if you think M&A activity is imminent. Start by inventorying what components you’re using and their associated licenses and risks. Use tools like a software bill of materials (SBOM) to achieve this. It’s something that numerous governments are mandating. 

Learn More: Best Practices for Open Source Governance

Streamlining the process with SCA

This is overwhelming to do manually, but there’s an easier way, using software composition analysis (SCA).

During the due diligence process, you should produce a list of all your third-party software dependencies and relevant metadata. This should include the package names, authors, suppliers, versions, declared and hidden licenses, dependency paths, and whether the packages have been statically or dynamically linked to the product. With the right SCA tool, you can do all this automatically in real time when a developer pushes code.
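For a sense of what the raw data looks like, here is a hedged sketch that lists installed Python packages with whatever license metadata they declare. A real SCA tool goes much further, working across ecosystems, resolving dependency paths, and catching undeclared or hidden licenses.

    from importlib.metadata import distributions

    def list_declared_licenses() -> None:
        # Reports only what each package declares in its own metadata; it will
        # not find hidden or misdeclared licenses the way an SCA tool can.
        for dist in sorted(distributions(), key=lambda d: (d.metadata["Name"] or "").lower()):
            name = dist.metadata["Name"]
            license_field = dist.metadata.get("License") or "not declared"
            print(f"{name}=={dist.version}: {license_field}")

    if __name__ == "__main__":
        list_declared_licenses()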

You could commission an external audit on your code base, but typically firms that do this use an SCA tool anyway. If you choose an external audit, ensure it flags any known open source vulnerabilities with contextual data like the CVE score. Ensure it identifies any copyleft license components and includes a license compatibility report.

Finally, a good audit should include an attribution report, containing all the licenses, the license text, the copyrights, and the notice files required for each open source component.
