Aurora Starita – Mend https://www.mend.io Wed, 18 Dec 2024 17:35:47 +0000

Mend.io – Backstage Integration: Bringing Security Insights Where You Need Them https://www.mend.io/blog/mend-io-backstage-integration-bringing-security-insights-where-you-need-them/ Wed, 18 Dec 2024 17:35:46 +0000

Backstage background

Launched as an internal project by Spotify in 2016, Backstage was released under the Apache 2.0 open source license in 2020 to help other growing engineering teams deal with similar challenges. Backstage aims to provide a consistent developer experience and centralize tools, documentation, and services within a single platform.

What started as a way to help new developers onboard faster is now a fully fleshed out developer portal that standardizes how teams interact with their internal services, APIs, and resources. Backstage includes features for service catalogs, continuous delivery, observability, and plugin integrations—all customizable to fit specific workflows.

For application security teams, Backstage offers broad visibility and control across the development process, and with the Mend.io plugin, deep insight into application risks, both overall and by project.

Mend.io for Backstage

Switching across multiple tools and projects isn’t just annoying; it can also lead to security blind spots and delayed response times. Likewise, missing opportunities to address vulnerabilities early in the development process can lead to costly rework, delayed releases, and vulnerabilities in production.

We want to save you that trouble by consolidating security information from SCA, SAST, and container scans into a single view within Backstage, giving you a comprehensive overview of each project’s security.

We built the Mend.io plugin for Backstage to help you:

  • Stay where you are with an integrated projects overview. We love our own UI, but we want to be wherever it is that you need us to be. Mend.io’s centralized project dashboard within Backstage provides visibility into each project and its security findings, giving a complete view of a project’s threat landscape.
  • Zoom in for detailed findings or zoom out for the big picture. A proactive approach to application security requires insights to make broad strokes and fine lines. Mend.io’s findings overview dashboard drills down into identified risks for a deeper analysis right within Backstage and shows all identified vulnerability findings across your organization.
  • Work on what matters with risk severity prioritization. Vulnerabilities are not all created equal and treating them that way is a waste of resources. The Mend.io plugin for Backstage displays the severity level of each identified threat to effectively prioritize remediation of the risks that matter most.
  • Bring security directly to developers. Encouraging developers to take ownership of security with enthusiasm starts with minimizing the disruptions security tools add to their environments. The Mend.io plugin for Backstage embeds security information directly into developers’ workflows and presents security scan results clearly and concisely, making it easier for developers to understand and remediate issues.
  • Nip vulnerabilities in the bud. Help developers stay proactive with application security. The Mend.io plugin integrates security checks early in the development process, enabling developers to identify and address vulnerabilities while they’re still fresh.
  • Keep track of the difference you make. It’s difficult to assess your security posture and demonstrate your team’s accomplishments to stakeholders if you lack clear metrics and insights. The Mend.io plugin for Backstage provides important metrics and reports to track security progress and demonstrate the effectiveness of security efforts.

Keeping applications secure is a tough job and Mend.io is here to assist you wherever you’re doing it.

Adding the Mend.io plugin to Backstage

Installing the Mend.io plugin for Backstage is simple. All you need is a short script to install the back-end plugin plus a Mend.io API token. Information on both, and everything else you need to get started, can be found on the plugin installation page.

What is the KEV Catalog? https://www.mend.io/blog/what-is-the-kev-catalog/ Thu, 19 Sep 2024 13:25:42 +0000

With external threats looming as a constant source of potential disruption, multiple government agencies have coordinated to compile a catalog of Known Exploited Vulnerabilities (KEV). The Known Exploited Vulnerabilities Catalog, or KEV catalog, is a database of actively exploited vulnerabilities, including those that have been exploited by ransomware campaigns, that can help application security professionals in the public and private sectors monitor threats and prioritize fixes.

A brief history of the KEV catalog

The KEV Catalog was created and is maintained by the Cybersecurity and Infrastructure Security Agency (CISA), a component of the Department of Homeland Security. Each catalog entry corresponds to a CVE record, which is enriched with additional detail in the National Vulnerability Database (NVD) maintained by NIST, the National Institute of Standards and Technology, an agency within the Department of Commerce.

In 2021, CISA issued “Binding Operational Directive (BOD) 22-01 – Reducing the Significant Risk of Known Exploited Vulnerabilities”, which requires all federal civilian executive branch (FCEB) agencies to comply with regulations to ensure protection of government systems and information from cyberthreats. These regulations include regular monitoring of the KEV catalog and remediation of all known exploited vulnerabilities included in the catalog within specific timeframes. 

Though it is not legally required for private sector organizations to follow CISA’s guidelines, all organizations with online applications and websites are strongly encouraged to regularly monitor the catalog to better protect themselves and their clients. 

How does a vulnerability get added to the KEV catalog? 

The KEV catalog only contains vulnerabilities that pass an evaluation process. When a new vulnerability is disclosed, it is typically assigned a Common Vulnerabilities and Exposures (CVE) ID and analyzed to determine potential impact and exploitation status. CISA then seeks feedback from security experts, federal agencies, and the private sector to gain insight into the vulnerability.

In order to be included in the catalog, the vulnerability must meet the following criteria:

  • Evidence of active exploitation
  • An assigned CVE ID
  • Existing methods for remediation, typically from the vendor

 If the vulnerability meets these criteria, it is added to the KEV catalog with detailed information. Once a vulnerability is added to the catalog, information is regularly updated to include remediation guidance. It’s important to emphasize that vulnerabilities aren’t added to the catalog until there are ways to address them.
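CISA publishes the KEV catalog as a machine-readable JSON feed, so checking whether a CVE in your stack is KEV-listed can be automated. Below is a minimal Python sketch; the field names (`vulnerabilities`, `cveID`, `dueDate`) follow the feed’s schema at the time of writing, and the single-entry catalog is a trimmed-down, hypothetical stand-in for the real download, so verify both against the current feed before relying on them.

```python
# Sketch: index a local copy of CISA's KEV catalog JSON for fast lookups.
# Field names follow the published feed's schema at the time of writing;
# verify against the current feed before depending on them.

def kev_index(catalog: dict) -> dict:
    """Map each CVE ID to its KEV entry."""
    return {entry["cveID"]: entry for entry in catalog.get("vulnerabilities", [])}

# A hypothetical, trimmed-down catalog for illustration:
catalog = {
    "vulnerabilities": [
        {
            "cveID": "CVE-2021-44228",
            "vulnerabilityName": "Apache Log4j2 Remote Code Execution",
            "dueDate": "2021-12-24",
        },
    ]
}

index = kev_index(catalog)
print("CVE-2021-44228" in index)  # KEV-listed: prioritize it
print("CVE-2099-0001" in index)   # not (yet) in the catalog
```

In practice you would fetch the current feed from CISA on a schedule and run your SCA inventory of CVE IDs through the same lookup.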

Prioritize with CVSS, EPSS, known exploits, and reachability

Are all vulnerabilities created equally?

Though the government also maintains a National Vulnerability Database (NVD) with over 160,000 CVEs, the majority of potential vulnerabilities have no known exploits in the wild. Research from CISA indicates that less than four percent of CVEs have been exploited. However, once a CVE is exploited, threat actors move quickly, which is why regular monitoring of CVEs and their exploit status is important. 

While a vulnerability may be well known within the cybersecurity community, it typically does not warrant action until a public exploit is discovered. Academics frequently publish papers on potential ways an exploit could be performed, but a vulnerability cannot cause harm until an exploit is used in the real world. The KEV catalog helps separate theoretical threats from active ones by only including vulnerabilities with known exploits.

Are all actively exploited vulnerabilities in the KEV catalog?

No. Not all known vulnerabilities are reported through the CVE process, even though they could be, and those that aren’t will not be included in the KEV catalog. It’s also likely that some known and unknown vulnerabilities are being actively exploited right now without anyone having discovered it yet. Other things we sometimes think of as vulnerabilities, like malicious packages, cannot be reported through the CVE process, so they won’t show up in KEV either. But if you’ve installed a malicious package, there’s no question of whether it’s exploitable: you are actively under attack.

Prioritizing with known exploits 

The KEV catalog can make it easier to determine which vulnerabilities should be addressed first. Though a CVE may have a critical severity score, if it has no known exploits, it is less of a threat than a CVE with a medium severity score that is listed in the KEV catalog, since the listing means active exploits exist for it. A CVE that has active exploits should always be prioritized over CVEs with only theoretical exploits.
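As a sketch, this prioritization rule (KEV membership first, CVSS as a tiebreaker) boils down to a one-line sort; the finding records below are hypothetical stand-ins for what a real scanner would report.

```python
# Sketch: order findings so KEV-listed CVEs come first, then by CVSS score.
# The finding records are hypothetical; real scanners expose similar fields
# under their own names.

findings = [
    {"cve": "CVE-A", "cvss": 9.8, "in_kev": False},
    {"cve": "CVE-B", "cvss": 5.4, "in_kev": True},
    {"cve": "CVE-C", "cvss": 7.1, "in_kev": False},
]

# Sort key: KEV membership outranks raw severity; CVSS breaks ties.
# (False sorts before True, so "not in_kev" puts KEV entries first.)
prioritized = sorted(findings, key=lambda f: (not f["in_kev"], -f["cvss"]))
print([f["cve"] for f in prioritized])  # ['CVE-B', 'CVE-A', 'CVE-C']
```

Note that the medium-severity KEV-listed CVE-B lands ahead of the critical but unexploited CVE-A, exactly as the paragraph above argues it should.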

While the KEV catalog is not the end-all, be-all of application security, it is an important tool for those within application security. Utilization of exploit information such as data from the KEV catalog, in addition to other vulnerability metrics, including Exploit Prediction Scoring System (EPSS) and Common Vulnerability Scoring System (CVSS), can aid any organization in strengthening their application security.

Application Security — The Complete Guide https://www.mend.io/blog/application-security/ Thu, 12 Sep 2024 21:51:42 +0000

What is application security?

Application security is the combination of tools, practices, and policies that are used to protect the application layer of software from threat actors. Once something of an afterthought, application security is now widely and rightfully recognized as a vital part of the software development life cycle (SDLC). As the complexity of technology increases, considering application security early and often in the SDLC is imperative to keeping data and resources from falling into the wrong hands.

This straightforward guide will help you build a better understanding of application security and the tools and practices your organization can use to stay protected. 

Why is application security important?

Globally, the average cost of a data breach in 2023 was over $4 million. For US companies, that number was over $9 million, according to IBM. The Verizon 2024 Data Breach Investigations Report found that web applications are the top hacking vector in breaches. The cost of ignoring application security is high.

Beyond the financial loss, painful as that is, a business’s good reputation that took years to build can crash in the blink of an eye after a serious security failure. In 2017, Equifax suffered one of the worst data breaches in history. The following year, after compiling financial data and several customer surveys, USA Today declared Equifax “The Most Hated Company in America.” Ouch.

Virtually all businesses, not just those in tech, now use software to run their daily operations, ultimately meaning that application security, or a lack thereof, affects nearly every human on the planet.

  • Global average cost of a data breach:  > $4 million
  • U.S. companies: > $9 million
  • Top attack vector: Web applications

The major challenges of AppSec

Protecting applications isn’t for the weak. Here are some of the major challenges to keeping apps safe today.

App + Sec relationship status: It’s complicated

A vast, sprawling, and complex landscape filled with vulnerabilities at nearly every twist and turn, the application layer is the top infiltration spot for bad actors in search of valuable data and other assets. The increasing complexity of applications and the systems and architectures they connect to makes securing them properly a real technical challenge.

Third-party woes

Modern software is rarely built entirely from scratch. Rather, modern software is composed: often about 80% existing open source and third-party code, with the remaining 20% custom code written to bind it all together and add a layer of innovation. Open source code is by definition freely available to examine and is used by many people, making it an extremely inviting vector for bad actors. Code from a vendor may be more obscure, but as the SolarWinds hack showed us, it can still present a major attack surface.

Who, me?

A frequent challenge to application security lies in intra-organizational confusion about who exactly is responsible for it. This ends in a lot of finger pointing and not a lot of positive action. Unsurprisingly, the answer to who is responsible is “everyone,” but that really must start with the people controlling the budgets. Taking security seriously starts with funding the tools needed to do a good job.

Deep in technical debt

We all love our old software, still functioning and bringing in money, but aging software is especially prone to accruing large amounts of technical debt, which leaves applications insecure. Shortcuts made to keep up with shrinking software release cycles are another source of technical debt and security weaknesses. Finding the time and will to pay down the debt can be a struggle for many organizations.

Application security best practices

From how you organize your company to how you write a single line of code, there are many ways to practice great application security. The application security best practices here are just a few high-level suggestions to get you going.

Work together

Not to be cliche, but teamwork really does make the dream work. If application security is not already a part of your organization’s culture, here are some things you can do to get started.

Bring down the firewalls between development and security teams

Name one or more members of your development team as security champions. Security champions are active members of a development team who serve as security ambassadors within their team. They also give the application security team visibility into their team’s security activities, and often act as the point person between developers and a central security team.

Incentivize developers to use security tools

As a group, devs may not be particularly well known for their people skills, but they do know something about human nature: people don’t like to do things that are repetitive or tedious. In software engineering, laziness is a virtue, so meet developers where they are and give them tools to work smarter and minimize busywork: tools embedded right into the environments they’re already using and coding within.

Manage privileges

For all the silos you should knock down to integrate your teams into a lean, mean DevSecOps machine, you should be raising walls when it comes to user privileges. Make sure that users across your organization have access only to the data they absolutely need, reducing your attack surface against threats from both inside and outside your organization.

Know your stuff

It’s difficult to protect something that you aren’t aware you have, and it’s also not so easy to convince people you’re doing something if you aren’t even sure yourself. It’s time to clear the fog. 

Track assets

Tracking your assets simply means knowing which hardware and software you’re using and what they’re doing.  

While you’re taking stock, it’s a good idea to note and prioritize those things that:

  • Access vulnerable networks (like the public internet)
  • Access personally identifiable information (PII)
  • Access financial information
  • Are subject to regulation
  • Are public facing
  • Are crucial to business function

You should have a record of all of the code, open source and custom, that your products depend on, all of that code’s dependencies, and all of those dependencies’ dependencies, and… you get the idea. Another great reason to create and keep updated SBOMs.

Model threats

Malicious actors care about what you care about. Now that you have a good idea of what’s important, you can start looking for ways they might try to get it for themselves.

If you’ve never threat modeled an application before, here are the basic steps:

  1. Document how the application functions, focusing on the flow of data through the application.
  2. Find threats against the application by following the flow of data and identifying places weak to certain exploits (attack databases work well here).
  3. Address threats by rating them and deciding on mitigation strategies.
  4. Validate the model to make sure you didn’t miss anything.
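Step 3 above, rating threats and deciding what to mitigate first, can be sketched with a simple likelihood-times-impact rating. The threat entries below are hypothetical examples; real models would draw them from your data-flow analysis.

```python
# Sketch: rate and triage threats from a threat model using a
# likelihood x impact score. The entries are hypothetical.
from dataclasses import dataclass

@dataclass
class Threat:
    description: str
    likelihood: int  # 1 (rare) .. 5 (expected)
    impact: int      # 1 (minor) .. 5 (severe)

    @property
    def rating(self) -> int:
        # Simple risk rating: likelihood times impact.
        return self.likelihood * self.impact

threats = [
    Threat("SQL injection on login form", likelihood=4, impact=5),
    Threat("Verbose error pages leak stack traces", likelihood=3, impact=2),
]

# Highest-rated threats get mitigation strategies first.
for t in sorted(threats, key=lambda t: t.rating, reverse=True):
    print(t.rating, t.description)
```

A spreadsheet works just as well for small models; the point is that every identified threat gets a comparable score before mitigation decisions are made.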

Define your metrics—know your risk, track your efforts

Quantifying and tracking security isn’t always a straightforward task, especially when the best security is done before there’s even a problem. But having good numbers for your efforts, results, and risk profile is essential to making good decisions and justifying your work. Which metrics to track and how to track them are very organization-dependent, but consider measuring:

  • Risk. Use this basic formula: risk = probability of attack × impact of attack
  • Time spent on education and estimated number of issues prevented
  • Percentage of processes automated over time
  • Number of issues kicked back to developers
  • Average time for developers to fix an issue
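Several of these metrics are easy to compute once you log the right events. Here is a minimal sketch of the last one, average time for developers to fix an issue, using hypothetical opened/closed timestamps pulled from an issue tracker.

```python
# Sketch: compute mean time-to-fix from (opened, closed) date pairs.
# The issue data is hypothetical; in practice it would come from your
# issue tracker's API or export.
from datetime import datetime

issues = [
    ("2024-06-01", "2024-06-04"),  # fixed in 3 days
    ("2024-06-02", "2024-06-09"),  # fixed in 7 days
]

def mean_days_to_fix(issues):
    fmt = "%Y-%m-%d"
    days = [
        (datetime.strptime(closed, fmt) - datetime.strptime(opened, fmt)).days
        for opened, closed in issues
    ]
    return sum(days) / len(days)

print(mean_days_to_fix(issues))  # 5.0
```

Tracking the same number month over month tells you whether your remediation process is actually getting faster.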

Shift left, shift everywhere, shift smart

Sometimes even agile organizations find themselves administering security to the tempo of an old school waterfall pipeline—a software idea is born, a team of developers writes it into being, and then the big bad security team finds numerous flaws that need to be addressed by the devs before release, causing a major break in flow and a bottleneck in the SDLC. There is a better way.

Shift left – design with security in mind

The rumors are true: the beginning is in fact a very good place to start. “Shift left security” means moving security considerations as early into the SDLC as possible and that should mean right into the very design and conceptualization of a product. One way to achieve this is to define your security policies right from the start to set consistent boundaries and create efficient development processes.

Shift everywhere – make security a part of every stage of development

But wait, there’s more! There are many spots in the SDLC where security can and should move left. Bringing security testing into the early development stage, as code is created, is preventive care that immediately translates to time and money saved. But don’t stop there. Use your security champions to ensure that security concerns are addressed regularly throughout development processes.

Shift smart – automate security and minimize false positives

Done correctly, shifting security up, down, and all around shouldn’t slow production. Add a dash of automation, and you might even find development speeding up. Tools are really the driving force behind shifting smart. Find the right tools that automate security and give developers just-in-time feedback that allows them to remediate security concerns without requiring them to become security experts themselves.

Find out more about the Mend AppSec Platform

A single platform that supports both developer and security teams

Application security assessment

It is important to perform an application security assessment in order to evaluate and understand your true risk and to create a plan for addressing security issues. A full application security assessment includes identifying sensitive data, threat modeling, mapping out your applications to identify which areas will need which types of security tools, searching out gaps in your security process, and creating a security roadmap.

Vulnerability assessment

A vulnerability assessment is a systematic review process that uses vulnerability scanning to identify areas of an application that potentially need security improvements. This differs from penetration tests, which are usually more limited in scope but identify real and exploitable application vulnerabilities.

Popular resources for application security

Be comforted by the fact that you are not alone. A large number of individuals, teams, and organizations have dedicated their time and experience to bringing application security to all. Here we highlight some fantastic (and free) resources.

MITRE’s CWE Top 25 Most Dangerous Software Weaknesses

Using real-world data from the U.S. National Vulnerability Database and combining frequency and severity to determine rank order, every year this community-developed project puts out a list of the top 25 hottest, most on-trend software weaknesses.

OWASP Top 10 

The Open Web Application Security Project (OWASP) puts out its own list, unsurprisingly focused on web application security. It is well worth a gander, as it focuses on the low-hanging fruit that hackers love to bite.

The Linux Foundation

A hub of open-source activity, The Linux Foundation’s website is a wealth of resources including guides, webinars, research papers, and free courses.

OpenSSF

One of The Linux Foundation’s many projects, Open Source Security Foundation (OpenSSF) offers free training and certifications, guides, reports, and a robust community of security buffs.

MITRE ATT&CK Framework 

A major resource for threat modeling and beyond, MITRE’s Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) is a knowledge base and playbook of offensive moves and the defensive actions necessary to combat them.

Cybersecurity Framework from NIST

If you like comprehensive security frameworks, this one’s for you. Designed with the security of critical U.S. infrastructure in mind (and likely to become mandatory for some sectors), the Cybersecurity Framework from NIST is a well-organized body of standards, guidelines, and practices that can help any kind of organization stay secure. It is helpfully available in 10 different languages.

Some important application security tools

Perhaps someday there will be one tool to rule them all and cover the entire software supply chain, but as it stands, staying secure typically requires multiple application security scanning tools to get the job done well. As code changes throughout the SDLC, so do the tools needed to keep it secure. Here are a few stand-out tools to protect both your custom code and the third-party and open source code that makes up your software supply chain.

Application security testing

Application security testing is a market category that includes both security scanning tools and runtime protection and monitoring tools. Software composition analysis (SCA) is also part of this category but we’ve put it into a different section with other tools concerned with open source software.

Security scanning tools

  • Static application security testing (SAST) – Implemented at the coding and testing stages of development, SAST tools analyze custom application source code from the inside out (“white-box” testing) and look for coding and design issues that may indicate security vulnerabilities, including old favorites like SQL injection, improper input validation, and stack buffer overflows. Because SAST tools are deployed at the development stage, before applications go live, the issues they find tend to be cheaper and easier to locate and fix than issues found later in applications that are already running.
  • Dynamic application security testing (DAST) – A complement to SAST, DAST takes the opposite approach and tests running code for security vulnerabilities without any visibility into the source code (“black-box” testing). DAST is usually limited to testing only the exposed HTTP and HTML interfaces of web-enabled applications and takes a hacker’s rather than a developer’s perspective.
  • Interactive application security testing (IAST) – IAST is designed to complement SAST and DAST. Like DAST, IAST is deployed in a QA or testing environment to test running code. Unlike DAST, IAST has knowledge of the post-build code and can identify the line of problem code and notify a developer for immediate remediation. IAST is not without its own limitations, however, as it does not scan the entire codebase, and coverage across all languages and platforms is still lacking. 
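To make the SAST bullet concrete, here is a minimal sketch of the classic issue such tools flag: user input concatenated into a SQL query, next to the parameterized fix. Python’s built-in sqlite3 module is used purely for illustration.

```python
# Sketch: the kind of finding a SAST tool flags. Tainted input flows into
# a string-built SQL query (injection), while the parameterized version
# treats the same input as data, not SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Flagged: user input concatenated into the query matches every row.
rows_bad = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()

# Fix: a parameterized query finds no user literally named the payload.
rows_good = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()

print(len(rows_bad), len(rows_good))  # 1 0
```

A SAST tool traces the flow of `user_input` into the first query and reports it; the second form is the remediation it would suggest.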

Application security monitoring tools

  • Runtime protection tools are used in production and designed to defend against threat actors in real time. This market is segmented into web application firewalls (WAF), bot management, and runtime application self-protection (RASP).

Open source software security

Keeping open source software secure typically involves keeping components cataloged and up to date. Here are a few tools that help you do that.

Software composition analysis (SCA) – SCA tools manage the use of open source components by performing automated scans of an application’s code base (and related artifacts, including containers and registries) to identify all open source components, their license compliance data, and any known security vulnerabilities. Many SCA tools also detect malicious packages and enforce an organization’s policies on bringing new open source components into projects.

Automated dependency update tools – Automated dependency update tools scan repositories for open source dependency information, check for updates, then create pull requests for the latest versions.  
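The core check such a tool performs can be sketched as comparing pinned versions against the latest known releases. Both the pins and the “latest” map below are hypothetical; real tools like Renovate query package registries for this information and then open the pull requests.

```python
# Sketch: the outdated-dependency check at the heart of automated
# dependency update tools. Pins and "latest" data are hypothetical.

def parse_pins(requirements: str) -> dict:
    """Parse 'name==version' lines from a requirements-style file."""
    pins = {}
    for line in requirements.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, version = line.split("==", 1)
            pins[name.strip()] = version.strip()
    return pins

def outdated(pins: dict, latest: dict) -> dict:
    """Return {name: (pinned, latest)} for dependencies behind latest."""
    return {n: (v, latest[n]) for n, v in pins.items()
            if n in latest and latest[n] != v}

pins = parse_pins("requests==2.31.0\nurllib3==2.2.1\n")
latest = {"requests": "2.32.3", "urllib3": "2.2.1"}
print(outdated(pins, latest))  # {'requests': ('2.31.0', '2.32.3')}
```

Everything after this check, fetching registry metadata, resolving version constraints, and opening the pull request, is where the real tools earn their keep.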

Cloud-native application security

Cloud-native applications have a lot of advantages in terms of scalability but they introduce a few unique security issues that new cloud- and container-specific security tools address.

Container image scanning – Container image scanners are similar to SCA scans applied to container image layers but with some container-specific twists. In addition to scanning for and reporting vulnerabilities at each image layer, they scan for misplaced secrets and configuration issues.

Cloud-native application protection platforms (CNAPP) – CNAPP consolidates a number of cloud security services into a single platform including container scanning, cloud workload protection, and runtime vulnerability and configuration scanning.

Trends in application security

Worldwide government interest in AppSec

Governments and legislatures across the globe have taken a strong interest in cybersecurity, particularly application security, in recent years. The Biden Administration’s 2023 Cybersecurity Strategy shows that the United States government is pushing for regulatory mandates relating to software used in critical infrastructure, and the European Union, Australia, and many other governments have announced similar goals for cybersecurity. Meaningful progress in security compliance may start with government contracts, but it won’t end there. Civilian business consumers of software are also starting to demand more secure applications and transparency into what their vendors are doing to keep code safe.

Increased adoption of SBOMs

One way for vendors to provide that transparency is the use of Software Bills of Materials (SBOMs), machine-readable logs that account for all software components, their dependencies, and their relationships. SBOMs take their name and function from traditional industries like automobile manufacturing, where makers keep records of information about all of the parts in a machine. Should one of those parts get recalled, these manufacturers know which vehicles to call back to the dealership for repairs. So it goes with SBOMs. By keeping an up-to-date record on hand to provide to regulators when requested, software companies are held responsible for their products. Needless to say, governments are pretty excited about it, but companies should be, too. SBOMs are invaluable instruments for tracking a large breadth of code and finding vulnerabilities even before the authorities come knocking.

Automation becomes essential

Across the globe, the number of jobs for security professionals continues to rise quickly, while the supply of people to fill these jobs continues to lag. This leaves organizations with little choice but to train their developers in security and automate as much as possible. But even if your organization had a budget big enough to snag all the top cybersecurity talent in the world, modern software is so complex and is deployed so quickly that trying to address application security without automation is nearly impossible and definitely unrealistic.

Application security forecast

Finger to the wind, here’s what we think may be coming down the pipeline soon.

Increased demand for suppliers to not just provide SBOMs but list known vulnerabilities

With great maturity comes great responsibility. As the software industry matures, the expectation that vendors will secure their products responsibly increases. In the past, software customers often didn’t have security at the front of their minds, but things are changing. Expect to be asked not just for SBOMs but for a list of known vulnerabilities in the near future.

FDA-type bodies to regulate software concerning critical infrastructure

Application security concerns don’t go away if one political party or another is in charge. World governments have stated their seriousness about cybersecurity in the last few years, but so far most of their statements remain only recommendations. We think an increase of hard regulation and the formation of regulatory bodies to monitor compliance is on the horizon.

Increased use of artificial intelligence and machine learning

AI and ML both hold great promise. Unfortunately, that’s true for the bad guys, too. As developers lean on more AI-written code, scanning for security concerns will become more crucial than ever. New issues that were beyond anyone’s comprehension a few years ago are sure to arise soon, such as “hallucination squatting,” where crafty malicious actors take advantage of AI’s tendency to hallucinate (i.e., make up) credible-sounding sources of information, like the names of open source repositories. On a happier note, the integration of machine learning into security tools will soon make triaging issues even better.

How Mend.io can help

The Mend Application Security Platform provides comprehensive security solutions for your entire codebase.

Mend SAST allows your developers to rapidly design new applications while maintaining the security of their custom code. With Mend SAST integrated right into your favorite DevOps environment and continuous integration/continuous delivery (CI/CD) pipelines, developers can easily evaluate the recommended code changes and approve them using a simple pull request.

Mend SCA detects open source vulnerabilities in over 200 languages, frameworks, and development tools. Like Mend SAST, it provides pull requests with automated remediation, enabling developers to update the recommended open source package with just a single click. Mend SCA also enables the easy creation of SBOMs.

Mend Renovate automatically resolves outdated dependencies, saving time, reducing risk, and mitigating the impact of security vulnerabilities.

Mend Container gives you agentless reachability analysis for container vulnerabilities as well as secrets detection and Kubernetes cluster scanning to help make vulnerability prioritization simple.

Mend AI catalogs the AI models in your codebase and keeps track of their licenses, versions, and security alerts.

FAQ

What do you mean by application security?

Application security is the tools, practices, and policies used to protect the application layer of software from cybercriminals and other threat actors. This often encompasses some cloud and mobile security but typically does not include network security concerns.

What are application security best practices?

  • Integrate development and security teams
  • Incentivize developers to use security tools and practices
  • Manage privileges
  • Track your assets
  • Model threats
  • Define security metrics and know your risks
  • Consider security early in and throughout the software development lifecycle

Is application security part of cybersecurity? What’s the difference?

Yes, application security is a subset of cybersecurity at large. Cybersecurity concerns entire systems, and sometimes the entirety of the code and infrastructure that make up the public internet and every single thing that interacts with it. Application security is only concerned with the application layer of software and the data it accesses.

]]>
A Guide to Open Source Software https://www.mend.io/blog/a-guide-to-open-source-software/ Fri, 26 Jul 2024 15:47:00 +0000 https://mend.io/?p=9294 What is open source software?

Open source software (OSS) is software for which the original authors have granted express copyright and usage permissions to allow all users to access, view, and modify the source code of these programs however they see fit and without the need to pay royalties.

This is in contrast to proprietary, closed source software, which typically requires a paid license and cannot be added to, modified, or distributed by anyone except the owner of the rights to the software.

The cost efficiency of using open source software has proven to be irresistible. Today, open source components often account for 80% or more of the code base of new applications. Virtually no product is free of OSS somewhere along its supply chain.

OSS has a rich history and a colorful community who dedicate their time—often unpaid—to provide millions of useful libraries and programs to all for free.

The history and philosophy of open source software

Before “open source software” was a thing, there was just “free software”, and you can’t talk about the history of free software without talking about the daddy of free software himself, Richard Stallman. Eccentric founder of the Free Software Foundation (FSF) and author of the GNU General Public License, known for its use of copyright law to enforce freedoms in a “viral” fashion, Stallman has been a strong advocate for free software since the 1980s. He remains active in the free software movement and continues to write on the topic to this day.

Most of us think of free software as software that doesn’t necessitate an exchange of money to obtain, but “think free as in free speech, not free beer” is the mantra of free software advocates. Stallman sees a distinction between “open source” and “free” software. Though the Venn diagram of free and open source applications is basically a circle, he believes the two terms emphasize different philosophies. The FSF’s four essential freedoms are the pillars of the free software movement—and they’re not particularly interested in how much freedom free software gives businesses.

Enter the Open Source Initiative (OSI). Founded in 1998 by Bruce Perens and Eric Raymond, OSI took a more practical view of free software. For Perens and Raymond, using the term “open source” took away the political baggage clinging to “free software” and invited businesses to use and contribute to the growing number of libraries and applications that were freely available to all. OSI may be friendlier to corporate participation in OSS, but it is still undergirded by a libertarian philosophy. Many of Stallman’s four freedoms are reflected in their 10-point definition of OSS. As things usually go with politics, most developers don’t have strong feelings one way or another about “free” versus “open source” software. You might see the more neutral acronym “FOSS”, which stands for “Free and Open Source Software”, or even “FLOSS”, which stands for “Free Libre and Open Source Software”, where the “free” is back to the common meaning of “costing zero dollars”.

Prior to the early 2000s, many companies were skeptical or even downright hostile towards their developers using OSS, but the cost savings of using readily available open source components instead of using developer time to reinvent the wheel became too high to ignore. 

What is an open source license?

An open source license details the terms and conditions for using or modifying a software component. The text of the license will be included in an open source project’s distribution, either in the comments of a source code file or in a separate README file.
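For example, many projects embed a machine-readable SPDX identifier in each source file’s header comment, which automated license scanners can pick up. The small helper below is only an illustration of how such a tag might be parsed:

```python
# A typical source-file header carrying license information:
#
# SPDX-License-Identifier: MIT
# Copyright (c) 2024 Example Project Contributors
#
# (The full license text, or a pointer to the project's LICENSE file,
# would normally accompany the distribution.)

def extract_spdx_id(header_line: str) -> str:
    """Pull the license ID out of an SPDX header comment, or return ''."""
    marker = "SPDX-License-Identifier:"
    _, _, rest = header_line.partition(marker)
    parts = rest.split()
    return parts[0] if parts else ""

print(extract_spdx_id("# SPDX-License-Identifier: MIT"))  # MIT
```

The one-line tag is a convention, not a legal substitute for shipping the license text itself.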

There are over 200 open source licenses out there in the world, but many are redundant. The OSI only recognizes about 80 unique and truly open source licenses. The vast majority of open source projects fall under just eight licenses. Not too bad. 

Learn more about those top open source licenses.

Types of open source licenses

All open source licenses fall into three broad categories:

  • Strong copyleft. These licenses require that any code derived from an open source project, even if the original code is used unmodified, inherit its licensing terms. Popular strong copyleft licenses include the GNU General Public License (GPL) and the GNU Affero General Public License (AGPL).
  • Weak copyleft. These licenses typically allow use of an unmodified open source component without requiring that the new project inherit its license. However, any modification to the open source component itself must be licensed under the original license. Popular weak copyleft licenses include the GNU Lesser General Public License (LGPL), the Microsoft Public License (MS-PL), and the Eclipse Public License (EPL).
  • Permissive. These licenses allow developers to use or even modify an open source component without requiring that the final application be under any particular license. Popular permissive licenses include the MIT License, the Berkeley Software Distribution (BSD) licenses, and the Apache License 2.0.

Open source compliance

Organizations that use open source code must take care to stay compliant with intellectual property (IP) law and to reduce their risk of breaches and attacks from malicious actors taking advantage of vulnerabilities in OSS.

Copyright and IP compliance

IP compliance and policy comes down largely to compatibility: whether or not the licenses of open source components are compatible with each other, and whether or not the licenses are compatible with the overall business goals of the project.

  • Compatibility between licenses. Permissive open source licenses play nicely with everyone, but strong and weak copyleft licenses are usually incompatible with each other. Even if the intention is to release the final project under an open source license, components under the GPL cannot be used with components under the MS-PL, for example, because both licenses require the source code of the derivative work to be licensed under that same license, with no room for dual licensing.
  • Compatibility with business goals. Organizations frequently use open source components in projects that will not themselves be open source. Permissive licenses allow for this while strong copyleft licenses do not. Components with weak copyleft licenses can usually be used in closed source projects, but they must be used or linked to in particular ways laid out in the license.

Security compliance

Not all OSS projects are equal in terms of quality and security. Luckily, with the use of automated tools, developers can usually get a picture of how many serious vulnerabilities are present in a particular open source package before adding it to their projects. Many organizations put rules into place to stop developers from using components that have too many issues. We’ll go deeper into open source security in a later section.

What is an open source software policy?

OSS policies provide a team or entire organization with standards for the proper use of open source components. The goal is to maximize the benefits and impact of using these components while also addressing technical, business, and legal risks that might arise.

OSS policies begin with choosing which licenses will always be allowed, sometimes allowed (through an escalation process), or never allowed for a particular project or across all development teams.

An OSS policy will also need to contain rules for which components will be allowed or not allowed based on risk of exploitation. This is typically done by deciding the number of vulnerabilities an open source component brought onto the project can have, based on the severity of the vulnerabilities. The 100% vulnerability-free open source component is a rare bird indeed. OSS policies typically disallow any component with high or critical severity vulnerabilities, but allow for some medium and low severity vulnerabilities.
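Such a policy can be sketched as a simple check. The license lists and vulnerability thresholds below are purely illustrative, not a recommendation:

```python
# A minimal sketch of an OSS policy check. The policy shape and thresholds
# are invented for illustration, not any particular vendor's format.

ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}
ESCALATION_LICENSES = {"LGPL-3.0", "MPL-2.0"}   # allowed only via review
MAX_VULNS = {"critical": 0, "high": 0, "medium": 3, "low": 10}

def evaluate_component(license_id, vuln_counts):
    """Return 'allow', 'escalate', or 'deny' for a candidate component."""
    if license_id not in ALLOWED_LICENSES | ESCALATION_LICENSES:
        return "deny"
    for severity, limit in MAX_VULNS.items():
        if vuln_counts.get(severity, 0) > limit:
            return "deny"
    if license_id in ESCALATION_LICENSES:
        return "escalate"
    return "allow"

print(evaluate_component("MIT", {"medium": 1}))    # allow
print(evaluate_component("LGPL-3.0", {}))          # escalate
print(evaluate_component("MIT", {"critical": 1}))  # deny
```

In practice these rules live in an SCA tool’s policy engine rather than hand-written code, but the logic is the same: check the license first, then the severity thresholds.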

Read more about choosing a comprehensive open source compliance policy

Open source software security

Using OSS instead of writing code from scratch isn’t without its drawbacks. OSS quality varies widely, and many components are buggy, insecure, and inconsistently updated. Here are some important tools and considerations for addressing open source software security.

Inventories and Software bills of materials (SBOMs)

It’s impossible to watch out for vulnerabilities in products you don’t even know you have. Software inventories and SBOMs are both important for getting a big picture of your software dependencies and for compiling a list of things that need to be monitored and updated.

Learn more about the value of SBOMs.
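To make this concrete, here is a minimal SBOM sketched in the CycloneDX JSON format (the component name, version, and license are illustrative):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "example-http-client",
      "version": "4.2.0",
      "purl": "pkg:pypi/example-http-client@4.2.0",
      "licenses": [{ "license": { "id": "Apache-2.0" } }]
    }
  ]
}
```

Each entry identifies a dependency precisely enough (here via a package URL) for scanners to match it against vulnerability databases.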

Dependency updates

Proprietary software vendors usually push out updates, while open source projects do not necessarily do so. Support is limited in open source and will vary greatly depending on the size of a project’s community. As a result, organizations using open source components need to proactively monitor for vulnerabilities and updates to ensure that all components are promptly patched and remediated.

Automated dependency health tools like our open source tool, Mend Renovate, take a great deal of the stress of tracking and scheduling updates off of developers.
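For instance, a minimal `renovate.json` might extend Renovate’s recommended presets and auto-merge low-risk updates. This is a common pattern, shown as a sketch rather than a recommended policy:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch"],
      "automerge": true
    }
  ]
}
```

With a configuration like this, Renovate opens pull requests for outdated dependencies and merges minor and patch bumps automatically once checks pass.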

Open source vulnerability scanning

Software composition analysis (SCA) is an application security testing tool that helps manage open source components. SCA tools automatically scan your source code to identify open source components, licensing data, and known vulnerabilities.

SCA tools provide visibility into open source components and offer prioritization and automated remediation to help fix vulnerabilities. 

Here is how it works:

  • Vulnerability prioritization is done automatically by weighing factors such as whether or not a vulnerability is already being exploited in the wild (KEV and public exploits), whether or not a vulnerability is likely to be exploited in the near future (EPSS), the severity of a vulnerability were it to be exploited (CVSS), and vendor-specific insights.
  • Vulnerability remediation is typically done manually with tool-provided information such as the location of the vulnerability. Some tools may also offer suggestions on how the fix may impact your build. Advanced tools offer automated vulnerability remediation workflows initiated according to vulnerability policies. These policies are triggered according to vulnerability detection and severity, CVSS score, and new version releases. 
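The prioritization step can be sketched as a scoring function. The weights and the threshold-free formula below are invented for illustration; real SCA tools use their own, often proprietary, models:

```python
# An illustrative prioritization sketch weighing the factors above:
# severity (CVSS), likelihood (EPSS), and known exploitation (KEV).

def priority_score(cvss, epss, known_exploited):
    """Combine CVSS (0-10), EPSS (0-1), and KEV status into one score."""
    score = cvss / 10 * 0.4 + epss * 0.4
    if known_exploited:          # already exploited in the wild: act first
        score += 0.2
    return round(score, 2)

# A KEV-listed, likely-to-be-exploited medium-severity issue outranks an
# unexploited, unlikely critical one:
print(priority_score(cvss=6.5, epss=0.70, known_exploited=True))   # 0.74
print(priority_score(cvss=9.8, epss=0.01, known_exploited=False))  # 0.4
```

Note the reordering in the example: it is exactly what CVSS severity alone cannot provide.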

Ideally, you should look for SCA tools that seamlessly integrate into your software development life cycle. Doing so helps resolve vulnerabilities during early phases when issues are more easily and cheaply fixed.

SCA is typically used during the building process of the software development life cycle. After code is deployed, vulnerability scanners like DAST and IAST are helpful to test applications for common vulnerabilities.

Binary repository managers

A binary repository manager stores build artifacts and locally cached, vetted copies of third-party components in a central location under your control. Here are key benefits of using binary repositories:

  • Cache local copies of open source code to ensure you use only clean and verified components
  • Avoid getting affected by source code updates or changes in a different copy
  • Effectively manage, approve, and track components
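As a sketch of the first point, a `pip.conf` like the following routes all Python package installs through an internal repository manager (the URL is hypothetical):

```ini
# pip.conf: route all installs through an internal, vetted package index
# (the URL below is hypothetical)
[global]
index-url = https://repo.internal.example.com/pypi/approved/simple
```

Equivalent settings exist for most ecosystems, e.g. npm’s registry setting or Maven’s mirror configuration.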

Current trends in open source software

While the licenses and philosophies underpinning OSS have largely stayed the same since the 1990s, the proliferation of OSS and technologies covered under open source licenses has changed greatly. Here are two trends that have popped up in recent years.

Open source for me and not for thee

To be considered a true open source or free software license according to both the OSI and Stallman, a license cannot prohibit any particular use of the software or type of person using it. However, that doesn’t stop developer activists from writing licenses that they call open source but that aren’t.

This is becoming especially common in licenses for AI models, where concerns about unethical use of powerful technology have moved some authors to create “responsible” licenses, but such restrictions are actually not that new. The JSON License was written in 2002 and is an MIT license with an added clause that “The Software shall be used for Good, not Evil.” This clause restricts fields of endeavor, failing criterion 6 of the OSI open source definition (“No Discrimination Against Fields of Endeavor”). Therefore, the JSON License is not an open source license, despite the fact that the software is available to all for free and the source code is openly available.

Government interest in open source software

Within the last few years, the US government has taken notice of the prevalence and importance of open source software. President Biden’s Cybersecurity Executive Order includes guidelines to improve the software supply chain, including the creation of SBOMs, and remediation of vulnerabilities found in OSS. The CISA Open Source Software Security Roadmap details the government organization’s plans to assist the open source community.

Conclusion

Open source software has evolved from a community-driven initiative into an industry-standard approach to software development. The open source ecosystem offers various benefits, including flexibility, cost savings, and collective intelligence in development.

Understanding and implementing an open source policy is essential for risk management, as it sets the groundwork for compliant and secure use of open source components. Open source vulnerability scanning and software composition analysis tools are critical in identifying and remedying security risks. These practices allow organizations to reap the benefits of open source software while managing potential drawbacks effectively.

Thus, the open source model is not just about freely accessible code; it’s also about setting up frameworks for responsible use and contribution to the community. By adhering to standardized procedures for license compliance, security, and code contribution, organizations can foster a robust open source environment that advances innovation and resolves software challenges efficiently.

]]>
SAST – All About Static Application Security Testing https://www.mend.io/blog/sast-static-application-security-testing/ Thu, 18 Jul 2024 21:39:00 +0000 https://mend.io/sast-static-application-security-testing/ Static Application Security Testing (SAST) has been a central part of application security efforts for more than 15 years. According to the CrowdStrike 2024 State of Application Security Report, eight out of the top 10 data breaches of 2023 were related to application attack surfaces, so it’s safe to say that SAST will be in use for the foreseeable future.

What is SAST?

Static application security testing (SAST), one of the most mature application security testing methods in use, is white-box testing, where source code is analyzed from the inside out while components are at rest. Gartner’s definition of SAST is “a set of technologies designed to analyze application source code, byte code and binaries for coding and design conditions that are indicative of security vulnerabilities.”

Why do we need SAST?

With application-level attacks on the rise and delivery schedules only getting shorter, it’s important to have SAST’s insights on potential vulnerabilities in newly written code early and often in the development process. SAST can also be run on older code so security debt can be prioritized and addressed.

What problems does SAST address?

SAST enables developers to detect security flaws or weaknesses in their custom source code. The objective is either to comply with a requirement or regulation (for example, PCI/DSS) or to achieve better understanding of one’s software risk. Understanding security flaws is the first step toward remediating security flaws and thus reducing software risk.

How does SAST work?

As its name implies, SAST scans organizations’ static in-house code at rest, without having to run it. SAST is usually implemented at the coding and testing stages of development, integrating into CI servers and, more recently, into IDEs.

SAST scans are based on a set of predetermined rules that define the coding errors in the source code that need to be addressed and assessed. SAST scans can be designed to identify some of the most common security vulnerabilities out there, such as SQL injection, input validation, stack buffer overflows, and more.
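As an illustration of the kind of flaw these rules catch, here is a minimal Python sketch of a SQL injection vulnerability next to its parameterized fix:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Flagged by SAST: user input concatenated into SQL (injection risk)
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the database driver escapes the input
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# The classic injection payload returns every row from the unsafe version:
print(find_user_unsafe(conn, "x' OR '1'='1"))  # [(1,)]
print(find_user_safe(conn, "x' OR '1'='1"))    # []
```

A SAST rule flags the string concatenation in the first function; the second function is the remediation it would typically suggest.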

Demystifying SAST, DAST, IAST, and RASP

A number of tools exist to test or protect applications throughout the software development lifecycle (SDLC). SAST, DAST, IAST, and RASP are sometimes confused with one another. We’ll briefly go over them here and in the next section we’ll go deeper into the differences between SAST and DAST.

  • Static Application Security Testing (SAST) looks for weaknesses in custom code at rest
  • Dynamic Application Security Testing (DAST) performs automated attacks on applications to test them for weaknesses when they are running
  • Interactive Application Security Testing (IAST) combines DAST capabilities with SAST insights
  • Runtime Application Self-Protection (RASP) is built into a program to protect it after deployment. It is capable of detecting and preventing external threats in real time.

What is the difference between SAST and DAST? 

SAST is one of the primary application security testing methodologies that are available, alongside DAST (dynamic application security testing). So, what’s the difference and which should you choose?

SAST and DAST differ in how and when they perform security testing and in their access to source code. SAST is known as a “white-box” testing method: it tests source code and related dependencies statically, early in the SDLC, to identify flaws and vulnerabilities in the code that pose a security threat. It tests applications from an internal perspective and helps ensure that developers take care when writing their code. SAST tools are easy to integrate into CI/CD pipelines and make it cheaper to locate and fix issues before the code is part of a running application. However, SAST requires visibility into and knowledge of the code being tested.

DAST takes the opposite approach to SAST. It is known as “black-box” testing, which means the code is tested while it’s running, without any knowledge of or access to the source code. It is concerned with identifying runtime issues and weaknesses in software and applications. DAST testing is performed later in the SDLC, when software and applications are actually working. While SAST tests the code from the inside out, DAST tests it from the outside in, taking a hacker’s rather than a developer’s perspective. Rather than being static, DAST is dynamic, because it tests as applications run, so it needs a working version of the application for it to perform testing.

SAST and DAST complement each other. Therefore, implementing both will help you optimize and maximize your software and application security, by providing ways of scanning your software at any point in the SDLC.

Typical SAST benefits

SAST is a top application security tool and, when done right, is essential to a robust application security strategy. Integrating SAST into the SDLC provides the following benefits:

Shifting security left. Integrating security testing into the earliest stages of software development is an important practice. SAST helps shift security testing left, detecting vulnerabilities in proprietary code in the design stage when they are relatively easy to resolve. Finding and remediating security issues at this stage saves organizations the costly efforts of addressing them closer to the release date or, even worse, after release.

Ensuring secure coding. SAST easily detects flaws that are a result of fairly simple coding errors, helping development teams make sure that they comply with secure coding standards and best practices.

Detecting common vulnerabilities. Automated SAST tools can easily detect common security vulnerabilities like buffer overflows, SQL injection, cross-site scripting, and more with high confidence.

Enhanced benefits of next-generation SAST

SAST is a mature technology and the application development environment has changed since it was introduced. The new generation of SAST products have evolved in response to these changes, particularly the scale and rapidity of the modern environment. This evolution offers the following additional benefits that enhance those offered by previous SAST products:

Ease of use. The new approach to SAST further integrates it with your existing DevOps environment and CI/CD pipeline, so developers don’t need to separately configure or trigger scans. This removes the need for them to leave their development environment to run scans, view results, and research how to fix security problems. It’s more efficient, convenient, and easier for them to use. 

Comprehensive CWE coverage. The comprehensive detection provided by Mend SAST provides visibility to more than 70 CWE types — including the OWASP Top 10 and SANS 25 — in desktop, web, and mobile applications developed on various platforms and frameworks. 

Support for multiple programming languages and programming frameworks. For example, Mend SAST supports 27 different languages. This enables more comprehensive vulnerability detection and increases the visibility to a larger number of CWE types.

Overcoming false positives and eliminating wasteful effort. Older SAST products typically generated a high number of false positives, costing development and security teams significant time and effort differentiating between false alarms and real issues. Considering the competitive pace of development and the amount of time it takes to remediate critical issues, dealing with the noise of false positives puts quite a strain on development. Now, Mend has a patented set of analytics that enables teams to significantly reduce the generation of false positives that they would otherwise have to sift through.

Speed. Traditional SAST solutions were designed for an earlier era, when the typical SDLC took considerably longer than it now does, and one scan could take several hours for a large code base. In today’s fast-paced development environment, where the duration of a release cycle is less than a day, these products are a poor fit. Numerous research studies have shown that many developers simply don’t use the application security tools that their security team provides, because they choose speed over security. The new Mend SAST has a scan engine that is 10 times faster than traditional SAST products, so your engineers will get results in minutes or less.

SAST Pros and Cons

SAST Benefits ✔

  • Early detection. Finds vulnerabilities early in the SDLC.
  • Fast. Faster to remediate vulnerabilities early in the SDLC.
  • Simple. Fast, early detection makes it easier to fix code before it enters the QA cycle.
  • Versatile. Supports all kinds of software and applications (web, desktop, mobile).
  • Cost-effective. Early detection makes remediation easier, less time-consuming, and therefore cheaper.

SAST Limitations ❌

  • Later detection. Does not find and mitigate flaws and vulnerabilities later in the SDLC or once development is complete.
  • Static code only. Not dynamic; doesn’t discover runtime flaws and vulnerabilities.
  • Requires source code. Needs access to the source code to work.
  • Custom code only. Designed to support custom code, not open source software and dependencies.
  • False positives. Traditional SAST tools can generate many false positives, which can hamper development.

Legacy vs. Modern SAST tools

A new generation of SAST solutions lets enterprise application developers create new applications quickly, without sacrificing security. They aim to integrate with your existing DevOps environment and CI/CD pipeline, so developers don’t need to separately configure or trigger the scan. They expedite the SAST process while supporting multiple programming languages and a variety of programming frameworks.

Modern SAST tools that include these capabilities increase efficiency and convenience for developers. They make it quicker and easier to detect vulnerabilities, and they ensure compliance and reinforce governance. As a result, developers will learn to trust their software tools and collaborate more readily with members of the security team.

How to choose the right SAST tool for your organization

The AST market is full of SAST offerings, often bundled up with additional solutions, making it a challenge to find the right fit for your organization.

OWASP’s list of criteria for selecting the right SAST tools can help companies narrow down the options and choose the solution that best helps them improve their application security strategies.

  • Language support. Make sure the SAST tool that you use offers you complete coverage for the programming languages your organization uses.
  • Vulnerabilities coverage. Make sure that your SAST tool covers at least all of OWASP’s Top Ten web application security vulnerabilities.
  • Accuracy. Your SAST solution should be capable of minimizing the false positives and false negatives that create unnecessary work. So, it’s important to check the accuracy of the SAST tools that your organization is considering.
  • Compatibility. Like any automated tool, it’s important that the SAST tool you use is supported by the frameworks you are already using so that it integrates easily into your SDLC.
  • IDE integration. A SAST tool that can be integrated into your IDE will save you valuable remediation resources.
  • Easy integration. Find the SAST tool that is easy to set up and integrates as seamlessly as possible with the rest of the tools in your DevOps pipeline.
  • Scalability. Make sure the SAST tool you integrate today can be scaled to support more developers and projects tomorrow. A SAST tool can seem to scan quickly on a small sample project; make sure it delivers similar results on larger projects.

Rising scale can also impact the cost of the solution. OWASP’s list points out that it’s important to consider whether the cost varies per user, per organization, per application, or per line of code analyzed.

How to Implement SAST

Having chosen your SAST solution, it’s important to implement it correctly in order to optimize its effectiveness and maximize the benefits you get from it. Successfully achieving this involves the following steps:

Select the means of deployment. Decide whether you will deploy SAST on-premises or in cloud environments. This consideration depends on how much control you wish to have and how quickly, how easily, and how much you might wish to scale up.

Configure and integrate into your SDLC. Considerations here include when and how you scan and analyze your code. You can elect to:

  • Analyze code as it’s compiled
  • Scan it as you merge it into your code base
  • Add SAST in your CI/CD pipeline
  • Run SAST in your IDE, enabling developers to detect and mitigate vulnerabilities as they code
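As a sketch of the CI/CD option, a pipeline step might look like the following. The `sast-scan` command and its flags are invented placeholders, not a real tool:

```yaml
# Hypothetical GitHub Actions job: scan every pull request and fail the
# build on high-severity findings. "sast-scan" is a placeholder CLI.
name: sast
on: [pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run static analysis
        run: sast-scan --src . --fail-on high
```

Running the scan on every pull request keeps findings tied to the change that introduced them, which makes triage far easier than scanning a whole release at once.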

Choose the extent of your analysis. You can decide between the following:

  • Complete: A full scan of your applications and their code is the most comprehensive and most lengthy process
  • Incremental: Scan only new or changed code
  • Desktop: Code scanned as it’s written, with issues addressed in real time
  • Without build: Analysis of the source code, for those not familiar with the build process or the IDE

Customize to suit your needs. You might want to focus on reducing false positives, creating new rules, or revising old ones in order to identify new security flaws. Perhaps you want to create dashboards for analyzing scans, or build custom reports.

Prioritize applications and results, based on what’s most important to you. Considerations include compliance issues, severity of threat, CWE, risk level, responsibility, and status of the vulnerability.

Analyze results, track progress, and evaluate urgency. Examine scan results to remove false positives. Set up a system that automatically sends issues to the developers responsible, and then assign them to be addressed.

Reporting and governance. Use built-in reporting tools, such as OWASP Top 10 violation reports, or push data to other reporting tools you already have. Ensure that development teams are using the scanning tools properly.

SAST: An important component in your application security journey

Using traditional SAST products to ensure security in application development requires a value tradeoff. And that tradeoff is speed. SAST offers high value when it comes to coverage and visibility over an organization’s static code base. It also integrates early in the SDLC, enabling organizations to shift security left. But, traditional solutions present major barriers to agility.

The next generation of SAST overcomes these barriers to meet the demand of today’s rapid SDLC. As the SDLC becomes shorter and shorter and as more applications are developed, the attack surface grows and the risk to the application layer continuously rises. However, now, the need to make such a value tradeoff is significantly reduced.

Successfully integrating SAST requires organizations to find the right balance between minimizing risk by covering all security vulnerabilities and delivering quality products at a competitive speed. Now, development teams can more confidently combine security and speed earlier than ever in the development process.

FAQs

In which phase of the SDLC should you use SAST?

SAST should be deployed early in developers’ workflow when they design and write applications and before applications go into production. This allows developers to detect and remediate flaws in software components and dependencies before they go into production.

How frequently should we do code scanning and static assessment?

Various industry standards recommend that organizations scan and assess their code regularly. In the interests of thoroughness, and to meet different compliance requirements, it’s advisable to do this at least monthly. However, scanning and assessment should really be performed whenever changes are made to existing code, software, or applications, or whenever new components and dependencies are introduced. This will keep your security up to date and ensure that new flaws and vulnerabilities are identified and stopped from entering your code base. Of course, the more frequently you scan, the earlier you’ll identify flaws and the more quickly you can mitigate any threats.

It’s also important to perform vulnerability scanning in line with the following industry compliance standards:

  • Payment Card Industry Data Security Standard (PCI DSS) – Quarterly
  • National Institute of Standards and Technology (NIST) – Quarterly to monthly, depending on governing framework
  • Cybersecurity Maturity Model Certification (CMMC) – Weekly to quarterly, based on auditor requirements
  • Health Insurance Portability and Accountability Act (HIPAA) – Scanning not required, but a detailed assessment process must be established

When can static application security testing be used?

SAST should be deployed early in the implementation phase of the SDLC because SAST tools don’t need a running application to perform an analysis. Security teams therefore use SAST tools to scan applications during the coding process and before production.

Does SAST require source code?

Yes. SAST is designed specifically to analyze source code (and, in some tools, compiled code) to help detect vulnerabilities and issues during software development.

Dependency Management: Protecting Your Code https://www.mend.io/blog/dependency-management-protecting-your-code/ Fri, 12 Jul 2024 21:24:13 +0000 https://mend.io/?p=9103 Managing dependencies isn’t always easy, but it’s critical for protecting your code. In this guide, we’ll explore what dependencies are and how they can be checked for known vulnerabilities, compatibility, licensing requirements, and more. We’ll then learn that dependency checks should be part of a dependency management strategy to keep applications up to date and reduce security risks and technical debt.

This article also provides insights into the challenges of dependency management, as well as a number of best practices and things to consider. It highlights the benefits of regular application updates and how automated dependency management can simplify the process. Lastly, the guide provides strategies and tips for managing dependencies.

What are dependencies in code?

In the past, companies would typically either write their own custom software for business needs or use off-the-shelf software packages. But today, most organizations have software stacks composed of many different third-party or open-source components.

Dependencies in code are the connections among all those different software components. A piece of software is said to be dependent on another piece of software if it invokes it. For example, a company might have a custom point-of-sale solution that uses open source payment processing libraries or receipt-printing libraries. That means the solution is dependent on those open source libraries. 

In short, external code that’s required for a piece of software to operate correctly is a dependency. Dependencies can include libraries, frameworks, modules, APIs, and services. 

Dependency check

The most significant risk for companies is not knowing what they don’t know. That’s where dependency checks come in. 

A dependency check identifies all the components, libraries, or modules that are linked to a specific piece of software to create an inventory and identify out-of-date software components, license issues, compatibility issues, version numbers, and more.

Dependency checking is a key way organizations can reduce security risks and identify known vulnerabilities. Some of the key items being checked include:

  • Known vulnerabilities. Dependency checks typically start with an automated scan that checks all the dependencies in a given code set against a database of known vulnerabilities to identify potential issues.
  • Outdated software. A dependency scan also checks for outdated code by seeing if new versions of libraries or software components are available.
  • Compatibility. If new versions of software components are available, it’s important for a dependency checker to identify whether they are compatible or not, so an organization can manage the risk and the level of effort required to update to a new version.
  • License information. Dependency checking can also evaluate the licensing information for any third-party code to avoid possible legal issues and ensure compliance. 

Ideally, the results of a dependency check will generate a detailed report identifying vulnerabilities or issues and suggest recommended actions. Most companies will also want to conduct continuous dependency checks, because new vulnerabilities may be introduced any time software is altered or updated.  

Dependency checks improve application security in several ways. First, vulnerabilities in third-party components can be identified and eliminated before ever making it into the code base, which shrinks the potential threat footprint. Second, eliminating known bugs improves code quality and reliability. And finally, organizations can ensure that all software components comply with licensing requirements.
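Sketched in Python, the known-vulnerability portion of a dependency check might look like the following. The advisory data and package names are invented for illustration; real checks query curated databases such as the NVD:

```python
# Hypothetical advisory database: package -> versions known to be vulnerable
ADVISORIES = {
    "examplelib": {"1.0.0", "1.0.1"},  # invented entries for illustration
    "otherlib": {"2.3.0"},
}

def check_dependencies(manifest):
    """Return (package, version) pairs that match a known advisory."""
    findings = []
    for package, version in manifest.items():
        if version in ADVISORIES.get(package, set()):
            findings.append((package, version))
    return findings

# The project's dependency inventory, as produced by a dependency check
manifest = {"examplelib": "1.0.1", "otherlib": "2.4.0", "thirdlib": "0.9"}
print(check_dependencies(manifest))  # [('examplelib', '1.0.1')]
```

Real-world tools layer outdated-version detection, compatibility data, and license metadata on top of this same inventory-versus-database comparison.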

What is dependency management?

A dependency check allows organizations to identify the connections or dependencies among all the different components in its code base. But that’s just the start of the process. Once an organization knows the initial connections and dependencies, it needs to create an on-going management strategy for them. 

That’s where dependency management comes in. Dependency management is a structured approach to managing all the external software components (including libraries, frameworks, modules, APIs, and services) linked to a specific project or code base. 

Proper dependency management ensures that organizations are using the appropriate versions of software components and ensures on-going consistency and security of the software. 

Essential components of a robust dependency management strategy include: 

  • Version control. Identify and manage the specific versions of software components to ensure they’re up to date or using the most recent viable version.
  • Package managers. Dependency management is best automated due to both the need to continually rerun dependency checks and the large volume of software components that typically make up today’s solutions. Package managers help by automating the downloading, installing, and updating of dependencies.
  • Dependency resolution. Knowing that a component has different versions is just the first step. Knowing which one to use is the next step. Dependency resolution is the process by which a package manager or dependency management process determines which version is best to use based on potential conflicts, compatibility, or specific requirements.
  • Transitive dependencies. Software isn’t only dependent on the libraries or code it invokes; it’s also reliant on whatever libraries or code those initial libraries depend on—the transitive dependencies. That’s why it’s essential to manage transitive dependencies to ensure they are compatible and appropriate as well.
  • Security management. One big reason to bother with dependencies at all is to manage security and prevent known bugs. Organizations need to regularly monitor and resolve security issues or vulnerabilities in software dependencies and incorporate known patches as appropriate. 

In addition, a dependency management strategy should ensure isolation among the updates for different projects so that updated dependencies on one project don’t cause problems with separate projects. Organizations should also make sure to maintain clear documentation on the dependencies and versions and implement automated testing procedures to ensure compatibility and stability. 
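The transitive-dependency point above is easy to see in code. Here is a minimal sketch (all package names are invented) that walks a dependency graph to surface the indirect dependencies a project pulls in:

```python
# Direct dependencies of each package (names are illustrative only)
DEPENDS_ON = {
    "app": ["web-framework", "payments"],
    "web-framework": ["http-core", "templating"],
    "payments": ["http-core"],
    "http-core": [],
    "templating": [],
}

def all_dependencies(package, graph):
    """Collect direct and transitive dependencies via depth-first search."""
    seen = set()
    stack = [package]
    while stack:
        for dep in graph[stack.pop()]:
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

print(sorted(all_dependencies("app", DEPENDS_ON)))
# ['http-core', 'payments', 'templating', 'web-framework']
```

Note that "app" declares only two dependencies but actually relies on four; the other two arrive transitively and must be managed just as carefully.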

What are the risks of not updating dependencies?

Using third-party components for software development can save organizations considerable money, time, and resources. However, those same components can open the organization to potential risks, especially if they drift out of date. While no one likes to interfere with working software, organizations that let the dependencies in their software code base become out of date are opening themselves up to a variety of non-trivial risks. 

Consider some of the dangers that not updating dependencies can generate:

  • Security risks. The most obvious problem with out-of-date dependencies is potential security risks or vulnerabilities. The older a piece of software is, the more likely potential vulnerabilities have been identified or exploited by bad actors. Using such out-of-date components dramatically increases an organization’s security risk profile.
  • Compatibility issues. Most software continues to change over time via updates with new features, patches, or bug fixes. The more out-of-date a component is, the greater the risk of compatibility issues. Also, the older the software is, the more risk there is that an update will be problematic or have integration problems when trying to update.
  • Malicious updates. Organizations need to remember that not every update is good! Malicious updates are updates that bad actors create to introduce vulnerabilities or bad code that can be exploited. For example, a malicious update may introduce a hidden software back door for data breaches or install malware that provides external actors control over a system.
  • Increased application fragility. Organizations that don’t update dependencies risk increasing the fragility of their applications. The longer it has been since updates have been applied, the more difficult they may become, or the more work may be required once they are applied.
  • Performance issues. Due to optimization or updates, new components or software may provide better performance characteristics than older software. Organizations that don’t update dependencies will miss out on potential performance updates. 

Decreasing risk in updating dependencies

Application development teams find themselves balancing functional risk—that is, the chance that updating software might break something—with security risk. How do organizations resolve these competing factors and decrease the risks that might come from updating dependencies?

There are two primary ways to decrease both types of risk when updating dependencies: Merge Confidence metrics and blocking malicious software.

Merge confidence metrics can help organizations determine how likely a potential update is to have a negative impact or “break something.” Merge confidence metrics can take into account statistics such as the frequency of updates with that component, the number of users, the regularity of updates, and how quickly component problems are resolved. Merge confidence can also take into account details from a component’s change log to help understand the specific changes that have occurred. 

Detecting and blocking malicious open-source packages is equally important to prevent the introduction of problematic code when updating dependencies. Putting measures in place or using software tools that can detect and block malicious open source software updates helps prevent potential security breaches that such software might introduce, whether it’s malware, trojan back doors, or other hostile behaviors. 
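To make the merge confidence idea concrete, here is a toy scoring heuristic over the kinds of signals mentioned above. The weights and thresholds are invented and do not reflect how any real merge confidence feature scores updates:

```python
def merge_confidence(adoption_pct, days_since_release, reported_breakages):
    """Toy score in [0, 1]: higher means an update is safer to merge.

    adoption_pct: share of users already on the new version (0-100)
    days_since_release: age of the release in days
    reported_breakages: known regressions filed against it
    """
    score = 0.0
    score += min(adoption_pct / 100, 1.0) * 0.5       # crowd adoption
    score += min(days_since_release / 30, 1.0) * 0.3  # time for bugs to surface
    score += 0.2 if reported_breakages == 0 else 0.0  # clean track record
    return round(score, 2)

# A week-old release with 40% adoption and no reported breakages
print(merge_confidence(40, 7, 0))   # 0.47
# A month-old, widely adopted release with a clean record
print(merge_confidence(90, 45, 0))  # 0.95
```

A team might auto-merge updates above a chosen threshold and route the rest to manual review, balancing functional and security risk.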

What are vulnerabilities in dependencies?

Unfortunately, organizations should look for vulnerabilities not only in their own code but also in any dependencies that are called or referenced within that code.

Vulnerabilities in dependencies are security risks or vulnerabilities in third-party libraries, frameworks, or code. Organizations need to be just as diligent in identifying and mitigating vulnerabilities in dependencies as they are in their own code. 

There are several ways that vulnerabilities manifest themselves in dependencies, including:

  • Outdated or unmaintained versions. Older versions or unmaintained versions of software components can have known vulnerabilities or risks.
  • Malicious packages. Organizations must be careful that third-party components don’t have intentionally harmful code.
  • Indirect vulnerabilities. Dependencies can have dependencies, so organizations need to dig deep when analyzing potential vulnerabilities.
  • Insecure configurations. Some software packages may come with default configurations that can open an organization to risks unless altered. 

What is technical debt?

What happens when there’s a problem in a software project that’s not resolved? It tends to grow bigger over time. As a result, it can become more challenging to solve because of changes in the environment or lack of skills availability. 

That’s technical debt in a nutshell. Like financial debt that increases over time due to additional interest charges, technical debt is a technical problem that’s not addressed immediately and is left to linger. As a result, it typically grows larger and harder to solve, leading to increased maintenance costs.

There are several causes of technical debt, including quick fixes, lack of expertise, poor design decisions, changing requirements, use of outdated technologies, and even lack of documentation. Moreover, organizations can accrue technical debt intentionally or unintentionally. Intentional technical debt comes from taking shortcuts or making quick fixes to meet a deadline without regard for the future impact. Unintentional technical debt accumulates accidentally due to a lack of expertise, knowledge, or complexities within a software environment. 

Unfortunately, technical debt isn’t just an IT problem, as accumulating technical debt can hurt an organization in a number of ways: increased maintenance costs, decreased productivity, slower development, and higher failure/defect rates are just a few examples. In response, organizations need to recognize the impact that technical debt can have on their development process and business outcomes and plan to minimize it. 

What are challenges in dependency management?

One important way to reduce technical debt is through dependency management, which can shrink security risks and enable more rapid and agile change.

But as helpful as dependency management can be, there are potential challenges that organizations must be aware of, including:

  • Conflicting dependencies. Sometimes, more than one software package needs to use the same dependency, but each requires a different version. The versions may not always be compatible, so by solving the dependency for one piece of software, you could risk breaking the compatibility of another.
  • Versioning issues. As software evolves over time and new versions are released, it can be difficult for an organization to keep track of the different versions of a component needed for different projects. For example, versioning issues can arise when teams lose track of the most up-to-date versions and continue to use out-of-date versions, or when different projects require different versions of the same component.
  • Dependency license issues. Open-source license management issues can also arise as a challenge in dependency management. As components are referenced or updated, software developers need to ensure that the license terms and conditions of the dependency are up to date and work well with the overall project.
  • Managing dependencies across multiple environments. Most organizations have more than one software project, so managing dependencies across multiple environments can quickly become a significant challenge. It’s important to recognize that dependencies can behave differently when used in different environments and when they are combined with other components. Like cogs, they have to fit together properly; otherwise, the whole mechanism doesn’t work. Mismanaged dependencies can, therefore, disrupt or disable entire projects.  

What are best practices for dependency management?

Dependency management requires organizations to keep track of many different software components across many projects and versions. It’s a complex task. 

That’s why it’s critical for organizations to establish some best practices for dependency management so they can ensure it is implemented correctly to streamline development, reduce security and performance risks, and ensure software consistency. Good dependency management is critical for ensuring the security, stability, and performance of software. 

Since there is so much pre-written code and third-party software to choose from, a few best practices for dependency management include the following items:

  • Compatibility. A good place to start is to find out whether the dependency works on the platform you’re using. If not, discard it.
  • Licensing. Verify that a dependency’s licensing terms and conditions are up-to-date and consistent with your project needs.
  • Maintenance and versioning. Check to make sure the dependency is updated regularly and properly; otherwise, avoid it.
  • Breadth of use. Look for dependencies that are currently in wide usage. 
  • Quality. Look at the source code—is it clean and readable? Is the documentation solid? Does the dependency come with its own unit tests?
  • Ease of integration. Investigate how easy it is to integrate the dependency into your current workflow and how easy it will be for developers to start using it.
  • Size. Smaller, more focused dependencies are better than larger ones since they are less likely to bloat and slow down your code base.
  • Community support. Evaluate how active the community support is for a dependency.
  • Speed and efficiency. Consider how responsive the solution is and if you can manage dependencies in real time. 

What is semantic versioning?

Managing dependencies effectively requires a solid strategy for managing different software projects and components. 

That’s where semantic versioning comes in. Semantic versioning is an approach for defining and communicating the meaning of software changes within a specific release. It provides a visible way to note the hierarchy of software releases so developers can easily understand how they may relate to underlying code changes.

A typical semantic versioning format comprises three parts: a major version, a minor version, and a patch version. 

The major version is incremented or advanced when there are significant changes within a software project, such as when an API changes or there are changes that are not backward compatible.

The minor version is advanced when new features are added but don’t necessarily impact existing functionality. 

Lastly, the patch version is incremented when bug fixes are made. It’s typically used for minor changes that don’t affect functionality. 

Semantic versioning has several benefits, including consistent communication among developers, enabling automated updates and compatibility checks, providing a clear understanding of the extent of a version change, and facilitating compatibility management by communicating the extent of change. 

Semantic versioning is a simple way to keep track of changes and indicate what kind of change happened and at what point in development. It can also help developers gauge update risk based on whether a change is major, minor, or patch. Most importantly, semantic versioning helps developers avoid dependency hell by making it easier to resolve version conflicts and know what versions are acceptable to use.
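Sketched in code, the scheme is simple to parse and use for gauging update risk. This minimal version ignores the pre-release and build-metadata tags that full semantic versioning also defines:

```python
def parse(version):
    """Split 'MAJOR.MINOR.PATCH' into a tuple of ints."""
    major, minor, patch = version.split(".")
    return int(major), int(minor), int(patch)

def bump_type(old, new):
    """Classify the change between two semantic versions."""
    o, n = parse(old), parse(new)
    if n[0] != o[0]:
        return "major"   # breaking change: review carefully before updating
    if n[1] != o[1]:
        return "minor"   # new features, backward compatible
    if n[2] != o[2]:
        return "patch"   # bug fixes only, lowest update risk
    return "none"

print(bump_type("1.4.2", "2.0.0"))  # major
print(bump_type("1.4.2", "1.5.0"))  # minor
print(bump_type("1.4.2", "1.4.3"))  # patch
```

Automated update tools use exactly this kind of classification to decide which updates can be merged routinely and which need human review.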

Regular application updates and maintenance

Regular updates and maintenance to applications and dependencies give organizations the best chance to avoid security risks. Attackers are always looking for vulnerabilities and ways to exploit them, so if an organization has an older version of a popular component, there’s a good chance the bad actors have found ways to exploit it. Updates can counter these efforts and reduce overall risk by addressing flaws or weaknesses before they cause trouble. 

In addition, keeping dependencies current can provide increased stability and potential performance improvements while reducing technical debt. Updated dependencies may also deliver new features or provide more robust or secure applications.

Critical benefits of regularly updating application dependencies include:

  • Security improvements
  • Bug fixes
  • Performance or feature improvements
  • Increased compatibility 
  • Reduced technical debt
  • Improved compliance

One consideration when updating dependencies is reachability. Reachability is the determination of whether the vulnerable code in a dependency, direct or transitive, can actually be reached and executed by your application, which helps teams prioritize the updates that truly matter. 

However, any application update, even those made to fix dependency problems or bugs, can introduce new problems. 

Some of the potential risks of updating an application’s dependencies include: 

  • Breaking an application
  • Introducing new bugs or security risks
  • Performance problems
  • Introducing dependency conflicts
  • Integration issues
  • Increased maintenance overhead

While the risk of breaking an application during dependency updates exists, those potential risks can be minimized with the right processes and automation in place. Regular scanning for vulnerabilities is necessary to ensure that every dependency used is secure and updated. 

Automating dependency management

Given the number of dependencies and the volume of changes occurring, most organizations cannot effectively manage dependencies without automation. Automating dependency management boosts efficiency while also improving the security and maintainability of software projects.

The first step in automating dependency management is selecting a dependency management tool that typically integrates with your version management system and other components in your software environment. The next steps are configuring your CI/CD pipelines, automating tests and scans to run during dependency updates, and establishing a process for reviewing and merging automated dependency update pull requests. 

Automating dependency management instead of manually managing it provides significant benefits, including consistency, security, efficiency, and compliance.

Strategies for managing dependencies

Since managing dependencies can be a complex process, it’s best to have a strategy to ensure consistency and security. Here are three different types of dependency management strategies to consider:

  • Centralized management. A centralized approach to dependency management focuses on keeping all the software components in one centralized repository. This would typically include all libraries, components, versions of third-party dependencies, and APIs. The benefit of this approach is having one place to consult and one source of “the truth.” This centralized approach can reduce the risk of dependency conflict or confusion. A consideration, especially for larger companies, is whether this approach will work for situations with vast numbers of dependencies or projects since the large size could lead to performance problems.
  • Decentralized management. A second approach is to go in the opposite way and empower teams to manage dependencies by themselves rather than having them all in a single corporate repository. This can theoretically be simpler since each team is managing its own dependencies, which can result in faster performance when finding, identifying, and fixing problems. However, it also reduces the overall visibility and control of the dependencies since there is no centralized resource. It may also lead to different versions of the same third-party libraries across different teams, possibly resulting in conflicting versions and dependency confusion.
  • Hybrid approach. A hybrid approach combines centralized and decentralized repositories. To do this, an organization sets up a single centralized repository that keeps shared libraries, third-party dependencies, and APIs together for use by all teams. Teams would then extend this with their repositories for code specific to their needs and not shared by anyone else. The approach will reduce dependency confusion while accelerating the ability to find project-specific code.

Tips for managing dependencies

Understanding which dependencies are vulnerable to security threats and which updates won’t break your code is complex, especially with so many direct and transitive dependencies. To help manage the process, consider starting with the following best practices for dependency management:

Identify trusted sources. Identify trusted sources of dependency and version information to avoid including malicious dependencies or introducing risky code into your project.

Prioritize. Organizations should focus their attention on the highest risks, so it’s essential to recognize that some dependencies are more important than others—and those are the ones that should be analyzed first. This may include understanding which potential open source vulnerabilities are being accessed by your code and which aren’t, so you can prioritize the updates.

Automate. Bug fixes and software updates can be time-critical, so it’s essential to put automated processes in place for dependency management in order to quickly and easily update new versions when appropriate.

Create and communicate policies. Policies let development and security teams know how to handle threats in third-party components, how to update them efficiently, and what priority is placed on continuous dependency management.

Have a consistent approach. Dependency management isn’t a one-off project. It should happen continuously so that potential vulnerabilities are eliminated before they become real vulnerabilities.

Dependency Management vs Dependency Updates: What’s the Difference? https://www.mend.io/blog/dependency-management-vs-dependency-updates-whats-the-difference/ Wed, 26 Jun 2024 14:31:52 +0000 https://mend.io/dependency-management-vs-dependency-updates-whats-the-difference/ It’s not uncommon to hear people refer to updating dependencies as “dependency management”. They’re not wrong; keeping dependencies up to date is a big part of dependency management, but it’s not everything. Read on to learn more about the differences between the two.

What is dependency management?

Let’s first briefly break down what dependencies are. 

Dependencies are the relationships between software components that rely on each other to work. You have direct dependencies, where a software component directly calls another, and indirect dependencies, where a component A relies on a component C that it never calls directly; instead, C is called by one of A’s direct dependencies, component B.

Sounds fairly straightforward, but of course component A could rely on components B, D, E, F, and G, and component B could rely on C and G, and component G can only work with one particular version of component F, and…you get the idea. These relationships can get very complicated. Moreover, these components are usually projects managed by other developers, meaning that there is no control over how their newer versions will be built or function.

Dependency management involves selecting, identifying, or defining all of these different relationships and resolving the conflicts that arise between them. Some minimal amount of dependency management will be done every time a new component is added to a project and every time an existing component is updated.

Some specific dependency management tasks include:

  • Identifying all external components that a project relies on
  • Choosing appropriate components
  • Defining components in a configuration file
  • Ensuring the correct versions of components are retrieved
  • Recognizing constraints in compatibility between particular versions of components
  • Resolving compatibility conflicts between components
  • Keeping dependencies up to date

For a deeper look at dependency management, check out this blog.

Obviously, some theoretical static software project that’s working just fine has no need for dependency management once it’s built. But in reality, nearly all projects will be expanded upon at some point, or at least updated for security purposes.

Which leads us to dependency updates.

Dependency updates

Software will need to be updated. It’s a simple fact of life and is the main reason solid dependency management throughout a project’s entire lifecycle is so necessary. 

But updating isn’t always easy. As new versions of each component come out, perhaps to add new functionality or security patches, the relationships between multiple other components can make the updating of one component cause another component to fail to work. 

And perhaps updating that other component would fix the first problem but create a new problem with an entirely different component! This is referred to as dependency hell, and even fairly small projects can find themselves in the depths of it.
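A toy resolver makes the problem concrete. In this sketch the packages and constraints are invented, and only the simplest ">=X" vs "<X" clash is detected:

```python
# Each package declares constraints on shared dependencies (invented data)
REQUIREMENTS = {
    "component-a": {"component-g": ">=2.0"},
    "component-b": {"component-g": "<2.0"},
}

def find_conflicts(requirements):
    """Report dependencies that different packages constrain incompatibly.

    For simplicity, only the '>=X' vs '<X' pattern is checked here; real
    resolvers handle full version ranges, pre-releases, and more.
    """
    constraints = {}
    for package, deps in requirements.items():
        for dep, spec in deps.items():
            constraints.setdefault(dep, []).append((package, spec))
    conflicts = []
    for dep, specs in constraints.items():
        floors = [s for _, s in specs if s.startswith(">=")]
        ceilings = [s for _, s in specs if s.startswith("<")]
        for floor in floors:
            for ceiling in ceilings:
                if float(floor[2:]) >= float(ceiling[1:]):
                    conflicts.append((dep, specs))
    return conflicts

# component-a needs component-g >= 2.0, component-b needs it < 2.0: no
# single version satisfies both, which is dependency hell in miniature
print(find_conflicts(REQUIREMENTS))
```

No version of "component-g" satisfies both constraints, so one of the two components must itself be updated, pinned, or replaced.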

So how are dependency updates managed?

Managing dependency updates

While small projects can conceivably get away with manual updates, larger projects almost certainly cannot. The absolute worst way to deal with managing updates is to not update at all. Don’t do that. That means you are choosing to take on unnecessary technical debt that you really don’t need. (You can read more about dependency management and technical debt here.)

Here are some ways to manage dependency updates without getting sent straight to dependency hell:

  • Update early. With new projects, set yourself to update frequently—even if not strictly necessary to the health of your project—so components don’t end up many versions behind by the time an update is no longer optional.
  • Update automatically. Use an automated dependency updating tool (we like our own open source tool, Mend Renovate) to group and pull updates on a schedule that works for you.
  • Start small on big projects. If you’ve got a project that’s seriously behind in updates, check out the tips in this blog for getting going.

tl;dr

Updating dependencies is just one aspect of dependency management, albeit a very important one. Dependency management also includes keeping track of which components go into your projects, their versions, and their reliance on other components as well as resolving conflicts between components.

Hallucinated Packages, Malicious AI Models, and Insecure AI-Generated Code https://www.mend.io/blog/hallucinated-packages-malicious-ai-models-and-insecure-ai-generated-code/ Thu, 20 Jun 2024 20:23:17 +0000 https://mend.io/hallucinated-packages-malicious-ai-models-and-insecure-ai-generated-code/ AI promises many advantages when it comes to application development. But it’s also giving threat actors plenty of advantages, too. It’s always important to remember that AI models can produce a lot of garbage that is really convincing—and so can attackers.

“Dark” AI models can be used to purposely write malicious code, but in this blog, we’ll discuss three other distinct ways using AI models can lead to attacks. Whether you’re building AI models or using AI models to build software, it’s important to be cautious and stay alert for signs that things aren’t quite what they seem.

Trippin’ AIs lead to “hallucination squatting”

Large language models (LLMs) don’t think like humans—in fact, they don’t think at all and they don’t really “know” anything, either. They work by finding statistically plausible responses, not necessarily truthful ones. They are known to occasionally hallucinate, which is a much cooler way of saying “spit out things that sound right but turn out to be complete BS”. They’ve even been known to hallucinate sources of information to back themselves up. 

One thing they might hallucinate is the name of an open source package. LLM hallucinations tend to have some persistence, so an enterprising threat actor might prompt an LLM to hallucinate a package name and then create a malicious package with the hallucinated name. From there, they only need to wait for the LLM-using developers to come. They might even remember to provide the functionality that the LLM suggests the package ought to have, making it that much more likely that their malicious code will make its way into the developer’s environment.
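One simple defense is to gate AI-suggested dependencies behind a vetted internal allowlist, so a hallucinated name is stopped before anything gets installed. A minimal sketch (the package names are illustrative):

```python
# Packages your team has already reviewed and approved (illustrative names)
APPROVED_PACKAGES = {"requests", "numpy", "flask"}

def vet_suggestion(package_name):
    """Gate an AI-suggested dependency behind a human-reviewed allowlist."""
    if package_name.lower() not in APPROVED_PACKAGES:
        return f"BLOCKED: {package_name!r} is not vetted; verify it exists and review it first"
    return f"OK: {package_name!r} is approved"

print(vet_suggestion("flask"))
# A plausible-sounding but hypothetical hallucinated name gets stopped
print(vet_suggestion("flask-auth-helper-pro"))
```

In practice this check would run in CI or in a private package proxy, forcing a human review step before any unknown name can reach a developer's environment.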

Bad actors and bad AI models

Hugging Face does provide extensive security measures to users, including malware scanning, but many malicious ML models still make their way onto the platform. Bad actors sneaking malicious packages onto public software sharing platforms isn’t anything new, but in the excitement of building with AI, developers may forget to be skeptical.

Loading malicious pickle files, a format commonly used to serialize and deserialize Python objects in AI models on Hugging Face, can easily lead to remote code execution attacks. Developers should be extremely cautious about downloading AI models and only use those from trustworthy sources.
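The risk is easy to demonstrate: unpickling invokes whatever callable an object's `__reduce__` method names, so merely loading an untrusted file runs code. A harmless stand-in function is used here in place of attacker code:

```python
import pickle

executed = []

def side_effect():
    # Stand-in for arbitrary attacker code (e.g., fetching malware)
    executed.append("code ran during unpickling")

class MaliciousPayload:
    def __reduce__(self):
        # Tells pickle: "to rebuild this object, call side_effect()"
        return (side_effect, ())

data = pickle.dumps(MaliciousPayload())
pickle.loads(data)  # just *loading* the bytes triggers the call
print(executed)     # ['code ran during unpickling']
```

This is why safer weight formats (and scanning model files before loading them) matter: with pickle, "open the model" and "execute the author's code" are the same operation.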

Letting your Co-Pilot take the wheel

Code-producing LLMs have been known to include security vulnerabilities in their outputs, including ones common enough to make the OWASP Top 10. While engineers are working hard every day to make sure these models get better and better at their jobs, developers should still be wary of taking on any code that they don’t fully understand. Unfortunately, the developers most likely to use the help from an AI are junior ones who are also less likely to scrutinize the code they’ve been given.

Takeaways

In all three of these examples, inexperienced developers are more likely to be duped by threat actors or LLMs. Of course, scanning your code and watching your network for signs of attack are a must, but nothing beats developer training here. Remind your teams frequently to be extremely skeptical of third-party code, whether it’s generated by an LLM or not.

That said, we don’t mean to be all doom and gloom; the world of AI is improving all the time, and AI developers are aware of these problems. Hallucinations are a bigger problem in models that aren’t updated frequently, and even older models like GPT-3.5 continue to receive updates. Meanwhile, Hugging Face is working hard to remove malicious AI models, and Copilot has been steadily improving the security of its code output.

Even so, precaution is never a bad thing.

]]>
Quick Guide to Popular AI Licenses https://www.mend.io/blog/quick-guide-to-popular-ai-licenses/ Mon, 17 Jun 2024 13:59:34 +0000 https://mend.io/quick-guide-to-popular-ai-licenses/ Only about 35 percent of the models on Hugging Face bear any license at all. Of those that do, roughly 60 percent fall under traditional open source licenses. But while the majority of licensed AI models may be open source, some very large projects—including Midjourney, BLOOM, and LLaMa—fall into the remaining 40 percent. So let’s take a look at some of the top AI model licenses on Hugging Face, including the most popular open source and not-so-open-source licenses.

Open source licenses

Ahhh… refreshing, timeless open source licenses. We know ’em, we usually love ’em, but it’s actually hard to say how well these licenses transition from covering traditional applications—which are distributed as binaries, source code, or both—to covering AI models, which are usually shared as the model binary plus training weights that essentially act as configuration files. 

The thing that makes existing open source licenses tricky for AI is that they define their terms around source code, and it is not the source code that people are generally interested in when it comes to modifying AI models. For more information on the legal questions surrounding whether or not common open source licenses work for AI, check out this OSI webinar with Mary Hardy, corporate counsel at Microsoft.

Permissive

Apache 2.0

The Apache 2.0 license is a permissive license that also includes patent grants, which can be important for AI models.

MIT

The MIT license is about as permissive as you can get, but it doesn’t include patent grants. On Hugging Face, this is the most popular license for data sets, and the second most popular for models.

Academic Free License 3.0 (AFL 3.0)

You don’t hear as much about the AFL because many consider it redundant with the more popular Apache 2.0 license, but that hasn’t stopped a large number of projects on Hugging Face from using it.

Copyleft

GNU General Public License 3 (GPL3)

GPL3 does include patent grants, which makes it good for AI. Of course, the reciprocity terms of the very strongly copyleft GPL may not be good for your organization’s particular use case, AI or otherwise.

Not actually open source licenses

If the source code is available and you’re free to modify and redistribute it, it’s open source, right? Not according to the Open Source Initiative (OSI). In order to be considered “open source”, a license must meet all ten of the criteria outlined in their Open Source Definition. What Richard Stallman calls “Freedom 0”, the right for anyone to use open source software however they please, the OSI lists as their 6th criterion: “No Discrimination Against Fields of Endeavor”.

The following licenses restrict usage and are therefore not considered open source licenses—even if the projects under them might be free to use, modify, or distribute under most circumstances. Not being truly open source doesn’t mean these licenses are bad or worth avoiding, but it does mean you may need to work with your legal department to work out policies on which licenses are going to work in your products.
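As a sketch of what such a policy might look like in code, here is a toy license gate. The allow/review/block sets below are placeholders, not recommendations; your legal team decides what actually belongs in each bucket:

```python
# Illustrative policy gate only: the sets below are placeholders, and the
# real contents should come from your legal team.
ALLOWED = {"Apache-2.0", "MIT", "AFL-3.0"}
NEEDS_REVIEW = {"GPL-3.0", "OpenRAIL-M", "CC-BY-4.0"}

def check_model_license(license_id):
    """Classify a model's license identifier against the org policy."""
    if license_id in ALLOWED:
        return "allowed"
    if license_id in NEEDS_REVIEW:
        return "needs legal review"
    return "blocked"

print(check_model_license("MIT"))      # allowed
print(check_model_license("Llama-2"))  # blocked
```

Defaulting unknown licenses to "blocked" rather than "allowed" keeps unlicensed or exotically licensed models from sneaking in unreviewed.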

Use Restrictors

Llama2

Meta’s Llama2 license was created to get people active on Meta’s Llama projects while also keeping Meta’s IP from giving any competitors a boost. This license includes the use restriction that you cannot use the model or any output from it to build, train, or otherwise enhance any other LLM. 

Additionally, if your Llama2-licensed project has more than 700 million monthly users, you have to write to Meta for approval of a new license, and if they don’t grant it, your rights under the Llama2 license are revoked. Sure, 700 million is more than twice the population of the United States, but it’s a restriction nonetheless.

RAIL Family

Responsible AI Licenses (RAIL) are created and used by people who are openly in disagreement with Stallman’s Freedom 0 and OSI’s criterion 6. They believe that because AI is such a powerful technology, AI licenses must ensure that models and data sets are built, trained, and used responsibly. What exactly defines “responsible use” is up to the license writer, and you can read some specific examples here.

Generally, RAIL licenses aim to stop things like harassment and discrimination against legally protected characteristics like race and gender. But—and the specifics here depend on which license is used—they can also prohibit the development of AI applications to be used in medicine or law enforcement.

Public Domain

Creative Commons

Creative Commons (CC) licenses seek to make a work publicly available so others can use it or create derivatives from it without paying royalties. There are many versions of the CC license, including those with non-commercial (NC) use restrictions and copyleft “ShareAlike” (SA) reciprocity terms. Free? Yes. Open? Usually. Source? Software? Not so much.

CC licenses are best used for documents, images, music files, and other similar artifacts. The OSI does not consider CC licenses to be open source licenses. Even setting aside the CC-NC licenses that violate Stallman’s “Freedom 0” and OSI criterion 6, these licenses are not recommended for software because they are not written with software distribution in mind, making it legally unclear whether they cover the source code (OSI criterion 2) or just the completed binary.

Top 12 Licenses on Hugging Face

| License | Number of HF models* | OSI recognized? | Permissive copyright? | Commercial use? | Use restriction? | Patent grant? |
|---|---|---|---|---|---|---|
| Apache 2.0 | 97,421 | Yes | Yes | Yes | No | Yes |
| MIT | 42,831 | Yes | Yes | Yes | No | No |
| Open RAIL family | 27,919 | No | Yes | Yes | Yes | Undefined** |
| CreativeML – Open RAIL | 18,631 | No | Yes | Yes | Yes | Yes |
| CC-BY-NC 4.0 | 7,081 | No | Yes | No | No | No |
| Llama2 | 5,375 | No | Yes | Yes, with exceptions | Yes | No |
| CC-BY 4.0 | 3,840 | No | Yes | Yes | No | No |
| OpenRAIL++ | 2,379 | No | Yes | Yes | Yes | Yes |
| AFL 3.0 | 2,377 | Yes | Yes | Yes | No | Yes |
| CC-BY-NC-SA 4.0 | 2,108 | No | No | No | No | No |
| CC-BY-SA 4.0 | 1,547 | No | No | Yes | No | No |
| GPL3 | 1,483 | Yes | No | Yes | No | Yes |

*As of 5/6/2024
**Hugging Face allows tagging a model under a “family” of licenses instead of one specific license. Not all Open RAIL family licenses include patent grants, but some (including OpenRAIL-M) do.

In sum

With AI, the license battle isn’t just “copyleft” vs “permissive”. You now need to consider not only how you wish to distribute (or not distribute) your software, but also how it is intended to be used. We’re not lawyers, so we can’t give you any particular advice for which licenses will be ok for your organization to work with, but we hope this guide was helpful nonetheless.

]]>
Threat Hunting 101: Five Common Threats to Look For https://www.mend.io/blog/threat-hunting-101-five-common-threats-to-look-for/ Thu, 30 May 2024 06:30:00 +0000 https://mend.io/threat-hunting-101-five-common-threats-to-look-for/ The software supply chain is increasingly complex, giving threat actors more opportunities to find ways into your system, either via custom code or third-party code. 

In this blog we’ll briefly go over five supply chain threats and where to find them. For a deeper look at finding these threats, with more specifics and tool suggestions, check out our threat hunting guide.

Installation scripts

What it is: 

After gaining access to the distribution network of software vendors or package managers, attackers can inject malicious code into the installation scripts of otherwise legitimate packages. 

Where to look: 

Installation scripts can show up on the developer machine, where they typically target personal and company data including credentials, or in the build process, where they will usually attempt to establish a backdoor and persistence.
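For npm projects, one cheap place to start hunting is the lifecycle hooks that run automatically at install time. This sketch flags them in a package.json (the package name and script command are invented for the example):

```python
import json

# npm runs these lifecycle hooks automatically during install, which makes
# them a favorite hiding spot for injected code.
AUTO_RUN_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def audit_lifecycle_scripts(package_json_text):
    """Return any auto-run install hooks declared in a package.json."""
    manifest = json.loads(package_json_text)
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in AUTO_RUN_HOOKS}

# Invented manifest: the postinstall hook is flagged, the test script is not.
sample = '{"name": "left-pad-ng", "scripts": {"postinstall": "node collect.js", "test": "jest"}}'
print(audit_lifecycle_scripts(sample))  # {'postinstall': 'node collect.js'}
```

A flagged hook isn’t proof of compromise, since many legitimate packages use them, but it tells you exactly which scripts deserve a manual read before install.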

Secrets Leak

What it is: 

Secrets include API keys, passwords, and other types of credentials and confidential information that should be kept away from bad actors. They can be easily left behind in the development process, and it’s worth the effort to find them before your adversaries do.

Where to look:

While secrets can be anywhere in your code, they are often found in configuration files. Looking for secrets manually is a tough job. Tools exist to search your entire code base for you.
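To give a flavor of how such tools work, here is a toy scanner with two illustrative patterns; real secret scanners ship hundreds of rules plus entropy checks, and the config file and key below are invented:

```python
import re

# Two illustrative patterns -- production scanners cover far more.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic API key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{16,}['\"]"),
}

def scan_text(text):
    """Return (pattern name, line number) pairs for suspected secrets."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((name, lineno))
    return hits

config = 'debug = false\napi_key = "ZXhhbXBsZWtleTEyMzQ1"\n'
print(scan_text(config))  # [('Generic API key', 2)]
```

Running something like this in a pre-commit hook catches secrets before they ever reach the repository history, which is far cheaper than rotating them afterward.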

Malicious Artifacts

What it is:

Malicious artifacts are uploaded to the public registries that developers rely on to download libraries and applications for their programming language. Bad actors use techniques like typosquatting and brandjacking to make their malicious packages look like legitimate ones and trick unsuspecting developers into installing them. You can learn more about malicious packages here.

Where to look:

Malicious artifacts sit in public registries like npm, PyPI, and Maven Central. A good SCA tool that detects malicious packages (we like Mend SCA, of course) can stop developers from adding these packages in the first place or find packages that have already slipped through the cracks.

Repojacking

What it is: 

Through rebranding and acquisitions, it is common for repository names to change. When that happens, threat actors can create new repositories under the old names and fill them with malicious code. Any project that dynamically links to the original repository name is then at risk.

Where to look:

Repojacking occurs on code-hosting platforms like GitHub, Bitbucket, or GitLab. Hunting these threats must be done by repository owners who should audit any changed or deleted names for new activity.
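Consumers can hunt from their side as well: a dependency pulled from GitHub by name alone will silently follow whatever repository currently answers to that name, while one pinned to a commit hash fails closed if the repo is recreated. A minimal sketch over a requirements.txt (the repo URLs are made up):

```python
import re

# A git dependency referenced only by org/repo name follows whatever
# repository currently holds that name -- the window repojacking exploits.
GIT_DEP = re.compile(r"git\+https://github\.com/[\w.-]+/[\w.-]+")
PINNED = re.compile(r"@[0-9a-f]{7,40}$")  # trailing commit-hash pin

def unpinned_git_deps(requirements_text):
    """Flag requirements.txt lines that track a GitHub repo without a commit pin."""
    flagged = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if GIT_DEP.search(line) and not PINNED.search(line):
            flagged.append(line)
    return flagged

reqs = ("requests==2.31.0\n"
        "git+https://github.com/example-org/old-name\n"
        "git+https://github.com/example-org/tool@1a2b3c4d5e6f708192a3b4c5d6e7f8091a2b3c4d\n")
print(unpinned_git_deps(reqs))  # only the unpinned example-org/old-name line
```

Pinning to a commit hash, rather than a branch or tag that can be recreated, means a hijacked repository simply fails to install instead of serving you new code.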

Account Takeover

What it is:

Through phishing attacks, weak passwords, stolen credentials, and social engineering, attackers can gain access to the accounts of repository owners and inject malicious code into widely used projects.

Where to look:

Monitor your account for irregular login activity and monitor your repository for suspicious patterns. Look at your security settings for your code hosting platform accounts (like GitHub, Bitbucket, or GitLab) as well as your email or any other services that are connected to make sure they are fortified.

If you’re not a repo owner but are still in the line of fire because your project depends on open source packages at risk of repojacking or account takeover, you can protect yourself. Regularly scan your project with an SCA tool and educate your developers on best practices for open source code security.

For a deeper look into finding these threats, check out our threat hunting guide.

]]>