Alfrick Opidi – Mend https://www.mend.io Thu, 14 Nov 2024 21:50:04 +0000

How to Easily Update Node.js to the Latest Version https://www.mend.io/blog/how-to-update-node-js-to-latest-version/ Tue, 19 Jul 2022 08:58:24 +0000

Node.js is a popular open-source, cross-platform server-side environment for building robust applications. Since a vibrant community of contributors backs it, the platform is continuously updated to introduce new features, security patches, and other performance improvements.

So, updating to the latest Node.js version can help you to make the most of the technology. You can decide to work with the Long-term Supported (LTS) version or the Current version that comes with the latest features. 

Typically, LTS is recommended for most users because it is a stable version that provides predictable update releases as well as a slower introduction of substantial changes. 

In this article, you will learn how to quickly and easily update Node.js on different operating systems—macOS, Linux, and Windows.

As we’ll demonstrate, there are many ways of updating to the next version of Node.js. So, you can choose the option that best meets your system requirements and preferences.

Checking your version of Node.js

Before getting started, you can check the version of Node.js currently deployed on your system by running the following command on the terminal:

node --version

or (shortened method):

node -v
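If you need the version in a script (say, to warn when Node.js is older than your project requires), you can parse this output with standard shell string operations. A minimal sketch, using a hardcoded example string in place of the live node -v call, and an arbitrary example threshold of 16:

```shell
version="v12.18.3"             # in practice: version=$(node -v)
major=${version#v}             # strip the leading "v": 12.18.3
major=${major%%.*}             # keep the part before the first ".": 12
if [ "$major" -lt 16 ]; then   # 16 is an example threshold, not a recommendation
  echo "update needed"
else
  echo "up to date"
fi
```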

Let’s now talk about the different ways to update Node.js.

1. Updating using a Node version manager on macOS or Linux

A Node version manager is a utility that lets you install different Node.js versions and switch seamlessly between them on your machine. You can also use it to update your version of Node.js.

On macOS or Linux, you can use either of the following Node version managers:

  • nvm
  • n

Let’s talk about each of them.

a) nvm

nvm is a script-based version manager for Node.js. To install it on macOS or Linux, you can use either Wget or cURL.

For Wget, run the following command on the terminal:

wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash

For cURL, run the following:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash

The above commands assume that you’re installing nvm version 0.35.3. So, you’ll need to check the latest version before installing it on your machine. 

These commands clone the nvm repository to ~/.nvm and add a source line to your shell profile (for example, ~/.bash_profile or ~/.zshrc), making nvm accessible system-wide in new shell sessions.

To confirm if the installation was successful, you can run the following command:

command -v nvm

If everything went well, it should output nvm.

Next, you can simply download and update to the latest Node.js version by running the following:

nvm install node

Note that node refers to an alias of the latest Node.js version. 

You can also reference LTS versions in aliases as well as .nvmrc files using the notation lts/* for the most recent LTS releases. 

Here is an example:

nvm install --lts
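For per-project pinning, the .nvmrc file mentioned above can hold either a specific version or the lts/* alias; nvm install and nvm use with no arguments then read it. A sketch (the nvm commands are shown as comments, since they require nvm to be loaded in your shell):

```shell
echo "lts/*" > .nvmrc    # or a specific version, e.g. "12.18.3"
cat .nvmrc               # prints: lts/*
# nvm install            # installs the version pinned in .nvmrc
# nvm use                # switches to it
```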

If you want to install and upgrade to a specific version, you can run the following:

nvm install <version-number>

For example, if you want to update Node.js to version 12.18.3, you can run:

nvm install 12.18.3

After the upgrade, you can switch to that version in your current shell:

nvm use 12.18.3

To make it the default version throughout your system, you can also run nvm alias default 12.18.3.

You can see the list of installed Node.js versions by running this command:

nvm ls

Also, you can see the list of versions available for installation by running this command:

nvm ls-remote

b) n

n is another useful Node version manager you can use for updating Node.js on macOS and Linux.

Since it’s an npm-based package, if you already have Node.js available on your environment, you can simply install it by running this command:

npm install -g n

Then, to download and update to your desired Node.js version, execute the following:

n <version-number>

For example, if you want to update Node.js to version 12.18.3, you can run:

n 12.18.3

To see a list of your downloaded Node.js versions, run n on its own:

n

You can specify to update to the newest LTS version by running:

n lts

You can also specify to update to the latest current version by running:


n latest

2. Updating using a Node version manager on Windows

On Windows, you can use the following Node version manager:

nvm-windows

Let’s talk about it.

a) nvm-windows

nvm-windows is a Node version management tool for the Windows operating system. While it’s not the same as nvm, both tools share several usage similarities for Node.js version management.

Before installing nvm-windows, it’s recommended to uninstall any available Node.js versions from your machine. This will avoid potential conflict issues during installation.

Next, you can download and run the latest nvm-setup.zip installer.

Also, since the utility runs in an Admin shell, you’ll need to start Command Prompt or PowerShell as an Administrator before using it.

If you want to install and upgrade to a specific version, you can run the following:


nvm install <version-number>

For example, if you want to update Node.js to version 12.18.3, you can run:

nvm install 12.18.3

After the upgrade, you can switch to that version:

nvm use 12.18.3

You can also specify to update to the latest stable Node.js version:

nvm install latest

You can see the list of installed Node.js versions by running this command:

nvm list

Also, you can see the list of versions available for download by running this command:

nvm list available

3. Updating using a Node installer on Linux

Using a Node installer is the least recommended way of upgrading Node.js on Linux. Nonetheless, if it’s the only route available to you, follow these steps:

  • Go to the official Node.js downloads site, which has different Linux binary packages, and select your preferred built-in installer or source code. You can choose either the LTS releases or the latest current releases.
  • Download the binary package using your browser. Or, you can download it using the following Wget command on the terminal:
wget https://nodejs.org/dist/v12.18.3/node-v12.18.3-linux-x64.tar.xz

Remember to change the version number on the Wget command depending on the one you want.

  • Install the xz-utils utility using the following command:
sudo apt-get install xz-utils

This utility will be used for unpacking the binary package. 

  • Finally, run the following command to unpack and install the binary package under /usr/local:
tar -C /usr/local --strip-components 1 -xJf node-v12.18.3-linux-x64.tar.xz
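Because the download URL follows a predictable pattern, the steps above are easy to parameterize by version. Here is a sketch that only constructs the URL; the download and extraction commands are left as comments since they need network access and root privileges:

```shell
NODE_VERSION=12.18.3    # substitute the release you want
TARBALL="node-v${NODE_VERSION}-linux-x64.tar.xz"
URL="https://nodejs.org/dist/v${NODE_VERSION}/${TARBALL}"
echo "$URL"
# wget "$URL"
# sudo apt-get install xz-utils
# sudo tar -C /usr/local --strip-components 1 -xJf "$TARBALL"
```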

4. Updating using a Node installer on macOS and Windows

Another way of updating your Node.js on macOS and Windows is to go to the official download site and install the most recent version. This way, it’ll overwrite your existing old version with the latest one.

Follow these steps to update it using this method:

  • On the Node.js download page, select either the LTS version or the latest current version.
  • Depending on your system, click either the Windows Installer option or the macOS installer option.
  • Run the installation wizard. It will complete the installation process and upgrade your Node.js version by replacing it with the new, updated one.

5. Updating using Homebrew on macOS

Homebrew is a popular package management utility for macOS. 

To use it for installing Node.js, run the following command on your macOS terminal:

brew install node

Later, if you’d like to update it, run the following commands:

brew update #ensure Homebrew is up to date first
brew upgrade node

Previously, you could also switch between installed Node.js versions:

brew switch node 12.18.3

Note, however, that brew switch has been removed from recent Homebrew releases. To keep multiple versions side by side today, install a versioned formula (for example, brew install node@18) and use brew unlink and brew link to switch between them.

There is an easier way to upgrade Node.js versions

Renovate is an open source tool by Mend for developers and DevOps that automatically creates pull requests (PRs) for dependency updates. Renovate PRs embed all the information you need to ease your update decision.

Renovate can upgrade the Node.js runtime and packages used by your project. This way you have access to the latest features, bug fixes, performance improvements, and security mitigations.

Update Docker Images & Containers To Latest Version https://www.mend.io/blog/update-docker-images-and-containers-to-the-latest-version-easily-and-quickly/ Mon, 18 Jul 2022 07:45:35 +0000 When working with Docker, everything revolves around images. And when you’ve used an image to deploy a container, it continues running that image version, even if a new release exists.

Since images within launched containers cannot self-update, it’s essential to use other ways to keep them up-to-date and optimize your applications’ performance. 

This article demonstrates how to upgrade a deployed Docker container based on an updated image version.

What is a Docker image?

A Docker image is a lightweight, standalone, and executable package that includes everything needed to run a piece of software, including the code, runtime, libraries, and system tools. It is a snapshot of a file system and the parameters needed for running a specific application or service.

Docker images are built from a set of instructions written in a Dockerfile, which specifies the base image, dependencies, configuration, and other settings required for the application. These images are designed to be portable, consistent, and scalable across different environments, allowing developers to package their applications and dependencies into a single unit that can run seamlessly on any system supporting Docker.
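As a sketch of what such a Dockerfile can look like (the base image, file names, and start command here are hypothetical, not from any particular project):

```dockerfile
FROM node:20-alpine          # base image
WORKDIR /app                 # working directory inside the image
COPY package*.json ./        # copy dependency manifests first for layer caching
RUN npm ci                   # install dependencies
COPY . .                     # copy the application source
CMD ["node", "server.js"]    # process started when a container runs this image
```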

Role of Containers

Containers are instances of Docker images, running in isolated environments. They enable consistent operation across various development and deployment environments, making it easier to develop, test, and deploy applications.

Importance of updating Docker images and containers

Unless you have a convincing reason to use an older version, it’s recommended to run Docker containers using the most up-to-date image releases.

Here are some reasons for keeping them up-to-date:

  • Updated images often come with patch fixes that can enhance the security of your applications. 
  • Upgraded images often include new or improved features that can boost the performance of your applications. 
  • Updated images often remove outdated features that can otherwise impair smooth compatibility with various devices or applications. 

How to update Docker images and containers

Let’s talk about the steps of upgrading a Docker image and a container to the version you desire.

Step 1: Check current image version

To find out if your Docker container requires an upgrade, you need to check the version you are using. You can get this information by looking at the set of identifiers and tags that come with every image pulled from a Docker registry such as Docker Hub.

You can use any of the following two simple methods to check if your image is outdated:

  • List images on your system

To see the images on your system, run the docker images or docker image ls command.

Here is an example:

[Screenshot: output of the docker images command listing local images]

If you check the output of either command, you’ll see that it lists the available images alongside their version tags, IDs, and other details. It’s the version tag that we want here.

For example, the version of the mysql image is 8.0.

  • Inspect the images further

If an image is tagged with a version name, such as latest, you may need to dig further to know the exact version of the image. 

Remember that the latest tag does not necessarily mean that the image is the latest version. It’s simply the default tag that Docker applies to any image pushed without a specific tag.

It isn’t dynamic and may not track the most recent push, so it can be confusing to use at times.

For example, these two commands will both create a new image that is tagged as latest:

docker build -t user/image_name .
docker build -t user/image_name:latest .

So, to know the exact version of such an image, you may run the docker inspect command, which outputs detailed low-level information about the image as JSON. (You can also pass a --format flag with a Go template to print only the fields you need.)

Here is its syntax:

docker inspect <image_id>

Let’s run the command against the bash image on our system. Note that we got its ID after running the docker images command.

Here is the command:

docker inspect 39a95ac32011

Here is the output (for brevity, it’s truncated):

[Screenshot: truncated output of docker inspect showing the image details]

From the above output, you can see that the version of the bash image, which is tagged as latest, is 5.0.18.

Step 2: Stop the container

After knowing the outdated image version the deployed container is using, the next step is to stop the container.

First, you’ll need to use the docker ps command to see a list of all containers currently running on your system.

To see a list of all containers, running and not running, add the -a (or --all) flag.

Here is an example:

[Screenshot: output of docker ps -a listing all containers]

Then, stop the existing container by running the docker stop command.

Here is its syntax:

docker stop <container_id>

For example, here is how to stop the mysql container:

[Screenshot: stopping the mysql container with docker stop]

It’ll return the ID of the deployed container.

Step 3: Remove the container

After stopping the running container, you can now use the docker rm command to remove it. 

Here is its syntax:

docker rm <container_id>

Here is how to remove the mysql container:

[Screenshot: removing the mysql container with docker rm]

It’ll return the ID of the container.

You may also remove its associated image by using the docker rmi command. 

Here is its syntax:

docker rmi <image_id>

Here is how to remove the mysql image:

[Screenshot: removing the mysql image with docker rmi]

Step 4: Pull your desired image version

Next, you can look for the version of the image you need to update to.

For example, this is a screenshot from Docker Hub showing the mysql version we need to use for carrying out the Docker update image task:

[Screenshot: Docker Hub page for the mysql image showing tag 8.0.22]

You can see that the version is tagged as 8.0.22.

To download the image from Docker Hub, you can use the docker pull command. 

Here is its syntax:

docker pull <image_name:image_tag>

Here is how to pull the mysql image:

docker pull mysql:8.0.22

Step 5: Launch the updated container

After downloading the new image, you can use it to recreate the container by executing the docker run command. 

Here is its syntax:

docker run <image_name:image_tag>

Here is how to carry out a Docker update container task for the downloaded mysql image:

docker run mysql:8.0.22

Note that the docker run command will start by looking for the image on your local system. If it’s not already present, it’ll download the image from the Docker Hub registry.

So, you may use the docker run command and avoid step 4 above.

Step 6: Verify the update

Lastly, you may need to ascertain if your Docker update image efforts were successful.

You may run the docker images or docker ps -a command to verify if everything went well.

Here is an example:

[Screenshot: docker images output showing the updated mysql image]

As the screenshot above shows, the Docker container update for mysql went well. 

There is an easier way to upgrade dependencies in Docker files

Renovate is an open source tool by Mend for developers and DevOps that automatically creates pull requests (PRs) for dependency updates. Our PRs embed all the information you need to ease your update decision.

Renovate supports upgrading dependencies in various types of Docker definition files. By using Renovate, you can easily keep your Docker images, containers, and IaC files up-to-date, performant, and secure.

Get Renovate now. It’s free. >>

How To Address SAST False Positives In Application Security Testing https://www.mend.io/blog/sast-false-positives/ Thu, 10 Mar 2022 11:30:18 +0000 Static Application Security Testing (SAST) is an effective and well-established application security testing technology. It allows developers to create high-quality and secure software that is resistant to the kinds of attacks that have grown more prevalent in recent years.

However, the challenge with SAST is that it tends to produce a high number of false positives that waste the time of your engineering team.

In this blog we take a look at SAST and the problem of false positives. We discuss how to avoid them, and how Mend SAST can help.

What Is SAST?

SAST is an application testing methodology that assesses source code to discover potential design loopholes, using static program analysis to find vulnerabilities. In SAST, the application is scanned without the need to execute its code. It’s also called white box testing.

SAST is one of several approaches used in application security testing (AST). The other main approaches are Dynamic Application Security Testing (DAST), Interactive Application Security Testing (IAST), and Software Composition Analysis (SCA).

Ideally, SAST is performed in the early stages of the software development life cycle (SDLC). By shifting security left, developers are able to fix issues swiftly without breaking builds or passing defects to the later development stages, where issues are more costly to resolve.

One of the advantages of SAST, as compared to DAST or IAST, is the fact that SAST identifies flaws in 100% of the codebase. For example, the use of inadequate encryption is not something that a DAST or IAST tool can detect, but this is easily detected by a SAST tool.

The Challenge of SAST False Positives

Despite its many benefits, SAST tools have developed a bad reputation for producing a large number of false positive warnings – potential code flaws which, upon further investigation, do not really exist. To demonstrate, here are two typical kinds of problems that give rise to false positives.

Firstly, project-specific sanitizers. These are measures taken to “clean” data in order to protect against a vulnerability. Here is an example of a project-specific sanitizer for SQL injection:

final var allowedColumns = List.of("col1", "col2", "col3");

if (allowedColumns.contains(userInput)) {
    stmt.executeQuery("SELECT " + userInput + " FROM table");
}

In this example, userInput is a string that comes from the outside: a so-called “taint” or “tainted data.” The data can easily be controlled by an attacker and can cause a SQL injection.

When this tainted data reaches a code location where it might cause harm, this is called a taint sink. In the code snippet above, the executeQuery method is a taint sink for SQL injection. If no further protection is put in place, an attacker can add a harmful string via userInput and then use it to instigate malicious activity, such as deleting all users, gaining administrator permissions, and more.

In general, SAST tools check if there is a path to the taint sink (“a taint flow”) and if it exists, then a vulnerability is detected. However, if some measures are taken on this flow so that only harmless data can reach the sink, then you don’t have a vulnerability. These measures are called taint sanitizers, because they “clean” the tainted data. This way, it is not possible that an attacker can reach the executeQuery call with a harmful string.

Typically, the sanitizer is either a common library function (which can be detected out of the box when the tool vendor knows the library and has predefined a corresponding rule) or something “home-made” and project-specific. In this case, allowedColumns.contains(userInput) is the project-specific sanitizer. It checks that userInput is one of the allowed strings.

Generally, the best way to prevent SQL injection is to use a prepared statement. That isn’t possible here, because column names cannot be bound as prepared-statement parameters, so static analysis tools will report a vulnerability. But in fact it is a false positive, because a project-specific sanitizer whitelists the input, checking that only one of the expected strings is provided. This is something that no static analysis tool can know in advance.
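To make the whitelist pattern concrete, here is a self-contained, runnable sketch of the same sanitizer (the class and method names are illustrative, not from the original project):

```java
import java.util.List;

public class ColumnWhitelist {
    // Project-specific sanitizer: only these column names may ever be
    // spliced into a query string.
    private static final List<String> ALLOWED = List.of("col1", "col2", "col3");

    static boolean isAllowedColumn(String userInput) {
        return ALLOWED.contains(userInput);
    }

    public static void main(String[] args) {
        System.out.println(isAllowedColumn("col2"));                   // true
        System.out.println(isAllowedColumn("col1; DROP TABLE users")); // false
    }
}
```

Because every accepted value comes from a fixed, developer-controlled list, no attacker-supplied string can alter the SQL statement, which is exactly the property a SAST tool would need project knowledge to verify.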

The second example is that of a context-specific vulnerability. A security tool simply can’t know in advance if a piece of software is sensitive enough to require authentication and authorization. Therefore, many tools report every type of unauthenticated access, generating false positive alerts for unauthenticated access to software that is innocuous. For example, unauthenticated access to software that provides time-of-day lookup is probably not a real vulnerability.

Effects Of SAST False Positives

To understand what a false positive is, it is important to understand the range of test results that a SAST tool can produce. Security testing exercises usually generate the following results:

  • True Negative—correctly showing that a vulnerability is not present
  • True Positive—correctly showing that a vulnerability is present
  • False Negative—not showing that a vulnerability is present, while it actually exists
  • False Positive—incorrectly showing that a vulnerability is present, while it actually does not exist.

The goal of any AST tool should be to maximize the true negatives and true positives while minimizing the false negatives and false positives. However, from an engineering point of view, this is hard to do. In practice, most SAST products have been designed to maximize the number of true positives, even at the expense of producing a larger number of false positives. This leads to tremendous time wastage as developers hunt down flaws that do not actually exist.

Common Causes Of False Positives

There are two main causes of false alarms in SAST tests.

#1 Overly broad assumptions in scanners

Some SAST tools have incorporated too many assumptions in order to provide support for several programming languages. In most cases, these assumptions are generic and cannot accurately apply to all programming languages.

If a bit of source code does not make sense to such tools, or if a dependency is missing, they assume there must be problems with the code. As a result, these static analysis tools produce a large number of false positives.

#2 Poorly designed rules

Some traditional SAST tools are poorly designed and cannot confidently flag vulnerabilities. To compensate for this, they flag vulnerabilities even when there is just a hint that there could be a problem, just to be on the safe side.

For example, some SAST tools search for certain words in string variable names to determine if they contain sensitive data, such as authentication credentials, keys, or passwords.

Let’s look at the following code:

userPasswordLabel.Text = "Please enter your password";

Although the variable name contains the word “password”, it does not necessarily mean that it contains sensitive information. It’s just a label. If a SAST tool cannot perform further analysis to determine if the variable reveals sensitive data, it could quickly give a false alarm.

How To Avoid SAST False Positives

Let’s look at some ways to conquer the false positives and prevent them from derailing your shift left strategies.

#1 Choose a good tool

Choose a good static analysis tool that is well designed for the job. If you go for any tool without sufficiently assessing its effectiveness, you can waste your time dealing with many false positives instead of resolving real threats.

You need a tool that can perform deep, accurate, comprehensive analysis in your preferred programming language and your tool should cover a wide range of vulnerabilities. A generic tool that is not fine-tuned to your language of choice could produce many false positives for no good reason.

Additionally, go for a tool that can integrate seamlessly within your existing DevOps environment and CI/CD pipeline. This will prevent you from separately configuring or triggering a scan, which can increase the noise of your SAST results.

Read more on how to select the right SAST tool for your organization.

#2 Practice customization

Applying a customized ruleset to a SAST tool will help to decrease the number of false positives. Using a tool’s default settings may not suit the specific needs of your organization.

If you configure a SAST tool to your preferences, it can assist in detecting the right issues and minimizing the false positives. Implementing a tool without customization could produce a lot of junk that may make you miss the real issues plaguing your software.

#3 Filter and prioritize

You can also filter out results that are inconsequential to the well-being of your software. For example, you can ignore results from parts of the software that comprise “dead code” or packages that are not being invoked.

It will help to prioritize the testing results based on what is most important to your organization. You can prioritize them according to the severity of threats, status of the vulnerabilities, and ability to solve the issues. This will allow you to concentrate on fixing the most critical anomalies instead of wasting efforts tackling the issues that have little or no impact.

How Mend SAST Helps

Mend SAST lets enterprise application developers create new applications quickly, without sacrificing security. With Mend SAST, you can effectively reduce the risks and inefficiencies that usually arise from massive alert volume. And you can do it fast. Mend SAST contains a breakthrough scanning engine that produces results 10x faster than traditional SAST solutions. So your developers are not left waiting.

It integrates with your existing DevOps environment and CI/CD pipeline, so developers don’t need to separately configure or trigger the scan. And it’s comprehensive, supporting 27 different programming languages and a wide variety of frameworks and platforms. This gives you visibility into over 70 CWE types — including the OWASP Top 10 and SANS 25 — in desktop, web, and mobile applications.

Mend SAST provides maximum efficiency and convenience for your developers, allowing them to identify and address vulnerabilities right away, when it’s quickest and easiest to do so. Built-in reports for security standards such as PCI and HIPAA allow you to easily meet compliance requirements. The efficiency and ergonomics of Mend SAST will help your software developers learn to trust their software tools and collaborate more readily with members of the security team.

Best Practices For Managing Ruby Supply Chain Security Risks https://www.mend.io/blog/how-to-mitigate-ruby-supply-chain-security-risks/ Thu, 03 Mar 2022 07:00:00 +0000 Software supply chain attacks are on the rise: they increased by more than 600% between 2020 and 2021. On RubyGems, the official package repository for the Ruby programming language, attackers usually take advantage of the implicit trust developers place in the gems deployed on the platform and infect them with malicious code.

And when a developer incorporates a compromised open source library in their project, bad actors use that as a stepping stone to stage attacks on a wider audience downstream, resulting in supply chain attacks.

This blog talks about the best practices for managing supply chain security risks in Ruby applications in order to prevent Ruby supply chain attacks.

Types of Ruby supply chain attacks

Open source software provides a lucrative opportunity for bad actors to penetrate the supply chain of your Ruby applications and instigate Ruby supply chain attacks. Third-party packages are widely used to speed up software delivery times, improve the efficiency of development teams, and produce quality software. 

However, in addition to these benefits, open source libraries are vulnerable to attacks and supply chain risks that could affect the performance of your applications. 

Let’s address the main types of supply chain attacks associated with using open source libraries in Ruby applications.

1. Gems Typosquatting Attacks

Typosquatting is a common type of attack on the RubyGems ecosystem. It involves pushing misspelled malicious gems to the registry with the intention of duping developers into using them. 

Typosquatting relies on mistakes developers make when searching for gems. An attacker can deliberately create a malicious gem and name it like a legitimate gem. And if a developer makes a typing error, they could land on the similarly named gem. 

In 2020, security researchers found more than 700 malicious typosquatted packages on the RubyGems platform. One of the harmful packages they discovered was called atlas-client, which typosquatted the legitimate package called atlas_client. Once installed on Windows devices, the fake package ran a script that intercepted cryptocurrency payments.

2. Brandjacking Attacks

Typosquatting is closely related to another type of attack called brandjacking. In a brandjacking attack, a malicious actor takes advantage of the popularity of a package and provides a similarly named malicious package that lures developers into installing it. 

For example, an attacker can take the name of a package that is popular in another registry, such as npm or PyPI, and publish a malicious package under that name on RubyGems, where it is missing. If a developer uses the familiar name to look for it on RubyGems, they land on the brandjacked package. 

3. Malicious Takeover Attacks

Malicious takeover attacks occur when a bad actor uses social engineering tricks to obtain the account credentials of the project owner and take over the repository. A wolf-in-sheep’s-clothing contributor may offer to keep a package up to date. And if granted access to the repository, the malicious maintainer could tamper with the package, inject backdoors, or cause other damage.

Alternatively, an attacker could extract the credentials if the project owner has a weak credential management system.

In 2019, attackers compromised the credentials of one of the maintainers of rest-client, a popular library in RubyGems, and used them to distribute malicious code.

4. Primed Repository Attacks

Primed repository attacks occur when a miscreant publishes a package on RubyGems with the intention of building its popularity and using it for staging attacks in the future. 

For example, the Mend Supply Chain Defender security team recently discovered a RubyGems library called ddtracer, which was an intricate build-up for future Ruby supply chain attacks. 

5. Security Research Smokescreen Attacks

RubyGems has a policy that allows packages to be used for security research. However, the Mend Supply Chain Defender team recently discovered 61 gems that claimed to be used for security research but were intended for malicious purposes. 

Best practices to mitigate attacks

Managing the risks of the Ruby supply chain is a challenging task. In today’s software development landscape, it’s not possible to eliminate reliance on open source software. 

As a Ruby developer, however, you can take some proactive steps to minimize the open source software risks, prevent Ruby supply chain attacks, and ensure you release quality and performant products.

Let’s delve into some best practices you can follow to improve the supply chain security of your Ruby projects.

1. Eliminate Trust

Applying a Zero Trust philosophy can help you overcome the supply chain security challenges in your Ruby environment. Bad actors usually use trust as an attack vector or an easy entry point to infect packages.

A Zero Trust approach requires that you treat every dependency as untrusted by default. You then need to implement sufficient measures to verify the legitimacy of every gem before incorporating it into your project.

For example, instead of automatically updating your packages, you should assume that every update is insecure and explicitly validate its security before installation.

2. Probe Gems

Before acquiring any gem to include in your Ruby project, you need to ask some questions.

  • Is the gem from an authentic source? If its source is questionable or it has a misspelled name, it could make you susceptible to brandjacking or typosquatting attacks. You should avoid those that appear to be squatting on a typo of more popular packages. 
  • When was the gem released? Is it a new package? Typically, you should wait for at least 30 days for a new package’s safety to be verified by the community’s security researchers before starting to use it.
  • Is the gem properly documented with healthy codebase comments? A package that is poorly documented should raise red flags. 
  • How well is the gem maintained? When was its last commit? If a gem is under-maintained or abandoned, it could be ripe for a malicious takeover.
  • Which open source license is the gem using? You need to review its open source license and ensure you’re comfortable with it. 
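
Several of these checks can be given a quick automated first pass. The sketch below is a minimal, illustrative typosquat heuristic, not a vetted ruleset: the popular-gem list and the edit-distance threshold are assumptions chosen for demonstration.

```ruby
# Minimal typosquat heuristic: flag gem names that are a small edit
# distance away from a well-known gem without matching it exactly.
# POPULAR_GEMS and the threshold are illustrative assumptions.
POPULAR_GEMS = %w[rails rake rack nokogiri rest-client devise puma sidekiq].freeze

# Classic dynamic-programming Levenshtein edit distance.
def edit_distance(a, b)
  rows = Array.new(a.length + 1) { |i| [i] + Array.new(b.length, 0) }
  (0..b.length).each { |j| rows[0][j] = j }
  (1..a.length).each do |i|
    (1..b.length).each do |j|
      cost = a[i - 1] == b[j - 1] ? 0 : 1
      rows[i][j] = [rows[i - 1][j] + 1,              # deletion
                    rows[i][j - 1] + 1,              # insertion
                    rows[i - 1][j - 1] + cost].min   # substitution
    end
  end
  rows[a.length][b.length]
end

# Returns the popular gem a candidate name may be squatting on, or nil.
def possible_typosquat(name, threshold: 2)
  POPULAR_GEMS.find do |popular|
    name != popular && edit_distance(name, popular) <= threshold
  end
end
```

For instance, `possible_typosquat("railz")` flags `"rails"`, while the genuine name `"rails"` passes. A real check would combine this signal with download counts, release age, and ownership history.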

3. Maintain Visibility 

Lack of visibility into your software supply chain could be a recipe for disaster. You need to know what’s happening with your direct and transitive dependencies at all times. 

Even after probing your gems and selecting the ones to include in your project, you should not simply forget about them. You need to continuously examine your dependencies for newly identified vulnerabilities.

If you discover that a package no longer offers the intended value, you need to remove it. Having stagnant gems could provide attackers with a weak link to harm your application.
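
A starting point for that kind of visibility is an inventory of everything in your lockfile. The sketch below is a deliberately simplified, regex-based read of a Gemfile.lock; real projects should rely on Bundler itself (for example, `bundle list`) rather than hand-rolled parsing.

```ruby
# Minimal sketch: enumerate the pinned gems in a Gemfile.lock so every
# dependency (direct and transitive) can be fed into vulnerability checks.
# Illustrative only -- not a complete lockfile parser.
def locked_gems(lockfile_text)
  gems = {}
  in_gem_section = false
  lockfile_text.each_line do |line|
    if line =~ /^[A-Z]/  # section headers: GEM, PLATFORMS, DEPENDENCIES, ...
      in_gem_section = line.start_with?("GEM")
      next
    end
    next unless in_gem_section
    # Top-level spec entries are 4-space indented: "    rake (13.0.6)".
    # Sub-dependencies (6-space indent, range constraints) are skipped.
    if (m = line.match(/^    (\S+) \((\d[^)]*)\)/))
      gems[m[1]] = m[2]
    end
  end
  gems
end
```

Feeding it a lockfile’s contents yields a name-to-version map covering both direct and transitive gems, which can then be checked against vulnerability databases.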

4. Implement Multi-Factor Authentication

You need to implement multi-factor authentication across your supply chain. Providing two or more pieces of evidence before accessing your sensitive resources could help you avoid malicious takeover attacks.

Enabling two-factor authentication on RubyGems increases the security of your account and reduces the chances of the malicious infiltration that leads to Ruby supply chain attacks.

5. Separate Build Steps

You need to run all build processes in hermetic, isolated, and ephemeral environments. Separating your CI stages across multiple infrastructures could help you avoid releasing products with exploitable vulnerabilities. 

Avoid using the same environment for executing specs, building containers, pushing updates, or anything else; keeping these steps isolated can greatly increase the security of your software.

6. Be Proactive

Most organizations treat supply chain security incidents with reactive measures. With a proactive approach, however, you can neutralize supply chain risks and keep your Ruby project healthy.

For example, you can educate your team on secure dependency management practices. You can also include critical production-related CVE notifications in your security alert mechanism. That’s the best way to get prompt security alerts and ensure you’re ahead of the threat actors. 

How Mend Supply Chain Defender helps

Mend Supply Chain Defender is a robust security scanning and risk management tool that helps you manage supply chain risks in your Ruby projects and prevent attacks. It’s the platform you need to safeguard yourself against the risks of using open source third-party dependencies.

Mend Supply Chain Defender scans your gems and assesses their behaviors, assisting you to keep track of the state of your production and avoid poisoning your downstream clients with malicious releases. 

It intelligently allows you to block installing malicious packages, understand the risks of new vulnerabilities, ensure open source licensing consistency, and create security policies around dependencies and their versioning. 

It’s the tool you need to add comprehensive supply chain risk mitigation measures in your Ruby projects, protect yourself against Ruby supply chain attacks, and deploy with confidence.

]]>
A Guide To Implementing Software Supply Chain Risk Management https://www.mend.io/blog/implementing-software-supply-chain-risk-management/ Thu, 24 Feb 2022 06:00:00 +0000 https://mend.io/implementing-software-supply-chain-risk-management/ Software supply chain risks are escalating. Between 2020 and 2021, bad actors launched nearly 7,000 software supply chain attacks, representing an increase of more than 600%. Without identifying and managing security risks within the supply chain, you could be exposing your critical assets to attacks. 

Implementing a supply chain risk management strategy is essential to staying ahead of the potential threats and making the most of your software. Let’s take a look at how best to adopt software supply chain risk management strategies in an enterprise:

What Are Software Supply Chain Risks?

A software supply chain consists of a network of different components that go into delivering a software product. It comprises open source packages, proprietary software, and other third-party resources that a vendor acquires from suppliers to produce the final software.

If there is a defect in any component of the software, it could present a risk to the entire supply chain. A vulnerability in any dependency or service could introduce a weakness in the software that adversaries might target. 

If the risk is exploited, a malicious actor can extract sensitive information or cause other serious damage. For example, a cybercriminal can implant nefarious code in a third-party package. If an unsuspecting victim then uses the vendor’s software that depends on the vulnerable package, their system gets compromised.

In most supply chain attacks, the vendor is not the final target but a stepping stone to several other potential targets downstream. 

What Is Software Supply Chain Risk Management?

Software supply chain risk management is the process of evaluating and managing issues in a supply chain. Without appropriate mitigation measures, software supply chain security risks could affect the authenticity, trustworthiness, and integrity of the released software.

Managing supply chain risks requires a coordinated approach that identifies, assesses, and addresses software threats. It aims to enhance the understanding of supply chain risks in an organization, minimize the effects of attacks, and ensure that defective components do not find their way into the software supply chain.

Importantly, software supply chain risk management involves mitigating risks from open source components, which have increasingly become a way for attackers to penetrate the software supply chain.

Open source codebases have many benefits. Consequently, their usage in software development projects has escalated rapidly in recent years. In 2015, 36% of such projects used open source. By 2020, this proportion had risen to 75%, and it’s still growing. However, open source codebases come with security risks that you must address to reap their full value. 

Software Supply Chain Risk Management Process

The software supply chain risk management process typically entails the following:

  • Risk identification: Conducting tests to identify the main risks in the entire software supply chain.
  • Risk assessment: Assessing the severity of the identified risks and the possibility of their occurrence.
  • Risk mitigation: Performing various tasks to treat the risks and lower the likelihood of damage. You can manage risks in this step by creating an action plan that avoids potential risks and ensures your supply chain is free of vulnerabilities.
  • Risk monitoring and control: Actively monitoring the software supply chain for any emerging risks and appropriately addressing them.
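
As a toy illustration of the assessment and mitigation steps above, a risk register can score each identified risk by likelihood and impact and rank it for treatment. The components, scales, and numbers below are invented for demonstration only.

```ruby
# Toy risk register: severity score = likelihood (1-5) x impact (1-5).
# Entries and scores are illustrative assumptions, not real assessments.
Risk = Struct.new(:component, :likelihood, :impact) do
  def score
    likelihood * impact
  end
end

register = [
  Risk.new("outdated transitive dependency", 4, 3),  # likely, moderate harm
  Risk.new("unverified third-party package", 2, 5),  # less likely, severe
  Risk.new("compromised build server",       1, 5)   # rare, severe
]

# Risk assessment and mitigation ordering: treat the highest scores first.
register.sort_by { |r| -r.score }.each do |risk|
  puts format("%-32s severity %2d", risk.component, risk.score)
end
```

Even a crude ranking like this helps direct limited mitigation effort toward the risks most likely to cause real damage.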

Software Supply Chain Risk Management Best Practices

Let’s delve into some best practices you can implement to enhance risk management in your software supply chain. 

1. Develop A Risk Management Framework

A proactive risk management framework strengthens the security of the software supply chain. By outlining the possible attack scenarios, you can enhance your organization’s readiness to combat them in the future.

These are some questions you can investigate:

  • What’s the likelihood that attackers will compromise a supply chain component?
  • Which preventive strategies should you adopt to avoid the risks?
  • Which contingency plan should you put in place to address each of the risks?

2. Increase Transparency In Your Supply Chain

You need to map out all the areas of your software supply chain. This will increase their visibility and help you know what’s happening with your software components.

Without transparency in your supply chain, it will be difficult to implement protective measures or identify defects that could make your software vulnerable to attacks.

You also need to know the details of each component of your supply chain. For example, you need to scrutinize the origin of your open source components, processes used to develop them, and the history of their vulnerabilities.

3. Adopt Risk Management Training

Training everyone in your organization is another essential strategy for managing software supply chain risks. It will equip them with the right skills to anticipate, recognize, and prepare for risks. 

Risk management training will empower your workforce to be vigilant and avoid risks that could threaten the optimal performance of the organization’s software. It’s what will transform them into your most valued security asset instead of your weakest security link. 

Software Supply Chain Risk Management Benefits

Practicing risk management in your software supply chain can lead to several benefits. 

Let’s look at some of them.

1. Reduces Security Risks

In the software supply chain, the devil is usually in the details. However, a risk management strategy allows you to get increased visibility into the various supply chain components so that you can see issues that are not apparent.

As such, you can quickly detect threats, formulate plans to address them, and establish measures to safeguard your critical infrastructure. Risk management ensures that any vulnerability in the supply chain is promptly identified and dealt with before it brings your software to its knees. 

2. Enforces Regulatory Compliance

The sharp rise of software supply chain attacks has elicited a regulatory response from governments and industry bodies. For example, in May 2021, the U.S. government issued an executive order aimed at enhancing its cybersecurity posture and curbing threats from supply chain attacks. The rest of the industry is likely to follow suit and set standards in vendors’ software. 

So, adopting risk management in your software supply chain allows you to comply with the regulations and practice better security hygiene. 

3. Improves Productivity

A proper risk management strategy ensures risks are actively tracked and solved before they cause havoc to the software. As light is shone on the problematic components of the software, your team can move quickly to address them and ensure everything works as desired. 

This avoids blame games between your team members and keeps them focused and motivated. Trying to rectify an anomaly in the supply chain when it has already overwhelmed the software could affect your team’s morale.

With improved productivity, your company can produce quality software that meets consumer expectations and maintains a competitive edge in the marketplace. 

How Mend Supply Chain Defender Can Help

Mend Supply Chain Defender is a robust risk management platform that lets you fend off supply chain risks and keep your software performant and healthy. It allows you to defend yourself against vulnerabilities in open source third-party dependencies.

It’s a multi-dimensional tool that scans open source packages and assesses their behaviors. This allows you to maintain visibility into the components of your supply chain, react quickly to risks, and enforce policies on the usage of external packages.

With Mend Supply Chain Defender, you can take a proactive approach to securing your supply chain. It’s the risk management solution you need to ward off malicious exploits. 

]]>
DevSecOps in an Agile Environment https://www.mend.io/blog/devsecops-agile-environment/ Thu, 20 Jan 2022 09:19:09 +0000 https://mend.io/devsecops-agile-environment/ At first glance, DevSecOps and Agile can seem like different things. In reality, the methodologies often complement each other. Let’s see how.

Agile is a methodology that aims to give teams flexibility during software development. DevSecOps is about adding automated security to an existing automated software development process. Both are methodologies that require high levels of communication between different stakeholders and continuous improvement as part of the process.

But how, exactly, does DevSecOps work in an Agile environment? And why does DevSecOps in Agile matter?

How DevSecOps works in an Agile environment

To understand how DevSecOps can work in an Agile environment, we must first understand what Agile is.

Agile is often synonymous with the idea of speed — that is, ship fast and ship often. However, as software developers move faster, the old security tools they were using can’t keep up. Security becomes a speed bump and is frequently bypassed in order to ship code quickly. Over time, the software becomes prone to leaks, breaches, and hacks.

DevOps was developed as a methodology that enables application developers and software release teams to work together more efficiently, with more cooperation, in order to deliver applications with higher velocity. DevSecOps is an evolution of DevOps, driven by the need to add automated security to the automated DevOps processes.

In spite of their differences, Agile and DevSecOps can be seen as complementary to one another because they are both trying to achieve the same thing: speed.

Promoting security as a central feature in Agile workflows

DevSecOps is primarily intended to avoid slowing down the delivery pipeline. However, badly structured and constructed software also has the ability to slow delivery down. As we move into a fully digitized world, ignoring security in the DevOps process can significantly reduce a team’s ability to remain Agile.

“Fake Agile” occurs when Agile processes are followed but not properly implemented. This means that teams are engaged in sprints, standups, scrums, and burndown charts, but the software is produced in a haphazard way. Fake Agile is endemic across organizations that try to expedite deployments without understanding the need to properly implement software in order to avoid breaking the existing application.

A crucial value of the Agile Manifesto is prioritizing “working software over comprehensive documentation”. However, this means more than the software simply performing its required functions. Properly working software involves everything that the software needs to work effectively and securely. This means thinking beyond just the core code for the features and functionality. It also requires the inclusion of other layers such as infrastructure and the security that surrounds it.

For teams and software to be truly Agile, they need to incorporate DevSecOps into their workflows.

Automated security features in pipelines as part of an Agile workflow

Automation is a valuable feature for DevSecOps. This is because it enables continuous integration, deployment, and scaling in a way that allows ongoing maintenance and security assessments. By design and philosophy, DevOps already emphasizes speed through automated deployments. DevSecOps takes it one step further and automates security protocols, checks, and testing to ensure that the software is ‘world-ready’ and not just a prototype.

When DevSecOps development cycles become part of an Agile sprint, they ensure that the software delivered remains robust and is updated against potential vulnerabilities.

Agile is also more than the ability to ship and deliver features and software. It includes the ability to respond to change from any source. This extends beyond market forces and competition to include vectors such as malicious actors and cyber attacks.

According to IBM’s Cost of a Data Breach Report 2021, the average cost incurred by a single data breach rose by almost 10% from USD 3.86 million to USD 4.24 million. DevSecOps is a preventative measure against cybercrimes and malicious data hijacking through various methods such as zero trust. IBM’s report says that organizations which adopted a zero trust approach reduced the costs of a breach by USD 1.76 million compared to organizations that hadn’t implemented this approach. Furthermore, those with mature DevSecOps processes were also able to contain a breach on average 77 days faster than those without.

With DevSecOps, security is built into the application during its development, making it easier to identify and resolve vulnerabilities sooner. This ensures that when software and its features are delivered, they maintain a baseline level of functional quality.

Agile is designed to help an organization maximize its profits by enabling developers to create software that enhances the ability to provide better products and services. However, the indirect costs of a breached application can result in lost revenue, diminished customer trust, reduced growth, and shrinkage in market share. Additionally, a breach requires the diversion of resources to resolve the breach and ensure that the software is structurally compliant with security needs.

Wrapping up: DevSecOps and Agile can co-exist

Speed and security can work together, especially in a DevSecOps and Agile environment. Agile doesn’t mean your team needs to sacrifice security and DevSecOps doesn’t mean you have to sacrifice speed.

DevSecOps matters because when implemented properly with Agile, both speed and security can be achieved at scale. Agile is only achieved when the software delivered is able to adapt to changes with minimal friction. The integration of the developer, quality assurance tester, security expert, and ops into a single cohort of developers as a DevSecOps team allows for a cohesive piece of software to be built. This can lead to fewer bugs, better modularity, and automation. In turn, this reduces the software’s structural and architectural resistance that naturally comes with any change.

]]>
How To Transition Your Team From DevOps To DevSecOps https://www.mend.io/blog/devops-to-devsecops-team-transition/ Thu, 14 Oct 2021 07:19:20 +0000 https://mend.io/devops-to-devsecops-team-transition/ DevOps has transformed the software development industry. The merging of development (Dev) and operations (Ops) teams has largely contributed to quick and effective software releases.

The continuous evolution of the application security threat landscape requires organizations to integrate security into the DevOps culture. Thus, DevSecOps has emerged to extend the capabilities of DevOps and enable enterprises to release secure software faster. 

However, making a move from DevOps to DevSecOps may be challenging. You need the right mindset and approach to make the shift painless and realize the full value of DevSecOps. 

DevOps and DevSecOps comparison

DevOps and DevSecOps are closely related agile development methodologies. Both of these approaches have quite a few similarities, such as relying on a collaborative culture to realize development objectives like fast iteration and deployment, using automation during the application development process, and actively monitoring and analyzing data to drive improvements.

On the other hand, DevSecOps and DevOps are set apart by their focus. DevOps focuses on meshing the development and the operations teams. The two teams collaborate throughout the development and deployment process to implement shared goals, which greatly optimize the speed of delivery. As DevOps teams race to increase the frequency of deployments, security is often the first casualty.

DevOps has evolved into DevSecOps as organizations have come to realize that the DevOps approach could make applications prone to security vulnerabilities. Rather than considering security at the end of the development pipeline, DevSecOps integrates security into the entire pipeline, right from the beginning. DevSecOps seeks to shift security left in the software development life cycle (SDLC).

Benefits of moving to DevSecOps

As you move forward on your DevSecOps journey, let’s take a closer look at how this approach can improve the security of the entire software delivery life cycle in your organization.

1. Addressing security early saves organizations time and money

DevSecOps emphasizes the need for embedding security into the entire DevOps workflow, right from the beginning — from the design, code, and deployment stages.

Instead of following the traditional technique of retrofitting security into the build, DevSecOps integrates security testing earlier throughout the development and operations pipeline. Addressing security issues as early as possible helps save a lot of valuable resources. Developers can solve anomalies before they reach production, which expedites delivery and lowers risks. 

2. Shared ownership of security reduces friction between teams

With DevSecOps, security becomes part of everyone’s job. In many organizations, the DevOps and security teams are at odds with each other. This strained relationship often leads to slower remediation and even poor and insecure applications.

DevSecOps seeks to break down the security silos—the applications, data, and security protocols that every department manages in its own particular way—that may impede inter-department communication in an organization. 

It enhances transparency and collaboration between developers, security, and operations teams throughout the software delivery life cycle. This increased communication assists in releasing secure and performant products.

3. Automated security testing speeds up remediation 

Incorporating automated application security testing tools into the development environment helps prevent security issues from entering the code, and helps detect and fix issues as early as possible. 

When organizations invest in security testing tools that integrate seamlessly into developer environments, developers can easily address security throughout development, without delaying or interrupting their workflows. 

Tips for transitioning successfully

Starting to implement DevSecOps practices is an enormous undertaking for most organizations. If your organization does not make the transition smoothly, it might not realize its benefits. 

Here are some tips on how to successfully transition from DevOps to DevSecOps. 

1. Establish a strong foundation

It’s important to start by building a strong foundation for your adoption of DevSecOps. Take a deep breath and ask yourself, “What does my organization intend to achieve, and what security measures are required?”

You can gather the impetus needed to succeed with your DevSecOps strategy through proper planning and appropriately laying down your objectives.

To establish a strong foundation, you can start small and incrementally incorporate new ideas as your DevSecOps practices mature. Breaking up tasks into simple, manageable pieces will prevent your team from becoming overwhelmed and confused.

2. Understand the big cultural change involved

The human element is a crucial part of the DevSecOps approach. 

Many team members might have trouble accepting a sweeping transition that changes the traditional way of doing things. Furthermore, because security was considered an afterthought in the DevOps model, resistance may be heightened. 

Since the transition to DevSecOps will affect everyone, it’s important to ensure all teams are included in the process. If anyone is not on the same page, it will be challenging to implement the new approach successfully.

You can begin by educating teams on what DevSecOps is, and its benefits to your organization. Encouraging a mindset change that prioritizes security will significantly assist the change process. 

3. Practice secure coding

It’s important to train developers on secure coding practices in order to move from DevOps to DevSecOps seamlessly. If every line of code is written with security considerations in mind, there will be fewer vulnerabilities in the final application.

Developers cannot address anomalies they do not understand. They should have the right skills to identify cyber security issues that may arise in their code. By investing time and resources into training developers, you can ensure they have the right skills to stay ahead of attackers. 

4. Use the right tools

Developers can’t be expected to become security experts overnight. While training is important, integrating automated security tools that support developers is crucial. Developers need the right tools to detect vulnerabilities at each stage of the delivery pipeline — from the time the first line of code is written to when it is deployed into production.

The best vulnerability scanning tools help prioritize alerts and send notifications so that the most dangerous issues are resolved immediately. For example, Mend Bolt is a free tool that lets you scan the open source components in your code and detect vulnerabilities. With Bolt, you can get real-time security alerts, find and fix open source vulnerabilities, and more. 

5. Evaluate progress

Evaluating your progress will help you to determine how your transition to DevSecOps is going. You can measure the metrics of the various SDLC phases—such as the amount of time taken to develop, test, or deploy the application—and compare them to the results after adopting the DevSecOps methodology. You can also use customer metrics to assess the progress of the transition. 
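
For example, a simple before-and-after comparison of one such metric, mean time to remediate (MTTR), can be computed directly from incident records. The figures below are hypothetical, invented purely for illustration.

```ruby
# Hypothetical before/after metric comparison for evaluating progress.
# The MTTR figures below are invented examples, not real measurements.
def mean(values)
  values.sum.to_f / values.size
end

before_mttr = [21, 18, 30, 25]  # days per vulnerability, pre-DevSecOps
after_mttr  = [7, 9, 5, 11]     # days per vulnerability, post-DevSecOps

improvement = mean(before_mttr) - mean(after_mttr)
puts format("MTTR improved by %.1f days on average", improvement)
# prints "MTTR improved by 15.5 days on average"
```

The same pattern applies to any phase metric you track, such as build duration, deployment frequency, or time from alert to fix.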

Measuring success will also help assess the productivity of your DevSecOps team. You can even evaluate your teams’ motivation regarding the new shift in your organization. If your team freely shares the shortfalls they’ve encountered, productively gives each other feedback, and openly raises incidents without any fear of repercussions, this environment based on trust can help your organization grow when adopting DevSecOps. 

6. Keep learning

Reorganizing your development practices from DevOps to DevSecOps is a continual process. It requires a deliberate approach where everyone involved keeps learning throughout the entire process. If mistakes happen, learn from them and continue to move forward. 

Remember that today’s cyber security landscape changes constantly. You should not learn DevSecOps once and then hang up your boots. If you keep looking for better ways of preventing and remediating vulnerabilities, you’ll stay a step ahead of the attackers. 

Conclusion

Transforming your team from DevOps to DevSecOps is not something you can accomplish overnight. You need to strategize your actions and decisions before making a move. We hope that this article has shed light on how to make the transition seamless and beneficial. 

DevSecOps is a movement that is growing rapidly. And since its train has already left the station, the sooner your teams hop aboard, the better. 

]]>
Using Zero Trust to Mitigate Supply Chain Risks https://www.mend.io/blog/using-zero-trust-to-mitigate-supply-chain-risks/ Thu, 30 Sep 2021 06:00:00 +0000 https://mend.io/using-zero-trust-to-mitigate-supply-chain-risks/ Software supply chain attacks have been on the rise lately. With the current pervasiveness of third-party and open source libraries, which presumably developers cannot control as strongly as the code they create, vulnerabilities in these software dependencies are causing serious security risks to applications. 

Supply chain attacks abuse the inherent trust that users have with a software provider. When a vendor uses a vulnerable dependency, a miscreant can penetrate the vendor’s system and plant malicious code. When the vendor distributes their software downstream to a wider audience, the attacker uses the trusted vendor’s software to stage attacks. 

Zero Trust is a security model that aims to eliminate the implicit trust users place in any type of software. It assumes breach and helps you deploy proactive strategies for defending your systems. 

This article covers Zero Trust and how you can use it to take your software supply chain security to the next level.

What Is Zero Trust Security?

Zero Trust is a modern security model that throws away the idea of trust in an organization’s computing infrastructure, just as its name suggests. It’s a proactive approach that requires every request, whether coming from inside or outside the corporate network, to be authenticated, authorized, and continuously validated for security compliance before being allowed to access resources and data.

Zero Trust is based on the “never trust, always verify” concept. Rather than trusting everything behind the organization’s firewall, it assumes a breach and verifies every request as if it originates from an untrusted network. 

Zero Trust is a significant shift from the old castle-and-moat model that follows the “trust but verify” concept. You can think of your organization’s network as the castle in which authorized individuals are able to “cross the moat defenses” and get inside your network perimeter. 

The traditional perimeter-based security model concentrates on the dangerous idea that trusted users can access any resource within the organization while untrusted users are kept outside the network boundaries. 

Coined by an industry analyst at Forrester Research in 2010, Zero Trust has evolved into a model that addresses today’s supply chain security challenges. The old perimeter-based security approach, where a protective wall is built around the IT system, has proven to be ineffective. 

In a Zero Trust environment, all software is untrusted by default. Therefore, sufficient measures must be in place to verify and secure all resources before access is granted. 

Using Zero Trust to Neutralize Supply Chain Risks

The Zero Trust philosophy depends on several existing technologies and best practices to realize its objective of safeguarding the enterprise IT environment.

Let’s delve into some of its core principles and how you can use them to address supply chain vulnerabilities.

1. Verify Explicitly

Zero Trust advocates for authenticating and verifying access to every resource. The identity of the users, their access privileges, as well as the identity and security of their devices must be continuously validated. 

Zero Trust assumes there are intruders both inside and outside the organization’s network. Therefore, you should not automatically trust any user, device, or package.

The initial attack vector is usually not the ultimate goal of a supply chain attacker. Therefore, verifying explicitly can assist in preventing a bad actor from gaining that initial access. For example, after completing login or connection requests, users and devices should time out at defined intervals, compelling them to be constantly re-authenticated. Also, if an already logged-in user tries to access another resource within the network, they must be re-validated.

With the Zero Trust approach, you should not automatically perform dependency updates. You should assume that every update is unsafe and explicitly verify its security before installing it. 

2. Enforce Least Privilege

Least privilege is a security principle that mandates that a user should be given only the bare minimum level of access or permissions necessary to complete the required task.

In a Zero Trust environment, every user’s privileges and access to data are tightly controlled. This prevents intruders from using a single breached account to make lateral movements that could allow them to inject malicious code into your software.

You can also limit the permissions granted to third-party libraries on your systems. This reduces security risks and safeguards against supply chain attacks.

3. Practice Microsegmentation

Microsegmentation is another Zero Trust principle. It refers to setting up separate segments across a network with their own unique access credentials. A user or device able to access a single segment is not allowed to access another segment without separate verification.

This practice enhances security and prevents attackers from making lateral movements and wreaking havoc on the entire system, even if a single package is compromised. If a vulnerability is detected, the breached dependency can be quarantined and sanitized without affecting the rest of the software. 

4. Apply Multi-Factor Authentication

Multi-factor authentication (MFA) is a vital Zero Trust principle that requires multiple pieces of independent evidence before a user is authenticated—providing only a password is not sufficient to gain access. 

A popular implementation of MFA is two-factor authentication (2FA), which, in addition to a password, requires users to enter a code sent to a device they own, such as a mobile phone, to confirm they are who they claim to be. As such, 2FA provides two independent pieces of evidence of a user’s identity.
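The codes described above are often delivered by SMS, but many authenticator apps instead derive them locally using the TOTP algorithm (RFC 6238), which builds on HOTP (RFC 4226). The Python sketch below shows how such a code is computed; it is for illustration only, and production systems should rely on a vetted MFA library:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: last nibble picks the offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30) -> str:
    """Time-based OTP (RFC 6238): HOTP keyed to 30-second time windows."""
    return hotp(secret, unix_time // step)
```

With the RFC 4226 test secret `b"12345678901234567890"`, `hotp(secret, 0)` yields `"755224"`, matching the published test vectors.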

5. Inspect and Log All Activities

Zero Trust requires that every activity be inspected, analyzed, and logged without disruption. By continuously monitoring your packages, you can discover abnormal activities that could bring your software to its knees. For example, if you notice that a dependency is trying to open connections to external resources it does not need in order to perform its function, you can raise a red flag and investigate the issue. 
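The dependency example above amounts to an egress allowlist check. As a minimal illustration (the hostnames and function name are hypothetical, not part of any specific monitoring tool):

```python
# Hosts this dependency is expected to contact; everything else is suspicious.
ALLOWED_HOSTS = {"repo.packagist.org", "api.github.com"}

def flag_unexpected(observed_hosts):
    """Return the outbound destinations that fall outside the expected set,
    sorted so the report is deterministic."""
    return sorted(set(observed_hosts) - ALLOWED_HOSTS)

print(flag_unexpected(["repo.packagist.org", "exfil.example.net"]))
# → ['exfil.example.net']
```

In practice the allowlist would come from the package's declared behavior, and a flagged host would feed an alerting pipeline rather than a print statement.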

How Mend Supply Chain Defender Can Help

Trust is the fundamental problem in today’s dynamic and rapidly evolving security landscape. The modern security environment requires a resilient model that can easily adapt to the shifting threats and perimeters.

By eliminating implicit trust from an enterprise’s network architecture, the Zero Trust approach is crucial to ensuring protected and healthy digital environments. It’s the model you need to embrace to enhance the security of your data, devices, and people across the organization.

Mend Supply Chain Defender can help you apply Zero Trust strategies to your software supply chain. With Supply Chain Defender, you can avoid poisoning your downstream clients with malicious applications. It increases your visibility into third-party libraries so that you can avoid treating them blindly as trusted components of your software. 

With Mend Supply Chain Defender, you can scan your packages for the presence of vulnerabilities, control the dependencies allowed in your enterprise, assess if any package update is malicious, and more. It’s the tool you need to fortify your defenses against software supply chain attacks. 

How To Manage PHP Dependencies Using Composer
https://www.mend.io/blog/how-to-manage-php-dependencies-using-composer/ Wed, 11 Aug 2021 07:34:41 +0000

PHP dependency management is crucial for making the most of your development efforts. It ensures the third-party packages or libraries that your project depends on are functioning optimally.

Composer is a popular tool that allows you to practice dependency management in your PHP projects—just like JavaScript (and Node.js) uses npm or Yarn.

With the Composer package manager, you can declare the libraries of code that your project requires so that it can manage them for you. You can use it to effectively install, update, and manage your PHP dependencies.

This article talks about how to use Composer to manage dependencies in the PHP programming language.

PHP dependency management with Composer

Composer is a free and open source tool that makes managing PHP dependencies easier. As your project grows, it becomes difficult to track all of its moving parts. 

However, with Composer, you can effectively manage your direct and transitive dependencies (dependencies of dependencies) in PHP and release quality software.

Here are some benefits of using Composer:

  • Allows you to incorporate ready-made packages that help you solve common programming problems. 
  • Helps you to keep all your packages up-to-date.
  • Conveniently autoloads all your files and classes.
  • Helps you to gain visibility into your dependencies and keep them functional and secure. 

Composer mostly depends on three things to work:

  • composer.json file—declares all the dependencies to install in the project. 
  • composer.lock file—records the specific versions of the installed dependency packages. 
  • Packagist—the main public repository of packages installable with Composer. 

How to install Composer

Composer is a multi-platform tool you can install on Windows, macOS, and Linux. It requires at least PHP 5.3.2 to run, and you may encounter a few other system requirements as you set it up in your preferred environment. 

If you’re a Windows user, you can use the Composer Setup file to install it. You can find the instructions here.

If you’re a macOS or Linux user, you can navigate to your project’s directory on the terminal and run the following command:

curl -sS https://getcomposer.org/installer | php

A composer.phar file will then be downloaded into your local project directory.

You can also make Composer accessible from anywhere on your system by moving the file into a directory on your PATH:

mv composer.phar /usr/local/bin/composer

Once you have installed Composer globally, you can use the composer command to access it. 

How to use the composer.json file

The composer.json file contains a description of your project’s dependencies and possibly other metadata as well. It’s all you need to begin using Composer in your project.

To follow along in this Composer tutorial, you can create a directory on your system and add a file called composer.json to it.

You can then add the following sample declaration to the file:

{
    "require": {
        "php": ">=7.3.0",
        "swiftmailer/swiftmailer": "^6.2"
    },
    "require-dev": {
        "symfony/var-dumper": "^6.0",
        "phpunit/phpunit": "~8.0"
    },
    "autoload": {
        "psr-4": {
            "": "app/"
        }
    },
    "autoload-dev": {
        "psr-4": {
            "Tests\\": "tests/"
        }
    }
}

Let’s describe what’s happening in the above JSON file:

  • The require key specifies the packages your project depends on. The mentioned packages are indispensable for the application’s performance. Note that the key takes an object that maps package names in the vendor/package format (such as swiftmailer/swiftmailer) to version constraints (such as ^6.2). 
  • The require-dev key specifies the packages used for development, and will not be needed in the production environment. 
  • The autoload key tells Composer how to autoload your project’s own classes—here, the PSR-4 rule maps the root namespace to the app/ directory. 
  • The autoload-dev key does the same for development-only classes, mapping the Tests\ namespace to the tests/ directory. 

How to use the update command

To initially install the dependencies already specified in the composer.json file, you can run the update command.

Here is the command to run at the root of your project:

php composer.phar update

If you installed Composer globally, run the following:

composer update

This is what the update command does:

  • Reads the dependencies listed in the composer.json file.
  • Creates the composer.lock file. It then locks the packages to their specific versions in the composer.lock file. If you share the project, the composer.lock file allows everyone in your team to use the exact same dependencies’ versions and avoid inconsistencies. This is the main function of the update command.
  • Implicitly runs the install command to download the dependencies according to their defined version constraints.
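Because composer.lock is plain JSON, it is easy to inspect the pinned versions yourself. The sketch below uses Python purely for illustration; the packages array mirrors the real lock file layout, but the package names and versions shown are example values:

```python
import json

# A trimmed example of the structure found in a composer.lock file.
lock_contents = """
{
    "packages": [
        {"name": "swiftmailer/swiftmailer", "version": "v6.2.7"},
        {"name": "symfony/polyfill-mbstring", "version": "v1.23.0"}
    ]
}
"""

def pinned_versions(lock_json):
    """Map each locked package name to the exact version recorded for it."""
    lock = json.loads(lock_json)
    return {pkg["name"]: pkg["version"] for pkg in lock.get("packages", [])}

print(pinned_versions(lock_contents))
```

Seeing the exact pins makes it clear why committing composer.lock keeps every team member on identical dependency versions.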

How to update dependencies

You can also use the update command to upgrade your project dependencies. This command will also update both the composer.json file and the composer.lock file with the current state of the versions of your project dependencies. 

To universally update all your installed dependencies at once, run the following command:

composer update

If you want to upgrade all packages excluding development dependencies, run the following command:

composer update --no-dev

If you want to perform package-specific updates, run the following command:

composer update vendor1/package1 vendor2/package2

Remember to substitute vendor/package names with the details of the dependencies you intend to update using Composer. 

It’s important to note that Composer follows the semantic versioning notation to define package versions using the major.minor.patch format.

Let’s look at some options you can use in the composer.json file:

  • Specify an exact version constraint. For example, 1.3.0 allows that specific version and no other. 
  • Specify the upper and lower bounds using >, <, >=, and <= operators. For example, >=1.3.0 can be updated to any version above or equal to 1.3.0. 
  • Use a wildcard to specify all the allowed version ranges to update to. For example, 1.3.* can be updated to within >=1.3.0 and <1.4.0.
  • Use a tilde to specify the last digit that can be updated upwards. For example, ~1.3 can be updated to within >=1.3.0 and <2.0.0. Also, ~1.3.2 can be updated to within >=1.3.2 and <1.4.0. 
  • Use a caret to avoid updating to major versions that could introduce breaking changes. For example, ^1.3.2 can be updated to within >=1.3.2 and <2.0.0.
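The tilde and caret rules above can be made concrete with a small sketch. The Python below is illustration only (Composer implements these rules itself, in PHP), and it deliberately ignores edge cases such as caret ranges on 0.x versions and stability flags:

```python
def parse(version):
    """Turn "1.3" or "1.3.2" into a comparable (major, minor, patch) tuple."""
    parts = [int(p) for p in version.split(".")]
    return tuple(parts + [0] * (3 - len(parts)))

def bounds(constraint):
    """Lower and upper version bounds for a tilde or caret constraint."""
    op, ver = constraint[0], constraint[1:]
    parts = [int(p) for p in ver.split(".")]
    lower = tuple(parts + [0] * (3 - len(parts)))
    if op == "^" or len(parts) == 2:
        upper = (parts[0] + 1, 0, 0)         # ^1.3.2 -> <2.0.0, ~1.3 -> <2.0.0
    else:
        upper = (parts[0], parts[1] + 1, 0)  # ~1.3.2 -> <1.4.0
    return lower, upper

def satisfies(version, constraint):
    """True when the version falls inside the constraint's half-open range."""
    lower, upper = bounds(constraint)
    return lower <= parse(version) < upper

print(satisfies("1.9.0", "~1.3"))    # True: within >=1.3.0 and <2.0.0
print(satisfies("1.4.0", "~1.3.2"))  # False: tilde pins the minor version
```

Working through a few versions this way is a quick sanity check before committing a constraint to composer.json.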

How to use the install command

You can run the install command to install all the dependencies listed in the composer.lock file. It uses the exact versions in the composer.lock file to ensure that everyone working on your project uses the same versions. The install command does not update any packages. 

Here is the command to run at the root of your project:

php composer.phar install

If you installed Composer globally, run the following:

composer install

If you want to install all packages excluding development dependencies, run the following command:

composer install --no-dev

This is what the install command does:

  • If composer.lock file already exists, it resolves and installs dependencies from the file. 
  • If composer.lock file does not exist, it runs the update command to create it. It then resolves and installs the dependencies listed in the composer.lock file.

So, what is the difference between composer update and composer install commands?

The composer update command is mainly used during the development stage to upgrade dependencies based on the version constraints defined in the composer.json file. 

On the other hand, the composer install command is mainly used during the deployment stage when the application needs to be set up in a testing or production environment. It ensures the same dependencies specified in the composer.lock file, which was generated by running the update command, are used in deployment.

How to use the require command

You can run the require command to install dependencies just by specifying the package details on the command line.

Here is the command to run at the root of your project:

php composer.phar require vendor/package

If you installed Composer globally, run the following:

composer require vendor/package

You can also specify to install a specific package version using the require command. Here is an example:

composer require vendor/package:1.3

This is what the require command does:

  • Adds new dependencies to the composer.json file. If the file does not exist, the command will create it on the fly. It’s like a shortcut to creating the composer.json file.
  • Downloads the specified package to the project. 
  • Updates the composer.lock file as well. 

How to uninstall dependencies

To uninstall a PHP dependency using Composer, delete its details from the require or require-dev section of the composer.json file and run the following command:

composer update

You can also use the remove command to uninstall a package. It will remove the stated packages from the composer.json file and uninstall them. 

Here is how to run it:

composer remove vendor1/package1 vendor2/package2

How to set up autoloading

If you have installed libraries that specify autoloading information, you can use the helpful autoloading feature to make development easier. 

To autoload all the dependencies, just include the following in your code:

require __DIR__ . '/vendor/autoload.php';

Conclusion

That’s it!

We hope that this tutorial has covered most of the common scenarios for managing PHP dependencies using Composer. Remember that you can always dig into the Composer documentation to discover other ways of using this handy tool. 

Of course, managing dependencies manually, especially keeping them all up to date, is tedious and time-consuming, particularly if your project relies on packages that release updates frequently.  

With Mend Renovate, you can automate dependency updates in your PHP projects and increase your development productivity. It’s the tool you need to save time, reduce security risks, and keep your applications performant. 

Happy PHP coding!

DevOps vs. Agile: What Is the Difference?
https://www.mend.io/blog/devops-vs-agile/ Thu, 05 Aug 2021 12:29:18 +0000

DevOps and Agile are popular modern software development methodologies. According to the 14th Annual State of Agile Report, 95% and 76% of respondents stated that their organizations had adopted Agile and DevOps development methods, respectively. Interestingly, both approaches share the same aim: deliver the end product as efficiently and quickly as possible.

Despite the popularity and shared goals of Agile and DevOps methodologies, there is often confusion about what differentiates them from each other. While most organizations are eager to deploy these development practices, they often struggle with the best approach to adopt. 

In this blog post, we’ll explore both methodologies and outline their similarities and differences.

What is Agile?

In software development, Agile is a methodology built around close collaboration, customer satisfaction, and small, rapid release cycles. It’s a step-by-step approach that aims to continuously produce small, manageable, and quality increments of software by means of iterative development and testing. 

As the name implies, Agile advocates for flexibility and adaptability when building software—customer feedback or market changes are responded to quickly, instead of following rigid plans. That’s why Agile meets the demands of today’s fast-paced world in which customers desire continuous technological innovation. 

The Agile philosophy is about people. It aims to build an organizational culture in which human interactions are favored over stringent processes. Agilists allow employees to be steered by their experience and shared values, instead of micromanaging or confining them to exhaustive documentation.  

Agile was born in the early 2000s as an alternative to the traditional waterfall methodology. Waterfall focuses on a structured, sequential, and linear approach to developing software. It’s an inflexible model that doesn’t meet the expectations of the modern world. 

Agile is not one single approach but a collection of values and principles that guide software development decisions. It draws on a broad range of practices, such as Scrum, Kanban, and Extreme Programming (XP), which developers were already using before the term was popularized. The aggregation of these practices resulted in the publication of the Agile Manifesto, which outlines the core values and principles of the Agile software development methodology.

What is DevOps?

DevOps is a methodology that combines software development (Dev) and operations (Ops). It’s a culture, a state of mind that allows the software to be created, tested, and released reliably, quickly, and efficiently by embracing Agile values and principles. 

In both DevOps and Agile environments, software is developed, tested, and deployed. Traditional Agile, however, does not address IT operations, which is a continuous and essential part of the DevOps pipeline. In fact, DevOps aims to bring Agile practices to a new audience: the operations team.

DevOps seeks to break the barriers between the development team that writes the code and the operations team that deploys and manages the software in production. It eliminates the old technique of development teams writing an application and then leaving the operations teams to run it without sufficient visibility into how it was created. It’s a new approach that empowers these two teams to work side by side throughout the software development life cycle.

By bringing together these two diverse groups, DevOps supports continuous integration (CI) and continuous deployment (CD), automated testing, and transparent processes in software development. 

DevOps came to the fore around 2009. Since then, the approach has evolved to include other revolutionary software development techniques. For example, DevSecOps is a growing practice that involves integrating security into DevOps right from the beginning. 

DevOps vs. Agile comparison

DevOps and Agile both aim to enhance the software development process. They emphasize quality results, efficiency, and speed when developing software. They also advocate for shorter, frequent release cycles.

One term that permeates both methodologies is the idea of shifting left. It refers to improving software quality by finding and fixing defects during the early stages of the software development process. 

So what is the difference between DevOps and Agile?

Although both methodologies aim for the same end goal, they do so by taking different routes. 

Let’s look at some concepts that set them apart.

1. Teams

A major differentiating factor in the Agile versus DevOps comparison debate is how each team is structured. Basically, DevOps unites two large disparate teams to work together to deliver faster, more reliable software releases. The skillset is divided among the members of the development and operations teams. This implies that everyone is assigned specific duties to accomplish during the development process. 

This DevOps approach encourages inter-department collaboration and reduces friction between the development and operations teams. Because each team member is a master of their own skill set, however, this structure can limit their ability to step into each other’s shoes.

The Agile approach, on the other hand, seeks to bring smaller teams together so that they can collaborate and swiftly address the ever-changing consumer expectations. Instead of allocating specific tasks to members, it encourages each of them to share duties equally. This makes any Agile team member capable of handling any aspect of the project at any time. Since the groups are isolated, however, it could limit cross-functionality within the organization.

2. Strategies

DevOps and Agile also differ in terms of their strategies and processes. In DevOps, automation methodologies are employed to boost productivity and maximize efficiency. In Agile processes, however, automation is not emphasized, though it can still be used to improve output. 

Another differentiating strategy is the use of sprints in Agile projects. Tasks are broken down into short, repeatable phases, usually lasting one to four weeks. The assigned duties are completed in increments, with every sprint beginning immediately after the previous one ends. DevOps, by contrast, emphasizes continuous delivery, with specific deadlines to hit and benchmarks to meet, some of which may be daily or within a few hours.

DevOps and Agile also tend to use different tools when implementing their strategies. Some of the popular DevOps tools include Jenkins, CircleCI, and Azure DevOps Pipelines. Agile, on the other hand, uses such popular tools as MindMeister, Aha!, and Trello.

Agile vs. DevOps: Which is better?

The most important thing to understand about the DevOps versus Agile debate is that they are not mutually exclusive. It may not be necessary to choose one over the other. Although they are different, a blend of both disciplines can lead to more beneficial outcomes.

Agile replaced the traditional waterfall model. DevOps, however, is not an Agile replacement. DevOps rose to prominence because of Agile. Therefore, the two can coexist and enable each other. 

Instead of trying to choose between Agile and DevOps, you can apply them in tandem. That’s the best way of surmounting their individual weaknesses and making the most out of them. 

For example, you can empower individual teams to embrace an Agile approach within a wider DevOps culture. That way, each of their biases can be used together in a complementary manner.

What about Agile and DevOps security?

Effectively implementing DevOps and Agile practices requires security consciousness; otherwise, your efforts could be built on sand. Incorporating security into your Agile and DevOps cycles helps you deliver robust applications that enhance user satisfaction.
