
Conversation


@willkill07 willkill07 commented Feb 2, 2026

Description

This also re-adds all examples at the top level so we can track all example dependencies in the top-level uv.lock.

Adds two new scripts:

  • ./ci/scripts/license_diff.py - shows the license/package changes from HEAD to a base branch (default: develop). Very useful for finding new, updated, or removed packages in PRs. Output is written to standard output.
  • ./ci/scripts/sbom_list.py - shows the full package SBOM with package name, version, and license. Exported as sbom_list.tsv.
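The tab-separated export that sbom_list.py produces can be sketched with the stdlib csv module (a guess at the approach; the rows below are illustrative, not the script's actual output):

```python
import csv
import io

# Illustrative rows only; the real script derives these from uv.lock plus
# PyPI metadata lookups.
packages = [
    {"name": "lxml", "version": "6.0.2", "license": "BSD-3-Clause"},
    {"name": "uvloop", "version": "0.22.1", "license": "MIT OR Apache-2.0"},
]

# DictWriter with a tab delimiter yields the .tsv layout: name, version, license.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "version", "license"], delimiter="\t")
writer.writeheader()
writer.writerows(packages)
tsv = buf.getvalue()
print(tsv)
```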

Example output of license_diff.py for this very PR is shown below.

Notable output considerations:

  • license changes are only printed when a package's license has changed
  • source packages (i.e., those shipped with the repo) are indicated via (source)
Added packages:
- nat-adk-demo (source)
- nat-agents-examples (source)
- nat-agno-personal-finance (source)
- nat-alert-triage-agent (source)
- nat-autogen-demo (source)
- nat-automated-description-generation (source)
- nat-currency-agent-a2a (source)
- nat-documentation-guides (source)
- nat-dpo-tic-tac-toe (source)
- nat-email-phishing-analyzer (source)
- nat-haystack-deep-research-agent (source)
- nat-kaggle-mcp (source)
- nat-math-assistant-a2a (source)
- nat-math-assistant-a2a-protected (source)
- nat-multi-frameworks (source)
- nat-notebooks (source)
- nat-per-user-workflow (source)
- nat-plot-charts (source)
- nat-por-to-jiratickets (source)
- nat-profiler-agent (source)
- nat-react-benchmark-agent (source)
- nat-redis-example (source)
- nat-retail-agent (source)
- nat-rl-with-openpipe-art (source)
- nat-router-agent (source)
- nat-semantic-kernel-demo (source)
- nat-sequential-executor (source)
- nat-service-account-auth-mcp (source)
- nat-simple-auth (source)
- nat-simple-auth-mcp (source)
- nat-simple-calculator (source)
- nat-simple-calculator-custom-routes (source)
- nat-simple-calculator-eval (source)
- nat-simple-calculator-hitl (source)
- nat-simple-calculator-mcp (source)
- nat-simple-calculator-mcp-protected (source)
- nat-simple-calculator-observability (source)
- nat-simple-rag (source)
- nat-simple-web-query (source)
- nat-simple-web-query-eval (source)
- nat-strands-demo (source)
- nat-swe-bench (source)
- nat-user-report (source)
Changed packages:
- lxml 5.4.0 -> 6.0.2 (License :: OSI Approved :: BSD License -> BSD-3-Clause)
- openinference-instrumentation-langchain 0.1.29 -> 0.1.58 (License :: OSI Approved :: Apache Software License -> Apache-2.0)
- uvloop 0.21.0 -> 0.22.1

Closes

By Submitting this PR I confirm:

  • I am familiar with the Contributing Guidelines.
  • We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
    • Any contribution which contains commits that are not Signed-Off will not be accepted.
  • When the PR is ready for review, new or existing tests cover these changes.
  • When the PR is ready for review, the documentation is up to date with these changes.

Signed-off-by: Will Killian <wkillian@nvidia.com>
@willkill07 willkill07 self-assigned this Feb 2, 2026
@willkill07 willkill07 requested a review from a team as a code owner February 2, 2026 20:49
@willkill07 willkill07 added the feature request New feature or request label Feb 2, 2026
@willkill07 willkill07 added the non-breaking Non-breaking change label Feb 2, 2026

coderabbitai bot commented Feb 2, 2026

Walkthrough

This PR adds infrastructure for managing examples and tracking licenses in the NAT project. It introduces two new CI scripts for comparing license changes across branches and generating SBOM data from lock files, updates project configuration to catalog examples and define UV source mappings, adds a pre-commit hook for lock file validation, and clarifies dependency guidelines in documentation.

Changes

Cohort / File(s) Summary
Configuration & Guidelines
.coderabbit.yaml, .cursor/rules/nat-setup/nat-toolkit-installation.mdc, .pre-commit-config.yaml
Updated guidelines for example and dependency handling; added uv-lock pre-commit hook for pyproject.toml validation; expanded documentation on discovering available plugins via pyproject.toml inspection.
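The uv-lock hook likely resembles the entry below (a sketch only — the repo URL and rev are assumptions based on the published astral-sh uv-pre-commit hook; check the PR's actual .pre-commit-config.yaml):

```yaml
# Illustrative only; pin rev to the version the project actually uses.
repos:
  - repo: https://github.com/astral-sh/uv-pre-commit
    rev: 0.5.0
    hooks:
      - id: uv-lock   # fails if uv.lock is stale relative to pyproject.toml
```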
CI Scripts
ci/scripts/license_diff.py, ci/scripts/sbom_list.py
Two new Python scripts for dependency analysis: license_diff compares uv.lock changes between branches and resolves license info via PyPI; sbom_list generates SBOM data by indexing uv.lock packages and querying PyPI for license metadata.
Project Configuration
pyproject.toml
Added examples optional dependency listing 54+ example identifiers; expanded UV source mappings to include 24 packages and 40 examples with editable paths, enabling local development workflows for NAT examples and integrations.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly describes the main changes: adding utility scripts (license_diff.py and sbom_list.py) for CI/license updates and SBOM generation, matching the primary objectives of the pull request.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 7

🤖 Fix all issues with AI agents
In `@ci/scripts/license_diff.py`:
- Around line 86-89: The added_packages and removed_packages comprehension
currently include internal packages despite the intent to skip
"nvidia-nat*"—update the comprehensions that build added_packages and
removed_packages to filter out packages whose names start with "nvidia-nat" the
same way changed_packages does (i.e., use the same pkg.startswith("nvidia-nat")
check), ensuring all three dictionaries consistently exclude internal packages;
locate the comprehensions that set added_packages, removed_packages, and
changed_packages and apply the filter to the first two.
- Around line 111-124: The change lines currently render "head_version ->
base_version" and "(head_license -> base_license)" which inverts the direction;
update the two formatted strings in the list_of_changes append calls to show
base first then head (i.e., use base_version -> head_version and base_license ->
head_license) so the diff reads "base -> head"; locate the code around the loop
using changed_packages, head_packages, base_packages and pypi_license and swap
the order of the version and license interpolations in both appended strings.
- Around line 42-45: The urllib.request.urlopen calls in pypi_license() (PyPI
metadata fetch using variable url) and in main (GitHub uv.lock fetch) lack
timeouts; either add timeout=10 to both urlopen(...) calls or, preferably,
replace these requests with an httpx.Client() usage (create a client with
default verify=True) and perform client.get(url, timeout=10) to fetch and
json.loads() the response content; update pypi_license() and the main fetch
logic to use the httpx client and ensure responses are checked for successful
status before parsing.
- Around line 134-137: Validate and sanitize the CLI input for --base-branch
(args.base_branch) after parsing to prevent malformed GitHub URLs; specifically,
restrict it to a safe character set (e.g., allow letters, digits, dot,
underscore, hyphen and slash via a regex like r'^[A-Za-z0-9._/-]+$') and call
parser.error(...) or exit with a clear message when the value fails validation,
or alternatively percent-encode the branch name before using it in the GitHub
API URL construction that interpolates args.base_branch.

In `@ci/scripts/sbom_list.py`:
- Around line 62-68: Rename the unused parameter base_name in the function
process_uvlock to _base_name to signal it's intentionally unused (update the
function signature accordingly), and update the docstring parameter section to
document _base_name instead of base_name while keeping the compatibility note;
verify there are no internal references to base_name that need changing and run
tests/linting to ensure no unused-parameter warnings remain.
- Around line 42-44: Replace the blocking urllib.request.urlopen call with the
project's preferred httpx synchronous client: create/reuse an httpx.Client in
main() (or the calling scope), fetch the PyPI URL via client.get(url,
timeout=10) and parse the JSON via response.json(), and ensure the client is
closed (use a with httpx.Client() as client or store and close it) so requests
to the URL in the sbom_list.py function replace urllib.request.urlopen(url) and
json.load(r) with client.get(...).json() using timeout and proper lifecycle
management.

In `@pyproject.toml`:
- Around line 109-154: The examples list contains "text_file_ingest" which lacks
the required nat_ prefix; update the examples array entry to
"nat_text_file_ingest" and also update the corresponding entry in
tool.uv.sources (where "text_file_ingest" is referenced) to
"nat_text_file_ingest" so both the examples list and tool.uv.sources use the
required nat_ prefix.

Comment on lines +42 to +45
    # Use version-specific metadata when available to avoid mismatches.
    url = f"https://pypi.org/pypi/{name}/json" if version is None else f"https://pypi.org/pypi/{name}/{version}/json"
    with urllib.request.urlopen(url) as r:
        data = json.load(r)

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n ci/scripts/license_diff.py | head -60

Repository: NVIDIA/NeMo-Agent-Toolkit

Length of output: 2743


🏁 Script executed:

rg -n "httpx|urllib" pyproject.toml uv.lock | head -30

Repository: NVIDIA/NeMo-Agent-Toolkit

Length of output: 3226


🏁 Script executed:

rg -n "urlopen" ci/scripts/license_diff.py

Repository: NVIDIA/NeMo-Agent-Toolkit

Length of output: 150


🏁 Script executed:

sed -n '70,80p' ci/scripts/license_diff.py

Repository: NVIDIA/NeMo-Agent-Toolkit

Length of output: 556


Add timeout to urllib.request.urlopen() calls and consider switching to httpx.

This script makes two unauthenticated network requests without timeout:

  1. Line 44: PyPI metadata fetch in pypi_license()
  2. Line 74: GitHub uv.lock fetch in main

Without explicit timeouts, these can hang CI jobs indefinitely. Switching to httpx is preferred per project guidance (OWASP Top-10 compliance with SSL verification enabled by default); httpx is already a project dependency.

At minimum, add timeout=10 to both urlopen() calls. Better: migrate to httpx.Client() with verify=True (default).

🧰 Tools
🪛 Ruff (0.14.14)

[error] 44-44: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)

Comment on lines +86 to +89
    # Track third-party dependency changes only (skip internal `nvidia-nat*`).
    added_packages = {pkg: head_packages[pkg] for pkg in added}
    removed_packages = {pkg: base_packages[pkg] for pkg in removed}
    changed_packages = {pkg: head_packages[pkg] for pkg in intersection if not pkg.startswith("nvidia-nat")}

⚠️ Potential issue | 🟡 Minor

Filter internal packages consistently for added/removed.

The comment says internal nvidia-nat* packages are skipped, but added_packages and removed_packages currently include them.

🧹 Suggested fix
-    added_packages = {pkg: head_packages[pkg] for pkg in added}
-    removed_packages = {pkg: base_packages[pkg] for pkg in removed}
+    added_packages = {pkg: head_packages[pkg] for pkg in added if not pkg.startswith("nvidia-nat")}
+    removed_packages = {pkg: base_packages[pkg] for pkg in removed if not pkg.startswith("nvidia-nat")}

Comment on lines +111 to +124
    list_of_changes = []
    for pkg in sorted(changed_packages.keys()):
        try:
            head_version = head_packages[pkg]["version"]
            base_version = base_packages[pkg]["version"]
            if head_version == base_version:
                # Only report version or license changes.
                continue
            head_license = pypi_license(pkg, head_version)
            base_license = pypi_license(pkg, base_version)
            if head_license != base_license:
                list_of_changes.append(f"- {pkg} {head_version} -> {base_version} ({head_license} -> {base_license})")
            else:
                list_of_changes.append(f"- {pkg} {head_version} -> {base_version}")

⚠️ Potential issue | 🟠 Major

Swap base/head order in change output.

The output currently prints head -> base, which inverts the change direction.

🔁 Suggested fix
-                list_of_changes.append(f"- {pkg} {head_version} -> {base_version} ({head_license} -> {base_license})")
+                list_of_changes.append(f"- {pkg} {base_version} -> {head_version} ({base_license} -> {head_license})")
             else:
-                list_of_changes.append(f"- {pkg} {head_version} -> {base_version}")
+                list_of_changes.append(f"- {pkg} {base_version} -> {head_version}")

Comment on lines +134 to +137
if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Report third-party dependency license changes between lockfiles.")
    parser.add_argument("--base-branch", type=str, default="develop")
    args = parser.parse_args()

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cat -n ci/scripts/license_diff.py | head -200

Repository: NVIDIA/NeMo-Agent-Toolkit

Length of output: 6463


Validate the --base-branch CLI input to prevent malformed GitHub URLs.

The argument is directly interpolated into the GitHub API URL at line 75 without validation. Add input sanitization to restrict the value to safe characters.

🛡️ Suggested fix
 import argparse
 import json
 import tomllib
 import urllib.request
+import re
@@
 if __name__ == "__main__":
     parser = argparse.ArgumentParser(description="Report third-party dependency license changes between lockfiles.")
     parser.add_argument("--base-branch", type=str, default="develop")
     args = parser.parse_args()
+    if not re.fullmatch(r"[A-Za-z0-9._/-]+", args.base_branch):
+        raise ValueError("Invalid --base-branch value.")
     main(args.base_branch)
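The two mitigations the review proposes — regex validation plus percent-encoding — can be combined in a standalone helper (hypothetical names; the URL template below is illustrative, not the script's actual GitHub URL):

```python
import re
import urllib.parse

def safe_branch_url(branch: str) -> str:
    # Reject anything outside the safe character set the review suggests.
    if not re.fullmatch(r"[A-Za-z0-9._/-]+", branch):
        raise ValueError(f"Invalid --base-branch value: {branch!r}")
    # Percent-encode everything except "/" so nested branch names survive
    # in a path-style URL.
    quoted = urllib.parse.quote(branch, safe="/")
    # Placeholder host; substitute the real raw-lockfile URL template.
    return f"https://example.invalid/repo/{quoted}/uv.lock"

print(safe_branch_url("develop"))
```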

Comment on lines +42 to +44
    url = f"https://pypi.org/pypi/{name}/json" if version is None else f"https://pypi.org/pypi/{name}/{version}/json"
    with urllib.request.urlopen(url) as r:
        data = json.load(r)

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Check if httpx is already a dependency in the project
rg -n 'httpx' pyproject.toml uv.lock

Repository: NVIDIA/NeMo-Agent-Toolkit

Length of output: 4364


🏁 Script executed:

# Read the full context of ci/scripts/sbom_list.py
cat -n ci/scripts/sbom_list.py

Repository: NVIDIA/NeMo-Agent-Toolkit

Length of output: 5252


🏁 Script executed:

# Check how other parts of the codebase handle HTTP requests
rg -B2 -A2 'urllib|httpx|requests' --type py | head -100

Repository: NVIDIA/NeMo-Agent-Toolkit

Length of output: 6251


Migrate to httpx to avoid network hangs and follow project guidelines.

The current code uses urllib.request.urlopen without a timeout or SSL context configuration. This can cause the script to hang indefinitely during network stalls. Per project coding guidelines, httpx is the preferred HTTP client and is already available as a dependency. Use the synchronous httpx.Client for this blocking I/O context.

🛠️ Suggested fix (recommended)
-import urllib.request
+import httpx

-def pypi_license(name: str, version: str | None = None) -> str:
+def pypi_license(name: str, version: str | None = None, client: httpx.Client | None = None) -> str:
     """Resolve a package license from PyPI metadata.

     Args:
         name: Distribution name on PyPI.
         version: Optional version pin used to query version-specific metadata.
+        client: Optional pre-configured httpx.Client for reuse.

     Returns:
         A best-effort license string from the available metadata fields.
     """
     # Use version-specific metadata when available to avoid mismatches.
     url = f"https://pypi.org/pypi/{name}/json" if version is None else f"https://pypi.org/pypi/{name}/{version}/json"
-    with urllib.request.urlopen(url) as r:
-        data = json.load(r)
+    close_client = False
+    if client is None:
+        client = httpx.Client(timeout=10)
+        close_client = True
+    try:
+        response = client.get(url)
+        response.raise_for_status()
+        data = response.json()
+    finally:
+        if close_client:
+            client.close()

Then update main() to reuse the client:

 def main() -> None:
     """Create `sbom_list.tsv` for third-party license reporting."""
     # Load the lockfile that captures the dependency graph.
     with open("uv.lock", "rb") as f:
         head = tomllib.load(f)
 
     # Index packages by name for quick lookups.
     pkgs = {pkg["name"]: pkg for pkg in head["package"]}
 
     sbom_list = []
+    with httpx.Client(timeout=10) as client:
-    for pkg in tqdm(pkgs.keys(), desc="Processing packages", unit="packages"):
+        for pkg in tqdm(pkgs.keys(), desc="Processing packages", unit="packages"):
             try:
                 sbom_list.append({
                     "name": pkg,
                     "version": pkgs[pkg]["version"],
-                    "license": pypi_license(pkg, pkgs[pkg]["version"]),
+                    "license": pypi_license(pkg, pkgs[pkg]["version"], client),
                 })
             except KeyError:
                 # Skip entries that do not contain a version field.
                 pass

Minimum fix (if you prefer to keep urllib): add timeout=10 parameter to urllib.request.urlopen(url, timeout=10).

🧰 Tools
🪛 Ruff (0.14.14)

[error] 43-43: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


Comment on lines +62 to +68
def process_uvlock(uvlock: dict, base_name: str) -> Path:
    """Write a generic license table from a loaded `uv.lock` structure.

    Args:
        uvlock: Parsed `uv.lock` content.
        base_name: Logical label for the source data (kept for compatibility).


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# First, let's examine the full function implementation
echo "=== File content around lines 62-68 ==="
head -80 ci/scripts/sbom_list.py | tail -30

echo -e "\n=== Full process_uvlock function ==="
# Get the full function definition
sed -n '/^def process_uvlock/,/^def /p' ci/scripts/sbom_list.py | head -n -1

Repository: NVIDIA/NeMo-Agent-Toolkit

Length of output: 2264


🏁 Script executed:

#!/bin/bash
# Now search for all calls to process_uvlock
echo "=== Usage of process_uvlock ==="
rg -n "process_uvlock" --no-heading

Repository: NVIDIA/NeMo-Agent-Toolkit

Length of output: 185


Rename the unused base_name parameter to _base_name.

The parameter is not used within the function body. While the docstring indicates it's kept for compatibility, use the underscore prefix convention to explicitly mark it as intentionally unused per PEP 8.

-def process_uvlock(uvlock: dict, base_name: str) -> Path:
+def process_uvlock(uvlock: dict, _base_name: str) -> Path:

Also update the docstring parameter documentation to reflect the renamed parameter.

🧰 Tools
🪛 Ruff (0.14.14)

[warning] 62-62: Unused function argument: base_name

(ARG001)


Comment on lines +109 to +154
examples = [
"nat_adk_demo",
"nat_agno_personal_finance",
"nat_agents_examples",
"nat_alert_triage_agent",
"nat_autogen_demo",
"nat_automated_description_generation",
"nat_currency_agent_a2a",
"nat_dpo_tic_tac_toe",
"nat_documentation_guides",
"nat_email_phishing_analyzer",
"nat_haystack_deep_research_agent",
"nat_kaggle_mcp",
"nat_math_assistant_a2a",
"nat_math_assistant_a2a_protected",
"nat_multi_frameworks",
"nat_notebooks",
"nat_per_user_workflow",
"nat_plot_charts",
"nat_por_to_jiratickets",
"nat_profiler_agent",
"nat_react_benchmark_agent",
"nat_redis_example",
"nat_retail_agent",
"nat_rl_with_openpipe_art",
"nat_router_agent",
"nat_semantic_kernel_demo",
"nat_sequential_executor",
"nat_service_account_auth_mcp",
"nat_simple_auth",
"nat_simple_auth_mcp",
"nat_simple_calculator",
"nat_simple_calculator_custom_routes",
"nat_simple_calculator_eval",
"nat_simple_calculator_hitl",
"nat_simple_calculator_mcp",
"nat_simple_calculator_mcp_protected",
"nat_simple_calculator_observability",
"nat_simple_rag",
"nat_simple_web_query",
"nat_simple_web_query_eval",
"nat_strands_demo",
"nat_swe_bench",
"nat_user_report",
"text_file_ingest",
]

⚠️ Potential issue | 🟠 Major

Ensure example sources use the required nat_ prefix.

text_file_ingest lacks the nat_ prefix in both the examples list and tool.uv.sources, which violates the examples naming rule.

✅ Suggested fix
-  "text_file_ingest",
+  "nat_text_file_ingest",
@@
-text_file_ingest = { path = "examples/documentation_guides/workflows/text_file_ingest", editable = true }
+nat_text_file_ingest = { path = "examples/documentation_guides/workflows/text_file_ingest", editable = true }
As per coding guidelines, all added examples must have a `nat_` prefix within the `uv.sources` name.

Also applies to: 231-275



@Salonijain27 Salonijain27 left a comment


Approved from a dependency point of view


Labels

feature request New feature or request non-breaking Non-breaking change

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants