feat(plugins): add Zscaler AI Guard plugin #1514
Open
z-anvesha wants to merge 4 commits into Portkey-AI:main from
Conversation
Contributor
Pull request overview
This PR adds a new Zscaler AI Guard plugin to the Portkey Gateway, enabling security checks on prompts and LLM responses using Zscaler's Detections Policy API. The plugin acts as a guardrail that can intercept both inbound (prompts) and outbound (responses) content, blocking requests when Zscaler's policy detects threats like prompt injections or data leakage.
Changes:
- New Zscaler AI Guard plugin with support for both `beforeRequestHook` and `afterRequestHook` execution
- Enhanced build system to handle hyphenated plugin IDs and function names
- Integration tests for the plugin (safe and malicious prompt scenarios)
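The two hook phases listed above scan different content: the prompt on the way in, the LLM response on the way out. A minimal sketch of that dispatch, assuming a hypothetical `HookContext` shape (not Portkey's actual types):

```typescript
// Hypothetical sketch: pick which text a guardrail scans depending on the
// hook phase. The context shape here is an assumption for illustration.
type HookEventType = "beforeRequestHook" | "afterRequestHook";

interface HookContext {
  request: { text: string };  // inbound prompt
  response: { text: string }; // outbound LLM response
}

function textToScan(context: HookContext, eventType: HookEventType): string {
  // Scan the prompt before the request is sent, the model output after.
  return eventType === "beforeRequestHook"
    ? context.request.text
    : context.response.text;
}
```

A `beforeRequestHook` invocation would thus receive the user prompt, while an `afterRequestHook` invocation receives the generated completion.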
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| plugins/zscaler/main-function.ts | Core plugin handler that calls Zscaler's execute-policy API, processes BLOCK/ALLOW actions, and handles errors |
| plugins/zscaler/manifest.json | Plugin manifest defining credentials (API key), parameters (policy ID), and supported hooks |
| plugins/zscaler/test-file.test.ts | Integration tests verifying plugin behavior with real Zscaler API for safe and malicious prompts |
| plugins/build.ts | Updated build script to support hyphenated plugin IDs by sanitizing identifiers |
| conf.json | Registered the new zscaler plugin in the enabled plugins list |
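The `plugins/build.ts` change matters because a hyphenated plugin ID such as `zscaler-ai-guard` is not a valid JavaScript identifier for generated imports. A minimal sketch of the kind of sanitization described, with a hypothetical function name (not the actual `build.ts` code):

```typescript
// Hypothetical sketch: convert a hyphenated plugin ID into a valid
// camelCase JavaScript identifier for generated code. Illustrative only.
function sanitizeIdentifier(pluginId: string): string {
  // "zscaler-ai-guard" -> "zscalerAiGuard"
  return pluginId.replace(/-([a-z0-9])/g, (_match, ch: string) =>
    ch.toUpperCase()
  );
}
```

With this in place, the build script can emit identifiers like `zscalerAiGuard` for plugins whose directory names contain hyphens.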
Addressed PR review comments
Address PR comments
Address PR comments
This commit introduces a new guardrail plugin for Zscaler AI Guard. This plugin allows the Portkey Gateway to perform security checks on prompts and LLM responses using Zscaler's Detections Policy.
Key changes include:
- A new plugin handler in `plugins/zscaler/main-function.ts` that calls the Zscaler execute-policy API.
- The handler supports both `beforeRequestHook` and `afterRequestHook` to scan inbound prompts and outbound responses.
- It checks for BLOCK actions from the Zscaler API and individual detectors, returning a failed verdict if content is blocked.
- Integration tests in `plugins/zscaler/test-file.test.ts` verify the plugin's functionality against the real Zscaler API for both safe and malicious prompts.
- The plugin is defined in `plugins/zscaler/manifest.json` and registered in the main plugin index.
This new plugin enhances the gateway's security capabilities by integrating with Zscaler's advanced threat and data protection.
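The verdict logic described above can be sketched as follows. The endpoint URL, request body, and response field names here are assumptions for illustration, not the actual Zscaler API contract or the plugin's code; only the BLOCK/ALLOW decision structure is taken from the description:

```typescript
// Hypothetical shape of a Zscaler policy-execution result: a top-level
// action plus per-detector actions. Field names are assumptions.
interface DetectionResult {
  action: "ALLOW" | "BLOCK";
  detectors?: { action: string }[];
}

// Fail the guardrail if the policy or any individual detector blocks.
function isBlocked(result: DetectionResult): boolean {
  return (
    result.action === "BLOCK" ||
    (result.detectors ?? []).some((d) => d.action === "BLOCK")
  );
}

// Sketch of the handler's API call (placeholder URL, assumed payload).
async function checkWithZscaler(
  apiKey: string,
  policyId: string,
  content: string
): Promise<{ verdict: boolean }> {
  const res = await fetch(
    `https://api.zscaler.example/policies/${policyId}/execute`, // placeholder
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ content }),
    }
  );
  if (!res.ok) throw new Error(`Zscaler API error: ${res.status}`);
  const data = (await res.json()) as DetectionResult;
  // verdict: true = content allowed, false = blocked by policy.
  return { verdict: !isBlocked(data) };
}
```

Splitting the decision into a pure `isBlocked` helper keeps the BLOCK/ALLOW logic testable without hitting the real API, which the PR's integration tests exercise separately.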