# VulneraAI-agent

A Bedrock-backed security vulnerability analysis agent specialized for OWASP Top 10 risks. This project wires an LLM (via AWS Bedrock) together with a small graph-based runtime and several automated security-check tools to analyze URLs, code snippets, or architecture descriptions and return concise, actionable security guidance.
This README explains architecture, setup, usage, configuration, extension, and deployment notes for the agent defined in agent.py.
## Table of Contents
- Overview
- Key components
- How it works (high-level)
- Prerequisites
- Installation
- Configuration (.env)
- Running locally
- Invocation (payload format & examples)
- Extending the agent (adding tools, changing LLM)
- Deployment notes (Lambda / Bedrock)
- Troubleshooting
- Security & privacy considerations
- Contributing
- Example
- License
## Overview

VulneraAI-agent is an LLM-driven security assistant that:

- Accepts user prompts describing a target (URLs, code, architecture).
- Uses a short message history and a system prompt tuned for OWASP Top 10 analysis.
- Can call a set of built-in security tooling functions (port scanner, header checks, TLS checks, OWASP context fetcher, injection fingerprinting, etc.).
- Runs a graph-based control flow using langgraph and an agent runtime (bedrock_agentcore).
## Key components

- `agent.py` — Main agent implementation and runtime entrypoint.
  - Defines the `SecurityState` typed dict used as the graph state.
  - Creates a `ChatBedrockConverse` LLM instance (Bedrock) and binds tool functions.
  - Configures a `StateGraph` with an LLM node (`sec_agent`) and a `ToolNode` (`tools`) to switch control when tool calls are requested.
  - Exposes a `BedrockAgentCoreApp` entrypoint function `agent_invocation(payload, context)` used as the runtime Lambda entrypoint.
- `tools/*` — (project directory) custom tool implementations. The agent imports: `fetch_owasp_context`, `check_security_headers`, `check_outdated_components`, `scan_open_ports`, `check_tls_security`, `check_ssrf_risk`, `check_authentication_security`, `check_injection_points`, `check_script_integrity`.
- LLM: `ChatBedrockConverse` (from `langchain_aws`) — configured for an AWS Bedrock model via ARN.
- Graph runtime: `langgraph.graph.StateGraph` and `ToolNode` to orchestrate agent/tool interaction.
- Runtime: `bedrock_agentcore.runtime.BedrockAgentCoreApp` for the entrypoint and lifecycle.
## How it works (high-level)

1. The runtime receives a payload at `agent_invocation`.
2. A temporary state is created with the user's prompt (as a `HumanMessage`).
3. The graph is invoked: the `sec_agent` node prepares the message history (`SystemMessage` + `HumanMessage`) and invokes the LLM with bound tools.
4. If the LLM returns tool calls, `_route_from_agent` routes execution to the `tools` node, which runs the requested checks.
5. Results from the tools are appended back and the graph continues until completion.
6. The entrypoint extracts the last AI "assistant" message and returns a text result. A minimal sketch of this flow is shown below.
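For orientation, here is a minimal sketch of that flow. The exact default prompt wording and function body are assumptions; `graph` stands for the compiled `StateGraph` built in `agent.py`:

```python
# Minimal sketch of the agent_invocation flow described above; the real
# body in agent.py may differ. "graph" is the compiled StateGraph.
import uuid

from langchain_core.messages import AIMessage, HumanMessage

def agent_invocation(payload, context):
    prompt = payload.get("input", {}).get(
        "prompt",
        "Analyze the given target for OWASP Top 10 risks.",  # assumed default wording
    )
    thread_id = payload.get("sessionId") or str(uuid.uuid4())

    # Temporary state seeded with the user's prompt as a HumanMessage.
    state = {"input": prompt, "messages": [HumanMessage(content=prompt)]}
    result = graph.invoke(state, config={"configurable": {"thread_id": thread_id}})

    # Extract the last AI message and return its text.
    messages = result.get("messages", [])
    if not messages:
        return "No messages returned by the agent."
    for message in reversed(messages):
        if isinstance(message, AIMessage) and message.content:
            return message.content
    return "No AI response generated."
```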
## Prerequisites

- Python 3.10+ (confirm compatibility with your environment).
- AWS account with Bedrock access and a model you can call.
- AWS credentials configured (environment variables or AWS SDK config).
- Network access from the runtime to any targets the tools may scan.
- A `.env` file for local development (the project uses python-dotenv).
Typical Python packages referenced (install via pip or your environment manager):
- langchain_aws
- langchain_core
- langgraph
- bedrock_agentcore
- typing_extensions
- python-dotenv
- any additional packages required by `tools/*` (requests, nmap wrappers, etc.)
## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/SarathL754/VulneraAI-agent.git
   cd VulneraAI-agent
   ```

2. Create and activate a virtual environment:

   ```bash
   python -m venv .venv
   source .venv/bin/activate   # macOS / Linux
   .venv\Scripts\activate      # Windows
   ```

3. Install dependencies. The project provides a `requirements.txt`; use:

   ```bash
   pip install -r requirements.txt
   ```
Note: Exact package names and versions may vary. If a package is unavailable on PyPI, follow its own installation instructions.
## Configuration (.env)

Create a `.env` file at the project root (not checked in) containing at least:

```bash
# AWS credentials (or use your standard AWS config)
AWS_ACCESS_KEY_ID=YOUR_KEY
AWS_SECRET_ACCESS_KEY=YOUR_SECRET
AWS_SESSION_TOKEN=optional_session_token
AWS_REGION=us-east-2

# Bedrock model settings (optional override of the hardcoded model)
BEDROCK_MODEL_ARN=arn:aws:bedrock:...
```

Add any other environment flags used by `tools/*` (for example API keys and scanning flags).

The agent calls `load_dotenv()`, so values placed in `.env` are available to the process.
## Running locally

To run the agent locally (development):

1. Ensure your `.env` is configured and that you have network and Bedrock access.
2. Start the agent:

   ```bash
   python agent.py
   ```

The script calls `app.run()` on the `BedrockAgentCoreApp` instance at the bottom of `agent.py`. How this exposes an interface (HTTP, CLI, or a Lambda adapter) depends on `bedrock_agentcore`'s default behavior — check its documentation for local server mode or runtime behavior. In many cases the runtime provides a local HTTP endpoint for development.
## Invocation (payload format & examples)

The agent exposes a single runtime entrypoint, `agent_invocation(payload, context)`. Example request payload (JSON):

```json
{
  "input": {
    "prompt": "Please analyze https://example.com for common web security issues and OWASP Top 10 risks."
  },
  "sessionId": "optional-thread-id-123"
}
```

- `input.prompt` — The user prompt (string). If omitted, a default guiding message is used.
- `sessionId` — Optional. If present, used as the `thread_id` for the graph session; otherwise a new UUID is generated.

The runtime returns a single string extracted from the last AI assistant message:

- A plain-text summary or instruction (if the LLM produces text).
- "No AI response generated." or "No messages returned by the agent." on error or empty responses.

Example invocation (pseudo-cURL). If the runtime is exposed as HTTP in your environment, adapt this to the local endpoint:

```bash
curl -X POST http://localhost:8080/ \
  -H "Content-Type: application/json" \
  -d '{
    "input": { "prompt": "Find OWASP-related issues in https://example.com" },
    "sessionId": "test-session-001"
  }'
```
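The same request from Python, if the runtime exposes a local HTTP endpoint (the endpoint and port are assumptions that depend on how `bedrock_agentcore` serves locally):

```python
# Hypothetical local invocation; adjust the URL to your runtime's endpoint.
import requests

payload = {
    "input": {"prompt": "Find OWASP-related issues in https://example.com"},
    "sessionId": "test-session-001",
}
response = requests.post("http://localhost:8080/", json=payload, timeout=300)
print(response.text)
```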
## Extending the agent (adding tools, changing LLM)

Tools are regular Python callables exported from `tools/*` and added to the `a_tools` list in `agent.py`.

To add a tool (see the sketch after the design notes below):

1. Implement the tool in the `tools` package. Tools should return the structured output the LLM or the `ToolNode` expects (follow the patterns used in existing tool implementations).
2. Import your tool in `agent.py`, then add it to the `a_tools` list (in the desired order).
3. The LLM is bound to the tools with `llm.bind_tools(a_tools)`; the LLM can then request tool execution.

Design notes:

- Tools should be separately testable and deterministic when possible.
- Keep tool outputs concise and machine-friendly (JSON or simple text blocks) so the LLM can summarize cleanly.
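As a concrete illustration of these steps, here is a hedged sketch of a new tool. The `@tool` decorator from `langchain_core` is an assumption; mirror whatever pattern the existing `tools/*` implementations actually use:

```python
# tools/check_cookies.py -- a hypothetical new tool, not part of the project;
# follow the conventions of the existing tools/* implementations.
import requests
from langchain_core.tools import tool

@tool
def check_cookie_flags(url: str) -> str:
    """Report cookies set by the target that lack Secure or HttpOnly flags."""
    response = requests.get(url, timeout=10)
    findings = []
    for cookie in response.cookies:
        if not cookie.secure:
            findings.append(f"{cookie.name}: missing Secure flag")
        if not cookie.has_nonstandard_attr("HttpOnly"):
            findings.append(f"{cookie.name}: missing HttpOnly flag")
    return "\n".join(findings) or "All cookies set Secure and HttpOnly."

# In agent.py: import check_cookie_flags, append it to a_tools, and the
# existing llm.bind_tools(a_tools) call then exposes it to the LLM.
```

The tool returns a short plain-text block, per the design notes above, so the LLM can fold the result directly into its summary.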
### Graph and state details

- State type: `SecurityState` (a `TypedDict`)
  - `input`: str
  - `messages`: list of message objects (`HumanMessage`, `SystemMessage`, AI responses)
  - `explanation`: str (LLM output)
  - `scan_results`: any (optional structured tool output)
- Nodes:
  - `sec_agent` — Prepares the history, injects the system prompt, appends the user's message, and invokes the LLM with bound tools.
  - `tools` — A `ToolNode` wrapping the set of tool callables. Routed to when the LLM emits tool calls.
- Routing: `_route_from_agent(state)` inspects the last message for `tool_calls`. If present, execution routes to the `tools` node; otherwise the graph ends.
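Putting the state, nodes, and routing together, a hedged sketch of the wiring (node names follow the description above; `sec_agent` and `a_tools` are defined in `agent.py`, and the real wiring may differ in detail):

```python
# Hedged sketch of the graph wiring described above, not the literal agent.py.
from typing import Any

from langgraph.graph import END, StateGraph
from langgraph.prebuilt import ToolNode
from typing_extensions import TypedDict

class SecurityState(TypedDict):
    input: str
    messages: list        # HumanMessage / SystemMessage / AI responses
    explanation: str      # LLM output
    scan_results: Any     # optional structured tool output

def _route_from_agent(state: SecurityState):
    # Route to the tools node when the last message carries tool calls.
    last = state["messages"][-1]
    return "tools" if getattr(last, "tool_calls", None) else END

builder = StateGraph(SecurityState)
builder.add_node("sec_agent", sec_agent)      # LLM node, defined in agent.py
builder.add_node("tools", ToolNode(a_tools))  # runs the requested checks
builder.set_entry_point("sec_agent")
builder.add_conditional_edges("sec_agent", _route_from_agent)
builder.add_edge("tools", "sec_agent")        # feed tool results back to the LLM
graph = builder.compile()
```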
### Changing the LLM

The LLM is created with:

```python
llm = ChatBedrockConverse(
    model_id="arn:aws:bedrock:us-east-2:197348940135:inference-profile/us.amazon.nova-micro-v1:0",
    provider="amazon",
    temperature=0,
)
```

- Replace `model_id` with your desired Bedrock model ARN or a supported identifier; a sketch of wiring this to the `BEDROCK_MODEL_ARN` override follows below.
- Temperature is set to 0 for deterministic, fact-based answers; tune it if you want more creative output.
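If you want the `BEDROCK_MODEL_ARN` override from the configuration section, one way to wire it is sketched below. This pattern is an assumption: the shipped `agent.py` hardcodes the ARN.

```python
# Optional .env override (assumption: agent.py currently hardcodes the ARN).
import os

from langchain_aws import ChatBedrockConverse

DEFAULT_MODEL_ARN = (
    "arn:aws:bedrock:us-east-2:197348940135:"
    "inference-profile/us.amazon.nova-micro-v1:0"
)

llm = ChatBedrockConverse(
    model_id=os.getenv("BEDROCK_MODEL_ARN", DEFAULT_MODEL_ARN),
    provider="amazon",
    temperature=0,
)
```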
## Deployment notes (Lambda / Bedrock)

`agent.py` includes an entrypoint `agent_invocation(payload, context)` and uses `BedrockAgentCoreApp`. Packaging for Lambda will require:

- Bundling dependencies (or using a Lambda Layer).
- Ensuring the Lambda IAM role has permission to call Bedrock and any other AWS resources required.
- Sensible timeout and memory settings if tools perform network scans (scans may need longer timeouts).

IAM permissions (examples; do not grant excessive privileges, and see the policy sketch after this list):

- `bedrock:InvokeModel` (or the relevant action for your Bedrock integration)
- `secretsmanager:GetSecretValue` (if storing keys)
- CloudWatch Logs (for logging)

If your tools perform active network scanning, review your AWS VPC, NAT, and security group setup so the function has the necessary outbound access.
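As a starting point only, a hedged IAM policy sketch covering the actions above; `ACCOUNT_ID` and the secret name are placeholders, and each `Resource` should be scoped down to your actual model ARN, secrets, and log groups:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["bedrock:InvokeModel"],
      "Resource": "arn:aws:bedrock:us-east-2:ACCOUNT_ID:inference-profile/*"
    },
    {
      "Effect": "Allow",
      "Action": ["secretsmanager:GetSecretValue"],
      "Resource": "arn:aws:secretsmanager:us-east-2:ACCOUNT_ID:secret:YOUR_SECRET_NAME-*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```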
## Troubleshooting

- No AI response / "No messages returned":
  - Verify that the Bedrock credentials and `model_id` are correct.
  - Check network connectivity to the Bedrock endpoints.
  - Confirm the model you specified is available in your account/region.
- Tool execution fails:
  - Run the tools locally and confirm that required system-level utilities (e.g., nmap) are available.
  - Ensure any tool-specific API keys are configured in `.env`.
- Long-running scans time out:
  - Increase the runtime timeout (Lambda) or run scanning tools asynchronously and stream results back to the agent.
## Security & privacy considerations

- The agent may send user prompts and tool results to Bedrock. Avoid including secrets (API keys, credentials) in prompts.
- Tools that perform active scanning may generate traffic that could be interpreted as intrusive. Ensure you have authorization before scanning third-party systems.
- Logs may contain sensitive information; configure CloudWatch or local logging appropriately and secure access to it.
## Contributing

- Implement tool functions under `tools/` and add tests to validate behavior (a test sketch follows this list).
- Keep the system prompt concise and avoid exposing internal chain-of-thought; the system prompt in `agent.py` already instructs the model: "Do NOT show reasoning or internal steps."
- Use pull requests with clear descriptions of changes, and provide unit tests for tool logic when possible.
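For example, a hedged pytest sketch that continues the hypothetical `check_cookie_flags` tool from the extension section above; it stubs out the network call and is not a test of the real `tools/*` code:

```python
# tests/test_check_cookies.py -- tests the hypothetical check_cookie_flags
# sketch, keeping the tool deterministic by stubbing requests.get.
import requests

from tools.check_cookies import check_cookie_flags

class _FakeCookie:
    name = "session"
    secure = False
    def has_nonstandard_attr(self, attr):
        return False  # pretend HttpOnly is missing too

class _FakeResponse:
    cookies = [_FakeCookie()]

def test_flags_insecure_cookie(monkeypatch):
    # No real network traffic: replace requests.get with a stub.
    monkeypatch.setattr(requests, "get", lambda url, timeout=10: _FakeResponse())
    result = check_cookie_flags.invoke({"url": "https://example.com"})
    assert "missing Secure flag" in result
    assert "missing HttpOnly flag" in result
```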
## Example

Prompt: "Please analyze https://example.com for OWASP Top 10 issues and summarize findings."

Agent flow:

1. The system prompt and user message are sent to the LLM.
2. The LLM may call tools (e.g., `check_security_headers`, `check_tls_security`, `scan_open_ports`).
3. The tools run and return structured outputs.
4. The LLM synthesizes the results and returns a short, beginner-friendly summary focused on OWASP risks, with suggested remediation steps.