The llm-vuln-scanner is a security tool designed to evaluate local large language models (LLMs) for potential vulnerabilities. It identifies risks such as jailbreaks, prompt injection, training data leakage, and adversarial abuse. With this tool, you can enhance the security of your AI models with ease.
To use the llm-vuln-scanner, follow these simple steps:
Ensure your computer meets the following requirements:
- Operating System: Windows 10 or later, macOS 10.15 or later, or a recent Linux distribution.
- RAM: Minimum of 4 GB. Recommended 8 GB or more.
- Disk Space: At least 100 MB of free space.
To get your copy of the llm-vuln-scanner, visit the releases page. You will find the latest version available for download.
After downloading the zip file:
-
On Windows:
- Extract the contents of the zip file to a folder.
- Open the folder and find the executable file named
llm-vuln-scanner.exe.
-
On macOS:
- Extract the zip file and open the folder.
- Move
llm-vuln-scanner.appto your Applications folder.
-
On Linux:
- Extract the zip file and navigate to the folder.
- Open a terminal and run the command:
./llm-vuln-scanner.
After installation is complete, you can run the scanner:
- Windows: Double-click the
llm-vuln-scanner.exefile. - macOS: Open
llm-vuln-scanner.appfrom your Applications folder. - Linux: In the terminal, type
./llm-vuln-scannerand press Enter.
The scanner will launch, and you will see a user-friendly interface.
The llm-vuln-scanner features an intuitive dashboard. To start a scan:
- Select the model you wish to analyze.
- Click on the "Start Scan" button.
The scanner will examine your LLM for several factors:
- Jailbreak Resilience: Tests if the model can be manipulated to bypass security measures.
- Prompt Injection: Checks for vulnerabilities that allow unwanted responses from the model.
- Training Data Leakage: Identifies if sensitive data from training sets can be exposed.
- Adversarial Abuse: Evaluates the model's response to malicious inputs.
After the scan completes, you will receive a detailed report summarizing the results.
The results will include sections such as:
- Summary: A quick overview of the scan findings.
- Detailed Findings: In-depth analysis of each vulnerability.
- Recommendations: Suggestions for improving the security of your LLM.
Take your time to review the report carefully. Following the recommendations will help secure your model from potential threats.
A: Yes, you can analyze multiple LLMs using the scanner.
A: This tool is designed to enhance security. However, always ensure your software is up to date and run scans in a safe environment.
A: Currently, the scanner is designed for local LLMs. Cloud models may require different security measures.
To download the llm-vuln-scanner, please visit the releases page. From there, you can access the latest version and start securing your models today.
If you want to contribute to the development of llm-vuln-scanner, feel free to submit issues or pull requests on GitHub. We welcome suggestions for improvements.
Thank you to everyone who has contributed to this project. Your support helps create a safer AI environment.
- ai
- ai-jailbreak-prompts
- ai-vulnerability-assessment
- jailbreak
- llm
- llm-pentesting
- llm-vulnerabilities
- llmstudio
- ollama
- pentest
- pentesting-tools
- red-team-tools
- scanning
- vulnerability
For more detailed information on each topic, check related projects and documentation available in this repository.