"Universal AI security framework - Protect LLM applications from prompt injection, jailbreaks, and adversarial attacks. Works with OpenAI, Anthropic, LangChain, and any LLM."
-
Updated
Feb 1, 2026 - Python
"Universal AI security framework - Protect LLM applications from prompt injection, jailbreaks, and adversarial attacks. Works with OpenAI, Anthropic, LangChain, and any LLM."
🛡️ Secure your LLM applications with PromptShields, a framework designed for real-time protection against prompt injection and data leaks.
🛡️ Protect LLM applications with PromptShields, a robust security framework designed to prevent prompt injection, jailbreaks, and data leakage.
Add a description, image, and links to the llm-protection topic page so that developers can more easily learn about it.
To associate your repository with the llm-protection topic, visit your repo's landing page and select "manage topics."