Skip to content
#

llm-jailbreaks

Here are 17 public repositories matching this topic...

Systematic LLM jailbreak taxonomy — 40 attack patterns, 10 categories, empirical evaluation across 4 frontier models. AI safety research with responsible disclosure.

  • Updated Mar 15, 2026
  • Jupyter Notebook
svalinn-ai

The Self-Hosted AI Firewall & Gateway. Drop-in guardrails for LLMs running entirely on CPU. Blocks jailbreaks, enforces policies, and ensures compliance in real-time

  • Updated Jan 6, 2026
  • Python

Improve this page

Add a description, image, and links to the llm-jailbreaks topic page so that developers can more easily learn about it.

Curate this topic

Add this topic to your repo

To associate your repository with the llm-jailbreaks topic, visit your repo's landing page and select "manage topics."

Learn more