Skip to content
#

adversarial-ai

Here are 32 public repositories matching this topic...

Adversarial AI bug hunter with auto-fix skill for Claude Code, Cursor, Codex CLI, GitHub Copilot CLI, Kiro CLI, Opencode, Pi Coding Agent, and more. Multi-agent pipeline finds security vulnerabilities, logic errors, and runtime bugs — then fixes them autonomously on a safe branch.

  • Updated Mar 14, 2026
  • JavaScript

Basilisk — Open-source AI red teaming framework with genetic prompt evolution. Automated LLM security testing for GPT-4, Claude, Gemini. OWASP LLM Top 10 coverage. 32 attack modules.

  • Updated Mar 11, 2026
  • Python

LLM Attack Testing Toolkit is a structured methodology and mindset framework for testing Large Language Model (LLM) applications against logic abuse, prompt injection, jailbreaks, and workflow manipulation.

  • Updated Feb 27, 2026

LLM Sentinel Red Teaming Platform is an enterprise-grade framework for automated security testing of Large Language Models, detecting vulnerabilities such as jailbreaks, prompt injection, and system prompt leakage across multiple providers, with structured attack orchestration, risk scoring, and security reporting to harden models before production

  • Updated Mar 4, 2026
  • Python

Improve this page

Add a description, image, and links to the adversarial-ai topic page so that developers can more easily learn about it.

Curate this topic

Add this topic to your repo

To associate your repository with the adversarial-ai topic, visit your repo's landing page and select "manage topics."

Learn more