Papers and resources related to the security and privacy of LLMs 🤖
[NeurIPS D&B '25] The one-stop repository for LLM unlearning
Python package for measuring memorization in LLMs.
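A common metric such packages report is exact-match extractability: prompt the model with a prefix drawn from its training data and check whether the greedy continuation reproduces the true suffix verbatim. A minimal sketch of that idea, where `generate` and `is_memorized` are illustrative names rather than any package's actual API:

```python
def is_memorized(generate, sample: str, prefix_len: int = 50) -> bool:
    """Exact-match memorization test: does the model's continuation of a
    training-set prefix reproduce the true suffix verbatim?"""
    prefix, suffix = sample[:prefix_len], sample[prefix_len:]
    completion = generate(prefix, max_new_chars=len(suffix))
    return completion.startswith(suffix)

# Toy stand-in "model" that has memorized one string exactly; a real
# harness would wrap an actual LLM's completion endpoint here.
corpus = "The quick brown fox jumps over the lazy dog. " * 3

def fake_generate(prompt: str, max_new_chars: int) -> str:
    if corpus.startswith(prompt):
        return corpus[len(prompt):len(prompt) + max_new_chars]
    return ""

result = is_memorized(fake_generate, corpus, prefix_len=20)
```

Real tools vary the prefix length and sample many training strings to estimate a memorization rate rather than a single boolean.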
The fastest Trust Layer for AI Agents
An Execution Isolation Architecture for LLM-Based Agentic Systems
A comprehensive resource hub compiling all LLM papers accepted at the International Conference on Learning Representations (ICLR) 2024.
LLM security and privacy
LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins
Make Zettelkasten-style note-taking the foundation of interactions with Large Language Models (LLMs).
User-friendly LLM interface, self-hosted, offline, and privacy-first.
🔒 Detect security leaks in AI-assisted codebases. Static analysis tool for Python & JS/TS with cross-file taint tracking.
Semantic PII masking and anonymization for LLMs (RAG). GDPR-compliant, reversible, and context-aware. Supports LangChain and OpenAI.
Semantic Privacy Guard: a Java middleware that intercepts text, identifies PII with a three-layer hybrid pipeline (regex + Naive Bayes ML + Apache OpenNLP NER), and redacts it before it reaches an LLM or leaves the corporate network. Stream-based processing keeps memory use low on large files and log streams.
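The two masking tools above share a common core: detect PII spans, swap them for typed placeholders before the LLM call, and keep the mapping so the response can be un-masked afterwards. A minimal sketch of the regex layer only (the ML and NER layers are omitted); every name here is illustrative, not either project's actual API:

```python
import re

# Regex rules for structured PII; real pipelines add ML and NER layers
# for names, addresses, and other unstructured identifiers.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def mask(text: str):
    """Replace each detected PII span with a typed placeholder and
    record the original value so masking is reversible."""
    mapping = {}
    for label, pattern in PII_PATTERNS.items():
        def _swap(match, label=label):
            key = f"<{label}_{len(mapping)}>"
            mapping[key] = match.group(0)
            return key
        text = pattern.sub(_swap, text)
    return text, mapping

def unmask(text: str, mapping: dict) -> str:
    """Restore the original PII values in, e.g., an LLM response."""
    for key, original in mapping.items():
        text = text.replace(key, original)
    return text

masked, mapping = mask("Reply to bob@corp.example or call +1 555 123 4567.")
```

Keeping the mapping server-side, outside the LLM request, is what makes the scheme reversible without ever sending raw PII to the model.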
Example of running last_layer with FastAPI on Vercel.
A 3-tier framework for controlling your AI privacy — from open use to full isolation.