PraisonAI Logo

Total Downloads Latest Stable Version License MCP Registry

PraisonAI 🦞


PraisonAI 🦞 automates and solves complex challenges with AI agent teams that plan, research, code, and deliver results to Telegram, Discord, and WhatsApp, running 24/7. It is a low-code, production-ready multi-agent framework with handoffs, guardrails, memory, RAG, and 100+ LLM providers, built around simplicity, customisation, and effective human-agent collaboration.

PraisonAI Dashboard




⚡ Performance

PraisonAI is built for speed, with agent instantiation in under 4 µs. This reduces overhead, improves responsiveness, and helps multi-agent systems scale efficiently in real-world production workloads.

| Performance Metric | PraisonAI |
| --- | --- |
| Avg Instantiation Time | 3.77 µs |
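A figure like this can be reproduced with `timeit`. The sketch below benchmarks a plain placeholder class so it runs anywhere; benchmarking the real thing would construct `praisonaiagents.Agent` instead (which requires the package to be installed):

```python
import timeit

# Hypothetical stand-in for an Agent class, used so this sketch is
# self-contained; swap in `from praisonaiagents import Agent` to
# measure the real framework.
class Agent:
    def __init__(self, instructions: str):
        self.instructions = instructions

N = 100_000
total = timeit.timeit(
    lambda: Agent(instructions="You are a helpful AI assistant"),
    number=N,
)
print(f"avg instantiation: {total / N * 1e6:.2f} microseconds")
```

Absolute numbers depend on hardware and Python version, so treat any single run as indicative rather than definitive.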

🎯 Use Cases

AI agents solving real-world problems across industries:

| Use Case | Description |
| --- | --- |
| 🔍 Research & Analysis | Conduct deep research, gather information, and generate insights from multiple sources automatically |
| 💻 Code Generation | Write, debug, and refactor code with AI agents that understand your codebase and requirements |
| ✍️ Content Creation | Generate blog posts, documentation, marketing copy, and technical writing with multi-agent teams |
| 📊 Data Pipelines | Extract, transform, and analyze data from APIs, databases, and web sources automatically |
| 🤖 Customer Support | Deploy 24/7 support bots on Telegram, Discord, Slack with memory and knowledge-backed responses |
| ⚙️ Workflow Automation | Automate multi-step business processes with agents that hand off tasks, verify results, and self-correct |

Supported Providers

PraisonAI integrates with 100+ LLM providers:

OpenAI Anthropic Google Gemini DeepSeek Azure Ollama Groq Mistral Cerebras Cohere OpenRouter Perplexity Fireworks AWS Bedrock xAI Grok Vertex AI HuggingFace Together AI Databricks Replicate Cloudflare

View all 24 providers with examples
| Provider | Example |
| --- | --- |
| OpenAI | Example |
| Anthropic | Example |
| Google Gemini | Example |
| Ollama | Example |
| Groq | Example |
| DeepSeek | Example |
| xAI Grok | Example |
| Mistral | Example |
| Cohere | Example |
| Perplexity | Example |
| Fireworks | Example |
| Together AI | Example |
| OpenRouter | Example |
| HuggingFace | Example |
| Azure OpenAI | Example |
| AWS Bedrock | Example |
| Google Vertex | Example |
| Databricks | Example |
| Cloudflare | Example |
| AI21 | Example |
| Replicate | Example |
| SageMaker | Example |
| Moonshot | Example |
| vLLM | Example |
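Multi-provider frameworks commonly address models with LiteLLM-style strings of the form `provider/model` (e.g. `groq/llama-3.1-8b-instant`); the exact strings PraisonAI accepts are shown in the linked examples. A minimal sketch of how such a prefix is typically interpreted (illustrative only, not PraisonAI's internals):

```python
def split_model_string(model: str) -> tuple[str, str]:
    """Split a 'provider/model' string into (provider, model name).

    Bare model names fall back to a default provider ("openai" here,
    an assumption for illustration).
    """
    provider, sep, name = model.partition("/")
    if not sep:
        return "openai", model
    return provider, name

print(split_model_string("groq/llama-3.1-8b-instant"))  # -> ('groq', 'llama-3.1-8b-instant')
print(split_model_string("gpt-4o-mini"))                # -> ('openai', 'gpt-4o-mini')
```

This convention is what lets a single `llm` setting switch between providers without code changes.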

🌟 Why PraisonAI?

| Feature | How |
| --- | --- |
| 🔌 MCP Protocol – stdio, HTTP, WebSocket, SSE | tools=MCP("npx ...") |
| 🧠 Planning Mode – plan → execute → reason | planning=True |
| 🔍 Deep Research – multi-step autonomous research | Docs |
| 🤖 External Agents – orchestrate Claude Code, Gemini CLI, Codex | Docs |
| 🔄 Agent Handoffs – seamless conversation passing | handoff=True |
| 🛡️ Guardrails – input/output validation | Docs |
| Web Search + Fetch – native browsing | web_search=True |
| 🪞 Self Reflection – agent reviews its own output | Docs |
| 🔀 Workflow Patterns – route, parallel, loop, repeat | Docs |
| 🧠 Memory (zero deps) – works out of the box | memory=True |
View all 25 features
| Feature | How |
| --- | --- |
| 💡 Prompt Caching – reduce latency + cost | prompt_caching=True |
| 💾 Sessions + Auto-Save – persistent state across restarts | auto_save="my-project" |
| 💭 Thinking Budgets – control reasoning depth | thinking_budget=1024 |
| 📚 RAG + Quality-Based RAG – auto quality scoring retrieval | Docs |
| 📊 Model Router – auto-routes to cheapest capable model | Docs |
| 🧊 Shadow Git Checkpoints – auto-rollback on failure | Docs |
| 📑 A2A Protocol – agent-to-agent interop | Docs |
| 📝 Context Compaction – never hit token limits | Docs |
| 📑 Telemetry – OpenTelemetry traces, spans, metrics | Docs |
| 📜 Policy Engine – declarative agent behavior control | Docs |
| 🔄 Background Tasks – fire-and-forget agents | Docs |
| 🔁 Doom Loop Detection – auto-recovery from stuck agents | Docs |
| 🕸️ Graph Memory – Neo4j-style relationship tracking | Docs |
| 🏖️ Sandbox Execution – isolated code execution | Docs |
| 🖥️ Bot Gateway – multi-agent routing across channels | Docs |
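As one example, the Model Router row above can be sketched conceptually: pick the cheapest model whose capabilities cover the request. The names, prices, and capability sets below are hypothetical, not PraisonAI's internals:

```python
# Conceptual sketch of a model router: choose the cheapest model whose
# capability set satisfies the request. All entries are made up.
MODELS = [
    {"name": "small", "cost_per_1k": 0.1, "caps": {"chat"}},
    {"name": "medium", "cost_per_1k": 0.5, "caps": {"chat", "tools"}},
    {"name": "large", "cost_per_1k": 2.0, "caps": {"chat", "tools", "vision"}},
]

def route(required_caps: set[str]) -> str:
    """Return the cheapest model that supports every required capability."""
    candidates = [m for m in MODELS if required_caps <= m["caps"]]
    if not candidates:
        raise ValueError(f"no model supports {required_caps}")
    return min(candidates, key=lambda m: m["cost_per_1k"])["name"]

print(route({"chat"}))           # -> small
print(route({"chat", "tools"}))  # -> medium
```

A real router would also weigh latency, context-window size, and per-request token estimates.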

🚀 Quick Start

Get started with PraisonAI in under 1 minute:

# Install
pip install praisonaiagents

# Set API key
export OPENAI_API_KEY=your_key_here

# Create a simple agent
python -c "from praisonaiagents import Agent; Agent(instructions='You are a helpful AI assistant').start('Write a haiku about AI')"

Next Steps: Single Agent Example | Multi Agents | Full Docs


📦 Installation

Python SDK

A lightweight package dedicated to coding:

pip install praisonaiagents

For the full framework with CLI support:

pip install praisonai

🦞 AgentClaw – full UI with bots, memory, knowledge, and gateway:

pip install "praisonai[claw]"
praisonai claw

JavaScript SDK

npm install praisonai

📘 Using Python Code

1. Single Agent

from praisonaiagents import Agent
agent = Agent(instructions="You are a helpful AI assistant")
agent.start("Write a movie script about a robot on Mars")

2. Multi Agents

from praisonaiagents import Agent, Agents

research_agent = Agent(instructions="Research about AI")
summarise_agent = Agent(instructions="Summarise research agent's findings")
agents = Agents(agents=[research_agent, summarise_agent])
agents.start()

3. MCP (Model Context Protocol)

from praisonaiagents import Agent, MCP

# stdio - Local NPX/Python servers
agent = Agent(tools=MCP("npx @modelcontextprotocol/server-memory"))

# Streamable HTTP - Production servers
agent = Agent(tools=MCP("https://api.example.com/mcp"))

# WebSocket - Real-time bidirectional
agent = Agent(tools=MCP("wss://api.example.com/mcp", auth_token="token"))

# With environment variables
agent = Agent(
    tools=MCP(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-brave-search"],
        env={"BRAVE_API_KEY": "your-key"}
    )
)

📖 Full MCP docs – stdio, HTTP, WebSocket, SSE transports

4. Custom Tools

from praisonaiagents import Agent, tool

@tool
def search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

@tool
def calculate(expression: str) -> float:
    """Evaluate a math expression."""
    # Note: eval() executes arbitrary code; use a dedicated math parser
    # for untrusted input in production.
    return eval(expression)

agent = Agent(
    instructions="You are a helpful assistant",
    tools=[search, calculate]
)
agent.start("Search for AI news and calculate 15*4")

📖 Full tools docs – BaseTool, tool packages, 100+ built-in tools
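Under the hood, a decorator like `@tool` typically derives a tool schema from the function's name, docstring, and type hints, which is what the model sees when deciding which tool to call. A simplified, self-contained sketch of that idea (not PraisonAI's actual implementation):

```python
import inspect

def tool_schema(fn):
    """Build a minimal tool description from a function's signature."""
    sig = inspect.signature(fn)
    params = {
        name: (p.annotation.__name__
               if p.annotation is not inspect.Parameter.empty else "any")
        for name, p in sig.parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": params,
    }

def search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

print(tool_schema(search))
# -> {'name': 'search', 'description': 'Search the web for information.', 'parameters': {'query': 'str'}}
```

This is why descriptive docstrings and accurate type hints matter: they become the tool's documentation for the model.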

5. Persistence (Databases)

from praisonaiagents import Agent, db

agent = Agent(
    name="Assistant",
    db=db(database_url="postgresql://localhost/mydb"),
    session_id="my-session"
)
agent.chat("Hello!")  # Auto-persists messages, runs, traces

📖 Full persistence docs – PostgreSQL, MySQL, SQLite, MongoDB, Redis, and 20+ more
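Conceptually, this kind of persistence appends each message to a store keyed by `session_id` and replays it on resume. A minimal self-contained sketch of the idea using sqlite3 (illustrative only; the actual praisonaiagents schema and API may differ):

```python
import sqlite3

# In-memory store for the sketch; a real setup would use a file path
# or a server database URL instead of ":memory:".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (session_id TEXT, role TEXT, content TEXT)")

def save(session_id: str, role: str, content: str) -> None:
    """Append one message to the session's history."""
    conn.execute("INSERT INTO messages VALUES (?, ?, ?)",
                 (session_id, role, content))

def history(session_id: str) -> list[tuple[str, str]]:
    """Replay the stored conversation for a session."""
    return conn.execute(
        "SELECT role, content FROM messages WHERE session_id = ?",
        (session_id,),
    ).fetchall()

save("my-session", "user", "Hello!")
save("my-session", "assistant", "Hi! How can I help?")
print(history("my-session"))
```

Keying everything by `session_id` is what lets a restarted agent pick up the same conversation.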

6. AgentClaw 🦞 (Dashboard UI)

Connect your AI agents to Telegram, Discord, Slack, WhatsApp and more, all from a single command.

pip install "praisonai[claw]"
praisonai claw

Open http://localhost:8082 – the dashboard comes with 13 built-in pages: Chat, Agents, Memory, Knowledge, Channels, Guardrails, Cron, and more. Add messaging channels directly from the UI.

📖 Full Claw docs – platform tokens, CLI options, Docker, and YAML agent mode


🎯 CLI Quick Reference

| Category | Commands |
| --- | --- |
| Execution | praisonai, --auto, --interactive, --chat |
| Research | research, --query-rewrite, --deep-research |
| Planning | --planning, --planning-tools, --planning-reasoning |
| Workflows | workflow run, workflow list, workflow auto |
| Memory | memory show, memory add, memory search, memory clear |
| Knowledge | knowledge add, knowledge query, knowledge list |
| Sessions | session list, session resume, session delete |
| Tools | tools list, tools info, tools search |
| MCP | mcp list, mcp create, mcp enable |
| Development | commit, docs, checkpoint, hooks |
| Scheduling | schedule start, schedule list, schedule stop |

📖 Full CLI reference


✨ Key Features

🤖 Core Agents
| Feature | Code | Docs |
| --- | --- | --- |
| Single Agent | Example | 📖 |
| Multi Agents | Example | 📖 |
| Auto Agents | Example | 📖 |
| Self Reflection AI Agents | Example | 📖 |
| Reasoning AI Agents | Example | 📖 |
| Multi Modal AI Agents | Example | 📖 |

🔄 Workflows
| Feature | Code | Docs |
| --- | --- | --- |
| Simple Workflow | Example | 📖 |
| Workflow with Agents | Example | 📖 |
| Agentic Routing (route()) | Example | 📖 |
| Parallel Execution (parallel()) | Example | 📖 |
| Loop over List/CSV (loop()) | Example | 📖 |
| Evaluator-Optimizer (repeat()) | Example | 📖 |
| Conditional Steps | Example | 📖 |
| Workflow Branching | Example | 📖 |
| Workflow Early Stop | Example | 📖 |
| Workflow Checkpoints | Example | 📖 |
💻 Code & Development
| Feature | Code | Docs |
| --- | --- | --- |
| Code Interpreter Agents | Example | 📖 |
| AI Code Editing Tools | Example | 📖 |
| External Agents (All) | Example | 📖 |
| Claude Code CLI | Example | 📖 |
| Gemini CLI | Example | 📖 |
| Codex CLI | Example | 📖 |
| Cursor CLI | Example | 📖 |

🧠 Memory & Knowledge
| Feature | Code | Docs |
| --- | --- | --- |
| Memory (Short & Long Term) | Example | 📖 |
| File-Based Memory | Example | 📖 |
| Claude Memory Tool | Example | 📖 |
| Add Custom Knowledge | Example | 📖 |
| RAG Agents | Example | 📖 |
| Chat with PDF Agents | Example | 📖 |
| Data Readers (PDF, DOCX, etc.) | CLI | 📖 |
| Vector Store Selection | CLI | 📖 |
| Retrieval Strategies | CLI | 📖 |
| Rerankers | CLI | 📖 |
| Index Types (Vector/Keyword/Hybrid) | CLI | 📖 |
| Query Engines (Sub-Question, etc.) | CLI | 📖 |

🔬 Research & Intelligence
| Feature | Code | Docs |
| --- | --- | --- |
| Deep Research Agents | Example | 📖 |
| Query Rewriter Agent | Example | 📖 |
| Native Web Search | Example | 📖 |
| Built-in Search Tools | Example | 📖 |
| Unified Web Search | Example | 📖 |
| Web Fetch (Anthropic) | Example | 📖 |
📋 Planning & Execution
| Feature | Code | Docs |
| --- | --- | --- |
| Planning Mode | Example | 📖 |
| Planning Tools | Example | 📖 |
| Planning Reasoning | Example | 📖 |
| Prompt Chaining | Example | 📖 |
| Evaluator Optimiser | Example | 📖 |
| Orchestrator Workers | Example | 📖 |

👥 Specialized Agents
| Feature | Code | Docs |
| --- | --- | --- |
| Data Analyst Agent | Example | 📖 |
| Finance Agent | Example | 📖 |
| Shopping Agent | Example | 📖 |
| Recommendation Agent | Example | 📖 |
| Wikipedia Agent | Example | 📖 |
| Programming Agent | Example | 📖 |
| Math Agents | Example | 📖 |
| Markdown Agent | Example | 📖 |
| Prompt Expander Agent | Example | 📖 |

🎨 Media & Multimodal
| Feature | Code | Docs |
| --- | --- | --- |
| Image Generation Agent | Example | 📖 |
| Image to Text Agent | Example | 📖 |
| Video Agent | Example | 📖 |
| Camera Integration | Example | 📖 |

🔌 Protocols & Integration
| Feature | Code | Docs |
| --- | --- | --- |
| MCP Transports | Example | 📖 |
| WebSocket MCP | Example | 📖 |
| MCP Security | Example | 📖 |
| MCP Resumability | Example | 📖 |
| MCP Config Management | Docs | 📖 |
| LangChain Integrated Agents | Example | 📖 |

🛡️ Safety & Control
| Feature | Code | Docs |
| --- | --- | --- |
| Guardrails | Example | 📖 |
| Human Approval | Example | 📖 |
| Rules & Instructions | Docs | 📖 |
βš™οΈ Advanced Features
Feature Code Docs
Async & Parallel Processing Example πŸ“–
Parallelisation Example πŸ“–
Repetitive Agents Example πŸ“–
Agent Handoffs Example πŸ“–
Stateful Agents Example πŸ“–
Autonomous Workflow Example πŸ“–
Structured Output Agents Example πŸ“–
Model Router Example πŸ“–
Prompt Caching Example πŸ“–
Fast Context Example πŸ“–
πŸ› οΈ Tools & Configuration
Feature Code Docs
100+ Custom Tools Example πŸ“–
YAML Configuration Example πŸ“–
100+ LLM Support Example πŸ“–
Callback Agents Example πŸ“–
Hooks Example πŸ“–
Middleware System Example πŸ“–
Configurable Model Example πŸ“–
Rate Limiter Example πŸ“–
Injected Tool State Example πŸ“–
Shadow Git Checkpoints Example πŸ“–
Background Tasks Example πŸ“–
Policy Engine Example πŸ“–
Thinking Budgets Example πŸ“–
Output Styles Example πŸ“–
Context Compaction Example πŸ“–
πŸ“Š Monitoring & Management
Feature Code Docs
Sessions Management Example πŸ“–
Auto-Save Sessions Docs πŸ“–
History in Context Docs πŸ“–
Telemetry Example πŸ“–
Project Docs (.praison/docs/) Docs πŸ“–
AI Commit Messages Docs πŸ“–
@Mentions in Prompts Docs πŸ“–
🖥️ CLI Features
| Feature | Code | Docs |
| --- | --- | --- |
| Slash Commands | Example | 📖 |
| Autonomy Modes | Example | 📖 |
| Cost Tracking | Example | 📖 |
| Repository Map | Example | 📖 |
| Interactive TUI | Example | 📖 |
| Git Integration | Example | 📖 |
| Sandbox Execution | Example | 📖 |
| CLI Compare | Example | 📖 |
| Profile/Benchmark | Docs | 📖 |
| Auto Mode | Docs | 📖 |
| Init | Docs | 📖 |
| File Input | Docs | 📖 |
| Final Agent | Docs | 📖 |
| Max Tokens | Docs | 📖 |

🧪 Evaluation
| Feature | Code | Docs |
| --- | --- | --- |
| Accuracy Evaluation | Example | 📖 |
| Performance Evaluation | Example | 📖 |
| Reliability Evaluation | Example | 📖 |
| Criteria Evaluation | Example | 📖 |

🎯 Agent Skills
| Feature | Code | Docs |
| --- | --- | --- |
| Skills Management | Example | 📖 |
| Custom Skills | Example | 📖 |

⏰ 24/7 Scheduling
| Feature | Code | Docs |
| --- | --- | --- |
| Agent Scheduler | Example | 📖 |

💻 Using JavaScript Code

npm install praisonai
export OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxx

const { Agent } = require('praisonai');
const agent = new Agent({ instructions: 'You are a helpful AI assistant' });
agent.start('Write a movie script about a robot on Mars');

⭐ Star History

Star History Chart


🎓 Video Tutorials

Learn PraisonAI through our comprehensive video series:

View all 22 video tutorials
| Topic | Video |
| --- | --- |
| AI Agents with Self Reflection | Self Reflection |
| Reasoning Data Generating Agent | Reasoning Data |
| AI Agents with Reasoning | Reasoning |
| Multimodal AI Agents | Multimodal |
| AI Agents Workflow | Workflow |
| Async AI Agents | Async |
| Mini AI Agents | Mini |
| AI Agents with Memory | Memory |
| Repetitive Agents | Repetitive |
| Introduction | Introduction |
| Tools Overview | Tools Overview |
| Custom Tools | Custom Tools |
| Firecrawl Integration | Firecrawl |
| User Interface | UI |
| Crawl4AI Integration | Crawl4AI |
| Chat Interface | Chat |
| Code Interface | Code |
| Mem0 Integration | Mem0 |
| Training | Training |
| Realtime Voice Interface | Realtime |
| Call Interface | Call |
| Reasoning Extract Agents | Reasoning Extract |

👥 Contributing

We welcome contributions! Fork the repo, create a branch, and submit a PR → Contributing Guide.


❓ FAQ & Troubleshooting

ModuleNotFoundError: No module named 'praisonaiagents'

Install the package:

pip install praisonaiagents

API key not found / Authentication error

Ensure your API key is set:

export OPENAI_API_KEY=your_key_here

For other providers, see Models docs.

How do I use a local model (Ollama)?

# Start Ollama server first
ollama serve

# Set environment variable
export OPENAI_BASE_URL=http://localhost:11434/v1

See Models docs for more details.

How do I persist conversations to a database?

Use the db parameter:

from praisonaiagents import Agent, db

agent = Agent(
    name="Assistant",
    db=db(database_url="postgresql://localhost/mydb"),
    session_id="my-session"
)

See Persistence docs for supported databases.

How do I enable agent memory?

from praisonaiagents import Agent

agent = Agent(
    name="Assistant",
    memory=True,  # Enables file-based memory (no extra deps!)
    user_id="user123"
)

See Memory docs for more options.

How do I run multiple agents together?

from praisonaiagents import Agent, Agents

agent1 = Agent(instructions="Research topics")
agent2 = Agent(instructions="Summarize findings")
agents = Agents(agents=[agent1, agent2])
agents.start()

See Agents docs for more examples.

How do I use MCP tools?

from praisonaiagents import Agent, MCP

agent = Agent(
    tools=MCP("npx @modelcontextprotocol/server-memory")
)

See MCP docs for all transport options.



Made with ❀️ by the PraisonAI Team

📚 Documentation • GitHub • ▶️ YouTube • 𝕏 X • 💼 LinkedIn