Daytona is a Secure and Elastic Infrastructure for Running AI-Generated Code
Updated Mar 17, 2026 · TypeScript
The easiest way to automatically run workflows described in natural language
The fastest Trust Layer for AI Agents
A free and open-source toolkit for running other people's code in your applications.
Secure autonomous AI agent fleet platform — Docker-isolated, multi-provider, with built-in cost controls. OpenClaw alternative for production use.
AI-native application framework and runtime: simply write a YAML file.
AutomatosX orchestrates AI agents, workflows, and memory
Composable agent runtime with enforced isolation boundaries
Bud AI Foundry - A comprehensive inference stack for compound AI deployment, optimization and scaling. Bud Stack provides intelligent infrastructure automation, performance optimization, and seamless model deployment across multi-cloud/multi-hardware environments.
Benchmarked agent execution runtime for Python. Sub-10ms cold starts, real-time streaming, time-travel debugging, and self-growing tool libraries. Compare 3 sandbox backends: Docker (OpenSandbox), MicroVM, and in-process AST.
A self-evolving, AI-native language and platform for intelligent agents and autonomous software.
Production-grade TypeScript AI runtime focused on reliability, governance, and reproducible LLM systems. Multi-provider gateway, agents, RAG, workflows, policy engine, audit trails, and deterministic testing — built for teams shipping AI in production.
Jupyter notebooks for testing Prisma AIRS AI Runtime with your LLM
JL Engine Local is a local-first runtime and UI stack for the JL Engine.
A tutorial showing how to use Software NGFWs to inspect Google Cloud traffic using Network Security Integration.
L0: The Missing Reliability Substrate for AI. Streaming-first. Reliable. Replayable. Deterministic. Multimodal. Retries. Continuation. Fallbacks (provider & model). Consensus. Parallelization. Guardrails. Atomic event logs. Byte-for-byte replays.
LLM agent runtime with paged virtual memory and spatial context awareness
Demo AI chat app with optional Prisma AIRS Runtime Security for before/after red team testing comparison
A tutorial to deploy and use AI Runtime Security on Google Cloud.