Agentic work orchestrator
for autonomous AI agents

A local-first orchestrator that enables Claude Code agents to work autonomously while you're away, share context with each other, and scale across multiple repositories.

curl -fsSL https://kage.raskell.io/install.sh | sh

Built for Autonomous AI Agents

Shadow agents working invisibly, executing with precision

Supervisor Pattern

Kage acts as a supervisor for AI agents, not just a session manager. Spawn agents with goals, monitor health and progress, enforce iteration limits, and enable checkpoint/resume workflows.
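
As a rough sketch of the idea, the supervision loop boils down to: run bounded iterations toward a goal, and fall back to a checkpoint when the budget runs out. The Goal, Checkpoint, and supervise names below are illustrative, not Kage's actual API.

// Illustrative supervisor loop: spawn an agent with a goal, monitor each
// iteration, and checkpoint when the iteration limit is hit. These types
// only mirror the concepts above; they are not Kage's real interface.

struct Goal {
    description: String,
    max_iterations: u32,
}

enum AgentStatus {
    Working,
    Done,
}

struct Checkpoint {
    goal: String,
    iterations_used: u32,
    notes: Vec<String>,
}

fn run_iteration(goal: &Goal, i: u32) -> AgentStatus {
    // Placeholder for one agent step (one Claude Code turn, in practice).
    println!("iteration {i}: working on '{}'", goal.description);
    if i >= 5 { AgentStatus::Done } else { AgentStatus::Working }
}

fn supervise(goal: Goal) -> Result<(), Checkpoint> {
    for i in 1..=goal.max_iterations {
        if let AgentStatus::Done = run_iteration(&goal, i) {
            return Ok(()); // goal reached within budget
        }
    }
    // Iteration limit reached: save state for review and later resume.
    Err(Checkpoint {
        goal: goal.description,
        iterations_used: goal.max_iterations,
        notes: vec!["paused at iteration limit".into()],
    })
}

fn main() {
    let goal = Goal {
        description: "triage open issues".into(),
        max_iterations: 3,
    };
    match supervise(goal) {
        Ok(()) => println!("goal completed"),
        Err(cp) => println!(
            "checkpointed '{}' after {} iterations: {:?}",
            cp.goal, cp.iterations_used, cp.notes
        ),
    }
}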

Event Sourcing

All agent context is stored as immutable events. Full audit trails, cross-agent context sharing, replay for debugging, and time-travel queries.
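
Conceptually, that is an append-only log you derive state from by replaying it. The event variants below are invented for illustration; Kage's real schema may differ.

// Conceptual append-only event log: events are only ever appended, and the
// current state is derived by replaying them.

#[derive(Debug, Clone)]
enum Event {
    GoalAssigned { agent: String, goal: String },
    DiscoveryRecorded { agent: String, note: String },
    CheckpointSaved { agent: String, iteration: u32 },
}

#[derive(Default)]
struct EventLog {
    events: Vec<Event>, // immutable history: append-only, never edited in place
}

impl EventLog {
    fn append(&mut self, event: Event) {
        self.events.push(event);
    }

    /// Replay the first `len` events to answer "what did we know then?" --
    /// the time-travel query idea in miniature.
    fn replay(&self, len: usize) -> Vec<&Event> {
        self.events.iter().take(len).collect()
    }
}

fn main() {
    let mut log = EventLog::default();
    log.append(Event::GoalAssigned { agent: "a1".into(), goal: "refactor auth".into() });
    log.append(Event::DiscoveryRecorded { agent: "a1".into(), note: "tokens expire early".into() });
    log.append(Event::CheckpointSaved { agent: "a1".into(), iteration: 5 });

    for event in log.replay(2) {
        println!("{event:?}");
    }
}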

Namespace Organization

Repositories are grouped into namespaces. Agents within a namespace can share context and coordinate work seamlessly across your codebase.
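
In miniature, a namespace is just a grouping that decides which agents may see each other's context. The toy model below is an assumption made for illustration, not Kage's data model.

use std::collections::HashMap;

// Toy model of namespaces grouping repositories; only agents whose repos
// share a namespace see each other's context.

#[derive(Default)]
struct Namespaces {
    repos: HashMap<String, Vec<String>>, // namespace -> repositories
}

impl Namespaces {
    fn add_repo(&mut self, namespace: &str, repo: &str) {
        self.repos.entry(namespace.to_string()).or_default().push(repo.to_string());
    }

    fn same_namespace(&self, a: &str, b: &str) -> bool {
        self.repos
            .values()
            .any(|r| r.iter().any(|x| x == a) && r.iter().any(|x| x == b))
    }
}

fn main() {
    let mut ns = Namespaces::default();
    ns.add_repo("platform", "api-server");
    ns.add_repo("platform", "web-client");
    ns.add_repo("infra", "terraform");

    // Context can flow between api-server and web-client, but not to terraform.
    println!("{}", ns.same_namespace("api-server", "web-client")); // true
    println!("{}", ns.same_namespace("api-server", "terraform"));  // false
}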

Single Binary

Distributed as a single static binary with zero runtime dependencies. No database server, no external services. Just download and run on Linux, macOS, or Windows.

Context Sharing

Two-tier memory system with working and long-term storage. Agents automatically share discoveries, learned patterns, and decisions with each other.
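
A rough sketch of the two-tier idea, with invented names: short-lived working notes, plus a long-term store that other agents can recall from once a note is promoted.

use std::collections::HashMap;

// Sketch of a two-tier memory. Tier names and behavior are illustrative only.

#[derive(Default)]
struct Memory {
    working: Vec<String>,               // short-lived, per-task context
    long_term: HashMap<String, String>, // durable, shared across agents
}

impl Memory {
    fn note(&mut self, text: &str) {
        self.working.push(text.to_string());
    }

    /// Promote a working note into long-term memory under a key, making it
    /// visible to other agents in the namespace.
    fn promote(&mut self, key: &str, text: &str) {
        self.long_term.insert(key.to_string(), text.to_string());
    }

    fn recall(&self, key: &str) -> Option<&String> {
        self.long_term.get(key)
    }
}

fn main() {
    let mut shared = Memory::default();
    shared.note("integration tests need a local postgres");
    shared.promote("testing.postgres", "integration tests need a local postgres");

    // A second agent can later recall the promoted discovery.
    println!("{:?}", shared.recall("testing.postgres"));
}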

Checkpoints & Resume

Save agent state on iteration limits. Review progress, provide guidance, and resume where you left off. Never lose work when stepping away.
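
The review-and-resume flow, reduced to types that are purely illustrative: a saved checkpoint plus a line of human guidance becomes the starting point for the next run.

// Tiny illustration of review-and-resume; not Kage's storage format.

struct Checkpoint {
    goal: String,
    iterations_used: u32,
    summary: String,
}

struct ResumedRun {
    goal: String,
    guidance: String,
    iteration_budget: u32,
}

fn resume(cp: Checkpoint, guidance: &str, extra_iterations: u32) -> ResumedRun {
    ResumedRun {
        goal: cp.goal,
        guidance: format!(
            "after {} iterations: {}. guidance: {}",
            cp.iterations_used, cp.summary, guidance
        ),
        iteration_budget: extra_iterations,
    }
}

fn main() {
    let cp = Checkpoint {
        goal: "migrate config parsing".into(),
        iterations_used: 10,
        summary: "parser ported, tests failing on edge cases".into(),
    };
    let run = resume(cp, "focus on the failing edge-case tests first", 5);
    println!(
        "resuming '{}' with {} iterations: {}",
        run.goal, run.iteration_budget, run.guidance
    );
}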

Agent Dashboard

See all your agent instances at a glance. Spot which agents are idle and ready for work, and jump between them to keep your pipeline flowing.

Subscription Pooling

Register multiple Claude Code subscriptions and let Kage route work to available quota. Hit rate limits less, build more.
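
One plausible selection rule, shown purely as an assumption: route each unit of work to the registered subscription with the most remaining quota, and queue if none has headroom.

// Sketch of quota-aware routing across multiple subscriptions. Field names
// and the selection rule are illustrative, not Kage's actual policy.

struct Subscription {
    name: String,
    remaining_quota: u32,
}

fn pick(pool: &[Subscription]) -> Option<&Subscription> {
    pool.iter()
        .filter(|s| s.remaining_quota > 0)
        .max_by_key(|s| s.remaining_quota)
}

fn main() {
    let pool = vec![
        Subscription { name: "personal".into(), remaining_quota: 2 },
        Subscription { name: "team".into(), remaining_quota: 40 },
    ];

    match pick(&pool) {
        Some(s) => println!("routing work to '{}'", s.name),
        None => println!("all subscriptions are rate limited; queueing work"),
    }
}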

Why Kage?

AI agents are powerful, but managing them at scale is challenging.

Running a single Claude Code session is straightforward. But what happens when you need agents working across multiple repositories? When you want to step away and let them work autonomously? When they need to share what they've learned?

Kage solves these problems with a supervisor architecture. It spawns agents with specific goals, monitors their progress, and enforces guardrails. When an agent hits its iteration limit, Kage saves a checkpoint so you can review, guide, and resume.

The name "Kage" (影) means "shadow" in Japanese. Like shadows working behind the scenes, Kage enables AI agents to execute with precision while you focus on what matters.

Local-First

Your data stays on your machine. Single binary with embedded storage. No cloud dependencies, no external services. Works offline.

Claude Code Native

First-class support for Claude Code as the primary agent. Trait-based architecture allows extending to other providers when needed.
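
The shape of that extension point might look like the trait below; the trait and method names are illustrative rather than Kage's real interface.

// Sketch of the trait-based provider idea: one trait for "an agent backend",
// with Claude Code as the primary implementation and room for others.

trait AgentProvider {
    fn name(&self) -> &str;
    fn run_turn(&self, prompt: &str) -> String;
}

struct ClaudeCode;

impl AgentProvider for ClaudeCode {
    fn name(&self) -> &str {
        "claude-code"
    }

    fn run_turn(&self, prompt: &str) -> String {
        // A real backend would drive a Claude Code session here.
        format!("[claude-code] handled: {prompt}")
    }
}

fn dispatch(provider: &dyn AgentProvider, prompt: &str) {
    println!("{} -> {}", provider.name(), provider.run_turn(prompt));
}

fn main() {
    let provider = ClaudeCode;
    dispatch(&provider, "summarize failing CI jobs");
}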

Built in Rust

Memory-safe, fast, and reliable. Async runtime with Tokio. Tiny resource footprint for long-running daemon processes.

Kage is built for developers who want to scale their AI-assisted workflows without sacrificing control or transparency.

Enterprise Ready

Local-first by default, cloud-enabled when you need it

Multi-User Tenancy

gRPC server mode enables team-wide deployments with isolated namespaces and shared context.

Credential Management

Secure storage via OS keychain — macOS Keychain, Linux Secret Service, Windows Credential Manager.
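
For reference, this is how a Rust program typically reaches those OS stores through the keyring crate; it shows the general mechanism, not Kage's internal code.

// Talking to the OS credential store (macOS Keychain, Linux Secret Service,
// Windows Credential Manager) via the keyring crate.
//
// Cargo.toml: keyring = "2"

use keyring::Entry;

fn main() -> keyring::Result<()> {
    // A credential is addressed by a (service, user) pair.
    let entry = Entry::new("kage-example", "anthropic-api-key")?;

    entry.set_password("sk-...-redacted")?;
    let stored = entry.get_password()?;
    println!("retrieved {} characters from the OS keychain", stored.len());

    entry.delete_password()?; // clean up the demo secret
    Ok(())
}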

Cloud Storage

Optional cloud backends for memory and context history — S3, GCS, Azure Blob supported.

Audit & Compliance

Immutable event logs for compliance, namespace-level permissions, and API key scoping.

Ready to orchestrate your AI agents?

Get started with Kage in minutes. Read the docs or explore the source code.