Open-source middleware that verifies every LLM output against real data and ensures every agent action stays within policy.
Drop-in middleware for any LLM application. Five-minute integration.
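A "drop-in" integration typically means the only change is where the request is sent. The sketch below assumes the proxy exposes an OpenAI-compatible endpoint on localhost:8080; the URLs, route, and model name are illustrative, not the shipped defaults.

```python
# Hypothetical endpoints; the actual Trust Proxy URL and routes may differ.
PROVIDER_URL = "https://api.openai.com/v1/chat/completions"
PROXY_URL = "http://localhost:8080/v1/chat/completions"  # assumed local proxy

def build_request(prompt: str, use_proxy: bool = True) -> dict:
    """Point an existing LLM call at the Trust Proxy instead of the provider.

    The request body is unchanged; only the URL swaps, which is what makes
    the middleware drop-in."""
    return {
        "url": PROXY_URL if use_proxy else PROVIDER_URL,
        "body": {
            "model": "gpt-4o-mini",  # illustrative model name
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = build_request("Summarize our Q3 report.")
```

Because the payload shape is untouched, reverting to a direct provider call is a one-line change.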
Real-time hallucination detection using reasoning-based verification — not embeddings. Extracts claims, cross-references sources, ranks by freshness and authority.
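The extract-then-verify pipeline can be sketched with toy stand-ins: sentence splitting in place of reasoning-based claim extraction, and substring matching in place of real cross-referencing. The scoring weights and data shapes are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Source:
    text: str
    freshness: float   # 0..1, newer is higher (illustrative scoring)
    authority: float   # 0..1, more trusted is higher

def extract_claims(answer: str) -> list[str]:
    """Toy claim extraction: one claim per sentence.
    The real engine uses reasoning-based extraction, not this heuristic."""
    return [s.strip() for s in answer.split(".") if s.strip()]

def verify_claim(claim: str, sources: list[Source]) -> float:
    """Score a claim by the best freshness/authority-weighted source that
    mentions it. 0.0 means unsupported, i.e. a likely hallucination."""
    hits = [0.5 * s.freshness + 0.5 * s.authority
            for s in sources if claim.lower() in s.text.lower()]
    return max(hits, default=0.0)

sources = [Source("Revenue grew 12% in Q3", freshness=0.9, authority=0.8)]
scores = {c: verify_claim(c, sources)
          for c in extract_claims("Revenue grew 12% in Q3. Headcount doubled.")}
```

A supported claim scores by its best source; an unsupported one scores 0.0 and would be flagged.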
JWT-based agent identity with scoped credentials. Declarative allow/deny policies using glob patterns. Real-time action verification before execution.
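Declarative glob policies can be evaluated with standard shell-style matching, as in this minimal sketch. The policy field names and the deny-overrides-allow precedence are assumptions, not the shipped schema.

```python
from fnmatch import fnmatch

# Illustrative policy shape; field names are assumptions.
policy = {
    "allow": ["db.read.*", "email.send.internal"],
    "deny":  ["db.read.pii.*"],
}

def is_allowed(action: str, policy: dict) -> bool:
    """Evaluate an action against glob allow/deny lists before execution.
    Assumes deny rules take precedence over allow rules."""
    if any(fnmatch(action, pat) for pat in policy["deny"]):
        return False
    return any(fnmatch(action, pat) for pat in policy["allow"])
```

Anything not explicitly allowed is blocked by default, which is the safe posture for agent actions.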
Append-only WAL-based log with SHA-256 hash chain verification. Detect tampering instantly. Built for EU AI Act, HIPAA, and SOC 2 compliance.
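A SHA-256 hash chain makes a log tamper-evident because each entry's hash covers the previous entry's hash. This is a generic hash-chain sketch, not the project's storage format; field names are illustrative.

```python
import hashlib
import json

def chain_append(log: list[dict], event: dict) -> None:
    """Append an event whose hash covers the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"event": event, "prev": prev, "hash": entry_hash})

def chain_verify(log: list[dict]) -> bool:
    """Recompute every hash; any edited or reordered entry breaks the chain."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
chain_append(log, {"agent": "billing-bot", "action": "db.read.orders"})
chain_append(log, {"agent": "billing-bot", "action": "email.send.internal"})
```

Editing any past event invalidates its hash and every hash after it, so tampering is detected on the next verification pass.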
Real-time metrics, searchable audit log, agent and policy management. Monitor hallucination rates, blocked actions, and compliance status.
Swap between OpenAI, Anthropic, Gemini, Groq, Ollama, and more via environment variables. Zero code changes. Supports cloud and local models.
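Environment-driven provider selection can look like the following sketch. The variable names (LLM_PROVIDER, LLM_BASE_URL, LLM_MODEL) and default URLs are assumptions for illustration; check the project's configuration reference for the real keys.

```python
import os

# Illustrative defaults; actual configuration keys may differ.
DEFAULTS = {
    "openai":    "https://api.openai.com/v1",
    "anthropic": "https://api.anthropic.com/v1",
    "ollama":    "http://localhost:11434/v1",  # local model, no cloud key
}

def provider_config(env=os.environ) -> dict:
    """Resolve the active provider purely from environment variables,
    so switching providers requires no code changes."""
    name = env.get("LLM_PROVIDER", "openai")
    return {
        "provider": name,
        "base_url": env.get("LLM_BASE_URL", DEFAULTS[name]),
        "model": env.get("LLM_MODEL", ""),
    }
```

Swapping from a cloud provider to a local Ollama model is then a matter of exporting a different LLM_PROVIDER value before launch.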
Docker Compose starts the entire stack in seconds. Includes PowerShell and Bash launch scripts with automatic demo data seeding.
Your App → Trust Proxy → LLM Provider
                │
           ┌────┴────┐
     Grounding     Guard
      Engine       Engine
           └────┬────┘
           Audit Store

Grounding Engine: claim extraction, source verification, conflict resolution, temporal checks.
Guard Engine: JWT agent identity, allow/deny policies, real-time evaluation.
Audit Store: SHA-256 audit chain.
Open-source. MIT-licensed. Runs anywhere Docker does.