Ground-Keeps

AI That You Can Trust

Open-source middleware that verifies every LLM output against real data and ensures every agent action stays within policy.

$ docker compose up --build
$ curl http://localhost:3000/health
→ {"status":"ok","service":"trust-proxy"}
groundkeeps.in — built in India

Works With

OpenAI Anthropic Google Gemini Groq Ollama DeepSeek

Two Engines, One API

Drop-in middleware for any LLM application. Five-minute integration.

🔍

Grounding Engine

Real-time hallucination detection using reasoning-based verification — not embeddings. Extracts claims, cross-references sources, ranks by freshness and authority.
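A minimal sketch of the claim-extraction-and-ranking idea, assuming naive sentence splitting and illustrative scoring weights (the real engine uses reasoning-based verification, not this heuristic):

```python
from datetime import date

def extract_claims(answer: str) -> list[str]:
    # Stand-in for reasoning-based claim extraction: split on sentences.
    return [s.strip() for s in answer.split(".") if s.strip()]

def rank_sources(sources: list[dict], today: date) -> list[dict]:
    # Rank by a blend of authority and freshness; the 0.6/0.4 weights
    # are assumptions for illustration, not the engine's actual values.
    def score(src: dict) -> float:
        age_days = (today - src["published"]).days
        freshness = 1.0 / (1.0 + age_days / 365)  # newer sources score higher
        return 0.6 * src["authority"] + 0.4 * freshness
    return sorted(sources, key=score, reverse=True)

sources = [
    {"name": "old-authoritative", "authority": 0.9, "published": date(2020, 1, 1)},
    {"name": "fresh-blog",        "authority": 0.5, "published": date(2024, 12, 1)},
]
ranked = rank_sources(sources, today=date(2025, 1, 1))
```

Here the recent low-authority source outranks the stale high-authority one, showing how freshness and authority trade off.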

🛡️

Guard Engine

JWT-based agent identity with scoped credentials. Declarative allow/deny policies using glob patterns. Real-time action verification before execution.
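The allow/deny glob evaluation can be sketched as follows; the policy shape and deny-overrides-allow precedence are assumptions, not Guard's actual schema:

```python
from fnmatch import fnmatch

# Hypothetical policy document: action strings matched against glob patterns.
policy = {
    "allow": ["fs:read:/data/*", "http:get:*"],
    "deny":  ["fs:read:/data/secrets/*"],
}

def is_allowed(action: str, policy: dict) -> bool:
    # Deny rules take precedence over allow rules.
    if any(fnmatch(action, pattern) for pattern in policy.get("deny", [])):
        return False
    return any(fnmatch(action, pattern) for pattern in policy.get("allow", []))
```

With this policy, `fs:read:/data/report.csv` is allowed while `fs:read:/data/secrets/key` is blocked, even though both match the broader allow pattern.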

📋

Audit Store

Append-only WAL-based log with SHA-256 hash chain verification. Detect tampering instantly. Built for EU AI Act, HIPAA, and SOC 2 compliance.
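A minimal sketch of the SHA-256 hash-chain idea: each entry commits to the previous entry's hash, so any edit breaks every hash after it. Field names here are illustrative, not the Audit Store's real schema:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append(chain: list, record: dict) -> None:
    # Each entry's hash covers the previous hash plus the record payload.
    prev = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"record": record, "prev": prev, "hash": digest})

def verify(chain: list) -> bool:
    # Recompute every hash; any tampered record breaks the chain.
    prev = GENESIS
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log: list = []
append(log, {"agent": "billing-bot", "action": "fs:read:/data/report.csv"})
append(log, {"agent": "billing-bot", "action": "http:get:api.example.com"})
```

Flipping a single byte in any earlier record makes `verify` fail, which is what makes tampering instantly detectable.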

📊

React Dashboard

Real-time metrics, searchable audit log, agent and policy management. Monitor hallucination rates, blocked actions, and compliance status.

🔌

Provider Agnostic

Swap between OpenAI, Anthropic, Gemini, Groq, Ollama, and more via environment variables. Zero code changes. Supports cloud and local models.
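A sketch of env-var-driven provider selection, assuming hypothetical variable names (`LLM_PROVIDER`, `LLM_BASE_URL`, `LLM_MODEL`) and default endpoints; the project's actual variables may differ:

```python
import os

# Illustrative defaults per provider; not the project's real configuration table.
PROVIDER_DEFAULTS = {
    "openai":    {"base_url": "https://api.openai.com/v1",    "model": "gpt-4o-mini"},
    "anthropic": {"base_url": "https://api.anthropic.com/v1", "model": "claude-sonnet"},
    "ollama":    {"base_url": "http://localhost:11434/v1",    "model": "llama3"},
}

def provider_config(env=os.environ) -> dict:
    # Pick a provider from the environment; no code changes needed to swap.
    name = env.get("LLM_PROVIDER", "openai")
    cfg = dict(PROVIDER_DEFAULTS[name])
    # Any default can be overridden the same way.
    cfg["base_url"] = env.get("LLM_BASE_URL", cfg["base_url"])
    cfg["model"] = env.get("LLM_MODEL", cfg["model"])
    return cfg
```

Setting `LLM_PROVIDER=ollama` is then enough to route traffic to a local model.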

🐳

One-Command Deploy

Docker Compose starts the entire stack in seconds. Included PowerShell and Bash launch scripts seed demo data automatically.

Architecture

   Your App ──▶ Trust Proxy ──▶ LLM Provider
                    │
               ┌────┴────┐
           Grounding    Guard
            Engine      Engine
               └────┬────┘
                Audit Store

   Claim extraction       JWT agent identity
   Source verification    Allow/deny policies
   Conflict resolution    Real-time evaluation
   Temporal checks        SHA-256 audit chain
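The flow in the diagram can be sketched end to end; every function body below is a stand-in for the real engine, and the dict shapes are assumptions:

```python
def allowed(action: str, policy: dict) -> bool:
    # Stand-in for the Guard Engine's policy evaluation.
    return action in policy["allow"]

def call_llm(prompt: str) -> str:
    # Stand-in for the upstream LLM provider call.
    return f"echo: {prompt}"

def grounded(answer: str) -> bool:
    # Stand-in for the Grounding Engine's verification verdict.
    return bool(answer)

def handle(action: str, prompt: str, policy: dict, audit_log: list) -> dict:
    # Guard check happens before the provider is ever called.
    if not allowed(action, policy):
        audit_log.append(("blocked", action))
        return {"error": "action denied by policy"}
    answer = call_llm(prompt)            # forwarded to the provider
    verdict = grounded(answer)           # verified on the way back
    audit_log.append(("answer", verdict))  # every outcome is logged
    return {"answer": answer, "grounded": verdict}
```

Blocked actions never reach the provider, and both outcomes land in the audit log.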

Ready to deploy?

Open-source. MIT-licensed. Runs anywhere Docker does.