Open-source AI governance — prove what your agents do

Keep your AI agents honest,
with evidence you can verify.

Ardur sits between your AI agent and the tools it uses. It enforces boundaries before actions happen, then gives you signed proof of every decision — so you can trust what your agents do, not just what they say.

Real-world test — Cloud model governed by Ardur

Model: Cloud Model (1T params)
Tool calls through proxy: 35 (all permitted)
Denials: 0 (zero false positives)
Files created: 18 of 20 (zero denials)
39 tool calls evaluated across cloud + local models
0 unauthorized actions allowed
<5ms average governance overhead
100% open source, MIT licensed

How it works

Declare. Enforce. Prove.

Ardur gives you three things that plain logs and chat transcripts can't give you.

1. Declare what the agent can do

Write a mission profile in plain Markdown — which tools are allowed, which files it can touch, what's off-limits. No YAML config hell, no custom DSLs. Just a file called ARDUR.md that reads like English.
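A mission profile might look something like this. This is a hypothetical sketch of the idea only; the actual ARDUR.md schema, section names, and tool identifiers may differ, so check the repo's examples for the real format:

```markdown
# Mission: build the demo web app

## Allowed tools
- read_file
- write_file

## Writable paths
- ./app/

## Off-limits
- Deleting files
- Anything outside ./app/
```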

2. Stop out-of-bounds actions before they run

When your agent tries to step outside its mission, Ardur says no before the action executes. Not after. Not in a log you'll check next week. Right then, at runtime, with a clear reason why.
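The core idea of pre-execution enforcement can be sketched as a simple gate: evaluate the call against the mission profile first, and only execute if it passes. This is an illustration of the technique, not Ardur's actual API; the `Policy` class, tool names, and paths below are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """Hypothetical mission profile: allowed tools and writable path prefixes."""
    allowed_tools: set = field(default_factory=set)
    writable_prefixes: tuple = ()

def evaluate(policy, tool, path=None):
    """Decide BEFORE the action runs; return (allowed, reason)."""
    if tool not in policy.allowed_tools:
        return False, f"tool '{tool}' is not in the mission profile"
    if path is not None and not path.startswith(policy.writable_prefixes):
        return False, f"path '{path}' is outside the permitted directories"
    return True, "within mission bounds"

policy = Policy(allowed_tools={"read_file", "write_file"},
                writable_prefixes=("./app/",))

print(evaluate(policy, "write_file", "./app/index.html"))  # (True, 'within mission bounds')
print(evaluate(policy, "delete_file", "./app/index.html")) # denied: tool not allowed
print(evaluate(policy, "write_file", "/etc/passwd"))       # denied: path out of bounds
```

The point of the shape: the deny decision carries its reason with it, so the agent (and you) see why an action was blocked at the moment it was blocked.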

3. Get cryptographically verifiable proof

Every tool call produces a signed receipt, chained together with SHA-256 hashes. You can verify the entire session later — what was allowed, what was denied, and whether anyone tampered with the record.
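Hash-chained receipts can be illustrated in a few lines. This is a sketch of the general technique, not Ardur's actual receipt format or signing scheme; the field names are invented, and a real deployment would use asymmetric signatures and proper key management rather than a hard-coded HMAC key:

```python
import hashlib
import hmac
import json

KEY = b"demo-signing-key"  # illustration only

def make_receipt(prev_hash, tool, decision):
    """Each receipt commits to the previous one via its SHA-256 hash."""
    body = json.dumps({"prev": prev_hash, "tool": tool, "decision": decision},
                      sort_keys=True).encode()
    return {"body": body.decode(),
            "hash": hashlib.sha256(body).hexdigest(),
            "sig": hmac.new(KEY, body, hashlib.sha256).hexdigest()}

def verify_chain(receipts):
    """Recompute every hash and signature; any tampering breaks the chain."""
    prev = "genesis"
    for r in receipts:
        body = r["body"].encode()
        if hashlib.sha256(body).hexdigest() != r["hash"]:
            return False
        expected_sig = hmac.new(KEY, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected_sig, r["sig"]):
            return False
        if json.loads(r["body"])["prev"] != prev:
            return False
        prev = r["hash"]
    return True

chain, prev = [], "genesis"
for tool, decision in [("write_file", "allow"), ("delete_file", "deny")]:
    receipt = make_receipt(prev, tool, decision)
    chain.append(receipt)
    prev = receipt["hash"]

print(verify_chain(chain))  # True
chain[1]["body"] = chain[1]["body"].replace('"deny"', '"allow"')
print(verify_chain(chain))  # False: the hash no longer matches the tampered body
```

Because each receipt's body includes the previous receipt's hash, rewriting any single decision invalidates every receipt after it, which is what makes the record tamper-evident.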

Who is this for?

If you run AI agents that touch your files, your terminal, or your servers — Ardur is for you.

Developers using coding agents

Use Claude Code, Codex, or any terminal agent? Ardur makes sure it stays in the right directory, doesn't delete things it shouldn't, and leaves a paper trail you can actually trust.

Teams running AI in production

Need to prove to your security team, your customers, or your compliance auditor what your AI agents actually did? Ardur's receipt chains are designed for exactly that.

AI tinkerers & researchers

Running local models via Ollama? Experimenting with agent frameworks? Ardur plugs in without changing your stack — just point your agent at the proxy and get governance for free.

Proven with real models

Not synthetic benchmarks. Real cloud models, real governance.

We tested Ardur by asking a 1-trillion-parameter cloud model to build an entire web app — with every single tool call going through the governance proxy. Here's what happened.

Cloud Model (cloud · 1T params)

Duration: 12 minutes
Tool calls: 35
Files created: 18 of 20
Denials: 0

All calls permitted; zero false denials.

Local Model (local · 5GB)

Duration: 15 minutes
Tool calls: 4
Files created: 4 of 20
Denials: 0

All calls permitted; the local model was too slow for sustained work.

Works with your stack

Plugs into the tools you already use.

Claude Code: native plugin with PreToolUse / PostToolUse hooks
LangChain: runnable quickstart in the examples directory
LangGraph: runnable quickstart in the examples directory
AutoGen: v0.4+ quickstart with governance proxy
Ollama: works with any local or cloud model
Cedar: policy engine bridge for advanced rules
SPIFFE / SPIRE: workload identity for production deployments
Any HTTP agent: framework-agnostic REST API

Open source, MIT licensed, honest about limits

Ardur is pre-release software that's already solving real problems.

We publish the code, the tests, the specs, the audit trail, and the caveats — all in one repo. No marketing fluff, no "schedule a demo," no hidden enterprise tier.