
How the Console Works

The coordination layer for governed AI agents. Agents evaluate contracts locally; the console stores events, manages approvals, pushes updates, and monitors the fleet.

AI Assistance

Right page if: you need to understand what runs in the agent process vs. the console server -- the boundary principle, the from_server() connection sequence, and fail-closed behavior when the server is down. Wrong page if: you want the internal tech stack and source layout (see https://docs.edictum.ai/docs/console/architecture) or the security model (see https://docs.edictum.ai/docs/console/concepts/security-model). Gotcha: the console never evaluates contracts in production. The only exception is the /api/v1/bundles/evaluate playground endpoint, which is a development tool -- agents never call it.

Edictum Console is the coordination layer for governed AI agents. It does not evaluate contracts -- agents do that locally. The console stores audit events, manages approval workflows, pushes contract updates, and monitors your fleet. One Docker image, three services, zero agent restarts when contracts change.

The Three Components

Your agent process:

  • edictum (core library) -- pipeline pre/post evaluation, sandbox enforcement, session tracking, and the ALLOW / DENY decision
  • edictum[server] (SDK) -- ServerAuditSink, ServerApprovalBackend, ServerBackend, ServerContractSource
  • Framework adapter -- LangChain / CrewAI / Claude / ...

The agent reaches the console over HTTP; the console pushes updates back over SSE.

Edictum Console:

  • FastAPI backend -- contract storage, bundle deployment, approval workflow, event ingestion, SSE push, fleet monitoring
  • Postgres + Redis
  • React SPA (dashboard) -- contract management, event feed, approval queue, agent fleet view, settings and keys

Three components, one principle: evaluation = core library, coordination = console.

The Boundary Principle

The console never evaluates contracts in production. Every allow/deny decision runs in the agent process, in the core library, with zero network latency. The console handles everything that requires coordination across agents and humans.

| Capability | Core (agent-side) | Console (server-side) |
|---|---|---|
| Contract evaluation (pre, post, session, sandbox) | Yes | No (except playground) |
| Sandbox enforcement | Yes | No |
| Session tracking (single process) | Yes (MemoryBackend) | -- |
| Session tracking (distributed) | -- | Yes (ServerBackend) |
| Audit to stdout/file/OTel | Yes | -- |
| Centralized audit dashboard | -- | Yes |
| Approval (local CLI) | Yes (LocalApprovalBackend) | -- |
| Approval (production HITL) | -- | Yes (ServerApprovalBackend) |
| SSE hot-reload | -- | Yes |
| Fleet monitoring + drift detection | -- | Yes |
| Contract management UI | -- | Yes |
| Notification fan-out | -- | Yes |
| Bundle signing (Ed25519) | -- | Yes |

The one exception: POST /api/v1/bundles/evaluate is a playground endpoint for testing contracts in the dashboard. It evaluates a tool call against a bundle and returns the verdict. This is a development tool -- agents never call it. Production evaluation is always agent-side.
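
For illustration, a playground call might look like the following. Only the endpoint path and its purpose (evaluate one tool call against a bundle, return the verdict) come from this page; the field names in the request body are assumptions, not documented API:

```http
POST /api/v1/bundles/evaluate
Content-Type: application/json

{
  "bundle_name": "default",
  "env": "dev",
  "tool": "read_file",
  "args": { "path": "data.csv" }
}
```

The response carries the verdict for the simulated call; its exact shape is likewise an assumption and worth checking against the dashboard's network tab before relying on it.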

How an Agent Connects

When you call Edictum.from_server(), the SDK wires five components in sequence:

from_server(url, api_key, agent_id, env, bundle_name) runs this connection sequence:

1. Create EdictumServerClient -- the HTTP base client for all API calls.
2. Fetch the current bundle -- GET /api/v1/bundles/{name}/current?env={env} returns yaml_bytes (base64), version, and signature; the SDK decodes the YAML and loads the contracts into the pipeline.
3. Start the SSE subscription -- GET /api/v1/stream?env={env}&bundle_name={name} receives contract_update and approval_decided events; on update, Edictum.reload() atomically swaps contracts.
4. Wire ServerAuditSink -- batches audit events (50 events or 5 seconds, whichever comes first) to POST /api/v1/events, with a 10,000-event buffer and silent dedup by call_id.
5. Wire ServerApprovalBackend and ServerBackend -- approvals POST to create and GET to poll (every 2 seconds); sessions use GET/PUT/DELETE plus atomic increment.

The resulting Edictum instance is ready, with the same API as local usage.

After setup, agent code is identical to local usage:

result = await guard.run("read_file", {"path": "data.csv"}, read_file)

The pipeline evaluates locally. Events stream to the console in the background. If a contract requires approval, the SDK posts a request and polls for the decision.
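
The ServerAuditSink flush policy described above (batches of 50 events or 5 seconds, a 10,000-event buffer that drops the oldest when full, silent dedup by call_id) can be sketched as a standalone batcher. This is an illustrative model, not the SDK's implementation; the class and method names are invented.

```python
import time

class AuditBatcher:
    """Illustrative model of a batching audit sink: flush at 50
    events or 5 seconds, cap the buffer at 10,000 (dropping the
    oldest), and silently dedup by call_id."""

    MAX_BATCH = 50
    FLUSH_INTERVAL = 5.0
    MAX_BUFFER = 10_000

    def __init__(self, send):
        self.send = send              # callable that POSTs one batch
        self.buffer = []
        self.seen = set()             # call_ids already accepted
        self.last_flush = time.monotonic()

    def add(self, event):
        if event["call_id"] in self.seen:
            return                    # silent dedup by call_id
        self.seen.add(event["call_id"])
        if len(self.buffer) >= self.MAX_BUFFER:
            self.buffer.pop(0)        # buffer full: drop oldest event
        self.buffer.append(event)
        if len(self.buffer) >= self.MAX_BATCH:
            self.flush()              # size trigger: 50 events

    def maybe_flush(self):
        # called periodically: time trigger, 5 seconds since last flush
        if self.buffer and time.monotonic() - self.last_flush >= self.FLUSH_INTERVAL:
            self.flush()

    def flush(self):
        self.send(self.buffer)
        self.buffer = []
        self.last_flush = time.monotonic()

batches = []
b = AuditBatcher(batches.append)
for i in range(120):
    b.add({"call_id": f"c{i}", "tool": "read_file"})
b.add({"call_id": "c0"})              # duplicate call_id: ignored
print(len(batches), [len(x) for x in batches], len(b.buffer))
# → 2 [50, 50] 20
```

The two triggers are independent: bursts flush on size, trickles flush on the periodic timer.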

Request Lifecycle

A complete tool call with a server-connected agent:

In the agent process:

1. The agent calls "delete_records".
2. The pipeline runs pre_execute in the agent process.
3. The contract's effect is approve, so the call requires human approval.
4. The SDK sends POST /api/v1/approvals, then polls GET /approvals/{id} every 2 seconds.
5. Once the decision arrives, and only if approved, the tool executes.
6. post_execute checks postconditions locally.
7. The audit event is queued; ServerAuditSink sends it in a batched POST.

On the console:

1. The approval is created with status PENDING.
2. A notification fires to Telegram, Slack, or Discord with inline buttons.
3. A human clicks "Approve", which issues PUT /approvals/{id} with decision: approved.

For calls without approval gates, the flow is simpler: pre_execute locally, tool runs, post_execute locally, audit event batched and sent.
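
The create-then-poll loop above can be sketched as a small helper. This is an illustrative model under stated assumptions: `create` and `poll` stand in for the real POST /api/v1/approvals and GET /approvals/{id} calls, the 2-second interval comes from this page, and the timeout value and fail-closed timeout behavior are assumptions.

```python
import itertools
import time

def wait_for_approval(create, poll, interval=2.0, timeout=600.0, sleep=time.sleep):
    """Illustrative approval wait: POST once to create the request,
    then GET every `interval` seconds until a decision or timeout."""
    approval_id = create()                    # stands in for POST /api/v1/approvals
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = poll(approval_id)            # stands in for GET /approvals/{id}
        if status in ("approved", "denied"):
            return status
        sleep(interval)
    return "denied"                           # assumed: timeout fails closed

# Simulated backend: pending twice, then approved by a human.
answers = itertools.chain(["pending", "pending"], itertools.repeat("approved"))
decision = wait_for_approval(
    create=lambda: "ap-123",
    poll=lambda _id: next(answers),
    sleep=lambda s: None,                     # no real waiting in the demo
)
print(decision)  # → approved
```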

What Happens When the Server Is Down

Edictum follows fail-closed semantics. If the console is unreachable:

  • Audit events: buffer in memory (up to 10,000). When the connection resumes, the buffer flushes. If the buffer fills, oldest events are dropped.
  • Approval requests: POST /api/v1/approvals fails. The error propagates to the pipeline. The pipeline treats this as a denial. The tool does not execute.
  • Session state: ServerBackend calls fail. The pipeline converts backend errors to deny decisions.
  • Contract updates: SSE connection drops. The SDK reconnects with exponential backoff (1s initial, 60s max). The agent continues enforcing its last-known contracts.
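
The reconnect schedule in the last bullet (exponential backoff, 1s initial, 60s max) produces delays like the following sketch. Whether the SDK adds jitter is not specified here; this model omits it.

```python
import itertools

def backoff_delays(initial=1.0, cap=60.0):
    """Illustrative reconnect schedule: delays double from
    `initial` and are capped at `cap`. Yields indefinitely."""
    delay = initial
    while True:
        yield delay
        delay = min(delay * 2, cap)

delays = list(itertools.islice(backoff_delays(), 8))
print(delays)  # → [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0, 60.0]
```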

Fail-closed is non-negotiable. The agent never silently passes when the server is down: audit events buffer, approval requests fail to denial, session calls fail to denial, and contract updates fall back to the last-known version. Every failure mode results in either a denial or continued enforcement of existing contracts. This is by design -- a governance gap is worse than a denial.
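
The error-to-denial conversion described above is the whole trick; a minimal sketch, with invented names standing in for a pipeline step that consults the server:

```python
class BackendUnavailable(Exception):
    """Stand-in for any failed call to the console."""

def evaluate_with_backend(check):
    """Illustrative fail-closed wrapper: a backend error becomes
    a DENY, never a silent ALLOW. `check` stands in for a pipeline
    step that needs the server (session state, approvals)."""
    try:
        return check()
    except BackendUnavailable:
        return "DENY"                # fail closed: no governance gap

def server_down():
    raise BackendUnavailable("console unreachable")

print(evaluate_with_backend(lambda: "ALLOW"))  # → ALLOW
print(evaluate_with_backend(server_down))      # → DENY
```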

Server Architecture

Inside the Docker container:

  • FastAPI (uvicorn, async) -- /api/v1/* API routes (65+ endpoints), /api/v1/stream SSE (asyncio queues), /dashboard/* React SPA (static files), and / redirecting to /dashboard
  • Background workers -- approval timeout (every 10s), partition manager (every 24h), SSE cleanup (every 5 min), AI usage cleanup (on startup)
  • PushManager (in-process SSE) -- per-env agent subscriptions, per-tenant dashboard subscriptions, targeted push to specific agents
  • Postgres -- 16 tables, partitioned events, Alembic migrations
  • Redis -- sessions (TTL), rate limits (sorted sets), SSE state, agent presence

Everything runs in a single Docker image. docker compose up starts Postgres, Redis, and the server. The SPA is built at image build time and served as static files by FastAPI.
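
A minimal compose file matching that description might look like the sketch below. The image name, port, and environment variable names are assumptions for illustration; only the three services (server, Postgres, Redis) come from this page. Check the project's shipped compose file for the real values.

```yaml
services:
  console:
    image: edictum/console:latest            # image name is an assumption
    ports:
      - "8000:8000"                          # port is an assumption
    environment:
      DATABASE_URL: postgresql://edictum:edictum@postgres/edictum
      REDIS_URL: redis://redis:6379/0
    depends_on: [postgres, redis]
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: edictum
      POSTGRES_PASSWORD: edictum
  redis:
    image: redis:7
```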

Multi-Tenant Data Model

Every database table has a tenant_id column. Every query filters by it. The default deployment is single-tenant (one admin, one tenant, auto-created on bootstrap), but the data model supports multiple tenants from day one.

This is a deliberate architectural choice for a security product. Removing tenant isolation is harder than keeping it, and "we had isolation but removed it" is indefensible. The single-tenant UX hides this complexity -- you never see tenant IDs in the dashboard.
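
The isolation pattern is simply that every query carries the tenant filter. A minimal sketch with an invented schema (sqlite3 standing in for Postgres):

```python
import sqlite3

# Illustrative model of per-tenant row filtering: every table
# carries tenant_id and every query filters on it. The schema
# and names are invented for this sketch.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (tenant_id TEXT, tool TEXT)")
db.executemany("INSERT INTO events VALUES (?, ?)", [
    ("tenant-a", "read_file"),
    ("tenant-a", "delete_records"),
    ("tenant-b", "read_file"),
])

def events_for(tenant_id):
    # The WHERE tenant_id = ? clause is the isolation boundary:
    # a query without it could leak rows across tenants.
    rows = db.execute(
        "SELECT tool FROM events WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()
    return [tool for (tool,) in rows]

print(events_for("tenant-a"))  # → ['read_file', 'delete_records']
print(events_for("tenant-b"))  # → ['read_file']
```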

Next Steps

  • Contracts -- the three-level contract model (contracts, compositions, bundles)
  • Hot-Reload -- how SSE push delivers contract updates to running agents
  • Approvals -- the HITL approval lifecycle
  • Security Model -- authentication, tenant isolation, and cryptography
