How the Console Works
The coordination layer for governed AI agents. Agents evaluate contracts locally; the console stores events, manages approvals, pushes updates, and monitors the fleet.
Right page if: you need to understand what runs in the agent process vs. the console server -- the boundary principle, the from_server() connection sequence, and fail-closed behavior when the server is down. Wrong page if: you want the internal tech stack and source layout (see https://docs.edictum.ai/docs/console/architecture) or the security model (see https://docs.edictum.ai/docs/console/concepts/security-model). Gotcha: the console never evaluates contracts in production. The only exception is the /api/v1/bundles/evaluate playground endpoint, which is a development tool -- agents never call it.
Edictum Console is the coordination layer for governed AI agents. It does not evaluate contracts -- agents do that locally. The console stores audit events, manages approval workflows, pushes contract updates, and monitors your fleet. One Docker image, three services, zero agent restarts when contracts change.
The Three Components
Three components, one principle: evaluation lives in the core library; coordination lives in the console.
The Boundary Principle
The console never evaluates contracts in production. Every allow/deny decision runs in the agent process, in the core library, with zero network latency. The console handles everything that requires coordination across agents and humans.
| Capability | Core (agent-side) | Console (server-side) |
|---|---|---|
| Contract evaluation (pre, post, session, sandbox) | Yes | No (except playground) |
| Sandbox enforcement | Yes | No |
| Session tracking (single process) | Yes (MemoryBackend) | -- |
| Session tracking (distributed) | -- | Yes (ServerBackend) |
| Audit to stdout/file/OTel | Yes | -- |
| Centralized audit dashboard | -- | Yes |
| Approval (local CLI) | Yes (LocalApprovalBackend) | -- |
| Approval (production HITL) | -- | Yes (ServerApprovalBackend) |
| SSE hot-reload | -- | Yes |
| Fleet monitoring + drift detection | -- | Yes |
| Contract management UI | -- | Yes |
| Notification fan-out | -- | Yes |
| Bundle signing (Ed25519) | -- | Yes |
The one exception: `POST /api/v1/bundles/evaluate` is a playground endpoint for testing contracts in the dashboard. It evaluates a tool call against a bundle and returns the verdict. This is a development tool -- agents never call it. Production evaluation is always agent-side.
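As a rough illustration of how a developer (not an agent) might exercise the playground endpoint, here is a sketch. The endpoint path comes from this page; the request and response schema, field names, and auth header are assumptions, not the documented API.

```python
import json
from urllib import request

def build_playground_request(bundle_id: str, tool: str, args: dict) -> dict:
    """Assemble a request body for the playground endpoint (shape assumed)."""
    return {"bundle_id": bundle_id, "tool": tool, "arguments": args}

def evaluate_in_playground(base_url: str, token: str, payload: dict) -> dict:
    """POST to /api/v1/bundles/evaluate -- a development tool, never called
    by agents; production evaluation is always agent-side."""
    req = request.Request(
        f"{base_url}/api/v1/bundles/evaluate",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # auth scheme assumed
        },
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Build (but do not send) a sample request:
payload = build_playground_request("bundle-123", "read_file", {"path": "data.csv"})
```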
How an Agent Connects
When you call `Edictum.from_server()`, the SDK wires five components in sequence.
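The wiring can be sketched as follows. Which five components, their names, and their order are my reading of the capability table above (bundle fetch, `ServerBackend` sessions, `ServerApprovalBackend`, the audit batcher, and the SSE hot-reload subscription) -- this is not the SDK's actual internals.

```python
from dataclasses import dataclass, field

@dataclass
class ConnectedGuard:
    """Stand-in for the guard from_server() returns (fields invented)."""
    bundle: dict                  # contracts fetched from the console
    session_backend: str          # distributed session tracking
    approval_backend: str         # production HITL approvals
    audit_batch: list = field(default_factory=list)  # events sent in background
    sse_subscribed: bool = False  # hot-reload stream

def from_server_sketch(fetched_bundle: dict) -> ConnectedGuard:
    guard = ConnectedGuard(
        bundle=fetched_bundle,                     # 1. fetch the signed bundle
        session_backend="ServerBackend",           # 2. server-side sessions
        approval_backend="ServerApprovalBackend",  # 3. server-side approvals
    )
    guard.audit_batch = []                         # 4. start the audit batcher
    guard.sse_subscribed = True                    # 5. subscribe to SSE push
    return guard
```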
After setup, agent code is identical to local usage:
```python
result = await guard.run("read_file", {"path": "data.csv"}, read_file)
```

The pipeline evaluates locally. Events stream to the console in the background. If a contract requires approval, the SDK posts a request and polls for the decision.
Request Lifecycle
A complete tool call with a server-connected agent:
For calls without approval gates, the flow is simpler: pre_execute locally, tool runs, post_execute locally, audit event batched and sent.
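The approval-free flow described above can be sketched as a small async pipeline. `StubGuard`, its rule, and the method names are invented stand-ins for the core library's pipeline, kept only to show the order of steps: pre-check locally, run the tool, post-check locally, batch the audit event.

```python
import asyncio

class StubGuard:
    """Minimal stand-in for the agent-side pipeline (names invented)."""
    def __init__(self):
        self.audit_buffer = []

    def pre_execute(self, tool, args):
        # Local contract evaluation -- no network round trip.
        return {"allow": not args.get("path", "").startswith("/etc")}

    def post_execute(self, tool, args, result):
        # Local post-check, then queue an audit event for background send.
        self.audit_buffer.append({"tool": tool, "result": "ok"})

async def run_tool_call(guard, tool, args, tool_fn):
    verdict = guard.pre_execute(tool, args)   # 1. pre_execute locally
    if not verdict["allow"]:
        return {"denied": True}               #    denial: tool never runs
    result = await tool_fn(**args)            # 2. tool runs
    guard.post_execute(tool, args, result)    # 3. post_execute locally
    return result                             # 4. audit event batched

async def demo():
    guard = StubGuard()
    out = await run_tool_call(
        guard, "read_file", {"path": "data.csv"},
        lambda path: asyncio.sleep(0, result=f"<{path}>"),
    )
    return guard, out
```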
What Happens When the Server Is Down
Edictum follows fail-closed semantics. If the console is unreachable:
- Audit events: buffer in memory (up to 10,000). When the connection resumes, the buffer flushes. If the buffer fills, oldest events are dropped.
- Approval requests: `POST /api/v1/approvals` fails. The error propagates to the pipeline, which treats it as a denial. The tool does not execute.
- Session state: `ServerBackend` calls fail. The pipeline converts backend errors to deny decisions.
- Contract updates: the SSE connection drops. The SDK reconnects with exponential backoff (1s initial, 60s max). The agent continues enforcing its last-known contracts.
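These failure modes can be sketched in a few lines. The limits (10,000-event buffer, 1s initial / 60s max backoff) come from this page; the function and variable names are invented for illustration.

```python
from collections import deque

# Bounded in-memory audit buffer: when full, the oldest events are dropped.
AUDIT_BUFFER = deque(maxlen=10_000)

def buffer_audit_event(event: dict) -> None:
    AUDIT_BUFFER.append(event)    # flushed when the connection resumes

def next_backoff(current: float) -> float:
    """Exponential backoff for the SSE reconnect loop (1s initial, 60s cap)."""
    return min(current * 2, 60.0)

def approval_decision(server_error: bool, server_said_allow: bool) -> bool:
    """Approval and session-backend errors convert to denials (fail closed)."""
    if server_error:
        return False              # the tool does not execute
    return server_said_allow
```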
Fail-closed is non-negotiable. The agent never silently passes when the server is down: every failure mode results in either a denial or continued enforcement of the last-known contracts. This is by design -- a governance gap is worse than a denial.
Server Architecture
Everything runs in a single Docker image. `docker compose up` starts Postgres, Redis, and the server. The SPA is built at image build time and served as static files by FastAPI.
Multi-Tenant Data Model
Every database table has a tenant_id column. Every query filters by it. The default deployment is single-tenant (one admin, one tenant, auto-created on bootstrap), but the data model supports multiple tenants from day one.
This is a deliberate architectural choice for a security product. Removing tenant isolation is harder than keeping it, and "we had isolation but removed it" is indefensible. The single-tenant UX hides this complexity -- you never see tenant IDs in the dashboard.
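The scoping pattern can be illustrated with a toy query helper. The `tenant_id` column is named on this page; the table name, schema, and use of SQLite are invented for the sketch -- the console's actual store is Postgres.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audit_events (tenant_id TEXT, tool TEXT)")
conn.executemany(
    "INSERT INTO audit_events VALUES (?, ?)",
    [("t1", "read_file"), ("t2", "write_file")],
)

def events_for_tenant(tenant_id: str) -> list:
    # Every query filters by tenant_id -- never a bare SELECT across tenants.
    rows = conn.execute(
        "SELECT tool FROM audit_events WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()
    return [tool for (tool,) in rows]
```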
Next Steps
- Contracts -- the three-level contract model (contracts, compositions, bundles)
- Hot-Reload -- how SSE push delivers contract updates to running agents
- Approvals -- the HITL approval lifecycle
- Security Model -- authentication, tenant isolation, and cryptography