Trust Center
How we keep automations safe
The Automation Hub runs LLM-driven workflows on real client intake data. Every automation goes through guardrail enforcement, an output validator, and an optional human approval gate. This page documents the safety posture so your procurement team doesn't have to reverse-engineer it.
Approval gates
Every spec carries a risk_level ("low", "medium", "high"). High-risk runs require explicit attorney approval before any side effect (webhook, Slack post, email send) fires. Medium-risk runs can be configured per-firm to require approval. Low-risk runs auto-execute.
A run that fails our output-schema validator or trips a guardrail is automatically routed to the approval queue regardless of risk level. The attorney sees the LLM output and the failed check, and can either approve the run with a manual fix or decline it.
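The routing described above can be sketched as a single predicate. This is an illustrative sketch, not the product's actual implementation; the RunResult shape, the needs_approval name, and the firm_requires_medium_approval flag are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class RunResult:
    risk_level: str                            # "low" | "medium" | "high"
    schema_valid: bool                         # result of the output-schema validator
    guardrail_tripped: bool                    # True if any guardrail fired
    firm_requires_medium_approval: bool = False  # per-firm configuration

def needs_approval(run: RunResult) -> bool:
    # Validator failures and guardrail trips always route to the
    # approval queue, regardless of the declared risk level.
    if not run.schema_valid or run.guardrail_tripped:
        return True
    if run.risk_level == "high":
        return True
    if run.risk_level == "medium":
        return run.firm_requires_medium_approval
    return False  # low-risk runs auto-execute
```

Note that the validator and guardrail checks come first: a "low" risk label never bypasses a failed check.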
Editorial review
Specs marked "Attorney-reviewed" in the catalog have been reviewed by our in-house editorial counsel for accuracy of legal claims, appropriateness of disclaimers, and adherence to ABA Model Rule 5.5 on unauthorized practice. The badge surfaces on the public template library and in the in-product repository.
The full catalog is open for inspection at /resources/automations/templates.
Tier transparency
Production usage caps are documented at the spec level (tier_min) and at the action level (per-month run caps). Users see exactly how many runs they have remaining each billing period. Tier denials return structured 402 responses with the upgrade target so callers can render a self-serve upgrade path.
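A structured 402 denial might look like the sketch below. The field names (error, upgrade_target, upgrade_url) and the billing path are assumptions for illustration, not the documented wire format; the point is that the response carries enough structure for a caller to render a self-serve upgrade path rather than a dead end.

```python
def tier_denial(spec_id: str, current_tier: str, required_tier: str) -> dict:
    # Hypothetical body for an HTTP 402 response when a run exceeds
    # the spec's tier_min or the action's per-month run cap.
    return {
        "error": "tier_limit_exceeded",
        "spec_id": spec_id,
        "current_tier": current_tier,
        "upgrade_target": required_tier,  # what the caller should upsell to
        "upgrade_url": f"/billing/upgrade?to={required_tier}",
    }

body = tier_denial("client-intake-summary", "starter", "concierge")
```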
Disclosure + audit
Every production run writes to automation_runs with full input, output, latency, model, and (on failure) structured error. Cross-product signals fire to the Compliance product on repeated failures so the attorney's compliance dashboard surfaces unhealthy automations alongside other posture issues.
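As a sketch, an automation_runs row could be assembled like this. Only the fields named in the text (input, output, latency, model, error) come from this page; the column names, the spec_id/created_at fields, and the builder function are assumptions for the example.

```python
from datetime import datetime, timezone

def build_run_record(spec_id, model, input_payload, output_payload,
                     latency_ms, error=None):
    # Hypothetical shape of one automation_runs row: full input and
    # output are retained, plus a structured error on failure.
    return {
        "spec_id": spec_id,
        "model": model,
        "input": input_payload,
        "output": output_payload,
        "latency_ms": latency_ms,
        "error": error,  # None on success; structured error dict on failure
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
```

Keeping the full input and output on every row is what makes the downstream audit log and the repeated-failure signal to the Compliance product possible without re-running anything.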
Attorneys can request the full audit log for any automation via the in-product audit-log page. Auditor read-only roles are available on the Concierge tier.

