Frequently Asked Questions
Common questions about Governance OS, how it works, and how it differs from other tools.
General
What is Governance OS?
Governance OS is a control plane for executive decision-making. It converts continuous signals into deterministic policy evaluations, raises exceptions when human judgment is required, and produces audit-grade evidence packs.
Think of it as an operating system for governance — not a dashboard, not a copilot, not an RPA tool. It's the layer that ensures decisions are made explicitly, documented immutably, and kept defensible under scrutiny.
How is this different from a dashboard or BI tool?
Dashboards show you data. Governance OS acts on policies.
A dashboard might show that a position limit is breached. Governance OS will:
- Evaluate the breach against explicit policy rules
- Raise an exception with severity and context
- Present symmetric options without recommendations
- Require a rationale before allowing commitment
- Generate an evidence pack for audit
The difference is accountability. Dashboards inform. Governance OS governs.
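The five steps above can be sketched in code. This is a minimal illustration, not the actual Governance OS API — the names (`PositionSignal`, `evaluate_limit`, the option labels) are assumptions invented for the example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PositionSignal:
    desk: str
    exposure: float  # current exposure
    limit: float     # policy limit

def evaluate_limit(signal: PositionSignal) -> dict:
    """Deterministic rule: a breach is exposure above the limit."""
    breached = signal.exposure > signal.limit
    if not breached:
        severity = "none"
    elif signal.exposure > 1.5 * signal.limit:
        severity = "high"
    else:
        severity = "medium"
    return {"breached": breached, "severity": severity}

def raise_exception(signal: PositionSignal, result: dict) -> dict:
    """The exception carries severity, context, and symmetric options."""
    return {
        "severity": result["severity"],
        "context": {"desk": signal.desk, "exposure": signal.exposure,
                    "limit": signal.limit},
        # Options are presented without a recommendation.
        "options": ["reduce_position", "raise_limit", "accept_with_waiver"],
    }

def commit(exception: dict, chosen: str, rationale: str) -> dict:
    """Commitment requires a chosen option plus a written rationale."""
    if chosen not in exception["options"]:
        raise ValueError("Choice must be one of the presented options")
    if not rationale.strip():
        raise ValueError("A rationale is required before commitment")
    # The committed record is the basis of the evidence pack.
    return {"decision": chosen, "rationale": rationale, "exception": exception}

signal = PositionSignal(desk="rates", exposure=12_000_000, limit=10_000_000)
result = evaluate_limit(signal)
if result["breached"]:
    exc = raise_exception(signal, result)
    evidence = commit(exc, "reduce_position",
                      "Exposure exceeds limit; reducing pending review")
```

Note that `commit` refuses to proceed without a rationale — the accountability lives in the code path, not in a tooltip.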
Is this a workflow engine or RPA tool?
No. Workflow engines automate processes. Governance OS governs decisions.
We're not trying to automate everything — we're trying to make explicit when automation is appropriate (within policy bounds) and when human judgment is required (exceptions). The goal isn't "zero touch" — it's "right touch."
Is this an AI copilot?
No. Copilots suggest. We explicitly don't.
AI is used as an optional coprocessor for specific tasks (extracting signals from documents, drafting narratives), but never for making decisions. The decision surface presents symmetric options without recommendations. The human owns the choice.
Technical
What does "deterministic" mean?
Given the same inputs (signals, policy version), the system always produces the same outputs (evaluation result, exception). No randomness, no model variance, no "it depends on the day."
This enables replay — you can re-run historical data against current policies, or test policy changes against historical scenarios. It also enables audit — you can prove the system behaved correctly at any point in time.
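A minimal sketch of both properties, with an invented rule and field names (the real policy format will differ) — evaluation is a pure function of the signal and the policy version, so results are reproducible and replayable:

```python
import hashlib
import json

def evaluate(signal: dict, policy: dict) -> dict:
    """Pure function: no clock, no randomness, no model variance."""
    passed = signal["value"] <= policy["threshold"]
    return {"policy_version": policy["version"], "passed": passed}

def result_digest(signal: dict, policy: dict) -> str:
    blob = json.dumps(evaluate(signal, policy), sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

signal = {"value": 42}
policy_v1 = {"version": "v1", "threshold": 50}

# Determinism: re-running the same inputs gives byte-identical output.
assert result_digest(signal, policy_v1) == result_digest(signal, policy_v1)

# Replay: test a proposed policy change against historical signals.
history = [{"value": v} for v in (10, 55, 42)]
policy_v2 = {"version": "v2", "threshold": 40}
replayed = [evaluate(s, policy_v2)["passed"] for s in history]
# replayed → [True, False, False] under the tighter v2 threshold
```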
Can the system make decisions automatically?
Within policy bounds, yes. When evaluation passes, predefined actions can execute automatically. But when evaluation fails or is inconclusive, an exception is raised and a human must decide.
The key principle: autonomy is explicitly constrained by policy. The system never acts outside defined parameters, and never makes judgment calls about exceptions.
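The branching logic reduces to a few lines. This is an illustrative skeleton (the function names are assumptions), but it shows the invariant: only a clean pass triggers automation, and everything else escalates:

```python
def execute_predefined_action() -> str:
    return "action_executed"  # e.g. an auto-action within policy bounds

def escalate_to_human(reason: str) -> str:
    return f"exception_raised:{reason}"

def govern(evaluation: str) -> str:
    """evaluation is one of 'pass', 'fail', 'inconclusive'."""
    if evaluation == "pass":
        return execute_predefined_action()
    # The system never makes a judgment call about an exception.
    return escalate_to_human(evaluation)

assert govern("pass") == "action_executed"
assert govern("fail") == "exception_raised:fail"
assert govern("inconclusive") == "exception_raised:inconclusive"
```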
How do you handle AI safety?
Clear boundaries:
- AI can: Extract signals from documents (with confidence scores), draft narratives (grounded to evidence), assist policy authoring (human-approved)
- AI cannot: Evaluate policies, determine severity, recommend options, make decisions, be the source of truth for evidence
Every AI-assisted action has a human approval gate and full audit trail.
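An approval gate for AI-assisted intake might look like the following sketch. `ai_extract` is a stand-in stub for a real extraction model, and the audit-record shape is an assumption — only the boundary (AI output never becomes truth without human sign-off) mirrors the rules above:

```python
audit_log: list[dict] = []

def ai_extract(document: str) -> dict:
    # Stub: a real model would parse the document; confidence travels
    # with the candidate so reviewers see the uncertainty.
    return {"signal": {"type": "invoice_total", "value": 9800.0},
            "confidence": 0.87}

def ingest_with_gate(document: str, approver: str, approved: bool):
    candidate = ai_extract(document)
    # Every AI-assisted action is logged, approved or not.
    audit_log.append({"action": "ai_extract", "candidate": candidate,
                      "approver": approver, "approved": approved})
    # AI output never becomes evidence without explicit human approval.
    return candidate["signal"] if approved else None

assert ingest_with_gate("invoice.pdf", "controller", approved=True) is not None
assert ingest_with_gate("memo.eml", "controller", approved=False) is None
assert len(audit_log) == 2  # rejected attempts are still audited
```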
What happens if the system goes down?
System failures result in human review, never auto-resolution. If signals can't be evaluated, if exceptions can't be raised, if evidence can't be generated — humans are notified and must intervene manually.
We fail safe, not fail silent.
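The fail-safe pattern, sketched with invented helpers (`notify_humans` stands in for real paging/alerting; the failing evaluator is a stub):

```python
notifications: list[str] = []

def notify_humans(message: str) -> None:
    notifications.append(message)  # stand-in for paging / alerting

def flaky_evaluate(signal: dict) -> dict:
    raise ConnectionError("signal store unreachable")

def evaluate_or_escalate(signal: dict) -> dict:
    try:
        return flaky_evaluate(signal)
    except Exception as err:
        # Fail safe, not fail silent: no outcome is assumed on failure.
        notify_humans(f"Evaluation failed, manual review needed: {err}")
        return {"status": "needs_human_review"}

outcome = evaluate_or_escalate({"value": 42})
assert outcome == {"status": "needs_human_review"}
assert len(notifications) == 1
```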
Integration
What systems can feed signals?
Currently supported:
- CSV/Excel file imports (manual or watched folder)
- REST API (push signals via HTTP)
- Unstructured document intake (AI-extracted signals from emails, PDFs)
Coming soon (see roadmap):
- Bloomberg API connector
- Database connectors (read from source systems)
- Webhook receivers (event-driven)
- Email integration
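Pushing a signal over the REST API could look like this sketch using only the standard library. The endpoint path and payload fields are assumptions — the authoritative shapes are in the OpenAPI docs at /docs:

```python
import json
import urllib.request

API_URL = "http://localhost:8000/api/signals"  # assumed local instance

def build_signal(source: str, signal_type: str, value: float) -> bytes:
    return json.dumps({
        "source": source,
        "type": signal_type,
        "value": value,
    }).encode("utf-8")

def push_signal(payload: bytes) -> int:
    req = urllib.request.Request(
        API_URL, data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # needs a running server
        return resp.status

payload = build_signal("treasury_ledger", "cash_position", 1_250_000.0)
# push_signal(payload)  # uncomment against a live instance
```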
Can I export evidence packs?
Yes. Evidence packs can be exported as:
- JSON — For programmatic processing and integration
- HTML — For human-readable review
- PDF — For formal documentation and filing
Each export includes a SHA-256 content hash for integrity verification.
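Verifying an export is a one-liner with the standard library. The pack fields below are invented for the example; only the SHA-256 content hash mirrors the export behavior described above:

```python
import hashlib
import json

def content_hash(export_bytes: bytes) -> str:
    return hashlib.sha256(export_bytes).hexdigest()

pack = {"exception_id": "EX-1042", "decision": "reduce_position"}
export = json.dumps(pack, sort_keys=True).encode("utf-8")
recorded = content_hash(export)  # hash shipped alongside the export

# A recipient recomputes the hash; any alteration changes it.
assert content_hash(export) == recorded
tampered = export.replace(b"reduce_position", b"raise_limit")
assert content_hash(tampered) != recorded
```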
Is there an API?
Yes. Full REST API with OpenAPI documentation. The interactive demo includes Swagger docs at /docs.
Business
Who is this for?
Organizations where decisions carry real consequences and require defensible reasoning:
- Corporate Treasury teams (CFO, Treasurer, Controller)
- Wealth Management firms (Advisors, Compliance)
- Asset Managers (Portfolio Management, Risk)
- Financial Operations (COO, Operations Leaders)
How do I get access?
We're onboarding select organizations for early access. Join the waitlist and tell us about your use case.
Is this open source?
Yes. The core system is open source and available on GitHub. We believe governance systems should be inspectable and auditable.
Can I customize policies for my organization?
Yes. Domain packs (Treasury, Wealth) come with default policies that can be customized. You can also create entirely new policies and signal types for your specific requirements.
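For a sense of what customization might involve, here is a hypothetical policy definition. The field names, the default-policy id it extends, and the rule syntax are all assumptions — they only illustrate overriding a threshold and wiring a custom signal type to an exception:

```python
custom_policy = {
    "id": "treasury.cash_floor.custom",
    "extends": "treasury.cash_floor.default",  # assumed default policy id
    "version": "2025-01-01",
    "rules": [
        # Organization-specific signal type and threshold.
        {"signal": "regional_cash_position", "operator": ">=",
         "threshold": 5_000_000},
    ],
    "on_fail": {"raise_exception": True, "severity": "high"},
}
```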
Philosophy
Why no recommendations?
The moment a system says "we recommend Option A," accountability shifts. The human becomes an approver rather than a decider. When things go wrong, the defense becomes "the system recommended it."
We believe humans should own decisions. Options are presented symmetrically. The system provides context and captures the choice — but never nudges.
Why is uncertainty visible?
Most systems hide uncertainty to appear more confident. This creates false trust that collapses when things go wrong ("But the dashboard said everything was fine!").
We make uncertainty explicit. When a decision is made under uncertainty, that uncertainty is part of the record. Trust increases after the first failure — because the system can show "we flagged this as uncertain at the time."
Why one screen for decisions?
Forcing the decision surface to fit one screen is a design constraint that creates clarity. If you can't explain the exception and present options without scrolling, the policy or exception is too complex.
Deep exploration belongs in secondary surfaces after the decision is made. At the moment of commitment, all critical context must be visible.
Have a question we didn't answer? Open an issue on GitHub or join the waitlist to discuss with our team.