The RegTech industry has a dirty secret: most of the “AI” it sells is a black box.
You ask a question. You get an answer. The answer might be right. You have no way to know.
For consumer applications, that’s an acceptable trade-off. For regulated financial firms, it is not.
A compliance programme is a documented, defensible system. Every decision has to have a basis. Every control has to map to an obligation. Every gap assessment has to show its working. When an FSRA examiner asks why your firm concluded that a particular obligation was satisfied, “the AI said so” is not an answer.
This is why explainability — not speed, not breadth, not automation — is the central problem in RegTech AI.
What Explainability Actually Means
The word “explainability” gets used loosely. In the context of AI for compliance, it means something specific:
An explainable output is one where you can trace, step by step, how the system moved from input to conclusion.
That means knowing:
- Which regulatory text the answer is grounded in
- How the system interpreted that text
- Which inference steps it took
- What confidence it assigned to each step
- Where uncertainty exists and why
An output that merely cites a rule number isn’t explained — it’s sourced. An output that shows the reasoning chain from question to answer, with each step backed by specific text, is explained.
The distinction matters because explainability is what allows a compliance officer to do two things:
- Trust the answer — because they can verify each step
- Defend the answer — because they can present the reasoning to an examiner, auditor, or board
Why Current Tools Fail
There are three categories of AI tools currently marketed to compliance teams:
1. LLM wrappers
These send your question to a general-purpose language model (GPT-4, Claude, Gemini) and return the output. Some add a step where they first retrieve relevant document chunks and inject them into the prompt.
The problem is structural: these models are trained to produce plausible text, not accurate regulatory analysis. They hallucinate. They cite rules that don’t exist. They synthesise confidently across jurisdictions, mixing up FCA and ADGM requirements.
More fundamentally, you can’t audit the output. The model’s reasoning is opaque. When it gets something wrong — and it will — you have no way to diagnose why or prevent it happening again.
2. Document management with AI features
These are essentially search tools. They help you find relevant sections of policy documents or regulatory texts. Some add classification features — flagging documents as relevant to particular obligation categories.
The limitation is that they don’t reason. Finding a relevant paragraph isn’t the same as determining whether it satisfies an obligation. Classification is not analysis.
3. Rules-based engines
These are deterministic and auditable, but they can’t handle the complexity and ambiguity of natural language regulatory text. They work for simple, structured obligation checks. They break down when the underlying rules require interpretation.
The Architecture That Makes Explainability Possible
Seif’s approach to explainability is architectural, not cosmetic. It’s not a matter of adding a “show reasoning” button to an LLM. It requires a fundamentally different system design.
The core principle: use the language model only for what it’s actually good at — semantic synthesis of structured information. Every other part of the pipeline is deterministic and auditable.
Here’s what that looks like in practice:
Step 1: Structured obligation extraction. Regulatory text is processed into structured ComplianceUnit objects. Each unit captures: the deontic modality (must, must not, should), the obligation subject, the applicable firm types, the applicable conditions, and the cross-references. This extraction is done once, verified, and stored in the knowledge graph.
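A minimal sketch of what such a structured obligation object might look like. The field names, the rule reference, and the `ComplianceUnit` shape shown here are illustrative assumptions, not Seif's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ComplianceUnit:
    """One obligation extracted from regulatory text (illustrative schema)."""
    rule_id: str                      # source rule reference, e.g. a rulebook section
    modality: str                     # deontic modality: "must" | "must_not" | "should"
    subject: str                      # who the obligation binds
    firm_types: tuple[str, ...]       # firm categories the rule applies to
    conditions: tuple[str, ...]       # conditions under which it applies
    cross_refs: tuple[str, ...] = ()  # references to related rules

# Hypothetical example — the rule id and wording are invented for illustration.
unit = ComplianceUnit(
    rule_id="AML 8.3.1",
    modality="must",
    subject="Relevant Person",
    firm_types=("fund_manager",),
    conditions=("client risk rating is high",),
    cross_refs=("AML 8.1",),
)
```

Freezing the dataclass makes each unit immutable after the verify-and-store step, which is what lets later retrievals treat it as ground truth.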
Step 2: Deterministic retrieval. When a query arrives, the system traverses the graph to retrieve the specific obligation nodes that are relevant. This isn’t fuzzy vector search — it’s a deterministic graph traversal informed by the firm’s context (category, regulated activities, products). The retrieved nodes are the ground truth.
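The key property of this step is determinism: the same query against the same firm context always returns the same nodes. A toy sketch, with the graph reduced to a dictionary of node metadata and the traversal reduced to a filter (the node ids and fields are invented for illustration):

```python
# Toy knowledge graph: obligation nodes keyed by id, with applicability metadata.
GRAPH = {
    "edd_high_risk": {"firm_types": {"fund_manager", "bank"}, "topic": "edd"},
    "cdd_standard":  {"firm_types": {"fund_manager", "bank", "broker"}, "topic": "cdd"},
    "payments_only": {"firm_types": {"payment_firm"}, "topic": "cdd"},
}

def retrieve(topic: str, firm_type: str) -> list[str]:
    """Deterministic retrieval: same query + same firm context -> same node set,
    in the same order. No embeddings, no similarity thresholds."""
    return sorted(
        node_id
        for node_id, meta in GRAPH.items()
        if meta["topic"] == topic and firm_type in meta["firm_types"]
    )
```

Because the result is a pure function of the query and the firm context, every retrieval can be logged and replayed exactly — which is what a fuzzy vector search cannot guarantee.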
Step 3: LLM synthesis. Only at this point does the language model enter. Its job is to synthesise the retrieved obligations into a readable answer. It is not asked to reason from scratch. It is given the specific text and asked to present it clearly.
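One way to enforce that constraint is in the prompt itself: the model receives only the retrieved extracts and is told to present them, not to reason beyond them. A sketch of such a prompt builder — the wording and structure are an assumption, not Seif's actual prompt:

```python
def build_synthesis_prompt(question: str, obligations: list[dict]) -> str:
    """Confine the model to presenting retrieved regulatory text.

    Each obligation dict is assumed to carry a 'rule_id' and the verbatim 'text'
    retrieved from the knowledge graph in Step 2.
    """
    sources = "\n".join(f"[{o['rule_id']}] {o['text']}" for o in obligations)
    return (
        "Answer the question using ONLY the regulatory extracts below. "
        "Cite the rule id for every claim. If the extracts do not cover "
        "the question, say so rather than inferring an answer.\n\n"
        f"Extracts:\n{sources}\n\n"
        f"Question: {question}"
    )
```

The design point: because the extracts are the deterministic output of Step 2, the model's input is fully known and logged, so a wrong answer can be diagnosed as either a retrieval gap or a synthesis error — never an untraceable hallucination.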
Step 4: Explainability envelope. The output is wrapped in a structured envelope: the answer, the specific obligations cited, the confidence score, the reasoning chain (which graph nodes were traversed, which retrieval steps were taken, which synthesis decisions were made), and an audit hash.
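The envelope can be sketched as a plain structure whose audit hash is computed over a canonical serialisation, so any later modification of the answer, citations, or reasoning chain is detectable. The field names here are illustrative, not Seif's actual envelope format:

```python
import hashlib
import json

def wrap(answer: str, citations: list[str],
         confidence: float, reasoning_chain: list[str]) -> dict:
    """Wrap a synthesised answer in an auditable envelope (illustrative shape)."""
    envelope = {
        "answer": answer,
        "citations": citations,              # obligation nodes the answer rests on
        "confidence": confidence,
        "reasoning_chain": reasoning_chain,  # retrieval + synthesis steps, in order
    }
    # Hash a canonical JSON form: same content always yields the same hash,
    # so tampering with any field breaks the seal.
    canonical = json.dumps(envelope, sort_keys=True).encode("utf-8")
    envelope["audit_hash"] = hashlib.sha256(canonical).hexdigest()
    return envelope
```

Recomputing the hash from the stored fields at audit time verifies that the record an examiner sees is the record the system produced.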
Every step is logged. Every step is auditable. The output is defensible because you can trace it back to specific regulatory text through a documented chain of inference.
What This Looks Like in Practice
Suppose a compliance officer at an ADGM fund manager asks Seif: “Does our enhanced due diligence process satisfy the FSRA’s requirements for high-risk clients?”
The system doesn’t generate a plausible-sounding answer. It:

- Identifies the firm’s category and regulated activities from the firm context
- Traverses the graph to retrieve the applicable EDD obligations under the AML Rulebook
- Retrieves the firm’s existing CDD policy (uploaded to Seif)
- Applies the Three-Way Match: extracting what the policy claims, verifying it against what the obligation requires, and classifying the gap
- Returns a classified result (GREEN/AMBER/RED) with the specific rule text, the specific policy clause, the gap assessment, and the confidence score
The compliance officer can see every step. They can click through to the exact rule text. They can see where the classification came from. They can export the whole assessment as an auditor-ready package.
That’s the difference between automation and explainable automation.
Why Regulators Will Demand This
DIFC enacted Regulation 10 on autonomous and semi-autonomous systems in 2023, specifically requiring that AI tools used in regulated decision-making produce auditable, explainable outputs. ADGM has not yet enacted equivalent regulation, but the direction of travel is clear.
The FSRA’s approach to AI governance has been cautious and deliberate. They’ve been watching developments in other jurisdictions. When AI-specific regulation does arrive in ADGM, it will almost certainly require firms to demonstrate that AI tools used in compliance-relevant functions produce explainable, auditable outputs.
Firms that are using black-box AI for compliance decisions today will face a reclassification problem: the tool that was “good enough” becomes non-compliant overnight.
Firms that have built their compliance programme on explainable AI won’t face that problem. They’ll have the documentation already.
The Standard to Hold AI To
Every time a compliance officer accepts an AI output without understanding its basis, they’re taking on regulatory risk they can’t see.
The standard should be simple: if you can’t explain how the AI got to its answer, you can’t defend that answer to an examiner. And if you can’t defend it, you shouldn’t be relying on it.
That’s not a counsel of AI scepticism. It’s a counsel of professional rigour. Compliance has always required documented, defensible reasoning. AI doesn’t change that requirement. It just means the AI has to meet it too.
Seif was built to meet that standard. If you want to see how it applies to your firm’s specific compliance questions, book a demo.
This post reflects our perspective on AI architecture for compliance applications. It is not legal advice.