Logging AI Use on Matters: Audit Trails That Supervisors Actually Read
The SRA and insurers will want to know how AI was used on a matter. Here's exactly what UK firms should record to satisfy partners, COLPs and professional indemnity insurers.
If your firm is going to lean on AI for real work, you need to be able to answer three simple questions:
- How was AI used on this matter?
- Who checked what it produced?
- What did we do when it went wrong?
Regulators, courts, insurers and sophisticated clients will all care about these points. The good news is that you do not need a complex system to get them right — but you do need something better than “we don’t really know”.
This article sets out a practical model for logging AI use on matters, focused on UK firms that want audit trails supervisors will actually read.
1. Decide what you care about logging
Start by agreeing, in principle, what you want the record to show. Typical elements include:
- When AI was used (date and time);
- Who used it (user ID);
- What for (task type, such as “email summary”, “first-draft advice”, “chronology extraction”);
- What it touched (documents, emails, note IDs);
- How it was checked (reviewer, sign-off notes);
- What went wrong (if anything) and how it was corrected.
You do not need to log every keystroke. Aim for a level of detail where, if a regulator or insurer asks about a matter, you can reconstruct:
- the main AI-assisted steps; and
- the human supervision applied.
2. Keep logs close to the matter file
Logs are useless if they live in a separate system nobody checks.
Better pattern:
- treat AI activity as part of the matter record, alongside documents, emails, notes and time entries;
- display recent AI actions in a simple list on the matter overview screen;
- let supervisors drill down into specific events when reviewing files.
For example, a matter timeline might show:
- “09:12 – AI summary of email thread [link] created by AB”
- “09:30 – Chronology updated from AI extraction of documents [list] – reviewed by CD”
- “10:05 – First-draft advice letter generated via AI – edited and approved by EF”
This is far more useful than a separate, obscure log buried in an admin dashboard.
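Timeline lines like those above can be produced mechanically from log entries. A minimal sketch, assuming a dictionary-shaped entry (the keys and output format are illustrative, not a product API):

```python
def timeline_line(entry: dict) -> str:
    """Render one logged AI event as a matter-timeline line (format is illustrative)."""
    line = f"{entry['time']} – {entry['action']} by {entry['user']}"
    if entry.get("reviewer"):
        line += f" – reviewed by {entry['reviewer']}"
    return line

print(timeline_line({
    "time": "09:30",
    "action": "Chronology updated from AI extraction",
    "user": "AB",
    "reviewer": "CD",
}))
# → 09:30 – Chronology updated from AI extraction by AB – reviewed by CD
```

Because the rendering is derived from the same record the supervisor drills into, the overview list and the detailed log can never drift apart.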
3. Standardise “reason codes” for AI use
Free-text notes are helpful, but for analytics and risk monitoring you also need structured data.
Create a short list of AI “reason codes”, such as:
- RESEARCH_ORIENTATION
- DRAFTING_FIRST_PASS
- EMAIL_SUMMARY
- CHRONOLOGY_UPDATE
- ATTENDANCE_NOTE_STRUCTURING
- TIME_CAPTURE_SUGGESTION
When a user runs an AI action, the system can either:
- infer the reason code automatically (for predefined workflows); or
- prompt the user to choose from a dropdown.
This makes it much easier to answer questions like:
- “How often are we using AI for research vs drafting?”
- “Which departments rely most on AI for client communication drafts?”
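The reason codes above translate naturally into an enumeration, and once events carry structured codes, questions like these become a one-line tally. A hedged sketch — `usage_breakdown` is an illustrative helper, not a feature of any particular product:

```python
from collections import Counter
from enum import Enum

class AIReasonCode(Enum):
    RESEARCH_ORIENTATION = "research_orientation"
    DRAFTING_FIRST_PASS = "drafting_first_pass"
    EMAIL_SUMMARY = "email_summary"
    CHRONOLOGY_UPDATE = "chronology_update"
    ATTENDANCE_NOTE_STRUCTURING = "attendance_note_structuring"
    TIME_CAPTURE_SUGGESTION = "time_capture_suggestion"

def usage_breakdown(events: list[dict]) -> Counter:
    """Tally logged AI events by reason code (e.g. research vs drafting)."""
    return Counter(e["reason_code"] for e in events)
```

Keeping the list short and fixed is the point: six codes you can count beat sixty you cannot.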
4. Capture supervision without creating busywork
Partners do not have time to write essays for every AI-assisted draft. But they do need to be visibly in charge.
A simple approach is:
- when a supervisor reviews an AI-assisted output, they tick one of a few options, for example:
  - “Reviewed – accepted with minor edits”
  - “Reviewed – significant edits made”
  - “Reviewed – rejected output”
- optionally add a short free-text note for unusual cases (“Hallucinated case law – corrected and added to training materials”).
This creates a light-touch supervision record that:
- shows someone senior has looked at key outputs; and
- generates useful data on where AI is working well or badly.
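The tick-box review above amounts to attaching a small structured record to the log entry. A minimal sketch under the assumption of dictionary-shaped entries (names are illustrative):

```python
from enum import Enum

class ReviewOutcome(Enum):
    ACCEPTED_MINOR_EDITS = "Reviewed – accepted with minor edits"
    SIGNIFICANT_EDITS = "Reviewed – significant edits made"
    REJECTED = "Reviewed – rejected output"

def record_review(log_entry: dict, reviewer_id: str,
                  outcome: ReviewOutcome, note: str = "") -> dict:
    """Attach a light-touch supervision record to a logged AI output."""
    log_entry["review"] = {
        "reviewer": reviewer_id,
        "outcome": outcome.value,
        "note": note,  # optional free text for unusual cases
    }
    return log_entry
```

Because the outcome is one of three fixed values, you can later count how often AI output needed significant editing, by task type or department, without reading any free text.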
5. Record problems as learning events, not secrets
It is unrealistic to expect AI never to go wrong. What matters is:
- how quickly you catch issues; and
- whether you learn from them.
Logging should therefore include a simple mechanism to flag:
- hallucinations (made-up cases, misquotes);
- tone problems (too aggressive, too informal);
- confidentiality concerns (prompt included more data than necessary).
For example:
- “Flag this output as a problem” button on AI results;
- short reason list and optional description;
- automatic notification to someone in risk/innovation.
Over time, this allows you to tweak prompts, policies and training, backed by real evidence.
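The flagging mechanism can be as small as one function: validate a reason from a short fixed list, attach the flag to the log entry, and notify someone. A sketch only — `notify` stands in for whatever alerting your firm actually uses:

```python
PROBLEM_REASONS = {"hallucination", "tone", "confidentiality"}

def flag_output(log_entry: dict, reason: str,
                description: str = "", notify=print) -> dict:
    """Attach a problem flag to a logged AI output and notify risk/innovation.

    `notify` is a placeholder for the firm's real alerting (email, ticket, etc.).
    """
    if reason not in PROBLEM_REASONS:
        raise ValueError(f"unknown problem reason: {reason}")
    log_entry["flag"] = {"reason": reason, "description": description}
    notify(f"AI output flagged ({reason}) on entry {log_entry.get('id')}")
    return log_entry
```

Restricting `reason` to a short list keeps the flags countable; the optional description carries the detail (“made-up case citation in paragraph 3”).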
6. Align logs with your wider governance
Logging AI use is not an isolated activity. It should connect to:
- your AI policy (what is allowed and prohibited);
- your AI risk assessment (where the biggest risks lie);
- your training (for fee-earners, PAs and support staff);
- your incident and complaints processes.
Practical examples:
- if logs show heavy AI use in research tasks, you might focus training on verification and hallucinations;
- if a complaint involves alleged AI misuse, you should be able to pull the relevant log entries quickly;
- if regulators ask how you supervise AI, logs help you demonstrate real oversight, not just paper policies.
Where OrdoLux fits
OrdoLux is legal case management software built for UK solicitors, with AI tools integrated directly into the matter workspace. It stores all matter activity — time entries, documents, emails and AI research outputs — within the matter record. That gives firms a single place to review what happened on a file, which supports supervision and audit.
The platform includes 2FA for access control, KYC/AML via Checkboard, and client account reconciliation. The built-in AI research tool includes citations so outputs can be verified before use.
Limited offer
6 months free — founding firm access
We're inviting a small number of UK law firms to join OrdoLux as founding customers. Full platform access, completely free for 6 months. No credit card. No catch. When we have enough firms on board, this offer closes.
Apply for founding access →
Try OrdoLux — legal case management software built for UK solicitors
Matter management, time capture, billing and AI tools in one platform. Rolling monthly, no lock-in, £50 + VAT per fee earner.