How to Run an AI Risk Assessment in Your Firm
Every UK firm using AI needs a simple risk register. This guide shows how to inventory use cases, score risk, select controls and document sign-off — without overcomplicating it.
Most firms now have some kind of AI in use – even if it is just individuals experimenting with tools on their own initiative. The question is no longer “Are we using AI?” but “How much risk are we carrying, and is that risk under control?”
An AI risk assessment sounds heavyweight, but it does not need to be. This article sets out a practical, repeatable method for UK law firms to:
- take stock of AI use across the practice;
- score risks in a consistent way; and
- decide what controls and sign‑offs are appropriate.
Step 1: Build a simple AI inventory
You cannot assess what you do not know about. Start by asking:
- What AI‑enabled tools are we using in the firm?
- Who owns them internally?
- What are they used for (eg, research, drafting, document review, HR, marketing)?
- What data goes in and out?
Capture this in a basic spreadsheet or register with columns such as:
- name of tool / system;
- vendor and hosting location;
- teams using it;
- typical use cases;
- data categories involved (client confidential, personal, special category, staff HR data, open web, etc.).
This inventory is the foundation for everything else.
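As an illustration, a register along these lines can start life as a plain spreadsheet or CSV file. The tool names and entries below are invented for the example, not recommendations:

```
tool,vendor_and_hosting,teams,typical_use_cases,data_categories
"Drafting assistant","ExampleVendor (EU-hosted)","Commercial","first drafts of clauses","client confidential"
"Transcription tool","ExampleVendor (US-hosted)","Disputes","hearing note transcription","client confidential; personal"
```

One row per tool is enough to begin with; the point is a single place everyone updates, not a perfect taxonomy.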
Step 2: Define your risk criteria
Next, agree how you will score risk. A straightforward approach is to score each use case on:
- Impact – how serious the consequences would be if something went wrong (from minor inconvenience to regulatory investigation or serious client harm).
- Likelihood – how plausible it is that the risk will materialise, given the tool and the way you use it.
For each dimension, define in plain language what scores 1–5 mean. For example:
- Impact 1: trivial, little or no client impact
- Impact 3: moderate, could cause complaint or financial loss to a client
- Impact 5: severe, could trigger regulatory action or significant harm
The aim is not mathematical precision but consistent judgment across matters and partners.
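For firms that want to automate the arithmetic in their register, the combination rule can be sketched in a few lines of Python. The multiplication rule and the low/medium/high thresholds here are illustrative assumptions, not a prescribed standard — pick bands that match your own risk appetite:

```python
def risk_score(impact: int, likelihood: int) -> int:
    """Combine 1-5 impact and likelihood scores by multiplication (range 1-25)."""
    for name, value in (("impact", impact), ("likelihood", likelihood)):
        if not 1 <= value <= 5:
            raise ValueError(f"{name} must be between 1 and 5, got {value}")
    return impact * likelihood

def risk_band(score: int) -> str:
    """Map a combined score to a band; the thresholds are illustrative."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Example: moderate impact (3) and fairly likely (4) gives 12, a "medium" risk.
print(risk_band(risk_score(3, 4)))
```

Whatever rule you pick, write it down next to the register so every partner scores the same way.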
Step 3: Identify common AI risk themes
For each tool or use case, consider themes such as:
- Confidentiality & GDPR – is client or staff data sent to third parties? Where is it stored? Are there clear DPAs?
- Accuracy & hallucinations – could incorrect outputs meaningfully affect advice, pleadings or negotiations?
- Duty to the court – does the tool assist with research or drafting of submissions? How are authorities checked?
- Bias & fairness – does it influence decisions about individuals (recruitment, promotion, client intake)?
- Operational resilience – what happens if the service fails or changes pricing abruptly?
Score each use case against these themes, then roll them up into your overall impact/likelihood view.
Step 4: Choose proportionate controls
Once you have a risk score, you can choose controls with a lighter or heavier touch.
For lower‑risk internal uses (eg, summarising public documents):
- restrict tools to approved providers with sensible defaults;
- require basic training for users; and
- log activity in the relevant matter or internal project.
For medium‑risk uses (eg, AI‑assisted drafting of client communications):
- require human review and sign‑off;
- insist on verification of any authorities cited; and
- keep prompts and outputs attached to the matter file.
For higher‑risk uses (eg, tools that influence HR decisions or client onboarding):
- carry out a structured DPIA;
- involve risk/compliance in tool selection;
- consider additional technical safeguards such as redaction, access controls and more detailed logging.
Document these controls in your AI policy and procedures, so the assessment is linked to concrete actions.
Step 5: Record decisions and ownership
An AI risk assessment is only useful if:
- decisions are recorded; and
- someone is accountable for acting on them.
For each tool, capture in your register:
- the overall risk score and key concerns;
- controls adopted (eg, “research only”, “no special category data”, “partner review required for outputs to clients”);
- any conditions for continued use (such as contractual changes or technical improvements);
- the internal owner (partner or manager) responsible.
This gives you an audit trail for regulators and clients, and avoids the common problem of "shadow AI", where no-one is clearly in charge.
Step 6: Review regularly, not constantly
Technology and regulation are moving fast, but you do not need to reassess everything every month. A reasonable pattern is:
- annual review of the AI inventory and risk scores;
- ad‑hoc reassessment when:
  - new high‑risk use cases are proposed; or
  - vendors make major changes to models, pricing or terms.
Keep the process administratively light so that people actually use it.
Where OrdoLux fits
OrdoLux is a legal case management platform for UK solicitors. It includes a built-in AI legal research tool for case law and legislation research, with citations for human verification.
The platform handles matter management, time recording (via keyboard, automatic Outlook email capture, and WhatsApp), document storage with SharePoint, billing, KYC via Checkboard, Stripe payments, and electronic signatures — all in one place.
Limited offer
6 months free — founding firm access
We're inviting a small number of UK law firms to join OrdoLux as founding customers. Full platform access, completely free for 6 months. No credit card. No catch. When we have enough firms on board, this offer closes.
Apply for founding access →

Try OrdoLux — legal case management software built for UK solicitors
Matter management, time capture, billing and AI tools in one platform. Rolling monthly, no lock-in, £50 + VAT per fee earner.
Book a free demo · Learn more