AI Red Lines: When Not to Use AI on a Matter
Examples of matters and tasks where AI should not be used, or only with tight guardrails, in a UK firm.
Most firms now have an AI policy that lists approved tools and sensible precautions. Fewer have a clear answer to a more awkward question:
“When should we not use AI on a matter at all — or only under very tight controls?”
Being enthusiastic about AI does not mean using it everywhere. In fact, having well-defined red lines can make it easier to roll AI out confidently in the areas where it genuinely helps.
This article suggests practical “AI red lines” for high‑risk matters and tasks, and how to turn them into day‑to‑day behaviours in a UK firm.
1. Separate “matter red lines” from “task red lines”
First, distinguish between:
- Matter red lines – types of file where AI should not be used at all, or only in highly constrained ways.
- Task red lines – specific activities where AI should never be used, even on otherwise routine matters.
The mix will be different in every firm, but a typical pattern might be:
- some matters where AI is only allowed for neutral admin (for example, time capture, diary notes) but not for anything that touches evidence or advice;
- certain tasks (for example, drafting undertakings, witness statements, or privileged internal assessments) that are categorically off limits for generative tools.
Writing these down in simple lists is far more useful than abstract policy language.
2. Examples of “matter red lines”
Many firms choose to restrict or ban AI use on matters involving:
- Highly sensitive personal data – for example, serious mental health conditions, sexual offences or child safeguarding concerns.
- National security or very high‑sensitivity government work – where security classification or client policies are strict.
- Certain employment or internal investigation files – where there is a high risk of subsequent disclosure, regulatory scrutiny or litigation about internal processes.
- Matters governed by client panel rules that explicitly prohibit AI – some in‑house teams still take this position, at least for now.
Even where AI is not banned outright, you may decide that only firm‑hosted, tightly governed tools inside your own systems can be used, and even then only for admin or summarisation tasks.
Whatever you decide, the key is clarity:
- fee‑earners should be able to tell at a glance whether a matter is “green”, “amber” or “red” for AI use (a rough sketch of how that status might be recorded follows this list);
- partners should be able to override defaults consciously, not by accident.
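As a rough sketch only: the names and fields below (AiStatus, AiTaskType, Matter) are hypothetical, not the schema of any particular case management system. The point is simply that the status, the permitted task types and who set them are recorded explicitly on the matter rather than left to memory.

```typescript
// Hypothetical sketch of a matter-level AI status; names and fields are
// illustrative, not any particular product's schema.

type AiStatus = "green" | "amber" | "red";

type AiTaskType = "admin" | "summarisation" | "drafting" | "analysis";

interface Matter {
  id: string;
  description: string;
  aiStatus: AiStatus;           // the default for this file
  allowedAiTasks: AiTaskType[]; // empty on "red" matters
  statusSetBy?: string;         // partner who consciously set or overrode the default
}

// Example: an amber matter where AI is limited to neutral admin and summaries.
const exampleMatter: Matter = {
  id: "M-2024-0042",
  description: "Employment grievance – internal investigation",
  aiStatus: "amber",
  allowedAiTasks: ["admin", "summarisation"],
  statusSetBy: "supervising-partner",
};
```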
3. Examples of “task red lines”
Some tasks are poor candidates for AI assistance regardless of the file. Common red lines include:
- Giving legal advice – AI can help draft background sections or structure notes, but it should not be asked to “decide” what advice to give.
- Drafting or revising undertakings – where precision and personal responsibility are critical.
- Assessing witness credibility or performance – any language that could later be criticised as biased or unfair should come from humans, not statistical systems.
- Making charging decisions or sanctions recommendations – in regulatory or investigatory contexts.
- Altering evidence – statements, transcripts and contemporaneous records should not be “cleaned up” by AI in ways that risk changing meaning.
In many cases, you may allow AI to help with surrounding tasks — for example, summarising a long document that will then be the basis for advice — but draw a clear line around the core professional judgment itself.
4. Watch the “creep” from allowed tasks into red‑line territory
A subtle risk is that permitted uses quietly expand. For example:
- an associate starts by using AI to summarise a medical report;
- next they ask it to “draft a few bullet points on liability”;
- finally they are pasting those bullets straight into an advice.
To counter this:
- design tools and prompts that bake in limits, for example: “Summarise the document, but do not express any views on prospects of success or what we should advise.”
- train people on a few real‑world “before and after” examples of when the line has been crossed;
- encourage supervisors to ask “what role did AI play here?” during reviews, not just “is this good?”.
Red lines work best when they are felt in day‑to‑day habits, not just written in policy documents.
5. Client and regulator expectations
Some red lines come directly from clients or regulators. Examples include:
- panel terms that forbid certain categories of data being sent to external AI providers;
- regulator warnings about using AI for court submissions without verification;
- professional rules emphasising that lawyers remain personally responsible for advice and advocacy.
Your AI policy should:
- map specific panel or client requirements onto your own “green / amber / red” framework;
- highlight any practice areas where sector regulators are especially cautious;
- explain how you monitor and enforce compliance (for example, through logging and file reviews).
That way, when challenged, you can show that you have consciously designed your red lines in light of external expectations.
6. Implementation: make red lines visible in the tools
Policies alone are not enough. To make red lines real:
- mark high‑risk matters (for example, with an “AI restricted” flag) in the case management system;
- configure AI features so they are disabled or limited on those files (a gating sketch follows this list);
- surface short reminders in the UI when users try to invoke AI on an amber or red matter.
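As a minimal sketch of that kind of gating, assuming a matter-level flag along the lines of the earlier illustration, an AI feature could check the flag before it runs and surface a short reminder on amber files. The function and field names here are hypothetical and do not describe any specific product.

```typescript
// Hypothetical sketch: gate AI features on the matter's flag and return a
// short reminder to surface in the UI. Illustrative only.

type AiStatus = "green" | "amber" | "red";

interface Matter {
  id: string;
  aiStatus: AiStatus;
}

interface AiGateResult {
  allowed: boolean;
  reminder?: string; // short message to show the user, if any
}

const gate: Record<AiStatus, AiGateResult> = {
  red:   { allowed: false, reminder: "AI is restricted on this matter. Speak to the supervising partner before proceeding." },
  amber: { allowed: true,  reminder: "AI is limited on this matter: admin and summarisation tasks only." },
  green: { allowed: true },
};

function checkAiAllowed(matter: Matter): AiGateResult {
  return gate[matter.aiStatus];
}

// Example: the UI calls the gate before invoking any AI feature.
const result = checkAiAllowed({ id: "M-2024-0042", aiStatus: "amber" });
if (result.reminder) {
  console.log(result.reminder); // surfaced as a short reminder in the UI
}
```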
For task‑level red lines, consider:
- standard prompt libraries that simply do not include prohibited uses (sketched after this list);
- templates with built‑in warnings (“Do not use this assistant to draft undertakings or express legal conclusions.”);
- supervision workflows that flag AI‑assisted drafts for extra review.
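As a small illustration, a prompt library entry might carry its guardrail wording and review requirement with it, so the restriction travels with the task. The names and prompt text below are purely illustrative; a firm's real library would be written around its own policy and practice areas.

```typescript
// Hypothetical sketch of a prompt library entry that carries its own guardrail.
// Entries and wording are illustrative only.

interface PromptTemplate {
  name: string;
  allowedOn: ("green" | "amber")[]; // never offered on "red" matters
  body: string;                     // includes the guardrail wording itself
  requiresReview: boolean;          // flag the output for supervisor review
}

const summariseDocument: PromptTemplate = {
  name: "Summarise document",
  allowedOn: ["green", "amber"],
  body:
    "Summarise the attached document for a fee-earner. " +
    "Do not express any views on prospects of success or what we should advise.",
  requiresReview: true,
};

// Prohibited uses (drafting undertakings, expressing legal conclusions) simply
// have no entry in the library, so they are never offered as options.
const promptLibrary: PromptTemplate[] = [summariseDocument];
```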
The aim is to make it easier to do the right thing by default, rather than relying on memory.
7. Train for judgment, not fear
Red lines can be overdone. If people feel that “everything is banned”, they will either:
- avoid AI entirely (and miss real productivity gains); or
- use consumer tools quietly outside policy.
Training should:
- explain why particular matters and tasks are restricted;
- give concrete, practice‑specific examples of good and bad use;
- emphasise that lawyers are expected to exercise judgment, not just obey lists mechanically.
Encourage a culture where people feel able to ask, “Is this a sensible place to use AI?” before they try it — and where the answer is sometimes “yes, with care”, not always “no”.
Where OrdoLux fits
OrdoLux is being designed so that firms can operationalise AI red lines, not just write them down:
- matters can carry flags that control which AI features are available;
- AI activity is logged at matter level, so supervisors can see where tools were used on high‑risk files;
- prompt libraries and workflows can be aligned with your policy, so prohibited uses are simply not offered as options.
That way, your firm can say “we use AI where it helps, and we have clear places where we do not” — and point to concrete controls inside the case management system to back that up.
This article is general information for practitioners — not legal advice or regulatory guidance on any particular matter or sector.
Looking for legal case management software?
OrdoLux is legal case management software for UK solicitors, designed to make matter management, documents, time recording and AI assistance feel like one joined‑up system. Learn more on the OrdoLux website.