Five Common Misconceptions About AI in Legal Practice
From ‘AI writes the advice’ to ‘confidentiality is impossible’, let’s separate hype from reality for UK firms.
Lawyers are being sold two unhelpful stories about AI at the same time:
- the hype story: “AI will replace lawyers; you’ll press a button and out pops the advice”; and
- the fear story: “If you even touch it, you’ll breach confidentiality, GDPR and your professional duties.”
Neither is accurate, especially for small and mid‑sized UK firms.
This article tackles five common misconceptions about AI in legal practice and replaces them with more realistic assumptions you can build policy and process around.
Misconception 1: “AI can just write the advice for me”
Modern models can produce scarily fluent text. If you ask, “Draft advice on X”, you will get something that looks like advice: headings, structure, confident conclusions.
But:
- the model has no client, no retainer and no instruction letter;
- it cannot weigh conflicting evidence or evaluate credibility; and
- it has no professional duty to anyone.
Treating AI as the author of the advice confuses drafting support with judgment.
Better assumption: AI can help with structure, language and first drafts, but only a solicitor can give legal advice.
In practice that means:
- use AI to draft skeletons, summaries and variants for different audiences;
- never send AI output to a client without your own review, edits and sign‑off; and
- make sure your file shows your reasoning, not just an AI‑generated memo.
Misconception 2: “If we use AI, we must be breaching confidentiality / GDPR”
It depends entirely on how you use it.
There is a big difference between:
- pasting a client’s full factual matrix into a consumer chatbot on a public website; and
- using a firm‑controlled tool where:
  - data is encrypted;
  - processing is governed by a data processing agreement (DPA); and
  - prompts and outputs are not used to train public models.
From a UK GDPR and confidentiality perspective, the key questions are:
- What personal data and confidential information are you sending?
- Where is it going, and under what contract?
- Is this necessary and proportionate for the purpose?
You already make similar assessments when using email providers, document management systems or transcription tools.
Better assumption: AI can be used consistently with GDPR and confidentiality – but only with the right technical and contractual controls.
Practically, most firms will want to:
- block or strongly discourage use of unauthenticated public chatbots for client‑specific work;
- prefer tools that run on tenant‑isolated or EU/UK‑hosted models; and
- document their reasoning in an AI / data protection policy and (for higher‑risk use cases) a data protection impact assessment (DPIA).
Misconception 3: “If the model was trained on the internet, our data becomes part of it”
Again, the detail matters.
Two different things are often conflated:
- Training data – what the base model was originally trained on.
- Usage data – your prompts and outputs when you use the model via a particular service.
Most reputable providers now offer options where:
- your usage data is not used to train or fine‑tune the base model; and
- your prompts and outputs are logically and contractually segregated from other customers.
That does not remove all risk – you still have to think about storage, access logs, support access, sub‑processors, etc. But it does mean “anything we type in becomes training data” is no longer automatically true.
Better assumption: How your data is handled depends on the provider and the contract, not on how the base model was trained. Ask each provider, in plain language, what they do with:
- prompts,
- outputs, and
- derivative analytics.
If the answer is vague or buried in marketing speak, think very carefully before using that provider for client work.
Misconception 4: “Regulators will ban this, so we should wait”
Regulators in the UK have been cautious but are not banning AI outright. Instead, they are:
- reiterating existing duties (competence, confidentiality, supervision, duty to the court); and
- signalling that these duties still apply even if AI is in the loop.
In other words: the tools might be new, but the principles are familiar.
Waiting for a perfect, finalised regulatory regime is risky in itself, because:
- your clients will be using these tools regardless; and
- other firms will be learning how to deploy them productively and safely.
Better assumption: Regulation will evolve, but the core professional duties are stable enough to start from now.
A sensible approach is:
- adopt AI first in low‑risk, internal use cases (summarising documents, drafting internal notes);
- keep anything high‑risk (novel advice, contested litigation strategy) under closer supervision; and
- update your policy periodically as SRA and other guidance develops.
Misconception 5: “This is only for big firms with innovation teams”
In reality, many of the highest‑leverage use cases are in smaller firms:
- turning long email chains or WhatsApp threads into a single update;
- drafting first‑pass letters, attendance notes and chronologies; and
- capturing time automatically from work you’re already doing.
Large firms may have dedicated innovation teams and custom models, but small firms have:
- shorter decision chains;
- fewer legacy systems; and
- a stronger incentive to remove friction from everyday work.
Better assumption: Small firms can get value quickly by focusing on everyday workflows, not grand transformation projects.
You do not need an AI lab to:
- pilot a tool with one team or case type;
- measure a couple of concrete outcomes (time saved, write‑offs reduced, speed to first draft); and
- decide whether to roll out further.
Putting healthier assumptions into practice
If you replace the myths above with more grounded assumptions, it becomes much easier to design something workable:
- Policy – short, focused, with clear “do / don’t” examples.
- Training – practical workshops using your own documents and scenarios.
- Technology – a small number of supported tools, wired into your matter workflows.
- Governance – light‑touch oversight, audits of a sample of AI‑assisted work product, and a way for people to raise concerns.
Over time, firms that treat AI as a supervised assistant – not a magic answer, not a prohibited topic – are likely to move faster and with fewer unpleasant surprises.
This article is general information for practitioners – not legal advice.
Looking for legal case management software?
OrdoLux is legal case management software for UK solicitors, designed to make matter management, documents, time recording and AI assistance feel like one joined‑up system. Learn more about OrdoLux.