AI and Information Barriers: Keeping Chinese Walls Intact

Key design choices to stop AI tools leaking information across departments or clients inside a firm.

Traditional information barriers (“Chinese walls”) were designed for a world of:

  • paper files in different rooms;
  • separate secretaries and teams;
  • systems that barely talked to each other.

Modern AI‑enabled tools cut across those boundaries. Search and summarisation features thrive on seeing as much data as possible — which is exactly what information barriers are meant to prevent.

This article looks at how to keep information barriers intact in an AI‑enabled firm, with a focus on practical design choices rather than abstract theory.

1. Start from your existing barrier model

Most firms already have a mix of:

  • hard barriers – physically and logically separate teams, systems and permissions (for example, Chinese‑walled M&A teams);
  • soft barriers – norms and policies limiting who may access or discuss certain files (for example, general conflicts or sensitivities).

AI does not replace these; it adds new questions:

  • which datasets can AI see when answering a query?
  • can a user accidentally pull in material from a walled‑off matter?
  • how are prompts and outputs logged and audited?

Clarify, on paper, which parts of your current model are non‑negotiable. Those are the constraints your AI architecture must respect.

2. Treat AI as a feature layered on top of access controls

A core principle is:

“AI should never see more than the user is allowed to see.”

That means:

  • access control remains at the document and matter level in your DMS or case management;
  • AI features can only operate on documents that are already accessible to the user under those controls;
  • there is no “super‑index” that ignores matter permissions in the name of convenience.

Architecturally, this often involves:

  • building AI indexing and vector stores within each permission domain (for example, by client, by practice area, by barrier group);
  • ensuring that any global search is really a federation of per‑domain searches filtered by the user’s permissions.
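The per-domain pattern above can be sketched in a few lines of Python. This is a minimal illustration, not any particular product's implementation: the names `DomainIndex`, `federated_search` and `barrier_group` are all assumptions, and the `search` method stands in for a real vector or keyword search.

```python
from dataclasses import dataclass, field


@dataclass
class DomainIndex:
    """A per-barrier-group index (e.g. one per client, practice area or walled group)."""
    barrier_group: str
    documents: dict = field(default_factory=dict)  # doc_id -> text

    def search(self, query: str) -> list[str]:
        # Stand-in for a real vector or keyword search within this domain only.
        return [doc_id for doc_id, text in self.documents.items()
                if query.lower() in text.lower()]


def federated_search(query: str, user_groups: set[str],
                     indexes: list[DomainIndex]) -> list[str]:
    """Global search as a federation of per-domain searches.

    The permission check happens BEFORE any retrieval: domains the user
    cannot see are never queried, so there is no firm-wide super-index.
    """
    results: list[str] = []
    for index in indexes:
        if index.barrier_group in user_groups:
            results.extend(index.search(query))
    return results
```

The key design choice is that the filter sits in front of retrieval, not behind it: a walled-off index is never consulted, rather than consulted and then redacted.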

If an AI feature requires broader access than your existing barrier model allows, the answer should usually be “no” or “not yet”, not a quiet exception.

3. Avoid cross‑matter training on sensitive content

Many AI features rely on existing data to improve performance. For law firms with information barriers, you need sharp lines around:

  • what data, if any, is used to train shared models;
  • whether models are trained per tenant, per practice or truly across the whole firm;
  • how you prevent leakage of patterns between walled‑off areas.

Safeguards might include:

  • limiting training data to non‑sensitive, non‑barriered matters;
  • preferring per‑tenant or per‑group models over global ones for certain workflows;
  • rigorous anonymisation and aggregation where you do use cross‑matter examples (for example, synthetic prompts and outputs).
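The first safeguard, limiting training data to non-sensitive, non-barriered matters, can be expressed as a simple eligibility filter. This is an illustrative sketch under assumed field names (`barriered`, `sensitive`, `client`); a real pipeline would also check consent and retention rules recorded elsewhere.

```python
def select_training_matters(matters: list[dict],
                            consented_clients: set[str]) -> list[dict]:
    """Return only matters eligible for shared-model training.

    A matter is excluded if it sits behind an information barrier,
    is flagged sensitive, or belongs to a client who has not agreed
    to their data being used this way.
    """
    return [
        m for m in matters
        if not m.get("barriered")          # never train on walled-off matters
        and not m.get("sensitive")         # nor on flagged-sensitive ones
        and m.get("client") in consented_clients
    ]
```

Defaulting to exclusion (`m.get("barriered")` treats a missing flag as false, so in practice you would want the flag to be mandatory at matter creation) is the kind of detail worth deciding deliberately rather than inheriting from the data model.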

The goal is to get the benefit of better prompts and UX without turning your deal history into a de facto shared training set.

4. Design prompts and features to respect barriers by default

Even with good access controls, careless prompt design can nudge users towards barrier breaches. Examples to avoid include:

  • “Summarise everything we know about Client X” – when some matters are walled off;
  • “Show similar cases to this one from across the firm” – without filtering for permissions;
  • global trend analysis that might indirectly reveal sensitive relationships or counterparties.

Safer patterns focus on the current matter or permitted dataset, for example:

  • “Summarise documents in this matter”;
  • “Find similar pleadings among matters I can access”;
  • “Draft a chronology from the documents linked to this file.”

Careful wording in UI and prompts, combined with technical filters, reduces the chance that lawyers inadvertently ask AI to behave like an omniscient firm‑wide brain.

5. Logging and audit: show your working

If something goes wrong, you will want to know:

  • what the user asked;
  • what the AI system accessed;
  • what it returned;
  • who saw it, and when.

That means building matter‑level and system‑level logs for AI activity, including:

  • prompts (or at least structured records of actions);
  • document IDs involved;
  • outputs or references generated.

These logs should be:

  • secured and access‑controlled;
  • available to risk and compliance teams;
  • retained in line with your wider records policies.

This makes it easier to investigate suspected barrier breaches and to demonstrate, to regulators or clients, that you take them seriously.
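As a minimal sketch of such an audit record (field names are illustrative assumptions, and `sink` stands in for whatever append-only store a firm uses): here the prompt and output are stored as SHA-256 hashes to show a lower-retention option; a firm may prefer to keep the full text, depending on its records policy.

```python
import hashlib
import json
from datetime import datetime, timezone


def log_ai_event(user_id: str, matter_id: str, prompt: str,
                 doc_ids: list[str], output: str, sink: list) -> dict:
    """Append one structured, matter-level audit record for an AI interaction.

    Captures who asked, what was asked (hashed), which documents the
    system touched, and what came back (hashed), with a UTC timestamp.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "matter_id": matter_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "document_ids": list(doc_ids),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    sink.append(json.dumps(record))
    return record
```

Hashing keeps the log useful for matching a disputed prompt or output against the record without turning the audit trail itself into a second copy of sensitive content.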

6. Training and culture: explain why the limits exist

Lawyers already understand conflicts and barriers in broad terms. The new step is helping them see how AI features intersect with those duties.

Training should cover, with examples:

  • why “ask everything about X” is risky in an AI context;
  • how to use AI features in a way that stays within the walls;
  • how to report and handle incidents where something looks wrong.

Encourage people to:

  • treat suspicious AI outputs as potential barrier issues, not just oddities;
  • escalate early rather than sharing them widely;
  • see risk and IT as allies in shaping safe tools, not blockers.

Where OrdoLux fits

OrdoLux is being designed with multi‑tenant and barrier‑friendly architecture at its core:

  • each firm’s data sits in its own tenant, with strict separation from others;
  • matter and document permissions govern what AI features can see and use;
  • AI activity is logged at matter level, helping firms monitor and investigate use.

As firms adopt more powerful AI features, the aim is that OrdoLux helps you say, truthfully, that your walls are built into the system, not just described in a policy.

This article is general information for practitioners — not legal advice, regulatory guidance or a full information‑security design.

Looking for legal case management software?

OrdoLux is legal case management software for UK solicitors, designed to make matter management, documents, time recording, information barriers and AI assistance feel like one joined‑up system. Learn more on the OrdoLux website.
