Corsino San Miguel: Retaining strategic knowledge in the age of AI – the new risk to legal intelligence

As AI systems embed themselves in everyday legal workflows, they begin to absorb not just what we produce, but how we think. Dr Corsino San Miguel sets out a strategy for protecting the judgment that defines a law firm’s identity.

Imagine a chef using an AI assistant to make sandwiches.

At first, the AI follows simple instructions: “Add mustard. Now turkey. Now toast the bread.” But over time, it begins to notice patterns. The chef skips mayo when avocado is present. On Fridays, she uses sourdough for VIP clients. If there’s a peanut allergy, she changes knives – no need to be told.

Now imagine hundreds of chefs using the same AI assistant. Each one teaches it something – instincts, workarounds, habits shaped by risk, taste, and discretion. Eventually, the AI starts making very good sandwiches on its own. Not because it was taught a recipe, but because it has absorbed the collective judgment of the chefs who used it.

That’s commoditisation. The original chef’s signature decisions – once hard-earned – are now part of a shared system. No theft. Just quiet absorption.

Now replace chefs with lawyers. Replace sandwiches with contracts.

Today’s AI tools – used for drafting, reviewing, redlining – don’t just help us work faster. They watch how we weigh ambiguity, tighten liability, adjust tone. With every click and correction, they learn our legal posture.

This short paper introduces a concept I call Retaining Strategic Knowledge (RSK), a framework to understand and defend against this invisible erosion. Because in law, as in the kitchen, it’s not just what you produce that matters – but how you think while doing it. And unless that thinking is protected, it becomes available to everyone.

It’s a call for foresight, not fear – and for taking control of the recipe before someone else turns your signature sandwich into the default menu for everyone else.

What AI is learning from us

Legal knowledge isn’t just stored in folders or precedent banks. It lives in how a firm reasons, drafts, and decides. Traditionally, this rested on two foundations. First, explicit knowledge: the documented – case notes, clause libraries, procedural guides. Second, tacit knowledge: the instincts acquired over time, the unspoken judgment that governs tone, timing, and risk.

But now, a third layer is quietly forming: AI-driven knowledge.

This isn’t learned from archives. It’s learned from interaction. Each time we rephrase a clause, adjust a tone, or redraft for risk, we’re not just completing a task – we’re training the AI system. Not explicitly, but behaviourally. And that behaviour is being watched.

AI tools used in contract review, negotiation, and drafting don’t need to see your document vault to learn your strategy. They learn from how you work: which edits you accept, which clauses you reject, how long you hover over a risk provision before softening it. Over time, your firm’s distinct approach becomes statistical memory – available, not just to you, but to the system itself.

This is not data loss. It is recipe leakage.

Like a chef perfecting a sandwich through decades of nuance – timing the toast, swapping spreads, reading the customer’s face – you now have a machine learning your every move. And once enough chefs contribute, the AI can make a very competent sandwich on its own.

Only it’s no longer yours.

That is the quiet cost of convenience. Not theft, but absorption. And unless we guard the recipe, we’ll all be eating variations of the same sandwich.

The RSK Framework: keeping the recipe yours

Retaining Strategic Knowledge (RSK) is not about resisting change or romanticising the past. It is a framework for protecting the craft behind the sandwich. Not the sandwich itself, but how you decide what goes between the bread.

The first layer is strategic. If your law firm is contributing judgment to an AI system – whether through prompts, reviews or feedback – then you need to control what happens to that judgment. Contractual safeguards must prohibit silent learning: no prompt logging, no behavioural fine-tuning, no telemetry reuse unless the kitchen is yours. Transparency must be on paper, not promised. Audit trails, data flows, learning loops – nothing should be assumed.

Workflows must also be segmented. Not every sandwich needs the head chef. High-volume, low-risk tasks – routine drafting, standard NDAs – can be delegated to the machine. But bespoke creations, those with complex flavours and reputational consequences, require human discretion. The line between AI-safe and chef-only must be drawn – and respected.

Then comes the operational layer: infrastructure and unpredictability. On-premise or private deployments ensure the sandwich logic stays in-house. But beyond that, you must introduce just enough cognitive obfuscation to remain sovereign. Rotate who assembles the sandwich. Change the sequence. Vary the instruction. Break the pattern before the system learns it too well. Don’t let your secret sauce become a standard setting – especially when the machine is always watching, always averaging.

RSK is not about halting progress. It is about ensuring that when the system learns, it learns only what you choose to teach it. Because once your sandwich logic becomes the system’s default – everyone’s eating your thinking, and only one party owns the kitchen.

Choose the kitchen

Legal work has always been more than the finished product. It is a choreography of judgment, instinct, and experience – layered with care, adjusted in context, and refined over time. What AI tools now observe is not just what we build, but how we build it. And in that observation, they begin to inherit the logic that once made each firm distinct.

Commoditisation, in this sense, is not theft. It is silent learning. The system doesn’t need to steal your sandwich: it only needs to watch you make it enough times.

The RSK framework offers a way forward. It is not a rejection of AI. It is a declaration of boundaries. A statement that strategic knowledge – like any asset – must be governed if it is to remain yours.

The question is not whether the model will learn. It will.

The question is: from whom?

If you don’t choose the kitchen, someone else will.

And by the time the house special shows up on another table, it will be too late to claim the recipe.

Dr Corsino San Miguel is a member of the AI Research Group and the Public Sector AI Task Force at the Scottish Government Legal Directorate. The views expressed here are personal.
