This week AI feels a little less like a pilot and more like key infrastructure. CMS is rolling Harvey out to more than 7,000 people across 50+ countries, and the High Court adds yet another warning on AI-generated authorities.

The common thread this week is scale. As tools move from experiments to firmwide platforms, governance, continuity and realistic expectations matter as much as features.

AI in Practice

CMS rolls Harvey out across 50+ countries

CMS has announced a firmwide rollout of Harvey, giving more than 7,000 lawyers and staff across 21 member firms access to generative AI tools in over 50 countries. The firm says this is now the largest Harvey deployment in EMEA, following a trial that started with around 300 users in early 2024 and expanded to roughly 3,000 lawyers before this wider release. According to CMS’s internal analysis, around 93% of users report productivity gains, with time savings of up to 117.9 hours per lawyer per year (CMS).

For UK lawyers this is a clear marker of where larger practices are heading. Harvey is being positioned as a general-purpose legal AI platform, not a niche add-on, and CMS is explicit that it sees benefits in reduced write-offs and lighter workloads as well as client-facing quality. Those statements will resonate with boards and managing partners weighing similar business decisions.

At a practical level, a deployment on this scale implies more structured governance than many pilots. Someone must decide which use cases Harvey is approved for, how prompts and workflows are shared, what training is mandatory, and how outputs are checked before they reach a client or a court. It also shows that “AI literacy” is no longer confined to innovation teams: everyday users across practice groups are now expected to work alongside AI tools as a matter of course.

There are caveats, of course. The headline productivity figures are based on internal measurement, and the underlying methodology is not public. Even so, they underline that general-purpose AI tools are now being evaluated against concrete time-saving and pricing metrics, not just enthusiasm or fear of missing out.

Takeaways

  • Act: Map your own general-purpose AI footprint. Who in your firm already has access to tools like Harvey, Copilot or others, and what governance (if any) sits around them? If you are still in pilot mode, start thinking now about what evidence you would need to justify a wider rollout.

  • Watch: How CMS talks over time about concrete use cases, training, and supervision, and whether similar “enterprise rollout” announcements follow from other UK or international firms. Also watch for any SRA or judicial commentary that assumes this level of AI use is now normal at larger practices.

  • Risk: As mentioned before, vendor lock-in, data protection and confidentiality remain key concerns. If you commit to a single AI platform, you are also committing to its data model, security and financial resilience. You will also need to keep reviewing how you allocate work between AI tools and junior lawyers without undermining supervision or client expectations.

On your radar

  • High Court warning on AI-generated authorities: A recent High Court decision, reported by the Law Society Gazette, involved a litigant in person whose submissions included AI-generated case citations. The judge reportedly warned that any lawyer who secretly assisted with those submissions could face contempt proceedings.

    Why it matters for UK lawyers: this is another reminder that duties to the court apply even if your involvement is informal or behind the scenes, and that AI-generated authorities must be checked before they go anywhere near a pleading or skeleton argument (https://ai-update.co.uk/2025/12/05/warning-to-lawyers-helping-lip-with-ai-generated-authorities-law-societys-gazette/).

  • Harvey’s “Shared Spaces” and the rise of multiplayer legal AI: Harvey has launched Shared Spaces, a collaborative workspace that lets firms invite clients to use AI-powered workflows built on their own knowledge assets, alongside a fresh $160 million funding round. This is framed as “multiplayer” legal AI, where clients can self-serve first drafts or run due diligence workflows in a shared space that still relies on firm knowledge.

    Why it matters for UK lawyers: if these models gain traction (and see our lead story about CMS), firms will need to decide which parts of their playbooks and precedent banks they are comfortable exposing to clients, how they price that exposure, and where responsibility sits when a client relies on an AI-assisted “self-service” answer (AI Lawyer; Non Billable).

For Review

On the immortality of Microsoft Word (The Redline)
A long, thoughtful essay by Jordan Bryan arguing that Microsoft Word is effectively the protocol of legal work, and that attempts to drag lawyers into Markdown, proprietary editors or “walled garden” platforms often fail because they break that protocol. It is a useful piece to share with IT, innovation teams and vendors when explaining why integration with existing Word workflows still matters. It also highlights the importance of version control in legal services and points to The Redline’s recent article on version histories and a new tool.

The article also discusses the formatting problems of alternative formats such as Markdown (which, as it happens, this newsletter is written in). One passage in particular stood out to the writer of this newsletter:

“Additionally, a well-formatted document is a symbol of a lawyer’s professionalism. Courts aren’t the only readers of legal documents. Clients, counterparties, colleagues, all read a lawyer’s documents as well. The style of their work product reflects the lawyer’s professionalism — the medium is the message.”

Full disclosure: the writer is currently trialling the Version Story software referred to in these articles but has not formed an opinion as yet (and welcomes hearing from others on the tool). Read: The Redline – “On the immortality of Microsoft Word”

When to pull the plug on artificial intelligence (Law Society Gazette)
James Wilson’s review of “AI in Criminal Law” uses the book as a springboard to ask when lawyers should stop using AI on a case. The focus is criminal practice, but the questions (about proportionality, reliability, vulnerable parties and public confidence) are equally relevant to civil, regulatory and commercial work, and to any firm drafting its own AI governance playbook.
Read: Law Society Gazette

Practice Prompt

Set aside 20–30 minutes to pressure-test your approach to AI-generated authorities. Take one recent piece of work where AI assisted with legal research or drafting, and run a simple “cite check” using your usual case law databases.

Note any issues, such as citations that cannot be verified, or places where AI suggestions crept in without clear labelling.

Draft a short, practical note (even 5 bullet points) for your team on how to verify AI-sourced case law before it reaches a client or the court.

How did we do?

Reply and tell me what you would like covered in future issues, or where you would prefer more or less detail.

UK Legal AI Brief

Disclaimer

Guidance and news only. Not legal advice. Test outputs and apply professional judgment.
