This week the conversation shifted from tools to something more fundamental. Mustafa Suleyman, CEO of Microsoft AI, predicted that lawyers and other white-collar professionals will be fully automated within 18 months. It is a claim worth engaging with seriously, because the people making it are not idle commentators. But having thought it through, I think it is wrong, and wrong in ways that matter for how we approach AI in legal practice.
Suleyman says lawyers will be automated in 18 months. Here is why I disagree.
In a widely reported interview this month, Mustafa Suleyman, CEO of Microsoft AI, predicted that "white-collar work, where you're sitting down at a computer — being, you know, a lawyer, or an accountant, or a project manager — most of those tasks will be fully automated by an AI within the next 12 to 18 months." He added that AI would soon reach "human-level performance on most, if not all, professional tasks," pointing to software engineering as his evidence of pace. (Fortune)
The timeline is aggressive and, frankly, implausible. A 2025 Thomson Reuters survey of lawyers and accountants found the reality is targeted use (document review, research summaries, drafting) with modest productivity gains. In some observed cases, AI has even made software developers around 20% less productive. The gap between a persuasive demo and a reliable professional tool remains significant. I have written about my own experience testing workflows this year, and the efficiency gains are real but bounded. See also my 2026 predictions post, in which I predict that lawyers will not be made redundant en masse this year.
But the more important problem with Suleyman's claim is not the timeline. It is the category error at its heart.
Lawyers are not task processors
Suleyman describes lawyers as people "sitting down at a computer" performing tasks. That is a description of some of what lawyers do, not of what lawyers are to a client. A client who instructs a solicitor is retaining someone who will exercise professional judgment on their behalf, bear personal responsibility for that judgment, carry indemnity insurance against its consequences, and remain answerable to a regulator if things go wrong. No AI system holds SRA authorisation. No AI system can be struck off. No client can enforce a professional duty against an algorithm.
The SRA's own position underlines this. Existing professional obligations (competence, adequate supervision, client protection) apply fully to AI-assisted work. You cannot outsource the responsibility. Even Anthropic, releasing its own legal workflow tools, is explicit that outputs must be reviewed by a licensed lawyer before anyone relies on them. The supplier's disclaimer is, in effect, an admission from within the supply chain that the tool cannot substitute for professional accountability.
Reserved activities and the Mazur ruling
There is also a harder legal constraint. The Legal Services Act 2007 defines a category of "reserved legal activities" — conducting litigation, exercising rights of audience, preparing certain court documents — that only authorised persons may perform. This is not professional convention. It is the law.
The Mazur case, decided last year, brought this into sharp focus. In Mazur v Charles Russell Speechlys LLP [2025] EWHC 2341 (KB), the High Court held that a non-qualified fee earner who had signed the particulars of claim and conducted virtually all steps in the proceedings had unlawfully conducted litigation. The mere fact of being employed by an authorised firm was not enough. The SRA subsequently warned that it will use its enforcement powers against firms that have not addressed the judgment's implications. (SRA)
The relevance to the AI displacement argument is direct. If a non-authorised human cannot conduct litigation even under solicitor supervision, an AI system, with no SRA authorisation and no professional standing whatsoever, cannot do so either. The Law Society has now written to the SRA asking for urgent guidance on precisely this question: whether AI performing key decision-making steps in litigation amounts to "conducting litigation" within the meaning of the statute. (Legal Futures) The profession is not asking because it expects the answer to be favourable to AI autonomy.
The Law Society's broader position on regulation remains that the existing SRA framework is adequate and that what lawyers need is clarity on how existing rules apply to AI — not deregulation or a new regime. The professional duties do not disappear because the tool is clever. They follow the lawyer.
What the argument gets right
Suleyman is correct that the pace of AI development in professional contexts is real and that firms treating this as a distant concern are making a mistake. Adoption among UK lawyers is accelerating sharply. The frontier for agentic tools (AI that can chain multi-step tasks without prompting) is moving. The governance question for litigation practices is not academic; it is live.
The risk is not replacement. It is that individual lawyers, under time and cost pressure, over-delegate to AI in ways the regulatory framework does not permit and that clients have not sanctioned. That is a supervision and governance failure, not an inevitable consequence of the technology. It is also entirely avoidable.
Read: Fortune | SRA on Mazur | Legal Futures
On your radar
Law Society requests urgent SRA guidance on Mazur and AI: The Law Society has written to the SRA asking for clarity on whether AI performing key decision-making steps in litigation amounts to "conducting litigation" under the Legal Services Act 2007. The Law Society's own Mazur guidance has been updated four times since October 2025, a clear sign of how fast the landscape is shifting. It states that AI performing litigation decisions represents "a novel development that was clearly not within the contemplation of the drafters." Why it matters for UK lawyers: any litigation practice using AI tools at any step in live proceedings needs to understand where the line falls. Watch this space. (Legal Futures)
Harvey launches UK legal benchmark: Harvey has released BigLaw Bench: Global, extending its benchmark dataset to include UK, Australian and Spanish legal evaluations. Leading models now score around 90% on standardised legal tasks, up from roughly 60% in 2024. Harvey's own research notes that performance degrades on jurisdiction-specific work. Why it matters for UK lawyers: benchmark figures will increasingly appear in vendor sales conversations. 90% accuracy on a standardised test is not the same as reliable professional judgment on a specific client matter under English law. Read the methodology before taking the number at face value. (Harvey AI)
61% of UK lawyers now using GenAI: A Legal Futures survey shows that 61% of UK lawyers are using generative AI for work, a sharp rise from 46% earlier in the year. The proportion saying they have no plans to use AI has fallen from 15% to just 6%. Why it matters for UK lawyers: adoption is real and accelerating. But using AI for specific tasks and being replaced by AI are different things. As adoption grows, so does the exposure to governance and supervision failures. The obligation to check, verify and remain responsible does not diminish as the tools improve. (Legal Futures)
Ad Break
To help cover the running costs of this newsletter, please check out the advert below. In line with my promises from the start, adverts will always be declared and will always be for actual products I have tried, with some brief thoughts from me.
I have been a big fan of Wispr lately, and dictate a lot of my routine emails and messages. It is, for me, a much quicker way to get my thoughts down, and I have been impressed by the software’s ability to correct my errors and properly remove “umms” and backtracks.
The above entire paragraph was dictated with Wispr directly into this draft newsletter. I didn’t amend anything and it captured my writing style well (thanks to the customisations I have applied).
Vibe code with your voice
Vibe code by voice. Wispr Flow lets you dictate prompts, PRDs, bug reproductions, and code review notes directly in Cursor, Warp, or your editor of choice. Speak instructions and Flow will auto-tag file names, preserve variable names and inline identifiers, and format lists and steps for immediate pasting into GitHub, Jira, or Docs. That means less retyping, fewer copy and paste errors, and faster triage. For deeper context and examples, see our Vibe Coding article on wisprflow.ai. Try Wispr Flow for engineers.
For Review
Mazur: SRA hot topic page (SRA)
The SRA's own summary of the Mazur judgment: who can and cannot conduct litigation, the factors the SRA will assess, and its approach to self-reported past errors. If you manage a litigation team that includes non-qualified handlers, paralegals or AI tools at any stage of proceedings, this is required reading. The SRA says pre-October 2025 errors made in genuine good faith may be treated sympathetically, but continued non-compliance following the judgment is a different matter entirely.
Read or listen: SRA — Conducting Litigation
Law Society: Mazur and the conduct of litigation
The Law Society's guidance note on Mazur sits alongside the SRA's but covers a wider range of practical questions and firm-facing scenarios. The fact that it has been updated four times in four months is itself instructive: the profession does not yet have a settled view, and the AI dimension remains unresolved. Essential reading for any litigation practice before the SRA responds to the Law Society's latest request.
Read or listen: Law Society
Practice Prompt
This week's issue raises a practical question every litigation firm needs to answer: which steps in your AI-assisted workflow constitute "supporting" litigation versus potentially "conducting" it? Try the prompt below to start that audit. Ensure you fill in the fields marked with {}. Remember to adhere to the Golden Rules and do not upload confidential or privileged information to public tools.
You are advising a solicitor at a {size, e.g. 5-partner regional} law firm regulated by the SRA.
The firm handles {practice area, e.g. civil litigation, debt recovery, possession claims}.
Following the judgment in Mazur v Charles Russell Speechlys LLP [2025] EWHC 2341 (KB), the
firm needs to review how it uses AI tools in its litigation workflow.
The firm currently uses AI tools for the following steps:
{List each step, for example:
- Drafting the letter before action
- Generating the particulars of claim from a template
- Identifying applicable limitation periods from a case summary
- Flagging procedural deadlines
- Drafting witness summary notes from a file
- Summarising opponent correspondence}
For each step listed, produce a table with the following columns:
1. Step description
2. Whether this step is more likely to constitute "conducting litigation" or "supporting"
an authorised person — and why
3. The key risk if this step is performed or relied upon without adequate human review
by an authorised person
4. A suggested supervision control (e.g. who reviews, what sign-off is required,
what is documented)
5. Any flags for further legal advice
Base your analysis on the principles from Mazur: the critical question is who bears
ultimate responsibility and who exercises judgment on case direction. Be specific and
practical. Flag where you are uncertain and recommend the firm seeks independent advice
on any borderline step.
How did we do?
Hit reply and tell me what you would like covered in future issues or any feedback. We read every email!
Thanks for reading,
Serhan, UK Legal AI Brief
Disclaimer
Guidance and news only. Not legal advice. Always use AI tools safely.
Recommended Newsletters
Below are a few newsletters that I recommend, for various reasons. Check them out!