This week the Court of Appeal handed down its judgment in the Mazur appeal, overturning the High Court and confirming that unauthorised persons can conduct litigation under appropriate supervision. The Law Society has now called on the SRA to provide urgent guidance on a question the judgment does not answer: whether AI systems that make key decisions in a case could themselves amount to "conducting litigation" under the Legal Services Act 2007. The answer, for now, is that nobody knows!
Mazur lands, and the AI question is wide open
On 31 March 2026, the Court of Appeal handed down its judgment in CILEX v Mazur and others [2026] EWCA Civ 369, overturning the High Court's ruling and confirming that an unauthorised person can lawfully conduct litigation provided they do so under the supervision of an authorised individual. The SRA welcomed the decision, which it said provides "clear direction" on how firms can delegate litigation tasks to paralegals and other non-authorised staff. The judgment emphasises that the authorised individual must retain responsibility and put appropriate arrangements in place for direction, management, supervision, and control. The details of what constitutes appropriate supervision are, the court said, a matter for the regulators.
That part of the story is relatively straightforward, and for most firms it represents a return to normal practice after a period of uncertainty following the original High Court ruling. But the part that should concern anyone using AI in litigation workflows is what the judgment does not address.
The Law Society, which has updated its Mazur guidance four times since October 2025, has now called on the SRA to provide advice "as a matter of urgency" on the implications of Mazur for artificial intelligence. The question is whether an AI system that makes key decisions in a case (for example, deciding which documents to disclose, drafting particulars of claim, or assessing the merits of a settlement offer) could be regarded as "conducting litigation" within the meaning of the Legal Services Act 2007. If it could, then using such a tool without appropriate authorisation and supervision arrangements would be unlawful.
The Law Society's guidance states plainly that "the legitimacy of the use of AI to make key decisions in a case that would amount to conducting litigation if taken by an individual remains unresolved." It argues that without regulatory clarity, solicitors will feel "inhibited in exploring what uses of AI in legal services are desirable," making SRA guidance vital for the profession.
This is not a theoretical problem. AI tools are already being used in litigation workflows across UK firms, from document review to drafting to case assessment. The question is not whether firms are using AI in these ways, but whether those uses cross a line that the Legal Services Act draws, and which the Court of Appeal's judgment does not redraw. Civil Litigation Brief ran a webinar on 9 April examining the practical implications, and the Law Society has scheduled a training session for 14 April titled "Mazur: what the Court of Appeal judgment means for you."
Read: Legal Futures / SRA / Law Society
On your radar
AI is mainstream in law, but only 7% of clients are told: A survey of more than 500 legal professionals and 500 members of the public, conducted for practice management platform Clio, has found that AI use in UK and Irish law firms is now widespread but client disclosure is vanishingly rare: only 7% of clients recall their lawyer telling them that AI was involved in their matter. Why it matters for UK lawyers: the SRA's outcomes-focused approach does not prescribe AI disclosure, but Principle 7 (acting in the best interests of each client) and the Code of Conduct's expectations of clear client communication both point towards telling clients when AI is being used on their work, particularly where it touches on substantive legal analysis. Firms should consider whether their current approach to client communication is keeping pace with their AI adoption. (Law Gazette)
Harvey deploys autonomous Spectre agent, previews "law firm world model": Legal AI platform Harvey (valued at $11bn after its $200m raise in March) has deployed an internal autonomous agent called Spectre that monitors incidents, bug reports, customer feedback, and Slack messages, and then makes engineering decisions without human prompts. Harvey is now showing law firm clients what it describes as "what is now possible with agents: systems that can operate over entire client matters like a team of associates." Why it matters for UK lawyers: this is more concept than product for UK firms today, but it signals the direction of travel. The shift from AI as a tool that responds to prompts to AI as an agent that acts independently on a matter raises exactly the supervision questions at the heart of the Mazur AI debate. Worth watching alongside the SRA's forthcoming guidance. (Artificial Lawyer)
PE-backed Lawfront acquires Field Seymour Parkes, seventh firm in its network: Lawfront, the private-equity-backed legal services group, has acquired Reading-based Field Seymour Parkes (85 professionals, approximately £15m turnover), its seventh regional firm. The acquisition follows Lawfront's stated model of migrating acquired firms onto a common cloud-first technology stack as a "foundation for AI-driven innovation." Why it matters for UK lawyers: PE consolidation of mid-market UK firms is accelerating, and AI capability is increasingly the stated rationale. The pattern is clear: acquire, standardise the technology platform, then deploy AI at scale across the network. Firms in the mid-market that are not part of a consolidation play may find it harder to match the technology investment that these groups can deploy. (Legal IT Insider / Law Gazette)
Ad Break
To help cover the running costs of this newsletter, please check out the advert below. In line with my promises from the start, adverts will always be declared, and will only be for products I have actually tried, with some brief thoughts from me.
1,000+ Proven ChatGPT Prompts That Help You Work 10X Faster
ChatGPT is insanely powerful.
But most people waste 90% of its potential by using it like Google.
These 1,000+ proven ChatGPT prompts fix that and help you work 10X faster.
Sign up for Superhuman AI and get:
- 1,000+ ready-to-use prompts to solve problems in minutes instead of hours, tested & used by 1M+ professionals
- Superhuman AI newsletter (3 min daily) so you keep learning new AI tools & tutorials to stay ahead in your career; the prompts are just the beginning
For Review
"Buying New Technology" guide (Law Society)
A practical procurement guide aimed at sole practitioners, small firms, and smaller in-house teams. Covers identifying needs, sourcing suppliers, negotiating contracts, evaluating tools, and managing implementation, with downloadable templates. If you are involved in technology procurement at a firm of any size, the supplier evaluation framework is worth a look, and the guide is free.
Read or listen: Law Society
"I Built An Agentic Law Firm, Now What?" (Artificial Lawyer)
Antti Innanen, a Finnish lawyer and serial founder, describes building a virtual law firm run entirely by 66+ AI agents on a Mac Mini, with no data leaving the local machine. The system handles intake, specialist routing, internal debate, escalation, and synthesis. It runs on a local LLM for routine tasks and escalates to Mistral for complex matters. Innanen plans to open-source the entire system if he cannot find a commercial partner by May. The piece is a thought experiment more than a product announcement, but it is a concrete demonstration of where agentic legal AI is heading, and the local-first privacy architecture addresses a genuine concern for firms worried about client data.
Read or listen: Artificial Lawyer
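Innanen has not published the code, but the core routing idea he describes (keep routine work on the local model, escalate complex matters to a hosted one) is simple enough to sketch. Below is a minimal, hypothetical Python illustration of that local-first escalation pattern. Every name here, and the complexity heuristic, is my own assumption for illustration; it is not Innanen's implementation.

from dataclasses import dataclass

@dataclass
class Matter:
    summary: str
    practice_area: str

def call_local_model(prompt: str) -> str:
    # Stand-in for a model running on the Mac Mini itself;
    # nothing leaves the machine on this path.
    return f"[local draft] {prompt[:60]}"

def call_remote_model(prompt: str) -> str:
    # Stand-in for an escalation call to a hosted model (e.g. Mistral).
    # Matter data leaves the machine here, so in practice this path
    # would sit behind a consent or anonymisation step.
    return f"[remote draft] {prompt[:60]}"

def is_complex(matter: Matter) -> bool:
    # Toy heuristic for illustration only; a real router might use a
    # classifier or a local-model self-assessment of confidence.
    return len(matter.summary) > 500 or matter.practice_area == "disputes"

def triage(matter: Matter) -> str:
    # Routine work stays local; complex matters escalate.
    prompt = f"Assess this {matter.practice_area} matter: {matter.summary}"
    if is_complex(matter):
        return call_remote_model(prompt)
    return call_local_model(prompt)

print(triage(Matter("Routine NDA review for a supplier contract.", "commercial")))

The point of the pattern is that the privacy guarantee only holds on the local path: any escalation sends matter data off the machine, which is exactly where a firm's supervision and consent controls would need to bite.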
IAPP Global Summit: OpenAI and Anthropic counsel on the privacy-safety tradeoff (LegalTech News)
At the IAPP Global Summit on 30 March, in-house counsel from both Anthropic and OpenAI discussed the tension between monitoring AI usage for safety and protecting user privacy. Anthropic's product counsel noted that "in order to make models safe, you have to have visibility into how they're being used in real life," raising a direct question for any firm deploying AI tools: what data is being collected about your usage, by whom, and for what purpose? Useful context for anyone reviewing AI vendor terms or drafting internal AI governance policies.
Read or listen: LegalTech News
Practice Prompt
Try the prompt below to audit how your firm uses AI in litigation workflows, in light of the Mazur judgment and the Law Society's call for urgent SRA guidance. Fill in the context, constraints, and other placeholders marked with {}. Remember to adhere to the Golden Rules, and do not upload confidential or privileged information to public tools.
You are helping a {litigation partner / compliance officer / COLP / head of innovation} at a UK law firm audit how AI tools are currently used in the firm's litigation workflows, in light of the Court of Appeal's judgment in CILEX v Mazur [2026] EWCA Civ 369 and the Law Society's call for urgent SRA guidance on whether AI can amount to "conducting litigation" under the Legal Services Act 2007.
Context: The Court of Appeal confirmed on 31 March 2026 that an unauthorised person can lawfully conduct litigation under the supervision of an authorised individual, provided the authorised individual retains responsibility and puts appropriate arrangements in place for direction, management, supervision, and control. The judgment does not address whether AI systems fall within the same framework. The Law Society has stated that "the legitimacy of the use of AI to make key decisions in a case that would amount to conducting litigation if taken by an individual remains unresolved" and has called on the SRA to provide guidance "as a matter of urgency."
My firm's litigation practice areas include: {e.g., commercial disputes, professional negligence, property litigation, debt recovery, employment, personal injury}
AI tools currently in use include: {e.g., document review platform, contract analysis tool, legal research assistant, drafting assistant, case assessment tool, e-disclosure platform with AI-assisted review}
For each AI tool or workflow identified, produce a table with the following columns:
1. AI tool or workflow (e.g., "AI-assisted document review for disclosure," "AI drafting of particulars of claim," "AI merit assessment of incoming claims")
2. Task description: what the AI tool actually does in practice, in concrete terms
3. Decision-making level: does the AI (a) assist a lawyer who makes the decision, (b) recommend a course of action that a lawyer reviews and approves, or (c) make the decision autonomously with limited or no lawyer review before it takes effect?
4. "Conducting litigation" risk (high / medium / low): based on whether the task, if performed by an unauthorised individual, would amount to conducting litigation under the Legal Services Act 2007. Consider whether the task involves: issuing or serving proceedings, filing documents at court, making decisions about the conduct of proceedings (disclosure, amendments, settlement), or taking steps that determine the course of litigation.
5. Current supervision arrangements: who supervises the AI's output, how, and at what stage? Is there a named authorised individual responsible?
6. Gap analysis: what is missing? For each tool, identify whether the current supervision arrangements would satisfy the Mazur standard if the AI were treated as an unauthorised person conducting litigation under supervision.
7. Recommended action: a concrete next step (e.g., "implement mandatory lawyer sign-off before AI-generated disclosure lists are finalised," "document the supervision protocol and name the responsible authorised individual," "no change needed, tool is advisory only")
Then:
- Identify the 2-3 highest-risk workflows and draft a brief supervision protocol for each, specifying: the named authorised individual, the point at which they review the AI output, the standard of review expected, and how the review is documented.
- Flag any workflows where the AI tool's terms of service or technical design make meaningful supervision difficult (e.g., where the AI acts in real time and there is no practical opportunity for review before the output takes effect).
- Note any areas where the position is genuinely uncertain and where the firm should seek external advice or wait for SRA guidance before proceeding.
Constraints:
- {Add any firm-specific constraints, e.g., "We use AI for bulk disclosure review in large commercial disputes" or "Our AI tool drafts first versions of witness statements for solicitor review" or "We are a small firm and AI use is limited to legal research."}
- Use UK legal terminology throughout (e.g., "disclosure" not "discovery," "particulars of claim" not "complaint").
- Do not provide legal advice. This is an internal audit tool to support the firm's compliance review.
- Where a workflow falls into a grey area, flag the uncertainty rather than resolving it. The point of this exercise is to identify risk, not to give the firm a clean bill of health.
How did we do?
Hit reply and tell me what you would like covered in future issues, or send any other feedback. We read every email!
Thanks for reading,
Serhan, UK Legal AI Brief
Disclaimer
Guidance and news only. Not legal advice. Always use AI tools safely.
Recommended Newsletters
Below are a few newsletters that I recommend, for various reasons. Check them out!