This week the government laid regulations requiring the Information Commissioner to prepare a code of practice on AI and automated decision-making under the Data Protection Act 2018. SI 2026/425 comes into force on 12 May and will, for the first time, give the ICO a statutory mandate to set out what good practice looks like when organisations process personal data through AI systems. Alongside that, Thomson Reuters has unveiled the next generation of CoCounsel with what it calls "fiduciary-grade" legal AI, and Artificial Lawyer has published an analysis suggesting Claude could absorb up to 40% of in-house legal tech spending within five years. Plenty to unpack.
AI in Practice
New regulations require the ICO to draft an AI code of practice
On 16 April 2026, the Secretary of State made the Data Protection Act 2018 (Code of Practice on Artificial Intelligence and Automated Decision-Making) Regulations 2026 (SI 2026/425). The regulations were laid before Parliament on 21 April and come into force on 12 May 2026. They require the Information Commissioner to prepare a code of practice providing guidance on good practice in the processing of personal data in connection with the development and use of artificial intelligence and automated decision-making.
The scope appears broad. The code must address processing under both the UK GDPR and the Data Protection Act 2018 (excluding Part 4, which covers intelligence services), and it must specifically cover the processing of children's personal data in AI contexts. The regulations also modify the consultation requirements: a panel established by the Commissioner to consider the draft code must not consider or report on any aspect relating to national security, effectively ring-fencing that area from public scrutiny.
This matters because, until now, the ICO's guidance on AI and data protection has been advisory. The existing guidance (published in 2020 and updated periodically) covers lawful bases for processing, fairness, transparency, and data protection impact assessments, but it sits in the ICO's broader guidance framework rather than having a specific statutory mandate. SI 2026/425 changes that. Once the code is prepared, it will carry the weight of a statutory code of practice, meaning that compliance (or non-compliance) with it will be directly relevant in any enforcement action or regulatory investigation.
For law firms, the practical implications run in two directions. First, firms advising clients who develop or deploy AI systems will need to understand the code once it is published, because it will set out the ICO's expectations on matters including automated decision-making under Article 22C of the UK GDPR, data protection impact assessments for AI systems, and the processing of children's data. Second, firms using AI tools in their own practice (for document review, legal research, client communications, or any other purpose involving personal data) will need to satisfy themselves that their use is consistent with the code's requirements.
The timing is worth noting. These regulations land three weeks before the King's Speech on 13 May, where AI-specific legislation may (or may not) feature. If the government does introduce an AI bill, the ICO's code of practice will sit alongside it as part of the emerging UK regulatory framework. If no bill appears, the code becomes even more significant: potentially the most concrete regulatory instrument governing AI and personal data in the UK.
The code itself has not been drafted yet, and the regulations do not set a deadline for the Commissioner to complete it. The ICO will need to establish a consultation panel and go through a public consultation process before the code is finalised. That means the practical impact is not immediate, but the era of purely voluntary AI guidance from the ICO is coming to an end.
Read: GovPing / Simplifi Solutions
On your radar
Thomson Reuters unveils "fiduciary-grade" CoCounsel with citation ledger: Thomson Reuters has announced a next-generation version of CoCounsel Legal, built using Anthropic's Claude Agent SDK, which it describes as "fiduciary-grade" legal AI. The headline feature is a patent-pending citation integrity architecture, including what Thomson Reuters calls a "citation ledger" that ensures the system can only reference sources it has actually retrieved, rather than generating plausible-sounding but fabricated citations. Ragunath Ramanathan, President of Legal Professionals at Thomson Reuters, stated that "a single missed citation can cost a client their case; defensibility isn't a nice-to-have. It's the whole point." The system also includes evaluation frameworks involving licensed attorneys and Practical Law editors, and reasoning verification that assesses the complete chain of logic rather than just the final output. Why it matters for UK lawyers: the phrase "fiduciary-grade" is doing a lot of work here, and it remains to be seen whether the product lives up to the billing once it moves beyond beta. But the underlying approach (architectural constraints that prevent hallucination rather than post-hoc checks) is the right direction for any AI tool used in legal practice. If the citation ledger works as described, it addresses one of the most persistent and dangerous failure modes of legal AI. Worth testing when it becomes available. (Artificial Lawyer)
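For readers curious what a "citation ledger" means in practice, the general idea can be sketched in a few lines: record every source actually retrieved during research, then check any draft output against that record so a citation that was never retrieved is flagged rather than passed through. This is a minimal illustrative sketch of the concept only, not Thomson Reuters' implementation; the class, method names, and `{{cite:...}}` marker format are all invented for the example.

```python
# Illustrative sketch of a "citation ledger" constraint: a draft may only
# cite sources that were actually retrieved during the research session.
# All names and the {{cite:...}} marker format are hypothetical.
import re

class CitationLedger:
    """Records every source retrieved during a research session."""

    def __init__(self):
        self._retrieved = {}  # citation string -> retrieved source text

    def record(self, citation, text):
        """Log a source at the moment it is actually retrieved."""
        self._retrieved[citation] = text

    def verify(self, draft):
        """Return any citations in the draft that were never retrieved."""
        cited = re.findall(r"\{\{cite:(.+?)\}\}", draft)
        return [c for c in cited if c not in self._retrieved]

ledger = CitationLedger()
ledger.record("Donoghue v Stevenson [1932] AC 562", "...retrieved case text...")

draft = ("The duty of care follows {{cite:Donoghue v Stevenson [1932] AC 562}} "
         "and {{cite:Smith v Peters [2019] EWHC 99}}.")
unverified = ledger.verify(draft)
print(unverified)  # the never-retrieved citation is flagged
```

The point of the design is that verification is structural rather than after-the-fact fact-checking: a citation the system never saw cannot pass, however plausible it sounds.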
Claude could absorb 25-40% of in-house legal tech spending: An analysis by Artificial Lawyer, using Claude's own estimates, suggests that Anthropic's AI could capture between 25% and 40% of in-house legal technology spending over the next three to five years, driven largely by its Word add-in and broader tooling. The figure drops to 3-8% for Big Law, where deep vendor relationships, data security concerns, and existing contracts create more resistance. Contract review and drafting tools are identified as the most exposed category. Why it matters for UK lawyers: last week this newsletter covered the launch of Claude for Word and Microsoft Copilot's competing legal features. This analysis extends the question from "is the tool useful?" to "what does it do to the market?" For in-house teams with limited budgets, the economics of a general-purpose AI that handles competent first-pass contract review inside Word may be hard to ignore. For legal tech vendors, the implication is clear: the value proposition needs to move beyond the capabilities that frontier AI models now offer out of the box. (Artificial Lawyer)
London solicitor builds free AI adversary that attacks your own arguments: Larissa Meredith-Flister, an associate in the competition team at Charles Lyndon, has built and released a free AI tool called Opposing Counsel Review. The tool takes a legal argument, draft submission, or witness statement, generates qualifying questions tailored to the case type, and then systematically attacks the reasoning, exposing evidential gaps and modelling how a sceptical judge would respond. As Meredith-Flister puts it: "Most lawyers don't stress-test their arguments. Opposing Counsel forces you to confront your assumptions, your bias, and shows you where your reasoning is fragile." The tool is available as a skill via Lawvable and works on Claude, ChatGPT, and Gemini. Why it matters for UK lawyers: this is a pointed counter-example to the dominant "AI as summariser" narrative. Here, AI functions explicitly as an adversary, closer to red-teaming than to document automation. The pattern of UK lawyers building their own AI tools and releasing them for free, outside the traditional legal tech vendor route, is one worth watching. The confidentiality question (the underlying LLMs sit outside firm infrastructure) remains unresolved, and firms should consider their own data handling policies before using it on live matters. (Legal Futures)
Vos: lawyers will survive AI, but legal education needs a "complete rethink": Sir Geoffrey Vos, Master of the Rolls, delivered a speech at the Association of Law Teachers' conference at Exeter University on 17 April in which he argued that the legal profession will endure but must change fundamentally. Clients, Vos said, "do not come to lawyers for advice any more. They come for confirmation" of what AI has already told them, and they expect to pay less for it. He predicted that routine judicial decision-making will be "informed or directed" by machines within 15 to 20 years, and called for ethics to be "taught through a new lens," with data protection and cybersecurity becoming mandatory subjects. This writer covered the speech in more detail in a separate editorial earlier this week, "Lawyers survive. Law firms might not.", which extends Vos's argument to ask what the shift from advice to confirmation means for firms built on previous fee structures and recruitment ratios. (Law Gazette / Legal Futures / Judiciary)
Ad Break
To help cover the running costs of this newsletter, please check out the advert below. In line with my promise from the start, adverts will always be declared and will only be for products I have actually tried, with some brief thoughts from me.
Do your searches always hit dead ends?
Nearly half of users abandon a search without getting the result they wanted. Instead, they're stuck in a loop of irrelevant results, slow-to-load articles and contradictory advice.
heywa is a whole new way of searching. It presents your results as concise, visual stories, meaning you get answers at a glance.
And if you want to explore your topic further, you can tap through your search journey without having to re-prompt and start again.
For Review
"Lawyers and Legal Education in the Machine Age" (Sir Geoffrey Vos, Judiciary)
The full text of the Master of the Rolls' speech at Exeter, which repays reading in its entirety. Vos sets out his vision for how AI will reshape legal practice, client relationships, and judicial decision-making, and argues for a fundamental overhaul of legal education to prepare the next generation. The speech is notably measured in tone but carries some uncomfortable implications for firms that have not yet started thinking about what the shift from "advice" to "confirmation" means for their business model. County Courts are reporting that pleadings are better, not worse, since litigants in person started using AI, which is a detail worth sitting with.
Read or listen: Judiciary
"AI in the Courtroom: Key Takeaways from Recent Decisions in the Courts of England and Wales" (Greenberg Traurig)
A practical overview of how English courts have responded to AI use in litigation, covering recent judicial statements, the disclosure obligations that arise when AI has been used to prepare evidence or submissions, and the evolving expectations around transparency. Useful for any litigator who wants a concise summary of where the courts currently stand, particularly in light of the ongoing Mazur debate about AI and "conducting litigation" covered in this newsletter two weeks ago.
Read or listen: Greenberg Traurig
"Richard Susskind on AI for Lawyers: A Review of 'How to Think About AI'" (LLRX)
A review of Susskind's latest book, which covers both the risks and benefits of AI for the legal profession in accessible terms. The review highlights Susskind's argument that lawyers should approach AI without hype or fear, and that the profession's future lies in understanding how AI changes the nature of legal work rather than assuming it replaces it. A useful companion piece to Vos's speech, given that both men are making essentially the same argument from different vantage points.
Read or listen: LLRX
Practice Prompt
Try the prompt below to stress-test a legal argument before submission. It is inspired by the Opposing Counsel Review tool covered above, but designed to run in any AI assistant without needing a third-party skill. Fill in the context, constraints, and other placeholders marked with {}. Remember to adhere to the Golden Rules and do not upload confidential or privileged information to public tools.
You are acting as opposing counsel. Your task is to attack the legal argument below as aggressively and precisely as a skilled advocate would, identifying every weakness a judge might seize on.
Jurisdiction: {England and Wales / Scotland / other}
Court or tribunal: {e.g., County Court, High Court (King's Bench Division), Employment Tribunal, First-tier Tribunal}
Type of document: {e.g., skeleton argument, particulars of claim, witness statement, written submissions, grounds of appeal}
Area of law: {e.g., breach of contract, professional negligence, unfair dismissal, possession claim, judicial review}
The argument to attack:
{Paste the relevant section of your draft submission, skeleton argument, or legal reasoning here. Remove client names and any identifying or privileged information before pasting.}
For each point in the argument, do the following:
1. **Identify the weakest link**: what is the most vulnerable step in the reasoning? Is it a factual assertion without supporting evidence, a legal proposition that is overstated or unsupported by authority, a logical leap, or an assumption about what the court will accept?
2. **Draft the opposing submission**: write 2-3 sentences as opposing counsel would actually say them in a skeleton argument or oral submission, attacking that specific point. Use the tone and register of competent English advocacy (firm, precise, measured, no rhetoric for its own sake).
3. **Assess the evidence**: does the argument rely on evidence that has not been exhibited, a witness who has not been called, or a document that has not been disclosed? Flag any evidential gap.
4. **Test the authorities**: if the argument cites a case or statutory provision, consider whether (a) it has been correctly stated, (b) it is binding or merely persuasive, (c) it has been distinguished or overruled, and (d) there is a stronger authority running the other way.
5. **Score the vulnerability**: rate each point as high, medium, or low risk of being successfully attacked at the hearing.
After working through each point, produce:
- A summary table with columns: Point | Core weakness | Vulnerability (high/medium/low) | Suggested fix
- A list of the 3 strongest lines of attack that opposing counsel is most likely to run, in order of strength
- Any procedural or evidential steps the drafting solicitor should consider before the hearing (e.g., obtaining further evidence, amending a statement of case, preparing a fallback position)
Constraints:
- {Add any case-specific constraints, e.g., "The limitation point is our weakest area" or "We do not have expert evidence on this issue" or "The witness is unavailable for cross-examination."}
- Do not invent authorities. If you cannot identify a real case or statutory provision to support an attack, say so and explain the type of authority that opposing counsel would look for.
- Use UK legal terminology throughout (e.g., "claimant" not "plaintiff," "skeleton argument" not "trial brief").
- This is a stress-testing exercise, not legal advice. The output is a tool to help the drafting solicitor identify and address weaknesses before the other side does.
How did we do?
Hit reply with any feedback or to tell me what you would like covered in future issues. We read every email!
Thanks for reading,
Serhan, UK Legal AI Brief
Disclaimer
Guidance and news only. Not legal advice. Always use AI tools safely.
Recommended Newsletters
Below are a few newsletters that I recommend, for various reasons. Check them out!