The King's Speech arrived on Tuesday without the standalone AI bill that many had expected. Instead, the government's approach to AI regulation will sit inside the Regulating for Growth Bill, which creates cross-sector sandbox powers allowing businesses to test AI products under temporarily relaxed rules. Whether that is a pragmatic decision or a missed opportunity depends on your view of how fast the regulatory picture needs to move. Elsewhere this week, the Law Society has called on the MoJ and SRA for clear rules on AI use in court documents, Anthropic made another legal play, and a judge placed the blame for AI-generated fake citations squarely on the law firm rather than the individual solicitor.
The King's Speech: no AI bill, but sandboxes are coming
The King's Speech on 13 May 2026 did not include a dedicated AI bill. This newsletter flagged the possibility in both the 24 April and 8 May issues, and the answer is now clear: the government has opted against standalone AI legislation, at least for this parliamentary session. Instead, AI regulation will be addressed primarily through the Regulating for Growth Bill, a broader piece of economic legislation designed to overhaul how UK regulators operate.
The Bill's headline AI feature is a new cross-sector sandbox power. Ministers will be able to temporarily relax existing regulations so that businesses can test AI-enabled products and services in real-world settings under controlled conditions. The briefing notes refer specifically to "cross-cutting AI sandboxes" that enable "responsible testing and adoption of AI-enabled products and services across multiple sectors where existing regulatory frameworks currently slow innovation." If a sandbox trial proves successful, the Bill allows the government to embed the changes permanently into law through secondary legislation, bypassing the need for a full new Act for each sector.
The Bill also strengthens regulators' duty to promote economic growth, giving ministers a new statutory power to issue "strategic steers" to regulators defining what growth means in different regulatory contexts. For the ICO, CMA, FCA, and other bodies that touch AI, this signals a clear direction: regulation should enable, not obstruct. Whether that framing survives contact with the first serious AI-related consumer harm case remains to be seen.
There is no provision on AI and copyright. The government's decision to drop the proposed copyright opt-out (covered in this newsletter on 20 March) appears to have left the issue without a legislative vehicle. The creative sector had lobbied hard for copyright protections to be included in any AI bill, and their absence from the King's Speech is likely to draw criticism.
The practical effect is that the UK's emerging AI regulatory framework remains a patchwork: the ICO's forthcoming code of practice under SI 2026/425 (which came into force on Monday 12 May, as flagged in this newsletter last week), the Cyber Security and Resilience Bill (carried over from the previous session), the Digital Access to Services Bill (introducing digital ID), and whatever emerges from the Regulating for Growth Bill's sandbox powers. There is no single, comprehensive AI statute on the horizon.
Watch: the Regulating for Growth Bill's progress through Parliament, particularly any amendments that add AI-specific provisions. The copyright question is unresolved and may attract attention at committee stage.
Read: Bird & Bird / Lewis Silkin / GOV.UK
On your radar
Law Society calls on MoJ and SRA for clear rules on AI in court documents: The Law Society of England and Wales published its response to the CJC's consultation on AI use in court document preparation on 13 May, broadly supporting mandatory declarations where AI has been used to generate evidence but warning that disclosure rules alone are insufficient without proper guidance and governance at the firm level. The Law Society wants the SRA to review its code of conduct, HMCTS to introduce clear rules on AI use in court, and the MoJ to coordinate rather than leave individual bodies to produce a patchwork of separate guidance. The intervention follows a year in which AI-generated false citations have moved from isolated embarrassment to systemic risk, with judges responding through wasted-costs orders and SRA referrals. Why it matters for UK lawyers: the CJC consultation closed on 14 April, and interim guidance is expected within the next two quarters. The Law Society's position, that a single judiciary-issued practice note is preferable to fragmented guidance from multiple bodies, is sensible but ambitious. In the meantime, any firm using AI in litigation should already have internal protocols covering verification of citations and disclosure of AI use. If your firm does not, this is the prompt to put them in place. (Law Gazette)
Judge blames law firm, not solicitor, for AI-generated fake citations: In a ruling reported by Legal Futures, HHJ Charman declined to refer solicitor Raphael Newton to the SRA after two fabricated cases appeared in a court filing, holding that the failure was "in substance a failure of management" at Newton's firm, Gordon & Thompson, rather than a failure of the individual solicitor. The fake citations were generated by the built-in research function of the firm's legal software, and the document had been filed in error by administrative staff before Newton had reviewed it. The judge made a wasted-costs order against the firm. Gordon & Thompson has since introduced mandatory verification of all citations by a solicitor before filing, draft labelling to prevent accidental submission, and signature protocols. Why it matters for UK lawyers: this shifts the question of who carries the can when AI goes wrong. The ruling suggests that where a firm's systems allow AI-generated content to reach the court without adequate supervision, it is the firm's management that bears primary responsibility, not the individual fee earner. That has direct implications for professional indemnity, compliance, and how firms design their internal AI workflows. It also reinforces the Law Society's call (above) for clear rules at the firm level, not just individual obligations. (Legal Futures)
Anthropic launches Claude for Legal with 20+ connectors and 12 practice-area plugins: On 12 May, Anthropic released more than 20 integrations with tools law firms already use (including iManage, NetDocuments, Westlaw, and Microsoft 365) and 12 role-specific plugins covering practice areas from M&A due diligence to employment handbook drafting. Thomson Reuters simultaneously announced an MCP integration connecting Claude directly to CoCounsel Legal. The plugins are built with source attribution on every citation, conservative defaults on privilege and subjective legal calls, and explicit gates before anything is filed or relied upon. Anthropic reports that legal professionals have become the most engaged users of its Cowork platform since the initial legal plugin launched in February, and Claude Opus 4.7 scored 90.9% on Harvey's BigLaw Bench. Why it matters for UK lawyers: the Freshfields partnership (covered in this newsletter on 1 May) signalled that Anthropic was serious about legal. This week's release confirms it. The practical question for UK firms is whether the connector architecture (which grounds Claude in live, verified sources rather than generating from memory) addresses the hallucination concerns that have dominated the conversation. The answer will depend on real-world testing, not benchmarks. (Artificial Lawyer / LawNext / Thomson Reuters)
Survey: AI is not reducing lawyers' working hours: A survey of 240 legal professionals by Artificial Lawyer, published on 11 May, found that AI adoption is leading to the same working hours or longer, not the reduction that many expected. The logic is simple enough: AI makes individuals more productive, which means the system as a whole gets busier. More matters can be initiated, more projects handled, and more information processed across every node in the workflow. Efficiency gains at the individual level do not translate into people going home earlier; they translate into more aggregate activity. Why it matters for UK lawyers: this challenges the assumption (explored in this newsletter on 8 May in the context of fee erosion and the Law Society survey) that AI will compress the time, and therefore the cost, of legal work in a way that directly benefits clients. If AI makes lawyers more productive but not less busy, the pressure on hourly billing models may be less acute than some have predicted, though the nature of the work being billed for will change. (Artificial Lawyer)
Carta acquires UK ABS law firm Avantia, launches "Carta Law": Carta, the San Francisco-based private markets platform, has acquired Avantia, a UK alternative business structure law firm that combines human lawyers with proprietary AI to automate routine transactional work for asset managers. The combined entity, Carta Law, will offer AI-powered contracting and compliance reviews, agentic workflows for KYC and NDA playbooks, and attorney review of AI-generated legal recommendations, all within Carta's existing platform. Freshfields advised Carta on the deal. Why it matters for UK lawyers: a platform that already manages fund administration is now offering AI-first legal services as part of the same package. The ABS model makes this possible in England and Wales in a way that it would not be in most US states, and for firms serving the private equity and fund management market, a competitor has just appeared from outside the profession. (Artificial Lawyer / Law.com)
Ad Break
To help cover the running costs of this newsletter, please check out the advert below. In line with my promise from the start, adverts will always be declared and will always be for products I have actually tried, with some brief thoughts from me.
Your prompts are leaving out 80% of what you're thinking.
When you type a prompt, you summarize. When you speak one, you explain. Wispr Flow captures your full reasoning — constraints, edge cases, examples, tone — and turns it into clean, structured text you paste into ChatGPT, Claude, or any AI tool. The difference shows up immediately. More context in, fewer follow-ups out.
89% of messages sent with zero edits. Used by teams at OpenAI, Vercel, and Clay. Try Wispr Flow free — works on Mac, Windows, and iPhone.
For Review
"The Future of Agentic AI" (DRCF Foresight Paper)
The Digital Regulation Cooperation Forum (the CMA, FCA, ICO, and Ofcom acting jointly) published a foresight paper on agentic AI on 31 March, and it has been gaining attention over the past month. The paper establishes a five-level autonomy spectrum for AI agents, from a merely reactive tool through to a truly autonomous actor requiring little human input, and catalogues risks including algorithmic collusion, prompt injection, data minimisation failures, and consumer rights challenges. The four regulators agree that AI agents do not fall outside existing UK regimes, but acknowledge that significant clarity is needed on how current rules apply. Worth reading in full for any firm advising on AI governance, autonomous systems, or commercial contracts involving agentic AI.
Read or listen: DRCF / IAPP
"Even as hallucinations show up in legal filings, Big Law goes all in on AI" (Fortune)
A feature published on 12 May examining the tension at the heart of legal AI adoption: courts are still dealing with fabricated citations, and yet the largest firms are accelerating their investment. The piece covers Anthropic's new legal plugins, the Freshfields partnership, and the industry's approach to managing hallucination risk through architectural constraints rather than post-hoc checking. Worth reading for the way it frames the disconnect between the courtroom experience of AI (where things have gone wrong) and the firm-level strategy (where the bet is getting bigger).
Read or listen: Fortune
"Legal AI's Next Act Is In-House Productivity" (Artificial Lawyer)
An analysis published on 12 May arguing that legal AI is shifting from individual lawyer productivity to team-level and organisational productivity. The piece suggests that the next phase will be measured not by how much faster a single lawyer can draft a contract, but by how AI changes the flow of work across departments, between firms and clients, and within in-house teams. A useful framing for any firm thinking about AI strategy beyond the initial adoption phase.
Read or listen: Artificial Lawyer
Practice Prompt
Try the prompt below to map your firm's position against the emerging UK AI regulatory patchwork following the King's Speech. With no single AI statute on the horizon, firms need to track multiple instruments simultaneously. This prompt helps you identify which obligations apply to your clients and your own practice, and where the gaps are. Fill in the context, constraints, and other placeholders marked with {}. Remember to adhere to the Golden Rules and do not upload confidential or privileged information to public tools.
You are a regulatory mapping assistant. Your task is to help a UK law firm understand where it stands in the current AI regulatory picture following the King's Speech on 13 May 2026, which confirmed there will be no standalone AI bill this session.
Firm details:
- Firm size and type: {e.g., "6-partner high street practice" / "50-lawyer regional firm" / "200-lawyer City firm with an international client base"}
- Practice areas most exposed to AI regulation: {e.g., "commercial contracts, data protection, employment" / "financial services, fintech, fund formation" / "litigation, personal injury, clinical negligence"}
- Current AI tools in use: {e.g., "Microsoft Copilot for drafting, ChatGPT for research" / "CoCounsel for document review, Claude for contract analysis" / "no AI tools yet, considering adoption"}
- Client profile: {e.g., "SMEs in the tech sector" / "private equity funds and portfolio companies" / "individual claimants and small businesses"}
Using the following regulatory instruments as your framework, produce a mapping for this firm:
1. **ICO Code of Practice (SI 2026/425)**
- In force since 12 May 2026. The ICO must now draft a statutory code on AI and personal data.
- Which of this firm's practice areas and client sectors will be directly affected?
- What should the firm be advising clients to do now, before the code is published?
- What should the firm be doing internally regarding its own use of AI tools and personal data?
2. **Regulating for Growth Bill (sandbox powers)**
- Announced in the King's Speech. Creates cross-sector AI sandboxes and strengthens regulators' growth duty.
- Are any of this firm's clients likely candidates for AI sandbox participation?
- What advice would this firm need to give a client entering a sandbox (liability during the trial, data handling, transition to permanent rules)?
- Does the growth duty change how the firm should frame regulatory risk advice to clients?
3. **CJC consultation on AI in court documents**
- Consultation closed 14 April 2026. Interim guidance expected within two quarters.
- Does this firm use AI in the preparation of court documents, witness statements, or skeleton arguments?
- What internal protocols should be in place now to comply with likely future disclosure requirements?
- Has the firm reviewed the Gordon & Thompson ruling (where the judge attributed AI citation failures to firm management rather than the individual solicitor)?
4. **Sector-specific regulation**
- The DRCF (CMA, FCA, ICO, Ofcom) published a foresight paper on agentic AI in March 2026.
- Do any of this firm's clients deploy AI agents in consumer-facing or regulated contexts?
- Which sector regulator(s) are most relevant to this firm's client base?
- Are there existing regulatory obligations (financial services, healthcare, employment) that interact with AI use in ways the firm should be advising on?
5. **Copyright gap**
- No AI copyright provisions in the King's Speech. The opt-out proposal was dropped in March 2026.
- Do any of this firm's clients use AI tools trained on copyrighted material, or produce AI-generated content?
- What is the current risk position on training data liability and AI-generated works?
Produce:
- A regulatory exposure matrix: instrument, relevance to this firm (high/medium/low), key action, deadline or trigger
- The 3 most urgent steps this firm should take in the next 30 days
- A list of gaps where no regulatory clarity exists and the firm should be monitoring developments
- A short paragraph the firm could use in a client update or briefing note summarising the post-King's Speech position
Constraints:
- {Add any firm-specific constraints, e.g., "We have no dedicated compliance function" / "Several clients are in the AI development space" / "We act for a regulator"}
- Apply English law throughout
- Do not invent regulatory instruments or consultation outcomes
- This is a mapping and planning exercise, not legal advice
How did we do?
Hit reply with any feedback or anything you would like covered in future issues. We read every email!
Thanks for reading,
Serhan, UK Legal AI Brief
Disclaimer
Guidance and news only. Not legal advice. Always use AI tools safely.
Recommended Newsletters
Below are a few newsletters I recommend, for various reasons. Check them out!