This week Freshfields announced a multi-year partnership with Anthropic to co-build AI legal workflows, deploying Claude across all 33 of its offices and making it available to 5,700 employees. It is, by some margin, the most significant AI vendor commitment by a Magic Circle firm to date. Elsewhere, the Chancellor of the High Court warned that AI use can destroy legal professional privilege, and a new report found that most UK firms remain stuck in what it calls "automation purgatory" despite near-universal AI adoption.
AI in Practice
Freshfields and Anthropic announce multi-year AI co-development partnership
On 23 April, Freshfields Bruckhaus Deringer and Anthropic announced a multi-year agreement to co-build AI legal workflows, moving well beyond the standard enterprise licensing arrangements that have characterised most law firm AI deployments to date. Under the deal, Anthropic's Claude suite of products (including its agentic AI platform, Cowork) will be deployed across all 33 Freshfields offices, every practice group, and all business services functions, with 5,700 employees given access.
The early adoption numbers are striking. Within the first six weeks, usage of Claude increased by approximately 500%, according to the firm. Freshfields and Anthropic have also established what they describe as a co-development programme to build legal-focused AI applications and design agentic workflows. Over the next 12 months, the two organisations' legal teams plan to collaborate on new AI workflows using Anthropic's latest capabilities. Freshfields will also receive early access to future Anthropic models.
The commercial implications are important here. This is not a firm licensing a tool and seeing what happens. It is a Magic Circle firm embedding itself into the development cycle of a frontier AI company, contributing legal domain expertise in exchange for preferential access and, presumably, influence over product direction. That is a different relationship from buying a seat licence. It remains to be seen whether other large firms will feel pressure to secure similar arrangements with competing providers, or risk falling behind.
The market noticed. RELX, parent company of LexisNexis, saw its share price dip following the announcement. The view from the financial press was that frontier AI companies may increasingly compete directly with established legal technology vendors, rather than simply powering their products from behind the scenes (Investing.com).
For firms outside the Magic Circle, the practical question is whether this kind of partnership model (co-development, early access, embedded collaboration) will remain the preserve of the largest firms, or whether it signals a broader shift in how AI vendors engage with the legal sector. As a partner at a smaller firm, this writer is keen to see whether it has any impact at the smaller end of the market. The answer will depend partly on whether Anthropic treats Freshfields as a showcase client or as one node in a wider legal strategy.
Read: Freshfields / Artificial Lawyer
On your radar
Chancellor warns AI use can destroy legal professional privilege: Sir Colin Birss, Chancellor of the High Court, used a keynote speech at the City of London Law Society on 22 April to warn that AI use across the legal sector carries a real risk of undermining legal professional privilege. The Chancellor made clear that AI does not change the legal test for privilege (legal advice privilege and litigation privilege remain grounded in confidentiality and dominant legal purpose), but stressed that where confidential or legally sensitive material is input into externally hosted or consumer-grade AI systems, privilege may be lost if confidentiality cannot be guaranteed. The speech follows the Upper Tribunal's observations in Munir v SSHD [2026] UKUT 81, which held that uploading confidential documents into "open source AI tools such as ChatGPT" places information "in the public domain" and waives legal privilege. Why it matters for UK lawyers: this is as close to a judicial instruction as practitioners are likely to get before formal rules arrive. Any firm that has not yet drawn a clear line between enterprise-grade AI tools (where data stays within the firm's environment) and consumer-grade tools (where it does not) should treat this as the prompt to do so. The distinction is no longer theoretical. (Judiciary / Local Government Lawyer)
EU AI Act Omnibus trilogue collapses after 12-hour session: the 28 April trilogue on the EU AI Act Omnibus ended without political agreement after 12 hours of negotiation in Strasbourg. The core sticking point was the conformity-assessment architecture for AI in regulated products under Annex I, specifically how the AI Act interacts with existing sectoral safety legislation (the Machinery Regulation, Medical Devices Regulation, and others). Other elements had broadly converged, including postponed deadlines and a prohibition on non-consensual intimate AI-generated imagery. A follow-up trilogue is expected around 13 May. Without an Omnibus passing, the original 2 August 2026 high-risk applicability date legally stands. Why it matters for UK lawyers: any firm advising clients who supply AI-enabled products or services into the EU needs to track this closely. The August deadline is now three months away with no agreed deferral in place. Firms should be advising affected clients to prepare for the original timeline unless and until the Omnibus formally extends it. (IAPP / The Next Web)
Anthropic's Project Deal shows AI agents can negotiate and close real transactions: Anthropic published results from Project Deal, an internal experiment in which AI agents (running on Claude) autonomously negotiated and completed 186 deals worth over $4,000 in a classified marketplace for Anthropic's San Francisco employees. The agents identified potential matches, proposed prices, handled counteroffers, and reached agreement in natural language, without a pre-built negotiation protocol. A performance gap emerged between models: Opus agents completed more deals, extracted $2.68 more per sale on average, and paid $2.45 less per purchase than Haiku agents. Why it matters for UK lawyers: the immediate practical impact is nil (nobody is deploying AI agents to negotiate client transactions today), but the experiment surfaces an important question about what legal frameworks govern AI agents transacting on a human's behalf. Agency law, contract formation, consumer protection, and liability allocation all need answers that do not yet exist. Worth reading for anyone advising on AI governance or commercial contracts involving autonomous systems. (Artificial Lawyer / Anthropic)
Soren launches "private AI" for law firms and regulated industries: Soren, a Y Combinator-backed startup, has launched a service that deploys AI models directly onto a firm's own infrastructure, fine-tuned on the firm's data and governed according to its compliance constraints. The target market is small to mid-sized institutions, including boutique law firms and community banks, that lack the resources to build their own AI stack but cannot accept the data handling risks of sending sensitive information to third-party providers. Why it matters for UK lawyers: data sovereignty and client confidentiality remain the most commonly cited barriers to AI adoption in smaller firms. A product that credibly solves the "where does my data go?" question without requiring an in-house engineering team could shift the conversation for firms that have so far stayed on the sidelines. It is early days, and the product will need to be tested against those claims, but the positioning is sound. (Artificial Lawyer)
82% of UK lawyers use or plan to use AI, but most firms stuck in "automation purgatory": OneAdvanced's Legal Trends Report 2026, published on 28 April, found that four out of five lawyers in the UK are currently using or planning to use generative AI, and nearly half believe they are ahead of their competitors. The reality is less flattering: more than half of firms surveyed report that less than 25% of their work is actually AI-enabled, and nearly two-thirds describe themselves as stuck in what the report calls "automation purgatory" (having made some progress but failing to achieve meaningful transformation). The widening digital skills gap was identified as the most critical barrier. Why it matters for UK lawyers: the gap between AI adoption (installing the tool) and AI integration (changing how work is done) is the defining challenge for most firms right now. This report puts numbers on what many practitioners already suspect: that having access to AI and actually using it to transform practice are very different things. (Intelligent CXO)
Ad Break
To help cover the running costs of this newsletter, please check out the advert below. In line with my promises from the start, adverts will always be declared, and will always be actual products I have tried, with some brief thoughts from me.
Wispr continues to be this writer's preferred transcription tool, thanks to its accuracy and generous free limits. I am grateful for their continued support of the newsletter.
You think 4x faster than you type. Your IDE should keep up.
Wispr Flow lets you dictate prompts, acceptance criteria, and bug reproductions inside Cursor or Warp — with automatic file name and variable recognition. Say user_id, get user_id. Say useEffect, get useEffect.
Paste directly into GitHub, Jira, or Linear. Give coding agents the full context they need without typing a novel.
89% of messages sent with zero edits. Millions of developers use Flow daily, including teams at OpenAI, Vercel, and Clay. Free on Mac, Windows, and iPhone.
For Review
"Legal professional privilege in the Age of AI" (Sir Colin Birss, Judiciary) The full text of the Chancellor of the High Court's keynote speech at the City of London Law Society on 22 April. Birss works through the established tests for legal advice privilege and litigation privilege, then maps them onto AI use scenarios with a clarity that will be useful for anyone drafting or updating a firm AI policy. The key passage, on the distinction between enterprise-grade and consumer-grade tools, is the most authoritative judicial statement on the topic to date. Read alongside the Norton Rose commentary on the Upper Tribunal's observations in Munir v SSHD [2026] UKUT 81, which held that uploading to ChatGPT waives privilege. Read or listen: Judiciary / Norton Rose Fulbright
"Anthropic's AI agent-to-agent marketplace experiment: The legal frameworks don't exist" (Legal IT Insider) A detailed analysis of what Anthropic's Project Deal experiment means for legal practice and regulatory frameworks. The piece goes beyond the headline results to examine the gaps in existing law: how do you form a binding contract when both parties are represented by AI agents? What happens when an agent exceeds its principal's instructions? Who bears liability for a bad deal? Given the pace at which agentic AI is developing, these questions will not remain hypothetical for long. Read or listen: Legal IT Insider
OneAdvanced Legal Trends Report 2026: "From AI Ambition to Operational Reality" (Legal Support Network) The full report behind the "automation purgatory" finding. Worth reading beyond the headline statistics for its analysis of why AI integration is stalling in practice, with data on skills gaps, training deficiencies, and the disconnect between what leadership thinks is happening and what fee earners actually experience. Practical for any firm benchmarking its own AI maturity against the broader market. Read or listen: Legal Support Network
Practice Prompt
Try the prompt below to audit your firm's AI tool usage for risks to legal professional privilege, applying the principles from the Chancellor's speech and the Upper Tribunal's observations in Munir. Make sure you fill in the context, constraints, and other aspects marked with {}. Remember to adhere to the Golden Rules and do not upload confidential or privileged information to public tools.
You are a legal technology compliance advisor. Audit the AI tool usage described below for risks to legal professional privilege and client confidentiality, applying English law principles and the recent guidance from the Chancellor of the High Court (22 April 2026) and the Upper Tribunal's observations in Munir v SSHD [2026] UKUT 81.
Firm details:
- Firm size and type: {e.g., 12-partner regional practice / sole practitioner / 150-lawyer City firm}
- Practice areas: {e.g., commercial litigation, family, conveyancing, corporate}
- AI tools currently in use: {list every AI tool used by fee earners and support staff, e.g., "ChatGPT free tier for research, Microsoft Copilot for drafting, Claude Pro for document review"}
- Current data handling approach: {e.g., "no formal AI policy", "staff told not to upload client documents", "AI use policy exists but is not actively enforced"}
For each AI tool listed:
1. Classify the privilege risk (high / medium / low):
- Is the tool consumer-grade or enterprise-grade?
- Does data leave the firm's environment or get used for model training?
- Could use of the tool constitute placing information "in the public domain" per Munir?
2. Identify current exposure:
- What types of information are fee earners likely inputting? (client names, case summaries, draft advice, witness evidence, financial data)
- Are there workflows where privileged material is routinely being entered?
3. Recommend a classification for each tool:
- Approved for all use / approved with restrictions / restricted to non-privileged work only / prohibited pending review
- Specific controls needed (e.g., anonymisation before input, enterprise agreement required, staff training, audit trail)
4. Identify policy gaps:
- Where does the firm's current approach fall short of the Chancellor's guidance and the Munir observations?
- What specific provisions should the AI use policy include to address privilege?
Produce:
- A risk matrix: tool, risk level, recommended classification, key controls
- The 3 most urgent actions the firm should take
- A draft paragraph for the firm's AI use policy addressing privilege specifically
Constraints:
- {Add firm-specific constraints, e.g., "Some staff use personal ChatGPT accounts on their phones", "We handle legally aided family work with vulnerable clients", "We act for regulated financial services clients"}
- Apply English law principles of legal professional privilege throughout
- Do not invent case law
- This is a compliance assessment tool, not legal advice
How did we do?
Hit reply and tell me what you would like covered in future issues, or share any feedback. We read every email!
Thanks for reading,
Serhan, UK Legal AI Brief
Disclaimer
Guidance and news only. Not legal advice. Always use AI tools safely.
Recommended Newsletters
Below are a few newsletters that I recommend, for various reasons. Check them out!