This is a week where the practical and the regulatory converge. A new survey from the Law Society of London finds that one in six City litigators is using AI to analyse how judges think and are likely to rule, a development that sits somewhere between competitive advantage and ethical minefield. Next week brings two milestones in quick succession: SI 2026/425 (the regulations requiring the ICO to draft an AI code of practice, covered in this newsletter on 24 April) comes into force on Monday 12 May, and the King's Speech follows on Tuesday 13 May, where an AI bill may or may not make the cut.
One in six City litigators using AI to profile judges
The Law Society of London's 2026 litigation trends survey, published on 1 May, found that more than one in six City litigators is now using AI to analyse judicial decision-making patterns, studying how individual judges approach particular types of case, argument, or evidence in order to inform litigation strategy. The survey, which drew responses from 143 senior practitioners (mostly partners, consultants, and managing partners at City firms), also found that nearly a third of respondents believe AI has already driven down client fees.
The judicial profiling finding is an interesting one. At its most benign, this is an extension of something good advocates have always done: reading a judge's previous decisions, understanding their preferences, and tailoring submissions accordingly. AI simply does it faster and at scale, processing years of judgments and identifying patterns that a human researcher might miss. At its most concerning, it raises questions about the boundary between legitimate preparation and something closer to gaming the system, particularly if the AI tools being used are opaque about their methodology or if litigators begin to rely on predictions about judicial behaviour rather than the strength of their legal arguments.
The survey does not identify which AI tools are being used for this purpose, and the practice is not (yet) regulated. There is no rule against researching a judge's track record, and the Judicial College has not, to this author's knowledge, issued guidance on the use of AI analytics in this context. But it feels worth flagging. If judicial profiling becomes widespread, courts may need to consider whether parties should be required to disclose the use of AI analytics in the same way that they are increasingly expected to disclose AI use in the preparation of evidence and submissions.
The fee impact finding is less surprising (and mentioned in this newsletter previously). If AI is compressing the time required for research, drafting, and document review, it follows that clients will expect (and in some cases demand) that the savings are passed on. For firms on hourly rates, this creates a tension that has been building for some time: the tools that make lawyers more efficient also reduce the number of billable hours they can charge for. The survey suggests that, for at least some City clients, that adjustment is already happening.
Read: Law Gazette
On your radar
SI 2026/425 comes into force on Monday, giving the ICO its statutory AI mandate: The Data Protection Act 2018 (Code of Practice on Artificial Intelligence and Automated Decision-Making) Regulations 2026, which this newsletter covered as the lead story on 24 April, come into force on 12 May. From Monday, the Information Commissioner has a statutory obligation to prepare a code of practice on AI and personal data, covering both the UK GDPR and the Data Protection Act 2018, with a specific requirement to address children's data. The code itself has not been drafted yet, and no deadline has been set for completion, but the ICO must now establish a consultation panel and begin the formal process. Why it matters for UK lawyers: the era of purely voluntary ICO guidance on AI is now formally over. Firms advising clients who develop or deploy AI systems involving personal data should be tracking the ICO's consultation timeline closely. The code, once finalised, will carry statutory weight in enforcement proceedings. (GovPing / Reed Smith)
King's Speech on Tuesday 13 May: will an AI bill appear?: The government has signalled that AI-specific legislation may feature in the 2026 King's Speech, scheduled for next Tuesday. If it appears, the bill is expected to focus on requirements for developers of the most powerful AI models, with AI and copyright provisions likely included following the government's decision to drop the proposed copyright opt-out (covered in this newsletter on 20 March). Whether the bill actually makes the cut remains uncertain; the creative sector has been lobbying hard for copyright protections to be included. If it does appear, it would be the first piece of UK-specific AI legislation, shifting the regulatory landscape from principles and guidance to statute. If it does not, SI 2026/425 (above) and the ICO's forthcoming code become even more significant as the primary regulatory instruments. Why it matters for UK lawyers: any AI bill will create new compliance obligations for firms advising on AI development and deployment. Worth monitoring the speech closely on Tuesday. (Slaughter and May / House of Commons Library)
Corgi launches AI liability insurance covering hallucination and bias: Corgi, a Y Combinator-backed insurance startup which recently hit a $1.3 billion valuation, has launched an AI liability insurance product explicitly covering three categories of risk: model performance and hallucination, algorithmic bias, and training data disputes. The product is aimed at both AI developers and businesses that use AI tools, including law firms. One of the example scenarios in the coverage documentation involves a legal-tech AI generating a fictitious case citation in a court filing, the firm being sanctioned, and a subsequent claim against the AI provider. Why it matters for UK lawyers: professional indemnity policies in the UK market have not, as a rule, been tested against AI-specific failure modes. If a firm relies on an AI tool that hallucinates a citation or produces biased advice, the question of where liability falls (the firm, the vendor, or both) remains largely unanswered. Corgi's product does not resolve that question, but it shows that the insurance market is beginning to price AI risk as a distinct category. Firms should be asking their brokers what their existing PI cover says about AI-related claims, and whether supplementary coverage is available or necessary. (Artificial Lawyer / TechCrunch)
AI is eroding the work that trains junior lawyers: Axios reported on 2 May that AI is hollowing out the entry-level work that has traditionally formed the training ground for junior associates at large firms. First-pass document review, contract analysis, and routine legal research, the tasks that junior lawyers cut their teeth on, are increasingly being handled by AI tools that are faster and cheaper. The structural concern is straightforward: junior work has always served two purposes (billing and training), and if the billing rationale disappears, firms will hire fewer juniors, which means fewer lawyers will receive the on-the-job training that produces competent senior practitioners. Some US firms are already responding by allowing first-year associates to count AI training and experimentation toward their billable hours requirement. Why it matters for UK lawyers: this is framed as a Big Law problem, and the data is largely US-focused, but the dynamics apply equally to any UK firm where juniors learn by doing work that AI can now do. Training contracts and the first years of qualification depend on supervised exposure to real matters. If firms reduce junior headcount or reassign that work to AI, they will need to find alternative ways to develop the next generation, or accept a long-term skills gap. This is not a hypothetical: it is already shaping hiring decisions. The Master of the Rolls made a related point in his Exeter speech last month (covered in this newsletter on 24 April), calling for a "complete rethink" of legal education in the age of AI. (Axios)
Ad Break
To help cover the running costs of this newsletter, please check out the advert below. In line with my promise from the start, adverts will always be declared and will only be for products I have actually tried, with some brief thoughts from me.
Write docs 4x faster. Without hating every second.
Nobody became a developer to write documentation. But the docs still need to get written — PRDs, README updates, architecture decisions, onboarding guides.
Wispr Flow lets you talk through it instead. Speak naturally about what the code does, how it works, and why you built it that way. Flow formats everything into clean, professional text you can paste into Notion, Confluence, or GitHub.
Used by engineering teams at OpenAI, Vercel, and Clay. 89% of messages sent with zero edits. Works system-wide on Mac, Windows, and iPhone.
For Review
"Mike, the Open Source Legal AI Platform" (Artificial Lawyer)
An interview with Will Chen, creator of Mike, an open-source legal AI platform designed as a secure and affordable alternative to commercial legal AI tools. Mike is aimed at small and medium-sized law firms and allows users to run the software locally, giving them full control over their data. The platform handles contract review, legal research, and document analysis. For any firm that has been deterred from AI adoption by cost or data sovereignty concerns, this is worth examining, with the obvious caveat that open-source tools require technical competence to deploy and maintain, and do not come with the vendor support or regulatory assurances that commercial products offer. The broader trend of lawyers building their own tools (covered in this newsletter two weeks ago with the Opposing Counsel Review skill) continues to gather pace.
Read or listen: Artificial Lawyer
"Everlaw + Legora Partner for Litigation Workflows" (Artificial Lawyer)
Everlaw, the cloud-based litigation platform, has partnered with Legora to integrate legal research, drafting, and case analysis into a single workflow. The integration connects e-discovery with legal research and document generation, allowing litigators to move from evidence identification to argument construction without switching platforms. The practical value depends on whether your firm uses Everlaw (it has a modest but growing UK presence), but the underlying trend, litigation tools converging into integrated platforms rather than standalone point solutions, is relevant regardless of vendor.
Read or listen: Artificial Lawyer
UK law firms lead the world in AI adoption (LEAP Legal Software)
The Profitability in Law: Global Report 2026, commissioned by LEAP Legal Software and surveying 700 legal professionals across six countries, found that UK law firms are pulling ahead of their international peers on AI adoption. Two-thirds of UK respondents rated their firm's AI training and expertise as good or excellent, the strongest result globally and ahead of the US, Canada, Australia, and New Zealand. The report also notes a structural shift in billing: fixed or flat fees now account for 53% of matters, while hourly billing has fallen to 32%, a change which the report attributes in part to AI compressing the time that once justified hourly rates. Worth reading for the benchmarking data, though the sample may skew toward firms that are already engaged with legal technology.
Read or listen: Legal Futures
Practice Prompt
Try the prompt below to prepare for a hearing before a specific judge. It ties into this week's lead story on AI-assisted judicial research, but does it the right way: using publicly available judgments to understand a judge's approach, rather than relying on opaque predictive analytics. Fill in the context, constraints, and other placeholders marked with {}. Remember to adhere to the Golden Rules and do not upload confidential or privileged information to public tools.
You are a litigation preparation assistant. Your task is to help me prepare for a hearing by analysing a judge's publicly available decisions to understand their approach, preferences, and typical expectations.
Judge: {name and title, e.g., "HHJ Smith, sitting in the County Court at Manchester" or "Mrs Justice Jones, King's Bench Division"}
Court: {e.g., County Court, High Court (King's Bench Division), Employment Tribunal}
Type of hearing: {e.g., summary judgment application, strike-out, costs assessment, case management conference, trial}
Area of law: {e.g., breach of contract, personal injury, professional negligence, possession, insolvency}
Using the judge's publicly available decisions on BAILII, the National Archives, and other open legal databases:
1. **Procedural preferences and expectations**:
- How does this judge typically run hearings of this type? Do they indicate preferences for skeleton arguments, bundles, time estimates, or oral submissions?
- Are there patterns in how they manage case timetables, adjourn matters, or deal with non-compliance?
- Do their judgments suggest they prefer concise submissions or detailed written argument?
2. **Approach to the substantive area**:
- In decisions involving {area of law}, what legal tests or authorities does this judge most frequently cite or rely upon?
- Do they tend to follow a particular line of authority, or have they expressed views that depart from the mainstream?
- Are there recurring themes in how they approach evidence, witness credibility, or expert testimony in this area?
3. **Advocacy style observations**:
- Based on comments in judgments, what does this judge appear to value in advocacy? (e.g., brevity, thorough preparation, realistic time estimates, early concessions on weak points)
- Are there criticisms of counsel or litigants in person that suggest particular frustrations or expectations?
4. **Practical warnings**:
- Has this judge made adverse costs orders, struck out statements of case, or imposed sanctions in circumstances relevant to my hearing type?
- Are there any procedural traps (e.g., strict compliance with directions, specific bundle requirements) that emerge from the decisions?
Produce:
- A summary of the judge's likely expectations for this type of hearing (maximum one page)
- The 3 most important things to get right, based on the judge's track record
- Any authorities this judge has cited repeatedly that I should be prepared to address
- A list of things to avoid, based on criticisms or adverse outcomes in previous cases
Constraints:
- {Add any case-specific constraints, e.g., "We are the applicant on a summary judgment application" or "The judge was recently assigned to this list and may have limited experience in this area"}
- Only use publicly available decisions. Do not fabricate case citations or judicial comments.
- Present findings as preparation notes, not predictions. The purpose is to be well-prepared, not to game the outcome.
- This is a research and preparation tool, not legal advice. All findings should be verified against the original judgments before being relied upon.
How did we do?
Hit reply and tell me what you would like covered in future issues, or share any other feedback. We read every email!
Thanks for reading,
Serhan, UK Legal AI Brief
Disclaimer
Guidance and news only. Not legal advice. Always use AI tools safely.