Our lead story is a county court judgment that treats AI-driven fake authorities as a failure of firm systems rather than a lone solicitor’s mistake — and puts wasted costs orders firmly on the table.
Alongside that, consumer regulators and law firms are sharpening their stance on chatbot accuracy, workforce design and AI training expectations.
AI in Practice
Management on the hook for AI-generated fake cases
A Birmingham County Court decision has clarified where responsibility lies when AI produces bogus authorities (see here).
Delivering judgment at Birmingham County Court in Ndaryiyumvire v Birmingham City University, His Honour Judge Charman dealt with an application where two non-existent cases were generated by the research function of a firm’s legal software and were included in Particulars of Claim. The judge made a wasted costs order against the firm, but declined to make a separate referral of the individual solicitor to the SRA, describing the problem as “in substance a failure of management” and noting that the wasted costs order would in any event be reported to the regulator.
The judgment stresses that the use of AI or “built-in” tools is not an excuse for filing unverified authorities. The court reminded practitioners that where false cases are put before the court, with or without AI, referral to the regulator or even the police will usually be appropriate, echoing earlier High Court warnings about AI misuse in litigation, see the report of the High Court’s views here.
For firms, the practical message is that AI risk sits squarely with the organisation. Administrative staff and software may generate drafts, but a supervising solicitor must still verify citations before filing. Firms will need to show that they have thought through how AI enters the workflow, trained staff accordingly and built in controls that are realistic under time pressure.
This is likely to accelerate a shift from informal “try Copilot and see” to formalised litigation protocols. There must be clear rules about which tools may be used for research or drafting, mandatory checking of references, better labelling of drafts, and audit trails that show a human has reviewed AI output before it reaches the court.
This continued emphasis on ownership of work and personal responsibility echoes the messages given since the Mazur decision.
Takeaways
Act: Map where AI or automated research features are used in your litigation workflows, including within case management or document systems. Put in place a simple rule that all citations must be checked against primary sources before filing, and record that check.
Watch: Monitor further wasted costs orders or SRA activity around AI-related mistakes, and check whether your legal software provider is changing how its “AI research” features present authorities or warnings.
Risk: The court has signalled that AI errors can trigger regulatory scrutiny and wasted costs at firm level. Weak supervision, unclear roles for support staff, and over-reliance on automated research will increase exposure to regulatory intervention.
On your radar
Which? warns on bad chatbot money tips: A Which? study found that general-purpose chatbots including Copilot, ChatGPT, Gemini and others gave inaccurate or misleading guidance on ISAs, tax refunds, contract remedies and travel insurance, with the FCA stressing that such tools are not regulated financial advisers (Source).
Why it matters for UK lawyers: although focused on financial advice, this is a clear template for how regulators and consumer groups may view AI-driven legal information. It reinforces the need for robust disclaimers, user education and controls if clients might see AI-generated content.
Irwin Mitchell to scrap litigation assistant roles: Irwin Mitchell is consulting on removing its litigation assistant role across UK offices, affecting around 56 staff, with sources linking the move to increased use of AI and a new case management platform, alongside wider changes in the litigation support model (source).
Why it matters for UK lawyers: regardless of the precise mix of drivers, this is a concrete example of AI and process redesign reshaping junior and support work, raising questions about role responsibilities and re-skilling existing staff.
Has OpenAI “banned” legal advice? Not quite: After social media claims that ChatGPT would no longer provide legal or health advice, OpenAI confirmed that its policies are unchanged. OpenAI stresses that users must not rely on ChatGPT for licensed advice without appropriate involvement of a qualified professional, but the system will still help users understand legal information. (source 1 and 2)
Why it matters for UK lawyers: firms using general-purpose models should align their internal policies and client-facing wording with these terms, reinforcing that AI tools are assistive only and that a regulated lawyer remains responsible for all advice and conduct of a case.
For Review
The AI culture clash in UK law (LexisNexis / Artificial Lawyer)
LexisNexis surveyed over 700 UK lawyers and found that while AI adoption is racing ahead, integration into strategy, processes and training lags significantly. This is useful material for partners to benchmark where their organisation really sits on the curve and to frame practical questions about governance and training rather than chasing features.
Read on: Artificial Lawyer summary, LexisNexis survey
Law schools add AI to the curriculum as Harvey expands (Non-Billable)
AI tool Harvey is rolling out its law school programme to leading UK institutions including Oxford, King’s College London, BPP and the University of Law. King’s is launching an AI literacy programme giving all students access to multiple legal AI tools plus a structured 12-week course. For UK firms, this means future trainees may arrive with hands-on AI experience and higher expectations of working with tools like Harvey, Luminance and others. Firms may want to align their own training and supervision with what is now being taught in universities.
Read on: Non-Billable
Generative AI – the essentials (Law Society)
The Law Society’s “Generative AI – the essentials” guidance summarises key risks and opportunities, including data protection, cyber security, ethical principles and questions to ask vendors about training data, bias and governance. This is a useful baseline document for partners, COLPs and in-house teams when refreshing AI policies or evaluating new tools.
Read on: Law Society guidance
Practice Prompt
Given the stories this week, today’s focus is on reviewing where AI is used now in your workflows.
Pick one live or recent piece of drafted advocacy or advice where AI tools were used at any stage (research, drafting or editing). In under 30 minutes:
Map exactly where AI was involved.
Check a sample of citations or legal statements against primary sources.
Note whether your team followed a clear internal checklist or just relied on individual judgment.
Use what you find to draft or tighten a 1-page “AI use in contentious documents” note for your team that covers when AI can be used, what must always be checked, and who signs off the final product, ensuring this is accessible to all staff and any regulatory audit.
How did we do?
Hit reply and tell me what you would like covered in future issues or any feedback. We read every email!
UK Legal AI Brief
Disclaimer
Guidance and news only. Not legal advice. Test outputs and apply professional judgment.
