The ICO made headlines recently by opening a formal investigation into Grok over AI-generated harmful imagery, one of the most significant UK enforcement actions targeting a generative AI system to date. Elsewhere, the government's copyright and AI report deadline is now just weeks away, and DeepJudge showed what it looks like when a legal search tool plugs directly into Claude.

ICO investigates Grok: what the first major UK enforcement action on generative AI means for lawyers

On 3 February, the Information Commissioner's Office announced formal investigations into X Internet Unlimited Company and X.AI LLC over Grok's handling of personal data. The concern is that Grok has been used to generate non-consensual sexual imagery of individuals, including children. ICO Executive Director William Malcolm said the reports raise "deeply troubling questions about how people's personal data has been used to generate intimate or sexualised images without their knowledge or consent." (ICO)

This is significant for UK lawyers beyond the specifics of the case. The ICO is investigating whether personal data was processed lawfully, fairly and transparently, and whether appropriate safeguards were built into Grok's design and deployment. The enforcement powers available are substantial: fines of up to £17.5 million or 4% of annual worldwide turnover, whichever is higher, under the UK GDPR and Data Protection Act 2018, alongside information, assessment and enforcement notices. The ICO has said it will coordinate with Ofcom and international regulators.

For firms advising on AI, this case is likely to set expectations around what "privacy by design" means for generative models. For firms using AI tools internally, it is a reminder that the data protection obligations do not disappear because the tool is third-party. If a generative AI system your firm uses processes personal data in ways that are not transparent or lawful, the downstream regulatory risk sits with you as well as the developer.

Takeaways

This is a good moment to review your firm's AI tool register and confirm that data processing agreements are in place and adequate.

  • Act: Check whether your firm maintains an up-to-date register of AI tools in use, including what personal data each tool processes and on what legal basis. If you do not have one, start one.

  • Watch: The outcome of this investigation will shape how the ICO approaches generative AI enforcement more broadly. Expect further guidance on safeguards and design obligations.

  • Risk: Firms using third-party AI tools remain responsible for ensuring lawful data processing. If a tool you rely on is found to have inadequate safeguards, you may need to reconsider its use quickly.

On your radar

  • Government AI copyright report deadline is 18 March: Under section 137 of the Data (Use and Access) Act 2025, the government must publish an economic impact assessment and report on the use of copyright works in AI training by 18 March 2026. The consultation received over 11,500 responses, with 88% favouring mandatory licensing. Four policy options remain on the table, and the Copyright Licensing Agency is developing a gen-AI training licence expected in Q3 2026. Why it matters for UK lawyers: the outcome will directly affect IP advice, AI procurement terms, and contractual protections around training data. If you advise on IP, licensing, or technology contracts, this is one to diarise. (GOV.UK)

  • DeepJudge plugs into Claude Cowork via MCP: Following last week's lead story on Anthropic's legal plugins, legal search provider DeepJudge has demonstrated an MCP (Model Context Protocol) integration that connects Claude to a firm's own prior matters and work product. The result: Claude can reason over firm-specific data without that data leaving the firm's environment. CTO Yannic Kilcher argues that "LLM wrappers are rapidly becoming table stakes" and that the real differentiator is proprietary legal data and institutional knowledge. Why it matters for UK lawyers: this is an early example of how the Anthropic plugin ecosystem may evolve. Firms thinking about AI integration should consider whether their knowledge management systems are structured to support these kinds of connections. (Artificial Lawyer)

  • Law Society tells government “clarity, not deregulation”: In its response to DSIT's AI Growth Lab consultation, the Law Society said existing professional regulations support AI innovation and the main challenges come from uncertainty, cost and skills, not regulatory burden. CEO Ian Jeffery emphasised that neither legal professional privilege nor client confidentiality should be curtailed. Why it matters for UK lawyers: the Law Society's position is that the SRA framework is already fit for purpose. The call is for practical guidance on how existing rules apply to AI, not for a new regime. (Law Society)

  • FCA Mills Review: deadline 24 February: A reminder that the FCA's call for input on the Mills Review, examining the long-term impact of AI on retail financial services, closes on 24 February. The review looks to 2030 and beyond, covering agentic AI, consumer behaviour, market structure and how regulators may need to adapt. The FCA Board expects to receive recommendations in the summer. Why it matters for UK lawyers: firms advising financial services clients on AI strategy and governance should be aware of this review. The findings will likely inform FCA expectations around AI use in regulated firms. (FCA)

Ad Break

To help cover the running costs of this newsletter, please check out the advert below. As promised from the start, adverts will always be declared and will always be for actual products I have tried, with some brief thoughts from me.

Unlock ChatGPT’s Full Power at Work

ChatGPT is transforming productivity, but most teams miss its true potential. Subscribe to Mindstream for free and access 5 expert-built resources packed with prompts, workflows, and practical strategies for 2025.

Whether you're crafting content, managing projects, or automating work, this kit helps you save time and get better results every week.

For Review

AI regulation in the UK: the role of the regulators (Bird & Bird)

A thorough sector-by-sector analysis of how each UK regulator is approaching AI. Covers the CMA (the most active, with an 80-person data and tech unit), the ICO (updated automated decision-making guidance and agentic AI warnings), the FCA ("supercharged sandbox" with NVIDIA computing access), Ofcom (AI chatbot regulation under the Online Safety Act), the MHRA, EHRC, and several others. Very useful as a reference document for anyone advising on AI compliance across regulated sectors, or for any firm putting together an internal AI governance framework.

AI update for 2026: horizon scanning (Slaughter and May)

Slaughter and May's annual horizon scanning piece covers the key AI legal issues for 2026: the Getty Images v Stability AI appeal, the UK government's March copyright reports, ICO enforcement trends, algorithmic pricing risks, and liability questions around AI hallucinations and "AI washing." A useful overview if you want a single document that maps the year ahead.

Practice Prompt

This week's lead story is a reminder that firms using AI tools need to understand what personal data those tools process and on what basis. Try the prompt below as a starting point for drafting an AI tool register. Fill in the fields marked with {}. Remember to adhere to the Golden Rules and do not upload confidential or privileged information to public tools.

You are a UK data protection adviser. I am a solicitor at a {size of firm, e.g. 5-partner regional} law firm regulated by the SRA. We use the following AI tools in our practice:

{List each tool, e.g.:
- ChatGPT (OpenAI) — used by fee earners for drafting and research
- Microsoft Copilot — integrated into our M365 environment
- [Legal tech tool] — used for document review}

For each tool listed, produce a table with the following columns:

1. Tool name and provider
2. What personal data it may process (consider client names, case details, correspondence content, metadata)
3. Likely lawful basis under UK GDPR (e.g. legitimate interests, consent, contractual necessity)
4. Whether data leaves the UK or EEA, and if so, what transfer mechanism may apply
5. Whether a Data Processing Agreement is likely required
6. Key risks to flag (e.g. model training on inputs, lack of audit trail, no deletion mechanism)
7. Recommended next steps for the firm

Assume we handle {practice areas, e.g. civil litigation, family, conveyancing} and that client data includes special category data in some matters.

Be specific and practical. Flag where you are uncertain and where the firm should seek further advice or check the provider's terms directly.

How did we do?

Hit reply and tell me what you would like covered in future issues, or share any other feedback. We read every email!

Thanks for reading,

Serhan, UK Legal AI Brief

Disclaimer

Guidance and news only. Not legal advice. Always use AI tools safely.

Recommended Newsletters

Below are a few newsletters that I recommend, for various reasons. Check them out and help support this newsletter!

Staying Ahead with AI

Step by step on how to use the latest in AI and how it ranks against what you're already using!

There's An AI For That

The #1 AI newsletter. Read and trusted by over 2.4 million readers, including employees at Google, Microsoft, Meta, Salesforce, Intel, Samsung, Zoom, Wix, HubSpot, Nebius, Suno, Zapier, as well as ...

Superhuman AI

Keep up with the latest AI news, trends, and tools in just 3 minutes a day. Join 1,000,000+ professionals.
