Lawyers’ Misuse of AI Is Incurring Personal and Professional Risk

Should we feel sympathy for solicitors being struck off for their use of AI? With no regulatory guidance, no formal education and a tech industry that is continuously releasing experimental products with known flaws, it is no wonder a legal industry on the front line of AI adoption is floundering. Akber Datoo, CEO of D2 Legal Technology and co-chair of the Technology and Law Committee of the Law Society of England and Wales, explains why, at a time of fundamental industry change, law firms and legal service providers must invest in AI strategies, awareness and education.

Law Firms’ AI Risk

The legal profession today will not be the legal profession in two or three years’ time, let alone 20 years’ time. And while the magic circle has rapidly accelerated its investment in and understanding of AI in the past 12 months, the same cannot be said of the next tier of law firms in the UK. When D2 Legal Technology reached out to 100 law firms to assess their digital foundations, processes, workflows and AI strategies, it spoke to six firms where AI had already caused a significant and problematic incident.

In one case, a busy practitioner had used ChatGPT, on the advice of her son. While she was aware not to input any client-identifiable information into the tool, she failed to verify the citations it produced. Dazzled by the apparent quality and fluency of the output, which in some respects appeared more polished than that of a qualified solicitor, she assumed the citations must be correct.

However, the opposing party checked the citations, identified partial inaccuracies, and referred the matter to the Solicitors Regulation Authority (SRA). As a result, the practitioner had, albeit inadvertently, misled the court and failed to discharge her professional duty of competence and diligence. The matter disrupted court proceedings and adversely impacted client representation.

Endemic Misunderstanding

An error of judgement. An attempt to save time. An incredibly sad mistake, which may well cost the individual her job and professional career. This is just one of many incidents underlining the fundamental misunderstanding surrounding what AI is and where it can be used. Her son (part of the technology-savvy generation), when asked, couldn’t believe she hadn’t checked, because of course AI hallucinates all the time – but he had never actually mentioned that to her in the first place. Knowledge, understanding and experience are absolutely vital for any lawyer using AI professionally.

AI doesn’t “know” anything. It simply predicts, probabilistically, the most likely next word in a stream of text. What does that mean in practice? It means that while the court will understand that a case law citation error made using traditional LexisNexis tools is not the fault of the lawyer, should that same individual make a mistake using the LexisNexis GenAI tool to draft a paragraph, they will face serious ramifications.
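The “predicts the next word” point can be made concrete with a toy sketch. Everything here is invented for illustration – real large language models use neural networks over vocabularies of tens of thousands of tokens, not a hand-written probability table – but the core mechanic is the same: the system knows only how *likely* a next word is, never whether the resulting sentence is *true*.

```python
import random

# Toy next-word probability table, entirely invented for illustration:
# given the previous word, the "model" knows only how likely each next word is.
NEXT_WORD_PROBS = {
    "the": {"court": 0.5, "claimant": 0.3, "citation": 0.2},
    "court": {"held": 0.6, "found": 0.4},
}

def sample_next(word, rng=None):
    """Pick a next word in proportion to its probability -- no notion of truth."""
    rng = rng or random.Random()
    candidates = NEXT_WORD_PROBS[word]
    return rng.choices(list(candidates), weights=list(candidates.values()), k=1)[0]
```

A fluent-sounding but fabricated citation is simply a high-probability word sequence – which is why fluency is no evidence of accuracy.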

But that is not apparent to the tool user. Most in the legal profession now know that AI tools should not be used without checking the findings against traditional legal databases (or ensuring the tool uses techniques such as Retrieval Augmented Generation (RAG) to do so). Inexperienced individuals do not.

Unprecedented Risk

The AI tools being released dazzle, and, coupled with the incessant media coverage of the value they can add to practitioners, there is a real worry of FOMO in the minds of many. Law firms – and individual lawyers – must understand the extent of the risks that could be incurred. It is not only the extraordinary pace of innovation that is challenging: the AI products being released are still at an experimental stage. Hallucinations are increasing, not decreasing, with each new iteration, and lawsuits are pending over the alleged breach of IP rights by OpenAI. In the past, the corporate world would never have invested in products with known flaws. Law firms would not have accepted a product that put the onus on the end user to manage hallucinations. Indeed, vendors would never have released products knowing they faced major IP issues. It would not have been acceptable to release products without testing, guardrails or a true understanding of how they would be used.

Yet this is the current AI situation, and it puts the onus on law firms to be far more proactive about their AI strategies. The result, unfortunately, is a growing gap between the top-end law firms able to invest heavily in dedicated AI technology and education to secure significant commercial advantage, and the rest.

Missed Opportunities

AI should be a tool for levelling up, and for providing wider access to legal services. Instead, the legal industry is in danger of accelerating the move to big firms able to invest in expensive tools and education. The opportunity to build genuine value will be lost unless the industry’s approach to AI changes fundamentally.

Law firms cannot afford to roll out AI tools without the appropriate training and planning. But they also cannot take the blinkered approach of forbidding the use of AI for legal practice work – that simply leads to a grey area between personal use and business use, and to inevitable disciplinary action.

In the UK, the SRA and other regulatory authorities have focused on sandboxes – which only helps vendors sell experimental products. They have approved Garfield Law as the first AI law firm. But they have not given specific guidance regarding expectations and requirements. Instead, the industry is regulating by enforcement and the fear of striking people off. Rather than only placing the burden of accountability on end users and penalising mistakes after the fact, professional bodies, regulators and universities have a vital role to play in improving education and understanding.

Conclusion

When used in the right way and at the right time, AI offers enormous benefits across both the business and practice of law. But jumping into unmanaged, misunderstood use of AI in response to market hype or client pressure creates an enormous risk not only to the firm but to individual lawyers and, critically, to clients.

Upskilling across the board is essential. While regulators and educators fail to address the gap in AI understanding and adoption, the onus remains on law firms to be proactive. If AI is being used, firms need to know how it is being used and where. They need to develop an AI strategy, invest in training and ensure every lawyer understands its benefits and flaws.

Written, of course, with the assistance of ChatGPT…
