By Rory O’Keeffe, Technology and AI Lawyer, Partner at Matheson LLP; Trustee at The Solicitors’ Charity
Artificial Intelligence – or as we all say, AI – is the buzzword of the moment.
The use of AI software has skyrocketed over the past few years, with easy and often free access available to everyone – from individuals to world leaders, and from small businesses to large multinational companies, across all sectors. It has revolutionised the way we collect and impart information on a global scale.
The legal profession is no exception. Generative AI tools, such as ChatGPT, Copilot and Gemini, have become a real game-changer for legal teams, enabling accurate content to be produced at scale while saving both time and money.
AI tools help automate repetitive tasks, such as document review, contract analysis and legal research, allowing legal professionals to focus on more complex and strategic work.
AI-powered legal research tools can sift through vast amounts of legal information, providing lawyers with more relevant data for case preparation. Likewise, AI algorithms review documents with speed and precision, identifying key information and potential issues. This leads to more thorough due diligence processes in mergers and acquisitions, minimising the risk of human error.
The explosion in AI usage by lawyers has increased efficiency and delivered more effective legal services. However, as with any transformative technology, AI in law firms also carries pitfalls, including reliability concerns, data and intellectual property risks, bias, and ethical considerations.
For instance, AI can inadvertently reproduce biases present in historical data, leading to discriminatory outcomes. Rectifying these biases requires regular monitoring, adjustments to algorithms, and the sourcing of synthetic and/or higher-quality data.
AI systems can also fail to understand the nuances of human emotions, cultural context, and complex legal arguments, limiting their effectiveness in certain legal tasks.
The key to the successful and ethical integration of artificial intelligence in law firms is to strike a balance: capitalising on the benefits of AI while maintaining the human touch, with careful oversight and checks of all AI-generated work.
It is paramount that the ethical considerations around the use of AI by legal teams are constantly monitored, which may be challenging because of the need to maintain independence, impartiality and confidentiality.
AI tools are not infallible – reliance on the work produced by generative AI can result in legal inaccuracies, with the fallout leading to a loss of reputation and trust, as well as costly damages, for the law firm involved.
For example, Donald Trump’s former lawyer Michael Cohen admitted to citing fake, AI-generated court cases in a legal document that was presented to a US federal judge last year. Cohen had used Google’s Bard (now Gemini) to perform research after mistaking it for a search engine rather than an AI chatbot.
With questions of liability for AI-generated work and possible accusations of misconduct hanging over the legal profession, adherence to ‘Responsible AI’ and ‘Explainable AI’ principles is even more crucial. Designing, developing and deploying AI with the good intention of empowering employees and businesses, and of impacting customers and society fairly, will allow law firms to engender trust and scale AI with confidence.
AI was high on the agenda yet again at this year’s World Economic Forum in Davos, with the ‘AI for Good’ message urging us to ‘invest with care’ as the technology continues to evolve.
This applies more than ever to law firms embracing the latest developments in AI – the advantages of time and cost savings, improved quality and accuracy, and increased access and inclusion are of huge value. So, unleash AI’s potential incrementally. Each step forward should be deliberate, mindful, and guided by ethical principles.