Some 26% of legal professionals are now using generative AI in law firms more than once a month, according to a new study from LexisNexis – up from 11% the previous year. The study, which surveyed more than 1,200 legal professionals across the UK, also found that general awareness of the technology had grown in the sector since last year, as had concerns about the potential of generative AI applications to hallucinate responses or leak confidential data.
“We’ve also seen a noticeable increase in awareness of the many ways in which lawyers can make use of generative AI technology, which ranges from the achievable to the ambitious,” said LexisNexis’ senior director for segment management, Stuart Greenhill. “Demand is growing in the legal sector for generative AI tools that are grounded in legal sources, and can provide a higher level of transparency for all responses generated.”
Use of AI in law firms growing
Legal professionals were more likely to use generative AI tools on a monthly basis if they worked at larger law firms (32%) or academic institutions (33%). Over a third of respondents also said that they were planning to use such applications in the future, a figure that rose to 42% among in-house lawyers. The proportion with no plans whatsoever to embrace the technology in some form in their workplace also dropped, from 61% in July 2023 to 39% in 2024.
Generative AI presents a novel opportunity to automate many of the repetitive tasks in a legal practice that nonetheless require a level of reasoning a cut above the machine learning algorithms of yesteryear. Of those legal professionals who planned to use generative AI, 91% said they would use it to draft legal documents and 90% to research legal matters, up from 59% and 66% respectively. Meanwhile, 73% said they would likely use it to help draft emails, up from a third in July 2023.
Lawyers cognizant of the risks associated with generative AI
This enthusiasm among legal professionals for generative AI is tempered somewhat by the technology’s well-documented limitations. Some 59% of respondents said they had minor concerns about it, while 26% had more fundamental objections. The biggest worries among lawyers were that an application would hallucinate inappropriate or incorrect responses (57%), leak confidential client data (55%) or simply not perform as expected.
The study also revealed an implied expectation among respondents that, for generative AI tools to truly succeed in the legal sector, they needed to be trained more extensively on legal sources. “We avoid using generative AI for research purposes at present to avoid hallucinations,” said Samuel Pitchford, one of Pembrokeshire County Council’s in-house solicitors. “But the development of ‘closed’ generative AI tools, trained exclusively on legal source material and available only to subscribers, should be less prone to hallucinations and would allow our team to use generative AI for research.”
But any fears that such generative AI tools are in danger of supplanting lawyers in key roles within firms are misplaced, said May Winfield, global director of commercial, legal and digital risks at the firm Buro Happold. “This won’t replace the analysis and skillset of lawyers,” Winfield told LexisNexis, “but facilitate their views to be done faster and more consistently.”