Since 2018–19, the commercial use of AI (Artificial Intelligence) technology has been visible, though by no means all-encompassing or even mainstream. Currently, AI use within the legal sector falls into seven categories: due diligence, prediction technology, document automation, intellectual property, legal research, e-billing, and legal analytics. The legal services market is worth nearly $1 trillion globally, yet remains largely undigitised in these areas – neatly providing AI with the opportunity to revolutionise the sector. Indeed, it has been estimated that applying AI in these areas could increase commercial revenue by $30 billion globally by 2025.
Alongside these obvious commercial benefits, AI boasts the capability to increase efficiency by accelerating the processing of contracts and other legal documents, eliminating human error and the need for excessive review. Simultaneously, however, ethical concerns arise, compounded by a lack of official regulation over the application of AI. An article published by Information Age in 2018 claimed that ‘the legal industry has been using AI in the litigation discovery process for nearly ten years’ – raising the further question of whether, in some areas, the technology’s application has now gone too far to withdraw its use or, at least, to regulate it properly. It is on account of such concerns that we cannot view AI with tunnel vision; we must critically analyse its benefits and ask whether these do in fact outweigh the perceptible constraints upon its value to the legal sector.
The future expansion of AI use in the legal, and particularly the commercial, sector is undeniable; the only questions are how this use will arise and what impact its further application will have. Large firms may lead mainstream adoption, given that they have the largest scope for use and the economic capacity to pay for the technology, although small legal start-ups could equally drive uptake, since they can incorporate the technology into their administration from the outset rather than having to integrate it into a pre-existing structure. Regardless, as more firms adopt AI, a climate of peer pressure is likely to arise in which using the technology becomes ‘fashionable’ and a selling point of a firm’s services, encouraging other firms to follow suit. Before long, reliance upon AI will become customary among firms, and clients will expect it.
However, as highlighted, there is a strong case for the view that AI technology could act as – and already is – a hindrance to the commercial law sector. There is the glaring paradox that the legal profession functions upon the concept of billable hours, yet AI, in optimising the production and processing of contracts and documents, could significantly reduce the number of billable hours for lawyers. Moreover, a Deloitte Insight report recorded that AI ‘technology has already contributed to the loss of more than 31,000 jobs in the sector’. In context, however, this may not necessarily be a bad thing: the painstaking task of trudging through legal documents often distracts from the bigger tasks at hand, and the same report highlighted that ‘there has been an overall increase of approximately 80,000 [jobs], most of which are higher skilled and better paid’. Consequently, these initial ‘losses’ may simply be the necessary expense of the legal sector’s evolution under AI.
The lack of regulation regarding AI usage, however, cannot be explained away so easily. Beneath these concerns lies a foundation of deep unease with regard to ethics, cybersecurity, and confidentiality. These concerns appear justified: in October 2016 the House of Commons released a report on robotics and AI which revealed that it could already foresee the potential for misuse and malpractice, highlighting in particular the ethical and legal issues surrounding decision-making, bias, privacy, and accountability. The real-life weight of these concerns was then confirmed by a 2017 report by the American Bar Association, which exposed that 22% of law firms had been victims of cyberattacks and, in a further report the same year, that 35% of small law firms (defined as those with between ten and 49 attorneys) had been hacked. If the foreseen AI roll-out transpires, surely this situation will only deteriorate?
In addition, the UK Trades Union Congress (TUC) produced a report warning that the use of AI in the workplace could result in ‘widespread’ discrimination and mistreatment unless new laws are implemented. The employment rights lawyers who produced the report, Robin Allen QC and Dee Masters, stated: ‘used properly, AI can change the world of work for good. Used in the wrong way it can be exceptionally dangerous… There are huge gaps in British law when it comes to regulating AI at work.’ AI could thus have dangerous implications not only for clients but also for law firms themselves and their employees.
When looking further into how the technology itself operates, the dangers only become more obvious. Unlike the digitisation systems currently employed by law firms, AI operates through a process called machine learning (ML): algorithms that change over time, learning from their own outputs and from the inputted documents and cases. This methodology carries two clear risk factors. First, there is the problem of accountability: as the algorithms develop, it becomes unclear to the user why they work the way they do and precisely why they have changed. Second, if a simple error is embedded within an inputted document or case, the algorithm will learn from and repeat that error. This was witnessed in practice in England and Wales, where an error within an inputted divorce case form led to the miscalculation of alimony in 3,600 cases across 19 months – exemplifying both how difficult the error was to isolate (hence the mistake going unnoticed for 19 months) and the attendant issues of accountability. This type of problem has been deemed a ‘trojan horse’ by Francesco Contini, a senior researcher at the Research Institute on Judicial Systems of the National Research Council in Italy, who argues that a precautionary stance should be assumed towards AI usage until the technical and ethical concerns have been resolved or, at the bare minimum, regulated.
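The second risk factor can be illustrated with a deliberately minimal sketch (the figures and the `learn_rate` function are hypothetical, invented purely for illustration, not the system used in the England and Wales case): a model that ‘learns’ an alimony rate from historical case records will silently absorb a single data-entry error and repeat it in every subsequent calculation.

```python
# Hypothetical sketch: a model "learns" an alimony rate as the mean
# of award/income across past cases. A systematic entry error in the
# records is absorbed into the learned rate and repeated thereafter.

def learn_rate(records):
    """Fit a single rate = mean(award / income) over (income, award) pairs."""
    return sum(award / income for income, award in records) / len(records)

# Correct records: awards are consistently 30% of income.
clean = [(40_000, 12_000), (60_000, 18_000), (80_000, 24_000)]

# Same cases, but one income was mis-entered (50,000 instead of 40,000).
flawed = [(50_000, 12_000), (60_000, 18_000), (80_000, 24_000)]

clean_rate = learn_rate(clean)    # 0.30
flawed_rate = learn_rate(flawed)  # 0.28 - the error is now baked in

# Every future calculation repeats the mistake, and nothing in the
# output explains why the rate shifted (the accountability gap).
print(round(clean_rate, 2), round(flawed_rate, 2))
```

One mistyped figure shifts every prediction the model makes, and the cause is invisible from the output alone – the two problems Contini’s ‘trojan horse’ warning combines.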
Nonetheless, there are obvious advantages to adopting AI. In many ways, AI is highly compatible with the legal domain: both ML and the law work upon precedent and a logic-oriented methodology, meaning that AI can facilitate legal practice without the need for extensive reprogramming. This similarity lends AI to application in contract review and analytics, litigation prediction, and legal research. This can be witnessed in the success of legal start-ups such as Kira Systems and Lawgeex, which specialise in contract analysis: Lawgeex uses natural language processing (NLP) to determine which parts of a contract are satisfactory and which may prove problematic, while Kira Systems uses the same technology to give stakeholders a clearer understanding of business commitments across an organisation. A particularly interesting development can also be witnessed in litigation prediction, with the Toronto-based start-up Blue J Legal reporting a 90% accuracy rate for its prediction engine (currently specialising in tax law, but with the capability to revolutionise the future of litigation practice).
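The basic shape of automated clause review can be sketched in a few lines. The sketch below is a toy, keyword-based stand-in – commercial tools such as Lawgeex and Kira Systems use proprietary NLP models, and the patterns and clause texts here are invented for illustration – but it shows the core idea: each clause is checked against rules and either passed as satisfactory or flagged as problematic.

```python
import re

# Toy clause-review sketch (keyword rules, not the proprietary NLP of
# commercial tools): each clause is matched against patterns that flag
# language a reviewing lawyer would want to examine.
RISK_PATTERNS = {
    "unlimited liability": re.compile(r"\bunlimited liability\b", re.I),
    "auto-renewal": re.compile(r"\bautomatically renew", re.I),
    "unilateral termination": re.compile(r"\bterminate .* at any time\b", re.I),
}

def review(clauses):
    """Return (clause, [issues]) pairs; an empty issue list means satisfactory."""
    report = []
    for clause in clauses:
        issues = [name for name, pat in RISK_PATTERNS.items() if pat.search(clause)]
        report.append((clause, issues))
    return report

contract = [
    "The Supplier accepts unlimited liability for all losses.",
    "This agreement shall automatically renew for successive one-year terms.",
    "Either party may terminate this agreement with 30 days written notice.",
]

for clause, issues in review(contract):
    print("FLAG" if issues else "OK  ", issues, "-", clause[:50])
```

Real systems replace the keyword rules with statistical language models trained on reviewed contracts, which is precisely what makes them both more capable and harder to audit.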
Linklaters has demonstrated the positive capabilities of the technology through the launch of its AI-powered search and knowledge management system, MatterExplorer. The system has reduced document search times from hours to seconds and has been praised for a 400% improvement in search utilisation over the previous system. This is reminiscent of JP Morgan’s system, COIN, which reduced 36,000 hours of legal work to a few seconds (following the identification of 12,000 erroneous wholesale contracts). Given that the Harvard Business Review has reported that inefficient contracting results in a loss of between 5% and 40% of value on a given deal, the implementation of in-house legal technology has promising potential.
However, once again, this cannot distract from the blatant necessity for regulation. It is important to note that the EU Commission is currently constructing a regulatory superstructure for AI usage; its completion and application in practice, however, are unlikely before 2024. Given the highlighted risks and their severity, it appears sensible to agree with Francesco Contini and take a cautionary stance against widespread, unfettered AI usage until legally binding legislation is implemented. Once such legislation is enforced – especially the EU Commission’s proposed provisions on ‘high-risk’ usage and biometric data protection – it would be illogical not to fully support the roll-out of AI in the legal sector, and in corporate law in particular; the potential benefits of regulated AI use are outstanding and capable of revolutionising legal practice and revenue.
R Towes, 'AI Will Transform The Field Of Law', Forbes, 19 December 2019.
'AI in Law and Legal Practice – A Comprehensive View of 35 Current Applications', Emerj.
B Rich, 'How AI Is Changing Contracts', Harvard Business Review, 12 February 2018.
R Manickam, 'The top players in the AI-powered contract management space', Cenza.
'How AI-enabled tech can ease the headache of contract review', Thomson Reuters, 15 April 2021.
'Linklaters launches new AI data powered system to search and manage legal documents', Linklaters, 16 August 2019.
P Church, 'Regulatory superstructure proposed for artificial intelligence', Linklaters, 26 April 2021.
H Urban, 'Using Artificial Intelligence to Improve Law Firm Performance', Law Technology Today, 16 February 2021.
'Revenues from the artificial intelligence for enterprise applications market worldwide, from 2016 to 2025', Statista Research Department, 12 September 2016.
'TUC demands new laws to protect against misuse of artificial intelligence at work', Nautilus International, 29 March 2021.
T Burke and S Trazo, 'Emerging legal issues in an AI-driven world', Gowling WLG, Lexology.
F Contini, 'Artificial Intelligence: A New Trojan Horse for Undue Influence on Judiciaries?', UNODC.