Balancing Modernization and Caution: The Rise of AI in the Legal Profession

The legal industry in Bermuda, like much of the global community, stands on the verge of a technological transformation driven by Artificial Intelligence (‘AI’). From drafting documents to conducting legal research, AI offers promising tools that can enhance efficiency and reduce costs. However, the increasing integration of AI into legal practice also raises serious questions about ethics, accountability, and regulatory oversight, and the legal profession needs to consider how the use of AI interplays with privacy and data protection obligations.

A recent controversy involving an English barrister, who was reprimanded after it was discovered that her written submissions included five fake authorities, has amplified concerns around the unregulated use of generative AI in legal proceedings and brought the issue back to the forefront of the profession’s attention. Although Mr Justice Ritchie in Ayinde, R v The London Borough of Haringey [2025] EWHC 1040 could not rule on whether AI had been used by junior barrister Sarah Forey to generate the fake authorities, he stated that “it would have been negligent for this barrister, if she used AI and did not check it, to put that text in her pleading”.

Ms Forey dismissed the inclusion of the fake cases as a ‘minor citation error’ caused by mistakenly photocopying authorities from a table; however, this explanation was not accepted by Mr Justice Ritchie, who referred to Ms Forey’s error as “professional misconduct” and deemed it “appalling professional misbehaviour”. He considered that Ms Forey should have reported herself to the Bar Council and, given the severity of the error, ordered her to personally pay £2,000 towards Haringey Council’s legal costs.

This is not the first instance of fake AI-generated cases coming before the courts, and while it should serve as a warning to lawyers about the misuse of AI, it is important to acknowledge that there are clear advantages to integrating AI tools into the daily workflow of legal professionals and that their use is here to stay. In fact, the Solicitors Regulation Authority (‘SRA’) in England recently authorised the first AI-driven law firm with the inherent benefits of AI in mind. Garfield.Law Ltd is the first purely AI-based law firm authorised to provide legal services in England and Wales. The firm offers small and medium-sized businesses an AI-powered litigation assistant to help them recover unpaid debts and guide them through the small claims court process up to the point of trial.

In granting its authorisation, the SRA considered the potential consumer benefits and accepted that “AI-driven legal services could deliver better, quicker and more affordable legal services”. The most obvious advantage of AI is its ability to assist users with traditionally time-consuming tasks in an efficient manner, which in turn reduces costs for clients and increases productivity for lawyers. AI can also assist with the drafting of standard contracts, legal forms, and agreements by using predefined templates to generate precise, consistent, and compliant documents. This automation not only reduces human error but also allows lawyers to focus on more strategic and analytical issues.

Despite these clear benefits, the use of AI in legal practice is still relatively untested and carries significant risks that must be considered and safeguarded against. The most pressing concern is the reliability and accuracy of AI-generated outputs. As the Ayinde R case demonstrates, AI systems can on occasion ‘hallucinate’, confidently generating plausible but completely fictitious information. If a lawyer fails to verify information produced by AI and submits that information in a court filing, they may not only damage their own reputation but also risk a finding of professional misconduct. Had Ms Forey verified the cases cited, she would have been able to catch the errors, correct her submissions and, accordingly, avoid professional embarrassment and potential misconduct findings.

AI users should also be cautious about the data they upload to AI platforms. Many AI tools are designed to be trained, in part, on information provided by their users. There is therefore an inherent risk that users may breach confidentiality by uploading client information into an AI system which stores and reviews that information for training purposes. Lawyers should be careful about what information, if any, is uploaded, particularly when using AI to populate standard legal forms or draft agreements. In Bermuda, this becomes particularly important in light of the recent roll-out of provisions in the Personal Information Protection Act 2016 (“PIPA”).

The Ayinde R case is a useful example, and reminder, of the dangers of using AI in the legal profession. While we are not aware of any similar instances in Bermuda yet, its use is something that both lawyers and the judiciary will need to be mindful of. It is clear, at this stage, that AI still requires significant human oversight as well as clear guidance regarding its use. With caution, however, AI can be an invaluable tool for lawyers, and will likely be a mainstay in the years to come. It will be interesting to see what guidance and/or restrictions the Bermuda Bar Council provides as AI continues to develop and become more prevalent in the legal profession.

As Bermuda continues to develop its regulatory framework around technology and data protection, law firms must adopt a proactive and principled approach to AI so as to safeguard lawyers from the inherent risks of its use. While AI offers exciting opportunities to modernise the practice of law in Bermuda, it is important that it is used in a way that upholds the professional standards that underpin public trust in the legal profession.

In this ever-changing landscape, obtaining advice from trusted advisors is key. If you are considering the introduction of AI in your business and want to ensure that you continue to meet your obligations under PIPA, reach out to MJM’s PIPA team for compliance advice.

For more information, or for any enquiries relating to your obligations under PIPA, please contact Angela Robertson, Associate.