Cyber Security

ChatGPT: What are the risks to law firms?

Read the original article by Steve Whiter, Appurity Managing Director, on legaltechnology.com


ChatGPT has taken the world by storm

In the few months since its launch, hundreds of millions of users have experimented with OpenAI's artificial intelligence chatbot to produce copy, answer existential questions, write essays, and generate realistic conversations.

Lawyers and legal firms, however, are anxious about the explosion of interest in ChatGPT. Some firms are asking existential questions of their own, wondering whether AI-driven automation poses risks to their industry – as well as to society at large. In many ways, science fiction is already becoming science fact. But what might it mean for the legal industry if free tools like ChatGPT could generate legal documents and draft briefs instantaneously? What might come of a computer that can answer complex legal questions, in seconds, with minimal human input? Could developments like ChatGPT be a net positive, and what are the risks to consider?

Data privacy

Legal firms have both a regulatory and a moral obligation to protect their clients’ data. There are numerous policies, regulations, and charters in place to ensure that the highest standards of data privacy and confidentiality are adhered to within the profession.

Partners and their IT teams already go to great lengths to protect this data, using many innovative tools and technologies to ensure that, in a digital-first world, sensitive information and communications are safeguarded at all costs. But with the rise of AI tools, and ChatGPT in particular, firms will now need to interrogate whether their existing controls are enough. Do they have the necessary processes in place to protect against an AI-related data leak or vulnerability?

ChatGPT users must understand how the tool processes input data, and how entering any personal or sensitive information could violate data protection laws or industry codes of conduct. Confidentiality and privilege rights are called into question the moment information is entered into a tool over which the user has no control.

Firms already have a responsibility to manage and monitor the use of communications tools, from email and SMS to WhatsApp and Telegram. Whatever policies or protections are in place to ensure that all messaging and communication has an identifiable digital footprint, is backed up, and is managed securely must be extended to AI tools. Should firms institute sweeping policies blocking all use of these models, or enact a strict access control policy?

These are the questions that IT teams must be asking themselves when considering how fee earners and partners may use AI language processing tools in the workplace.

Bias and transparency

With any AI tool, the output is dependent on the information with which it is trained, and the people who make decisions about what information the AI receives. This bias is especially apparent in conversations with AI language tools where, for example, a user might ask for an ‘opinion’ on a complex or sticky moral question.

The core issue for firms here is compliance. New regulatory frameworks or codes of conduct may be on the horizon with respect to how AI is used within the legal profession, which would impose additional compliance requirements on firms and their lawyers.

Firms also need paper trails and records of all work-related conversations, even those with an AI tool, along with complete transparency about how and where the information used to develop any kind of legal document was gathered. Without full disclosure and transparency, firms could be opening themselves up to potential compliance or regulatory issues.

While AI tools, including ChatGPT, may provide quick and easy access to a wealth of information – some of which could be useful for lawyers – their use, for now, still sits in a grey area.

Vulnerabilities

Where there’s a lack of transparency, unfortunately there’s a heightened risk of vulnerability. If a firm or a user doesn’t know exactly who has authority over the tools they’re using – and how these tools function and process data – there’s a risk that vulnerabilities can arise. These vulnerabilities may not be immediately obvious to users or IT teams.

And of course, whenever a firm introduces another tool into its roster of technologies, there’s another potential attack vector to consider and secure.

Already, bad actors have been looking to exploit ChatGPT's popularity by creating fake apps and tricking users into downloading malware or visiting live phishing pages. While these methods of getting users to click on malicious links aren't new, they are another consideration for firm partners and fee earners. If there's an increase in malicious pages purporting to offer an OpenAI product, anyone thinking of clicking on those links needs to be especially astute.

Similarly, several cybersecurity firms are warning of ChatGPT's potential to write compelling phishing emails or malicious code. As previously highlighted, without full transparency into AI language models, attackers could train these tools to generate malicious code.

It's not beyond the realm of possibility that an attacker could design an AI tool in such a way that it generates malicious code that specifically targets regulated industries and evades detection.

Ultimately, the responsibility to protect a firm's data and devices still rests with the individual firm. But now that there's another tool to contend with, these responsibilities extend further, and firms must consider putting additional protections in place to ensure they don't get caught out. This might take the form of advanced security training for staff, or better and more streamlined vulnerability testing. Continuously checking a firm's security defences against malware and ransomware attacks, for example, is increasingly a must-have for all firms within regulated industries.

ChatGPT is a fascinating innovation, and perhaps marks the start of growing use of, and reliance on, AI natural language processing tools. This could end up being immensely useful to law firms, as well as to leaders in other industries. But the advice is the same as for any new tool or technology that promises to improve the way we work: be aware of the potential risks, and think carefully about how IT and security infrastructures are prepared for this new tech. No firm wants to fall victim to an attack or a data leak due to misuse of a new technology.

