ChatGPT’s cyber-risks are just coming into focus
While it’s probably too soon to grasp the full extent of cybersecurity problems that these tools could pose, it’s likely that such AI chatbots could soon become the latest targets of bad actors
The internet frenzy around ChatGPT and other humanlike chatbots is starting to put attention on their cybersecurity risks—many of which are only now coming into focus.
Earlier last month, Forbes reported that a hacker was able to access Microsoft’s AI-powered Bing Chat and prompted it to reveal “confidential instructions that guide how the bot responds to users” after commanding the chatbot to ignore previous instructions.
There have also already been reports of users prompting the AI-powered bots to write malware code and phishing emails—showing that such tools can be not only the target of cyberattacks, but also the source of cyberthreats.
While it’s probably too soon to grasp the full extent of cybersecurity problems that these tools could pose, it’s likely that such AI chatbots could soon become the latest targets of bad actors—especially considering the amount of data they amass every day from users across industries.
In fact, that’s the first security concern that comes to mind, said Mark McCreary, co-chair of the Privacy and Data Security Practice at Fox Rothschild. He noted that users trust ChatGPT and similar tools with information that “basically becomes public domain.”
“As far as security goes, I do think it’s problematic,” McCreary said. He added, “the problem is, once that information goes into ChatGPT, that’s in the ChatGPT. The rest of the world can somehow see that, if not directly, through other queries that come out of chatting.”
But the risk of bad actors accessing that data is only one layer of the cybersecurity concerns for David Carvalho, co-founder and CEO of Naoris Protocol.
“There are potential data-tampering attacks, and there are also attacks on the integrity of the system that makes the model not follow what it was designed to follow in some way, or the data that’s available is no longer available,” Carvalho explained. “So the possibilities of attack are literally limited by the attackers’ imagination.”
Despite such risks, many industries have already begun to leverage OpenAI’s technology to streamline some of their workflows. In the legal space, at least one firm has already announced it would rely on OpenAI’s technology to assist lawyers with their work.
“You actually have no assurance at any point that you can trust it,” Carvalho warned.
To be sure, its not just the chatbots, but the AI systems—known as large language models (LLMs)—that are powering them that could also be vulnerable to cyberattacks.
But so far, it’s unclear what cybersecurity measures tech giants have adopted as they rushed to release the next advanced LLM. But the security risks—and need for regulators to offer guidance—seem to be on their radar.
“When you open it up to as many people as possible with different backgrounds and domain expertise, you’ll definitely get surprised by the kinds of things that they do with the technology, both on the positive front and on the negative front,” Mira Murati, chief technology officer at OpenAI, told TIME.
She added, “We need a ton more input in this system and a lot more input that goes beyond the technologies—definitely regulators and governments and everyone else.”
And companies are, too, now paying close attention, with some already restricting or banning employees’ use of AI chatbots.
“It’s not too early. I’ve had several conversations with clients but it’s revolved around, one, educate me on this, and two, should we block it? … We’ve already written policies for clients to put out to employees, basically do not put any company data in this,” McCreary said. He added, “And I’ve had other clients that have flat-out banned it. They don’t want to risk it.”