Russian cyber-criminals have been observed on dark web forums trying to bypass OpenAI’s API restrictions to gain access to the ChatGPT chatbot for nefarious purposes.
Various individuals have been observed, for instance, discussing how to use stolen payment cards to pay for upgraded accounts on OpenAI (thus circumventing the limitations of free accounts).
Others have written blog posts on how to bypass OpenAI's geo-controls, while others still have created tutorials explaining how to use semi-legal online SMS services to register for ChatGPT.
“Generally, there are a lot of tutorials in Russian semi-legal online SMS services on how to use it to register to ChatGPT, and we have examples that it is already being used,” wrote Check Point Research (CPR), which shared the findings with Infosecurity ahead of publication.
“It is not extremely difficult to bypass OpenAI’s restricting measures for specific countries to access ChatGPT,” said Sergey Shykevich, threat intelligence group manager at Check Point Software Technologies.
“Right now, we are seeing Russian hackers already discussing and checking how to get past the geofencing to use ChatGPT for their malicious purposes.”
Shykevich added that CPR believes these hackers are most likely trying to implement and test ChatGPT in their day-to-day criminal operations.
“Cyber-criminals are growing more and more interested in ChatGPT because the AI technology behind it can make a hacker more cost-efficient,” Shykevich explained.
Case in point: just last week, Check Point Research published a separate advisory highlighting how threat actors had already created malicious tools using ChatGPT, including infostealers, multi-layer encryption tools and dark web marketplace scripts.
More generally, the cybersecurity firm is not alone in believing ChatGPT could democratize cybercrime, with various experts warning that the AI bot could teach would-be cyber-criminals how to craft attacks and even write ransomware.