
Armed With ChatGPT, Cybercriminals Build Malware And Plot Fake Girl Bots


Users of underground forums have begun sharing malware coded by OpenAI’s viral sensation, and dating scammers are planning to create convincing fake girls with the tool. Cyber prognosticators predict more malicious use of ChatGPT is to come.


Cybercriminals have started using OpenAI’s artificially intelligent chatbot ChatGPT to quickly build hacking tools, cybersecurity researchers warned on Friday. Scammers are also testing ChatGPT’s ability to build other chatbots designed to impersonate young females to ensnare targets, one expert monitoring criminal forums told Forbes.

Many early ChatGPT users had raised the alarm that the app, which went viral in the days after its December launch, could code malicious software capable of spying on users’ keystrokes or creating ransomware.

Underground criminal forums have finally caught on, according to a report from Israeli security company Check Point. In one forum post reviewed by Check Point, a hacker who’d previously shared Android malware showcased code written by ChatGPT that stole files of interest, compressed them and sent them across the web. They showed off another tool that installed a backdoor on a computer and could upload further malware to an infected PC.

In the same forum, another user shared Python code that could encrypt files, saying OpenAI’s app helped them build it. They claimed it was the first script they’d ever developed. As Check Point noted in its report, such code can be used for entirely benign purposes, but it could also “easily be modified to encrypt someone’s machine completely without any user interaction,” similar to the way in which ransomware works. The same forum user had previously sold access to hacked company servers and stolen data, Check Point noted.

One user also discussed “abusing” ChatGPT by having it help code up features of a dark web marketplace, akin to drug bazaars like Silk Road or Alphabay. As an example, the user showed how the chatbot could quickly build an app that monitored cryptocurrency prices for a theoretical payment system.

Alex Holden, founder of cyber intelligence company Hold Security, said he’d seen dating scammers start using ChatGPT too, as they try to create convincing personas. “They are planning to create chatbots to impersonate mostly girls to go further in chats with their marks,” he said. “They’re trying to automate idle chatter.”

OpenAI hadn’t responded to a request for comment at the time of publication.

While the ChatGPT-coded tools looked “pretty basic,” Check Point said it was only a matter of time until more “sophisticated” hackers found a way of turning the AI to their advantage. Rik Ferguson, vice president of security intelligence at American cybersecurity company Forescout, said it didn’t appear that ChatGPT was yet capable of coding something as complex as the major ransomware strains seen in significant hacking incidents in recent years, such as Conti, infamous for its use in the breach of Ireland’s national health system. OpenAI’s tool will, however, lower the barrier of entry for novices to enter that illicit market by building more basic, but similarly effective, malware, Ferguson added.

He raised a further concern that rather than build code that steals victims’ data, ChatGPT could also be used to help build websites and bots that trick users into sharing their information. It could “industrialize the creation and personalisation of malicious web pages, highly-targeted phishing campaigns and social engineering reliant scams,” Ferguson added.

Sergey Shykevich, a Check Point threat intelligence researcher, told Forbes that ChatGPT will be a “great tool” for Russian hackers who aren’t adept at English to craft legitimate-looking phishing emails.

As for protections against criminal use of ChatGPT, Shykevich said it would ultimately, and “unfortunately,” have to be enforced with regulation. OpenAI has implemented some controls, blocking obvious requests for ChatGPT to build spyware and responding with policy violation warnings, though hackers and journalists have found ways to bypass those protections. Shykevich said companies like OpenAI may have to be legally compelled to train their AI to detect such abuse.
