With the release of OpenAI’s latest major language model, GPT-4, users have discovered its capabilities and are putting the technology to use in cybersecurity, with both positive and negative intentions.
At the RSA Conference 2023, AI was the dominant topic, as many sessions and panels discussed the emergence of large language models (LLMs) and how they are already affecting cybersecurity. Many of these discussions focused on the misuse and unintended consequences of LLMs, such as generating misinformation.
Other sessions shed light on ways in which OpenAI’s technology could be exploited by threat actors for nefarious activities. A session titled “ChatGPT: A New Generation of Dynamic Machine-Based Attacks?” demonstrated the proficiency of the GPT-3.5 model in building social engineering schemes, writing phishing emails, and even obfuscating code.
However, session speaker Greg Day, vice president and global CISO at Cybereason, said using ChatGPT is a double-edged sword.
“I think we should expect more attacks, more code reuse, more creative ways of using ChatGPT, but I want to balance that because it can be used appropriately for penetration testing,” Day said.
Alongside Day, speaker Paul Vann, a fourth-year student at the University of Virginia, explained that the new GPT-4 model is more effective than GPT-3.5 at generating code and offers better explanations for its decisions. He also described a tool called OpenAI Playground that IT pros can use to take advantage of these advancements in their defense practices.
OpenAI Playground provides a platform for users to interact with different types of LLMs, such as chat and fine-tunable models, and to experiment with different permutations of their own OpenAI models.
“What’s really cool about this is that it kind of gives you the ability to create models that you can build into your products and refine to focus on cybersecurity goals,” Vann said.
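For readers who want a sense of what this kind of experimentation looks like outside the Playground’s browser interface, here is a minimal, hypothetical Python sketch using the OpenAI API as it existed around the time of the conference. The model choice, prompt, and phishing-triage task are illustrative assumptions, not something presented in the session.

```python
# Hypothetical sketch: querying an OpenAI chat model for a security-analysis task,
# similar to what the Playground lets users prototype interactively.
# Assumes the 2023-era `openai` Python package and an OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative; swap for whichever model you have access to
    temperature=0,          # deterministic output for repeatable triage
    messages=[
        {"role": "system",
         "content": "You are a security analyst. Classify the email below as "
                    "'phishing' or 'benign' and briefly explain your reasoning."},
        {"role": "user",
         "content": "Subject: Urgent invoice - click here to confirm payment ..."},
    ],
)

print(response.choices[0].message["content"])
```

The same prompt and parameters could be refined in the Playground first, then moved into a product integration once the behavior looks right.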
Over the past few weeks, tech companies have integrated OpenAI’s models into their products in numerous ways across the infosec industry. Some of the latest applications of AI include analyzing threat intelligence and automating low-risk, repetitive tasks, with the aim of easing the burden on understaffed and fatigued security teams.
Use of AI in cybersecurity practices
Microsoft’s new Security Copilot, powered by the company’s security-focused GPT-4 model, gathers data from verified and trusted sources, as well as Microsoft Defender Threat Intelligence and Microsoft Sentinel, to help users with incident response, threat hunting and security reporting.
Only available to professional security teams, the tool is intended to improve the efficiency of security operations center analysts.
“We believe there are many opportunities to help security professionals gain clarity, understand what others are missing, and learn more about what they need to do to become better security professionals,” said Chang Kawaguchi, vice president and AI security architect at Microsoft.
According to Kawaguchi, the tool takes the normal process of detecting threats – submitting queries, analyzing data and making decisions at the conclusion of an incident – and augments it with skills that many users may lack because of the current deficit in cybersecurity personnel.
Security professionals can use Copilot to reverse engineer a script to dismantle malware, understand malicious code, and determine what activities were involved in an incident.
Additionally, users can derive forward-looking containment methods for an incident and, through graphical mapping of the attack sequence, use origin analysis to trace how the malware materialized.
“It’s a great example of providing skills, capabilities that the individual might not have and helping to bridge the huge security skills gap that we have,” Kawaguchi said. “There are over 3.5 million unfilled security positions, and we need to find a way to help organizations fill those positions with people and make them more qualified.”
Research firm Gartner has predicted that, due to the stressors of working in the field, nearly half of cybersecurity leaders will change roles by 2025 and 25% will move on to other disciplines. Like Microsoft, Israeli startup Skyhawk Security has turned to AI to address the difficulties brought on by the talent shortage.
Last month, the company integrated ChatGPT into its cloud threat detection and response platform. According to Chen Burshan, CEO of Skyhawk Security, adding another layer of machine learning to its security analytics workflow has reduced responder fatigue from alerts generated for every detected threat. Using ChatGPT ensures responders receive only alerts serious enough to investigate, he said.
“ChatGPT basically added another layer that allowed us to be more specific in what we send to customers and also allows them to analyze it faster,” Burshan said.
Skyhawk’s existing machine learning framework uses malicious behavior indicators to detect real threats. These events are compiled into a single attack sequence and scored according to their severity. ChatGPT is now part of the company’s scoring mechanism.
Trained on millions of security data points from across the web, the chatbot can read an attack sequence in broad terms and assign the incident a maliciousness score along with an explanation of that score. If the score is high enough to cause concern, the incident is sent to responders for analysis.
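Skyhawk has not published its scoring code, but the workflow described above – summarize an attack sequence, ask the model for a maliciousness score with an explanation, and escalate only incidents above a threshold – can be sketched roughly as follows. The function names, prompt, threshold and JSON format are assumptions for illustration, not Skyhawk’s implementation.

```python
# Rough, hypothetical sketch of the scoring step described above; not Skyhawk's actual code.
# Assumes the 2023-era `openai` Python package and an OPENAI_API_KEY environment variable.
import json
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

ESCALATION_THRESHOLD = 7  # assumed cutoff; a real system would tune this empirically

def score_attack_sequence(attack_summary: str) -> dict:
    """Ask the model to rate how malicious a compiled attack sequence looks."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        temperature=0,
        messages=[
            {"role": "system",
             "content": "You score cloud attack sequences. Reply with JSON only: "
                        '{"score": <integer 0-10>, "explanation": "<one sentence>"}'},
            {"role": "user", "content": attack_summary},
        ],
    )
    # Assumes the model follows the JSON instruction; production code would validate.
    return json.loads(response.choices[0].message["content"])

def triage(attack_summary: str) -> None:
    result = score_attack_sequence(attack_summary)
    if result["score"] >= ESCALATION_THRESHOLD:
        # Only incidents scored as serious reach human responders.
        print(f"ALERT ({result['score']}/10): {result['explanation']}")
    else:
        print("Logged, not escalated.")

triage("New IAM user created, then mass S3 downloads to an unfamiliar IP within minutes.")
```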
In addition to optimizing the efforts of IT experts, the company uses the chatbot to speed up the distribution of alerts. Skyhawk researchers conducted tests and found that using the ChatGPT API to assess threat severity accelerated breach detections, with 78% of cases producing alerts sooner than without the AI technology.
“The whole process of understanding, reacting, responding, blocking and possibly describing what happened becomes much faster,” said Amir Shachar, director of data science at Skyhawk. “It’s sort of our way of helping customers: easier detection.”
Burshan said that with the efficiency Skyhawk researchers found in implementing ChatGPT, he predicts the chatbot will be adopted by more companies with additional security use cases.
“I think the industry will increasingly try to use ChatGPT in real-world detection,” Burshan said. “The technology can be used for different functions and different security features.”
The future role of AI in cybersecurity
In September, market research firm IDC predicted that global spending on AI systems would top $300 billion by 2026. The world is rapidly embracing AI and machine learning systems, which have already been deployed in many cybersecurity products for years.
But while ChatGPT models could reach deeper into defensive settings, their functionality remains limited – for now.
“AI can only be part of a defense, not the whole thing,” said Sean Gallagher, senior threat researcher at Sophos. “Thus, ‘trust’ in AI, at least for now, should be limited to trusting it as part of a multilayered defense and as an aid to defenders in spotting potentially malicious activity or content.”
While integrating versions of ChatGPT into cybersecurity tools can make processes more efficient, Gallagher said it requires multiple levels of human oversight.
Integrating AI into cybersecurity practices should alleviate some workload pressures for IT analysts, but it could also mean additional analysis to keep the technology safe. Still, AI will most likely make a slow entry into threat intelligence environments.
“We’re a long way from ‘lights out’ security,” Gallagher said.
Alexis Zacharakos is a journalism and criminal justice student at Northeastern University in Boston.