Few Companies Are Protecting Against Generative AI Threats
Cyberleaf recently posted about ways to spot deepfakes. On a related topic, a Splunk survey released last month highlights an industry-wide lack of knowledge about, and protection against, Generative AI threats.
What is it exactly? “Generative AI can be thought of as a machine-learning model that is trained to create new data, rather than [predicting] a specific dataset. A generative AI system [learns] to generate more objects that look like the data it was trained on.” (Massachusetts Institute of Technology)
Most cybersecurity companies have deployed Generative AI, yet most of those same companies have no safeguards in place to defend against it. According to the Splunk survey, roughly two-thirds (65%) of respondents do not understand the hazards. This inattention gives AI-equipped threat actors a significant advantage over the cybersecurity industry. While AI can create opportunities for innovation and efficiency, that same efficiency can be turned to malicious ends.
Here are some key ways Generative AI can be used in a cyberattack:
Malware Creation and Evasion: AI can be used to create malware that constantly changes its code to evade detection by traditional antivirus software. These changes can be subtle yet sufficient to avoid signature-based detection systems.
Automated Vulnerability Discovery: Generative AI can be programmed to scan and identify vulnerabilities in software and networks faster and more efficiently than human hackers. This allows cybercriminals to exploit these vulnerabilities before patches are applied.
Manipulating AI Defenses: Cybercriminals can use Generative AI to create adversarial examples that fool AI-based security systems, such as image recognition and anomaly detection systems, leading them to misclassify malicious activity as benign.
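To make the evasion idea above concrete, here is a toy sketch of how a naive entropy-based heuristic (a simplified stand-in for signature-style detection; the detector and threshold are illustrative assumptions, not any real product) can be fooled by padding a high-entropy payload with low-entropy filler:

```python
import math
import random
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte of a byte string."""
    total = len(data)
    counts = Counter(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical heuristic: very high entropy suggests packed/encrypted content.
ENTROPY_THRESHOLD = 6.0

def naive_detector(payload: bytes) -> bool:
    """Toy detector: flag payloads whose entropy exceeds the threshold."""
    return shannon_entropy(payload) > ENTROPY_THRESHOLD

# Pseudo-random bytes stand in for an encrypted malicious blob.
random.seed(0)
payload = bytes(random.randrange(256) for _ in range(4096))
print(naive_detector(payload))   # near-maximal entropy -> flagged

# Evasion: dilute overall entropy with repetitive padding; payload is unchanged.
evasive = payload + b"\x00" * 60000
print(naive_detector(evasive))   # entropy drops below threshold -> not flagged
```

The same cat-and-mouse dynamic applies to more sophisticated ML-based detectors: an attacker who can probe the model can search for small changes that flip its verdict.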
“AI is a powerful tool, but human decision-making related to Gen AI is critical when defending against threats,” said Splunk SURGe security strategist Audra Streetman in an interview with Cybersecurity Dive. Human decision-making may sound simple, but it is a complex process influenced by many factors, including intuition, experience, emotions, and personal biases.
The integration of Generative AI into cybercriminal activities represents a significant challenge for cybersecurity professionals. It necessitates the development of more advanced and adaptive security measures, including AI-based defenses that can counteract these sophisticated threats, such as:
Anomaly Detection: Implementing machine learning (ML) models to identify unusual patterns in network traffic, user behavior, and system activities that might indicate generative AI-driven attacks.
Behavioral Analysis: Using AI to establish baseline behavior profiles for users and systems, allowing for the detection of deviations that could signify malicious activity.
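As a minimal sketch of the baselining idea above (the data, thresholds, and function names are hypothetical illustrations, not drawn from Splunk or any specific product), the example below summarizes a user's normal activity as a mean and standard deviation, then flags observations more than three standard deviations from that baseline:

```python
from statistics import mean, stdev

def build_baseline(history: list[int]) -> tuple[float, float]:
    """Summarize normal behavior as (mean, standard deviation)."""
    return mean(history), stdev(history)

def is_anomalous(value: int, baseline: tuple[float, float], z: float = 3.0) -> bool:
    """Flag observations more than z standard deviations from the baseline mean."""
    mu, sigma = baseline
    return abs(value - mu) > z * sigma

# Hypothetical history: one user's daily login counts over two weeks.
history = [20, 22, 19, 21, 23, 20, 18, 22, 21, 20, 19, 23, 21, 22]
baseline = build_baseline(history)

print(is_anomalous(21, baseline))  # a typical day -> False
print(is_anomalous(95, baseline))  # a sudden burst of logins -> True
```

Real deployments replace this simple z-score with richer ML models over many signals (traffic volume, access times, geolocation), but the principle is the same: learn what "normal" looks like, then alert on deviations.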
With 15 years in networking and security and more than 20 certifications, Cyberleaf is uniquely qualified in cybersecurity. “We are a proud partner of Splunk, which uses AI and ML to identify and combat Generative AI threats,” said Cyberleaf's VP of Cybersecurity.
Read more statistics and information relating to Generative AI from Cybersecurity Dive here.
Access the Splunk survey, “State of Security 2024 Report Reveals Growing Impact of Generative AI on Cybersecurity Landscape,” here.