AI Supercharges Hackers' Evolving Cyber Threats
Artificial intelligence is dramatically accelerating the operations of hackers, aiding them in tasks ranging from crafting malware to composing phishing messages. However, a cybersecurity expert speaking at an industry conference on Monday noted that the widely discussed impact of generative AI currently has certain limitations.
AI Supercharging Hacker Operations
Peter Firstbrook, a distinguished VP analyst at Gartner, stated at the company's Security and Risk Management Summit, "Generative AI is being used to improve social engineering and attack automation, but it’s not really introduced novel attack techniques." Experts foresee AI revolutionizing how attackers develop custom intrusion tools. This advancement could significantly shorten the time required for even inexperienced hackers to create malware designed for information theft, keystroke logging, or data erasure.
"There is no question that AI code assistants are a killer app for Gen AI," Firstbrook remarked. "We see huge productivity gains."
The Rise of AI-Assisted Malware and Malicious Code
HP researchers reported in September that hackers had utilized AI to create a remote access Trojan. Referencing this, Firstbrook commented, "It would be difficult to believe that the attackers are not going to take advantage of using Gen AI to create new malware. We are starting to see that."
Attackers are employing AI in a particularly insidious manner by creating fake open-source utilities. They then deceive developers into unknowingly integrating this malicious code into their legitimate applications.
"If a developer is not careful and they download the wrong open-source utility, their code could be backdoored before it even hits production," Firstbrook cautioned. While hackers could have executed such tactics previously, AI now enables them to flood code repositories like GitHub, making it difficult to remove malicious packages swiftly. "It’s a cat-and-mouse game," Firstbrook explained, "and the Gen AI enables them to be faster at getting these utilities out there."
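One common defense against this flood of look-alike packages is a typosquatting check before installing a dependency. The sketch below is purely illustrative: the package list, the similarity threshold, and the function name are hypothetical examples, not a vetted allowlist or a real tool from the talk.

```python
# Illustrative sketch: flag dependency names that closely resemble,
# but do not match, well-known packages -- a common typosquatting tell.
# POPULAR_PACKAGES and the 0.85 threshold are hypothetical examples.
from difflib import SequenceMatcher

POPULAR_PACKAGES = {"requests", "numpy", "pandas", "django", "flask"}

def looks_like_typosquat(name: str, threshold: float = 0.85) -> bool:
    """Return True if `name` is a near-miss of a popular package name."""
    if name in POPULAR_PACKAGES:
        return False  # exact match to a known package is fine
    return any(
        SequenceMatcher(None, name, known).ratio() >= threshold
        for known in POPULAR_PACKAGES
    )

print(looks_like_typosquat("requessts"))  # near-match to "requests" -> True
print(looks_like_typosquat("requests"))   # exact known name -> False
```

In practice a check like this would run in CI against a curated list of an organization's approved dependencies, rejecting anything that is suspiciously close to, but not exactly, a name on that list.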
Deepfakes: The Emerging Landscape
The integration of AI into traditional phishing campaigns poses an increasing threat, though its current impact seems contained. A recent Gartner survey revealed that 28% of organizations encountered a deepfake audio attack, 21% faced a deepfake video attack, and 19% experienced a deepfake media attack that circumvented biometric security. Despite these occurrences, only 5% of organizations reported deepfake attacks leading to financial or intellectual property theft.
Nevertheless, Firstbrook acknowledged, "This is a big new area."
AI Is Boosting Attack Volume, Not Yet Novelty
Analysts are concerned about AI's capacity to enhance the profitability of certain attacks by significantly increasing their volume. "If I’m a salesperson, and it typically takes me 100 inquiries to get a ‘yes,’ then what do you do? You do 200 and you've doubled your sales," Firstbrook illustrated. "The same thing with these guys. If they can automate the full spectrum of the attack, then they can move a lot quicker."
The Search for Truly New AI-Driven Attacks
One particular fear associated with generative AI seems, for the moment, to be exaggerated. Researchers have not yet observed AI creating entirely new attack methods. "So far, that has not happened," Firstbrook stated, "but that's on the cusp of what we’re worried about."
Firstbrook referenced data from the MITRE ATT&CK framework, a repository cataloging hacker strategies for breaching computer systems. "We only get one or two brand-new attack techniques every year," he noted.