GPT-4 Can Hack Systems All By Itself — Are Your Digital Defenses Ready?

AI Threats Unveiled

Researchers have found that GPT-4, OpenAI's multimodal large language model and the engine behind the paid ChatGPT Plus service, can identify and exploit security flaws without human intervention.

A new study shows that GPT-4 can exploit severe security vulnerabilities given nothing more than a written description of the flaw. The research underscores a growing proficiency among large language models (LLMs) at autonomously exploiting so-called one-day vulnerabilities (flaws that have been publicly disclosed but not yet patched) in real-world scenarios, provided they have access to detailed flaw descriptions.

The study, conducted by computer scientists at the University of Illinois Urbana-Champaign, builds on earlier work probing the potential of chatbots and LLMs for malicious use, such as deploying self-replicating computer worms. Despite their growing capabilities, these models lack guardrails that reliably prevent such misuse, raising significant concerns.

The researchers evaluated several systems, including OpenAI's commercial models, open-source LLMs, and security tools such as the ZAP vulnerability scanner and the Metasploit exploitation framework. They tested each against a benchmark of 15 one-day vulnerabilities spanning web applications, container software, and Python packages, all rated high or critical severity and lacking an available patch at the time of testing.
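To make the benchmark setup concrete, here is a minimal sketch of how per-vulnerability outcomes might be tallied into an overall success rate. The `Trial` record and its field names are illustrative assumptions, not structures taken from the study itself.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    """One attempt by an agent against one benchmark vulnerability (hypothetical record)."""
    cve_id: str       # identifier of the tested flaw
    succeeded: bool   # did the agent produce a working exploit?

def success_rate(trials: list[Trial]) -> float:
    """Fraction of benchmark vulnerabilities the agent exploited successfully."""
    if not trials:
        return 0.0
    return sum(t.succeeded for t in trials) / len(trials)
```

A harness like this would run each model against every entry in the benchmark and compare the resulting rates, which is how figures such as GPT-4's 87% versus GPT-3.5's 0% are reported.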

GPT-4 successfully exploited 87% of these vulnerabilities, while earlier models such as GPT-3.5 succeeded at none. Daniel Kang, an assistant professor at UIUC, warned that GPT-4 and its successors could democratize vulnerability exploitation, lowering the bar for less skilled attackers to commit cybercrime.

However, to exploit these vulnerabilities successfully, GPT-4 needs access to comprehensive CVE descriptions and related data. Kang suggests limiting the publication of detailed vulnerability reports as one possible mitigation, though he questions the efficacy of withholding information and instead favors proactive security measures, such as promptly applying updates, to counter threats from sophisticated chatbots.
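The update-based mitigation Kang favors comes down to verifying that deployed software already includes the fix for each disclosed CVE. Here is a minimal sketch of that version check, assuming simple dotted-numeric version strings; production tools handle semver and PEP 440 edge cases (pre-releases, epochs) that this deliberately ignores.

```python
def parse_version(version: str) -> tuple:
    """Naive parser for dotted-numeric versions like '2.3.1'."""
    return tuple(int(part) for part in version.split("."))

def is_patched(installed: str, fixed_in: str) -> bool:
    """True if the installed version already includes the fix for a disclosed flaw."""
    return parse_version(installed) >= parse_version(fixed_in)
```

Running a check like this across an inventory of installed software, against the fixed-in versions listed in CVE advisories, flags exactly the unpatched one-day windows the study shows LLMs can exploit.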