Since its debut in November, ChatGPT has taken the internet by storm. The AI-driven natural language processing tool has already amassed over a million users and is being used for everything from writing wedding speeches to crafting academic essays and generating code. But it’s not only tech enthusiasts who are taking notice; the cybersecurity industry is worried that ChatGPT could be abused by hackers with limited resources and zero technical knowledge to write malicious code and convincing phishing lures.
Check Point demonstrated how the chatbot could be used in tandem with OpenAI’s code-writing system Codex to create a phishing email capable of carrying a malicious payload. Reports also claim Google is so alarmed by ChatGPT’s capabilities that it issued a “code red” to ensure the survival of the company’s search business. Darktrace’s Hanah Darley noted that it’s not hard to imagine how threat actors might use ChatGPT as a force multiplier, while Sophos’ Chester Wisniewski said it could be used to create more realistic interactions for business email compromise and social engineering attacks.
Security researchers have also witnessed at least three instances in which hackers with no technical skills boasted of how they had leveraged ChatGPT’s AI smarts for malicious purposes. Picus Security’s Dr. Suleyman Ozarslan demonstrated to TechCrunch how the chatbot could write macOS-targeting ransomware code and a World Cup-themed phishing lure.
Though some experts have moved to debunk the idea that an AI chatbot could turn wannabe hackers into full-fledged cybercriminals, much of the security community still worries that ChatGPT will soon be widely embraced by cybercriminals. On a more optimistic note, ESET’s Jake Moore believes ChatGPT will evolve to the point where it can analyze potential attacks on the fly and offer constructive suggestions to enhance security.
So, while it’s difficult to predict the impact ChatGPT will ultimately have on cybersecurity, it’s important to be aware of the potential risks and take appropriate steps to mitigate them.