With generative artificial intelligence (AI) becoming so
popular, it is perhaps no surprise that the technology has been repurposed by
criminals for their own benefit, accelerating cybercrime and raising concerns
about fake content and AI-assisted fraud. WormGPT, a ChatGPT alternative, has
made those fears concrete: cybercriminals are using the tool to craft
convincing fake emails and launch sophisticated phishing attacks against
targeted individuals. In short, WormGPT is similar to ChatGPT but without the
ethical restrictions.
Findings from SlashNext reveal that a new generative AI
cybercrime tool called WormGPT has been advertised on underground forums as a
way for bad actors to launch sophisticated phishing and business email
compromise (BEC) attacks. To make matters worse, threat actors are promoting
"jailbreaks" for ChatGPT: custom prompts and inputs engineered to manipulate
the tool into disclosing sensitive information, generating inappropriate
content, or producing malicious code.
According to SlashNext researcher Daniel Kelley, generative AI can create
emails with flawless grammar, making them appear legitimate and reducing the
chance that they will be flagged as suspicious.
How to Prevent AI-Generated Phishing Attacks
Email verification: There needs to be a strict email verification process.
Because AI tools can generate sophisticated and highly persuasive emails,
carefully check the sender's address, the date, and other header details; a
simple automated check is sketched below.
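One way to automate part of this review is to read the Authentication-Results header that most receiving mail servers add, which records whether SPF, DKIM, and DMARC checks passed. The Python sketch below is a minimal illustration, assuming the raw message has been saved as an .eml file; the file name and function names are hypothetical, not part of any particular product.

```python
# Minimal sketch: summarize SPF/DKIM/DMARC verdicts from an .eml file.
# Assumes the receiving mail server recorded an Authentication-Results header.
import re
from email import policy
from email.parser import BytesParser


def authentication_summary(eml_path: str) -> dict:
    """Return the SPF/DKIM/DMARC verdicts recorded by the receiving server."""
    with open(eml_path, "rb") as fh:
        msg = BytesParser(policy=policy.default).parse(fh)

    # There may be several Authentication-Results headers; join them all.
    results = " ".join(msg.get_all("Authentication-Results", []))

    verdicts = {}
    for check in ("spf", "dkim", "dmarc"):
        match = re.search(rf"{check}=(\w+)", results)
        verdicts[check] = match.group(1) if match else "missing"
    return verdicts


def looks_suspicious(eml_path: str) -> bool:
    """Flag the message if any check did not explicitly pass."""
    return any(v != "pass" for v in authentication_summary(eml_path).values())


# Example usage (hypothetical file name):
# print(authentication_summary("incoming_message.eml"))
# print(looks_suspicious("incoming_message.eml"))
```

A check like this does not prove an email is safe, since well-crafted phishing messages can pass all three checks, but a failed or missing verdict is a strong signal to treat the message with extra care.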
Firewalls: A high-quality firewall acts as a buffer between your computer and
outside intruders. Use both types: a desktop (host-based) firewall and a
network firewall.
Be informed about phishing techniques: Stay aware of new phishing scams as
they emerge, since attackers constantly refine their lures.