Cybersecurity for AI-powered Attacks in 2024

The recent crackdown on the phishing-as-a-service website LabHost exposed the startling evolution of cybercrime as a result of artificial intelligence (AI). The platform was used by over 2,000 criminals to defraud more than 70,000 UK victims, demonstrating the growing reach, ease and efficiency of AI-empowered cyber-attacks.

If a company has a presence online, it can fall victim to cyber-attacks. Few industries are safe from relentless and evolving cyber threats, and rapid advancements in AI are magnifying the problem.

Staying vigilant and up to date about such threats is essential. As risks increase, organisations must arm themselves with the right security measures and employee training to protect their operations and adapt to this threat landscape. It is important to monitor the problem – and to recognise that, however concerning the picture appears, a combination of technical and human defences offers the best protection for any company.

Emerging attack trends in 2024: leveraging AI

Currently, threat actors are using AI for two main purposes: to steal intellectual property (IP) or for monetary gain, and the two motives often intertwine. In the LabHost example, criminals used the platform to obtain over one million passwords before it was shut down.

One of the most prevalent attack methods is phishing – Hornetsecurity’s annual report found that phishing accounted for 43.3% of email attacks last year. This is set to grow further as a result of Dark Web variants of well-known Large Language Models (LLMs) like ChatGPT. These variants, such as DarkBERT and WormGPT, can be used to create seemingly legitimate emails for phishing scams, and to do so at a scale and speed a criminal could not accomplish alone.

Brand impersonation is another popular AI-assisted attack vector, with familiar names such as Amazon, Microsoft, LinkedIn, FedEx, DHL and Netflix ranking among the top 10 most impersonated brands. Perpetrators frequently use AI-enhanced tools to steal the login credentials needed to carry out such impersonations. Multi-factor authentication (MFA) bypass kits are often used for this type of attack, enabling attackers to sidestep additional security measures and capture a user’s login credentials.
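
A common first line of defence against brand impersonation is to flag lookalike domains – addresses that closely resemble, but do not exactly match, a brand’s genuine domain. The Python sketch below is a minimal, hypothetical illustration of that idea; the brand list, distance threshold and function names are invented for this example and do not describe any specific product.

```python
# Illustrative sketch only: flag domains that closely resemble, but do not
# exactly match, well-known brand domains (a common typosquatting pattern).
# The brand list and distance threshold below are invented for this example.

KNOWN_BRAND_DOMAINS = {
    "amazon.com", "microsoft.com", "linkedin.com",
    "fedex.com", "dhl.com", "netflix.com",
}

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance computed with dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def looks_like_impersonation(domain: str, max_distance: int = 2) -> bool:
    """True if a domain is suspiciously close to, but not, a known brand domain."""
    domain = domain.lower().strip(".")
    if domain in KNOWN_BRAND_DOMAINS:
        return False  # the genuine domain itself
    return any(edit_distance(domain, brand) <= max_distance
               for brand in KNOWN_BRAND_DOMAINS)

print(looks_like_impersonation("arnazon.com"))   # True  ("rn" mimics "m")
print(looks_like_impersonation("netfliix.com"))  # True  (extra "i")
print(looks_like_impersonation("example.org"))   # False
```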

Tools such as EvilProxy and W3LL typically create deceptive log-in pages that capture a user’s credentials while passing them on to the real service the user is attempting to log in to. With these kits in place, the user’s session tokens are hijacked and can then be reused by attackers to repeatedly log in to the legitimate service as an ‘authentic’ user. Criminals increasingly exploit public trust in popular, legitimate services – as well as visual cues such as authentic company logos and design – to trick people into revealing their information, relying on that trust as much as on malicious AI.
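
On the defensive side, one common heuristic against this kind of token theft is to watch how a session token is used after it is issued: the same token suddenly arriving from a different IP address and browser is a strong sign it has been stolen and replayed. The Python sketch below is a minimal, hypothetical illustration of that check – the class and field names are invented, and real systems combine many more signals such as geolocation, device posture and timing.

```python
# Illustrative sketch only: flag session tokens replayed from an unexpected
# client, one heuristic used against adversary-in-the-middle token theft.
# Class, field and variable names here are invented, not a product's API.

from dataclasses import dataclass

@dataclass(frozen=True)
class ClientFingerprint:
    ip: str
    user_agent: str

class SessionMonitor:
    def __init__(self):
        # Maps each session token to the client fingerprint that first used it.
        self._sessions: dict[str, ClientFingerprint] = {}

    def observe(self, token: str, ip: str, user_agent: str) -> bool:
        """Record a request and return True if the token looks hijacked."""
        fp = ClientFingerprint(ip, user_agent)
        first_seen = self._sessions.setdefault(token, fp)
        # The same token arriving from a different IP address *and* a different
        # browser is a strong signal that it was stolen and replayed elsewhere.
        return first_seen.ip != fp.ip and first_seen.user_agent != fp.user_agent

monitor = SessionMonitor()
monitor.observe("token-123", "203.0.113.5", "Firefox/126")        # legitimate user
alert = monitor.observe("token-123", "198.51.100.9", "curl/8.5")  # replayed token
print("possible hijack" if alert else "ok")
```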

The last significant and concerning use of AI by attackers is deepfakes. In a notorious case in Hong Kong, an employee was deceived into authorising a £20 million (HK$200m) payment to fraudsters who had assumed the identity and voice of a senior officer at the company. These advanced spoofing attacks show the unending challenge AI poses by bypassing even strong cybersecurity provisions.

From easily accessible phishing tools to MFA bypass kits and deepfake technology, it’s clear cybercriminals are using powerful AI tools to orchestrate increasingly convincing attacks. These attacks challenge traditional cybersecurity measures, so what can companies do to combat the growing threat?

Cybersecurity strategies for AI-powered cyberattacks

To adequately protect against cyber threats, organisations should take a multi-tiered approach (also known as “defence in depth”) covering common attack vectors. At Hornetsecurity, we suggest a three-pronged approach comprising mindset, toolset and skillset.

Mindset focuses on each employee’s personal responsibility for building and maintaining strong cybersecurity awareness. This includes the C-suite taking part in all training, and effective communication to ensure comprehension and buy-in from every employee.

Toolset focuses on the need to introduce technologically advanced tools – including next-generation malware and spam detection and blocking that incorporate AI-based machine learning, along with backup and recovery systems – to support businesses in fighting AI-driven cyberattacks. It’s crucial that businesses understand the value these products bring to keeping all data safe and secure.
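
To illustrate the kind of machine learning such detection tools build on, the Python sketch below trains a tiny bag-of-words naive Bayes classifier on a handful of invented example emails using the open-source scikit-learn library. It is a toy illustration only – production email security engines train on vast datasets and combine far richer signals – but it shows the basic principle of learning to separate phishing from legitimate mail.

```python
# Illustrative sketch only: a toy bag-of-words spam/phishing classifier.
# The training emails and labels below are invented examples, not real mail.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_emails = [
    "Your parcel is on hold, confirm your card details to release delivery",
    "Urgent: your account will be suspended, verify your password now",
    "Minutes from Tuesday's project meeting attached for review",
    "Lunch on Thursday? The usual place at midday works for me",
]
labels = ["phishing", "phishing", "legitimate", "legitimate"]

# Bag-of-words features feeding a naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_emails, labels)

incoming = "Verify your password immediately or your account will be closed"
print(model.predict([incoming])[0])  # likely "phishing" on this toy data
```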

Another important tactic in the fight against these attacks is skillset, which can be advanced by continuous security awareness training for all employees. Technology can only go so far in protecting a business, which remains vulnerable to natural human error. The latest next-gen training developed by Hornetsecurity establishes a sustainable security culture by helping to build and strengthen an organisation’s human firewall through ongoing training. It also demonstrates the value of every employee’s contribution to protecting against current and future cyberattacks.

By integrating this three-pronged approach into their security strategies, businesses can have confidence in their defences.

AI presents formidable challenges in the hands of malicious actors, but it also holds immense potential for ever-evolving defensive innovation. Despite the concerning uptick in the use of AI for cyberattacks, security experts are also using generative AI technologies for positive, protective purposes, such as bolstering defensive toolkits and enhancing cybersecurity resilience. Initiatives such as Hornetsecurity’s Security Awareness Service exemplify this approach, using AI to help organisations best protect themselves against new and evolving cyber threats.

In this dynamic and ever-changing landscape, proactive measures are essential for staying ahead of cyber-attacks and safeguarding critical assets. While the boom in AI-enabled cybercrime is concerning, the technology can be – and is being – harnessed for good, helping to build a more resilient and secure digital ecosystem for all.

Article by Dr. Yvonne Bernard, Ph.D., CTO, Hornetsecurity.

About Dr. Yvonne Bernard, Ph.D., CTO, Hornetsecurity

Dr Yvonne Bernard is CTO at Hornetsecurity, the global provider of next-generation cloud-based security, compliance, backup, and security awareness solutions. With a Ph.D. in Computer Science, she has a technical background and is responsible for strategic and technical development in the areas of Product Development, Innovation, Research and the in-house Security Lab.
