Imagine you are at your workplace and receive an email from your direct supervisor with an urgent request. You know your supervisor is abroad, and the email even mentions his current location and time zone.
While reading the email, you receive a call. You answer and hear your supervisor say that he has just sent you an email but wanted to call first to make sure you received his request. You recognise the familiar voice as it asks, “Can you get on it as soon as possible?”
The request could be for a transfer of funds, access to a restricted part of your network, or any other action significant enough to motivate a highly skilled cybercriminal.
Real cases like this have been occurring since 2019, and the quality of the fake voices has improved significantly over the last four years.
This scenario is just one example of how cybercriminals are using new AI technology to compromise systems and people more effectively.
There are several ways in which attackers are currently using artificial intelligence (AI) in cyber-attacks.
Deepfake attacks
Deepfake technology can create fake videos or audio recordings that are difficult to distinguish from the real thing. Attackers can use deepfakes to impersonate someone in a phishing attack. Cybercriminals use these types of attacks to gather information or trick people into performing harmful actions.
AI phishing attacks
Cybercriminals use AI to create phishing attacks that are more sophisticated and harder to detect. For example, they can use AI to generate convincing fake emails or websites that look legitimate. AI can clone a website in a matter of seconds and customise the copy, based on the original, so that it appears to give genuine access to an internal resource.
Denial of service (DoS) attacks
AI also has the potential to enable more sophisticated and powerful distributed denial-of-service (DDoS) attacks, in which multiple systems flood a target with traffic.
AI-powered ransomware
Cybercriminals can use AI not only in individual ransomware attacks but also to combine several attack techniques at once. AI can help hackers identify individuals or entire organisations to target, harvest email addresses, and create highly personalised, dynamic emails designed to bypass countermeasures. Once an AI-powered ransomware attack has given cybercriminals access to a system, AI also allows them to quickly diagnose its weaknesses and escalate the attack.
Advanced persistent threats
AI can be used to conduct advanced persistent threats (APTs), in which an attacker establishes a long-term presence on a network to steal confidential information.
Large-scale data processing
Cybercriminals can use machine learning algorithms to analyse large amounts of data to identify patterns and trends that are not immediately obvious to humans.
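As a purely illustrative sketch (not any real attacker's or defender's tooling), the following Python snippet shows the kind of pattern detection this refers to: a machine-learning model scanning a large set of synthetic event records and flagging the few that do not fit the overall pattern. The feature names and figures are invented for the example, and scikit-learn is assumed to be available.

    # Minimal sketch: flagging unusual records in a large dataset with machine learning.
    # The data is synthetic and the features (login hour, data volume) are invented
    # purely for illustration.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Simulate 10,000 "normal" events: office-hours logins, modest data transfers.
    normal = np.column_stack([
        rng.normal(loc=13, scale=2, size=10_000),   # hour of day
        rng.normal(loc=50, scale=15, size=10_000),  # MB transferred
    ])

    # A handful of outliers: late-night logins moving far more data.
    odd = np.column_stack([
        rng.normal(loc=3, scale=1, size=10),
        rng.normal(loc=900, scale=100, size=10),
    ])

    events = np.vstack([normal, odd])

    # IsolationForest learns what "typical" looks like and scores each event;
    # a label of -1 marks records that do not fit the overall pattern.
    model = IsolationForest(contamination=0.001, random_state=0).fit(events)
    labels = model.predict(events)

    print(f"Flagged {np.sum(labels == -1)} unusual events out of {len(events)}")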
Wrap up
To defend against these increasingly sophisticated and creative threats, we remind you of the good cybersecurity practices that we call SOUP-D:
S for “Safeguard”
Save important information in backups that will allow you to recover it later.
O for “Origin”
Always ask yourself where a contact originates, especially in digital channels. Preferably, do not act immediately; confirm the request through other, independent sources.
U for “Update”
It is also important to install updates for your devices and for your antivirus, as this reduces the risk of exploitable vulnerabilities.
P for “Password”
It is crucial to set strong passwords. They remain the predominant access key in technology, and they should be combined with multi-factor authentication (MFA); a brief sketch of how MFA one-time codes are generated follows this list.
D for “Do not trust”
Always be suspicious of approaches, particularly those involving sensitive data or operations, and always seek confirmation, preferably through a channel other than that of the initial contact.
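As a hedged illustration of the one-time codes behind many MFA authenticator apps, the sketch below implements a time-based one-time password (TOTP, RFC 6238) using only Python's standard library. The base32 secret shown is a made-up example, and this is a teaching sketch rather than production authentication code.

    # Minimal sketch of a time-based one-time password (TOTP, RFC 6238), the mechanism
    # behind many authenticator apps used for multi-factor authentication (MFA).
    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(base32_secret: str, interval: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(base32_secret, casefold=True)
        counter = int(time.time()) // interval          # 30-second time step
        msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    if __name__ == "__main__":
        # Example secret (base32) -- never hard-code real secrets in source code.
        print("Current one-time code:", totp("JBSWY3DPEHPK3PXP"))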