Cybercrime and artificial intelligence: current threats and future prospects
Cybercriminals and AI: a growing threat
The use of artificial intelligence (AI) by cybercriminals is becoming increasingly widespread. Criminals exploit the capabilities of generative AI to carry out more effective and convincing attacks. Tools such as the chatbot ChatGPT have popularized this use of AI in the world of cybercrime. Phishing, ransomware, scams and even CEO fraud (the so-called "presidential scam") now benefit from these technologies.
A democratization of AI among cybercriminals
According to Jean-Jacques Latour, director of cybersecurity expertise at Cybermalveillance.gouv.fr, AI is becoming more widespread among cybercriminals, making their attacks more effective and more credible. The methods these criminals use are not changing, but the volume of attacks and their persuasiveness are increasing significantly.
Increasingly sophisticated phishing attacks
Phishing, which involves sending fraudulent emails promising free gifts or discounts, is becoming increasingly sophisticated. Scammers now avoid the glaring grammar and spelling mistakes that once gave their messages away. They tailor the language to their targets and use plausible contexts to persuade victims to click on malicious links or visit dubious sites.
Generative AI used to create custom malware
Generative AI is being misused by cybercriminals to create custom malware. These programs exploit known vulnerabilities in software, making attacks even more effective. Tools such as ThreatGPT, WormGPT and FraudGPT are circulating on the dark web and gaining popularity among malicious actors.
AI used to sort and mine data
Hackers also use AI to sort through and exploit the masses of data they obtain once they have infiltrated a computer system. This allows them to maximize their profits by targeting the most valuable information.
CEO fraud and deepfake audio generators
AI is also being used in CEO fraud (the "presidential scam"). Attackers gather information on company executives and then, using "deepfake" audio generators, convincingly imitate their voices to order fraudulent wire transfers.
Ransomware and vishing also affected
Businesses and hospitals also face ransomware, which already uses AI to modify its code and evade detection by security tools. The technique of vishing, in which a fake banker requests a money transfer over the phone, could likewise be enhanced with AI.
Cases already recorded and remaining doubts
British police have already reported cases in which synthetic AI-generated content was used to deceive, harass or extort victims. Although no cases have yet been officially recorded in France, there are strong suspicions that criminals there are already using AI.
The “zero trust” rule to counter these threats
Faced with these new threats, it is essential to apply the "zero trust" rule: when it comes to cybersecurity and AI, trust nothing a priori and verify everything. The most active hackers are generally well-organized networks based in Eastern Europe, but state-sponsored hackers from rogue states should not be overlooked either.
Conclusion
In conclusion, AI-powered cybercrime represents a growing threat. Cybercriminals are increasingly using AI to refine their techniques and carry out more convincing attacks. It is essential to remain vigilant and to put appropriate protective measures in place to counter these threats.
