This article first appeared in L'Echo in October 2024. We are pleased to share it below in English for our blog readers.
Artificial intelligence multiplies cyber risks. It is crucial that companies teach their employees how to counter this threat.
Just over a year ago, the company Vranken Pommery Benelux fell victim to CEO fraud: more than 800,000 euros were embezzled. A few weeks ago, Trump published photos of Taylor Swift, none too subtly altered by artificial intelligence, urging her fans to vote for the former president. What do these two stories have in common? Two things. First, even though neither was unprecedented, many people were deceived: an accountant at the wine and champagne producer, and potentially thousands (millions?) of Americans. Second, both incidents had a relatively limited scope and impact… compared with what could happen in the future!
The rise of generative artificial intelligence has, as we know, had numerous effects on businesses: increased productivity, enhanced creativity, healthier governance, more satisfied customers, and so on. Technology brings its share of promises, and for centuries technical progress has pushed us forward.
But it would be misleading, and very naive, to think that AI serves only noble and positive purposes. Cybersecurity teams have widely adopted this new technology, not only to boost their own capabilities but also to fight with the same weapons as their adversaries: increasingly active hackers.
The European DIGITAL SME Alliance recently reported a 57% increase in digital attacks between 2022 and 2023, with nearly 80% of the affected companies generating less than $250 million in revenue. As October begins, marking Cybersecurity Awareness Month as it does every year, it is essential to remember that this much-discussed artificial intelligence can also cause serious damage.
The most significant risks therefore lie within companies, as they present the most attractive targets for hackers, mainly from a financial perspective. As in many other areas, AI does not necessarily introduce new threats; rather, it amplifies and scales up existing ones.
Take phishing, for example, where hackers send a fake email containing a malicious link (think of the infamous Nigerian prince offering to leave you his fortune!). With tools like ChatGPT, the number, quality, and reach of such messages can be increased dramatically with minimal effort.
Two types of risk are more specific to generative AI: data leakage and the malicious use of AI. The first is itself twofold. On the internal side, employees may feed the model the wrong data, or hand it so much confidential information that it is later exposed to other users; the model itself may also inadvertently reveal sensitive client data. The second facet is more subtle but just as dangerous, if not more so, because here the data flows in from outside. If a hacker gains access to your AI model, they can feed it inaccurate data, corrupting the algorithm's output and harming your business. The impact of a customer service chatbot offering unauthorized promotional offers, for instance, could be disastrous, as a recent high-profile case in North America reminds us: companies are responsible for what their conversational agents produce and promise!
When AI itself is put to malicious use, the second risk, the dangers are more evident. Going back to our CEO-fraud example: it is one thing to overlook the exact origin of an email, but quite another to find yourself in a Teams or Zoom meeting with a deepfake of your boss instructing you to make a transfer!
Employee education is therefore key to guarding effectively against the cyber risks that AI amplifies. That means understanding how AI works, so as not to hand it sensitive information, and recognizing these new risks, so as not to fall into their traps. Training sessions, workshops, and other awareness activities are the way to get there, and we hope they will be stepped up during this Cybersecurity Awareness Month.