The figures published by the German Federal Office for Information Security (BSI) for the same year illustrate the rapid increase in threats from cyberspace. In 2023, well-known companies and institutions were affected. According to the BSI, it is striking that attacks using so-called ransomware – malware that paralyses systems in order to extort a ransom – are increasingly hitting not only financially strong companies and large organisations. SMEs (small and medium-sized enterprises), authorities and institutions, private individuals and now even political parties are falling victim to such attacks. This shows how important it is to make IT systems and environments more resilient.
General challenges in cyber risk management
Implementing such security technologies usually involves considerable technical challenges. One of the biggest is integrating new security systems into existing IT infrastructures. SMEs in particular often still work with legacy systems that are not seamlessly compatible with modern, AI-based security tools. If comprehensive system updates are not carried out at the same time, temporary security gaps can arise. In addition, not only the implementation itself but also the long-term maintenance of the new systems requires specialised knowledge, which smaller companies in particular are often unable to provide in-house. According to the ‘Cybersecurity Jobs Report’, around 3.5 million cybersecurity jobs worldwide will remain unfilled in 2025. Furthermore, the costs of achieving cyber resilience are anything but marginal.
Typical attack scenarios and the human factor
One of the most common gateways for hackers is missing updates and patches: cybercriminals exploit these security gaps to smuggle in malware or ransomware. Another typical attack scenario is the DDoS attack (Distributed Denial of Service), in which numerous compromised systems are used to overload a company's online services and thus paralyse them. Phishing attempts, in which attackers use seemingly legitimate e-mails or messages to obtain sensitive information, are also becoming increasingly sophisticated and harder for recipients to recognise. The biggest security risk here is often the human factor, which is why training for greater cyber security awareness should become an integral part of every corporate culture. The best defence is always a combination of sensitised employees and customised technology.
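To illustrate the kind of pattern matching that simple mail filtering performs, here is a minimal, hypothetical sketch in Python. The phrase list and scoring weights are invented for illustration; real phishing filters combine far more signals, such as sender reputation, link analysis and machine-learning models.

```python
import re

# Hypothetical list of phrases that often appear in phishing mails.
SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action required",
    "password expired",
    "click the link below",
]

def phishing_score(subject: str, body: str) -> int:
    """Return a crude suspicion score for an e-mail (higher = more suspicious)."""
    text = f"{subject} {body}".lower()
    # One point per matched phrase.
    score = sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    # Links pointing at a raw IP address are a classic warning sign.
    if re.search(r"http://\d{1,3}(\.\d{1,3}){3}", text):
        score += 2
    return score

mail = ("Urgent action required",
        "Please verify your account at http://192.168.0.1/login")
print(phishing_score(*mail))  # → 4
```

Even this toy example shows why attackers now use AI to polish their wording: well-written messages trigger fewer of these obvious signals, which is exactly why employee awareness remains essential.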
Increased security risks through AI technology
So-called social engineering – the targeted manipulation of individuals to gain access to protected data and information – is also being amplified by AI technology. Deepfakes and voice cloning, for example, can make manipulated videos or voice messages of the boss appear so deceptively genuine that employees do not suspect a cyberattack. Phishing e-mails created with AI also rarely contain spelling or grammar mistakes, making them less conspicuous. A study from 2021 shows that even then, companies were targeted by an average of 700 social engineering attacks per year. With the growing prevalence of AI technologies, this number is likely to have risen further, because such attacks can now be made even more complex and convincing. How easy it is to commit fraud with such AI tools was demonstrated last year in an experiment by American journalist Joanna Stern: she created situational AI-generated clones of herself, examined their effectiveness and documented the results in a video worth watching.
Proactively using AI to combat cybercrime
Conversely, companies can also use artificial intelligence to better protect their systems, for example with algorithms that analyse large amounts of data, identify suspicious activity and thus flag potential security breaches. Phishing attacks can be fended off with the help of AI because natural language processing (NLP) detects suspicious patterns in messages. And when it comes to AI-generated deepfakes, the same technology can be used to detect behavioural deviations and expose the fakes.
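As a minimal sketch of such data-driven anomaly detection, the following Python snippet flags days with an unusually high number of failed logins using a simple z-score. The data and threshold are hypothetical, and real AI-based security tools use far richer models and many more features; the point is only to show the underlying idea of spotting deviations from normal behaviour.

```python
from statistics import mean, stdev

def anomalies(counts, threshold=2.0):
    """Return the indices of values that lie more than `threshold`
    standard deviations above the mean of the series."""
    mu, sigma = mean(counts), stdev(counts)
    # Guard against a constant series (sigma == 0).
    return [i for i, c in enumerate(counts)
            if sigma and (c - mu) / sigma > threshold]

# Hypothetical daily counts of failed logins; day 6 looks suspicious.
failed_logins = [12, 9, 14, 11, 10, 13, 250, 12]
print(anomalies(failed_logins))  # → [6]
```

In practice such a detector would run continuously on telemetry like login attempts, network traffic or file access, and any flagged deviation would be escalated to a security team for review.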