Simple because it matters.
Digitalisation & Technology, 07 October 2024
In Germany alone, the total damage caused by cybercriminal activities amounted to 205.9 billion euros in 2023. The biggest security risk in this context is often the human factor. The use of AI has further increased the risk of cyber attacks – but artificial intelligence also offers new possibilities for protecting systems better.
The figures published by the German Federal Office for Information Security (BSI) for the same year illustrate the rapid increase in threats from cyberspace. In 2023, well-known companies and institutions were affected. According to the BSI, it is striking that ransomware attacks – malware that is smuggled in from outside and paralyses systems in order to extort a ransom – are increasingly aimed not only at financially strong companies and large organisations: SMEs (small and medium-sized enterprises), public authorities and institutions, private individuals and now even political parties are also falling victim. This shows how important it is to make IT systems and environments more resilient.
Implementing such security technologies usually involves considerable technical challenges. One of the biggest is integrating new security systems into existing IT infrastructures. SMEs in particular often still work with legacy systems that are not seamlessly compatible with modern, AI-based security tools. If comprehensive system updates are not carried out at the same time, temporary security gaps can arise. In addition, not only the implementation itself but also the long-term maintenance of the new systems requires specialist knowledge – something smaller companies in particular are often unable to provide. According to the ‘Cybersecurity Jobs Report’, around 3.5 million cybersecurity jobs worldwide will remain unfilled in 2025. Furthermore, the costs of achieving cyber resilience are anything but marginal.
Among the most common gateways for hackers are missing updates and patches: cybercriminals exploit these security gaps to smuggle in malware or ransomware. Another typical attack scenario is the DDoS attack (Distributed Denial of Service), in which numerous compromised systems are used to overload a company's online services and thus paralyse them. Phishing attempts, in which attackers use seemingly legitimate e-mails or messages to obtain sensitive information, are also becoming increasingly sophisticated and harder for recipients to recognise. The biggest security risk here is often the human factor, which is why training for greater cyber security awareness should become an integral part of every corporate culture. The most effective approach is always a combination of sensitised employees and customised technology.
So-called social engineering – the targeted manipulation of individuals in order to gain access to protected data and information – is also being amplified by AI technology. Deepfakes or voice cloning, for example, help to ensure that employees do not suspect a cyber attack when manipulated videos or voice messages from the boss appear deceptively genuine. Phishing e-mails created with AI also rarely contain spelling or grammar mistakes, making them far less conspicuous. A study from 2021 found that, even at that time, companies faced an average of around 700 social engineering attacks per year. Since AI technologies have become more widespread, that number is likely to have risen again, because such attacks can be made even more complex and convincing. How easy it is to commit fraud with such AI tools was demonstrated last year in an experiment by the American journalist Joanna Stern: she created AI-generated clones of herself for different situations and tested how convincing they were. She documented the results in a video that is well worth watching.
Conversely, companies can also use artificial intelligence to protect their systems better – for example with algorithms that analyse large amounts of data, identify suspicious activity and thus point to potential security breaches. Phishing attacks can be fended off with the help of AI because natural language processing (NLP) helps to recognise suspicious patterns. And when it comes to AI-generated deepfakes, the same technology can be used to detect behavioural deviations and expose the fakes.
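To make the NLP idea more tangible, here is a minimal, purely illustrative Python sketch of a phishing classifier built with scikit-learn. The sample e-mails, the TF-IDF features and the model choice are assumptions for demonstration purposes, not a description of any specific product; a real mail filter would be trained on large corpora and combine many additional signals such as sender reputation and link analysis.

```python
# Minimal sketch: flagging suspicious e-mails with a simple NLP classifier.
# Training data, features and model choice are illustrative only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: 1 = phishing, 0 = legitimate.
emails = [
    "Your account has been locked, verify your password immediately",
    "Urgent: confirm your banking details to avoid suspension",
    "Meeting notes from yesterday's project call attached",
    "Reminder: team lunch on Friday at 12:30",
]
labels = [1, 1, 0, 0]

# TF-IDF turns the text into word-frequency features; logistic regression
# then learns which patterns correlate with phishing.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

incoming = ["Please verify your password now to keep your account active"]
probability = model.predict_proba(incoming)[0][1]
print(f"Estimated phishing probability: {probability:.2f}")
```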
Intelligent multifactor authentication (MFA): AI-enhanced MFA goes beyond traditional password protection. It increases security by analysing user behaviour, such as typing speed and access times, and by adapting authentication requirements to the risk and the value of the data involved (see the first sketch below). Suspicious activity can automatically trigger blocking measures.
AI early warning system for ransomware: AI continuously analyses network traffic and file access to detect signs of ransomware attacks at an early stage. Identifying malware and documenting indicators of compromise enables rapid alerting and response.
AI-powered behavioural analysis and monitoring: By monitoring access and user behaviour, AI detects suspicious activities – such as failed login attempts or an unusually high number of file accesses – that could indicate a ransomware attack (see the second sketch below). This analysis is based on the continuous evaluation of activity logs.
Real-time system monitoring: AI enables near real-time monitoring of administrative users and individuals with special rights to detect and respond to anomalous activity.
AI-automated recovery processes: AI can monitor platforms, anticipate issues and suggest remediation steps to streamline system recovery.
Dynamic backup planning: AI adjusts backup plans based on data needs, seasonal fluctuations and other variables, helping to ensure that recovery point objectives (RPOs) are met (see the third sketch below).
Efficient data archiving: AI helps companies identify and discard irrelevant data during the backup process. This saves time, increases efficiency and reduces the cost of data storage.
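To illustrate the adaptive MFA idea described above, here is a first, deliberately simplified Python sketch. The behavioural signals, weights and thresholds are hypothetical; real products derive them from learned user-behaviour baselines rather than hard-coding them.

```python
# Illustrative sketch of adaptive (risk-based) MFA.
# Signals, weights and thresholds are hypothetical examples.

from dataclasses import dataclass

@dataclass
class LoginAttempt:
    typing_speed_wpm: float      # observed typing speed
    baseline_speed_wpm: float    # learned baseline for this user
    hour_of_day: int             # local time of the attempt
    new_device: bool             # first login from this device?
    sensitive_resource: bool     # does the session touch high-value data?

def risk_score(attempt: LoginAttempt) -> float:
    """Combine simple behavioural signals into a 0..1 risk score."""
    score = 0.0
    # Large deviation from the user's usual typing speed is suspicious.
    deviation = abs(attempt.typing_speed_wpm - attempt.baseline_speed_wpm)
    score += min(deviation / attempt.baseline_speed_wpm, 1.0) * 0.4
    # Logins at unusual hours add risk.
    if attempt.hour_of_day < 5:
        score += 0.2
    if attempt.new_device:
        score += 0.2
    if attempt.sensitive_resource:
        score += 0.2
    return min(score, 1.0)

def decide(attempt: LoginAttempt) -> str:
    """Adapt the authentication requirement to the computed risk."""
    score = risk_score(attempt)
    if score >= 0.8:
        return "block and alert security team"
    if score >= 0.4:
        return "require an additional factor (e.g. hardware token)"
    return "allow with standard MFA"

print(decide(LoginAttempt(typing_speed_wpm=30, baseline_speed_wpm=70,
                          hour_of_day=3, new_device=True, sensitive_resource=True)))
```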
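The behaviour-based ransomware detection described above can be pictured in similarly simple terms: compare the current file-access volume against a user's historical baseline and raise an alert when it spikes. The three-standard-deviation threshold and the sample figures below are assumptions chosen purely for illustration.

```python
# Illustrative sketch of behaviour-based ransomware detection:
# flag a user whose file-access rate deviates sharply from their baseline.

from statistics import mean, stdev

def is_anomalous(history: list[int], current_count: int) -> bool:
    """Flag the current window if it far exceeds the historical baseline."""
    if len(history) < 2:
        return False  # not enough history to establish a baseline
    baseline = mean(history)
    spread = stdev(history)
    return current_count > baseline + 3 * max(spread, 1.0)

# Files touched per 5-minute window on previous, normal days.
history = [12, 9, 15, 11, 14, 10, 13]
# A sudden burst of file accesses can indicate mass encryption in progress.
current_window = 480

if is_anomalous(history, current_window):
    print("ALERT: unusual file-access volume - possible ransomware activity")
```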
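Finally, a third small sketch shows what dynamic backup planning against a recovery point objective (RPO) could look like in principle. The helper functions, the change-rate heuristic and the time values are hypothetical and only indicate the general idea.

```python
# Illustrative sketch of an RPO check and a naive backup-interval heuristic.
# All values and rules are assumptions for demonstration purposes.

from datetime import datetime, timedelta

def rpo_met(last_backup: datetime, now: datetime, rpo: timedelta) -> bool:
    """The RPO is met if the newest backup is recent enough that no more
    than `rpo` worth of data could be lost."""
    return now - last_backup <= rpo

def suggest_interval(change_rate_mb_per_hour: float, rpo: timedelta) -> timedelta:
    """Back up more often when data changes faster, and never less often
    than the RPO allows."""
    base = timedelta(hours=4) if change_rate_mb_per_hour < 100 else timedelta(hours=1)
    return min(base, rpo)

now = datetime(2024, 10, 7, 12, 0)
last_backup = datetime(2024, 10, 7, 9, 30)
rpo = timedelta(hours=2)

print("RPO met:", rpo_met(last_backup, now, rpo))            # False: 2.5 h gap
print("Suggested interval:", suggest_interval(250.0, rpo))   # 1:00:00
```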
In response to the increasing threat, the EU has drawn up a follow-up directive on the security of network and information systems (NIS2) with extensive requirements. It is to be transposed into national law by the EU member states by October 2024, with the aim of achieving a high common level of cybersecurity across the European Union.
To achieve this, NIS2 significantly expands the scope of application compared to the original NIS Directive. In addition to public administrations, the group of organisations considered essential to society and the economy now also includes the waste management, food and pharmaceutical industries, the space sector, as well as data processing centres and communication networks.
In addition, the directive requires the companies and organisations concerned to take stricter security measures, to report serious cyber security incidents, and to introduce comprehensive risk management practices and report on them at regular intervals.
The NIS2 Directive also strengthens the role of the national supervisory authorities, which monitor compliance with the directive and can impose sanctions if the requirements are not met. Furthermore, this approach promotes cross-border cooperation between EU member states in the fight against cyber threats. The BSI report on the state of IT security in Germany likewise highlights the need for greater cyber resilience at the national level.
The ongoing development of AI-based attack technologies is presenting companies with ever greater challenges. However, just as AI is used by attackers, it can also be used to protect and strengthen cyber security.
It is therefore essential for companies to invest in modern security solutions and to continuously update their systems in order to be prepared for the growing threats. It is equally important to develop incident response plans in advance to respond quickly and effectively to security incidents, isolate threats and minimise damage. The combination of technological innovations and well-trained personnel forms the basis for effective cyber risk management and supports the digital immune system for organisations and companies.
Text: Alexa Brandt
Your opinion
If you would like to share your opinion on this topic with us, please send us a message to: next@ergo.de