Cyber resilience


How AI can help improve security

Digitalisation & Technology, 09.02.2022

The recurring vulnerability of IT systems was exposed once again in mid-December, when the world held its breath over the Log4j flaw. Can artificial intelligence (AI) provide greater support for cyber security in future?


Artificial intelligence is becoming increasingly relevant in business, politics and society. It is already commonplace in the service and entertainment sectors, and companies that handle highly sensitive data, such as insurers, banks and credit card providers, now use AI systems to detect fraud and fend off hacking attacks. Medicine, too, is working more and more with AI: Canadian researchers recently presented "Cobi", the first AI-controlled, fully automated vaccination robot.

But amid all the benefits and simplifications that AI offers, one fact must not be forgotten: criminals can also use AI to plan and carry out their cyber attacks. It therefore makes sense for companies to rely on artificial intelligence themselves to protect against threats and hacking attacks. Link11, a cloud-based online and network security provider, examines the issue in its white paper "AI and Cyber Resilience: a race between attackers and defenders". Marc Wilczek, cyber expert and Managing Director of Link11, reveals more in an interview.

Why is it important to use artificial intelligence to support cyber resilience and IT security?

Artificial intelligence is very good at supporting human actions. This can be seen in autonomous driving, for example, where assistance systems now prevent accidents. But despite its benefits, information technology also has a darker side: AI is increasingly being used as a weapon. We see the results day after day in the form of diverse cyber threats, such as deep fakes, where AI is used to manipulate content, or large-scale cyber attacks carried out by bot armies.

How can AI be used for the security of a company?

In cyber defence, AI is used to analyse and evaluate large volumes of data. Company IT departments have to deal with an overwhelming number of messages and alerts every day. The flow of data is far too large for humans to analyse it thoroughly and, above all, in a timely manner. That is where AI comes into play: it detects anomalies and correlations in large volumes of data and identifies them as threats, and it does so very quickly and precisely. Nor do machines suffer from "alert fatigue", the dulling of people's senses through overwork that causes alerts to be overlooked or ignored and increases the risk that an attack slips through.
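
To make this concrete, the following Python sketch shows the general idea of unsupervised anomaly detection on connection data: a model is trained on a large set of mostly normal records and flags the few that deviate, so analysts only review a short list instead of every alert. The features, values and model choice (scikit-learn's IsolationForest) are illustrative assumptions, not the tooling described in the interview.

```python
# Minimal sketch: flag anomalous network-flow records with an unsupervised
# model so that analysts only review a short list. Feature names and values
# are illustrative, not taken from any specific product.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy features per connection: bytes sent, requests per minute, distinct ports.
normal = rng.normal(loc=[500, 30, 3], scale=[100, 5, 1], size=(1000, 3))
suspicious = rng.normal(loc=[50000, 600, 40], scale=[5000, 50, 5], size=(5, 3))
traffic = np.vstack([normal, suspicious])

# Train on the bulk of traffic; contamination = expected share of outliers.
model = IsolationForest(contamination=0.01, random_state=0).fit(traffic)
labels = model.predict(traffic)  # -1 = anomaly, 1 = normal

print(f"Flagged {np.sum(labels == -1)} of {len(traffic)} connections for review")
```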

How exactly does AI protect a company against cyber attacks?

In IT security, we often work with what is known as blacklisting, a negative list of the threats you want to protect against. We, by contrast, have reversed the burden of proof. This means that we analyse the customer's legitimate data traffic and derive a series of parameters from it: from which countries does the data traffic to the company's network come, in what format, at what speed, and when? This is collated in a customer-specific profile. We then use machine learning and artificial intelligence to identify deviations from this legitimate traffic profile in real time. We identify the threat by its deviation from the norm.
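
The following simplified Python sketch illustrates the "reversed burden of proof" principle described here: a baseline profile is learned from legitimate traffic, and anything that falls outside it is flagged. The fields, thresholds and the simple z-score test are assumptions for illustration only, not Link11's actual parameters or method.

```python
# Minimal sketch: learn what legitimate traffic looks like for one customer,
# then flag anything that deviates from that profile.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class TrafficProfile:
    allowed_countries: set[str]
    rate_mean: float  # typical requests per second
    rate_std: float

def build_profile(history: list[dict]) -> TrafficProfile:
    """Derive a customer-specific baseline from samples of legitimate traffic."""
    rates = [h["requests_per_second"] for h in history]
    return TrafficProfile(
        allowed_countries={h["country"] for h in history},
        rate_mean=mean(rates),
        rate_std=stdev(rates),
    )

def is_deviation(sample: dict, profile: TrafficProfile, z_max: float = 3.0) -> bool:
    """Flag traffic that falls outside the learned profile."""
    unknown_origin = sample["country"] not in profile.allowed_countries
    z_score = abs(sample["requests_per_second"] - profile.rate_mean) / profile.rate_std
    return unknown_origin or z_score > z_max

baseline = build_profile([
    {"country": "DE", "requests_per_second": 120},
    {"country": "DE", "requests_per_second": 135},
    {"country": "NL", "requests_per_second": 110},
])
print(is_deviation({"country": "DE", "requests_per_second": 128}, baseline))   # False
print(is_deviation({"country": "XX", "requests_per_second": 9000}, baseline))  # True
```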

This method is also used to protect websites: statistical models can be used to define typical user behaviour. If there are deviations from this behaviour, upstream CAPTCHAs can be used to clarify whether the user really is a human being or whether a bot is running in the background, trying to tamper with the website.
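
In simplified terms, such a check might look like the following Python sketch, which scores a session against a few traits of typical human behaviour and only triggers a CAPTCHA when the session looks automated. The signals and thresholds are illustrative assumptions, not taken from any real product.

```python
# Minimal sketch: score a session against typical human behaviour and only
# challenge it with a CAPTCHA when it looks automated. Signals and thresholds
# are illustrative assumptions.
def should_challenge(session: dict) -> bool:
    """Return True if the session looks automated and should get a CAPTCHA."""
    too_fast = session["avg_seconds_between_clicks"] < 0.5  # clicks faster than most humans
    no_mouse = session["mouse_moves_per_page"] == 0         # headless bots often send no mouse events
    high_burst = session["requests_last_minute"] > 100      # request rate far above a human pace
    return sum([too_fast, no_mouse, high_burst]) >= 2       # challenge if at least two signals fire

# Example: a rapid, mouse-less session with a request burst gets challenged.
print(should_challenge({
    "avg_seconds_between_clicks": 0.1,
    "mouse_moves_per_page": 0,
    "requests_last_minute": 300,
}))  # True
```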

“It is better to be one step ahead of the hackers than lagging behind them. Technology, such as AI, helps in this.”

Marc Wilczek, cyber expert and Managing Director of Link11

What are the dangers of AI? As you mentioned earlier, criminals can also use it to circumvent a company's security measures, can't they?

The threat situation is very dynamic. The status quo in cyber defence today may no longer help in twelve months' time. Speed and adaptability are therefore very important in IT security. It is better to be one step ahead of the hackers than lagging behind them. Technology, such as AI, helps in this. It takes companies that are frequently targeted by attackers out of the cat-and-mouse game.

How can a company test whether it is sufficiently protected against cyber criminals? What might be regarded as an alert sign?

The damage is usually considerable if you only react once the attack has already happened. Instead, companies need to consider how to protect themselves preventively. This includes a good architecture for their own IT security landscape, whose resistance and resilience need to be tested regularly. Penetration tests, as they are known, simulate specific attacks and highlight weak points in a company's defences. Training for crisis situations also helps to improve security.

The technology is constantly evolving. Can you fully protect yourself from cyber attacks at some point?

100% protection does not exist, as there is always some residual risk. But companies can try to combat the most obvious and greatest security risks. 

How can people and AI ideally work together?

By not seeing AI as competition or a threat, but above all recognising its benefits – just like with autonomous driving or other assistance systems in medicine or aviation. Pilots use the autopilot as an aid but remain in control and can intervene at any time. In IT, the point is for AI to make life easier for people and improve efficiency.

Interview: Mirjam Wilhelm
