BSI: AI enables "unprecedented quality" for phishing attacks

The German Federal Office for Information Security warns of security threats from artificial intelligence. However, much is still up in the air.

Two robot hands on an ergonomic keyboard

The BSI is keeping a critical eye on the security situation in view of the growing capabilities of artificial intelligence, but is not alarmed.

(Image: maxuser/Shutterstock.com)

By Falk Steiner
This article was originally published in German and has been automatically translated.

The German Federal Office for Information Security (BSI) takes the impact of artificial intelligence (AI) on the cybersecurity situation very seriously, but does not yet see any reason for alarmism. "In our current assessment of the impact of AI on the cyber threat landscape, we assume that there will be no significant breakthroughs in the development of AI, especially large language models, in the near future," says BSI President Claudia Plattner, assessing the situation.

In a research paper available exclusively to heise online, the BSI looks at already known threat scenarios on the one hand and developments expected from AI on the other. Although the feared large-scale threat has not yet materialized, the agency cautions that developments should not be underestimated.

According to the cybersecurity experts from Bonn, large language models (LLMs) are already having an impact: "In addition to general productivity gains for malicious actors, we are currently seeing malicious use in two areas in particular: social engineering and the generation of malicious code."

In social engineering, where technical security precautions are circumvented by manipulating employees or service providers, AI enables phishing attempts of "unprecedented quality", the BSI warns. "Conventional methods for detecting fraudulent messages, such as checking for spelling mistakes and unconventional use of language, are therefore no longer sufficient."
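To see why such heuristics break down, consider a minimal sketch of the classic spelling-based filter; the vocabulary and sample messages here are invented for illustration and are not taken from the BSI paper:

```python
# Classic heuristic: score a message by the share of words that appear
# misspelled. Fluent, LLM-generated phishing text defeats this check.
KNOWN_WORDS = {"your", "account", "has", "been", "suspended", "please",
               "verify", "the", "details", "immediately"}

def spelling_score(text: str) -> float:
    """Fraction of words not found in a reference vocabulary."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    return sum(1 for w in words if w not in KNOWN_WORDS) / len(words)

# A clumsy, hand-written phishing mail scores as suspicious ...
print(spelling_score("Yuor acount has ben suspnded, verfy imediately"))  # ~0.86
# ... while a fluent, machine-generated one looks like legitimate mail.
print(spelling_score("Your account has been suspended, please verify "
                     "the details immediately"))                         # 0.0
```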

The BSI sounds a cautious all-clear on the question of how far malicious code is already being created fully automatically. "LLMs can already write simple malware, but we have not found any AI that is capable of writing advanced, previously unknown malware on its own," says the IT security authority in its assessment of the situation.

AI that employs sophisticated obfuscation techniques or independently discovers and exploits zero-day vulnerabilities is not yet a reality. Even the automated adaptation of existing malware has so far mainly been the subject of research work.

The BSI has also examined the extent to which AI-based tools could be used directly for attacks. The authority sees potential for better system defense if pentesting can be automated. To the BSI's knowledge, tools that would automate the process from target selection to penetration of the target system do not yet exist: "Agents that can independently compromise any infrastructure are not yet available and are unlikely to be in the near future."

The BSI concludes: "The use of AI as a fully automated attack tool is an area that is being intensively researched. We expect further projects and tools in this area." LLMs and generative AI in particular offer promising approaches from the developers' point of view. However, it remains to be seen when such tools will move beyond proof-of-concept status.

So far, AI has only been actively used for partial tasks – for example, to map the system landscape and potential weak points. Even such AI-based reconnaissance, however, is usually detected by well-protected systems.

The use of AI is already much more concrete, however, when it comes to circumventing well-known security mechanisms. For example, models are being trained on real data from leaked databases instead of dictionaries for brute-force attacks on password-protected accounts, which increases the probability of a hit. The BSI also sees massive problems with captchas as a security mechanism, as automated solving capabilities have grown enormously.
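How leaked data raises the hit rate can be shown with a small sketch. Here the trained model is simplified to a frequency ranking over a toy leak corpus; the corpus, names, and oracle are invented for illustration:

```python
from collections import Counter

# Stand-in for real leaked data: common choices recur, so guesses drawn
# from leaks can be tried in order of observed frequency.
leaked_corpus = ["password1", "123456", "password1", "qwerty",
                 "123456", "password1", "letmein"]

def candidate_order(corpus: list[str]) -> list[str]:
    """Order guesses by how often they appear in the leaked data."""
    return [pw for pw, _ in Counter(corpus).most_common()]

def brute_force(check, candidates, limit=1000):
    """Try candidates against a password oracle, most likely first."""
    for attempts, guess in enumerate(candidates[:limit], start=1):
        if check(guess):
            return guess, attempts
    return None, limit

found, attempts = brute_force(lambda g: g == "password1",
                              candidate_order(leaked_corpus))
print(found, attempts)  # -> password1 1: the most frequent guess hits first
```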

The BSI is also concerned about a particularly perfidious attack vector: malware that is already integrated into the models themselves. As more and more organizations and companies push ahead with the use of AI, this is a serious problem. "There are already cases in which malware is encoded in the parameters of neural networks, with hardly any change to the usability of the model," the researchers state. "Malicious code can also be hidden in trained models that are frequently distributed on certain platforms."
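The basic idea can be demonstrated in a few lines. The following sketch hides payload bytes in the low-order mantissa byte of float32 weights – a simplified illustration of the kind of parameter embedding the BSI describes, assuming a little-endian machine, not any specific real-world case:

```python
import numpy as np

# On little-endian systems, byte 0 of a float32 holds the lowest mantissa
# bits, so overwriting it barely perturbs the value - and the model.
def embed(weights: np.ndarray, payload: bytes) -> np.ndarray:
    raw = weights.astype(np.float32).copy().view(np.uint8).reshape(-1, 4)
    assert len(payload) <= len(raw), "payload too large for this tensor"
    raw[:len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)
    return raw.reshape(-1).view(np.float32)

def extract(weights: np.ndarray, n: int) -> bytes:
    raw = weights.view(np.uint8).reshape(-1, 4)
    return raw[:n, 0].tobytes()

w = np.random.randn(1024).astype(np.float32)
w2 = embed(w, b"malicious payload")
print(extract(w2, 17))         # b'malicious payload'
print(np.max(np.abs(w - w2)))  # tiny per-weight perturbation
```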

In its current publication, the BSI does not draw any general conclusions about what generative adversarial networks or LLMs mean for development and operating principles – for example, whether open-source products are generally more or less vulnerable than closed-source products.

"Although the source code is usually required for analysis, it is possible to a certain extent to use methods for detecting vulnerabilities in closed source applications in combination with reverse engineering tools", says Germany's cybersecurity authority, describing the issue.

"There are projects that automate this process using an LLM. However, the results vary greatly depending on the complexity of the code and the obfuscation techniques." It is also crucial for the security of open source that AI-based tools are also used to harden open software.

With its current report, the BSI is strikingly less alarmist than its UK counterpart, for example. Back in January, the NCSC warned that AI would cause massive cybersecurity problems in the short term, particularly in the area of ransomware, and that from 2025 the problems would increase massively across the board. The BSI assessment contains no such doomsday scenarios for IT security.

BSI President Plattner also draws another conclusion: given the shortage of skilled workers, it is "crucial that business, science and politics pool their expertise – across national and state borders." She is probably alluding to the long-running dispute over responsibility for IT security: whether the BSI should become the central office for IT security at federal and state level.

(are)