Artificial intelligence is rapidly changing how we defend our digital world.
At UT, researcher Thijs van Ede is exploring how AI can help detect and stop cyberattacks, and what limits we should set before letting it act on its own.
AI to the defence
Thijs’s research combines cybersecurity and artificial intelligence, focusing on what happens after hackers break into a system. “We use AI systems to detect intruders, understand what they’re doing, and figure out how to stop them from causing further damage,” he explains. These systems analyse everything from network traffic to the activities happening on a computer itself. “If one program suddenly starts opening and overwriting lots of files, that could point to ransomware,” Thijs says. “AI can learn to recognise those patterns, even the ones we haven’t thought of yet.”
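The file-activity heuristic Thijs mentions, a single process suddenly overwriting many files in a short time, can be illustrated with a toy detector. This is a minimal sketch for illustration only, not the team's actual system; the event format, the 10-second window, and the 50-write threshold are all assumptions.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 10    # sliding window length (assumed)
WRITE_THRESHOLD = 50   # writes per window considered suspicious (assumed)

class FileActivityMonitor:
    """Toy detector: flags a process that overwrites many files in a short window."""

    def __init__(self, window=WINDOW_SECONDS, threshold=WRITE_THRESHOLD):
        self.window = window
        self.threshold = threshold
        # process name -> timestamps of its recent file-write events
        self.events = defaultdict(deque)

    def record_write(self, process, timestamp):
        q = self.events[process]
        q.append(timestamp)
        # drop events that have fallen out of the sliding window
        while q and timestamp - q[0] > self.window:
            q.popleft()
        # only *flag* the activity; the final judgement stays with a human
        return len(q) >= self.threshold

# A burst of 60 writes in 6 seconds trips the threshold...
monitor = FileActivityMonitor()
t = 0.0
for _ in range(60):
    flagged = monitor.record_write("suspicious.exe", t)
    t += 0.1
print(flagged)  # → True

# ...while an ordinary program writing once per second does not.
slow = FileActivityMonitor()
print(any(slow.record_write("backup_tool", float(s)) for s in range(30)))  # → False
```

Note that the detector returns a flag rather than taking action itself, mirroring the human-in-the-loop principle described above: the system points at something unusual, and a person decides what to do about it.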
Because modern IT environments generate massive amounts of data, humans alone can’t keep up. AI can flag suspicious activity much faster. But the final judgment still lies with people. “We want humans in the loop,” Thijs emphasises. “AI can point to something unusual, but a person decides whether it’s really an attack.” That balance is an important question in his work: how far can we let AI go in defending systems?
Inside the Terminator project
To explore that boundary safely, Thijs and his team created the Terminator Project, a lab setup that simulates attacks in a controlled environment. The name, he jokes, is “a nod to the film, but in our case, the goal is to make sure our AI doesn’t go rogue.” In these simulations, a large language model (like ChatGPT) is asked what actions it would take when facing an ongoing cyberattack. Researchers then test those actions to see what effect they have. “We’re not only interested in whether the AI can stop the attack,” Thijs says, “but also in what side effects its decisions might have for normal users.”
Caution matters
Security companies are already experimenting with AI tools that can automatically block threats. But Thijs believes it’s too early to hand over full control. “If an AI system mistakenly shuts down a company network because it thinks there’s a hacker, the consequences can be huge,” he warns. “That’s why our research focuses on when automation helps and when it’s better to keep a person in charge.” The team collaborates with Siemens in Germany and is preparing a wider European project to explore the technical feasibility as well as ethical and legal questions around AI-driven defence. Who is responsible if an automated system makes the wrong call? How much autonomy should such a system have? These are questions technology alone can’t answer.
Defending in a changing landscape
The urgency of this work is growing. Attackers are already using AI to their advantage, for instance, by generating realistic phishing emails or writing malicious code. “The threshold for launching an attack is getting lower,” Thijs explains. “We need to make sure defending against them becomes easier, too.” Ultimately, his goal is not to replace cybersecurity experts but to make them more effective. “AI can be a powerful ally,” he says, “as long as we keep it on a short leash.”
What drives Thijs is a fascination with how things break and how to fix them. “Computer programs are built by people, and people make mistakes,” he says. “Security is about making sure those mistakes don’t cause harm. It’s a constant battle between attackers, defenders, and technology, and that’s what makes it so interesting.”