That question drives Udipta Boro, PhD researcher in Urban Safety and AI at the University of Twente. In his research, he looks behind the technology and asks what really happens when cities start to ‘see’ through artificial intelligence (AI). “There’s this comforting story about technology that takes care of us,” says Boro. “That more cameras mean more safety. But if you look closely, it’s far more complicated.”
The eyes of the smart city
Boro’s PhD project, Cities under the AI Gaze, explores how modern surveillance systems are built, and how ideas about safety take shape along the way. Across the world, cities are installing AI-powered video cameras to monitor traffic, prevent violence, or improve women’s safety in public spaces.
In India, for example, the Safe City Project connects thousands of cameras to algorithms that try to detect SOS gestures or risky situations and automatically alert the police. “It sounds effective,” Boro explains. “But there’s no clear evidence that more cameras mean less crime. Often, problems just move somewhere else, and new ones appear.”
When protection becomes control
For Boro, the key issue isn’t just technical – it’s human. Through his fieldwork in Indian cities like Bengaluru and Chennai, he studies not only what these systems do, but also how they are made. He discovered that police data used to train AI systems often represent certain social groups more than others. “If that’s your dataset,” he explains, “then anyone who looks like those groups is more likely to be flagged as risky. The system isn’t objective, it mirrors existing bias.”
Sometimes, he adds, technologies built to protect can easily become tools of control. “In some countries, cameras installed for safety have been repurposed for moral policing, like monitoring whether women’s clothing is considered ‘appropriate’. That’s when protection turns into control.”
As more of these systems appear in public life, the questions grow deeper. “It also raises questions,” Boro says, “about who decides what’s unsafe, and what happens when technology starts shaping our behaviour.”
What does ‘safe’ really mean?
During his research, Boro follows how the meaning of safety changes between policymakers, police, and technology companies. “At first, it’s about protection,” he says. “But once technology takes over, safety turns into something measurable – about numbers, not people. The human aspect fades away.”
For him, true safety is much more than that. “It’s not only about avoiding harm. It’s about being able to move freely, without fear or constant monitoring, and knowing that your personal data is safe too. A truly safe city trusts its people,” Boro says. “It empowers them to be their true selves.”
Designing with care, not control
“There’s no perfect solution,” Boro admits. “Technology is never neutral – it’s political. What we build reflects who we are and what we believe.” Still, he stays hopeful. “We can design technology differently,” he says. “With care, not control. By listening to the people who actually live in these cities and by asking what they need to feel safe. That means safety should not just be about smarter systems, but about more human ones – systems that include dignity, diversity, and trust.”
For Boro, this begins with open dialogue. “We need interdisciplinary and public conversations about what safety really means. Only then can we design technology that serves people in all their diversity.”