At the Digital Society Institute, our mission is to establish responsible digitalization, placing society at the centre of the process. We conduct ICT science for a smarter society, taking societal challenges as our starting point and placing digitalization in the service of individuals, groups, organizations, and society at large. The following research groups contribute to reaching this goal.
With today's demographic challenges and limited resources, it is becoming increasingly clear that the existing healthcare system is no longer sustainable. This applies especially to the care of the growing group of (elderly) people with chronic conditions, who need care over longer periods of time. They want to live in their own environment for as long as possible, which is also desirable from an economic point of view, but they need support in doing so, as well as in their wish to be as independent as possible.
The aim of the Biomedical Signals and Systems (BSS) group is to create innovative, technology-based solutions that support these people in their needs. Together with other disciplines, we want to create end-to-end solutions that support people in independent living and self-management.
The focus of our research is therefore on the following areas:
- smart health monitoring services, combining unobtrusive, multimodal sensing with advanced signal processing and data analysis techniques, embedded in a safe, secure and dependable IT infrastructure;
- (real‐time) decision support systems, supporting both clinicians and patients in their decision‐making process with respect to (functional) diagnostics and choice of the best treatment;
- persuasive coaching systems that support people in developing and maintaining healthy behaviour and sustainable vitality, using multimodal sensing, artificial intelligence and persuasive human interactions (e.g. gamification).
It is evident that our research group alone cannot address all these aspects in the best way; therefore we always collaborate with other disciplines. This is also why we started the Center for Monitoring and Coaching (CMC) with other groups of the University of Twente, in which we integrate knowledge and experience from Biomedical Engineering, Computer Science and Behavioural Science to achieve our aims.
Medium- to low-urbanized regions generally face an ageing population. Many people no longer have access to a car, so health and mobility can become serious problems. This requires a smart transport system that is accessible to all and can contribute to a sustainable and inclusive society.
In medium- to low-urbanized regions, traffic safety, the viability of public transport and equity in accessibility are important issues that must be addressed. Smart transport systems should be flexible and adaptive, since travel demand often varies considerably over the day. Research at the Centre for Transport Studies (CTS) revolves around the design of such smart transport systems.
We use ICT to monitor both travel demand and accessibility across all transport modes. However, as a basis for good design, monitoring is necessary but not sufficient. We also need a good understanding of the underlying processes: which factors actually drive travel demand, how does demand influence supply and vice versa, and how can individual travel patterns best be influenced? For this we require new ICT methodologies for data fusion, data analytics and human-machine interaction, as well as ICT technologies able to measure why travelers make the choices they do.
Flexibility means that the service a public transport system provides should meet demand as closely as possible. We cannot design public transport as a unimodal system with fixed timetables and fixed lines. Instead we need a public transport system based on multimodal principles: a main backbone system combined with a variety of modes, including autonomous car(pool)s and e-bikes. Since our approach contains collective elements, the system should not only match demand as closely as possible; individual travelers should also be digitally connected prior to their journey.
The interaction between humans and digital technology increasingly takes the shape of joint cognitive systems. These cognitive systems consist of ensembles of humans and intelligent machines that cooperate in complex and dynamic environments. But how do we humans remain in control, even when we deal with novel and demanding situations?
The Cognitive Psychology and Ergonomics (CPE) research group studies human cognitive processing and behavior in relation to technology. Cognition concerns abilities such as perception, action, reasoning and communication that allow humans to interact with their environment in a meaningful and purposive way. CPE focuses on the interaction between humans and technology and on modeling cognitive processes so that they can be implemented in technological systems.
CPE designs socio-technical systems for such complex environments, supporting humans with intelligent support technologies grounded in human-factors engineering theories. The challenge is to develop technologies that are perceived as a natural part of our environment and society. To this end, our researchers cooperate with other scientific disciplines.
Furthermore, digital technology itself becomes increasingly cognitive. CPE research uses knowledge of how humans perform cognitive tasks to (partially) model and digitally simulate underlying forms of cognitive information processing. In turn, these computer models can be implemented in technological systems (e.g., robots), providing them with human-like forms of cognitive information processing and behavior. This allows such systems to act on their own in a cognitive manner when needed, for example in potentially hazardous or tiresome circumstances, or interact with humans in a more natural way. This research addresses the challenge of CTIT to develop digital technologies that one can effortlessly use, because humans have learned from birth to interact with agents that share their cognitive abilities.
Computers are becoming the brains of our cars and are expected to take over the steering wheel. This will become possible as soon as we can trust those brains. However, existing analysis techniques cannot guarantee the correct behavior of these complex computer systems.
Because of ageing, integrated circuits will increasingly show erroneous behavior. Over the years, the computers in our cars will start to lose important settings and information, and hence might make wrong decisions. This is a frightening prospect on the eve of the introduction of autonomous cars. Within the CAES group (Computer Architectures for Embedded Systems), we therefore aim for highly reliable, power-efficient computers and electricity supply networks: computer architectures and accompanying software that are not only correct by design, but also stay correct over the lifetime of the system.
For efficiency reasons, we optimize our systems by giving them just enough precision to execute the tasks at hand. Compared to traditional deterministic computers, this makes them look 'inexact', but at the same time energy consumption is significantly reduced. These inexact computing techniques can be used in low-power applications (e.g. battery-powered wireless communication systems) but also in large-scale computing systems (e.g. radio telescopes for radio astronomy).
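The trade-off behind inexact computing can be illustrated with a toy sketch: quantizing operands to a handful of bits (a stand-in for cheaper, lower-precision hardware) perturbs the result only slightly. The values and the 4-bit grid below are made up for illustration; this is not the CAES group's actual methodology.

```python
def quantize(x, bits):
    # Round x (assumed in [0, 1)) onto a fixed-point grid with 2**bits levels,
    # mimicking a reduced-precision datapath.
    levels = 1 << bits
    return round(x * levels) / levels

def dot(xs, ys):
    return sum(x * y for x, y in zip(xs, ys))

# Made-up operand values purely for illustration.
xs = [0.11, 0.52, 0.73, 0.24]
ys = [0.61, 0.32, 0.93, 0.44]

exact = dot(xs, ys)
cheap = dot([quantize(x, 4) for x in xs], [quantize(y, 4) for y in ys])
error = abs(exact - cheap)
```

For many signal-processing tasks such a small error is invisible in the end result, while a hardware datapath with fewer bits switches fewer transistors and thus consumes less energy.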
Efficient use of renewable energy resources is the goal of our activities on optimizing the energy supply chain. By coordinating production, storage and consumption, a balanced load on the electricity network is realized, leading to less over-dimensioning of the production facilities and distribution grid. Of course, there is an interplay with electricity consumers; for example, the operation of some of their appliances (e.g. the dishwasher or the EV charger) is shifted in time to prevent consumption peaks. Exploring the limits of consumers' willingness to have their lives influenced by optimization algorithms leads to interesting multidisciplinary research questions.
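The idea of shifting appliances in time to flatten peaks can be sketched as a small greedy heuristic. The load profile, the appliance figures and the function name below are invented for illustration; real demand-side management uses far richer models.

```python
def schedule_shiftable(base_load, jobs):
    # Greedy heuristic: place each shiftable job at the start hour (within
    # its allowed window) that keeps the resulting peak load lowest.
    load = list(base_load)
    starts = []
    for power, duration, (earliest, latest) in jobs:
        best_start, best_peak = earliest, float("inf")
        for start in range(earliest, latest + 1):
            trial = load[:]
            for h in range(start, start + duration):
                trial[h] += power
            if max(trial) < best_peak:
                best_peak, best_start = max(trial), start
        for h in range(best_start, best_start + duration):
            load[h] += power
        starts.append(best_start)
    return starts, load

# Made-up example: evening peak in hours 1-2; a 2 kW dishwasher running
# for 2 hours may start anywhere in hours 0-4.
base = [3, 5, 5, 2, 1, 1]
starts, load = schedule_shiftable(base, [(2, 2, (0, 4))])
```

The heuristic moves the dishwasher away from the existing peak, so the network peak stays at its original level instead of rising.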
Raw data is like crude oil. Oil is pumped up from the soil and then refined in many small process steps in an oil refinery into substances with which one can make end products like plastics and fuels. Data is ‘pumped up’ from sources like databases, file systems, sensors, imaging devices, internet, and social media. In its raw form, it is largely unstructured and needs to be refined by many steps of transformation, extraction, combination, and cleaning, before it can be effectively used for the analytics that drive smart technologies and decision making.
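The refinement steps named above (extraction, cleaning, combination) can be sketched as a tiny pipeline. The log format, sensor names and plausibility bounds are all made up; this is an illustrative sketch, not the DB group's technology.

```python
import re

def extract(raw):
    # Pull a (sensor, reading) pair out of a messy free-text log line.
    m = re.search(r"(\w+)\s*[:=]\s*(-?\d+(?:\.\d+)?)", raw)
    return (m.group(1).lower(), float(m.group(2))) if m else None

def clean(records):
    # Drop unparsable rows and physically implausible temperature readings.
    return [r for r in records if r is not None and -50 <= r[1] <= 60]

def combine(records):
    # Merge duplicate readings per sensor by averaging.
    totals = {}
    for name, value in records:
        s, n = totals.get(name, (0.0, 0))
        totals[name] = (s + value, n + 1)
    return {name: s / n for name, (s, n) in totals.items()}

raw_logs = ["t_out = 12.5", "t_out: 13.5", "t_in=21", "garbage", "t_in = 999"]
refined = combine(clean([extract(line) for line in raw_logs]))
```

Note how each stage silently discards or alters data, which is exactly why provenance and trustworthiness of 'refined' data deserve scrutiny.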
The mission of the Databases (DB) group is to develop technology that facilitates a data refinement and analytics process producing data of sufficient correctness, relevance and trustworthiness. Many threats endanger these qualities. It may be uncertain where data came from, how it was obtained, what has been done to it, and what it precisely means. Sources may change over time. And the algorithms we use to refine data are imperfect. All of this may cause 'refined' data to be subtly less trustworthy than it appears. Furthermore, the application of data science technology may endanger qualities that we hold dear: disinformation and untruths may get ranked highly in our search engines, influencing people's opinions and decisions; centralistic approaches may create power imbalances; bias in data may create unfairness. Finally, data science is inherently multi-disciplinary, and each discipline has its own requirements and peculiarities.
Data-science technology should be more resilient against these threats, such that the qualities of data and the applications of data-driven technology are sufficiently warranted and data scientists and smart technology designers can take responsibility for their work.
The world is changing. All man-made devices are (being) connected through the internet, and soon, through embedded sensors, people as well. We see a blending of the internet, as core communication infrastructure, with all sorts of other networks, such as energy networks, logistics networks, networks of people (social networks) and industrial networks. Next to people as end users, machines themselves will increasingly be network end users, with their own sensors and actuators.
The focus of the Design and Analysis of Communication Systems (DACS) group is on dependable cyber-physical systems (dCPS), in which communicating systems are fully integrated in an enclosing physical system, thus providing services to the outside world.
The core of the internet infrastructure is still growing. To support stable growth, we work on network management techniques for the provisioning of on-demand high-bandwidth connections (lambda switching). A focal point in our current research, however, is the provisioning of a secure internet infrastructure through the timely detection of intrusions and attacks.
Furthermore, we focus on networking solutions for mobility applications, such as car-to-car and car-to-infrastructure communications. The development of suitable wireless communication protocols is an important focus, as is the development of stable distributed control algorithms under uncertainty, e.g. in the context of cooperative adaptive cruise control.
As a second application field, we work on the development of smart energy systems, to enable sustainable distributed energy generation, storage and use. Here the interplay with electricity grids and water networks (for alternative forms of energy storage) plays an important role, and requires the safe and secure exchange of data, control and pricing information.
Finally, we work on the incorporation of dependable networked systems in industrial products and processes (smart industry). Our current work in this context addresses the management of large-scale energy- and cooling-aware data centers, as well as battery-aware scheduling algorithms, e.g., for nanosatellite systems.
Algorithms and optimization are hidden beneath the surface of our daily lives far more than most people realize. Only some 70 years ago, advanced optimization techniques were mostly limited to the inner circles of military logistics; the Berlin Airlift was one of the prominent examples where large-scale mathematical programming techniques were used. Today, optimization techniques are at the core of every navigation device. They schedule processes on our smartphones, determine which web content to display, compute robust train schedules, and match supply and demand in energy networks.
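Navigation is a case in point: route computation rests on classic shortest-path algorithms such as Dijkstra's. A minimal sketch follows; the road network and distances are invented for illustration.

```python
import heapq

def dijkstra(graph, source):
    # Shortest distances from `source` in a weighted directed graph,
    # using a priority queue of (distance, node) pairs.
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, already improved
        for neighbour, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(queue, (nd, neighbour))
    return dist

# Toy road network with made-up distances in km.
roads = {
    "Enschede": [("Hengelo", 9), ("Oldenzaal", 11)],
    "Hengelo": [("Borne", 7), ("Oldenzaal", 12)],
    "Oldenzaal": [("Borne", 15)],
    "Borne": [],
}
distances = dijkstra(roads, "Enschede")
```

A production routing engine adds turn restrictions, live traffic and hierarchical speed-up techniques, but the optimization core is the same.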
In the group Discrete Mathematics & Mathematical Programming (DMMP) we develop algorithmic techniques to solve both generic and application-specific mathematical optimization problems. Our leading motive is provable quality and efficiency:
- Finding algorithms that compute optimal solutions efficiently, so that even the largest problem instances can be tackled in a short time, or understanding why no such algorithm can exist.
- For notoriously hard optimization problems, finding algorithms that come with performance guarantees on computation time or solution quality, and in doing so profoundly deepening our understanding of these problems.
Our work therefore includes fundamental and structural analysis of optimization problems in light of their computational complexity, the development of new tools to analyze and understand algorithms also beyond the realm of worst-case analysis, as well as the development of knowledge and methods that enable us to solve optimization problems even in decentralized settings with several, possibly competing decision makers.
Translated into mathematical terminology, this means that we do fundamental research in combinatorial optimization, mathematical programming, algorithmic game theory, and algorithm design and analysis. This fundamental work is embedded in a societal context mainly via two application areas in which the group is actively involved: public and private traffic, and the design and control of smart grids.
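A textbook example of an algorithm with a provable performance guarantee is the greedy matching-based heuristic for minimum vertex cover, which always finds a cover at most twice the optimal size. The sketch below uses a made-up star graph; it illustrates the notion of a performance guarantee, not any specific DMMP result.

```python
def vertex_cover_2approx(edges):
    # Greedy maximal matching: take any edge with both endpoints uncovered
    # and add both endpoints. Since any cover must contain at least one
    # endpoint of each matched edge, the result is at most 2x the optimum.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Made-up star graph: the optimal cover is {0} (size 1); the heuristic
# may return two vertices, still within its guaranteed factor of 2.
edges = [(0, 1), (0, 2), (0, 3)]
cover = vertex_cover_2approx(edges)
```

The guarantee holds for every input, which is exactly what distinguishes an approximation algorithm from an ad-hoc heuristic.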
As software becomes more and more complex, developing reliable software has become a true challenge. When you write a program, you are writing instructions that a computer can understand. In the past, these instructions were carried out one after the other. These days, however, it is not unusual for a computer to execute multiple series of instructions simultaneously. This increases the complexity of the process and thus the likelihood of errors or problems. New techniques are needed to check complex programs for instructions that will cause errors or conflicts, even before you try to execute them in your production environment.
The Formal Methods and Tools (FMT) research group distinguishes two focus areas:
- Verification of concurrent software. Here the goal is to automatically establish the correctness of software with respect to its specification. The activities contribute to the certification of safety critical applications, for instance in automated driving and healthcare, and to the increased security of critical infrastructure and the Internet of Things.
- Quantitative evaluation of ICT systems. Many reliability criteria have a quantitative nature, for instance performance, availability, resource consumption, risks and costs. A strength of our group is to translate domain-specific models to formal models that address these metrics.
We contribute to optimizing energy consumption, computing residual safety and security risks, and developing smart maintenance strategies. A common approach in software technology, connecting many of our research projects, is the use of domain-specific models and model transformation. It links models for various system aspects at several abstraction levels. Another unifying aspect is formed by the software tools that we construct, to benchmark our algorithms in competitions and challenges, and to validate new methods on realistic case studies.
Besides verification and optimization of the reliability of critical applications of ICT in health, robotics, infrastructure and the Internet of Things, we are open to new and emerging applications of our technology, for instance in systems biology, health management, and nanoprogrammable systems.
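The essence of automatic verification can be conveyed with a toy explicit-state model checker: enumerate every reachable state of a model and check that no bad state occurs. The two-process lock model below is invented for illustration; real FMT tools handle vastly larger state spaces with symbolic and quantitative techniques.

```python
from collections import deque

def reachable(initial, step):
    # Explicit-state exploration: every state reachable from `initial`
    # under the successor function `step`.
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        for nxt in step(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Toy model (made up): two processes cycling idle -> waiting -> critical,
# guarded by one lock. A state is (proc1, proc2, lock_free).
def step(state):
    succs = []
    for i in (0, 1):
        procs, free = list(state[:2]), state[2]
        if procs[i] == "idle":
            procs[i] = "waiting"
            succs.append((procs[0], procs[1], free))
        elif procs[i] == "waiting" and free:
            procs[i] = "critical"
            succs.append((procs[0], procs[1], False))
        elif procs[i] == "critical":
            procs[i] = "idle"
            succs.append((procs[0], procs[1], True))
    return succs

states = reachable(("idle", "idle", True), step)
mutex_holds = all(not (p1 == "critical" and p2 == "critical")
                  for p1, p2, _ in states)
```

Because the search is exhaustive, a `True` verdict is a proof over all behaviours of the model, not just the runs observed in testing.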
When we understand how and why people use interactive media, we can make interactive systems more socially capable, safe, acceptable and fun. The Human Media Interaction (HMI) group combines the study of how humans interact with digital technologies with research into developing digital technology that humans like to interact with.
The group has a long history of working on natural language dialogue systems, which take the form of virtual humans and, more recently, social robots, in which artificial intelligence components add autonomy and decision making to speech, language and nonverbal communication. Besides dialogue research, we also use sensing and interpretation techniques to extract information from video and speech archives (for instance in oral history and cultural heritage projects) or in ambient settings (for instance, changing the lighting of a room depending on the mood of the people present, as detected by affective computing software).
The premise of HMI is that understanding the user – by automated evaluation of speech, pose, gestures, touch, facial expressions, social behaviors, interactions with other humans, bio-physical signals and all content humans create – should inform the generation of intuitive and satisfying system responses.
The research group of Human Media Interaction consists of researchers with backgrounds in computer science, mathematics, electrical engineering, psychology and linguistics. Several researchers have double degrees. Collaboration between disciplines is needed for several reasons. The development of algorithms requires the understanding of the domain: in this case the human user. But also, measuring the effects of the interactive technology on the user requires a thorough mastery of user studies involving a wide range of methods - from ethnomethodology to experimental studies.
Control is a doubly hidden technology: it is often implemented as software on a computer chip, while that chip is built into a device such as a car, thermostat or pacemaker. Although hidden, control is an essential building block for making machines run more smoothly, faster, and more accurately and efficiently.
Control is faced with many new challenges and opportunities:
- Sensors have become smaller, cheaper and more widely available, confronting the control engineer with a huge amount of information. This enables better performance, but requires new techniques for extracting useful information from many different types of sensors, drawing on big-data techniques such as data fusion.
- Higher performance requires better models. Complex (often nonlinear) models with a spatial distribution or hybrid characteristic are needed. Computational power is now available but theoretical insight in these models is desperately needed since brute force simulation does not give sufficient understanding of possible adversarial circumstances.
- The number of actuators is also increasing dramatically, and higher performance requirements demand an integral design, as actuators cannot be controlled individually because of their interactions. Better models and a solid understanding of these interactions are needed for controller design. Think of the complexity of a smart grid with thousands of actuators, physically located all over the country, that still have to form an extremely stable and reliable network.
The ultimate challenge of the researchers of Hybrid Systems (HS) - part of the Mathematical Systems Theory, Applied Analysis and Computational Science (SACS) cluster - is to make all the above work to provide consumers and society with technological devices which simply work.
Technology enables new delivery channels, new services and new business models. The Internet of Things, open data and intelligent software agents open up entirely new ways to organize matters. Increasingly, big data is available to analyze and redesign industrial networks. The application of techniques from artificial intelligence and robotics to supply chains has a large impact on the distribution of work and on the interfaces between machines and humans.
The Industrial Engineering and Business Information Systems (IEBIS) group focuses on using information technology to create value in business processes in the logistics, health and services industries, with research projects that have substantial impact on innovating practice while significantly contributing to the international scientific knowledge base. We closely collaborate with industry, knowledge institutes and government agencies.
IEBIS has a special interest in decision support systems and in inter-organizational systems connecting networks of businesses and governments. We study novel ways of organizing networks, such as dynamic global sourcing and multi-agent coordination.
Methodologically, our research is based on quantitative (Operations Research) models and algorithms for both deterministic and stochastic systems, discrete event simulation, serious gaming, ICT architectures and business modeling, data mining and business analytics, and prototyping to create and evaluate innovative concepts. Both central and distributed control architectures for interorganizational systems (e.g. multi-agent models) are applied.
Within the current research portfolio of IEBIS we discern three major themes, with ties between any two of them, either methodological or content-wise. Financial and security aspects play a role in all three domains. The two research pillars are Industrial/Systems Engineering and Business Information Systems, while the three themes are:
- Design, planning and control of logistics and supply chain networks
- Design and engineering of IT-based services and security measures
- Design and optimization of operational processes in healthcare
Educational needs in society are changing rapidly. The role of traditional institutions for delivering instruction is becoming less prominent, there is a need for continuous education, there is a demand for less instructive and more engaging forms of learning, and a requirement for more personalized forms of learning. Technology plays a role in all these developments by enabling the delivery of instruction in a time- and place-independent way, by providing learners with more engaging forms of learning (e.g., through online laboratories and games) and by using techniques to adapt the learning environment to individual knowledge levels and aptitudes.
The research group of Instructional Technology (IST) focuses on the design and evaluation of open technology-based learning environments using inquiry-based and collaborative learning approaches and involving technologies such as games, simulations, mobile devices, modelling environments, and touch-based interfaces. With the use of techniques from learning analytics and natural language analysis these environments can be made adaptive to the learner’s learning goals, knowledge levels, and aptitudes. These technology-based environments also allow for the design of highly interactive environments that elicit an active and engaging learning process.
Important building blocks for designing interactive and adaptive learning environments are:
- knowledge of (individual and collaborative) learning processes and the impasses that may occur here;
- knowledge of how to scaffold these learning processes;
- knowledge of web-based techniques to develop interactive learning environments and scaffolds;
- knowledge of learning analytics and natural language processing to diagnose learning processes;
- knowledge of instructional designs.
The ultimate goal is to design and develop learning environments that can be used at any moment and anywhere, sometimes by a single learner and sometimes in collaboration, in which the distinctions between entertainment, working and learning disappear, and that are a natural part of our living environment. The starting point for these designs are the capabilities and requirements of users, with a clear driving force from technological developments.
Integrated circuits (ICs) form the heart of all modern electronic systems. They allow extremely complex and cost-effective hardware that has shaped the world as we know it today.
The main task of ICs is the digital processing of data. But where do these bits come from? They are captured by sensors. This can be a simple keyboard, but also an antenna (wireless communication, radar), a camera, an optical fiber, a cable, a microphone, or any of thousands of other sensors, such as those used in self-driving cars. These sensors all generate analog signals, because nature is analog. These analog signals have to be converted to digital bits, so we always need an interface between the physical world of sensors and the computer world of bits. This is the case not only for sensing but also for actuation: we need displays, robot motors, 3D printers, loudspeakers, and complete self-driving cars. These sensor and actuator interfaces are all analog circuits implemented in ICs. The analog circuits amplify weak sensor signals, filter away unwanted interfering signals, sample the signals at very precise moments in time and convert them to clean digital bits. In order to fabricate electronic systems at low cost and with high reliability, we integrate analog sensor and actuator interfaces in the same technology used for digital hardware. This allows single-chip devices, which are cheap, reliable and mass-producible.
The Integrated Circuit Design (ICD) group is one of the few leading academic groups in the world for analog integrated circuits. We focus on fundamentally new approaches, often resulting in one or two orders of magnitude of improvement for a given IC technology. This means that we invent better design techniques. Many techniques used in products originate from the IC Design group in Twente. An example is the "Nauta Transconductor", an infinite-gain, infinite-bandwidth circuit used in high-speed filters to remove unwanted signals. Another breakthrough is our "thermal noise cancelling" technique, which cancels the noise produced by the first transistor that senses the very weak antenna signal.
What are the motives of our increasing use of (new) media? What are the psychological and social effects of ICT in organizations, especially government organizations? And what is the impact of ICT on our information and network society as a whole?
The researchers of Media Communication and Organization (MCO) address these questions from three perspectives. First of all, they take a user perspective in research: not technical supply but user demand is the focus of every investigation. This means a strong interest in issues of adoption or acceptance of digital applications, such as Internet tools and social robots. Furthermore, they examine how humans embed technology in their environment and how they actually work with it. In this way they have found out how and why people make mistakes in using digital technology, for instance mistakes that harm their safety and privacy.
The second input is the network and contextual perspective. The digital society is also a network society. This means that the relations between people, and between people and devices, software and interfaces, are just as important as their individual characteristics. The Media Communication and Organization research group always looks at the human and social relations in digital media use. What is the interaction between humans and social robots? What is the social context? Can social robots be social at all? What is people's relation to devices in the Internet of Things? Do they need the same digital skills here as in using the computers and websites they know? Or do they require special skills to benefit most from, for example, smart meters in their home?
The third contribution is a particular focus on design thinking: a design perspective. Researchers of Media Communication and Organization are often asked to help with research after a problem in a particular domain has been unsatisfactorily solved by technology. A better approach is to diagnose the problem at an early stage. After that, a participatory design process, developing new technology in cooperation with the problem owners, can be organized.
The first wave was many people per computer; the second wave, one person per computer. And now there is the third wave: many computers per person. We want to use these devices at any time, in any place. We want a network that understands its surroundings and improves our experience, our productivity at work and our quality of life.
The goal of pervasive computing, also called ubiquitous computing, is to make devices smart, creating sensor networks capable of collecting, processing and sending data, and ultimately communicating as a means to adapt to the data's context and activity. These systems support human users by pervasively providing computing power, information and other services tailored to their needs. Examples of pervasive systems include easy-living environments for physically and cognitively impaired persons, remote healthcare services for chronic patients, and adaptive disaster-response systems.
The Pervasive Systems (PS) group investigates a new distributed-systems paradigm for bringing the flexibility of information technology to bear in every aspect of daily life. Such distributed networks of smart objects cooperate to support an application as unobtrusively as possible (transparency), making efficient use of scarce resources independent of growth (scalability), in such a way that the system adapts to a dynamically changing environment (evolvability), and that operates and gives results that can be relied upon (trust). Mainly due to resource constraints, devices and connections are inherently unreliable, yet the system should be able to provide reliable services (quality).
The research themes are focused on the design and analysis of the following topics and their interaction:
- collaborative embedded and opportunistic sensing
- extreme wireless networking
- spatio-temporal sensor-data analytics
- learning sensor systems
The group addresses distributed networked intelligence covering sensing platforms, wireless sensor networks, opportunistic networks, real-time distributed systems, privacy-aware computing, sensor fusion, event detection, activity recognition, participatory sensing and computing, and mobility pattern analysis.
How can an information system make a good fit with human users so that it contributes to their autonomy and quality of life? How can we do that while respecting the civil and fundamental rights of users? The answers to these questions will help build a better and more just society in which information technology supports our shared moral values and our collective interests.
The Philosophy (PHIL) department analyzes information technology and its role in contemporary society from a philosophical and ethical perspective. We aim to understand how information technology affects, and is itself shaped by, society. We also aim to provide normative and ethical evaluations and assessments of information technologies and their correlated social and cultural impacts. We particularly focus on new and emerging information technologies, and develop approaches that support responsible innovation, as well as responsible design, development, and use of these technologies.
We consider a broad range of information technologies in our research, with a particular interest in big data, artificial intelligence, robotics, next-generation internet and internet-of-things, IT in medicine, wearable and implanted IT, virtual and augmented reality, and IT in smart cities. We focus on philosophical themes that include human-technology relations, well-being, autonomy, justice, responsibility, freedom, security, privacy, democracy, and human connectivity.
We have developed, and will continue to develop, specific approaches to support technology developers and policy makers in responsible innovation, and to support users. Our developments include:
- methods of value-sensitive design, which help developers to design IT systems in accordance with stakeholder values such as privacy and nondiscrimination;
- the technological mediation approach, which contributes to designing IT systems that have a better fit with the human mind and body and human needs, and which also helps users by anticipating in the design phase how they will respond to the technologies;
- ethical impact assessment, which helps technology developers, policy makers and users assess the potential impacts and ethical implications of innovations.
Robots often interact with their environment in a physical way: they grasp, lift, carry, and manipulate things. Designing robots that handle physical interaction elegantly and properly is a serious challenge. Because of the cyber-physical nature of these automated systems, this challenge can be addressed only at the boundary of ICT and physical (mechanical) parts, implying that the whole cyber-physical system must be modelled and understood.
The Robotics and Mechatronics (RAM) group investigates the applicability of modern systems, imaging and control methods to practical situations in the area of robotics. We follow a model-based, systems engineering / mechatronic approach, allowing for multi-domain modeling and thus finding novel solutions for robotic engineering problems. In this, we also exploit rapid prototyping using 3D printing of mechanical structures, actuators and sensors, and incorporate this in our design methodologies, as a form of engineering-based science.
To make the physical interaction of robots intrinsically safe, the energy exchanged in that interaction must be limited. Describing the physical part, and its interaction with the cyber part, from an energy-exchange perspective is therefore essential. The cyber (ICT) part can then best be modelled using a comparable approach; we use block diagrams, signal-flow graphs, and software component models for that. In all of these model types, parts are connected through idealized connections, and information or energy is exchanged only via those connections, so all our modelling approaches conform to the component-port-connector (CPC) principle (metamodel).
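The component-port-connector principle can be made concrete with a toy sketch: components expose ports, and all exchange happens only through connectors that link exactly two ports. The class names and values below are purely illustrative, not any of the group's tooling.

```python
# Toy illustration of the component-port-connector (CPC) metamodel:
# no component touches another component directly; values flow only
# through an explicit Connector between two Ports.

class Port:
    def __init__(self, name):
        self.name = name
        self.value = 0.0

class Component:
    def __init__(self, name, port_names):
        self.name = name
        self.ports = {p: Port(p) for p in port_names}

class Connector:
    """Idealized connection: transfers a value between exactly two ports."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def transfer(self):
        self.b.value = self.a.value

controller = Component("controller", ["out"])
motor = Component("motor", ["in"])
link = Connector(controller.ports["out"], motor.ports["in"])

controller.ports["out"].value = 0.7   # controller produces an output signal
link.transfer()                       # exchange happens only via the connector
print(motor.ports["in"].value)
```

The same pattern applies whether the quantity exchanged is a signal (block diagrams) or energy (bond graphs); only the semantics of the connector differ.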
Modeling the physical part from an energy-exchange perspective using bond graphs stimulates seeking design solutions in different engineering domains, as energy is the unifying factor across all engineering domains.
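The energy bookkeeping that bond-graph modelling makes explicit can be sketched with a plain mass-spring-damper simulation: energy stored in the mass and spring plus energy dissipated in the damper should account for the initial energy. This is an illustration of the energy-exchange viewpoint, not a bond-graph tool; all parameter values are arbitrary.

```python
# Mass-spring-damper simulated with semi-implicit Euler, tracking where
# the energy goes: stored (kinetic + potential) versus dissipated (damper).

m, k, d = 1.0, 4.0, 0.5   # mass [kg], spring stiffness [N/m], damping [Ns/m]
x, v = 1.0, 0.0           # initial displacement [m] and velocity [m/s]
dt = 1e-4                 # time step [s]
dissipated = 0.0          # energy lost in the damper so far [J]

def energy(x, v):
    """Kinetic plus potential (spring) energy stored in the system."""
    return 0.5 * m * v**2 + 0.5 * k * x**2

e0 = energy(x, v)
for _ in range(100_000):           # 10 s of simulation
    f = -k * x - d * v             # spring and damper forces
    v += (f / m) * dt              # update velocity first (semi-implicit)
    x += v * dt                    # then position, using the new velocity
    dissipated += d * v**2 * dt    # power lost in the damper, integrated

# The balance stored + dissipated should approximately match e0.
print(energy(x, v), dissipated, e0)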
This combined modelling approach inspired us to develop design methods and tools supporting multi-paradigm modelling for cyber-physical systems. For example, we developed Variable Impedance Actuators, in which the stiffness / resistance behavior at the interface to the environment is adaptable. This is beneficial for contact-rich activities, such as grasping items (especially moving ones) or walking by a robot.
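The idea behind a variable-impedance interface can be shown with a minimal control-law sketch: the commanded torque follows a spring-damper around a setpoint, and the stiffness and damping gains are lowered when soft contact behavior is wanted. The function, names, and gain values are illustrative assumptions, not the actuator designs developed by the group.

```python
# Variable-impedance sketch: tau = K (q_des - q) - D dq, with adjustable
# gains K (stiffness) and D (damping).

def impedance_torque(q, dq, q_des, stiffness, damping):
    """Spring-damper control law around a desired position q_des."""
    return stiffness * (q_des - q) - damping * dq

# Stiff tracking far from contact ...
tau_free = impedance_torque(q=0.9, dq=0.0, q_des=1.0,
                            stiffness=100.0, damping=5.0)
# ... soft behavior when contact is expected (lower K, lower D),
# limiting the energy exchanged in a bump or grasp.
tau_contact = impedance_torque(q=0.9, dq=0.0, q_des=1.0,
                               stiffness=10.0, damping=1.0)
print(tau_free, tau_contact)
```

In a Variable Impedance Actuator the change of stiffness is realized mechanically rather than only in software, which keeps the interaction safe even when the controller fails.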
Digitalization of society transforms existing services by businesses and governments into online services, and creates new services that critically depend on IT networks. This transformation leads to novel online business models, public service provisioning, and entertainment. We interact with these services via traditional IT devices as well as mobile and Internet‐of‐Things (IoT) devices.
This transformation has brought new challenges in terms of intelligence, interoperability, security, privacy and safety of services. Online services may not account for all possible contexts and configurations of devices in which they operate, decreasing rather than increasing service quality. Confidential and private data may be revealed or even abused more easily than before. Availability of services may decrease because of the inherent complexity, and hence fragility, of large‐scale decentralized IT networks. The Services Cyber security and Safety (SCS) group investigates the development of online services with reduced security, privacy and safety threats, and of dedicated services mitigating some of these threats.
To further enhance intelligence of online services, we investigate model‐driven techniques for situation awareness in domains such as smart logistics, early warning systems for incidents and disasters, and online fraud detection. Situation‐aware services combine data from a variety of sources, ranging from sensors and databases to log files and social media, and provide responses that contribute to user goals.
To protect online services against cyberattacks, we develop algorithms and protocols that provably secure the underlying IT infrastructure and are able to thwart or detect attackers compromising services. Specifically for services that collect and process personal or otherwise sensitive data, we investigate privacy‐enhancing technologies and design cryptographic techniques as well as anonymity mechanisms to avoid or reduce data theft and privacy violations.
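One simple privacy-enhancing building block is pseudonymization: replacing identifiers with keyed digests so that records can still be linked, while the identifiers themselves are never stored. The sketch below is an illustrative assumption, not the group's actual techniques; the key and identifier names are made up.

```python
# Keyed pseudonymization with HMAC-SHA256 (Python standard library).
# Without the secret key, pseudonyms can neither be reversed nor recomputed.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-securely-stored-key"  # illustrative key

def pseudonymize(identifier: str) -> str:
    """Deterministic keyed pseudonym: same input and key -> same output."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# The same identifier always maps to the same pseudonym, so records
# can be joined across datasets without exposing the identifier.
p1 = pseudonymize("patient-42")
p2 = pseudonymize("patient-42")
print(p1 == p2, len(p1))
```

In practice such a mechanism is only one layer; key management, access control, and anonymity guarantees against linkage attacks all require the stronger cryptographic techniques the group investigates.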
Personal and personalized services that are online and accessible through mobile or IoT devices must rely on biometric authentication for secure access and identity management. For this purpose, but also for law‐enforcement applications, we investigate biometric recognition techniques for implicit or mobile authentication, sometimes combined with large-scale identification.
Wireless communications, telecom, logistics, engineering and social networks increasingly dominate our society. These are complex systems with a high degree of uncertainty. How can we cope with that? The Stochastic Operations Research (SOR) group wants to understand the fundamental behavior of systems under uncertainty, analyze them, improve the efficiency of simulation, and, as a result, improve the systems themselves.
Our approach is through the design and analysis of relevant probabilistic models. We need a clear mathematical model to enable a better understanding of the system itself. Novel modeling and analysis are important because the problems we address cannot be solved with naïve approaches. For example, the probability of a rare event (such as a buffer overflow in a telecommunication system) cannot be evaluated by straightforward Monte-Carlo simulations, because these result in long running times and poor accuracy. The solution lies in developing new mathematical methods that involve a change of measure, whose efficiency must be proved analytically.
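The change-of-measure idea can be shown on a textbook example: estimating P(X > 20) for X ~ Exp(1), a probability of about 2e-9 that plain Monte Carlo with any reasonable sample size would almost surely estimate as zero. Sampling instead from a tilted exponential and reweighting by the likelihood ratio makes the rare event frequent. The parameter choices below are illustrative, not the group's methods.

```python
# Importance sampling (change of measure) for a rare tail probability.
# Target: P(X > c) under Exp(1); proposal: Exp(rate_is) with a much
# smaller rate, so that samples beyond c become common.
import math
import random

random.seed(0)

def estimate_tail(c, rate_is, n):
    """Sample from Exp(rate_is) and reweight by the likelihood ratio."""
    total = 0.0
    for _ in range(n):
        y = random.expovariate(rate_is)        # draw from the tilted density
        if y > c:
            # likelihood ratio of Exp(1) against Exp(rate_is) at y:
            # exp(-y) / (rate_is * exp(-rate_is * y))
            total += math.exp(-(1.0 - rate_is) * y) / rate_is
    return total / n

p_true = math.exp(-20)                          # exact tail probability
p_est = estimate_tail(c=20.0, rate_is=1 / 20, n=100_000)
print(p_est, p_true)
```

The choice rate_is = 1/c is close to the asymptotically optimal exponential tilt here; proving such efficiency properties analytically is exactly the kind of result the group pursues.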
One of the main goals in dealing with uncertainty is to understand how to design systems to optimize their performance, and to indicate how far we can go with performance improvements. Decision making capabilities of the models are important.
Understanding the possibilities and limits of decision making and performance improvement is at the core of building the right technology. This approach applies to many real-life technologies, from energy saving in wireless networks to hospital information systems.
Models need to be efficiently executable. Analytical solutions are not always available; then we need approximations and algorithmic solutions. For example, discrete-event simulations play an important role in providing practical solutions for systems logistics. Algorithms for social networks enable the analysis of central nodes and of dependencies between neighbors, which affect the network's behavior, for example the spreading of viruses or information.
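A discrete-event simulation can be illustrated with the simplest queueing model, the M/M/1 queue: exponential interarrival and service times, and an event list containing only the next arrival and the next departure. With arrival rate 0.5 and service rate 1.0, theory says the server should be busy about half the time; the sketch below (parameter values illustrative) reproduces that.

```python
# Minimal discrete-event simulation of an M/M/1 queue, estimating
# server utilization (fraction of time the server is busy).
import random

random.seed(1)

def mm1_utilization(lam, mu, horizon):
    """Simulate an M/M/1 queue up to the time horizon."""
    t = 0.0
    busy = 0.0
    queue = 0                                 # jobs currently in the system
    next_arrival = random.expovariate(lam)
    next_departure = float("inf")             # server starts idle
    while t < horizon:
        t_next = min(next_arrival, next_departure)
        if queue > 0:                         # server busy until next event
            busy += min(t_next, horizon) - t
        t = t_next
        if t >= horizon:
            break
        if t == next_arrival:                 # arrival event
            queue += 1
            next_arrival = t + random.expovariate(lam)
            if queue == 1:                    # server was idle: start service
                next_departure = t + random.expovariate(mu)
        else:                                 # departure event
            queue -= 1
            next_departure = (t + random.expovariate(mu)
                              if queue > 0 else float("inf"))
    return busy / horizon

print(mm1_utilization(lam=0.5, mu=1.0, horizon=100_000))
```

For M/M/1 the utilization equals the offered load lam/mu, so the simulation can be validated against the analytical value 0.5 — the interplay between analytical and algorithmic solutions described above.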
The relevant mathematical tools for the SOR group are stochastic processes, queueing systems, optimization, game theory, rare event simulations, information theory, network coding, network science, random graphs.
In today’s world, the desire for increasingly ubiquitous wireless connectivity continues to accelerate. Everything will be connected with everything, from high-quality TVs, to personal on-body and in-body devices, to refrigerators, to your car and house. Possible applications are almost unlimited. The wireless connectivity is completely integrated in the applications and part of our daily life.
The number of wireless devices is increasing rapidly. In the envisioned future, these devices become smaller and smaller, harvest the energy they need from the environment so that their lifetime is almost unlimited, achieve extremely high data rates, and communicate securely and reliably; thousands upon thousands of wireless devices can then operate in small environments without problems. The challenges are further exacerbated by rapid international developments in new frequency allocations, improvements in device and packaging technologies, and advances in signal processing techniques and protocols.
To address these challenges for smart wireless systems, the Telecommunication Engineering (TE) group is working towards a tight integration of antennas, circuits, and signal processing while including the environment. The hard part in realizing robust and secure wireless connectivity in a crowded environment is understanding this environment, such as the propagation of electromagnetic signals under various conditions and the impact of wireless signals on circuits.
The group specifically contributes to robust, efficient and secure wireless communication at the physical layer, enabling the envisioned, rapidly growing number of systems to communicate reliably over the radio channel, to utilize the limited spectrum optimally, and to support the ever-increasing demand for capacity. In other words: we make wireless communication happen.
The TE group creates models that map nature to the domain of wireless connectivity. This model-based approach is tested by experiments: large experiments are carried out to understand the propagation of radio signals and the electromagnetic compatibility issues associated with it.
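The simplest such propagation model is free-space path loss (the Friis relation), which any measurement campaign uses as a first-order baseline. The sketch below computes it; real channel models used in experiments are far richer, and the example frequencies and distances are illustrative.

```python
# Free-space path loss in dB: FSPL = 20 log10(4 * pi * d * f / c),
# with distance d in metres and frequency f in hertz.
import math

def fspl_db(distance_m, freq_hz):
    """First-order free-space propagation loss between isotropic antennas."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Loss over 100 m at 2.4 GHz (a typical Wi-Fi band): roughly 80 dB.
print(round(fspl_db(100, 2.4e9), 1))
```

Deviations of measured losses from this baseline, due to reflection, shadowing and multipath, are precisely what large-scale propagation experiments characterize.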