Digitalization brings many new opportunities for businesses and governments by fostering the development of innovative online services. However, this development also brings new challenges, notably in terms of intelligence, interoperability, security and privacy. The mission of the SCS group is to advance the development of innovative online services with quality improved through context alignment and with security and privacy threats reduced.
We work on methods and techniques for requirements conceptualization, architecture design, and model-driven engineering of service systems. We focus on data-driven services that are able to make sense of their context and can react reliably and promptly to changing situations. We develop service ontologies and service composition frameworks to realize semantic interoperability and meaningful enterprise services. To protect these services against cyberattacks, we develop algorithms and protocols that provably secure the underlying IT infrastructure (within dedicated attacker models) and that are able to thwart or detect attacks. For services that collect and process sensitive data, we build privacy-enhancing technologies and design data protection and anonymization techniques to avoid or reduce data theft and privacy violations.
We apply and validate our results in various domains where data-driven innovation plays an important role (healthcare, logistics, emergency management, smart cities), and where smart, secure and privacy-aware services are vital to society.
People and organizations increasingly depend on Information Systems (IS) services. We use IS services all the time in our personal and working lives, and our institutional and business organizations cannot function without IS services.
At the same time, IS are becoming increasingly complex. Among the major drivers of the complexity of sociotechnical IS are the perpetual and mutually reinforcing evolutions of social behavior and Information Technology (IT). User demands push the development of new IT, and new IT offers new opportunities for using IS, which in turn lead to new demands. For example, technological advances in sensing, big data and data analytics have enabled the development of context-aware IS and the offering of context-adaptable, smart services. Furthermore, modern IS services effectively coordinate the actions of multiple human and artificial agents (including institutional ones). In other words, modern IS services are realized by complex distributed systems constituted by autonomous (or autonomously developed and evolving) components.
A fundamental question for next-generation IS service engineering is:
- How to systematically build trustworthy services that are realized by the complex cooperation of human and artificial agents?
By trustworthy services, we mean services that are: (a) aligned with societal values and goals; but also (b) aware and protective of risks and threats posed to these values; (c) compliant with ethical and legal norms; and (d) verifiable and controllable. Moreover, these services have to maintain these properties in scenarios of continuous change in goals, threats, norms, technologies, etc. Thus, in order to maintain trustworthiness over time, these services also have to be (e) evolvable.
This question, in turn, leads to the following sub-questions:
- How can we systematically understand the social contexts for which IS services are designed? How can we systematically design these services such that they are at all times aligned with these contexts?
This question requires us to be able to design services that are socially aware. To do that, we must be able to elicit, understand and reason with proper representations of, on the one hand, values, goals and norms and, on the other hand, the risks and threats posed to them. In particular, in enterprise settings, we need to understand the relation between these elements and enterprise structures and processes, as well as their relations to the supporting IS technology. Additionally, one must be able to systematically translate all these elements into service and system requirements.
- How can we build service-supporting complex sociotechnical systems with the interoperation of autonomous components?
This question requires us to be able to design service-supporting systems that are semantically transparent. Semantic transparency is an essential precondition for verifiability and controllability (i.e., for understanding the effect of one's interventions), but also for explainability. It is likewise a precondition for evolvability. Furthermore, given the distributed nature of these service-supporting systems, one needs to guarantee semantic transparency through semantic interoperability. Semantic interoperability is also a precondition for system evolution; in fact, evolution is a special case of interoperability (diachronic interoperability). Interoperability constitutes a major challenge both at the social level (knowledge and information) and at the IT level (data sharing), and it demands a constant alignment between social-level, real-world concepts and the corresponding IT/digital constructs.
Because of the nature of these questions, our Services research strategy at the Services and Cybersecurity (SCS) group is grounded in a cutting-edge research program in the area of Ontology-Driven Conceptual Modelling.
By leveraging results from areas such as formal and applied ontology, cognitive science, formal and computational logics, linguistics, and model-based engineering, we develop adequate modeling support for service engineers as well as mechanisms for improving existing service implementations.
First, we research models (and model-building support) for service engineering. In particular, we investigate how to systematically engineer qualitative computational domain representations that support people in solving problems in those domains, and how to use these representations in model-driven design approaches for IS services.
Second, we research mechanisms for service operations. In particular, we aim at improving system interoperability in heterogeneous environments and research the use of context data to automate adaptive interoperability and service delivery.
In order to design IS services that are efficient and effective, and that add value from the perspective of the end-users, it is necessary to understand the phenomena in the domain. Based on this understanding, it is possible to develop the assumptions that underlie reasoning in the domain, and to agree, among parties and systems, upon the interpretation of the information used in that reasoning. Our goal is to systematically engineer qualitative computational domain representations, distinguishing between the language aspect (representations are created with languages), the ontology aspect (representations relate to conceptions of reality), the cognitive aspect (representations are to be aligned with how human cognition works) and the computational aspect (supporting the previous aspects by using computers). We apply insights from this research to requirements engineering, enterprise modeling, model-driven engineering and architectural design, by using ontologies to constrain and direct the development of requirements, technology-specific models and system architectures for IS services. Furthermore, we improve automation in requirements engineering (e.g., by making use of data richness in the form of user feedback and change logs). With model-driven engineering we improve traceability between domain models and technology solutions, and address issues of legacy and heterogeneity. Finally, we develop reference models and architectures as template solutions that foster interoperability, reuse and understanding.
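As a minimal illustration of how an ontology can constrain a model, consider the sketch below: an invented domain ontology types the relations a (hypothetical) service model may use, and a simple check flags statements that violate those types. All class, relation and instance names are made up for illustration; real ontology-driven tooling is considerably richer.

```python
# Minimal sketch: a domain ontology as typed relations, used to check
# that a hypothetical service model only contains well-formed statements.
# All names (classes, relations, instances) are invented for illustration.

# ontology: relation name -> (allowed subject class, allowed object class)
ONTOLOGY = {
    "treats": ("Physician", "Patient"),
    "prescribes": ("Physician", "Medication"),
}
# instance typing: which class each instance belongs to
TYPES = {"dr_lee": "Physician", "pat_42": "Patient", "aspirin": "Medication"}

def violations(model):
    """Return the statements whose subject/object types break the ontology."""
    bad = []
    for subj, rel, obj in model:
        dom, rng = ONTOLOGY[rel]
        if TYPES[subj] != dom or TYPES[obj] != rng:
            bad.append((subj, rel, obj))
    return bad

model = [("dr_lee", "treats", "pat_42"),      # well-formed
         ("pat_42", "prescribes", "aspirin")]  # a Patient cannot prescribe
print(violations(model))  # -> [('pat_42', 'prescribes', 'aspirin')]
```

The same pattern, scaled up with expressive ontology languages and reasoners, is what allows ontologies to constrain and direct requirements, models and architectures rather than merely document them.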
Services enable coordinated action of people, organizations and machines. Interoperability is an essential property to realize services. Since people, organizations and machines, as well as their context, evolve, adaptability is another important property. Interoperability is the ability of systems to exchange information and use this information as intended, i.e., to exchange information with meaning preservation. Present-day IS are used by data-driven organizations and for data-driven purposes, the data being generated or stored by a wide variety of heterogeneous data sources. We focus on improving semantic and pragmatic interoperability mechanisms, to enable shared understanding of data, data integration, minimal loss of information and proper actions in the operational context at hand, all contributing to meaningful services. Because of the variety, distributed location and decentralized ownership of data, interoperability is highly challenging. We apply well-founded ontologies to provide technology-independent expressions of domain terms and link these to the myriad existing data encodings. In this way we can automate the recognition and processing of heterogeneous data, and more efficiently and effectively address the mentioned issues. Adaptability is the ability of systems to adjust to new conditions. By using abstraction and focusing on what is stable, our architectural design approaches facilitate adaptability at design and deployment time. However, we also research context exploitation methods and mechanisms that allow systems to dynamically adapt their service offerings to context-dependent needs of end-users at runtime. We design mechanisms for real-time context data analysis and semantic matching to detect new situations that require adaptation. Furthermore, we design mechanisms for situation-triggered service adaptation, based on assumptions about situation-dependent user needs and on user feedback.
Since adaptability mechanisms need context data from typically various data sources as input, interoperability is a prerequisite and ontology can be used to enable automated discovery of situations. Both interoperability and adaptability may benefit from machine learning approaches to improve automated model building.
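The interplay between interoperability and adaptability can be sketched in miniature: heterogeneous source records are first translated into shared ontology terms, and a situation-detection rule then operates over that shared vocabulary to trigger adaptation. All field names, ontology terms and the threshold below are invented for illustration.

```python
# Illustrative sketch (invented names): map heterogeneous context data to
# shared ontology terms, then detect a situation that triggers adaptation.

# per-source mapping of local field names to shared ontology concepts
ONTOLOGY_MAP = {
    "sensor_a": {"tmp_c": "temperature_celsius", "loc": "location"},
    "sensor_b": {"temperature": "temperature_celsius", "position": "location"},
}

def to_shared_terms(source, record):
    """Translate a source-specific record into the shared vocabulary."""
    mapping = ONTOLOGY_MAP[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

def detect_situation(record):
    """A toy situation-detection rule over the shared vocabulary."""
    if record.get("temperature_celsius", 0) > 40:
        return "overheating"
    return None

# two sources with different encodings, unified via the ontology mapping
a = to_shared_terms("sensor_a", {"tmp_c": 45, "loc": "hall"})
b = to_shared_terms("sensor_b", {"temperature": 21, "position": "lab"})
print(detect_situation(a))  # -> overheating
print(detect_situation(b))  # -> None
```

In a realistic setting the mapping would be derived from a well-founded ontology rather than hard-coded, and situation detection would combine semantic matching with learned models, but the division of labor is the same.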
With the digitalization of our society, the amount of collected data and the computational demands are ever-increasing. However, the underlying, vital digital systems are threatened by a plethora of cyber-attacks. Examples include impersonation attacks and data exfiltration attacks that frequently lead to mega breaches exposing sensitive data from millions of innocent people to criminals. On an almost daily basis, newspapers world-wide report about such cyber-attacks and the impact that they have on our digital society.
Due to the central role and importance of data, our Cybersecurity research strategy at the Services and Cybersecurity (SCS) group follows a data-centric approach. This approach tackles the challenge of defending computer systems as a whole from two different angles, namely by mitigating the risks imposed by ubiquitous data and by seizing the opportunities provided by data richness. First, we research security mechanisms that protect data not only while it is stored or transmitted over networks, as in conventional systems, but also during processing. Second, we research the use of data for security and envision a world in which the continuously increasing amounts of data are utilized to identify, analyze, prevent, and respond to cyber-threats. Both research directions are based on the analysis of existing systems and software as well as on the design of novel systems.
- Security for Data
Data breaches happen in various forms but are ultimately attributable mainly to improper protection of data. While traditional encryption technology can be used for the protection of data at rest and in transit, it requires a decryption step for processing the data, which in turn exposes the data in the clear and makes it vulnerable to attacks. To close this security gap, we investigate the construction of cryptographic protocols based on non-traditional encryption, such as homomorphic encryption, that allow for the processing of data under encryption without the need to decrypt. Growing amounts of data and the increasing complexity of processing algorithms are complicating factors that often lead to efficiency problems. We approach this by sacrificing some security for efficiency. Concretely, we explore allowing some quantifiable leakage (e.g., quantified in terms of differential privacy) to gain efficiency. By studying the success of possible leakage-abuse attacks, we can quantify the loss in security and achieve application-specific, practical tradeoffs between security and efficiency. Lastly, to effectively protect against data breaches, we need to control who has or had access to data at a given point in time. Traditional access control mechanisms typically rely on complete trust in a single system or administrator, which constitutes a single point of failure. To mitigate this issue, we study decentralized access control approaches based on attribute-based encryption and distributed ledger technologies.
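To make "processing data under encryption" concrete, the sketch below adds two values homomorphically in the style of the Paillier cryptosystem: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so the party doing the computation never sees the inputs in the clear. The tiny fixed primes are for demonstration only; any real deployment would use a vetted library and cryptographically large keys.

```python
# Illustrative sketch of additively homomorphic encryption (Paillier-style):
# two values are added *under encryption*. Toy key size -- insecure on purpose.
import math
import secrets

p, q = 1000003, 1000033          # toy primes (far too small for real use)
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)     # Carmichael function lambda(n)
g = n + 1                        # standard generator choice for Paillier
mu = pow(lam, -1, n)             # with g = n + 1, mu = lam^{-1} mod n

def encrypt(m):
    r = secrets.randbelow(n - 1) + 1         # random blinding factor
    while math.gcd(r, n) != 1:               # ensure r is invertible mod n
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n           # L(x) = (x - 1) / n
    return (L * mu) % n

c1, c2 = encrypt(111), encrypt(222)
c_sum = (c1 * c2) % n2           # multiplying ciphertexts adds plaintexts
print(decrypt(c_sum))            # -> 333, computed without decrypting inputs
```

The scheme supports only addition under encryption; fully homomorphic schemes, which also support multiplication, are far more expensive, which is exactly the kind of security-versus-efficiency tradeoff discussed above.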
- Data for Security
Traditional security solutions are targeted towards protection from known threats and are predominantly based on insights acquired through costly manual analysis, which is often too slow to cope with the rapid emergence of new threats. To overcome this, we aim at fully automated threat identification, analysis, and response, and research the use of artificial intelligence, such as machine learning-based threat classification and clustering. By automatically analyzing known threats and their corresponding mitigation strategies, we learn prediction models that allow for the identification of new, unseen threats and for adapted mitigation approaches. Moreover, to be one step ahead of possible attackers, we explore automated security testing techniques, such as static and dynamic analysis, to learn models of vulnerable system and software components and their associated patches, which we use to discover and patch new vulnerabilities. We put a special focus on the threat of data leakage, for which we also build new (automated) attacks for data exfiltration and leakage exploitation that we use to learn models to detect and quantify data leakage. Throughout all our research in this context, we make extensive use of simulations and real-world experiments to validate the achieved results.
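The idea of learning from labeled threats in order to classify unseen ones can be reduced to a toy example: a nearest-centroid classifier over hypothetical feature vectors (e.g., scan rate, failed-login rate, outbound volume). Real systems use far richer features and models; this only shows the principle of generalizing from known threats to new samples.

```python
# Toy illustration of learning-based threat classification: a
# nearest-centroid classifier over invented feature vectors.
import math

# labeled training data: feature vector -> known threat class
training = [
    ([0.9, 0.1, 0.0], "port_scan"),
    ([0.8, 0.2, 0.1], "port_scan"),
    ([0.1, 0.9, 0.0], "brute_force"),
    ([0.2, 0.8, 0.1], "brute_force"),
    ([0.1, 0.1, 0.9], "exfiltration"),
    ([0.0, 0.2, 0.8], "exfiltration"),
]

def centroids(samples):
    """Average the feature vectors of each threat class."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(model, vec):
    """Assign the class whose centroid is nearest (Euclidean distance)."""
    return min(model, key=lambda lbl: math.dist(model[lbl], vec))

model = centroids(training)
print(classify(model, [0.05, 0.15, 0.85]))  # -> exfiltration
```

A production pipeline would replace the hand-crafted vectors with features extracted from traffic and logs, and the centroid model with clustering or deep models, but the learn-then-predict structure is the same.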
- Output and Impact
Our research covers the complete range of steps necessary to develop secure solutions for the real world, starting from the analysis of existing attacks and vulnerabilities and their proper modelling, to the engineering of targeted protection, mitigation, detection, and response solutions, all the way to the implementation of prototypes and proofs-of-concept, combined with extensive evaluation. In each of these steps, we pay explicit attention to the demands imposed by the socio-economic context and by the involved human factor, which can be part of the threat and part of the solution at the same time.
We aim for real, tangible societal and economic impact. To ensure this, our research is very much use-inspired and largely driven by real-world challenges. We focus our research on challenges from three application domains:
- Health and healthcare industry: Patient data and other medical data is extremely sensitive and brings about particular data security challenges, for instance due to its structure, its size, and the fact that it is typically distributed over many different parties. This makes the health and healthcare industry one of our key application domains.
- Software and Internet industry: Digital data is typically processed by software and communicated and shared via the Internet. Because of this, the software and Internet industry form the backbone of the data-driven economy, which makes it an important application domain for our research.
- Cybersecurity industry: The third major application domain of our research is the cybersecurity industry itself. Since we research existing and develop new security solutions, many of our research questions are motivated by shortcomings of existing security solutions and real-world challenges posed by the cybersecurity industry.
We are committed to performing open and well-documented research that eases reproducibility, reusability, and collaboration, and thereby allows for effective knowledge transfer. Key components in this approach are, next to publishing our research at top security conferences and journals, the release of open-source tools and datasets. We follow the well-established guidelines in our community for the responsible disclosure of previously unknown vulnerabilities and collaborate with vendors to design suitable patches or mitigations. Furthermore, to ensure that innovations land in society, we support startups in their infancy and also target the creation of new businesses from scratch.
Our cybersecurity education strategy is tightly coupled with our research strategy. We offer fundamental bachelor courses on cybersecurity (ranging from cryptography and data security, through software, web and system security, to AI for security) that are mandatory in the computer science bachelor program, to provide our students with the basics and to prepare them for more advanced studies. On the master level, we coordinate the 4TU Cybersecurity specialization of our computer science master, which delivers cybersecurity graduates with a T-shaped profile combining 2/3 deep technical knowledge with 1/3 socio-economic knowledge in cybersecurity. The curriculum is designed in collaboration with our advisory board, consisting of senior leaders from industry and government, to meet the demands of the real world. We offer advanced cybersecurity master courses that are tightly coupled with our research, ranging from secure data management, through software and system security, to secure cloud computing and privacy-enhancing technologies. Furthermore, to educate our future cybersecurity innovators and entrepreneurs, we coordinate our participation in the EIT Digital Cybersecurity Master, which puts a particular focus on innovation and entrepreneurship in an international context.
We involve students in our research as much as possible, mostly when they start working on their final projects. We stimulate master projects on real-world challenges in collaboration with our industry partners (for instance, through dedicated internships). While our courses in the Master program are mostly targeted towards our Master students, they are offered to our industry partners as well, and we offer dedicated educational programs, such as the PdEng program, that support our industry partners in upskilling their current workforce, even when employees already hold a Master's degree in cybersecurity. While most of our teaching happens on campus, many of our courses allow for remote (online) participation via video conferences, live streams, and other forms of digital teaching.
We aim to spark students' interest in cybersecurity, and particularly in our research, as early as possible by supporting and mentoring students in so-called Capture The Flag (CTF) competitions: game-based information security competitions aimed at teaching how to identify, exploit, and patch software vulnerabilities and, as a consequence, how to write secure code. Together with the Twente Hacking Squad (THS), our own CTF team, we organize cybersecurity workshops to introduce our students to practical security problems. Our students participate in national and international competitions, such as the European Cyber Security Challenge.