Research Areas

Services

A fundamental question for next-generation information systems (IS) service engineering is:

  • How can we systematically build trustworthy services that are realized through the complex cooperation of human and artificial agents?

    By trustworthy services, we mean services that are: (a) aligned with societal values and goals; (b) aware of, and protective against, the risks and threats posed to these values; (c) compliant with ethical and legal norms; and (d) verifiable and controllable. Moreover, these services have to maintain these properties in scenarios of continuous change in goals, threats, norms, technologies, etc. Thus, in order to remain trustworthy over time, these services must also be (e) evolvable.

This question, in turn, leads to the following sub-questions:

  • How can we systematically understand the social contexts for which IS services are designed? How can we systematically design these services such that they are at all times aligned with these contexts?

    This question requires us to be able to design services that are socially aware. In order to do that, we must be able to elicit, understand, and reason with proper representations of values, goals, and norms on the one hand, and of the risks and threats posed to them on the other. In particular, in enterprise settings, we need to understand the relation between these elements and enterprise structures and processes, as well as their relations to the supporting IS technology. Additionally, we must be able to systematically translate all these elements into service and system requirements.

  • How can we build service-supporting complex sociotechnical systems through the interoperation of autonomous components?

    This question requires us to be able to design service-supporting systems that are semantically transparent. Semantic transparency is an essential precondition for verifiability and controllability (i.e., for understanding the effect of one's interventions), but also for explainability. It is likewise a precondition for evolvability. Furthermore, given the distributed nature of these service-supporting systems, semantic transparency must be guaranteed in the face of semantic heterogeneity, that is, together with semantic interoperability. Semantic interoperability is also a precondition for system evolution; in fact, evolution is a special case of interoperability (diachronic interoperability). Interoperability constitutes a major challenge both at the social level (knowledge and information) and at the IT level (data sharing), and it demands a constant alignment between social-level/real-world concepts and the corresponding IT/digital constructs.

Because of the nature of these questions, our Services research strategy at the Services and Cybersecurity (SCS) group is grounded in a cutting-edge research program in the area of Ontology-Driven Conceptual Modeling.

By leveraging results from areas such as formal and applied ontology, cognitive science, formal and computational logics, linguistics, and model-based engineering, we develop adequate modeling support for service engineers, as well as mechanisms for improving existing service implementations.

First, we research models (and model-building support) for service engineering. In particular, we investigate how to systematically engineer qualitative computational domain representations that support people in solving problems in those domains, and how to use these representations in model-driven design approaches for IS services.

Second, we research mechanisms for service operations. In particular, we aim at improving system interoperability in heterogeneous environments, and we research the use of context data to automate adaptive interoperability and service delivery.

  • Models

    In order to design IS services that are efficient and effective, and that add value from the perspective of end-users, it is necessary to understand the phenomena in the domain. Based on this understanding, it becomes possible to develop the assumptions that underlie reasoning in the domain, and to agree, among parties and systems, on the interpretation of the information used in that reasoning. Our goal is to systematically engineer qualitative computational domain representations, distinguishing between the language aspect (representations are created with languages), the ontology aspect (representations relate to conceptions of reality), the cognitive aspect (representations should be aligned with how human cognition works) and the computational aspect (supporting the previous aspects by using computers).

    We apply insights from this research to requirements engineering, enterprise modeling, model-driven engineering and architectural design, by using ontologies to constrain and direct the development of requirements, technology-specific models and system architectures for IS services; a minimal illustration of this idea is sketched below. Furthermore, we improve automation in requirements engineering (e.g., by exploiting data richness in the form of user feedback and change logs). With model-driven engineering, we improve traceability between domain models and technology solutions, and address issues of legacy and heterogeneity. Finally, we develop reference models and architectures as template solutions that foster interoperability, reuse and understanding.
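
    The following Python sketch (using the rdflib package; the ontology namespace, terms and model contents are hypothetical, invented purely for illustration) checks a small service model against an ontological constraint, namely that every service must declare a provider:

      from rdflib import Graph, Namespace, RDF

      EX = Namespace("http://example.org/onto#")  # hypothetical ontology namespace

      g = Graph()
      g.parse(data="""
          @prefix ex: <http://example.org/onto#> .
          ex:PaymentService a ex:Service .                             # no provider declared
          ex:BillingService a ex:Service ; ex:providedBy ex:Finance .
      """, format="turtle")

      # Ontology-driven constraint: every instance of ex:Service needs a provider.
      for service in g.subjects(RDF.type, EX.Service):
          if (service, EX.providedBy, None) not in g:
              print(f"Constraint violation: {service} declares no ex:providedBy")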

  • Mechanisms

    Services enable coordinated action of people, organizations and machines. Interoperability is an essential property for realizing services. Since people, organizations and machines, as well as their contexts, evolve, adaptability is another important property. Interoperability is the ability of systems to exchange information and to use this information as intended, i.e., to exchange information with preservation of meaning. Present-day IS are used by data-driven organizations and for data-driven purposes, with the data being generated or stored by a wide variety of heterogeneous data sources. We focus on improving semantic and pragmatic interoperability mechanisms to enable shared understanding of data, data integration, minimal loss of information, and proper actions in the operational context at hand, all contributing to meaningful services. Because of the variety, distributed location and decentralized ownership of data, interoperability is highly challenging. We apply well-founded ontologies to provide technology-independent expressions of domain terms and link these to the myriad of existing data encodings. In this way, we can automate the recognition and processing of heterogeneous data, and address the aforementioned issues more efficiently and effectively.

    Adaptability is the ability of systems to adjust to new conditions. By using abstraction and focusing on what is stable, our architectural design approaches facilitate adaptability at design and deployment time. In addition, we research context exploitation methods and mechanisms that allow systems to dynamically adapt their service offerings to the context-dependent needs of end-users at runtime. We design mechanisms for real-time context data analysis and semantic matching to detect new situations that require adaptation, as well as mechanisms for situation-triggered service adaptation, based on assumptions about situation-dependent user needs and on user feedback; a minimal sketch of this rule-based matching follows below. Since adaptability mechanisms need context data from various data sources as input, interoperability is a prerequisite, and ontologies can be used to enable the automated discovery of situations. Both interoperability and adaptability may benefit from machine learning approaches that improve automated model building.
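
    The following plain-Python sketch (the context attributes, situation rules and adaptation actions are all invented for illustration, not taken from our systems) matches a context snapshot against situation rules; each matching rule triggers an adaptation of the service offering:

      from dataclasses import dataclass
      from typing import Callable, List, Tuple

      @dataclass
      class Context:
          """One snapshot of (already integrated) context data."""
          location: str
          noise_db: float
          user_moving: bool

      # Situation rules: a predicate over the context, paired with an adaptation.
      RULES: List[Tuple[Callable[[Context], bool], str]] = [
          (lambda c: c.user_moving and c.noise_db > 70, "deliver notifications by vibration"),
          (lambda c: c.location == "office" and not c.user_moving, "deliver via desktop client"),
      ]

      def adapt(ctx: Context) -> List[str]:
          """Return the adaptations triggered by the current situation."""
          return [action for matches, action in RULES if matches(ctx)]

      print(adapt(Context(location="street", noise_db=82.0, user_moving=True)))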

Cybersecurity

Due to the central role and importance of data, our Cybersecurity research strategy at the Services and Cybersecurity (SCS) group follows a data-centric approach. This approach tackles the challenge of defending computer systems as a whole from two different angles: mitigating the risks posed by ubiquitous data, and seizing the opportunities provided by data richness. First, we research security mechanisms that protect data not only while it is stored and transmitted over networks, as in conventional systems, but also while it is being processed. Second, we research the use of data for security, and envision a world in which the continuously increasing amounts of data are utilized to identify, analyze, prevent, and respond to cyber-threats. Both research directions are based on the analysis of existing systems and software, but also on the design of novel systems.

  • Security for Data

    Data breaches happen in various forms, but can ultimately be attributed mainly to improper protection of data. While traditional encryption technology can be used to protect data at rest and in transit, it requires a decryption step for processing the data, which in turn exposes the data in the clear and makes it vulnerable to attacks. To close this security gap, we investigate the construction of cryptographic protocols based on non-traditional encryption, such as homomorphic encryption, that allow for the processing of data under encryption, without the need to decrypt; a toy illustration is sketched below. Growing amounts of data and the increasing complexity of processing algorithms are complicating factors that largely lead to efficiency problems. We approach this by trading some security for efficiency. Concretely, we explore allowing some quantifiable leakage (e.g., quantified in terms of differential privacy) to gain efficiency. By studying the success of possible leakage-abuse attacks, we can quantify the loss in security and achieve application-specific, practical trade-offs between security and efficiency.

    Lastly, to effectively protect against data breaches, we need to control who has, or has had, access to data at a given point in time. Traditional access control mechanisms typically rely on complete trust in a single system or administrator, which constitutes a single point of failure. To mitigate this issue, we study decentralized access control approaches based on attribute-based encryption and distributed ledger technologies.
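
    The following sketch implements textbook Paillier encryption, an additively homomorphic scheme, in plain Python, with artificially small and insecure parameters chosen purely for illustration: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so an untrusted party can aggregate encrypted values without ever decrypting them.

      import math
      import random

      # Textbook Paillier with tiny, INSECURE demo primes (real keys need >= 2048 bits).
      p, q = 293, 433
      n = p * q
      n2 = n * n
      g = n + 1
      lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p - 1, q - 1)

      def L(x):
          return (x - 1) // n

      mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse (Python 3.8+)

      def encrypt(m):
          while True:
              r = random.randrange(1, n)  # fresh randomness per ciphertext
              if math.gcd(r, n) == 1:
                  return (pow(g, m, n2) * pow(r, n, n2)) % n2

      def decrypt(c):
          return (L(pow(c, lam, n2)) * mu) % n

      a, b = 17, 25
      ca, cb = encrypt(a), encrypt(b)
      # Additive homomorphism: the product of two ciphertexts encrypts the sum.
      assert decrypt((ca * cb) % n2) == (a + b) % n

    In practice, we rely on vetted cryptographic libraries and security parameters of appropriate size; the point of the sketch is only the homomorphic property itself.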

  • Data for Security

    Traditional security solutions are targeted at protection from known threats, and are predominantly based on insights acquired through costly manual analysis, which is often too slow to cope with the rapid emergence of new threats. To overcome this, we aim at fully automated threat identification, analysis, and response. We research the use of artificial intelligence, such as machine-learning-based threat classification and clustering, to automatically analyze known threats and their corresponding mitigation strategies, and to learn prediction models that allow for the identification of new, previously unseen threats and for the adaptation of mitigation approaches; a minimal sketch of this learning step follows below.

    Moreover, to stay one step ahead of possible attackers, we explore automated security testing techniques, such as static and dynamic analysis, to learn models of vulnerable system and software components and their associated patches, which we use to discover and patch new vulnerabilities. We put a special focus on the threat of data leakage, for which we also build new (automated) attacks for data exfiltration and leakage exploitation, and use these to learn models that detect and quantify data leakage. Throughout all our research in this context, we make extensive use of simulations and real-world experiments to validate the achieved results.
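
    The sketch below (Python with scikit-learn; the feature encoding, feature values and labels are hypothetical stand-ins for real threat telemetry) trains a classifier on feature vectors of known threats and uses it to label a new, unseen sample:

      from sklearn.ensemble import RandomForestClassifier

      # Hypothetical feature vectors for known threats:
      # [connections per minute, payload entropy, failed logins]
      X_known = [
          [950, 7.8, 40], [900, 7.5, 35],  # labeled "botnet"
          [5, 7.9, 0], [3, 7.6, 1],        # labeled "exfiltration"
          [12, 3.1, 2], [8, 2.9, 1],       # labeled "benign"
      ]
      y_known = ["botnet", "botnet", "exfiltration", "exfiltration", "benign", "benign"]

      clf = RandomForestClassifier(n_estimators=100, random_state=0)
      clf.fit(X_known, y_known)

      # Classify a new, unseen sample and inspect the model's confidence.
      sample = [[7, 7.7, 0]]
      print(clf.predict(sample), clf.predict_proba(sample))

    Real deployments would of course involve far richer feature sets, large labeled corpora, and careful evaluation against unseen threat families.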