IHCI 2024 Programme

November 13-15, 2024
Enschede, The Netherlands

Day 0

November 12 Program (Pre-conference workshops)

  • 13:30-15:00 | LA2518 | Interpretability of Large Language Models by dr. Anna Machens
  • 15:00-16:30 | LA2518 | Unity 3D by Luca Frösler

Day 1

November 13 Program

Day 2

November 14 Program

Day 3: Social event

November 15: Social event



Workshop, keynote and talk abstracts

  • Tuesday
    Workshop: Unity 3D
    L.M. Frösler (Luca)
    Developer
    Workshop: Interpretability of Large Language Models
    dr. A.K. Machens (Anna)
    Researcher

    Large Language Models (LLMs) such as ChatGPT are becoming increasingly integral to AI applications; however, understanding their inner workings remains a challenge. Interpretable Machine Learning (IML) techniques aim to shed light on model decision-making processes, yet applying these methods to complex, deep-learning systems like LLMs is notoriously difficult. In this workshop, we will introduce the fundamentals of both LLMs and IML, highlighting the unique challenges that arise when attempting to interpret LLMs. Attendees will gain insights into why achieving transparency in these models is hard, but also learn practical approaches to make sense of their outputs. Through real-world examples, we will demonstrate existing strategies to enhance the interpretability of LLMs, fostering trust and understanding in AI-powered interactions. This 1.5-hour workshop is for those interested in expanding their toolkit for creating more transparent and explainable AI systems.

  • Wednesday

    Keynotes

    About the reliability and validity of using AI to analyze intensive longitudinal data
    prof.dr.ir. B.P. Veldkamp (Bernard)
    Full Professor

    The availability of sensors, eye-trackers, wearables, smartwatches, and other digital devices facilitates the collection of new types of data that can be used for measurement. The question is how to analyze them. Several statistical/psychometric models are available, but even though they have been applied successfully in many testing programs, they do have their limits with respect to the kind of data they can be applied to. Artificial intelligence (AI) offers many methods for dealing with these new and more complex data sets. They do have some limitations when it comes to reliable and valid measurement though. The question arises how to apply them in a valid way when dealing with intensive longitudinal data. To answer this question, the benefits and disadvantages of artificial intelligence are illustrated first. Then the argument-based approach to validity (Kane) is introduced. It is illustrated how this approach can be applied in the field of AI methods for intensive longitudinal data analysis. Finally, some recommendations for designing a blueprint for a wearables validation pipeline are provided.

    Are you attending? - Monitoring attentional engagement through synchrony in physiological signals
    Prof. dr. A.M. Brouwer (Anne-Marie)
    Endowed Professor - Artificial Intelligence | Professor - Donders Centre for Cognition | Professor - Donders Institute for Brain, Cognition and Behaviour

    Continuous and implicit measures of individuals’ attention would be useful for human computer interaction. Brain responses can tell us about individuals’ level and focus of attention, but it is not straightforward to retrieve this information in real life scenarios. In this talk, I will discuss research showing that the degree to which EEG signals vary in a similar way over time between individuals is associated with attentional engagement. Our findings that this also holds for other physiological signals (heart rate and skin conductance), under various real-life or life-like circumstances, and that it predicts subsequent behavior, make interpersonal physiological synchrony a promising marker of attention for applied settings as well as ecologically valid research.

    Special Session

    Human Factors in Cybersecurity
    Lector Peter Roelofsma and dr. Jan Treur
    Risk Management & Cyber Security Group of The Hague University of Applied Sciences

    Human errors, such as biases in cyber risk judgement and decision making, falling for phishing emails, using weak passwords, or accidentally leaking data, can make a secure network vulnerable. These mistakes are not just made by junior staff; even executives and organizational decision makers can be victims. It is clear that no one is immune, making human factors a critical concern for every organization. Attackers use tactics like phishing emails, pretexting, or baiting to psychologically manipulate people into revealing confidential information or taking actions that compromise security. Burnout, fatigue, misalignment, lack of communication and cognitive overload can impair decision-making and weaken the effect of security measures. Cybersecurity regulations and techniques must balance an efficient and motivating work environment against security protection, ensuring that measures to enhance security do not intrude on the individual working environment in an irritating manner. How such a balance can be found is an important and interesting challenge for the field of human-computer interaction.

    Invited talks

    Conversational agents for the mimicry of bad human behaviour
    dr. S. Borsci (Simone)
    Associate Professor of Human Factors and Cognitive Ergonomics

    Conversational agents are increasingly permeating various aspects of our lives, ranging from customer service to personal assistants. These AI-driven entities are designed to facilitate smoother interactions, offering helpful and polite responses. However, programming conversational agents to replicate negative behaviors, such as rudeness, prejudice, or even aggression, can serve specific purposes such as training modules in conflict resolution, psychological studies, and the refinement of AI behavior moderation systems. For instance, these agents can be used in controlled environments to train customer service staff on how to handle difficult interactions, or aid in psychological research by simulating stress-inducing situations. However, this approach carries significant risks. If not properly managed, these AI models could reinforce undesirable behavior or normalize hostility in communications. Thus, the development and deployment of such agents must be handled with caution, ensuring they are used ethically and within contexts that clearly benefit societal or scientific advancements. This talk will present potential uses of this approach for research.

    From Theory to Practice: Simulation in Anaesthesiology through Active Learning Experiences
    Krista Hoek
    PhD Candidate | Anaesthesiologist | Intensivist | Department of Anaesthesiology & Department of Intensive Care | Leids Universitair Medisch Centrum

    Simulation training has emerged as a cornerstone of medical education in anaesthesiology, providing an immersive and experiential learning environment for healthcare professionals. In recent years, virtual reality (VR) has introduced a groundbreaking paradigm to simulation-based training, uncovering many benefits for both educators and learners. By enabling high-fidelity practice in situational awareness, decision-making, and multitiered response systems, VR offers a unique platform for honing clinical skills within a safe yet challenging environment. This talk offers an overview of the current landscape of simulation education in medical settings, addressing challenges faced by educators and proposing solutions to enhance the efficacy of simulation-based learning. Additionally, it provides a comparative analysis of traditional manikin-based simulation and VR simulation, highlighting their respective strengths and limitations in medical training.

    Real-time mental workload classification for future cockpit pilots, aiding in operational efficiency
    Manon Tolhuisen
    AI scientist at Thales Netherlands

    Fighter pilots have to deal with environments of increasing complexity. Consequently, their mental demand increases because more information needs to be processed during time-critical situations. An imbalance between increasing task demands and limited available cognitive resources can lead to a drop in the pilot’s performance and raise the risk of human error. The Enhanced Pilot Interfaces & Interactions for fighter Cockpit (EPIIC) project, funded by the European Defence Fund (EDF), aims to design the next-generation fighter cockpit, focusing on speeding up the pilot’s Observe, Orient, Decide and Act (OODA) loop. Part of this fighter cockpit design is the integration of a real-time mental workload monitoring system, aiding in pilots’ operational efficiency. As a contribution to the EPIIC project, Thales NL will develop a machine learning model for mental workload classification. The model will be integrated within Thales’ HumAn peRformance MonitOring and eNhancement sYstem (HARMONY). During our invited talk, we will elaborate on our data acquisition protocol, the challenges in the development of a mental workload machine learning model, and our motivation for using the HARMONY system.

  • Thursday

    Keynotes

    Motivated trust in AI: Looking at technology from different perspectives
    Prof. Sandra Fisher
    Professor of Organizational and Business Psychology University of Münster

    Artificial intelligence (AI) applications are currently being explored in multiple fields of organizational work, seeking to exploit efficiency potentials but also bringing considerable changes in work processes and roles. For instance, AI systems can increasingly be found in typical Human Resource Management fields, such as leadership or recruiting. However, a core precondition for successful application of AI-based technologies is that involved persons place sufficient trust in these technologies, for example, by following recommendations of the AI system, by allowing AI technologies to access personal data, or by deciding to implement an AI system in the first place. Extant models of trust in technology fall short of considering the black-box nature of AI systems. Moreover, extant models focus on rational assessments and neglect the fact that different users approach technologies with different needs and expectations. A motivational perspective is important for a conclusive understanding of trust in AI, and can explain trust differences between stakeholder groups, such as managers, employees, or HR professionals. This presentation, based on a recent publication (Hertel, Fisher & Van Fossen, 2024), introduces a new integrative model of trust in AI that considers both cognitive and motivational processes. The model enables specific predictions for different stakeholder groups across different levels of analysis (organization, team, individual). Instead of direct linkages between system features and trust, we argue that the impact of system features depends on the gains and losses different stakeholders expect as a consequence of AI usage. For example, managers and HR professionals will receive and seek different information about the reliability of an AI-based recruiting system than will job applicants, leading to different assessments of the AI system.
In addition to suggesting avenues for future research, the integrative model provides practical implications for building and maintaining trust in AI systems for different stakeholder groups.

    Digital Health solutions: an intricate path to implementation through regulations and clinical acceptance
    Prof. Enrico Gianluca Caiani
    Associate Professor at the Electronics, Information and Bioengineering Department of Politecnico di Milano

    In this talk, the potential of digital solutions for health in the current European scenario will be explored, together with the barriers to implementation perceived both by healthcare professionals and by the other stakeholders involved.

    In this view, the regulatory aspects relevant to software as a medical device, as defined by the EU Medical Device Regulation (2017/745), and its post-market surveillance will be described, also including a view on ongoing parallel legislative initiatives, such as the AI Act and the European Health Data Space, and on the path to reimbursement. Finally, an example will be presented of how ICT solutions for regulatory science could help involved stakeholders improve the transparency of medical device performance and improve patient safety.

    Info Session

    Information session and Q&A on working/studying at University of Twente
    J.H. Stout MSc (Jaap)
    Exchange Coordinator | BMS

    Invited talks

    Responsible digitalization or the digitalization of responsibility? Work, technology and responsibility practices
    Prof. dr. ir. Wolter Pieters
    Professor of Work, Organisations and Digital Technology | Work and Organisational Psychology - Behavioural Science Institute | Faculty of Social Sciences | Interdisciplinary Research Hub on Digitalization and Society (iHub) | Radboud University

    Around new technologies with large potential impact, such as artificial intelligence, there is often a call for responsible innovation. This requires early involvement of stakeholders to identify relevant values, in order to make sure that those values can be taken into account in the design. However, it is not always clear to what extent such adaptations in the design are meaningful in the practices surrounding the use of the new technology in work environments. For example, if artificial intelligence is made explainable to improve transparency, why and how would users want to make use of such a feature? Social sciences can play a key role in addressing the practical aspects of responsible digitalization, which is why “the human factor in new technologies” is one of the interdisciplinary themes in the Dutch investment in social sciences via the so-called “sector plans”. In this talk, I will outline how social sciences can provide an essential contribution to responsible digitalization through studying “responsibility practices”, including for example seeking, receiving and challenging advice, searching for information, or communicating decisions, and the impact of technological developments on those practices. In particular, I will look at the challenges of studying those practices in situations where interaction with digital technologies goes beyond 1-on-1 situations, as in human-AI teams.

    Keep IT Work
    prof.dr. J.E.W.C. van Gemert-Pijnen (Lisette)
    Full Professor Persuasive Health Technology

    Keep IT work! Remember Covid and digitalisation: privacy by design was the driving force behind the development of contact-tracing apps, overruling human-centred design and the holistic development of eHealth technologies.

    In this presentation I will focus on case studies of data-driven eHealth technologies that struggle to find a balance between privacy by design and human-centred design, in the context of how we can enhance multi-site and cross-nation data sharing. "Keep IT work" refers to holistic development and privacy-enhancing technologies to achieve personalized healthcare.

    The case studies are telemonitoring of heart failure, hybrid care in mental healthcare, and decision support systems for the ICU. The studies were conducted in the Netherlands, commissioned by the Dutch Secretary of Health and Sports, to understand the barriers and facilitators for "Keep IT work" and to accelerate digitalisation in healthcare. The balance between privacy by design and human-centred design is particularly precarious in the context of data sharing across institutes and nations to enable the personalization of treatments in healthcare (the fit between people, technology, context and data). Current privacy-enhancing techniques (such as multiparty computation and synthetic data) can be applied to keep IT work, overcoming the limitations in data sharing and finding a balance between privacy by design and holistic, user-centred design. My presentation will be based on the aforementioned case studies and recent work for the World Economic Forum and Frontiers on global-scale data sharing.

    Neuroscience in B2B Marketing: Bridging Emotion, Negotiation, and Strategy
    Carolina Herrando
    Associate Professor of Marketing, University of Zaragoza

    When customers process information, they also process emotions. Therefore, identifying emotions during information processing is crucial for marketers and businesses to generate an effective customer experience, in both B2C and B2B settings. However, many individual internal neurophysiological processes remain unexplored or difficult to access using traditional marketing methods. Neuroscience allows researchers to study emotions by monitoring neural activity and physiological responses through the various stages of the customer experience. In B2B marketing, neuroscience offers valuable insights into decision-making, price setting, and negotiation processes, among other areas. This talk will present the current state of neuroscience in B2B marketing, along with a fresh methodological perspective on how different communication styles impact the negotiation process.