In May 2020, the winners of the EEMCS Theme Team grants, which encourage cross-disciplinary research within the faculty, were announced. Five proposals were submitted within the Robotics theme. The winning project, entitled Predictive Avatar Control and Feedback (PACOF), is creating a robotic system that allows the robot operator to experience a location just as the robot does.

In January 2020, the EEMCS Theme Team call was launched. This unique call aims to encourage cross-disciplinary research within the faculty. Project proposals, by EEMCS teams, could be submitted on the four faculty themes: Robotics, Health, Embedded AI and/or Data Science, and Energy. Teams needed to be composed of three junior staff members, one each from EE, Math, and CS. The teams will be responsible for developing the theme within the faculty. The grant provides the team members with three PhD students, embedded in the three disciplines and jointly supervised by the three team members.

To read more about the winning proposal, see this article on the Predictive Avatar Control and Feedback (PACOF) project. Below we introduce the other proposals within the Robotics theme.

Drone Swarms for Public Safety

Siavash Safapourhajari, Clara Stegehuis and Suzan Bayhan

Swarms of drones are used in many applications, including public safety and disaster management. They can accomplish missions faster, more safely, and in hard-to-reach places. To realize fully autonomous operation, a set of essential tools is required: communication, computation, and networking. Firstly, in post-disaster situations or remote areas, the telecommunication infrastructure may be unreliable or nonexistent; drones therefore need to build their own network or even provide wireless coverage for ground users. Secondly, the tight processing and energy budget of drones makes computation offloading and task allocation vital for robust performance. Finally, mobility, short flight times, and communication loss result in a dynamic network with uncertainties that require non-trivial optimization techniques. These three challenges are interrelated and need to be considered jointly. In our research we therefore tackle them together, aiming to provide a reliable infrastructure for drone communication and computation.

Contact for more info:

Towards Adaptive Scheduling for Social and Expressive Robots

Edwin Dertien, Ruben Hoeksma, Dennis Reidsma

We believe that one of the requirements for successful social robots is that they are socially expressive in a mix of autonomous and responsive behavior. To achieve this, we aim to work on three topics:

1) design and validation of the expressive qualities of movement and action, based on knowledge from theatre; this includes addressing the problem of authoring new behaviors.

2) development of new forms of planning and scheduling for complex and possibly conflicting behavior requests that may come from a mixture of top-down deliberate and bottom-up reactive and autonomous sources; the use of online optimization techniques might allow the schedule of ongoing behavior to be adapted at any moment in response to changes in the interaction.

3) development of new forms of Human-Robot dialog modelling that make optimal use of the above possibilities for specifying this mixture of deliberative and reactive behavior in interaction, including mid-execution modification requests.

We are ultimately interested both in the ease of use for robot dialog designers and in the effect on user perception: how do end users appreciate these responsive robots, and does it also impact how they interact with the robot?

Contact for more info: 

AI HERO - Generative AI for Healthcare Robotics in 4D

Christoph Brune, Nicola Strisciuglio, Momen Abayazid

AI & robotics is the future of interventional imaging in healthcare. In our digital society, AI-guided robotic systems are already widely used in automated assembly lines in industry. In healthcare, however, interventional treatment by robots is a much more sensitive matter. There is therefore a need for an AI-in-robotics approach that clinicians can trust, i.e. one that is robust, explainable, and allows for reinforced adaptive human-robot interaction.

The effectiveness of image-based robotics depends strongly on an informative imaging basis in 4D (3D + time) that can compensate for respiratory-induced motion of tumors. To achieve 4D super-resolution, it is essential to combine MRI, which offers high 3D spatial resolution, with complementary mobile ultrasound (US) imaging, which provides high temporal resolution.

Hence, our goal is to develop an AI platform based on generative deep learning that produces an informative super-resolution 4D MRI/US imaging basis, enabling explainable AI-guided robotics in healthcare.

Contact for more info:

MR-safe breast CAncer Treatment Robot (CAT-Rob)

Alessandro Chiumento, Matthias Schlottbom, Françoise Siepel

In the Netherlands, 1 in 8 women suffers from breast cancer in her lifetime. In successful breast cancer treatment, accurate early detection of spreading malignant cells in the lymphatic system is essential. Current treatments are, however, extremely invasive (debridement and chemotherapy), as they are unable to accurately determine the positive lymph nodes (LNs). The proposed research programme focuses on the development of a minimally invasive robot that can localize and treat LNs affected by tumour spread using advanced navigation technologies. The novel technology developed in this programme is foreseen to be similarly applicable to other cancer types, such as lung and prostate cancer. Subphases include: data acquisition and processing, determination of cancer spread models, data-driven discovery of lymphatic network topology and dynamics, and development of an image-guided robotic system. The proposed research paves the way for novel cancer detection and treatment procedures that are minimally invasive and can be performed during regular MRI scans, which is only possible through problem-tailored integration of imaging and robotic technologies, combining certified reduced biological cancer models with data-driven graph models.

Contact for more info: