Project Number: 645378
Project Manager: Prof. dr. Dirk Heylen
Faculty of Electrical Engineering, Mathematics and Computer Science
The ARIA-VALUSPA project will create a disruptive new framework for the easy creation of Affective Retrieval of Information Assistants (ARIA agents) capable of holding multi-modal social interactions in challenging and unexpected situations. The system can generate search queries and return the requested information by interacting with humans through virtual characters. These virtual humans will be able to sustain an interaction with a user for some time, and to react appropriately to the user's verbal and non-verbal behaviour when presenting the requested information and refining search results. Using audio and video signals as input, both verbal and non-verbal components of human communication are captured. A sophisticated dialogue management system decides how to respond to a user's input, be it a spoken sentence, a head nod, or a smile. The ARIA uses specially designed speech synthesisers to create emotionally coloured speech and a fully expressive 3D face to render the chosen response. Back-channelling to indicate that the ARIA understood what the user meant, or returning a smile, are but a few of the many ways in which it can employ emotionally coloured social signals to improve communication.
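The pipeline described above (multimodal input → dialogue management decision → emotionally coloured output) can be sketched as follows. This is a minimal illustrative toy, not the project's actual architecture; the event and response types, field names, and decision rules are all assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical types for illustration only; the real ARIA framework's
# interfaces are not described in this abstract.
@dataclass
class UserEvent:
    modality: str      # e.g. "speech", "head_nod", "smile"
    content: str = ""  # transcribed text, if any

@dataclass
class AgentResponse:
    speech: str        # text for the emotional speech synthesiser
    expression: str    # expression for the expressive 3D face

def decide_response(event: UserEvent) -> AgentResponse:
    """Toy decision rule: mirror non-verbal signals, answer spoken input."""
    if event.modality == "smile":
        # Return the user's smile as an emotionally coloured social signal.
        return AgentResponse(speech="", expression="smile")
    if event.modality == "head_nod":
        # Back-channel to signal understanding.
        return AgentResponse(speech="mm-hmm", expression="nod")
    # Spoken input: acknowledge and trigger information retrieval.
    return AgentResponse(speech=f"Let me look up: {event.content}",
                         expression="attentive")

resp = decide_response(UserEvent("smile"))
print(resp.expression)  # smile
```

In the real system each of these steps would be a full component (audio-visual behaviour analysis, a dialogue manager, speech synthesis and facial animation); the sketch only shows how a single decision layer can route multimodal events to multimodal responses.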
As part of the project, the consortium will develop two specific implementations of ARIAs for two different industrial applications. A ‘speaking book’ application will create an ARIA with a rich personality that captures the essence of a novel, so that the user can ask questions about anything related to the novel. Secondly, an ARIA scenario proposed in a business case by one of the Industry Associates at the end of year one will be implemented. Both applications will have provable commercial value, either to our Industry Partners or to our Industry Associates. In addition, the general ARIA-VALUSPA framework will be periodically reviewed with the Industry Associates.
The ARIA-VALUSPA project builds on the capabilities of existing Virtual Humans developed by the consortium partners and/or made publicly available by other researchers, but will greatly enhance them. The assistants will be able to handle unexpected situations and environmental conditions, and the project will add self-adaptation, learning, European multilingual skills, and extended dialogue abilities to a multimodal dialogue system. ARIA-VALUSPA will be adaptable to any European language; the prototypes developed during the project will support three European languages, English, French, and German, to increase the variety and number of potential user groups. The framework to be developed will be suitable for multiple platforms, including desktop PCs, laptops, tablets, and smartphones, and the ARIAs will be able to be displayed and operated in a web browser.
The ARIAs will have to deal with unexpected situations that occur during the course of an interaction. Interruptions by the user, unexpected task switching by the user, or a change in who is communicating with the agent (e.g. when a second user joins the conversation) will require the agent to interrupt its behaviour, execute a repair behaviour, re-plan mid- and long-term actions, or even adapt on the fly to the behaviour of its interlocutor.
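The interrupt–repair–re-plan cycle above can be made concrete with a small state-machine sketch. The state names, plan steps, and repair strategy here are illustrative assumptions, not the project's actual dialogue manager.

```python
# Illustrative sketch of interruption handling in a dialogue agent.
# States and the "acknowledge, then resume" repair strategy are
# assumptions made for this example.
class AgentState:
    SPEAKING = "speaking"
    LISTENING = "listening"
    REPAIRING = "repairing"

class Agent:
    def __init__(self):
        self.state = AgentState.LISTENING
        # A simple plan of pending dialogue actions.
        self.plan = ["greet", "present_results", "refine_query"]

    def on_user_interrupt(self):
        """User barges in mid-utterance: stop, repair, then re-plan."""
        if self.state == AgentState.SPEAKING:
            self.state = AgentState.REPAIRING
            # Re-plan: acknowledge the interruption before resuming.
            self.plan = ["acknowledge_interruption"] + self.plan
        # Either way, yield the floor and listen to the user.
        self.state = AgentState.LISTENING

agent = Agent()
agent.state = AgentState.SPEAKING
agent.on_user_interrupt()
print(agent.state, agent.plan[0])  # listening acknowledge_interruption
```

A real agent would also handle the other cases the paragraph mentions, such as task switching (replacing the plan) and a second user joining (tracking multiple interlocutors), but the same pattern of reacting to an event by revising state and plan applies.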
Four of our partners were part of the SEMAINE consortium that created the predecessor of the type of virtual agents called for in this LEIT, so this consortium has significant expertise in the area. Furthermore, given the make-up of the consortium, with two Industry Partners and a group of Industry Associates committed to the potential integration of the project's outcomes in their own products, we argue that we are perfectly placed to make a real impact on the EU's competitiveness.
Project duration: 1-7-2015 – 1-7-2018
Project budget: 2.95 M-€ / 2.95 M-€ funding
Number of person months: 363
Project Coordinator: University of Nottingham
Participants: University of Nottingham, Technische Universität München, Centre National de la Recherche Scientifique, Universität Augsburg, Universiteit Twente, Cereproc, Cantoche
Project budget CTIT: 415 k-€ funding
Number of person months CTIT: 56
Involved groups: Human Media Interaction (HMI)
CTIT Research Centre: Centre for Monitoring and Coaching