The multimodal interface consists of three components: 1) a user control input interface, 2) a mechatronic arm and hand support interface, and 3) an environment observation interface.
User control input interface
Eye tracking is key to the multimodal interface as the source of user control input. It is ideally suited to deriving user motion intention in a natural manner without adding cognitive load, offers high information capacity, and is low cost (Figure 2). It will be supplemented by sensory information from the existing arm and hand control, including muscle activity (electromyographic or mechanoacoustic sensing), movement (inertial sensing), and the interaction forces between the body and the support system and with the environment. This supplementary sensing is especially important for natural identification of grasp intention.
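One way such a fusion could work is sketched below: a stable gaze fixation indicates the target, and a concurrent rise in muscle activity confirms the grasp intention. The function names, the fixation radius, and the EMG threshold are illustrative assumptions, not part of the eNHANCE design.

```python
# Hypothetical sketch: fuse gaze fixation with an EMG activity threshold
# to flag grasp intention. Thresholds and units are illustrative only.

def gaze_is_fixated(gaze_points, radius=0.02):
    """True if all recent gaze samples (x, y) stay within `radius`
    of their centroid, i.e. the user is fixating on one target."""
    cx = sum(x for x, _ in gaze_points) / len(gaze_points)
    cy = sum(y for _, y in gaze_points) / len(gaze_points)
    return all(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 <= radius
               for x, y in gaze_points)

def grasp_intended(gaze_points, emg_rms, emg_threshold=0.3):
    """Grasp intention: a stable fixation on an object AND muscle
    activity above a resting threshold at the same time."""
    return gaze_is_fixated(gaze_points) and emg_rms > emg_threshold
```

Requiring both modalities to agree is what makes the detection feel natural: looking around alone, or muscle tension alone, does not trigger the support system.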
Mechatronic arm and hand support interface
Mechatronic support of arm and hand function enables the user to interact functionally with the environment during daily-life tasks such as taking a meal or handling objects. eNHANCE will integrate existing arm and hand support systems and improve their actuation so that they can be controlled on the basis of detected user intention. The improved support system will be designed to be minimally obtrusive to the end user. The intelligent mechatronic arm and hand support system is supplemented by multimodal motivational communication, including auditory and visual cues, skin vibration, and suggestive movements of the arm.
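Intention-based control of this kind is often realised as "assist as needed": the actuator supplies only the torque the user cannot produce, and a small suggestive movement nudges the user when intention is detected but no voluntary effort follows. The sketch below is a minimal single-joint illustration under those assumptions; the gains, the torque estimate, and the function name are hypothetical, not the project's controller.

```python
# Hypothetical sketch of intention-gated "assist as needed" control for
# one arm joint. All gains and magnitudes are illustrative placeholders.

def assist_torque(required, voluntary, intention, gain=1.0, suggest=0.1):
    """Return the support torque for one joint.

    required  -- torque the task needs (e.g. gravity compensation)
    voluntary -- torque estimated from the user's own muscle activity
    intention -- True if the interface detected movement intention
    """
    if not intention:
        return 0.0                      # never move without user intent
    deficit = max(0.0, required - voluntary)
    if voluntary <= 0.0:
        return min(suggest, required)   # nudge with a suggestive movement
    return gain * deficit               # fill only the remaining gap
```

Gating on intention keeps the system unobtrusive: it stays passive until the user wants to move, and it withdraws support as the user's own contribution grows.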
Environment observation interface
The environmental context strongly influences our motor behaviour and our interaction with the surroundings. We will apply a portable head-mounted 3D scene camera, combined with inertial sensing of head orientation, to capture the user's natural active vision and to derive context information, including social interactions with other persons, using supervised learning approaches.
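As a minimal illustration of the supervised-learning step, a nearest-centroid classifier can map a feature vector extracted from the scene camera to a context label. The features, labels, and classifier choice here are assumptions for the sketch; the source does not specify the learning method.

```python
# Hypothetical sketch: label the scene context (e.g. "meal" vs "social")
# from a camera-derived feature vector with a nearest-centroid classifier.
import math

def train_centroids(samples):
    """samples: list of (feature_vector, label) pairs.
    Returns a mapping label -> mean feature vector (centroid)."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(centroids, vec):
    """Assign the label of the nearest centroid (Euclidean distance)."""
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl], vec))
```

In practice the feature vectors would come from the 3D scene camera and inertial head-orientation data, and a richer model would replace the centroids, but the train/classify split is the same.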