2019, Article / Letter to editor. Background: A large part of the communication cues exchanged between persons is nonverbal. Persons with a visual impairment are often unable to perceive these cues, such as gestures or facial expressions of emotions. In a previous study, we determined that visually impaired persons can improve their ability to recognize facial expressions of emotions from validated pictures and videos by using an emotion recognition system that signals vibrotactile cues associated with one of the six basic emotions. Objective: The aim of this study was to determine whether the previously tested emotion recognition system works as well in realistic situations as it did under controlled laboratory conditions. Methods: The emotion recognition system consists of a camera mounted on spectacles, a tablet running facial emotion recognition software, and a waist belt with vibrotactile stimulators that provide haptic feedback representing Ekman's six universal emotions. A total of 8 visually impaired persons (4 female, 4 male; mean age 46.75 years, range 28-66 years) participated in two training sessions followed by one experimental session. During the experiment, participants engaged in two 15-minute conversations, in one of which they wore the emotion recognition system. Exit interviews were then conducted to assess the participants' experiences. Due to technical issues with recording the output of the emotion recognition software, only 6 participants were included in the video analysis. Results: Participants were quickly able to learn, distinguish, and remember the vibrotactile signals associated with the six emotions. A total of 4 participants felt they were able to use the vibrotactile signals during the conversation, and 5 of the 6 participants had no difficulty keeping the camera focused on the conversation partner. The emotion recognition software was highly accurate in detecting happiness but performed unsatisfactorily in recognizing the other five universal emotions. Conclusions: The system requires some essential improvements in performance and wearability before it is ready to support visually impaired persons in their daily interactions. Nevertheless, the participants saw potential in the system as an assistive technology, provided their user requirements can be met.
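The feedback chain described in the Methods (camera, emotion recognition software, vibrotactile belt) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `camera`, `classify_frame`, `belt`, and the confidence `threshold` are all hypothetical stand-ins.

```python
# Minimal sketch of the feedback loop described above. All names (camera,
# classify_frame, belt, threshold) are hypothetical stand-ins, not the
# authors' API; the study ran FaceReader-style software on a tablet.

def feedback_loop(camera, classify_frame, belt, threshold=0.6):
    """Classify the conversation partner's facial expression frame by frame
    and pulse the belt tactor associated with the detected emotion.

    camera         -- iterable of frames from the spectacle-mounted camera
    classify_frame -- returns (emotion, confidence) or None if no face found
    belt           -- haptic driver exposing pulse(emotion)
    threshold      -- assumed confidence cut-off before signalling the user
    """
    for frame in camera:
        result = classify_frame(frame)
        if result is None:
            continue  # no face detected in this frame
        emotion, confidence = result
        if confidence >= threshold:
            belt.pulse(emotion)  # vibrotactile cue for one of the six emotions
```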
Figure: Schematic overview of the system.
Figure: Emotion mapping. The mapping of Ekman's universal emotions on the waist band.
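Since the figure itself is not reproduced here, the sketch below only illustrates the idea of such a mapping; the tactor positions are invented for illustration and are not the layout the authors used.

```python
# Illustrative only: one tactor index (0-5, spaced around the waist band)
# per universal emotion. The actual assignment is shown in the figure above
# and may differ from this invented layout.
EMOTION_TO_TACTOR = {
    "happiness": 0,
    "surprise":  1,
    "fear":      2,
    "sadness":   3,
    "disgust":   4,
    "anger":     5,
}

def tactor_for(emotion: str) -> int:
    """Look up which of the six belt tactors signals a given emotion."""
    return EMOTION_TO_TACTOR[emotion]
```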
Table: Crosstab of agreement between coders and software. The table tallies the number of times the coders and FaceReader classified a fragment as a particular emotion; the diagonal shows the number of times the coders and FaceReader classified a fragment as the same emotion.
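Such a crosstab and its diagonal agreement can be computed in a few lines. The sketch below uses invented fragment labels purely to show the tallying; it is not data from the study.

```python
# Tally coder-vs-FaceReader classifications per fragment and measure how
# often they agree (the diagonal of the crosstab). Labels are invented.
from collections import Counter

def crosstab(coder_labels, software_labels):
    """Count each (coder, software) label pair across fragments."""
    return Counter(zip(coder_labels, software_labels))

def observed_agreement(tab):
    """Fraction of fragments on the diagonal, i.e. classified identically."""
    total = sum(tab.values())
    diagonal = sum(n for (c, s), n in tab.items() if c == s)
    return diagonal / total if total else 0.0

coders   = ["happiness", "happiness", "anger", "neutral"]  # invented
software = ["happiness", "neutral",   "anger", "neutral"]  # invented
print(observed_agreement(crosstab(coders, software)))      # 0.75
```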
2018, Article / Letter to editor (vol. 13, iss. 3, 2018). In face-to-face social interactions, blind and visually impaired persons (VIPs) lack access to nonverbal cues such as facial expressions, body posture, and gestures, which may impair interpersonal communication. In this study, a wearable sensory substitution device (SSD) consisting of a head-mounted camera and a haptic belt was evaluated to determine whether vibrotactile cues around the waist can convey facial expressions to users and whether VIPs would want such a device in daily living situations. Ten VIPs (mean age 38.8 years, SD 14.4) and 10 sighted persons (SPs; mean age 44.5 years, SD 19.6) participated in the study, in which validated sets of pictures, silent videos, and videos with audio of facial expressions were presented to the participants. A control measurement was first performed to determine how accurately participants could identify facial expressions while relying on their functional senses. After a short training, participants were asked to determine facial expressions while wearing the emotion feedback system. VIPs using the device showed significant improvements in their ability to determine which facial expressions were shown. A significant increase in accuracy of 44.4 percentage points was found across all types of stimuli when comparing the scores of the control (mean±SEM: 35.0±2.5%) and supported (mean±SEM: 79.4±2.1%) phases. The greatest improvements achieved with the support of the SSD were found for silent stimuli (68.3 percentage points for pictures and 50.8 for silent videos). SPs also showed consistent, though not statistically significant, improvements while supported. Overall, our study shows that vibrotactile cues are well suited to convey facial expressions to VIPs in real time. Participants became skilled with the device after a short training session. Further testing and development of the SSD are required to improve its accuracy and aesthetics for potential daily use.
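As a worked example of the mean±SEM figures above, the sketch below computes them from per-participant accuracy scores. The scores are invented (chosen only so the means echo the reported 35.0% and 79.4%; the SEMs will not match), since the study's raw data are not reproduced here.

```python
# Compute mean±SEM accuracy per phase from per-participant scores (in %).
# The ten scores per phase are invented; only the means are chosen to echo
# the values reported above.
from math import sqrt
from statistics import mean, stdev

def mean_sem(scores):
    """Return (mean, standard error of the mean) for a list of accuracies."""
    return mean(scores), stdev(scores) / sqrt(len(scores))

control   = [30.0, 40.0, 35.0, 32.5, 37.5, 30.0, 42.5, 35.0, 32.5, 35.0]
supported = [75.0, 82.5, 80.0, 77.5, 85.0, 77.5, 80.0, 80.0, 77.5, 79.0]

m_c, sem_c = mean_sem(control)
m_s, sem_s = mean_sem(supported)
print(f"control:   {m_c:.1f}±{sem_c:.1f}%")    # control mean: 35.0%
print(f"supported: {m_s:.1f}±{sem_s:.1f}%")    # supported mean: 79.4%
print(f"improvement: {m_s - m_c:.1f} points")  # improvement: 44.4 points
```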
2017, Article in monograph or in proceedings (Poster presented at the ACM conference ASSETS '17, October 29-November 1, 2017, Baltimore, MD, USA, pp. 331-332). One of the major problems visually impaired persons experience in their daily lives is the inability to see nonverbal cues of conversation partners. In this study, a wearable assistive technology is presented and evaluated that supports visually impaired persons in recognizing facial expressions of emotions. The wearable assistive technology consists of a camera clipped onto spectacles, emotion recognition software, and a vibrotactile belt with six tactors. An earlier controlled experimental study showed that users of the system improved significantly in their ability to recognize emotions from validated stimuli. In this paper, the next iteration in testing the system is presented, in which a more realistic usage situation was simulated. Eight visually impaired persons were invited to participate in conversations with an actor, who was instructed not to exaggerate his facial expressions. Participants engaged in two 15-minute mock job interview conversations, during one of which they wore the system; in the other conversation, no assistive technologies were used. The preliminary results showed that the concept of such wearable assistive technologies remains feasible. Participants found it easy to learn and interpret the vibrotactile cues, which was also reflected in their training performance. Furthermore, most participants could use the vibrotactile cues while staying engaged in the conversation. Nevertheless, some improvements are needed before the system can be used as assistive technology: its accuracy was negatively affected by the lighting and movement conditions of realistic conversations, compared with the controlled experimental setting, and participants requested improvements to the wearability of the system.