2016, Article / Letter to editor (ACM International Conference Proceeding Series, (2016), pp. 1-4)In this paper, we present the rationale and approach for establishing guidelines for the development of accessible wearables. Wearable technology is increasingly integrated into our everyday lives. Therefore, ensuring accessibility is pivotal to prevent an ability-based digital divide between persons who have and persons who lack access to these devices. We present a project in which guidelines are created that enable developers to design accessible wearable apps and technologies. These guidelines will be created with developers who have experience with designing accessible technology and/or wearables. In addition, users who (potentially) experience problems with the accessibility of wearables (persons who have a disability) are involved in the development of the guidelines, to ensure their validity from an end-user perspective.
2012, Article in monograph or in proceedings (The Web and Beyond 2012)The internet is becoming a tightly interwoven part of our everyday lives. There is a growing market for web services which augment the daily life of users through products with an internet connection. We call these real-world extensions of the web embedded media. In the last couple of years we explored embedded media design through student projects with real-world clients. We learned that the UX difficulty of embedded media design is to mix, enforce, and augment existing user experiences. We have tried to capture this challenge in the intuitive notion of experience blend. In this paper we use examples from our project work to introduce this notion of experience blend.
2008, Article in monograph or in proceedings (AmI-08, pp. 58-74)This paper presents the results of a study on how elderly people perceive an intelligent system, embedded in their home, which should enable them to live independently longer. Users of a motion sensor system were interviewed about their experiences. Both a sensor system that works autonomously and a manipulated version were studied. The manipulation added a touch screen that showed users the gathered information so they could confirm its correctness before it was sent to caregivers, thus providing more control over personal information. To test the use intention of the motion sensor system, Spiekermann's Ubiquitous Computing Acceptance Model was used. This study shows that people who perceive more control over their wellbeing show more use intention, and that the subjective norm influences their acceptance. These findings indicate that acceptance models for Ambient Intelligence applications in care situations need to be developed.
2009, Part of book or chapter of book (pp. 5)Covers papers organized in topical sections on AI methods for ambient intelligence, evaluating ubiquitous systems with users, model-driven software engineering for ambient intelligence applications, smart products, ambient assisted living, human aspects in ambient intelligence, Amigo, WASP, as well as the conjoint PERSONA and SOPRANO workshops.
2016, Article / Letter to editor (vol. 9745, (2016), pp. 381-388)Card sort studies can help developers create an information structure for their website or application. In addition, this human-centered design method provides researchers with insights into the target group's mental models regarding the information domain under study. In this method, participants sort cards, with excerpts of the website's or information source's information on them, into piles or groups. Even though the method lends itself to large numbers of participants, it can be difficult to include sufficient participants in a study to ensure generalizability among large user groups. Especially when the potential user group is heterogeneous, basing the information structure on a limited participant group may not always be valid. In this study, we investigate whether card-sort results from one user group (nurses) are comparable to the results of a second (potential) user group (physicians/residents). The results of a formative card sort study that was used to create an antibiotic information application are compared to the results of a second card sort study. This second study was conducted with the aim of redesigning the nurse-aimed information application to meet the (overlapping) needs of physicians. In the first card sort study, 10 nurses participated; in the second, 8 residents. The same set of 43 cards was used in both setups. These cards contain fragments of antibiotic protocols and reference documents that nurses and physicians use to be informed about the use and administration of antibiotics. The participants sorted the cards in individual sessions, into as many categories as they liked. The sorts of both user groups were analyzed separately. Dendrograms and similarity matrices were generated using the Optimal Sort online program. Based on the matrices, clusters were identified by two independent researchers.
On these resulting clusters of cards, overlap scores were calculated (between nurse and resident clusters), and the differences between the groups were compared. The results show that, overall, residents reached higher agreement than the nurses. Some overlap between categories is observed in both card sort data matrices. Based on the nurses' data, more, and more specific, clusters were created (which in part were observed within the larger residents' clusters). Based on our findings we conclude that a redesign may not be necessary. Especially when the target group with the lowest prior knowledge of the information domain is included in the card sort study, the results can be translated to other groups as well. However, groups with little knowledge will more likely produce lower agreement in the card sorts. Therefore, a larger sample and/or including participants with both low and high knowledge of the information domain is advisable.
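The cluster comparison described above can be illustrated with a short sketch. This is an illustrative reconstruction, not the Optimal Sort tooling or the authors' analysis code; the pairwise co-occurrence similarity and the Jaccard-style overlap score are assumptions about how such measures are typically computed:

```python
from itertools import combinations

def similarity_matrix(sorts, cards):
    """sorts: one entry per participant, each a list of card groups (sets).
    Pair similarity = fraction of participants who placed both cards in one pile."""
    sim = {pair: 0 for pair in combinations(sorted(cards), 2)}
    for groups in sorts:
        for group in groups:
            for pair in combinations(sorted(group), 2):
                sim[pair] += 1
    n = len(sorts)
    return {pair: count / n for pair, count in sim.items()}

def overlap_score(cluster_a, cluster_b):
    # Jaccard overlap between two card clusters from different user groups
    return len(cluster_a & cluster_b) / len(cluster_a | cluster_b)
```

For the study's data, `sorts` would hold the 10 nurse sorts or the 8 resident sorts over the same 43 cards, and `overlap_score` would be applied to the clusters identified by the two researchers.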
2019, Article / Letter to editor ((2019))Background: A large part of the communication cues exchanged between persons is nonverbal. Persons with a visual impairment are often unable to perceive these cues, such as gestures or facial expressions of emotions. In a previous study, we determined that visually impaired persons can increase their ability to recognize facial expressions of emotions from validated pictures and videos by using an emotion recognition system that signals vibrotactile cues associated with one of the six basic emotions. Objective: The aim of this study was to determine whether the previously tested emotion recognition system worked as well in realistic situations as under controlled laboratory conditions. Methods: The emotion recognition system consists of a camera mounted on spectacles, a tablet running facial emotion recognition software, and a waist belt with vibrotactile stimulators to provide haptic feedback representing Ekman's six universal emotions. A total of 8 visually impaired persons (4 females and 4 males; mean age 46.75 years, age range 28-66 years) participated in two training sessions followed by one experimental session. During the experiment, participants engaged in two 15-minute conversations, in one of which they wore the emotion recognition system. To conclude the study, exit interviews were conducted to assess the experiences of the participants. Due to technical issues with the registration of the emotion recognition software, only 6 participants were included in the video analysis. Results: We found that participants were quickly able to learn, distinguish, and remember vibrotactile signals associated with the six emotions. A total of 4 participants felt that they were able to use the vibrotactile signals in the conversation. Moreover, 5 out of the 6 participants had no difficulties in keeping the camera focused on the conversation partner.
The emotion recognition software was very accurate in detecting happiness but performed unsatisfactorily in recognizing the other five universal emotions. Conclusions: The system requires some essential improvements in performance and wearability before it is ready to support visually impaired persons in their daily life interactions. Nevertheless, the participants saw potential in the system as an assistive technology, assuming their user requirements can be met.
Schematic overview of the used system.
Emotion mapping. The mapping of Ekman's universal emotions on the waist band.
Crosstabs of agreement between coders and software. The table shows a tally of the number of times the coders and FaceReader classified a fragment as a particular emotion. The diagonal shows the number of times that the coders and FaceReader classified a fragment as the same emotion.
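The agreement summarized in such a crosstab is the diagonal tally over the total tally; a minimal sketch (illustrative, not the study's analysis code):

```python
def agreement_rate(crosstab):
    """Fraction of fragments where the human coders and the software assigned
    the same emotion label: sum of the diagonal over the sum of all cells.
    crosstab is a square list of lists, rows = coder label, cols = software label."""
    total = sum(sum(row) for row in crosstab)
    agreed = sum(crosstab[i][i] for i in range(len(crosstab)))
    return agreed / total if total else 0.0
```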
2018, Article in monograph or in proceedings (Proceedings of S-BPM ONE 2018; 10th International Conference on Subject-Oriented Business Process Management)In this paper, we describe the development of a collaborative approach to elicit and analyse service process experience, as part of a project commissioned by the Dutch Ministry of Infrastructure and Environment. We designed and deployed a model-based instrument for measuring the experiences of both the general public and the civil servants involved in the information sharing, delivery, use, and execution of environmental permit application services. In addition, the instrument was to gather information on the case-specific process structure underlying the service delivery. We combined a collaborative, stakeholder-oriented process modelling technique for workshops, inspired by the CoMPArE method, with detailed service experience-oriented probing questions focusing on interactions, roles, and process 'bottlenecks'. We carried out a first, baseline measurement of the information, processes, and experiences around environmental permit services through 6 identical six-hour workshop sessions with 67 civil servants. We report our experiences in executing the baseline measurement, some main results, and lessons learned in developing and applying the workshop approach.
2017, Article in monograph or in proceedings (Poster presented at the ACM conference ASSETS '17, October 29-November 1, 2017, Baltimore, MD, USA, pp. 331-332)One of the big problems visually impaired persons experience in their daily lives is the inability to see nonverbal cues of conversation partners. In this study, a wearable assistive technology is presented and evaluated which supports visually impaired persons with the recognition of facial expressions of emotions. The wearable assistive technology consists of a camera clipped on spectacles, emotion recognition software, and a vibrotactile belt with six tactors. An earlier controlled experimental study showed that users of the system improved significantly in their ability to recognize emotions from validated stimuli. In this paper, the next iteration in testing the system is presented, in which a more realistic usage situation was simulated. Eight visually impaired persons were invited to participate in conversations with an actor, who was instructed not to exaggerate his facial expressions. Participants engaged in two 15-minute mock job interview conversations, during one of which they were wearing the system. In the other conversation, no assistive technologies were used. The preliminary results showed that the concept of such wearable assistive technologies remains feasible. Participants within the study found it easy to learn and interpret the vibrotactile cues, which was also shown in their training performance. Furthermore, most participants could use the vibrotactile cues while staying engaged in the conversation. Nevertheless, some improvements are needed before the system can be used as assistive technology. The accuracy of the system was negatively affected by the lighting and movement conditions present in realistic conversations, compared to the controlled experiment condition. Furthermore, participants requested developments to improve the wearability of the system.
2011, Article in monograph or in proceedings (Chi Sparks)For successful execution of operational tasks within complex work situations, communication is essential.
This ‘operational’ communication is analyzed to gain insight into the way the parties concerned assign a similar meaning to the exchanged information, or, as linguists call it, create shared understanding. This paper focuses on the characteristics of complex work situations and the methods of analysis, and presents the preliminary results of the first field study. The final results will consist of guidelines for the design of ICT systems that enable more effective and efficient communication.
2016, Article / Letter to editor ((2016), pp. 45-48)New health technologies are not accessible to all users due to the circumstantial or permanent disabilities some users have. Especially in healthcare, attention must be paid to accommodating all potential users or patients. With the smart use of multimodal systems and multimedia solutions, a broader patient group can be reached. In this paper, we lay out concept guidelines for accessible wearable technology. Wearables are used for many purposes, including health. The research on these guidelines is in progress; first recommendations, based on preliminary outcomes, are given.
2017, Article / Letter to editor (Universal Access in the Information Society, vol. 16, (2017), pp. 173-190)Local government organizations such as municipalities often seem unable to fully adopt or implement web accessibility standards even if they are actively pursuing it. Based on existing adoption models, this study identifies factors in five categories that influence the adoption and implementation of accessibility standards for local government websites. Awareness of these factors is important. To map and understand these factors, this study identified and interviewed experts in the field of (organizational) accessibility. This has led to an extension of the existing models. The extended model was then validated by interviews with key stakeholders. The outcome of this study places existing adoption models in a new context. The result is an adoption model that contributes better to explaining adoption and implementation processes within eGovernment systems and organizations. This adoption model aims to better help local governments in the identification of factors influencing the actual adoption and implementation of web accessibility standards in their situation. The model explains how factors in the different categories contribute to the adoption and implementation of web accessibility standards. The model may also be applicable to the adoption and implementation of other guidelines and (open) standards within local government.
2017, Article / Letter to editor (JMIR Rehabilitation and Assistive Technologies, vol. 4, iss. 2, (2017), e12, doi:10.2196/rehab.6294)Exploring Determinants of Patient Adherence to a Portal-Supported Oncology Rehabilitation Program: Interview and Data Log Analyses. Hendrik P Buimer, Monique Tabak, Lex van Velsen, Thea van der Geest, Hermie Hermens. Background: Telemedicine applications often do not live up to their expectations and often fail once they have reached the operational phase. Objective: The objective of this study was to explore the determinants of patient adherence to a blended care rehabilitation program, which includes a Web portal, from a patient's perspective. Methods: Patients were enrolled in a 12-week oncology rehabilitation treatment supported by a Web portal that was developed in cooperation with patients and care professionals. Semistructured interviews were used to analyze thought processes and behavior concerning patient adherence and portal use. Interviews were conducted with patients close to the start and the end of the treatment. In addition, usage data from the portal were analyzed to gain insights into actual usage of the portal. Results: A total of 12 patients participated in the first interview, whereas 10 participated in the second round of interviews. Furthermore, portal usage of 31 patients was monitored. On average, 11 persons used the portal each week, with a maximum of 20 in the seventh week and a drop toward just one person in the weeks of the follow-up period of the treatment. From the interviews, it was derived that patients' behavior in the treatment and use of the portal was primarily determined by extrinsic motivation cues (eg, stimulation by care professionals and the patient group), perceived severity of the disease (eg, physical and mental condition), perceived ease of use (eg, accessibility of the portal and the ease with which information is found), and perceived usefulness (eg, fit with the treatment). Conclusions: The results emphasized the impact that care professionals and fellow patients have on patient adherence and portal usage. For this reason, the success of blended care telemedicine interventions seems highly dependent on the willingness of care professionals to include the technology in their treatment and stimulate usage among patients. Keywords: telemedicine; rehabilitation; patient portals; treatment adherence; compliance.
2016, Article in monograph or in proceedings (Project: Smart glasses for visually impaired persons, pp. 157-163)The rise of smart technologies has created new opportunities to support blind and visually impaired persons (VIPs). One of the biggest problems we identified in our previous research on the problems VIPs face during activities of daily life concerned the recognition of persons and their facial expressions. In this study we developed a system to detect faces, recognize their emotions, and provide vibrotactile feedback about the emotions expressed. The prototype system was tested to determine whether vibrotactile feedback through a haptic belt is capable of enhancing social interactions for VIPs. The system consisted of commercially available technologies: a Logitech C920 webcam mounted on a cap, a Microsoft Surface Pro 4 carried in a mesh backpack, an Elitac tactile belt worn around the waist, and the VicarVision FaceReader software application, which recognizes facial expressions. In preliminary tests with the system, both visually impaired and sighted persons were presented with sets of stimuli consisting of actors displaying six emotions (joy, surprise, anger, sadness, fear, and disgust), derived from the validated Amsterdam Dynamic Facial Expression Set and the Warsaw Set of Emotional Facial Expression Pictures, with matching audio using nonlinguistic affect bursts. Subjects had to determine the emotions expressed in the videos without and, after a training period, with haptic feedback. An exit survey was conducted to gain insights into the users' opinions on the perceived usefulness and benefits of the emotional feedback, and their willingness to use the prototype as assistive technology in daily life. Haptic feedback about facial expressions may improve the ability of VIPs to determine emotions expressed by others and, as a result, increase the confidence of VIPs during social interactions.
More studies are needed to determine whether this is a viable method to convey information and enhance social interactions in the daily life of VIPs.
2016, Article / Letter to editor (Lecture Notes in Computer Science, vol. 9737, (2016), pp. 109-119)Smart wearable devices are integrated into our everyday lives. Such wearable technology is worn on or near the body, while leaving both hands free. This enables users to receive and send information in a non-obtrusive way. Because of the ability to continuously assist and support activities, wearables could be of great value to persons with a disability. Persons with a disability can only benefit from the potential of wearables if they are accessible. Like for other devices, platforms, and applications, developers of wearables need to take accessibility into account during early development, for example by including multimodal interfaces in the design. Even though some accessibility guidelines and standards exist for websites and mobile phones, more support for the development of accessible wearables is needed. The aim of our project is to develop a set of guidelines for accessible wearables. Three approaches are combined to develop the guidelines. A scan of the literature was done to identify publications addressing the accessibility of wearables and/or development guidelines. Semi-structured interviews were held with developers of accessible wearable technology. Based on these first activities, a draft set of guidelines is created. This draft is evaluated with developers and researchers in the field of universal design, accessibility, and wearables. Further, the draft is evaluated with visually impaired people (VIP) in interviews. Based on these results, a final set of guidelines will be created. This set is evaluated against an actual project in which apps are developed for VIP. This study is in progress; first results are presented (literature study, semi-structured interviews, first draft of guidelines), and a call for participation in the Delphi study is issued.
2016, Article in monograph or in proceedings (workshop)The June 2013 issue of IEEE Transactions on Professional Communication features a special section on 'Designing a Better User Experience for Self-Service Systems'. Self-service systems offer users the benefit of 24/7 access to an ever-growing range of services, and perhaps also a strong sense of autonomy and fulfillment. Three papers in this section approach the design of the user experience of self-service systems in an integrated way and show the readership of this journal what methods and techniques can be used in this type of design process. These are 'Identifying User Experience Factors for Mobile Incident Reporting in Urban Contexts,' by Bach, Bernhaupt, and Winckler; 'Improving User Experience for Passenger Information Systems. Prototypes and Reference Objects,' by Wirtz and Jakobs; and 'A User-Centered Design Approach to Self-Service Ticket Vending Machines,' by Siebenhandl, Schreder, Smuc, Mayr, and Nagl.
2014, Article in monograph or in proceedings (NordiCHI'14)In this paper we discuss mixed-method research in HCI. We report on an empirical literature study of the NordiCHI 2012 proceedings which aimed to uncover and describe common mixed-method approaches, and to identify good practices for mixed-methods research in HCI. We present our results as mixed-method research design patterns, which can be used to design, discuss and evaluate mixed-method research. Three dominant patterns are identified and fully described and three additional pattern candidates are proposed. With our pattern descriptions we aim to lay a foundation for a more thoughtful application of, and a stronger discourse about, mixed-method approaches in HCI.
2018, Article / Letter to editor (vol. 13, iss. 3, (2018))In face-to-face social interactions, blind and visually impaired persons (VIPs) lack access to nonverbal cues like facial expressions, body posture, and gestures, which may lead to impaired interpersonal communication. In this study, a wearable sensory substitution device (SSD) consisting of a head mounted camera and a haptic belt was evaluated to determine whether vibrotactile cues around the waist could be used to convey facial expressions to users and whether such a device is desired by VIPs for use in daily living situations. Ten VIPs (mean age: 38.8, SD: 14.4) and 10 sighted persons (SPs) (mean age: 44.5, SD: 19.6) participated in the study, in which validated sets of pictures, silent videos, and videos with audio of facial expressions were presented to the participant. A control measurement was first performed to determine how accurately participants could identify facial expressions while relying on their functional senses. After a short training, participants were asked to determine facial expressions while wearing the emotion feedback system. VIPs using the device showed significant improvements in their ability to determine which facial expressions were shown. A significant increase in accuracy of 44.4% was found across all types of stimuli when comparing the scores of the control (mean±SEM: 35.0±2.5%) and supported (mean±SEM: 79.4±2.1%) phases. The greatest improvements achieved with the support of the SSD were found for silent stimuli (68.3% for pictures and 50.8% for silent videos). SPs also showed consistent, though not statistically significant, improvements while supported. Overall, our study shows that vibrotactile cues are well suited to convey facial expressions to VIPs in real-time. Participants became skilled with the device after a short training session. Further testing and development of the SSD is required to improve its accuracy and aesthetics for potential daily use.
2022, Article in monograph or in proceedings (NordiCHI workshop - Age against the machine: A Call for Designing Ethical AI for and with Children)With the development of content-generating Artificial Intelligence (AI) systems, such as those generating images from a textual description, new possibilities for using such systems in design processes arise. In this position paper, we argue that we need to explicitly incorporate children's values when we develop design methods that incorporate content-generating AI, to protect their creative processes. In a mini-inquiry we find that children of different ages have articulate ideas about being in the same design space as a content-generating AI. They share concerns about fidelity, transparency, and how it changes the level playing field for them. To set up a safe and ethical design space when co-creating with children, we foresee three important steps: 1) explore children's values with respect to content-generating AI, 2) improve the accessibility of these systems for children, and 3) study the effect of using such a system on creativity and innovation in a design process.
2017, Article / Letter to editor (vol. 5, iss. 1, (2017), pp. 26-42)In this article, we present a method for analyzing the communication of people who exchange dynamic and complex information to come to a shared understanding of situations and of the actions planned and monitored by one party but executed remotely by another. To examine this situation, we analyzed dispatchers working in a police dispatch center in a large city in the Netherlands and their communication behavior in three different settings. The results of our analyses answer the question of how collaborating parties should assess an emergency situation in order to decide how to handle the incident in accordance with the procedures. Our results indicate which information must be communicated in order to deal with the current problem during the course of an incident. We also demonstrate that the proposed way of analyzing communication is needed to understand how information is collaboratively handled in complex tasks.
2021, Article in monograph or in proceedings (ACL Anthology -- proceedings of the Seventh International Workshop on Controlled Natural Language (CNL'21))