An overview of all sessions of this event.
Please select a location or a date to display only the relevant sessions. Select a session to go to its detail view.

MCI-SE02: Method Development & Exploration
Monday, 04.09.2023:
14:00 - 15:30

Session Chair: Jürgen Ziegler
Location: Building 4, Aula


Patient Journey Value Mapping: Illustrating values and experiences along the patient journey to support eHealth design

Michael Bui1, Kira Oberschmidt1,2, Christiane Grünloh1,2

1University of Twente, The Netherlands; 2Roessingh Research and Development, The Netherlands

This paper introduces patient journey value mapping, an approach to capturing the experiences, emotions, and values implicated in patients' care delivery. As patients' values (i.e., what is important to them in their lives) may change along their patient journeys, our approach aims to support designers in responding to patients' changing needs in the (re)design of eHealth by mapping patients' values and their prioritisations over time. To substantiate the creation of the map, we propose two preceding data-collection phases comprising complementary empirical methods. First, important care-related events and associated values are collected retrospectively through interviews and in situ through diary studies. Subsequently, the data are analysed to develop materials that elicit values and value tensions through deepening discussions in an interactive workshop, based on which the maps are finalised. The approach is illustrated through discussions and reflections on its application in a case study investigating patient values in eHealth for rehabilitation care.

Behind the Screens: Exploring Eye Movement Visualization to Optimize Online Teaching and Learning

Marian Sauter1, Tobias Wagner2, Teresa Hirzle2,3, Bao Xin Lin1, Enrico Rukzio2, Anke Huckauf1

1Ulm University, Institute for Psychology, Germany; 2Ulm University, Institute for Media Informatics, Germany; 3University of Copenhagen, Denmark

The effective delivery of e-learning relies on the continuous monitoring and management of students' attention. While instructors in traditional classroom settings can readily assess crowd attention through gaze cues, these cues are largely unavailable in online learning environments. To address this challenge, we collected eye-movement data from twenty students and developed four visualization methods: (a) a heat map, (b) an ellipse map, (c) two moving bars, and (d) one vertical bar, which were overlaid on 13 instructional videos. Our findings revealed unexpected preferences among instructors: contrary to expectations, they did not favor the established heat map and vertical bar for live online teaching but instead opted for the less intrusive ellipse visualization. The heat map nevertheless remained the preferred choice for retrospective analyses due to its more detailed information. Importantly, all visualizations were deemed useful and contributed to re-establishing emotional connections in online learning.

In conclusion, our visualizations of crowd attention demonstrate considerable potential for a broad range of applications, extending beyond e-learning to all online presentations and retrospective analyses. These results underscore the role such visualizations can play in enhancing both the effectiveness and the emotional connectedness of future e-learning experiences, thereby enriching the educational landscape.
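The gaze heat map described above can be thought of as a density grid: gaze samples from all viewers are binned over the video frame and normalized before being overlaid. The following is a minimal illustrative sketch of that idea; the function name, grid resolution, and screen dimensions are assumptions for illustration and are not taken from the paper.

```python
def gaze_heatmap(gaze_points, screen_w, screen_h, cols=64, rows=36):
    """Aggregate (x, y) gaze samples from many viewers into a
    rows x cols grid of densities normalized to [0, 1], suitable
    as a heat-map overlay on a video frame.

    Illustrative sketch only; the actual visualization pipeline in
    the study may differ (e.g., it may smooth the density).
    """
    grid = [[0.0] * cols for _ in range(rows)]
    for x, y in gaze_points:
        # Map the screen coordinate to a grid cell, clamping to the edges.
        c = min(int(x / screen_w * cols), cols - 1)
        r = min(int(y / screen_h * rows), rows - 1)
        grid[r][c] += 1.0
    peak = max(max(row) for row in grid)
    if peak > 0:
        grid = [[v / peak for v in row] for row in grid]
    return grid
```

A renderer would then color each cell by its normalized density (e.g., transparent for 0, warm colors near 1) and composite the grid over the instructional video.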

From ChatGPT to FactGPT: A Participatory Design Study to Mitigate the Effects of Large Language Model Hallucinations on Users

Florian Leiser1, Sven Eckhardt2, Merlin Knaeble1, Alexander Maedche1, Gerhard Schwabe2, Ali Sunyaev1

1Karlsruhe Institute of Technology, Germany; 2University of Zurich, Switzerland

Large language models (LLMs) like ChatGPT have recently gained interest across all walks of life due to the human-like quality of their textual responses. Despite their success in research, healthcare, and education, LLMs frequently include incorrect information, called hallucinations, in their responses. These hallucinations could lead users to trust fake news or change their general beliefs. We therefore investigate mitigation strategies desired by users to enable the identification of LLM hallucinations. To achieve this goal, we conduct a participatory design study in which everyday users design interface features that are then assessed for feasibility by machine learning (ML) experts. We find that many of the desired features are well received by ML experts but are also considered difficult to implement. Finally, we provide a list of desired features that should serve as a basis for mitigating the effect of LLM hallucinations on users.

Scalable Design Evaluation for Everyone! Designing Configuration Systems for Crowd-Feedback Request Generation

Saskia Haug, Sophia Sommerrock, Ivo Benke, Alexander Maedche

Karlsruhe Institute of Technology (KIT)

Design evaluation is an important step during software development to ensure that users' requirements are met. Crowd feedback represents an effective approach to tackling the scalability issues of traditional design evaluation methods. However, crowd-feedback systems are usually developed for a fixed use case, and designers lack knowledge of how to build individual crowd-feedback systems themselves. Consequently, such systems are rarely applied in practice. To address this challenge, we propose the design of a configuration system to support designers in creating individual crowd-feedback requests. Through expert interviews (N=14) and an exploratory literature review, we derive four design rationales for such configuration systems and propose a prototypical instantiation, which we evaluate in exploratory focus groups (N=10). The results show that feedback requesters appreciate guidance; however, the configuration system needs to strike a balance between complexity and flexibility. With our research, we contribute a generalizable concept that supports feedback requesters in creating individualized crowd-feedback requests, enabling scalable design evaluation for everyone.

Changes in Research Ethics, Openness, and Transparency in Empirical Studies between CHI 2017 and CHI 2022

Kavous Salehzadeh Niksirat1, Lahari Goswami1, Pooja S. B. Rao1, James Tyler1, Alessandro Silacci1,2, Sadiq Aliyu1, Annika Aebli1, Chat Wacharamanotham3, Mauro Cherubini1

1Université de Lausanne; 2HES-SO University of Applied Sciences and Arts Western Switzerland Fribourg, Switzerland; 3Swansea University

In recent years, various initiatives from within and outside the HCI field have encouraged researchers to improve research ethics, openness, and transparency in their empirical research. We quantify how the CHI literature may have changed in these three aspects by analyzing samples of 118 CHI 2017 and 127 CHI 2022 papers, randomly drawn and stratified across conference sessions. We operationalized research ethics, openness, and transparency into 45 criteria and manually annotated the sampled papers. The results show that the CHI 2022 sample improved on 18 criteria but showed no improvement on the remaining ones. The most noticeable improvements were related to research transparency (10 out of 17 criteria). We also explored the possibility of assisting the verification process by developing a proof-of-concept screening system, which we tested on eight criteria; six of them achieved high accuracy and F1 scores. We discuss the implications for future research practices and education.

This paper and all supplementary materials are freely available at