An overview of all sessions of this event.
Please select a location or a date to display only the relevant sessions. Select a session to go to its detail view.

MCI-SE06: Learning, Reading and Support
Wednesday, 06.09.2023:
9:00 - 10:30

Session Chair: Fiona Draxler
Location: Building 4, Aula


Accessible Text Tools for People with Cognitive Impairments and Non-Native Readers: Challenges and Opportunities

Hendrik Heuer1,2, Elena Leah Glassman3

1Universität Bremen; 2Institut für Informationsmanagement Bremen (ifib); 3Harvard University

Many people have problems with reading, which limits their ability to participate in society. This paper explores tools that make text more accessible. For this, we interviewed experts, who proposed tools for different stakeholders and scenarios. Important stakeholders of such tools are people with cognitive impairments and non-native readers. Frequently mentioned scenarios are public administration, the medical domain, and everyday life. The tools proposed by experts support stakeholders by improving how text is compressed, expanded, reviewed, and experienced. In a survey of stakeholders, we confirm that the scenarios are relevant and that the proposed tools appear helpful to them. We provide the Accessible Text Framework to help researchers understand how the different tools can be combined and discuss how individual tools can be implemented. The investigation shows that accessible text tools are an important HCI+AI challenge that a large number of people can benefit from.
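Tools that review text for accessibility often build on readability metrics. As an illustration only (not part of the paper's Accessible Text Framework), the following sketch scores English text with the classic Flesch Reading Ease formula, using a crude vowel-group syllable estimate:

```python
import re

def count_syllables(word: str) -> int:
    """Crude syllable estimate: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores indicate easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

A review tool could flag passages whose score falls below a chosen threshold as candidates for simplification.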

Supporting Software Developers Through a Gaze-Based Adaptive IDE

Thomas Weber, Rafael Vinicius Mourao Thiel, Sven Mayer

LMU Munich, Germany

Highly complex systems such as software development tools constantly gain features and, with them, complexity, risking overwhelming or distracting the user. We argue that automation and adaptation could help users focus on their work. However, the challenge is to determine correctly and promptly when to adapt what, as the users' intent is often unclear. To assist software developers, we built a gaze-adaptive integrated development environment that uses the developers' gaze as the source for learning appropriate adaptations. Beyond our experience of using gaze for an adaptive user interface, we also report feedback from developers on the desirability of such an interface, which indicated that adaptations in development tools need to strike a careful balance between automation and user control. Nonetheless, the developers see value in a gaze-based adaptive user interface and in how it could improve software development tools going forward.
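The abstract does not detail the adaptation mechanism; as a hedged illustration of the general idea, this sketch expands or collapses hypothetical IDE panels based on the share of gaze fixations each receives. Panel names and thresholds are assumptions, not values from the paper:

```python
from collections import Counter

class GazeAdaptivePanels:
    """Illustrative sketch: adapt IDE panel visibility from gaze samples.
    Panels the developer rarely fixates are collapsed; frequently
    fixated panels are expanded. Thresholds are arbitrary."""

    def __init__(self, panels, expand_share=0.3, collapse_share=0.05):
        self.fixations = Counter({p: 0 for p in panels})
        self.expand_share = expand_share
        self.collapse_share = collapse_share

    def record_fixation(self, panel):
        self.fixations[panel] += 1

    def layout(self):
        total = sum(self.fixations.values()) or 1
        state = {}
        for panel, n in self.fixations.items():
            share = n / total
            if share >= self.expand_share:
                state[panel] = "expanded"
            elif share <= self.collapse_share:
                state[panel] = "collapsed"
            else:
                state[panel] = "normal"
        return state
```

As the developer feedback in the paper suggests, such automatic layout changes would in practice need to be reversible or confirmable to preserve user control.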

Influence of Annotation Media on Proof-Reading Tasks

Andreas Schmid, Marie Sautmann, Vera Wittmann, Florian Kaindl, Philipp Schauhuber, Philipp Gottschalk, Raphael Wimmer

Universität Regensburg, Germany

Annotating and proof-reading documents are common tasks. Digital annotation tools provide easily searchable annotations and facilitate sharing documents and collaborating remotely with others. On the other hand, advantages of paper, such as creative freedom and intuitive use, can get lost when annotating digitally. A large body of research indicates that paper outperforms digital annotation tools in task time, error recall, and task load. However, most research in this field is rather old and takes into account neither the increased screen resolution and performance of modern devices nor their better input techniques. We present three user studies comparing different annotation media in the context of proof-reading tasks. We found that annotating on paper is still faster and less stressful than with a PC or tablet computer, but the difference is significantly smaller with a state-of-the-art device. We did not find a difference in error recall, but the medium used has a strong influence on how users annotate.
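For context, within-subject comparisons of task time across annotation media (paper vs. tablet, say) are typically analyzed with paired tests. A minimal sketch of the paired t statistic, illustrative only and not the authors' actual analysis:

```python
import math
import statistics

def paired_t(a, b):
    """Paired t statistic for within-subject measurements,
    e.g. each participant's task time on medium A vs. medium B."""
    diffs = [x - y for x, y in zip(a, b)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation
    return mean_d / (sd_d / math.sqrt(len(diffs)))
```

A large positive t here would indicate that times under condition `a` are systematically longer than under condition `b`.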

Enacted Selves in Technological Activities – Framework and Case Study in Immersive Telementoring

Bastian Dewitz1,2, Sobhan Moazemi1, Sebastian Kalkhoff1, Steven Kessler1, Christian Geiger3, Frank Steinicke2, Hug Aubin1, Falko Schmid1

1Universitätsklinikum Düsseldorf, Germany; 2Universität Hamburg, Germany; 3Hochschule Düsseldorf, Germany

The incorporation of new technology into existing human activities can be challenging. Numerous models have been proposed in human-computer interaction (HCI) to guide research and analyze effects. However, bridging the gap between experimental data and real-world applications often proves to be difficult. In the last decades, post-cognitivistic approaches have been developed to explain human cognition and the relation between humans and their environment. In this paper, we present a novel framework to systematically describe and analyze challenges in the context of HCI from multiple perspectives. It extends Cultural-Historical Activity Theory (CHAT) and is enriched by contemporary philosophical perspectives (enactivism, pattern theory of self and post-phenomenology). The proposed framework is further illustrated by applying it to an immersive telementoring prototype system.

Leveraging driver vehicle and environment interaction: Machine learning using driver monitoring cameras to detect drunk driving

Kevin Koch1, Martin Maritsch2, Eva van Weenen2, Stefan Feuerriegel3, Matthias Pfäffli4, Elgar Fleisch2,1, Wolfgang Weinmann4, Felix Wortmann1

1Institute of Technology Management, University of St. Gallen; 2Department of Management, Technology, and Economics, ETH Zurich; 3School of Management, LMU Munich; 4Institute of Forensic Medicine, University of Bern

Excessive alcohol consumption causes disability and death. Digital interventions are a promising means to promote behavioral change and thus prevent alcohol-related harm, especially in critical moments such as driving. This requires real-time information on a person’s blood alcohol concentration (BAC). Here, we develop an in-vehicle machine learning system to predict critical BAC levels. Our system leverages driver monitoring cameras that are mandated in numerous countries worldwide. We evaluate our system with n = 30 participants in an interventional simulator study. Our system reliably detects driving under any alcohol influence (area under the receiver operating characteristic curve [AUROC] 0.88) and driving above the WHO-recommended limit of 0.05 g/dL BAC (AUROC 0.79). Model inspection reveals reliance on pathophysiological effects associated with alcohol consumption. To our knowledge, we are the first to rigorously evaluate the use of driver monitoring cameras for detecting drunk driving. Our results highlight the potential of driver monitoring cameras and enable next-generation interventions against drunk driving that prevent alcohol-related harm.
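The reported AUROC values can be read as the probability that the classifier ranks a randomly chosen intoxicated drive above a randomly chosen sober one. A minimal sketch of this rank-based AUROC computation, illustrative only and not the authors' evaluation code:

```python
def auroc(labels, scores):
    """AUROC via the pairwise-comparison (Mann-Whitney U) formulation.
    labels: 1 = positive class (e.g. above the BAC limit), 0 = negative;
    scores: the model's predicted scores. Ties count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUROC of 0.5 corresponds to random ranking, and 1.0 to a classifier that scores every positive above every negative.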