An overview of all sessions of this event.
Please select a location or date to display only the relevant sessions. Select a session to go to its detail view.

MCI-SE01: Extended Reality
Monday, 04.09.2023:
11:00 - 12:30

Session Chair: Linda Hirsch
Location: Gebäude 4, Aula


Understanding the Effects of Perceived Avatar Appearance on Latency Sensitivity in Full-Body Motion-Tracked Virtual Reality

David Halbhuber1, Martin Kocur3, Alexander Kalus1, Kevin Angermeyer1, Valentin Schwind2, Niels Henze1

1Universität Regensburg; 2Frankfurt University of Applied Sciences; 3University of Applied Sciences Upper Austria

Latency in virtual reality (VR) can decrease the feeling of presence and body ownership. How users perceive latency, however, is plastic and affected by the design of the virtual content. Previous work found that an avatar's visual appearance, particularly its perceived fitness, can be leveraged to change user perception and behavior. Moreover, previous work investigating non-VR video games also demonstrated that controlling avatars that visually conform to users' expectations associated with the avatars' perceived characteristics increases the users' latency tolerance. However, it is currently unknown if the avatar's visual appearance can be used to modulate the users' latency sensitivity in full-body motion-tracked VR. Therefore, we conducted two studies to investigate if the avatars' appearance can be used to decrease the negative impact of latency. In the first study, 41 participants systematically determined two sets of avatars whose visual appearance is perceived to be more or less fit in two physically challenging tasks. In a second study (N = 16), we tested the two previously determined avatars (perceived to be more fit vs. perceived to be less fit) in the two tasks using VR with two levels of controlled latency (system vs. high). We found that embodying an avatar perceived as more fit significantly increases the participants' physical performance, body ownership, presence, and intrinsic motivation. While we show that latency negatively affects performance, our results also suggest that the avatar's visual appearance does not alter the effects of latency in VR.

Increasing Realism of Displayed Vibrating AR Objects through Edge Blurring

Marco Kurzweg, Maximilian Letter, Katrin Wolf

Berlin University of Applied Sciences and Technology, Germany

Many standard AR devices, such as the HoloLens 2, have limitations in displaying fast motions, such as those required to visualize moving or vibrating objects. One reason for this is their low computing power compared to other technologies, resulting in frame-rate drops. Furthermore, established visualization enhancement methods, such as anti-aliasing, cannot be applied because of their high computational demands. Therefore, we have looked at possible alternatives on the HoloLens 2 for displaying vibrations more realistically as long as these technical limitations exist. We chose to examine vibrations as they are widely used for different use cases, such as creating feedback, communicating the success of interactions, and generating a better scene understanding. In a user study, three different effects were evaluated against a baseline method, which was the representation of a vibration using a sine function to calculate the displacement of the object. We found that an effect in which the edges of the AR object are blurred (continuously with changing intensity) is perceived as significantly more realistic than the other effects and the baseline method.
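The baseline described above models vibration as a sinusoidal displacement of the object over time. A minimal sketch of that model follows; the amplitude and frequency values are illustrative assumptions, not parameters taken from the paper:

```python
import math

def vibration_displacement(t, amplitude=0.005, frequency=30.0):
    """Displacement (meters) of a vibrating object at time t (seconds),
    using a simple sinusoidal model: d(t) = A * sin(2*pi*f*t).
    Amplitude (5 mm) and frequency (30 Hz) are illustrative values."""
    return amplitude * math.sin(2.0 * math.pi * frequency * t)

period = 1.0 / 30.0
print(vibration_displacement(0.0))           # rest position: 0.0
print(vibration_displacement(period / 4.0))  # quarter period: peak displacement ~0.005
```

At each rendered frame, the object's position would be offset by this displacement; a low frame rate undersamples the sine wave, which is the motivation the abstract gives for the alternative visual effects.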

Head-anchored text placements and cognitive load in information-rich virtual environments

Nils A. Mack, Markus Heun, Marion Rose

Bergische Universität Wuppertal, Germany

In recent years, there has been a growing trend toward using information-rich virtual environments that combine virtual environments (VE) with text as educational tools. This has been made viable by advances in head-mounted displays (HMDs). However, the use of virtual reality (VR) can create a problem of high extraneous cognitive load (ECL) caused by the VR technology itself. This can hinder learning if it exceeds a user’s limited working memory. To reduce this load, designers of VEs can only address design elements like the text placement method. To give insights into the placement with the lowest ECL, we conducted a study with 30 participants, evaluating different head-anchored text placements in two abstracted, non-stationary learning tasks to assess their cognitive load, usability, and task load. Our results showed that head-anchored text should be placed above eye level for tasks at normal working height. Furthermore, the horizontal movement of text had little to no influence on cognitive load.

How Are Your Participants Feeling Today? Accounting For and Assessing Emotions in Virtual Reality

Rivu Radiah1, Pia Prodan4, Ville Mäkelä3, Pascal Knierim2, Florian Alt1

1Universität der Bundeswehr München, Germany; 2University of Innsbruck, Austria; 3University of Waterloo, Canada; 4LMU München, Germany

Emotions affect our perception, attention, and behavior. The emotional state is, in turn, strongly influenced by the surrounding environment, which can be designed seamlessly in Virtual Reality (VR). However, research typically does not account for the influence of the environment on participants' emotions, even though this influence might alter the acquired data. To mitigate this impact, we formulated a design space that explains how the creation of virtual environments influences emotions. Furthermore, we present EmotionEditor, a toolbox that assists researchers in rapidly developing virtual environments that influence and assess the users' emotional state. We evaluated the capability of EmotionEditor to elicit emotions in a lab study (n=30). Based on interviews with VR experts (n=13), we investigate how they consider the effect of emotions in their research and how EmotionEditor can prospectively support them, and we analyze prevalent challenges in the design and development of VR user studies.

Collaborating Across Realities: Analytical Lenses for Understanding Dyadic Collaboration in Transitional Interfaces

Jan-Henrik Schröder, Daniel Schacht, Niklas Peper, Anita Marie Hamurculu, Hans-Christian Jetter

Universität zu Lübeck

Transitional Interfaces are an as-yet underexplored, emerging class of cross-reality user interfaces that enable users to freely move along the reality-virtuality continuum during collaboration. To analyze and understand how such collaboration unfolds, we propose four analytical lenses derived from an exploratory study of transitional collaboration with 15 dyads. While solving a complex spatial optimization task, participants could freely switch between three contexts, each with different displays (desktop screens, tablet-based augmented reality, head-mounted virtual reality), input techniques (mouse, touch, handheld controllers), and visual representations (monoscopic and allocentric 2D/3D maps, stereoscopic egocentric views). Using the rich qualitative and quantitative data from our study, we evaluated participants’ perceptions of transitional collaboration and identified commonalities and differences between dyads. We then derived four lenses, including metrics and visualizations, to analyze key aspects of transitional collaboration: (1) place and distance, (2) temporal patterns, (3) group use of contexts, and (4) individual use of contexts.