Team

Marvin Prochazka

Research Aide

Room: S-M 201
Email:

Publications:

  • Liebers, Carina; Prochazka, Marvin; Pfützenreuter, Niklas; Liebers, Jonathan; Auda, Jonas; Gruenefeld, Uwe; Schneegass, Stefan: Pointing It out! Comparing Manual Segmentation of 3D Point Clouds between Desktop, Tablet, and Virtual Reality. In: International Journal of Human–Computer Interaction (2023), pp. 1-15. doi:10.1080/10447318.2023.2238945

    Scanning everyday objects with depth sensors is the state-of-the-art approach to generating point clouds for realistic 3D representations. However, the resulting point cloud data suffers from outliers and contains irrelevant data from neighboring objects. To obtain only the desired 3D representation, additional manual segmentation steps are required. In this paper, we compare three different technology classes as independent variables (desktop vs. tablet vs. virtual reality) in a within-subject user study (N = 18) to understand their effectiveness and efficiency for such segmentation tasks. We found that desktop and tablet still outperform virtual reality regarding task completion times, while we could not find a significant difference between them in the effectiveness of the segmentation. In the post hoc interviews, participants preferred the desktop for its familiarity and temporal efficiency, and virtual reality for its inherent three-dimensional representation.

  • Detjen, Henrik; Faltaous, Sarah; Keppel, Jonas; Prochazka, Marvin; Gruenefeld, Uwe; Sadeghian, Shadan; Schneegass, Stefan: Investigating the Influence of Gaze- and Context-Adaptive Head-up Displays on Take-Over Requests. In: ACM (Ed.): AutomotiveUI '22: Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. 2022. doi:10.1145/3543174.3546089

    In Level 3 automated vehicles, preparing drivers for take-over requests (TORs) on the head-up display (HUD) requires their repeated attention. Visually salient HUD elements can distract attention from potentially critical parts in a driving scene during a TOR. Further, attention is (a) meanwhile needed for non-driving-related activities and can (b) be over-requested. In this paper, we conduct a driving simulator study (N=12), varying required attention by HUD warning presence (absent vs. constant vs. TOR-only) across gaze-adaptivity (with vs. without) to fit warnings to the situation. We found that (1) drivers value visual support during TORs, (2) gaze-adaptive scene complexity reduction works but creates a benefit-neutralizing distraction for some, and (3) drivers perceive constant HUD warnings as annoying and distracting over time. Our findings highlight the need for (a) HUD adaptation based on user activities and potential TORs and (b) sparse use of warning cues in future HUD designs.

  • Faltaous, Sarah; Prochazka, Marvin; Auda, Jonas; Keppel, Jonas; Wittig, Nick; Gruenefeld, Uwe; Schneegass, Stefan: Give Weight to VR: Manipulating Users’ Perception of Weight in Virtual Reality with Electric Muscle Stimulation. Association for Computing Machinery, New York, NY, USA 2022. (ISBN 9781450396905) doi:10.1145/3543758.3547571

    Virtual Reality (VR) devices empower users to experience virtual worlds through rich visual and auditory sensations. However, believable haptic feedback that communicates the physical properties of virtual objects, such as their weight, is still unsolved in VR. The current trend towards hand tracking-based interactions, neglecting the typical controllers, further amplifies this problem. Hence, in this work, we investigate the combination of passive haptics and electric muscle stimulation to manipulate users’ perception of weight, and thus, simulate objects with different weights. In a laboratory user study, we investigate four differing electrode placements, stimulating different muscles, to determine which muscle results in the most potent perception of weight with the highest comfort. We found that actuating the biceps brachii or the triceps brachii muscles increased the weight perception of the users. Our findings lay the foundation for future investigations on weight perception in VR.