Team:

Dr. Uwe Grünefeld

Senior Academic Staff

Room:
SM 203
Phone:
+49 201 183-3298
Email:
Consultation Hour:
by appointment
Address:
Universität Duisburg-Essen
Institut für Informatik und Wirtschaftsinformatik (ICB)
Mensch-Computer Interaktion
Schützenbahn 70
45127 Essen
Author Profile:
Google Scholar
ResearchGate

Bio:

I am a postdoctoral researcher in Human-Computer Interaction at the University of Duisburg-Essen. I am fascinated by Augmented and Virtual Reality. My research has mainly focused on investigating out-of-view objects, peripheral visualization, and attention guidance.

Publications:

  • Faltaous, Sarah; Prochazka, Marvin; Auda, Jonas; Keppel, Jonas; Wittig, Nick; Gruenefeld, Uwe; Schneegass, Stefan: Give Weight to VR: Manipulating Users’ Perception of Weight in Virtual Reality with Electric Muscle Stimulation. Association for Computing Machinery, New York, NY, USA 2022. ISBN 9781450396905. doi:10.1145/3543758.3547571

    Virtual Reality (VR) devices empower users to experience virtual worlds through rich visual and auditory sensations. However, believable haptic feedback that communicates the physical properties of virtual objects, such as their weight, is still unsolved in VR. The current trend towards hand tracking-based interactions, neglecting the typical controllers, further amplifies this problem. Hence, in this work, we investigate the combination of passive haptics and electric muscle stimulation to manipulate users’ perception of weight, and thus, simulate objects with different weights. In a laboratory user study, we investigate four differing electrode placements, stimulating different muscles, to determine which muscle results in the most potent perception of weight with the highest comfort. We found that actuating the biceps brachii or the triceps brachii muscles increased the weight perception of the users. Our findings lay the foundation for future investigations on weight perception in VR.

  • Gruenefeld, Uwe; Auda, Jonas; Mathis, Florian; Schneegass, Stefan; Khamis, Mohamed; Gugenheimer, Jan; Mayer, Sven: VRception: Rapid Prototyping of Cross-Reality Systems in Virtual Reality. In: Proceedings of the 41st ACM Conference on Human Factors in Computing Systems (CHI). Association for Computing Machinery, New Orleans, United States 2022. doi:10.1145/3491102.3501821

    Cross-reality systems empower users to transition along the reality-virtuality continuum or collaborate with others experiencing different manifestations of it. However, prototyping these systems is challenging, as it requires sophisticated technical skills, time, and often expensive hardware. We present VRception, a concept and toolkit for quick and easy prototyping of cross-reality systems. By simulating all levels of the reality-virtuality continuum entirely in Virtual Reality, our concept overcomes the asynchronicity of realities, eliminating technical obstacles. Our VRception Toolkit leverages this concept to allow rapid prototyping of cross-reality systems and easy remixing of elements from all continuum levels. We replicated six cross-reality papers using our toolkit and presented them to their authors. Interviews with them revealed that our toolkit sufficiently replicates their core functionalities and allows quick iterations. Additionally, remote participants used our toolkit in pairs to collaboratively implement prototypes in about eight minutes that they would have otherwise expected to take days.

  • Auda, Jonas; Grünefeld, Uwe; Schneegass, Stefan: If The Map Fits! Exploring Minimaps as Distractors from Non-Euclidean Spaces in Virtual Reality. In: CHI ’22. ACM, 2022. doi:10.1145/3491101.3519621
  • Abdrabou, Yasmeen; Rivu, Radiah; Ammar, Tarek; Liebers, Jonathan; Saad, Alia; Liebers, Carina; Gruenefeld, Uwe; Knierim, Pascal; Khamis, Mohamed; Mäkelä, Ville; Schneegass, Stefan; Alt, Florian: Understanding Shoulder Surfer Behavior Using Virtual Reality. In: Proceedings of the IEEE conference on Virtual Reality and 3D User Interfaces (IEEE VR). IEEE, Christchurch, New Zealand 2022.

    We explore how attackers behave during shoulder surfing. Unfortunately, such behavior is challenging to study as it is often opportunistic and can occur wherever potential attackers can observe other people’s private screens. Therefore, we investigate shoulder surfing using virtual reality (VR). We recruited 24 participants and observed their behavior in two virtual waiting scenarios: at a bus stop and in an open office space. In both scenarios, avatars interacted with private screens displaying different content, thus providing opportunities for shoulder surfing. From the results, we derive an understanding of factors influencing shoulder surfing behavior.

  • Auda, Jonas; Grünefeld, Uwe; Kosch, Thomas; Schneegass, Stefan: The Butterfly Effect: Novel Opportunities for Steady-State Visually-Evoked Potential Stimuli in Virtual Reality. In: Augmented Humans. Kashiwa, Chiba, Japan 2022. doi:10.1145/3519391.3519397
  • Pascher, Max; Kronhardt, Kirill; Franzen, Til; Gruenefeld, Uwe; Schneegass, Stefan; Gerken, Jens: My Caregiver the Cobot: Comparing Visualization Techniques to Effectively Communicate Cobot Perception to People with Physical Impairments. In: Sensors (MDPI), Vol 22 (2022). doi:10.3390/s22030755

    Nowadays, robots are found in a growing number of areas where they collaborate closely with humans. Enabled by lightweight materials and safety sensors, these cobots are gaining increasing popularity in domestic care, where they support people with physical impairments in their everyday lives. However, when cobots perform actions autonomously, it remains challenging for human collaborators to understand and predict their behavior, which is crucial for achieving trust and user acceptance. One significant aspect of predicting cobot behavior is understanding their perception and comprehending how they "see" the world. To tackle this challenge, we compared three different visualization techniques for Spatial Augmented Reality. All of these communicate cobot perception by visually indicating which objects in the cobot's surroundings have been identified by their sensors. We compared the well-established visualizations Wedge and Halo against our proposed visualization Line in a remote user experiment with participants with physical impairments. In a second remote experiment, we validated these findings with a broader non-specific user base. Our findings show that Line, a lower complexity visualization, results in significantly faster reaction times compared to Halo, and lower task load compared to both Wedge and Halo. Overall, users prefer Line as a more straightforward visualization. In Spatial Augmented Reality, with its known disadvantage of limited projection area size, established off-screen visualizations are not effective in communicating cobot perception, and Line presents an easy-to-understand alternative.

  • Liebers, Jonathan; Brockel, Sascha; Gruenefeld, Uwe; Schneegass, Stefan: Identifying Users by Their Hand Tracking Data in Augmented and Virtual Reality. In: International Journal of Human–Computer Interaction (2022). doi:10.1080/10447318.2022.2120845

    Nowadays, Augmented and Virtual Reality devices are widely available and are often shared among users due to their high cost. Thus, distinguishing users to offer personalized experiences is essential. However, currently used explicit user authentication (e.g., entering a password) is tedious and vulnerable to attack. Therefore, this work investigates the feasibility of implicitly identifying users by their hand tracking data. In particular, we identify users by their uni- and bimanual finger behavior gathered from their interaction with eight different universal interface elements, such as buttons and sliders. In two sessions, we recorded the tracking data of 16 participants while they interacted with various interface elements in Augmented and Virtual Reality. We found that user identification is possible with up to 95% accuracy across sessions using an explainable machine learning approach. We conclude our work by discussing differences between interface elements and feature importance to provide implications for behavioral biometric systems.
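    As a rough illustration of this kind of behavioral-biometric pipeline (a minimal sketch with synthetic data and assumed feature names, not the paper's actual implementation), one can train an explainable classifier such as a random forest on per-interaction summary features of the hand tracking data and inspect its feature importances:

    ```python
    # Hypothetical sketch: cross-session user identification from
    # hand-tracking features. Data and features are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(42)
    n_users, samples_per_user, n_features = 4, 30, 8

    # Simulate per-interaction features (e.g., mean fingertip speed,
    # pinch duration, inter-finger distances) with a per-user offset
    # standing in for each user's characteristic behavior.
    X, y = [], []
    for user in range(n_users):
        offset = rng.normal(0, 1, n_features)
        X.append(offset + rng.normal(0, 0.3, (samples_per_user, n_features)))
        y.append(np.full(samples_per_user, user))
    X, y = np.vstack(X), np.concatenate(y)

    # Train on one half of the samples, test on the other half
    # (a stand-in for training and testing on separate sessions).
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[::2], y[::2])
    accuracy = clf.score(X[1::2], y[1::2])

    # Feature importances make the model explainable: they show which
    # behavioral features distinguish users the most.
    importances = clf.feature_importances_
    ```

    The interesting part for biometric systems is less the raw accuracy than the importance ranking, which indicates which interaction behaviors carry identity information.
    
    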

  • Liebers, Jonathan; Horn, Patrick; Burschik, Christian; Gruenefeld, Uwe; Schneegass, Stefan: Using Gaze Behavior and Head Orientation for Implicit Identification in Virtual Reality. In: Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology (VRST). Association for Computing Machinery, Osaka, Japan 2021. doi:10.1145/3489849.3489880

    Identifying users of a Virtual Reality (VR) headset provides designers of VR content with the opportunity to adapt the user interface, set user-specific preferences, or adjust the level of difficulty either for games or training applications. While most identification methods currently rely on explicit input, implicit user identification is less disruptive and does not impact the immersion of the users. In this work, we introduce a biometric identification system that employs the user’s gaze behavior as a unique, individual characteristic. In particular, we focus on the user’s gaze behavior and head orientation while following a moving stimulus. We verify our approach in a user study. A hybrid post-hoc analysis results in an identification accuracy of up to 75% for an explainable machine learning algorithm and up to 100% for a deep learning approach. We conclude by discussing application scenarios in which our approach can be used to implicitly identify users.

  • Auda, Jonas; Grünefeld, Uwe; Pfeuffer, Ken; Rivu, Radiah; Alt, Florian; Schneegass, Stefan: I'm in Control! Transferring Object Ownership Between Remote Users with Haptic Props in Virtual Reality. In: Proceedings of the 9th ACM Symposium on Spatial User Interaction (SUI). Association for Computing Machinery, 2021. doi:10.1145/3485279.3485287
  • Saad, Alia; Liebers, Jonathan; Gruenefeld, Uwe; Alt, Florian; Schneegass, Stefan: Understanding Bystanders’ Tendency to Shoulder Surf Smartphones Using 360-Degree Videos in Virtual Reality. In: Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction (MobileHCI). Association for Computing Machinery, Toulouse, France 2021. doi:10.1145/3447526.3472058

    Shoulder surfing is an omnipresent risk for smartphone users. However, investigating these attacks in the wild is difficult because of either privacy concerns, lack of consent, or the fact that asking for consent would influence people’s behavior (e.g., they could try to avoid looking at smartphones). Thus, we propose utilizing 360-degree videos in Virtual Reality (VR), recorded in staged real-life situations on public transport. Despite differences between perceiving videos in VR and experiencing real-world situations, we believe this approach allows gaining novel insights into observers’ tendency to shoulder surf another person’s phone authentication and interaction. By conducting a study (N=16), we demonstrate that a better understanding of shoulder surfers’ behavior can be obtained by analyzing gaze data during video watching and comparing it to post-hoc interview responses. On average, participants looked at the phone for about 11% of the time it was visible and could remember half of the applications used.

  • Auda, Jonas; Grünefeld, Uwe; Schneegass, Stefan: Enabling Reusable Haptic Props for Virtual Reality by Hand Displacement. In: Proceedings of the Conference on Mensch Und Computer (MuC). Association for Computing Machinery, Ingolstadt, Germany 2021, p. 412-417. doi:10.1145/3473856.3474000

    Virtual Reality (VR) enables compelling visual experiences. However, providing haptic feedback is still challenging. Previous work suggests utilizing haptic props to overcome such limitations and presents evidence that props could function as a single haptic proxy for several virtual objects. In this work, we displace users’ hands to account for virtual objects that are smaller or larger. Hence, the same haptic prop can represent several differently-sized virtual objects. We conducted a user study (N = 12) and presented our participants with two tasks during which we repeatedly handed them the same haptic prop while they saw differently-sized virtual objects in VR. In the first task, we used a linear hand displacement and increased the size of the virtual object to understand when participants perceive a mismatch. In the second task, we compared the linear displacement to logarithmic and exponential displacements. We found that participants, on average, do not perceive the size mismatch for virtual objects up to 50% larger than the physical prop. However, we did not find any differences between the explored displacement functions. We conclude our work with future research directions.
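    The core idea can be sketched as follows (our own illustration under assumed mapping functions, not the study's exact formulas): the virtual hand is offset from the tracked hand so that grasping the physical prop coincides with touching the surface of a differently-sized virtual object, and the offset can grow linearly, logarithmically, or exponentially with the size mismatch:

    ```python
    # Hypothetical sketch of hand-displacement gain functions.
    # Units and the exact mapping are assumptions for illustration.
    import math

    def displaced_hand_offset(prop_size, virtual_size, mode="linear"):
        """Offset applied to the virtual hand so a physical prop of
        prop_size can stand in for a virtual object of virtual_size.
        Assumes the offset bridges half the size difference (one side)."""
        diff = (virtual_size - prop_size) / 2.0
        if mode == "linear":
            return diff
        if mode == "logarithmic":  # grows more slowly for large mismatches
            return math.copysign(math.log1p(abs(diff)), diff)
        if mode == "exponential":  # grows faster for large mismatches
            return math.copysign(math.expm1(abs(diff)), diff)
        raise ValueError(f"unknown mode: {mode}")
    ```

    For example, a 10 cm prop representing a 15 cm virtual object yields a linear offset of 2.5 cm; the logarithmic variant displaces less and the exponential variant more for the same mismatch.
    
    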

  • Faltaous, Sarah; Gruenefeld, Uwe; Schneegass, Stefan: Towards a Universal Human-Computer Interaction Model for Multimodal Interactions. In: Proceedings of the Conference on Mensch Und Computer (MuC). Association for Computing Machinery, Ingolstadt, Germany 2021, p. 59-63. doi:10.1145/3473856.3474008

    Models in HCI describe and provide insights into how humans use interactive technology. They are used by engineers, designers, and developers to understand and formalize the interaction process. At the same time, novel interaction paradigms arise constantly introducing new ways of how interactive technology can support humans. In this work, we look into how these paradigms can be described using the classical HCI model introduced by Schomaker in 1995. We extend this model by presenting new relations that would provide a better understanding of them. For this, we revisit the existing interaction paradigms and try to describe their interaction using this model. The goal of this work is to highlight the need to adapt the models to new interaction paradigms and spark discussion in the HCI community on this topic.

  • Faltaous, Sarah; Janzon, Simon; Heger, Roman; Strauss, Marvin; Golkar, Pedram; Viefhaus, Matteo; Prochazka, Marvin; Gruenefeld, Uwe; Schneegass, Stefan: Wisdom of the IoT Crowd: Envisioning a Smart Home-Based Nutritional Intake Monitoring System. In: Proceedings of the Conference on Mensch Und Computer (MuC). Association for Computing Machinery, Ingolstadt, Germany 2021, p. 568-573. doi:10.1145/3473856.3474009

    Obesity and overweight are two factors linked to various health problems that lead to death in the long run. Technological advancements have granted the chance to create smart interventions. These interventions could be operated by the Internet of Things (IoT) that connects different smart home and wearable devices, providing a large pool of data. In this work, we use IoT with different technologies to present an exemplary nutritional intake monitoring system. This system integrates the input from various devices to understand the users’ behavior better and provide recommendations accordingly. Furthermore, we report on a preliminary evaluation through semi-structured interviews with six participants. Their feedback highlights the system’s opportunities and challenges.

  • Auda, Jonas; Heger, Roman; Gruenefeld, Uwe; Schneegaß, Stefan: VRSketch: Investigating 2D Sketching in Virtual Reality with Different Levels of Hand and Pen Transparency. In: 18th International Conference on Human–Computer Interaction (INTERACT). Springer, Bari, Italy 2021, p. 195-211. doi:10.1007/978-3-030-85607-6_14

    Sketching is a vital step in design processes. While analog sketching with pen and paper is the de facto standard, Virtual Reality (VR) seems promising for improving the sketching experience. It provides myriad new opportunities to express creative ideas. In contrast to reality, possible drawbacks of pen and paper drawing can be tackled by altering the virtual environment. In this work, we investigate how hand and pen transparency impacts users’ 2D sketching abilities. We conducted a lab study (N=20) investigating different combinations of hand and pen transparency. Our results show that a more transparent pen helps one sketch more quickly, while a transparent hand slows sketching down. Further, we found that transparency improves sketching accuracy while drawing in the direction that is occupied by the user’s hand.

  • Liebers, Jonathan; Abdelaziz, Mark; Mecke, Lukas; Saad, Alia; Auda, Jonas; Alt, Florian; Schneegaß, Stefan: Understanding User Identification in Virtual Reality Through Behavioral Biometrics and the Effect of Body Normalization. In: Proceedings of the 40th ACM Conference on Human Factors in Computing Systems (CHI). Association for Computing Machinery, Yokohama, Japan 2021. doi:10.1145/3411764.3445528

    Virtual Reality (VR) is becoming increasingly popular both in the entertainment and professional domains. Behavioral biometrics have recently been investigated as a means to continuously and implicitly identify users in VR. Applications in VR can specifically benefit from this, for example, to adapt virtual environments and user interfaces as well as to authenticate users. In this work, we conduct a lab study (N = 16) to explore how accurately users can be identified during two task-driven scenarios based on their spatial movement. We show that an identification accuracy of up to 90% is possible across sessions recorded on different days. Moreover, we investigate the role of users’ physiology in behavioral biometrics by virtually altering and normalizing their body proportions. We find that body normalization in general increases the identification rate, in some cases by up to 38%; hence, it improves the performance of identification systems.
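    One way to picture body normalization (a simplified sketch of our own; the study normalizes body proportions in the virtual environment itself, and the scaling scheme below is an assumption) is to rescale tracked movement data by each user's body dimensions, so that identification reflects behavior rather than stature:

    ```python
    # Hypothetical sketch: scaling a tracked 3D trajectory by body
    # proportions so users of different stature become comparable.
    import numpy as np

    def normalize_trajectory(positions, height, arm_span):
        """positions: (n, 3) array of tracked x/y/z coordinates in meters.
        Divides horizontal axes by arm span and the vertical axis by
        height. Illustrative only; not the paper's actual method."""
        positions = np.asarray(positions, dtype=float)
        scale = np.array([arm_span, height, arm_span])  # per-axis body scale
        return positions / scale
    ```

    After such a normalization, two users performing the same motion with different body sizes produce more similar trajectories, so a classifier picks up on movement style rather than physiology.
    
    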

  • Schultze, Sven; Gruenefeld, Uwe; Boll, Susanne: Demystifying Deep Learning: Developing and Evaluating a User-Centered Learning App for Beginners to Gain Practical Experience. In: i-com, Vol 19 (2020). doi:10.1515/icom-2020-0023

    Deep Learning has revolutionized Machine Learning, enhancing our ability to solve complex computational problems. From image classification to speech recognition, the technology can be beneficial in a broad range of scenarios. However, the barrier to entry is quite high, especially when programming skills are missing. In this paper, we present the development of a learning application that is easy to use, yet powerful enough to solve practical Deep Learning problems. We followed the human-centered design approach and conducted a technical evaluation to identify solvable classification problems. Afterwards, we conducted an online user evaluation to gain insights on users’ experience with the app, and to understand positive as well as negative aspects of our implemented concept. Our results show that participants liked using the app and found it useful, especially for beginners. Nonetheless, future iterations of the learning app should step-wise include more features to support advancing users.

  • Illing, Jannike; Klinke, Philipp; Gruenefeld, Uwe; Pfingsthorn, Max; Heuten, Wilko: Time is money! Evaluating Augmented Reality Instructions for Time-Critical Assembly Tasks. In: 19th International Conference on Mobile and Ubiquitous Multimedia (MUM). Association for Computing Machinery, Essen, Germany 2020, p. 277-287. doi:10.1145/3428361.3428398

    Manual assembly tasks require workers to precisely assemble parts in 3D space. Often, additional time pressure increases the complexity of these tasks even further (e.g., adhesive bonding processes). Therefore, we investigate how Augmented Reality (AR) can improve workers’ performance in time- and space-dependent process steps. In a user study, we compare three conditions: instructions presented on (a) paper, (b) a camera-based see-through tablet, and (c) a head-mounted AR device. For instructions, we used selected work steps from a standardized adhesive bonding process as a representative for common time-critical assembly tasks. We found that instructions in AR can improve performance and the understanding of temporal and spatial factors. The tablet instruction condition showed the best subjective results among the participants, which can increase motivation, particularly among less-experienced workers.

  • Faltaous, Sarah; Neuwirth, Joshua; Gruenefeld, Uwe; Schneegass, Stefan: SaVR: Increasing Safety in Virtual Reality Environments via Electrical Muscle Stimulation. In: 19th International Conference on Mobile and Ubiquitous Multimedia (MUM). Association for Computing Machinery, Essen, Germany 2020, p. 254-258. doi:10.1145/3428361.3428389

    One of the main benefits of interactive Virtual Reality (VR) applications is that they provide a high sense of immersion. As a result, users lose their sense of real-world space which makes them vulnerable to collisions with real-world objects. In this work, we propose a novel approach to prevent such collisions using Electrical Muscle Stimulation (EMS). EMS actively prevents the movement that would result in a collision by actuating the antagonist muscle. We report on a user study comparing our approach to the commonly used feedback modalities: audio, visual, and vibro-tactile. Our results show that EMS is a promising modality for restraining user movement and, at the same time, rated best in terms of user experience.

  • Gruenefeld, Uwe; Brueck, Yvonne; Boll, Susanne: Behind the Scenes: Comparing X-Ray Visualization Techniques in Head-Mounted Optical See-through Augmented Reality. In: 19th International Conference on Mobile and Ubiquitous Multimedia (MUM). Association for Computing Machinery, Essen, Germany 2020, p. 179-185. doi:10.1145/3428361.3428402

    Locating objects in the environment can be a difficult task, especially when the objects are occluded. With Augmented Reality, we can alter our perceived reality by augmenting it with visual cues or removing visual elements of reality, helping users to locate occluded objects. However, to our knowledge, it has not yet been evaluated which visualization technique works best for estimating the distance and size of occluded objects in optical see-through head-mounted Augmented Reality. To address this, we compare four different visualization techniques derived from previous work in a laboratory user study. Our results show that techniques utilizing additional aid (textual or with a grid) help users to estimate the distance to occluded objects more accurately. In contrast, a realistic rendering of the scene, such as a cutout in the wall, resulted in higher distance estimation errors.

  • Auda, Jonas; Gruenefeld, Uwe; Mayer, Sven: It Takes Two To Tango: Conflicts Between Users on the Reality-Virtuality Continuum and Their Bystanders. In: Proceedings of the 14th ACM Interactive Surfaces and Spaces (ISS). Association for Computing Machinery, Lisbon, Portugal 2020.

    Over the last years, Augmented and Virtual Reality technology has become more immersive. However, when users immerse themselves in these digital realities, they detach from their real-world environments. This detachment creates conflicts that are problematic in public spaces, such as planes, but also in private settings. Consequently, on the one hand, detaching from the world creates an immersive experience for the user; on the other hand, it creates a social conflict with bystanders. With this work, we highlight and categorize social conflicts caused by using immersive digital realities. We first present different social settings in which social conflicts arise and then provide an overview of investigated scenarios. Finally, we present research opportunities that help to address social conflicts between immersed users and bystanders.

  • Schneegaß, Stefan; Auda, Jonas; Heger, Roman; Grünefeld, Uwe; Kosch, Thomas: EasyEG: A 3D-printable Brain-Computer Interface. In: Proceedings of the 33rd ACM Symposium on User Interface Software and Technology (UIST). Minnesota, USA 2020. doi:10.1145/3379350.3416189

    Brain-Computer Interfaces (BCIs) are progressively adopted by the consumer market, making them available for a variety of use-cases. However, off-the-shelf BCIs are limited in their adjustments towards individual head shapes, evaluation of scalp-electrode contact, and extension through additional sensors. This work presents EasyEG, a BCI headset that is adaptable to individual head shapes and offers adjustable electrode-scalp contact to improve measuring quality. EasyEG consists of 3D-printed and low-cost components that can be extended by additional sensing hardware, hence expanding the application domain of current BCIs. We conclude with use-cases that demonstrate the potentials of our EasyEG headset.

  • Gruenefeld, Uwe; Prädel, Lars; Illing, Jannike; Stratmann, Tim; Drolshagen, Sandra; Pfingsthorn, Max: Mind the ARm: Realtime Visualization of Robot Motion Intent in Head-Mounted Augmented Reality. In: Proceedings of the Conference on Mensch Und Computer (MuC). Association for Computing Machinery, 2020, p. 259-266. doi:10.1145/3404983.3405509

    Established safety sensor technology shuts down industrial robots when a collision is detected, causing preventable loss of productivity. To minimize downtime, we implemented three Augmented Reality (AR) visualizations (Path, Preview, and Volume) which allow users to understand robot motion intent and give way to the robot. We compare the different visualizations in a user study in which a small cognitive task is performed in a shared workspace. We found that Preview and Path required significantly longer head rotations to perceive robot motion intent. Volume, however, required the shortest head rotation and was perceived as the safest, allowing the robot arm to come closer before participants left the shared workspace, without causing shutdowns.

  • Saad, Alia; Wittig, Nick; Grünefeld, Uwe; Schneegass, Stefan: A Systematic Analysis of External Factors Affecting Gait Identification.