Publications

Selected Publications

This page shows selected publications from recent years. For a complete list, please refer to Stefan Schneegass's Google Scholar or DBLP page.

  • Pascher, Max; Goldau, Felix Ferdinand; Kronhardt, Kirill; Frese, Udo; Gerken, Jens: AdaptiX – A Transitional XR Framework for Development and Evaluation of Shared Control Applications in Assistive Robotics. In: Proc. ACM Hum.-Comput. Interact., Vol 8 (2024) No EICS. doi:10.48550/ARXIV.2310.15887

    With the ongoing efforts to empower people with mobility impairments and the increase in technological acceptance by the general public, assistive technologies, such as collaborative robotic arms, are gaining popularity. Yet, their widespread success is limited by usability issues, specifically the disparity between user input and software control along the autonomy continuum. To address this, shared control concepts provide opportunities to combine the targeted increase of user autonomy with a certain level of computer assistance. This paper presents the free and open-source AdaptiX XR framework for developing and evaluating shared control applications in a high-resolution simulation environment. The initial framework consists of a simulated robotic arm with an example scenario in Virtual Reality (VR), multiple standard control interfaces, and a specialized recording/replay system. AdaptiX can easily be extended for specific research needs, allowing Human-Robot Interaction (HRI) researchers to rapidly design and test novel interaction methods, intervention strategies, and multi-modal feedback techniques, without requiring an actual physical robotic arm during the early phases of ideation, prototyping, and evaluation. Also, a Robot Operating System (ROS) integration enables control of a real robotic arm in a PhysicalTwin approach without any simulation-reality gap. Here, we review the capabilities and limitations of AdaptiX in detail and present three bodies of research based on the framework. AdaptiX can be accessed at adaptix.robot-research.de.

  • Liebers, Jonathan; Laskowski, Patrick; Rademaker, Florian; Sabel, Leon; Hoppen, Jordan; Gruenefeld, Uwe; Schneegass, Stefan: Kinetic Signatures: A Systematic Investigation of Movement-Based User Identification in Virtual Reality. In: Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’24). Association for Computing Machinery, New York, NY, USA 2024. doi:10.1145/3613904.3642471

    Behavioral Biometrics in Virtual Reality (VR) enable implicit user identification by leveraging the motion data of users’ heads and hands from their interactions in VR. This spatiotemporal data forms a Kinetic Signature, which is a user-dependent behavioral biometric trait. Although kinetic signatures have been widely used in recent research, the factors contributing to their degree of identifiability remain mostly unexplored. Drawing from existing literature, this work systematically examines the influence of static and dynamic components in human motion. We conducted a user study (N = 24) with two sessions to reidentify users across different VR sports and exercises after one week. We found that the identifiability of a kinetic signature depends on its inherent static and dynamic factors, with the best combination allowing for 90.91% identification accuracy after one week had passed. Therefore, this work lays a foundation for designing and refining movement-based identification protocols in immersive environments.

  • Nanavati, Amal; Pascher, Max; Ranganeni, Vinitha; Gordon, Ethan K.; Faulkner, Taylor Kessler; Srinivasa, Siddhartha S.; Cakmak, Maya; Alves-Oliveira, Patrícia; Gerken, Jens: Multiple Ways of Working with Users to Develop Physically Assistive Robots. In: A3DE '24: Workshop on Assistive Applications, Accessibility, and Disability Ethics at the ACM/IEEE International Conference on Human-Robot Interaction. 2024. doi:10.48550/arXiv.2403.00489

    Despite the growth of physically assistive robotics (PAR) research over the last decade, nearly half of PAR user studies do not involve participants with the target disabilities. There are several reasons for this (recruitment challenges, small sample sizes, and transportation logistics), all influenced by systemic barriers that people with disabilities face. However, it is well-established that working with end-users results in technology that better addresses their needs and integrates with their lived circumstances. In this paper, we reflect on multiple approaches we have taken to working with people with motor impairments across the design, development, and evaluation of three PAR projects: (a) assistive feeding with a robot arm; (b) assistive teleoperation with a mobile manipulator; and (c) shared control with a robot arm. We discuss these approaches to working with users along three dimensions (individual- vs. community-level insight, logistic burden on end-users vs. researchers, and benefit to researchers vs. community) and share recommendations for how other PAR researchers can incorporate users into their work.

  • Wozniak, Maciej K.; Pascher, Max; Ikeda, Bryce; Luebbers, Matthew B.; Jena, Ayesha: Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI). In: Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (HRI '24 Companion), March 11--14, 2024, Boulder, CO, USA. ACM, Boulder, Colorado, USA 2024. doi:10.1145/3610978.3638158

    The 7th International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) seeks to bring together researchers from human-robot interaction (HRI), robotics, and mixed reality (MR) to address the challenges related to mixed reality interactions between humans and robots. Key topics include the development of robots capable of interacting with humans in mixed reality, the use of virtual reality for creating interactive robots, designing augmented reality interfaces for communication between humans and robots, exploring mixed reality interfaces for enhancing robot learning, comparative analysis of the capabilities and perceptions of robots and virtual agents, and sharing best design practices. VAM-HRI 2024 will build on the success of VAM-HRI workshops held from 2018 to 2023, advancing research in this specialized community. The prior year’s website is located at vam-hri.github.io.

  • Pascher, Max; Saad, Alia; Liebers, Jonathan; Heger, Roman; Gerken, Jens; Schneegass, Stefan; Gruenefeld, Uwe: Hands-On Robotics: Enabling Communication Through Direct Gesture Control. In: Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (HRI '24 Companion), March 11--14, 2024, Boulder, CO, USA. ACM, Boulder, Colorado, USA 2024. doi:10.1145/3610978.3640635

    Effective Human-Robot Interaction (HRI) is fundamental to seamlessly integrating robotic systems into our daily lives. However, current communication modes require additional technological interfaces, which can be cumbersome and indirect. This paper presents a novel approach, using direct motion-based communication by moving a robot's end effector. Our strategy enables users to communicate with a robot by using four distinct gestures -- two handshakes ('formal' and 'informal') and two letters ('W' and 'S'). As a proof-of-concept, we conducted a user study with 16 participants, capturing subjective experience ratings and objective data for training machine learning classifiers. Our findings show that the four different gestures performed by moving the robot's end effector can be distinguished with close to 100% accuracy. Our research offers implications for the design of future HRI interfaces, suggesting that motion-based interaction can empower human operators to communicate directly with robots, removing the necessity for additional hardware.

  • Pascher, Max; Zinta, Kevin; Gerken, Jens: Exploring of Discrete and Continuous Input Control for AI-enhanced Assistive Robotic Arms. In: Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (HRI '24 Companion), March 11--14, 2024, Boulder, CO, USA. ACM, Boulder, Colorado, USA 2024. doi:10.1145/3610978.3640626

    Robotic arms, integral in domestic care for individuals with motor impairments, enable them to perform Activities of Daily Living (ADLs) independently, reducing dependence on human caregivers. These collaborative robots require users to manage multiple Degrees-of-Freedom (DoFs) for tasks like grasping and manipulating objects. Conventional input devices, typically limited to two DoFs, necessitate frequent and complex mode switches to control individual DoFs. Modern adaptive controls with feed-forward multi-modal feedback reduce the overall task completion time, number of mode switches, and cognitive load. Despite the variety of input devices available, their effectiveness in adaptive settings with assistive robotics has yet to be thoroughly assessed. This study explores three different input devices by integrating them into an established XR framework for assistive robotics, evaluating them in a preliminary study and providing empirical insights for future developments.

  • Pascher, Max: System and Method for Providing an Object-related Haptic Effect. German Patent and Trade Mark Office (DPMA), 2024.
  • Liebers, Carina; Megarajan, Pranav; Auda, Jonas; Stratmann, Tim C.; Pfingsthorn, Max; Gruenefeld, Uwe; Schneegass, Stefan: Keep the Human in the Loop: Arguments for Human Assistance in the Synthesis of Simulation Data for Robot Training. In: Multimodal Technologies and Interaction, Vol 8 (2024) No 3. doi:10.3390/mti8030018

    Robot training often takes place in simulated environments, particularly with reinforcement learning. Therefore, multiple training environments are generated using domain randomization to ensure transferability to real-world applications and compensate for unknown real-world states. We propose improving domain randomization by involving human application experts in various stages of the training process. Experts can provide valuable judgments on simulation realism, identify missing properties, and verify robot execution. Our human-in-the-loop workflow describes how they can enhance the process in five stages: validating and improving real-world scans, correcting virtual representations, specifying application-specific object properties, verifying and influencing simulation environment generation, and verifying robot training. We outline examples and highlight research opportunities. Furthermore, we present a case study in which we implemented different prototypes, demonstrating the potential of human experts in the given stages. Our early insights indicate that human input can benefit robot training at different stages.

  • Saad, Alia; Pascher, Max; Kassem, Khaled; Heger, Roman; Liebers, Jonathan; Schneegass, Stefan; Gruenefeld, Uwe: Hand-in-Hand: Investigating Mechanical Tracking for User Identification in Cobot Interaction. In: Proceedings of the International Conference on Mobile and Ubiquitous Multimedia (MUM). Vienna, Austria 2023. doi:10.1145/3626705.3627771

    Robots play a vital role in modern automation, with applications in manufacturing and healthcare. Collaborative robots integrate human and robot movements. Therefore, it is essential to ensure that interactions involve qualified, and thus identified, individuals. This study delves into a new approach: identifying individuals through robot arm movements. Unlike previous methods, users guide the robot, and the robot senses the movements via its joint sensors. We asked 18 participants to perform six gestures, revealing the potential of these movements as unique behavioral traits or biometrics and achieving an F1-score of up to 0.87, which suggests direct robot interaction is a promising avenue for implicit and explicit user identification.

  • Liebers, Jonathan; Burschik, Christian; Gruenefeld, Uwe; Schneegass, Stefan: Exploring the Stability of Behavioral Biometrics in Virtual Reality in a Remote Field Study: Towards Implicit and Continuous User Identification through Body Movements. In: Proceedings of the 29th ACM Symposium on Virtual Reality Software and Technology. Association for Computing Machinery, New York, NY, USA 2023. doi:10.1145/3611659.3615696

    Behavioral biometrics has recently become a viable alternative method for user identification in Virtual Reality (VR). Its ability to identify users based solely on their implicit interaction allows for high usability and removes the burden commonly associated with security mechanisms. However, little is known about the temporal stability of behavior (i.e., how behavior changes over time), as most previous works were evaluated in highly controlled lab environments over short periods. In this work, we present findings obtained from a remote field study (N = 15) that elicited data over a period of eight weeks from a popular VR game. We found that there are changes in people’s behavior over time, but that two-session identification is still possible with a mean F1-score of up to 71%, while an initial training yields 86%. However, we also see that performance can drop by more than 50 percentage points when testing with later sessions, compared to the first session, particularly for smaller groups. Thus, our findings indicate that the use of behavioral biometrics in VR is convenient for the user, practical with regard to changing behavior, and reliable despite behavioral variation.

  • Auda, Jonas; Grünefeld, Uwe; Faltaous, Sarah; Mayer, Sven; Schneegass, Stefan: A Scoping Survey on Cross-reality Systems. In: ACM Computing Surveys. 2023. doi:10.1145/3616536
    A Scoping Survey on Cross-reality Systems

    Immersive technologies such as Virtual Reality (VR) and Augmented Reality (AR) empower users to experience digital realities. Although they are known as distinct technology classes, the lines between them are becoming increasingly blurry with recent technological advancements. New systems enable users to interact across technology classes or transition between them—referred to as cross-reality systems. Nevertheless, these systems are not well understood. Hence, in this article, we conducted a scoping literature review to classify and analyze cross-reality systems proposed in previous work. First, we define these systems by distinguishing three different types. Thereafter, we compile a literature corpus of 306 relevant publications, analyze the proposed systems, and present a comprehensive classification, including research topics, involved environments, and transition types. Based on the gathered literature, we extract nine guiding principles that can inform the development of cross-reality systems. We conclude with research challenges and opportunities.

  • Auda, Jonas; Grünefeld, Uwe; Mayer, Sven; Faltaous, Sarah; Schneegass, Stefan: The Actuality-Time Continuum: Visualizing Interactions and Transitions Taking Place in Cross-Reality Systems. In: IEEE ISMAR 2023. Sydney 2023.

    In the last decade, researchers have contributed an increasing number of cross-reality systems and their evaluations. Going beyond individual technologies such as Virtual or Augmented Reality, these systems introduce novel approaches that help to solve relevant problems such as the integration of bystanders or physical objects. However, cross-reality systems are complex by nature, and describing the interactions and transitions taking place is a challenging task. Thus, in this paper, we propose the idea of the Actuality-Time Continuum, which aims to enable researchers and designers alike to visualize complex cross-reality experiences. Moreover, we present four visualization examples that illustrate the potential of our proposal and conclude with an outlook on future perspectives.

  • Keppel, Jonas; Strauss, Marvin; Faltaous, Sarah; Liebers, Jonathan; Heger, Roman; Gruenefeld, Uwe; Schneegass, Stefan: Don't Forget to Disinfect: Understanding Technology-Supported Hand Disinfection Stations. In: Proc. ACM Hum.-Comput. Interact., Vol 7 (2023). doi:10.1145/3604251

    The global COVID-19 pandemic created a constant need for hand disinfection. While it is still essential, disinfection use is declining with the decrease in perceived personal risk (e.g., as a result of vaccination). Thus, this work explores using different visual cues to act as reminders for hand disinfection. We investigated different public display designs using (1) paper-based only, adding (2) screen-based, or (3) projection-based visual cues. To gain insights into these designs, we conducted semi-structured interviews with passersby (N=30). Our results show that the screen- and projection-based conditions were perceived as more engaging. Furthermore, we conclude that the disinfection process consists of four steps that can be supported: drawing attention to the disinfection station, supporting the (subconscious) understanding of the interaction, motivating hand disinfection, and performing the action itself. We conclude with design implications for technology-supported disinfection.

  • Pascher, Max; Kronhardt, Kirill; Goldau, Felix Ferdinand; Frese, Udo; Gerken, Jens: In Time and Space: Towards Usable Adaptive Control for Assistive Robotic Arms. In: RO-MAN 2023 - IEEE International Conference on Robot and Human Interactive Communication. IEEE, Busan, Korea 2023, p. 2300-2307. doi:10.1109/RO-MAN57019.2023.10309381

    Robotic solutions, in particular robotic arms, are becoming more frequently deployed for close collaboration with humans, for example in manufacturing or domestic care environments. These robotic arms require the user to control several Degrees-of-Freedom (DoFs) to perform tasks, primarily involving grasping and manipulating objects. Standard input devices predominantly have two DoFs, requiring time-consuming and cognitively demanding mode switches to select individual DoFs. Contemporary Adaptive DoF Mapping Controls (ADMCs) have been shown to decrease the necessary number of mode switches but have so far not been able to significantly reduce the perceived workload. Users still bear the mental workload of incorporating abstract mode switching into their workflow. We address this by providing feed-forward multimodal feedback using updated recommendations of ADMC, allowing users to visually compare the current and the suggested mapping in real-time. We contrast the effectiveness of two new approaches that a) continuously recommend updated DoF combinations or b) use discrete thresholds between current robot movements and new recommendations. Both are compared in a Virtual Reality (VR) in-person study against a classic control method. Significant results for lowered task completion time, fewer mode switches, and reduced perceived workload conclusively establish that, in combination with feed-forward feedback, ADMC methods can indeed outperform classic mode switching. A lack of apparent quantitative differences between Continuous and Threshold reveals the importance of user-centered customization options. Including these implications in the development process will improve usability, which is essential for successfully implementing robotic technologies with high user acceptance.

  • Liebers, Carina; Prochazka, Marvin; Pfützenreuter, Niklas; Liebers, Jonathan; Auda, Jonas; Gruenefeld, Uwe; Schneegass, Stefan: Pointing It out! Comparing Manual Segmentation of 3D Point Clouds between Desktop, Tablet, and Virtual Reality. In: International Journal of Human–Computer Interaction (2023), p. 1-15. doi:10.1080/10447318.2023.2238945

    Scanning everyday objects with depth sensors is the state-of-the-art approach to generating point clouds for realistic 3D representations. However, the resulting point cloud data suffers from outliers and contains irrelevant data from neighboring objects. To obtain only the desired 3D representation, additional manual segmentation steps are required. In this paper, we compare three different technology classes as independent variables (desktop vs. tablet vs. virtual reality) in a within-subject user study (N = 18) to understand their effectiveness and efficiency for such segmentation tasks. We found that desktop and tablet still outperform virtual reality regarding task completion times, while we could not find a significant difference between them in the effectiveness of the segmentation. In the post hoc interviews, participants preferred the desktop due to its familiarity and temporal efficiency and virtual reality due to its given three-dimensional representation.

  • Abdrabou, Yasmeen; Rivu, Sheikh Radiah; Ammar, Tarek; Liebers, Jonathan; Saad, Alia; Liebers, Carina; Gruenefeld, Uwe; Knierim, Pascal; Khamis, Mohamed; Makela, Ville; Schneegass, Stefan: Understanding Shoulder Surfer Behavior and Attack Patterns Using Virtual Reality. In: Proceedings of the 2022 International Conference on Advanced Visual Interfaces (AVI 2022), p. 1-9. doi:10.1145/3531073.3531106
  • Liebers, Carina; Agarwal, Shivam; Beck, Fabian: CohExplore: Visually Supporting Students in Exploring Text Cohesion. In: Gillmann, Christina; Krone, Michael; Lenti, Simone (Ed.): EuroVis 2023 - Posters. The Eurographics Association, 2023. doi:10.2312/evp.20231058

    A cohesive text allows readers to follow the described ideas and events. Exploring cohesion in text might aid students in enhancing their academic writing. We introduce CohExplore, which promotes exploring and reflecting on the cohesion of a given text by visualizing computed cohesion-related metrics at an overview and a detailed level. Detected topics are color-coded, semantic similarity is shown via lines, while connectives and co-references in a paragraph are encoded using text decoration. Demonstrating the system, we share insights about a student-authored text.

  • Liebers, Carina; Agarwal, Shivam; Krug, Maximilian; Pitsch, Karola; Beck, Fabian: VisCoMET: Visually Analyzing Team Collaboration in Medical Emergency Trainings. In: Computer Graphics Forum (2023). doi:10.1111/cgf.14819

    Handling emergencies requires efficient and effective collaboration of medical professionals. To analyze their performance, in an application study, we have developed VisCoMET, a visual analytics approach displaying interactions of healthcare personnel in a triage training of a mass casualty incident. The application scenario stems from social interaction research, where the collaboration of teams is studied from different perspectives. We integrate recorded annotations from multiple sources, such as recorded videos of the sessions, transcribed communication, and eye-tracking information. For each session, an information-rich timeline visualizes events across these different channels, specifically highlighting interactions between the team members. We provide algorithmic support to identify frequent event patterns and to search for user-defined event sequences. Comparing different teams, an overview visualization aggregates each training session in a visual glyph as a node, connected to similar sessions through edges. An application example shows the usage of the approach in the comparative analysis of triage training sessions, where multiple teams encountered the same scene, and highlights discovered insights. The approach was evaluated through feedback from visualization and social interaction experts. The results show that the approach supports reflecting on teams’ performance by exploratory analysis of collaboration behavior while particularly enabling the comparison of triage training sessions.

  • Pascher, Max; Grünefeld, Uwe; Schneegass, Stefan; Gerken, Jens: How to Communicate Robot Motion Intent: A Scoping Review. In: ACM (Ed.): Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23). 2023. doi:10.1145/3544548.3580857

    Robots are becoming increasingly omnipresent in our daily lives, supporting us and carrying out autonomous tasks. In Human-Robot Interaction, human actors benefit from understanding the robot's motion intent to avoid task failures and foster collaboration. Finding effective ways to communicate this intent to users has recently received increased research interest. However, no common language has been established to systematize robot motion intent. This work presents a scoping review aimed at unifying existing knowledge. Based on our analysis, we present an intent communication model that depicts the relationship between robot and human through different intent dimensions (intent type, intent information, intent location). We discuss these different intent dimensions and their interrelationships with different kinds of robots and human roles. Throughout our analysis, we classify the existing research literature along our intent communication model, allowing us to identify key patterns and possible directions for future research.

  • Pascher, Max; Franzen, Til; Kronhardt, Kirill; Grünefeld, Uwe; Schneegass, Stefan; Gerken, Jens: HaptiX: Vibrotactile Haptic Feedback for Communication of 3D Directional Cues. In: ACM (Ed.): Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems - Extended Abstracts (CHI ’23). 2023. doi:10.1145/3544549.3585601

    In Human-Computer Interaction, vibrotactile haptic feedback offers the advantage of being independent of any visual perception of the environment. Most importantly, the user's field of view is not obscured by user interface elements, and the visual sense is not unnecessarily strained. This is especially advantageous when the visual channel is already busy, or the visual sense is limited. We developed three design variants based on different vibrotactile illusions to communicate 3D directional cues. In particular, we explored two variants based on the vibrotactile illusion of the cutaneous rabbit and one based on apparent vibrotactile motion. To communicate gradient information, we combined these with pulse-based and intensity-based mapping. A subsequent study showed that the pulse-based variants based on the vibrotactile illusion of the cutaneous rabbit are suitable for communicating both directional and gradient characteristics. The results further show that a representation of 3D directions via vibrations can be effective and beneficial.

  • Tarner, Hagen; Beck, Fabian: Visualizing Runtime Evolution Paths in a Multidimensional Space (Work In Progress Paper). In: Companion of the 2023 ACM/SPEC International Conference on Performance Engineering. ACM, Coimbra, Portugal 2023. doi:10.1145/3578245.3585031

    Runtime data of software systems is often of multivariate nature, describing different aspects of performance among other characteristics, and evolves along different versions or changes depending on the execution context. This poses a challenge for visualizations, which are typically only two- or three-dimensional. Using dimensionality reduction, we project the multivariate runtime data to 2D and visualize the result in a scatter plot. To show changes over time, we apply the projection to multiple timestamps and connect temporally adjacent points to form trajectories. This allows for cluster and outlier detection, analysis of co-evolution, and finding temporal patterns. While projected temporal trajectories have been applied to other domains before, we use them to visualize software evolution and execution context changes as evolution paths. We experiment with and report results of two application examples: (I) the runtime evolution along different versions of components from the Apache Commons project, and (II) a benchmark suite from scientific visualization comparing different rendering techniques along camera paths.
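
    To illustrate the general technique of projected temporal trajectories described in this abstract (a minimal sketch of the common approach, not the authors' implementation), the following Python snippet projects hypothetical multivariate runtime samples to 2D with PCA and connects temporally adjacent points into evolution paths:

    # Illustrative sketch; the synthetic metrics and PCA choice are assumptions.
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(42)
    n_versions, n_timestamps, n_metrics = 3, 20, 8

    # Hypothetical runtime metrics (e.g., frame time, memory) drifting over time.
    data = rng.normal(size=(n_versions, n_timestamps, n_metrics)).cumsum(axis=1)

    # Project all samples into one shared 2D space.
    points = PCA(n_components=2).fit_transform(data.reshape(-1, n_metrics))
    points = points.reshape(n_versions, n_timestamps, 2)

    # Connect temporally adjacent points to form one evolution path per version.
    for v in range(n_versions):
        plt.plot(points[v, :, 0], points[v, :, 1], marker="o", label=f"version {v}")
    plt.legend()
    plt.title("Runtime evolution paths in a projected 2D space")
    plt.show()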

  • Saad, Alia; Izadi, Kian; Ahmad Khan, Anam; Knierim, Pascal; Schneegass, Stefan; Alt, Florian; Abdelrahman, Yomna: HotFoot: Foot-Based User Identification Using Thermal Imaging. In: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA 2023. doi:10.1145/3544548.3580924

    We propose a novel method for seamlessly identifying users by combining thermal and visible feet features. While it is known that users’ feet have unique characteristics, these have so far been underutilized for biometric identification, as observing those features often requires the removal of shoes and socks. As thermal cameras are becoming ubiquitous, we foresee a new form of identification, using feet features and heat traces to reconstruct the footprint even while wearing shoes or socks. We collected a dataset of users’ feet (N = 21), wearing three types of footwear (personal shoes, standard shoes, and socks) on three floor types (carpet, laminate, and linoleum). By combining visual and thermal features, an AUC between 91.1% and 98.9% can be achieved, depending on floor and shoe type, with personal shoes on linoleum flooring performing best. Our findings demonstrate the potential of thermal imaging for continuous and unobtrusive user identification.

  • Escobar, Ronald; Sandoval Alcocer, Juan Pablo; Tarner, Hagen; Beck, Fabian; Bergel, Alexandre: Spike – A code editor plugin highlighting fine-grained changes. In: Working Conference on Software Visualization (VISSOFT). Limassol, Cyprus 2022. doi:10.1109/VISSOFT55257.2022.00026

    Information about source code changes is important for many software development activities. As such, modern IDEs, including IntelliJ IDEA and Visual Studio Code, show visual clues within the code editor that highlight lines that have been changed since the last synchronization with the code repository. However, the granularity of the change information is limited to the line level, showing mainly a small colored icon on the left side of the lines that have been added, deleted, or modified.

    This paper introduces Spike, a source code highlighting plugin that uses the font color to visually encode fine-grained version difference information within the code editor. In contrast to the previously mentioned tools, Spike can highlight insertions, deletions, updates, and refactorings all in the same line. Our plugin also enriches the source code with small icons that allow retrieving detailed information about a given code change. We perform an exploratory user study with five professional software engineers. Our results show that our approach is able to assist practitioners with complex comprehension tasks about software history within the code editor.

  • Tarner, Hagen; Bruder, Valentin; Frey, Steffen; Ertl, Thomas; Beck, Fabian: Visually Comparing Rendering Performance from Multiple Perspectives. In: Bender, Jan; Botsch, Mario; Keim, Daniel A. (Ed.): Vision, Modeling, and Visualization. The Eurographics Association, Konstanz 2022. doi:10.2312/vmv.20221211

    Evaluation of rendering performance is crucial when selecting or developing algorithms, but challenging as performance can differ widely across a set of selected scenarios. Despite this, performance metrics are often reported and compared in a highly aggregated way. In this paper, we suggest a more fine-grained approach for the evaluation of rendering performance, taking into account multiple perspectives on the scenario: camera position and orientation along different paths, rendering algorithms, image resolution, and hardware. The approach comprises a visual analysis system that shows and contrasts the data from these perspectives. The users can explore combinations of perspectives and gain insight into the performance characteristics of several rendering algorithms. A stylized representation of the camera path provides a base layout for arranging the multivariate performance data as radar charts, each comparing the same set of rendering algorithms while linking the performance data with the rendered images. To showcase our approach, we analyze two types of scientific visualization benchmarks.

  • Keppel, Jonas; Gruenefeld, Uwe; Strauss, Marvin; Gonzalez, Luis Ignacio Lopera; Amft, Oliver; Schneegass, Stefan: Reflecting on Approaches to Monitor User's Dietary Intake. MobileHCI 2022, Vancouver, Canada 2022.

    Monitoring dietary intake is essential to providing user feedback and achieving a healthier lifestyle. In the past, different approaches for monitoring dietary behavior have been proposed. In this position paper, we first present an overview of the state-of-the-art techniques grouped by image- and sensor-based approaches. After that, we introduce a case study in which we present a Wizard-of-Oz approach as an alternative and non-automatic monitoring method.

  • Keppel, Jonas; Öztürk, Alper; Herbst, Jean-Luc; Lewin, Stefan: Artificial Conscience - Fight the Inner Couch Potato. MobileHCI 2022, Vancouver, Canada 2022.

    The Artificial Conscience concept aims to improve the user’s quality of life by giving recommendations for a healthier lifestyle and reacting to possibly harmful situations detected by the various sensors of the Huawei Eyewear. However, autonomous reactions to situations that pose an immediate danger to the user’s health, as well as methods for habit-forming and other supporting functions, represent only a subset of the possible design space. All functions of this concept are described and evaluated individually, both under the assumption of autonomous operation of the Huawei Eyewear and with the inclusion of other data sources and sensors (smartphone, smartwatch). In addition, an outlook is given on additional features for the Huawei Eyewear that could be implemented in future versions of the glasses.

  • Detjen, Henrik; Faltaous, Sarah; Keppel, Jonas; Prochazka, Marvin; Gruenefeld, Uwe; Sadeghian, Shadan; Schneegass, Stefan: Investigating the Influence of Gaze- and Context-Adaptive Head-up Displays on Take-Over Requests. In: ACM (Ed.): AutomotiveUI '22: Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. 2022. doi:10.1145/3543174.3546089

    In Level 3 automated vehicles, preparing drivers for take-over requests (TORs) on the head-up display (HUD) requires their repeated attention. Visually salient HUD elements can distract attention from potentially critical parts of a driving scene during a TOR. Further, attention is (a) needed in the meantime for non-driving-related activities and can (b) be over-requested. In this paper, we conduct a driving simulator study (N=12), varying required attention by HUD warning presence (absent vs. constant vs. TOR-only) across gaze-adaptivity (with vs. without) to fit warnings to the situation. We found that (1) drivers value visual support during TORs, (2) gaze-adaptive scene complexity reduction works but creates a benefit-neutralizing distraction for some, and (3) drivers perceive constant HUD warnings as annoying and distracting over time. Our findings highlight the need for (a) HUD adaptation based on user activities and potential TORs and (b) sparse use of warning cues in future HUD designs.

  • Faltaous, Sarah; Prochazka, Marvin; Auda, Jonas; Keppel, Jonas; Wittig, Nick; Gruenefeld, Uwe; Schneegass, Stefan: Give Weight to VR: Manipulating Users’ Perception of Weight in Virtual Reality with Electric Muscle Stimulation. Association for Computing Machinery, New York, NY, USA 2022. doi:10.1145/3543758.3547571

    Virtual Reality (VR) devices empower users to experience virtual worlds through rich visual and auditory sensations. However, believable haptic feedback that communicates the physical properties of virtual objects, such as their weight, is still unsolved in VR. The current trend towards hand tracking-based interactions, neglecting the typical controllers, further amplifies this problem. Hence, in this work, we investigate the combination of passive haptics and electric muscle stimulation to manipulate users’ perception of weight, and thus, simulate objects with different weights. In a laboratory user study, we investigate four differing electrode placements, stimulating different muscles, to determine which muscle results in the most potent perception of weight with the highest comfort. We found that actuating the biceps brachii or the triceps brachii muscles increased the weight perception of the users. Our findings lay the foundation for future investigations on weight perception in VR.

  • Grünefeld, Uwe; Geilen, Alexander; Liebers, Jonathan; Wittig, Nick; Koelle, Marion; Schneegass, Stefan: ARm Haptics: 3D-Printed Wearable Haptics for Mobile Augmented Reality. In: Proc. ACM Hum.-Comput. Interact., Vol 6 (2022). doi:10.1145/3546728

    Augmented Reality (AR) technology enables users to superpose virtual content onto their environments. However, interacting with virtual content while mobile often requires users to perform interactions in mid-air, resulting in a lack of haptic feedback. Hence, in this work, we present the ARm Haptics system, which is worn on the user's forearm and provides 3D-printed input modules, each representing well-known interaction components such as buttons, sliders, and rotary knobs. These modules can be changed quickly, thus allowing users to adapt them to their current use case. After an iterative development of our system, which involved a focus group with HCI researchers, we conducted a user study to compare the ARm Haptics system to hand-tracking-based interaction in mid-air (baseline). Our findings show that using our system results in significantly lower error rates for slider and rotary input. Moreover, use of the ARm Haptics system results in significantly higher pragmatic quality and lower effort, frustration, and physical demand. Following our findings, we discuss opportunities for haptics worn on the forearm.

  • Schneegass, Stefan; Saad, Alia; Heger, Roman; Delgado Rodriguez, Sarah; Poguntke, Romina; Alt, Florian: An Investigation of Shoulder Surfing Attacks on Touch-Based Unlock Events. In: Proc. ACM Hum.-Comput. Interact., Vol 6 (2022). doi:10.1145/3546742

    This paper contributes to our understanding of user-centered attacks on smartphones. In particular, we investigate the likelihood of so-called shoulder surfing attacks during touch-based unlock events and provide insights into users' views and perceptions. To do so, we ran a two-week in-the-wild study (N=12) in which we recorded images with a 180-degree field of view lens that was mounted on the smartphone's front-facing camera. In addition, we collected contextual information and allowed participants to assess the situation. We found that only a small fraction of shoulder surfing incidents that occur during authentication are actually perceived as threatening. Furthermore, our findings suggest that our notions of (un)safe places need to be rethought. Our work is complemented by a discussion of implications for future user-centered attack-aware systems. This work can serve as a basis for usable security researchers to better design systems against user-centered attacks.

  • Pascher, Max; Kronhardt, Kirill; Franzen, Til; Gerken, Jens: Adaptive DoF: Concepts to Visualize AI-generated Movements in Human-Robot Collaboration. In: Proceedings of the 2022 International Conference on Advanced Visual Interfaces (AVI 2022). ACM, New York, NY, USA 2022. doi:10.1145/3531073.3534479

    Nowadays, robots collaborate closely with humans in a growing number of areas. Enabled by lightweight materials and safety sensors, these cobots are gaining increasing popularity in domestic care, supporting people with physical impairments in their everyday lives. However, when cobots perform actions autonomously, it remains challenging for human collaborators to understand and predict their behavior. This, however, is crucial for achieving trust and user acceptance. One significant aspect of predicting cobot behavior is understanding their motion intent and comprehending how they "think" about their actions. We work on solutions that communicate the cobot's AI-generated motion intent to a human collaborator. Effective communication enables users to proceed with the most suitable option. We present a design exploration with different visualization techniques to optimize this user understanding, ideally resulting in increased safety and end-user acceptance.

  • Grünefeld, Uwe; Auda, Jonas; Mathis, Florian; Schneegass, Stefan; Khamis, Mohamed; Gugenheimer, Jan; Mayer, Sven: VRception: Rapid Prototyping of Cross-Reality Systems in Virtual Reality. In: Proceedings of the 41st ACM Conference on Human Factors in Computing Systems (CHI). Association for Computing Machinery, New Orleans, United States 2022. doi:10.1145/3491102.3501821

    Cross-reality systems empower users to transition along the reality-virtuality continuum or collaborate with others experiencing different manifestations of it. However, prototyping these systems is challenging, as it requires sophisticated technical skills, time, and often expensive hardware. We present VRception, a concept and toolkit for quick and easy prototyping of cross-reality systems. By simulating all levels of the reality-virtuality continuum entirely in Virtual Reality, our concept overcomes the asynchronicity of realities, eliminating technical obstacles. Our VRception Toolkit leverages this concept to allow rapid prototyping of cross-reality systems and easy remixing of elements from all continuum levels. We replicated six cross-reality papers using our toolkit and presented them to their authors. Interviews with them revealed that our toolkit sufficiently replicates their core functionalities and allows quick iterations. Additionally, remote participants used our toolkit in pairs to collaboratively implement prototypes in about eight minutes that they would have otherwise expected to take days.

  • Auda, Jonas; Grünefeld, Uwe; Schneegass, Stefan: If The Map Fits! Exploring Minimaps as Distractors from Non-Euclidean Spaces in Virtual Reality. In: Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems (CHI EA ’22). ACM, 2022. doi:10.1145/3491101.3519621
  • Abdrabou, Yasmeen; Rivu, Radiah; Ammar, Tarek; Liebers, Jonathan; Saad, Alia; Liebers, Carina; Gruenefeld, Uwe; Knierim, Pascal; Khamis, Mohamed; Mäkelä, Ville; Schneegass, Stefan; Alt, Florian: Understanding Shoulder Surfer Behavior Using Virtual Reality. In: Proceedings of the IEEE conference on Virtual Reality and 3D User Interfaces (IEEE VR). IEEE, Christchurch, New Zealand 2022.

    We explore how attackers behave during shoulder surfing. Unfortunately, such behavior is challenging to study as it is often opportunistic and can occur wherever potential attackers can observe other people’s private screens. Therefore, we investigate shoulder surfing using virtual reality (VR). We recruited 24 participants and observed their behavior in two virtual waiting scenarios: at a bus stop and in an open office space. In both scenarios, avatars interacted with private screens displaying different content, thus providing opportunities for shoulder surfing. From the results, we derive an understanding of factors influencing shoulder surfing behavior.

  • Auda, Jonas; Grünefeld, Uwe; Kosch, Thomas; Schneegass, Stefan: The Butterfly Effect: Novel Opportunities for Steady-State Visually-Evoked Potential Stimuli in Virtual Reality. In: Augmented Humans (AHs ’22). Kashiwa, Chiba, Japan 2022. doi:10.1145/3519391.3519397
  • Pascher, Max; Kronhardt, Kirill; Franzen, Til; Gruenefeld, Uwe; Schneegass, Stefan; Gerken, Jens: My Caregiver the Cobot: Comparing Visualization Techniques to Effectively Communicate Cobot Perception to People with Physical Impairments. In: MDPI Sensors, Vol 22 (2022). doi:10.3390/s22030755

    Nowadays, robots are found in a growing number of areas where they collaborate closely with humans. Enabled by lightweight materials and safety sensors, these cobots are gaining increasing popularity in domestic care, where they support people with physical impairments in their everyday lives. However, when cobots perform actions autonomously, it remains challenging for human collaborators to understand and predict their behavior, which is crucial for achieving trust and user acceptance. One significant aspect of predicting cobot behavior is understanding their perception and comprehending how they "see" the world. To tackle this challenge, we compared three different visualization techniques for Spatial Augmented Reality. All of these communicate cobot perception by visually indicating which objects in the cobot's surrounding have been identified by their sensors. We compared the well-established visualizations Wedge and Halo against our proposed visualization Line in a remote user experiment with participants suffering from physical impairments. In a second remote experiment, we validated these findings with a broader non-specific user base. Our findings show that Line, a lower complexity visualization, results in significantly faster reaction times compared to Halo, and lower task load compared to both Wedge and Halo. Overall, users prefer Line as a more straightforward visualization. In Spatial Augmented Reality, with its known disadvantage of limited projection area size, established off-screen visualizations are not effective in communicating cobot perception and Line presents an easy-to-understand alternative.

  • Kronhardt, Kirill; Rübner, Stephan; Pascher, Max; Goldau, Felix Ferdinand; Frese, Udo; Gerken, Jens: Adapt or Perish? Exploring the Effectiveness of Adaptive DoF Control Interaction Methods for Assistive Robot Arms. In: Technologies, Vol 10 (2022). doi:10.3390/technologies10010030

    Robot arms are one of many assistive technologies used by people with motor impairments. Assistive robot arms can allow people to perform activities of daily living (ADL) involving grasping and manipulating objects in their environment without the assistance of caregivers. Suitable input devices (e.g., joysticks) mostly have two Degrees of Freedom (DoF), while most assistive robot arms have six or more. This results in time-consuming and cognitively demanding mode switches to change the mapping of DoFs to control the robot. One option to decrease the difficulty of controlling a high-DoF assistive robot arm using a low-DoF input device is to assign different combinations of movement-DoFs to the device's input DoFs depending on the current situation (adaptive control). To explore this method of control, we designed two adaptive control methods for a realistic virtual 3D environment. We evaluated our methods against a commonly used non-adaptive control method that requires the user to switch controls manually. This was conducted in a simulated remote study that used Virtual Reality and involved 39 non-disabled participants. Our results show that the number of mode switches necessary to complete a simple pick-and-place task decreases significantly when using an adaptive control type. In contrast, the task completion time and workload stay the same. A thematic analysis of qualitative feedback of our participants suggests that a longer period of training could further improve the performance of adaptive control methods.

  • Liebers, Jonathan; Brockel, Sascha; Gruenefeld, Uwe; Schneegass, Stefan: Identifying Users by Their Hand Tracking Data in Augmented and Virtual Reality. In: International Journal of Human–Computer Interaction (2022). doi:10.1080/10447318.2022.2120845

    Nowadays, Augmented and Virtual Reality devices are widely available and are often shared among users due to their high cost. Thus, distinguishing users to offer personalized experiences is essential. However, currently used explicit user authentication (e.g., entering a password) is tedious and vulnerable to attack. Therefore, this work investigates the feasibility of implicitly identifying users by their hand tracking data. In particular, we identify users by their uni- and bimanual finger behavior gathered from their interaction with eight different universal interface elements, such as buttons and sliders. In two sessions, we recorded the tracking data of 16 participants while they interacted with various interface elements in Augmented and Virtual Reality. We found that user identification is possible with up to 95% accuracy across sessions using an explainable machine learning approach. We conclude our work by discussing differences between interface elements and feature importance to provide implications for behavioral biometric systems.

  • Keppel, Jonas; Liebers, Jonathan; Auda, Jonas; Gruenefeld, Uwe; Schneegass, Stefan: ExplAInable Pixels: Investigating One-Pixel Attacks on Deep Learning Models with Explainable Visualizations. In: Proceedings of the 21st International Conference on Mobile and Ubiquitous Multimedia. Association for Computing Machinery, New York, NY, USA 2022, p. 231-242. doi:10.1145/3568444.3568469

    Nowadays, deep learning models enable numerous safety-critical applications, such as biometric authentication, medical diagnosis support, and self-driving cars. However, previous studies have frequently demonstrated that these models are attackable through slight modifications of their inputs, so-called adversarial attacks. Hence, researchers proposed investigating examples of these attacks with explainable artificial intelligence to understand them better. In this line, we developed an expert tool to explore adversarial attacks and defenses against them. To demonstrate the capabilities of our visualization tool, we worked with the publicly available CIFAR-10 dataset and generated one-pixel attacks. After that, we conducted an online evaluation with 16 experts. We found that our tool is usable and practical, providing evidence that it can support understanding, explaining, and preventing adversarial examples.

  • Liebers, Jonathan; Horn, Patrick; Burschik, Christian; Gruenefeld, Uwe; Schneegass, Stefan: Using Gaze Behavior and Head Orientation for Implicit Identification in Virtual Reality. In: Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology (VRST). Association for Computing Machinery, Osaka, Japan 2021. doi:10.1145/3489849.3489880

    Identifying users of a Virtual Reality (VR) headset provides designers of VR content with the opportunity to adapt the user interface, set user-specific preferences, or adjust the level of difficulty either for games or training applications. While most identification methods currently rely on explicit input, implicit user identification is less disruptive and does not impact the immersion of the users. In this work, we introduce a biometric identification system that employs the user’s gaze behavior as a unique, individual characteristic. In particular, we focus on the user’s gaze behavior and head orientation while following a moving stimulus. We verify our approach in a user study. A hybrid post-hoc analysis results in an identification accuracy of up to 75% for an explainable machine learning algorithm and up to 100% for a deep learning approach. We conclude with discussing application scenarios in which our approach can be used to implicitly identify users.

  • Auda, Jonas; Mayer, Sven; Verheyen, Nils; Schneegass, Stefan: Flyables: Haptic Input Devices for Virtual Reality using Quadcopters. In: Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology (VRST). Association for Computing Machinery, Osaka, Japan 2021. doi:10.1145/3489849.3489855
  • Latif, Shahid; Agarwal, Shivam; Gottschalk, Simon; Chrosch, Carina; Feit, Felix; Jahn, Johannes; Braun, Tobias; Tchenko, Yanick Christian; Demidova, Elena; Beck, Fabian: Visually Connecting Historical Figures Through Event Knowledge Graphs. In: 2021 IEEE Visualization Conference (VIS) - Short Papers. IEEE, 2021. doi:10.1109/VIS49827.2021.9623313

    Knowledge graphs store information about historical figures and their relationships indirectly through shared events. We developed a visualization system, VisKonnect, for analyzing the intertwined lives of historical figures based on the events they participated in. A user's query is parsed to identify named entities, and related data is retrieved from an event knowledge graph. While a short textual answer to the query is generated using the GPT-3 language model, various linked visualizations provide context, display additional information related to the query, and allow exploration.

  • Auda, Jonas; Grünefeld, Uwe; Pfeuffer, Ken; Rivu, Radiah; Alt, Florian; Schneegass, Stefan: I'm in Control! Transferring Object Ownership Between Remote Users with Haptic Props in Virtual Reality. In: Proceedings of the 9th ACM Symposium on Spatial User Interaction (SUI). Association for Computing Machinery, 2021. doi:10.1145/3485279.3485287
  • Saad, Alia; Liebers, Jonathan; Gruenefeld, Uwe; Alt, Florian; Schneegass, Stefan: Understanding Bystanders’ Tendency to Shoulder Surf Smartphones Using 360-Degree Videos in Virtual Reality. In: Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction (MobileHCI). Association for Computing Machinery, Toulouse, France 2021. doi:10.1145/3447526.3472058

    Shoulder surfing is an omnipresent risk for smartphone users. However, investigating these attacks in the wild is difficult because of either privacy concerns, lack of consent, or the fact that asking for consent would influence people’s behavior (e.g., they could try to avoid looking at smartphones). Thus, we propose utilizing 360-degree videos in Virtual Reality (VR), recorded in staged real-life situations on public transport. Despite differences between perceiving videos in VR and experiencing real-world situations, we believe this approach allows novel insights to be gained into observers’ tendency to shoulder surf another person’s phone authentication and interaction. By conducting a study (N=16), we demonstrate that a better understanding of shoulder surfers’ behavior can be obtained by analyzing gaze data during video watching and comparing it to post-hoc interview responses. On average, participants looked at the phone for about 11% of the time it was visible and could remember half of the applications used.

  • Auda, Jonas; Grünefeld, Uwe; Schneegass, Stefan: Enabling Reusable Haptic Props for Virtual Reality by Hand Displacement. In: Proceedings of the Conference on Mensch Und Computer (MuC). Association for Computing Machinery, Ingolstadt, Germany 2021, p. 412-417. doi:10.1145/3473856.3474000

    Virtual Reality (VR) enables compelling visual experiences. However, providing haptic feedback is still challenging. Previous work suggests utilizing haptic props to overcome such limitations and presents evidence that props could function as a single haptic proxy for several virtual objects. In this work, we displace users’ hands to account for virtual objects that are smaller or larger. Hence, the used haptic prop can represent several differently-sized virtual objects. We conducted a user study (N = 12) and presented our participants with two tasks during which we continuously handed them the same haptic prop while they saw differently-sized virtual objects in VR. In the first task, we used a linear hand displacement and increased the size of the virtual object to understand when participants perceive a mismatch. In the second task, we compared the linear displacement to logarithmic and exponential displacements. We found that participants, on average, do not perceive the size mismatch for virtual objects up to 50% larger than the physical prop. However, we did not find any differences between the explored displacement functions. We conclude our work with future research directions.
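
    To make the displacement idea concrete, here is a minimal geometric sketch in Python (an assumption for illustration, not the paper's implementation): the virtual hand's offset from the object's center is scaled so that touching the physical prop's surface coincides with the surface of a larger virtual object.

    # Illustrative sketch (assumed, not taken from the paper): scale the hand's
    # offset from the object center so a fixed physical prop can stand in for
    # a differently sized virtual object.
    def displaced_virtual_hand(physical_hand, object_center, scale):
        """Return the rendered hand position for a virtual object that is
        'scale' times the size of the physical prop."""
        return tuple(c + (h - c) * scale
                     for h, c in zip(physical_hand, object_center))

    # Example: a hand touching a 10 cm-radius prop represents a 15 cm virtual
    # object (scale 1.5); the virtual hand is rendered on the larger surface.
    print(displaced_virtual_hand((0.10, 0.0, 0.0), (0.0, 0.0, 0.0), 1.5))
    # -> (0.15, 0.0, 0.0)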

  • Faltaous, Sarah; Gruenefeld, Uwe; Schneegass, Stefan: Towards a Universal Human-Computer Interaction Model for Multimodal Interactions. In: Proceedings of the Conference on Mensch Und Computer (MuC). Association for Computing Machinery, Ingolstadt, Germany 2021, p. 59-63. doi:10.1145/3473856.3474008CitationDetails

    Models in HCI describe and provide insights into how humans use interactive technology. They are used by engineers, designers, and developers to understand and formalize the interaction process. At the same time, novel interaction paradigms constantly arise, introducing new ways in which interactive technology can support humans. In this work, we look into how these paradigms can be described using the classical HCI model introduced by Schomaker in 1995. We extend this model with new relations that provide a better understanding of these paradigms, revisiting the existing interaction paradigms and describing their interaction using the model. The goal of this work is to highlight the need to adapt models to new interaction paradigms and to spark discussion in the HCI community on this topic.

  • Faltaous, Sarah; Janzon, Simon; Heger, Roman; Strauss, Marvin; Golkar, Pedram; Viefhaus, Matteo; Prochazka, Marvin; Gruenefeld, Uwe; Schneegass, Stefan: Wisdom of the IoT Crowd: Envisioning a Smart Home-Based Nutritional Intake Monitoring System. In: Proceedings of the Conference on Mensch Und Computer (MuC). Association for Computing Machinery, Ingolstadt, Germany 2021, p. 568-573. doi:10.1145/3473856.3474009CitationDetails

    Obesity and overweight are two factors linked to various health problems that lead to death in the long run. Technological advancements have granted the chance to create smart interventions. These interventions could be operated by the Internet of Things (IoT), which connects different smart home and wearable devices, providing a large pool of data. In this work, we use IoT with different technologies to present an exemplary nutritional intake monitoring system. This system integrates the input from various devices to understand the users’ behavior better and provide recommendations accordingly. Furthermore, we report on a preliminary evaluation through semi-structured interviews with six participants. Their feedback highlights the system’s opportunities and challenges.

  • Auda, Jonas; Weigel, Martin; Cauchard, Jessica; Schneegass, Stefan: Understanding Drone Landing on the Human Body. In: 23rd International Conference on Mobile Human-Computer Interaction (MobileHCI). ACM, 2021. doi:10.1145/3447526.3472031CitationDetails
  • Auda, Jonas; Heger, Roman; Gruenefeld, Uwe; Schneegaß, Stefan: VRSketch: Investigating 2D Sketching in Virtual Reality with Different Levels of Hand and Pen Transparency. In: 18th International Conference on Human–Computer Interaction (INTERACT). Springer, Bari, Italy 2021, p. 195-211. doi:10.1007/978-3-030-85607-6_14CitationDetails

    Sketching is a vital step in design processes. While analog sketching with pen and paper is the de facto standard, Virtual Reality (VR) seems promising for improving the sketching experience. It provides myriads of new opportunities to express creative ideas. In contrast to reality, possible drawbacks of pen-and-paper drawing can be tackled by altering the virtual environment. In this work, we investigate how hand and pen transparency impacts users’ 2D sketching abilities. We conducted a lab study (N=20) investigating different combinations of hand and pen transparency. Our results show that a more transparent pen helps users sketch more quickly, while a transparent hand slows sketching down. Further, we found that transparency improves sketching accuracy while drawing in the direction occupied by the user’s hand.

  • Arboleda, S. A.; Pascher, Max; Lakhnati, Y.; Gerken, Jens: Understanding Human-Robot Collaboration for People with Mobility Impairments at the Workplace, a Thematic Analysis. In: 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE, 2020. doi:10.1109/RO-MAN47096.2020.9223489CitationDetails

    Assistive technologies such as human-robot collaboration have the potential to ease the life of people with physical mobility impairments in social and economic activities. Currently, this group of people has lower rates of economic participation due to the lack of adequate environments adapted to their capabilities. We take a closer look at the needs and preferences of people with physical mobility impairments in a human-robot cooperative environment at the workplace. Specifically, we aim to design how a robotic arm in manufacturing tasks can be controlled by people with physical mobility impairments. We present a case study of a sheltered workshop as a prototype for an institution that employs people with disabilities in manufacturing jobs. Here, we collected data from potential end-users with physical mobility impairments, social workers, and supervisors using a participatory design technique (Future-Workshop). These stakeholders were divided into two groups, primary (end-users) and secondary users (social workers, supervisors), which were run across two separate sessions. The gathered information was analyzed using thematic analysis to reveal underlying themes across stakeholders. We identified concepts that highlight underlying concerns related to the robot fitting into the social and organizational structure, human-robot synergy, and human-robot problem management. In this paper, we present our findings and discuss the implications of each theme when shaping an inclusive human-robot cooperative workstation for people with physical mobility impairments.

  • Liebers, Jonathan; Abdelaziz, Mark; Mecke, Lukas; Saad, Alia; Auda, Jonas; Alt, Florian; Schneegaß, Stefan: Understanding User Identification in Virtual Reality Through Behavioral Biometrics and the Effect of Body Normalization. In: Proceedings of the 40th ACM Conference on Human Factors in Computing Systems (CHI). Association for Computing Machinery, Yokohama, Japan 2021. doi:10.1145/3411764.3445528CitationDetails

    Virtual Reality (VR) is becoming increasingly popular both in the entertainment and professional domains. Behavioral biometrics have recently been investigated as a means to continuously and implicitly identify users in VR. Applications in VR can specifically benefit from this, for example, to adapt virtual environments and user interfaces as well as to authenticate users. In this work, we conduct a lab study (N = 16) to explore how accurately users can be identified during two task-driven scenarios based on their spatial movement. We show that an identification accuracy of up to 90% is possible across sessions recorded on different days. Moreover, we investigate the role of users’ physiology in behavioral biometrics by virtually altering and normalizing their body proportions. We find that body normalization in general increases the identification rate, in some cases by up to 38%; hence, it improves the performance of identification systems.
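
    As an illustration of this kind of movement-based identification, the sketch below trains a standard classifier to recognize users from feature vectors derived from motion data. The data is synthetic, and the feature set and model are assumptions for demonstration; the paper's actual pipeline is not reproduced here.

      # Sketch: user identification from VR motion features with a standard
      # classifier (synthetic stand-in data, not the study's real recordings).
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n_users, samples_per_user = 16, 100
      # Each sample: 12 made-up summary features of a movement window
      # (e.g., means/variances of head and hand positions and velocities).
      X = np.vstack([rng.normal(loc=u, scale=1.0, size=(samples_per_user, 12))
                     for u in range(n_users)])
      y = np.repeat(np.arange(n_users), samples_per_user)

      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
      clf = RandomForestClassifier(n_estimators=100, random_state=0)
      clf.fit(X_train, y_train)
      print(f"identification accuracy: {clf.score(X_test, y_test):.2%}")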

  • Schultze, Sven; Gruenefeld, Uwe; Boll, Susanne: Demystifying Deep Learning: Developing and Evaluating a User-Centered Learning App for Beginners to Gain Practical Experience. In: i-com, Vol 19 (2020). doi:10.1515/icom-2020-0023CitationDetails

    Deep Learning has revolutionized Machine Learning, enhancing our ability to solve complex computational problems. From image classification to speech recognition, the technology can be beneficial in a broad range of scenarios. However, the barrier to entry is quite high, especially when programming skills are missing. In this paper, we present the development of a learning application that is easy to use, yet powerful enough to solve practical Deep Learning problems. We followed the human-centered design approach and conducted a technical evaluation to identify solvable classification problems. Afterwards, we conducted an online user evaluation to gain insights into users’ experience with the app and to understand positive as well as negative aspects of our implemented concept. Our results show that participants liked using the app and found it useful, especially for beginners. Nonetheless, future iterations of the learning app should step by step include more features to support advancing users.

  • Borsum, Florian; Pascher, Max; Auda, Jonas; Schneegass, Stefan; Lux, Gregor; Gerken, Jens: Stay on Course in VR: Comparing the Precision of Movement between Gamepad, Armswinger, and Treadmill: Kurs Halten in VR: Vergleich Der Bewegungspräzision von Gamepad, Armswinger Und Laufstall. In: Mensch Und Computer 2021. Association for Computing Machinery, New York, NY, USA 2021, p. 354-365. doi:10.1145/3473856.3473880CitationDetails

    This paper investigates the extent to which different locomotion techniques in Virtual Reality environments influence the precision of interaction. A total of three techniques were examined: two of them incorporate physical activity to achieve a high degree of realism in movement (Armswinger, treadmill), while a gamepad served as the baseline. In a study with 18 participants, the precision of these three locomotion techniques was examined across six different obstacles on a VR course. The results show that for individual obstacles that either require a combination of forward and sideways movement (slalom, cliff) or focus on speed (rail), the treadmill enables significantly more precise control than the Armswinger. Across the course as a whole, however, no input device is significantly more precise than any other. Using the treadmill also takes significantly more time than the gamepad and the Armswinger. It also became apparent that the goal of reproducing a real walking motion 1:1 is still not achieved even with a treadmill, although the movement is nevertheless perceived as intuitive and immersive.

  • Arevalo Arboleda, Stephanie; Pascher, Max; Baumeister, Annalies; Klein, Barbara; Gerken, Jens: Reflecting upon Participatory Design in Human-Robot Collaboration for People with Motor Disabilities: Challenges and Lessons Learned from Three Multiyear Projects. In: The 14th PErvasive Technologies Related to Assistive Environments Conference. Association for Computing Machinery, New York, NY, USA 2021, p. 147-155. doi:10.1145/3453892.3458044CitationDetails

    Human-robot technology has the potential to positively impact the lives of people with motor disabilities. However, current efforts have mostly been oriented towards technology (sensors, devices, modalities, interaction techniques), thus relegating the user and their valuable input to the wayside. In this paper, we aim to present a holistic perspective on the role of participatory design in Human-Robot Collaboration (HRC) for People with Motor Disabilities (PWMD). We have been involved in several multiyear projects related to HRC for PWMD, where we encountered different challenges related to planning and participation, preferences of stakeholders, using certain participatory design techniques, technology exposure, as well as ethical, legal, and social implications. These challenges led to five lessons learned that could serve as a guideline to researchers using participatory design with vulnerable groups, in particular early-career researchers who are starting to explore HRC research for people with disabilities.

  • Pascher, Max; Baumeister, Annalies; Schneegass, Stefan; Klein, Barbara; Gerken, Jens: Recommendations for the Development of a Robotic Drinking and Eating Aid - An Ethnographic Study. In: Ardito, Carmelo; Lanzilotti, Rosa; Malizia, Alessio; Petrie, Helen; Piccinno, Antonio; Desolda, Giuseppe; Inkpen, Kori (Ed.): Human-Computer Interaction -- INTERACT 2021. Springer International Publishing, Cham 2021, p. 331-351. CitationDetails

    Being able to live independently and self-determined in one's own home is a crucial factor for human dignity and the preservation of self-worth. For people with severe physical impairments who cannot use their limbs for everyday tasks, living in their own home is only possible with assistance from others. The inability to move arms and hands makes it hard to take care of oneself, e.g., to drink and eat independently. In this paper, we investigate how 15 participants with disabilities consume food and drinks. We report on interviews and participatory observations, and we analyzed the aids they currently use. Based on our findings, we derive a set of recommendations that supports researchers and practitioners in designing future robotic drinking and eating aids for people with disabilities.

  • Liebers, Jonathan; Horn, Patrick; Burschik, Christian; Gruenefeld, Uwe; Schneegass, Stefan: Using Gaze Behavior and Head Orientation for Implicit Identification in Virtual Reality. In: Proceedings of the ACM Symposium on Virtual Reality Software and Technology (VRST). Association for Computing Machinery, New York, NY, USA 2021. doi:10.1145/3489849.3489880Full textCitationDetails

    Identifying users of a Virtual Reality (VR) headset provides designers of VR content with the opportunity to adapt the user interface, set user-specific preferences, or adjust the level of difficulty either for games or training applications. While most identification methods currently rely on explicit input, implicit user identification is less disruptive and does not impact the immersion of the users. In this work, we introduce a biometric identification system that employs the user’s gaze behavior as a unique, individual characteristic. In particular, we focus on the user’s gaze behavior and head orientation while following a moving stimulus. We verify our approach in a user study. A hybrid post-hoc analysis results in an identification accuracy of up to 75% for an explainable machine learning algorithm and up to 100% for a deep learning approach. We conclude by discussing application scenarios in which our approach can be used to implicitly identify users.

  • Illing, Jannike; Klinke, Philipp; Gruenefeld, Uwe; Pfingsthorn, Max; Heuten, Wilko: Time is money! Evaluating Augmented Reality Instructions for Time-Critical Assembly Tasks. In: 19th International Conference on Mobile and Ubiquitous Multimedia (MUM). Association for Computing Machinery, Essen, Germany 2020, p. 277-287. doi:10.1145/3428361.3428398CitationDetails

    Manual assembly tasks require workers to precisely assemble parts in 3D space. Often, additional time pressure increases the complexity of these tasks even further (e.g., in adhesive bonding processes). Therefore, we investigate how Augmented Reality (AR) can improve workers’ performance in time- and space-dependent process steps. In a user study, we compare three conditions: instructions presented on (a) paper, (b) a camera-based see-through tablet, and (c) a head-mounted AR device. As instructions, we used selected work steps from a standardized adhesive bonding process as representative of common time-critical assembly tasks. We found that instructions in AR can improve the performance and understanding of time and spatial factors. The tablet condition showed the best subjective results among the participants, which can increase motivation, particularly among less-experienced workers.

  • Faltaous, Sarah; Neuwirth, Joshua; Gruenefeld, Uwe; Schneegass, Stefan: SaVR: Increasing Safety in Virtual Reality Environments via Electrical Muscle Stimulation. In: 19th International Conference on Mobile and Ubiquitous Multimedia (MUM). Association for Computing Machinery, Essen, Germany 2020, p. 254-258. doi:10.1145/3428361.3428389CitationDetails

    One of the main benefits of interactive Virtual Reality (VR) applications is that they provide a high sense of immersion. As a result, users lose their sense of real-world space which makes them vulnerable to collisions with real-world objects. In this work, we propose a novel approach to prevent such collisions using Electrical Muscle Stimulation (EMS). EMS actively prevents the movement that would result in a collision by actuating the antagonist muscle. We report on a user study comparing our approach to the commonly used feedback modalities: audio, visual, and vibro-tactile. Our results show that EMS is a promising modality for restraining user movement and, at the same time, rated best in terms of user experience.

  • Gruenefeld, Uwe; Brueck, Yvonne; Boll, Susanne: Behind the Scenes: Comparing X-Ray Visualization Techniques in Head-Mounted Optical See-through Augmented Reality. In: 19th International Conference on Mobile and Ubiquitous Multimedia (MUM). Association for Computing Machinery, Essen, Germany 2020, p. 179-185. doi:10.1145/3428361.3428402CitationDetails

    Locating objects in the environment can be a difficult task, especially when the objects are occluded. With Augmented Reality, we can alter our perceived reality by augmenting it with visual cues or removing visual elements of reality, helping users to locate occluded objects. However, to our knowledge, it has not yet been evaluated which visualization technique works best for estimating the distance and size of occluded objects in optical see-through head-mounted Augmented Reality. To address this, we compare four different visualization techniques derived from previous work in a laboratory user study. Our results show that techniques utilizing additional aids (textual or with a grid) help users to estimate the distance to occluded objects more accurately. In contrast, a realistic rendering of the scene, such as a cutout in the wall, resulted in higher distance estimation errors.

  • Auda, Jonas; Gruenefeld, Uwe; Mayer, Sven: It Takes Two To Tango: Conflicts Between Users on the Reality-Virtuality Continuum and Their Bystanders. In: Proceedings of the 14th ACM Interactive Surfaces and Spaces (ISS). Association for Computing Machinery, Lisbon, Portugal 2020. CitationDetails

    Over the last years, Augmented and Virtual Reality technology has become more immersive. However, when users immerse themselves in these digital realities, they detach from their real-world environments. This detachment creates conflicts that are problematic in public spaces such as planes, but also in private settings. Consequently, on the one hand, detaching from the world creates an immersive experience for the user; on the other hand, it creates a social conflict with bystanders. With this work, we highlight and categorize social conflicts caused by using immersive digital realities. We first present different social settings in which social conflicts arise and then provide an overview of investigated scenarios. Finally, we present research opportunities that help to address social conflicts between immersed users and bystanders.

  • Detjen, Henrik; Geisler, Stefan; Schneegass, Stefan: "Help, Accident Ahead!": Using Mixed Reality Environments in Automated Vehicles to Support Occupants After Passive Accident Experiences. In: AutomotiveUI '20 Adjunct Proceedings. ACM, 2020. doi:10.1145/3409251.3411723CitationDetails
  • Detjen, Henrik; Pfleging, Bastian; Schneegass, Stefan: A Wizard of Oz Field Study to Understand Non-Driving-Related Activities, Trust, and Acceptance of Automated Vehicles. In: AutomotiveUI 2020. ACM, 2020. doi:10.1145/3409120.3410662CitationDetails
  • Schneegaß, Stefan; Auda, Jonas; Heger, Roman; Grünefeld, Uwe; Kosch, Thomas: EasyEG: A 3D-printable Brain-Computer Interface. In: Proceedings of the 33rd ACM Symposium on User Interface Software and Technology (UIST). Minnesota, USA 2020. doi:https://doi.org/10.1145/3379350.3416189CitationDetails

    Brain-Computer Interfaces (BCIs) are progressively adopted by the consumer market, making them available for a variety of use-cases. However, off-the-shelf BCIs are limited in their adjustments towards individual head shapes, evaluation of scalp-electrode contact, and extension through additional sensors. This work presents EasyEG, a BCI headset that is adaptable to individual head shapes and offers adjustable electrode-scalp contact to improve measuring quality. EasyEG consists of 3D-printed and low-cost components that can be extended by additional sensing hardware, hence expanding the application domain of current BCIs. We conclude with use-cases that demonstrate the potentials of our EasyEG headset.

  • Gruenefeld, Uwe; Prädel, Lars; Illing, Jannike; Stratmann, Tim; Drolshagen, Sandra; Pfingsthorn, Max: Mind the ARm: Realtime Visualization of Robot Motion Intent in Head-Mounted Augmented Reality. In: Proceedings of the Conference on Mensch Und Computer (MuC). Association for Computing Machinery, 2020, p. 259-266. doi:10.1145/3404983.3405509CitationDetails

    Established safety sensor technology shuts down industrial robots when a collision is detected, causing preventable loss of productivity. To minimize downtime, we implemented three Augmented Reality (AR) visualizations (Path, Preview, and Volume) which allow users to understand robot motion intent and give way to the robot. We compare the different visualizations in a user study in which a small cognitive task is performed in a shared workspace. We found that Preview and Path required significantly longer head rotations to perceive robot motion intent. Volume, however, required the shortest head rotation and was perceived as safest, enabling closer proximity of the robot arm before participants left the shared workspace, without causing shutdowns.

  • Agarwal, Shivam; Auda, Jonas; Schneegaß, Stefan; Beck, Fabian: A Design and Application Space for Visualizing User Sessions of Virtual and Mixed Reality Environments. In: Vision, Modeling, and Visualization (VMV 2020). Eurographics Association, 2020. doi:10.2312/vmv.20201194CitationDetails
  • Poguntke, Romina; Schneegass, Christina; van der Vekens, Lucas; Rzayev, Rufat; Auda, Jonas; Schneegass, Stefan; Schmidt, Albrecht: NotiModes: an investigation of notification delay modes and their effects on smartphone users. In: MuC '20: Proceedings of the Conference on Mensch und Computer. ACM, Magdeburg, Germany 2020. doi:10.1145/3404983.3410006CitationDetails
  • Saad, Alia; Elkafrawy, Dina Hisham; Abdennadher, Slim; Schneegass, Stefan: Are They Actually Looking? Identifying Smartphones Shoulder Surfing Through Gaze Estimation. In: ETRA. ACM, Stuttgart, Germany 2020. doi:10.1145/3379157.3391422CitationDetails
  • Liebers, Jonathan; Schneegass, Stefan: Gaze-based Authentication in Virtual Reality. In: ETRA. ACM, 2020. doi:10.1145/3379157.3391421CitationDetails
  • Safwat, Sherine Ashraf; Bolock, Alia El; Alaa, Mostafa; Faltaous, Sarah; Schneegass, Stefan; Abdennadher, Slim: The Effect of Student-Lecturer Cultural Differences on Engagement in Learning Environments - A Pilot Study. In: Communications in Computer and Information Science. Springer, 2020. doi:10.1007/978-3-030-51999-5_10CitationDetails
  • Schneegass, Stefan; Sasse, Angela; Alt, Florian; Vogel, Daniel: Authentication Beyond Desktops and Smartphones: Novel Approaches for Smart Devices and Environments. In: CHI'20 Proceedings. ACM, Honolulu, HI, USA 2020. doi:10.1145/3334480.3375144CitationDetails
  • Ranasinghe, Champika; Holländer, Kai; Currano, Rebecca; Sirkin, David; Moore, Dylan; Schneegass, Stefan; Ju, Wendy: Autonomous Vehicle-Pedestrian Interaction Across Cultures: Towards Designing Better External Human Machine Interfaces (eHMIs). In: CHI'20 Proceedings. ACM, Honolulu, HI, USA 2020. doi:10.1145/3334480.3382957CitationDetails
  • Liebers, Jonathan; Schneegass, Stefan: Introducing Functional Biometrics: Using Body-Reflections as a Novel Class of Biometric Authentication Systems. In: CHI Extended Abstracts 2020. ACM, Honolulu, HI, USA 2020. doi:10.1145/3334480.3383059CitationDetails
  • Faltaous, Sarah; Schönherr, Chris; Detjen, Henrik; Schneegass, Stefan: Exploring proprioceptive take-over requests for highly automated vehicles. In: MUM '19: Proceedings of the 18th International Conference on Mobile and Ubiquitous Multimedia. ACM, Pisa, Italy 2019. doi:https://doi.org/10.1145/3365610.3365644PDFCitationDetails
    Exploring proprioceptive take-over requests for highly automated vehicles
  • Detjen, Henrik; Faltaous, Sarah; Geisler, Stefan; Schneegass, Stefan: User-Defined Voice and Mid-Air Gesture Commands for Maneuver-based Interventions in Automated Vehicles. In: MuC'19: Proceedings of Mensch und Computer 2019. ACM, New York, USA 2019. doi:https://doi.org/10.1145/3340764.3340798PDFCitationDetails
    User-Defined Voice and Mid-Air Gesture Commands
  • Poguntke, Romina; Mantz, Tamara; Hassib, Mariam; Schmidt, Albrecht; Schneegass, Stefan: Smile to Me - Investigating Emotions and their Representation in Text-based Messaging in the Wild. In: MuC'19: Proceedings of Mensch und Computer 2019. ACM, New York, USA 2019. doi:https://doi.org/10.1145/3340764.3340795PDFCitationDetails
    Smile to Me
  • Faltaous, Sarah; Eljaki, Salma; Schneegass, Stefan: User Preferences of Voice Controlled Smart Light Systems. In: MuC'19: Proceedings of Mensch und Computer 2019. ACM, New York, USA 2019. doi:https://doi.org/10.1145/3340764.3344437PDFCitationDetails
    User Preferences of Voice Controlled Smart Light Systems
  • Pascher, Max; Schneegass, Stefan; Gerken, Jens: SwipeBuddy: A Teleoperated Tablet and Ebook-Reader Holder for a Hands-Free Interaction. In: Human-Computer Interaction – INTERACT 2019. Springer, Paphos, Cyprus 2019. doi:10.1007/978-3-030-29390-1_39CitationDetails
  • Pfeiffer, Max; Medrano, Samuel Navas; Auda, Jonas; Schneegass, Stefan: STOP! Enhancing Drone Gesture Interaction with Force Feedback. In: CHI'19 Proceedings. ACM, Glasgow, UK 2019. https://hal.archives-ouvertes.fr/hal-02128395/documentPDFFull textCitationDetails
    STOP! Enhancing Drone Gesture Interaction with Force Feedback
  • Auda, Jonas; Pascher, Max; Schneegass, Stefan: Around the (Virtual) World - Infinite Walking in Virtual Reality Using Electrical Muscle Stimulation. In: CHI'19 Proceedings. ACM, Glasgow, UK 2019. doi:https://doi.org/10.1145/3290605.3300661PDFCitationDetails
    Around the (Virtual) World

    Virtual worlds are infinite environments in which the user can move around freely. When shifting from controller-based movement to regular walking as an input, the limitations of the real world also limit the virtual world. Tackling this challenge, we propose the use of electrical muscle stimulation to limit the necessary real-world space and create an unlimited walking experience. We actuate the users’ legs in a way that they deviate from their straight route and thus walk in circles in the real world while still walking straight in the virtual world. We report on a study comparing this approach to vision shift – the state-of-the-art approach – as well as to a combination of both approaches. The results show that particularly the combination of both approaches yields high potential for creating an infinite walking experience.

  • Faltaous, Sarah; Haas, Gabriel; Barrios, Liliana; Seiderer, Andreas; Rauh, Sebastian Felix; Chae, Han Joo; Schneegass, Stefan; Alt, Florian: BrainShare: A Glimpse of Social Interaction for Locked-in Syndrome Patients. In: CHI EA '19: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA 2019. PDFCitationDetails
    BrainShare: A Glimpse of Social Interaction for Locked-in Syndrome Patients
  • Schneegass, Stefan; Poguntke, Romina; Machulla, Tonja Katrin: Understanding the Impact of Information Representation on Willingness to Share Information. In: CHI '19: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA 2019. doi:https://doi.org/10.1145/3290605.3300753PDFCitationDetails
    Understanding the Impact of Information Representation on Willingness to Share Information
  • Faltaous, Sarah; Liebers, Jonathan; Abdelrahman, Yomna; Alt, Florian; Schneegass, Stefan: VPID: Towards Vein Pattern Identification Using Thermal Imaging. In: i-com, Vol 18 (2019) No 3, p. 259-270. doi:10.1515/icom-2019-0009PDFCitationDetails

    Biometric authentication received considerable attention lately. The vein pattern on the back of the hand is a unique biometric that can be measured through thermal imaging. Detecting this pattern provides an implicit approach that can authenticate users while they interact. In this paper, we present the vein-identification system VPID. It consists of a vein pattern recognition pipeline and an authentication part. We implemented six different vein-based authentication approaches by combining thermal imaging and computer vision algorithms. Through a study, we show that the approaches achieve a low false-acceptance rate (FAR) and a low false-rejection rate (FRR). Our findings show that the best approach is the Hausdorff distance-difference applied in combination with a Convolutional Neural Network (CNN) classification of stacked images.
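
    To illustrate the distance measure named above, the following sketch computes a symmetric Hausdorff distance between two vein patterns represented as 2D point sets and applies a simple accept/reject threshold. The point sets and the threshold are illustrative assumptions, not values from the paper.

      # Sketch: Hausdorff-distance matching of two vein-pattern point sets.
      import numpy as np
      from scipy.spatial.distance import directed_hausdorff

      def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
          """Symmetric Hausdorff distance between point sets a and b (n x 2)."""
          return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

      enrolled = np.array([[10, 12], [14, 30], [22, 41], [35, 48]])  # stored template
      probe = np.array([[11, 12], [15, 29], [23, 42], [34, 50]])     # new measurement

      THRESHOLD = 5.0  # hypothetical acceptance threshold (image-space units)
      d = hausdorff(enrolled, probe)
      print(f"distance = {d:.2f} -> {'accept' if d < THRESHOLD else 'reject'}")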

  • Pascher, Max; Schneegass, Stefan; Gerken, Jens: SwipeBuddy. In: Lamas, David; Loizides, Fernando; Nacke, Lennart; Petrie, Helen; Winckler, Marco; Zaphiris, Panayiotis (Ed.): Human-Computer Interaction -- INTERACT 2019. Springer International Publishing, Cham 2019, p. 568-571. CitationDetails

    Mobile devices are the core computing platform we use in our everyday life to communicate with friends, watch movies, or read books. For people with severe physical disabilities, such as tetraplegics, who cannot use their hands to operate such devices, these devices are barely usable. Tackling this challenge, we propose SwipeBuddy, a teleoperated robot allowing for touch interaction with a smartphone, tablet, or ebook-reader. The mobile device is mounted on top of the robot and can be teleoperated by a user through head motions and gestures controlling a stylus simulating touch input. Further, the user can control the position and orientation of the mobile device. We demonstrate the SwipeBuddy robot device and its different interaction capabilities.

  • Hoppe, Matthias; Knierim, Pascal; Kosch, Thomas; Funk, Markus; Futami, Lauren; Schneegass, Stefan; Henze, Niels; Schmidt, Albrecht; Machulla, Tonja: VRHapticDrones - Providing Haptics in Virtual Reality through Quadcopters. In: MUM 2018: Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia. ACM, Cairo, Egypt 2018. doi:https://doi.org/10.1145/3282894.3282898PDFCitationDetails
    VRHapticDrones
  • Saad, Alia; Chukwu, Michael; Schneegass, Stefan: Communicating Shoulder Surfing Attacks to Users. In: MUM 2018: Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia. ACM, Cairo, Egypt 2018. doi:https://doi.org/10.1145/3282894.3282919PDFCitationDetails
    Communicating Shoulder Surfing Attacks to Users
  • Schneegass, Christina; Terzimehić, Nađa; Nettah, Mariam; Schneegass, Stefan: Informing the Design of User-adaptive Mobile Language Learning Applications. In: MUM 2018: Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia. ACM, Cairo, Egypt 2018. doi:https://doi.org/10.1145/3282894.3282926PDFCitationDetails
    Informing the Design of User-adaptive Mobile Language Learning Applications
  • Faltaous, Sarah; Elbolock, Alia; Talaat, Mostafa; Abdennadher, Slim; Schneegass, Stefan: Virtual Reality for Cultural Competences. In: MUM 2018: Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia. ACM, Cairo, Egypt 2018. doi:https://doi.org/10.1145/3282894.3289739PDFCitationDetails
    Virtual Reality for Cultural Competences
  • Antoun, Sara; Auda, Jonas; Schneegass, Stefan: SlidAR - Towards using AR in Education. In: MUM 2018: Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia. ACM, Cairo, Egypt 2018. doi:https://doi.org/10.1145/3282894.3289744PDFCitationDetails
    SlidAR
  • Elagroudy, Passant; Abdelrahman, Yomna; Faltaous, Sarah; Schneegass, Stefan; Davis, Hilary: Workshop on Amplified and Memorable Food Interactions. In: MUM 2018: Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia. ACM, Cairo, Egypt 2018. doi:https://doi.org/10.1145/3282894.3286059PDFCitationDetails
    Workshop on Amplified and Memorable Food Interactions
  • Auda, Jonas; Hoppe, Matthias; Amiraslanov, Orkhan; Zhou, Bo; Knierim, Pascal; Schneegass, Stefan; Schmidt, Albrecht; Lukowicz, Paul: LYRA - smart wearable in-flight service assistant. In: ISWC '18: Proceedings of the 2018 ACM International Symposium on Wearable Computers. ACM, Singapore, Singapore 2018. doi:https://doi.org/10.1145/3267242.3267282PDFCitationDetails
    LYRA
  • Arévalo-Arboleda, Stephanie; Pascher, Max; Gerken, Jens: Opportunities and Challenges in Mixed-Reality for an Inclusive Human-Robot Collaboration Environment. In: Proceedings of the 2018 International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interactions (VAM-HRI) as part of the ACM/IEEE Conference on Human-Robot Interaction. Chicago, USA 2018. CitationDetails

    This paper presents an approach to enhance robot control using Mixed Reality. It highlights the opportunities and challenges in interaction design to achieve a Human-Robot Collaborative environment. In fact, Human-Robot Collaboration is the perfect space for social inclusion: it enables people who suffer from severe physical impairments to interact with the environment by providing them with movement control of an external robotic arm. When discussing robot control, it is important to reduce the visual split that different input and output modalities carry. Therefore, Mixed Reality is of particular interest when trying to ease communication between humans and robotic systems.

  • Faltaous, Sarah; Baumann, M.; Schneegass, Stefan; Chuang, Lewis: Design Guidelines for Reliability Communication in Autonomous Vehicles. In: AutomotiveUI '18: Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. ACM, Toronto, Canada 2018. doi:https://doi.org/10.1145/3239060.3239072PDFCitationDetails
    Design Guidelines for Reliability Communication in Autonomous Vehicles
  • Poguntke, Romina; Tasci, Cagri; Korhonen, Olli; Alt, Florian; Schneegass, Stefan: AVotar - exploring personalized avatars for mobile interaction with public displays. In: MobileHCI '18: Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct. ACM, Barcelona, Spain 2018. doi:https://doi.org/10.1145/3236112.3236113PDFCitationDetails
    AVotar
  • Weber, Dominik; Voit, Alexandra; Auda, Jonas; Schneegass, Stefan; Henze, Niels: Snooze! - investigating the user-defined deferral of mobile notifications. In: MobileHCI '18: Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services. ACM, Barcelona, Spain 2018. doi:https://doi.org/10.1145/3229434.3229436PDFCitationDetails
    Snooze!
  • Poguntke, Romina; Kiss, Francisco; Kaplan, Ayhan; Schmidt, Albrecht; Schneegass, Stefan: RainSense - exploring the concept of a sense for weather awareness. In: MobileHCI '18: Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct. ACM, Barcelona, Spain 2018. doi:https://doi.org/10.1145/3236112.3236114PDFCitationDetails
    RainSense
  • Voit, Alexandra; Salm, Marie Olivia; Beljaars, Miriam; Kohn, Stefan; Schneegass, Stefan: Demo of a smart plant system as an exemplary smart home application supporting non-urgent notifications. In: NordiCHI '18: Proceedings of the 10th Nordic Conference on Human-Computer Interaction. ACM, Oslo, Norway 2018. doi:https://doi.org/10.1145/3240167.3240231PDFCitationDetails
    Demo of a smart plant system as an exemplary smart home application supporting non-urgent notifications
  • Auda, Jonas; Schneegass, Stefan; Faltaous, Sarah: Control, Intervention, or Autonomy? Understanding the Future of SmartHome Interaction. In: Conference on Human Factors in Computing Systems (CHI). ACM, Montreal, Canada 2018. PDFCitationDetails
    Control, Intervention, or Autonomy? Understanding the Future of SmartHome Interaction
  • Hassib, Mariam; Schneegass, Stefan; Henze, Niels; Schmidt, Albrecht; Alt, Florian: A Design Space for Audience Sensing and Feedback Systems. In: CHI EA '18: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, Montreal, Canada 2018. doi:https://doi.org/10.1145/3170427.3188569PDFCitationDetails
    A Design Space for Audience Sensing and Feedback Systems
  • Kiss, Francisco; Boldt, Robin; Pfleging, Bastian; Schneegass, Stefan: Navigation Systems for Motorcyclists: Exploring Wearable Tactile Feedback for Route Guidance in the Real World. In: CHI '18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, Montreal, Canada 2018. doi:https://doi.org/10.1145/3173574.3174191PDFCitationDetails
    Navigation Systems for Motorcyclists: Exploring Wearable Tactile Feedback for Route Guidance in the Real World
  • Voit, Alexandra; Pfähler, Ferdinand; Schneegass, Stefan: Posture Sleeve: Using Smart Textiles for Public Display Interactions. In: CHI EA '18: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, Montreal 2018. doi:https://doi.org/10.1145/3170427.3188687PDFCitationDetails
    Posture Sleeve: Using Smart Textiles for Public Display Interactions
  • Auda, Jonas; Weber, Dominik; Voit, Alexandra; Schneegass, Stefan: Understanding User Preferences towards Rule-based Notification Deferral. In: CHI EA '18: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, Montreal, Canada 2018. doi:https://doi.org/10.1145/3170427.3188688PDFCitationDetails
    Understanding User Preferences towards Rule-based Notification Deferral
  • Henze, Niels: Design and evaluation of a computer-actuated mouse. In: MUM '17: Proceedings of the 16th International Conference on Mobile and Ubiquitous Multimedia. ACM, Stuttgart, Germany 2017. doi:https://doi.org/10.1145/3152832.3152862PDFCitationDetails
    Design and evaluation of a computer-actuated mouse
  • Hassib, Mariam; Khamis, Mohamed; Friedl, Susanne; Schneegass, Stefan; Alt, Florian: Brainatwork - logging cognitive engagement and tasks in the workplace using electroencephalography. In: MUM '17: Proceedings of the 16th International Conference on Mobile and Ubiquitous Multimedia. ACM, Stuttgart, Germany 2017. doi:https://doi.org/10.1145/3152832.3152865PDFCitationDetails
    Brainatwork
  • Voit, Alexandra; Schneegass, Stefan: FabricID - using smart textiles to access wearable devices. In: MUM '17: Proceedings of the 16th International Conference on Mobile and Ubiquitous Multimedia. ACM, Stuttgart, Germany 2017. doi:https://doi.org/10.1145/3152832.3156622PDFCitationDetails
    FabricID
  • Mayer, Simon; Schneegass, Stefan: IoT 2017 - the Seventh International Conference on the Internet of Things. In: IoT '17: Proceedings of the Seventh International Conference on the Internet of Things. ACM, Linz, Austria 2017. doi:https://doi.org/10.1145/3131542.3131543PDFCitationDetails
    IoT 2017
  • Duente, Tim; Schneegass, Stefan; Pfeiffer, Max: EMS in HCI - challenges and opportunities in actuating human bodies. In: MobileHCI '17: Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services. ACM, Vienna, Austria 2017. doi:https://doi.org/10.1145/3098279.3119920PDFCitationDetails
    EMS in HCI
  • Oberhuber, Sascha; Kothe, Tina; Schneegass, Stefan; Alt, Florian: Augmented Games - Exploring Design Opportunities in AR Settings With Children. In: IDC '17: Proceedings of the 2017 Conference on Interaction Design and Children. ACM, Stanford, California, USA 2017. doi:https://doi.org/10.1145/3078072.3079734PDFCitationDetails
    Augmented Games
  • Knierim, Pascal; Kosch, Thomas; Schwind, Valentin; Funk, Markus; Kiss, Francisco; Schneegass, Stefan; Henze, Niels: Tactile Drones - Providing Immersive Tactile Feedback in Virtual Reality through Quadcopters. In: CHI EA '17: Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM, Denver, Colorado, USA 2017. doi:https://doi.org/10.1145/3027063.3050426PDFCitationDetails
    Tactile Drones
  • Schmidt, Albrecht; Schneegass, Stefan; Kunze, Kai; Rekimoto, Jun; Woo, Woontack: Workshop on Amplification and Augmentation of Human Perception. In: CHI EA '17: Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM, Denver, Colorado, USA 2017. doi:https://doi.org/10.1145/3027063.3027088PDFCitationDetails
    Workshop on Amplification and Augmentation of Human Perception
  • Hassib, Mariam; Pfeiffer, Max; Schneegass, Stefan; Rohs, Michael; Alt, Florian: Emotion Actuator - Embodied Emotional Feedback through Electroencephalography and Electrical Muscle Stimulation. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, USA 2017, p. 6133-6146. doi:https://doi.org/10.1145/3025453.3025953PDFCitationDetails
    Emotion Actuator

    The human body reveals emotional and bodily states through measurable signals, such as body language and electroencephalography. However, such manifestations are difficult to communicate to others remotely. We propose EmotionActuator, a proof-of-concept system to investigate the transmission of emotional states in which the recipient performs emotional gestures to understand and interpret the state of the sender. We call this kind of communication embodied emotional feedback and present a prototype implementation. To realize our concept we chose four emotional states: amused, sad, angry, and neutral. We designed EmotionActuator through a series of studies to assess emotional classification via EEG and to create an EMS gesture set by comparing composed gestures from the literature to sign-language gestures. In a final study with the end-to-end prototype, interviews revealed that participants like implicit sharing of emotions and find the embodied output to be immersive, but want control over which emotions are shared and with whom. This work contributes a proof-of-concept system and a set of design recommendations for designing embodied emotional feedback systems.

    Video: https://www.youtube.com/watch?v=OgOZmsa8xs8

  • Abdelrahman, Yomna; Khamis, Mohamed; Schneegass, Stefan; Alt, Florian: Stay Cool! Understanding Thermal Attacks on Mobile-based User Authentication. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, USA 2017, p. 3751-3763. doi:https://doi.org/10.1145/3025453.3025461PDFCitationDetails
    Stay Cool! Understanding Thermal Attacks on Mobile-based User Authentication

    PINs and patterns remain among the most widely used knowledge-based authentication schemes. As thermal cameras become ubiquitous and affordable, we foresee a new form of threat to user privacy on mobile devices. Thermal cameras allow performing thermal attacks, where heat traces resulting from authentication can be used to reconstruct passwords. In this work, we investigate in detail the viability of exploiting thermal imaging to infer PINs and patterns on mobile devices. We present a study (N=18) in which we evaluated how properties of PINs and patterns influence their resistance to thermal attacks. We found that thermal attacks are indeed viable on mobile devices; overlapping patterns significantly decrease the success rate of thermal attacks from 100% to 16.67%, while PINs remain vulnerable (>72% success rate) even with duplicate digits. We conclude with recommendations for users and designers of authentication schemes on how to resist thermal attacks.

    Video: https://www.youtube.com/watch?v=FxOBAvI-YFI

  • Hassib, Mariam; Schneegass, Stefan; Eiglsperger, Philipp; Henze, Niels; Schmidt, Albrecht; Alt, Florian: EngageMeter: A System for Implicit Audience Engagement Sensing Using Electroencephalography. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, USA 2017, p. 5114-5119. doi:https://doi.org/10.1145/3025453.3025669CitationDetails
    EngageMeter: A System for Implicit Audience Engagement Sensing Using Electroencephalography

    Obtaining information about audience engagement in presentations is a valuable asset for presenters in many domains. Prior literature mostly utilized explicit methods of collecting feedback which induce distractions, add workload on the audience, and do not provide objective information to presenters. We present EngageMeter - a system that allows fine-grained information on audience engagement to be obtained implicitly from multiple brain-computer interfaces (BCI) and to be fed back to presenters for real-time and post-hoc access. Through an evaluation during an HCI conference (N = 11 audience members, N = 3 presenters) we found that EngageMeter provides value to presenters (a) in real time, since it allows reacting to current engagement scores by changing tone or adding pauses, and (b) post hoc, since presenters can adjust their slides and embed extra elements. We discuss how EngageMeter can be used in collocated and distributed audience sensing as well as how it can aid presenters in long-term use.

  • Michahelles, Florian; Ilic, Alexander; Kunze, Kai; Kritzler, Mareike; Schneegass, Stefan: IoT 2016. In: IEEE Pervasive Computing, Vol 16 (2017) No 2, p. 87-89. doi:10.1109/MPRV.2017.25PDFCitationDetails
    IoT 2016

    The 6th International Conference on the Internet of Things (IoT 2016) showed a clear departure from the research on data acquisition and sensor management presented at previous editions of this conference. Learn about this year's move toward more commercially applicable implementations and cross-domain applications.

  • Schneegass, Stefan; Amft, Oliver: Introduction to Smart Textiles. In: Schneegass, Stefan; Amft, Oliver (Ed.): Smart Textiles: Fundamentals, Design, and Interaction. Springer International Publishing, 2017, p. 1-15. doi:10.1007/978-3-319-50124-6_1CitationDetails
    Introduction to Smart Textiles

    This chapter introduces fundamental concepts related to wearable computing, smart textiles, and context awareness. The history of wearable computing is summarized to illustrate the current state of smart textile and garment research. Subsequently, the process to build smart textiles from fabric production, sensor and actuator integration, contacting and integration, as well as communication, is summarized with notes and links to relevant chapters of this book. The options and specific needs for evaluating smart textiles are described. The chapter concludes by highlighting current and future research and development challenges for smart textiles.

  • Cheng, Jingyuan; Zhou, Bo; Lukowicz, Paul; Seoane, Fernando; Varga, Matija; Mehmann, Andreas; Chabrecek, Peter; Gaschler, Werner; Goenner, Karl; Horter, Hansjürgen; Schneegass, Stefan; Hassib, Mariam; Schmidt, Albrecht; Freund, Martin; Zhang, Rui; Amft, Oliver: Textile Building Blocks: Toward Simple, Modularized, and Standardized Smart Textile. In: Schneegass, Stefan; Amft, Oliver (Ed.): Smart Textiles. Springer International Publishing, 2017, p. 303-331. doi:10.1007/978-3-319-50124-6_14CitationDetails
    Textile Building Blocks: Toward Simple, Modularized, and Standardized Smart Textile

    Textiles are pervasive in our life, covering the human body and objects as well as serving in industrial applications. In everyday use, smart textile becomes a promising medium for monitoring, information retrieval, and interaction. While there are many applications in sport, health care, and industry, state-of-the-art smart textiles are still found only in niche markets. To gain mass-market capability, we see the necessity of generalizing and modularizing smart textile production and application development, which on the one end lowers the production cost and on the other end enables easy deployment. In this chapter, we demonstrate our initial effort in modularization. By devising universal sensing fabrics for conductive and non-conductive patches, smart textiles can be constructed from basic, reusable components. Using these fabric blocks, we present four types of sensing modalities, including resistive pressure, capacitive, bioimpedance, and biopotential sensing. In addition, we present a multi-channel textile–electronics interface and various applications built on top of the basic building blocks following the ‘cut and sew’ principle.

  • Schneegass, Stefan; Amft, Oliver (Ed.): Smart Textiles - Fundamentals, Design, and Interaction. 1st Edition. Springer International Publishing, 2017. doi:10.1007/978-3-319-50124-6CitationDetails
    Smart Textiles

    From a holistic perspective, this handbook explores the design, development and production of smart textiles and textile electronics, breaking with the traditional silo-structure of smart textile research and development.

    Leading experts from different domains including textile production, electrical engineering, interaction design and human-computer interaction (HCI) address production processes in their entirety by exploring important concepts and topics like textile manufacturing, sensor and actuator development for textiles, the integration of electronics into textiles and the interaction with textiles. In addition, different application scenarios, where smart textiles play a key role, are presented too.

    Smart Textiles would be an ideal resource for researchers, designers and academics who are interested in understanding the overall process in creating viable smart textiles.

  • Schneegass, Stefan; Schmidt, Albrecht; Pfeiffer, Max: Creating user interfaces with electrical muscle stimulation. In: interactions, Vol 24 (2016) No 1, p. 74-77. doi:http://doi.acm.org/10.1145/3019606PDFFull textCitationDetails
    Creating user interfaces with electrical muscle stimulation

    Muscle movement is central to virtually everything we do, be it walking, writing, drawing, smiling, or singing. Even while we're standing still, our muscles are active, ensuring that we keep our balance. In a recent forum [1] we showed how electrical signals on the skin that reflect muscle activity can be measured. Here, we look at the reverse direction. We explain how muscles can be activated and how movements can be controlled with electrical signals.

  • Voit, Alexandra; Weber, Dominik; Schneegass, Stefan: Towards Notifications in the Era of the Internet of Things. In: IoT'16: Proceedings of the 6th International Conference on the Internet of Things. ACM, New York, USA 2016. doi:https://doi.org/10.1145/2991561.2998472PDFCitationDetails
    Towards Notifications in the Era of the Internet of Things
  • Schneegass, Stefan; Voit, Alexandra: GestureSleeve: using touch sensitive fabrics for gestural input on the forearm for controlling smartwatches. In: Proceedings of the 2016 ACM International Symposium on Wearable Computers (ISWC '16). ACM, New York, USA 2016, p. 108-115. doi:https://doi.org/10.1145/2971763.2971797CitationDetails

    Smartwatches provide quick and easy access to information. Due to their wearable nature, users can perceive the information while being stationary or on the go. The main drawback of smartwatches, however, is the limited input possibility. They use similar input methods as smartphones but thereby suffer from a smaller form factor. To extend the input space of smartwatches, we present GestureSleeve, a sleeve made out of touch enabled textile. It is capable of detecting different gestures such as stroke based gestures or taps. With these gestures, the user can control various smartwatch applications. Exploring the performance of the GestureSleeve approach, we conducted a user study with a running application as use case. In this study, we show that input using the GestureSleeve outperforms touch input on the smartwatch. In the future the GestureSleeve can be integrated into regular clothing and be used for controlling various smart devices.
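
    As a toy illustration of the gesture detection described above, the following sketch classifies a touch trace from the sleeve as a tap or a directional stroke based on its start and end points. The trace format and thresholds are assumptions for demonstration, not the GestureSleeve implementation.

      # Sketch: distinguish taps from directional strokes in a touch trace.
      import math

      def classify_trace(points, stroke_min_dist=1.0):
          """points: list of (x, y) samples; returns 'tap' or a stroke direction."""
          (x0, y0), (x1, y1) = points[0], points[-1]
          if math.hypot(x1 - x0, y1 - y0) < stroke_min_dist:
              return "tap"
          angle = math.degrees(math.atan2(y1 - y0, x1 - x0)) % 360
          for name, center in [("stroke right", 0), ("stroke up", 90),
                               ("stroke left", 180), ("stroke down", 270)]:
              # Angles within 45 degrees of a cardinal direction (incl. wrap-around).
              if min(abs(angle - center), 360 - abs(angle - center)) <= 45:
                  return name

      print(classify_trace([(0, 0), (0.1, 0.05)]))  # -> tap
      print(classify_trace([(0, 0), (2.0, 0.1)]))   # -> stroke right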

  • Schneegass, Stefan; Oualil, Youssef; Bulling, Andreas: SkullConduct: Biometric User Identification on Eyewear Computers Using Bone Conduction Through the Skull. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). ACM, New York, USA 2016, p. 1379-1384. doi:https://doi.org/10.1145/2858036.2858152PDFCitationDetails

    Secure user identification is important for the increasing number of eyewear computers but limited input capabilities pose significant usability challenges for established knowledge-based schemes, such as passwords or PINs. We present SkullConduct, a biometric system that uses bone conduction of sound through the user's skull as well as a microphone readily integrated into many of these devices, such as Google Glass. At the core of SkullConduct is a method to analyze the characteristic frequency response created by the user's skull using a combination of Mel Frequency Cepstral Coefficient (MFCC) features as well as a computationally light-weight 1NN classifier. We report on a controlled experiment with 10 participants that shows that this frequency response is person-specific and stable -- even when taking off and putting on the device multiple times -- and thus serves as a robust biometric. We show that our method can identify users with 97.0% accuracy and authenticate them with an equal error rate of 6.9%, thereby bringing biometric user identification to eyewear computers equipped with bone conduction technology.

    Video: https://www.youtube.com/watch?v=5yG_nWocXNY
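
    The recognition pipeline named in the abstract - MFCC features with a 1-nearest-neighbour classifier - can be sketched as follows. Synthetic signals stand in for real bone-conduction recordings, and the librosa/scikit-learn usage is an assumption for illustration, not the authors' implementation.

      # Sketch: MFCC features + 1NN identification on synthetic signals.
      import numpy as np
      import librosa
      from sklearn.neighbors import KNeighborsClassifier

      SR = 16000
      rng = np.random.default_rng(1)

      def mfcc_features(signal):
          """Average MFCCs over time into one fixed-length feature vector."""
          return librosa.feature.mfcc(y=signal, sr=SR, n_mfcc=13).mean(axis=1)

      def fake_recording(user):
          """White noise shaped by a user-specific filter (a stand-in for the
          person-specific frequency response of the skull)."""
          noise = rng.normal(size=SR)
          kernel = np.hanning(20 + 5 * user)
          return np.convolve(noise, kernel, mode="same").astype(np.float32)

      X = np.array([mfcc_features(fake_recording(u))
                    for u in range(10) for _ in range(5)])
      y = np.repeat(np.arange(10), 5)

      clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
      probe = mfcc_features(fake_recording(3))
      print("identified as user", clf.predict([probe])[0])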

  • Alt, Florian; Schneegass, Stefan; Sahami Shirazi, Alireza; Hassib, Mariam; Bulling, Andreas: Graphical Passwords in the Wild: Understanding How Users Choose Pictures and Passwords in Image-based Authentication Schemes. In: Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '15). ACM, New York, USA 2015, p. 316-322. doi:10.1145/2785830.2785882CitationDetails

    Common user authentication methods on smartphones, such as lock patterns, PINs, or passwords, impose a trade-off between security and password memorability. Image-based passwords were proposed as a secure and usable alternative. As of today, however, it remains unclear how such schemes are used in the wild. We present the first study to investigate how image-based passwords are used over long periods of time in the real world. Our analyses are based on data from 2318 unique devices collected over more than one year using a custom application released in the Android Play store. We present an in-depth analysis of what kind of images users select, how they define their passwords, and how secure these passwords are. Our findings provide valuable insights into real-world use of image-based passwords and inform the design of future graphical authentication schemes.

  • Pfeiffer, Max; Dünte, Tim; Schneegass, Stefan; Alt, Florian; Rohs, Michael: Cruise Control for Pedestrians: Controlling Walking Direction using Electrical Muscle Stimulation. In: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, USA 2015, p. 2505-2514. doi:10.1145/2702123.2702190

    Pedestrian navigation systems require users to perceive, interpret, and react to navigation information. This can tax cognition as navigation information competes with information from the real world. We propose actuated navigation, a new kind of pedestrian navigation in which the user does not need to attend to the navigation task at all. An actuation signal is directly sent to the human motor system to influence walking direction. To achieve this goal we stimulate the sartorius muscle using electrical muscle stimulation. The rotation occurs during the swing phase of the leg and can easily be counteracted. The user therefore stays in control. We discuss the properties of actuated navigation and present a lab study on identifying basic parameters of the technique as well as an outdoor study in a park. The results show that our approach changes a user's walking direction by about 16°/m on average and that the system can successfully steer users in a park with crowded areas, distractions, obstacles, and uneven ground.

    Video: https://www.youtube.com/watch?v=JSfnm_HoUv4
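
    One plausible reading of the control loop above, sketched minimally: compare the walker's heading with the bearing to the next waypoint and decide which leg's sartorius muscle to stimulate during the next swing phase. The dead zone, the left/right mapping, and all names here are hypothetical; the paper's actual hardware and parameters differ.

        DEAD_ZONE_DEG = 10.0  # headings this close to the bearing need no actuation

        def steering_command(heading_deg, bearing_deg):
            """Return 'left', 'right', or 'none' for the next swing phase."""
            # Signed smallest angle from heading to bearing, in (-180, 180].
            error = (bearing_deg - heading_deg + 180.0) % 360.0 - 180.0
            if abs(error) <= DEAD_ZONE_DEG:
                return "none"
            # Stimulating one sartorius rotates the swinging leg, turning the walker.
            return "right" if error > 0 else "left"

        print(steering_command(heading_deg=90.0, bearing_deg=135.0))  # -> right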

  • Mayer, Sven; Wolf, Katrin; Schneegass, Stefan; Henze, Niels: Modeling Distant Pointing for Compensating Systematic Displacements. In: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, USA 2015, p. 4165-4168. doi:10.1145/2702123.2702332

    Distant pointing at objects and persons is a highly expressive gesture that is widely used in human communication. Pointing is also used to control a range of interactive systems. To determine where a user is pointing, different ray casting methods have been proposed. In this paper, we assess how accurately humans point over distance and how this accuracy can be improved. Participants pointed at projected targets from 2 m and 3 m while standing and sitting. Testing three common ray casting methods, we found that even with the most accurate one the average error is 61.3 cm. We found that all tested ray casting methods are affected by systematic displacements. Therefore, we trained a polynomial to compensate for this displacement. We show that a user-, pose-, and distance-independent quartic polynomial can reduce the average error by 37.3%.

    Video: https://www.youtube.com/watch?v=f8NOERrhWfA
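
    The displacement compensation can be pictured as ordinary polynomial regression: fit a degree-4 polynomial that maps the systematically displaced measured pointing position back to the true target position. The calibration data below is synthetic and purely illustrative; only the quartic-fit idea follows the paper.

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical calibration data: true horizontal target positions (cm)
        # and where ray casting located the pointing ray, with a systematic offset.
        targets = np.linspace(-150, 150, 60)
        offset = 0.002 * targets**2 - 0.15 * targets + 20.0
        measured = targets + offset + rng.normal(0, 5, size=targets.shape)

        # Fit a quartic that maps measured positions back to target positions.
        compensate = np.poly1d(np.polyfit(measured, targets, deg=4))

        print(f"mean error raw:         {np.abs(measured - targets).mean():.1f} cm")
        print(f"mean error compensated: {np.abs(compensate(measured) - targets).mean():.1f} cm")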

  • Bader, Patrick; Schwind, Valentin; Henze, Niels; Schneegass, Stefan; Broy, Nora; Schmidt, Albrecht: Design and evaluation of a layered handheld 3D display with touch-sensitive front and back. In: Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational (NordiCHI '14). ACM, Helsinki, Finland 2014. doi:10.1145/2639189.2639257
  • Schneegass, Stefan; Steimle, Frank; Bulling, Andreas; Alt, Florian; Schmidt, Albrecht: SmudgeSafe: geometric image transformations for smudge-resistant user authentication. In: Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp '14). ACM, New York, USA 2014, p. 775-786. doi:10.1145/2632048.2636090

    Touch-enabled user interfaces have become ubiquitous, such as on ATMs or portable devices. At the same time, authentication using touch input is problematic, since finger smudge traces may allow attackers to reconstruct passwords. We present SmudgeSafe, an authentication system that uses random geometric image transformations, such as translation, rotation, scaling, shearing, and flipping, to increase the security of cued-recall graphical passwords. We describe the design space of these transformations and report on two user studies: A lab-based security study involving 20 participants in attacking user-defined passwords, using high quality pictures of real smudge traces captured on a mobile phone display; and an in-the-field usability study with 374 participants who generated more than 130,000 logins on a mobile phone implementation of SmudgeSafe. Results show that SmudgeSafe significantly increases security compared to authentication schemes based on PINs and lock patterns, and exhibits very high learnability, efficiency, and memorability.
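
    The transformations named in the abstract map onto standard affine image operations. Below is a minimal sketch with Pillow (version 9.1 or newer for the enum constants); the parameter ranges are hypothetical, chosen only for illustration, not taken from the paper's design space.

        import random
        from PIL import Image

        def random_transform(img):
            """Apply one randomly chosen geometric transformation to a password image."""
            w, h = img.size
            choice = random.choice(["translate", "rotate", "scale", "shear", "flip"])
            if choice == "translate":
                dx = random.randint(-w // 4, w // 4)
                dy = random.randint(-h // 4, h // 4)
                return img.transform((w, h), Image.Transform.AFFINE, (1, 0, dx, 0, 1, dy))
            if choice == "rotate":
                return img.rotate(random.uniform(-45, 45))
            if choice == "scale":
                f = random.uniform(0.75, 1.25)
                return img.resize((int(w * f), int(h * f))).crop((0, 0, w, h))
            if choice == "shear":
                return img.transform((w, h), Image.Transform.AFFINE,
                                     (1, random.uniform(-0.3, 0.3), 0, 0, 1, 0))
            return img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)

        img = Image.new("RGB", (320, 240), "gray")   # stands in for a user's picture
        random_transform(img).save("transformed.png")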

  • Sahami Shirazi, Alireza; Abdelrahman, Yomna; Henze, Niels; Schneegass, Stefan; Khalilbeigi, Mohammadreza; Schmidt, Albrecht: Exploiting thermal reflection for interactive systems. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14). ACM, New York, USA 2014, p. 3483-3492. doi:10.1145/2556288.2557208

    Thermal cameras have recently drawn the attention of HCI researchers as a new sensory system enabling novel interactive systems. They are robust to illumination changes and make it easy to separate human bodies from the image background. Far-infrared radiation, however, has another characteristic that distinguishes thermal cameras from their RGB or depth counterparts, namely thermal reflection. Common surfaces reflect thermal radiation differently than visible light and can act as perfect thermal mirrors. In this paper, we show that through thermal reflection, thermal cameras can sense the space beyond their direct field of view, including areas beside and even behind it. We investigate how thermal reflection can increase the interaction space of projected surfaces using camera-projection systems, and we discuss the reflection characteristics of common surfaces in our vicinity in both the visible and thermal radiation bands. Using a proof-of-concept prototype, we demonstrate the increased interaction space for a hand-held camera-projection system. Furthermore, we outline a number of promising application examples that can benefit from the thermal reflection characteristics of surfaces.

  • Häkkilä, Jonna R.; Posti, Maaret; Schneegass, Stefan; Alt, Florian; Gultekin, Kunter; Schmidt, Albrecht: Let me catch this!: experiencing interactive 3D cinema through collecting content with a mobile phone. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14). ACM, New York, USA 2014, p. 1011-1020. doi:10.1145/2556288.2557187

    The entertainment industry is going through a transformation, and technological development is changing how we can enjoy and interact with entertainment media content. In our work, we explore how to enable interaction with content in the context of 3D cinemas. This allows viewers to use their mobile phone to retrieve, for example, information on the artist of the soundtrack currently playing or a discount coupon for the watch the main actor is wearing. We are particularly interested in the user experience of the interactive 3D cinema concept and how different interactive elements and interaction techniques are perceived. We report on the development of a prototype application utilizing smartphones and on an evaluation in a cinema context with 20 participants. Results emphasize that designs for interactive cinema should strive for holistic and positive user experiences. Interactive content should be tied to the actual video content but integrated so that it does not conflict with the immersive experience of the movie.

  • Broy, Nora; Schneegass, Stefan; Alt, Florian; Schmidt, Albrecht: FrameBox and MirrorBox: tools and guidelines to support designers in prototyping interfaces for 3D displays. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14). ACM, New York, USA 2014, p. 2037-2046. doi:10.1145/2556288.2557183

    In this paper, we identify design guidelines for stereoscopic 3D (S3D) user interfaces (UIs) and present the MirrorBox and the FrameBox, two UI prototyping tools for S3D displays. As auto-stereoscopy becomes available to the mass market, we believe the design of S3D UIs for devices such as mobile phones, public displays, or car dashboards will rapidly gain importance. A benefit of such UIs is that they can group and structure information in a way that makes it easily perceivable for the user; for example, important information can be shown in front of less important information. This paper identifies core requirements for designing S3D UIs and derives concrete guidelines. The requirements also serve as a basis for two depth layout tools we built with the aim of overcoming the limitations of traditional prototyping when sketching S3D UIs. We evaluated the tools with usability experts and compared them to traditional paper prototyping.

  • Alt, Florian; Schneegass, Stefan; Auda, Jonas; Rzayev, Rufat; Broy, Nora: Using eye-tracking to support interaction with layered 3D interfaces on stereoscopic displays. In: Proceedings of the 19th International Conference on Intelligent User Interfaces (IUI '14). ACM, Haifa, Israel 2014. doi:10.1145/2557500.2557518
  • Kubitza, Thomas; Pohl, Norman; Dingler, Tilman; Schneegass, Stefan; Weichel, Christian; Schmidt, Albrecht: Ingredients for a New Wave of Ubicomp Products. In: IEEE Pervasive Computing, Vol 12 (2013) No 3, p. 5-8. doi:10.1109/MPRV.2013.51

    The emergence of many new embedded computing platforms has lowered the hurdle for creating ubiquitous computing devices. Here, the authors highlight some of the newer platforms, communication technologies, sensors, actuators, and cloud-based development tools, which are creating new opportunities for ubiquitous computing.

  • Pascher, Max: A Practical Example of Digitalization in Action: When the Electricity Meter Knows Whether Grandma Is Doing Well. A Description of the Minimally Invasive Early-Warning System "ZELIA". In: Wege in die digitale Zukunft - Was bedeuten Smart Living, Big Data, Robotik & Co für die Sozialwirtschaft? Nomos Verlagsgesellschaft mbH & Co. KG, p. 137-148.
  • Pascher, Max; Baumeister, Annalies; Klein, Barbara; Schneegass, Stefan; Gerken, Jens: Little Helper: A Multi-Robot System in Home Health Care Environments. In: Ecole Nationale de l'Aviation Civile [ENAC].

    Being able to live independently and self-determined in one's own home is a crucial factor for social participation. For people with severe physical impairments, such as tetraplegia, who cannot use their hands to manipulate materials or operate devices, life in their own home is only possible with assistance from others. The inability to operate buttons and other interfaces also means that most assistive technologies cannot be used without help. In this paper, we present an ethnographic field study with 15 tetraplegics to better understand their living environments and needs. Results show the potential for robotic solutions but emphasize the need to support activities of daily living (ADL), such as grabbing and manipulating objects or opening doors. Based on this, we propose Little Helper, a tele-operated pack of robot drones that collaborate in a divide-and-conquer paradigm to fulfill several tasks. The user tele-operates the drones through gaze-based selection and through head motions and gestures to manipulate materials and operate applications.
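
    The gaze-based selection could, for example, be realized as dwell-time selection: a drone is selected once the user's gaze rests on it long enough. A minimal sketch follows; the gaze-sample format and the 0.8 s dwell threshold are hypothetical, not taken from the paper.

        DWELL_S = 0.8  # how long gaze must rest on a drone before it is selected

        def select_by_dwell(samples):
            """samples: list of (timestamp_s, target_or_None) gaze samples.
            Returns the first target fixated for at least DWELL_S seconds."""
            current, since = None, None
            for t, target in samples:
                if target != current:
                    current, since = target, t   # gaze moved to a new target
                elif current is not None and t - since >= DWELL_S:
                    return current
            return None

        gaze = [(0.0, None), (0.2, "drone1"), (0.5, "drone1"), (1.1, "drone1")]
        print(select_by_dwell(gaze))  # -> drone1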

  • Saad, Alia; Wittig, Nick; Grünefeld, Uwe; Schneegass, Stefan: A Systematic Analysis of External Factors Affecting Gait Identification.