Publications

Selected Publications

This page shows selected publications from recent years. For a complete list, please refer to Stefan Schneegass's Google Scholar or DBLP page.

  • Pascher, Max; Kronhardt, Kirill; Franzen, Til; Gerken, Jens: Adaptive DoF: Concepts to Visualize AI-generated Movements in Human-Robot Collaboration. In: Proceedings of the 2022 International Conference on Advanced Visual Interfaces (AVI 2022). ACM, New York, NY, USA 2022. doi:10.1145/3531073.3534479

    Nowadays, robots collaborate closely with humans in a growing number of areas. Enabled by lightweight materials and safety sensors, these cobots are gaining increasing popularity in domestic care, supporting people with physical impairments in their everyday lives. However, when cobots perform actions autonomously, it remains challenging for human collaborators to understand and predict their behavior. This, however, is crucial for achieving trust and user acceptance. One significant aspect of predicting cobot behavior is understanding their motion intent and comprehending how they "think" about their actions. We work on solutions that communicate the cobot's AI-generated motion intent to a human collaborator. Effective communication enables users to proceed with the most suitable option. We present a design exploration with different visualization techniques to optimize this user understanding, ideally resulting in increased safety and end-user acceptance.

  • Gruenefeld, Uwe; Auda, Jonas; Mathis, Florian; Schneegass, Stefan; Khamis, Mohamed; Gugenheimer, Jan; Mayer, Sven: VRception: Rapid Prototyping of Cross-Reality Systems in Virtual Reality. In: Proceedings of the 41st ACM Conference on Human Factors in Computing Systems (CHI). Association for Computing Machinery, New Orleans, United States 2022. doi:10.1145/3491102.3501821

    Cross-reality systems empower users to transition along the reality-virtuality continuum or collaborate with others experiencing different manifestations of it. However, prototyping these systems is challenging, as it requires sophisticated technical skills, time, and often expensive hardware. We present VRception, a concept and toolkit for quick and easy prototyping of cross-reality systems. By simulating all levels of the reality-virtuality continuum entirely in Virtual Reality, our concept overcomes the asynchronicity of realities, eliminating technical obstacles. Our VRception Toolkit leverages this concept to allow rapid prototyping of cross-reality systems and easy remixing of elements from all continuum levels. We replicated six cross-reality papers using our toolkit and presented them to their authors. Interviews with them revealed that our toolkit sufficiently replicates their core functionalities and allows quick iterations. Additionally, remote participants used our toolkit in pairs to collaboratively implement prototypes in about eight minutes that they would have otherwise expected to take days.

  • Auda, Jonas; Grünefeld, Uwe; Schneegass, Stefan: If The Map Fits! Exploring Minimaps as Distractors from Non-Euclidean Spaces in Virtual Reality. In: CHI 22. ACM, 2022. doi:10.1145/3491101.3519621
  • Abdrabou, Yasmeen; Rivu, Radiah; Ammar, Tarek; Liebers, Jonathan; Saad, Alia; Liebers, Carina; Gruenefeld, Uwe; Knierim, Pascal; Khamis, Mohamed; Mäkelä, Ville; Schneegass, Stefan; Alt, Florian: Understanding Shoulder Surfer Behavior Using Virtual Reality. In: Proceedings of the IEEE conference on Virtual Reality and 3D User Interfaces (IEEE VR). IEEE, Christchurch, New Zealand 2022.

    We explore how attackers behave during shoulder surfing. Unfortunately, such behavior is challenging to study as it is often opportunistic and can occur wherever potential attackers can observe other people’s private screens. Therefore, we investigate shoulder surfing using virtual reality (VR). We recruited 24 participants and observed their behavior in two virtual waiting scenarios: at a bus stop and in an open office space. In both scenarios, avatars interacted with private screens displaying different content, thus providing opportunities for shoulder surfing. From the results, we derive an understanding of factors influencing shoulder surfing behavior.

  • Auda, Jonas; Grünefeld, Uwe; Kosch, Thomas; Schneegass, Stefan: The Butterfly Effect: Novel Opportunities for Steady-State Visually-Evoked Potential Stimuli in Virtual Reality. In: Augmented Humans. Kashiwa, Chiba, Japan 2022. doi:10.1145/3519391.3519397
  • Pascher, Max; Kronhardt, Kirill; Franzen, Til; Gruenefeld, Uwe; Schneegass, Stefan; Gerken, Jens: My Caregiver the Cobot: Comparing Visualization Techniques to Effectively Communicate Cobot Perception to People with Physical Impairments. In: MDPI Sensors, Vol 22 (2022). doi:10.3390/s22030755

    Nowadays, robots are found in a growing number of areas where they collaborate closely with humans. Enabled by lightweight materials and safety sensors, these cobots are gaining increasing popularity in domestic care, where they support people with physical impairments in their everyday lives. However, when cobots perform actions autonomously, it remains challenging for human collaborators to understand and predict their behavior, which is crucial for achieving trust and user acceptance. One significant aspect of predicting cobot behavior is understanding their perception and comprehending how they "see" the world. To tackle this challenge, we compared three different visualization techniques for Spatial Augmented Reality. All of these communicate cobot perception by visually indicating which objects in the cobot's surroundings have been identified by its sensors. We compared the well-established visualizations Wedge and Halo against our proposed visualization Line in a remote user experiment with participants suffering from physical impairments. In a second remote experiment, we validated these findings with a broader non-specific user base. Our findings show that Line, a lower complexity visualization, results in significantly faster reaction times compared to Halo, and lower task load compared to both Wedge and Halo. Overall, users prefer Line as a more straightforward visualization. In Spatial Augmented Reality, with its known disadvantage of limited projection area size, established off-screen visualizations are not effective in communicating cobot perception and Line presents an easy-to-understand alternative.

  • Kronhardt, Kirill; Rübner, Stephan; Pascher, Max; Goldau, Felix Ferdinand; Frese, Udo; Gerken, Jens: Adapt or Perish? Exploring the Effectiveness of Adaptive DoF Control Interaction Methods for Assistive Robot Arms. In: Technologies, Vol 10 (2022). doi:10.3390/technologies10010030

    Robot arms are one of many assistive technologies used by people with motor impairments. Assistive robot arms can allow people to perform activities of daily living (ADL) involving grasping and manipulating objects in their environment without the assistance of caregivers. Suitable input devices (e.g., joysticks) mostly have two Degrees of Freedom (DoF), while most assistive robot arms have six or more. This results in time-consuming and cognitively demanding mode switches to change the mapping of DoFs to control the robot. One option to decrease the difficulty of controlling a high-DoF assistive robot arm using a low-DoF input device is to assign different combinations of movement-DoFs to the device's input DoFs depending on the current situation (adaptive control). To explore this method of control, we designed two adaptive control methods for a realistic virtual 3D environment. We evaluated our methods against a commonly used non-adaptive control method that requires the user to switch controls manually. This was conducted in a simulated remote study that used Virtual Reality and involved 39 non-disabled participants. Our results show that the number of mode switches necessary to complete a simple pick-and-place task decreases significantly when using an adaptive control type. In contrast, the task completion time and workload stay the same. A thematic analysis of qualitative feedback of our participants suggests that a longer period of training could further improve the performance of adaptive control methods.
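
    The core idea of adaptive control described above can be sketched in a few lines. This is a purely illustrative toy example, not the paper's implementation: the distance-based selection rule and all function names are assumptions.

```python
# Illustrative sketch of adaptive DoF control: the same 2-DoF joystick input
# is remapped onto different subsets of a 6-DoF robot arm's DoFs depending
# on context, instead of requiring a manual mode switch.
# NOTE: the distance threshold and DoF subsets are hypothetical choices.
from typing import Sequence, Tuple


def select_mapping(distance_to_target: float) -> Tuple[int, int]:
    """Hypothetical adaptive rule: far from the target, drive coarse
    translation DoFs; close to it, drive wrist/gripper DoFs."""
    return (0, 1) if distance_to_target > 0.10 else (4, 5)


def apply_input(joint_velocities: Sequence[float],
                joystick: Tuple[float, float],
                distance: float) -> list:
    """Write the two joystick axes into the currently selected DoFs."""
    dofs = select_mapping(distance)
    out = list(joint_velocities)
    for axis, dof in zip(joystick, dofs):
        out[dof] = axis
    return out


# Far from the target, the joystick drives DoFs 0 and 1 ...
print(apply_input([0.0] * 6, (0.5, -0.2), distance=0.5))
# ... while near the target, the same input drives DoFs 4 and 5.
print(apply_input([0.0] * 6, (0.5, -0.2), distance=0.05))
```

    The benefit reported in the abstract (fewer mode switches) comes from `select_mapping` replacing the user's manual mode switch with a context-driven one.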

  • Liebers, Jonathan; Horn, Patrick; Burschik, Christian; Gruenefeld, Uwe; Schneegass, Stefan: Using Gaze Behavior and Head Orientation for Implicit Identification in Virtual Reality. In: Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology (VRST). Association for Computing Machinery, Osaka, Japan 2021. doi:10.1145/3489849.3489880

    Identifying users of a Virtual Reality (VR) headset provides designers of VR content with the opportunity to adapt the user interface, set user-specific preferences, or adjust the level of difficulty either for games or training applications. While most identification methods currently rely on explicit input, implicit user identification is less disruptive and does not impact the immersion of the users. In this work, we introduce a biometric identification system that employs the user’s gaze behavior as a unique, individual characteristic. In particular, we focus on the user’s gaze behavior and head orientation while following a moving stimulus. We verify our approach in a user study. A hybrid post-hoc analysis results in an identification accuracy of up to 75% for an explainable machine learning algorithm and up to 100% for a deep learning approach. We conclude with discussing application scenarios in which our approach can be used to implicitly identify users.

  • Auda, Jonas; Mayer, Sven; Verheyen, Nils; Schneegass, Stefan: Flyables: Haptic Input Devices for Virtual Reality using Quadcopters. In: VRST. 2021. doi:10.1145/3489849.3489855
  • Latif, Shahid; Agarwal, Shivam; Gottschalk, Simon; Chrosch, Carina; Feit, Felix; Jahn, Johannes; Braun, Tobias; Tchenko, Yanick Christian; Demidova, Elena; Beck, Fabian: Visually Connecting Historical Figures Through Event Knowledge Graphs. In: 2021 IEEE Visualization Conference (VIS) - Short Papers. IEEE, 2021. doi:10.1109/VIS49827.2021.9623313

    Knowledge graphs store information about historical figures and their relationships indirectly through shared events. We developed a visualization system, VisKonnect, for analyzing the intertwined lives of historical figures based on the events they participated in. A user’s query is parsed for identifying named entities, and related data is retrieved from an event knowledge graph. While a short textual answer to the query is generated using the GPT-3 language model, various linked visualizations provide context, display additional information related to the query, and allow exploration.

  • Auda, Jonas; Grünefeld, Uwe; Pfeuffer, Ken; Rivu, Radiah; Alt, Florian; Schneegass, Stefan: I'm in Control! Transferring Object Ownership Between Remote Users with Haptic Props in Virtual Reality. In: Proceedings of the 9th ACM Symposium on Spatial User Interaction (SUI). Association for Computing Machinery, 2021. doi:10.1145/3485279.3485287
  • Saad, Alia; Liebers, Jonathan; Gruenefeld, Uwe; Alt, Florian; Schneegass, Stefan: Understanding Bystanders’ Tendency to Shoulder Surf Smartphones Using 360-Degree Videos in Virtual Reality. In: Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction (MobileHCI). Association for Computing Machinery, Toulouse, France 2021. doi:10.1145/3447526.3472058

    Shoulder surfing is an omnipresent risk for smartphone users. However, investigating these attacks in the wild is difficult because of either privacy concerns, lack of consent, or the fact that asking for consent would influence people’s behavior (e.g., they could try to avoid looking at smartphones). Thus, we propose utilizing 360-degree videos in Virtual Reality (VR), recorded in staged real-life situations on public transport. Despite differences between perceiving videos in VR and experiencing real-world situations, we believe this approach allows novel insights to be gained into observers’ tendency to shoulder surf another person’s phone authentication and interaction. By conducting a study (N=16), we demonstrate that a better understanding of shoulder surfers’ behavior can be obtained by analyzing gaze data during video watching and comparing it to post-hoc interview responses. On average, participants looked at the phone for about 11% of the time it was visible and could remember half of the applications used.

  • Auda, Jonas; Grünefeld, Uwe; Schneegass, Stefan: Enabling Reusable Haptic Props for Virtual Reality by Hand Displacement. In: Proceedings of the Conference on Mensch Und Computer (MuC). Association for Computing Machinery, Ingolstadt, Germany 2021, p. 412-417. doi:10.1145/3473856.3474000

    Virtual Reality (VR) enables compelling visual experiences. However, providing haptic feedback is still challenging. Previous work suggests utilizing haptic props to overcome such limitations and presents evidence that props could function as a single haptic proxy for several virtual objects. In this work, we displace users’ hands to account for virtual objects that are smaller or larger. Hence, the used haptic prop can represent several differently-sized virtual objects. We conducted a user study (N = 12) and presented our participants with two tasks during which we continuously handed them the same haptic prop while they saw differently-sized virtual objects in VR. In the first task, we used a linear hand displacement and increased the size of the virtual object to understand when participants perceive a mismatch. In the second task, we compared the linear displacement to logarithmic and exponential displacements. We found that participants, on average, do not perceive the size mismatch for virtual objects up to 50% larger than the physical prop. However, we did not find any differences between the explored displacement functions. We conclude our work with future research directions.
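
    The three displacement types compared in the second task can be sketched as gain functions of the size mismatch. This is a hypothetical illustration; the paper's actual parameterization is not reproduced here.

```python
# Illustrative hand-displacement gains for a virtual object that is
# `ratio` times the size of the physical prop (ratio = 1.0 -> no mismatch).
# The exact formulas below are assumptions chosen so that all three agree
# at ratio = 1.0; they are not taken from the paper.
import math


def linear_gain(ratio: float) -> float:
    """Displace the virtual hand proportionally to the size mismatch."""
    return ratio


def logarithmic_gain(ratio: float) -> float:
    """Compress large mismatches: grows more slowly than linear."""
    return 1.0 + math.log(ratio)


def exponential_gain(ratio: float) -> float:
    """Exaggerate large mismatches: grows faster than linear."""
    return math.exp(ratio - 1.0)


# For a virtual object 50% larger than the prop (the threshold at which
# the abstract reports the mismatch remains unnoticed on average):
for gain in (linear_gain, logarithmic_gain, exponential_gain):
    print(gain.__name__, round(gain(1.5), 3))
```

    At `ratio = 1.0` all three functions return 1.0, so the hand tracks the prop exactly when object and prop match in size; they only diverge as the mismatch grows.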

  • Faltaous, Sarah; Gruenefeld, Uwe; Schneegass, Stefan: Towards a Universal Human-Computer Interaction Model for Multimodal Interactions. In: Proceedings of the Conference on Mensch Und Computer (MuC). Association for Computing Machinery, Ingolstadt, Germany 2021, p. 59-63. doi:10.1145/3473856.3474008

    Models in HCI describe and provide insights into how humans use interactive technology. They are used by engineers, designers, and developers to understand and formalize the interaction process. At the same time, novel interaction paradigms arise constantly introducing new ways of how interactive technology can support humans. In this work, we look into how these paradigms can be described using the classical HCI model introduced by Schomaker in 1995. We extend this model by presenting new relations that would provide a better understanding of them. For this, we revisit the existing interaction paradigms and try to describe their interaction using this model. The goal of this work is to highlight the need to adapt the models to new interaction paradigms and spark discussion in the HCI community on this topic.

  • Faltaous, Sarah; Janzon, Simon; Heger, Roman; Strauss, Marvin; Golkar, Pedram; Viefhaus, Matteo; Prochazka, Marvin; Gruenefeld, Uwe; Schneegass, Stefan: Wisdom of the IoT Crowd: Envisioning a Smart Home-Based Nutritional Intake Monitoring System. In: Proceedings of the Conference on Mensch Und Computer (MuC). Association for Computing Machinery, Ingolstadt, Germany 2021, p. 568-573. doi:10.1145/3473856.3474009

    Obesity and overweight are two factors linked to various health problems that lead to death in the long run. Technological advancements have granted the chance to create smart interventions. These interventions could be operated by the Internet of Things (IoT) that connects different smart home and wearable devices, providing a large pool of data. In this work, we combine IoT with different technologies to present an exemplary nutritional intake monitoring system. This system integrates the input from various devices to understand the users’ behavior better and provide recommendations accordingly. Furthermore, we report on a preliminary evaluation through semi-structured interviews with six participants. Their feedback highlights the system’s opportunities and challenges.

  • Auda, Jonas; Weigel, Martin; Cauchard, Jessica; Schneegass, Stefan: Understanding Drone Landing on the Human Body. In: 23rd International Conference on Mobile Human-Computer Interaction. 2021. doi:10.1145/3447526.3472031
  • Auda, Jonas; Heger, Roman; Gruenefeld, Uwe; Schneegaß, Stefan: VRSketch: Investigating 2D Sketching in Virtual Reality with Different Levels of Hand and Pen Transparency. In: 18th International Conference on Human–Computer Interaction (INTERACT). Springer, Bari, Italy 2021, p. 195-211. doi:10.1007/978-3-030-85607-6_14

    Sketching is a vital step in design processes. While analog sketching with pen and paper is the de facto standard, Virtual Reality (VR) seems promising for improving the sketching experience. It provides myriads of new opportunities to express creative ideas. In contrast to reality, possible drawbacks of pen and paper drawing can be tackled by altering the virtual environment. In this work, we investigate how hand and pen transparency impacts users’ 2D sketching abilities. We conducted a lab study (N=20) investigating different combinations of hand and pen transparency. Our results show that a more transparent pen helps one sketch more quickly, while a transparent hand slows sketching down. Further, we found that transparency improves sketching accuracy while drawing in the direction that is occupied by the user’s hand.

  • Arboleda, S. A.; Pascher, Max; Lakhnati, Y.; Gerken, Jens: Understanding Human-Robot Collaboration for People with Mobility Impairments at the Workplace, a Thematic Analysis. In: 29th IEEE International Conference on Robot and Human Interactive Communication. IEEE, 2020. doi:10.1109/RO-MAN47096.2020.9223489

    Assistive technologies such as human-robot collaboration have the potential to ease the life of people with physical mobility impairments in social and economic activities. Currently, this group of people has lower rates of economic participation, due to the lack of adequate environments adapted to their capabilities. We take a closer look at the needs and preferences of people with physical mobility impairments in a human-robot cooperative environment at the workplace. Specifically, we aim to design how to control a robotic arm in manufacturing tasks for people with physical mobility impairments. We present a case study of a sheltered workshop as a prototype for an institution that employs people with disabilities in manufacturing jobs. Here, we collected data from potential end-users with physical mobility impairments, social workers, and supervisors using a participatory design technique (Future-Workshop). These stakeholders were divided into two groups, primary (end-users) and secondary users (social workers, supervisors), which were run across two separate sessions. The gathered information was analyzed using thematic analysis to reveal underlying themes across stakeholders. We identified concepts that highlight underlying concerns related to the robot fitting in the social and organizational structure, human-robot synergy, and human-robot problem management. In this paper, we present our findings and discuss the implications of each theme when shaping an inclusive human-robot cooperative workstation for people with physical mobility impairments.

  • Liebers, Jonathan; Abdelaziz, Mark; Mecke, Lukas; Saad, Alia; Auda, Jonas; Alt, Florian; Schneegaß, Stefan: Understanding User Identification in Virtual Reality Through Behavioral Biometrics and the Effect of Body Normalization. In: Proceedings of the 40th ACM Conference on Human Factors in Computing Systems (CHI). Association for Computing Machinery, Yokohama, Japan 2021. doi:10.1145/3411764.3445528

    Virtual Reality (VR) is becoming increasingly popular both in the entertainment and professional domains. Behavioral biometrics have recently been investigated as a means to continuously and implicitly identify users in VR. Applications in VR can specifically benefit from this, for example, to adapt virtual environments and user interfaces as well as to authenticate users. In this work, we conduct a lab study (N = 16) to explore how accurately users can be identified during two task-driven scenarios based on their spatial movement. We show that an identification accuracy of up to 90% is possible across sessions recorded on different days. Moreover, we investigate the role of users’ physiology in behavioral biometrics by virtually altering and normalizing their body proportions. We find that body normalization in general increases the identification rate, in some cases by up to 38%; hence, it improves the performance of identification systems.

  • Schultze, Sven; Gruenefeld, Uwe; Boll, Susanne: Demystifying Deep Learning: Developing and Evaluating a User-Centered Learning App for Beginners to Gain Practical Experience. In: i-com, Vol 19 (2020) No 2. doi:10.1515/icom-2020-0023

    Deep Learning has revolutionized Machine Learning, enhancing our ability to solve complex computational problems. From image classification to speech recognition, the technology can be beneficial in a broad range of scenarios. However, the barrier to entry is quite high, especially when programming skills are missing. In this paper, we present the development of a learning application that is easy to use, yet powerful enough to solve practical Deep Learning problems. We followed the human-centered design approach and conducted a technical evaluation to identify solvable classification problems. Afterwards, we conducted an online user evaluation to gain insights into users’ experience with the app, and to understand positive as well as negative aspects of our implemented concept. Our results show that participants liked using the app and found it useful, especially for beginners. Nonetheless, future iterations of the learning app should incrementally include more features to support advancing users.

  • Borsum, Florian; Pascher, Max; Auda, Jonas; Schneegass, Stefan; Lux, Gregor; Gerken, Jens: Stay on Course in VR: Comparing the Precision of Movement between Gamepad, Armswinger, and Treadmill: Kurs Halten in VR: Vergleich Der Bewegungspräzision von Gamepad, Armswinger Und Laufstall. In: Mensch Und Computer 2021. Association for Computing Machinery, New York, NY, USA 2021, p. 354-365. doi:10.1145/3473856.3473880

    This paper investigates the extent to which different locomotion techniques in Virtual Reality environments influence the precision of interaction. A total of three techniques were examined: two of them incorporate physical activity to achieve a high degree of realism in the movement (Armswinger, treadmill), while a gamepad served as the baseline. In a study with 18 participants, the precision of these three locomotion techniques was examined across six different obstacles in a VR course. The results show that for individual obstacles that require a combination of forward and sideways movement (slalom, cliff) or that target speed (rail), the treadmill enables significantly more precise control than the Armswinger. Across the entire course, however, no input device is significantly more precise than another. Using the treadmill also takes significantly more time than the gamepad and the Armswinger. Likewise, it became apparent that the goal of reproducing a real walking movement one-to-one is still not achieved even with a treadmill, yet the movement is nevertheless perceived as intuitive and immersive.

  • Arevalo Arboleda, Stephanie; Pascher, Max; Baumeister, Annalies; Klein, Barbara; Gerken, Jens: Reflecting upon Participatory Design in Human-Robot Collaboration for People with Motor Disabilities: Challenges and Lessons Learned from Three Multiyear Projects. In: The 14th PErvasive Technologies Related to Assistive Environments Conference. Association for Computing Machinery, New York, NY, USA 2021, p. 147-155. doi:10.1145/3453892.3458044

    Human-robot technology has the potential to positively impact the lives of people with motor disabilities. However, current efforts have mostly been oriented towards technology (sensors, devices, modalities, interaction techniques), thus leaving the user and their valuable input by the wayside. In this paper, we aim to present a holistic perspective of the role of participatory design in Human-Robot Collaboration (HRC) for People with Motor Disabilities (PWMD). We have been involved in several multiyear projects related to HRC for PWMD, where we encountered different challenges related to planning and participation, preferences of stakeholders, using certain participatory design techniques, technology exposure, as well as ethical, legal, and social implications. These challenges helped us formulate five lessons learned that could serve as a guideline when using participatory design with vulnerable groups, in particular for early-career researchers who are starting to explore HRC research for people with disabilities.

  • Pascher, Max; Baumeister, Annalies; Schneegass, Stefan; Klein, Barbara; Gerken, Jens: Recommendations for the Development of a Robotic Drinking and Eating Aid - An Ethnographic Study. In: Ardito, Carmelo; Lanzilotti, Rosa; Malizia, Alessio; Petrie, Helen; Piccinno, Antonio; Desolda, Giuseppe; Inkpen, Kori (Ed.): Human-Computer Interaction -- INTERACT 2021. Springer International Publishing, Cham 2021, p. 331-351.

    Being able to live independently and self-determined in one's own home is a crucial factor for human dignity and preservation of self-worth. For people with severe physical impairments who cannot use their limbs for everyday tasks, living in their own home is only possible with assistance from others. The inability to move arms and hands makes it hard to take care of oneself, e.g. drinking and eating independently. In this paper, we investigate how 15 participants with disabilities consume food and drinks. We report on interviews and participatory observations, and analyze the aids they currently use. Based on our findings, we derive a set of recommendations that supports researchers and practitioners in designing future robotic drinking and eating aids for people with disabilities.

  • Illing, Jannike; Klinke, Philipp; Gruenefeld, Uwe; Pfingsthorn, Max; Heuten, Wilko: Time is money! Evaluating Augmented Reality Instructions for Time-Critical Assembly Tasks. In: 19th International Conference on Mobile and Ubiquitous Multimedia (MUM). Association for Computing Machinery, Essen, Germany 2020, p. 277-287. doi:10.1145/3428361.3428398

    Manual assembly tasks require workers to precisely assemble parts in 3D space. Often additional time pressure increases the complexity of these tasks even further (e.g., adhesive bonding processes). Therefore, we investigate how Augmented Reality (AR) can improve workers’ performance in time- and space-dependent process steps. In a user study, we compare three conditions: instructions presented on (a) paper, (b) a camera-based see-through tablet, and (c) a head-mounted AR device. For instructions we used selected work steps from a standardized adhesive bonding process as a representative for common time-critical assembly tasks. We found that instructions in AR can improve the performance and understanding of temporal and spatial factors. The tablet instruction condition showed the best subjective results among the participants, which can increase motivation, particularly among less-experienced workers.

  • Faltaous, Sarah; Neuwirth, Joshua; Gruenefeld, Uwe; Schneegass, Stefan: SaVR: Increasing Safety in Virtual Reality Environments via Electrical Muscle Stimulation. In: 19th International Conference on Mobile and Ubiquitous Multimedia (MUM). Association for Computing Machinery, Essen, Germany 2020, p. 254-258. doi:10.1145/3428361.3428389

    One of the main benefits of interactive Virtual Reality (VR) applications is that they provide a high sense of immersion. As a result, users lose their sense of real-world space which makes them vulnerable to collisions with real-world objects. In this work, we propose a novel approach to prevent such collisions using Electrical Muscle Stimulation (EMS). EMS actively prevents the movement that would result in a collision by actuating the antagonist muscle. We report on a user study comparing our approach to the commonly used feedback modalities: audio, visual, and vibro-tactile. Our results show that EMS is a promising modality for restraining user movement and, at the same time, rated best in terms of user experience.

  • Gruenefeld, Uwe; Brueck, Yvonne; Boll, Susanne: Behind the Scenes: Comparing X-Ray Visualization Techniques in Head-Mounted Optical See-through Augmented Reality. In: 19th International Conference on Mobile and Ubiquitous Multimedia (MUM). Association for Computing Machinery, Essen, Germany 2020, p. 179-185. doi:10.1145/3428361.3428402CitationDetails

    Locating objects in the environment can be a difficult task, especially when the objects are occluded. With Augmented Reality, we can alter our perceived reality by augmenting it with visual cues or removing visual elements of reality, helping users to locate occluded objects. However, to our knowledge, it has not yet been evaluated which visualization technique works best for estimating the distance and size of occluded objects in optical see-through head-mounted Augmented Reality. To address this, we compare four different visualization techniques derived from previous work in a laboratory user study. Our results show that techniques utilizing additional aid (textual or with a grid) help users to estimate the distance to occluded objects more accurately. In contrast, a realistic rendering of the scene, such as a cutout in the wall, resulted in higher distance estimation errors.

  • Auda, Jonas; Gruenefeld, Uwe; Mayer, Sven: It Takes Two To Tango: Conflicts Between Users on the Reality-Virtuality Continuum and Their Bystanders. In: Proceedings of the 14th ACM Interactive Surfaces and Spaces (ISS). Association for Computing Machinery, Lisbon, Portugal 2020. CitationDetails

    Over the last years, Augmented and Virtual Reality technology has become more immersive. However, when users immerse themselves in these digital realities, they detach from their real-world environments. This detachment creates conflicts that are problematic in public spaces, such as planes, but also in private settings. Consequently, detaching from the world creates an immersive experience for the user on the one hand, while creating a social conflict with bystanders on the other. With this work, we highlight and categorize social conflicts caused by the use of immersive digital realities. We first present different social settings in which social conflicts arise and then provide an overview of investigated scenarios. Finally, we present research opportunities that help to address social conflicts between immersed users and bystanders.

  • Detjen, Henrik; Geisler, Stefan; Schneegass, Stefan: "Help, Accident Ahead!": Using Mixed Reality Environments in Automated Vehicles to Support Occupants After Passive Accident Experiences. In: ACM (Ed.): AutomotiveUI (adjunct) 2020. 2020. doi:10.1145/3409251.3411723CitationDetails
  • Detjen, Henrik; Pfleging, Bastian; Schneegass, Stefan: A Wizard of Oz Field Study to Understand Non-Driving-Related Activities, Trust, and Acceptance of Automated Vehicles. In: AutomotiveUI 2020. ACM, 2020. doi:10.1145/3409120.3410662CitationDetails
  • Schneegaß, Stefan; Auda, Jonas; Heger, Roman; Grünefeld, Uwe; Kosch, Thomas: EasyEG: A 3D-printable Brain-Computer Interface. In: Proceedings of the 33rd ACM Symposium on User Interface Software and Technology (UIST). Minnesota, USA 2020. doi:10.1145/3379350.3416189CitationDetails

    Brain-Computer Interfaces (BCIs) are progressively being adopted by the consumer market, making them available for a variety of use cases. However, off-the-shelf BCIs are limited in their adjustability to individual head shapes, their evaluation of scalp-electrode contact, and their extensibility through additional sensors. This work presents EasyEG, a BCI headset that is adaptable to individual head shapes and offers adjustable electrode-scalp contact to improve measuring quality. EasyEG consists of 3D-printed and low-cost components that can be extended with additional sensing hardware, hence expanding the application domain of current BCIs. We conclude with use cases that demonstrate the potential of our EasyEG headset.

  • Gruenefeld, Uwe; Prädel, Lars; Illing, Jannike; Stratmann, Tim; Drolshagen, Sandra; Pfingsthorn, Max: Mind the ARm: Realtime Visualization of Robot Motion Intent in Head-Mounted Augmented Reality. In: Proceedings of the Conference on Mensch Und Computer (MuC). Association for Computing Machinery, 2020, p. 259-266. doi:10.1145/3404983.3405509CitationDetails

    Established safety sensor technology shuts down industrial robots when a collision is detected, causing preventable loss of productivity. To minimize downtime, we implemented three Augmented Reality (AR) visualizations (Path, Preview, and Volume) which allow users to understand robot motion intent and give way to the robot. We compare the different visualizations in a user study in which a small cognitive task is performed in a shared workspace. We found that Preview and Path required significantly longer head rotations to perceive robot motion intent. Volume, however, required the shortest head rotation and was perceived as most safe, enabling closer proximity of the robot arm before one left the shared workspace without causing shutdowns.

  • Agarwal, Shivam; Auda, Jonas; Schneegaß, Stefan; Beck, Fabian: A Design and Application Space for Visualizing User Sessions of Virtual and Mixed Reality Environments. In: VMV 2020. ACM, 2020. doi:10.2312/vmv.20201194CitationDetails
  • Poguntke, Romina; Schneegass, Christina; van der Vekens, Lucas; Rzayev, Rufat; Auda, Jonas; Schneegass, Stefan; Schmidt, Albrecht: NotiModes: an investigation of notification delay modes and their effects on smartphone users. In: MuC '20: Proceedings of the Conference on Mensch und Computer. ACM, Magdeburg, Germany 2020. doi:10.1145/3404983.3410006CitationDetails
  • Saad, Alia; Elkafrawy, Dina Hisham; Abdennadher, Slim; Schneegass, Stefan: Are They Actually Looking? Identifying Smartphones Shoulder Surfing Through Gaze Estimation. In: ETRA. ACM, Stuttgart, Germany 2020. doi:10.1145/3379157.3391422CitationDetails
  • Liebers, Jonathan; Schneegass, Stefan: Gaze-based Authentication in Virtual Reality. In: ETRA. ACM, 2020. doi:10.1145/3379157.3391421CitationDetails
  • Safwat, Sherine Ashraf; Bolock, Alia El; Alaa, Mostafa; Faltaous, Sarah; Schneegass, Stefan; Abdennadher, Slim: The Effect of Student-Lecturer Cultural Differences on Engagement in Learning Environments - A Pilot Study. In: Communications in Computer and Information Science. Springer, 2020. doi:10.1007/978-3-030-51999-5_10CitationDetails
  • Schneegass, Stefan; Sasse, Angela; Alt, Florian; Vogel, Daniel: Authentication Beyond Desktops and Smartphones: Novel Approaches for Smart Devices and Environments. In: CHI'20 Proceedings. ACM, Honolulu, HI, USA 2020. doi:10.1145/3334480.3375144CitationDetails
  • Ranasinghe, Champika; Holländer, Kai; Currano, Rebecca; Sirkin, David; Moore, Dylan; Schneegass, Stefan; Ju, Wendy: Autonomous Vehicle-Pedestrian Interaction Across Cultures: Towards Designing Better External Human Machine Interfaces (eHMIs). In: CHI'20 Proceedings. ACM, Honolulu, HI, USA 2020. doi:10.1145/3334480.3382957CitationDetails
  • Liebers, Jonathan; Schneegass, Stefan: Introducing Functional Biometrics: Using Body-Reflections as a Novel Class of Biometric Authentication Systems. In: CHI Extended Abstracts 2020. ACM, Honolulu, HI, USA 2020. doi:10.1145/3334480.3383059CitationDetails
  • Faltaous, Sarah; Schönherr, Chris; Detjen, Henrik; Schneegass, Stefan: Exploring proprioceptive take-over requests for highly automated vehicles. In: MUM '19: Proceedings of the 18th International Conference on Mobile and Ubiquitous Multimedia. ACM, Pisa, Italy 2019. doi:10.1145/3365610.3365644PDFCitationDetails
  • Detjen, Henrik; Faltaous, Sarah; Geisler, Stefan; Schneegass, Stefan: User-Defined Voice and Mid-Air Gesture Commands for Maneuver-based Interventions in Automated Vehicles. In: MuC'19: Proceedings of Mensch und Computer 2019. ACM, New York, USA 2019. doi:10.1145/3340764.3340798PDFCitationDetails
  • Poguntke, Romina; Mantz, Tamara; Hassib, Mariam; Schmidt, Albrecht; Schneegass, Stefan: Smile to Me - Investigating Emotions and their Representation in Text-based Messaging in the Wild. In: MuC'19: Proceedings of Mensch und Computer 2019. ACM, New York, USA 2019. doi:10.1145/3340764.3340795PDFCitationDetails
  • Faltaous, Sarah; Eljaki, Salma; Schneegass, Stefan: User Preferences of Voice Controlled Smart Light Systems. In: MuC'19: Proceedings of Mensch und Computer 2019. ACM, New York, USA 2019. doi:10.1145/3340764.3344437PDFCitationDetails
  • Pfeiffer, Max; Medrano, Samuel Navas; Auda, Jonas; Schneegass, Stefan: STOP! Enhancing Drone Gesture Interaction with Force Feedback. In: CHI'19 Proceedings. HAL, Glasgow, UK 2019. doi:https://hal.archives-ouvertes.fr/hal-02128395/documentPDFFull textCitationDetails
  • Auda, Jonas; Pascher, Max; Schneegass, Stefan: Around the (Virtual) World - Infinite Walking in Virtual Reality Using Electrical Muscle Stimulation. In: ACM (Ed.): CHI'19 Proceedings. Glasgow 2019. doi:10.1145/3290605.3300661PDFCitationDetails

    Virtual worlds are infinite environments in which the user can move around freely. When shifting from controller-based movement to regular walking as an input, the limitations of the real world also limit the virtual world. Tackling this challenge, we propose the use of electrical muscle stimulation to limit the necessary real-world space and create an unlimited walking experience. We actuate the users' legs so that they deviate from their straight route and thus walk in circles in the real world while still walking straight in the virtual world. We report on a study comparing this approach to vision shift – the state-of-the-art approach – as well as a combination of both. The results show that particularly the combination of both approaches yields high potential for creating an infinite walking experience.

  • Faltaous, Sarah; Haas, Gabriel; Barrios, Liliana; Seiderer, Andreas; Rauh, Sebastian Felix; Chae, Han Joo; Schneegass, Stefan; Alt, Florian: BrainShare: A Glimpse of Social Interaction for Locked-in Syndrome Patients. In: CHI EA '19: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA 2019. PDFCitationDetails
  • Schneegass, Stefan; Poguntke, Romina; Machulla, Tonja Katrin: Understanding the Impact of Information Representation on Willingness to Share Information. In: CHI '19: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, United States 2019. doi:10.1145/3290605.3300753PDFCitationDetails
  • Faltaous, Sarah; Liebers, Jonathan; Abdelrahman, Yomna; Alt, Florian; Schneegass, Stefan: VPID: Towards Vein Pattern Identification Using Thermal Imaging. In: i-com, Vol 18 (2019) No 3, p. 259-270. doi:10.1515/icom-2019-0009PDFCitationDetails

    Biometric authentication has received considerable attention lately. The vein pattern on the back of the hand is a unique biometric that can be measured through thermal imaging. Detecting this pattern provides an implicit approach that can authenticate users while they interact. In this paper, we present the vein-pattern identification system VPID. It consists of a vein pattern recognition pipeline and an authentication part. We implemented six different vein-based authentication approaches by combining thermal imaging and computer vision algorithms. Through a study, we show that these approaches achieve a low false-acceptance rate (FAR) and a low false-rejection rate (FRR). Our findings show that the best approach is the Hausdorff distance difference applied in combination with a Convolutional Neural Network (CNN) classification of stacked images.

  • Pascher, Max; Schneegass, Stefan; Gerken, Jens: SwipeBuddy. In: Lamas, David; Loizides, Fernando; Nacke, Lennart; Petrie, Helen; Winckler, Marco; Zaphiris, Panayiotis (Ed.): Human-Computer Interaction -- INTERACT 2019. Springer International Publishing, Cham 2019, p. 568-571. CitationDetails

    Mobile devices are the core computing platform we use in our everyday life to communicate with friends, watch movies, or read books. For people with severe physical disabilities, such as tetraplegics, who cannot use their hands to operate such devices, these devices are barely usable. Tackling this challenge, we propose SwipeBuddy, a teleoperated robot allowing for touch interaction with a smartphone, tablet, or ebook-reader. The mobile device is mounted on top of the robot and can be teleoperated by a user through head motions and gestures controlling a stylus simulating touch input. Further, the user can control the position and orientation of the mobile device. We demonstrate the SwipeBuddy robot device and its different interaction capabilities.

  • Hoppe, Matthias; Knierim, Pascal; Kosch, Thomas; Funk, Markus; Futami, Lauren; Schneegass, Stefan; Henze, Niels; Schmidt, Albrecht; Machulla, Tonja: VRHapticDrones - Providing Haptics in Virtual Reality through Quadcopters. In: MUM 2018: Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia. ACM, Cairo, Egypt 2018. doi:10.1145/3282894.3282898PDFCitationDetails
  • Saad, Alia; Chukwu, Michael; Schneegass, Stefan: Communicating Shoulder Surfing Attacks to Users. In: MUM 2018: Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia. ACM, Cairo, Egypt 2018. doi:10.1145/3282894.3282919PDFCitationDetails
  • Schneegass, Christina; Terzimehić, Nađa; Nettah, Mariam; Schneegass, Stefan: Informing the Design of User-adaptive Mobile Language Learning Applications. In: MUM 2018: Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia. ACM, Cairo, Egypt 2018. doi:10.1145/3282894.3282926PDFCitationDetails
  • Faltaous, Sarah; Elbolock, Alia; Talaat, Mostafa; Abdennadher, Slim; Schneegass, Stefan: Virtual Reality for Cultural Competences. In: MUM 2018: Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia. ACM, Cairo, Egypt 2018. doi:10.1145/3282894.3289739PDFCitationDetails
  • Antoun, Sara; Auda, Jonas; Schneegass, Stefan: SlidAR - Towards using AR in Education. In: MUM 2018: Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia. ACM, Cairo, Egypt 2018. doi:10.1145/3282894.3289744PDFCitationDetails
  • Elagroudy, Passant; Abdelrahman, Yomna; Faltaous, Sarah; Schneegass, Stefan; Davis, Hilary: Workshop on Amplified and Memorable Food Interactions. In: MUM 2018: Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia. ACM, Cairo, Egypt 2018. doi:10.1145/3282894.3286059PDFCitationDetails
  • Auda, Jonas; Hoppe, Matthias; Amiraslanov, Orkhan; Zhou, Bo; Knierim, Pascal; Schneegass, Stefan; Schmidt, Albrecht; Lukowicz, Paul: LYRA - smart wearable in-flight service assistant. In: ISWC '18: Proceedings of the 2018 ACM International Symposium on Wearable Computers. ACM, Singapore, Singapore 2018. doi:10.1145/3267242.3267282PDFCitationDetails
  • Arévalo-Arboleda, Stephanie; Pascher, Max; Gerken, Jens: Opportunities and Challenges in Mixed-Reality for an Inclusive Human-Robot Collaboration Environment. In: Proceedings of the 2018 International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interactions (VAM-HRI) as part of the ACM/IEEE Conference on Human-Robot Interaction. Chicago, USA 2018. CitationDetails

    This paper presents an approach to enhancing robot control using Mixed Reality. It highlights the opportunities and challenges in interaction design for achieving a Human-Robot Collaborative environment. In fact, Human-Robot Collaboration is a perfect space for social inclusion: it enables people who suffer from severe physical impairments to interact with the environment by giving them movement control of an external robotic arm. When discussing robot control, it is important to reduce the visual split that different input and output modalities carry. Therefore, Mixed Reality is of particular interest when trying to ease communication between humans and robotic systems.

  • Faltaous, Sarah; Baumann, M.; Schneegass, Stefan; Chuang, Lewis: Design Guidelines for Reliability Communication in Autonomous Vehicles. In: AutomotiveUI '18: Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. ACM, Toronto, Canada 2018. doi:10.1145/3239060.3239072PDFCitationDetails
  • Poguntke, Romina; Tasci, Cagri; Korhonen, Olli; Alt, Florian; Schneegass, Stefan: AVotar - exploring personalized avatars for mobile interaction with public displays. In: MobileHCI '18: Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct. ACM, Barcelona, Spain 2018. doi:10.1145/3236112.3236113PDFCitationDetails
  • Weber, Dominik; Voit, Alexandra; Auda, Jonas; Schneegass, Stefan; Henze, Niels: Snooze! - investigating the user-defined deferral of mobile notifications. In: MobileHCI '18: Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services. ACM, Barcelona, Spain 2018. doi:10.1145/3229434.3229436PDFCitationDetails
  • Poguntke, Romina; Kiss, Francisco; Kaplan, Ayhan; Schmidt, Albrecht; Schneegass, Stefan: RainSense - exploring the concept of a sense for weather awareness. In: MobileHCI '18: Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct. ACM, Barcelona, Spain 2018. doi:10.1145/3236112.3236114PDFCitationDetails
  • Voit, Alexandra; Salm, Marie Olivia; Beljaars, Miriam; Kohn, Stefan; Schneegass, Stefan: Demo of a smart plant system as an exemplary smart home application supporting non-urgent notifications. In: NordiCHI '18: Proceedings of the 10th Nordic Conference on Human-Computer Interaction. ACM, Oslo, Norway 2018. doi:10.1145/3240167.3240231PDFCitationDetails
  • Auda, Jonas; Schneegass, Stefan; Faltaous, Sarah: Control, Intervention, or Autonomy? Understanding the Future of SmartHome Interaction. In: Conference on Human Factors in Computing Systems (CHI). ACM, Montreal, Canada 2018. PDFCitationDetails
  • Hassib, Mariam; Schneegass, Stefan; Henze, Niels; Schmidt, Albrecht; Alt, Florian: A Design Space for Audience Sensing and Feedback Systems. In: CHI EA '18: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, Montreal, Canada 2018. doi:10.1145/3170427.3188569PDFCitationDetails
  • Kiss, Francisco; Boldt, Robin; Pfleging, Bastian; Schneegass, Stefan: Navigation Systems for Motorcyclists: Exploring Wearable Tactile Feedback for Route Guidance in the Real World. In: CHI '18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, Montreal, Canada 2018. doi:10.1145/3173574.3174191PDFCitationDetails
  • Voit, Alexandra; Pfähler, Ferdinand; Schneegass, Stefan: Posture Sleeve: Using Smart Textiles for Public Display Interactions. In: CHI EA '18: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, Montreal 2018. doi:10.1145/3170427.3188687PDFCitationDetails
  • Auda, Jonas; Weber, Dominik; Voit, Alexandra; Schneegass, Stefan: Understanding User Preferences towards Rule-based Notification Deferral. In: CHI EA '18: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, Montreal, Canada 2018. doi:10.1145/3170427.3188688PDFCitationDetails
  • Henze, Niels: Design and evaluation of a computer-actuated mouse. In: MUM '17: Proceedings of the 16th International Conference on Mobile and Ubiquitous Multimedia. ACM, Stuttgart, Germany 2017. doi:10.1145/3152832.3152862PDFCitationDetails
  • Hassib, Mariam; Khamis, Mohamed; Friedl, Susanne; Schneegass, Stefan; Alt, Florian: Brainatwork - logging cognitive engagement and tasks in the workplace using electroencephalography. In: MUM '17: Proceedings of the 16th International Conference on Mobile and Ubiquitous Multimedia. ACM, Stuttgart, Germany 2017. doi:10.1145/3152832.3152865PDFCitationDetails
  • Voit, Alexandra; Schneegass, Stefan: FabricID - using smart textiles to access wearable devices. In: MUM '17: Proceedings of the 16th International Conference on Mobile and Ubiquitous Multimedia. ACM, Stuttgart, Germany 2017. doi:10.1145/3152832.3156622PDFCitationDetails
  • Mayer, Simon; Schneegass, Stefan: IoT 2017 - the Seventh International Conference on the Internet of Things. In: IoT '17: Proceedings of the Seventh International Conference on the Internet of Things. ACM, Linz, Austria 2017. doi:10.1145/3131542.3131543PDFCitationDetails
  • Duente, Tim; Schneegass, Stefan; Pfeiffer, Max: EMS in HCI - challenges and opportunities in actuating human bodies. In: MobileHCI '17: Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services. ACM, Vienna, Austria 2017. doi:10.1145/3098279.3119920PDFCitationDetails
  • Oberhuber, Sascha; Kothe, Tina; Schneegass, Stefan; Alt, Florian: Augmented Games - Exploring Design Opportunities in AR Settings With Children. In: IDC '17: Proceedings of the 2017 Conference on Interaction Design and Children. ACM, Stanford, California, USA 2017. doi:10.1145/3078072.3079734PDFCitationDetails
  • Knierim, Pascal; Kosch, Thomas; Schwind, Valentin; Funk, Markus; Kiss, Francisco; Schneegass, Stefan; Henze, Niels: Tactile Drones - Providing Immersive Tactile Feedback in Virtual Reality through Quadcopters. In: CHI EA '17: Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM, Denver, Colorado, USA 2017. doi:10.1145/3027063.3050426PDFCitationDetails
  • Schmidt, Albrecht; Schneegass, Stefan; Kunze, Kai; Rekimoto, Jun; Woo, Woontack: Workshop on Amplification and Augmentation of Human Perception. In: CHI EA '17: Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM, Denver, Colorado, USA 2017. doi:10.1145/3027063.3027088PDFCitationDetails
  • Hassib, Mariam; Pfeiffer, Max; Schneegass, Stefan; Rohs, Michael; Alt, Florian: Emotion Actuator - Embodied Emotional Feedback through Electroencephalography and Electrical Muscle Stimulation. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, USA 2017, p. 6133-6146. doi:10.1145/3025453.3025953PDFCitationDetails

    The human body reveals emotional and bodily states through measurable signals, such as body language and electroencephalography. However, such manifestations are difficult to communicate to others remotely. We propose EmotionActuator, a proof-of-concept system to investigate the transmission of emotional states in which the recipient performs emotional gestures to understand and interpret the state of the sender. We call this kind of communication embodied emotional feedback and present a prototype implementation. To realize our concept, we chose four emotional states: amused, sad, angry, and neutral. We designed EmotionActuator through a series of studies to assess emotional classification via EEG and to create an EMS gesture set by comparing composed gestures from the literature to sign-language gestures. Interviews in a final study with the end-to-end prototype revealed that participants like implicit sharing of emotions and find the embodied output immersive, but want to have control over which emotions are shared and with whom. This work contributes a proof-of-concept system and a set of design recommendations for designing embodied emotional feedback systems.

    Video: https://www.youtube.com/watch?v=OgOZmsa8xs8

  • Abdelrahman, Yomna; Khamis, Mohamed; Schneegass, Stefan; Alt, Florian: Stay Cool! Understanding Thermal Attacks on Mobile-based User Authentication. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, USA 2017, p. 3751-3763. doi:10.1145/3025453.3025461PDFCitationDetails

    PINs and patterns remain among the most widely used knowledge-based authentication schemes. As thermal cameras become ubiquitous and affordable, we foresee a new form of threat to user privacy on mobile devices. Thermal cameras allow performing thermal attacks, where heat traces resulting from authentication can be used to reconstruct passwords. In this work, we investigate in detail the viability of exploiting thermal imaging to infer PINs and patterns on mobile devices. We present a study (N=18) in which we evaluated how properties of PINs and patterns influence their resistance to thermal attacks. We found that thermal attacks are indeed viable on mobile devices; overlapping patterns significantly decrease the success rate of thermal attacks from 100% to 16.67%, while PINs remain vulnerable (>72% success rate) even with duplicate digits. We conclude with recommendations for users and designers of authentication schemes on how to resist thermal attacks.

    Video: https://www.youtube.com/watch?v=FxOBAvI-YFI

  • Hassib, Mariam; Schneegass, Stefan; Eiglsperger, Philipp; Henze, Niels; Schmidt, Albrecht; Alt, Florian: EngageMeter: A System for Implicit Audience Engagement Sensing Using Electroencephalography. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, USA 2017, p. 5114-5119. doi:10.1145/3025453.3025669CitationDetails

    Obtaining information about audience engagement in presentations is a valuable asset for presenters in many domains. Prior literature mostly utilized explicit methods of collecting feedback, which induce distractions, add workload for the audience, and do not provide objective information to presenters. We present EngageMeter - a system that allows fine-grained information on audience engagement to be obtained implicitly from multiple brain-computer interfaces (BCI) and fed back to presenters for real-time and post-hoc access. Through an evaluation during an HCI conference (Naudience=11, Npresenters=3), we found that EngageMeter provides value to presenters (a) in real time, since it allows reacting to current engagement scores by changing tone or adding pauses, and (b) post hoc, since presenters can adjust their slides and embed extra elements. We discuss how EngageMeter can be used in collocated and distributed audience sensing as well as how it can aid presenters in long-term use.

  • Michahelles, Florian; Ilic, Alexander; Kunze, Kai; Kritzler, Mareike; Schneegass, Stefan: IoT 2016. In: IEEE Pervasive Computing, Vol 16 (2017) No 2, p. 87-89. doi:10.1109/MPRV.2017.25PDFCitationDetails

    The 6th International Conference on the Internet of Things (IoT 2016) showed a clear departure from the research on data acquisition and sensor management presented in previous editions of this conference. Learn about this year's move toward more commercially applicable implementations and cross-domain applications.

  • Schneegass, Stefan; Amft, Oliver: Introduction to Smart Textiles. In: Schneegass, Stefan; Amft, Oliver (Ed.): Smart Textiles: Fundamentals, Design, and Interaction. Springer International Publishing, 2017, p. 1-15. doi:10.1007/978-3-319-50124-6_1CitationDetails

    This chapter introduces fundamental concepts related to wearable computing, smart textiles, and context awareness. The history of wearable computing is summarized to illustrate the current state of smart textile and garment research. Subsequently, the process to build smart textiles from fabric production, sensor and actuator integration, contacting and integration, as well as communication, is summarized with notes and links to relevant chapters of this book. The options and specific needs for evaluating smart textiles are described. The chapter concludes by highlighting current and future research and development challenges for smart textiles.

  • Cheng, Jingyuan; Zhou, Bo; Lukowicz, Paul; Seoane, Fernando; Varga, Matija; Mehmann, Andreas; Chabrecek, Peter; Gaschler, Werner; Goenner, Karl; Horter, Hansjürgen; Schneegass, Stefan; Hassib, Mariam; Schmidt, Albrecht; Freund, Martin; Zhang, Rui; Amft, Oliver: Textile Building Blocks: Toward Simple, Modularized, and Standardized Smart Textile. In: Stefan Schneegass, Oliver Amft (Ed.): Smart Textiles. Springer International Publishing, 2017, p. 303-331. doi:10.1007/978-3-319-50124-6_14CitationDetails

    Textiles are pervasive in our lives, covering the human body and objects, as well as serving in industrial applications. In everyday use, smart textile becomes a promising medium for monitoring, information retrieval, and interaction. While there are many applications in sport, health care, and industry, state-of-the-art smart textiles are still found only in niche markets. To gain mass-market capability, we see the necessity of generalizing and modularizing smart textile production and application development, which on the one end lowers production costs and on the other end enables easy deployment. In this chapter, we demonstrate our initial effort in modularization. By devising types of universal sensing fabrics for conductive and non-conductive patches, smart textiles can be constructed from basic, reusable components. Using these fabric blocks, we present four types of sensing modalities: resistive pressure, capacitive, bioimpedance, and biopotential. In addition, we present a multi-channel textile-electronics interface and various applications built on top of the basic building blocks following the ‘cut and sew’ principle.

  • Schneegass, Stefan; Amft, Oliver (Ed.): Smart Textiles - Fundamentals, Design, and Interaction. 1st Edition. Springer International Publishing, 2017. doi:10.1007/978-3-319-50124-6

    From a holistic perspective, this handbook explores the design, development and production of smart textiles and textile electronics, breaking with the traditional silo-structure of smart textile research and development.

    Leading experts from different domains, including textile production, electrical engineering, interaction design, and human-computer interaction (HCI), address production processes in their entirety by exploring important concepts and topics like textile manufacturing, sensor and actuator development for textiles, the integration of electronics into textiles, and the interaction with textiles. In addition, different application scenarios in which smart textiles play a key role are also presented.

    Smart Textiles would be an ideal resource for researchers, designers and academics who are interested in understanding the overall process in creating viable smart textiles.

  • Schneegass, Stefan; Schmidt, Albrecht; Pfeiffer, Max: Creating user interfaces with electrical muscle stimulation. In: interactions, Vol 24 (2016) No 1, p. 74-77. doi:10.1145/3019606

    Muscle movement is central to virtually everything we do, be it walking, writing, drawing, smiling, or singing. Even while we're standing still, our muscles are active, ensuring that we keep our balance. In a recent forum [1] we showed how electrical signals on the skin that reflect muscle activity can be measured. Here, we look at the reverse direction. We explain how muscles can be activated and how movements can be controlled with electrical signals.

  • Voit, Alexandra; Weber, Dominik; Schneegass, Stefan: Towards Notifications in the Era of the Internet of Things. In: IoT'16: Proceedings of the 6th International Conference on the Internet of Things. ACM, New York, USA 2016. doi:10.1145/2991561.2998472
  • Schneegass, Stefan; Voit, Alexandra: GestureSleeve: using touch sensitive fabrics for gestural input on the forearm for controlling smartwatches. In: Proceedings of the 2016 ACM International Symposium on Wearable Computers (ISWC '16). ACM, New York, USA 2016, p. 108-115. doi:10.1145/2971763.2971797

    Smartwatches provide quick and easy access to information. Due to their wearable nature, users can perceive the information while stationary or on the go. The main drawback of smartwatches, however, is their limited input possibilities. They use similar input methods as smartphones but suffer from a smaller form factor. To extend the input space of smartwatches, we present GestureSleeve, a sleeve made out of touch-enabled textile. It is capable of detecting different gestures such as stroke-based gestures or taps. With these gestures, the user can control various smartwatch applications. Exploring the performance of the GestureSleeve approach, we conducted a user study with a running application as use case. In this study, we show that input using the GestureSleeve outperforms touch input on the smartwatch. In the future, the GestureSleeve could be integrated into regular clothing and used for controlling various smart devices.
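
    The tap-versus-stroke distinction described above can be sketched in a few lines. This is an illustrative stand-in, not the paper's recognizer: the event format (timestamped touch samples) and the duration/travel thresholds are assumptions.

```python
import math

# Hypothetical touch trace: list of (timestamp_s, x, y) samples of one contact.
TAP_MAX_DURATION = 0.25   # assumed threshold in seconds
TAP_MAX_TRAVEL = 5.0      # assumed threshold in sensor units

def classify_gesture(samples):
    """Classify one touch trace as 'tap' or a directional stroke."""
    (t0, x0, y0), (t1, x1, y1) = samples[0], samples[-1]
    dx, dy = x1 - x0, y1 - y0
    travel = math.hypot(dx, dy)
    # Short contact with little movement counts as a tap.
    if t1 - t0 <= TAP_MAX_DURATION and travel <= TAP_MAX_TRAVEL:
        return "tap"
    # Otherwise report the dominant stroke direction.
    if abs(dx) >= abs(dy):
        return "stroke-right" if dx > 0 else "stroke-left"
    return "stroke-down" if dy > 0 else "stroke-up"
```

    A rightward swipe such as [(0.0, 0, 0), (0.3, 40, 2)] maps to "stroke-right" and could then be bound to a smartwatch action, e.g. starting a new lap in a running app.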

  • Schneegass, Stefan; Oualil, Youssef; Bulling, Andreas: SkullConduct: Biometric User Identification on Eyewear Computers Using Bone Conduction Through the Skull. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). ACM, New York, USA 2016, p. 1379-1384. doi:10.1145/2858036.2858152

    Secure user identification is important for the increasing number of eyewear computers but limited input capabilities pose significant usability challenges for established knowledge-based schemes, such as passwords or PINs. We present SkullConduct, a biometric system that uses bone conduction of sound through the user's skull as well as a microphone readily integrated into many of these devices, such as Google Glass. At the core of SkullConduct is a method to analyze the characteristic frequency response created by the user's skull using a combination of Mel Frequency Cepstral Coefficient (MFCC) features as well as a computationally light-weight 1NN classifier. We report on a controlled experiment with 10 participants that shows that this frequency response is person-specific and stable -- even when taking off and putting on the device multiple times -- and thus serves as a robust biometric. We show that our method can identify users with 97.0% accuracy and authenticate them with an equal error rate of 6.9%, thereby bringing biometric user identification to eyewear computers equipped with bone conduction technology.

    Video: https://www.youtube.com/watch?v=5yG_nWocXNY
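
    The enroll-then-match pipeline above can be sketched as follows. For brevity this stand-in uses plain DFT band energies rather than the MFCC features used in the paper, and the signals and user names are invented for illustration.

```python
import cmath
import math

def spectrum_features(signal, n_bins=8):
    # Magnitude spectrum via DFT, pooled into n_bins bands.
    # A simplified stand-in for the MFCC features used in SkullConduct.
    N = len(signal)
    mags = [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / N)
                    for t in range(N))) for k in range(N // 2)]
    band = len(mags) // n_bins
    return [sum(mags[i * band:(i + 1) * band]) / band for i in range(n_bins)]

def identify_1nn(probe, enrolled):
    # enrolled: dict user -> feature vector; return the nearest user (1NN).
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(enrolled, key=lambda u: dist(probe, enrolled[u]))
```

    At enrollment, each user's characteristic frequency response is stored as a feature vector; at login, the probe recording is matched to the nearest enrolled vector.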

  • Alt, Florian; Schneegass, Stefan; Sahami Shirazi, Alireza; Hassib, Mariam; Bulling, Andreas: Graphical Passwords in the Wild: Understanding How Users Choose Pictures and Passwords in Image-based Authentication Schemes. In: Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '15). ACM, New York, USA 2015, p. 316-322. doi:10.1145/2785830.2785882

    Common user authentication methods on smartphones, such as lock patterns, PINs, or passwords, impose a trade-off between security and password memorability. Image-based passwords were proposed as a secure and usable alternative. As of today, however, it remains unclear how such schemes are used in the wild. We present the first study to investigate how image-based passwords are used over long periods of time in the real world. Our analyses are based on data from 2318 unique devices collected over more than one year using a custom application released in the Android Play store. We present an in-depth analysis of what kind of images users select, how they define their passwords, and how secure these passwords are. Our findings provide valuable insights into real-world use of image-based passwords and inform the design of future graphical authentication schemes.

  • Pfeiffer, Max; Dünte, Tim; Schneegass, Stefan; Alt, Florian; Rohs, Michael: Cruise Control for Pedestrians: Controlling Walking Direction using Electrical Muscle Stimulation. In: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, USA 2015, p. 2505-2514. doi:10.1145/2702123.2702190

    Pedestrian navigation systems require users to perceive, interpret, and react to navigation information. This can tax cognition as navigation information competes with information from the real world. We propose actuated navigation, a new kind of pedestrian navigation in which the user does not need to attend to the navigation task at all. An actuation signal is directly sent to the human motor system to influence walking direction. To achieve this goal we stimulate the sartorius muscle using electrical muscle stimulation. The rotation occurs during the swing phase of the leg and can easily be counteracted. The user therefore stays in control. We discuss the properties of actuated navigation and present a lab study on identifying basic parameters of the technique as well as an outdoor study in a park. The results show that our approach changes a user's walking direction by about 16°/m on average and that the system can successfully steer users in a park with crowded areas, distractions, obstacles, and uneven ground.

    Video: https://www.youtube.com/watch?v=JSfnm_HoUv4

  • Mayer, Sven; Wolf, Katrin; Schneegass, Stefan; Henze, Niels: Modeling Distant Pointing for Compensating Systematic Displacements. In: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, USA 2015, p. 4165-4168. doi:10.1145/2702123.2702332

    Distant pointing at objects and persons is a highly expressive gesture that is widely used in human communication. Pointing is also used to control a range of interactive systems. To determine where a user is pointing, different ray-casting methods have been proposed. In this paper we assess how accurately humans point over distance and how to improve it. Participants pointed at projected targets from 2m and 3m while standing and sitting. Testing three common ray-casting methods, we found that even with the most accurate one the average error is 61.3cm. We found that all tested ray-casting methods are affected by systematic displacements. Therefore, we trained a polynomial to compensate for this displacement. We show that using a user-, pose-, and distance-independent quartic polynomial can reduce the average error by 37.3%.

    Video: https://www.youtube.com/watch?v=f8NOERrhWfA
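
    The compensation idea, fitting a quartic polynomial to the systematic displacement and subtracting it from the raw ray-cast estimate, can be sketched in one dimension. This is a toy version under assumed calibration data; the paper's model is fitted to 2D pointing data.

```python
def polyfit4(xs, ys):
    # Least-squares quartic fit via normal equations,
    # solved with Gaussian elimination and partial pivoting.
    n = 5  # coefficients c0..c4
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        coeffs[r] = (b[r] - sum(A[r][c] * coeffs[c]
                                for c in range(r + 1, n))) / A[r][r]
    return coeffs

def compensate(raw, coeffs):
    # Subtract the modeled systematic displacement from a raw estimate.
    return raw - sum(c * raw ** i for i, c in enumerate(coeffs))
```

    In practice one would fit `polyfit4` on calibration pairs of (raw estimate, observed displacement) and then apply `compensate` to every subsequent pointing estimate.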

  • Bader, Patrick; Schwind, Valentin; Henze, Niels; Schneegass, Stefan; Broy, Nora; Schmidt, Albrecht: Design and evaluation of a layered handheld 3d display with touch-sensitive front and back. In: NordiCHI '14: Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational. ACM, Helsinki, Finland 2014. doi:10.1145/2639189.2639257
  • Schneegass, Stefan; Steimle, Frank; Bulling, Andreas; Alt, Florian; Schmidt, Albrecht: SmudgeSafe: geometric image transformations for smudge-resistant user authentication. In: Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp '14). ACM, New York, USA 2014, p. 775-786. doi:10.1145/2632048.2636090

    Touch-enabled user interfaces have become ubiquitous, such as on ATMs or portable devices. At the same time, authentication using touch input is problematic, since finger smudge traces may allow attackers to reconstruct passwords. We present SmudgeSafe, an authentication system that uses random geometric image transformations, such as translation, rotation, scaling, shearing, and flipping, to increase the security of cued-recall graphical passwords. We describe the design space of these transformations and report on two user studies: A lab-based security study involving 20 participants in attacking user-defined passwords, using high quality pictures of real smudge traces captured on a mobile phone display; and an in-the-field usability study with 374 participants who generated more than 130,000 logins on a mobile phone implementation of SmudgeSafe. Results show that SmudgeSafe significantly increases security compared to authentication schemes based on PINs and lock patterns, and exhibits very high learnability, efficiency, and memorability.
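
    The core mechanism, re-mapping where the password points land on screen each login so that old smudge traces no longer coincide with them, can be sketched as follows. Only rotation, scaling, and flipping are shown here; the parameter ranges and point format are illustrative, not taken from the paper.

```python
import math
import random

def random_transform():
    # Draw one random geometric transformation (illustrative ranges).
    return {"angle_deg": random.uniform(0, 360),
            "scale": random.uniform(0.8, 1.2),
            "flip_x": random.random() < 0.5}

def transform_point(p, angle_deg, scale, flip_x):
    # Rotate, scale, and optionally flip a password point given in
    # image-centred coordinates, so smudge traces from a previous
    # login no longer line up with the password locations.
    x, y = p
    if flip_x:
        x = -x
    a = math.radians(angle_deg)
    return (scale * (x * math.cos(a) - y * math.sin(a)),
            scale * (x * math.sin(a) + y * math.cos(a)))
```

    At login time, the same transformation is applied to the displayed image and to the stored password points, so the user clicks the familiar image locations while the on-screen touch positions, and hence the smudges, differ from session to session.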

  • Sahami Shirazi, Alireza; Abdelrahman, Yomna; Henze, Niels; Schneegass, Stefan; Khalilbeigi, Mohammadreza; Schmidt, Albrecht: Exploiting thermal reflection for interactive systems. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14). ACM, New York, USA 2014, p. 3483-3492. doi:10.1145/2556288.2557208

    Thermal cameras have recently drawn the attention of HCI researchers as a new sensory system enabling novel interactive systems. They are robust to illumination changes and make it easy to separate human bodies from the image background. Far-infrared radiation, however, has another characteristic that distinguishes thermal cameras from their RGB or depth counterparts, namely thermal reflection. Common surfaces reflect thermal radiation differently than visual light and can be perfect thermal mirrors. In this paper, we show that through thermal reflection, thermal cameras can sense the space beyond their direct field-of-view. A thermal camera can sense areas besides and even behind its field-of-view through thermal reflection. We investigate how thermal reflection can increase the interaction space of projected surfaces using camera-projection systems. We moreover discuss the reflection characteristics of common surfaces in our vicinity in both the visual and thermal radiation bands. Using a proof-of-concept prototype, we demonstrate the increased interaction space for hand-held camera-projection system. Furthermore, we depict a number of promising application examples that can benefit from the thermal reflection characteristics of surfaces.

  • Häkkilä, Jonna R.; Posti, Maaret; Schneegass, Stefan; Alt, Florian; Gultekin, Kunter; Schmidt, Albrecht: Let me catch this!: experiencing interactive 3D cinema through collecting content with a mobile phone. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14). ACM, New York, USA 2014, p. 1011-1020. doi:10.1145/2556288.2557187

    The entertainment industry is going through a transformation, and technology development is changing how we can enjoy and interact with entertainment media content. In our work, we explore how to enable interaction with content in the context of 3D cinemas. This allows viewers to use their mobile phone to retrieve, for example, information on the artist of the soundtrack currently playing or a discount coupon for the watch the main actor is wearing. We are particularly interested in the user experience of the interactive 3D cinema concept, and how different interactive elements and interaction techniques are perceived. We report on the development of a prototype application utilizing smartphones and on an evaluation in a cinema context with 20 participants. Results emphasize that designing for interactive cinema should aim for holistic and positive user experiences. Interactive content should be tied together with the actual video content, but integrated into contexts where it does not conflict with the immersive experience of the movie.

  • Broy, Nora; Schneegass, Stefan; Alt, Florian; Schmidt, Albrecht: FrameBox and MirrorBox: tools and guidelines to support designers in prototyping interfaces for 3D displays. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14). ACM, New York, USA 2014, p. 2037-2046. doi:10.1145/2556288.2557183

    In this paper, we identify design guidelines for stereoscopic 3D (S3D) user interfaces (UIs) and present the MirrorBox and the FrameBox, two UI prototyping tools for S3D displays. As auto-stereoscopy becomes available for the mass market we believe the design of S3D UIs for devices, for example, mobile phones, public displays, or car dashboards, will rapidly gain importance. A benefit of such UIs is that they can group and structure information in a way that makes them easily perceivable for the user. For example, important information can be shown in front of less important information. This paper identifies core requirements for designing S3D UIs and derives concrete guidelines. The requirements also serve as a basis for two depth layout tools we built with the aim to overcome limitations of traditional prototyping when sketching S3D UIs. We evaluated the tools with usability experts and compared them to traditional paper prototyping.

  • Alt, Florian; Schneegass, Stefan; Auda, Jonas; Rzayev, Rufat; Broy, Nora: Using eye-tracking to support interaction with layered 3D interfaces on stereoscopic displays. In: IUI '14: Proceedings of the 19th international conference on Intelligent User Interfaces. ACM, Haifa, Israel 2014. doi:10.1145/2557500.2557518
  • Kubitza, Thomas; Pohl, Norman; Dingler, Tilman; Schneegass, Stefan; Weichel, Christian; Schmidt, Albrecht: Ingredients for a New Wave of Ubicomp Products. In: IEEE Pervasive Computing, Vol 12 (2013) No 3, p. 5-8. doi:10.1109/MPRV.2013.51

    The emergence of many new embedded computing platforms has lowered the hurdle for creating ubiquitous computing devices. Here, the authors highlight some of the newer platforms, communication technologies, sensors, actuators, and cloud-based development tools, which are creating new opportunities for ubiquitous computing.

  • Pascher, Max: Praxisbeispiel Digitalisierung konkret: Wenn der Stromzähler weiß, ob es Oma gut geht. Beschreibung des minimalinvasiven Frühwarnsystems „ZELIA“. In: Wege in die digitale Zukunft - Was bedeuten Smart Living, Big Data, Robotik & Co für die Sozialwirtschaft?, p. 137-148. Nomos Verlagsgesellschaft mbH & Co. KG.
  • Pascher, Max; Baumeister, Annalies; Klein, Barbara; Schneegass, Stefan; Gerken, Jens: Little Helper: A Multi-Robot System in Home Health Care Environments. In: Ecole Nationale de l'Aviation Civile [ENAC].

    Being able to live independently and self-determined in one's own home is a crucial factor for social participation. For people with severe physical impairments, such as tetraplegia, who cannot use their hands to manipulate materials or operate devices, life in their own home is only possible with assistance from others. The inability to operate buttons and other interfaces also means being unable to use most assistive technologies on their own. In this paper, we present an ethnographic field study with 15 tetraplegics to better understand their living environments and needs. Results show the potential for robotic solutions but emphasize the need to support activities of daily living (ADL), such as grabbing and manipulating objects or opening doors. Based on this, we propose Little Helper, a tele-operated pack of robot drones that collaborate in a divide-and-conquer paradigm to fulfill several tasks using a unique interaction method. The drones can be tele-operated through gaze-based selection as well as head motions and gestures to manipulate materials and operate applications.
