Publications

Selected Publications

This page shows selected publications from recent years. For a complete list, please refer to Stefan Schneegass's Google Scholar or DBLP page.

  • Stefan Schneegass; Romina Poguntke; Tonja Machulla: Understanding the Impact of Information Representation on Willingness to Share Information. In: CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019). ACM, New York, NY, USA 2019. doi:10.1145/3290605.3300753
  • Mariam Hassib; Max Pfeiffer; Stefan Schneegass; Michael Rohs; Florian Alt: EmotionActuator: Embodied Emotional Feedback through Electroencephalography and Electrical Muscle Stimulation. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, USA 2017, p. 6133-6146. doi:10.1145/3025453.3025953

    The human body reveals emotional and bodily states through measurable signals, such as body language and electroencephalography. However, such manifestations are difficult to communicate to others remotely. We propose EmotionActuator, a proof-of-concept system to investigate the transmission of emotional states in which the recipient performs emotional gestures to understand and interpret the state of the sender. We call this kind of communication embodied emotional feedback, and present a prototype implementation. To realize our concept we chose four emotional states: amused, sad, angry, and neutral. We designed EmotionActuator through a series of studies to assess emotional classification via EEG, and created an EMS gesture set by comparing composed gestures from the literature to sign-language gestures. In a final study with the end-to-end prototype, interviews revealed that participants like implicit sharing of emotions and find the embodied output immersive, but want control over which emotions are shared and with whom. This work contributes a proof-of-concept system and a set of design recommendations for designing embodied emotional feedback systems.

    Video: https://www.youtube.com/watch?v=OgOZmsa8xs8

  • Yomna Abdelrahman; Mohamed Khamis; Stefan Schneegass; Florian Alt: Stay Cool! Understanding Thermal Attacks on Mobile-based User Authentication. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, USA 2017, p. 3751-3763. doi:10.1145/3025453.3025461

    PINs and patterns remain among the most widely used knowledge-based authentication schemes. As thermal cameras become ubiquitous and affordable, we foresee a new form of threat to user privacy on mobile devices. Thermal cameras allow performing thermal attacks, where heat traces resulting from authentication can be used to reconstruct passwords. In this work we investigate in detail the viability of exploiting thermal imaging to infer PINs and patterns on mobile devices. We present a study (N=18) in which we evaluated how properties of PINs and patterns influence their resistance to thermal attacks. We found that thermal attacks are indeed viable on mobile devices; overlapping patterns significantly decrease the success rate of thermal attacks from 100% to 16.67%, while PINs remain vulnerable (>72% success rate) even with duplicate digits. We conclude with recommendations for users and designers of authentication schemes on how to resist thermal attacks.

    Video: https://www.youtube.com/watch?v=FxOBAvI-YFI

  • Mariam Hassib; Stefan Schneegass; Philipp Eiglsperger; Niels Henze; Albrecht Schmidt; Florian Alt: EngageMeter: A System for Implicit Audience Engagement Sensing Using Electroencephalography. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, USA 2017, p. 5114-5119. doi:10.1145/3025453.3025669

    Obtaining information about audience engagement in presentations is a valuable asset for presenters in many domains. Prior literature mostly utilized explicit methods of collecting feedback, which induce distractions, add to the audience's workload, and do not provide objective information to presenters. We present EngageMeter - a system that allows fine-grained information on audience engagement to be obtained implicitly from multiple brain-computer interfaces (BCI) and fed back to presenters in real time and post hoc. Through an evaluation during an HCI conference (Naudience=11, Npresenters=3) we found that EngageMeter provides value to presenters (a) in real time, since it allows reacting to current engagement scores by changing tone or adding pauses, and (b) post hoc, since presenters can adjust their slides and embed extra elements. We discuss how EngageMeter can be used in collocated and distributed audience sensing, as well as how it can aid presenters in long-term use.

  • Florian Michahelles; Alexander Ilic; Kai Kunze; Mareike Kritzler; Stefan Schneegass: IoT 2016. In: IEEE Pervasive Computing, Vol 16 (2017) No 2, p. 87-89. doi:10.1109/MPRV.2017.25

    The 6th International Conference on the Internet of Things (IoT 2016) showed a clear departure from the research on data acquisition and sensor management presented at previous editions of this conference. Learn about this year's move toward more commercially applicable implementations and cross-domain applications.

  • Stefan Schneegass; Oliver Amft: Introduction to Smart Textiles. In: Stefan Schneegass, Oliver Amft (Ed.): Smart Textiles: Fundamentals, Design, and Interaction. Springer International Publishing, 2017, p. 1-15. doi:10.1007/978-3-319-50124-6_1

    This chapter introduces fundamental concepts related to wearable computing, smart textiles, and context awareness. The history of wearable computing is summarized to illustrate the current state of smart textile and garment research. Subsequently, the process to build smart textiles from fabric production, sensor and actuator integration, contacting and integration, as well as communication, is summarized with notes and links to relevant chapters of this book. The options and specific needs for evaluating smart textiles are described. The chapter concludes by highlighting current and future research and development challenges for smart textiles.

  • Jingyuan Cheng; Bo Zhou; Paul Lukowicz; Fernando Seoane; Matija Varga; Andreas Mehmann; Peter Chabrecek; Werner Gaschler; Karl Goenner; Hansjürgen Horter; Stefan Schneegass; Mariam Hassib; Albrecht Schmidt; Martin Freund; Rui Zhang; Oliver Amft: Textile Building Blocks: Toward Simple, Modularized, and Standardized Smart Textile. In: Stefan Schneegass, Oliver Amft (Ed.): Smart Textiles. Springer International Publishing, 2017, p. 303-331. doi:10.1007/978-3-319-50124-6_14

    Textiles are pervasive in our life, covering the human body and objects as well as serving in industrial applications. In everyday use, smart textiles become a promising medium for monitoring, information retrieval, and interaction. While there are many applications in sport, health care, and industry, state-of-the-art smart textiles are still found only in niche markets. To gain mass-market capability, we see the necessity of generalizing and modularizing smart textile production and application development, which on the one end lowers production cost and on the other end enables easy deployment. In this chapter, we demonstrate our initial effort in modularization. By devising universal sensing fabrics for conductive and non-conductive patches, smart textiles can be constructed from basic, reusable components. Using these fabric blocks, we present four types of sensing modalities: resistive pressure, capacitive, bioimpedance, and biopotential. In addition, we present a multi-channel textile–electronics interface and various applications built on top of the basic building blocks following a ‘cut and sew’ principle.

  • Stefan Schneegass, Oliver Amft (Ed.): Smart Textiles - Fundamentals, Design, and Interaction. 1st Edition. Springer International Publishing, 2017. doi:10.1007/978-3-319-50124-6

    From a holistic perspective, this handbook explores the design, development and production of smart textiles and textile electronics, breaking with the traditional silo-structure of smart textile research and development.

    Leading experts from different domains including textile production, electrical engineering, interaction design and human-computer interaction (HCI) address production processes in their entirety by exploring important concepts and topics like textile manufacturing, sensor and actuator development for textiles, the integration of electronics into textiles and the interaction with textiles. In addition, different application scenarios, where smart textiles play a key role, are presented too.

    Smart Textiles would be an ideal resource for researchers, designers, and academics who are interested in understanding the overall process of creating viable smart textiles.

  • Stefan Schneegass; Albrecht Schmidt; Max Pfeiffer: Creating user interfaces with electrical muscle stimulation. In: interactions, Vol 24 (2016) No 1, p. 74-77. doi:10.1145/3019606

    Muscle movement is central to virtually everything we do, be it walking, writing, drawing, smiling, or singing. Even while we're standing still, our muscles are active, ensuring that we keep our balance. In a recent forum [1] we showed how electrical signals on the skin that reflect muscle activity can be measured. Here, we look at the reverse direction. We explain how muscles can be activated and how movements can be controlled with electrical signals.

  • Stefan Schneegass; Alexandra Voit: GestureSleeve: using touch sensitive fabrics for gestural input on the forearm for controlling smartwatches. In: Proceedings of the 2016 ACM International Symposium on Wearable Computers (ISWC '16). ACM, New York, USA 2016, p. 108-115. doi:10.1145/2971763.2971797

    Smartwatches provide quick and easy access to information. Due to their wearable nature, users can perceive the information while stationary or on the go. The main drawback of smartwatches, however, is their limited input possibilities. They use input methods similar to smartphones but suffer from a smaller form factor. To extend the input space of smartwatches, we present GestureSleeve, a sleeve made out of touch-enabled textile. It is capable of detecting different gestures such as stroke-based gestures or taps. With these gestures, the user can control various smartwatch applications. Exploring the performance of the GestureSleeve approach, we conducted a user study with a running application as a use case. In this study, we show that input using the GestureSleeve outperforms touch input on the smartwatch. In the future, the GestureSleeve can be integrated into regular clothing and used for controlling various smart devices.

  • Stefan Schneegass; Youssef Oualil; Andreas Bulling: SkullConduct: Biometric User Identification on Eyewear Computers Using Bone Conduction Through the Skull. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). ACM, New York, USA 2016, p. 1379-1384. doi:10.1145/2858036.2858152

    Secure user identification is important for the increasing number of eyewear computers but limited input capabilities pose significant usability challenges for established knowledge-based schemes, such as passwords or PINs. We present SkullConduct, a biometric system that uses bone conduction of sound through the user's skull as well as a microphone readily integrated into many of these devices, such as Google Glass. At the core of SkullConduct is a method to analyze the characteristic frequency response created by the user's skull using a combination of Mel Frequency Cepstral Coefficient (MFCC) features as well as a computationally light-weight 1NN classifier. We report on a controlled experiment with 10 participants that shows that this frequency response is person-specific and stable -- even when taking off and putting on the device multiple times -- and thus serves as a robust biometric. We show that our method can identify users with 97.0% accuracy and authenticate them with an equal error rate of 6.9%, thereby bringing biometric user identification to eyewear computers equipped with bone conduction technology.

    Video: https://www.youtube.com/watch?v=5yG_nWocXNY
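    The abstract names a concrete pipeline: MFCC features fed into a 1NN classifier. As a rough illustration, the nearest-neighbour step could look like the sketch below, assuming feature vectors have already been extracted; the function name and the toy vectors are hypothetical, not taken from the paper's implementation.

    ```python
    import math

    def identify_user(probe, enrolled):
        """1NN classification: return the enrolled user whose stored feature
        vector (e.g. averaged MFCCs) lies closest to the probe vector."""
        return min(enrolled, key=lambda uid: math.dist(probe, enrolled[uid]))

    # Toy 3-dimensional feature vectors; real MFCC vectors would be longer.
    enrolled = {
        "user_a": [1.0, 0.2, -0.5],
        "user_b": [-0.3, 1.1, 0.7],
    }
    print(identify_user([0.9, 0.1, -0.4], enrolled))  # prints "user_a"
    ```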

  • Florian Alt; Stefan Schneegass; Alireza Sahami Shirazi; Mariam Hassib; Andreas Bulling: Graphical Passwords in the Wild: Understanding How Users Choose Pictures and Passwords in Image-based Authentication Schemes. In: Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '15). ACM, New York, USA 2015, p. 316-322. doi:10.1145/2785830.2785882

    Common user authentication methods on smartphones, such as lock patterns, PINs, or passwords, impose a trade-off between security and password memorability. Image-based passwords were proposed as a secure and usable alternative. As of today, however, it remains unclear how such schemes are used in the wild. We present the first study to investigate how image-based passwords are used over long periods of time in the real world. Our analyses are based on data from 2318 unique devices collected over more than one year using a custom application released in the Android Play store. We present an in-depth analysis of what kind of images users select, how they define their passwords, and how secure these passwords are. Our findings provide valuable insights into real-world use of image-based passwords and inform the design of future graphical authentication schemes.

  • Max Pfeiffer; Tim Dünte; Stefan Schneegass; Florian Alt; Michael Rohs: Cruise Control for Pedestrians: Controlling Walking Direction using Electrical Muscle Stimulation. In: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, USA 2015, p. 2505-2514. doi:10.1145/2702123.2702190

    Pedestrian navigation systems require users to perceive, interpret, and react to navigation information. This can tax cognition as navigation information competes with information from the real world. We propose actuated navigation, a new kind of pedestrian navigation in which the user does not need to attend to the navigation task at all. An actuation signal is directly sent to the human motor system to influence walking direction. To achieve this goal we stimulate the sartorius muscle using electrical muscle stimulation. The rotation occurs during the swing phase of the leg and can easily be counteracted. The user therefore stays in control. We discuss the properties of actuated navigation and present a lab study on identifying basic parameters of the technique as well as an outdoor study in a park. The results show that our approach changes a user's walking direction by about 16°/m on average and that the system can successfully steer users in a park with crowded areas, distractions, obstacles, and uneven ground.

    Video: https://www.youtube.com/watch?v=JSfnm_HoUv4

  • Sven Mayer; Katrin Wolf; Stefan Schneegass; Niels Henze: Modeling Distant Pointing for Compensating Systematic Displacements. In: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, USA 2015, p. 4165-4168. doi:10.1145/2702123.2702332

    Distant pointing at objects and persons is a highly expressive gesture that is widely used in human communication. Pointing is also used to control a range of interactive systems. To determine where a user is pointing, different ray casting methods have been proposed. In this paper we assess how accurately humans point over distance and how to improve this accuracy. Participants pointed at projected targets from 2m and 3m while standing and sitting. Testing three common ray casting methods, we found that even with the most accurate one the average error is 61.3cm. We found that all tested ray casting methods are affected by systematic displacements. Therefore, we trained a polynomial to compensate for this displacement. We show that using a user-, pose-, and distance-independent quartic polynomial can reduce the average error by 37.3%.

    Video: https://www.youtube.com/watch?v=f8NOERrhWfA
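    The compensation the abstract describes amounts to evaluating a fitted polynomial at the raw pointing estimate and subtracting the predicted displacement. A minimal one-dimensional sketch, with made-up coefficients (the paper fits its quartic to measured displacements):

    ```python
    def quartic(coeffs, x):
        """Evaluate a degree-4 polynomial via Horner's scheme.
        coeffs are ordered from the x^4 term down to the constant."""
        result = 0.0
        for c in coeffs:
            result = result * x + c
        return result

    def compensate(raw, coeffs):
        """Subtract the modelled systematic displacement from a raw estimate."""
        return raw - quartic(coeffs, raw)

    # Hypothetical coefficients: the modelled displacement is 0.5 * x here.
    coeffs = [0.0, 0.0, 0.0, 0.5, 0.0]
    print(compensate(2.0, coeffs))  # prints 1.0
    ```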

  • Stefan Schneegass; Frank Steimle; Andreas Bulling; Florian Alt; Albrecht Schmidt: SmudgeSafe: geometric image transformations for smudge-resistant user authentication. In: Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp '14). ACM, New York, USA 2014, p. 775-786. doi:10.1145/2632048.2636090

    Touch-enabled user interfaces have become ubiquitous, such as on ATMs or portable devices. At the same time, authentication using touch input is problematic, since finger smudge traces may allow attackers to reconstruct passwords. We present SmudgeSafe, an authentication system that uses random geometric image transformations, such as translation, rotation, scaling, shearing, and flipping, to increase the security of cued-recall graphical passwords. We describe the design space of these transformations and report on two user studies: A lab-based security study involving 20 participants in attacking user-defined passwords, using high quality pictures of real smudge traces captured on a mobile phone display; and an in-the-field usability study with 374 participants who generated more than 130,000 logins on a mobile phone implementation of SmudgeSafe. Results show that SmudgeSafe significantly increases security compared to authentication schemes based on PINs and lock patterns, and exhibits very high learnability, efficiency, and memorability.
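    The transformations the abstract lists (translation, rotation, scaling, shearing, flipping) are standard 2D geometric maps. A sketch of how such a transformation relocates a password point on screen, so that old smudge traces no longer line up with new touch locations; the function and parameters are illustrative, not SmudgeSafe's actual implementation:

    ```python
    import math

    def transform_point(x, y, kind, amount):
        """Apply one of the geometric transformations named in the abstract
        to a 2D point (illustrative only)."""
        if kind == "translate":
            return x + amount, y + amount
        if kind == "rotate":  # rotation by `amount` radians about the origin
            c, s = math.cos(amount), math.sin(amount)
            return c * x - s * y, s * x + c * y
        if kind == "scale":
            return x * amount, y * amount
        if kind == "shear":  # horizontal shear
            return x + amount * y, y
        if kind == "flip":  # horizontal flip; amount unused
            return -x, y
        raise ValueError(kind)

    # Rotating the image by 90 degrees moves the point (1, 0) to (0, 1).
    print(transform_point(1.0, 0.0, "rotate", math.pi / 2))
    ```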

  • Alireza Sahami Shirazi; Yomna Abdelrahman; Niels Henze; Stefan Schneegass; Mohammadreza Khalilbeigi; Albrecht Schmidt: Exploiting thermal reflection for interactive systems. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14). ACM, New York, USA 2014, p. 3483-3492. doi:10.1145/2556288.2557208

    Thermal cameras have recently drawn the attention of HCI researchers as a new sensory system enabling novel interactive systems. They are robust to illumination changes and make it easy to separate human bodies from the image background. Far-infrared radiation, however, has another characteristic that distinguishes thermal cameras from their RGB or depth counterparts, namely thermal reflection. Common surfaces reflect thermal radiation differently than visual light and can be perfect thermal mirrors. In this paper, we show that through thermal reflection, thermal cameras can sense the space beyond their direct field-of-view. A thermal camera can sense areas beside and even behind its field-of-view through thermal reflection. We investigate how thermal reflection can increase the interaction space of projected surfaces using camera-projection systems. We moreover discuss the reflection characteristics of common surfaces in our vicinity in both the visual and thermal radiation bands. Using a proof-of-concept prototype, we demonstrate the increased interaction space for a hand-held camera-projection system. Furthermore, we depict a number of promising application examples that can benefit from the thermal reflection characteristics of surfaces.

  • Jonna R. Häkkilä; Maaret Posti; Stefan Schneegass; Florian Alt; Kunter Gultekin; Albrecht Schmidt: Let me catch this!: experiencing interactive 3D cinema through collecting content with a mobile phone. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14). ACM, New York, USA 2014, p. 1011-1020. doi:10.1145/2556288.2557187

    The entertainment industry is going through a transformation, and technology development is changing how we can enjoy and interact with entertainment media content. In our work, we explore how to enable interaction with content in the context of 3D cinemas. This allows viewers to use their mobile phone to retrieve, for example, information on the artist of the soundtrack currently playing or a discount coupon for the watch the main actor is wearing. We are particularly interested in the user experience of the interactive 3D cinema concept, and how different interactive elements and interaction techniques are perceived. We report on the development of a prototype application utilizing smartphones and on an evaluation in a cinema context with 20 participants. Results emphasize that designing for interactive cinema should strive for holistic and positive user experiences. Interactive content should be tied together with the actual video content, but integrated into contexts where it does not conflict with the immersive experience of the movie.

  • Nora Broy; Stefan Schneegass; Florian Alt; Albrecht Schmidt: FrameBox and MirrorBox: tools and guidelines to support designers in prototyping interfaces for 3D displays. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14). ACM, New York, USA 2014, p. 2037-2046. doi:10.1145/2556288.2557183

    In this paper, we identify design guidelines for stereoscopic 3D (S3D) user interfaces (UIs) and present the MirrorBox and the FrameBox, two UI prototyping tools for S3D displays. As auto-stereoscopy becomes available for the mass market we believe the design of S3D UIs for devices, for example, mobile phones, public displays, or car dashboards, will rapidly gain importance. A benefit of such UIs is that they can group and structure information in a way that makes them easily perceivable for the user. For example, important information can be shown in front of less important information. This paper identifies core requirements for designing S3D UIs and derives concrete guidelines. The requirements also serve as a basis for two depth layout tools we built with the aim to overcome limitations of traditional prototyping when sketching S3D UIs. We evaluated the tools with usability experts and compared them to traditional paper prototyping.

  • Thomas Kubitza; Norman Pohl; Tilman Dingler; Stefan Schneegass; Christian Weichel; Albrecht Schmidt: Ingredients for a New Wave of Ubicomp Products. In: IEEE Pervasive Computing, Vol 12 (2013) No 3, p. 5-8. doi:10.1109/MPRV.2013.51

    The emergence of many new embedded computing platforms has lowered the hurdle for creating ubiquitous computing devices. Here, the authors highlight some of the newer platforms, communication technologies, sensors, actuators, and cloud-based development tools, which are creating new opportunities for ubiquitous computing.

  • Jonas Auda; Max Pascher; Stefan Schneegass: Around the (Virtual) World - Infinite Walking in Virtual Reality Using Electrical Muscle Stimulation. In: CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019). ACM, New York, NY, USA 2019. doi:10.1145/3290605.3300661

    Virtual worlds are infinite environments in which the user can move around freely. When shifting from controller-based movement to regular walking as an input, the limits of the real world also constrain the virtual world. Tackling this challenge, we propose using electrical muscle stimulation to limit the necessary real-world space and create an unlimited walking experience. We actuate the users' legs so that they deviate from their straight route and thus walk in circles in the real world while still walking straight in the virtual world. We report on a study comparing this approach to vision shift – the state-of-the-art approach – as well as combining both approaches. The results show that combining both approaches in particular yields high potential to create an infinite walking experience.