Team

Jonas Keppel, M.Sc.

Research Associate

Room:
SM 204
E-mail:
Office hours:
by appointment
Address:
Universität Duisburg-Essen
Fakultät für Informatik
Mensch-Computer Interaktion
Schützenbahn 70
45127 Essen

About:

Jonas Keppel is a research associate in the Human-Computer Interaction group at the Universität Duisburg-Essen. He completed his bachelor's degree in Mathematics at the Universität Duisburg-Essen, working as a student assistant during his studies and gaining teaching experience as a tutorial instructor and grader. He was also awarded the UDE's Deutschlandstipendium several times. Jonas likewise earned his master's degree in Technomathematics with Computer Science as his application subject at the Universität Duisburg-Essen; his master's thesis, a cooperation between mathematics and computer science, presented a visualization tool for defending against attacks on deep learning models. He currently conducts research in Human-Computer Interaction and contributes to the project "Erweiterte Gesundheitsintelligenz für persönliche Verhaltensstrategien im Alltag" (Eghi).

Research areas:

Jonas Keppel on Google Scholar and ResearchGate

Publications:

  • Keppel, Jonas; Strauss, Marvin; Faltaous, Sarah; Liebers, Jonathan; Heger, Roman; Gruenefeld, Uwe; Schneegass, Stefan: Don't Forget to Disinfect: Understanding Technology-Supported Hand Disinfection Stations. In: Proc. ACM Hum.-Comput. Interact., Vol. 7 (2023). doi:10.1145/3604251

    The global COVID-19 pandemic created a constant need for hand disinfection. While it is still essential, disinfection use is declining with the decrease in perceived personal risk (e.g., as a result of vaccination). Thus, this work explores using different visual cues as reminders for hand disinfection. We investigated different public display designs using (1) paper-based visual cues only, or additionally (2) screen-based or (3) projection-based visual cues. To gain insights into these designs, we conducted semi-structured interviews with passersby (N=30). Our results show that the screen- and projection-based conditions were perceived as more engaging. Furthermore, we conclude that the disinfection process consists of four steps that can be supported: drawing attention to the disinfection station, supporting the (subconscious) understanding of the interaction, motivating hand disinfection, and performing the action itself. We conclude with design implications for technology-supported disinfection.

  • Keppel, Jonas; Gruenefeld, Uwe; Strauss, Marvin; Gonzalez, Luis Ignacio Lopera; Amft, Oliver; Schneegass, Stefan: Reflecting on Approaches to Monitor User's Dietary Intake. MobileHCI 2022, Vancouver, Canada, 2022.

    Monitoring dietary intake is essential to providing user feedback and achieving a healthier lifestyle. In the past, different approaches for monitoring dietary behavior have been proposed. In this position paper, we first present an overview of the state-of-the-art techniques grouped by image- and sensor-based approaches. After that, we introduce a case study in which we present a Wizard-of-Oz approach as an alternative and non-automatic monitoring method.

  • Keppel, Jonas; Öztürk, Alper; Herbst, Jean-Luc; Lewin, Stefan: Artificial Conscience - Fight the Inner Couch Potato. MobileHCI 2022, Vancouver, Canada, 2022.

    The Artificial Conscience concept aims to improve the user's quality of life by giving recommendations for a healthier lifestyle and reacting to potentially harmful situations detected by the various sensors of the Huawei Eyewear. However, autonomous reactions to situations that pose an immediate danger to the user's health, as well as methods for habit formation and other supporting functions, represent only a subset of the possible design space. All functions of this concept are described and evaluated individually, both under the assumption of autonomous operation of the Huawei Eyewear and with the inclusion of other data sources and sensors (smartphone, smartwatch). In addition, an outlook is given on additional features for the Huawei Eyewear that could be implemented in future versions of the glasses.

  • Detjen, Henrik; Faltaous, Sarah; Keppel, Jonas; Prochazka, Marvin; Gruenefeld, Uwe; Sadeghian, Shadan; Schneegass, Stefan: Investigating the Influence of Gaze- and Context-Adaptive Head-up Displays on Take-Over Requests. In: ACM (Ed.): AutomotiveUI '22: Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. 2022. doi:10.1145/3543174.3546089

    In Level 3 automated vehicles, preparing drivers for take-over requests (TORs) on the head-up display (HUD) requires their repeated attention. Visually salient HUD elements can distract attention from potentially critical parts in a driving scene during a TOR. Further, attention is (a) meanwhile needed for non-driving-related activities and can (b) be over-requested. In this paper, we conduct a driving simulator study (N=12), varying required attention by HUD warning presence (absent vs. constant vs. TOR-only) across gaze-adaptivity (with vs. without) to fit warnings to the situation. We found that (1) drivers value visual support during TORs, (2) gaze-adaptive scene complexity reduction works but creates a benefit-neutralizing distraction for some, and (3) drivers perceive constant HUD warnings as annoying and distracting over time. Our findings highlight the need for (a) HUD adaptation based on user activities and potential TORs and (b) sparse use of warning cues in future HUD designs.

  • Faltaous, Sarah; Prochazka, Marvin; Auda, Jonas; Keppel, Jonas; Wittig, Nick; Gruenefeld, Uwe; Schneegass, Stefan: Give Weight to VR: Manipulating Users' Perception of Weight in Virtual Reality with Electric Muscle Stimulation. Association for Computing Machinery, New York, NY, USA, 2022. (ISBN 9781450396905) doi:10.1145/3543758.3547571

    Virtual Reality (VR) devices empower users to experience virtual worlds through rich visual and auditory sensations. However, believable haptic feedback that communicates the physical properties of virtual objects, such as their weight, is still unsolved in VR. The current trend towards hand tracking-based interactions, neglecting the typical controllers, further amplifies this problem. Hence, in this work, we investigate the combination of passive haptics and electric muscle stimulation to manipulate users’ perception of weight, and thus, simulate objects with different weights. In a laboratory user study, we investigate four differing electrode placements, stimulating different muscles, to determine which muscle results in the most potent perception of weight with the highest comfort. We found that actuating the biceps brachii or the triceps brachii muscles increased the weight perception of the users. Our findings lay the foundation for future investigations on weight perception in VR.

  • Keppel, Jonas; Liebers, Jonathan; Auda, Jonas; Gruenefeld, Uwe; Schneegass, Stefan: ExplAInable Pixels: Investigating One-Pixel Attacks on Deep Learning Models with Explainable Visualizations. In: Proceedings of the 21st International Conference on Mobile and Ubiquitous Multimedia. Association for Computing Machinery, New York, NY, USA, 2022, pp. 231-242. doi:10.1145/3568444.3568469

    Nowadays, deep learning models enable numerous safety-critical applications, such as biometric authentication, medical diagnosis support, and self-driving cars. However, previous studies have frequently demonstrated that these models are attackable through slight modifications of their inputs, so-called adversarial attacks. Hence, researchers proposed investigating examples of these attacks with explainable artificial intelligence to understand them better. In this line, we developed an expert tool to explore adversarial attacks and defenses against them. To demonstrate the capabilities of our visualization tool, we worked with the publicly available CIFAR-10 dataset and generated one-pixel attacks. After that, we conducted an online evaluation with 16 experts. We found that our tool is usable and practical, providing evidence that it can support understanding, explaining, and preventing adversarial examples.
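
    To illustrate the kind of attack the tool visualizes, here is a minimal sketch of a one-pixel attack driven by differential evolution, assuming CIFAR-10-sized 32x32 RGB inputs. The classifier (predict_proba) is a hypothetical stand-in, not the paper's model; only the general technique, perturbing a single pixel to minimize the model's confidence in the true class, follows the setup described in the abstract.

    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(0)

    # Stand-in classifier: a random linear softmax over flattened pixels.
    # Replace with the probability output of a real CIFAR-10 model.
    W = rng.normal(scale=0.01, size=(32 * 32 * 3, 10))

    def predict_proba(image):
        logits = image.reshape(-1) @ W
        e = np.exp(logits - logits.max())
        return e / e.sum()

    def apply_pixel(image, z):
        # z = (x, y, r, g, b): pixel coordinates in [0, 32), color in [0, 1].
        perturbed = image.copy()
        perturbed[int(z[1]), int(z[0])] = z[2:5]
        return perturbed

    def one_pixel_attack(image, true_label):
        # Differential evolution searches for the single pixel whose change
        # minimizes the model's confidence in the true class.
        objective = lambda z: predict_proba(apply_pixel(image, z))[true_label]
        bounds = [(0, 31.99), (0, 31.99), (0, 1), (0, 1), (0, 1)]
        result = differential_evolution(objective, bounds, maxiter=30,
                                        popsize=20, seed=0)
        return apply_pixel(image, result.x), result.fun

    image = rng.uniform(size=(32, 32, 3))         # placeholder for a CIFAR-10 image
    label = int(np.argmax(predict_proba(image)))  # class predicted before the attack
    adversarial, confidence = one_pixel_attack(image, label)
    print(f"Confidence in the original class after the attack: {confidence:.3f}")
    print("Prediction flipped:", int(np.argmax(predict_proba(adversarial))) != label)

    With a real model plugged in, the explainable-visualization side of the paper would attach at the objective function, e.g. by recording how the predicted class probabilities shift as the optimizer converges.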