Oliver Korn
Advisors
Wolfgang Mehringer (M. Sc.), Dr. Dario Zanca, Prof. Dr. Björn Eskofier
Duration
12 / 2022 – 06 / 2023
Abstract
The popularity of Virtual Reality (VR) and its range of applications have grown over the past years. Besides its most well-known use for entertainment (e.g., games), VR is currently used, for example, in psychological research [1], therapy [2], and education [3]. This rise of VR applications creates the need for a secure but, above all, usable authentication process within the virtual environment [4]. If users were forced to use conventional log-in mechanisms (e.g., a physical keyboard), their perceived presence within VR would suffer, since they would have to either remove the VR headset or rely on a temporary see-through mode to regain the otherwise blocked visual feedback [5]. Therefore, three different VR authentication approaches are currently proposed and researched: knowledge-based, biometric, and multi-modal authentication [6].
The most predominant one, knowledge-based authentication, requires a PIN or an alphanumeric password for access control. While this is a familiar log-in scheme for many users, it is relatively slow in VR, since input requires a less familiar interaction than a physical keyboard, and it is prone to shoulder-surfing attacks. The latter is because the VR headset fully blocks the user's sight, enabling an adversary to observe the entire procedure [6, 7].
Biometric authentication schemes for VR applications, in contrast, offer a high level of security and are, in particular, robust to the shoulder-surfing attacks mentioned above. In theory, this is achieved by utilizing individual user characteristics, measured by corresponding built-in or plug-in sensors and trackers (e.g., EEG, body movements, and eye-tracking data). However, user identification and validation based on biometrics alone require a high classification accuracy to prevent false positives and impersonation attacks. For this reason, numerous approaches combine more than one authentication technique, resulting in multi-modal authentication (also called "soft biometrics"). By using multiple modalities, such an approach can improve on the aforementioned accuracy and security. However, low acceptance of one partial scheme might negatively affect the combined one [6, 8, 9].
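To illustrate how multi-modal schemes combine partial results, the following is a minimal Python sketch of score-level fusion of two per-modality match scores. All function names, weights, and the threshold are hypothetical illustrations, not values from the cited literature.

```python
# Minimal sketch of score-level fusion for multi-modal authentication.
# Scores are assumed to lie in [0, 1]; the weights and threshold are
# hypothetical illustrations, not values from the cited literature.

def fuse_scores(knowledge_score: float, biometric_score: float,
                w_knowledge: float = 0.4, w_biometric: float = 0.6) -> float:
    """Weighted sum of the per-modality match scores."""
    return w_knowledge * knowledge_score + w_biometric * biometric_score

def authenticate(knowledge_score: float, biometric_score: float,
                 threshold: float = 0.7) -> bool:
    """Accept the user only if the fused score clears the threshold."""
    return fuse_scores(knowledge_score, biometric_score) >= threshold

# A correct PIN (1.0) paired with a weak biometric match (0.4) is still
# rejected, showing how one poorly performing partial scheme can drag
# down the combined decision.
print(authenticate(1.0, 0.4))  # False
```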
Gaze-based authentication is an example of such a soft-biometric scheme, combining biometric and knowledge-based elements. It has become feasible because eye trackers have improved significantly over the last years while evolving into increasingly affordable (consumer) technology. The approach authenticates the user based on the tracked eye gaze, e.g., measurements of eye saccades recorded by the eye tracker while the user observes a given stimulus. Such a stimulus can be the task of entering a password by looking at a PIN pad, which also solves the observational attack issue [6, 10, 11]. However, entering information by active fixation does not scale well with password length, because users frequently produce accidental wrong inputs (known as the "Midas touch problem") [12]. Consequently, much current research has gone into personal identification based on sampled eye-tracking data alone. Typical state-of-the-art approaches track the subject's personal areas of interest in a given picture, analyze fixations and scan paths for a given stimulus (e.g., a page of text), measure eye-gaze velocity while the subject follows a moving stimulus, or track oculomotor features [8]. These solutions have in common that the test subjects were given a specific task in advance and then had their ocular features tracked while performing it. While some of these approaches performed well for classification and identification, they all relied on active decisions based on the given stimuli. However, it has been shown that people's behavior changes depending on the time of day; hence, the scheduling of the user study and data collection has an impact on the achieved results [8, 13].
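As a concrete example of the fixation analysis mentioned above, the following is a minimal Python sketch of dispersion-based fixation detection (I-DT), a standard algorithm for segmenting raw gaze samples into fixations. The dispersion threshold (in normalized screen units) and the minimum window length are illustrative assumptions, not values from the cited studies.

```python
# Minimal sketch of dispersion-based fixation detection (I-DT).
# Gaze samples are assumed to be (x, y) tuples at a fixed sampling rate.

def _dispersion(window):
    """Spread of a gaze window: (max x - min x) + (max y - min y)."""
    xs, ys = zip(*window)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(gaze, max_dispersion=0.05, min_samples=10):
    """gaze: list of (x, y) samples; returns (start, end) index pairs."""
    fixations = []
    start = 0
    while start <= len(gaze) - min_samples:
        end = start + min_samples
        if _dispersion(gaze[start:end]) <= max_dispersion:
            # Grow the window while the samples stay tightly clustered.
            while end < len(gaze) and _dispersion(gaze[start:end + 1]) <= max_dispersion:
                end += 1
            fixations.append((start, end))
            start = end
        else:
            start += 1  # slide past the saccadic sample
    return fixations
```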
Another promising class of ocular features explored in this domain is pupillometric features, e.g., pupil-size fluctuations [8, 14]. Fluctuations of pupil size in response to a specific stimulus correlate with high-level cognitive processes, which is why they can be utilized for user classification and identification. However, many state-of-the-art methods are based on stationary eye trackers attached to conventional screens, which require controlled conditions to minimize stimulation of the eye by the surroundings and limit the presentable stimuli to a 2D space [8, 14, 15].
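To make these features concrete, below is a minimal Python sketch of common pupillometric preprocessing: baseline correction of a pupil-diameter trace followed by simple summary statistics. The function name, baseline window, and chosen statistics are illustrative assumptions, not the exact feature set of the cited studies or this thesis.

```python
import numpy as np

# Minimal sketch of pupil-size preprocessing and feature extraction.
# Baseline correction and the statistics below are common choices in
# pupillometry; the actual features used may differ.

def pupil_features(diameter: np.ndarray, baseline_samples: int = 120):
    """diameter: 1-D array of pupil diameters (mm) for one stimulus trial."""
    baseline = diameter[:baseline_samples].mean()
    corrected = diameter - baseline           # subtractive baseline correction
    return {
        "mean_change": corrected.mean(),      # average dilation/constriction
        "peak_dilation": corrected.max(),
        "fluctuation": corrected.std(),       # variability of pupil size
        "latency": int(np.argmax(corrected)), # sample index of peak response
    }
```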
Therefore, the proposed work builds on the existing research and exploits the benefits of a controlled 3D virtual environment for data acquisition, utilizing an eye tracker built into the VR headset. On the obtained data, machine learning methods (e.g., a neural network model) are used to investigate the potential of the desired classification and identification based on pupillary data alone.
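As an illustration of this direction, the sketch below shows one plausible neural network for mapping pupil-diameter sequences to user identities. The recurrent architecture, layer sizes, and number of enrolled users are assumptions for illustration only, not the model actually developed in the thesis.

```python
import torch
from torch import nn

# Minimal sketch of a neural network classifier for pupillary data.
# All hyperparameters are illustrative assumptions.

class PupilIdentifier(nn.Module):
    def __init__(self, n_users: int, hidden: int = 64):
        super().__init__()
        # The LSTM consumes a pupil-diameter time series, one scalar per step.
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_users)  # one logit per enrolled user

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, 1) pupil-diameter sequences
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])  # classify from the final hidden state

# Usage: a batch of 8 trials, 300 samples each, classified over 20 users.
model = PupilIdentifier(n_users=20)
logits = model(torch.randn(8, 300, 1))
predicted_user = logits.argmax(dim=1)
```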
References:
[1] S. Gradl, M. Wirth, N. Mächtlinger, R. Poguntke, A. Wonner, N. Rohleder, and B. M. Eskofier. The stroop room: A virtual reality-enhanced stroop test. 25th ACM Symposium on Virtual Reality Software and Technology, pages 1–12, 2019.
[2] P. M. Emmelkamp and K. Meyerbröker. Virtual reality therapy in mental health. Annual Review of Clinical Psychology, 17:495–519, 2021.
[3] T.-J. Lin and Y.-J. Lan. Language learning in virtual reality environments: past, present and future. Journal of Educational Technology & Society, 18(4):486–497, 2015.
[4] C. George et al. Seamless and secure vr: Adapting and evaluating established authentication systems for virtual reality. In The 2017 Network and Distributed System Security Symposium (NDSS), 2017.
[5] Mark McGill, Daniel Boland, Roderick Murray-Smith, and Stephen Brewster. A dose of reality: Overcoming usability challenges in vr head-mounted displays. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI ’15, pages 2143–2152. Association for Computing Machinery, 2015.
[6] John M. Jones, Reyhan Duezguen, Peter Mayer, Melanie Volkamer, and Sanchari Das. A literature review on virtual reality authentication. In Steven Furnell and Nathan Clarke, editors, Human Aspects of Information Security and Assurance, pages 189–198, Cham, 2021. Springer International Publishing.
[7] Z. Yu, H.-N. Liang, C. Fleming, and K. L. Man. An exploration of usable authentication mechanisms for virtual reality systems. In IEEE Asia Pacific Conference on Circuits and Systems (APCCAS), pages 458–460, 2016.
[8] Virginio Cantoni, Nahumi Nugrahaningsih, Marco Porta, and Haochen Wang. Chapter 9 – biometric authentication to access controlled areas through eye tracking. In Maria De Marsico, Michele Nappi, and Hugo Proença, editors, Human Recognition in Unconstrained Environments, pages 197–216. Academic Press, 2017.
[9] M. R. Miller, F. Herrera, H. Jun, et al. Personal identifiability of user tracking data during observation of 360-degree vr video. Scientific Reports, 10, 2020.
[10] Ceenu George, Daniel Buschek, Andrea Ngao, and Mohamed Khamis. Gazeroomlock: Using gaze and head-pose to improve the usability and observation resistance of 3d passwords in virtual reality. In Lucio Tommaso De Paolis and Patrick Bourdot, editors, Augmented Reality, Virtual Reality, and Computer Graphics, pages 61–81, Cham, 2020. Springer International Publishing.
[11] M. Khamis, C. Oechsner, F. Alt, and A. Bulling. Vrpursuits: interaction in virtual reality using smooth pursuit eye movements. In Proceedings of the 2018 International Conference on Advanced Visual Interfaces, pages 1–8, 2018.
[12] Pallavi Mohan, Wooi Boon Goh, Chi-Wing Fu, and Sai-Kit Yeung. Dualgaze: Addressing the midas touch problem in gaze mediated vr interaction. In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), pages 79–84, 2018.
[13] María Juliana Leone, Diego Fernandez Slezak, Diego Golombek, and Mariano Sigman. Time to decide: Diurnal variations on the speed and quality of human decisions. Cognition, 158:44–55, 2017.
[14] Nahumi Nugrahaningsih and Marco Porta. Pupil size as a biometric trait. In Virginio Cantoni, Dimo Dimov, and Massimo Tistarelli, editors, Biometric Authentication, pages 222–233, Cham, 2014. Springer International Publishing.
[15] S. Mathôt. Pupillometry: Psychology, physiology, and function. Journal of Cognition, 1(1):16, 1–23, 2018.
[16] Lim Jia Zheng, James Mountstephens, and Jason Teo. Four-class emotion classification in virtual reality using pupillometry. Journal of Big Data, 2020.
[17] Simon Eberz, Kasper Rasmussen, Vincent Lenders, and Ivan Martinovic. Preventing lunchtime attacks: Fighting insider threats with eye movement biometrics. In Network and Distributed System Security Symposium (NDSS), 2015.
[18] Pujitha Mannaru, Balakumar Balasingam, Krishna Pattipati, Ciara Sibley, and Joseph Coyne. Cognitive context detection using pupillary measurements. In Barbara D. Broome, Timothy P. Hanratty, David L. Hall, and James Llinas, editors, Next-Generation Analyst IV, volume 9851, page 98510Q. International Society for Optics and Photonics, SPIE, 2016.
[19] Silvia Makowski, Paul Prasse, David R. Reich, Daniel Krakowczyk, Lena A. Jäger, and Tobias Scheffer. DeepEyedentificationLive: Oculomotoric biometric identification and presentation-attack detection using deep neural networks. IEEE Transactions on Biometrics, Behavior, and Identity Science, 3(4):506–518, 2021.
[20] Roman Bednarik, Tomi Kinnunen, Andrei Mihaila, and Pasi Fränti. Eye-movements as a biometric. In Heikki Kalviainen, Jussi Parkkinen, and Arto Kaarna, editors, Image Analysis, pages 780–789. Springer Berlin Heidelberg, 2005.