Julia Jorkowitz
Advisors
Robert Richer (M. Sc.), Luca Abel (M. Sc.), Dipl.-Ing. Stefan Grießhammer (Dept. of Palliative Medicine, University Hospital Erlangen), Dr. med. Tobias Steigleder (Dept. of Palliative Medicine, University Hospital Erlangen), Prof. Dr. Björn Eskofier
Duration
02/2024 – 06/2024
Abstract
In the pursuit of fast and efficient solutions to scientific challenges, open science is playing an increasingly important role. Enabling open access to research results not only increases reproducibility but also facilitates the verification of results and the creation of new scientific contributions [1]. Furthermore, open science makes research accessible to the public, allowing people with diverse perspectives to contribute new ideas and improvements more easily [2,3]. In this context, the objective comparison of algorithms is an important aspect of data science-related research, as it is crucial to identify the most effective and efficient solution for a given problem. Such comparisons require defined challenges (so-called “benchmarks”) consisting of standardized datasets as well as consistent evaluation strategies [4]. These allow researchers to objectively assess the performance of different algorithms and approaches against a common standard, leading to more effective and innovative solutions to today’s complex global challenges. Numerous white papers, forum discussions, and web posts emphasize the importance of benchmarking and the scientific community’s strong interest in it [5,6].
One application scenario in which systematic benchmarking is missing so far is the extraction of the pre-ejection period (PEP). PEP is defined as the time interval between ventricular depolarization and the onset of blood ejection and has been acknowledged as a promising indicator of the sympathetic nervous system’s influence on the heart [7,8]. This is an advantage over conventional heart rate variability (HRV) metrics, which are typically influenced by both sympathetic and parasympathetic activity. PEP is usually measured using an electrocardiogram (ECG) to extract the Q-wave onset, which marks the beginning of the PEP, and an impedance cardiogram (ICG) to extract the B-point, which marks its end [9]. Despite its potential, PEP is not widely used in research, for several reasons. First, the measurement setup is more complicated than extracting HRV from ECG recordings alone. Second, the automatic extraction of both points is prone to error; B-point detection in particular has proven difficult due to differences in the ICG waveform between individuals, and even within the same person [10]. Lastly, there is a lack of publicly available benchmark datasets on which algorithms for extracting these fiducial points can be validated. For instance, a search on PhysioNet for datasets containing PEP or ICG recordings yielded no results, while more than 150 ECG datasets with gold-standard labels can be found [11]. This underscores the lack of systematic evaluation possibilities in this field.
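The resulting PEP value follows directly from the two fiducial points and the sampling rate. The following minimal Python snippet illustrates this computation; the sampling rate and sample indices are made-up values chosen purely for illustration.

    # Illustration only: PEP from hypothetical fiducial point locations.
    fs_hz = 500                # assumed sampling rate of the synchronized ECG/ICG recording
    q_wave_onset = 12_340      # hypothetical Q-wave onset sample index (ECG)
    b_point = 12_395           # hypothetical B-point sample index (ICG)

    pep_ms = (b_point - q_wave_onset) / fs_hz * 1000
    print(f"PEP: {pep_ms:.1f} ms")  # PEP: 110.0 ms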
The goal of this bachelor’s thesis is to address the last two issues by benchmarking algorithms for PEP extraction from combined ECG and ICG signals. Building on results from a previous bachelor’s thesis [12], different methods for Q-wave onset detection in the ECG and B-point detection in the ICG, as well as multiple outlier correction algorithms, will be combined into pipelines using the tpcp Python library [13]. These pipelines will be systematically compared using hand-labeled references as a gold standard. As an extension of previous work, the evaluation will be performed on two different datasets collected using different study protocols and measurement systems.
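To make the pipeline structure more concrete, the following is a minimal sketch of how the three processing steps could be combined with tpcp. Only the Pipeline base class is taken from the tpcp API; all detector classes, attribute names (e.g., ecg, icg, sampling_rate_hz, pep_ms_), and values are hypothetical placeholders, not the algorithms actually benchmarked in this thesis.

    # Hedged sketch, not the thesis implementation: a tpcp Pipeline combining
    # Q-wave onset detection (ECG), B-point detection (ICG), and outlier correction.
    # The placeholder classes stand in for the actual algorithms under comparison.
    from types import SimpleNamespace

    import numpy as np
    from tpcp import Pipeline


    class PlaceholderQWaveDetector:
        """Stand-in for an ECG Q-wave onset detection algorithm."""

        def detect(self, ecg, sampling_rate_hz):
            # A real algorithm would derive the onsets from the ECG morphology.
            self.q_wave_onsets_ = np.array([1000])
            return self


    class PlaceholderBPointDetector:
        """Stand-in for an ICG B-point detection algorithm."""

        def detect(self, icg, sampling_rate_hz):
            self.b_points_ = np.array([1055])
            return self


    class PlaceholderOutlierCorrector:
        """Stand-in for a PEP outlier correction algorithm."""

        def correct(self, pep_ms):
            self.pep_corrected_ = pep_ms  # a real method would flag or correct implausible beats
            return self


    class PepExtractionPipeline(Pipeline):
        """Combine the three exchangeable processing steps into one pipeline."""

        def __init__(self, q_wave_algo, b_point_algo, outlier_algo):
            self.q_wave_algo = q_wave_algo
            self.b_point_algo = b_point_algo
            self.outlier_algo = outlier_algo

        def run(self, datapoint):
            # `datapoint` stands for one recording of a custom tpcp Dataset with
            # synchronized ECG and ICG signals and a common sampling rate.
            fs = datapoint.sampling_rate_hz
            q = self.q_wave_algo.detect(datapoint.ecg, fs).q_wave_onsets_
            b = self.b_point_algo.detect(datapoint.icg, fs).b_points_
            pep_ms = (b - q) / fs * 1000
            self.pep_ms_ = self.outlier_algo.correct(pep_ms).pep_corrected_
            return self


    # Minimal usage with a mock recording; a real setup would iterate over a
    # tpcp Dataset and compare pep_ms_ against the hand-labeled reference.
    pipeline = PepExtractionPipeline(
        q_wave_algo=PlaceholderQWaveDetector(),
        b_point_algo=PlaceholderBPointDetector(),
        outlier_algo=PlaceholderOutlierCorrector(),
    )
    datapoint = SimpleNamespace(ecg=np.zeros(5000), icg=np.zeros(5000), sampling_rate_hz=500)
    print(pipeline.run(datapoint).pep_ms_)  # [110.]

In the actual benchmark, different concrete implementations of each step would be swapped in via the pipeline parameters and scored against the hand-labeled references on both datasets.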
References
[1] V. Stodden, “The Scientific Method in Practice: Reproducibility in the Computational Sciences,” Social Science Research Network, Jan. 2010, doi: 10.2139/ssrn.1550193.
[2] C. L. Borgman, “The conundrum of sharing research data,” Journal of the Association for Information Science and Technology, vol. 63, no. 6, pp. 1059–1078, Apr. 2012, doi: 10.1002/asi.22634.
[3] C. Gentemann, “Why 2023 is the US Year of Open Science,” Tech. rep., Jan. 2023.
[4] V. Volz, D. Irawan, K. Van Der Blom, and B. Naujoks, “Benchmarking,” pp. 149–179, Jan. 2023, doi: 10.1007/978-3-031-25263-1_6.
[5] Stewart, “The Olympics of AI: Benchmarking Machine Learning Systems,” Towards Data Science, Sept. 2023.
[6] “Should benchmarkings be done at all? What is the point?,” https://scicomp.stackexchange.com/questions/34049/should-benchmarkings-be-done-at-all-what-is-the-point (accessed Mar. 12, 2024).
[7] D. B. Newlin and R. W. Levenson, “Pre‐ejection Period: Measuring Beta-adrenergic Influences Upon the Heart,” Psychophysiology, vol. 16, no. 6, pp. 546–552, Nov. 1979, doi: 10.1111/j.1469-8986.1979.tb01519.x.
[8] K. T. Larkin and A. L. Kasprowicz, “Validation of a Simple Method of Assessing Cardiac Preejection Period: A Potential Index of Sympathetic Nervous System Activity,” Perceptual and Motor Skills, vol. 63, no. 1, pp. 295–302, Aug. 1986, doi: 10.2466/pms.1986.63.1.295.
[9] M. Forouzanfar, F. C. Baker, I. M. Colrain, A. Goldstone, and M. De Zambotti, “Automatic analysis of pre‐ejection period during sleep using impedance cardiogram,” Psychophysiology, vol. 56, no. 7, Mar. 2019, doi: 10.1111/psyp.13355.
[10] V. Ermishkin, V. A. Kolesnikov, E. V. Lukoshkova, V. P. Mokh, R. S. Sonina, N. V. Dupik, and S. A. Boitsov, “Variable impedance cardiography waveforms: how to evaluate the preejection period more accurately,” Journal of Physics: Conference Series, vol. 407, Dec. 2012, doi: 10.1088/1742-6596/407/1/012016.
[11] “PhysioNet”, https://physionet.org/ (accessed Mar. 12, 2024).
[12] S. Stühler, “Investigation of the Pre-Ejection Period as a Marker for Sympathetic Activity during Acute Psychosocial Stress,” Bachelor’s Thesis in Medical Engineering, Friedrich-Alexander-Universität, May 2023.
[13] A. Küderle, R. Richer, R. C. Sîmpetru, and B. M. Eskofier, “tpcp: Tiny Pipelines for Complex Problems – A set of framework independent helpers for algorithms development and evaluation,” Journal of Open Source Software, vol. 8, no. 82, p. 4953, 2023, doi: 10.21105/joss.04953.