Psychophysiology. 2022 Apr 2:e14058. doi: 10.1111/psyp.14058. Online ahead of print.
ABSTRACT
Raw data typically need to be processed before statistical analysis, and processing pipelines are often characterized by substantial heterogeneity. Here, we applied seven different skin conductance response (SCR) quantification approaches (trough-to-peak scoring by two different raters, script-based baseline correction, Ledalab, and four different models implemented in the software PsPM) to two fear conditioning data sets. The approaches included were selected on the basis of a systematic literature search, using fear conditioning research as a case example. Our approach can be viewed as a set of robustness analyses (i.e., the same data subjected to different processing pipelines) aiming to investigate whether and to what extent these different quantification approaches yield comparable results given the same data. To our knowledge, no formal framework for the evaluation of robustness analyses exists to date, but some criteria may be borrowed from a framework suggested for the evaluation of “replicability” in general. Our results from seven different SCR quantification approaches applied to two data sets with different paradigms suggest that no single approach consistently yields larger effect sizes or could be universally considered “best.” Yet at least some of the approaches employed show consistent effect sizes within each data set, indicating comparability. Finally, we highlight substantial heterogeneity also within most quantification approaches and discuss implications and potential remedies.
PMID:35365863 | DOI:10.1111/psyp.14058