RESQUE Profile for Max Mustermann (Demo Profile)

The “fingerprint” of how research is conducted, when only the best work is submitted.

Overview of analysis:

Name: Felix Schönbrodt
Analysis date: 2025-06-05
Academic age (years since PhD, minus child care etc.): 14 (PhD: 2010; subtracted years: 1)
# of analysed outputs: 10
ORCID: https://orcid.org/0000-0002-8282-3910
Gender: Male
Number of children: 2
Academic age bonus explanation: I stayed at home with my two kids and cared for my father, and I was sick for 3 years during which I could not work.
Year of habilitation: 2014
Note:

This is a preview showing some visual summaries of the RESQUE indicators. Not all indicators have been covered yet, and things might change substantially. No bonus points have been assigned to the theory indicators yet, nor to some other indicators of sample characteristics.

Some parts of this profile are purely descriptive. For example, they summarize the types of participant samples, or whether researchers predominantly work with psychophysiological data or rather focus on questionnaire studies.

Other parts contain an evaluative aspect: For example, research that is reproducible and allows independent auditing because it provides open data and scripts is, ceteris paribus, better than research that lacks these features. Research outputs meeting these objective quality criteria of methodological rigor can gain “bonus points”, which are summed across all provided research outputs and contribute to the Rigor Profile Overview.

We took care not to systematically disadvantage certain fields or research styles. Generally, the rigor score is a relative score, computed as a “percentage of maximal points” (POMP) score across all indicators that are applicable. For any indicator, one can choose the option “not applicable” if the indicator cannot, in principle, be attained by a research output. The points of such non-applicable indicators are removed from the maximum points and therefore do not lower the computed relative rigor score. However, to prevent gaming of this scheme, any “not applicable” claim needs to be justified. Only if the justification is accepted by the committee are the points removed from the maximum. With no or insufficient justification, in contrast, the indicator is set to “not available” (= 0 points) and the maximum points are not adjusted.
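To make this computation concrete, here is a minimal sketch of the POMP logic in Python. The data structure, point values, and function name are illustrative assumptions, not the actual RESQUER implementation:

```python
# A minimal sketch of the POMP logic (hypothetical data structure; the actual
# RESQUER implementation may differ).
def relative_rigor_score(indicators):
    """indicators: dicts with achieved 'points', 'max_points', and an 'applicable' flag."""
    applicable = [i for i in indicators if i["applicable"]]
    achieved = sum(i["points"] for i in applicable)
    maximum = sum(i["max_points"] for i in applicable)   # "not applicable" is removed here
    return 100 * achieved / maximum if maximum > 0 else float("nan")

# Example with made-up point values: open data with a codebook (2 of 3 points),
# preregistration justified as "not applicable" (removed from the maximum),
# open materials "not available" (0 of 2 points):
indicators = [
    {"points": 2, "max_points": 3, "applicable": True},
    {"points": 0, "max_points": 2, "applicable": False},
    {"points": 0, "max_points": 2, "applicable": True},
]
print(relative_rigor_score(indicators))   # 2 / 5 * 100 = 40.0
```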

Submitted research outputs

The 10 publications had the following methodological type:

Type of method # papers
Computational 5
Empirical Quantitative 4
Meta Analysis 1
Other Method 1
Theoretical 3

Team science in publications?

10 out of 10 submitted publications could be automatically retrieved with OpenAlex.

Team category Frequency %
Single authored 0 0%
Small team (<= 5 co-authors) 8 80%
Large team (6-15 co-authors) 0 0%
Big Team (>= 16 co-authors) 2 20%

Types of research data

Data type # of papers
Content Data 3
Behavioral 1
Other Type 1
Questionnaire Selfreport 1

⤷ Types of behavioral data

Data type # of papers
Performance 1

Types of samples

Type of population/sample and cultural background for the 3 papers with own new data collection (“Other” excluded):

Rare samples used in studies:

  • Inmates
    • Sample Size: not provided

Types of Research Designs

Contributorship profile (CRediT roles)

Based on 10 submitted publications, this is the self-reported contributorship profile:

Indicators of Research Transparency and Reproducibility

The relative rigor score (RRS) is computed as a “percentage of maximal points” (POMP) score of multiple indicators. The indicators are grouped into four categories: Open Data, Preregistration, Reproducible Code & Verification, and Open Materials. Indicators that are flagged as “not applicable” are removed from the maximum points and therefore do not lower the RRS.

The following charts are based on 9 scoreable publications.

What is a “good” Relative Rigor Score?

The RESQUE indicators cover current best practices, which often are not yet broadly adopted. Therefore the scores might look meager, even for very good papers. Tentative norm values for the overall rigor score, based on first evaluation studies, are:

  • 10-20% can be considered average
  • 30% is very good
  • >40% is excellent

The charts distinguish two aspects of openness:

  • Quantity of openness: How often did they do it? Each small square represents one publication in which the open practice (e.g., open data) was performed, not performed, or not applicable.
  • Quality of openness: How well did they do it? The colors of the bar below the squares are based on normative values of the Relative Rigor Score (i.e., “What quality of a practice could reasonably be expected in that field?”).

The general philosophy of RESQUE is: It does not matter so much what kind of research you do, but when you do it, you should do it to a high standard. The radar chart with the Relative Rigor Score helps you to see how many quality indicators have been fulfilled in multiple areas of methodological rigor.

  • The width of each sector corresponds to the maximal number of rigor points one could gain. If many indicators are flagged as “not applicable”, the maximal points are reduced and the sector becomes narrower.
  • The colored part of each sector shows the achieved rigor points. An entirely grey sector indicates that no rigor points could be awarded at all.
  • The quality indicators measure both the presence of a practice (e.g., is open data available?) and the quality of the practice (e.g., does it have a codebook? Does it have a persistent identifier?). Hence, even if the pie charts in the table above show the presence of a practice, a lack of quality indicators can lead to a low rigor score.

Scientific impact

BIP! Scholar (a non-commercial open-source service to facilitate fair researcher assessment) provides five impact classes based on norm values:

🚀 Top 0.01% · 🌟 Top 0.1% · ✨ Top 1% · Top 10% · Average (Bottom 90%)

This indicator reflects the impact/attention of an article in the research community at large. It is based on AttRank, a variation of PageRank (known from the Google search algorithm) that accounts for the temporal evolution of the citation network. This alleviates the bias against younger publications, which have not yet had the chance to accumulate many citations. AttRank models a researcher’s preference to read papers that have recently received a lot of attention, and its performance in predicting the future ranking of papers by impact (i.e., citations) has been evaluated and vetted. For more details, see the BIP! glossary and the references therein.
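The exact AttRank algorithm is documented by BIP!; as a rough, simplified illustration of the underlying idea only (a PageRank variant whose restart distribution favors recently published papers), consider the following sketch. All parameter names, default values, and the exponential recency weighting are illustrative assumptions, not the actual BIP! implementation:

```python
import numpy as np

def time_aware_pagerank(citations, years, current_year=2025, alpha=0.5,
                        damping=0.85, decay=0.3, iters=100):
    """citations[i]: indices of papers cited by paper i; years[i]: publication year."""
    n = len(citations)
    age = current_year - np.asarray(years, dtype=float)
    recency = np.exp(-decay * age)            # newer papers get larger weights
    recency /= recency.sum()                  # restart distribution favoring recent papers
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        new = np.zeros(n)
        for i, refs in enumerate(citations):
            if refs:                           # pass score along outgoing citations
                new[np.asarray(refs)] += damping * scores[i] / len(refs)
            else:                              # dangling paper: spread uniformly
                new += damping * scores[i] / n
        # restart: a mix of the uniform distribution and the recency distribution
        new += (1 - damping) * ((1 - alpha) / n + alpha * recency)
        scores = new
    return scores                              # higher score = more (recent) attention
```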

We categorized papers into levels of involvement, based on the degrees of contributorship:

Involvement level Definition # of publications
Very High (>=3 roles as *lead*) OR (>= 5 roles as (*lead* OR *equal*)) 5
High (1-2 roles as *lead*) OR (3-4 roles as *equal*) 4
Medium (1-2 roles as *equal*) OR (>= 5 roles as *support*) 1
Low All other combinations 0
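For illustration, the classification rules from the table above can be expressed as a small function (hypothetical function and argument names; the thresholds follow the stated definitions):

```python
# Sketch of the involvement-level rules; counts refer to CRediT roles per paper.
def involvement_level(n_lead, n_equal, n_support):
    if n_lead >= 3 or (n_lead + n_equal) >= 5:
        return "Very High"
    if 1 <= n_lead <= 2 or 3 <= n_equal <= 4:
        return "High"
    if 1 <= n_equal <= 2 or n_support >= 5:
        return "Medium"
    return "Low"

print(involvement_level(n_lead=2, n_equal=1, n_support=0))   # -> "High"
```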

Of the 10 submitted papers by Felix Schönbrodt, 10 were in the top 10% popularity class of all papers or better.

Internationality and Interdisciplinarity

The analysis is based only on the submitted publications (not the entire publication list) of the applicant. Publication and co-author data are retrieved from the OpenAlex database. Note that preprints are not indexed by OpenAlex and therefore do not contribute to this analysis.

  • Internationality: All co-authors are retrieved from OpenAlex with their current affiliation. The index is measured by Pielou’s Evenness Index (Pielou 1966) of the country codes of all co-authors. It considers the 10 most frequent country codes.
  • Interdisciplinarity is measured by the Evenness Index of the fields (as classified by OpenAlex) of the publications. It considers the 6 most frequent fields.

Both evenness indexes are computed as a normalized Shannon entropy and range from 0 (no diversity: everything falls into one category) to 1 (maximum diversity: all categories are equally represented).
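For illustration, a minimal sketch of such a normalized evenness computation, assuming normalization by the logarithm of the number of categories actually considered (the function name and the handling of the category cut-off are assumptions):

```python
import math
from collections import Counter

def pielou_evenness(labels, max_categories=10):
    """labels: e.g. co-author country codes; keeps only the most frequent categories."""
    counts = [c for _, c in Counter(labels).most_common(max_categories)]
    total = sum(counts)
    if total == 0 or len(counts) < 2:
        return 0.0                             # a single category means no diversity
    probs = [c / total for c in counts]
    shannon = -sum(p * math.log(p) for p in probs)
    return shannon / math.log(len(counts))     # J = H / H_max

# e.g. pielou_evenness(country_codes_of_coauthors) -> value between 0 and 1
```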

Internationality (scale: only within-country co-authors to broad co-author network from many countries)
Interdisciplinarity (scale: single discipline to many disciplines)
48 unique identifiable co-authors:
  • 50% from 7 international countries
  • 50% from the same country

3 primary fields:
  • Psychology (8)
  • Computer Science (1)
  • Decision Sciences (1)

Co-authors' country code # of co-authors
DE 24
US 10
NL 6
AT 2
AU 2
CH 2
IT 1
SE 1

The main subfields are (multiple categories per paper are possible):
Subfield # of papers
Experimental and Cognitive Psychology 6
Clinical Psychology 5
Applied Psychology 3
Social Psychology 3
Information Systems and Management 2
Sociology and Political Science 2

“Not applicable” justifications

Choosing “not applicable” indicates that an indicator cannot, in principle, be attained by a research output. To avoid bias against certain research fields, the points of such non-applicable indicators are removed from the maximum points and therefore do not lower the computed relative rigor score. However, to prevent gaming of this scheme, any “not applicable” claim needs to be justified. Only if the justification is accepted by the committee are the points removed. With no or insufficient justification, in contrast, the indicator should be set to “not available” (= 0 points) and the maximum points are not adjusted. (Note: The latter correction currently needs to be done manually in the JSON file.)

These are all claims of non-applicability from this applicant:

P_Preregistration

Title Year Why not applicable?
Stefan & Schönbrodt (2023): Big little lies: a compendium and simulation of p-hacking strategies 2023 Purely exploratory
Carter et al. (2019): Correcting for Bias in Psychology: A Comparison of Meta-Analytic Methods 2019 Exploratory
Schönbrodt & Wagenmakers (2017): Bayes factor design analysis: Planning for compelling evidence 2017 exploratory/method development
Schönbrodt & Perugini (2013): At what sample size do correlations stabilize? 2013 Purely Exploratory

P_OpenMaterials

Title Year Why not applicable?
Stefan & Schönbrodt (2023): Big little lies: a compendium and simulation of p-hacking strategies 2023 Everything is contained in the source code.
Nosek et al. (2022): Replicability, Robustness, and Reproducibility in Psychological Science 2022 No material was used, other than the data itself.
Schönbrodt & Wagenmakers (2017): Bayes factor design analysis: Planning for compelling evidence 2017 The scripts contain everything
Schönbrodt & Perugini (2013): At what sample size do correlations stabilize? 2013 The scripts contain everything

P_Sample_RepresentativenessRelevance

Title Year Why not applicable?
Nosek et al. (2022): Replicability, Robustness, and Reproducibility in Psychological Science 2022 no human data

P_Data_Open

Title Year Why not applicable?
Etzler et al. (2023): Machine Learning and Risk Assessment: Random Forest Does Not Outperform Logistic Regression in the Prediction of Sexual Recidivism 2023 (automatically added: For reused data sets of other researchers and for ‘no data’, we automatically set P_Data_Open to ‘notApplicable’).
Nosek et al. (2022): Replicability, Robustness, and Reproducibility in Psychological Science 2022 Proprietary data, could not share

“Not suitable” justifications

These are the justifications for why papers have been flagged as “generally not suitable for the RESQUE scheme”. These papers have been excluded from the rigor score computation, but not from the impact table.

The applicant either made no claims of non-suitability or provided no justification for them.

Preprocessing Notes
  • For 1 publication(s) with P_Data_Source = ‘Reuse of someone else’s existing data set’ or P_Data = ‘No’, P_Data_Open has been set to ‘notApplicable’ and a justification has been added.
  • 1 publication(s) were removed from the rigor score computations because no indicators were provided.

RESQUER package version: 0.8.1