
Theoretical vs. empirical discriminability: the application of ROC methods to eyewitness identification

Overview of attention for article published in Cognitive Research: Principles and Implications, March 2018

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • Good Attention Score compared to outputs of the same age (71st percentile)
  • Above-average Attention Score compared to outputs of the same age and source (62nd percentile)

Mentioned by

8 X users

Citations

41 Dimensions

Readers on

23 Mendeley
Title
Theoretical vs. empirical discriminability: the application of ROC methods to eyewitness identification
Published in
Cognitive Research: Principles and Implications, March 2018
DOI 10.1186/s41235-018-0093-8
Pubmed ID
Authors

John T. Wixted, Laura Mickes

Abstract

Receiver operating characteristic (ROC) analysis was introduced to the field of eyewitness identification 5 years ago. Since that time, it has been both influential and controversial, and the debate has raised an issue about measuring discriminability that is rarely considered. The issue concerns the distinction between empirical discriminability (measured by area under the ROC curve) vs. underlying/theoretical discriminability (measured by d' or variants of it). Under most circumstances, the two measures will agree about a difference between two conditions in terms of discriminability. However, it is possible for them to disagree, and that fact can lead to confusion about which condition actually yields higher discriminability. For example, if the two conditions have implications for real-world practice (e.g., a comparison of competing lineup formats), should a policymaker rely on the area-under-the-curve measure or the theory-based measure? Here, we illustrate the fact that a given empirical ROC yields as many underlying discriminability measures as there are theories that one is willing to take seriously. No matter which theory is correct, for practical purposes, the singular area-under-the-curve measure best identifies the diagnostically superior procedure. For that reason, area under the ROC curve informs policy in a way that underlying theoretical discriminability never can. At the same time, theoretical measures of discriminability are equally important, but for a different reason. Without an adequate theoretical understanding of the relevant task, the field will be in no position to enhance empirical discriminability.
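
To make the abstract's distinction concrete, the sketch below (not from the article; all counts are invented) computes empirical discriminability as the partial area under a confidence-based ROC and theoretical discriminability as d' under an equal-variance Gaussian signal-detection model, using Python with NumPy and SciPy.

    # Minimal illustrative sketch, not the authors' analysis; data are invented.
    # Counts of suspect IDs at three confidence levels (high -> low)
    # from 100 target-present and 100 target-absent lineups.
    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import trapezoid

    hits_by_conf = np.array([40, 25, 15])   # hypothetical hit counts
    fas_by_conf = np.array([5, 10, 15])     # hypothetical false-alarm counts
    n_present, n_absent = 100, 100

    # Cumulative hit and false-alarm rates give the empirical ROC points.
    hit_rates = np.cumsum(hits_by_conf) / n_present
    fa_rates = np.cumsum(fas_by_conf) / n_absent

    # Empirical discriminability: partial area under the ROC,
    # anchored at the origin and computed by the trapezoidal rule.
    roc_x = np.concatenate(([0.0], fa_rates))
    roc_y = np.concatenate(([0.0], hit_rates))
    partial_auc = trapezoid(roc_y, roc_x)

    # Theoretical discriminability: d' from the overall hit and false-alarm
    # rates, assuming equal-variance Gaussian memory-strength distributions.
    d_prime = norm.ppf(hit_rates[-1]) - norm.ppf(fa_rates[-1])

    print(f"partial AUC = {partial_auc:.3f}, d' = {d_prime:.2f}")

The area measure depends only on the observed ROC points, whereas d' changes if a different underlying model (for example, unequal variances) is assumed, which is the sense in which one empirical ROC can yield many theoretical discriminability values.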

X Demographics

The data shown below were collected from the profiles of the 8 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 23 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown       23    100%

Demographic breakdown

Readers by professional status                  Count    As %
Student > Bachelor                                  4     17%
Student > Ph. D. Student                            4     17%
Researcher                                          3     13%
Student > Doctoral Student                          2      9%
Professor > Associate Professor                     2      9%
Other                                               3     13%
Unknown                                             5     22%

Readers by discipline                           Count    As %
Psychology                                          8     35%
Biochemistry, Genetics and Molecular Biology        1      4%
Business, Management and Accounting                 1      4%
Philosophy                                          1      4%
Computer Science                                    1      4%
Other                                               3     13%
Unknown                                             8     35%
Attention Score in Context

This research output has an Altmetric Attention Score of 6. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 20 March 2018.
All research outputs: ranked #5,960,551 of 24,527,525 outputs
Outputs from Cognitive Research: Principles and Implications: ranked #171 of 353 outputs
Outputs of similar age: ranked #98,038 of 338,161 outputs
Outputs of similar age from Cognitive Research: Principles and Implications: ranked #4 of 8 outputs
Altmetric has tracked 24,527,525 research outputs across all sources so far. Compared to these, this one has done well and is in the 75th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 353 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 43.8. This one has received more attention than average, scoring higher than 51% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 338,161 tracked outputs that were published within six weeks on either side of this one in any source. This one has received more attention than average, scoring higher than 71% of its contemporaries.
We're also able to compare this research output to 8 others from the same source that were published within six weeks on either side of this one. This one has scored higher than 4 of them.
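
As a rough cross-check, and assuming the percentiles quoted above are simply one minus rank divided by total, rounded down (a convention this page does not state explicitly), the figures can be reproduced as follows.

    # Assumed convention: percentile = floor((1 - rank/total) * 100).
    rankings = {
        "All research outputs": (5_960_551, 24_527_525),
        "Outputs from this source": (171, 353),
        "Outputs of similar age": (98_038, 338_161),
    }
    for label, (rank, total) in rankings.items():
        percentile = int((1 - rank / total) * 100)
        print(f"{label}: higher than {percentile}% of outputs")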