Balancing validity and reliability as a function of sampling variability in forensic voice comparison
Bruce Xiao Wang, Vincent Hughes

Science & Justice, Journal Year: 2024, Volume and Issue: 64(6), P. 649 - 659

Published: Oct. 11, 2024

Language: English

Combining reproducibility and repeatability studies with applications in forensic science
Hina Arora, Naomi Kaplan‐Damary, Hal S. Stern

et al.

Law Probability and Risk, Journal Year: 2023, Volume and Issue: 22(1)

Published: Jan. 1, 2023

Abstract Studying the repeatability and reproducibility of decisions made during forensic examinations is important in order to better understand variation and establish confidence in procedures. For disciplines that rely on comparisons by trained examiners, such as those for latent prints, handwriting, and cartridge cases, it has been recommended that ‘black-box’ studies be used to estimate the reliability and validity of decisions. In a typical black-box study, examiners are asked to judge samples of evidence as they would in practice, and their decisions are recorded; the ground truth about the samples is known to the study designers. The design includes repeated assessments by different examiners and, additionally, a common subset of examiners who provide repeated judgements of the same samples. We demonstrate a statistical approach to analysing the data collected across these trials that offers the following advantages: i) we can make joint inference while utilizing both intra-examiner and inter-examiner data, and ii) we can account for examiner–sample interactions that may impact the decision-making process. The approach is first developed for continuous outcomes, where responses are recorded on an ordinal scale with many categories, and is next applied to binary outcomes, with results presented from two studies.
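
As a concrete picture of the kind of data such a design produces, the sketch below (Python; all sample sizes, variance values, and names are invented for illustration and are not the authors' model) simulates scores with examiner, sample, and examiner–sample interaction effects, then summarizes intra-examiner (repeatability) and inter-examiner (reproducibility) variation with crude method-of-moments estimates.

    # Illustrative sketch only: a generative model of the kind of black-box data the
    # abstract describes, not the authors' actual model. All values are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    n_examiners, n_samples, n_repeats = 30, 20, 2

    # Assumed variance components:
    #   examiner effect           -> inter-examiner (reproducibility) variation
    #   examiner x sample effect  -> the interaction the abstract says should be modelled
    #   residual                  -> intra-examiner (repeatability) variation
    sd_examiner, sd_sample, sd_interaction, sd_residual = 0.8, 1.0, 0.5, 0.6

    examiner_eff = rng.normal(0, sd_examiner, n_examiners)
    sample_eff = rng.normal(0, sd_sample, n_samples)
    interaction_eff = rng.normal(0, sd_interaction, (n_examiners, n_samples))

    # Each examiner scores each sample twice (the repeated-trial part of the design).
    scores = (examiner_eff[:, None, None]
              + sample_eff[None, :, None]
              + interaction_eff[:, :, None]
              + rng.normal(0, sd_residual, (n_examiners, n_samples, n_repeats)))

    # Crude method-of-moments summaries of the two reliability notions.
    repeatability_var = scores.var(axis=2, ddof=1).mean()                 # within examiner-sample cell
    reproducibility_var = scores.mean(axis=2).var(axis=0, ddof=1).mean()  # across examiners, per sample
    print(f"repeatability variance   ~ {repeatability_var:.2f}")
    print(f"reproducibility variance ~ {reproducibility_var:.2f}")

A joint analysis of the kind the abstract describes would fit these components together (for example, with a crossed random-effects model) rather than relying on such separate summaries.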

Language: English

Citations

2

More unjustified inferences from limited data in
Richard E. Gutierrez

Law Probability and Risk, Journal Year: 2024, Volume and Issue: 23(1)

Published: Jan. 1, 2024

Abstract In recent years, multiple scholars have criticized the design of studies exploring the accuracy of firearms examination methods. Rosenblum et al. extend those criticisms to the work of Guyll et al. on practitioner performance when comparing fired cartridge cases. But while they thoroughly dissect issues regarding equiprobability bias and positive predictive values in that study, they do not delve as deeply into other areas, such as variability in participant performance, as well as the sampling of participants and test samples, that further undercut the ability to generalize Guyll et al.’s results. This commentary extends what Rosenblum et al. began and explores how the low rates of error reported by Guyll et al. likely underestimate the potential for misidentifications in casework. Ultimately, given their convenience samples, the authors should not have gone beyond descriptive statistics to instead draw conclusive inferences and classify the method as “a highly valid forensic technique.”
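
To make the sampling-variability point concrete, the sketch below (Python; the error counts are hypothetical, not figures from the studies discussed) computes exact Clopper–Pearson intervals for an error rate. A handful of observed errors still leaves a wide range of plausible rates, and even that range assumes random sampling of examiners and test items, which a convenience sample does not provide.

    # Hypothetical illustration of uncertainty around a low observed error rate;
    # the counts below are invented and are not data from any cited study.
    from scipy.stats import beta

    def clopper_pearson(errors: int, trials: int, conf: float = 0.95):
        """Exact binomial confidence interval for an error rate."""
        alpha = 1 - conf
        lo = 0.0 if errors == 0 else beta.ppf(alpha / 2, errors, trials - errors + 1)
        hi = 1.0 if errors == trials else beta.ppf(1 - alpha / 2, errors + 1, trials - errors)
        return lo, hi

    for errors, trials in [(0, 100), (3, 300), (10, 1000)]:
        lo, hi = clopper_pearson(errors, trials)
        print(f"{errors}/{trials} errors -> 95% CI for the error rate: [{lo:.4f}, {hi:.4f}]")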

Language: English

Citations

0

Cross entropy and log likelihood ratio cost as performance measures for multi‐conclusion categorical outcomes scales
Eric M. Warren, John C. Handley, H. David Sheets

et al.

Journal of Forensic Sciences, Journal Year: 2024, Volume and Issue: unknown

Published: Dec. 10, 2024

Abstract The inconclusive category in forensic reporting is the appropriate response in many cases, but it poses challenges for estimating an “error rate”. We discuss the use of a class of information‐theoretic measures related to cross entropy as an alternative set of metrics that allows for performance evaluation of results presented using multi‐category scales. This paper shows how this class of metrics, and in particular the log likelihood ratio cost, which is already in use in the forensic methods and machine learning communities, can be readily adapted to the widely used multiple-conclusion scales, with Bayesian credible intervals on these measures estimated using numerical methods. An application to published test results is shown. It is demonstrated that, for these results, reducing the number of categories in proficiency tests from five or six to three increases the cross entropy, indicating that the higher number of categories was justified, as they increased the level of agreement with ground truth.
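
For reference, the log likelihood ratio cost mentioned in the abstract has a standard definition, sketched below in Python with invented likelihood ratios; the paper's adaptation of it to multi-conclusion categorical scales and the Bayesian credible intervals are not reproduced here.

    # Minimal sketch of the standard log likelihood ratio cost (Cllr);
    # the example likelihood ratios are invented.
    import numpy as np

    def cllr(lr_same_source, lr_diff_source):
        """Log likelihood ratio cost with base-2 logs; lower is better, 1.0 is uninformative."""
        lr_ss = np.asarray(lr_same_source, dtype=float)
        lr_ds = np.asarray(lr_diff_source, dtype=float)
        penalty_ss = np.log2(1.0 + 1.0 / lr_ss).mean()  # penalizes small LRs for same-source pairs
        penalty_ds = np.log2(1.0 + lr_ds).mean()        # penalizes large LRs for different-source pairs
        return 0.5 * (penalty_ss + penalty_ds)

    # A reasonably calibrated system yields Cllr well below 1.
    print(cllr(lr_same_source=[20, 8, 50, 3], lr_diff_source=[0.1, 0.02, 0.5, 0.2]))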

Language: English

Citations

0

Methodological problems in every black-box study of forensic firearm comparisons
Maria Cuellar, Susan VanderPlas, Amanda Luby

et al.

Law Probability and Risk, Journal Year: 2024, Volume and Issue: 23(1)

Published: Jan. 1, 2024

Abstract Reviews conducted by the National Academy of Sciences (2009) and the President’s Council of Advisors on Science and Technology (2016) concluded that the field of forensic firearm comparisons has not been demonstrated to be scientifically valid. Scientific validity requires adequately designed studies of examiner performance in terms of accuracy, repeatability, and reproducibility. Researchers have performed “black-box” studies with the goal of estimating these measures. As statisticians with expertise in experimental design, we performed a literature search of such studies to date and then evaluated the design and statistical analysis methods used in each study. Our conclusion is that all of the studies in our review have methodological flaws that are so grave they render the studies invalid, that is, incapable of establishing the scientific validity of firearms examination. Notably, error rates among examiners, both collectively and individually, remain unknown. Therefore, statements about the common origin of bullets or cartridge cases that are based on examination of “individual” characteristics do not have a scientific basis. We provide some recommendations for future studies.

Language: English

Citations

0
