Testing a Test: Beyond Sensitivity and Specificity

In this lecture, Dr. Schmidt covers performance evaluation of diagnostic tests. Traditional performance measures such as sensitivity, specificity, and ROC curves are reviewed. Reasons for differences among diagnostic studies are examined, including real differences, threshold effects, sources of bias, and random variation. Shortcomings of the traditional approaches to test evaluation are also discussed, and alternative approaches such as diagnostic research (vs. test research), clinical trial evaluation, and cost-effectiveness evaluation are presented.
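To make the ROC and threshold ideas above concrete, here is a minimal sketch of how an ROC curve and its area under the curve (AUC) can be computed by sweeping a decision threshold and applying the trapezoidal rule. The scores and disease labels are invented for illustration (this simple sweep also assumes no tied scores):

```python
# Invented test scores and disease status for illustration only.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]  # higher = more abnormal
labels = [1,   1,   0,   1,   0,   1,   0,   0]    # 1 = diseased, 0 = healthy

pos = sum(labels)            # number of diseased patients
neg = len(labels) - pos      # number of healthy patients

# Sweep the cut-off down through the sorted scores; each cut-off
# yields one (false positive rate, true positive rate) ROC point.
points = [(0.0, 0.0)]
tp = fp = 0
for score, diseased in sorted(zip(scores, labels), reverse=True):
    if diseased:
        tp += 1
    else:
        fp += 1
    points.append((fp / neg, tp / pos))

# Trapezoidal integration over the ROC points gives the AUC.
auc = sum((x2 - x1) * (y1 + y2) / 2
          for (x1, y1), (x2, y2) in zip(points, points[1:]))
print(f"AUC = {auc:.3f}")
```

Moving the cut-off trades sensitivity against specificity, which is why two studies of the same test can report different accuracy figures purely because of threshold effects.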

Originally presented on January 29, 2015, in Salt Lake City, Utah.


Lecture Presenter

Robert Schmidt, MD, PhD, MBA

Director, Center for Effective Medical Testing
ARUP Laboratories
Assistant Professor of Pathology
University of Utah School of Medicine

Dr. Schmidt is the director of the Center for Effective Medical Testing at ARUP Laboratories and an assistant professor of pathology at the University of Utah School of Medicine. He received his medical degree from the University of Sydney and completed his residency training in clinical pathology at the University of Utah School of Medicine. He received an MS in biochemical engineering at the Massachusetts Institute of Technology, an MBA at the University of Chicago, a PhD in operations management at the University of Virginia, and an MMed in clinical epidemiology from the University of Sydney.

Prior to medical school, Dr. Schmidt was an assistant professor of operations management at the Carlson School of Management at the University of Minnesota and an associate professor of clinical operations management at the Marshall School of Business at the University of Southern California. Dr. Schmidt’s medical research focuses on diagnostic testing, applying his business background to complement his medical knowledge in evidence-based evaluation of diagnostic tests. His research includes comparative effectiveness, cost-effectiveness, and utilization analyses of diagnostic tests, as well as operations and technology management related to diagnostic testing.


Objectives

After this presentation, participants will be able to:

  • Calculate basic accuracy statistics such as sensitivity, specificity, likelihood ratios, and area under the curve (AUC).
  • Understand reasons for differences in diagnostic accuracy: real differences, bias, random variation, cut-offs.
  • Understand the difference between tests conducted under ideal conditions vs real conditions.
  • Understand the role of higher-level approaches to performance evaluation.
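As a worked illustration of the first objective, the basic accuracy statistics can be computed directly from a 2×2 table of test results against disease status. The counts below are made up for this example:

```python
# Hypothetical 2x2 table (counts invented for illustration).
TP, FN = 95, 5    # diseased patients: test positive / test negative
FP, TN = 10, 90   # healthy patients:  test positive / test negative

sensitivity = TP / (TP + FN)                   # true positive rate
specificity = TN / (TN + FP)                   # true negative rate
lr_positive = sensitivity / (1 - specificity)  # LR+: odds the test multiplies by when positive
lr_negative = (1 - sensitivity) / specificity  # LR-: odds multiplier when negative

print(f"Sensitivity: {sensitivity:.2f}")  # 0.95
print(f"Specificity: {specificity:.2f}")  # 0.90
print(f"LR+: {lr_positive:.1f}")          # 9.5
print(f"LR-: {lr_negative:.2f}")          # 0.06
```

Unlike sensitivity and specificity alone, the likelihood ratios can be applied directly to a patient's pre-test odds to obtain post-test odds, which is one reason they are emphasized alongside the traditional measures.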

Sponsored by:

University of Utah School of Medicine and ARUP Laboratories