Not All Errors Are Created Equal: Examining Human-Algorithm System Performance For International Safeguards-Informed Visual Search Tasks

Year: 2021
Author(s): Zoe Gastelum, Laura Matzen, Mallory Stites, Breannan Howell (Sandia National Laboratories)
File Attachment: a222.pdf (1.03 MB)
Abstract
The International Atomic Energy Agency has expressed interest in deep learning models to support information processing for multiple safeguards verification activities, including surveillance data review and open source information monitoring. Modestly performing deep learning models have been shown to improve the performance of human-algorithm systems, and in some domains deep learning models have exceeded the performance of humans working alone. Yet even the best-performing humans and algorithms make errors. Sandia National Laboratories is currently investigating a breadth of variables that impact human-algorithm system performance, focusing on model errors and user trust. In this paper, we will present results from two experimental tracks examining how the error types and error frequencies of simulated deep learning models for a safeguards-informed object detection task impact user performance.
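For illustration only, the kind of manipulation described in the abstract, varying the error types and error frequencies of a simulated detector, could be sketched as below. The error categories (misses vs. false alarms), rates, and function names are hypothetical examples and are not drawn from the paper itself.

```python
import random


def simulate_detector_output(ground_truth, false_alarm_rate, miss_rate, rng=None):
    """Simulate a binary object detector with controlled error rates.

    ground_truth: list of bools, True if the target object is present in an image.
    false_alarm_rate: probability the simulated model flags an image with no target.
    miss_rate: probability the simulated model fails to flag an image with a target.
    Returns a list of bools representing the simulated model's detections.
    """
    rng = rng or random.Random()
    predictions = []
    for present in ground_truth:
        if present:
            # Target present: the model misses it with probability miss_rate.
            predictions.append(rng.random() >= miss_rate)
        else:
            # Target absent: the model false-alarms with probability false_alarm_rate.
            predictions.append(rng.random() < false_alarm_rate)
    return predictions


if __name__ == "__main__":
    rng = random.Random(42)
    truth = [rng.random() < 0.5 for _ in range(200)]  # targets present in ~half the trials
    # Two hypothetical error-profile conditions: miss-prone vs. false-alarm-prone.
    miss_prone = simulate_detector_output(truth, false_alarm_rate=0.05, miss_rate=0.30, rng=rng)
    fa_prone = simulate_detector_output(truth, false_alarm_rate=0.30, miss_rate=0.05, rng=rng)
    for name, preds in [("miss-prone", miss_prone), ("false-alarm-prone", fa_prone)]:
        misses = sum(t and not p for t, p in zip(truth, preds))
        false_alarms = sum(p and not t for t, p in zip(truth, preds))
        print(f"{name}: {misses} misses, {false_alarms} false alarms out of {len(truth)} trials")
```

In an experiment of this type, the simulated predictions would be shown to participants as algorithm cues during the visual search task, allowing error type and error frequency to be manipulated independently of the underlying imagery.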