UNDERLYING CAUSES FOR IDENTIFICATION DEFICIENCIES OF RIID INSTRUMENTS

Year
2011
Author(s)
George Berzins - Applied Research Associates, Inc.
Randy Jones - Applied Research Associates, Inc.
Calvin Moss - Applied Research Associates, Inc.
L. Karch - DTRA/NTD
Richard Chiffelle - Applied Research Associates, Inc.
Abstract
During the past several years, we have evaluated radioisotope identifiers (RIIDs) for the Defense Threat Reduction Agency (DTRA) at several DoD and DOE facilities. The primary purpose of this multi-year program is to evaluate the detection and identification (ID) capabilities of COTS instruments through testing intended to simulate likely DoD missions. Each annual Cycle, requiring several months of testing, expands the test scenarios and increases the difficulty level above the previous Cycle. Each Cycle includes a series of 100–300 individual tests in which 12–20 RIIDs simultaneously accumulate data on target radioactive materials, which include industrial, medical, and special nuclear material (SNM). Performance is scored based on the instrument reports presented to the operator. For a specific trial, an instrument may be scored Correct (a fully correct ID of the target material) or receive a lesser or Incorrect score. An important component of the program, currently in Cycle 6, is providing feedback to RIID manufacturers to facilitate ID performance improvements. The concept of “Underlying Causes” (UCs) for incorrect identification or failed detection is the principal approach for communicating instrument deficiencies to manufacturers. A spectral analysis is performed for each RIID after each test sequence, ideally for every less-than-Correct test score, though time constraints usually limit the analysis to representative subsets of errors. The spectral examination, performed independently of the scoring, is often revealing in that groups of RIIDs appear to fail certain tests for similar reasons. We have collected most of the common reasons for failure into a set of 13 UCs. The five most prevalent are (1) peak-search threshold criteria, (2) calibration drifts and errors, (3) ineffective use of known information about secondary lines, (4) ineffective use of known relative-intensity information, and (5) instrument artifacts or operational difficulties. In the recently completed Cycle 5, these five UCs contributed to 1166 of the 1509 non-Correct RIID reports analyzed (roughly 77%). As in previous Cycles, these findings were discussed in detail with the manufacturers. One gratifying result has been noticeable performance improvement between Cycles for several of the evaluated instruments.
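
To illustrate the first and most common UC, the sketch below shows a generic peak-search significance test of the kind often applied to gamma spectra: a peak in a region of interest (ROI) is reported only if the net counts exceed a multiple k of the estimated counting uncertainty. This is a simplified illustration under assumed Poisson statistics, not the algorithm of any evaluated instrument; the function, parameter names, and synthetic spectrum are hypothetical.

import numpy as np

def peak_significant(spectrum, roi, k=3.0):
    """Toy peak-search threshold test (illustrative only, not a vendor algorithm).

    Declares a peak in the ROI only if the net counts exceed k times the
    estimated counting uncertainty. If k is set too high, weak secondary
    lines are missed; too low, and statistical noise is reported as peaks.
    """
    lo, hi = roi                                  # channel bounds of the ROI
    width = hi - lo
    gross = spectrum[lo:hi].sum()                 # counts inside the ROI
    # Estimate the continuum from side regions flanking the ROI.
    left = spectrum[max(0, lo - width):lo].sum()
    right = spectrum[hi:hi + width].sum()
    background = (left + right) / 2.0
    net = gross - background
    sigma = np.sqrt(gross + background / 2.0)     # propagated counting uncertainty of net
    return net > k * sigma

# Synthetic 1024-channel spectrum with a weak line near channel 400.
rng = np.random.default_rng(0)
spec = rng.poisson(lam=20, size=1024)
spec[395:405] += rng.poisson(lam=12, size=10)
print(peak_significant(spec, roi=(395, 405), k=3.0))
print(peak_significant(spec, roi=(395, 405), k=8.0))  # a stricter threshold may miss the same line

In this toy case, raising k from 3 to 8 can cause the weak line to go unreported, which is the kind of behavior that produces less-than-Correct scores attributed to peak-search threshold criteria.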