Year
2008
Abstract
Path timeline models have been used in nuclear security analyses for over 20 years to calculate the Probability of Interruption, PI. Whether PI is calculated by hand, with a simple spreadsheet model such as Estimate of Adversary Sequence Interruption (EASI), or with a more complex computer tool such as Systematic Analysis of Vulnerability to Intrusion (SAVI), the basic concept is the same: determine the Critical Detection Point (CDP), the latest location along the path at which the adversary can be detected and still be interdicted by the response force, and then calculate the Probability of Interruption by accumulating the probabilities of detection down to the CDP. Where the adversary has a choice in how to attack parts of the physical security system, it is also assumed that the adversary minimizes detection down to the CDP and then minimizes delay. As we will demonstrate in this paper, this model for calculating PI assumes “worst-case” adversary behavior that, in many cases, rational adversaries will avoid. Our approach will start from a standard utility theory model of adversary behavior based on reasonable criteria for making trade-offs among detection, delay, response, and neutralization. Within this more general model, where the best decision maximizes expected utility, we will show under what conditions the current timeline model makes sense and how a variety of assumptions lead to other “optimal” adversary behaviors. A significant amount of work has been performed over the last 50 years on non-standard utility theory, in which metrics other than expected utility enter the decision. We will discuss how that body of work, along with newer work on the role of ambiguity, affects our timeline results.
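For concreteness, the following minimal Python sketch illustrates the standard timeline calculation summarized above: finding the CDP and accumulating detection probabilities down to it. The element names, the example numbers, and the simplifying assumptions (detection occurring at the start of each element, a fixed response force time, and a single communication probability folded in as one factor) are ours for illustration and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class PathElement:
    name: str        # e.g. "perimeter fence", "building door"
    p_detect: float  # probability the adversary is detected at this element
    delay: float     # adversary delay at this element, in seconds

def critical_detection_point(path, response_time):
    """Return the index of the CDP: the latest element at which detection
    still leaves the response force enough time to interdict, i.e. the
    remaining adversary delay from that element onward (assuming detection
    at the start of the element) exceeds the response force time."""
    cdp = -1
    for i in range(len(path)):
        remaining_delay = sum(e.delay for e in path[i:])
        if remaining_delay > response_time:
            cdp = i
    return cdp  # -1 means detection anywhere along the path is too late

def probability_of_interruption(path, response_time, p_comm=1.0):
    """PI = P(alarm communication) x P(at least one detection at or
    before the CDP), accumulated over the path elements up to the CDP."""
    cdp = critical_detection_point(path, response_time)
    if cdp < 0:
        return 0.0
    p_no_detection = 1.0
    for e in path[: cdp + 1]:
        p_no_detection *= 1.0 - e.p_detect
    return p_comm * (1.0 - p_no_detection)

if __name__ == "__main__":
    path = [
        PathElement("perimeter fence", 0.5, 10.0),
        PathElement("building door",   0.7, 90.0),
        PathElement("vault door",      0.9, 240.0),
    ]
    print(probability_of_interruption(path, response_time=120.0))
```

With these illustrative numbers the example prints PI = 0.985: a 120-second response force time makes the vault door the CDP, so all three detection opportunities contribute to interruption. The paper's argument concerns the adversary-behavior assumption layered on top of this calculation, namely that the adversary minimizes detection down to the CDP and delay thereafter.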