Year
2021
File Attachment
a238.pdf (1.04 MB)
Abstract
Video surveillance is an important safeguards technical measure used by the International Atomic Energy Agency (IAEA). At large, complex nuclear facilities, multiple surveillance cameras may be deployed to monitor the movement of safeguards-relevant objects through the site. Surveillance review is time-consuming: the footage must be examined thoroughly to ensure that safeguards-relevant object movements match operator declarations. This paper describes the use of deep machine learning algorithms to automatically track objects across multiple cameras' fields of view, with the aim of increasing the efficiency of the review process. The fundamental challenge of this task, namely object re-identification (Re-ID), is to associate the same object across cameras despite differences in viewpoint, scene occlusion, and object scale. Object Re-ID in safeguards surveillance is even more challenging than classic person Re-ID or vehicle Re-ID, because multiple objects of the same category (e.g., spent fuel casks) may have a nearly identical appearance. One observation is that all of the safeguards objects we consider must be moved by vehicle or crane. We therefore use this spatial context, which provides the features of the transporting vehicle or crane, to improve the predictions of the Re-ID task, developing a two-stream convolutional neural network (CNN) model that takes both the object and its surrounding region as inputs. Moreover, our test videos are usually captured in scenes different from the training data, with a variety of differences in illumination and/or cluttered backgrounds. Such differences between the training and test data can dramatically degrade the performance of the Re-ID algorithm if the trained model is applied directly to the test videos. To tackle this problem, we propose an advanced domain adaptation technique to mitigate the gap between data captured in different scenes.
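The two-stream idea, reduced to its essence, is to fuse an embedding of the object crop with an embedding of its surrounding (vehicle/crane) region and to match objects on the fused descriptor. A minimal sketch of that fusion logic, using hypothetical stand-in feature extractors in place of the paper's learned CNN backbones:

```python
import math

def embed(region):
    # Stand-in for a learned CNN feature extractor: a fixed-length
    # vector of simple pixel statistics (hypothetical helper, not the
    # paper's actual backbone).
    n = len(region)
    mean = sum(region) / n
    var = sum((p - mean) ** 2 for p in region) / n
    return [mean, var, max(region), min(region)]

def fuse(object_pixels, context_pixels):
    # Two-stream fusion: concatenate the object-stream embedding with
    # the context-stream embedding of the surrounding region.
    return embed(object_pixels) + embed(context_pixels)

def cosine(a, b):
    # Cosine similarity between two descriptors, used for Re-ID ranking.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Two casks with identical appearance: an object-only descriptor cannot
# distinguish them (cosine similarity is exactly 1), but the transport
# context can.
cask = [0.8] * 16
truck_ctx = [0.1, 0.9, 0.1, 0.9]       # context seen with the query cask
truck_ctx2 = [0.15, 0.85, 0.15, 0.85]  # same truck, slightly different view
crane_ctx = [0.3, 0.3, 0.7, 0.7]       # a different transport context

query = fuse(cask, truck_ctx)
same_truck = fuse(cask, truck_ctx2)
other_crane = fuse(cask, crane_ctx)
```

On this toy data the fused descriptor ranks the candidate sharing the query's transport context above the one with a different context, which is the disambiguation the abstract attributes to spatial context.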
The inference results, both within and across videos, can be used in further analysis such as event/activity recognition and anomaly detection. In this paper, we discuss the neural network architecture of the Re-ID model and present the test results.
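The abstract does not specify which domain adaptation technique bridges the gap between training and test scenes; one common family of approaches aligns feature statistics between the source (training) and target (test) domains. A minimal illustrative sketch of that idea, standardizing each feature dimension within its own domain (an assumption for illustration, not the paper's actual method):

```python
def domain_standardize(features):
    # Standardize each feature dimension to zero mean and unit variance
    # using statistics computed WITHIN one domain, so that source- and
    # target-domain features land on a comparable scale (illustrative
    # stand-in for the unspecified adaptation technique).
    n, dims = len(features), len(features[0])
    means = [sum(f[d] for f in features) / n for d in range(dims)]
    stds = [max((sum((f[d] - means[d]) ** 2 for f in features) / n) ** 0.5, 1e-8)
            for d in range(dims)]
    return [[(f[d] - means[d]) / stds[d] for d in range(dims)] for f in features]

# Hypothetical target-scene features shifted by a constant illumination
# offset on the first dimension relative to the source scene.
source = [[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]]
target = [[11.0, 2.5], [12.0, 3.5], [13.0, 4.5]]

src_norm = domain_standardize(source)
tgt_norm = domain_standardize(target)
```

After per-domain standardization, the mean offset between the two domains vanishes, so a matcher trained on source-scene features degrades less when applied to the target scene.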