Data Fusion for a Vision-Radiological System: Calibration Algorithm Response to Sensor Location

Year
2016
Author(s)
Kelsey Stadnikia - University of Florida
Allan Martin - University of Florida
Abstract
In order to improve the ability to detect, locate, track, and identify nuclear/radiological threats, the University of Florida nuclear detection community has teamed up with the 3D vision community to collaborate on a low-cost data fusion system. The key is to develop an algorithm that fuses the data from multiple radiological and 3D vision sensors into a single system. The system under development at the University of Florida is being assessed with various types of radiological detectors and widely available visual sensors. In fall 2015, a series of experiments was conducted using two EJ-309 liquid organic scintillation detectors (one primary and one secondary) and a Microsoft Kinect v2 vision sensor. A LiDAR, a highly sensitive vision sensor of the kind used to generate data for self-driving cars, was also included to compare its capabilities with those of the Kinect. Each experiment consisted of 27 static measurements of a Cf-252 source arranged in a cube, with three different distances in each dimension. The Kinect and the primary detector remained at the origin for all experiments, while the location of the secondary detector was changed in each experiment. The LiDAR's location was chosen based on the setup of the detectors. The location dependence of the sensor response will be evaluated to determine whether the system can fully calibrate its own sensor locations for the data fusion analysis stream. The sensitivity of the calibration algorithm to the relative location of the radiological and vision sensors, as well as to the source location, is explored.
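The measurement geometry described above (27 static source positions forming a 3 x 3 x 3 cube around the origin-mounted Kinect and primary detector) can be sketched as a simple grid enumeration. The snippet below is an illustrative sketch only: the abstract does not give the actual per-axis distances, so the offset values here are hypothetical placeholders rather than the experimental geometry.

```python
# Sketch of the 27-point measurement grid described in the abstract.
# The per-axis offsets are assumed values, NOT the distances used in the experiments.
from itertools import product

# Hypothetical offsets (in meters) from the origin, where the Kinect and the
# primary EJ-309 detector were located; three distances in each dimension.
x_offsets = [1.0, 2.0, 3.0]
y_offsets = [1.0, 2.0, 3.0]
z_offsets = [0.5, 1.0, 1.5]

# 3 x 3 x 3 = 27 static measurement positions for the Cf-252 source.
source_positions = list(product(x_offsets, y_offsets, z_offsets))
assert len(source_positions) == 27

for i, (x, y, z) in enumerate(source_positions, start=1):
    print(f"Measurement {i:2d}: source at ({x:.1f}, {y:.1f}, {z:.1f}) m")
```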