Multiple visual sensor fusion provides an effective way to improve the robustness and accuracy of video surveillance systems. Traditional video fusion methods fuse the source videos frame by frame using static image fusion methods, without considering information in the temporal dimension, so the temporal information cannot be fully utilized in the fusion procedure. To address this problem, a visible and infrared video fusion method based on the uniform discrete curvelet transform (UDCT) and spatial-temporal information is proposed. The source videos are decomposed using the UDCT, and a set of fusion rules based on local spatial-temporal energy is designed for the decomposition coefficients. These rules consider both the coefficients of the current frame and the coefficients along the temporal dimension, i.e., those of adjacent frames. Experimental results demonstrate that the proposed method works well and outperforms the comparison methods in terms of temporal stability and consistency, as well as spatial-temporal information extraction.
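The fusion rule described above can be sketched in code. The following is a minimal illustration only, not the paper's implementation: the UDCT decomposition is omitted (plain NumPy arrays stand in for one subband's coefficients across frames), and the window sizes, the squared-coefficient energy definition, and the choose-max selection are all assumptions made for the sketch.

```python
import numpy as np

def local_st_energy(coeffs, t, s_win=1, t_win=1):
    """Local spatial-temporal energy of subband coefficients.

    `coeffs` has shape (T, H, W): one subband's coefficients over T frames
    (a stand-in for a UDCT subband). The energy at frame `t` is the sum of
    squared coefficients over the temporal neighbourhood [t - t_win, t + t_win]
    and a (2*s_win+1) x (2*s_win+1) spatial window around each pixel.
    """
    T, H, W = coeffs.shape
    t0, t1 = max(t - t_win, 0), min(t + t_win + 1, T)
    sq = (coeffs[t0:t1] ** 2).sum(axis=0)          # sum over adjacent frames
    padded = np.pad(sq, s_win, mode="edge")        # replicate borders
    energy = np.zeros_like(sq)
    for dy in range(2 * s_win + 1):                # spatial box sum
        for dx in range(2 * s_win + 1):
            energy += padded[dy:dy + H, dx:dx + W]
    return energy

def fuse_subband(c_vis, c_ir, t):
    """Choose-max rule: at each position, keep the coefficient from the
    source whose local spatial-temporal energy is larger."""
    e_vis = local_st_energy(c_vis, t)
    e_ir = local_st_energy(c_ir, t)
    return np.where(e_vis >= e_ir, c_vis[t], c_ir[t])
```

In this sketch, including the adjacent frames in the energy measure is what ties the selection to the temporal dimension: a coefficient that is consistently strong across neighbouring frames wins over one that is strong in a single frame only, which is the intuition behind the temporal stability the abstract reports.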