Title: Research on Infrared and Visible Light Image Fusion Technology
Author: Zhang Lei (张蕾)
Degree Type: Doctoral
Defense Date: 2015-05
Degree-Granting Institution: University of Chinese Academy of Sciences
Supervisor: Jin Longxu (金龙旭)
Keywords: infrared image; visible light image; image fusion; multi-scale geometric analysis; structural similarity
Alternative Title: Research on Infrared and Visual Image Fusion
Degree Discipline: Optical Engineering
Abstract (Chinese): With the development of sensor technology, infrared and visible light imaging sensors have seen increasingly wide application in fields such as military operations and security surveillance. However, the imaging characteristics and limitations of these two sensor types make it difficult, in certain imaging environments, to accomplish a task with a single sensor. How to exploit the complementary information between infrared and visible light imaging sensors so as to effectively extract and integrate image features, highlight infrared targets, and enhance scene understanding has long been a research focus of infrared and visible light image fusion.

This thesis centers on multi-scale transform methods such as the non-subsampled Contourlet transform (NSCT) and the dual-tree complex wavelet transform (DTCWT), carries out an in-depth and systematic study of infrared and visible light image fusion algorithms, and, supported by extensive simulation experiments, proposes several image fusion algorithms.

First, the imaging characteristics of infrared and visible light sensors are discussed, existing image fusion algorithms are summarized and categorized, multi-scale geometric transform theory is studied, and the evaluation metrics and criteria for image fusion are introduced and examined in detail.

Second, the theory of the non-subsampled Contourlet transform is studied. The spatial correlation among the NSCT bandpass directional subband coefficients is examined, and an image fusion strategy based on this property is proposed. By jointly considering the correlation among the high-frequency coefficients after decomposition, a new spatial correlation function is defined to measure the edge and detail strength of the input images, and the infrared and visible light images are fused according to the rules derived from it. Experimental results show that the algorithm can effectively fuse the edge and detail information of the infrared and visible light images.

Third, to fully capture detail and regional feature information during fusion, an infrared and visible light image fusion algorithm combining the NSCT with image segmentation is proposed. The infrared image is first over-segmented; then, using the NSCT, different rules are applied to fuse the infrared and visible light images according to the properties of the segmented regions; finally, the fused image is obtained through the inverse NSCT. Built on the multi-scale geometric transform, the algorithm fuses each region as a whole, effectively preserving detail information while substantially increasing the amount of important information and regional structural information transferred from the source images. Comparative experiments show that the algorithm achieves better fusion performance.

Next, structural similarity theory is studied in depth, and an infrared and visible light image fusion framework combining region classification with multi-scale transforms is constructed on the basis of structural similarity. This framework further improves the fusion performance of multi-scale-transform-based infrared and visible light image fusion algorithms and enhances the observability and interpretability of the fused images.

Finally, to verify the effectiveness of the framework, the thesis proposes an infrared and visible light image fusion algorithm combining region classification with the DTCWT and another combining region classification with the NSCT, and carries out simulation experiments on both. The results show that both algorithms preserve the thermal target information and spatial structure information of the source images more effectively, yielding fused images that are clearer and more informative.

In summary, this thesis studies infrared and visible light image fusion technology and, in view of the characteristics of infrared and visible light images, proposes several image fusion algorithms. Simulation experiments confirm that the proposed algorithms all achieve good fusion performance.
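The abstract describes the general multi-scale fusion pipeline at a high level: decompose both registered source images, combine the low-frequency approximation, select high-frequency coefficients with an edge/detail activity measure, and invert the transform. The Python sketch below is only an illustration of that general pipeline under stated assumptions, not the thesis's algorithm: the NSCT has no standard Python implementation, so a discrete wavelet transform (PyWavelets) stands in for the multi-scale decomposition, and the window-energy "activity" measure is a generic stand-in for the spatial correlation function defined in the thesis.

import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def fuse_multiscale(ir, vis, wavelet="db2", levels=3, win=3):
    # Decompose both registered source images into one approximation band
    # and several detail subbands (DWT as a stand-in for the NSCT).
    c_ir = pywt.wavedec2(ir.astype(float), wavelet, level=levels)
    c_vi = pywt.wavedec2(vis.astype(float), wavelet, level=levels)

    # Low-frequency approximation: averaging keeps the overall scene.
    fused = [0.5 * (c_ir[0] + c_vi[0])]

    # High-frequency subbands: for each coefficient keep the source whose
    # local window energy (a crude edge/detail activity measure) is larger.
    for ir_band, vi_band in zip(c_ir[1:], c_vi[1:]):
        out_band = []
        for a, b in zip(ir_band, vi_band):
            act_a = uniform_filter(a * a, size=win)
            act_b = uniform_filter(b * b, size=win)
            out_band.append(np.where(act_a >= act_b, a, b))
        fused.append(tuple(out_band))

    # Invert the transform to obtain the fused image.
    return pywt.waverec2(fused, wavelet)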
Abstract (English): With the development of sensor technology, infrared imaging sensors and visible light imaging sensors have played an increasingly significant role in civilian and military applications. Owing to their imaging characteristics and limitations, it is difficult to complete certain tasks with a single imaging sensor in some environments. The aim of image fusion is to integrate complementary information from multiple images of the same scene, acquired by infrared and visible light imaging sensors, into a single composite that contains the typical features of the source images. The fused image is thus more informative and better suited to human visual perception. In this thesis, infrared and visual image fusion algorithms are studied systematically, and several fusion algorithms are proposed. They are built on multi-resolution analysis tools such as the Dual-Tree Complex Wavelet Transform (DTCWT) and the Non-subsampled Contourlet Transform (NSCT), and are validated by extensive simulation experiments.

Firstly, the imaging properties of the two sensor types, infrared and visible light, are introduced. The mainstream fusion techniques are reviewed and grouped into two categories, spatial-domain analysis and multi-scale analysis, and the quality evaluation methods for image fusion are discussed and summarized.

Secondly, the NSCT is a fully shift-invariant, multi-scale, and multi-directional expansion with a fast implementation. To date, the fusion rules of NSCT-based algorithms have underutilized the correlation among the bandpass directional subband coefficients. Considering this relationship, a new spatial correlation function is defined that measures the edge and texture content of an image, and a novel method for fusing infrared and visual images based on this correlation is presented. Experimental results show that the algorithm effectively integrates the edge and texture information of the infrared and visual images.

Thirdly, a novel fusion algorithm based on the NSCT and image segmentation is proposed. First, the watershed transform is applied to the gradient surface of the infrared image. Then, the infrared and visual images are decomposed by the NSCT, and different fusion rules are applied to different regions according to their characteristics. Compared with other multi-scale fusion algorithms, the proposed approach transfers more of the important information and region structure information from the source images.

Next, structural similarity theory is studied, and an infrared and visual image fusion framework based on region classification and multi-scale transforms is constructed. The framework integrates hot target information and spatial structure information efficiently, and the fused image is more informative and better suited to human visual perception.

Finally, to assess the performance of the designed framework, two algorithms are presented: an infrared and visual image fusion algorithm based on the NSCT and region classification, and one based on the DTCWT and region classification. The designed framework proves effective for infrared and visual image fusion: both algorithms capture the hot targets and the significant region structure information of the infrared and visual images more accurately, and the fused images are clearer and contain more information.

In summary, the thesis systematically studies infrared and visual image fusion technology, and several novel fusion algorithms are proposed. Simulation experiments confirm that the proposed algorithms achieve better fusion performance.
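Both abstracts describe the region-driven framework in two steps: over-segment the infrared image with a watershed transform on its gradient surface, then classify each region and fuse it as a whole under region-specific rules guided by structural similarity. The following Python sketch illustrates that idea under stated assumptions; the "hot target" test (region mean well above the global infrared mean), the SSIM agreement threshold, and the variance-based fallback are illustrative stand-ins for the thesis's region-classification rules, not the author's exact criteria.

import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed
from skimage.metrics import structural_similarity

def fuse_by_regions(ir, vis, hot_sigma=1.0, agree_thresh=0.6):
    ir = ir.astype(float)
    vis = vis.astype(float)

    # Step 1: over-segment by applying the watershed transform to the
    # gradient magnitude of the infrared image.
    labels = watershed(sobel(ir))

    # Per-pixel structural similarity between the sources; low values mark
    # places where the two modalities disagree.
    rng = max(ir.max() - ir.min(), vis.max() - vis.min(), 1e-6)
    _, ssim_map = structural_similarity(ir, vis, data_range=rng, full=True)

    fused = np.empty_like(ir)
    hot_threshold = ir.mean() + hot_sigma * ir.std()

    # Step 2: classify each region and fuse it as a whole.
    for lab in np.unique(labels):
        m = labels == lab
        if ir[m].mean() > hot_threshold:
            fused[m] = ir[m]                   # likely thermal target: keep IR
        elif ssim_map[m].mean() > agree_thresh:
            fused[m] = 0.5 * (ir[m] + vis[m])  # sources agree: average them
        else:
            # Sources disagree: keep the region with more structure (variance).
            fused[m] = ir[m] if ir[m].var() > vis[m].var() else vis[m]
    return fused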
Release Date: 2015-12-24
Content Type: Dissertation
Source URL: [http://ir.ciomp.ac.cn/handle/181722/48949]
Collection: Changchun Institute of Optics, Fine Mechanics and Physics_Knowledge Output of CIOMP, CAS
Recommended Citation:
GB/T 7714
Zhang Lei. Research on Infrared and Visible Light Image Fusion Technology[D]. University of Chinese Academy of Sciences. 2015.