Research on Single-Image Super-Resolution Based on CNN Lightweight Neural Networks
Submitted: 2021-09-26  Revised: 2021-10-25
Authors, affiliations, and postcodes:
YANG Xiaoqin*  Pujiang Institute, Nanjing Tech University, 211111
ZHU Yuquan  School of Computer Science and Communication Engineering, Jiangsu University
Abstract: To enhance a low-resolution image into a high-resolution one and thereby obtain a super-resolution (SR) image, this paper proposes lightweight neural networks (LNNs) with a mixed residual and dense connection structure to improve single-image super-resolution (SISR) performance. Two LNNs are constructed: an interlayer SR-LNN (SR-ILLNN) and a simplified SR-LNN (SR-SLNN). SR-ILLNN adopts a partial-convolution-based padding scheme to avoid the loss of boundary information, combines local and global skip connections to learn the residuals of the output feature maps between convolutional layers, and is trained on both low-resolution and high-resolution images. SR-SLNN reduces the network complexity of SR-ILLNN by removing its high-resolution (HR) feature layers and shared feature layers. Training images were extracted from the DIVerse 2K (DIV2K) image dataset, and SR accuracy and network complexity were evaluated. Experimental results show that, compared with conventional methods, SR-ILLNN and SR-SLNN significantly reduce the number of parameters, memory footprint, and computation time while maintaining similar image quality.
Keywords: convolutional neural network  lightweight neural network  single-image super-resolution  image enhancement
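The partial-convolution-based padding mentioned in the abstract can be illustrated with a minimal sketch: instead of letting zero-padding darken border outputs, each output is re-normalized by the fraction of valid (in-image) pixels under the kernel. This is a single-channel NumPy sketch under our own assumptions; the function name and details are illustrative, not taken from the paper.

```python
import numpy as np

def partial_conv2d(img, kernel):
    """Cross-correlate `img` with `kernel`, re-normalizing border
    outputs by (kernel area / number of valid pixels), so that
    zero-padded regions do not bias values near the image boundary.
    Sketch only: single channel, stride 1, odd-sized kernel assumed."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))            # zeros outside the image
    mask = np.pad(np.ones_like(img), ((ph, ph), (pw, pw)))  # 1 = valid pixel
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + kh, j:j + kw]
            valid = mask[i:i + kh, j:j + kw].sum()
            # at the border, fewer than kh*kw pixels are valid: rescale
            out[i, j] = (kernel * win).sum() * (kh * kw) / valid
    return out
```

With an averaging kernel on a constant image, the output stays constant everywhere, including at the corners, which plain zero-padding would darken; this is the boundary-information loss the abstract says SR-ILLNN avoids.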
 