A Generative Adversarial Network-Based Low-Light Image Enhancement System Using Flash and No-Flash Images
Date
2022
Authors
Abstract
This study proposes a low-light image enhancement system based on a generative adversarial network (GAN). The system takes two images of the same scene as input, one captured without flash and one captured with flash, and generates an enhanced image that preserves the scene's real light-and-shadow distribution while retaining rich color detail. Its main aim is to improve the experience of taking pictures in low-light environments. When shooting with a digital camera in low light, it is common to raise the image sensor's sensitivity (ISO value) or lengthen the shutter time to maintain normal brightness, but these measures introduce noticeable noise or motion blur. Alternatively, photographers may use a flash to provide additional lighting; a flash yields images with faithful colors, but it can disrupt the distribution of light and shadow in the scene, for example by creating extra reflections or shadows, or by flattening the appearance of the subject. This study therefore combines the strengths of the no-flash and flash images to generate a more realistic result through a GAN. The proposed network takes the low-light image and its corresponding flash image as input and is based on Pix2PixHD with several modifications: the model architecture is adjusted, the loss function is replaced with the relativistic average least-squares loss, and a lightweight attention module, the convolutional block attention module (CBAM), is added to the generator. To train and evaluate the system, this study also builds a low-light image dataset, the CVIU Short exposure Flash Long exposure (SFL) dataset. It consists of 210 image triples, each containing a low-light image captured with a short exposure, a corresponding flash image captured with flash, and a ground-truth image captured with a long exposure. Experimental results show that the system achieves a peak signal-to-noise ratio (PSNR) of 22.5267 dB and a structural similarity index measure (SSIM) of 0.6662 on the test set of the SFL dataset.
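The loss-function change mentioned in the abstract refers to the standard relativistic average least-squares (RaLSGAN) objective, in which each sample's discriminator score is compared against the average score of the opposite class. The sketch below is an illustration of that general formulation only, not the thesis's actual code; PyTorch and the helper names `rals_d_loss` / `rals_g_loss` are assumptions.

```python
import torch

def rals_d_loss(real_logits: torch.Tensor, fake_logits: torch.Tensor) -> torch.Tensor:
    # Discriminator objective: real samples should score above the average
    # fake score, and fake samples below the average real score,
    # penalized with a least-squares term.
    real_rel = real_logits - fake_logits.mean()
    fake_rel = fake_logits - real_logits.mean()
    return ((real_rel - 1.0) ** 2).mean() + ((fake_rel + 1.0) ** 2).mean()

def rals_g_loss(real_logits: torch.Tensor, fake_logits: torch.Tensor) -> torch.Tensor:
    # Generator objective: the same relativistic terms with the targets swapped.
    real_rel = real_logits - fake_logits.mean()
    fake_rel = fake_logits - real_logits.mean()
    return ((real_rel + 1.0) ** 2).mean() + ((fake_rel - 1.0) ** 2).mean()
```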
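CBAM itself follows a standard recipe: channel attention computed from average- and max-pooled descriptors passed through a shared MLP, followed by spatial attention from a 7x7 convolution over channel-wise average and max maps. A minimal PyTorch sketch of that standard module is given below; where exactly it is inserted inside the Pix2PixHD generator is not specified by the abstract, and the class names here are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Channel attention: shared MLP over average- and max-pooled descriptors."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))
        return torch.sigmoid(avg + mx)

class SpatialAttention(nn.Module):
    """Spatial attention: 7x7 conv over channel-wise average and max maps."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """Convolutional block attention module: channel attention, then spatial attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel = ChannelAttention(channels, reduction)
        self.spatial = SpatialAttention()

    def forward(self, x):
        x = x * self.channel(x)
        return x * self.spatial(x)
```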
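PSNR and SSIM are the reported evaluation metrics, computed between a generated image and the long-exposure ground truth from the same SFL triple. The example below is a hedged sketch of such a comparison, assuming uint8 RGB arrays and the scikit-image metric functions; the thesis's actual evaluation code and conventions are not shown in the abstract.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(output: np.ndarray, ground_truth: np.ndarray) -> tuple[float, float]:
    """Score a generated image against the long-exposure ground truth.

    Both inputs are assumed to be uint8 RGB arrays of identical shape
    (a hypothetical convention for this sketch).
    """
    psnr = peak_signal_noise_ratio(ground_truth, output, data_range=255)
    ssim = structural_similarity(ground_truth, output, channel_axis=-1, data_range=255)
    return psnr, ssim
```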
Keywords
Generative Adversarial Network, Low-Light Image Enhancement, Flash Image Enhancement, Deep Learning, Attention Mechanism, Flash, Image Generation