Advisor: 葉家宏 (Yeh, Chia-Hung)
Author: 林少擎 (Lin, Shao-Ching)
Dates: 2023-12-08; 2022-08-22; 2023-12-08; 2022
URL: https://etds.lib.ntnu.edu.tw/thesis/detail/1fa6dd50982aa578b3f7cbc9fc61f339/
URL: http://rportal.lib.ntnu.edu.tw/handle/20.500.12235/120301

Abstract:
Unlike single-image super-resolution, video super-resolution (VSR) is a twofold task: it must restore fine details in each frame while preserving temporal (motion) consistency across frames. Although many approaches have been proposed for this task, over-smoothing caused by inaccurate motion information remains challenging. This thesis presents a VSR framework, coding-based video super-resolution (CBVR), to address this problem. CBVR directly uses the motion-compensated frames and residual frames obtained from the video decoder: two separate networks up-sample the motion-compensated frame and the residual, and a refinement module fuses the outputs of the two networks to produce high-quality video. The proposed method effectively avoids the blurry HR output frames that arise from mixing the values of multiple motion-compensated LR input frames, because the motion-compensated and residual frames taken from the video decoder carry more precise motion information than previous estimation methods, whether deep-learning-based or traditional, can obtain. Experimental results demonstrate that the proposed CBVR achieves state-of-the-art performance on the REDS and Vid4 benchmarks compared with existing video super-resolution approaches.

Keywords: Video Compression; Motion Estimation; Motion Compensation; Video Super-Resolution

Title: 基於編碼之隱性運動感知的影像超解析度 / Coding-based video super-resolution with implicit motion perception
Type: etd
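The pipeline described in the abstract can be sketched in a few lines. This is a hypothetical, simplified illustration only: the thesis uses two learned up-sampling networks and a learned refinement module, whereas here both branches are replaced by trivial nearest-neighbour up-sampling and the fusion by simple addition. The function names `upsample_nearest` and `cbvr_sketch` are assumptions introduced for illustration, not identifiers from the thesis.

```python
import numpy as np

def upsample_nearest(frame, scale):
    # Stand-in for a learned up-sampling branch: each pixel is repeated
    # `scale` times along both spatial axes (nearest-neighbour).
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

def cbvr_sketch(mc_frame, residual, scale=4):
    # The decoder supplies a motion-compensated frame and a residual frame.
    # Each is up-sampled by its own branch, and the two results are fused
    # (in the thesis, by a refinement network; here, by addition).
    up_mc = upsample_nearest(mc_frame, scale)
    up_res = upsample_nearest(residual, scale)
    return up_mc + up_res
```

The key design point carried over from the abstract is that the two decoder-side inputs are processed by separate operators before fusion, rather than first summing them into a decoded frame and super-resolving that single image.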