Wide Activation for Efficient and Accurate Image Super-Resolution
Jiahui Yu, Yuchen Fan, Jianchao Yang, Ning Xu, Zhaowen Wang, Xinchao Wang, Thomas Huang

Abstract
In this report we demonstrate that, with the same parameters and computational budgets, models with wider features before ReLU activation have significantly better performance for single image super-resolution (SISR). The resulting SR residual network has a slim identity mapping pathway with wider (\(2\times\) to \(4\times\)) channels before activation in each residual block. To further widen activation (\(6\times\) to \(9\times\)) without computational overhead, we introduce linear low-rank convolution into SR networks and achieve even better accuracy-efficiency tradeoffs. In addition, compared with batch normalization or no normalization, we find training with weight normalization leads to better accuracy for deep super-resolution networks. Our proposed SR network \textit{WDSR} achieves better results on the large-scale DIV2K image super-resolution benchmark in terms of PSNR with the same or lower computational complexity. Based on WDSR, our method also won first place in the NTIRE 2018 Challenge on Single Image Super-Resolution in all three realistic tracks. Experiments and ablation studies support the importance of wide activation for image super-resolution. Code is released at: https://github.com/JiahuiYu/wdsr_ntire2018
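The two block designs described above can be illustrated with a short sketch. Below is a minimal PyTorch rendering of a wide-activation residual block (channels expanded \(2\times\) to \(4\times\) before ReLU) and a variant that uses linear low-rank convolution to reach \(6\times\) to \(9\times\) expansion at similar cost, with weight normalization on every convolution. The class names, default channel counts, and low-rank ratio are illustrative assumptions, not the authors' exact implementation; see the released code for the reference version.

```python
# Minimal sketch of wide-activation residual blocks, assuming a PyTorch
# implementation. Names and hyperparameters below are illustrative only.
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm


class WDSRABlock(nn.Module):
    """Residual block with wide (2x-4x) activation before ReLU."""

    def __init__(self, n_feats: int = 32, expansion: int = 4):
        super().__init__()
        wide = n_feats * expansion  # widen features only before the activation
        self.body = nn.Sequential(
            weight_norm(nn.Conv2d(n_feats, wide, 3, padding=1)),
            nn.ReLU(inplace=True),
            weight_norm(nn.Conv2d(wide, n_feats, 3, padding=1)),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Slim identity mapping pathway plus wide-activation body.
        return x + self.body(x)


class WDSRBBlock(nn.Module):
    """Residual block using linear low-rank convolution for 6x-9x activation."""

    def __init__(self, n_feats: int = 32, expansion: int = 6,
                 low_rank_ratio: float = 0.8):
        super().__init__()
        wide = n_feats * expansion
        low_rank = int(n_feats * low_rank_ratio)
        self.body = nn.Sequential(
            weight_norm(nn.Conv2d(n_feats, wide, 1)),                  # 1x1 expand
            nn.ReLU(inplace=True),
            weight_norm(nn.Conv2d(wide, low_rank, 1)),                 # 1x1 linear low-rank
            weight_norm(nn.Conv2d(low_rank, n_feats, 3, padding=1)),   # 3x3 spatial
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)
```

As a quick check, `WDSRABlock(n_feats=32, expansion=4)(torch.randn(1, 32, 48, 48))` returns a tensor of the same shape: the widening happens only inside the block body, so the identity pathway and the block's input/output width stay slim.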
Benchmarks
| Benchmark | Methodology | Normalized cPSNR |
|---|---|---|
| multi-frame-super-resolution-on-proba-v | WDSR-MFSR | 0.9412 |
| multi-frame-super-resolution-on-proba-v | 3DWDSR | 0.9463 |