Abstract

Recently, single image super-resolution (SR) models based on deep convolutional neural networks (CNNs) have achieved significant advances in accuracy and speed. However, these models are still not efficient enough for the image SR task. First, we find that generic deep CNNs learn low-frequency features in all layers, which is redundant and slows training. Second, the rectified linear unit (ReLU) activates only the positive response of a neuron, while the negative response is also worth activating. In this paper, we propose a novel joint residual network (JRN) with three subnetworks: two shallow subnetworks learn the low-frequency information, and one deep subnetwork learns the high-frequency information. To activate the negative part of the neurons while preserving the sparsity of the activation function, we propose a paired-ReLUs activation scheme, in which one ReLU performs positive activation and the other performs negative activation. These two innovations lead to much faster training as well as a more efficient local structure. The proposed JRN achieves the same accuracy as a generic CNN with only 10.5% of the training iterations. Experiments on a wide range of images show that JRN is superior to state-of-the-art methods in both accuracy and computational efficiency.
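As a concrete sketch of the two ideas (the precise formulations appear in the method section; the additive combination of subnetworks and the concatenation of the paired activations below are our assumptions, and the symbols $S_1$, $S_2$, $D$, and $\phi$ are introduced here only for illustration):

$$\hat{y} = S_1(x) + S_2(x) + D(x), \qquad \phi(x) = \bigl(\max(0,\,x),\ \max(0,\,-x)\bigr),$$

where $S_1$ and $S_2$ denote the shallow low-frequency subnetworks, $D$ denotes the deep high-frequency subnetwork, and $\phi$ is the paired-ReLUs activation: its first component passes the positive response, its second component passes the (negated) negative response, and each component remains individually sparse.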