Abstract

In many applications it is natural to use interval data to describe various kinds of uncertainty. This paper is concerned with a one-layer interval perceptron whose weights and outputs are intervals and whose inputs are real numbers. In the original learning method for this interval perceptron, an absolute value function is applied to the newly learned radii of the interval weights so as to force the radii to be positive. This approach is somewhat unnatural and, as our numerical experiments indicate, may cause oscillation in the learning procedure. In this paper, a modified learning method is proposed for the one-layer interval perceptron. Instead of applying the absolute value function, we replace the radius of each interval weight in the error function by a quadratic term. This simple trick adds no computational cost to the learning procedure, yet it brings three advantages. First, the radii of the interval weights are guaranteed to remain positive during learning without the help of the absolute value function. Second, the oscillation mentioned above is eliminated and the convergence of the learning procedure is improved, as our numerical experiments indicate. Finally, as a by-product, the convergence analysis of the learning procedure becomes straightforward, whereas such an analysis for the original method is difficult, if not impossible, due to the non-smoothness of the absolute value function involved.
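The idea of the quadratic reparameterization can be illustrated on a toy scalar problem. The abstract does not specify the paper's exact error function, so the sketch below assumes a simple quadratic error for a single radius; the names `step_abs`, `step_quad`, and the target value are hypothetical. The original-style update steps on the radius itself and applies an absolute value, while the modified-style update writes the radius as `r = v**2` and steps on `v`, so positivity holds by construction and the update rule stays smooth.

```python
# Toy illustration (not the paper's exact model): fit a scalar interval
# radius r >= 0 by gradient descent on the smooth error
#   E(r) = 0.5 * (r - target)**2.
target = 0.7

# Original-style update: gradient step on r, then force positivity with
# an absolute value, which is non-smooth at r = 0.
def step_abs(r, lr=0.5):
    grad = r - target                 # dE/dr
    return abs(r - lr * grad)

# Modified-style update: parameterize r = v**2 and step on v, so the
# learned radius r = v**2 is nonnegative automatically.
def step_quad(v, lr=0.5):
    r = v * v
    grad_v = (r - target) * 2.0 * v   # chain rule: dE/dv = dE/dr * dr/dv
    return v - lr * grad_v

r = 0.1            # initial radius for the absolute-value scheme
v = 0.1 ** 0.5     # initial v with v**2 = 0.1 for the quadratic scheme
for _ in range(200):
    r = step_abs(r)
    v = step_quad(v)

# In this smooth toy both schemes recover the target radius; the point
# of the reparameterization is that r = v**2 never needs clamping and
# the update rule is differentiable everywhere.
print(round(r, 4), round(v * v, 4))
```

On this deliberately simple problem both updates converge; the abstract's claim is that on the actual interval-perceptron error function the absolute value can induce oscillation, which the quadratic form avoids.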