Abstract

As a useful tool for sufficient dimension reduction, kernel inverse regression (KIR) can effectively relieve the curse of dimensionality by finding linear combinations of the predictor that contain all the relevant information for regression. However, KIR is sensitive to outliers and fails when the predictor distribution is heavy-tailed. In this paper, we discuss robust variants of KIR that do not suffer from these limitations. The effectiveness of the proposed methods is demonstrated via simulation studies and an application to the automobile price data.
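To make the idea concrete, the following is a minimal sketch of standard (non-robust) kernel inverse regression: the inverse regression curve E[X | Y = y] is estimated by Gaussian-kernel smoothing on standardized predictors, and the leading eigenvectors of its sample covariance estimate the dimension-reduction directions. The simulated single-index model, the rule-of-thumb bandwidth, and all function and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-index model: y depends on X only through b'X.
n, p = 500, 6
b = np.zeros(p)
b[0] = 1.0                            # true direction (for illustration only)
X = rng.standard_normal((n, p))       # Gaussian predictors (light-tailed, no outliers)
y = X @ b + 0.2 * rng.standard_normal(n)

def kir_directions(X, y, h=None, d=1):
    """Sketch of kernel inverse regression: top-d directions of span{E[X|Y]}."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    # Standardize predictors: Z = Xc @ L with cov(Z) = I, where inv(S) = L L'.
    S = np.cov(Xc, rowvar=False)
    L = np.linalg.cholesky(np.linalg.inv(S))
    Z = Xc @ L
    if h is None:
        h = 1.06 * y.std() * n ** (-1 / 5)   # rule-of-thumb bandwidth (assumption)
    # Kernel-weighted means: row i of M estimates E[Z | Y = y_i].
    W = np.exp(-0.5 * ((y[:, None] - y[None, :]) / h) ** 2)
    W /= W.sum(axis=1, keepdims=True)
    M = W @ Z
    # Candidate matrix: sample covariance of the fitted inverse regression curve.
    V = M.T @ M / n
    _, vecs = np.linalg.eigh(V)
    B = vecs[:, ::-1][:, :d]          # top-d eigenvectors in the standardized scale
    return L @ B                      # map back to the original predictor scale

B_hat = kir_directions(X, y, d=1)
```

Because the smoothed curve E[X | Y] is built from plain kernel-weighted averages, a few extreme predictor values can dominate these means, which is exactly the sensitivity to outliers and heavy tails that motivates the robust variants discussed in the paper.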