Abstract

To address privacy issues in recommender systems, we propose a framework containing two perturbation methods for differentially private collaborative filtering, aiming to mitigate the threat of inference attacks against users. To conceal individual ratings while still providing useful predictions, we consider several representative algorithms for computing predicted scores and give concrete schemes for adding Laplace noise to each. The DPI (Differentially Private Input) method perturbs the original ratings before they are used, so it can be combined with any recommendation algorithm. By contrast, the DPM (Differentially Private Manner) method operates on the original ratings, perturbing intermediate measurements during execution of the algorithms and releasing only the predicted scores. Experimental results show that both methods provide useful predictions while guaranteeing differential privacy, suggesting that the framework is a feasible and competent solution for making private recommendations.
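As a minimal illustration of the DPI-style input perturbation described above (not the paper's exact procedure), the sketch below adds Laplace noise to each observed rating and clips the result back to the rating range. The rating bounds, the use of the rating range as sensitivity, and the single-budget allocation of epsilon are assumptions introduced here for illustration.

```python
import numpy as np

def dpi_perturb(ratings, epsilon, r_min=1.0, r_max=5.0, rng=None):
    """Input perturbation in the spirit of DPI: add Laplace noise to each
    rating, then clip back to the valid rating range.

    Assumptions (not from the paper): ratings lie in [r_min, r_max], the
    sensitivity of a single rating is (r_max - r_min), and the entire
    privacy budget epsilon is spent on this one release.
    """
    rng = np.random.default_rng() if rng is None else rng
    sensitivity = r_max - r_min
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=ratings.shape)
    return np.clip(ratings + noise, r_min, r_max)

# Example: perturb a small user-item rating matrix; the noisy matrix can
# then be passed to any downstream recommendation algorithm.
ratings = np.array([[4.0, 5.0, 1.0],
                    [3.0, 2.0, 5.0]])
private_ratings = dpi_perturb(ratings, epsilon=1.0)
```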