Abstract

In this paper we address the problem of improving the recent milestone results on estimating the generalization capability of a randomized learning algorithm by means of Differential Privacy (DP). In particular, we derive new DP-based multiplicative Chernoff- and Bennett-type generalization bounds, which improve on the current state-of-the-art Hoeffding-type bound. We then prove that a randomized algorithm built on the Boltzmann distributions of Catoni (2007) [10], with a prior that depends on the data-generating distribution and a data-dependent posterior, is Differentially Private and exhibits better generalization properties than the Gibbs classifier associated with the same distributions; we illustrate this result with a simple example. Finally, we discuss the advantages of the Thresholdout procedure, one of the main results to emerge from DP theory, for Model Selection and Error Estimation purposes, and we derive a new result that exploits our new generalization bounds.
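For intuition on why the multiplicative bounds can be tighter than the additive Hoeffding-type one, the display below sketches the generic (non-DP, textbook) shapes of the two bound families, together with the Boltzmann posterior form the abstract refers to. The notation ($L(h)$ true risk, $\hat{L}(h)$ empirical risk on $n$ samples, confidence $1-\delta$, reference distribution $P$, inverse temperature $\beta$) is the standard PAC-Bayes one, and the constants are the classical ones, not necessarily those derived in the paper.

```latex
\begin{align*}
% Additive (Hoeffding-type) bound: the deviation term is
% independent of the empirical risk.
L(h) &\le \hat{L}(h) + \sqrt{\frac{\ln(1/\delta)}{2n}} \\
% Multiplicative (Chernoff/Bernstein-type) bound: the deviation
% scales with \hat{L}(h), so a small empirical risk yields the
% fast O(1/n) rate instead of the slow O(1/\sqrt{n}) rate.
L(h) &\le \hat{L}(h) + \sqrt{\frac{2 \hat{L}(h) \ln(1/\delta)}{n}} + \frac{2 \ln(1/\delta)}{n} \\
% Boltzmann (Gibbs) posterior in the sense of Catoni (2007):
% a reference measure P reweighted by the exponentiated
% empirical risk at inverse temperature \beta.
Q(h) &\propto P(h)\, e^{-\beta \hat{L}(h)}
\end{align*}
```

Similarly, a minimal Python sketch of a single Thresholdout query, after Dwork et al. (2015), may help fix ideas; the function and parameter names (`thresholdout`, `threshold`, `sigma`) are ours, and the noise scales are illustrative rather than the calibrated constants of the original procedure.

```python
import numpy as np

def thresholdout(train_vals, holdout_vals, threshold=0.04, sigma=0.01, rng=None):
    """One Thresholdout query (sketch after Dwork et al., 2015).

    train_vals / holdout_vals: per-example values of the query
    (e.g. losses of a candidate model) on the training and holdout
    sets. Returns a noisy estimate of the query's mean that remains
    valid under adaptive reuse of the holdout set.
    """
    rng = rng if rng is not None else np.random.default_rng()
    t_mean = float(np.mean(train_vals))
    h_mean = float(np.mean(holdout_vals))
    # Consult the holdout set only when the training estimate drifts
    # from it by more than a (noisy) threshold; otherwise answer with
    # the training mean, which reveals nothing about the holdout.
    if abs(t_mean - h_mean) > threshold + rng.laplace(scale=2 * sigma):
        return h_mean + rng.laplace(scale=sigma)  # noisy holdout answer
    return t_mean
```

In the Model Selection and Error Estimation setting discussed in the paper, each query would presumably report the empirical loss of a candidate model, with the holdout consulted only when the training estimate drifts away from it.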

  • Published: 1 April 2017