Differential privacy for networks and tables
Vishesh Karwa, Temple University
Privacy-preserving mechanisms such as differential privacy inject additional randomness into the data, beyond the sampling mechanism. Ignoring this additional noise can lead to inaccurate and invalid inferences. In this talk, we present two examples of performing statistical inference on data released by a differentially private mechanism. The first example centers on the beta model of random graphs, where the degree sequence is a sufficient statistic. Here, we release a noisy sufficient statistic and then "de-noise" it to perform maximum likelihood estimation. The second example centers on a naive Bayes classifier. Here, we explicitly incorporate the privacy mechanism into the likelihood function by treating the original data as missing. The corresponding likelihoods are intractable, however, and we derive fast and accurate variational approximations to handle the intractable likelihoods that arise due to privacy.
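The release-then-de-noise idea in the first example can be sketched as follows. This is only an illustrative Python sketch, not the talk's actual method: it adds Laplace noise to a degree sequence (adding or removing one edge changes the degree sequence by at most 2 in L1, so sensitivity is 2), then "de-noises" by rounding and clipping to the feasible degree range; the real procedure would project onto the set of graphical degree sequences before fitting the beta model.

```python
import numpy as np

def release_noisy_degrees(degrees, epsilon, rng=None):
    """Laplace mechanism for a degree sequence.

    One edge flip changes two degrees by 1 each, so the L1
    sensitivity of the degree sequence is 2 (an assumption of
    edge-level adjacency, stated for this sketch).
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.laplace(scale=2.0 / epsilon, size=len(degrees))
    return np.asarray(degrees, dtype=float) + noise

def denoise_degrees(noisy_degrees, n):
    """Crude de-noising placeholder: round to integers and clip to
    the feasible range [0, n-1] for an n-node simple graph."""
    return np.clip(np.round(noisy_degrees), 0, n - 1).astype(int)

# Hypothetical usage on a 4-node graph's degree sequence.
degrees = np.array([3, 2, 2, 1])
noisy = release_noisy_degrees(degrees, epsilon=1.0,
                              rng=np.random.default_rng(0))
cleaned = denoise_degrees(noisy, n=4)
```

The de-noised sequence would then be plugged into the beta-model likelihood as if it were the observed sufficient statistic.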