Censoring representations with an adversary

H. Edwards, A. Storkey - arXiv preprint arXiv:1511.05897, 2015 - arxiv.org
In practice, there are often explicit constraints on what representations or decisions are acceptable in an application of machine learning. For example, it may be a legal requirement that a decision must not favour a particular group. Alternatively, it can be that the representation of the data must not contain identifying information. We address these two related issues by learning flexible representations that minimize the capability of an adversarial critic. This adversary tries to predict the relevant sensitive variable from the representation, so minimizing the adversary's performance ensures there is little or no information in the representation about the sensitive variable. We demonstrate this adversarial approach on two problems: making decisions free from discrimination and removing private information from images. We formulate the adversarial model as a minimax problem and optimize that minimax objective using a stochastic-gradient alternating min-max optimizer. We demonstrate the ability to provide discrimination-free representations for standard test problems and compare with previous state-of-the-art methods for fairness, showing statistically significant improvement in most cases. The flexibility of this method is shown via a novel problem: removing annotations from images, given only unaligned training examples of annotated and unannotated images and no a priori knowledge of the form of annotation provided to the model.
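
The minimax formulation referred to above can be written schematically as follows; the notation (encoder $E_\theta$, task predictor $f_\theta$, adversarial critic $A_\phi$, trade-off weight $\lambda$) is an illustrative assumption, not the paper's exact objective:

$$
\min_{\theta}\,\max_{\phi}\;\mathbb{E}_{(x,\,y,\,s)}\Big[\, L_{\mathrm{task}}\big(y,\; f_\theta(E_\theta(x))\big) \;-\; \lambda\, L_{\mathrm{adv}}\big(s,\; A_\phi(E_\theta(x))\big) \Big]
$$

Here the inner maximization trains the adversary to recover the sensitive variable $s$ from the representation $E_\theta(x)$, while the outer minimization trains the encoder and predictor to perform the task well while making the adversary's prediction of $s$ as poor as possible, with $\lambda$ trading off the two goals.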
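
A minimal sketch of the alternating stochastic-gradient min-max optimization is below. It assumes a PyTorch-style setup; the module architectures, dimensions, losses, and hyperparameters are illustrative assumptions rather than the paper's implementation, and both the target and the sensitive variable are taken to be binary for simplicity.

```python
import torch
import torch.nn as nn

# Illustrative dimensions and hyperparameters -- assumptions, not from the paper.
X_DIM, Z_DIM = 64, 16
LAM = 1.0  # censoring strength: weight on the adversary's loss

encoder   = nn.Sequential(nn.Linear(X_DIM, Z_DIM), nn.ReLU())  # E_theta: x -> representation z
predictor = nn.Linear(Z_DIM, 1)                                # f_theta: z -> task logit
adversary = nn.Linear(Z_DIM, 1)                                # A_phi:   z -> sensitive-variable logit

task_loss = nn.BCEWithLogitsLoss()  # y assumed binary
adv_loss  = nn.BCEWithLogitsLoss()  # s assumed binary

opt_model = torch.optim.SGD(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-2)
opt_adv = torch.optim.SGD(adversary.parameters(), lr=1e-2)

def train_step(x, y, s):
    """One alternating min-max update on a mini-batch (x, y, s)."""
    # Max step: the adversary improves its prediction of s from the
    # representation; the encoder is frozen here via detach().
    z = encoder(x).detach()
    opt_adv.zero_grad()
    adv_loss(adversary(z), s).backward()
    opt_adv.step()

    # Min step: encoder and predictor minimize the task loss while
    # *maximizing* the adversary's loss (the -LAM term), censoring z.
    opt_model.zero_grad()
    z = encoder(x)
    loss = task_loss(predictor(z), y) - LAM * adv_loss(adversary(z), s)
    loss.backward()
    opt_model.step()
```

In use, one would call `train_step` repeatedly over mini-batches; the `detach()` in the first step is what makes the two updates alternate rather than interfere, mirroring the alternating min-max optimizer described in the abstract.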