A Study of Face Obfuscation in ImageNet

Kaiyu Yang, Jacqueline Yau, Li Fei-Fei, Jia Deng, Olga Russakovsky

Abstract

Image obfuscation (blurring, mosaicing, etc.) is widely used for privacy protection. However, computer vision research often overlooks privacy by assuming access to the original, unobfuscated images. In this paper, we explore image obfuscation in the ImageNet challenge. Most categories in the ImageNet challenge are not people categories; nevertheless, many incidental people appear in the images, and their privacy is a concern. We first annotate faces in the dataset. Then we investigate how face blurring, a typical obfuscation technique, impacts classification accuracy.
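As an illustration, the following is a minimal sketch of blurring annotated face regions with Pillow; it is not necessarily the authors' exact pipeline, and the image path and bounding box below are hypothetical.

    # Minimal face-blurring sketch: Gaussian-blur each annotated face
    # box and paste it back into the image. Hypothetical inputs.
    from PIL import Image, ImageFilter

    def blur_faces(image_path, face_boxes, radius=10):
        """Blur each (left, upper, right, lower) box in face_boxes."""
        img = Image.open(image_path).convert("RGB")
        for box in face_boxes:
            region = img.crop(box)
            region = region.filter(ImageFilter.GaussianBlur(radius=radius))
            img.paste(region, box[:2])  # paste back at the box's upper-left
        return img

    # Hypothetical usage: one annotated face in an ImageNet image.
    blurred = blur_faces("n02085620_7.JPEG", [(48, 32, 112, 96)])
    blurred.save("n02085620_7_blurred.JPEG")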

We benchmark various deep neural networks on face-blurred images and observe a disparate impact across categories. Still, the overall accuracy drops only slightly (<= 0.68%), demonstrating that we can train privacy-aware visual classifiers with minimal loss in accuracy. Further, we experiment with transfer learning to 4 downstream tasks: object recognition, scene recognition, face attribute classification, and object detection. Results show that features learned on face-blurred images are equally transferable. Data and code are available below.
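To make the benchmarking setup concrete, here is a minimal evaluation sketch comparing top-1 accuracy on original versus face-blurred validation images. It assumes a torchvision ResNet-50 and two hypothetical ImageFolder-style directories, val_original/ and val_blurred/; the authors' actual training and evaluation code is in the linked repository.

    # Minimal sketch: compare a pretrained classifier's top-1 accuracy
    # on original vs. face-blurred validation sets (hypothetical paths).
    import torch
    from torchvision import datasets, models, transforms

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def top1_accuracy(model, root):
        loader = torch.utils.data.DataLoader(
            datasets.ImageFolder(root, preprocess), batch_size=64)
        model.eval()
        correct = total = 0
        for images, labels in loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
        return correct / total

    model = models.resnet50(weights="IMAGENET1K_V1")
    for split in ["val_original", "val_blurred"]:
        print(split, top1_accuracy(model, split))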

Paper

Kaiyu Yang, Jacqueline Yau, Li Fei-Fei, Jia Deng, Olga Russakovsky
A Study of Face Obfuscation in ImageNet
International Conference on Machine Learning (ICML), 2022
[arXiv] [BibTeX] [code and data]