Human-interpretable and deep features for image privacy classification
- Source :
- 2023 IEEE International Conference on Image Processing (ICIP), Kuala Lumpur, Malaysia, 2023, pp. 3489-3492
- Publication Year :
- 2023
Abstract
- Privacy is a complex, subjective and contextual concept that is difficult to define. Therefore, the annotation of images to train privacy classifiers is a challenging task. In this paper, we analyse privacy classification datasets and the properties of controversial images that are annotated with contrasting privacy labels by different assessors. We discuss suitable features for image privacy classification and propose eight privacy-specific and human-interpretable features. These features increase the performance of deep learning models and, on their own, improve the image representation for privacy classification compared with much higher-dimensional deep features.
- Subjects :
- Computer Science - Computer Vision and Pattern Recognition
Details
- Database :
- arXiv
- Journal :
- 2023 IEEE International Conference on Image Processing (ICIP), Kuala Lumpur, Malaysia, 2023, pp. 3489-3492
- Publication Type :
- Report
- Accession number :
- edsarx.2310.19582
- Document Type :
- Working Paper
- Full Text :
- https://doi.org/10.1109/ICIP49359.2023.10222833