1. Style-Aware Normalized Loss for Improving Arbitrary Style Transfer
- Authors
- Pradeep Natarajan, Prem Natarajan, Jiaxin Cheng, Ayush Jaiswal, and Yue Wu
- Subjects
- Computer Science - Computer Vision and Pattern Recognition (cs.CV); Machine Learning; Artificial Intelligence
Neural Style Transfer (NST) has quickly evolved from single-style to infinite-style models, also known as Arbitrary Style Transfer (AST). Although appealing results have been widely reported in the literature, our empirical studies on four well-known AST approaches (GoogleMagenta, AdaIN, LinearTransfer, and SANet) show that more than 50% of the time, AST stylized images are not acceptable to human users, typically due to under- or over-stylization. We systematically study the cause of this imbalanced style transferability (IST) and propose a simple yet effective solution to mitigate it. Our studies show that the IST issue is related to the conventional AST style loss, and reveal that its root cause is the equal weighting of training samples irrespective of the properties of their corresponding style images, which biases the model towards certain styles. By investigating the theoretical bounds of the AST style loss, we derive a new loss that largely overcomes IST. Theoretical analysis and experimental results validate its effectiveness, with over 80% relative improvement in style deception rate and 98% relatively higher human preference. (Accepted as a CVPR 2021 oral paper.)
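The abstract's core idea can be sketched in code: the conventional AST style loss weights every training sample equally, so styles whose feature statistics have large magnitudes dominate training. A style-aware normalized loss divides each sample's loss by a per-style scale. The sketch below is illustrative only: it uses a standard Gram-matrix style loss, and the normalization term (the loss of an all-zero feature map against the style) is a hypothetical stand-in for the theoretical bound derived in the paper, not the authors' exact formulation.

```python
import numpy as np

def gram(feat):
    # feat: (C, H*W) feature map; the Gram matrix captures style statistics
    return feat @ feat.T / feat.shape[1]

def style_loss(stylized_feat, style_feat):
    # Conventional AST style loss: squared Frobenius distance of Gram matrices
    return np.sum((gram(stylized_feat) - gram(style_feat)) ** 2)

def normalized_style_loss(stylized_feat, style_feat, eps=1e-8):
    # Style-aware normalization (illustrative): divide by a per-style scale so
    # "easy" and "hard" styles contribute comparably to the training signal.
    # Here the scale is the loss of an all-zero feature map against the style,
    # a hypothetical proxy for the paper's theoretical upper bound.
    zero = np.zeros_like(style_feat)
    scale = style_loss(zero, style_feat) + eps
    return style_loss(stylized_feat, style_feat) / scale
```

Under this normalization, a style image with large-magnitude Gram statistics no longer produces a proportionally larger gradient than a muted one, which is the imbalance the abstract attributes to equal sample weighting.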
- Published
- 2021