7 results on "Gou, Jianping"
Search Results
2. Multi-target Knowledge Distillation via Student Self-reflection
- Author
- Gou, Jianping, Xiong, Xiangshuo, Yu, Baosheng, Du, Lan, Zhan, Yibing, and Tao, Dacheng
- Published
- 2023
- Full Text
- View/download PDF
3. Advancement of Mathematical Methods in Feature Representation Learning for Artificial Intelligence, Data Mining and Robotics.
- Author
- Gou, Jianping, Du, Lan, Ou, Weihua, and Zeng, Shaoning
- Subjects
Computer science, Information technology industries, 3D reconstruction, ADMM, Aspect Level Sentiment Classification, C-MAPSS, Contrastive Learning, DCNN-BiLSTM, Dempster-Shafer evidence theory, GAT, GCN, Graph Convolutional Networks, KGE, MMD, NMS, Soft-NMS, XSS attack, YOLOX, YoloV4, adversarial equilibrium, adversarial example, adversarial learning, anchor-free, anomaly detection, anti-noise performance, aspect-based sentiment analysis, aspect-level sentiment classification, attention mechanism, background matting, black-box attack, blind image deblurring, collaborative-representation-based classification, commonsense knowledge graph, computer vision, confidence score, contrastive learning, correlation filters, cost-weighted, cross-domain classification, cross-domain sentiment classification, cross-working, cyber-physical, data analysis, decoupling, deep learning, deep neural network, deep reinforcement learning, dependency trees, dependency types, discriminative feature learning, domain adaptation, elastic optical networks, end-to-end, ensemble attack, extension theory, external knowledge, face recognition, feature extraction, feature reuse, feature transformation, fine-tuning, fusion verification, fuzzy k-means, gait adjustment, garbage quantity identification, gated learning, geometric mean metric, graph attention mechanism, graph convolutional networks, graph neural networks, hate speech detection, head detection, hypergraph matching, image aesthetic assessment, image classification, image gradient orientations, image prior, image super-resolution, industrial control systems, information-theoretic metric learning, intelligent design, iterative majorization algorithm, joint semantic learning, kNN, knowledge distillation, knowledge graph embedding, label propagation, large-margin technique, license plate recognition, logarithm norm, low-high level joint task, machine learning, matrix nuclear norm, metric learning, mixed noise removal, models and algorithms, motion deblurring, multi-order attention, multi-output, multi-source domain adaptation, multi-task learning, multi-view stereo, multidimensional scaling, n/a, object detection, pairwise constraint propagation, payloads, pedestrian detection, people counting, plug-and-play, power load forecasting, rainy image recovery, robustness, routing, modulation and spectrum assignment, scheme design, second-order fitting, second-order gradient, semantic, semi-supervised learning, similarity metric, small sample, soft-NMS, sparse channel, sparsity, stability, state reconstruction, state-dependent switching, structure from motion, switched system, syntactic, temporal knowledge graph, time delay, traffic detection, transferability quantification, uncertain temporal knowledge graph, vehicle color recognition, vehicle re-identification, video surveillance, visual tracking, word embedding
- Abstract
Summary: The present reprint contains 33 articles accepted and published in the Special Issue entitled "Advancement of Mathematical Methods in Feature Representation Learning for Artificial Intelligence, Data Mining and Robotics, 2022" in the MDPI journal Mathematics, which covers a wide range of topics connected to the theory and applications of feature representation learning for image processing, artificial intelligence, data mining and robotics. These topics include, among others, image deblurring, image aesthetic quality assessment, pedestrian detection, visual tracking, vehicle re-identification, face recognition, 3D reconstruction, the stability of switched systems, domain adaptation, deep reinforcement learning, sentiment analysis, graph convolutional networks, knowledge graphs, and geometric metric learning. It is hoped that this reprint will be interesting and useful for those working in image processing, computer vision, machine learning, natural language processing and robotics, as well as for those with a machine learning background who wish to become familiar with recent advancements in artificial intelligence, which today is present in almost all aspects of human life and activity.
4. Knowledge Distillation: A Survey
- Author
-
Gou, Jianping, Yu, Baosheng, Maybank, Stephen J., and Tao, Dacheng
- Published
- 2021
- Full Text
- View/download PDF
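The survey above covers, among many variants, the classic response-based distillation loss. As a point of reference, here is a minimal sketch of that soft-target loss; the temperature `T` and the `T^2` scaling follow the standard Hinton-style formulation, and the helper names are ours, not the survey's:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_soft_loss(teacher_logits, student_logits, T=4.0):
    """KL(teacher_T || student_T), scaled by T^2 so gradient magnitudes
    stay comparable across temperatures."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)

teacher = np.array([[2.0, 0.5, -1.0]])
student = np.array([[0.0, 0.0, 0.0]])
loss = kd_soft_loss(teacher, student)
```

Identical teacher and student logits give zero loss; any disagreement in the softened distributions gives a positive penalty that the student minimizes during training.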
5. Feature fusion-based collaborative learning for knowledge distillation.
- Author
- Li, Yiting, Sun, Liyuan, Gou, Jianping, Du, Lan, and Ou, Weihua
- Subjects
COLLABORATIVE learning, MACHINE learning, AUTOMATIC systems in automobiles, DEEP learning, CONCEPT mapping, KNOWLEDGE transfer, WIKIS, GENE regulatory networks
- Abstract
Deep neural networks have achieved great success in a variety of applications, such as self-driving cars and intelligent robotics. Meanwhile, knowledge distillation has received increasing attention as an effective model compression technique for training very efficient deep models. The performance of the student network obtained through knowledge distillation heavily depends on whether the transfer of the teacher's knowledge can effectively guide the student's training. However, most existing knowledge distillation schemes require a large teacher network pre-trained on large-scale data sets, which can increase the difficulty of applying knowledge distillation in different applications. In this article, we propose feature fusion-based collaborative learning for knowledge distillation. Specifically, during knowledge distillation, it enables networks to learn from each other using feature- and response-based knowledge in different network layers. We concatenate the features learned by the teacher and the student networks to obtain a more representative feature map for knowledge transfer. In addition, we introduce a network regularization method to further improve model performance by providing positive knowledge during training. Experiments and ablation studies on two widely used data sets demonstrate that the proposed feature fusion-based collaborative learning significantly outperforms recent state-of-the-art knowledge distillation methods. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
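The feature-concatenation idea described in the abstract of record 5 can be sketched roughly as follows. This is an illustration only: the toy shapes and the fixed fusion projection `w_fuse` stand in for the paper's learned fusion layer, and the loss is a generic feature-matching MSE, not the authors' exact objective:

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_features(f_teacher, f_student, w_fuse):
    """Concatenate teacher and student feature vectors, then project the
    result back down to the student's feature width."""
    concat = np.concatenate([f_teacher, f_student], axis=-1)  # (N, Dt + Ds)
    return concat @ w_fuse                                    # (N, Ds)

def fusion_loss(f_teacher, f_student, w_fuse):
    """Pull the student toward the richer fused representation (in a real
    training loop the fused target would be treated as a constant)."""
    fused = fuse_features(f_teacher, f_student, w_fuse)
    return float(((f_student - fused) ** 2).mean())

# toy example: teacher features wider (64-d) than the student's (32-d)
f_t = rng.normal(size=(8, 64))
f_s = rng.normal(size=(8, 32))
w = rng.normal(size=(96, 32)) / np.sqrt(96)
loss = fusion_loss(f_t, f_s, w)
```

The point of the concatenation is that the fused map carries information from both peers, so each network distills from a target strictly richer than either network alone.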
6. Collaborative knowledge distillation via filter knowledge transfer.
- Author
- Gou, Jianping, Hu, Yue, Sun, Liyuan, Wang, Zhi, and Ma, Hongxing
- Subjects
- *KNOWLEDGE transfer, *DISTILLATION, *ENTROPY (Information theory), *FILTERS & filtration, *RECOMMENDER systems, *INFORMATION filtering
- Abstract
Knowledge distillation is a promising model compression technique that generally distills the knowledge of a complex teacher model into a lightweight student model. However, the performance gain of a student model is usually limited by the capacity gap between the large teacher model and the small student model. In this paper, we propose a new collaborative knowledge distillation method built on a new strategy, named Filter Knowledge Transfer (FKT), that detects and learns the valuable filter information flowing from the teacher to the student. To be specific, the useful knowledge of filters, measured by information entropy, is transferred between different peer networks, and unimportant filters are reactivated according to a ratio based on the characterized filter information entropy during the online distillation process. Experimental results on four popular datasets, CIFAR-10/100, Market-1501, and Tiny-ImageNet, demonstrate the superiority of the proposed method over the methods we considered. • Design filter knowledge transfer. • Propose a new collaborative knowledge distillation method. • Design filter knowledge entropy to reflect the importance of filter knowledge. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
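A hedged sketch of the filter-entropy idea from the abstract of record 6: score each convolutional filter by the Shannon entropy of its activation distribution and rank filters by informativeness. The histogram binning and the `keep_ratio` threshold here are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def filter_entropy(activations, bins=16):
    """Shannon entropy of each filter's activation histogram.
    activations: (n_filters, n_samples); higher entropy = more informative."""
    ents = []
    for a in activations:
        hist, _ = np.histogram(a, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]                      # drop empty bins before log
        ents.append(float(-(p * np.log(p)).sum()))
    return np.array(ents)

def select_filters(activations, keep_ratio=0.5):
    """Indices of the most informative filters, ranked by entropy descending."""
    ents = filter_entropy(activations)
    k = max(1, int(len(ents) * keep_ratio))
    return np.argsort(ents)[::-1][:k]
```

A constant (dead) filter produces a single-bin histogram and thus zero entropy, so it ranks last; in the paper's scheme such low-entropy filters would be candidates for reactivation during online distillation.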
7. Teacher–student complementary sample contrastive distillation.
- Author
- Bao, Zhiqiang, Huang, Zhenhua, Gou, Jianping, Du, Lan, Liu, Kang, Zhou, Jingtao, and Chen, Yunwen
- Subjects
- *DISTILLATION, *AUTODIDACTICISM, *TEACHER training, *PREDICTION models
- Abstract
Knowledge distillation (KD) is a widely adopted model compression technique for improving the performance of compact student models by utilizing the "dark knowledge" of a large teacher model. However, previous studies have not adequately investigated the effectiveness of supervision from the teacher model, and overconfident predictions in the student model may degrade its performance. In this work, we propose a novel framework, Teacher–Student Complementary Sample Contrastive Distillation (TSCSCD), that alleviates these challenges. TSCSCD consists of three key components: Contrastive Sample Hardness (CSH), Supervision Signal Correction (SSC), and Student Self-Learning (SSL). Specifically, CSH evaluates the teacher's supervision of each sample by comparing the predictions of two compact models, one distilled from the teacher and the other trained from scratch. SSC corrects weak supervision according to CSH, while SSL employs integrated learning among multiple classifiers to regularize overconfident predictions. Extensive experiments on four real-world datasets demonstrate that TSCSCD outperforms recent state-of-the-art knowledge distillation techniques. • We present a supervision correction component for knowledge distillation (KD). • We propose a self-learning component that constrains overconfident predictions. • The effectiveness of the framework is evaluated on four real-world datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
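One plausible reading of the Contrastive Sample Hardness component described in record 7, sketched as code: compare a distilled compact model against a from-scratch one on each sample's true class, and flag samples where distillation lowered the true-class probability as cases of suspect teacher supervision. All names and the scoring rule are our assumptions, not the paper's exact definition:

```python
import numpy as np

def sample_hardness(p_distilled, p_scratch, labels):
    """Per-sample gain in true-class probability from distillation.
    p_distilled, p_scratch: (N, C) class-probability arrays from the two
    compact models; labels: (N,) integer class indices.
    A negative gain suggests the teacher's supervision hurt that sample."""
    idx = np.arange(len(labels))
    return p_distilled[idx, labels] - p_scratch[idx, labels]

p_d = np.array([[0.8, 0.2],
                [0.3, 0.7]])   # compact model distilled from the teacher
p_s = np.array([[0.6, 0.4],
                [0.6, 0.4]])   # compact model trained from scratch
labels = np.array([0, 0])
gain = sample_hardness(p_d, p_s, labels)
# sample 0: distillation helped (+0.2); sample 1: it hurt (-0.3)
```

Under this reading, a correction step like the paper's SSC would down-weight or repair the teacher's signal on samples with negative gain.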
Discovery Service for Jio Institute Digital Library