18 results on '"Negative transfer"'
Search Results
2. Boosting for regression transfer via importance sampling
- Author
- Gupta, Shrey, Bi, Jianzhao, Liu, Yang, and Wildani, Avani
- Published
- 2023
- Full Text
- View/download PDF
3. Adaptive Generative Initialization in Transfer Learning
- Author
- Bai, Wenjun, Quan, Changqin, Luo, Zhi-Wei, Kacprzyk, Janusz, Series Editor, and Lee, Roger, editor
- Published
- 2019
- Full Text
- View/download PDF
4. Improving Transfer Learning in Cross Lingual Opinion Analysis Through Negative Transfer Detection
- Author
- Gui, Lin, Lu, Qin, Xu, Ruifeng, Wei, Qikang, Cao, Yuhui, Zhang, Songmao, editor, Wirsing, Martin, editor, and Zhang, Zili, editor
- Published
- 2015
- Full Text
- View/download PDF
5. Experimental Analysis of Mandarin Tone Pronunciation of Tibetan College Students for Artificial Intelligence Speech Recognition
- Author
- Shiliang Lyu and Fu Zhang
- Subjects
- business.industry, First language, Speech recognition, Tone (linguistics), Negative transfer, Pronunciation, Mandarin Chinese, language.human_language, Focus (linguistics), language, Mandarin speech recognition, Artificial intelligence, Psychology, business, Amdo Tibetan
- Abstract
Amdo Tibetan is a non-tonal language. For college students who speak Amdo Tibetan as their mother tongue, Mandarin tones have always been a major difficulty in learning Mandarin. In the field of artificial intelligence and Mandarin speech recognition, recognizing the speech of native Tibetan speakers is a focus of current research. Accordingly, this paper takes college students whose native language is Amdo Tibetan as experimental subjects, extracts the fundamental frequency (F0) of their Mandarin pronunciation through acoustic speech experiments, and converts the fundamental frequency into five-degree tone values through normalization. The results show that the main error in their pronunciation lies in the tone value, while their tone patterns differ little from standard Mandarin. Their pronunciation errors are mainly caused by negative transfer from their mother tongue. The research results can serve as a reference for Mandarin teaching, speech recognition, and artificial intelligence. (See the sketch following this entry.)
- Published
- 2021
- Full Text
- View/download PDF
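The entry above (no. 5) converts fundamental-frequency contours into five-degree tone values via normalization. A minimal sketch of one common normalization, the log-scale T-value formula often used in Chinese tone studies, is given below; the exact formula and the speaker-specific F0 range are assumptions, not necessarily what the study used.

```python
import numpy as np

def t_value(f0_hz, f0_min, f0_max):
    """Map an F0 contour (Hz) to the 0-5 'five-degree' scale.

    Uses the common log-scale normalization
        T = 5 * (lg f - lg f_min) / (lg f_max - lg f_min),
    where f_min / f_max bound the speaker's F0 range. Illustrative
    assumption; the paper may use a different variant.
    """
    f0 = np.asarray(f0_hz, dtype=float)
    return 5.0 * (np.log10(f0) - np.log10(f0_min)) / (np.log10(f0_max) - np.log10(f0_min))

# Example: a falling contour for a speaker whose range is 80-250 Hz.
contour = [220.0, 200.0, 150.0, 100.0]
print(np.round(t_value(contour, f0_min=80.0, f0_max=250.0), 2))
```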
6. An Embodied Theory of Transfer of Mathematical Learning
- Author
- Mitchell J. Nathan and Martha W. Alibali
- Subjects
- Formative assessment, Cognitive science, Cohesion (linguistics), Conceptual blending, Summative assessment, Embodied cognition, Computer science, Negative transfer, Transfer of learning, Concreteness
- Abstract
We present an embodied theory of transfer as applied to mathematical ideas. Project-based learning (PBL) offers important sites for investigating transfer because students and teachers must track mathematical invariant relations that are embedded in complex settings where knowledge and information are distributed and extended across various materials, symbolic inscriptions, and social interactions. We argue that transfer occurs when learners and teachers establish cohesion of their experiences by mapping modes of perceiving and acting that they successfully used in previous contexts to new contexts. Learners express that cohesion across contexts in a variety of ways, principally through speech, actions, and gestures, including gestural catchments and simulated actions. Students and teachers may engage in several forms of mapping, including explicit analogical mapping, constructing mappings implicitly via relational priming, and mapping relations via conceptual blending. Thus, we theorize that both teachers and students are integral to the transfer process. This embodied theory of transfer can account for near and far transfer, negative transfer, and "false" transfer, in which actors enact modes of perceiving and acting activated by cues that match only at a surface level. Embodied accounts of transfer have implications for educational practice. Some pedagogical approaches, such as concreteness fading, aim to foster and maintain cohesion of mathematical invariant relations. An embodied perspective on transfer can also inform the design of assessments, both formative and summative.
- Published
- 2021
- Full Text
- View/download PDF
7. A Study on Realtime Task Selection Based on Credit Information Updating in Evolutionary Multitasking
- Author
- Yaqing Hou, Liang Feng, Qiang Zhang, Xiaopeng Wei, Cao Yumeng, and Hongwei Ge
- Subjects
- Computer science, Process (engineering), business.industry, Negative transfer, Machine learning, computer.software_genre, Task (project management), Search algorithm, Human multitasking, Artificial intelligence, business, Knowledge transfer, Selection algorithm, computer, Selection (genetic algorithm)
- Abstract
Recently, evolutionary multi-tasking (EMT) has been proposed as a new search paradigm for optimizing multiple problems simultaneously. Since beneficial knowledge can be transferred among tasks to speed up the optimization process, EMT outperforms single-task evolutionary search algorithms on many problems. Notably, existing work on EMT has been devoted to evolutionary search with two tasks. However, when more than two tasks are involved, existing methods may fail because it becomes necessary to decide which task is the most suitable source for knowledge transfer. To address this issue, we propose an online credit-information-based task selection algorithm to enhance the performance of EMT. Specifically, a credit matrix is introduced to express the transfer quality among tasks. We design two kinds of credit information and propose an updating mechanism to adjust the credit matrix online. A greedy task selection mechanism is then proposed to balance exploration and exploitation in the task selection process. We also propose an adaptive transfer rate to enhance positive transfer while reducing the impact of negative transfer. In experiments, we compare our method with existing approaches, and the results clearly demonstrate its efficacy. (See the sketch following this entry.)
- Published
- 2021
- Full Text
- View/download PDF
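The entry above (no. 7) maintains a credit matrix over task pairs, selects transfer partners greedily, and adapts the transfer rate. The sketch below is a simplified, hypothetical rendition of that idea; the class name `CreditMatrix`, the update rule, and the epsilon-greedy choice are assumptions rather than the authors' exact algorithm.

```python
import random

class CreditMatrix:
    """Hypothetical online credit bookkeeping for K optimization tasks."""

    def __init__(self, num_tasks, lr=0.3, eps=0.1):
        self.credit = [[0.5] * num_tasks for _ in range(num_tasks)]
        self.lr, self.eps = lr, eps

    def select_source(self, target):
        """Epsilon-greedy choice of a source task for knowledge transfer."""
        candidates = [k for k in range(len(self.credit)) if k != target]
        if random.random() < self.eps:                                    # exploration
            return random.choice(candidates)
        return max(candidates, key=lambda k: self.credit[target][k])      # exploitation

    def update(self, target, source, reward):
        """Move credit toward the observed transfer quality (0..1)."""
        old = self.credit[target][source]
        self.credit[target][source] = old + self.lr * (reward - old)

    def transfer_rate(self, target, source, base=0.4):
        """Adaptive rate: transfer more when past transfers helped."""
        return base * self.credit[target][source]

cm = CreditMatrix(num_tasks=4)
src = cm.select_source(target=0)
cm.update(target=0, source=src, reward=0.8)   # e.g. survival rate of transferred offspring
print(src, round(cm.transfer_rate(0, src), 3))
```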
8. Statistical Analysis and Automatic Recognition of Grammatical Errors in Teaching Chinese as a Second Language
- Author
- Lijuan Zhou, Mengjie Zhong, Hongying Zan, and Yingjie Han
- Subjects
- Computer science, business.industry, First language, Negative transfer, computer.software_genre, TheoryofComputation_MATHEMATICALLOGICANDFORMALLANGUAGES, Second language, Statistical analysis, Artificial intelligence, Second language learners, business, computer, Word (computer architecture), Natural language processing
- Abstract
Learners of Chinese as a second language make various grammatical errors due to negative transfer from their mother tongue, learning strategies, and other factors. At present, research on grammatical errors mainly focuses on a particular word or a particular kind of error, resulting in a lack of comprehensive understanding. In this paper, a statistical analysis of large-scale data sets of grammatical errors made by second language learners is conducted, covering the words involved in grammatical errors and their frequencies. The analysis provides a more comprehensive understanding of grammatical errors and offers guidance for teaching Chinese as a second language (TCSL). Because grammatical errors involving "的[de](of)" account for a large proportion, the usages of "的[de](of)" are integrated into the automatic recognition of Chinese grammatical errors. Experimental results show that overall performance is improved. (See the sketch following this entry.)
- Published
- 2020
- Full Text
- View/download PDF
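The entry above (no. 8) reports frequency statistics of the words involved in learners' grammatical errors. A tiny sketch of such counting is given below; the input format (word, error-type pairs) is a made-up illustration, not the paper's data schema.

```python
from collections import Counter

# Hypothetical annotated error instances: (word involved, error type).
errors = [
    ("的", "misuse"), ("的", "omission"), ("了", "misuse"),
    ("的", "redundancy"), ("在", "omission"), ("了", "omission"),
]

word_counts = Counter(word for word, _ in errors)     # which words attract errors
type_counts = Counter(etype for _, etype in errors)   # which error types dominate

print(word_counts.most_common(3))   # e.g. [('的', 3), ('了', 2), ('在', 1)]
print(type_counts.most_common())
```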
9. Attract, Perturb, and Explore: Learning a Feature Alignment Network for Semi-supervised Domain Adaptation
- Author
- Changick Kim and Taekyung Kim
- Subjects
- Scheme (programming language), Domain adaptation, Exploit, business.industry, Computer science, Negative transfer, 02 engineering and technology, Semi-supervised learning, 010501 environmental sciences, Machine learning, computer.software_genre, 01 natural sciences, Domain (software engineering), 0202 electrical engineering, electronic engineering, information engineering, Benchmark (computing), Feature (machine learning), 020201 artificial intelligence & image processing, Artificial intelligence, business, computer, 0105 earth and related environmental sciences, computer.programming_language
- Abstract
Although unsupervised domain adaptation methods have been widely adopted across several computer vision tasks, it is more desirable if we can exploit a few labeled samples from the new domains encountered in a real application. The novel setting of the semi-supervised domain adaptation (SSDA) problem shares challenges with both domain adaptation and semi-supervised learning. However, a recent study shows that conventional domain adaptation and semi-supervised learning methods often result in less effective or negative transfer in the SSDA setting. To interpret this observation and address the SSDA problem, we raise the intra-domain discrepancy issue within the target domain, which has not been discussed so far. We then demonstrate that addressing the intra-domain discrepancy leads to the ultimate goal of the SSDA problem. We propose an SSDA framework that aims to align features by alleviating the intra-domain discrepancy. Our framework consists of three schemes: attraction, perturbation, and exploration. First, the attraction scheme globally minimizes the intra-domain discrepancy within the target domain. Second, we demonstrate the incompatibility of conventional adversarial perturbation methods with SSDA and present a domain-adaptive adversarial perturbation scheme, which perturbs the given target samples in a way that reduces the intra-domain discrepancy. Finally, the exploration scheme locally aligns features in a class-wise manner by selectively aligning unlabeled target features, complementing both the attraction and perturbation schemes. We conduct extensive experiments on domain adaptation benchmark datasets such as DomainNet, Office-Home, and Office, and our method achieves state-of-the-art performance on all of them. (See the sketch following this entry.)
- Published
- 2020
- Full Text
- View/download PDF
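The attraction scheme in the entry above (no. 9) minimizes an intra-domain discrepancy between labeled and unlabeled target features. As a stand-in for the paper's exact loss, the sketch below uses a simple linear-kernel maximum mean discrepancy (the squared distance between feature means); this choice, and the toy encoder, are assumptions.

```python
import torch

def intra_domain_mmd(feat_labeled, feat_unlabeled):
    """Linear-kernel MMD^2 between labeled and unlabeled target features.

    A stand-in discrepancy measure: the squared distance between the two
    feature means. Minimizing it pulls the unlabeled target distribution
    toward the labeled one ("attraction").
    """
    return (feat_labeled.mean(dim=0) - feat_unlabeled.mean(dim=0)).pow(2).sum()

# Toy usage with a shared feature extractor.
torch.manual_seed(0)
encoder = torch.nn.Sequential(torch.nn.Linear(32, 16), torch.nn.ReLU(), torch.nn.Linear(16, 8))
opt = torch.optim.SGD(encoder.parameters(), lr=0.1)

x_labeled = torch.randn(64, 32)     # few labeled target samples
x_unlabeled = torch.randn(256, 32)  # many unlabeled target samples

for _ in range(5):
    loss = intra_domain_mmd(encoder(x_labeled), encoder(x_unlabeled))
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```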
10. Partial Domain Adaptation for Relation Extraction Based on Adversarial Learning
- Author
- Juan Yang, Xiaofei Cao, and Xiangbin Meng
- Subjects
- Relation (database), Computer science, Negative transfer, 02 engineering and technology, 010501 environmental sciences, computer.software_genre, Data structure, 01 natural sciences, Relationship extraction, Domain (software engineering), Set (abstract data type), Noise, 020204 information systems, Outlier, 0202 electrical engineering, electronic engineering, information engineering, Data mining, computer, 0105 earth and related environmental sciences
- Abstract
Relation extraction methods based on domain adaptation are increasingly applied in specific domains to alleviate the pressure of insufficient annotated corpora, since they enable learning from the training data of a related domain. However, negative transfer may occur during adaptation due to differences in data distribution between domains. Moreover, it is difficult to achieve fine-grained alignment of relation categories without fully mining the multi-mode structure of the data. Furthermore, as a common application scenario, partial domain adaptation (PDA) refers to the case where the relation class set of the specific domain is a subset of that of the related domain; here, outlier classes belonging only to the related domain reduce the performance of the model. To solve these problems, we propose a novel model based on a multi-adversarial module for partial domain adaptation (MAPDA). We design a weight mechanism to mitigate the impact of noisy samples and outlier categories, and embed several adversarial networks to realize various category alignments between domains. Experimental results demonstrate that our proposed model achieves state-of-the-art performance for relation extraction under domain adaptation. (See the sketch following this entry.)
- Published
- 2020
- Full Text
- View/download PDF
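The entry above (no. 10) down-weights outlier relation classes and noisy samples when aligning a partial target label space with the source. The sketch below shows one common weighting idea (averaging target-side class probabilities to estimate how relevant each source class is); it is an assumption for illustration, not the authors' exact MAPDA formulation.

```python
import numpy as np

def class_weights_from_target_probs(target_probs):
    """Estimate per-class transfer weights for partial domain adaptation.

    target_probs: (n_target, n_source_classes) softmax outputs on the
    unlabeled target set. Classes that target data rarely activate
    (likely outlier classes of the source) receive small weights.
    """
    w = np.asarray(target_probs).mean(axis=0)   # average class responsibility
    return w / w.max()                          # normalize to [0, 1]

# Toy example: 3 source relation classes, class 2 is an outlier class.
probs = np.array([[0.7, 0.25, 0.05],
                  [0.6, 0.35, 0.05],
                  [0.2, 0.75, 0.05]])
print(np.round(class_weights_from_target_probs(probs), 2))  # class 2 gets a low weight
```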
11. Grouping and Recurrent Feature Encoding Based Multi-task Learning for Pedestrian Attribute Recognition
- Author
- Tang Bangjie, Huadong Pan, Yin Jun, Zheng Shaofei, and Zhang Xingming
- Subjects
- Correlation, Recurrent neural network, business.industry, Computer science, Group learning, Multi-task learning, Negative transfer, Deep neural networks, Artificial intelligence, Pedestrian, business, Attribute learning
- Abstract
Pedestrian attribute recognition (PAR) in surveillance aims to predict pedestrians' visual attributes (somatotype, clothing style, etc.). Existing methods usually design complex multi-label deep neural networks, which makes it hard to exploit attribute correlations and leaves them prone to the negative transfer problem. In this paper, we propose a grouping and recurrent feature encoding based multi-task learning method to address these problems. We group attributes adaptively based on the attribute learning state and use a bi-directional recurrent neural network (Bi-RNN) to encode the different groups, building an auxiliary learning task. We optimize group learning and feature encoding simultaneously in an end-to-end multi-task learning (MTL) manner. Furthermore, we establish a dynamic loss module that enables the model to learn the weights of the different tasks automatically in a closed-loop way. Finally, after training, the proposed method allows us to remove the auxiliary module and merge all groups into one, yielding a concise yet effective model without weakening performance. Extensive experiments on two public datasets, PA-100K and RAP, demonstrate the superior performance of our method. (See the sketch following this entry.)
- Published
- 2020
- Full Text
- View/download PDF
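The dynamic loss module in the entry above (no. 11) learns per-task weights in a closed-loop way. As an illustration of learnable task weighting, and not necessarily the authors' exact module, the sketch below uses the common homoscedastic-uncertainty weighting of multi-task losses.

```python
import torch

class DynamicLossWeights(torch.nn.Module):
    """Learnable multi-task loss weighting via per-task log-variances.

    total = sum_i exp(-s_i) * loss_i + s_i  (uncertainty weighting);
    an illustrative stand-in for the paper's dynamic loss module.
    """

    def __init__(self, num_tasks):
        super().__init__()
        self.log_vars = torch.nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        losses = torch.stack(task_losses)
        return (torch.exp(-self.log_vars) * losses + self.log_vars).sum()

weighter = DynamicLossWeights(num_tasks=3)
toy_losses = [torch.tensor(0.9, requires_grad=True),
              torch.tensor(0.4, requires_grad=True),
              torch.tensor(1.5, requires_grad=True)]
total = weighter(toy_losses)
total.backward()
print(float(total), weighter.log_vars.grad)   # gradients drive the weight updates
```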
12. Attentive Multi-task Deep Reinforcement Learning
- Author
- Roger Wattenhofer, Timo Bram, Gino Brunner, and Oliver Richter
- Subjects
- Computer science, Contrast (statistics), Negative transfer, 02 engineering and technology, 010501 environmental sciences, 01 natural sciences, Task (project management), Human–computer interaction, 0202 electrical engineering, electronic engineering, information engineering, Reinforcement learning, 020201 artificial intelligence & image processing, Granularity, State (computer science), Transfer of learning, Knowledge transfer, 0105 earth and related environmental sciences
- Abstract
Sharing knowledge between tasks is vital for efficient learning in a multi-task setting. However, most research so far has focused on the easier case where knowledge transfer is not harmful, i.e., where knowledge from one task cannot negatively impact performance on another task. In contrast, we present an attention-based approach to multi-task deep reinforcement learning that does not require any a-priori assumptions about the relationships between tasks. Our attention network automatically groups task knowledge into sub-networks at a state-level granularity. It thereby achieves positive knowledge transfer where possible and avoids negative transfer in cases where tasks interfere. We test our algorithm against two state-of-the-art multi-task/transfer learning approaches and show comparable or superior performance while requiring fewer network parameters. (See the sketch following this entry.)
- Published
- 2020
- Full Text
- View/download PDF
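The entry above (no. 12) routes each state through sub-networks via an attention network so that tasks share parameters only where helpful. A minimal sketch of state-level soft routing over sub-networks follows; the layer sizes and the single-layer attention are illustrative assumptions.

```python
import torch

class AttentiveMultiTaskPolicy(torch.nn.Module):
    """Soft mixture of sub-networks selected per state by an attention net."""

    def __init__(self, state_dim=8, hidden=32, n_subnets=3, n_actions=4):
        super().__init__()
        self.subnets = torch.nn.ModuleList(
            [torch.nn.Sequential(torch.nn.Linear(state_dim, hidden),
                                 torch.nn.ReLU(),
                                 torch.nn.Linear(hidden, n_actions))
             for _ in range(n_subnets)])
        self.attention = torch.nn.Linear(state_dim, n_subnets)

    def forward(self, state):
        weights = torch.softmax(self.attention(state), dim=-1)           # (B, K)
        outputs = torch.stack([net(state) for net in self.subnets], 1)   # (B, K, A)
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)              # (B, A)

policy = AttentiveMultiTaskPolicy()
logits = policy(torch.randn(5, 8))
print(logits.shape)   # torch.Size([5, 4])
```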
13. A Comparative Study on Unsupervised Domain Adaptation for Coffee Crop Mapping
- Author
- Edemir Ferreira, Mário S. Alvim, Jefersson A. dos Santos, and Hugo N. Oliveira
- Subjects
- Domain adaptation, Computer science, business.industry, Crop mapping, Negative transfer, 02 engineering and technology, Machine learning, computer.software_genre, 01 natural sciences, Visual behavior, Rendering (computer graphics), 0103 physical sciences, 0202 electrical engineering, electronic engineering, information engineering, 020201 artificial intelligence & image processing, Artificial intelligence, 010306 general physics, business, Classifier (UML), computer
- Abstract
In this work, we investigate the application of existing unsupervised domain adaptation (UDA) approaches to the task of transferring knowledge between crop regions with different coffee patterns. Given a geographical region with fully mapped coffee plantations, we observe that this knowledge can be used to train a classifier and map a new county without labeled samples from the target region. Experimental results show that transferring knowledge via UDA strategies performs better than simply applying a classifier trained in one region to predict coffee crops in another. However, UDA methods may lead to negative transfer, which may indicate that the domains are excessively dissimilar, rendering transfer strategies ineffective. We observe a meaningful complementary contribution between the coffee crop datasets, and the visual behavior suggests the existence of clusters of samples that are more likely to be drawn from a specific dataset. (See the sketch following this entry.)
- Published
- 2019
- Full Text
- View/download PDF
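The entry above (no. 13) compares several unsupervised domain adaptation strategies for coffee crop mapping. As one representative UDA baseline, not necessarily among those the study evaluates, the sketch below applies CORrelation ALignment (CORAL): it whitens source features and re-colors them with the target covariance before a classifier is trained.

```python
import numpy as np

def coral(source, target, eps=1e-5):
    """CORAL: whiten source features, then re-color with target covariance."""
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])

    def sqrtm(m, inverse=False):
        # Matrix (inverse) square root via eigendecomposition (SPD matrices).
        vals, vecs = np.linalg.eigh(m)
        vals = np.clip(vals, eps, None)
        d = vals ** (-0.5 if inverse else 0.5)
        return (vecs * d) @ vecs.T

    return source @ sqrtm(cs, inverse=True) @ sqrtm(ct)

rng = np.random.default_rng(0)
src = rng.normal(size=(200, 6)) * 2.0 + 1.0   # source-region features
tgt = rng.normal(size=(150, 6)) * 0.5         # target-region features
src_aligned = coral(src, tgt)
# After alignment, the source covariance matches the target covariance.
print(np.allclose(np.cov(src_aligned, rowvar=False), np.cov(tgt, rowvar=False), atol=0.5))
```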
14. Adaptive Generative Initialization in Transfer Learning
- Author
- Wenjun Bai, Changqin Quan, and Zhi-Wei Luo
- Subjects
- Artificial neural network, Computer science, business.industry, Initialization, Negative transfer, 010501 environmental sciences, Machine learning, computer.software_genre, 01 natural sciences, Task (project management), 010104 statistics & probability, Limit (mathematics), Artificial intelligence, 0101 mathematics, Transfer of learning, business, computer, MNIST database, Generative grammar, 0105 earth and related environmental sciences
- Abstract
Despite numerous studies on transfer learning, no consensus has been reached on the optimal transfer method. To provide a unified theoretical understanding of transfer learning, we rephrase its crux as the pursuit of an optimal initialisation for the to-be-transferred task. To obtain such an initialisation, we propose a novel technique: adapted generative initialisation. Beyond boosting task transfer, and more importantly, the proposed initialisation can also bound the transfer benefits, defending against devastating negative transfer. In the first stage of the proposed initialisation, the incongruency between a task and its assigned learner (model) is alleviated by feeding knowledge of the target learner into the training of the source learner; the subsequent generative stage then ensures that the adapted initialisation is properly produced for the target learner. The superiority of the proposed initialisation over conventional neural-network-based approaches was validated in a preliminary experiment on the MNIST dataset.
- Published
- 2018
- Full Text
- View/download PDF
15. Feature Learning and Transfer Performance Prediction for Video Reinforcement Learning Tasks via a Siamese Convolutional Neural Network
- Author
- Yang Gao, Hao Wang, and Jinhua Song
- Subjects
- Artificial neural network, business.industry, Computer science, Deep learning, Negative transfer, 02 engineering and technology, Convolutional neural network, 020204 information systems, Softmax function, 0202 electrical engineering, electronic engineering, information engineering, Reinforcement learning, 020201 artificial intelligence & image processing, Artificial intelligence, Transfer of learning, business, Feature learning
- Abstract
In this paper, we tackle the negative transfer problem with a deep learning method that predicts the transfer performance (positive or negative transfer) between two reinforcement learning tasks. We consider same-domain transfer for video reinforcement learning tasks, such as video games, which can be described as images and perceived by an agent with visual ability. Our method trains a neural network directly from raw task descriptions without other prior knowledge such as task models, target-task samples, or human experience. The architecture consists of two parts: a siamese convolutional neural network that learns features for each pair of tasks and a softmax layer that predicts the binary transfer performance. We conduct extensive experiments in the maze domain and the Ms. PacMan domain to evaluate our method. The results show its effectiveness and superiority compared with the baseline methods. (See the sketch following this entry.)
- Published
- 2018
- Full Text
- View/download PDF
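The entry above (no. 15) trains a siamese convolutional network on pairs of task images with a softmax head that predicts positive versus negative transfer. A compact sketch of such a pair classifier is below; the input resolution, channel counts, and the concatenation of pair features are illustrative assumptions.

```python
import torch

class SiameseTransferPredictor(torch.nn.Module):
    """Shared CNN encodes both task images; a linear head predicts
    positive (1) vs. negative (0) transfer from the pair."""

    def __init__(self):
        super().__init__()
        self.encoder = torch.nn.Sequential(
            torch.nn.Conv2d(1, 8, 3, stride=2, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(8, 16, 3, stride=2, padding=1), torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten())
        self.head = torch.nn.Linear(2 * 16, 2)   # concatenated pair features -> 2 classes

    def forward(self, task_a, task_b):
        fa, fb = self.encoder(task_a), self.encoder(task_b)
        return self.head(torch.cat([fa, fb], dim=1))   # logits for a softmax / CE loss

model = SiameseTransferPredictor()
a, b = torch.randn(4, 1, 32, 32), torch.randn(4, 1, 32, 32)   # toy task descriptions
logits = model(a, b)
labels = torch.tensor([1, 0, 1, 0])
loss = torch.nn.functional.cross_entropy(logits, labels)
print(logits.shape, float(loss))
```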
16. English Lexical Stress Production by Native Speakers of Tibetan and Uyghur
- Author
- Yingjie Zhao, Dan Hu, Jie Lian, and Hui Feng
- Subjects
- American English, Stress (linguistics), Negative transfer, Syllable, Psychology, Affect (psychology), Linguistics
- Abstract
This study investigates the production of English lexical stress by native speakers of Tibetan and Uyghur, and the factors that may affect stress assignment. Thirty subjects in their twenties participated, with 10 native speakers (gender balanced) for each language, i.e. native speakers of Uyghur (NSUs), native speakers of Tibetan (NSTs), and native speakers of American English (NSAs). A total of 4,000 tokens were collected, judged, and analyzed. Results indicate that: (1) Consistent with the prediction of the Stress Typology Model, less negative transfer is observed in NSTs than in NSUs in stress production. (2) Compared with NSAs, NSUs and NSTs employ different acoustic features when assigning stress. (3) Stress position affects the accuracy of stress production by NSUs, and also the acoustic features of NSTs and NSUs when assigning stress; a speech-final lengthening effect is observed. (4) Syllable structure has little effect on the accuracy of stress production.
- Published
- 2018
- Full Text
- View/download PDF
17. Improving Transfer Learning in Cross Lingual Opinion Analysis Through Negative Transfer Detection
- Author
- Qin Lu, Yuhui Cao, Ruifeng Xu, Qikang Wei, and Lin Gui
- Subjects
- Cross lingual, business.industry, Computer science, media_common.quotation_subject, Gaussian, Negative transfer, Machine learning, computer.software_genre, Class (biology), symbols.namesake, Noise, Opinion analysis, symbols, Quality (business), Artificial intelligence, Transfer of learning, business, computer, media_common
- Abstract
Transfer learning has been used as a machine learning method to make good use of available language resources for resource-scarce languages. However, cumulative class noise during the iterations of transfer learning can lead to negative transfer, which can adversely affect performance when more training data is used. In this paper, we propose a novel transfer learning method that can detect negative transfer. The approach identifies high-quality samples after certain iterations to detect class noise in newly transferred training samples and removes it to reduce misclassification. With the ability to detect and remove bad training samples, our method can make full use of the large amount of unlabeled training data available in the target language. The most important contribution of this paper is the theory of class noise detection: our new detection method overcomes the theoretical flaw of a previous method based on the Gaussian distribution. We apply this transfer learning method with negative transfer detection to cross-lingual opinion analysis. Evaluation on the NLP&CC 2013 cross-lingual opinion analysis dataset shows that the proposed approach outperforms state-of-the-art systems. (See the sketch following this entry.)
- Published
- 2015
- Full Text
- View/download PDF
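The entry above (no. 17) removes class-noise samples that accumulate over transfer-learning iterations. The authors' detector corrects a flaw in a Gaussian-based criterion; the sketch below substitutes a much simpler confidence-margin filter to illustrate the noise-removal step, so the filtering rule itself is an assumption.

```python
import numpy as np

def filter_transferred_samples(probs, pseudo_labels, margin=0.3):
    """Keep transferred samples whose top-1 vs. top-2 probability margin
    is large and whose prediction agrees with the pseudo label; drop the
    rest as likely class noise.

    probs: (n, n_classes) classifier probabilities on newly transferred
    samples; pseudo_labels: their current labels. Returns kept indices.
    """
    sorted_p = np.sort(probs, axis=1)
    confident = (sorted_p[:, -1] - sorted_p[:, -2]) >= margin
    agrees = probs.argmax(axis=1) == pseudo_labels
    return np.flatnonzero(confident & agrees)

probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.2, 0.8]])
pseudo = np.array([0, 0, 1])
print(filter_transferred_samples(probs, pseudo))   # -> [0 2]; sample 1 is dropped as noise
```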
18. Cross-Domain Collaborative Recommendation by Transfer Learning of Heterogeneous Feedbacks
- Author
- Ding Yonggang, Shijun Li, Sha Yang, Jun Wang, and Wei Yu
- Subjects
- Multimedia, Computer science, business.industry, Big data, Negative transfer, Construct (python library), Recommender system, computer.software_genre, Machine learning, Bottleneck, Similarity (psychology), Collaborative filtering, Artificial intelligence, Transfer of learning, business, computer
- Abstract
With the rapid development of the information society, the era of big data has arrived. Various recommendation systems make recommendations by mining useful knowledge from massive data. Big data are often multi-source and heterogeneous, which seriously challenges recommendation. Collaborative filtering is a widely used recommendation method, but data sparseness is its major bottleneck. Transfer learning can overcome this problem by transferring learned knowledge from auxiliary data to the target data for cross-domain recommendation. Many traditional transfer learning models for cross-domain collaborative recommendation assume that multiple domains share a latent common rating pattern, which may lead to negative transfer, and they apply only to homogeneous feedbacks. To address these problems, we propose a new transfer learning model. We perform collective factorization of the rating matrices of the target data and its auxiliary data to transfer rating information among heterogeneous feedbacks and obtain the initial latent factors of users and items, from which we construct similarity graphs. We then predict the missing ratings by twin-bridge transfer learning over the latent factors and similarity graphs. Experiments show that our model outperforms state-of-the-art models for cross-domain recommendation. (See the sketch following this entry.)
- Published
- 2015
- Full Text
- View/download PDF
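The entry above (no. 18) starts from a collective factorization of the target and auxiliary rating matrices with shared user factors. A bare-bones sketch of that first step (alternating gradient updates on a shared user matrix; the dimensions, learning rate, and shared-user assumption are illustrative) is given below; the similarity-graph and twin-bridge stages are omitted.

```python
import numpy as np

def collective_mf(R_target, R_aux, k=4, lr=0.01, reg=0.1, epochs=200, seed=0):
    """Factorize two rating matrices with a shared user-factor matrix U.

    R_target ~ U @ V_t.T and R_aux ~ U @ V_a.T; zeros are treated as
    missing. Returns U, V_t, V_a (initial latent factors for later stages).
    """
    rng = np.random.default_rng(seed)
    n_users = R_target.shape[0]
    U = rng.normal(scale=0.1, size=(n_users, k))
    V_t = rng.normal(scale=0.1, size=(R_target.shape[1], k))
    V_a = rng.normal(scale=0.1, size=(R_aux.shape[1], k))
    for _ in range(epochs):
        for R, V in ((R_target, V_t), (R_aux, V_a)):
            mask = R > 0                       # observed entries only
            err = mask * (R - U @ V.T)
            U += lr * (err @ V - reg * U)      # shared user factors see both matrices
            V += lr * (err.T @ U - reg * V)
    return U, V_t, V_a

R_t = np.array([[5, 0, 3], [0, 4, 0], [1, 0, 5]], dtype=float)            # target ratings
R_a = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0]], dtype=float)   # auxiliary feedback
U, V_t, V_a = collective_mf(R_t, R_a)
print(np.round(U @ V_t.T, 2))   # reconstructed / predicted target ratings
```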