20 results for "Cross-lingual learning"
Search Results
2. Ad astra or astray: Exploring linguistic knowledge of multilingual BERT through NLI task.
- Author
Tikhonova, Maria, Mikhailov, Vladislav, Pisarevskaya, Dina, Malykh, Valentin, and Shavrina, Tatiana
- Subjects
LANGUAGE models, KNOWLEDGE transfer, NATURAL languages, MULTILINGUALISM, INFERENCE (Logic), NUMERACY
- Abstract
Recent research has reported that standard fine-tuning approaches can be unstable due to being prone to various sources of randomness, including but not limited to weight initialization, training data order, and hardware. Such brittleness can lead to different evaluation results, prediction confidences, and generalization inconsistency of the same models independently fine-tuned under the same experimental setup. Our paper explores this problem in natural language inference, a common task in benchmarking practices, and extends the ongoing research to the multilingual setting. We propose six novel textual entailment and broad-coverage diagnostic datasets for French, German, and Swedish. Our key findings are that the mBERT model demonstrates fine-tuning instability for categories that involve lexical semantics, logic, and predicate-argument structure and struggles to learn monotonicity, negation, numeracy, and symmetry. We also observe that using extra training data only in English can enhance the generalization performance and fine-tuning stability, which we attribute to the cross-lingual transfer capabilities. However, the ratio of particular features in the additional training data might rather hurt the performance for model instances. We are publicly releasing the datasets, hoping to foster the diagnostic investigation of language models (LMs) in a cross-lingual scenario, particularly in terms of benchmarking, which might promote a more holistic understanding of multilingualism in LMs and cross-lingual knowledge transfer. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
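A note on the instability measurements in the abstract above: studies of this kind typically repeat an identical fine-tuning run under several random seeds and report the spread of evaluation scores. A minimal schematic in Python, where the training-and-evaluation function is a placeholder rather than the paper's actual mBERT/NLI setup:

```python
# Sketch: measuring fine-tuning instability across random seeds.
# finetune_and_eval is a placeholder; a real version would seed all RNGs,
# fine-tune mBERT on the NLI data, and return the evaluation accuracy.
import random
import statistics

def finetune_and_eval(seed: int) -> float:
    random.seed(seed)                          # a real run seeds torch/numpy too
    return 0.80 + random.uniform(-0.03, 0.03)  # stand-in for a measured score

scores = [finetune_and_eval(seed) for seed in range(5)]
print(f"mean={statistics.mean(scores):.3f}, stdev={statistics.stdev(scores):.3f}")
```

A large standard deviation across seeds is exactly the brittleness the paper reports for categories such as monotonicity and negation.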
3. Using Cross Lingual Learning for Detecting Hate Speech in Portuguese
- Author
Firmino, Anderson Almeida, de Baptista, Cláudio Souza, de Paiva, Anselmo Cardoso, Strauss, Christine, editor, Kotsis, Gabriele, editor, Tjoa, A Min, editor, and Khalil, Ismail, editor
- Published
- 2021
- Full Text
- View/download PDF
4. Exploring Parameter Sharing Techniques for Cross-Lingual and Cross-Task Supervision
- Author
Pikuliak, Matúš, Šimko, Marián, Espinosa-Anke, Luis, editor, Martín-Vide, Carlos, editor, and Spasić, Irena, editor
- Published
- 2020
- Full Text
- View/download PDF
5. Combining Cross-lingual and Cross-task Supervision for Zero-Shot Learning
- Author
Pikuliak, Matúš, Šimko, Marián, Sojka, Petr, editor, Kopeček, Ivan, editor, Pala, Karel, editor, and Horák, Aleš, editor
- Published
- 2020
- Full Text
- View/download PDF
6. From Tokens to Trees: Mapping Syntactic Structures in the Deserts of Data-Scarce Languages
- Author
Vilares, David and Muñoz Ortiz, Alberto
- Abstract
Low-resource learning in natural language processing focuses on developing effective resources, tools, and technologies for languages that are less popular within industry and academia. This effort is crucial for several reasons, including ensuring that as many languages as possible are represented digitally and enhancing access to language technologies for native speakers of minority languages. In this context, this paper outlines the motivation, research lines, and results of a Leonardo Grant, funded by FBBVA, on low-resource languages and parsing as sequence labeling. The project's primary aim was to devise fast and accurate methods for low-resource syntactic parsing and to examine evaluation strategies as well as strengths and weaknesses in comparison to alternative parsing strategies.
- Published
- 2024
7. Conversational Question Generation in Russian
- Author
Olesia Makhnytkina, Anton Matveev, Aleksei Svischev, Polina Korobova, Dmitrii Zubok, Nikita Mamaev, and Artem Tchirkovskii
- Subjects
conversational question generation, automatic tutoring, cross-lingual learning, Telecommunication, TK5101-6720
- Abstract
This paper's aim is to discuss the possibilities of automatic generation of conversational questions in Russian. We explore the possibility of using the CoQA dataset translated into Russian for training an encoder-decoder model. We consider several techniques to improve the quality of the questions generated in Russian. Results are evaluated manually. Combining a neural network approach with a rule-based approach, we tried to develop a system for the automatic examination of university students.
- Published
- 2020
- Full Text
- View/download PDF
8. Cross-Lingual Passage Re-Ranking With Alignment Augmented Multilingual BERT
- Author
Dongmei Chen, Sheng Zhang, Xin Zhang, and Kaijing Yang
- Subjects
Passage re-ranking, cross-lingual learning, pre-training tasks, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
The task of Cross-lingual Passage Re-ranking (XPR) aims to rank a list of candidate passages in multiple languages given a query, which is generally challenged by two main issues: (1) the query and passages to be ranked are often in different languages, which requires strong cross-lingual alignment, and (2) the lack of annotated data for model training and evaluation. In this article, we propose a two-stage approach to address these issues. At the first stage, we introduce the task of Cross-lingual Paraphrase Identification (XPI) as an extra pre-training to augment the alignment by leveraging a large unsupervised parallel corpus. This task aims to identify whether two sentences, which may be from different languages, have the same meaning. At the second stage, we introduce and compare three effective strategies for cross-lingual training. To verify the effectiveness of our method, we construct an XPR dataset by assembling and modifying two monolingual datasets. Experimental results show that our augmented pre-training contributes significantly to the XPR task. Besides, we directly transfer the trained model to test on out-domain data which are constructed by modifying three multi-lingual Question Answering (QA) datasets. The results demonstrate the cross-domain robustness of the proposed approach.
- Published
- 2020
- Full Text
- View/download PDF
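The XPI pre-training stage described in the abstract above is, at bottom, binary sentence-pair classification over a parallel corpus. A minimal sketch assuming the Hugging Face transformers API and mBERT; the checkpoint, toy data, and bare-bones loop are illustrative assumptions, not the paper's exact setup:

```python
# Sketch: cross-lingual paraphrase identification (XPI) as alignment
# pre-training. Positives are parallel sentence pairs; negatives are
# sampled mismatches. Label 1 = same meaning.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)

pairs = [
    ("How do I reset my password?", "¿Cómo restablezco mi contraseña?", 1),
    ("How do I reset my password?", "El gato duerme en el sofá.", 0),
]
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for sent_a, sent_b, label in pairs:
    inputs = tokenizer(sent_a, sent_b, return_tensors="pt", truncation=True)
    loss = model(**inputs, labels=torch.tensor([label])).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The encoder aligned this way would then be fine-tuned for the actual re-ranking objective in the second stage.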
9. Multi-Level Cross-Lingual Transfer Learning With Language Shared and Specific Knowledge for Spoken Language Understanding
- Author
Keqing He, Weiran Xu, and Yuanmeng Yan
- Subjects
Spoken language understanding, cross-lingual learning, linguistic knowledge transfer, adversarial learning, multi-level knowledge representation, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Recently, conversational agents have effectively improved their understanding capabilities through neural networks. Such deep neural models, however, do not apply to most human languages due to the lack of annotated training data for various NLP tasks. In this paper, we propose a multi-level cross-lingual transfer model with language-shared and language-specific knowledge to improve the spoken language understanding of low-resource languages. Our method explicitly separates the model into a language-shared part and a language-specific part to transfer cross-lingual knowledge and improve monolingual slot tagging, especially for low-resource languages. To refine the shared knowledge, we add a language discriminator and employ adversarial training to reinforce information separation. Besides, we adopt novel multi-level knowledge transfer in an incremental and progressive way to acquire multi-granularity shared knowledge rather than a single layer. To mitigate the discrepancies between the feature distributions of language-specific and shared knowledge, we propose neural adapters to fuse knowledge automatically. Experiments show that our proposed model consistently outperforms the monolingual baseline by a statistically significant margin of up to 2.09%, with an even higher improvement of 12.21% in the zero-shot setting.
- Published
- 2020
- Full Text
- View/download PDF
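The adversarial separation of language-shared and language-specific knowledge described above is commonly realized with a gradient reversal layer in front of a language discriminator. A small PyTorch sketch of that generic pattern, where the encoder type, dimensions, and discriminator are assumptions rather than the paper's architecture:

```python
# Sketch: gradient reversal + language discriminator. Reversed gradients
# push the shared encoder toward language-invariant features.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip (and scale) the gradient so the encoder learns to fool
        # the discriminator, stripping language-specific cues.
        return -ctx.lambd * grad_output, None

shared_encoder = nn.GRU(input_size=300, hidden_size=256, batch_first=True)
lang_discriminator = nn.Linear(256, 2)   # assume two languages

x = torch.randn(4, 10, 300)              # batch of 4 embedded utterances
langs = torch.tensor([0, 0, 1, 1])       # language id of each example
_, h = shared_encoder(x)                 # h: (1, batch, 256)
rev = GradReverse.apply(h.squeeze(0), 1.0)
adv_loss = nn.functional.cross_entropy(lang_discriminator(rev), langs)
adv_loss.backward()                      # encoder receives reversed gradients
```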
10. Source-Free Transductive Transfer Learning for Structured Prediction
- Author
Kurniawan, Kemal Maulana
- Abstract
Current transfer learning approaches require two strong assumptions: the source domain data is available and the target domain has labelled data. These assumptions are problematic when both the source domain data is private and the target domain has no labelled data. Thus, we consider the source-free unsupervised transfer setup in which the assumptions are violated across both languages and domains (genres). To transfer structured prediction models in the source-free setting, we propose two methods: Parsimonious Parser Transfer (PPT) designed for single-source transfer of dependency parsers across languages, and PPTX which is the multi-source version of PPT. Both methods outperform baselines. We then propose to improve PPTX with logarithmic opinion pooling (PPTX-LOP), and find that it is an effective multi-source transfer method for structured prediction in general. Next, we study if our proposed source-free transfer methods provide improvements when pretrained language models (PTLMs) are employed. We first propose Parsimonious Transfer for Sequence Tagging (PTST) which is a variation of PPT designed for sequence tagging. Then, we evaluate PTST and PPTX-LOP on domain adaptation of semantic tasks using PTLMs. We show that for globally normalised models, PTST and PPTX-LOP improve precision and recall respectively. Besides unlabelled data, the target domain may have models trained on various tasks (but not the task of interest). To investigate if these models can be used successfully to improve performance in source-free transfer, we propose two methods. We find that leveraging these models can improve recall over direct transfer with one of the proposed methods. Finally, we critically discuss and conclude the findings in this thesis. We cover relevant subsequent work and close with a discussion on limitations and future work.
- Published
- 2023
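Logarithmic opinion pooling, as used in PPTX-LOP above, combines the output distributions of several source models through a renormalized weighted geometric mean. A small sketch with uniform weights; the thesis applies this to structured outputs such as dependency trees, which this flat-label example elides:

```python
# Sketch: logarithmic opinion pooling of K source models' label
# distributions (a renormalized weighted geometric mean).
import numpy as np

def log_opinion_pool(probs, weights=None):
    """probs: (K, num_labels) array of distributions from K source models."""
    probs = np.asarray(probs, dtype=float)
    k = probs.shape[0]
    weights = np.full(k, 1.0 / k) if weights is None else np.asarray(weights)
    log_pool = weights @ np.log(probs + 1e-12)  # weighted sum of log-probs
    pool = np.exp(log_pool)
    return pool / pool.sum()                    # renormalize to a distribution

# Three source parsers vote on a label; pooling rewards labels that
# no individual source strongly rules out.
print(log_opinion_pool([[0.7, 0.2, 0.1],
                        [0.6, 0.3, 0.1],
                        [0.2, 0.7, 0.1]]))
```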
11. Cross-Lingual and Genre-Supervised Parsing and Tagging for Low-Resource Spoken Data
- Author
Fosteri, Iliana
- Abstract
Dealing with low-resource languages is a challenging task because of the absence of sufficient data to train machine-learning models to make predictions on these languages. One way to deal with this problem is to use data from higher-resource languages, which enables the transfer of learning from those languages to the low-resource target ones. The present study focuses on dependency parsing and part-of-speech tagging of low-resource languages belonging to the spoken genre, i.e., languages whose treebank data is transcribed speech. These are the following: Beja, Chukchi, Komi-Zyrian, Frisian-Dutch, and Cantonese. Our approach involves investigating different types of transfer languages, employing MaChAmp, a state-of-the-art parser and tagger that uses contextualized word embeddings (mBERT and XLM-R in particular). The main idea is to explore how genre, language similarity, neither of the two, or the combination of both affect model performance in the aforementioned downstream tasks for our selected target treebanks. Our findings suggest that in order to capture speech-specific dependency relations, we need to incorporate at least some genre-matching source data, while language-similarity-matching source data are a better candidate when the task at hand is part-of-speech tagging. We also explore the impact of multi-task learning in one of our proposed methods, but we observe only minor differences in model performance.
- Published
- 2023
12. COCO-CN for Cross-Lingual Image Tagging, Captioning, and Retrieval.
- Author
Li, Xirong, Xu, Chaoxi, Wang, Xiaoxu, Lan, Weiyu, Jia, Zhengxiong, Yang, Gang, and Xu, Jieping
- Abstract
This paper contributes to cross-lingual image annotation and retrieval in terms of data and baseline methods. We propose COCO-CN, a novel dataset enriching MS-COCO with manually written Chinese sentences and tags. For effective annotation acquisition, we develop a recommendation-assisted collective annotation system, automatically providing an annotator with several tags and sentences deemed to be relevant with respect to the pictorial content. Having 20,342 images annotated with 27,218 Chinese sentences and 70,993 tags, COCO-CN is currently the largest Chinese–English dataset that provides a unified and challenging platform for cross-lingual image tagging, captioning, and retrieval. We develop conceptually simple yet effective methods per task for learning from cross-lingual resources. Extensive experiments on the three tasks justify the viability of the proposed dataset and methods. Data and code are publicly available at https://github.com/li-xirong/coco-cn. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
13. Improving hate speech detection using Cross-Lingual Learning.
- Author
Firmino, Anderson Almeida, de Souza Baptista, Cláudio, and de Paiva, Anselmo Cardoso
- Subjects
HATE speech, AUTOMATIC speech recognition, LANGUAGE models, NATURAL language processing, PORTUGUESE language, ITALIAN language
- Abstract
The growth of social media worldwide has brought social benefits and challenges. One problem we highlight is the proliferation of hate speech on social media. We propose a novel method for detecting hate speech in texts using Cross-Lingual Learning. Our approach uses transfer learning from Pre-Trained Language Models (PTLM) with large corpora available to solve problems in languages with fewer resources for the specific task. The proposed methodology comprises four stages: corpora acquisition, PTLM definition, training strategies, and evaluation. We carried out experiments using Pre-Trained Language Models in English, Italian, and Portuguese (BERT and XLM-R) to verify which best suited the proposed method. We used corpora in English (WH) and Italian (Evalita 2018) as the source languages and the OffComBr-2 corpus in Portuguese as the target language. The results of the experiments showed that the proposed methodology is promising: for the OffComBr-2 corpus, the best state-of-the-art result was obtained (F1-measure = 92%).
• The development of a new methodology for hate speech detection.
• Portuguese hate speech detection using Cross-Lingual Learning.
• Up to 20% performance improvement over other models using the OffComBr-2 corpus. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
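The transfer recipe in the abstract above (fine-tune a multilingual PTLM on a resource-rich source corpus, then apply it to the lower-resource target language) reduces to a few lines. A sketch assuming XLM-R and toy placeholder data, not the paper's corpora or training strategies:

```python
# Sketch: cross-lingual transfer for hate speech detection. Train on
# labeled English (source) examples; predict on Portuguese (target)
# without target-language labels. Data are tiny placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2)    # 0 = neutral, 1 = hate speech

english_train = [("I hope you have a great day", 0),
                 ("People like you should disappear", 1)]
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for text, label in english_train:
    batch = tokenizer(text, return_tensors="pt", truncation=True)
    loss = model(**batch, labels=torch.tensor([label])).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()                             # zero-shot step on the target language
with torch.no_grad():
    batch = tokenizer("Tenha um ótimo dia", return_tensors="pt")
    pred = model(**batch).logits.argmax(-1).item()
print("hate" if pred == 1 else "neutral")
```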
14. Radiology report classification for low-resource languages using machine learning: A case study on brain hemorrhage detection
- Author
Bayrak, Gıyaseddin, Ganiz, Murat Can, Marmara Üniversitesi, Fen Bilimleri Enstitüsü, Bilgisayar Mühendisliği Anabilim Dalı, and Bilgisayar Mühendisliği Bilim Dalı
- Subjects
BERT, Machine Learning, Deep Learning, Cross-lingual Learning, Transfer Learning, Domain Adaptation, Radiology
- Abstract
Radiology reports play a vital role in the disease diagnosis and management process, and on certain occasions, they may contain critical findings that require immediate action by the treating physician. Automating the process of detecting such cases using Artificial Intelligence, specifically Natural Language Processing models, can significantly improve the speed of decision-making and potentially save lives. This thesis presents a novel study on detecting critical findings related to brain hemorrhage in Turkish radiology reports. We used approximately 30,000 labeled Brain Hemorrhage Computed Tomography (CT) reports to train supervised models and around 190,000 reports for pre-training and fine-tuning word embeddings and language models in mono-lingual and cross-lingual settings. To the best of our knowledge, this is the first study to utilize a large scale of Turkish radiology reports. Additionally, we demonstrate the impact of adapting pre-trained language models and static embeddings to the domain on the performance, finding that fine-tuning using domain-specific data improves classification accuracy.
- Published
- 2023
15. Translation-Based Implicit Annotation Projection for Zero-Shot Cross-Lingual Event Argument Extraction
- Author
Lou, Chenwei, Gao, Jun, Yu, Changlong, Wang, Wei, Zhao, Huan, Tu, Weiwei, and Xu, Ruifeng
- Abstract
Zero-shot cross-lingual event argument extraction (EAE) is a challenging yet practical problem in Information Extraction. Most previous works heavily rely on external structured linguistic features, which are not easily accessible in real-world scenarios. This paper investigates a translation-based method to implicitly project annotations from the source language to the target language. With the use of translation-based parallel corpora, no additional linguistic features are required during training and inference. As a result, the proposed approach is more cost-effective than previous works on zero-shot cross-lingual EAE. Moreover, our implicit annotation projection approach introduces less noise and hence is more effective and robust than explicit ones. Experimental results show that our model achieves the best performance, outperforming a number of competitive baselines. The thorough analysis further demonstrates the effectiveness of our model compared to explicit annotation projection approaches.
- Published
- 2022
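Translation-based implicit projection, as described above, amounts to machine-translating labeled source-language text and carrying the labels over to the translations, with no explicit alignment step. A sketch of the translation leg using a public MarianMT checkpoint; the language pair and the sentence-level simplification are assumptions, since the paper handles event-argument annotations:

```python
# Sketch: build silver target-language training data by translating
# labeled source sentences and keeping their labels.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"   # assumed English -> German pair
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

labeled_source = [("The company hired three engineers in May.", "Hire-event")]

silver_target = []
for sentence, label in labeled_source:
    batch = tokenizer(sentence, return_tensors="pt", truncation=True)
    translated = model.generate(**batch)
    target_sentence = tokenizer.decode(translated[0], skip_special_tokens=True)
    # The label transfers implicitly; no token-level alignment is computed.
    silver_target.append((target_sentence, label))

print(silver_target)
```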
16. Analysis of translation, social communication, and literary writing in Turkish with text mining
- Author
Çalışkan, Sevil and Can, Fazlı
- Subjects
Multi-lingual data, Spoken text analysis, Text mining, Stylometric analysis, Cross-lingual learning, Discourse analysis, Transfer learning
- Abstract
Text mining is an important research area considering the increase in text generation and the need for analysis. Text mining in Turkish is still not a well-invested research area compared to other languages. In this thesis, we analyze different types of Turkish text from different points of view, ending with an overall review of text mining in Turkish. First, we analyze the translation quality of a Turkish novel, My Name is Red, into English, French, and Spanish using features generated for each chapter. With the proposed method, translation loyalty to the original text can be quantified without any parallel comparisons. Then, we analyze the Turkish spoken texts of 98 people in different age groups in terms of the gender and age attributes of the speakers. We also analyze the difference between written and spoken texts in Turkish. Results show that it is possible to predict the attributes of a speaker from spoken text and that written and spoken texts differ significantly in terms of stylometric measures. Later on, we assess the cross-lingual transfer performance of multilingual networks from English to Turkish. We see that transfer is possible; however, zero-shot cross-lingual transfer still has some way to go to be competitive with monolingual networks for Turkish. Lastly, we conduct a time-based stylometric analysis of Ahmet Hamdi Tanpınar's works. We see that Ahmet Hamdi Tanpınar shows some differences compared to his contemporaries.
- Published
- 2020
17. Cross-Lingual Word Embeddings
- Author
Søgaard, Anders, Vulić, Ivan, Ruder, Sebastian, and Faruqui, Manaal
- Abstract
The majority of natural language processing (NLP) is English language processing, and while there is good language technology support for (standard varieties of) English, support for Albanian, Burmese, or Cebuano, and most other languages, remains limited. Being able to bridge this digital divide is important for scientific and democratic reasons but also represents an enormous growth potential. A key challenge for this to happen is learning to align basic meaning-bearing units of different languages. In this book, the authors survey and discuss recent and historical work on supervised and unsupervised learning of such alignments. Specifically, the book focuses on so-called cross-lingual word embeddings. The survey is intended to be systematic, using consistent notation and putting the available methods in comparable form, making it easy to compare wildly different approaches. In so doing, the authors establish previously unreported relations between these methods and are able to present a fast-growing literature in a very compact way. Furthermore, the authors discuss how best to evaluate cross-lingual word embedding methods and survey the resources available for students and researchers interested in this topic. Table of Contents: Preface / Introduction / Monolingual Word Embedding Models / Cross-Lingual Word Embedding Models: Typology / A Brief History of Cross-Lingual Word Representations / Word-Level Alignment Models / Sentence-Level Alignment Methods / Document-Level Alignment Models / From Bilingual to Multilingual Training / Unsupervised Learning of Cross-Lingual Word Embeddings / Applications and Evaluation / Useful Data and Software / General Challenges and Future Directions / Bibliography / Authors' Biographies.
- Published
- 2019
18. Learning Language-Independent Representations of Verbs and Adjectives from Multimodal Retrieval
- Author
Hansen, Victor Petren Bach and Sogaard, Anders
- Abstract
This paper presents a simple modification to previous work on learning cross-lingual, grounded word representations from image-word pairs that, unlike previous work, is robust across different parts of speech, e.g., able to find the translation of the adjective 'social' relying only on image features associated with its translation candidates. Our method does not rely on black-box image search engines or any direct cross-lingual supervision. We evaluate our approach on English-German and English-Japanese word alignment, as well as on existing English-German bilingual dictionary induction datasets.
- Published
- 2019
19. Cross-lingual learning for text processing: A survey.
- Author
Pikuliak, Matúš, Šimko, Marián, and Bieliková, Mária
- Subjects
ARTIFICIAL intelligence, MACHINE learning, KNOWLEDGE transfer, NATURAL languages
- Abstract
Many intelligent systems in business, government, or academia process natural language as input during inference, or they might even communicate with users in natural language. Natural language processing is currently often done with machine learning models. However, machine learning needs training data, and such data are often scarce for low-resource languages. The lack of data and the resulting poor performance of natural language processing can be addressed with cross-lingual learning. Cross-lingual learning is a paradigm for transferring knowledge from one natural language to another. The transfer of knowledge can help us overcome the lack of data in target languages and create intelligent systems and machine learning models for languages where this was not possible previously. Despite its increasing popularity and potential, no comprehensive survey on cross-lingual learning had been conducted so far. We survey 173 text processing cross-lingual learning papers and examine the tasks, datasets, and languages that were used. The most important contribution of our work is that we identify and analyze four types of cross-lingual transfer based on "what" is being transferred. Such insight might help other NLP researchers and practitioners understand how to use cross-lingual learning for a wide range of problems. In addition, we identify what we consider to be the most important research directions that might help the community focus their future work in cross-lingual learning. We present a comprehensive table of all the surveyed papers with various data related to the cross-lingual learning techniques they use. The table can be used to find relevant papers and compare approaches to cross-lingual learning. To the best of our knowledge, no survey of cross-lingual text processing techniques has been done at this scope before. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
20. Predicting Linguistic Structure with Incomplete and Cross-Lingual Supervision
- Author
Täckström, Oscar
- Subjects
structured prediction, semi-supervised learning, latent-variable model, Computer and Information Sciences, multilingual learning, named-entity recognition, cross-lingual learning, Language Technology (Computational Linguistics), dependency parsing, linguistic structure prediction, sentiment analysis, indirect supervision, partial supervision, part-of-speech tagging, ambiguous supervision
- Abstract
Contemporary approaches to natural language processing are predominantly based on statistical machine learning from large amounts of text, which has been manually annotated with the linguistic structure of interest. However, such complete supervision is currently only available for the world's major languages, in a limited number of domains and for a limited range of tasks. As an alternative, this dissertation considers methods for linguistic structure prediction that can make use of incomplete and cross-lingual supervision, with the prospect of making linguistic processing tools more widely available at a lower cost. An overarching theme of this work is the use of structured discriminative latent variable models for learning with indirect and ambiguous supervision; as instantiated, these models admit rich model features while retaining efficient learning and inference properties. The first contribution to this end is a latent-variable model for fine-grained sentiment analysis with coarse-grained indirect supervision. The second is a model for cross-lingual word-cluster induction and the application thereof to cross-lingual model transfer. The third is a method for adapting multi-source discriminative cross-lingual transfer models to target languages, by means of typologically informed selective parameter sharing. The fourth is an ambiguity-aware self- and ensemble-training algorithm, which is applied to target language adaptation and relexicalization of delexicalized cross-lingual transfer parsers. The fifth is a set of sequence-labeling models that combine constraints at the level of tokens and types, and an instantiation of these models for part-of-speech tagging with incomplete cross-lingual and crowdsourced supervision. In addition to these contributions, comprehensive overviews are provided of structured prediction with no or incomplete supervision, as well as of learning in the multilingual and cross-lingual settings. Through careful empirical evaluation, it is established that the proposed methods can be used to create substantially more accurate tools for linguistic processing, compared to both unsupervised methods and to recently proposed cross-lingual methods. The empirical support for this claim is particularly strong in the latter case; our models for syntactic dependency parsing and part-of-speech tagging achieve the hitherto best published results for a wide number of target languages, in the setting where no annotated training data is available in the target language.
- Published
- 2013