Dialogue act (DA) classification plays an important role in understanding, interpreting, and modeling dialogue. Dialogue acts represent the intended meaning of an utterance, which is associated with its illocutionary force (the speaker's intention), such as greetings, questions, requests, statements, and agreements. In natural language processing (NLP) applications, developing a DA annotation scheme, or taxonomy, is often a first step in working with a corpus. Such annotation schemes provide a set of DA labels that are used to manually label a specific corpus, thus capturing the fine-grained intended meanings of utterances. However, dialogue act annotation is a complex task that requires understanding not only the linguistic content of an utterance but also the context in which it was uttered. Researchers who wish to annotate a new corpus are thus tasked with adopting a subset of an existing taxonomy, combining two or more existing taxonomies, or creating an entirely new taxonomy. Consequently, researchers most commonly develop new dialogue act annotation schemes, further increasing the number of distinct DA labels and making it difficult to train DA classification models that generalize across related or different corpora. Moreover, without access to the specific corpora, it is even more difficult to define and agree upon a fixed dialogue act annotation scheme that can be applied across different corpora, even within similar domains.

In collaborative learning, DAs are used to represent the pragmatic goals of utterances. They offer many cues for assessing the effectiveness of collaboration and for understanding the kinds of dialogue behaviors that impact learning, performance, and problem-solving abilities. By assigning DAs to collaborative dialogue, researchers can better assess the effectiveness of collaborative learning efforts and identify dialogue patterns or behaviors that may have a positive or negative effect on the collaborative learning process, such as patterns that predict learners' satisfaction with their partners. However, collaborative learning dialogue takes place in highly domain-specific contexts, which makes DA classification particularly challenging. To reduce the manual effort required for dialogue act annotation, advanced machine learning and deep learning text classification models hold promise for automatically classifying dialogue acts in the collaborative learning context. However, existing dialogue act classification models often struggle to generalize, even within similar domains or collaborative learning contexts, due to variations in dialogue patterns, domain-specific tasks, and dialogue act labels, as well as insufficient training data.

This dissertation investigates four main challenges for dialogue act classification in collaborative learning contexts: 1) limited data for adequately training a high-performance classification model; 2) too many classes, owing to fine-grained DA labels, to train a high-performance classifier; 3) difficulty in mapping dialogue act labels across corpora due to variations in annotation schemes; and 4) data privacy concerns that restrict data sharing and thereby limit opportunities for model improvement via cross-corpora training.
This dissertation addresses these challenges by investigating two main transfer learning approaches: cross-corpora domain adaptation, which aims to mitigate the problem of insufficient data, and federated transfer learning, which aims to address the data privacy concerns that arise during DA classification model training.

To examine the impact of cross-corpora domain adaptation on DA classification, experiments were conducted with fine-tuned pretrained transformer models across three corpora of collaborative learning data. The cross-corpora fine-tuned models achieved overall improvements in accuracy and F1-score over baseline models fine-tuned on any individual corpus, and they outperformed the baselines in scenarios with limited dialogue act representation. These results show that the approach can improve classification performance, especially when a corpus has limited representation of certain dialogue acts, and highlight its potential benefits for future domain-specific dialogue act classification tasks.

To investigate the impact of federated transfer learning (FTL) on DA classification, I implemented FTL with two standard aggregation methods and conducted experiments using BERT and RoBERTa models. Treating the three corpora as physically separate data locations, the experiments demonstrated the feasibility of training a global model from multiple, distributed datasets concurrently. Although the FTL models underperformed the baseline models, the findings suggest room for improvement, and the data privacy protection afforded by FTL is important for future data-driven investigations. In addition, using a domain-related model as the global model during federated transfer learning improved performance compared to using the original pretrained model.

The main contributions of this research include the novel finding that the cross-corpora domain adaptation approach improves dialogue act classification performance in the collaborative learning context, as well as the implementation of FTL for DA classification. These approaches could set a new benchmark for future work in cross-corpora domain adaptation, federated transfer learning, and dialogue act classification in collaborative learning. This work could also serve as a helpful reference for NLP researchers using transfer learning to improve performance on downstream tasks. In addition, it may have practical implications for various NLP applications in educational contexts, including the design and development of dialogue systems to support learners, educational technologies using collaborative techniques, and collaborative learning environments.
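As a rough illustration of the cross-corpora domain adaptation setup described above, the following sketch fine-tunes a pretrained transformer for DA classification on utterances pooled from multiple corpora. It is a minimal sketch, not the dissertation's implementation: the HuggingFace Transformers/PyTorch stack, the toy utterances, the three-way DA label set, and the hyperparameters are all assumptions made for illustration.

```python
# Minimal sketch of cross-corpora fine-tuning for dialogue act classification.
# Assumptions (not from the dissertation): HuggingFace Transformers + PyTorch,
# a shared DA label set across corpora, and toy placeholder utterances.
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder utterances from two hypothetical source corpora, already mapped
# onto a shared DA label set: 0 = Statement, 1 = Question, 2 = Agreement.
corpus_a = [("I think we should use a loop here.", 0),
            ("What does this error mean?", 1)]
corpus_b = [("Yeah, that makes sense.", 2),
            ("Should we test it first?", 1)]
combined = corpus_a + corpus_b  # pooled cross-corpora training data

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)

def collate(batch):
    texts, labels = zip(*batch)
    enc = tokenizer(list(texts), padding=True, truncation=True,
                    return_tensors="pt")
    enc["labels"] = torch.tensor(labels)
    return enc

loader = DataLoader(combined, batch_size=2, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):                  # a few epochs, for illustration only
    for batch in loader:
        optimizer.zero_grad()
        loss = model(**batch).loss      # cross-entropy over the DA labels
        loss.backward()
        optimizer.step()

# The resulting model would then be evaluated on (or further adapted to) the
# target collaborative learning corpus.
```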
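The federated transfer learning setup can likewise be illustrated with federated averaging (FedAvg), one standard aggregation method; whether it matches either of the aggregation methods used in the dissertation is an assumption. In this sketch, each corpus acts as a client that fine-tunes a local copy of the global model, and only model parameters, never raw dialogue data, are sent to the server for aggregation; the helper train_locally is hypothetical.

```python
# Sketch of federated averaging (FedAvg) for federated transfer learning.
# Assumption: each "client" holds one corpus locally and shares only its
# fine-tuned model parameters with the aggregation server.
from collections import OrderedDict
import torch

def fedavg(client_states, client_sizes):
    """Weighted average of client state_dicts, weighted by local data size."""
    total = float(sum(client_sizes))
    avg = OrderedDict()
    for key in client_states[0]:
        weighted = sum(state[key].float() * (n / total)
                       for state, n in zip(client_states, client_sizes))
        # cast back to the original dtype so the global model can load it
        avg[key] = weighted.to(client_states[0][key].dtype)
    return avg

# One communication round (usage outline; train_locally is a hypothetical
# helper that fine-tunes a copy of the global model on a private corpus):
#   states = [train_locally(global_model, corpus).state_dict()
#             for corpus in client_corpora]
#   sizes = [len(corpus) for corpus in client_corpora]
#   global_model.load_state_dict(fedavg(states, sizes))
```

Initializing global_model from a domain-related checkpoint rather than the original pretrained model reflects the finding reported above that a domain-related global model improves FTL performance.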