Neural-Hidden-CRF: A Robust Weakly-Supervised Sequence Labeler
- Publication Year :
- 2023
Abstract
- We propose a neuralized undirected graphical model, Neural-Hidden-CRF, to solve the weakly-supervised sequence labeling problem. Grounded in probabilistic undirected graph theory, Neural-Hidden-CRF embeds a hidden CRF layer and jointly models the word sequence, the latent ground-truth label sequence, and the weak label sequence from the global perspective that undirected graphical models enjoy. Within this framework, the powerful language model BERT or another deep model supplies rich contextual semantic knowledge to the latent ground-truth sequence, while the hidden CRF layer captures the internal label dependencies. Neural-Hidden-CRF is conceptually simple and empirically powerful: it obtains new state-of-the-art results on one crowdsourcing benchmark and three weak-supervision benchmarks, including gains over the recent advanced model CHMM of 2.80 F1 points in average generalization performance and 2.23 F1 points in average inference performance.
- Comment: 13 pages, 4 figures, accepted by SIGKDD-2023
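The abstract describes an architecture in which a BERT (or other deep) encoder informs a hidden CRF layer that links the word sequence, the latent ground-truth labels, and the observed weak labels. Below is a minimal, hypothetical PyTorch sketch of that kind of design, assuming HuggingFace transformers; the class name NeuralHiddenCRFSketch, the per-source confusion matrices, the way weak-label evidence is folded into the unary potentials, and all shapes and hyperparameters are illustrative assumptions, not the authors' implementation or exact factorization.

```python
# Illustrative sketch only: a BERT encoder + CRF-style layer that scores latent
# ground-truth labels while also accounting for weak labels from multiple sources.
import torch
import torch.nn as nn
from transformers import AutoModel


class NeuralHiddenCRFSketch(nn.Module):
    """Hypothetical BERT-encoder + hidden-CRF-style sequence labeler (not the authors' code)."""

    def __init__(self, num_labels: int, num_sources: int, encoder_name: str = "bert-base-cased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Emission scores for the latent ground-truth labels.
        self.emission = nn.Linear(hidden, num_labels)
        # Shared transition matrix capturing label dependencies.
        self.transitions = nn.Parameter(torch.zeros(num_labels, num_labels))
        # One (true label -> weak label) score matrix per weak-supervision source.
        self.source_confusion = nn.Parameter(torch.zeros(num_sources, num_labels, num_labels))

    def unary_potentials(self, input_ids, attention_mask, weak_labels):
        """Per-token scores over latent labels, combining BERT emissions with weak-label evidence.

        weak_labels: LongTensor (batch, num_sources, seq_len) of observed weak labels.
        """
        hidden_states = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        scores = self.emission(hidden_states)  # (batch, seq_len, num_labels)
        for k in range(self.source_confusion.size(0)):
            # Score of each latent label generating source k's observed label at every position.
            conf_k = self.source_confusion[k][:, weak_labels[:, k, :]]  # (num_labels, batch, seq_len)
            scores = scores + conf_k.permute(1, 2, 0)
        return scores

    @torch.no_grad()
    def viterbi_decode(self, unary):
        """Viterbi decoding over the unary scores plus the shared transition matrix."""
        batch, seq_len, num_labels = unary.shape
        score = unary[:, 0]  # (batch, num_labels)
        backpointers = []
        for t in range(1, seq_len):
            # total[b, y_prev, y_cur] = score[b, y_prev] + transitions[y_prev, y_cur] + unary[b, t, y_cur]
            total = score.unsqueeze(2) + self.transitions.unsqueeze(0) + unary[:, t].unsqueeze(1)
            score, best_prev = total.max(dim=1)
            backpointers.append(best_prev)
        best_last = score.argmax(dim=1)  # (batch,)
        path = [best_last]
        for best_prev in reversed(backpointers):
            best_last = best_prev.gather(1, best_last.unsqueeze(1)).squeeze(1)
            path.append(best_last)
        return torch.stack(list(reversed(path)), dim=1)  # (batch, seq_len)
```

In this sketch the weak labels only shift the per-token potentials of the latent labels; the paper's actual model defines a global undirected factorization over all three variable groups, so this should be read as a shape-level illustration rather than a faithful reproduction.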
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2309.05086
- Document Type :
- Working Paper
- Full Text :
- https://doi.org/10.1145/3580305.3599445