201. Temporal Information Extraction for Question Answering Using Syntactic Dependencies in an LSTM-based Architecture
- Author
Alexey Romanov, Yuanliang Meng, and Anna Rumshisky
- Subjects
FOS: Computer and information sciences, Computer Science - Computation and Language (cs.CL), Computer Science - Information Retrieval (cs.IR), Question answering, Temporal information, Syntax, Pattern recognition, Boosting (machine learning), Artificial intelligence
- Abstract
In this paper, we propose to use a set of simple LSTM-based models with a uniform architecture to recover different kinds of temporal relations from text. Using the shortest dependency path between entities as input, the same architecture is used to extract intra-sentence, cross-sentence, and document creation time relations. A "double-checking" technique reverses entity pairs in classification, boosting the recall of positive cases and reducing misclassifications between opposite classes. An efficient pruning algorithm resolves conflicts globally. Evaluated on QA-TempEval (SemEval-2015 Task 5), our proposed technique outperforms state-of-the-art methods by a large margin.
- Comment
EMNLP 2017
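The "double-checking" step described in the abstract above can be illustrated with a minimal sketch: a temporal relation classifier scores an entity pair in both orders, the reversed prediction is mapped back through the inverse relation, and the two views are combined so that positive relations get two chances to surface. The relation labels, entity names, scoring function, and averaging rule below are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of "double-checking" a temporal relation classifier by also
# scoring the reversed entity pair and mapping its prediction back through
# the inverse relation. Labels, scores, and the combination rule are
# illustrative assumptions, not the procedure from the paper.

INVERSE = {
    "BEFORE": "AFTER",
    "AFTER": "BEFORE",
    "INCLUDES": "IS_INCLUDED",
    "IS_INCLUDED": "INCLUDES",
    "SIMULTANEOUS": "SIMULTANEOUS",
    "NONE": "NONE",
}

def double_check(classify, e1, e2):
    """Classify (e1, e2) and (e2, e1); `classify` returns {label: score}."""
    forward = classify(e1, e2)
    backward = classify(e2, e1)
    # Map the reversed-pair scores back to the forward direction.
    backward_mapped = {INVERSE[label]: score for label, score in backward.items()}
    # Average the two views; non-NONE relations are seen from both directions,
    # which is what boosts recall of positive cases.
    combined = {
        label: 0.5 * (forward.get(label, 0.0) + backward_mapped.get(label, 0.0))
        for label in INVERSE
    }
    return max(combined, key=combined.get)

if __name__ == "__main__":
    def toy_classify(a, b):
        # Toy stand-in for the LSTM classifier over a shortest-dependency-path input.
        if (a, b) == ("e_meeting", "t_friday"):
            return {"BEFORE": 0.6, "AFTER": 0.1, "INCLUDES": 0.05,
                    "IS_INCLUDED": 0.05, "SIMULTANEOUS": 0.1, "NONE": 0.1}
        return {"BEFORE": 0.1, "AFTER": 0.55, "INCLUDES": 0.05,
                "IS_INCLUDED": 0.05, "SIMULTANEOUS": 0.1, "NONE": 0.15}

    print(double_check(toy_classify, "e_meeting", "t_friday"))  # -> BEFORE
```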
- Published
- 2017