Evaluation and analysis of term scoring methods for term extraction.
- Source: Information Retrieval Journal, Oct 2016, Vol. 19, Issue 5, pp. 510-545 (36 pp.)
- Publication Year: 2016
Abstract
- We evaluate five term scoring methods for automatic term extraction on four different types of text collections: personal document collections, news articles, scientific articles and medical discharge summaries. Each collection has its own use case: author profiling, Boolean query term suggestion, personalized query suggestion and patient query expansion. The term scoring methods proposed in the literature were each designed with a specific goal in mind. However, it is as yet unclear how these methods perform on collections with characteristics different from what they were designed for, and which method is the most suitable for a given (new) collection. In a series of experiments, we evaluate, compare and analyse the output of the five term scoring methods for the collections at hand. We found that the most important factors in the success of a term scoring method are the size of the collection and the importance of multi-word terms in the domain. Larger collections lead to better terms; all methods are hindered by small collection sizes (below 1000 words). The most flexible method for the extraction of single-word and multi-word terms is pointwise Kullback-Leibler divergence for informativeness and phraseness. Overall, we have shown that extracting relevant terms using unsupervised term scoring methods is possible in diverse use cases, and that the methods are applicable in more contexts than their original design purpose. [ABSTRACT FROM AUTHOR]
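The method the abstract singles out, pointwise Kullback-Leibler divergence for informativeness and phraseness (as introduced by Tomokiyo and Hurst, 2003), can be illustrated with a minimal Python sketch. Informativeness scores how strongly a term's probability in the target (foreground) collection diverges from its probability in a background corpus; phraseness scores how strongly a multi-word term's probability diverges from the independence product of its component words. The function names, the add-one smoothing floor, and the toy counts below are illustrative assumptions, not taken from the paper.

```python
from collections import Counter
from math import log

def kl_informativeness(term, fg_counts, bg_counts, fg_total, bg_total):
    # Pointwise KL for informativeness: p_fg * log(p_fg / p_bg).
    # High scores mean the term is much more typical of the
    # foreground collection than of the background corpus.
    p_fg = fg_counts[term] / fg_total
    p_bg = bg_counts.get(term, 1) / bg_total  # add-one floor for unseen terms (assumption)
    return p_fg * log(p_fg / p_bg)

def kl_phraseness(phrase, fg_counts, fg_total):
    # Pointwise KL for phraseness: p(phrase) * log(p(phrase) / prod p(w_i)).
    # High scores mean the words co-occur far more often than an
    # independence assumption over the unigrams would predict.
    p_phrase = fg_counts[phrase] / fg_total
    p_indep = 1.0
    for w in phrase.split():
        p_indep *= fg_counts[w] / fg_total
    return p_phrase * log(p_phrase / p_indep)

# Toy counts (illustrative only); a real implementation would estimate
# unigram and n-gram models separately from tokenized collections.
fg = Counter({"term extraction": 12, "term": 40, "extraction": 15, "the": 300})
bg = Counter({"term": 500, "extraction": 100, "the": 900000})
fg_total, bg_total = sum(fg.values()), sum(bg.values())

print(kl_informativeness("term", fg, bg, fg_total, bg_total))   # high: domain-specific
print(kl_informativeness("the", fg, bg, fg_total, bg_total))    # near zero: common word
print(kl_phraseness("term extraction", fg, fg_total))           # positive: cohesive phrase
```

In this sketch the two scores are kept separate; combining them (for example, by summation) to rank single-word and multi-word term candidates on one scale is one way such a unified ranking could be realized, which matches the flexibility the abstract attributes to the method.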
Details
- Language: English
- ISSN: 1386-4564
- Volume: 19
- Issue: 5
- Database: Academic Search Index
- Journal: Information Retrieval Journal
- Publication Type: Academic Journal
- Accession Number: 118194794
- Full Text: https://doi.org/10.1007/s10791-016-9286-2