Big Data Text Summarization - Hurricane Harvey
- Author
- Geissinger, Jack; Long, Theo; Jung, James; Parent, Jordan; Rizzo, Robert
- Subjects
- abstractive summarization, text summarization, deep learning, topic summarization, neural networks, NLP, computational linguistics, big data text summarization, pointer-generator network, big data, template filling, multi-document summarization, Hurricane Harvey, hurricanes, TextRank, information extraction, natural language processing, extractive summarization, event summarization
- Abstract
Natural language processing (NLP) has advanced rapidly in recent years. Accordingly, we present progressively more complex generated text summaries on the topic of Hurricane Harvey.

We began with TextRank, an unsupervised extractive summarization algorithm. TextRank is computationally expensive, and the sentences it selects are not always directly related or essential to the topic at hand. When evaluating TextRank, we found that a single interjected sentence could ruin the flow of the summary. ROUGE evaluation of our TextRank summary also scored quite low against the gold standard prepared for us, although the summary scored well against the lead of the Wikipedia article for Hurricane Harvey.

To improve on TextRank, we used template summarization with named entities. Template summarization takes less time to run than TextRank, but it is supervised: the author of the template and script must choose valuable named entities, so the method depends heavily on human intervention to produce readable summaries that are not error-prone. As expected, the template summary evaluated well against both the gold standard and the Wikipedia article lead, mainly because we could include the named entities we considered pertinent.

Beyond extractive methods such as TextRank and template summarization, we pursued abstractive summarization using pointer-generator networks, and multi-document summarization combining pointer-generator networks with maximal marginal relevance (MMR). Abstractive summarization has the benefit of being closer to how humans summarize documents. Pointer-generator networks, however, require GPUs to run properly and a large amount of training data; fortunately, we were able to use a pre-trained network to generate summaries. The pointer-generator network is the centerpiece of our abstractive methods and is what made these summaries possible. NLP is at an inflection point due to deep learning, and our summaries generated with a state-of-the-art pointer-generator network are filled with details about Hurricane Harvey, including the damage incurred, the average rainfall, and the locations affected most, and are free of grammatical errors. We also used a novel Python library, written by Logan Lebanoff at the University of Central Florida, for multi-document summarization with deep learning, applying it to our Hurricane Harvey dataset of 500 articles and to the Wikipedia article for Hurricane Harvey. The summary of the Wikipedia article is our final summary and achieved the highest ROUGE scores we could attain.

Brief, illustrative sketches of these techniques follow.
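For readers unfamiliar with TextRank, a minimal sketch of the idea follows. This is an illustration, not the project's textrank_summarizer.py: sentences become graph nodes, TF-IDF cosine similarity supplies edge weights, and PageRank picks the top-scoring sentences. The networkx and scikit-learn choices are assumptions made for the sketch.

    # Sketch of TextRank-style extractive summarization (assumed libraries:
    # networkx and scikit-learn; not the project's actual implementation).
    import networkx as nx
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def textrank_summary(sentences, n=2):
        """Rank sentences by PageRank over a cosine-similarity graph."""
        tfidf = TfidfVectorizer().fit_transform(sentences)
        sim = cosine_similarity(tfidf)
        np.fill_diagonal(sim, 0.0)  # no self-loops
        scores = nx.pagerank(nx.from_numpy_array(sim))
        ranked = sorted(range(len(sentences)), key=scores.get, reverse=True)
        # Keep the top-n sentences in their original order for readability.
        return " ".join(sentences[i] for i in sorted(ranked[:n]))

    sentences = [
        "Hurricane Harvey made landfall in Texas in August 2017.",
        "The storm caused catastrophic flooding in the Houston area.",
        "Some areas received more than 40 inches of rain.",
        "Recovery efforts continued for months after the storm.",
    ]
    print(textrank_summary(sentences))

Because every pair of sentences is compared, the similarity matrix grows quadratically with the number of sentences, which is one source of the computational expense noted above.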
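The ROUGE comparisons mentioned above can be reproduced with an off-the-shelf scorer. The sketch below uses the rouge-score package, which is an assumption; the report does not say which ROUGE implementation the team used, and the strings are illustrative.

    # ROUGE-1/2/L between a reference and a generated summary
    # (pip install rouge-score; assumed package, illustrative strings).
    from rouge_score import rouge_scorer

    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
    reference = "Hurricane Harvey caused catastrophic flooding in Texas in 2017."
    generated = "In 2017, Hurricane Harvey flooded large parts of Texas."
    for name, score in scorer.score(reference, generated).items():
        print(f"{name}: P={score.precision:.3f} R={score.recall:.3f} F1={score.fmeasure:.3f}")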
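Template summarization pairs a hand-written template with named entities extracted from the text. A hedged sketch of that pattern follows, using spaCy's en_core_web_sm model; the model and the slot-filling rule are assumptions, not necessarily what template_summarizer.py does.

    # Fill a hand-written template with named entities found by spaCy
    # (python -m spacy download en_core_web_sm first; assumed tooling).
    import spacy

    nlp = spacy.load("en_core_web_sm")
    text = ("Hurricane Harvey struck Texas in August 2017, "
            "causing an estimated $125 billion in damage.")

    # Keep the first entity seen for each label; a real script would rank
    # candidates (e.g., by frequency) rather than take the first match.
    slots = {}
    for ent in nlp(text).ents:
        slots.setdefault(ent.label_, ent.text)

    template = "{GPE} was hit in {DATE}, causing {MONEY} in damage."
    print(template.format(
        GPE=slots.get("GPE", "?"),
        DATE=slots.get("DATE", "?"),
        MONEY=slots.get("MONEY", "?"),
    ))

As the abstract notes, the quality of the result depends on the author choosing informative entities and slots; the algorithm itself does no topic reasoning.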
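Maximal marginal relevance (MMR) greedily selects the sentence most relevant to the topic and least redundant with sentences already chosen, trading the two off with a weight lambda. A small sketch of the selection rule follows; TF-IDF similarity is an assumption here, and in the actual pipeline MMR operates alongside the pointer-generator network rather than over raw TF-IDF vectors.

    # Greedy MMR selection: score(i) = lam * rel(i) - (1 - lam) * redundancy(i).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def mmr_select(sentences, query, k=2, lam=0.7):
        vec = TfidfVectorizer().fit(sentences + [query])
        S = vec.transform(sentences)
        rel = cosine_similarity(S, vec.transform([query])).ravel()
        sim = cosine_similarity(S)  # pairwise redundancy
        selected, candidates = [], list(range(len(sentences)))
        while candidates and len(selected) < k:
            def mmr(i):
                red = max(sim[i][j] for j in selected) if selected else 0.0
                return lam * rel[i] - (1 - lam) * red
            best = max(candidates, key=mmr)
            selected.append(best)
            candidates.remove(best)
        return [sentences[i] for i in selected]

    docs = [
        "Harvey dropped record rainfall on southeast Texas.",
        "Record rainfall fell across southeast Texas during Harvey.",
        "Thousands of homes flooded in Harris County.",
    ]
    print(mmr_select(docs, query="Hurricane Harvey flooding"))

With two near-duplicate rainfall sentences in the pool, the redundancy term steers the second pick toward the flooding sentence instead of the duplicate.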
NSF: IIS-1619028

Files:
- BDTS_Hurricane_Harvey_final_report.docx: Editable version of the final report
- BDTS_Hurricane_Harvey_final_report.pdf: PDF version of the final report
- BDTS_Hurricane_Harvey_presentation.pptx: Editable version of the presentation slides
- BDTS_Hurricane_Harvey_presentation.pdf: PDF version of the presentation slides

Source files in zip:
- freq_words.py: Finds the most frequent words in a JSON file that contains a sentences field. Requires a file to be passed through the -f option (see the command-line sketch after this list).
- pos_tagging.py: Performs basic part-of-speech tagging on a JSON file that contains a sentences field. Requires a file to be passed through the -f option.
- textrank_summarizer.py: Performs TextRank summarization on a JSON file that contains a sentences field. Requires a file to be passed through the -f option.
- template_summarizer.py: Performs template summarization with a JSON file that contains a sentences field. Requires a file to be passed through the -f option.
- wikipedia_content.py: Extracts content from a Wikipedia page given a topic and formats it for the pointer-generator network using the "make_datafiles.py" script. Requires a topic passed through the -t option and an output directory for "make_datafiles.py" to read from passed through the -o option (see the fetch sketch after this list).
- make_datafiles.py: Called by "wikipedia_content.py" to convert story files to .bin files.
- jusText.py: Used to clean up the large dataset.
- requirements.txt: Used with Anaconda to install all of the dependencies.
- small_dataset.json: Properly formatted JSON file for use with the other files.
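Several of the source files share one interface: a JSON file with a sentences field passed through -f. A minimal sketch of that pattern, in the spirit of freq_words.py; the exact JSON layout is assumed from the descriptions above.

    # Count the most frequent words in a JSON file's "sentences" field.
    # Usage: python freq_words_sketch.py -f small_dataset.json
    import argparse, collections, json, re

    parser = argparse.ArgumentParser()
    parser.add_argument("-f", "--file", required=True,
                        help="JSON file containing a sentences field")
    args = parser.parse_args()

    with open(args.file, encoding="utf-8") as fh:
        data = json.load(fh)

    counts = collections.Counter()
    for sentence in data["sentences"]:
        counts.update(re.findall(r"[a-z']+", sentence.lower()))
    print(counts.most_common(10))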
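And a hedged sketch of the first step of wikipedia_content.py, fetching a topic's page text; the wikipedia package and the .story naming are assumptions (the record only states that story files are later converted to .bin by make_datafiles.py).

    # Fetch a Wikipedia page's plain text for later conversion to .bin files.
    # (pip install wikipedia; assumed package, not necessarily the one used.)
    import wikipedia

    def fetch_topic(topic, out_path):
        page = wikipedia.page(topic)
        with open(out_path, "w", encoding="utf-8") as f:
            f.write(page.content)

    fetch_topic("Hurricane Harvey", "hurricane_harvey.story")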
- Published
- 2018