Can Language Models Take A Hint? Prompting for Controllable Contextualized Commonsense Inference
- Publication Year : 2024
Abstract
- Generating commonsense assertions within a given story context remains a difficult task for modern language models. Previous research has addressed this problem by aligning commonsense inferences with stories and training language generation models accordingly. One of the challenges is determining which topic or entity in the story should be the focus of an inferred assertion. Prior approaches lack the ability to control specific aspects of the generated assertions. In this work, we introduce "hinting," a data augmentation technique that enhances contextualized commonsense inference. "Hinting" employs a prefix prompting strategy using both hard and soft prompts to guide the inference process. To demonstrate its effectiveness, we apply "hinting" to two contextual commonsense inference datasets: ParaCOMET and GLUCOSE, evaluating its impact on both general and context-specific inference. Furthermore, we evaluate "hinting" by incorporating synonyms and antonyms into the hints. Our results show that "hinting" does not compromise the performance of contextual commonsense inference while offering improved controllability.
- Comment: Submitted to ACL Rolling Review. arXiv admin note: text overlap with arXiv:2302.05406
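The hard-prompt side of the "hinting" strategy described in the abstract can be illustrated with a minimal sketch. The marker tokens (`[HINT]`, `[/HINT]`) and field names below are hypothetical stand-ins, not the paper's actual vocabulary; the relation label `xReact` is borrowed from common commonsense-inference schemas for illustration only:

```python
def build_hinted_prompt(story: str, hint_entity: str, hint_relation: str) -> str:
    """Prepend a hard-prompt 'hint' naming the target entity and relation,
    steering a generation model toward that focus within the story context.
    Marker tokens and field names here are illustrative assumptions."""
    hint = f"[HINT] entity: {hint_entity} ; relation: {hint_relation} [/HINT]"
    return f"{hint} {story}"

story = "Maya forgot her umbrella, so she was soaked by the time she got home."
prompt = build_hinted_prompt(story, "Maya", "xReact")
print(prompt)
```

Soft prompts would replace the literal hint string with learned prefix embeddings prepended to the model input; that part is omitted here since it requires model-specific training code.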
Details
- Database : arXiv
- Publication Type : Report
- Accession number : edsarx.2410.02202
- Document Type : Working Paper