Abbreviated text input using language modeling
- Author
- Stuart M. Shieber and Rani Nelken
- Subjects
- Linguistics and Language, Language identification, Computer science, Speech recognition, Word error rate, Generative model, Artificial intelligence, Language model, Natural language, Natural language processing
- Abstract
We address the problem of improving the efficiency of natural language text input under degraded conditions (for instance, on mobile computing devices or by disabled users), by taking advantage of the informational redundancy in natural language. Previous approaches to this problem have been based on the idea of prediction of the text, but these require the user to take overt action to verify or select the system's predictions. We propose taking advantage of the duality between prediction and compression. We allow the user to enter text in compressed form, in particular, using a simple stipulated abbreviation method that reduces characters by 26.4%, yet is simple enough that it can be learned easily and generated relatively fluently. We decode the abbreviated text using a statistical generative model of abbreviation, with a residual word error rate of 3.3%. The chief component of this model is an n-gram language model. Because the system's operation is completely independent from the user's, the overhead from cognitive task switching and attending to the system's actions online is eliminated, opening up the possibility that the compression-based method can achieve text input efficiency improvements where the prediction-based methods have not. We report the results of a user study evaluating this method.
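The abstract describes decoding stipulated abbreviations with a statistical model whose chief component is an n-gram language model. The paper's actual abbreviation scheme and decoder are not given in this listing, so the following is a minimal illustrative sketch under assumed details: a hypothetical vowel-dropping abbreviation rule, a tiny bigram language model with add-one smoothing, and a beam search over candidate expansions. All names (`abbreviate`, `decode`, the toy corpus) are inventions for illustration, not the authors' code.

```python
from collections import Counter, defaultdict

VOWELS = set("aeiou")

def abbreviate(word):
    # Hypothetical stipulated scheme: keep the first letter,
    # drop every later vowel ("quick" -> "qck").
    if not word:
        return word
    return word[0] + "".join(c for c in word[1:] if c not in VOWELS)

# Toy corpus standing in for real language-model training data.
CORPUS = (
    "the quick brown fox jumps over the lazy dog "
    "the dog barks at the quick fox"
).split()

unigrams = Counter(CORPUS)
bigrams = Counter(zip(CORPUS, CORPUS[1:]))
V = len(unigrams)

def bigram_prob(prev, word):
    # Add-one smoothed conditional probability P(word | prev).
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + V)

# Invert the abbreviation map: abbreviated form -> candidate expansions.
candidates = defaultdict(set)
for w in unigrams:
    candidates[abbreviate(w)].add(w)

def decode(abbrev_tokens, beam_width=10):
    # Beam search over candidate expansions, scored by the bigram LM.
    # Each beam entry is (probability, last_word, words_so_far).
    beams = [(1.0, None, [])]
    for tok in abbrev_tokens:
        cands = candidates.get(tok, {tok})  # unknown form: pass through
        new_beams = []
        for score, prev, words in beams:
            for cand in cands:
                if prev is None:
                    p = unigrams[cand] / len(CORPUS)
                else:
                    p = bigram_prob(prev, cand)
                new_beams.append((score * p, cand, words + [cand]))
        new_beams.sort(key=lambda b: b[0], reverse=True)
        beams = new_beams[:beam_width]
    return beams[0][2]

print(decode(["th", "qck", "fx"]))
```

Because decoding is fully automatic, the user types only the abbreviated forms and never inspects or selects predictions, which is the property the abstract credits with eliminating cognitive task-switching overhead.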
- Published
- 2006