A CUDA implementation of the Continuous Space Language Model.
- Author
- Thompson, Elizabeth and Anderson, Timothy
- Subjects
- HIGH performance computing research; SOFTWARE architecture; MODEL (Computer program language); CUDA (Computer architecture); SPACE & time in language
- Abstract
The training phase of the Continuous Space Language Model (CSLM) was implemented in NVIDIA's hardware/software architecture, the Compute Unified Device Architecture (CUDA). A detailed explanation of the CSLM algorithm is provided. The implementation was accomplished using a combination of CUBLAS library routines, NVIDIA NPP functions, and CUDA kernel calls on three CUDA-enabled devices of varying compute capability, and a time savings over the traditional CPU approach was demonstrated. The efficiency of the CUDA version of the open-source implementation is analyzed and compared to that of a version using the Intel Math Kernel Library (MKL) on a variety of CUDA-enabled and multi-core CPU platforms. It is demonstrated that a substantial performance benefit can be obtained using CUDA, even with non-optimal code. Techniques for optimizing performance are then provided. Furthermore, an analysis is performed to determine the conditions in which the performance of CUDA exceeds that of the multi-core MKL realization. [ABSTRACT FROM AUTHOR] (An illustrative cuBLAS sketch follows the record below.)
- Published
- 2014
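The abstract notes that the CUDA port combines CUBLAS routines, NVIDIA NPP functions, and custom kernel calls. As a rough illustration of the CUBLAS portion only, the sketch below is not taken from the paper: all layer sizes and variable names are assumed. It shows how the kind of dense matrix product that dominates CSLM training, here an output-layer product between hidden-layer activations and output weights, maps onto a single cublasSgemm call.

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main() {
    // Illustrative layer sizes only; the paper's actual CSLM dimensions are not reproduced here.
    const int batch = 128, hidden = 512, vocab = 8192;
    const float alpha = 1.0f, beta = 0.0f;

    // Host buffers: hidden activations H (hidden x batch) and output weights W (hidden x vocab),
    // stored column-major as cuBLAS expects, plus the output O (vocab x batch).
    std::vector<float> hH((size_t)hidden * batch, 0.01f);
    std::vector<float> hW((size_t)hidden * vocab, 0.02f);
    std::vector<float> hO((size_t)vocab * batch, 0.0f);

    float *dH, *dW, *dO;
    cudaMalloc((void**)&dH, hH.size() * sizeof(float));
    cudaMalloc((void**)&dW, hW.size() * sizeof(float));
    cudaMalloc((void**)&dO, hO.size() * sizeof(float));
    cudaMemcpy(dH, hH.data(), hH.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dW, hW.data(), hW.size() * sizeof(float), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);

    // O (vocab x batch) = W^T (vocab x hidden) * H (hidden x batch)
    cublasSgemm(handle, CUBLAS_OP_T, CUBLAS_OP_N,
                vocab, batch, hidden,
                &alpha,
                dW, hidden,   // W: hidden x vocab, leading dimension hidden
                dH, hidden,   // H: hidden x batch, leading dimension hidden
                &beta,
                dO, vocab);   // O: vocab x batch, leading dimension vocab

    cudaMemcpy(hO.data(), dO, hO.size() * sizeof(float), cudaMemcpyDeviceToHost);
    printf("O[0] = %f\n", hO[0]);  // 512 * 0.02 * 0.01 = 0.1024 with these fill values

    cublasDestroy(handle);
    cudaFree(dH); cudaFree(dW); cudaFree(dO);
    return 0;
}
```

Built with `nvcc example.cu -lcublas`, this is the sort of device-side GEMM whose throughput the article compares against the corresponding MKL call on multi-core CPUs; the paper's own code additionally uses NPP functions and hand-written kernels for the non-GEMM stages.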