
Training bidirectional recurrent neural network architectures with the scaled conjugate gradient algorithm

Authors :
Agathocleous, Michalis
Christodoulou, Chris C. [0000-0001-9398-5256]
Promponas, Vasilis J. [0000-0003-3352-4831]
Kountouris, P.
Vassiliades, Vassilis
Villa, A.E.P.
Masulli, P.
Rivero, A.J.P.
Source :
25th International Conference on Artificial Neural Networks, ICANN 2016, Lect. Notes Comput. Sci.
Publication Year :
2016

Abstract

Prediction on sequential data, where both upstream and downstream information are important, is a difficult and challenging task. The Bidirectional Recurrent Neural Network (BRNN) architecture has been designed to deal with this class of problems. In this paper, we present the development and implementation of the Scaled Conjugate Gradient (SCG) learning algorithm for BRNN architectures. The model has been tested on the Protein Secondary Structure Prediction (PSSP) and Transmembrane Protein Topology Prediction (TMPTP) problems. Our method currently achieves preliminary results close to 73% correct predictions for the PSSP problem and close to 79% for the TMPTP problem; these are expected to improve with larger datasets, external rules, ensemble methods and filtering techniques. Importantly, the SCG algorithm trains the BRNN architecture approximately 3 times faster than the Backpropagation Through Time (BPTT) algorithm. © Springer International Publishing Switzerland 2016. LNCS vol. 9886, pp. 123–131. Conference code: 180929.
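To make the BRNN idea concrete: each position in the sequence is classified from two context vectors, one produced by a hidden layer scanning left-to-right (upstream context) and one scanning right-to-left (downstream context). The NumPy sketch below illustrates only the forward pass of such an architecture; all names, dimensions and weights are illustrative assumptions, not the authors' implementation, and no training (SCG or BPTT) is shown.

```python
# Minimal bidirectional RNN forward pass (NumPy sketch).
# Dimensions and random weights are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

T, n_in, n_hid, n_out = 5, 4, 8, 3     # sequence length and layer sizes
x = rng.standard_normal((T, n_in))      # input sequence (e.g. encoded residues)

# Separate recurrent weights for the forward and backward hidden layers.
W_f, U_f = rng.standard_normal((n_hid, n_in)), rng.standard_normal((n_hid, n_hid))
W_b, U_b = rng.standard_normal((n_hid, n_in)), rng.standard_normal((n_hid, n_hid))
V = rng.standard_normal((n_out, 2 * n_hid))  # output layer sees both directions

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Forward chain: upstream context, scanned left-to-right.
h_f = np.zeros((T, n_hid))
h = np.zeros(n_hid)
for t in range(T):
    h = np.tanh(W_f @ x[t] + U_f @ h)
    h_f[t] = h

# Backward chain: downstream context, scanned right-to-left.
h_b = np.zeros((T, n_hid))
h = np.zeros(n_hid)
for t in reversed(range(T)):
    h = np.tanh(W_b @ x[t] + U_b @ h)
    h_b[t] = h

# Per-position prediction combines both context vectors.
y = np.array([softmax(V @ np.concatenate([h_f[t], h_b[t]])) for t in range(T)])
print(y.shape)  # (5, 3): one class distribution per sequence position
```

In a PSSP setting, `n_out` would correspond to the secondary-structure classes and each row of `y` to one residue's class distribution.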
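The SCG optimiser itself is generic: it replaces BPTT's fixed learning rate with a step size derived from a finite-difference Hessian-vector product plus a Levenberg-Marquardt-style scaling term. The sketch below loosely follows Møller's published pseudocode and is applied to a toy quadratic rather than BRNN weights; it is an illustrative port, not the paper's code, and the hyperparameter values are assumptions.

```python
# Scaled Conjugate Gradient minimiser (sketch after Moller, 1993).
import numpy as np

def scg_minimize(f, grad, w, n_iters=200, tol=1e-6):
    sigma0 = 1e-4                                # finite-difference step scale
    lam, lam_min, lam_max = 1.0, 1e-15, 1e100    # curvature scaling parameter
    f_old = f(w)
    g_new = grad(w)
    g_old = g_new.copy()
    d = -g_new                                   # start with steepest descent
    success, n_success = True, 0
    n = w.size
    for _ in range(n_iters):
        if success:
            mu = d @ g_new
            if mu >= 0:                          # not a descent direction: restart
                d = -g_new
                mu = d @ g_new
            kappa = d @ d
            if kappa < 1e-300:
                break
            # One extra gradient call approximates the Hessian-vector product H d.
            sigma = sigma0 / np.sqrt(kappa)
            theta = d @ (grad(w + sigma * d) - g_new) / sigma
        delta = theta + lam * kappa              # scaled curvature along d
        if delta <= 0:                           # force positive definiteness
            delta = lam * kappa
            lam = lam - theta / kappa
        alpha = -mu / delta                      # step size from local quadratic model
        f_new = f(w + alpha * d)
        Delta = 2 * (f_new - f_old) / (alpha * mu)   # comparison ratio
        success = Delta >= 0
        if success:                              # accept the step
            w = w + alpha * d
            f_old = f_new
            g_old = g_new
            g_new = grad(w)
            n_success += 1
            if np.linalg.norm(g_new) < tol:
                break
        if Delta < 0.25:                         # poor model fit: raise scaling
            lam = min(4 * lam, lam_max)
        elif Delta > 0.75:                       # good fit: lower scaling
            lam = max(0.5 * lam, lam_min)
        if n_success == n:                       # periodic direction restart
            d = -g_new
            n_success = 0
        elif success:                            # conjugate direction update
            gamma = (g_old - g_new) @ g_new / mu
            d = gamma * d - g_new
    return w

# Demo: minimise a 2-D convex quadratic with known solution A^{-1} b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
w_star = scg_minimize(lambda w: 0.5 * w @ A @ w - b @ w,
                      lambda w: A @ w - b,
                      np.zeros(2))
print(w_star)   # close to np.linalg.solve(A, b) = [0.2, 0.4]
```

Because each iteration needs only function and gradient evaluations (no line search), the same routine can minimise a BRNN's error over its flattened weight vector, which is the setting where the paper reports the roughly 3x speed-up over BPTT.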

Details

Database :
OpenAIRE
Journal :
25th International Conference on Artificial Neural Networks, ICANN 2016, Lect. Notes Comput. Sci.
Accession number :
edsair.od......4485..f9011aa5737bebc175278442c8a8833e