A simple and efficient architecture for trainable activation functions
- Source :
- Neurocomputing 370 (2019) 1-15
- Publication Year :
- 2019
Abstract
- Automatically learning the best activation function for a given task is an active topic in neural network research. At the moment, despite promising results, it is still difficult to find a method for learning an activation function that is both theoretically simple and easy to implement. Moreover, most of the methods proposed so far introduce new parameters or require different learning techniques. In this work we propose a simple method to obtain trainable activation functions by adding to the neural network small local subnetworks with few neurons. Experiments show that this approach can lead to better results than using a pre-defined activation function, without introducing a large number of extra parameters to be learned.
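- To make the idea concrete, below is a minimal PyTorch sketch of a trainable activation realized as a small elementwise subnetwork, in the spirit of the abstract. The class name `SubnetActivation`, the hidden-unit count, and the use of a shared 1-k-1 MLP are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class SubnetActivation(nn.Module):
    """Hypothetical sketch of a trainable activation as a tiny subnetwork.

    Each scalar pre-activation is passed through a shared 1 -> k -> 1 MLP,
    so the learned nonlinearity adds only about 3*k + 1 parameters per layer.
    """
    def __init__(self, hidden_units: int = 4):
        super().__init__()
        self.expand = nn.Linear(1, hidden_units)   # 1 -> k
        self.contract = nn.Linear(hidden_units, 1) # k -> 1

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shape = x.shape
        # Apply the small subnetwork elementwise by flattening to scalars.
        h = torch.relu(self.expand(x.reshape(-1, 1)))
        return self.contract(h).reshape(shape)

# Usage: drop in where a fixed activation such as ReLU would normally go.
model = nn.Sequential(
    nn.Linear(784, 128),
    SubnetActivation(hidden_units=4),
    nn.Linear(128, 10),
)
```

- Sharing the subnetwork's weights across all units of a layer keeps the extra parameter count small, which matches the abstract's claim of avoiding a large number of additional learned parameters.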
Details
- Database :
- arXiv
- Journal :
- Neurocomputing 370 (2019) 1-15
- Publication Type :
- Report
- Accession number :
- edsarx.1902.03306
- Document Type :
- Working Paper
- Full Text :
- https://doi.org/10.1016/j.neucom.2019.08.065