
Activation functions are not needed: the ratio net

Authors :
Zhou, Chi-Chun
Tu, Hai-Long
Hou, Yue-Jie
Ling, Zhen
Liu, Yi
Hua, Jian
Publication Year :
2020

Abstract

A deep neural network for classification tasks essentially consists of two components: feature extractors and function approximators. They usually work as an integrated whole, yet an improvement to either component can benefit the algorithm as a whole. This paper focuses on designing a new function approximator. Conventionally, a function approximator is built on a nonlinear activation function or a nonlinear kernel function, yielding classical networks such as the feed-forward neural network (MLP) and the radial basis function network (RBF). In this paper, an effective and efficient new function approximator is proposed. Instead of designing new activation functions or kernel functions, the proposed network uses a fractional form; for convenience, we name it the ratio net. We compare the effectiveness and efficiency of the ratio net against the RBF and the MLP with various activation functions on two classification tasks: the MNIST database of handwritten digits and the Internet Movie Database (IMDb), a binary sentiment-analysis dataset. In most cases, the ratio net converges faster and outperforms both the MLP and the RBF.
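To make the idea concrete, below is a minimal PyTorch sketch of what a "fractional form" layer could look like: the elementwise quotient of two learned affine maps, used in place of a fixed nonlinear activation. The abstract does not specify the exact fractional form, so the layer structure, the softplus-positive denominator, and the `eps` stabilizer here are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn


class RatioLayer(nn.Module):
    """Hypothetical ratio-net layer: an elementwise quotient of two
    affine maps of the input. This is a sketch of the general idea of
    replacing a nonlinear activation with a learned ratio; the paper's
    exact fractional form may differ.
    """

    def __init__(self, in_features: int, out_features: int, eps: float = 1e-3):
        super().__init__()
        self.numerator = nn.Linear(in_features, out_features)
        self.denominator = nn.Linear(in_features, out_features)
        self.eps = eps  # keeps the denominator bounded away from zero

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        num = self.numerator(x)
        # Softplus keeps the denominator strictly positive; this is an
        # assumption made here for numerical stability, not a detail
        # taken from the paper.
        den = nn.functional.softplus(self.denominator(x)) + self.eps
        return num / den


# Example: a two-layer classifier for flattened 28x28 MNIST digits.
model = nn.Sequential(
    RatioLayer(784, 128),
    RatioLayer(128, 10),
)
logits = model(torch.randn(32, 784))  # batch of 32 dummy inputs
print(logits.shape)  # torch.Size([32, 10])
```

Note that the ratio introduces the nonlinearity directly, so no separate activation function is inserted between layers, which is the point the title makes.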

Details

Database :
arXiv
Publication Type :
Report
Accession Number :
edsarx.2005.06678
Document Type :
Working Paper