Optimizing the learning of binary mappings
- Source :
- Proceedings of the International Joint Conference on Neural Networks, 2003.
- Publication Year :
- 2004
- Publisher :
- IEEE, 2004.
-
Abstract
- When training simple sigmoidal feed-forward neural networks on binary mappings using gradient descent with a sum-squared-error cost function, the learning algorithm often gets stuck with some outputs totally wrong. This is because the weight updates depend on the derivative of the output sigmoid, which goes to zero as the output approaches maximal error. Common solutions to this problem include offsetting the output targets, offsetting the sigmoid derivatives, and using a different cost function. Comparisons are difficult because each approach has its own optimal parameter settings. In this paper I use an evolutionary approach to optimize and compare the different approaches.
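- The vanishing-derivative problem described in the abstract can be sketched numerically. This is an illustrative example, not code from the paper: it compares the output-unit delta under a sum-squared-error cost, which carries the sigmoid derivative factor y(1 − y), against the delta under a cross-entropy cost, which cancels that factor.

```python
import math

def sigmoid(x):
    """Standard logistic sigmoid."""
    return 1.0 / (1.0 + math.exp(-x))

def sse_delta(x, target):
    """Output-unit delta for sum-squared-error: (y - t) * sigma'(x).
    The factor y * (1 - y) vanishes as the output saturates, so a
    maximally wrong output produces an almost-zero weight update --
    the 'stuck output' problem the abstract describes."""
    y = sigmoid(x)
    return (y - target) * y * (1.0 - y)

def xent_delta(x, target):
    """Output-unit delta for cross-entropy: the sigmoid derivative
    cancels, leaving the raw error (y - t)."""
    return sigmoid(x) - target

# Pre-activation of -10 gives sigmoid(-10) ~ 4.5e-5; with target 1 the
# output is near maximal error.
x, t = -10.0, 1.0
print(sse_delta(x, t))   # tiny magnitude despite the large error
print(xent_delta(x, t))  # close to -1.0: a large corrective update
```

The offset-target and offset-derivative fixes mentioned in the abstract work on the same factor: clipping targets away from 0 and 1, or adding a small constant to y(1 − y), keeps the sum-squared-error delta from collapsing to zero in the saturated regime.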
Details
- Database :
- OpenAIRE
- Journal :
- Proceedings of the International Joint Conference on Neural Networks, 2003.
- Accession number :
- edsair.doi...........0065b549c417e9d89075b558f50d4c80
- Full Text :
- https://doi.org/10.1109/ijcnn.2003.1224086