Research Article

Hybrid Optimized Back propagation Learning Algorithm for Multi-layer Perceptron

by Arka Ghosh, Mriganka Chakraborty
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 60 - Issue 13
Published: December 2012
DOI: 10.5120/9749-3332

Arka Ghosh, Mriganka Chakraborty. Hybrid Optimized Back propagation Learning Algorithm for Multi-layer Perceptron. International Journal of Computer Applications 60, 13 (December 2012), 1-5. DOI=10.5120/9749-3332

                        @article{10.5120/9749-3332,
                        author    = { Arka Ghosh and Mriganka Chakraborty },
                        title     = { Hybrid Optimized Back propagation Learning Algorithm for Multi-layer Perceptron },
                        journal   = { International Journal of Computer Applications },
                        year      = { 2012 },
                        volume    = { 60 },
                        number    = { 13 },
                        pages     = { 1-5 },
                        doi       = { 10.5120/9749-3332 },
                        publisher = { Foundation of Computer Science (FCS), NY, USA }
                        }
                        %0 Journal Article
                        %D 2012
                        %A Arka Ghosh
                        %A Mriganka Chakraborty
                        %T Hybrid Optimized Back propagation Learning Algorithm for Multi-layer Perceptron
                        %J International Journal of Computer Applications
                        %V 60
                        %N 13
                        %P 1-5
                        %R 10.5120/9749-3332
                        %I Foundation of Computer Science (FCS), NY, USA
Abstract

Standard neural networks trained with general back-propagation learning based on the delta rule or gradient descent suffer from serious shortcomings, such as poor optimization of the error-weight objective function, a low learning rate, and instability. This paper introduces a hybrid supervised back-propagation learning algorithm that applies the trust-region method of unconstrained optimization to the error objective function using a quasi-Newton method. This optimization yields a more accurate weight-update system for minimizing the learning error during the training phase of a multi-layer perceptron [13][14][15]. An augmented line search is used to find points that satisfy the Wolfe conditions. The resulting hybrid back-propagation algorithm has strong global convergence properties and is robust and efficient in practice.
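To make the abstract's idea concrete, the following is a minimal sketch (not the authors' implementation) of a quasi-Newton (BFGS) weight update combined with a backtracking line search that checks the Wolfe conditions. A toy quadratic error surface stands in for the MLP error-weight objective; all function names and constants here are illustrative assumptions.

```python
# Hypothetical sketch: one BFGS quasi-Newton weight update with a
# Wolfe-condition line search, minimizing a toy quadratic "error"
# E(w) in place of the MLP error-weight objective.

def E(w):
    # Toy error surface: E(w) = w0^2 + 2*w1^2, minimum at w = (0, 0).
    return w[0] ** 2 + 2.0 * w[1] ** 2

def grad_E(w):
    # Analytic gradient of the toy error surface.
    return [2.0 * w[0], 4.0 * w[1]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def wolfe_line_search(w, d, c1=1e-4, c2=0.9, alpha=1.0, shrink=0.5):
    # Shrink the step length until both Wolfe conditions hold:
    # sufficient decrease (Armijo) and the curvature condition.
    f0, g0 = E(w), grad_E(w)
    slope = dot(g0, d)  # directional derivative along d (negative for descent)
    for _ in range(50):
        w_new = [wi + alpha * di for wi, di in zip(w, d)]
        armijo = E(w_new) <= f0 + c1 * alpha * slope
        curvature = dot(grad_E(w_new), d) >= c2 * slope
        if armijo and curvature:
            break
        alpha *= shrink
    return alpha

def bfgs_step(w, H):
    # Search direction from the inverse-Hessian approximation: d = -H g.
    g = grad_E(w)
    d = [-(H[i][0] * g[0] + H[i][1] * g[1]) for i in range(2)]
    alpha = wolfe_line_search(w, d)
    w_new = [wi + alpha * di for wi, di in zip(w, d)]
    s = [alpha * di for di in d]                       # weight change
    y = [gn - gi for gn, gi in zip(grad_E(w_new), g)]  # gradient change
    ys = dot(y, s)
    if ys < 1e-12:
        return w_new, H  # skip the update near convergence
    rho = 1.0 / ys
    # BFGS inverse-Hessian update:
    # H <- (I - rho*s*y^T) H (I - rho*y*s^T) + rho*s*s^T
    I = [[1.0, 0.0], [0.0, 1.0]]
    A = [[I[i][j] - rho * s[i] * y[j] for j in range(2)] for i in range(2)]
    HA = [[sum(A[i][k] * H[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    H_new = [[sum(HA[i][k] * A[j][k] for k in range(2)) + rho * s[i] * s[j]
              for j in range(2)] for i in range(2)]
    return w_new, H_new

w, H = [3.0, -2.0], [[1.0, 0.0], [0.0, 1.0]]  # initial weights, H = identity
for _ in range(30):
    w, H = bfgs_step(w, H)
print(E(w))  # learning error driven toward zero
```

In an actual multi-layer perceptron, `w` would be the flattened weight vector, `E` the squared-error objective over the training set, and `grad_E` its gradient computed by back-propagation; the update rule itself is unchanged.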

References
  • Artificial Neural Networks by B. Yegnanarayana.
  • Neural Networks - A Comprehensive Foundation by Simon Haykin.
  • McCulloch, W. and Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5:115-133.
  • "Adaline (Adaptive Linear)". CS 4793: Introduction to Artificial Neural Networks. Department of Computer Science, University of Texas at San Antonio.
  • Rosenblatt, Frank (1957). The Perceptron - a perceiving and recognizing automaton. Report 85-460-1, Cornell Aeronautical Laboratory.
  • Bertsekas, D. P., Tsitsiklis, J. N. (1996). Neuro-dynamic programming. Athena Scientific. pp. 512. ISBN 1-886529-10-8.
  • Nocedal, J. (1980). Updating quasi-Newton matrices with limited storage. Mathematics of Computation, 35:773-782.
  • Nicol N. Schraudolph, Jin Yu, Simon Günter. "A Stochastic Quasi-Newton Method for Online Convex Optimization".
  • Mriganka Chakraborty. Article: Artificial Neural Network for Performance Modeling and Optimization of CMOS Analog Circuits. International Journal of Computer Applications 58(18):6-12, November 2012. Published by Foundation of Computer Science, New York, USA.
  • Arka Ghosh. Article: Comparative Study of Financial Time Series Prediction by Artificial Neural Network Using Gradient Descent Learning. International Journal of Scientific & Engineering Research, Volume 3, Issue 1, pp. 1-7, ISSN 2229-5518, January 2012. Published by IJSER, France.
  • The Numerical Algorithms Group. "Keyword Index: Quasi-Newton". NAG Library Manual, Mark 23. Retrieved 2012-02-09.
  • Wolfe, Philip (1969). "Convergence conditions for ascent methods". SIAM Rev. 11 (2): 226-235. doi:10.1137/1011036.
  • Rumelhart, D. E., Hinton, G. E., Williams, R. J. "Learning Internal Representations by Error Propagation", chapter 8, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Rumelhart, D. E. and McClelland, J. L., editors, MIT Press, Cambridge, MA, 1986.
  • Fahlman, S. E. "An Empirical Study of Learning Speed in Back-Propagation Networks", internal report CMU-CS-88-162, Carnegie Mellon University, Pittsburgh, June 1988.
  • Jacobs, R. A. "Increased rates of convergence through learning rate adaptation", Neural Networks, Vol. 1, pp. 295-307, 1988.
  • Tollenaere, T. "SuperSAB: Fast Adaptive back propagation with good scaling properties", Neural Networks, Vol. 3, pp. 561-573, 1990.
  • Rigler, A. K., Irvine, J. M., Vogl, T. P. "Rescaling of variables in back propagation learning", Neural Networks, Vol. 4, pp. 225-229, 1991.
  • Leonard, J. A., Kramer, M. A. "Improvement of the Back-Propagation algorithm for training neural networks", Computers chem. Engng., Vol. 14, No. 3, pp. 337-341, 1990.
  • Van Ooyen, A., Nienhuis, B. "Improving the Convergence of the Back-Propagation Algorithm", Neural Networks, Vol. 5, pp. 465-471, 1992.
  • Dennis, J. E., Schnabel, R. B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, 1983.
  • The Power of Squares - http://mste.illinois.edu/patel/amar430/meansquare.html
Index Terms
Computer Science
Information Sciences
Keywords

Neural network, back-propagation learning, delta method, gradient descent, Wolfe condition, multi-layer perceptron, quasi-Newton
