On Training Artificial Neural Networks

Authors

  • L.N.M. Tawfiq, College of Education Ibn Al-Haitham, Baghdad University
  • R.S. Naoum, College of Education Ibn Al-Haitham, Baghdad University

Abstract

In this paper we describe several different training algorithms for feedforward neural networks. All of these algorithms use the gradient of the performance (energy) function to determine how to adjust the weights so that the performance function is minimized; the backpropagation algorithm is used to compute this gradient and thereby increase the speed of training. The algorithms differ in their computations, and hence in the form of the search direction and in their storage requirements; however, none of them has global properties suited to all problems.
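To make the abstract's idea concrete, here is a minimal sketch (not the authors' exact method) of gradient-based training: a tiny 2-2-1 feedforward network whose weights are adjusted by steepest descent on the energy function E = ½·Σ(t − y)², with backpropagation supplying the gradient. The architecture, learning rate, and OR-gate training data are illustrative assumptions, not taken from the paper.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# w1[j] = [weight from input 0, weight from input 1, bias] for hidden unit j
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
# w2 = [weight from hidden 0, weight from hidden 1, bias] for the output unit
w2 = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigmoid(wj[0] * x[0] + wj[1] * x[1] + wj[2]) for wj in w1]
    y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + w2[2])
    return h, y

def train_step(x, t, lr=0.5):
    """One steepest-descent update: w <- w - lr * dE/dw, gradient via backprop."""
    h, y = forward(x)
    # output-layer delta: dE/dnet_out = (y - t) * sigmoid'(net_out)
    d_out = (y - t) * y * (1 - y)
    # hidden-layer deltas, propagated back through the output weights
    d_hid = [d_out * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
    for j in range(2):
        w2[j] -= lr * d_out * h[j]
    w2[2] -= lr * d_out
    for j in range(2):
        for i in range(2):
            w1[j][i] -= lr * d_hid[j] * x[i]
        w1[j][2] -= lr * d_hid[j]
    return 0.5 * (t - y) ** 2   # energy for this sample, before the update

# illustrative data: the OR function (assumed example, not from the paper)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

losses = [sum(train_step(x, t) for x, t in data) for _ in range(5000)]
print(losses[0], losses[-1])
```

Each of the algorithms surveyed in the paper replaces the plain steepest-descent step above with a different search direction (e.g. a conjugate-gradient direction), but all consume the same backpropagated gradient.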


Downloads

Published

2023-05-24

How to Cite

[1]
L.N.M. Tawfiq and R.S. Naoum, "On Training Artificial Neural Networks", jfath, vol. 9, no. 2, pp. 47–68, 2023.