In Training Artificial Neural Networks
Abstract

In this paper we describe several training algorithms for feed-forward neural networks. All of these algorithms use the gradient of the performance (energy) function to determine how to adjust the weights so that the performance function is minimized, with the back-propagation algorithm used to compute this gradient and increase the speed of training. The algorithms differ in their computational cost, in the form of their search direction, and in their storage requirements; however, none of them has global properties suited to all problems.
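The training scheme described above — adjusting weights against the gradient of a performance function, with that gradient obtained by back-propagation — can be sketched as follows. This is an illustrative example, not the paper's code: the network size, learning rate, sigmoid activations, and the XOR data set are all assumptions made for the demonstration.

```python
# A minimal sketch of gradient-descent training for a one-hidden-layer
# feed-forward network. The gradient of the sum-of-squares performance
# (energy) function is computed by back-propagation.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
T = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
lr = 0.5                       # learning rate (step length), chosen arbitrarily

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# performance before training, for comparison
E_start = 0.5 * np.sum((sigmoid(sigmoid(X @ W1) @ W2) - T) ** 2)

for epoch in range(5000):
    # forward pass
    H = sigmoid(X @ W1)              # hidden-layer activations
    Y = sigmoid(H @ W2)              # network output
    E = 0.5 * np.sum((Y - T) ** 2)   # performance (energy) function

    # backward pass: propagate error derivatives layer by layer
    dY = (Y - T) * Y * (1 - Y)       # delta at the output layer
    dH = (dY @ W2.T) * H * (1 - H)   # delta at the hidden layer

    # steepest-descent update: move each weight against the gradient
    W2 -= lr * H.T @ dY
    W1 -= lr * X.T @ dH

print(f"error: {E_start:.4f} -> {E:.4f}")
```

The conjugate-gradient methods cited in the references replace the plain steepest-descent step above with search directions that combine the current gradient with the previous direction, but the forward/backward structure of the gradient computation is the same.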
References
B. Yegnanarayana, Artificial Neural Networks, New Delhi,
R. Fletcher and C. M. Reeves, Function Minimization by Conjugate Gradients, Computer Journal, Vol. 7, pp. 149–154,
E. Polak and G. Ribière, Note sur la convergence de méthodes de directions conjuguées, Rev. Fr. Inform. Rech. Opér., 16-R1, 1969.
L. G. Dixon, Conjugate Gradient Algorithms: Quadratic Termination without Linear Searches, Journal of the Institute of Mathematics and its Applications, Vol. 15, 1975.
A. Al-Bayati and N. Al-Assady, Conjugate Gradient Methods, Technical Research Report No. 1, School of Computer Studies, Leeds University, U.K., 1996.
M. R. Hestenes and E. Stiefel, Methods of Conjugate Gradients for Solving Linear Systems, J. Res. NBS, Vol. 49, 1952.
License
Copyright (c) 2023 L.N.M.Tawfiq, R.S.Naoum

This work is licensed under a Creative Commons Attribution 4.0 International License.