Convergence of On-Line Gradient Methods for Two-Layer Feedforward Neural Networks
Received: December 27, 1999
Key Words: on-line gradient method; feedforward neural network; convergence
Fund Project: Supported by the Natural Science Foundation of China
Authors and Affiliations:
LI Zheng-xue, Dept. of Math., Jilin University, Changchun 130023, China
WU Wei, Dept. of Math., Dalian University of Technology, Dalian 116024, China
ZHANG Hong-wei, Dept. of Math., Dalian University of Technology, Dalian 116024, China
Abstract: The convergence of on-line gradient methods for two-layer feedforward neural networks is discussed in a general setting. The results are then applied to some commonly used activation functions and energy functions.
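For orientation only, the on-line (sample-by-sample) gradient method mentioned in the abstract can be sketched as below. This is a generic illustration, not the paper's formulation: the network sizes, the sigmoid hidden activation, the quadratic energy function, the learning rate, and the synthetic data are all assumptions made here for the example.

```python
# Minimal sketch, assuming a two-layer network with a sigmoid hidden layer,
# a linear output unit, and the quadratic energy E = (1/2)(output - target)^2.
# Weights are updated immediately after each training sample (on-line mode).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative dimensions and hyperparameters (assumptions).
n_in, n_hidden = 3, 5
W = rng.normal(scale=0.1, size=(n_hidden, n_in))   # input-to-hidden weights
v = rng.normal(scale=0.1, size=n_hidden)           # hidden-to-output weights
eta = 0.05                                         # learning rate

# Synthetic training samples (x_j, y_j), j = 1, ..., J (assumptions).
J = 200
X = rng.normal(size=(J, n_in))
y = np.tanh(X @ rng.normal(size=n_in))             # arbitrary smooth target

for epoch in range(50):
    for j in rng.permutation(J):                   # present samples one at a time
        h = sigmoid(W @ X[j])                      # hidden-layer output
        out = v @ h                                # network output
        err = out - y[j]                           # d/d(out) of (1/2) * err**2
        # On-line update: gradients of the per-sample energy only.
        grad_v = err * h
        grad_W = np.outer(err * v * h * (1.0 - h), X[j])
        v -= eta * grad_v
        W -= eta * grad_W
```

The convergence analysis in the paper concerns this kind of per-sample update scheme; the precise conditions on the activation and energy functions are given in the full text.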
DOI: 10.3770/j.issn:1000-341X.2001.02.012