$L^2(R^d)$ Approximation Capability of Incremental Constructive Feedforward Neural Networks with Random Hidden Units
Received: September 25, 2008    Revised: June 30, 2009
Key Words: approximation; incremental feedforward neural networks; RBF neural networks; TDI neural networks; random hidden units
Fund Project: Supported by the National Natural Science Foundation of China (Grant No. 10871220) and the ``Mathematics X'' project of DLUT (Grant No. 842328).
Authors and Affiliations:
Jin Ling LONG: School of Mathematical Sciences, Dalian University of Technology, Liaoning 116024, P. R. China; Department of Mathematics, Southeast University, Jiangsu 210096, P. R. China
Zheng Xue LI: School of Mathematical Sciences, Dalian University of Technology, Liaoning 116024, P. R. China
Dong NAN: College of Applied Science, Beijing University of Technology, Beijing 100022, P. R. China
Abstract:
      This paper studies the capability of incremental constructive feedforward neural networks (FNN) with random hidden units to approximate functions in $L^2(R^d)$. Two kinds of three-layered feedforward neural networks are considered: radial basis function (RBF) neural networks and translation and dilation invariant (TDI) neural networks. In contrast with conventional approximation theories for neural networks, which mainly rely on an existence approach, we follow a constructive approach to prove that one may simply choose the parameters of the hidden units at random and then adjust the weights between the hidden units and the output unit to make the neural network approximate any function in $L^2(R^d)$ to any accuracy. Our result shows that for any non-zero activation function $g: R \rightarrow R$ with $g(\left\|x\right\|_{R^d})\in L^2(R^d)$ for RBF hidden units, or any non-zero activation function $g(x)\in L^2(R^d)$ for TDI hidden units, the incremental network function $f_n$ with randomly generated hidden units converges to any target function in $L^2(R^d)$ with probability one as the number of hidden units $n\rightarrow \infty$, provided only that the weights between the hidden units and the output unit are properly adjusted.
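To make the construction concrete, the following is a minimal numerical sketch of the incremental scheme in the RBF case. It assumes a Gaussian activation, a greedy residual-fitting choice of output weight $\beta_n = \langle e_{n-1}, g_n\rangle / \|g_n\|^2$ with residual $e_{n-1} = f - f_{n-1}$ (a standard choice in this literature, not necessarily the paper's exact rule), and a sampled target function; all names, ranges, and the target are illustrative assumptions rather than the paper's formal $L^2(R^d)$ setting.

```python
import numpy as np

# Sketch: incremental construction with random RBF hidden units.
# Each step adds one hidden unit with randomly drawn center a_n and
# width b_n; only the output weight beta_n is fitted to the current
# residual, beta_n = <e_{n-1}, g_n> / ||g_n||^2, with inner products
# approximated by sums over sample points.

rng = np.random.default_rng(0)

def rbf(x, center, width):
    """Gaussian RBF unit g(||x - a|| / b). The theory allows any
    nonzero g with g(||x||) in L^2(R^d); Gaussian is one example."""
    return np.exp(-np.sum((x - center) ** 2, axis=1) / (2 * width ** 2))

# Target function sampled on points in R^d (d = 2 here, illustrative).
d, n_samples, n_units = 2, 2000, 200
X = rng.uniform(-3, 3, size=(n_samples, d))
target = np.sin(X[:, 0]) * np.exp(-X[:, 1] ** 2)

f_n = np.zeros(n_samples)           # current network output f_n
for n in range(n_units):
    center = rng.uniform(-3, 3, d)  # random hidden-unit parameters
    width = rng.uniform(0.3, 1.5)
    g_n = rbf(X, center, width)
    residual = target - f_n         # e_{n-1} = f - f_{n-1}
    beta = residual @ g_n / (g_n @ g_n)  # greedy output weight
    f_n += beta * g_n               # f_n = f_{n-1} + beta_n * g_n

print("empirical L2 error:", np.sqrt(np.mean((target - f_n) ** 2)))
```

Only the scalar output weights are ever trained; the hidden-unit parameters stay as randomly drawn, which is exactly the regime the paper's convergence-with-probability-one result addresses as $n\rightarrow \infty$.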
DOI: 10.3770/j.issn:1000-341X.2010.05.004