In 1991, Hornik proved that the collection of single hidden layer feedforward neural networks (SLFNs) with a continuous, bounded, and non-constant activation function σ is dense in C(K), where K is a compact set in R^s (see Neural Networks, 4(2), 251-257 (1991)). Meanwhile, he pointed out: "Whether or not the continuity assumption can entirely be dropped is still an open quite challenging problem". This paper answers the problem in the affirmative and proves that, for a bounded and almost everywhere (a.e.) continuous activation function σ on R, the collection of SLFNs is dense in C(K) if and only if σ is non-constant a.e.
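For concreteness, the density statement can be written out formally. The following is a sketch of the standard formulation of the SLFN class in this literature; the symbols N_σ, N, c_i, w_i, and b_i are notation introduced here for illustration and are not taken from the abstract itself:

\[
\mathcal{N}_\sigma = \left\{ x \mapsto \sum_{i=1}^{N} c_i\, \sigma(w_i \cdot x + b_i) \;:\; N \in \mathbb{N},\ c_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^s \right\}.
\]

The theorem then asserts that \(\mathcal{N}_\sigma\) is dense in \(C(K)\) under the uniform norm, i.e., for every \(f \in C(K)\) and \(\varepsilon > 0\) there exists \(g \in \mathcal{N}_\sigma\) with \(\max_{x \in K} |f(x) - g(x)| < \varepsilon\), if and only if the bounded, a.e. continuous activation function \(\sigma\) is non-constant a.e.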