In the incremental extreme learning machine (I-ELM), the input weights and the thresholds of the hidden-layer neurons are assigned at random during training. As a result, the output weights of some hidden neurons become too small, so those neurons contribute little to the network output and are effectively useless. This not only makes the network more complex but also reduces its stability. To address this problem, this paper proposes an improved method, II-ELM, which adds a bias to the hidden-layer output of I-ELM, and proves by analysis that such a bias exists. Finally, comparative simulations against I-ELM on classification and regression problems verify the effectiveness of II-ELM.
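The abstract only outlines the construction, so the sketch below is a minimal illustration of the I-ELM training loop in Python, with a per-neuron bias c added to the hidden-layer output in the spirit of II-ELM. The particular choice of c here (a joint least-squares fit of beta and beta*c against the current residual) is an assumption made for illustration, not the paper's derivation, and the function names ielm_with_output_bias and predict are hypothetical.

```python
import numpy as np

def ielm_with_output_bias(X, y, max_neurons=50, tol=1e-3, seed=None):
    """Sketch of I-ELM with a bias added to each hidden-layer output
    (the II-ELM idea from the abstract). The bias formula below is an
    illustrative least-squares choice, not the paper's derivation."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    e = y.astype(float).copy()            # current residual error
    neurons = []                          # (a, b, c, beta) per hidden neuron
    for _ in range(max_neurons):
        a = rng.uniform(-1.0, 1.0, d)     # random input weights
        b = rng.uniform(-1.0, 1.0)        # random hidden-neuron threshold
        h = np.tanh(X @ a + b)            # hidden-layer output of the new neuron
        # Fit the residual with beta * (h + c): since beta*(h + c) =
        # beta*h + beta*c, solve least squares over the span of {h, 1}.
        H = np.column_stack([h, np.ones(n)])
        (beta, delta), *_ = np.linalg.lstsq(H, e, rcond=None)
        if abs(beta) < 1e-12:             # neuron contributes nothing; skip it
            continue
        c = delta / beta                  # hidden-output bias (illustrative)
        e -= beta * (h + c)               # update the residual error
        neurons.append((a, b, c, beta))
        if np.linalg.norm(e) < tol:       # stop once the residual is small
            break
    return neurons

def predict(neurons, X):
    """Evaluate the incrementally built network on new inputs."""
    out = np.zeros(X.shape[0])
    for a, b, c, beta in neurons:
        out += beta * (np.tanh(X @ a + b) + c)
    return out
```

As a quick check, fitting y = sin(x) on points sampled from [-pi, pi] should drive the residual norm down as neurons are added; note that neurons whose fitted beta is negligible are skipped rather than kept, which mirrors the ineffective-neuron problem the paper targets.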