With the rapidly growing demands from industrial and academic communities, we need powerful tools to solve optimization problems and to extract useful knowledge from massive data in real-world applications. Within the field of computational intelligence and learning, three types of models are investigated in this thesis: evolutionary algorithms (EA), extreme learning machines (ELM), and low-rank representation (LRR). Evolutionary algorithms are investigated from the perspective of solving complex optimization problems, while the latter two models are studied as a means of learning effective features for pattern recognition. This thesis makes several contributions to intelligent information processing, which can be summarized as follows.

Two improved evolutionary algorithms are proposed: hierarchical particle swarm optimization with Latin sampling (MA-HPSOL) and the hybrid learning clonal selection algorithm (HLCSA). The hierarchical topology in MA-HPSOL is effective for exploration and for avoiding entrapment in local optima; furthermore, the newly designed Latin sampling can effectively refine the solutions. HLCSA is inspired by the idea that learning mechanisms can guide the evolutionary process: its Baldwinian learning pool with multiple strategies can adapt to complex optimization problems with different characteristics.
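The thesis does not spell out MA-HPSOL's exact sampling rule here, but the Latin-sampling idea it mentions (refining a promising solution by stratified sampling of its neighborhood) can be sketched generically. The function name, the box-shaped neighborhood, and the sphere-function example below are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def latin_hypercube_refine(center, radius, n_samples, rng=None):
    """Draw Latin-hypercube samples in a box around `center`.

    Each coordinate range is split into `n_samples` equal strata; one
    point is drawn per stratum, and the stratum order is shuffled
    independently per dimension, so every sample occupies a distinct
    slice of every coordinate.
    """
    rng = np.random.default_rng(rng)
    center = np.asarray(center, dtype=float)
    dim = center.size
    # One uniform draw inside each of the n_samples strata, per dimension.
    strata = (np.arange(n_samples)[:, None]
              + rng.random((n_samples, dim))) / n_samples
    # Independently permute the stratum order in each dimension.
    for d in range(dim):
        strata[:, d] = strata[rng.permutation(n_samples), d]
    # Map the unit hypercube onto the box [center - radius, center + radius].
    return center - radius + 2.0 * radius * strata

# Example: refine around a current best solution of a sphere function.
best = np.array([0.3, -0.2])
samples = latin_hypercube_refine(best, radius=0.1, n_samples=8, rng=0)
fitness = (samples ** 2).sum(axis=1)
candidate = samples[fitness.argmin()]
```

Compared with plain uniform sampling, the stratification guarantees that the eight candidates spread across the whole neighborhood in every coordinate, which is why such sampling is a natural local-refinement step inside a population-based optimizer.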
Three extreme learning machine variants are proposed on the basis of the discriminative information and/or geometrical structure of the data: discriminative graph regularized ELM (GELM), discriminative manifold ELM (DMELM), and unsupervised discriminative ELM (UDELM). Specifically, GELM enforces training samples from the same class to have similar network outputs. DMELM further considers the discriminative information within the neighborhood of each data point; in DMELM, a unified graph Laplacian is designed to cover both the within-class and between-class information. UDELM is an unsupervised extension of ELM that takes structural and discriminative information into account, which greatly expands its applicability to unlabeled data.

Two low-rank representation variants, structure preserving LRR (SPLRR) and manifold LRR (MLRR), are proposed by considering the data manifold when constructing graphs for semi-supervised learning. SPLRR imposes two-fold constraints on LRR that preserve the local geometrical structure without distorting the distant-repulsion property. MLRR explicitly takes the local manifold structure of the data into consideration: the manifold information is exploited by sparse learning rather than by constructing the graph directly from a predefined measure.
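The graph-regularization idea behind GELM described above (pulling the network outputs of same-class samples toward each other) can be sketched with a standard manifold-regularized ridge solution. The objective form, the hyperparameter names, and the same-class adjacency graph below are my assumptions for illustration; the thesis's GELM may differ in detail:

```python
import numpy as np

def gelm_train(X, y, n_hidden=64, lam=1e-2, mu=1e-2, rng=0):
    """Sketch of a graph-regularized ELM.

    Hidden layer: random weights plus a sigmoid, as in a standard ELM.
    Regularizer: a same-class graph Laplacian L penalizes differences
    between network outputs of same-class samples, i.e. it minimizes
        ||H B - T||^2 + lam ||B||^2 + mu tr(B' H' L H B),
    which has the closed-form solution computed below.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    classes = np.unique(y)
    # Random, untrained hidden layer (the defining trait of ELM).
    W = rng.standard_normal((d, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # One-hot target matrix T.
    T = (y[:, None] == classes[None, :]).astype(float)
    # Same-class adjacency A and its graph Laplacian L = D - A.
    A = (y[:, None] == y[None, :]).astype(float)
    L = np.diag(A.sum(axis=1)) - A
    # Closed-form output weights B.
    B = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden) + mu * H.T @ L @ H,
                        H.T @ T)
    return (W, b, B, classes)

def gelm_predict(model, X):
    W, b, B, classes = model
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return classes[np.argmax(H @ B, axis=1)]
```

Because the hidden layer is fixed, training reduces to one regularized least-squares solve; the extra Laplacian term is what distinguishes this sketch from a plain ELM.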