Enhancing Offspring Generation in Large-Scale Multi-Objective Evolutionary Algorithms with Neural Networks
Date
2025
Abstract
Evolutionary algorithms have been applied to multi-objective optimization for many years, yielding numerous effective and robust approaches. However, as the dimensionality of the decision space increases, these algorithms suffer from slow convergence, making large-scale multi-objective optimization problems difficult to solve. In response, recent research has focused on enhancing their scalability, improving convergence speed, and maintaining solution diversity. Neural network-based techniques have emerged as a promising direction, leveraging learning mechanisms to generate high-quality offspring and accelerate convergence. While such methods converge quickly at first, their effectiveness diminishes over time because the population cannot continuously supply high-quality training data. To address this limitation, this thesis proposes the Large-Scale Multi-Objective Evolutionary Algorithm with Neural Network-Enhanced Offspring Generation (LSMOEA-NEO). LSMOEA-NEO incorporates a sampling-based offspring generation strategy to improve training data collection, a model decomposition strategy that reduces model size and accelerates training, and a usage control mechanism that prevents poorly trained models from generating low-quality offspring. Experimental results on the LSMOP benchmark show that LSMOEA-NEO outperforms existing large-scale multi-objective evolutionary algorithms on 6 of the 9 test problems.
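The core idea described above (a learned model that generates offspring, combined with a usage control mechanism that falls back to a standard variation operator when the model is poorly trained) can be sketched as follows. This is a minimal illustration only: the linear "network", the sphere objective, the worse-to-better training pairs, and the loss threshold are assumptions for the sketch, not the thesis's actual architecture or the LSMOP benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Toy objective standing in for one optimization objective.
    return float(np.sum(x ** 2))

class OffspringNet:
    """Tiny linear model (hypothetical stand-in for the paper's neural
    network): learns a mapping from worse solutions toward better ones."""
    def __init__(self, dim, lr=0.1):
        self.W = np.zeros((dim, dim))
        self.b = np.zeros(dim)
        self.lr = lr
        self.loss = np.inf  # starts untrained; usage control checks this

    def train(self, worse, better, epochs=200):
        # Least-squares fit of better ~ worse @ W.T + b via gradient descent,
        # mimicking training on pairs collected from the population.
        for _ in range(epochs):
            pred = worse @ self.W.T + self.b
            err = pred - better
            self.loss = float(np.mean(err ** 2))
            self.W -= self.lr * err.T @ worse / len(worse)
            self.b -= self.lr * err.mean(axis=0)
        return self.loss

    def generate(self, parents, sigma=0.01):
        # Model-guided offspring plus small Gaussian sampling noise.
        return parents @ self.W.T + self.b + rng.normal(0, sigma, parents.shape)

def make_offspring(pop, model, loss_threshold=1e-2):
    # Usage control: trust the model only when its training loss is low;
    # otherwise fall back to plain Gaussian mutation.
    if model.loss < loss_threshold:
        return model.generate(pop)
    return pop + rng.normal(0, 0.1, pop.shape)
```

The fallback branch is what keeps a badly trained model from polluting the population: an untrained `OffspringNet` has infinite loss, so `make_offspring` ignores it until training succeeds.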
Keywords
Large-Scale, Multi-Objective, Evolutionary Algorithm, Neural Networks