A novel tri-stage with reward-switching mechanism for constrained multiobjective optimization problems
Complex & Intelligent Systems (IF 5.8), Pub Date: 2024-03-30, DOI: 10.1007/s40747-024-01379-2
Jiqing Qu , Xuefeng Li , Hui Xiao

The effective exploitation of infeasible solutions plays a crucial role in addressing constrained multiobjective optimization problems (CMOPs). However, existing constrained multiobjective optimization evolutionary algorithms (CMOEAs) struggle to balance objective optimization and constraint satisfaction effectively, particularly on problems with complex infeasible regions. Building on this prior work, this paper proposes a novel tri-stage framework with a reward-switching mechanism (TSRSM), comprising push, pull, and repush stages. Each stage maintains two coevolutionary populations, \({\text {Pop}}_1\) and \({\text {Pop}}_2\). Throughout all three stages, \({\text {Pop}}_1\) is tasked with converging to the constrained Pareto front (CPF), whereas \({\text {Pop}}_2\) is assigned a distinct task in each stage: (i) converging to the unconstrained Pareto front (UPF) in the push stage; (ii) exploiting a constraint relaxation technique to discover the CPF in the pull stage; and (iii) revisiting the search for the UPF through knowledge transfer in the repush stage. In addition, a novel reward-switching mechanism (RSM) governs the transition between stages based on the extent of change in the convergence and diversity of the populations. Finally, experimental results on three benchmark test suites and 30 real-world CMOPs demonstrate that TSRSM achieves competitive performance against nine state-of-the-art CMOEAs. The source code is available at https://github.com/Qu-jq/TSRSM.
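The stage-transition logic described above can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the authors' implementation: the class name `RewardSwitcher`, the stall criterion, and the threshold `eps` are all assumptions; the paper's actual RSM uses its own convergence and diversity indicators. The sketch only shows the control flow of switching from push to pull to repush once both indicators stop changing appreciably.

```python
def stalled(history, eps=1e-3, window=2):
    """True if the indicator changed by less than eps (relative)
    over the last `window` recorded generations. Assumed criterion."""
    if len(history) <= window:
        return False  # not enough history to judge yet
    old, new = history[-window - 1], history[-1]
    return abs(new - old) / (abs(old) + 1e-12) < eps


class RewardSwitcher:
    """Hypothetical reward-switching mechanism: advance through the
    push -> pull -> repush stages when both the convergence and the
    diversity indicators of the populations have stalled."""

    STAGES = ("push", "pull", "repush")

    def __init__(self, eps=1e-3):
        self.idx = 0          # start in the push stage
        self.eps = eps
        self.conv, self.div = [], []

    @property
    def stage(self):
        return self.STAGES[self.idx]

    def record(self, convergence, diversity):
        """Record this generation's indicators; possibly switch stage.
        Returns the stage to use for the next generation."""
        self.conv.append(convergence)
        self.div.append(diversity)
        if (self.idx < len(self.STAGES) - 1
                and stalled(self.conv, self.eps)
                and stalled(self.div, self.eps)):
            self.idx += 1          # move to the next stage
            self.conv.clear()      # restart histories for the new stage
            self.div.clear()
        return self.stage
```

In a full CMOEA loop, `record` would be called once per generation with indicator values computed from \({\text {Pop}}_1\) and \({\text {Pop}}_2\); the returned stage then selects which environmental-selection rule the two populations use.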



