
A Hyper-Transformer model for Controllable Pareto Front Learning with Split Feasibility Constraints

Tran Anh Tuan, Nguyen Viet Dung, Tran Ngoc Thang

Neural Networks (2024)

Abstract
Controllable Pareto front learning (CPFL) approximates the Pareto optimal solution set and then locates a non-dominated point with respect to a given reference vector. In practice, however, the decision-maker's objectives are limited to a constraint region, so instead of training on the entire decision space, we train only on the constraint region. Controllable Pareto front learning with split feasibility constraints (SFC) finds optimal Pareto solutions to a split multi-objective optimization problem subject to such constraints. In a previous study, CPFL used a hypernetwork model composed of multi-layer perceptron blocks (Hyper-MLP). Owing to their distinctive advantages, Transformers can be more effective than earlier architectures on many modern deep learning tasks. We therefore develop a hyper-transformer (Hyper-Trans) model for CPFL with SFC. Using the theory of universal approximation for sequence-to-sequence functions, we show in computational experiments that the Hyper-Trans model achieves smaller MED errors than the Hyper-MLP model.
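The hypernetwork idea described in the abstract can be sketched as a small network that maps a reference (preference) vector to the parameters of a target network, which in turn produces a candidate Pareto solution. This is a minimal illustrative sketch, not the paper's actual Hyper-MLP or Hyper-Trans architecture; all dimensions, layer sizes, and initializations below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 2-d preference vector and a 3-d decision variable.
PREF_DIM, OUT_DIM = 2, 3
TARGET_PARAMS = PREF_DIM * OUT_DIM + OUT_DIM  # target weights + bias

# Hyper-MLP sketch: one hidden layer that emits the target network's parameters.
W1 = rng.normal(size=(PREF_DIM, 16)) * 0.1
W2 = rng.normal(size=(16, TARGET_PARAMS)) * 0.1

def hyper_mlp(pref):
    """Map a reference (preference) vector to target-network parameters."""
    h = np.tanh(pref @ W1)
    return h @ W2

def target_net(pref, params):
    """Target network whose weights are generated by the hypernetwork."""
    W = params[: PREF_DIM * OUT_DIM].reshape(PREF_DIM, OUT_DIM)
    b = params[PREF_DIM * OUT_DIM:]
    return pref @ W + b  # candidate Pareto solution for this preference

pref = np.array([0.7, 0.3])  # reference vector on the probability simplex
x = target_net(pref, hyper_mlp(pref))
print(x.shape)  # one decision vector per preference
```

In CPFL, the hypernetwork is trained so that, for each reference vector, the generated solution is non-dominated with respect to that vector; the Hyper-Trans variant replaces the MLP blocks with Transformer blocks.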
Key words
Multi-objective optimization, Controllable Pareto front learning, Transformer, Hypernetwork, Split feasibility problem