Solving test case based problems with fuzzy dominance.

GECCO (2017)

Abstract
Genetic algorithms and genetic programming lend themselves well to the field of machine learning, which involves solving test case based problems. However, most traditional multi-objective selection methods work with scalar objectives, such as minimizing false negative and false positive rates, that are computed from underlying test cases. In this paper, we propose a new fuzzy selection operator that takes into account the statistical nature of machine learning problems based on test cases. Rather than using a Pareto rank or strength computed from scalar objectives, as in NSGA2 or SPEA2, we compute a probability of Pareto optimality. This is accomplished through covariance estimation and Markov chain Monte Carlo simulation, which generate probabilistic objective scores for each individual. We then compute the probability that each individual is Pareto optimal, and use this probability directly in a roulette wheel selection technique. Our method's performance is evaluated on the evolution of a feature selection vector for binary classification of each of eight different activities. Fuzzy selection performance varies: it outperforms both NSGA2 and SPEA2 in both speed (measured in generations) and solution quality (measured by area under the curve) in some cases, while underperforming in others.
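The following is a minimal Python/NumPy sketch of the selection scheme the abstract outlines: sample noisy objective vectors for each individual, count how often each individual is non-dominated across the samples, and feed the resulting probability into roulette wheel selection. The function names and the Beta-resampling of error rates are illustrative assumptions standing in for the paper's covariance estimation and Markov chain Monte Carlo simulation; only the overall flow follows the abstract.

```python
import numpy as np


def pareto_optimality_probability(fn_counts, fp_counts, n_cases,
                                  n_samples=1000, rng=None):
    """Estimate, per individual, the probability of being Pareto optimal
    with respect to (false negative rate, false positive rate).

    fn_counts, fp_counts: per-individual counts of false negatives and
    false positives over n_cases test cases. Objective scores are
    resampled to reflect test-case noise (a Beta posterior here, as a
    stand-in for the paper's covariance/MCMC estimate).
    """
    rng = np.random.default_rng() if rng is None else rng
    fn_counts = np.asarray(fn_counts, dtype=float)
    fp_counts = np.asarray(fp_counts, dtype=float)
    n_ind = len(fn_counts)
    wins = np.zeros(n_ind)

    for _ in range(n_samples):
        # Draw one noisy objective vector per individual.
        fnr = rng.beta(fn_counts + 1, n_cases - fn_counts + 1)
        fpr = rng.beta(fp_counts + 1, n_cases - fp_counts + 1)
        objs = np.stack([fnr, fpr], axis=1)

        # Count individuals that are non-dominated in this sampled draw.
        for i in range(n_ind):
            dominated = np.any(
                np.all(objs <= objs[i], axis=1) &
                np.any(objs < objs[i], axis=1)
            )
            if not dominated:
                wins[i] += 1

    # Fraction of draws in which each individual was Pareto optimal.
    return wins / n_samples


def roulette_wheel_select(probabilities, k, rng=None):
    """Select k parent indices with probability proportional to their
    estimated probability of Pareto optimality."""
    rng = np.random.default_rng() if rng is None else rng
    p = np.asarray(probabilities, dtype=float)
    p = p / p.sum()
    return rng.choice(len(p), size=k, replace=True, p=p)
```

As a usage example, `roulette_wheel_select(pareto_optimality_probability(fn, fp, n_cases), k=20)` would pick 20 parents for the next generation, so individuals that are frequently non-dominated under test-case noise reproduce more often.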
Keywords
Genetic Algorithms, Machine Learning, Markov Chain Monte Carlo, Pareto Dominance