Satisficing vs exploring when learning a constrained environment

SCIS&ISIS (2012)

Abstract
Satisficing is an efficient strategy for applying existing knowledge in a complex, constrained environment. We present a set of agent-based simulations that demonstrate a higher payoff for satisficing strategies than for exploring strategies when using approximate dynamic programming methods to learn complex environments. In our constrained learning environment, satisficing agents outperformed exploring agents by approximately six percent in terms of the number of tasks completed.
Key words
approximation theory, dynamic programming, learning (artificial intelligence), multi-agent systems, agent-based simulation, approximate dynamic programming, constrained learning environment, exploring agent, exploring strategy, satisficing agent, satisficing strategy, Q-learning, satisficing