Dorna Kashef Haghighi

Final Report
Learning automata with partially observable internal states is an important topic in modern AI research. One of the models studied in this field is the Partially Observable Markov Decision Process (POMDP). In this paper we focus on POMDPs with deterministic actions and observations.
To learn the internal representation of a partially observable system, a sufficient history is needed, and it must be gained by interacting with the system. This paper presents an attempt to learn, from a given sufficient history, a model whose predictions of future action effects are accurate and whose size is close to minimal.
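As a rough illustration of this setting (with hypothetical names, not taken from the report itself), a POMDP with deterministic actions and observations can be viewed as an automaton whose state is hidden and revealed only through the observations it emits; the sketch below shows the kind of action/observation history a learner would collect by interacting with such a system.

    # Illustrative sketch only (hypothetical names, not the report's code): a POMDP
    # with deterministic actions and observations behaves like a hidden automaton.

    class DeterministicPOMDP:
        def __init__(self, transitions, observations, start_state):
            # transitions: (state, action) -> next state
            # observations: state -> observation symbol
            self.transitions = transitions
            self.observations = observations
            self.state = start_state

        def step(self, action):
            # Apply an action, move to the (unobserved) next state,
            # and return only the observation that state emits.
            self.state = self.transitions[(self.state, action)]
            return self.observations[self.state]


    # A toy 3-state system in which the agent only ever sees "on"/"off".
    transitions = {
        ("s0", "a"): "s1", ("s0", "b"): "s0",
        ("s1", "a"): "s2", ("s1", "b"): "s0",
        ("s2", "a"): "s2", ("s2", "b"): "s1",
    }
    observations = {"s0": "off", "s1": "off", "s2": "on"}

    env = DeterministicPOMDP(transitions, observations, "s0")
    # The action/observation history is the raw material from which a learner
    # would try to reconstruct a small, accurate model of the hidden dynamics.
    history = [(a, env.step(a)) for a in ["a", "a", "b", "a"]]
    print(history)  # [('a', 'off'), ('a', 'on'), ('b', 'off'), ('a', 'on')]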