One of the primary goals of AI is the design, control and analysis of agents or systems that behave appropriately in a variety of circumstances. Good decision making often requires knowledge or beliefs about the agent's environment, about its own abilities to observe and change that environment, and about its own goals and preferences. In this course we examine computational approaches for modeling uncertainty and solving decision problems. We will focus mainly on probabilistic models of reasoning and on sequential decision making.
The course is intended for advanced undergraduate students and for graduate students, and provides an introduction to ongoing research in the field of reasoning under uncertainty. The topics covered include knowledge representation, planning and learning using different types of graphical models, with a focus on Bayesian (belief) networks, Markov random fields, temporal models (e.g. hidden Markov models and Kalman filters) and Markov decision processes.