Date: Thu, April 26, 2012
Place: Research I Seminar Room
Abstract: Observable operator models (OOMs) generalize hidden Markov models and describe discrete-valued stochastic processes. Predictive state representations (PSRs) extend OOMs to stochastic input-output systems and are employed in agent modeling and planning. Stochastic multiplicity automata (SMA) are weighted nondeterministic automata that generalize probabilistic automata and have been used in probabilistic grammatical inference.
We present OOMs, PSRs and SMA under a common framework and examine the precise relationships between them. Furthermore, we establish a unified approach to learning such models from data. Many of the learning algorithms that have been proposed can be understood as variations of this basic learning scheme, and several turn out to be closely related or even equivalent.
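The unified learning scheme for this model family is, at its core, a Hankel-matrix (spectral) method: estimate word probabilities from data, arrange them into Hankel matrices, and recover observable operators by pseudoinversion. The following Python sketch illustrates this basic scheme on a toy example; it is not the talk's algorithm, and the data source, the choice of prefixes and suffixes, and the rank-truncation threshold are all assumptions made for illustration.

```python
import numpy as np

# Illustrative sketch of Hankel-matrix (spectral) learning of an OOM
# over the alphabet {0, 1}. Everything below (toy source, prefix/suffix
# sets, truncation threshold) is an assumption for this example only.

rng = np.random.default_rng(0)

# Toy data source: a 2-state Markov chain that emits its current state.
T = np.array([[0.9, 0.1],
              [0.3, 0.7]])     # row-stochastic transition matrix
pi = np.array([0.5, 0.5])      # initial state distribution

def sample_sequence(length):
    s = rng.choice(2, p=pi)
    out = []
    for _ in range(length):
        out.append(s)
        s = rng.choice(2, p=T[s])
    return out

data = [sample_sequence(6) for _ in range(20000)]

def p_hat(w):
    """Empirical probability that a sequence begins with the word w."""
    return float(np.mean([seq[:len(w)] == list(w) for seq in data]))

prefixes = [(), (0,), (1,)]    # index 0 must be the empty word
suffixes = [(), (0,), (1,)]

# Hankel matrix F[i, j] = P(prefix_i suffix_j) and its one-symbol shifts.
F = np.array([[p_hat(u + v) for v in suffixes] for u in prefixes])
F_a = {a: np.array([[p_hat(u + (a,) + v) for v in suffixes]
                    for u in prefixes])
       for a in (0, 1)}

# Observable operators via a rank-truncated pseudoinverse (the true
# model has rank 2, so tiny singular values are sampling noise).
F_pinv = np.linalg.pinv(F, rcond=1e-2)
B = {a: F_a[a] @ F_pinv for a in (0, 1)}
c_eps = F[:, 0]                # state vector for the empty suffix

def model_prob(w):
    """Model estimate of P(sequence begins with w)."""
    c = c_eps
    for a in reversed(w):      # operator for the last symbol acts first
        c = B[a] @ c
    return float(c[0])         # component for the empty prefix

print(model_prob((0, 0)))      # close to the true value 0.45
```

Many proposed algorithms differ mainly in how they choose the prefix and suffix sets, weight the Hankel entries, or truncate the pseudoinverse, which is why they can be seen as variations of this one scheme.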