This paper describes sufficient conditions for the existence of optimal policies for partially observable Markov decision processes (POMDPs) with Borel state, observation, and action sets, when the ...
Markov models for disease progression are common in medical decision making (see references below). The parameters in a Markov model can be estimated by observing the time it takes patients in any ...
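The abstract above is truncated, but the estimation step it describes can be sketched. Assuming a discrete-time chain observed at regular visit intervals, the maximum-likelihood estimate of the transition matrix is simply the row-normalized matrix of observed transition counts. The function name and the toy patient trajectories below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def estimate_transition_matrix(sequences, n_states):
    """MLE of a Markov chain's transition matrix from observed
    state sequences: count transitions i -> j, normalize each row."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for i, j in zip(seq[:-1], seq[1:]):
            counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # leave never-visited states as zero rows
    return counts / row_sums

# Two hypothetical patient trajectories over 3 disease states (0 = mild,
# 1 = moderate, 2 = severe), recorded at regular check-ups.
sequences = [[0, 0, 1, 1, 2], [0, 1, 1, 2, 2]]
P_hat = estimate_transition_matrix(sequences, n_states=3)
print(P_hat)
```

Each row of `P_hat` is a probability distribution over next states; with so few observations the estimates are crude, which is one motivation for the interval-censored methods such papers develop.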
For uniformly ergodic Markov chains, we obtain new perturbation bounds which relate the sensitivity of the chain under perturbation to its rate of convergence to stationarity. In particular, we derive ...
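The flavor of such perturbation bounds can be illustrated numerically. The sketch below (a toy two-state chain and perturbation of my own choosing, not from the paper) computes the stationary distributions of a chain and a perturbed copy, and compares the resulting total-variation change against a crude estimate of the form (perturbation size) / (spectral gap), using the gap as a proxy for the convergence rate:

```python
import numpy as np

def stationary(P, tol=1e-12):
    """Stationary distribution of an ergodic chain via power iteration."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    while True:
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            return nxt
        pi = nxt

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
E = np.array([[-0.01, 0.01],
              [0.01, -0.01]])  # perturbation; rows sum to 0 so P + E is stochastic

pi = stationary(P)            # approx [2/3, 1/3]
pi_tilde = stationary(P + E)
tv_change = 0.5 * np.abs(pi - pi_tilde).sum()

gap = 1.0 - 0.7               # second eigenvalue of this P is 0.7
crude = np.abs(E).sum(axis=1).max() / gap
print(tv_change, crude)
```

Here the faster the chain mixes (the larger the gap), the less its stationary distribution moves under perturbation, which is the qualitative relationship the abstract describes; the actual bounds in the paper are sharper and stated for uniformly ergodic chains on general state spaces.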