Estimation, Inference and Learning of Partially-Observed Dynamical Systems
Date
2019-02-25
Abstract
Demand for learning, design, and decision making is higher than ever before. Autonomous
vehicles need to learn how to drive safely by recognizing pedestrians, traffic signs, and other cars.
Companies and consumers need to identify changes in their environment and adapt their
strategies quickly to stay competitive. The complexity of biological systems necessitates
integrating biological knowledge with mathematical models to find effective treatments
for many chronic and fatal diseases. This dissertation addresses some of the critical issues concerning
estimation, identification, and learning of complex dynamical systems observed through noisy data.
Nonlinear state-space models are a popular class of time series models with numerous applications
in fields such as cyber-physical systems, economics, and biology. However, existing inference
techniques become computationally intractable for large systems or for systems with big data sets,
two common scenarios in many real-world applications. We have developed
a multi-fidelity Bayesian optimization algorithm for the inference of general nonlinear state-space
models (MFBO-SSM), which enables simultaneous sequential selection of parameters and approximators.
The accuracy and speed of the algorithm are demonstrated by numerical experiments
using synthetic gene expression data from a gene regulatory network model and real data from the
VIX stock price index.
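To make the idea concrete, the following is a minimal sketch of a cost-aware multi-fidelity Bayesian optimization loop in the spirit of MFBO-SSM. The Gaussian-process surrogate over joint (parameter, fidelity) inputs, the synthetic log-likelihood oracle, and all names (rbf_kernel, log_likelihood_at_fidelity, COSTS) are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, length=0.5):
    # Squared-exponential kernel over joint (parameter, fidelity) inputs.
    d = A[:, None, :] - B[None, :, :]
    return np.exp(-0.5 * np.sum(d ** 2, axis=-1) / length ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Standard GP regression: posterior mean and variance at test points Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = np.diag(rbf_kernel(Xs, Xs) - Ks.T @ sol)
    return mu, np.maximum(var, 1e-12)

def log_likelihood_at_fidelity(theta, m):
    # Stand-in for an approximate log-likelihood of the state-space model,
    # e.g. a particle filter with more particles at higher fidelity m;
    # coarser fidelities return noisier values.
    return -(theta - 0.3) ** 2 + rng.normal(scale=0.5 / (m + 1))

COSTS = np.array([1.0, 10.0])        # simulation cost per fidelity (assumed)
thetas = np.linspace(0.0, 1.0, 50)   # 1-D parameter grid for illustration

X = [[rng.uniform(), 0.0] for _ in range(3)]   # cheap-fidelity initial design
y = [log_likelihood_at_fidelity(t, 0) for t, _ in X]

for _ in range(20):
    Xs = np.array([[t, m] for t in thetas for m in (0.0, 1.0)])
    mu, var = gp_posterior(np.array(X), np.array(y), Xs)
    # Cost-penalized UCB: favor informative points, discount expensive ones.
    acq = mu + 2.0 * np.sqrt(var) - 0.05 * COSTS[Xs[:, 1].astype(int)]
    t, m = Xs[np.argmax(acq)]
    X.append([t, m])
    y.append(log_likelihood_at_fidelity(t, int(m)))

print("best parameter so far:", X[int(np.argmax(y))][0])
```

The key design point this sketch illustrates is that the surrogate is defined over parameter-fidelity pairs, so a single acquisition step selects both which parameter to evaluate and how accurate (and costly) the likelihood approximation should be.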
Along with estimation and identification, control of dynamical systems has been at the center
of attention for many years. Markov decision processes (MDPs) provide a rich framework for
modeling dynamical systems in a variety of fields. Optimal control of an MDP with known
dynamics and finite state and action spaces is achievable within the dynamic programming (DP)
framework. However, in complex applications there is often uncertainty about the system dynamics.
In addition, many practical problems have large or continuous state and action spaces, which
hinders the direct application of DP. Reinforcement learning (RL) is a powerful technique widely used
for adaptive control of MDPs with unknown dynamics. Existing RL techniques developed for
MDPs with unknown dynamics rely on data that is acquired via interaction with the system or via
simulation. While this is feasible in areas such as robotics or speech recognition, in other applications
such as biology, manufacturing, cyber-physical systems, and marketing, reliable simulators are
often lacking and the real system may be inaccessible due to practical limitations, including
cost, ethical, and physical considerations. We have developed a Bayesian decision-making framework
for control of MDPs with unknown dynamics and large, possibly continuous, state, action,
and parameter spaces in data-poor environments. The effectiveness of the proposed framework
is demonstrated using a simple dynamical system model with continuous state and action spaces,
as well as a more complex model for a metastatic melanoma gene regulatory network observed
through noisy synthetic gene expression data.
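As a flavor of how Bayesian decision making with unknown dynamics can proceed, here is a hedged sketch in which a Gaussian posterior over an unknown drift parameter is updated from closed-loop data and a Thompson-style one-step lookahead selects the action. The linear-Gaussian system, conjugate update, and all constants are simplifying assumptions chosen for a closed-form posterior, not the dissertation's model.

```python
import numpy as np

rng = np.random.default_rng(1)
TRUE_A, NOISE = 0.8, 0.1             # unknown drift, known noise std (assumed)
ACTIONS = np.array([-1.0, 0.0, 1.0])

def step(x, u):
    # True (unknown to the controller) dynamics: x' = a*x + u + w.
    return TRUE_A * x + u + rng.normal(scale=NOISE)

def reward(x):
    return -x ** 2                   # drive the state to the origin

mu_a, var_a = 0.0, 1.0               # Gaussian prior over the unknown a
x = 2.0
for t in range(30):
    a_hat = rng.normal(mu_a, np.sqrt(var_a))   # Thompson sample of dynamics
    # One-step lookahead under the sampled model (a rollout planner would
    # replace this in larger problems).
    u = ACTIONS[np.argmax([reward(a_hat * x + act) for act in ACTIONS])]
    x_next = step(x, u)
    # Conjugate update of p(a | data) from the observation x' - u = a*x + w.
    prec = 1.0 / var_a + x ** 2 / NOISE ** 2
    mu_a = (mu_a / var_a + x * (x_next - u) / NOISE ** 2) / prec
    var_a = 1.0 / prec
    x = x_next

print(f"posterior over a: mean={mu_a:.3f}, sd={np.sqrt(var_a):.3f}")
```

The point of the sketch is the data-poor setting: every interaction both controls the system and sharpens the posterior over its dynamics, with no simulator in the loop.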
Finally, we have studied an instance of partially-observed dynamical systems with Boolean
state variables, called partially-observed Boolean dynamical systems (POBDS). This signal model
has applications in many areas, such as genomics/metagenomics, brain network signals, fault
propagation in sensor networks, communications, and more. We have developed a set of optimal tools
for this signal model, most of which are the first exact solutions for the entire class of nonlinear non-
Gaussian state-space models. These include the optimal minimum mean-square error (MMSE) state
estimator for a known POBDS, called the Boolean Kalman Smoother (BKS). For POBDS
with significant uncertainty in the modeling process, we developed maximum-likelihood
and optimal Bayesian adaptive filters for simultaneous estimation of the state and parameters,
capable of handling discrete, continuous, or mixed discrete/continuous parameters. In addition,
tools for control and learning for this signal model have been introduced. The performance and
applicability of all methods have been demonstrated on important problems in the genomics domain.
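For intuition, the sketch below implements the forward filtering recursion that underlies MMSE state estimation in a POBDS (a single Boolean Kalman Filter step; the smoother adds a backward pass over the same quantities). The two-gene network function, perturbation rate p, and Bernoulli observation noise q are illustrative assumptions, not a model from the dissertation.

```python
import itertools
import numpy as np

n, p, q = 2, 0.05, 0.1                 # genes, perturbation prob, obs noise
STATES = np.array(list(itertools.product([0, 1], repeat=n)))  # all 2^n states

def boolean_net(x):
    # Example network function: x1' = NOT x2, x2' = x1 OR x2.
    return np.array([1 - x[1], x[0] | x[1]])

# Transition matrix: each gene flips independently with probability p
# after the deterministic network update.
M = np.zeros((2 ** n, 2 ** n))
for i, x in enumerate(STATES):
    fx = boolean_net(x)
    for j, xn in enumerate(STATES):
        flips = np.sum(xn != fx)
        M[j, i] = p ** flips * (1 - p) ** (n - flips)

def bkf_step(pi, y):
    # Predict: push the posterior through the transition matrix.
    pi = M @ pi
    # Update: Bernoulli observation model, each bit of y flips with prob q.
    lik = np.prod(np.where(STATES == y, 1 - q, q), axis=1)
    pi = lik * pi
    pi /= pi.sum()
    # MMSE estimate: threshold the posterior mean of each gene at 1/2.
    x_hat = (STATES.T @ pi >= 0.5).astype(int)
    return pi, x_hat

pi = np.full(2 ** n, 1 / 2 ** n)       # uniform initial distribution
for y in [np.array([1, 0]), np.array([1, 1]), np.array([0, 1])]:
    pi, x_hat = bkf_step(pi, y)
    print("observation", y, "-> MMSE state estimate", x_hat)
```

Because the Boolean state space is finite, the posterior distribution can be carried exactly as a vector over all 2^n states, which is what makes exact MMSE estimation possible here where general nonlinear non-Gaussian models require approximation.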
Keywords
Dynamical Systems, Bayesian Optimization, Reinforcement Learning, Machine Learning, Inference, Hidden Markov Model