The human brain has a remarkable ability to model all sorts of data, letting us estimate things, predict things, identify things, and learn things in ways that would traditionally require complex, domain-specific modeling algorithms if they were done by a computer or by a scientist with pencil and paper. A new paper, Hierarchical Models in the Brain, by Karl Friston of the Wellcome Trust Centre for Neuroimaging at University College London, offers a possible explanation. Friston describes hierarchical dynamic models and a generic method for their inversion, and he goes on to argue that "the brain has evolved the necessary anatomical and physiological equipment to implement this inversion, given sensory data." This means the brain could use a single Bayesian mechanism to implement a wide range of algorithms: models with unknown parameters, such as state-space models, probabilistic dynamic models, static models, neural networks, nonlinear system identification, general linear models, and identification of nonlinear dynamic systems; models with unknown states, such as estimation with static models, hierarchical linear observation models, covariance component estimation, Gaussian process models, and deconvolution; and even models with unknown states and parameters, such as principal components analysis (PCA), factor analysis, probabilistic PCA, independent component analysis (ICA), sparse coding, and blind deconvolution. Unsurprisingly, the paper has a lot of math in it, but even if you skip the math, it's an interesting read.
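To make the idea of "inversion" a little more concrete, here is a minimal sketch in Python of the simplest possible case: a one-dimensional, two-level Gaussian model, where inverting the model means recovering a posterior belief about a hidden cause from a noisy observation. This is purely illustrative and is not the variational scheme Friston actually develops in the paper; the function and parameter names (`invert`, `pi_obs`, and so on) are my own.

```python
# A toy two-level hierarchical Gaussian model:
#   level 2 (prior):      v ~ N(mu_prior, 1/pi_prior)
#   level 1 (likelihood): y = v + noise, noise ~ N(0, 1/pi_obs)
# "Inversion" here means computing the posterior over the hidden
# cause v given the sensory datum y. In this linear-Gaussian case
# the posterior is available in closed form: precisions add, and
# the posterior mean is a precision-weighted average of the prior
# expectation and the data.

def invert(y, mu_prior, pi_prior, pi_obs):
    """Return the posterior mean and precision of the hidden cause v."""
    pi_post = pi_prior + pi_obs
    mu_post = (pi_prior * mu_prior + pi_obs * y) / pi_post
    return mu_post, pi_post

# Example: a vague prior centered at 0 combined with a precise
# observation at 2.0 yields a posterior pulled toward the data.
mu, pi = invert(y=2.0, mu_prior=0.0, pi_prior=1.0, pi_obs=4.0)
print(mu, pi)  # (0*1 + 2*4)/5 = 1.6, precision 5
```

The appeal of the hierarchical story is that the same precision-weighted update can be stacked: the posterior at one level becomes the prior at the level below, which is the structure Friston maps onto cortical hierarchies.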