"Deep Learning" -- Artificial Neural Networks coupled to high-performance computing -- has revolutionized machine learning since it burst upon the stage a little more than a decade ago. It has been used with startling success to attack difficult problems such as natural language processing, voice recognition, image classification, dimensional reduction, and many others, changing the rate of progress in these fields from incremental to exponential. Most success stories in deep learning have been about engineering-type tasks -- recognizing stop signs, recognizing faces, translating documents, etc. Interest has recently developed in applying deep learning to more strictly scientific modeling problems -- "AI For Science" -- among researchers and funding agencies, with a push to find out what opportunities exist to apply deep learning to scientific tasks, and how those opportunities might be addressed.
I will review some ideas from a subfield of deep learning: "Probabilistic", or "Variational", deep learning, in which the objective is not to use data to learn to approximate a function, but rather to learn to approximate the probability distribution that gave rise to the data. This flexible family of density-estimation techniques is quite promising both for scientific modeling and for uncertainty quantification -- more so, I would argue, than "classical" deep learning. I will discuss some examples from work in progress on probabilistic forecasting, posterior density sampling, and computational fluid dynamics.
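The core idea -- learning the distribution behind the data rather than a function of the data -- can be illustrated with a deliberately minimal toy example (my own sketch, not taken from the work described above): fitting the parameters of a Gaussian density model by gradient ascent on the log-likelihood of observed samples. Probabilistic deep learning replaces this two-parameter model with a deep network, but the objective is the same.

```python
import numpy as np

# Density estimation in miniature: instead of fitting a function y = f(x),
# we fit the parameters of a probability model p(x | mu, sigma) by gradient
# ascent on the mean log-likelihood of the observed data.
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=0.5, size=1000)  # samples from the "true" distribution

mu, log_sigma = 0.0, 0.0   # learnable parameters (log_sigma keeps sigma > 0)
lr = 0.1
for _ in range(500):
    sigma = np.exp(log_sigma)
    # Gradients of the mean Gaussian log-likelihood w.r.t. mu and log_sigma
    grad_mu = np.mean((data - mu) / sigma**2)
    grad_log_sigma = np.mean((data - mu)**2 / sigma**2 - 1.0)
    mu += lr * grad_mu
    log_sigma += lr * grad_log_sigma

print(mu, np.exp(log_sigma))  # converges to the sample mean and std of `data`
```

At the optimum the fitted density matches the empirical distribution of the data, which is exactly the property that makes such models useful for uncertainty quantification: the model reports a distribution, not a point estimate.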