
Bayesian inference and big data: are we there yet? by Jose Luis Hidalgo at Big Data Spain 2017

Bayesian inference has a reputation of being complex and only applicable to fairly small datasets. Very recent developments may be changing that: we are starting to see some successful real-world applications of Bayesian inference to very large datasets, and some tools that make it accessible to many more data science practitioners.

https://www.bigdataspain.org/2017/talk/bayesian-inference-and-big-data-are-we-there-yet

Big Data Spain 2017
November 16th-17th, Kinépolis Madrid

Big Data Spain

November 30, 2017

Transcript

1. Clarification of some concepts
   Bayesian:
   - "Bayes rule"
   - "Bayesian statistics" (vs. frequentist statistics)
   - "Reverse probability" (Fisher's definition)
   - "Bayesian models"!
2. Clarification of some concepts
   Inference:
   - In classic statistics: "inferential" vs. "descriptive"
   - In machine learning: "inference" vs. "training"
   - In Bayesian statistics: estimation of parameters from data...
     - ... to make predictions
     - ... to validate the model
3. Clarification of some concepts
   Big Data:
   - As many definitions as there are vendors interested in selling you something!
   - Incremental vs. something new
   - In our case: "big data" as the fact that we use increasingly larger amounts of data to get to some information/insight (we manage to extract weaker signals from oceans of noise)
4. A bit of history
   Early Bayesian models:
   - Treated analytically
   - Limited to what can be treated analytically... duh!
   Nineties: MCMC
   - Offers (the promise of) generic inference algorithms
   - Very hard and computationally expensive
   - Variational inference as an (even harder) alternative
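
A minimal random-walk Metropolis sampler shows why MCMC is both generic and expensive: it only needs an unnormalized (log) density, but pays a full density evaluation per proposal. This is an illustrative sketch, not code from the talk; the standard-normal target and step size are arbitrary choices:

```python
import numpy as np

def log_target(theta):
    # Unnormalized log density we want to sample from. In a real model this
    # would be log-likelihood + log-prior, evaluated over all the data at
    # every step -- hence "computationally expensive".
    return -0.5 * theta ** 2  # standard normal, for illustration

def metropolis(n_samples, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    theta, draws = 0.0, np.empty(n_samples)
    for i in range(n_samples):
        proposal = theta + step * rng.normal()
        # Accept with probability min(1, p(proposal) / p(theta)).
        if np.log(rng.uniform()) < log_target(proposal) - log_target(theta):
            theta = proposal
        draws[i] = theta
    return draws

draws = metropolis(50_000)
print(draws.mean(), draws.std())  # should be close to 0 and 1
```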
5. A bit of history
   Oughties: Probabilistic programming
   - Standard ways to explain probabilistic models to a computer
   - Bayesian models are a subset of probabilistic models
   - JAGS, BUGS, Stan...
   - Further developments: HMC, Gibbs sampling...
   - Becomes quite popular in academic circles
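
As a sketch of what "explaining a probabilistic model to a computer" looks like, here is a small Bayesian linear regression in PyMC3, one of the libraries in this family (mentioned among the tools at the end of the talk); the data is synthetic and the priors are arbitrary illustrative choices:

```python
import numpy as np
import pymc3 as pm

# Synthetic data: y = 1 + 2x + noise.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=200)

with pm.Model() as model:
    # Priors: what we believe about the parameters before seeing data.
    alpha = pm.Normal("alpha", mu=0.0, sd=10.0)
    beta = pm.Normal("beta", mu=0.0, sd=10.0)
    sigma = pm.HalfNormal("sigma", sd=1.0)

    # Likelihood: how the observed data depends on the parameters.
    pm.Normal("y_obs", mu=alpha + beta * x, sd=sigma, observed=y)

    # Generic inference, independent of the particular model above.
    trace = pm.sample(1000, tune=1000)

print(pm.summary(trace))
```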
6. A bit of history
   Tens: "Practical" probabilistic programming
   - Further advances in inference: NUTS, ADVI...
   - New technologies to speed up computations:
     - GPU parallelization
     - Automatic differentiation
   - "Tall" datasets (very large number of cases)
   - "Wide" datasets (very large number of features)
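
Continuing the hypothetical PyMC3 model above, the newer algorithms this slide lists are exposed as near one-liners: NUTS is a gradient-based MCMC step, and ADVI fits a variational approximation instead of sampling, trading exactness for speed on large datasets. Again a sketch, reusing `model` from the previous example:

```python
with model:
    # NUTS: gradient-based MCMC (PyMC3's default for continuous models).
    nuts_trace = pm.sample(1000, tune=1000, step=pm.NUTS())

    # ADVI: automatic differentiation variational inference. Instead of
    # sampling the posterior, optimize an approximation to it, then draw
    # cheap samples from that approximation.
    approx = pm.fit(n=20_000, method="advi")
    advi_trace = approx.sample(1000)
```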
7. Some sample applications
   From cognitive science:
   - Exactly the opposite of what our NN friends are trying to do!
   - Models of human memory, of language understanding, etc.
   - Bayesian models are very well suited for these kinds of studies
   From fin-tech:
   - Large copula models become tractable using (Bayesian) inference algorithms
8. Some sample applications
   From AI:
   - Generative image recognition systems
   From business operations:
   - The inventory information problem
   - Probabilistic model of inventory
   - Enables operational optimization
9. Conclusions
   If you are a data science practitioner:
   - Familiarize yourself with these kinds of models
   - Learn about tools and libraries: Stan, PyMC3, Edward, etc.
   If you are responsible for technical infrastructure:
   - Leveraging big data will require big compute...
   - ... and not only for neural networks!
   If you are responsible for a business:
   - Ask for more
   - Then ask again!