
Insights from the workshop on Bayesian deep learning at NeurIPS 2021

By ajitjaokar
Bayesian strategies are making their way into deep learning systems.

I have been exploring Bayesian strategies for the last year. Given the limitations of neural network strategies (e.g. their need for large volumes of data) and the scenarios where we will never have enough data to model the problem, some Bayesian approaches could offer an alternative.

In that sense, I was interested to see a special workshop on Bayesian deep learning at NeurIPS 2021. The focus of this year's program was BDL methodologies and techniques in downstream / real-world tasks.

I studied the abstracts of the papers (listed below) and chose six that I found interesting, which I list below. They share a common thread: real problems that are not addressable using traditional neural network models due to lack of data.

The key lessons for me are:

  1. Bayesian strategies are increasingly being employed with neural networks (this was, after all, the theme of the workshop)
  2. Bayesian strategies are being employed to overcome the limitations of neural networks (e.g. the availability of data)
  3. Bayesian neural networks are being employed in real-life / mission-critical applications
  4. Bayesian neural networks are being explored in conjunction with advanced neural network strategies, e.g. transformers
  5. Incorporating expert knowledge into neural networks through Bayesian techniques could extend neural networks into out-of-domain areas (e.g. weather prediction)

Interesting papers below

Analytically Tractable Inference in Neural Networks – An Alternative to Backpropagation

Until now, neural networks have predominantly relied on backpropagation and gradient descent as the inference engine to learn a network's parameters. This is primarily because closed-form Bayesian inference for neural networks has been considered intractable. This short paper outlines a new analytical method for performing tractable approximate Gaussian inference (TAGI) in Bayesian neural networks.
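
TAGI itself performs analytic Gaussian inference layer by layer in a deep network, which is beyond a short sketch. As a much simpler illustration of the underlying idea — learning a Gaussian posterior over parameters in closed form, with no backpropagation — here is conjugate Bayesian linear regression in numpy. This is not the paper's method, only the principle it scales up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 2x + noise
X = rng.normal(size=(50, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=50)
Phi = np.hstack([X, np.ones((50, 1))])   # design matrix with a bias column

alpha = 1.0            # prior precision on the weights
beta = 1.0 / 0.5**2    # noise precision

# Closed-form Gaussian posterior over weights -- no gradient descent:
#   Sigma = (alpha*I + beta*Phi^T Phi)^-1,  mu = beta * Sigma @ Phi^T y
Sigma = np.linalg.inv(alpha * np.eye(2) + beta * Phi.T @ Phi)
mu = beta * Sigma @ Phi.T @ y

# Predictive mean and variance at a new input x = 1 (plus bias term)
x_new = np.array([1.0, 1.0])
pred_mean = x_new @ mu
pred_var = 1.0 / beta + x_new @ Sigma @ x_new
```

The appeal is that both the parameter posterior and the predictive variance come out of a single matrix computation; TAGI's contribution is making an approximation of this tractable for deep, nonlinear networks.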

Pathologies in Priors and Inference for Bayesian Transformers

The authors explore transformer models in terms of predictive uncertainty using Bayesian inference, and examine the pathologies that arise in their priors and inference.

Deep Bayesian Learning for Car Hacking Detection

The authors investigate Deep Bayesian Learning models to detect and analyze car hacking behaviors. Bayesian learning methods can capture the uncertainty of the data and avoid overconfidence issues.
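
The abstract does not say which Bayesian method the authors use to capture uncertainty. One common, lightweight option is Monte Carlo dropout: keep dropout active at test time and read the spread of stochastic forward passes as predictive uncertainty. A minimal numpy sketch with made-up weights (standing in for a trained detector):

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy two-layer network with fixed weights (stand-in for a trained model).
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))

def mc_dropout_predict(x, n_samples=200, p_drop=0.5):
    """Keep dropout *on* at test time and average stochastic passes."""
    preds = []
    for _ in range(n_samples):
        h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
        mask = rng.random(h.shape) > p_drop    # random dropout mask
        h = h * mask / (1.0 - p_drop)          # inverted dropout scaling
        preds.append(h @ W2)
    preds = np.array(preds)
    # Mean is the prediction; std is a cheap uncertainty estimate.
    return preds.mean(axis=0), preds.std(axis=0)

x = rng.normal(size=(1, 4))                    # one (synthetic) CAN-bus feature vector
mean, std = mc_dropout_predict(x)
```

A large `std` flags inputs the model is unsure about — exactly the overconfidence problem the paper targets, since a plain deterministic network would output a single number with no such signal.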

Precision Agriculture Based on Bayesian Neural Network

  • Precision agriculture, which uses various sources of information to manage crop production, has become an important approach to mitigating the food supply problem around the world. Accurate prediction of crop yield is the main task of precision agriculture.
  • Neural networks are notoriously data-hungry, and data collection in agriculture is expensive and time-consuming.
  • A Bayesian neural network, which extends the neural network with Bayesian inference, is useful under such circumstances.
  • Moreover, Bayesian inference allows estimating the uncertainty associated with a prediction, which makes the result more reliable.
  • In this paper, a Bayesian neural network was applied to a small dataset, and the results show that the Bayesian neural network is more reliable under such circumstances.
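
The small-data point can be made concrete: a Bayesian treatment reports wider uncertainty when data is scarce, rather than an overconfident point estimate. A toy sketch with synthetic inputs (not real crop-yield data), using the closed-form posterior of Bayesian linear regression:

```python
import numpy as np

rng = np.random.default_rng(2)

def slope_posterior_std(n, alpha=1.0, beta=4.0):
    """Posterior std of the slope in Bayesian linear regression,
    given n synthetic 'field observations'. alpha is the prior
    precision, beta the noise precision."""
    x = rng.normal(size=(n, 1))
    Sigma = np.linalg.inv(alpha * np.eye(1) + beta * x.T @ x)
    return float(np.sqrt(Sigma[0, 0]))

# With 5 observations the model admits far more uncertainty
# than with 500 -- the reliability the paper argues for.
small = slope_posterior_std(5)
large = slope_posterior_std(500)
```

The same qualitative behavior is what a Bayesian neural network provides on small agricultural datasets: the predictive intervals honestly widen instead of the model pretending to know.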

Benchmark for Out-of-Distribution Detection in Deep Reinforcement Learning

Out-of-distribution detection for RL is generally not well covered in the literature, and there is a lack of benchmarks for this task. The authors propose a benchmark to evaluate OOD detection methods in a reinforcement learning setting, by modifying the physical parameters of non-visual standard environments or corrupting the state observations of visual environments.
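
The two benchmark constructions described — perturbed physics and corrupted observations — can be sketched abstractly. The function and parameter names below (`corrupt_observation`, `pole_mass`) are hypothetical illustrations, not the paper's API, and the paper's actual corruptions may differ:

```python
import numpy as np

rng = np.random.default_rng(3)

def corrupt_observation(obs, severity=0.1):
    """Make an in-distribution state OOD by adding Gaussian noise
    scaled by `severity` (one simple corruption style)."""
    return obs + severity * rng.normal(size=obs.shape)

def modify_physics(params, scale=1.5):
    """Make an environment OOD by rescaling its physical
    parameters, e.g. a heavier pole or stronger gravity."""
    return {k: v * scale for k, v in params.items()}

obs = np.zeros(4)                        # e.g. a CartPole-like state vector
ood_obs = corrupt_observation(obs, severity=0.5)
ood_params = modify_physics({"pole_mass": 0.1, "gravity": 9.8})
```

An OOD detector is then scored on how reliably it separates trajectories from the modified environments from those of the unmodified ones.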

An Empirical Analysis of Uncertainty Estimation in Genomics Applications

The authors present an empirical analysis of uncertainty estimation approaches in deep learning models for genomic applications.

Robust Calibration For Improved Weather Prediction Under Distributional Shift

  • The authors present preliminary results on improving out-of-domain weather prediction and uncertainty estimation as part of the Shifts Challenge on Robustness and Uncertainty under Real-World Distributional Shift.
  • They find that by leveraging a mixture of experts, in conjunction with an advanced data augmentation technique borrowed from the computer vision domain and robust post-hoc calibration of predictive uncertainties, they can potentially achieve more accurate and better-calibrated results with deep neural networks than with boosted tree models for tabular data.
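
The "robust post-hoc calibration" step can be illustrated with the simplest such method, temperature scaling — chosen here for illustration; the paper may use a different calibrator. A single scalar T is fit on held-out data to soften (or sharpen) the logits without changing which class wins:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    """Negative log-likelihood of labels under temperature-scaled logits."""
    p = softmax(logits / T)
    return -np.log(p[np.arange(len(labels)), labels]).mean()

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Post-hoc calibration: pick the single scalar T that minimizes
    validation NLL. Accuracy is unchanged (argmax is T-invariant)."""
    return min(grid, key=lambda T: nll(logits, labels, T))

# Synthetic validation logits: mostly right, but noisy and overconfident.
rng = np.random.default_rng(4)
labels = rng.integers(0, 3, size=200)
logits = rng.normal(scale=0.5, size=(200, 3))
logits[np.arange(200), labels] += 5.0            # confident on the true class...
logits += rng.normal(scale=2.0, size=(200, 3))   # ...with noisy perturbations
T = fit_temperature(logits, labels)
```

Because T only rescales confidence, it is a cheap, robust final stage to bolt onto any classifier — which is why it pairs naturally with the mixture-of-experts and augmentation steps described above.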

The full list of papers is below.

Unveiling mode-connectivity of the ELBO landscape           

Infinite-channel deep convolutional Stable neural networks 

Analytically Tractable Inference in Neural Networks – An Alternative to Backpropagation        

Pathologies in Priors and Inference for Bayesian Transformers      

Resilience of Bayesian Layer-Wise Explanations under Adversarial Attacks          

An Empirical Comparison of GANs and Normalizing Flows for Density Estimation

Reproducible, incremental representation learning with Rosetta VAE         

Being a Bit Frequentist Improves Bayesian Neural Networks          

Stochastic Local Winner-Takes-All Networks Enable Profound Adversarial Robustness    

Non-stationary Gaussian process discriminant analysis with variable selection for high-dimensional functional data       

Benchmarking Bayesian Deep Learning on Diabetic Retinopathy Detection Tasks

Deep Classifiers with Label Noise Modeling and Distance Awareness       

Information-theoretic stochastic contrastive conditional GAN: InfoSCC-GAN         

Generalization Gap in Amortized Inference  

Evaluating Predictive Uncertainty and Robustness to Distributional Shift Using Real World Data     

Uncertainty Quantification in End-to-End Implicit Neural Representations for Medical Imaging          

Generation of data on discontinuous manifolds via continuous stochastic non-invertible networks     

Uncertainty Baselines: Benchmarks for Uncertainty & Robustness in Deep Learning           

Deep Bayesian Learning for Car Hacking Detection

Power-law asymptotics of the generalization error for GP regression under power-law priors and targets       

Contrastive Representation Learning with Trainable Augmentation Channel          

Structured Stochastic Gradient MCMC: a hybrid VI and MCMC approach  

An Empirical Study of Neural Kernel Bandits

On Feature Collapse and Deep Kernel Learning for Single Forward Pass Uncertainty           

Greedy Bayesian Posterior Approximation with Deep Ensembles   

On Symmetries in Variational Bayesian Neural Nets

Certifiably Robust Variational Autoencoders 

Contrastive Generative Adversarial Network for Anomaly Detection           

Kronecker-Factored Optimal Curvature        

Mixtures of Laplace Approximations for Improved Post-Hoc Uncertainty in Deep Learning         

Exploring the Limits of Epistemic Uncertainty Quantification in Low-Shot Settings 

Laplace Approximation with Diagonalized Hessian for Over-parameterized Neural Networks        

Multimodal Relational VAE    

Progress in Self-Certified Neural Networks  


Gaussian dropout as an information bottleneck layer           

Decomposing Representations for Deterministic Uncertainty Estimation    

Precision Agriculture Based on Bayesian Neural Network   

Relaxed-Responsibility Hierarchical Discrete VAEs 

Dependence between Bayesian neural network units          

The Peril of Popular Deep Learning Uncertainty Estimation Methods         

Depth Uncertainty Networks for Active Learning      

Mixture-of-experts VAEs can disregard variation in surjective multimodal data      

Can Network Flatness Explain the Training Speed-Generalisation Connection?    

Benchmark for Out-of-Distribution Detection in Deep Reinforcement Learning       

Dropout and Ensemble Networks for Thermospheric Density Uncertainty Estimation           

On Efficient Uncertainty Estimation for Resource-Constrained Mobile Applications           

Object-Factored Models with Partially Observable State     

Likelihood-free Density Ratio Acquisition Functions are not Equivalent to Expected Improvements

Embedded-model flows: Combining the inductive biases of model-free deep learning and explicit probabilistic modeling     

Evaluating Deep Learning Uncertainty Quantification Methods for Neutrino Physics Applications    

Constraining cosmological parameters from N-body simulations with Bayesian Neural Networks        

Latent Goal Allocation for Multi-Agent Goal-Conditioned Self-Supervised Imitation Learning         

Reliable Uncertainty Quantification of Deep Learning Models for a Free Electron Laser Scientific Facility          

Fast Finite Width Neural Tangent Kernel      

Bayesian Inference in Augmented Bow Tie Networks          

Biases in Variational Bayesian Neural Networks      

The Dynamics of Functional Diversity throughout Neural Network Training

Robust outlier detection by de-biasing VAE likelihoods        

Revisiting the Structured Variational Autoencoder   

Posterior Temperature Optimization in Variational Inference for Inverse Problems

Adversarial Learning of a Variational Generative Model with Succinct Bottleneck Representation           

Stochastic Pruning: Fine-Tuning, and PAC-Bayes bound optimization       

Towards Robust Object Detection: Bayesian RetinaNet for Homoscedastic Aleatoric Uncertainty Modeling 

Federated Functional Variational Inference  

Reflected Hamiltonian Monte Carlo  

Hierarchical Topic Evaluation: Statistical vs. Neural Models

Reducing redundancy in Semantic-KITTI: Study on data augmentations within Active Learning         

An Empirical Analysis of Uncertainty Estimation in Genomics Applications

Robust Calibration For Improved Weather Prediction Under Distributional Shift     

Diversity is All You Need to Improve Bayesian Model Averaging    

SAE: Sequential Anchored Ensembles