Mixture Density Networks with Julia

This post is also available as a Jupyter notebook. Related posts: JavaScript implementation, TensorFlow implementation, PyTorch implementation. This post is the result of my trying to understand how to do deep learning in Julia using the excellent Flux package, as well as to get a better understanding of conditional density estimation using a simple but effective technique: Mixture Density Networks (Bishop, 1994). This post follows the PyTorch implementation listed above very closely (including paraphrasing some statements), which was itself adapted from the original TensorFlow and JavaScript implementations. [Read More]
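At the heart of a Mixture Density Network, the output layer parameterizes a Gaussian mixture and the training objective is the mixture's negative log-likelihood. As a rough, framework-free sketch (in Python/NumPy rather than the post's Julia/Flux, and with made-up array shapes for illustration), that loss looks like:

```python
import numpy as np

def mdn_nll(pi, mu, sigma, y):
    """Negative log-likelihood of targets y under a 1-D Gaussian mixture.

    pi, mu, sigma: arrays of shape (n_samples, n_components) — the mixing
    weights, means, and standard deviations an MDN's output layer would
    produce for each input. y: targets of shape (n_samples,).
    """
    # Gaussian density of each component at each target, shape (n_samples, n_components)
    comp = np.exp(-0.5 * ((y[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    # mixture density per sample: weighted sum over components
    mix = (pi * comp).sum(axis=1)
    return -np.log(mix).mean()

# Single standard-normal component evaluated at y = 0:
nll = mdn_nll(np.array([[1.0]]), np.array([[0.0]]), np.array([[1.0]]), np.array([0.0]))
```

A deep-learning framework such as Flux would compute this same quantity on the network's outputs and backpropagate through it.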

Reinforcement learning with policy gradients in pure Python

This post is also available as a Jupyter notebook. It appears to be a rite of passage for ML bloggers covering reinforcement learning to show how to implement the simplest algorithms from scratch, without relying on any fancy frameworks. There is Karpathy’s now-famous Pong from Pixels, and a simple Google search for “policy gradient from scratch” yields a number of implementations with varying levels of detail. [Read More]
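To give a flavour of what "from scratch" means here, this is a minimal sketch of the policy-gradient idea (REINFORCE without a baseline) on a toy two-armed bandit, using only the standard library; the problem setup and all parameter values are illustrative, not taken from the post:

```python
import math
import random

random.seed(0)

def softmax(prefs):
    exps = [math.exp(p) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def train_bandit(true_means=(0.2, 0.8), episodes=2000, lr=0.1):
    """REINFORCE on a 2-armed Bernoulli bandit: nudge the policy parameters
    in the direction of grad log pi(a), scaled by the reward received."""
    prefs = [0.0, 0.0]  # action preferences (the policy's parameters)
    for _ in range(episodes):
        probs = softmax(prefs)
        # sample an action from the current stochastic policy
        a = 0 if random.random() < probs[0] else 1
        r = 1.0 if random.random() < true_means[a] else 0.0
        # policy-gradient update: d/d_prefs[i] log pi(a) = 1[i == a] - probs[i]
        for i in range(2):
            grad = (1.0 if i == a else 0.0) - probs[i]
            prefs[i] += lr * r * grad
    return softmax(prefs)

probs = train_bandit()
# the learned policy should come to favour the better arm (index 1)
```

The same update rule, applied to a neural-network policy and whole episode returns, is essentially what the fancier implementations do.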

Check your "correlation" matrix

Stochastic simulation is an essential tool for many businesses to play out likely and unlikely scenarios. For example, in insurance it is used for capital modelling requirements. An EU directive stipulates that an insurer should hold enough capital to meet its obligations over a 12-month period at a 99.5% confidence level (often quoted as: the chance of the insurer being ruined during the year should be no more than 1 in 200). [Read More]
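Simulations like these are typically driven by a matrix of pairwise correlations between risks, and a matrix of individually plausible pairwise values is not automatically a valid correlation matrix. As a hedged sketch (the function name and example matrix are mine, not from the post), a basic validity check is: symmetric, unit diagonal, and positive semi-definite:

```python
import numpy as np

def is_valid_correlation_matrix(C, tol=1e-8):
    """Check the three defining properties of a correlation matrix:
    symmetry, a unit diagonal, and positive semi-definiteness."""
    C = np.asarray(C, dtype=float)
    if C.ndim != 2 or C.shape[0] != C.shape[1]:
        return False
    if not np.allclose(C, C.T, atol=tol):
        return False
    if not np.allclose(np.diag(C), 1.0, atol=tol):
        return False
    # all eigenvalues of the symmetric matrix must be >= 0 (up to tolerance)
    return bool(np.linalg.eigvalsh(C).min() >= -tol)

# Each pairwise value is a legal correlation, yet jointly they are impossible:
bad = [[ 1.0, 0.9, -0.9],
       [ 0.9, 1.0,  0.9],
       [-0.9, 0.9,  1.0]]

ok = [[1.0, 0.5, 0.5],
      [0.5, 1.0, 0.5],
      [0.5, 0.5, 1.0]]
```

Feeding a non-PSD "correlation" matrix into a simulation (e.g. via a Cholesky factorization, which will simply fail) is exactly the kind of pitfall the post's title warns about.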