In current deep learning research, model architecture selection is often handled empirically through trial and error. Stochastic regularisation techniques such as dropout can slow down training but help circumvent over-fitting. The author surveyed how stochastic regularisation techniques can be used in deep learning and proposed some future research directions for the field.
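
As a rough illustration of what a stochastic regulariser does, the sketch below applies dropout to the hidden layer of a tiny network and, in the spirit of MC dropout, keeps sampling dropout masks at prediction time to get a predictive mean and a crude spread. This is a minimal NumPy sketch under assumed layer sizes, weights, and sample counts, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p=0.5):
    """Zero each unit with probability p and rescale (inverted dropout)."""
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def forward(x, W1, b1, W2, b2, p=0.5, stochastic=True):
    """One-hidden-layer ReLU network; dropout is the stochastic regulariser."""
    h = np.maximum(0.0, x @ W1 + b1)
    if stochastic:
        h = dropout(h, p)
    return h @ W2 + b2

# Hypothetical weights, chosen only for illustration.
d_in, d_h, d_out = 4, 32, 1
W1 = rng.standard_normal((d_in, d_h)) * 0.1
b1 = np.zeros(d_h)
W2 = rng.standard_normal((d_h, d_out)) * 0.1
b2 = np.zeros(d_out)

x = rng.standard_normal((1, d_in))

# Keep the dropout masks on at prediction time and average many samples;
# the spread across samples acts as a rough uncertainty estimate.
samples = np.stack([forward(x, W1, b1, W2, b2) for _ in range(100)])
print("predictive mean:", samples.mean(axis=0).ravel())
print("predictive std: ", samples.std(axis=0).ravel())
```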

The paper also gives a good review of Gaussian processes (GPs) and shows that each GP covariance function corresponds one-to-one with a particular combination of neural network non-linearity and weight regularisation.
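
To make that correspondence concrete, the sketch below estimates the covariance induced by a single wide ReLU layer with Gaussian weights and biases (the weight prior playing the role of the regulariser), i.e. a Monte Carlo estimate of K(x, x') = E_{w,b}[relu(w·x + b) relu(w·x' + b)]. The prior scales, the ReLU non-linearity, and the inputs are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_covariance(x1, x2, n_features=100_000, sigma_w=1.0, sigma_b=1.0):
    """Monte Carlo estimate of the covariance a wide ReLU layer with
    Gaussian weights/biases induces between inputs x1 and x2."""
    d = x1.shape[0]
    W = rng.normal(0.0, sigma_w / np.sqrt(d), size=(n_features, d))
    b = rng.normal(0.0, sigma_b, size=n_features)
    h1 = np.maximum(0.0, W @ x1 + b)
    h2 = np.maximum(0.0, W @ x2 + b)
    return np.mean(h1 * h2)

# Hypothetical 2-D inputs, for illustration only.
x1 = np.array([1.0, 0.0])
x2 = np.array([0.8, 0.6])
print("K(x1, x1):", mc_covariance(x1, x1))
print("K(x1, x2):", mc_covariance(x1, x2))
```

Swapping the non-linearity or the prior scales changes the estimated kernel, which is the sense in which a covariance function pairs with a choice of non-linearity and weight regularisation.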