In this talk, we will discuss some of the factors that enable neural networks to approximate data well. We will draw analogies from physics, which plays a crucial role in deep learning, along with the necessary mathematics. We will then discuss how to strengthen a neural network's ability to approximate functions, and conclude by addressing the question of why deep learning is preferable to shallow learning with neural networks. Our talk is based on the paper: Lin, Henry W., Max Tegmark, and David Rolnick. "Why does deep and cheap learning work so well?" Journal of Statistical Physics (2016): 1-25.
Date: September 08, 2017
Venue: 109, GICT Building