Presented By: Bruno Olshausen | Professor, Helen Wills Neuroscience Institute & School of Optometry, and Director, Redwood Center for Theoretical Neuroscience, UC Berkeley
Presented: May 15th | 11am – 12pm | Georgia Institute of Technology | IBB 1128
Talk Overview: Despite their seemingly impressive performance at image recognition and other perceptual tasks, deep convolutional neural networks are easily fooled, sensitive to adversarial attack, and have trouble generalizing to data outside the training domain, such as the variations that arise from everyday interactions with the real world. The premise of this talk is that these shortcomings stem from the lack of an appropriate mathematical framework for posing the problems at the core of deep learning – in particular, modeling hierarchical structure and describing the transformations, such as variations in pose, that occur when viewing objects in the real world. Here I will describe an approach that draws on a well-developed branch of mathematics for representing and computing these transformations: Lie theory. In particular, I shall describe a method for learning shapes and their transformations from images in an unsupervised manner using Lie Group Sparse Coding. Additionally, I will show how the generalized bispectrum can potentially be used to learn invariant representations that are complete and impossible to fool.
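The core idea from Lie theory referenced in the abstract is that a continuous family of transformations (e.g., rotations) can be generated by a single matrix via the exponential map. As a minimal illustration (not taken from the talk itself; the generator and parameter names here are standard textbook choices, not the speaker's notation), the sketch below builds a one-parameter group of 2D rotations from its skew-symmetric generator:

```python
import numpy as np
from scipy.linalg import expm

# A one-parameter Lie group element is T(s) = exp(s * A), where A is the
# infinitesimal generator. For planar rotation, A is the skew-symmetric matrix
# below, and T(s) is rotation by angle s (in radians).
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def transform(points, s):
    """Apply the group element exp(s*A) to an (N, 2) array of 2D points."""
    return points @ expm(s * A).T

# Rotating the point (1, 0) by 90 degrees yields (0, 1).
square = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
rotated = transform(square, np.pi / 2)

# The group property: composing two transformations adds their parameters,
# exp(s1*A) @ exp(s2*A) = exp((s1+s2)*A).
composed = expm(0.3 * A) @ expm(0.5 * A)
direct = expm(0.8 * A)
```

The appeal of this parameterization for vision is that an image transformation is reduced to a single scalar (or a small vector of) coefficients on fixed generators, which is what makes transformations learnable and separable from shape in models such as Lie Group Sparse Coding.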

