Speaker: Jianke Yang (UC San Diego)
Session Chair: Bharath Ramsundar (Deep Forest Sciences)
Abstract: Despite the success of equivariant neural networks in scientific applications, they require knowing the symmetry group a priori. Automatic symmetry discovery methods aim to relax this constraint and learn invariance and equivariance from data. We propose a framework, LieGAN, which discovers equivariances from a dataset using a paradigm akin to generative adversarial training. Specifically, a generator learns a group of transformations that, when applied to the data, preserve the original distribution and fool the discriminator. LieGAN represents symmetry as an interpretable Lie algebra basis and can discover various symmetries, such as the rotation group and the restricted Lorentz group, in trajectory prediction and top-quark tagging tasks. More generally, LieGAN can be extended to discover nonlinear symmetries in high-dimensional dynamics. The learned symmetry can be readily plugged into several existing equivariant neural networks to improve prediction accuracy and generalization. It can also improve symbolic equation discovery and long-term forecasting for various dynamical systems.
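To make the "interpretable Lie algebra basis" idea concrete, here is a minimal sketch (not from the talk; the helper name and numbers are illustrative) of how a single learned basis element L generates a continuous family of data transformations via the matrix exponential g = exp(θL). If L is the so(2) generator, the resulting g is a rotation, the kind of symmetry LieGAN can recover for a rotation-invariant dataset.

```python
import numpy as np

def matrix_exp(A, terms=30):
    """Matrix exponential via truncated Taylor series: exp(A) = sum_k A^k / k!"""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

# A single Lie algebra basis element; in LieGAN this would be learned.
# Here we hard-code the so(2) generator, whose exponential is a 2D rotation.
L = np.array([[0.0, -1.0],
              [1.0,  0.0]])

theta = 0.5                      # group coordinate sampled by the generator
g = matrix_exp(theta * L)        # group element g = exp(theta * L)

x = np.array([1.0, 2.0])         # a data point
x_transformed = g @ x            # transformed sample fed to the discriminator

# A rotation preserves the norm, so a rotation-invariant data
# distribution is unchanged under g -- the discriminator cannot
# distinguish x_transformed from real data.
```

In the full adversarial setup, the discriminator is trained to tell transformed samples from originals, and the generator's basis coefficients are updated so that its transformations leave the data distribution fixed.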