network or the weights or the eigenspectra? (one such probe is sketched below)
• Do we learn anything from those visualizations?
• Alternatively: Carefully observe the behavior of the method in “the wild”?
• Alternatively: Attack the method adversarially?
I think everyone would say “yes” to at least one of these. And yet, I don’t think any two people here agree on what constitutes interpretation!
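To make the eigenspectrum probe concrete, here is a minimal sketch using scikit-learn's PCA on synthetic data. The fake “spectra”, the number of components, and the choice of PCA itself are all assumptions for illustration, not anyone's actual pipeline.

```python
# Minimal sketch of one interpretation probe: look at the eigenspectrum of a
# learned linear representation. Synthetic data and PCA stand in for whatever
# method you actually trained.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
# Fake "spectra": 500 objects, 100 wavelength bins, dominated by 3 components.
latents = rng.normal(size=(500, 3))
basis = rng.normal(size=(3, 100))
X = latents @ basis + 0.1 * rng.normal(size=(500, 100))

pca = PCA(n_components=20).fit(X)

# The eigenspectrum: does it fall off where your beliefs about the data say it should?
for k, frac in enumerate(pca.explained_variance_ratio_):
    print(f"component {k:2d}: explained variance fraction = {frac:.4f}")
```

Whether a printout (or plot) like this actually teaches you anything is exactly the question on the slide.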
arbitrary causal structure.
• Our beliefs about the world don’t!
• This is related to noise, missing data, point-spread functions, and symmetries.
If a method applied to cosmology data delivers a result that isn’t rotationally covariant, we would (and should) reject it, right?
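One concrete way to act on that last point is a symmetry test: feed the method rotated copies of the same data and check whether its output respects the rotation. The sketch below is a toy numpy version; the point cloud and the two example statistics are assumptions, not any real cosmology pipeline.

```python
# Symmetry check: compare a method's output on rotated copies of the same data.
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(1000, 2)) * np.array([3.0, 1.0])  # anisotropic toy point cloud

def rotate(x, theta):
    c, s = np.cos(theta), np.sin(theta)
    return x @ np.array([[c, -s], [s, c]]).T

def invariant_statistic(x):
    # Mean squared radius: manifestly rotation-invariant.
    return np.mean(np.sum(x**2, axis=1))

def non_invariant_statistic(x):
    # Variance along the x-axis: depends on the (arbitrary) coordinate frame.
    return np.var(x[:, 0])

for stat in (invariant_statistic, non_invariant_statistic):
    outputs = [stat(rotate(points, t)) for t in np.linspace(0, np.pi, 8)]
    spread = (max(outputs) - min(outputs)) / np.mean(outputs)
    print(f"{stat.__name__}: fractional spread under rotation = {spread:.3f}")
```

A method whose outputs behave like the second statistic is telling you something about its coordinate frame, not about the sky.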
idea. For one, it only works in the limit (see the sketch below).
• Proposed to enforce rotational invariance.
  ◦ You know who you are!
• Proposed to obviate adversarial attacks.
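The “idea” at the top of this slide is cut off here; assuming it is something like enforcing invariance by averaging or augmenting over random rotations, the sketch below shows the sense in which that only works in the limit: the residual frame dependence shrinks with the number of sampled rotations but never vanishes for a finite sample.

```python
# Approximate invariance by averaging a frame-dependent statistic over random
# rotations: exact invariance is reached only in the limit of many rotations.
import numpy as np

rng = np.random.default_rng(1)
points = rng.normal(size=(500, 2)) * np.array([3.0, 1.0])

def rotate(x, theta):
    c, s = np.cos(theta), np.sin(theta)
    return x @ np.array([[c, -s], [s, c]]).T

def frame_dependent_statistic(x):
    return np.var(x[:, 0])  # depends on the coordinate frame

def averaged_statistic(x, n_rotations):
    thetas = rng.uniform(0, 2 * np.pi, size=n_rotations)
    return np.mean([frame_dependent_statistic(rotate(x, t)) for t in thetas])

for n in (1, 10, 100, 1000):
    # How much does the "invariantized" statistic still change if we pre-rotate
    # the data into a different frame?
    a = averaged_statistic(points, n)
    b = averaged_statistic(rotate(points, 0.7), n)
    print(f"n_rotations={n:5d}: relative frame dependence = {abs(a - b) / a:.4f}")
```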
generative physical model?
• But often you don’t have one!
• Or if you do have a physical model, why not just use that?
  ◦ Or emulate it! (see the sketch below)
• Use humans to validate? That “looks” okay?
• Possibly an inspiration for the Ringberg Recommendations?
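For the “emulate it” option, here is a minimal sketch of the pattern: fit a fast surrogate to an (expensive, here toy) physical forward model and use the surrogate, with its uncertainty, in place of the simulator. The toy forward model and the Gaussian-process choice are assumptions for illustration.

```python
# Emulation sketch: replace an expensive forward model with a cheap surrogate
# trained on a handful of its evaluations.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_physical_model(theta):
    # Stand-in for a simulator: a smooth nonlinear map from parameters to an observable.
    return np.sin(3.0 * theta) + 0.5 * theta**2

rng = np.random.default_rng(3)
theta_train = rng.uniform(-2, 2, size=(40, 1))
y_train = expensive_physical_model(theta_train).ravel()

emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
emulator.fit(theta_train, y_train)

theta_test = np.linspace(-2, 2, 9).reshape(-1, 1)
y_pred, y_std = emulator.predict(theta_test, return_std=True)
for t, yp, ys, yt in zip(theta_test.ravel(), y_pred, y_std,
                         expensive_physical_model(theta_test).ravel()):
    print(f"theta={t:+.2f}  emulator={yp:+.3f} +/- {ys:.3f}  truth={yt:+.3f}")
```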
we do.
• Are the natural sciences going through deep or fundamental change?
• Are we changing our canon and training to match? No!
• Can we disrupt natural science and make a better thing?
• What are the implications of the changes for researchers at different stages?
• How do we create integrated human and ML systems?
• Does giving guidelines support or restrict our community?
◦ This is even true of PCA!
• But what are the consequences of that?
• Do adversarial attacks have something to say about that? (see the sketch below)
• ML methods are not doing what we think they are (see also: interpretation).
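On the adversarial-attack question, here is a minimal sketch of what such an attack looks like: the fast gradient sign method applied to plain logistic regression on synthetic data. Every ingredient (the data, the model, the step size) is an assumption chosen so the example runs standalone; the point is only that a perturbation that is small in every individual feature, but structured across features, can wreck the predictions.

```python
# Fast gradient sign method (FGSM) against a hand-rolled logistic regression.
import numpy as np

rng = np.random.default_rng(7)

# Synthetic two-class data: the classes differ by a small shift in every one of
# many dimensions, so no single feature is decisive.
n, d = 400, 200
X = np.vstack([rng.normal(-0.1, 1.0, size=(n // 2, d)),
               rng.normal(+0.1, 1.0, size=(n // 2, d))])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit logistic regression by plain gradient descent on the logistic loss.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * X.T @ (p - y) / n
    b -= 0.5 * np.mean(p - y)

# FGSM: nudge every input by epsilon in the sign of the loss gradient w.r.t. x.
p = sigmoid(X @ w + b)
grad_X = np.outer(p - y, w)   # d(loss_i)/d(x_i) for the logistic loss
epsilon = 0.25                # small compared to the unit noise scale per feature
X_adv = X + epsilon * np.sign(grad_X)

acc_clean = np.mean((sigmoid(X @ w + b) > 0.5) == y)
acc_adv = np.mean((sigmoid(X_adv @ w + b) > 0.5) == y)
print(f"accuracy on clean inputs:     {acc_clean:.2f}")
print(f"accuracy on perturbed inputs: {acc_adv:.2f}")
```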
propagate the uncertainties in the weights themselves? (see the sketch below)
• Do adversarial attacks tell you anything useful about the model?
• Can we make interpretable, low-dimensional latent spaces?
• Can we see or infer causal structure?
• (What qualifies as “Machine Learning™”?)
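For the weight-uncertainty question, one minimal pattern is an ensemble: refit the model on bootstrap resamples of the data and report the spread of the resulting weights and predictions rather than a single point estimate. The straight-line model and the bootstrap below are assumptions for illustration; the same pattern carries over to ensembles of neural networks.

```python
# Propagate uncertainty in the weights via a bootstrap ensemble of fits.
import numpy as np

rng = np.random.default_rng(11)

# Toy regression data.
n = 60
x = np.sort(rng.uniform(0, 10, size=n))
y = 2.0 + 0.7 * x + rng.normal(0, 1.5, size=n)
A = np.vstack([np.ones(n), x]).T          # design matrix for a straight line

# Bootstrap ensemble of weight vectors.
n_members = 200
weights = np.empty((n_members, 2))
for m in range(n_members):
    idx = rng.integers(0, n, size=n)
    weights[m] = np.linalg.lstsq(A[idx], y[idx], rcond=None)[0]

# Propagate the weight spread into a prediction at a new point.
x_new = 12.0
preds = weights @ np.array([1.0, x_new])
print(f"prediction at x={x_new}: {preds.mean():.2f} +/- {preds.std():.2f}")
print(f"intercept: {weights[:, 0].mean():.2f} +/- {weights[:, 0].std():.2f}")
print(f"slope:     {weights[:, 1].mean():.2f} +/- {weights[:, 1].std():.2f}")
```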
I’d say “no”.
• What does it take for machine learning to deliver novel insights?
This community is absolutely excellent and I have very high expectations of y’all.