mlphysics
Table of Contents
- 1. Machine Learning & Physics
- 1.1. Geometric Foundations of ML
- 1.1.1. «With non-parametric you can have a monotonic relation between variables instead of a linear one»
- 1.1.2. PCA & SVD & EFA
- 1.1.3. Geometric foundations of Deep Learning | by Michael Bronstein | Towards Data Science
- 1.1.4. Manifold hypothesis - Wikipedia
- 1.1.5. Curse of dimensionality
- 1.1.6. UMAP, tSNE y todas esas cosas - Carlos J. Gil Bellosta
- 1.1.7. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction — umap 0.5 documentation
- 1.1.8. Nonlinear dimensionality reduction
- 1.1.9. Nonlinear dimensionality reduction - Wikipedia
- 1.1.10. enjalot/latent-scope: A scientific instrument for investigating latent spaces
- 1.1.11. Replica theory shows deep neural networks think alike
- 1.1.12. Structured State Spaces for Sequence Modeling (S4) · Hazy Research
- 1.1.13. State Space Reconstruction
- 1.1.14. The Logic of Graph Neural Networks
- 1.1.15. Inductive bias - Wikipedia
- 1.2. No free lunch theorem - Wikipedia
- 1.3. Physics-informed Neural Networks / Lagrangian Neural Networks
- 1.4. ML to predict physical equations / dynamics
- 1.4.1. https://link.medium.com/EInzdSIRXhb ml_physics
- 1.4.2. Automated discovery of fundamental variables hidden in experimental data | Nature Computational Science
- 1.4.3. Learning to Simulate Complex Physics with Graph Networks | DeepMind
- 1.4.4. Symplectic encoders for physics-constrained variational dynamics inference | Scientific Reports
- 1.5. Renormalization group & Neural networks
- 1.6. Links
- 1.7. Prize for Compressing Human Knowledge
- 1.8. Thousand Brains Project | Numenta How the Brain Works + AI
- 1.9. Machine Learning for Inverse Problems in Computational Engineering
- 1.10. Generative Adversarial Networks (GANs)
- 1.11. Reading Group: Advanced Data Analysis by Prof. Cosma Shalizi - YouTube
- 1.12. hyperbolic neural networks
- 1.13. Hierarchical Hopfield Networks
- 1.14. ML & Control
- 1.15. ML & Quantum Physics
- 1.16. HD/VSA Hyperdimensional Computing / Vector Symbolic Architectures
- 1.17. Ising formulations of many NP problems
- 1.18. Bayesian Statistics & Neural Networks science someday_20230330
- 1.19. Thermodynamic AI is getting hotter - by Grigory Sapunov
- 1.20. Themesis - Alianna J. Maren - Physics + Neural Networks
1. Machine Learning & Physics
1.1. Geometric Foundations of ML
1.1.1. «With non-parametric you can have a monotonic relation between variables instead of a linear one»
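A quick illustration of the point (assuming scipy): Spearman's rank correlation only assumes a monotonic relation, while Pearson's assumes a linear one, so for y = exp(x) Spearman stays at 1 and Pearson drops.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# A monotonic but strongly nonlinear relation
x = np.linspace(0, 5, 100)
y = np.exp(x)

print("Pearson: ", pearsonr(x, y)[0])   # well below 1 (assumes linearity)
print("Spearman:", spearmanr(x, y)[0])  # exactly 1 for any monotonic map
```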
1.1.2. PCA & SVD & EFA
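A minimal numpy sketch of the PCA ↔ SVD connection: the principal directions of centered data are the right singular vectors, and the covariance eigenvalues are the squared singular values divided by n − 1.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))  # correlated data
Xc = X - X.mean(axis=0)                                  # center first

# PCA via the covariance matrix
eigvals = np.linalg.eigvalsh(Xc.T @ Xc / (len(Xc) - 1))[::-1]  # descending

# PCA via SVD of the centered data matrix
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

assert np.allclose(eigvals, s**2 / (len(Xc) - 1))
# Rows of Vt are the principal directions; Xc @ Vt.T gives the scores
```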
1.1.3. Geometric foundations of Deep Learning | by Michael Bronstein | Towards Data Science
They exploit symmetries (CNN → translation symmetry, LSTM → time-warping symmetry, transformers → permutation symmetry)
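A toy check of the CNN case, using circular convolution so the translation symmetry is exact: convolving a shifted signal gives the shifted convolution.

```python
import numpy as np

def circ_conv(x, k):
    # Circular convolution via FFT: exactly translation-equivariant
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

rng = np.random.default_rng(0)
x, k, shift = rng.normal(size=32), rng.normal(size=32), 5

# Equivariance: conv(shift(x)) == shift(conv(x))
assert np.allclose(circ_conv(np.roll(x, shift), k),
                   np.roll(circ_conv(x, k), shift))
```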
1.1.3.1. [2104.13478] Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges
- ICLR 2021 Keynote - “Geometric Deep Learning: The Erlangen Programme of ML” - M Bronstein - YouTube
- Geometric Deep Learning - Grids, Groups, Graphs, Geodesics, and Gauges
- GDL Course
- https://www.sci.unich.it/geodeep2022/slides/
- Category theory and Geometric Machine Learning
- Theoretical Foundations of Graph Neural Networks - YouTube
1.1.4. Manifold hypothesis - Wikipedia
The manifold hypothesis posits that many high-dimensional data sets that occur in the real world actually lie along low-dimensional latent manifolds inside that high-dimensional space.
If your space is nonlinear, maybe it’s linear in some high-dimensional abstract vector space nonsense
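A standard sanity check (assuming scikit-learn): the swiss roll is a 2-D sheet rolled up in 3-D, and a manifold-learning method can unroll it where linear PCA cannot.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# 3-D points that actually live on a 2-D latent manifold
X, t = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

# Isomap approximates geodesic distances on the manifold and embeds in 2-D
Z = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(Z.shape)  # (1500, 2)
```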
1.1.5. Curse of dimensionality
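The classic symptom, in a few lines (assuming scipy): as the dimension grows, pairwise distances between random points concentrate around their mean, so "nearest" neighbors stop being meaningfully nearer.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
for d in (2, 100, 10_000):
    X = rng.uniform(size=(200, d))        # random points in the unit cube
    dists = pdist(X)                      # all pairwise distances
    print(d, dists.std() / dists.mean())  # relative spread shrinks with d
```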
1.1.7. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction — umap 0.5 documentation
1.1.7.1. 1802.03426.pdf (arXiv:1802.03426, the UMAP paper)
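Basic usage of the umap-learn package from the linked docs:

```python
import umap
from sklearn.datasets import load_digits

X = load_digits().data  # 64-dimensional digit images

# n_neighbors trades local vs global structure; min_dist controls clumping
reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, n_components=2)
embedding = reducer.fit_transform(X)  # shape (n_samples, 2)
```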
1.1.13. State Space Reconstruction
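Presumably this refers to delay-coordinate (Takens) embedding: reconstructing a state space from a single observed time series. A minimal numpy sketch:

```python
import numpy as np

def delay_embed(x, dim, tau):
    # Rows are [x(t), x(t + tau), ..., x(t + (dim-1)*tau)]
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

t = np.linspace(0, 40, 4000)
x = np.sin(t)                       # observe only one coordinate
Z = delay_embed(x, dim=2, tau=100)  # reconstructed 2-D state space: a closed loop
print(Z.shape)                      # (3900, 2)
```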
1.1.14. The Logic of Graph Neural Networks
Graph Neural Networks are equivalent in expressive power to two-variable logic with counting quantifiers
https://en.wikipedia.org/wiki/Two-variable_logic
1.1.15. Inductive bias - Wikipedia
1.2. No free lunch theorem - Wikipedia
1.3. Physics-informed Neural Networks / Lagrangian Neural Networks
1.3.1. ICML 2016 not by the day | apeirotope
- Home | Steve Brunton’s Lab
- databookV2.pdf
- Miles Cranmer - The Next Great Scientific Theory is Hiding Inside a Neural Network (April 3, 2024) - YouTube
- Using sparse trajectory data to find Lagrangian Coherent Structures (LCS) in fluid flows - YouTube
- An introduction to Flow Matching · Cambridge MLG Blog
1.4. ML to predict physical equations / dynamics
1.4.1. https://link.medium.com/EInzdSIRXhb ml_physics
ML as a paradigm shift for physics
Use it to avoid having to derive the equations of motion, because they are already very complicated
Links to a series of articles
1.4.1.1. Slowly Strangled In Python’s Nest | by Marcus van der Erve | Societal Cycles | Medium
1.4.2. Automated discovery of fundamental variables hidden in experimental data | Nature Computational Science
Predict a double pendulum with ML
https://www.nature.com/articles/s43588-022-00281-6
https://www.engineering.columbia.edu/news/lipson-chen-ai-alternative-physics
1.4.3. Learning to Simulate Complex Physics with Graph Networks | DeepMind
Our framework—which we term “Graph Network-based Simulators” (GNS)—represents the state of a physical system with particles, expressed as nodes in a graph, and computes dynamics via learned message-passing
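A stripped-down sketch of that idea (not DeepMind's code; random weights stand in for the trained MLPs): nodes are particles, edges connect nearby particles, and one message-passing step predicts accelerations that are integrated with semi-implicit Euler, as in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(size=(50, 2))        # particle positions (graph nodes)
vel = rng.normal(size=(50, 2)) * 0.01  # particle velocities (node features)
W_msg, W_upd = rng.normal(size=(4, 8)), rng.normal(size=(10, 2))  # MLP stand-ins

# Edges between particles within a connectivity radius
d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
src, dst = np.nonzero((d < 0.2) & (d > 0))

# One message-passing step: edge messages from relative state, summed per node
edge_in = np.concatenate([pos[src] - pos[dst], vel[src] - vel[dst]], axis=1)
agg = np.zeros((len(pos), 8))
np.add.at(agg, dst, np.tanh(edge_in @ W_msg))

# Decode accelerations and integrate (semi-implicit Euler, dt = 0.01)
accel = np.concatenate([vel, agg], axis=1) @ W_upd
vel = vel + 0.01 * accel
pos = pos + 0.01 * vel
```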
1.5. Renormalization group & Neural networks
- Summary by Marco Tavora. High level; it gets into some mathematics, but not too much
- Renormalization Group connected to Neural Networks. The best review of the topic I have found so far. It models NNs as spin networks (Ising models, Boltzmann Machines…); the mathematics is the same and there are many analogies (see the sketch below)
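To make "the mathematics is the same" concrete: both define a Boltzmann distribution p ∝ exp(−E) over binary units with a quadratic energy; only the coupling structure differs (arbitrary lattice couplings vs a bipartite visible/hidden weight matrix). A sketch:

```python
import numpy as np

def ising_energy(s, J, h):
    # E = -1/2 sum_ij J_ij s_i s_j - sum_i h_i s_i, spins in {-1, +1}
    return -0.5 * s @ J @ s - h @ s

def rbm_energy(v, hid, W, a, b):
    # E = -a.v - b.h - v^T W h: same quadratic form, bipartite couplings
    return -a @ v - b @ hid - v @ W @ hid

rng = np.random.default_rng(0)
s = rng.choice([-1, 1], size=6)
J = rng.normal(size=(6, 6)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)
print(ising_energy(s, J, rng.normal(size=6)))

v, hid = rng.integers(0, 2, size=4), rng.integers(0, 2, size=3)
print(rbm_energy(v, hid, rng.normal(size=(4, 3)), rng.normal(size=4), rng.normal(size=3)))
```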
1.6. Links
- http://physicsbaseddeeplearning.org/intro.html
- Physics guided neural networks → the loss function keeps the usual regularization term to avoid overfitting, but adds a "physical sense" term that penalizes physically meaningless solutions
- So, what is a physics-informed neural network? - Ben Moseley → regularization term built from a discretized differential equation (for example m x'' + μ x' + k x = 0) set equal to zero (see the PINN sketch at the end of this list)
- Machine Learning, Kolmogorov Complexity, and Squishy Bunnies NN to interpolate nonlinear functions (in this case implementing a physics model with a NN, I imagine something like "linearizing" it so it can run on a GPU). NNs also seem to work very well for solving the three-body problem with DNNs: https://arxiv.org/abs/1910.07291
- https://machinelearningmastery.com
- https://machine-learning-for-physicists.org/
- Machine Learning for Precipitation Nowcasting from Radar Images Weather prediction based on radar image patterns. They claim it is more accurate than physical model simulations at short time horizons and locally
- Wave physics as an analog recurrent neural network
- Understanding Variational Autoencoders (VAEs)
- Self-regularizing restricted Boltzmann machines
- Variational Inference: Ising Model Minimizing the KL divergence to denoise an image (see the mean-field sketch at the end of this list)
- Machine Learning: A Probabilistic Perspective A very complete book; it has everything from statistics to advanced ML
- The Elements of Statistical Learning Also advanced, very well known
- ML Debugging
- mlbook This book is for readers looking to learn new machine learning algorithms or to understand algorithms at a deeper level, and for readers interested in seeing machine learning algorithms derived from start to finish
- https://distill.pub/ Interesting blog about advanced ML/Physics
- LabML Neural Networks A collection of implementations of neural network architectures and related algorithms kept simple and easy to read
- Are Neural Networks just Fuzzy Hashtables? Experiments using the MNIST dataset
- Binarized Neural Networks (Weights and Activations to +1 or -1)
- 〈 physics | machine learning 〉
The site is down; archived copy: https://web.archive.org/web/20210125143528/https://physicsml.github.io/
- Jupyter notebooks with examples of ML
- MartinuzziFrancesco/awesome-scientific-machine-learning: A curated list of awesome Scientific Machine Learning (SciML) papers, resources and software
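The PINN sketch referenced above, in PyTorch: the loss is a data term plus the squared residual of the damped-oscillator ODE m x'' + μ x' + k x = 0, evaluated by autodiff at collocation points (constants and network size are assumptions, not from the linked post).

```python
import torch

m, mu, k = 1.0, 0.3, 4.0  # assumed oscillator constants
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

t_data = torch.tensor([[0.0]]); x_data = torch.tensor([[1.0]])  # data: x(0) = 1
t_col = torch.linspace(0, 10, 200).reshape(-1, 1).requires_grad_(True)

for step in range(5000):
    opt.zero_grad()
    x = net(t_col)
    dx = torch.autograd.grad(x, t_col, torch.ones_like(x), create_graph=True)[0]
    ddx = torch.autograd.grad(dx, t_col, torch.ones_like(dx), create_graph=True)[0]
    residual = m * ddx + mu * dx + k * x          # the physics term
    loss = torch.mean((net(t_data) - x_data) ** 2) + torch.mean(residual ** 2)
    loss.backward()
    opt.step()
# (A full example would also pin the initial velocity x'(0).)
```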
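And the mean-field sketch for the Ising denoising entry: a factorized q with per-pixel means μ_i is updated to minimize KL(q‖p), giving μ_i ← tanh(J Σ_neighbors μ_j + y_i/σ²) (a parallel variant of the coordinate update; noise model and constants are assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
x = -np.ones((32, 32)); x[8:24, 8:24] = 1    # clean +1/-1 image
y = x + rng.normal(scale=0.8, size=x.shape)  # noisy observation
J, sigma2 = 1.0, 0.8**2

mu = np.copy(y)                              # variational means E_q[x_i]
for _ in range(20):
    # Sum of neighbor means (4-neighborhood, zero padding at the border)
    nb = np.zeros_like(mu)
    nb[1:, :] += mu[:-1, :]; nb[:-1, :] += mu[1:, :]
    nb[:, 1:] += mu[:, :-1]; nb[:, :-1] += mu[:, 1:]
    mu = np.tanh(J * nb + y / sigma2)        # mean-field update

x_hat = np.sign(mu)                          # denoised image
print((x_hat == x).mean())                   # fraction of pixels recovered
```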
1.8. Thousand Brains Project | Numenta How the Brain Works + AI
1.10. Generative Adversarial Networks (GANs)
- The intuition behind adversarial attacks on NN → adversarial images may be enabled by the neurons' activation function (x·(x>0), i.e. ReLU, instead of more robust ones like tanh(x) or sigm(x)), which over-excites because it is not bounded from above (see the FGSM sketch at the end of this list)
- Understanding Generative Adversarial Networks (GANs)
- Must-Read Papers on GANs
- Turing Learning and GANs Part 1
- Turing Learning and GANs Part 2
- GAN Visualization
- Natural Adversarial Examples
- http://bactra.org/weblog/2014-11-13-intriguing-properties.html Adversarial networks in depth
- Fawkes put noise on top of an image: imperceptible to humans, but the networks get it completely wrong (not sure whether it is actually a GAN)
- A Gentle Introduction to GANs
- https://machinelearningmastery.com/
- Adversarial Patch
- We present a method to create adversarial image patches in the real world.
- universal (can be used to attack any scene)
- robust (work under a wide variety of transformations)
- targeted (cause a classifier to output any target class)
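For the first entry in this list, the standard fast gradient sign method (FGSM) in PyTorch shows how small the perturbation can be; `model` is any trained classifier (a placeholder, not from the linked article).

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, eps=0.03):
    # Fast Gradient Sign Method: one gradient-sign step on the input
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), label).backward()
    x_adv = x + eps * x.grad.sign()   # move each pixel to increase the loss
    return x_adv.clamp(0, 1).detach()

# Usage sketch (model/images/labels are placeholders):
# x_adv = fgsm_attack(model, images, labels, eps=8/255)
# print((model(x_adv).argmax(1) != labels).float().mean())  # attack success rate
```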
1.12. hyperbolic neural networks
1.14. ML & Control
1.15. ML & Quantum Physics
1.16. HD/VSA Hyperdimensional Computing / Vector Symbolic Architectures
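A minimal numpy sketch of the core VSA operations: random ±1 hypervectors, binding by elementwise multiplication, bundling by the sign of a sum, and unbinding by multiplying again with the role vector.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000
def hv(): return rng.choice([-1, 1], size=D)   # random hypervector

color, shape, size_ = hv(), hv(), hv()         # roles
red, circle, big = hv(), hv(), hv()            # fillers

# Bind role-filler pairs (multiply), bundle the record (sign of sum)
record = np.sign(color * red + shape * circle + size_ * big)

# Unbind: multiplying by a role recovers a noisy version of its filler
noisy = record * color
print(np.dot(noisy, red) / D)     # ~0.5: clearly correlated with 'red'
print(np.dot(noisy, circle) / D)  # ~0.0: uncorrelated with other fillers
```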
1.17. Ising formulations of many NP problems
https://arxiv.org/abs/1302.5843 Many NP problems can be formulated as Ising models
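Example from the paper: number partitioning becomes the Ising Hamiltonian H = (Σ_i n_i s_i)², whose ground states (H = 0) are exactly the perfect partitions. A brute-force check:

```python
import itertools
import numpy as np

numbers = np.array([4, 5, 6, 7, 8])  # can we split these into two equal-sum halves?

def H(spins):
    # Ising energy for number partitioning: zero iff the partition is perfect
    return float(np.dot(numbers, spins)) ** 2

best = min(itertools.product([-1, 1], repeat=len(numbers)), key=H)
print(best, H(best))  # (-1, -1, -1, 1, 1): 4+5+6 = 7+8, so H = 0.0
```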