
Machine Learning for Materials (Lecture 8)

Aron Walsh
February 12, 2024

Slides available at https://github.com/aronwalsh/MLforMaterials. Updated for 2026.

Transcript

  1. Module Contents: 1. Introduction 2. Machine Learning Basics 3. Materials Data 4. Crystal Representations 5. Classical Learning 6. Deep Learning 7. Building a Model from Scratch 8. Accelerated Discovery 9. Generative Artificial Intelligence 10. Future Directions
  2. Key Concept #8: Every step balances exploration and exploitation of the system. Use data to choose the next best experiment (or simulation). [Diagram: policy–environment loop with action, observation, and reward feedback.] Example: “Choose next anneal temperature to maximise conductivity”
  3. “A problem in artificial intelligence is one which is so complex that it cannot be solved using any normal algorithm” Hugh M. Cartwright, Applications of AI in Chemistry (1993)
  4. Accelerate Scientific Discovery: Research can be broken down into a set of tasks that can each benefit from acceleration (traditional research workflow shown). H. S. Stein and J. M. Gregoire, Chem. Sci. 10, 9640 (2019)
  5. Accelerate Scientific Discovery: Research can be broken down into a set of tasks that can each benefit from acceleration, with potential for speedup at each step. H. S. Stein and J. M. Gregoire, Chem. Sci. 10, 9640 (2019)
  6. Accelerate Scientific Discovery: Workflow classification of published studies. H. S. Stein and J. M. Gregoire, Chem. Sci. 10, 9640 (2019)
  7. Automation and Robotics: Execution of physical tasks to achieve a target using autonomous or collaborative robots. Image: https://www.thefifthindustrialrevolution.co.uk
  8. Automation and Robotics: Robots can be tailored for a wide range of materials synthesis and characterisation tasks. B. P. MacLeod et al, Science Advances 6, eaaz8867 (2020)
  9. Automation and Robotics: Self-driving labs (SDLs) are now operating, e.g. the A-Lab at Berkeley. N. J. Szymanski et al, Nature 624, 86 (2023); PRX Energy 3, 011002 (2024)
  10. Flexible Automation Systems: Modular hardware with computer-controlled synthesis and characterisation. DIGIBAT: in collaboration with Magda Titirici, Ifan Stephens, and others (ICL)
  11. Flexible Automation Systems: Automation platforms designed to deliver complex research workflows (fixed platform or mobile), usually running a mix of proprietary code with a GUI and Python API for user control. DIGIBAT: in collaboration with Magda Titirici, Ifan Stephens, and others (ICL)
  12. Automation and Robotics: Robots can be equipped with sensors and artificial intelligence to interact with their environment, adapting computer vision models for laboratory settings (GT = ground truth; Pred = predicted). S. Eppel et al, ACS Central Science 6, 1743 (2020)
  13. Automation and Robotics: Robots can be equipped with sensors and artificial intelligence to interact with their environment. Video: https://www.youtube.com/watch?v=K7I2QJcIyBQ
  14. Optimisation: Algorithms to efficiently achieve a desired research objective. Considerations: Objective function (O): materials properties or device performance criteria, e.g. battery lifetime. Parameter selection: variables that can be controlled, e.g. temperature, pressure, composition. Data acquisition: how the data is collected, e.g. instruments, measurements, automation.
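
As a concrete illustration of these three ingredients, here is a minimal Python sketch; the objective, parameter bounds, and noise level are invented placeholders rather than values from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Objective function O: a stand-in for a measured property, e.g. conductivity
def objective(temperature, pressure):
    return -(temperature - 450.0) ** 2 / 1e4 - (pressure - 1.2) ** 2

# Parameter selection: controllable variables and their allowed ranges
bounds = {"temperature": (300.0, 600.0), "pressure": (0.5, 2.0)}

# Data acquisition: a (noisy) "measurement" of the objective for chosen parameters
def measure(temperature, pressure, noise=0.05):
    return objective(temperature, pressure) + rng.normal(0.0, noise)

print(measure(450.0, 1.2))
```
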
  15. Optimisation Algorithms: Local optimisation finds the best solution within a limited region of the parameter space (x). Gradient based: iterate in the steepest-descent direction given by the gradient (∇O), e.g. gradient descent. Hessian based: use information from the second derivatives (∇²O), e.g. quasi-Newton methods. The same concepts are involved in ML model training.
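
A minimal gradient-descent sketch of local optimisation, using a toy one-dimensional objective and a numerical gradient (both invented for illustration):

```python
def O(x):
    return (x - 2.0) ** 2 + 1.0              # toy objective with a minimum at x = 2

def grad_O(x, h=1e-5):
    return (O(x + h) - O(x - h)) / (2 * h)   # central-difference estimate of dO/dx

x = 0.0                                      # starting point x1
learning_rate = 0.1
for step in range(100):
    x -= learning_rate * grad_O(x)           # step in the steepest-descent direction

print(f"local minimum near x = {x:.3f}, O(x) = {O(x):.3f}")
```

A quasi-Newton (Hessian-based) equivalent would replace the fixed step with curvature information, e.g. scipy.optimize.minimize(O, x0=0.0, method="BFGS").
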
  16. Optimisation Algorithms: Global optimisation finds the best solution across the entire parameter space. Numerical: iterative techniques to explore parameter space, e.g. downhill simplex, simulated annealing. Probabilistic: incorporate probability distributions, e.g. Markov chain Monte Carlo, Bayesian optimisation. The same concepts are involved in ML model training.
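
A compact simulated-annealing sketch for global optimisation of a multi-modal toy objective (the function, trial-move width, and cooling schedule are all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def O(x):
    return x ** 2 + 10 * np.sin(3 * x)       # multi-modal toy objective

x = 4.0                                      # starting point
best_x, best_O = x, O(x)
T = 5.0                                      # initial effective temperature
for step in range(5000):
    x_trial = x + rng.normal(0.0, 0.5)       # random trial move
    dO = O(x_trial) - O(x)
    if dO < 0 or rng.random() < np.exp(-dO / T):
        x = x_trial                          # accept downhill moves; uphill with Boltzmann probability
    if O(x) < best_O:
        best_x, best_O = x, O(x)
    T *= 0.999                               # gradual cooling reduces exploration over time

print(f"best x = {best_x:.3f}, O(x) = {best_O:.3f}")
```
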
  17. Bayesian Optimisation (BO): Use prior (measured or simulated) data to decide which experiment to perform next. Probabilistic (surrogate) model: approximation of the true objective function, O(x) ~ f(x), e.g. a Gaussian process GP(x,x′). Acquisition function: selection of the next sample point (the parameters to sample), e.g. upper confidence bound (UCB), probability of improvement (PI), expected improvement (EI). J. Močkus, Optimisation Techniques 1, 400 (1974)
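
The loop below is a minimal Bayesian-optimisation sketch, assuming scikit-learn and SciPy are available: a Gaussian-process surrogate approximates a toy objective and an expected-improvement (EI) acquisition selects the next sample. The objective and starting points are invented for illustration.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):                           # toy "experiment" to be maximised
    return np.sin(3 * x) * np.exp(-0.3 * x)

X_grid = np.linspace(0.0, 5.0, 500).reshape(-1, 1)
X = np.array([[0.5], [2.5], [4.0]])         # prior (known) measurements
y = objective(X).ravel()

for iteration in range(10):
    # Surrogate model: Gaussian process fit to the data collected so far
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6,
                                  normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(X_grid, return_std=True)

    # Acquisition function: expected improvement over the best observation
    improvement = mu - y.max()
    z = improvement / np.maximum(sigma, 1e-9)
    ei = improvement * norm.cdf(z) + sigma * norm.pdf(z)

    x_next = X_grid[np.argmax(ei)]          # next experiment to perform
    X = np.vstack([X, [x_next]])
    y = np.append(y, objective(x_next))

print(f"best x = {X[np.argmax(y)].item():.3f}, objective = {y.max():.3f}")
```
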
  18. Bayesian Optimisation (BO): Use prior (measured or simulated) data to decide which experiment to perform next. Probabilistic (surrogate) model as a Gaussian process: f(x) ~ GP(μ(x), k(x,x′)), where μ(x) is the mean function and the Gaussian kernel k(x,x′) measures the similarity between points x and x′. The kernel controls function smoothness and uncertainty: similar x share information, while dissimilar x default to the mean with high uncertainty. J. Močkus, Optimisation Techniques 1, 400 (1974)
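
A short sketch of the Gaussian (RBF) kernel described here; the length scale and sample points below are arbitrary illustrative values.

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale=1.0):
    """k(x, x') = exp(-(x - x')^2 / (2 l^2)); l controls function smoothness."""
    return np.exp(-((x1 - x2) ** 2) / (2 * length_scale ** 2))

x_known = 1.0
for x_new in (1.1, 2.0, 5.0):
    print(f"k({x_known}, {x_new}) = {rbf_kernel(x_known, x_new):.3f}")

# Nearby points have k close to 1 and share information strongly;
# distant points have k close to 0, so predictions there revert to the
# mean mu(x) with high uncertainty.
```
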
  19. Bayesian Optimisation (BO): Use prior (measured or simulated) data to decide which experiment to perform next. Bayesian optimisation for chemistry: Y. Wu et al, Digital Discovery 3, 1086 (2024)
  20. Exploration–Exploitation Tradeoff: The upper confidence bound (UCB) trades off the mean prediction against the weighted uncertainty: x_next = argmax_x [ μ(x) + β σ(x) ], where μ(x) is the prediction based on prior knowledge and σ(x) is its uncertainty. β is a tunable (learnable) hyperparameter of UCB: β < 1 focuses on exploitation, β ≈ 1 balances risk and reward, β > 1 focuses on exploration. N. Srinivas et al, IEEE Transactions on Information Theory 58 (2012)
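
A sketch of the UCB acquisition on a toy Gaussian-process surrogate (assuming scikit-learn; the data and β values are illustrative), showing how β shifts the choice from exploitation toward exploration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.array([[0.5], [2.0], [4.0]])                   # known measurements
y = np.sin(3 * X).ravel()                             # toy objective values

gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True).fit(X, y)

X_grid = np.linspace(0.0, 5.0, 500).reshape(-1, 1)
mu, sigma = gp.predict(X_grid, return_std=True)

for beta in (0.5, 1.0, 3.0):                          # exploitation -> exploration
    # x_next = argmax_x [ mu(x) + beta * sigma(x) ]
    x_next = X_grid[np.argmax(mu + beta * sigma)]
    print(f"beta = {beta}: next sample at x = {x_next.item():.2f}")
```
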
  21. Applications of BO: Maximise the electrical conductivity of a composite (P3HT–CNT) thin film. D. Bash et al, Adv. Funct. Mater. 31, 2102606 (2021)
  22. Applications of BO: Maximise the electrical conductivity of a composite (P3HT–CNT) thin film. D. Bash et al, Adv. Funct. Mater. 31, 2102606 (2021)
  23. Active Learning (AL): BO finds inputs that maximise the objective function; AL finds inputs that minimise (epistemic) uncertainty, targeting unknown regions that can improve the model. The Gaussian process is updated with new observations to yield revised function values and uncertainties (posterior samples). Epistemic = data-limited (reducible) uncertainty; aleatoric = noise (irreducible).
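
An active-learning sketch under the same GP assumptions: instead of maximising the predicted objective, each query targets the point of largest posterior (epistemic) uncertainty. The target function and initial observations are invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def true_function(x):                 # unknown function the model should learn
    return np.sin(2 * x)

X = np.array([[0.5], [1.0], [4.5]])   # initial observations
y = true_function(X).ravel()
X_grid = np.linspace(0.0, 5.0, 500).reshape(-1, 1)

for iteration in range(5):
    gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True).fit(X, y)
    _, sigma = gp.predict(X_grid, return_std=True)
    x_next = X_grid[np.argmax(sigma)]           # most uncertain (unknown) region
    X = np.vstack([X, [x_next]])
    y = np.append(y, true_function(x_next))     # model updated with new observation
    print(f"iteration {iteration}: queried x = {x_next.item():.2f}")
```
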
  24. Integrated Research Workflows: Feedback loop between the optimisation model and automated experiments. NIMS-OS: R. Tamura, K. Tsuda, S. Matsuda, arXiv:2304.13927 (2023)
  25. Integrated Research Workflows: Feedback loop between the optimisation model and automated experiments. NIMS-OS: R. Tamura, K. Tsuda, S. Matsuda, arXiv:2304.13927 (2023)
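
The sketch below mimics such a feedback loop in miniature (it is not the NIMS-OS interface): a surrogate model proposes the next condition, a simulated "experiment" stands in for the automated platform, and the result is fed back to refit the model. The anneal-temperature objective echoes the example at the start of the lecture and is entirely synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def run_experiment(temperature):
    """Stand-in for the automated platform: a noisy 'conductivity' measurement."""
    return -((temperature - 450.0) / 100.0) ** 2 + rng.normal(0.0, 0.02)

T_grid = np.linspace(300.0, 600.0, 301).reshape(-1, 1)
X = np.array([[320.0], [580.0]])                       # initial experiments
y = np.array([run_experiment(t) for t in X.ravel()])

for cycle in range(8):                                 # closed feedback loop
    gp = GaussianProcessRegressor(kernel=RBF(50.0), alpha=1e-4,
                                  normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(T_grid, return_std=True)
    t_next = T_grid[np.argmax(mu + sigma)]             # optimiser proposes a condition
    X = np.vstack([X, [t_next]])                       # "run" it and grow the dataset
    y = np.append(y, run_experiment(t_next.item()))

print(f"best anneal temperature found: {X[np.argmax(y)].item():.0f}")
```
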
  26. Reinforcement Learning (RL): An agent interacts with an environment to learn decision-making strategies that achieve a specific goal. Early applications in video games (maximise score), finance (profit), and robotics (task completion). Images: Nintendo; Boston Dynamics
  27. Reinforcement Learning (RL): In a digital lab, a virtual scientist (the agent) selects a new experiment (composition, structure, processing) according to a design strategy (explore/exploit) and observes the resulting property change. RL schematic: S. J. D. Prince, https://udlbook.github.io/udlbook
  28. RL Policy: Data-driven decision making that adapts over time. This familiar equation is a softmax (Boltzmann) policy: π(a_t | s_t) = exp(Q(s_t, a_t)/τ) / Σ_a′ exp(Q(s_t, a′)/τ), where π(a_t | s_t) is the probability of action a_t given state s_t, Q(s, a) is the expected reward from taking action a in state s while following policy π, τ is an effective temperature that sets the exploration/exploitation balance, and the sum runs over all possible actions.
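
A short sketch of the softmax (Boltzmann) policy above; the Q values and temperatures are arbitrary illustrative numbers.

```python
import numpy as np

def boltzmann_policy(q_values, tau=1.0):
    """pi(a|s) = exp(Q(s,a)/tau) / sum_a' exp(Q(s,a')/tau)."""
    logits = np.asarray(q_values) / tau
    logits -= logits.max()                  # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

q = [1.0, 2.0, 0.5]                         # expected rewards for three actions
for tau in (0.1, 1.0, 10.0):                # low tau: exploit; high tau: explore
    print(tau, np.round(boltzmann_policy(q, tau), 3))
```
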
  29. RL of Metal-Organic Frameworks: Hyunsoo Park et al, Digital Discovery 3, 728 (2024). Selectivity: S_CO2/H2O = K_H,CO2 / K_H,H2O; heat of adsorption: Q_st
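
As a simple numerical illustration of the selectivity metric (the Henry's constants below are placeholders, not values from the paper):

```python
def co2_h2o_selectivity(k_h_co2, k_h_h2o):
    """S_CO2/H2O = K_H,CO2 / K_H,H2O (ratio of Henry's constants)."""
    return k_h_co2 / k_h_h2o

# Placeholder Henry's constants in the same (arbitrary) units, so S is dimensionless
print(co2_h2o_selectivity(k_h_co2=2.5e-5, k_h_h2o=5.0e-6))   # -> 5.0
```
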
  30. Experimental Optimisation Strategies:
    High-throughput (enumeration) — Advantages: exhaustive search; simple to implement and understand. Disadvantages: inefficient for high-dimensional spaces; maximises experiments and dataset size.
    Bayesian optimisation — Advantages: efficiently exploits data; works with noisy and expensive evaluations. Disadvantages: requires surrogate model and acquisition-function selection; struggles with high-dimensional spaces.
    Reinforcement learning — Advantages: learns optimal policies through feedback; can handle dynamic and complex environments. Disadvantages: sample hungry; slow to converge.
  31. Obstacles to Closed-Loop Discovery:
    • Materials complexity: complex structures, compositions, processing sensitivity
    • Data quality and reliability: errors and inconsistencies waste resources
    • Cost of automation: major investment required in infrastructure and training
    • Adaptability: systems and workflows may be difficult to reconfigure for new problems
  32. Class Outcomes: 1. Select an appropriate optimisation strategy for a given problem. 2. Assess the impact of AI optimisation tools on materials research and discovery. Activity: Closed-loop optimisation