Design-driven Materials Intelligence
Abstract
The integration of artificial intelligence (AI) and machine learning (ML) into materials science heralds the era of materials intelligence: AI-driven systems that learn from materials data to predict, design, and optimise structures and properties while embedding domain knowledge. This thesis addresses several key challenges at the intersection of AI/ML and materials science: the reliance on single-model explanations, the complexity of capturing non-linear relationships, and the need to balance interpretability with stakeholder expectations. Chapters 1 to 3 provide a thorough literature review of these challenges, and throughout the thesis the emphasis is on explaining the same task through diverse, similarly performing models. The remainder is structured into three core chapters, guided by design thinking principles:
Chapter 4: Rational Design introduces the Variance Tolerance Factor (VTF) framework to address the limitations of single-model explanations, which often yield conflicting insights across models. Building on the Rashomon set concept, the VTF framework quantifies how feature importance varies across similarly performing models, offering a more comprehensive perspective than any single explanation. The approach is validated against baseline methods and applied to chemical prediction tasks, demonstrating its utility in enhancing interpretability.
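To make the Rashomon-set idea underlying this chapter concrete, the following minimal sketch (using scikit-learn on synthetic data; it is illustrative only, not the thesis's VTF implementation) collects near-optimal models and measures how far their feature importances spread:

    # Minimal sketch: an empirical Rashomon set and the spread of feature
    # importances across it (illustrative only; not the VTF framework).
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_regression(n_samples=500, n_features=5, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Candidate models with varied hyperparameters and seeds.
    candidates = [
        RandomForestRegressor(n_estimators=100, max_depth=d, random_state=s).fit(X_tr, y_tr)
        for d in (4, 6, 8) for s in (0, 1)
    ]

    # Keep only models whose test score is within a tolerance of the best:
    # a simple empirical Rashomon set.
    scores = np.array([m.score(X_te, y_te) for m in candidates])
    rashomon = [m for m, s in zip(candidates, scores) if s >= scores.max() - 0.02]

    # Feature importances per model; the per-feature range shows how much
    # similarly performing models can disagree in their explanations.
    imps = np.array([
        permutation_importance(m, X_te, y_te, n_repeats=10, random_state=0).importances_mean
        for m in rashomon
    ])
    print("importance range per feature:", imps.max(axis=0) - imps.min(axis=0))

A large range for a feature signals that its importance is model-dependent, which is precisely the variability a single-model explanation hides.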
Chapter 5: Creative Design builds on rational design by advancing methods for interpreting complex feature relationships in materials science. This chapter introduces Feature Interaction Scores (FIS) and the Feature Interaction Scores Cloud (FISC) to explain interactions among features in material-property predictions across the Rashomon set. From studying Rashomon sets in practice, two fundamental axioms are proposed to guide generalisability.
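A crude stand-in for a pairwise interaction score is sketched below (a simplification; the thesis's FIS is defined differently): it compares the loss increase from permuting two features jointly with the sum of their individual permutation effects, and applying it to every model in a Rashomon set yields a cloud of scores per feature pair:

    # Sketch of a naive pairwise interaction score (not the thesis's FIS):
    # loss increase from jointly permuting features i and j, minus the sum
    # of their individual permutation effects.
    import numpy as np
    from sklearn.metrics import mean_squared_error

    def permuted_loss(model, X, y, cols, rng):
        Xp = X.copy()
        for c in cols:
            Xp[:, c] = rng.permutation(Xp[:, c])
        return mean_squared_error(y, model.predict(Xp))

    def interaction_score(model, X, y, i, j, seed=0):
        rng = np.random.default_rng(seed)
        base = mean_squared_error(y, model.predict(X))
        d_i = permuted_loss(model, X, y, [i], rng) - base
        d_j = permuted_loss(model, X, y, [j], rng) - base
        d_ij = permuted_loss(model, X, y, [i, j], rng) - base
        # A positive residual suggests the two features interact.
        return d_ij - (d_i + d_j)

    # One score per Rashomon-set model gives a "cloud" of interaction
    # values for a feature pair, e.g. (reusing the previous sketch):
    # cloud = [interaction_score(m, X_te, y_te, 0, 1) for m in rashomon]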
Chapter 6: Optimal Design utilises explanation disagreement within the Rashomon set as a design strategy for bridging the gap between stakeholder needs and ML models. The EXplanation AGREEment (EXAGREE) framework is proposed to align model explanations with stakeholder expectations while preserving predictive performance, improving the fit between AI systems and the needs of materials scientists, engineers, and other stakeholders in the field.
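One way to picture explanation alignment (a hypothetical sketch, not the EXAGREE algorithm) is to select, from a Rashomon set, the model whose importances best correlate with a stakeholder's expected feature priorities; every member already satisfies the performance constraint, so alignment costs no accuracy:

    # Sketch: choose the Rashomon-set model best aligned with a
    # stakeholder's expectations (illustrative; not the EXAGREE method).
    import numpy as np
    from scipy.stats import spearmanr

    def agreement(importances, stakeholder_priority):
        # Rank correlation between model importances and the stakeholder's
        # expected per-feature priorities (higher = better aligned).
        return spearmanr(importances, stakeholder_priority).correlation

    # Hypothetical stakeholder priorities for five features.
    stakeholder_priority = np.array([0.5, 0.2, 0.9, 0.05, 0.1])

    # `rashomon` and `imps` are from the Chapter 4 sketch; all members
    # already meet the predictive-performance constraint:
    # best = max(zip(rashomon, imps),
    #            key=lambda mi: agreement(mi[1], stakeholder_priority))[0]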
Throughout, the thesis confronts fundamental challenges in applying ML, and especially explainable AI, to materials science: balancing predictive performance with interpretability, satisfying different stakeholder needs, and combining automated optimisation with domain expertise. By advancing methods to address these challenges, this research aims to contribute to the development of trustworthy, scientist-centred ML technologies for materials science.