Poster #P51
Fine-tuning unifies foundational machine-learned interatomic potential architectures at ab initio accuracy

C. Dreßler, J. Hänseroth



Foundational machine-learned interatomic potentials (MLIPs) enable broad transferability, yet their predictive accuracy remains architecture-dependent in practice. We show that fine-tuning acts as a unifying layer, bringing diverse MLIP architectures to consistent, near-ab-initio accuracy. Benchmarking five state-of-the-art frameworks (MACE, GRACE, SevenNet, MatterSim, ORB) across chemically diverse systems, we find that fine-tuning improves force accuracy by factors of 5 to 15 and energy accuracy by up to four orders of magnitude. Using compact datasets derived from short ab initio molecular dynamics trajectories, constructed without active learning, fine-tuning harmonizes performance across the different architectures. We further analyze how much training data effective fine-tuning requires and examine whether suitable training structures can be generated via molecular dynamics simulations driven by the foundation models themselves. Finally, we introduce the aMACEing Toolkit, a unified and reproducible fine-tuning interface designed to support interoperability and scalable workflows in the emerging atomistic machine learning software ecosystem.
Prof. Dr. Christian Dreßler

  •   Technical University Ilmenau · Department of Theoretical Physics · Ilmenau (DE)