Purdue University Research Introduces Parsimonious Neural Networks (PNNs) That Learn Interpretable Laws of Physics Using Machine Learning
Machine learning systems typically rely on neural networks that are too large and complex to find the simplest explanation for an event or observation, a principle known as parsimony.
In the physical sciences, most machine learning models do not capture the underlying physics of the system in question, such as constraints or symmetries, which limits their ability to generalize. They are also difficult to interpret: most algorithms can neither learn physics nor justify their predictions. In many fields these limitations are offset by large amounts of data, but that is not always possible in areas like materials science, where data acquisition is expensive and time-consuming.
Recent research at Purdue University demonstrates how machine learning can be used to discover physical principles from data. The researchers found that enforcing parsimony in artificial neural networks through stochastic optimization lets the models balance simplicity against accuracy, extracting meaningful physics from the data.
Learning new physics and justifying predictions is a challenge for machine learning models. With the approach developed at Purdue, machine learning has now been used to rediscover Newton's Second Law of Motion and the Lindemann law for predicting the melting temperature of materials.
To do this, the researchers propose parsimonious neural networks (PNNs), which strike a balance between parsimony and accuracy when describing the training data. The approach uses neural networks to represent complex function compositions and genetic algorithms to optimize for both parsimony and accuracy.
The idea behind this approach is that enforcing parsimony (e.g., limiting the number of adjustable parameters and favoring linear relationships between variables) makes the resulting model easier to interpret and helps it capture the symmetries of the problem.
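To make the idea concrete, here is a toy sketch (our illustration, not the Purdue team's code) of how a genetic algorithm can trade accuracy against parsimony: candidate polynomial models are evolved under a fitness that penalizes both the prediction error and the number of non-zero coefficients, so the simplest model that fits the data wins.

```python
import random

random.seed(0)

# Synthetic data generated by a simple linear law, y = 2x.
xs = [i / 10 for i in range(20)]
ys = [2 * x for x in xs]

def predict(coeffs, x):
    # Candidate model: polynomial with the given coefficients.
    return sum(c * x**k for k, c in enumerate(coeffs))

def fitness(coeffs, lam=0.05):
    # Accuracy term (mean squared error) plus a parsimony penalty
    # proportional to the number of active coefficients.
    mse = sum((predict(coeffs, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
    complexity = sum(1 for c in coeffs if abs(c) > 1e-3)
    return mse + lam * complexity

def mutate(coeffs):
    out = list(coeffs)
    i = random.randrange(len(out))
    # Either zero a coefficient out (favoring simplicity) or perturb it.
    out[i] = 0.0 if random.random() < 0.3 else out[i] + random.gauss(0, 0.2)
    return out

# Elitist genetic algorithm: keep the fittest models, mutate survivors.
pop = [[random.gauss(0, 1) for _ in range(4)] for _ in range(40)]
for gen in range(300):
    pop.sort(key=fitness)
    survivors = pop[:10]
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(30)]

best = min(pop, key=fitness)
print([round(c, 2) for c in best])
```

Under this fitness, a model that uses only the linear term can match the data exactly while paying the smallest complexity penalty, so the search is pushed toward the interpretable law y = 2x rather than an overfit polynomial.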
The researchers trained PNNs on data relevant to Newton's Second Law of Motion and the Lindemann melting law.
Compared with a flexible feed-forward neural network, the resulting PNN lends itself to interpretation (like Newton's laws) and provides a far more accurate description of particle dynamics when applied iteratively. The resulting PNNs are energy-conserving and time-reversible, meaning they learn non-trivial symmetries that are implicit in the data but never explicitly presented.
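The integrators the PNNs converge on resemble classical time-reversible, energy-conserving schemes such as velocity Verlet. The sketch below (a standard textbook scheme, not code from the paper) demonstrates both properties on a harmonic oscillator: energy drift stays small over many steps, and stepping backward with a negated time step returns the particle to its starting state.

```python
def force(x, k=1.0):
    # Harmonic restoring force, F = -kx.
    return -k * x

def verlet_step(x, v, dt, m=1.0):
    # Velocity Verlet update: symmetric in time, so negating dt
    # exactly reverses the step (up to floating-point rounding).
    a = force(x) / m
    x_new = x + v * dt + 0.5 * a * dt * dt
    a_new = force(x_new) / m
    v_new = v + 0.5 * (a + a_new) * dt
    return x_new, v_new

def energy(x, v, m=1.0, k=1.0):
    # Total energy: kinetic plus harmonic potential.
    return 0.5 * m * v * v + 0.5 * k * x * x

x, v, dt = 1.0, 0.0, 0.01
e0 = energy(x, v)

# Integrate forward: the energy drift stays bounded and small.
for _ in range(10000):
    x, v = verlet_step(x, v, dt)
print("energy drift:", abs(energy(x, v) - e0))

# Time reversibility: stepping back with -dt recovers the start.
xb, vb = x, v
for _ in range(10000):
    xb, vb = verlet_step(xb, vb, -dt)
print("return error:", abs(xb - 1.0), abs(vb))
```

A generic feed-forward network fitted to the same trajectory data carries no such guarantees, which is why the PNNs' discovery of these symmetries directly from data is notable.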
The approach is versatile and can be applied even when no underlying differential equation is known. The team first used PNNs to learn the equations of motion governing the dynamics of a particle in a highly nonlinear external potential, both with and without friction.
Based on these findings, the team also released a tool that other researchers can use to build simpler, more interpretable machine learning models.