Using artificial intelligence to see the plasma edge of fusion experiments in new ways
MIT researchers are testing the ability of a simplified turbulence theory to model plasma phenomena using a new machine learning technique.
To make fusion energy a viable resource for the global energy grid, researchers must understand the turbulent motion of plasmas: a mixture of ions and electrons swirling around in reactor vessels. Plasma particles, following magnetic field lines in toroidal chambers known as tokamaks, must be confined long enough for the fusion devices to produce significant net energy gains, a challenge when the hot edge of the plasma (over 1 million degrees Celsius) is only a few inches from the much cooler solid walls of the container.
Abhilash Mathews, a doctoral student in the Department of Nuclear Science and Engineering working at MIT’s Plasma Science and Fusion Center (PSFC), believes the plasma edge is a particularly rich source of unanswered questions. A turbulent frontier, it is essential for understanding plasma confinement, refueling, and the potentially damaging heat fluxes that can strike material surfaces, factors that impact the design of fusion reactors.
To better understand these edge conditions, scientists are focusing on modeling the turbulence at this boundary using numerical simulations that will help predict the plasma’s behavior. However, “first principles” simulations of this region are among the most difficult and time-consuming calculations in fusion research. Progress could be accelerated if researchers could develop “reduced” computer models that run much faster, but with quantified levels of accuracy.
For decades, tokamak physicists have routinely used a reduced “two-fluid theory” rather than higher-fidelity models to simulate edge plasmas in experiments, despite uncertainty about its accuracy. In two recent publications, Mathews begins to directly test the accuracy of this reduced plasma turbulence model in a novel way: one that combines physics with machine learning.
“A successful theory is supposed to predict what you’re going to observe,” says Mathews, “for example, temperature, density, electric potential, fluxes. And it is the relationships between these variables that fundamentally define a theory of turbulence. What our work essentially examines is the dynamic relationship between two of these variables: the turbulent electric field and the electron pressure.”
In the first article, published in Physical Review E, Mathews uses a novel deep learning technique in which artificial neural networks are built to construct representations of the equations governing the reduced fluid theory. With this framework, he demonstrates a way to calculate the turbulent electric field from electron pressure fluctuations in the plasma consistent with the reduced fluid theory. Models commonly used to relate the electric field to pressure break down when applied to turbulent plasmas, but this one is robust even to noisy pressure measurements.
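The core idea, training a model against a physics-residual loss so that a learned field stays consistent with a theory's governing relation, can be illustrated with a deliberately simple sketch. Everything in the example below is an illustrative assumption: the toy relation E = k dp/dx, the coupling constant k, and the small Fourier basis standing in for a deep neural network are not the drift-reduced fluid equations or architectures used in Mathews' papers.

```python
import numpy as np

# Toy "physics-informed" fit: recover a potential phi(x) whose field
# E = -dphi/dx matches a hypothetical fluid-theory relation E = k * dp/dx,
# using only noisy samples of the electron pressure p(x).
rng = np.random.default_rng(0)
k = 2.0                                   # assumed coupling constant (toy)
x = np.linspace(0.0, 1.0, 50)
dx = x[1] - x[0]
p_noisy = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)
dpdx = np.gradient(p_noisy, dx)           # derivative of noisy measurements

n_modes = 5                               # tiny Fourier basis, not a neural net
def phi(c):
    return sum(c[m] * np.sin((m + 1) * np.pi * x) for m in range(n_modes))

def physics_loss(c):
    E = -np.gradient(phi(c), dx)          # candidate turbulent field
    return np.mean((E - k * dpdx) ** 2)   # residual of the assumed relation

# Gradient descent with finite-difference gradients (stand-in for autodiff)
c = np.zeros(n_modes)
lr, eps = 3e-3, 1e-5
for _ in range(2000):
    grad = np.zeros_like(c)
    for m in range(n_modes):
        step = np.zeros_like(c)
        step[m] = eps
        grad[m] = (physics_loss(c + step) - physics_loss(c - step)) / (2 * eps)
    c -= lr * grad

# For p ~ sin(2*pi*x), the toy relation implies the fit concentrates on
# the second mode, with c[1] close to -k despite the measurement noise.
print(c.round(2))
```

The point of the sketch is structural: the loss penalizes violations of a governing equation rather than misfit to labeled field data, so the field is inferred from pressure measurements alone, and the smooth basis makes the result tolerant of noise in those measurements.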
In the second article, published in Physics of Plasmas, Mathews investigates this connection further, contrasting it against higher-fidelity turbulence simulations. This direct comparison of turbulence across models was previously difficult, if not impossible, to assess accurately. Mathews finds that in plasmas relevant to existing fusion devices, the turbulent fields predicted by the reduced fluid model are consistent with the high-fidelity calculations. In this sense, the reduced turbulence theory works. But to fully validate it, “you have to check every connection between every variable,” says Mathews.
Mathews’ advisor, Principal Investigator Jerry Hughes, notes that plasma turbulence is notoriously difficult to simulate, more so than the familiar turbulence seen in air and water. “This work shows that, under the right conditions, physics-informed machine learning techniques can paint a very complete picture of the rapidly fluctuating edge plasma from a limited set of observations. I’m excited to see how we can apply this to new experiments, in which we hardly ever observe every quantity we want.”
These physics-based deep learning methods open up new avenues for testing old theories and expanding what can be observed from new experiments. David Hatch, a researcher at the Institute for Fusion Studies at the University of Texas at Austin, believes these applications are the start of a promising new technique.
“Abhi’s work is a major achievement with broad application potential,” he says. “For example, given limited diagnostic measurements of a specific plasma quantity, physics-informed machine learning could infer additional plasma quantities in a nearby domain, thereby increasing the information provided by a given diagnostic. The approach also opens up new strategies for model validation.”
Mathews anticipates some exciting research to come.
“Translating these techniques to real-world edge plasmas in fusion experiments is a goal we have in mind, and work is currently underway,” he says. “But this is just the start.”
“Uncovering turbulent plasma dynamics via deep learning from partial observations” by A. Mathews, M. Francisquez, J.W. Hughes, D.R. Hatch, B. Zhu, and B.N. Rogers, 13 August 2021, Physical Review E.
DOI: 10.1103/PhysRevE.104.025205
“Turbulent field fluctuations in gyrokinetic and fluid plasmas” by A. Mathews, N. Mandell, M. Francisquez, J.W. Hughes, and A. Hakim, 1 November 2021, Physics of Plasmas.
DOI: 10.1063/5.0066064
Mathews was supported in this work by the Manson Benedict Fellowship, the Natural Sciences and Engineering Research Council of Canada, and the US Department of Energy Office of Science under the Fusion Energy Sciences program.