Preparing for Aurora: Ensuring the Portability of Deep Learning Software to Explore Fusion Energy

By Justin Rowell
29.09.2022

Jan. 6, 2022 — As part of a series aimed at sharing best practices in preparing applications for Aurora, Argonne National Laboratory is highlighting researchers’ efforts to optimize codes to run efficiently on graphics processing units.

As part of the Argonne Leadership Computing Facility’s (ALCF) Aurora Early Science Program, William Tang of the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory is leading a project, “Accelerated Deep Learning Discovery in Fusion Energy Science,” that uses artificial intelligence methods to improve predictive capabilities and mitigate large-scale disruptions in burning plasmas in tokamak systems such as ITER.

Best Practice

  • Encapsulate application performance with figures of merit

The project’s primary application, the FusionDL FRNN (Fusion Recurrent Neural Net) suite, contains a growing collection of machine learning models and implementations in multiple frameworks, including TensorFlow and PyTorch. Running on top of TensorFlow is Keras, a Python-based deep learning application programming interface (API).
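For orientation, below is a minimal Keras sketch of the kind of recurrent disruption classifier such a suite might contain; the window length, signal count, layer sizes, and synthetic data are illustrative assumptions, not the actual FRNN configuration.

```python
# Minimal sketch of an LSTM-based disruption classifier in Keras.
# Shapes and hyperparameters are illustrative assumptions, not FRNN's configuration.
import numpy as np
from tensorflow import keras

TIMESTEPS = 128      # length of each diagnostic time window (assumed)
N_SIGNALS = 8        # number of plasma diagnostic channels per timestep (assumed)

model = keras.Sequential([
    keras.layers.Input(shape=(TIMESTEPS, N_SIGNALS)),
    keras.layers.LSTM(64),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # probability of an upcoming disruption
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train on synthetic placeholder data; real inputs would be tokamak diagnostic signals.
x = np.random.randn(256, TIMESTEPS, N_SIGNALS).astype("float32")
y = np.random.randint(0, 2, size=(256, 1)).astype("float32")
model.fit(x, y, batch_size=32, epochs=2)
```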

Efforts to port FusionDL to Aurora, the ALCF’s forthcoming GPU-powered exascale supercomputer from Intel-HPE, have been led by Kyle Felker, an ALCF computational scientist. The ALCF is a DOE Office of Science user facility at Argonne National Laboratory.

Lessons Learned

  • Remain flexible with respect to the adoption of different deep learning frameworks
  • Look for commonalities among deep learning models and training/inference pipelines rather than over-optimizing one particular model

More powerful predictive models

Exascale systems such as Aurora stand to enable fusion researchers to train increasingly large-scale deep learning models able to predict with greater accuracy the onset of plasma instabilities in tokamak reactors. The increased processing and predictive powers of exascale will permit more exhaustive hyperparameter tuning campaigns that in turn can lead to better-optimized configurations for the AI models.
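As a rough illustration of what such a tuning campaign looks like in code, the sketch below sweeps a small grid of candidate settings; the parameter names, ranges, and scoring function are placeholder assumptions, not the project's actual search.

```python
# Minimal sketch of a grid-style hyperparameter sweep; the parameters,
# ranges, and scoring function are illustrative placeholders only.
import itertools

def train_and_score(lstm_units, learning_rate, batch_size):
    """Placeholder for a full training run that returns a validation metric."""
    # In practice this would build, train, and evaluate a model on held-out shots.
    return 1.0 / (1.0 + abs(lstm_units - 64) * learning_rate * batch_size)

grid = {
    "lstm_units": [32, 64, 128],
    "learning_rate": [1e-4, 1e-3],
    "batch_size": [32, 64],
}

results = []
for values in itertools.product(*grid.values()):
    config = dict(zip(grid.keys(), values))
    results.append((train_and_score(**config), config))

best_score, best_config = max(results, key=lambda r: r[0])
print(f"best score {best_score:.3f} with {best_config}")
```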

In addition, exascale systems offer the potential to train more specialized or flexible models that can be shared in real time with experimental facilities to perform more complex prediction tasks than, for example, simply estimating when a plasma disruption will begin. Such tasks include providing a zoo of trained classifiers, each of which can fulfill a separate role in a plasma control system.

Consequently, the ported application can support a broader, deeper set of operations, ranging from a standardized, highly accurate model that can safely shut down a reactor if it detects an imminent disruption, to more complex models that provide live feedback to reactor operators about potential “disruption precursors” and advise which actuators might move the plasma into a more stable state.

Moreover, the developers aim to accelerate and improve communications between experimental sites and the supercomputing facilities with which they interact; the turnaround times for data transfers and for training new model architectures are expected to shorten significantly.

Porting to exascale

As with the effort to port the CANDLE suite, Data Parallel C++ (DPC++) has helped facilitate porting FusionDL to Aurora through Intel’s implementations and optimizations of the underlying deep learning frameworks, TensorFlow and PyTorch.

In addition to MPI, FusionDL uses the oneCCL and oneDNN programming models, the latter implicitly via TensorFlow and PyTorch. These high-level Python frameworks rely on oneDNN for computationally intensive GPU operations, while oneCCL delivers performance on multiple GPUs by providing optimized communication patterns that distribute parallel training across nodes.
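A minimal sketch of this pattern is shown below: a PyTorch training step wrapped in DistributedDataParallel over a communication backend, assuming Intel’s oneCCL bindings for PyTorch are installed so the “ccl” backend is available (falling back to “gloo” otherwise). The model, data, and launcher environment are placeholders, not the FusionDL code.

```python
# Sketch of data-parallel training over a oneCCL communication backend.
# Assumes Intel's oneCCL bindings for PyTorch (oneccl_bindings_for_pytorch) are
# installed so the "ccl" backend is registered; falls back to "gloo" otherwise.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

try:
    import oneccl_bindings_for_pytorch  # noqa: F401  (registers the "ccl" backend)
    backend = "ccl"
except ImportError:
    backend = "gloo"

dist.init_process_group(backend=backend)  # rank/world size come from the launcher env
rank = dist.get_rank()

model = torch.nn.LSTM(input_size=8, hidden_size=64, batch_first=True)
ddp_model = DDP(model)  # gradients are all-reduced across ranks each step
optimizer = torch.optim.Adam(ddp_model.parameters(), lr=1e-3)

# One illustrative step on synthetic data; a real run would iterate over sharded shots.
x = torch.randn(32, 128, 8)
output, _ = ddp_model(x)
loss = output.pow(2).mean()
loss.backward()
optimizer.step()

if rank == 0:
    print(f"step complete with backend={backend}")
```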

Collaborating with Intel engineers to diagnose the causes of model underperformance relative to NVIDIA capabilities has helped the development team understand their models more deeply. The team has evaluated and profiled their software on NVIDIA hardware (ThetaGPU’s A100 GPUs) using Nsight Systems. The insights gleaned informed and helped calibrate Felker’s expectations for the upcoming Polaris testbed and for Intel GPUs, and hence for Aurora.

Figures of merit

To assess the progress of their porting efforts, the developers encapsulate application performance in one or more figures of merit (FoM), which can be compared across hardware from different vendors, including NVIDIA, AMD, and Intel GPUs.

While training throughput, measured in examples per second, can provide a useful FoM, the developers have found it insufficient for certain performance analyses, particularly for the initial I/O phases of neural network training and for checkpointing between epochs.

Where a single FoM is insufficient on its own, the developers take a more rigorous approach: they maintain regularly updated matrices of FoM spanning vendors and hardware components, numerical precision settings (including float16, bfloat16, TensorFloat-32, and float32), and deep learning models.
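The sketch below illustrates the idea of such a matrix in miniature, timing a placeholder training step and tabulating throughput (examples per second) by model and precision setting; the model names, precision labels, and timings are synthetic assumptions, not measured results.

```python
# Sketch of assembling a figures-of-merit (FoM) matrix: training throughput in
# examples/second for each model and numerical-precision setting. The timings
# here are synthetic placeholders; a real table would come from benchmark runs.
import time

def measure_throughput(train_step, n_examples, n_iters=10):
    """Return examples/second averaged over n_iters calls to train_step()."""
    start = time.perf_counter()
    for _ in range(n_iters):
        train_step()
    elapsed = time.perf_counter() - start
    return n_examples * n_iters / elapsed

models = ["LSTM", "TCN"]
precisions = ["float32", "bfloat16", "float16"]

# Placeholder "training steps" that just sleep; real steps would run a batch.
fom = {
    (m, p): measure_throughput(lambda: time.sleep(0.001), n_examples=256)
    for m in models
    for p in precisions
}

# Print the matrix with models as rows and precision settings as columns.
print("model      " + "".join(f"{p:>12}" for p in precisions))
for m in models:
    print(f"{m:<11}" + "".join(f"{fom[(m, p)]:>12.0f}" for p in precisions))
```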

Stay flexible and don’t get too attached

After implementing several models, including long short-term memory (LSTM) networks and temporal convolutional networks (TCN), in both TensorFlow and PyTorch, the developers have come to accept a degree of flexibility with regard to which deep learning framework they adopt in a given situation.

Furthermore, they have learned not to become too attached to any particular deep learning model. In the fast-evolving field of scientific machine learning and AI, new deep learning architectures constantly supplant existing, widely deployed ones, which disincentivizes developers from expending excessive time and energy over-optimizing for any single model. A more efficient route to performance portability is to search across deep learning models and training/inference pipelines for commonalities such as data loading and batching, convolution operations, transformations of layer activations, and techniques for leveraging mixed precision and quantization.
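One way to picture that commonality is a single training loop that treats the model as a swappable component, as in the hedged PyTorch sketch below; both the LSTM and TCN-style models here are simplified stand-ins, not the project’s implementations.

```python
# Sketch of a shared training pipeline that treats the model as a swappable part,
# so recurrent (LSTM) and convolutional (TCN-style) architectures reuse the same
# data loading, batching, and optimization code. Both models are simplified stand-ins.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

class LSTMClassifier(nn.Module):
    def __init__(self, n_signals=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_signals, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, time, signals)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])           # classify from the last timestep

class TCNClassifier(nn.Module):
    def __init__(self, n_signals=8, channels=64):
        super().__init__()
        self.conv = nn.Conv1d(n_signals, channels, kernel_size=3, padding=2, dilation=2)
        self.head = nn.Linear(channels, 1)

    def forward(self, x):                      # x: (batch, time, signals)
        h = torch.relu(self.conv(x.transpose(1, 2)))
        return self.head(h.mean(dim=-1))       # pool over time, then classify

def train(model, loader, epochs=1):
    """Shared loop: the same batching, loss, and optimizer for any model."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

# Synthetic placeholder data shared by both architectures.
x = torch.randn(256, 128, 8)
y = torch.randint(0, 2, (256, 1)).float()
loader = DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)

for model in (LSTMClassifier(), TCNClassifier()):
    train(model, loader)
```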

Source: Nils Heinonen, ALCF
