
TRANSOM: An Efficient Fault-Tolerant System for Training LLMs

CoRR (2023)

Abstract
Large language models (LLMs) represented by ChatGPT have achieved profound applications and breakthroughs in various fields. This demonstrates that LLMs with hundreds of billions or trillions of parameters will continue to transform our daily lives. However, training LLMs with super-large-scale parameters requires even larger, high-performance GPU clusters and continuous training periods lasting for months. Because hardware and software failures are inevitable in large clusters, maintaining large-scale training sessions that last more than a week has become extremely challenging. A significant amount of time is spent on tasks such as checkpoint saving and recovery, task restart submissions, and task anomaly checks, greatly reducing the efficiency of effective training. To address these issues, we propose TRANSOM, a novel fault-tolerant large model training system. In this work, we have designed three key components: the training pipeline automatic fault tolerance and recovery mechanism (TOL), the training task multi-dimensional metric automatic anomaly detection system (TEE), and the training checkpoint asynchronous access automatic fault tolerance and recovery technology (TCE). Our preliminary results indicate that TRANSOM significantly accelerates large-scale LLM training on clusters. For instance, the pre-training time for GPT-3 with 175B parameters has been reduced by 28%, and checkpoint storage and recovery performance has improved by a factor of 20.
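The abstract does not describe how TCE's asynchronous checkpoint access is implemented. As a rough illustration of the general idea of overlapping checkpoint I/O with training, the following sketch snapshots model and optimizer state on the training thread and persists it from a background thread; the helper name `async_save` and all details here are hypothetical and are not taken from the paper.

```python
import copy
import threading
import torch


def async_save(model, optimizer, step, path):
    """Illustrative asynchronous checkpointing (not the paper's TCE):
    snapshot state on the training thread, then write it to disk in a
    background thread so the GPU is not blocked on storage I/O."""
    # Copy tensors to host memory before returning control to training,
    # so later optimizer updates cannot corrupt the checkpoint.
    snapshot = {
        "step": step,
        "model": {k: v.detach().cpu().clone()
                  for k, v in model.state_dict().items()},
        "optimizer": copy.deepcopy(optimizer.state_dict()),
    }

    def _write():
        torch.save(snapshot, path)  # persist in the background

    writer = threading.Thread(target=_write, daemon=True)
    writer.start()
    return writer  # caller may join() before shutdown
```

In such a scheme, only the in-memory snapshot blocks the training loop; the slow write to shared storage proceeds concurrently with the next training steps.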