OpenELM: An Efficient Language Model Family with Open Training and Inference Framework
CoRR (2024)
Abstract
The reproducibility and transparency of large language models are crucial for
advancing open research, ensuring the trustworthiness of results, and enabling
investigations into data and model biases, as well as potential risks. To this
end, we release OpenELM, a state-of-the-art open language model. OpenELM uses a
layer-wise scaling strategy to efficiently allocate parameters within each
layer of the transformer model, leading to enhanced accuracy. For example, with
a parameter budget of approximately one billion parameters, OpenELM exhibits a
2.36% improvement in accuracy compared to OLMo while requiring 2x fewer
pre-training tokens.
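The layer-wise scaling idea can be illustrated with a short sketch: rather than repeating one uniform transformer block, each layer's attention-head count and feed-forward width are set by multipliers that vary across depth. The code below is a minimal, hypothetical rendering of such a schedule; the names (alpha, beta, head_dim), the default values, and the linear interpolation are illustrative assumptions, not the paper's exact configuration.

from dataclasses import dataclass

@dataclass
class LayerConfig:
    num_heads: int   # attention heads in this layer
    ffn_dim: int     # hidden size of this layer's feed-forward block

def layerwise_scaling(num_layers: int,
                      model_dim: int = 2048,
                      head_dim: int = 64,
                      alpha: tuple = (0.5, 1.0),   # attention width multipliers (first -> last layer)
                      beta: tuple = (2.0, 4.0)     # FFN width multipliers (first -> last layer)
                      ) -> list[LayerConfig]:
    """Return per-layer configs whose widths grow with depth,
    instead of repeating one uniform layer throughout the model.
    Illustrative sketch only; values are assumptions, not OpenELM's settings."""
    configs = []
    for i in range(num_layers):
        t = i / max(num_layers - 1, 1)            # 0.0 at the first layer, 1.0 at the last
        a = alpha[0] + (alpha[1] - alpha[0]) * t  # attention multiplier for layer i
        b = beta[0] + (beta[1] - beta[0]) * t     # FFN multiplier for layer i
        num_heads = max(1, round(a * model_dim / head_dim))
        ffn_dim = round(b * model_dim)
        configs.append(LayerConfig(num_heads=num_heads, ffn_dim=ffn_dim))
    return configs

if __name__ == "__main__":
    for idx, cfg in enumerate(layerwise_scaling(num_layers=8)):
        print(f"layer {idx}: heads={cfg.num_heads}, ffn_dim={cfg.ffn_dim}")

Under this kind of schedule, earlier layers stay narrow while later layers receive more attention heads and a wider feed-forward block, so a fixed parameter budget is spread non-uniformly across depth.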
Diverging from prior practices that only provide model weights and inference
code, and pre-train on private datasets, our release includes the complete
framework for training and evaluation of the language model on publicly
available datasets, including training logs, multiple checkpoints, and
pre-training configurations. We also release code to convert models to the MLX
library for inference and fine-tuning on Apple devices. This comprehensive
release aims to empower and strengthen the open research community, paving the
way for future open research endeavors.
Our source code, along with pre-trained model weights and training recipes, is
available at . Additionally, models can be found on HuggingFace at:
.