Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone
CoRR (2024)
Abstract
We introduce phi-3-mini, a 3.8 billion parameter language model trained on
3.3 trillion tokens, whose overall performance, as measured by both academic
benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and
GPT-3.5 (e.g., phi-3-mini achieves 69% on MMLU and 8.38 on MT-bench), despite
being small enough to be deployed on a phone. The innovation lies entirely in
our dataset for training, a scaled-up version of the one used for phi-2,
composed of heavily filtered publicly available web data and synthetic data.
The model is also further aligned for robustness, safety, and chat format. We
also provide some initial parameter-scaling results with 7B and 14B models
trained for 4.8T tokens, called phi-3-small and phi-3-medium, both
significantly more capable than phi-3-mini (e.g., respectively 75% and 78% on
MMLU, and 8.7 and 8.9 on MT-bench). Moreover, we also introduce phi-3-vision, a
4.2 billion parameter model based on phi-3-mini with strong reasoning
capabilities for image and text prompts.