Gecko: Versatile Text Embeddings Distilled from Large Language Models
CoRR (2024)
Abstract
We present Gecko, a compact and versatile text embedding model. Gecko
achieves strong retrieval performance by leveraging a key idea: distilling
knowledge from large language models (LLMs) into a retriever. Our two-step
distillation process begins with generating diverse, synthetic paired data
using an LLM. Next, we further refine the data quality by retrieving a set of
candidate passages for each query, and relabeling the positive and hard
negative passages using the same LLM. The effectiveness of our approach is
demonstrated by the compactness of Gecko. On the Massive Text Embedding
Benchmark (MTEB), Gecko with 256 embedding dimensions outperforms all existing
entries with 768 embedding dimensions. Gecko with 768 embedding dimensions achieves
an average score of 66.31, competing with 7x larger models and 5x higher
dimensional embeddings.
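
The sketch below illustrates the two-step distillation pipeline the abstract describes: an LLM first generates a synthetic (task, query) pair from a sampled passage, and the same LLM then rescores retrieved candidate passages to relabel the positive and mine hard negatives. This is a minimal, hypothetical rendering, not the paper's actual code: the callables `generate` and `score` stand in for LLM calls, and all function names and prompts are illustrative assumptions.

```python
from typing import Callable, List, Tuple

def make_synthetic_pair(
    passage: str,
    generate: Callable[[str], Tuple[str, str]],
) -> Tuple[str, str]:
    """Step 1: prompt an LLM with a sampled passage so it writes a
    retrieval task description and a query fitting that task.
    `generate` is a hypothetical stand-in for an LLM completion call."""
    prompt = (
        "Given the passage below, write a retrieval task description "
        "and a query a user might issue for that task.\n\n"
        f"Passage: {passage}"
    )
    return generate(prompt)  # -> (task, query)

def relabel(
    query: str,
    candidates: List[str],
    score: Callable[[str, str], float],
    n_negatives: int = 1,
) -> Tuple[str, List[str]]:
    """Step 2: rescore retrieved candidate passages with the same LLM.
    The top-scored candidate becomes the positive (it may differ from
    the passage the query was generated from); lower-ranked candidates
    serve as hard negatives."""
    ranked = sorted(candidates, key=lambda p: score(query, p), reverse=True)
    return ranked[0], ranked[1 : 1 + n_negatives]

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; a real pipeline
    # would call an LLM for both generation and scoring.
    demo_generate = lambda prompt: ("question answering", "what is a gecko")
    demo_score = lambda q, p: float(len(set(q.split()) & set(p.split())))

    task, query = make_synthetic_pair("a gecko is a small lizard", demo_generate)
    candidates = ["a gecko is a small lizard", "paris is a city in france"]
    positive, negatives = relabel(query, candidates, demo_score)
    print(task, query, positive, negatives)
```

Letting the LLM relabel the positive, rather than always pairing the query with its source passage, is what allows the second step to refine data quality: the retrieved set can contain a passage that answers the generated query better than the passage that inspired it.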