Meta-Task Prompting Elicits Embeddings from Large Language Models
CoRR (2024)
Abstract
In this work, we introduce a new unsupervised embedding method, Meta-Task
Prompting with Explicit One-Word Limitation (MetaEOL), for generating
high-quality sentence embeddings from Large Language Models (LLMs) without the
need for model fine-tuning or task-specific engineering. Leveraging meta-task
prompting, MetaEOL guides LLMs to produce embeddings through a series of
carefully designed prompts that address multiple representational aspects. Our
comprehensive experiments demonstrate that embeddings averaged from various
meta-tasks yield competitive performance on Semantic Textual Similarity (STS)
benchmarks and excel in downstream tasks, surpassing contrastive-trained
models. Our findings suggest a new scaling law for embedding generation,
offering a versatile, resource-efficient approach for embedding extraction
across diverse sentence-centric scenarios.
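The abstract's core idea can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the meta-task templates below are hypothetical paraphrases, and the LLM call is stubbed out. The sketch shows the two mechanics the abstract names: several meta-task prompts, each ending with an explicit one-word cue, and an element-wise average of the per-prompt vectors into one sentence embedding.

```python
# Hypothetical meta-task templates in the spirit of MetaEOL: each probes a
# different representational aspect and ends with an explicit one-word cue.
META_TASK_TEMPLATES = [
    'In this task, you classify the text. This sentence : "{s}" means in one word:"',
    'In this task, you judge the sentiment. This sentence : "{s}" means in one word:"',
    'In this task, you restate the text. This sentence : "{s}" means in one word:"',
    'In this task, you extract key information. This sentence : "{s}" means in one word:"',
]


def build_meta_task_prompts(sentence: str) -> list:
    """Build one prompt per meta-task for the given sentence."""
    return [t.format(s=sentence) for t in META_TASK_TEMPLATES]


def average_embedding(per_prompt_vectors: list) -> list:
    """Element-wise mean of the per-prompt vectors (one vector per meta-task).

    In practice each vector would be the frozen LLM's hidden state at the
    final prompt token, i.e. where the "one word" would be generated.
    """
    n = len(per_prompt_vectors)
    dim = len(per_prompt_vectors[0])
    return [sum(v[i] for v in per_prompt_vectors) / n for i in range(dim)]
```

In a real pipeline, each prompt would be fed to a frozen LLM and the last-token hidden state taken as that prompt's vector; no fine-tuning is involved, consistent with the unsupervised setting the abstract describes.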