LLM-Ensemble: Optimal Large Language Model Ensemble Method for E-commerce Product Attribute Value Extraction
CoRR (2024)
Abstract
Product attribute value extraction is a pivotal component in Natural Language
Processing (NLP) and the contemporary e-commerce industry. The provision of
precise product attribute values is fundamental in ensuring high-quality
recommendations and enhancing customer satisfaction. The recently emerging
Large Language Models (LLMs) have demonstrated state-of-the-art performance in
numerous attribute extraction tasks, without the need for domain-specific
training data. Nevertheless, varying strengths and weaknesses are exhibited by
different LLMs due to the diversity in data, architectures, and
hyperparameters. This variation makes them complementary to each other, with no
single LLM dominating all others. Considering the diverse strengths and
weaknesses of LLMs, it becomes necessary to develop an ensemble method that
leverages their complementary potentials. In this paper, we propose a novel
algorithm called LLM-ensemble that ensembles different LLMs' outputs for
attribute value extraction. We iteratively learn a weight for each LLM and
aggregate their labels by weighted voting to predict the final attribute value. Not
only can our proposed method be proven theoretically optimal, but it also
ensures efficient computation, fast convergence, and safe deployment. We have
also conducted extensive experiments with various state-of-the-art LLMs,
including Llama2-13B, Llama2-70B, PaLM-2, GPT-3.5, and GPT-4, on Walmart's
internal data. Our offline metrics demonstrate that the LLM-ensemble method
outperforms all the state-of-the-art single LLMs on Walmart's internal dataset.
This method has been launched in several production models, leading to improved
Gross Merchandise Volume (GMV), Click-Through Rate (CTR), Conversion Rate
(CVR), and Add-to-Cart Rate (ATC).
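The abstract does not spell out the aggregation procedure, but the description ("iteratively learn the weights for different LLMs to aggregate the labels") suggests an EM-style weighted voting scheme, in the spirit of classical label-aggregation methods such as Dawid-Skene. The sketch below is a hypothetical minimal illustration of that idea, not the paper's actual algorithm: each model's weight is its agreement rate with the current consensus, and the consensus is recomputed by weighted vote.

```python
from collections import Counter

def ensemble_labels(predictions, n_iters=10):
    """Iteratively estimate per-model weights and aggregate predicted labels.

    predictions: list of dicts, one per LLM, mapping item_id -> predicted label.
    Returns (weights, aggregated), where aggregated maps item_id -> final label.
    """
    n_models = len(predictions)
    items = set().union(*(p.keys() for p in predictions))
    weights = [1.0 / n_models] * n_models  # start from uniform weights

    for _ in range(n_iters):
        # E-step: consensus label per item via weighted vote
        aggregated = {}
        for item in items:
            votes = Counter()
            for w, preds in zip(weights, predictions):
                if item in preds:
                    votes[preds[item]] += w
            aggregated[item] = votes.most_common(1)[0][0]

        # M-step: each model's weight = its agreement rate with the consensus
        for m, preds in enumerate(predictions):
            agree = sum(preds[i] == aggregated[i] for i in preds)
            weights[m] = agree / max(1, len(preds))

        # normalize so weights sum to 1
        total = sum(weights)
        weights = [w / total for w in weights]

    return weights, aggregated
```

With three models where two agree on each attribute, the consensus follows the majority and the model that agrees with the consensus most often receives the largest weight. The real method carries theoretical optimality and convergence guarantees that this toy loop does not attempt to reproduce.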