LLM can Achieve Self-Regulation via Hyperparameter Aware Generation
CoRR (2024)
Abstract
In the realm of Large Language Models (LLMs), users commonly employ diverse
decoding strategies and adjust hyperparameters to control the generated text.
However, a critical question emerges: are LLMs aware of the existence of these
decoding strategies, and are they capable of regulating themselves? The current
decoding process often relies on empirical, heuristic manual adjustment of
hyperparameters based on the type of task and its demands. This process is
typically cumbersome, and the chosen decoding hyperparameters may not be
optimal for every sample. To address these challenges, we propose a novel text
generation paradigm termed Hyperparameter Aware Generation (HAG). Through
hyperparameter-aware instruction tuning, the LLM autonomously determines the
optimal decoding strategy and configuration based on the input sample, enabling
self-regulation. Our approach eliminates the need for extensive manual tuning,
offering more autonomous, self-regulating model behavior. Experimental results
spanning six datasets across reasoning, creativity, translation, and
mathematics tasks demonstrate that hyperparameter-aware instruction tuning
empowers LLMs to self-regulate their decoding strategy and hyperparameters. HAG
extends the current text generation paradigm, highlighting the feasibility of
endowing LLMs with self-regulating decoding strategies.
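
The abstract does not spell out the mechanics of HAG, but a plausible reading
is a two-stage loop: the hyperparameter-aware tuned model first proposes a
decoding configuration for the given input, and that configuration is then
applied when generating the actual answer. The sketch below illustrates that
reading only; `call_llm`, `parse_hyperparams`, the JSON output format, and the
meta-prompt wording are all assumptions for illustration, not the paper's
implementation.

```python
# A minimal sketch of a two-stage self-regulated generation loop, assuming the
# tuned model emits its preferred decoding configuration as JSON. All names
# here are hypothetical placeholders; the paper's actual prompts, tuning data,
# and hyperparameter ranges are not given in the abstract.

import json


def call_llm(prompt: str, temperature: float = 1.0, top_p: float = 1.0) -> str:
    """Placeholder for an actual LLM API or local model call."""
    raise NotImplementedError("wire this to your LLM backend")


def parse_hyperparams(raw: str) -> dict:
    """Parse the model's self-proposed decoding configuration.

    Assumes the tuned model was trained to emit JSON such as
    {"temperature": 0.2, "top_p": 0.9}; falls back to safe defaults
    if parsing fails.
    """
    try:
        cfg = json.loads(raw)
        if not isinstance(cfg, dict):
            raise ValueError("expected a JSON object")
        return {
            "temperature": float(cfg.get("temperature", 1.0)),
            "top_p": float(cfg.get("top_p", 1.0)),
        }
    except (json.JSONDecodeError, TypeError, ValueError):
        return {"temperature": 1.0, "top_p": 1.0}


def hag_generate(user_input: str) -> str:
    # Stage 1: ask the tuned model which decoding configuration suits
    # this particular sample.
    meta_prompt = (
        "Given the task below, output the decoding hyperparameters you "
        "should use as JSON with keys 'temperature' and 'top_p'.\n\n"
        "Task: " + user_input
    )
    cfg = parse_hyperparams(call_llm(meta_prompt, temperature=0.0))

    # Stage 2: answer the task with the self-chosen configuration applied.
    return call_llm(user_input, **cfg)
```

Pinning stage 1 to temperature 0.0 keeps the self-proposed configuration
deterministic; whether HAG uses a separate meta-prompt like this or folds the
decision into a single pass is not stated in the abstract.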