Partial Search in a Frozen Network is Enough to Find a Strong Lottery Ticket
CoRR (2024)
Abstract
Randomly initialized dense networks contain subnetworks that achieve high accuracy without weight learning – strong lottery tickets (SLTs). Recently, Gadhikar et al. (2023) demonstrated theoretically and experimentally that SLTs can also be found within a randomly pruned source network, thus reducing the SLT search space. However, this limits the search to SLTs that are even sparser than the source, leading to worse accuracy due to unintentionally high sparsity. This paper proposes a method that reduces the SLT search space by an arbitrary ratio, independent of the desired SLT sparsity. A random subset of the initial weights is excluded from the search space by freezing it – i.e., each frozen weight is either permanently pruned or locked as a fixed part of the SLT. The existence of SLTs in such a reduced search space is theoretically guaranteed by our subset-sum approximation with randomly frozen variables. Beyond reducing the search space, the random freezing pattern can also be exploited to reduce model size at inference. Furthermore, experimental results show that the proposed method finds SLTs with a better trade-off between accuracy and model size than SLTs obtained from dense or randomly pruned source networks. In particular, the SLT found in a frozen graph neural network achieves higher accuracy than its weight-trained counterpart while reducing model size by 40.3×.
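The freezing mechanism summarized above can be illustrated with a short sketch. The PyTorch code below is a minimal, hypothetical reconstruction, not the authors' implementation: a random fraction of a layer's weights is frozen up front, each frozen weight is either permanently pruned or locked at its initial value, and an edge-popup-style score search runs only over the remaining weights. All names and hyperparameters (`FrozenSupermaskLinear`, `freeze_ratio`, `prune_within_frozen`, `keep_ratio`) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMask(torch.autograd.Function):
    """Straight-through top-k mask over the searchable scores (edge-popup style)."""

    @staticmethod
    def forward(ctx, scores, searchable, k):
        mask = torch.zeros_like(scores)
        # Frozen entries never enter the ranking.
        ranked = scores.masked_fill(~searchable, float("-inf"))
        _, idx = ranked.flatten().topk(k)
        mask.view(-1)[idx] = 1.0
        return mask

    @staticmethod
    def backward(ctx, grad_output):
        # Pass the gradient straight through to the scores.
        return grad_output, None, None


class FrozenSupermaskLinear(nn.Linear):
    """Linear layer with fixed random weights: a random subset is frozen
    (pruned or locked) and the SLT search runs only over the rest."""

    def __init__(self, in_features, out_features,
                 freeze_ratio=0.5, prune_within_frozen=0.5, keep_ratio=0.5):
        super().__init__(in_features, out_features, bias=False)
        self.weight.requires_grad = False  # weights are never trained

        frozen = torch.rand_like(self.weight) < freeze_ratio
        pruned = frozen & (torch.rand_like(self.weight) < prune_within_frozen)
        self.register_buffer("searchable", ~frozen)       # weights the search may pick
        self.register_buffer("locked", frozen & ~pruned)  # fixed part of the SLT
        # Pruned weights (frozen and not locked) are never selected and stay zero.

        # Number of searchable weights to keep in the ticket.
        self.k = max(1, int(keep_ratio * int(self.searchable.sum())))
        self.scores = nn.Parameter(torch.randn_like(self.weight) * 0.01)

    def forward(self, x):
        chosen = TopKMask.apply(self.scores, self.searchable, self.k)
        mask = chosen + self.locked.float()  # locked weights always belong to the SLT
        return F.linear(x, self.weight * mask)


# Usage: only `layer.scores` receives gradients during the SLT search.
layer = FrozenSupermaskLinear(784, 256, freeze_ratio=0.5)
out = layer(torch.randn(32, 784))
```

Because the freeze pattern is generated from a fixed random split, it can be regenerated from a seed rather than stored, which is one way the frozen entries could help shrink the model at inference as the abstract describes.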