Scaling Submodular Maximization Via Pruned Submodularity Graphs

International Conference on Artificial Intelligence and Statistics (2017)

Abstract
We propose a new random pruning method (called "submodular sparsification (SS)") to reduce the cost of submodular maximization. The pruning is applied via a submodularity graph over the n ground elements, where each directed edge is associated with a pairwise dependency defined by the submodular function. In each step, SS prunes a 1 − 1/√c (for c > 1) fraction of the nodes using weights on edges computed based on only a small number (O(log n)) of randomly sampled nodes. The algorithm requires log_{√c} n steps with a small and highly parallelizable per-step computation. An accuracy-speed tradeoff parameter c, set as c = 8, leads to a fast shrink rate √2/4 and small iteration complexity log_{2√2} n. Analysis shows that w.h.p., the greedy algorithm on the pruned set of size O(log² n) can achieve a guarantee similar to that of processing the original dataset. In news and video summarization tasks, SS is able to substantially reduce both computational costs and memory usage, while maintaining (or even slightly exceeding) the quality of the original (and much more costly) greedy algorithm.
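The pruning loop described above can be sketched in a few lines. This is an illustrative sketch only, not the paper's exact procedure: the function names (`marginal_gain`, `submodular_sparsify`), the probe-set size heuristic, and the stopping threshold of O(log² n) surviving nodes are assumptions for illustration. Each round scores every surviving node by its smallest conditional gain against a random probe set of size O(log n) (a node that some probe already "covers" well gets a low score and is redundant), then keeps only a 1/√c fraction of the highest-scoring nodes.

```python
import math
import random


def marginal_gain(f, v, S):
    """f(S ∪ {v}) − f(S) for a set function f taking a frozenset."""
    S = frozenset(S)
    return f(S | {v}) - f(S)


def submodular_sparsify(f, ground, c=8, seed=0):
    """Illustrative sketch of SS-style pruning (assumed interface).

    Repeatedly keeps a 1/sqrt(c) fraction of the surviving nodes,
    scoring each node against a small random probe set, until roughly
    O(log^2 n) nodes remain.
    """
    rng = random.Random(seed)
    V = list(ground)
    n = len(V)
    keep_frac = 1.0 / math.sqrt(c)  # with c = 8 this is sqrt(2)/4
    target = max(1, int(math.log(n) ** 2))  # assumed stopping size

    while len(V) > target:
        # O(log n) randomly sampled probe nodes
        probes = rng.sample(V, min(len(V), max(1, int(math.log(n) + 1))))

        def score(v):
            # Small min conditional gain => some probe nearly covers v,
            # so v is redundant and a candidate for pruning.
            others = [u for u in probes if u != v]
            if not others:
                return float("inf")
            return min(marginal_gain(f, v, {u}) for u in others)

        V.sort(key=score, reverse=True)  # keep the hardest-to-cover nodes
        V = V[: max(1, int(math.ceil(keep_frac * len(V))))]
    return V


# Usage on a toy coverage function (each element covers one of 5 items):
cover = {i: {i % 5} for i in range(40)}
f = lambda S: len(set().union(*[cover[x] for x in S])) if S else 0
pruned = submodular_sparsify(f, range(40), c=8)
```

Since the per-node scores in a round depend only on the fixed probe set, the scoring step is embarrassingly parallel across nodes, which matches the "highly parallelizable per-step computation" claim in the abstract.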