Near-Optimal NP-Hardness of Approximating MAX k-CSP_R

THEORY OF COMPUTING (2022)

Abstract
We prove almost-optimal hardness for MAX k-CSP_R. In MAX k-CSP_R, we are given a set of constraints, each of which depends on at most k variables. Each variable can take any value from 1, 2, ..., R. The goal is to find an assignment to the variables that maximizes the number of satisfied constraints. We show that, for any k >= 2 and R >= 16, it is NP-hard to approximate MAX k-CSP_R to within a factor of k^{O(k)} (log R)^{k/2} / R^{k-1}. In the regime 3 <= k = o(log R / log log R), this ratio improves upon Chan's O(k/R^{k-2})-factor NP-hardness of approximation of MAX k-CSP_R (J. ACM 2016). Moreover, when k = 2, our result matches the best known hardness result of Khot, Kindler, Mossel and O'Donnell (SIAM J. Comput. 2007). We remark that NP-hardness of an approximation factor of 2^{O(k)} log(kR) / R^{k-1} is implicit in the (independent) work of Khot and Saket (ICALP 2015), which is better than our ratio for all k >= 3. In addition to the above hardness result, by extending an algorithm for MAX 2-CSP_R by Kindler, Kolla and Trevisan (SODA 2016), we provide an Omega(log R / R^{k-1})-approximation algorithm for MAX k-CSP_R. Thanks to Khot and Saket's result, this algorithm is tight up to a factor of O(k^2) when k <= R^{O(1)}. In comparison, when k >= 3 is a constant, the previously best known algorithm achieves an O(k/R^{k-1})-approximation for the problem, which is a factor of O(k log R) away from the inapproximability ratio, in contrast to our gap of O(k^2).
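To make the problem definition concrete, below is a minimal Python sketch of a MAX k-CSP_R instance together with the trivial random-assignment baseline. The instance representation (a constraint as a tuple of variables plus a set of accepting local assignments) and all names (value, random_assignment) are illustrative assumptions, not taken from the paper; the paper's actual algorithm improves on this baseline.

```python
import itertools
import random

# Hypothetical representation: a constraint is (variable tuple, set of
# accepting local assignments); an instance is a list of constraints
# over n variables, each taking a value in 1..R.

def value(constraints, assignment):
    """Count how many constraints the given assignment satisfies."""
    return sum(
        tuple(assignment[v] for v in vars_) in accepting
        for vars_, accepting in constraints
    )

def random_assignment(n, R):
    """Uniformly random assignment. Each satisfiable k-ary constraint is
    satisfied with probability at least 1/R^k, so in expectation this is
    a trivial 1/R^k-approximation; the paper's algorithm achieves
    Omega(log R / R^(k-1))."""
    return [random.randint(1, R) for _ in range(n)]

# Toy example: n = 3 variables over R = 4 values, two 2-ary constraints.
R, n = 4, 3
dom = range(1, R + 1)
constraints = [
    ((0, 1), {(a, b) for a in dom for b in dom if a != b}),  # x0 != x1
    ((1, 2), {(a, a) for a in dom}),                         # x1 == x2
]

# Brute force over all R^n assignments (feasible only for tiny instances).
best = max(value(constraints, list(a))
           for a in itertools.product(dom, repeat=n))
print(best)                                      # 2: both constraints satisfiable
print(value(constraints, random_assignment(n, R)))  # baseline, 0..2
```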
Keywords
constraint satisfaction, hardness of approximation, approximation algorithm