Rating Worker Skills and Task Strains in Collaborative Crowd Computing: A Competitive Perspective

WWW '19: The World Wide Web Conference (2019)

Abstract
Collaborative crowd computing, e.g., human computation and crowdsourcing, involves a team of workers jointly solving tasks of varying difficulty. In such settings, the ability to manage the workflow based on workers' skills and task strains can improve output quality. However, many practical systems employ a simple additive scoring scheme to measure worker performance, and do not consider task difficulty or worker interaction. Some prior work has looked at ways of measuring worker performance or task difficulty in collaborative settings, but usually assumes sophisticated models. In our work, we address this problem by taking a competitive perspective and leveraging the vast prior work on competitive games. We adapt TrueSkill's standard competitive model by treating the task as a fictitious worker that the team of humans jointly plays against. We explore two fast online approaches to estimate the worker and task ratings: (1) an Elo rating system, and (2) approximate inference with the Expectation Propagation algorithm. To assess the strengths and weaknesses of the various rating methods, we conduct a human study on Amazon's Mechanical Turk with a simulated ESP game. Our experimental design has the novel element of pairing a carefully designed bot with human workers; these encounters can be used, in turn, to generate a larger set of simulated encounters, yielding more data. Our analysis confirms that our ranking scheme performs consistently and robustly, and outperforms the traditional additive scheme in terms of prediction accuracy.
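
The "task as fictitious opponent" idea can be illustrated with a short sketch. Below is a minimal, hypothetical Elo-style update (not the authors' exact formulation) in which a team of workers jointly plays against the task: the team rating is taken as the mean of the workers' ratings, and after each encounter the workers' skill ratings and the task's strain rating move in opposite directions. The constant K, the helper names, and the averaging rule are illustrative assumptions.

# Minimal sketch of an Elo-style update with the task as a fictitious opponent.
# Assumptions: team rating = mean of worker ratings; K and BASE are standard Elo-style constants.

K = 32          # update step size (assumed; classic Elo uses values around 16-32)
BASE = 400.0    # logistic scale used by classic Elo

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that side A beats side B under the Elo logistic model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / BASE))

def update(worker_ratings: dict, task_rating: float, workers: list, team_won: bool):
    """Update worker skill ratings and the task's strain rating after one encounter.

    workers:  ids of the workers who attempted the task together.
    team_won: True if the team solved the task (e.g., agreed on a label in time).
    """
    team_rating = sum(worker_ratings[w] for w in workers) / len(workers)
    p_team = expected_score(team_rating, task_rating)
    outcome = 1.0 if team_won else 0.0

    delta = K * (outcome - p_team)
    for w in workers:          # every teammate receives the same shift
        worker_ratings[w] += delta
    task_rating -= delta       # the task moves in the opposite direction
    return worker_ratings, task_rating

# Example: two workers beat a moderately hard task.
ratings = {"alice": 1500.0, "bob": 1450.0}
ratings, task = update(ratings, task_rating=1520.0, workers=["alice", "bob"], team_won=True)

Sharing a single update among teammates keeps the scheme roughly as cheap as a simple additive score while still letting task difficulty influence how much credit a win or loss carries.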
Keywords
Human computation, collaborative crowdsourcing, ranking, rating