What kind of news gatekeepers do we want machines to be? Filter bubbles, fragmentation, and the normative dimensions of algorithmic recommendations

Computers in Human Behavior (2019)

Cited by 195
Abstract
Machines are increasingly aiding or replacing humans in journalistic work, primarily in news distribution. We examined whether news recommendation engines contribute to filter bubbles and fragmented news audiences by asking a diverse set of real-world participants (N = 168), using their personal Google accounts, to search Google News for news about Hillary Clinton and Donald Trump during the 2016 U.S. presidential campaign and to report the first five stories they were recommended on each candidate. Users with different political leanings from different states were recommended very similar news, challenging the assumption that algorithms necessarily encourage echo chambers. Yet we also found a very high degree of homogeneity and concentration in the news recommendations. On average, the five most recommended news organizations comprised 69% of all recommendations, and five news organizations alone accounted for 49% of the total number of recommendations collected. Of the 14 organizations that dominated recommendations across the different searches, only three were born-digital, indicating that the news agenda constructed on Google News replicates traditional industry structures more than it disrupts them. We use these findings to explore the challenges of studying machine behavior in news from a normative perspective: because there is no single agreed-upon standard for humans as news gatekeepers, assessing the performance of machines in that role is doubly complicated.
Keywords
Algorithm, Echo chamber, Filter bubble, Fragmentation, Gatekeeping, Google News, Journalism, News diversity, Personalization