Comparing Dominance Hierarchy Methods Using A Data-Splitting Approach With Real-World Data

Behavioral Ecology (2020)

Abstract
The development of numerical methods for inferring social ranks has resulted in an overwhelming array of options to choose from. Previous work has established the validity of these methods through the use of simulated datasets, by determining whether a given ranking method can accurately reproduce the dominance hierarchy known to exist in the data. Here, we offer a complementary approach that assesses the reliability of calculated dominance hierarchies by asking whether the calculated rank order produced by a given method accurately predicts the outcome of a subsequent contest between two opponents. Our method uses a data-splitting "training-testing" approach, and we demonstrate its application to real-world data from wild vervet monkeys (Chlorocebus pygerythrus) collected over 3 years. We assessed the reliability of seven methods plus six analytical variants. In our study system, all 13 methods tested performed well at predicting future aggressive outcomes, despite some differences in the inferred rank order produced. When we split the dataset with a 6-month training period and a variable testing dataset, all methods predicted aggressive outcomes correctly for the subsequent 10 months. Beyond this 10-month cut-off, the reliability of predictions decreased, reflecting shifts in the demographic composition of the group. We also demonstrate how a data-splitting approach provides researchers not only with a means of determining the most reliable method for their dataset but also allows them to assess how rank reliability changes among age-sex classes in a social group, and so tailor their choice of method to the specific attributes of their study system.
Keywords
data-splitting approach, dominance hierarchy, nonsequential approach, real-world data, reliability, sequential approach