An Analysis of Design Process and Performance in Distributed Data Science Teams

Semantic Scholar (2019)

Abstract
Purpose – Often, it is assumed that teams are better at solving problems than individuals working independently. However, recent work in engineering, design, and psychology contradicts this assumption. This work examines the behavior of teams engaged in data science competitions. Crowdsourced competitions have seen increased use for software development and data science, and platforms often encourage teamwork between participants.

Design/methodology/approach – We specifically examine teams participating in data science competitions hosted by Kaggle. We analyze data provided by Kaggle to compare the effect of team size and interaction frequency on team performance. We also contextualize these results through a semantic analysis.

Findings – This work demonstrates that groups of individuals working independently may outperform interacting teams on average, but that small, interacting teams are more likely to win competitions. The semantic analysis revealed differences in forum participation, verb usage, and pronoun usage when comparing top- and bottom-performing teams.

Research limitations/implications – These results reveal a perplexing tension that must be explored further: true teams may experience better performance with higher cohesion, but nominal teams may perform even better on average with essentially no cohesion. Limitations of this research include not factoring in team member experience level and reliance on extant data.

Originality/value – These results are potentially of use to designers of crowdsourced data science competitions as well as managers of and contributors to distributed software development projects.