Demographic Representation in 3 Leading Artificial Intelligence Text-to-Image Generators

JAMA SURGERY(2024)

Abstract
IMPORTANCE
The progression of artificial intelligence (AI) text-to-image generators raises concerns about perpetuating societal biases, including profession-based stereotypes.

OBJECTIVE
To gauge the demographic accuracy of surgeon representation by 3 prominent AI text-to-image models compared with real-world attending surgeons and trainees.

DESIGN, SETTING, AND PARTICIPANTS
The study used a cross-sectional design, assessing the latest release of 3 leading publicly available AI text-to-image generators, chosen for their popularity at the time of the study. Seven independent reviewers categorized the AI-produced images. A total of 2400 images were analyzed, generated across 8 surgical specialties within each model. An additional 1200 images were evaluated based on geographic prompts for 3 countries. The study was conducted in May 2023. Real-world demographic data were drawn from the Association of American Medical Colleges subspecialty report, which references the American Medical Association master file for physician demographic characteristics across 50 states. Because trainee demographics differ from those of attending surgeons, the two groups were examined separately. Race (non-White, defined as any race other than non-Hispanic White, and White) and gender (female and male) were assessed to evaluate known societal biases.

EXPOSURES
Images were generated using the prompt template "a photo of the face of a [blank]", with the blank replaced by a surgical specialty. Geographic-based prompting was evaluated by specifying the most populous country on each of 3 continents (the US, Nigeria, and China).

MAIN OUTCOMES AND MEASURES
The study compared the representation of female and non-White surgeons in each model with real demographic data using χ², Fisher exact, and proportion tests.
RESULTS
There was a significantly higher mean representation of female (35.8% vs 14.7%; P < .001) and non-White (37.4% vs 22.8%; P < .001) surgeons among trainees than among attending surgeons. DALL-E 2 reflected attending surgeons' true demographic data for female surgeons (15.9% vs 14.7%; P = .39) and non-White surgeons (22.6% vs 22.8%; P = .92) but underestimated trainees' representation for both female (15.9% vs 35.8%; P < .001) and non-White (22.6% vs 37.4%; P < .001) surgeons. In contrast, Midjourney and Stable Diffusion had significantly lower representation of female (0% and 1.8%, respectively; P < .001) and non-White (0.5% and 0.6%, respectively; P < .001) surgeons than either DALL-E 2 or the true demographic data. Geographic-based prompting increased non-White surgeon representation but did not alter female representation in any model for prompts specifying Nigeria and China.

CONCLUSIONS AND RELEVANCE
In this study, 2 leading publicly available text-to-image generators amplified societal biases, depicting over 98% of surgeons as White and male. While 1 of the models depicted demographic characteristics comparable to real attending surgeons, all 3 models underestimated trainee representation. The findings suggest the need for guardrails and robust feedback systems to minimize the risk of AI text-to-image generators magnifying stereotypes in professions such as surgery.
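The proportion comparisons above rest on standard tests such as the Pearson χ² test on a 2×2 table. As a minimal sketch, the following Python uses only the standard library and illustrative counts (not the study's raw data) loosely matching the DALL-E 2 female-representation comparison (15.9% of generated images vs 14.7% of attendings):

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square test (1 df, no continuity correction) for the
    2x2 table [[a, b], [c, d]]; returns (statistic, p_value)."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # Survival function of a chi-square variable with 1 df: P(X > x) = erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Illustrative counts only: 127 of 800 generated images depict female surgeons
# (~15.9%), vs a hypothetical reference sample of 147 of 1000 attendings (14.7%).
stat, p = chi2_2x2(127, 800 - 127, 147, 1000 - 147)
print(f"chi2 = {stat:.3f}, p = {p:.3f}")  # a large p -> no significant difference
```

With these made-up counts the test, like the study's comparison, fails to reject the null of equal proportions; the real analysis additionally used Fisher exact tests, which are preferred when expected cell counts are small.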