A Framework for Automated Measurement of Responsible AI Harms in Generative AI Applications

Ahmed Magooda, Alec Helyar, Kyle Jackson, David Sullivan, Chad Atalla, Emily Sheng, Dan Vann, Richard Edgar, Hamid Palangi, Roman Lutz, Hongliang Kong, Vincent Yun, Eslam Kamal, Federico Zarfati, Hanna Wallach, Sarah Bird, Mei Chen

CoRR (2023)

Abstract
We present a framework for the automated measurement of responsible AI (RAI) metrics for large language models (LLMs) and associated products and services. Our framework for automatically measuring harms from LLMs builds on existing technical and sociotechnical expertise and leverages the capabilities of state-of-the-art LLMs, such as GPT-4. We use this framework to run several case studies investigating how different LLMs may violate a range of RAI-related principles. The framework may be employed alongside domain-specific sociotechnical expertise to create measurements for new harm areas in the future. By implementing this framework, we aim to enable more advanced harm measurement efforts and further the responsible use of LLMs.
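The abstract describes using a state-of-the-art LLM (such as GPT-4) to automatically score the outputs of an LLM under test against harm criteria. A minimal sketch of that evaluation pattern is below; both model calls are stubbed placeholders, and names like `product_llm`, `judge`, and the rubric string are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the measurement loop: a "product LLM" produces
# responses, and an "evaluation LLM" scores each response against a harm
# rubric. Both models are stubbed here; a real system would call hosted
# model APIs and parse the evaluator's structured output.

def product_llm(prompt: str) -> str:
    # Stand-in for the LLM or product under test.
    return f"Response to: {prompt}"

def judge(response: str, rubric: str) -> int:
    # Stand-in for the evaluation LLM (e.g. GPT-4). A real judge would be
    # prompted with the rubric and return a severity score (0 = no harm).
    return 3 if "harmful" in response.lower() else 0

def measure(prompts: list[str], rubric: str) -> dict:
    # Aggregate per-response severity scores into summary metrics.
    scores = [judge(product_llm(p), rubric) for p in prompts]
    return {
        "mean_severity": sum(scores) / len(scores),
        "max_severity": max(scores),
    }

report = measure(["Summarize this news article."],
                 "Rate severity of harmful content from 0 (none) to 3 (severe).")
```

With the stubbed judge, the single benign prompt yields `mean_severity` of 0; swapping in real model calls and a domain-specific rubric is where the sociotechnical expertise the abstract mentions would enter.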
Keywords
responsible ai harms,generative ai applications,automated