Evaluating Model Performance Through a User-Centric Explainable Framework for Probabilistic Load Forecasting Models

2024 Third International Conference on Power, Control and Computing Technologies (ICPC2T), 2024

Abstract
Load forecasting models ensure efficient, secure, and stable operation of the modern power system. Probabilistic forecasting accounts for uncertainties associated with missing features that are often overlooked by deterministic approaches. However, machine learning-based probabilistic models are complex and difficult to interpret. This paper proposes a user-centric explainable AI framework that presents global and local interpretations aligned with the expertise and explanation needs of the targeted user. The overall influence of temporal and spatial exogenous features at the model development stage is evaluated using the Permutation Feature Importance technique. Such an explanation provides a holistic picture of the knowledge gained by the Gradient Boosting Regressor-based probabilistic load forecasting model. Furthermore, the proposed framework suggests the implementation of SHapley Additive exPlanations (SHAP) at the post-deployment stage for individual forecast instances. Local explanations provided by SHAP are used to distinguish between interval forecasts with higher and lower forecast accuracy. Such distinction is applied to both the lower and upper bounds of the forecast interval. This is specifically useful for non-AI-expert end-users who need load forecasts for strategizing their daily operations. This work is validated on the Kaggle dataset of the national load demand of Panama, supported with several other exogenous features such as weather-related quantities, holidays, and date-time details. Results show the efficacy of the proposed framework and its ability to provide user-friendly interpretations aligned with users' explanation goals.
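The global-explanation step described above can be sketched in scikit-learn: one quantile Gradient Boosting Regressor per interval bound, with Permutation Feature Importance computed on a held-out set. This is a minimal illustration, not the paper's implementation; the synthetic data and feature names (temperature, hour, holiday) are placeholders for the Panama load dataset and its exogenous features.

```python
# Hedged sketch: quantile Gradient Boosting + Permutation Feature Importance.
# Synthetic stand-in data; the paper uses the Panama national load dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical exogenous features: temperature, hour of day, holiday flag.
X = np.column_stack([
    rng.normal(25, 5, n),      # temperature (deg C)
    rng.integers(0, 24, n),    # hour of day
    rng.integers(0, 2, n),     # holiday indicator
])
# Load depends strongly on temperature, weakly on time of day, plus noise.
y = 800 + 12 * X[:, 0] + 30 * np.sin(2 * np.pi * X[:, 1] / 24) \
    + rng.normal(0, 20, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One model per interval bound: 10th and 90th percentiles via pinball loss.
bounds = {}
for q in (0.1, 0.9):
    m = GradientBoostingRegressor(loss="quantile", alpha=q, random_state=0)
    m.fit(X_tr, y_tr)
    bounds[q] = m

# Global explanation: permutation importance for the lower-bound model.
pfi = permutation_importance(bounds[0.1], X_te, y_te,
                             n_repeats=5, random_state=0)
names = ["temperature", "hour", "holiday"]
print(dict(zip(names, pfi.importances_mean.round(3))))
```

Temperature should dominate the importance ranking here by construction; on real data the ranking is what the framework presents to the model developer. The post-deployment local explanations would instead apply SHAP to individual forecast instances of each bound model.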
Keywords
Explainable AI,load forecasting,power system,Gradient Boost Regressor,XAI,SHAP,PFI