A comparative analysis of Convolution Neural Network models on Amazon Cloud Service

AIIPCC 2022: The Third International Conference on Artificial Intelligence, Information Processing and Cloud Computing (2022)

Abstract
Deep Learning (DL) is increasingly used in Cloud Computing services, where almost unlimited computing resources are available to accelerate the training, testing, and deployment of models. However, developers may face challenges when using Cloud services, for instance the variation of application requirements over time in terms of computation, memory, and energy consumption. This variation may require migration to higher-performance resources. Indeed, Cloud services dedicated to DL applications, such as Graphics Processing Unit (GPU) resources, are quite expensive, especially for small and medium companies. In this context, it is beneficial for Cloud users to understand the needs of DL applications in order to guarantee the well-functioning of their applications over time and to reduce the cost of the allocated resources. Given the importance and complexity of Convolutional Neural Networks (CNN) in DL, this paper presents a comparative analysis of different CNN models (ResNet50, VGG16, VGG19, Inception-v3, Xception) to determine when migrating to more powerful GPUs is advantageous in terms of execution time and cost. The analysis was conducted by extracting GPU usage, execution time, and the associated cost of training the models, using Amazon Elastic Compute Cloud (EC2) instances dedicated to DL and Amazon CloudWatch for monitoring model metrics. Experimental results showed that migrating models that use more than 90% of GPU capacity to more powerful infrastructure is recommended, compared to those using less than 90%.
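The abstract's migration rule (move a model to a more powerful GPU instance when it uses more than 90% of GPU capacity) can be sketched as a small cost/time comparison. This is a minimal illustration, not the paper's implementation: the instance names, hourly prices, and speedup factor below are assumed for the example, not taken from the paper.

```python
# Sketch of the migration decision described in the abstract: models using
# more than 90% of GPU capacity are candidates for migration to a more
# powerful (and more expensive) EC2 instance.
# NOTE: prices and speedup values below are illustrative assumptions only.

GPU_UTILIZATION_THRESHOLD = 90.0  # percent, per the paper's finding


def should_migrate(gpu_utilization: float) -> bool:
    """True if the model saturates the current GPU enough that a more
    powerful instance is likely worthwhile."""
    return gpu_utilization > GPU_UTILIZATION_THRESHOLD


def migration_cost_delta(train_hours: float, speedup: float,
                         price_current: float, price_target: float) -> float:
    """Cost difference (target - current) for one training run, assuming the
    training time shrinks by `speedup` on the target instance."""
    cost_current = train_hours * price_current
    cost_target = (train_hours / speedup) * price_target
    return cost_target - cost_current


# Example: a VGG16 run at 95% GPU utilization, 10 h at $3.06/h, assumed
# 3x faster on a $12.24/h instance (hypothetical on-demand prices).
if should_migrate(95.0):
    delta = migration_cost_delta(10.0, 3.0, 3.06, 12.24)
    print(f"cost delta per run: ${delta:+.2f}")
```

Under these assumed numbers the migration trades a higher per-run cost for a 3x shorter training time; whether that trade-off is advantageous is exactly the question the paper's analysis addresses.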