Scenario-Based AI Benchmark Evaluation of Distributed Cloud/Edge Computing Systems

IEEE Transactions on Computers (2023)

Abstract
Distributed cloud/edge (DCE) platforms have become popular in recent years. This paper proposes a new AI benchmark suite for assessing the performance of DCE platforms in machine learning (ML) and cognitive science applications. The benchmark suite is custom-designed to satisfy scenario-based performance requirements, namely model training time, inference speed, model accuracy, job response time, quality of service, and system reliability. These metrics are substantiated by intensive experiments with real-life AI workloads. Our work is specially tailored to support massive AI multitasking across distributed resources in a networked environment. Our benchmark experiments were conducted on an AI-oriented AIRS cloud built at the Chinese University of Hong Kong, Shenzhen. We tested a large number of ML/DL programs to narrow the suite down to ten representative AI kernel codes. Our benchmark results reveal the advantages of using DCE systems cost-effectively in smart cities, healthcare, community surveillance, and transportation services. Our technical contributions lie in the AIRS cloud architecture, benchmark design, testing, and distributed AI computing requirements. Our work will benefit computer system designers and AI application developers on cloud, edge, and mobile devices supported by 5G mobile networks and AIoT resources.
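As a rough illustration only (not from the paper, whose actual benchmark suite and AIRS setup are not reproduced here), the sketch below shows how three of the named scenario-based metrics, model training time, inference speed, and model accuracy, might be measured on a toy ML kernel. All data, function names, and parameter values are hypothetical.

```python
# Hypothetical sketch: timing training and inference of a toy ML kernel
# to produce the kind of scenario-based metrics the abstract describes.
import time
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 32))           # synthetic feature matrix
w_true = rng.normal(size=32)
y = (X @ w_true > 0).astype(float)          # synthetic binary labels

def train_logreg(X, y, lr=0.1, epochs=50):
    """Plain-NumPy logistic regression via batch gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))  # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)    # gradient step
    return w

# Metric 1: model training time (wall clock).
t0 = time.perf_counter()
w = train_logreg(X, y)
train_time = time.perf_counter() - t0

# Metric 2: inference speed (samples per second).
t0 = time.perf_counter()
preds = (X @ w > 0).astype(float)
infer_rate = len(X) / (time.perf_counter() - t0)

# Metric 3: model accuracy (here, on the same synthetic data).
accuracy = (preds == y).mean()

print(f"training time : {train_time:.3f} s")
print(f"inference rate: {infer_rate:,.0f} samples/s")
print(f"accuracy      : {accuracy:.3f}")
```

In a DCE setting of the kind the paper targets, such per-kernel measurements would additionally be collected across distributed cloud and edge nodes and combined with job response time, quality-of-service, and reliability statistics; that orchestration layer is beyond this toy sketch.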
Key words
Computer benchmarks, cloud/edge computing, machine learning, artificial intelligence