Latency Comparison of Cloud Datacenters and Edge Servers.

GLOBECOM 2020

Abstract
Edge computing has recently emerged as an approach to bring computing resources closer to the end user. While offline processing and aggregate data reside in the cloud, edge computing is promoted for latency-critical and bandwidth-hungry tasks. In this direction, it is crucial to quantify the expected latency reduction when edge servers are preferred over cloud locations. In this paper, we performed an extensive measurement study to assess the latency characteristics of end users with respect to edge servers and cloud data centers. We also evaluated the impact of the capacity limitations of edge servers on latency under various user workloads. We measured latency from 8,456 end users to 6,341 Akamai edge servers and 69 cloud locations. Our measurements show that while 58% of end users can reach a nearby edge server in less than 10 ms, only 29% of end users obtain a similar latency from a nearby cloud location. Additionally, we observe that the latency distribution from end users to edge servers follows a power law, which emphasizes the need for non-uniform server deployment and load balancing by an edge provider.
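As a rough illustration of the kind of analysis the abstract describes, the sketch below computes the fraction of end users whose round-trip time to the nearest edge server or cloud location falls below 10 ms, and fits a maximum-likelihood power-law exponent to the edge-latency tail. The RTT samples, the 10 ms threshold as a cutoff for the tail fit, and all names are illustrative placeholders, not the paper's data or method.

```python
# Minimal sketch (not from the paper): per-user RTTs to the nearest edge
# server and the nearest cloud location, summarized by the share of users
# under a latency threshold and a power-law tail exponent.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical RTT samples in milliseconds (stand-ins for real measurements).
edge_rtt = rng.pareto(2.5, 8456) * 4 + 1      # heavy-tailed, mostly small
cloud_rtt = rng.gamma(4.0, 8.0, 8456)         # broader, generally larger

def frac_below(rtt, threshold_ms=10.0):
    """Fraction of end users whose RTT is below the threshold."""
    return np.mean(rtt < threshold_ms)

def powerlaw_alpha(x, xmin):
    """Continuous power-law MLE: alpha = 1 + n / sum(ln(x_i / xmin))."""
    tail = x[x >= xmin]
    return 1.0 + len(tail) / np.sum(np.log(tail / xmin))

print(f"edge  < 10 ms: {frac_below(edge_rtt):.0%}")
print(f"cloud < 10 ms: {frac_below(cloud_rtt):.0%}")
print(f"edge tail exponent (xmin = 10 ms): {powerlaw_alpha(edge_rtt, 10.0):.2f}")
```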
Keywords
Edge computing, Fog computing, Cloud computing, Latency measurement