A framework for collaborative sensing and processing of mobile data streams: demo.

MobiCom '16: The 22nd Annual International Conference on Mobile Computing and Networking, New York City, New York, October 2016.

Abstract
Emerging mobile applications involve continuous sensing and complex computations on sensed data streams. Examples include cognitive apps (e.g., speech recognition, natural language translation, and face, object, or gesture detection and recognition) and anticipatory apps that proactively track users and provide services when needed. Unfortunately, today's mobile devices cannot keep pace with such apps, despite advances in hardware capability. Traditional approaches address this problem through computation offloading. One approach offloads by sending sensed streams to remote cloud servers via cellular networks or to cloudlets via Wi-Fi, where a clone of the app runs [2, 3, 4]. However, cloudlets may not be widely deployed, and access to cloud infrastructure may incur high network delays and can be intermittent due to mobility. Moreover, users may hesitate to upload private sensing data to the cloud or cloudlet. A second approach offloads to accelerators by rewriting code to use the DSP or GPU within the mobile device. However, using accelerators requires substantial programming effort and yields varied benefits for diverse code on heterogeneous devices.