Improving robustness and efficiency of edge computing models

Wireless Networks (2022)

Abstract
Existing designs of edge computing models mostly target improving accuracy. Yet, besides accuracy, robustness and inference efficiency are also crucial performance attributes. To achieve satisfactory performance in edge-cloud computing frameworks, each distributed model must be both robust to perturbations and feasible for uploading information over wireless links with limited bandwidth. In other words, feature encoders should be more robust and have faster inference times while maintaining competitive accuracy. Therefore, to design accurate, robust, and efficient models for bandwidth-limited edge computing, we propose a systematic approach that autonomously optimizes the parameters and architectures of arbitrary deep neural networks. The approach employs a genetic-algorithm-based bi-generative adversarial network to autonomously evolve and select the number of filters (for convolutional layers) and the number of neurons (for fully connected layers) from a wide range of values. To demonstrate its performance, we evaluate our approach on the ImageNet and ModelNet databases and compare it with a state-of-the-art 3D volumetric network and two purely GA-based methods. Our results show that the proposed method significantly improves performance by simultaneously optimizing multiple neural network parameters, regardless of the depth of the network.
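The abstract describes the search procedure only at a high level. The following is a minimal sketch of a genetic algorithm searching over per-layer filter and neuron counts, assuming a plain GA with a placeholder fitness function; the paper's bi-generative adversarial component and its actual accuracy/robustness/latency evaluation are not reproduced here, and all layer counts and choice ranges below are hypothetical.

```python
# Hypothetical GA sketch (not the authors' implementation): evolves the number of
# filters per convolutional layer and neurons per fully connected layer.
import random

FILTER_CHOICES = [16, 32, 64, 128, 256]      # candidate filter counts per conv layer
NEURON_CHOICES = [64, 128, 256, 512, 1024]   # candidate neuron counts per FC layer
NUM_CONV, NUM_FC = 3, 2                      # assumed network depth for illustration


def random_genome():
    """A genome encodes one architecture: conv filter counts followed by FC neuron counts."""
    conv = [random.choice(FILTER_CHOICES) for _ in range(NUM_CONV)]
    fc = [random.choice(NEURON_CHOICES) for _ in range(NUM_FC)]
    return conv + fc


def fitness(genome):
    """Placeholder for training/evaluating the encoded network.

    A real implementation would score the candidate on accuracy, robustness to
    perturbations, and inference time; here we only penalize very large layers
    to mimic the bandwidth/efficiency trade-off.
    """
    max_size = max(FILTER_CHOICES) * NUM_CONV + max(NEURON_CHOICES) * NUM_FC
    size_penalty = sum(genome) / max_size
    return random.random() - 0.5 * size_penalty


def crossover(a, b):
    """Single-point crossover between two parent genomes."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]


def mutate(genome, rate=0.2):
    """Resample each gene with probability `rate`, respecting its choice set."""
    out = []
    for i, gene in enumerate(genome):
        choices = FILTER_CHOICES if i < NUM_CONV else NEURON_CHOICES
        out.append(random.choice(choices) if random.random() < rate else gene)
    return out


def evolve(pop_size=20, generations=10):
    """Evolve a population of architectures and return the fittest genome."""
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]                 # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print("filter counts:", best[:NUM_CONV], "| neuron counts:", best[NUM_CONV:])
```

In this sketch the genome is simply the list of layer widths, so the same loop applies to networks of any depth by changing NUM_CONV and NUM_FC, which mirrors the paper's claim of optimizing multiple parameters regardless of network depth.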
Keywords
Neural architecture search, Robustness, Edge computing, Genetic algorithm