Optimization Algorithm to Reduce Training Time for Deep Learning Computer Vision Algorithms Using Large Image Datasets With Tiny Objects

IEEE Access (2023)

Abstract
The optimization of convolutional neural networks (CNNs) generally refers to improving the inference process, making it as fast and precise as possible. While inference time is an essential factor in using these networks in real time, training CNNs on very large datasets can be costly in time and computing power. This study proposes a technique that reduces training time by an average of 75% without altering the results of CNN training, using an algorithm that partitions the dataset and discards superfluous objects (targets). The algorithm is a tool that pre-processes the original dataset, generating a smaller, more condensed dataset for network training. Its effectiveness depends on the type of dataset used to train the CNN; it is particularly effective with sequential images (video), large images, and images with tiny targets, typically from drones or traffic surveillance cameras, though it is applicable to any other type of image that meets the requirements. The tool can be parameterized to match the characteristics of the initial dataset.
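The abstract does not include the algorithm itself, but the idea it describes, partitioning large images and discarding regions that contain no targets, can be sketched roughly as follows. Everything in this sketch (the function name, the tile size, the centre-based box assignment) is an illustrative assumption, not the paper's implementation.

```python
# Hypothetical sketch of the tile-and-filter preprocessing idea described in
# the abstract: partition each large image into fixed-size tiles and keep
# only the tiles that actually contain annotated targets. All names and
# parameters here are illustrative assumptions, not the authors' code.

from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max) in pixels


def condense_image(
    image_size: Tuple[int, int],   # (width, height) of the source image
    boxes: List[Box],              # annotated tiny-target bounding boxes
    tile: int = 640,               # tile edge length, a tunable parameter
) -> List[Tuple[Box, List[Box]]]:
    """Return (tile_region, boxes_in_tile) pairs for non-empty tiles only.

    Tiles with no targets are discarded, so the condensed dataset that the
    detector trains on is much smaller than the original imagery.
    """
    width, height = image_size
    kept = []
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            region = (tx, ty, min(tx + tile, width), min(ty + tile, height))
            # Assign each box to the tile containing its centre, and shift
            # it into tile-local coordinates.
            local = []
            for (x0, y0, x1, y1) in boxes:
                cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
                if region[0] <= cx < region[2] and region[1] <= cy < region[3]:
                    local.append((x0 - tx, y0 - ty, x1 - tx, y1 - ty))
            if local:  # discard "superfluous" empty tiles
                kept.append((region, local))
    return kept


if __name__ == "__main__":
    # A 4K frame with two tiny targets: only 2 of its 24 tiles survive.
    tiles = condense_image(
        (3840, 2160),
        [(100, 120, 130, 150), (3000, 2000, 3020, 2025)],
    )
    print(f"kept {len(tiles)} tiles")  # -> kept 2 tiles
```

For sequential (video) datasets, the same filtering would also drop near-duplicate frames, which is one plausible reason the technique is reported to work best on video and surveillance imagery.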
Key words
large image datasets, tiny objects, deep learning, computer vision, optimization