An Optimized Deep Neural Network Model for Image Classification in Resource-constrained Environments


Author(s)

Raafi Careem 1,* Md Gapar Md Johar 2

1. Department of Computer Science & Informatics, Uva Wellassa University, Badulla, Sri Lanka

2. Software Engineering and Digital Innovation Centre, Management and Science University, Shah Alam, Malaysia

* Corresponding author.

DOI: https://doi.org/10.5815/ijisa.2026.02.05

Received: 10 Oct. 2025 / Revised: 10 Nov. 2025 / Accepted: 6 Jan. 2026 / Published: 8 Apr. 2026

Index Terms

Deep Neural Network, GRMobiNet, Image Classification, Model Deployment, Optimization, Resource-constrained Environment

Abstract

Advances in deep learning have highlighted the need for models tailored for deployment in resource-constrained environments (RCEs) such as mobile devices, Internet of Things (IoT) devices, and embedded systems, where memory and processing limitations pose significant challenges. This paper introduces GRMobiNet, a novel deep neural network (DNN) model designed to address these challenges in image classification tasks by balancing computational complexity with accuracy in RCE settings. Inspired by previous state-of-the-art models, GRMobiNet targets three performance goals: reducing computational complexity to fewer than 4 million parameters, limiting memory usage to under 16 megabytes, and achieving an accuracy greater than 80%. By meeting these objectives, GRMobiNet enhances both the effectiveness and efficiency of DNN deployment in RCE settings. The model builds on MobileNet as its baseline, incorporating advanced techniques such as depthwise separable convolutions, compound scaling, global average pooling, and quantization to optimize performance. Trained on ImageNet-10, a subset of ImageNet-1K, the model underwent rigorous performance evaluation. Experimental results demonstrate that GRMobiNet meets its objectives, with a computational complexity of 3.2 million parameters, memory utilization of 12.6 megabytes, and a prediction accuracy of 92%, validating its suitability for RCEs. This research presents a scalable framework for balancing accuracy and computational efficiency, with significant implications for RCE devices. In future work, GRMobiNet will be tested on commercially available RCE mobile devices using real-world images to assess its practicality and evaluate its accuracy, confidence, and inference time for image classification in real-world scenarios.
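The abstract's efficiency claims rest on two pieces of arithmetic: depthwise separable convolutions replace the k×k×C_in×C_out weights of a standard convolution with a per-channel k×k depthwise filter followed by a 1×1 pointwise convolution, and the per-parameter storage cost determines the memory footprint. The sketch below illustrates both, assuming the standard textbook definitions (bias terms ignored); the function names and example layer sizes are illustrative, not taken from GRMobiNet's architecture.

```python
def standard_conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """One k x k depthwise filter per input channel, then a 1x1 pointwise conv."""
    return k * k * c_in + c_in * c_out

# Example layer: 3x3 kernel, 32 input channels, 64 output channels.
std = standard_conv_params(3, 32, 64)        # 18,432 weights
sep = depthwise_separable_params(3, 32, 64)  #  2,336 weights
print(f"standard: {std}, separable: {sep}, reduction: {std / sep:.1f}x")

# Back-of-the-envelope memory check for a 3.2M-parameter model:
# at 4 bytes per float32 weight this is ~12.8 MB, in the same range as the
# reported 12.6 MB; int8 quantization would cut storage roughly fourfold.
params = 3_200_000
print(f"float32: {params * 4 / 1e6:.1f} MB, int8: {params * 1 / 1e6:.1f} MB")
```

This is why sub-4M-parameter budgets pair naturally with quantization: the parameter count bounds the float32 footprint, and quantizing weights to 8 bits shrinks it further without changing the architecture.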

Cite This Paper

Raafi Careem, Md Gapar Md Johar, "An Optimized Deep Neural Network Model for Image Classification in Resource-constrained Environments", International Journal of Intelligent Systems and Applications (IJISA), Vol.18, No.2, pp.64-84, 2026. DOI: 10.5815/ijisa.2026.02.05
