
Enhancing the ability to tell an avocado's ripeness

Attention mechanisms such as the Convolutional Block Attention Module (CBAM) can help emphasize and refine the most relevant feature maps, such as those capturing color, texture, spots, and wrinkle variations, for avocado ripeness classification.

However, CBAM lacks global context awareness, which may prevent it from capturing long-range dependencies or global patterns, such as relationships between distant regions of an image. Furthermore, more complex neural networks can improve model performance, but at the cost of more layers and trainable parameters, which may not be suitable for resource-constrained devices.

This paper presents the Hybrid Attention Convolutional Neural Network (HACNN) model for classifying avocado ripeness on resource-constrained devices. It aims to perform local feature enhancement and capture global relationships, leading to more comprehensive feature extraction, by combining attention modules with Convolutional Neural Network models. The proposed HACNN combines transfer learning in the Convolutional Neural Network with hybrid attention mechanisms, including Spatial, Channel, and Self-Attention Modules, to effectively capture the intricate features of avocado ripeness from fourteen thousand images.

Extensive experiments demonstrate that the transfer learning HACNN with the EfficientNet-B3 backbone significantly outperforms conventional models, achieving accuracies of 96.18%, 92.64%, and 91.25% on the training, validation, and test sets, respectively. This model consumed 59.81 MB of memory with an average inference time of 280.67 ms using TensorFlow Lite on a smartphone. Although the transfer learning HACNN with the ShuffleNetV1 (1.0x) backbone consumes the fewest resources, its test accuracy is only 82.89%, which is insufficient for practical applications. Thus, the transfer learning HACNN with the MobileNetV3 Large backbone is an attractive option for resource-constrained devices: it reaches a test accuracy of 91.04% with average memory usage of 26.52 MB and an average inference time of 86.94 ms on the smartphone.
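The paper does not publish reference code, but the idea of chaining local attention (channel and spatial gating, as in CBAM) with a global self-attention stage over a CNN feature map can be illustrated with a minimal NumPy sketch. All function names, shapes, and the single-head, unprojected self-attention here are simplifying assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def channel_attention(x):
    # x: (H, W, C) feature map. Squeeze spatial dims and gate
    # each channel (squeeze-and-excitation style reweighting).
    w = x.mean(axis=(0, 1))              # global average pool -> (C,)
    w = 1.0 / (1.0 + np.exp(-w))         # sigmoid gate per channel
    return x * w

def spatial_attention(x):
    # Average over channels, then gate each spatial location so
    # informative regions (spots, wrinkles) are emphasized.
    m = x.mean(axis=-1, keepdims=True)   # (H, W, 1)
    m = 1.0 / (1.0 + np.exp(-m))
    return x * m

def self_attention(x):
    # Flatten the spatial grid into tokens and apply scaled
    # dot-product attention so distant regions can interact,
    # supplying the global context CBAM alone lacks.
    h, w, c = x.shape
    t = x.reshape(h * w, c)              # (N, C) tokens
    scores = t @ t.T / np.sqrt(c)        # (N, N) pairwise similarity
    scores -= scores.max(axis=-1, keepdims=True)  # numeric stability
    a = np.exp(scores)
    a /= a.sum(axis=-1, keepdims=True)   # softmax over tokens
    return (a @ t).reshape(h, w, c)

def hybrid_attention(x):
    # Local refinement first, then global mixing.
    return self_attention(spatial_attention(channel_attention(x)))

feat = np.random.rand(4, 4, 8)           # toy backbone feature map
out = hybrid_attention(feat)
print(out.shape)                          # (4, 4, 8): shape preserved
```

In a real model these stages would sit on top of a pretrained backbone's feature maps (e.g. EfficientNet-B3 or MobileNetV3 Large) and use learned projection weights rather than the parameter-free gates shown here; the sketch only shows how the three attention types compose while preserving the feature-map shape.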

These findings indicate that the proposed method enhances avocado ripeness classification accuracy and ensures feasibility for practical implementation in low-resource environments.

Nuanmeesri, S. (2025). Enhanced hybrid attention deep learning for avocado ripeness classification on resource-constrained devices. Scientific Reports, 15(1), 1-15. https://doi.org/10.1038/s41598-025-87173-7

Source: Nature
