Structural defect classification with pre-trained Inception network: insights from sample knowledge and model transfer learning through benchmark datasets

Authors

  • J. Prawin
  • Ahmed Umar Murad

DOI:

https://doi.org/10.5281/zenodo.17405831

Keywords:

Inception network; structural defects; data imbalance; classification

Abstract

Deep Convolutional Neural Networks (CNNs) have been extensively employed to automatically classify defects in civil infrastructure from images. Pre-trained CNN models are advantageous where training data is limited, since they can be adapted to specific tasks or datasets by fine-tuning. This paper evaluates the effectiveness of three pre-trained CNN models, InceptionResNetV2, ResNet50, and MobileNet, in detecting and classifying structural defects in images of metallic and non-metallic structural components using four benchmark datasets: NEU-DET (6 classes), GC10-DET (10 classes), MCDS (6 classes), and BCL (3 classes). NEU-DET and GC10-DET cover metallic surface defects, while MCDS and BCL cover non-metallic surface defects (concrete and masonry). Models are trained, validated, and tested individually on each dataset, with data imbalance addressed during training. The performance evaluation involves varying the number of final layers adjusted during fine-tuning (model transfer) and testing across different datasets (sample knowledge), providing insights into adaptability and generalization. The F1-score reaches 99% for NEU-DET, 78% for GC10-DET, 85% for MCDS, and 99% for BCL when models are tested on the dataset they were trained on. Across datasets with common or similar classes, F1-scores remain above 90% for NEU-DET and GC10-DET but drop to 53% and 76% for MCDS and BCL, respectively, when models are trained on a different dataset. Conversely, F1-scores exceed 90% for all datasets when the training data pools samples from all datasets with similar classes. The study concludes that transfer learning across multiple datasets with similar classes enhances performance, that fine-tuning only the last few layers is sufficient for accuracy within the same dataset, and that fine-tuning slightly more layers improves classification across different datasets.
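
To make the fine-tuning setup concrete, the sketch below shows one common way to adapt a pre-trained Keras model along the lines the abstract describes: freezing all but the last few layers of InceptionResNetV2 and applying class weights against data imbalance. This is a minimal illustration, not the authors' code; the image size, number of unfrozen layers, and class counts are hypothetical placeholders.

```python
# Minimal sketch: fine-tuning a pre-trained CNN for defect classification,
# freezing all but the last few layers (model transfer) and weighting
# classes to counter data imbalance. All numeric values are assumptions.
import tensorflow as tf
from tensorflow.keras.applications import InceptionResNetV2

NUM_CLASSES = 6          # e.g. NEU-DET has 6 defect classes
IMG_SIZE = (299, 299)    # InceptionResNetV2's default input size
UNFROZEN_LAYERS = 10     # hypothetical "last few layers" to fine-tune

# Load ImageNet weights without the original classification head.
base = InceptionResNetV2(include_top=False, weights="imagenet",
                         input_shape=IMG_SIZE + (3,), pooling="avg")

# Freeze everything except the last few layers.
for layer in base.layers[:-UNFROZEN_LAYERS]:
    layer.trainable = False

# Attach a new softmax head for the defect classes.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Class weights inversely proportional to class frequency mitigate
# imbalance; the per-class image counts here are placeholders.
counts = {0: 300, 1: 300, 2: 120, 3: 300, 4: 80, 5: 300}
total = sum(counts.values())
class_weight = {c: total / (NUM_CLASSES * n) for c, n in counts.items()}

# train_ds / val_ds would come from e.g. image_dataset_from_directory:
# model.fit(train_ds, validation_data=val_ds,
#           epochs=20, class_weight=class_weight)
```

Cross-dataset evaluation in this framing amounts to calling the fitted model's evaluation on a test split drawn from a different dataset with shared or similar classes, and computing the F1-score over its predictions.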

Published

2025-04-01

Issue

Section

Articles