DETECTION AND CLASSIFICATION OF SUBMICRON SURFACE DEFECTS BASED ON THEIR INTERFERENCE IMAGES USING DEEP LEARNING
DOI: https://doi.org/10.20998/2413-3000.2024.9.11
Keywords: submicron defects, interferometric images, deep learning, neural network, MobileNetV2, classification
Abstract
An automated method for detecting and classifying submicron surface defects on mirrors used in high-precision optical systems has been developed, utilizing interferometric image analysis and deep learning. This approach replaces manual inspection with a neural network, delivering faster and more objective defect diagnostics, eliminating human bias, and accelerating quality control in serial production of optical components. A synthetic dataset of interferometric images was generated to train the model, simulating scratch-type defects on mirror surfaces through specialized software based on the Linnik interferometer model. The dataset encompasses three surface classes: flat surfaces, single scratches, and multiple scratches. The neural network is built upon MobileNetV2, pre-trained on ImageNet, with fine-tuning of its final blocks to adapt it to the specifics of the task. The architecture incorporates GlobalAveragePooling2D for feature compression, Dense layers with ReLU activation and BatchNormalization, Dropout to mitigate overfitting, and a Softmax output layer for classifying the three categories. Data augmentation and soft voting were employed to enhance the model's generalization ability. Classification accuracy achieves 96% on the synthetic validation set and 82.7% on real images acquired from a Linnik interferometer. The highest accuracy is observed for flat surfaces, while the lowest occurs for multiple scratches, highlighting challenges posed by real-world conditions such as noise and artifacts. The method proves its practical value for automated diagnostics, with future enhancements tied to improving synthetic data realism, potentially by incorporating modeled noise, and extending the model's adaptability to additional defect types such as indentations or protrusions, thereby broadening its applicability in optical system manufacturing.
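The paper does not include code; the soft-voting step described in the abstract (averaging the Softmax outputs that the MobileNetV2-based classifier produces for several augmented views of the same interferogram, then taking the most probable class) can be sketched as follows. The function name, the number of views, the probability values, and the class ordering (flat surface, single scratch, multiple scratches) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def soft_vote(prob_stack):
    """Soft voting over augmented views of one interferogram.

    prob_stack: array of shape (n_views, n_classes) holding the
    Softmax outputs for several augmented versions of the same image.
    Returns the index of the most probable class and the averaged
    probability vector.
    """
    mean_probs = prob_stack.mean(axis=0)   # average per-class probabilities
    return int(np.argmax(mean_probs)), mean_probs

# Example: three augmented views, three classes
# (0 = flat surface, 1 = single scratch, 2 = multiple scratches).
probs = np.array([
    [0.10, 0.70, 0.20],
    [0.05, 0.55, 0.40],
    [0.15, 0.60, 0.25],
])
label, mean_probs = soft_vote(probs)
# label == 1, i.e. the ensemble votes "single scratch"
```

Because the averaged vector is itself a valid probability distribution, soft voting tends to smooth out view-specific errors, which is consistent with the abstract's use of it to improve generalization on noisy real interferograms.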
References
Goodfellow I., Bengio Y., Courville A. Deep learning. Cambridge: MIT Press, 2017. 800 p.
Ronneberger O., Fischer P., Brox T. U-Net: Convolutional networks for biomedical image segmentation // Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015: 18th International Conference, Munich, Germany, October 5–9, 2015, Proceedings, Part III. 2015. P. 234–241. https://doi.org/10.1007/978-3-319-24574-4_28
Howard A. G., Zhu M., Chen B., Kalenichenko D., Wang W., Weyand T., Andreetto M., Adam H. MobileNets: Efficient convolutional neural networks for mobile vision applications // arXiv preprint arXiv:1704.04861. 2017. 9 p. https://doi.org/10.48550/arXiv.1704.04861
He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition // 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, June 27–30, 2016. P. 770–778. https://doi.org/10.1109/CVPR.2016.90
Krizhevsky A., Sutskever I., Hinton G. E. ImageNet classification with deep convolutional neural networks // Communications of the ACM. 2017. Vol. 60, No. 6. P. 84–90. https://doi.org/10.1145/3065386
Dosovitskiy A., Beyer L., Kolesnikov A. et al. An image is worth 16x16 words: Transformers for image recognition at scale // International Conference on Learning Representations (ICLR) 2021. 2021. 22 p. https://doi.org/10.48550/arXiv.2010.11929
Szegedy C., Vanhoucke V., Ioffe S. et al. Rethinking the inception architecture for computer vision // 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, June 27–30, 2016. P. 2818–2826. https://doi.org/10.1109/CVPR.2016.308
Simonyan K., Zisserman A. Very deep convolutional networks for large-scale image recognition // International Conference on Learning Representations (ICLR) 2015. 2015. 14 p. https://doi.org/10.48550/arXiv.1409.1556
Chen L.-C., Papandreou G., Kokkinos I., Murphy K., Yuille A. L. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs // IEEE Transactions on Pattern Analysis and Machine Intelligence. 2018. Vol. 40, No. 4. P. 834–848. https://doi.org/10.1109/TPAMI.2017.2699184
Wang Z., Bovik A. C., Sheikh H. R., Simoncelli E. P. Image quality assessment: From error visibility to structural similarity // IEEE Transactions on Image Processing. 2004. Vol. 13, No. 4. P. 600–612. https://doi.org/10.1109/TIP.2003.819861
Daghigh V., Daghigh H., Lacy T. E., Naraghi M. Review of machine learning applications for defect detection in composite materials // Machine Learning with Applications. 2024. Vol. 18. P. 100600 (18 pages). https://doi.org/10.1016/j.mlwa.2024.100600
Trentsios P., Wolf M., Gerhard D. Overcoming the sim-to-real gap in autonomous robots // Procedia CIRP. 2022. Vol. 109. P. 287–292. https://doi.org/10.1016/j.procir.2022.05.251
Goodfellow I., Pouget-Abadie J., Mirza M., Xu B., Warde-Farley D., Ozair S., Courville A., Bengio Y. Generative adversarial nets // Advances in Neural Information Processing Systems 27 (NIPS 2014). 2014. P. 2672–2680. https://doi.org/10.48550/arXiv.1406.2661
Russakovsky O., Deng J., Su H. et al. ImageNet large scale visual recognition challenge // International Journal of Computer Vision. 2015. Vol. 115, No. 3. P. 211–252. https://doi.org/10.1007/s11263-015-0816-y
Kingma D. P., Ba J. Adam: A method for stochastic optimization // International Conference on Learning Representations (ICLR) 2015. 2015. 15 p. https://doi.org/10.48550/arXiv.1412.6980
License
Copyright (c) 2024 Oleksandr Kravchenko

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Our journal abides by the Creative Commons copyright rights and permissions for open access journals.
Authors who publish with this journal agree to the following terms:
Authors hold the copyright without restrictions and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0) that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
Authors are able to enter into separate, additional contractual arrangements for the non-commercial and non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
Authors are permitted and encouraged to post their published work online (e.g., in institutional repositories or on their website) as it can lead to productive exchanges, as well as earlier and greater citation of published work.