Drilling rock image segmentation and analysis using segment anything model

Liqun Shan, Yanchang Liu, Ke Du, Shovon Paul, Xingli Zhang, Xiali Hei


Abstract


Image processing and analysis techniques are widely used in fields such as geology, underwater engineering, environmental conservation, marine resource exploration, and soil and geological assessment, particularly for examining drilling rock samples. However, processing images of rocks drilled underwater is challenging because of the intricate nature of aquatic settings: light reflection and refraction, irregular rock sizes, and overlapping particles introduce noise, obscure textures, and distort colors. Although improved versions of the mask region-based convolutional neural network have shown promise for fast and accurate analysis of large sets of underwater rock images, these methods remain sensitive to inconsistencies in rock appearance, texture, and lighting. To address these issues, a comprehensive approach based on the segment anything model is introduced. Our methodology begins with Gaussian filtering to reduce noise and smooth the images, followed by underwater image enhancement. Histogram equalization is then applied to improve contrast, and the segment anything model is employed to extract detailed information on rock size and shape. Equivalent area circle diameter and axial ratio are used to generate particle size alignment maps and to characterize particle shape. Our approach achieves an average precision of 80.6%, outperforming other strategies and yielding more precise rock information analysis.
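
To make the described workflow concrete, the minimal sketch below illustrates one way to chain the stages in Python, assuming OpenCV, NumPy, and the publicly released segment_anything package. The checkpoint filename, Gaussian kernel size, and CLAHE settings are illustrative assumptions rather than the authors' exact configuration, and CLAHE stands in here for the paper's dedicated underwater enhancement and histogram equalization stages.

import math

import cv2
import numpy as np
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry


def preprocess(bgr: np.ndarray) -> np.ndarray:
    # Gaussian filtering suppresses sensor and scattering noise.
    smoothed = cv2.GaussianBlur(bgr, (5, 5), 0)
    # Equalize contrast on the luminance channel only (placeholder for the
    # paper's underwater enhancement + histogram equalization steps).
    lab = cv2.cvtColor(smoothed, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)


def segment_rocks(bgr: np.ndarray, checkpoint: str = "sam_vit_h.pth"):
    # Illustrative checkpoint path; SAM's automatic mask generator returns
    # one mask dictionary per detected object.
    sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
    generator = SamAutomaticMaskGenerator(sam)
    return generator.generate(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))


def particle_metrics(mask: np.ndarray):
    # Equivalent-area circle diameter: diameter of a circle whose area
    # equals the mask area (in pixels).
    area_px = float(mask.sum())
    eq_diameter = math.sqrt(4.0 * area_px / math.pi)
    # Axial ratio: minor over major side of the oriented bounding box.
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    (_, _), (w, h), _ = cv2.minAreaRect(largest)
    axial_ratio = min(w, h) / max(w, h) if max(w, h) > 0 else 0.0
    return eq_diameter, axial_ratio


if __name__ == "__main__":
    image = preprocess(cv2.imread("drilling_rock_sample.jpg"))
    for rock in segment_rocks(image):
        d, r = particle_metrics(rock["segmentation"])
        print(f"equivalent diameter: {d:.1f} px, axial ratio: {r:.2f}")

Per-particle diameters and axial ratios collected in this way can then be binned to produce the particle size and shape summaries described above.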

Document Type: Original article

Cited as: Shan, L., Liu, Y., Du, K., Paul, S., Zhang, X., Hei, X. Drilling rock image segmentation and analysis using segment anything model. Advances in Geo-Energy Research, 2024, 12(2): 89-101. https://doi.org/10.46690/ager.2024.05.02


Keywords


Segmentation, segment anything model, underwater rock image, granular analysis


DOI: https://doi.org/10.46690/ager.2024.05.02



Copyright (c) 2024 The Author(s)

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
