Scientific Journal

Applied Aspects of Information Technology


Nowadays, means of preventive management are actively developing in various spheres of human life. The task of automated screening is to detect hidden problems at an early stage, without human intervention, while the cost of responding to them is still low. Visual inspection is often used to perform screening, and deep artificial neural networks are especially popular in image processing. One of the main problems when working with them is the need for a large amount of well-labeled training data. In automated screening systems, available neural network approaches have limited prediction reliability due to the lack of accurately labeled training data: obtaining high-quality labels from professionals is very expensive, and sometimes impossible in principle. Therefore, there is a contradiction between the growing requirements for the precision of neural network predictions without increased training time on the one hand, and the need to reduce the cost of labeling training data on the other. In this paper, we propose a parametric model of a segmentation dataset, which can be used to generate training data for model selection and benchmarking, and a multi-task learning method for training and inference of deep neural networks for semantic segmentation. Based on the proposed method, we develop a semi-supervised approach to segmentation of salient regions for a classification task. The main advantage of the proposed method is that it uses semantically similar, more general tasks that have better labeling than the original one, which allows users to reduce the cost of the labeling process. We propose to use classification as a task more general than semantic segmentation: whereas semantic segmentation aims to classify each pixel of the input image, classification assigns a single class to the image as a whole, i.e., to all of its pixels at once.
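The sense in which classification generalizes segmentation can be sketched in a few lines: given a pixel-wise segmentation mask, the image-level classification label is recoverable as the set of classes present in the mask, while the reverse mapping loses all spatial detail. This is a hypothetical minimal illustration of the relationship, not the authors' implementation:

```python
def mask_to_image_label(mask):
    """Derive an image-level label from a pixel-wise segmentation mask.

    mask: 2D list of integer class ids, where 0 denotes background.
    Returns the set of foreground classes present anywhere in the image,
    illustrating that classification is a coarser (more general) labeling
    than segmentation: many different masks map to the same image label.
    """
    return {c for row in mask for c in row if c != 0}


# A 3x3 mask containing a region of class 2:
mask = [[0, 0, 2],
        [0, 2, 2],
        [0, 0, 0]]
```

Because cheap image-level labels constrain which classes may appear in the mask, they can supervise a segmentation network even when pixel-accurate masks are scarce.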
We evaluate our methods using the proposed dataset model and observe a Dice score improvement of seventeen percent. Additionally, we evaluate the robustness of the proposed method to different amounts of label noise and observe a consistent improvement over the baseline version.
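The Dice score reported above is the standard overlap metric for segmentation. A minimal pure-Python sketch, assuming binary masks flattened to lists of 0/1 values (the smoothing term `eps` is an illustrative choice to avoid division by zero on empty masks):

```python
def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks.

    pred, target: flat sequences of 0/1 values of equal length.
    Returns 2*|pred ∩ target| / (|pred| + |target|), in [0, 1],
    where 1 means perfect overlap.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    return (2.0 * intersection + eps) / (sum(pred) + sum(target) + eps)
```

For example, a prediction that covers one of two foreground pixels and adds one false positive scores 2/3, while a perfect prediction scores 1.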


