Masked Face Recognition Using FaceNet Algorithm

Abstract

During the past three years, the entire world has been exposed to a new virus, COVID-19. Fingerprint systems have become difficult to rely on, because contact with shared objects and devices by more than one person is an effective way to transmit the virus. Accordingly, many institutions and organizations have turned to face prints as an alternative to handprints to identify people and to record employee attendance. Because the virus infects the respiratory system, people are required to wear face masks to avoid transmission when an infected person coughs. Since masks greatly alter facial features, a technique is needed that can identify masked faces, and this is the subject of our study. Because the face loses many of its features while a mask is worn, an algorithm is proposed that recognizes the face by training a convolutional neural network (CNN) on the main facial features that remain visible, including 3D imaging, since 3D capture loses less information than a 2D image. The triplet loss was found not to suit our datasets, so the large margin cosine loss was adopted as the training loss function, owing to its ability to map all feature samples into a feature space with larger inter-class distances and smaller intra-class distances. To obtain a model that concentrates on the areas not covered by the mask, an attention module was designed that combines the Inception-ResNet unit with the Convolutional Block Attention Module, so that any portion of the face that is not covered gains weight, increasing the importance of those areas in recognizing masked faces. Experiments conducted on several masked-face datasets show that the algorithm significantly increases the accuracy of masked face recognition and can accurately recognize faces wearing masks.
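To make the training objective concrete, the sketch below implements a CosFace-style large margin cosine loss in PyTorch. It is a minimal illustration of the general technique rather than this paper's exact implementation; the scale s, the margin m, and all identifier names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LargeMarginCosineLoss(nn.Module):
    """Sketch of a CosFace-style large margin cosine loss.

    The values s=30.0 and m=0.35 are common defaults from the CosFace
    literature, not parameters reported in this paper.
    """
    def __init__(self, embedding_dim: int, num_classes: int,
                 s: float = 30.0, m: float = 0.35):
        super().__init__()
        self.s, self.m = s, m
        # One weight vector per identity, compared to embeddings by cosine.
        self.weight = nn.Parameter(torch.randn(num_classes, embedding_dim))

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor):
        # Cosine similarity between L2-normalised embeddings and class weights.
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        # Subtract the margin m from the target-class cosine only.
        one_hot = F.one_hot(labels, num_classes=cosine.size(1)).float()
        logits = self.s * (cosine - one_hot * self.m)
        return F.cross_entropy(logits, labels)
```

Subtracting the margin m only from the target-class cosine tightens each identity's cluster while pushing different identities apart, which is the larger inter-class and smaller intra-class behavior described above.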

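The re-weighting of unoccluded facial regions can likewise be pictured with a CBAM-style spatial attention block. This is a sketch of the general mechanism from the CBAM literature, not the attention module actually used in this work, whose internals the abstract does not detail.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Sketch of CBAM-style spatial attention over a CNN feature map."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pool across channels, then learn a per-pixel weight map in [0, 1].
        avg_pool = x.mean(dim=1, keepdim=True)
        max_pool, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        # Regions not occluded by the mask can receive higher weights here,
        # increasing their contribution to the final face embedding.
        return x * attn
```
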
Country: Iraq

1 Yahya Abdulsattar Mohammed

  1. Computer Engineering Department, University of Mosul, Mosul, Iraq

IRJIET, Volume 8, Issue 3, March 2024, pp. 19-27

DOI: https://doi.org/10.47001/IRJIET/2024.803003
