A deep learning-based face recognition attendance system
Department of Computer Science, Faculty of Physical Sciences, Akwa Ibom State University, Ikot Akpaden, Mkpat Enin.
Research Article
Global Journal of Engineering and Technology Advances, 2023, 17(01), 009–022.
Article DOI: 10.30574/gjeta.2023.17.1.0165
Publication history:
Received on 10 July 2023; revised on 29 September 2023; accepted on 02 October 2023
Abstract:
Deep learning-based face recognition systems have produced higher accuracy and better performance than earlier face recognition methods such as eigenfaces. Modern face recognition systems consist of several phases: face detection, face alignment, feature extraction, face representation and face recognition. This paper proposes a deep learning approach to developing a face recognition-based class attendance system. The Multi-task Cascaded Convolutional Neural Network (MTCNN) is used for the face detection and alignment phases, and the lightweight, high-performance hybrid Deepface Python framework, built on deep convolutional neural networks, is employed for the feature extraction, face representation and face recognition phases with the FaceNet-512 pretrained model. Because Convolutional Neural Networks (CNNs) perform better with larger datasets, image augmentation is applied to the original photos to enlarge the small dataset. The attendance record is stored in a MySQL database and accessed through an Application Programming Interface (API) developed with the Hypertext Preprocessor (PHP) CodeIgniter framework. Cosine similarity is used as the similarity metric to compare the facial embeddings. A sliding camera system is deployed to aid full coverage of the class participants irrespective of the size of the class. The test results show that all class participants were correctly identified and captured in the generated class attendance register.
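To illustrate how the stages described above fit together, the sketch below uses the open-source deepface Python package: MTCNN for face detection and alignment, the Facenet512 pretrained model for 512-dimensional embeddings, and cosine similarity for comparing embeddings. It is a minimal sketch under stated assumptions, not the authors' implementation; the file names, the 0.70 match cut-off, and the helper functions are illustrative, and the database/API layer is omitted.

```python
from deepface import DeepFace
import numpy as np


def get_embedding(image_path: str) -> np.ndarray:
    """Detect and align a face with MTCNN, then return its FaceNet-512 embedding."""
    faces = DeepFace.represent(
        img_path=image_path,
        model_name="Facenet512",   # 512-dimensional FaceNet embedding
        detector_backend="mtcnn",  # MTCNN performs detection and alignment
    )
    # represent() returns one entry per detected face; take the first for this sketch.
    return np.asarray(faces[0]["embedding"], dtype=float)


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embeddings; values near 1 suggest the same person."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


if __name__ == "__main__":
    # Hypothetical file paths and threshold, used for illustration only.
    enrolled = get_embedding("students/jane_doe.jpg")
    captured = get_embedding("frames/classroom_frame_01.jpg")
    score = cosine_similarity(enrolled, captured)
    print(f"cosine similarity = {score:.3f}")
    print("match" if score >= 0.70 else "no match")
```

In a full attendance system, each enrolled student's embedding would be stored once, every face detected in a classroom frame would be compared against the stored embeddings, and the best match above the chosen threshold would be written to the attendance record.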
Keywords:
Deep Convolutional Neural Network; Application Programming Interface; Facial Embeddings; Image Augmentation
Copyright information:
Copyright © 2023 Author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.