The Impact of Optimization Algorithms on the Performance of Face Recognition Neural Networks
Abstract
Face recognition has attracted great interest across a range of industries due to its practical applications. It is a biometric method that identifies and verifies people by their unique biological traits in a reliable and timely manner. Although iris and fingerprint recognition technologies are more accurate, face recognition is the most common and frequently used because it is simple to deploy and requires no physical input from the user. This study compares neural networks trained with three optimizers (SGD, Adam, or L-BFGS-B), three activation functions (Sigmoid, Tanh, or ReLU), and three deep-learning feature extraction models (SqueezeNet, VGG19, or Inception). The Inception model outperforms SqueezeNet and VGG19 in terms of accuracy. Using Inception features, a neural network with four layers of forty neurons achieved 93.6% accuracy with the SGD optimizer and the ReLU activation function. We also observed that, on Inception features, the ReLU activation function achieved the best results with all three optimizers, reaching 93.6%, 89.1%, and 94% accuracy with SGD, Adam, and L-BFGS-B, respectively.
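The comparison described above can be sketched with scikit-learn's `MLPClassifier`, whose `solver` and `activation` parameters happen to cover exactly the optimizers (`sgd`, `adam`, `lbfgs`) and activations (`logistic`, `tanh`, `relu`) studied. This is an illustrative sketch, not the authors' code: the synthetic features below stand in for the SqueezeNet/VGG19/Inception face embeddings, and `hidden_layer_sizes=(40,) * 4` is an assumed reading of the "four layers and forty neurons" configuration.

```python
# Hedged sketch of the optimizer/activation grid search; the synthetic
# features are stand-ins for deep face embeddings from the paper.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for deep-feature embeddings of face images.
X, y = make_classification(n_samples=600, n_features=64, n_informative=32,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
for solver in ("sgd", "adam", "lbfgs"):
    for activation in ("logistic", "tanh", "relu"):  # Sigmoid, Tanh, ReLU
        clf = MLPClassifier(hidden_layer_sizes=(40,) * 4,  # four layers of 40 neurons
                            solver=solver, activation=activation,
                            max_iter=300, random_state=0)
        clf.fit(X_tr, y_tr)
        results[(solver, activation)] = clf.score(X_te, y_te)

for (solver, activation), acc in sorted(results.items()):
    print(f"{solver:6s} {activation:8s} accuracy={acc:.3f}")
```

On real face embeddings, the same loop would simply swap `X` for the extracted feature vectors; accuracies will of course differ from the synthetic run.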
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium provided the original work is properly cited.
DOI: http://dx.doi.org/10.55579/jaec.202264.370
Copyright (c) 2022 Journal of Advanced Engineering and Computation