
AN ENHANCED MULTI-MODAL BIOMETRIC AUTHENTICATION

ABSTRACT:

Biometric authentication is a promising approach to securing the Internet of Things (IoT). Although existing research shows that using multiple biometrics for authentication helps increase recognition accuracy, the majority of biometric approaches for IoT today continue to rely on a single modality. We propose a multimodal biometric approach for IoT based on face and voice modalities that is designed to scale to the limited resources of an IoT device. Our work builds on the foundation of Gofman et al. [7] in implementing face and voice feature level fusion on mobile devices. We used discriminant correlation analysis (DCA) to fuse features from face and voice and used the K-nearest neighbours (KNN) algorithm to classify the features. The approach was implemented on the Raspberry Pi IoT device and was evaluated on a dataset of face images and voice files acquired using a Samsung Galaxy S5 device in real-world conditions such as dark rooms and noisy settings. The results show that fusion increased recognition accuracy by 52.45% compared to using face alone and 81.62% compared to using voice alone. It took an average of 1.34 seconds to enrol a user and 0.91 seconds to perform the authentication. To further optimize execution speed and reduce power consumption, we implemented classification on a field-programmable gate array (FPGA) chip that can be easily integrated into an IoT device. Experimental results showed that the proposed FPGA-accelerated KNN could achieve 150x faster execution time and 12x lower energy consumption compared to a CPU.
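The enrolment and authentication flow described above can be illustrated with a minimal sketch, assuming the face and voice features have already been fused into fixed-length vectors (e.g., by DCA) and using scikit-learn's KNN classifier. The enroll/authenticate helper names, vector sizes, and placeholder data below are illustrative assumptions, not the exact interfaces of the implementation.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def enroll(fused_features, user_ids, k=3):
    # Enrolment: fit a KNN model on fused face+voice feature vectors (one row per sample)
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(fused_features, user_ids)
    return clf

def authenticate(clf, fused_probe, claimed_id):
    # Authentication: accept the identity claim only if the KNN vote matches it
    predicted = clf.predict(fused_probe.reshape(1, -1))[0]
    return predicted == claimed_id

# Example with random placeholder data: 10 users, 5 samples each, 64-D fused vectors
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 64))
y = np.repeat(np.arange(10), 5)
model = enroll(X, y)
print(authenticate(model, X[0], claimed_id=0))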

EXISTING SYSTEM:

We implemented our approach on a Raspberry Pi 3 Model B with a quad-core 64-bit ARM Cortex-A53 1.2 GHz processor and 1 GB of RAM. The entire system was implemented in Python. Modules utilized included: NumPy [22] for matrix computations; Scikit-learn [23] for data pre-processing, principal component analysis (PCA), and the KNN implementation; OpenCV [24] for image pre-processing and extraction of HOG features; Scikit-image [25] to extract LBP features; and LibROSA [20] for voice pre-processing and calculation of MFCC features. The DCA algorithm was implemented from scratch using NumPy.
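A minimal sketch of the feature-extraction stage with the libraries listed above is given below; the image size, LBP/MFCC parameters, and the number of PCA components are illustrative assumptions rather than the exact settings used in the implementation.

import cv2
import numpy as np
import librosa
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA

def face_features(image_path):
    # HOG (OpenCV) + uniform-LBP histogram (scikit-image) from a face image
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (64, 128))            # default OpenCV HOG window size
    hog = cv2.HOGDescriptor().compute(gray).ravel()
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog, lbp_hist])

def voice_features(wav_path, n_mfcc=13):
    # Mean MFCC vector from a voice recording (LibROSA)
    signal, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def reduce_dim(feature_matrix, n_components=50):
    # PCA (scikit-learn) to shrink each modality's features before fusion
    return PCA(n_components=n_components).fit_transform(feature_matrix)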

EXISTING SYSTEM DISADVANTAGES:

1. LOWER RECOGNITION ACCURACY

2. LOWER EFFICIENCY

PROPOSED SYSTEM:

We have proposed a multimodal biometric approach that fuses face and voice features using DCA to enable secure authentication on IoT devices. We implemented a prototype on the Raspberry Pi as well as on an FPGA chip, which can be integrated with the IoT device to further reduce computation time and power consumption. Our results show that the multimodal approach achieves a significantly lower equal error rate (EER) than the unimodal approaches and than the DCA approach of Gofman et al. [7], which inspired our work. The approach is also sufficiently fast on a CPU and becomes even faster and more power-efficient when implemented on the FPGA.
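Since DCA is not provided by standard Python libraries, it was implemented from scratch in NumPy. The sketch below follows the published DCA formulation (between-class whitening of each modality, then an SVD of the cross-covariance of the two projected sets) and fuses by concatenation; the function names, default projection rank, and numerical clipping are our illustrative assumptions, not the authors' exact implementation.

import numpy as np

def _between_class_whitener(F, labels, r):
    # Project F (n_samples x d) so that its between-class scatter becomes the identity
    A = F.T                                              # columns are samples
    classes, counts = np.unique(labels, return_counts=True)
    mu = A.mean(axis=1, keepdims=True)
    Phi = np.column_stack([np.sqrt(n) * (A[:, labels == c].mean(axis=1) - mu.ravel())
                           for c, n in zip(classes, counts)])
    evals, P = np.linalg.eigh(Phi.T @ Phi)               # small c x c eigenproblem
    order = np.argsort(evals)[::-1][:r]
    evals = np.clip(evals[order], 1e-12, None)
    W = Phi @ P[:, order] @ np.diag(evals ** -0.5)       # d x r transform
    return W.T @ A                                       # r x n projected samples

def dca_fuse(X, Y, labels, r=None):
    # Fuse two modalities (n x d_x and n x d_y, e.g. PCA-reduced face and voice
    # features) into a single n x 2r matrix of DCA-style fused features
    labels = np.asarray(labels)
    c = len(np.unique(labels))
    r = r or (c - 1)
    Xp = _between_class_whitener(X, labels, r)
    Yp = _between_class_whitener(Y, labels, r)
    # Maximise the correlation between the two projected sets via SVD of Xp @ Yp.T
    U, s, Vt = np.linalg.svd(Xp @ Yp.T)
    s = np.clip(s, 1e-12, None)
    Xs = (U @ np.diag(s ** -0.5)).T @ Xp
    Ys = (Vt.T @ np.diag(s ** -0.5)).T @ Yp
    return np.concatenate([Xs, Ys], axis=0).T            # feature-level fusion by concatenation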

PROPOSED SYSTEM ADVANTAGES:

1. HIGHER RECOGNITION ACCURACY (LOWER EER)

2. HIGHER EFFICIENCY (FASTER EXECUTION AND LOWER POWER CONSUMPTION)

SYSTEM REQUIREMENTS
SOFTWARE REQUIREMENTS:
• Programming Language : Python
• Front-End Technologies : Tkinter / Web (HTML, CSS, JS)
• IDE : Jupyter/Spyder/VS Code
• Operating System : Windows 8/10

HARDWARE REQUIREMENTS:

• Processor : Intel Core i3
• RAM Capacity : 2 GB
• Hard Disk : 250 GB
• Monitor : 15″ Color
• Mouse : 2- or 3-Button Mouse
• Keyboard : Standard Keyboard

For More Details of Project Document, PPT, Screenshots and Full Code
Call/WhatsApp – 9966645624
Email – info@srithub.com
