Social applications have grown rapidly over the past few years, and with them the need for reliable ways to verify a user's identity.
Integrating multi-factor verification capabilities into these applications is therefore essential. In social apps, verification procedures prevent unwanted access to the personal information shared between two parties. Facial verification is not entirely new; most modern devices already ship with it as a security feature. It offers stronger protection than many traditional methods, especially against risks such as phishing, brute-force attacks, and account hijacking.
What to Expect
In this article, I will walk you through building a multi-factor verification system for a chat application powered by Stream, using facial ID verification to ensure that only authorized users can access your app. I will illustrate each step with relevant code examples.
Prerequisites
To follow along with this tutorial, you will need:
Intermediate knowledge of Node.js/Express for the backend
Knowledge of React for the frontend
A Stream API key
Before starting, we will briefly highlight the facial verification tool of choice: face-api.js.
face-api.js is a facial recognition package designed for integration with JavaScript-powered applications. It is built on top of the TensorFlow.js library and provides a wide range of face-related capabilities powered by machine learning models.
On top of all these features, it is friendly to use and can run entirely locally with its default models. Its documentation page provides examples of the relevant code.
It provides features such as face detection, face landmark detection, and face matching, the latter using Euclidean distance between face descriptors to make precise comparisons.
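As a quick preview of how these features come together, here is a minimal, illustrative sketch (assuming the models have already been loaded and an image or video element is passed in) that detects a single face and returns its 128-value descriptor:
// Illustrative sketch: detect one face and extract its descriptor.
// Assumes the face-api.js models have already been loaded from /models.
const describeFace = async (input) => {
const result = await faceapi
.detectSingleFace(input, new faceapi.TinyFaceDetectorOptions())
.withFaceLandmarks()
.withFaceDescriptor();
// result is undefined when no face is found in the input
return result ? result.descriptor : null;
};
Now we will set it up with our chat application project in the next section.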
Project Setup
As mentioned earlier, this is a full-stack project with both a frontend and a backend. In this section, we will set up both codebases before moving on to the demo project section.
Frontend
We will scaffold the frontend of the application using the Vite framework.
npm create vite@latest
After creating the React application, install face-api.js with this command:
npm i face-api.js
This installs the face-api.js package and its required dependencies. You can then install the Stream Chat SDKs, which will form the core of the project.
npm i stream-chat stream-chat-react
After that completes successfully, we finally have the project structure scaffolded. To test our frontend application locally, we need to host the face-api.js models locally. Here is a link to the models. Please copy the models folder and paste it into the public folder of the code project, as shown below.
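For reference, the public folder should then contain a models subfolder holding the weight manifests and shards for the models used later in this tutorial. Exact file names can vary between face-api.js versions, but the layout looks roughly like this:
public/
  models/
    tiny_face_detector_model-weights_manifest.json
    tiny_face_detector_model-shard1
    face_landmark_68_model-weights_manifest.json
    face_landmark_68_model-shard1
    face_recognition_model-weights_manifest.json
    face_recognition_model-shard1
    face_recognition_model-shard2
    face_expression_model-weights_manifest.json
    face_expression_model-shard1
Next, we will set up our backend project.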
Backend
The backend is designed to store user details and ensure users are verified before they can access the chat application. MongoDB will be the database of choice, and we will use the Express.js library as the backend API development environment. For ease of setup, please clone the code base and install it on your local machine; it already ships with the required installation files. For a smooth backend experience, you can use the MongoDB Atlas option as the database for storing user details.
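The cloned code base defines its own models, but to make the data flow concrete, here is a minimal sketch of what the user model could look like, assuming Mongoose is used on the Express.js backend. Field names mirror the payload the registration page will send; the actual code base may differ:
// models/User.js - illustrative sketch, assuming Mongoose is used
const mongoose = require('mongoose');

const userSchema = new mongoose.Schema({
username: { type: String, required: true, unique: true },
email: { type: String, required: true, unique: true },
FullName: { type: String, required: true },
password: { type: String, required: true }, // store a hash, never the plain password
faceDescriptor: { type: [Number], required: true }, // 128-value face descriptor
});

module.exports = mongoose.model('User', userSchema);
With this, we will now start the demo project in the next section.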
Demo Project: Connecting facial identity and verification
In this section, we will walk through setting up a verification page on the frontend where users register their details (username, email, and password) on the registration page. They are also required to take a snapshot of their face, and face-api.js is called to detect a face in the image. Unless detection succeeds, they are not allowed to proceed any further.
Then the withFaceDescriptor function is called, which generates a unique face descriptor of the user's face based on the loaded machine learning models. After a successful registration, these values are stored securely in the MongoDB database through the Express.js backend. The application is tied to a multi-factor verification system that combines password-based verification with facial verification.
Once the first hurdle (password verification) is cleared, the user needs to take another face snapshot, which is compared against the face descriptor stored at registration. The comparison is threshold-based: if the distance between the two descriptors falls within the threshold, the faces are considered a match and the user gains access to the chat application. Otherwise, the user is denied access to the Stream-powered chat application. The relevant source code snippets highlighting these steps will be provided, along with screenshots.
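face-api.js ships a euclideanDistance helper that makes this comparison straightforward. The helper below is an illustrative sketch of that login-time check; the isMatchingFace name and the 0.6 threshold are assumptions for this article, not values taken from the library:
// Illustrative sketch of the threshold-based face match used at login time.
// 0.6 is a commonly used distance threshold; tune it for your own app.
const FACE_MATCH_THRESHOLD = 0.6;

const isMatchingFace = (storedDescriptor, liveDescriptor) => {
// euclideanDistance accepts plain number arrays (as stored in MongoDB)
// as well as Float32Arrays produced by withFaceDescriptor().
const distance = faceapi.euclideanDistance(storedDescriptor, liveDescriptor);
return distance <= FACE_MATCH_THRESHOLD;
};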
We will start by building the registration page for our chat application, using React of course. First, we import and initialize the necessary packages.
import React, {useState, useRef, useEffect} from 'react';
import * as faceapi from 'face-api.js'
import {useNavigate} from 'react-router-dom'
import axios from 'axios';
const Register =()=> {
const navigate = useNavigate();
const userRef = useRef();
const passwordRef= useRef();
const emailRef = useRef();
const FullRef = useRef()
In the code snippet above, we imported the required React hooks and initialized our installed face-api.js package. Axios will serve as the HTTP client of choice for API requests in this project. The useRef hook will be used to track user inputs. We then defined the Register component and created separate useRef hooks for the various input fields.
useEffect(() => {
const loadModels = async () => {
await faceapi.nets.tinyFaceDetector.loadFromUri('/models');
await faceapi.nets.faceLandmark68Net.loadFromUri('/models');
await faceapi.nets.faceRecognitionNet.loadFromUri('/models');
await faceapi.nets.faceExpressionNet.loadFromUri('/models');
setModelIsLoaded(true);
startVideo();
};
loadModels();
}, []);
In the code above, the useEffect hook is used to make sure the locally stored face-api.js models are loaded and active in our application. The models live in the models subfolder inside the public folder after you move them there. With our models loading, we will now set up the webcam feature on our web page.
const [faceDetected, setFaceDetected] = useState(false);
const startVideo = () => {
navigator.mediaDevices
.getUserMedia({ video: true })
.then((stream) => {
videoRef.current.srcObject = stream;
})
.catch((err) => console.error("Error accessing webcam: ", err));
};
const captureSnapshot = async () => {
const canvas = snapshotRef.current;
const context = canvas.getContext('2d');
context.drawImage(videoRef.current, 0, 0, canvas.width, canvas.height);
const dataUrl = canvas.toDataURL('image/jpeg');
setSnapshot(dataUrl);
const detection = await faceapi
.detectSingleFace(canvas, new faceapi.TinyFaceDetectorOptions())
.withFaceLandmarks()
.withFaceDescriptor();
if (detection) {
const newDescriptor = detection.descriptor;
setDescriptionValue(newDescriptor)
console.log( newDescriptor);
setSubmitDisabled(false)
stopVid()
} else {
console.error("No face detected in snapshot");
}
};
const stopVid = () => {
const stream = videoRef?.current?.srcObject;
if (stream) {
stream.getTracks().forEach(track => {track.stop()})
videoRef.current.srcObject = null;
setCameraActive(false)
}
}
const handleVideoPlay = async () => {
const video = videoRef.current;
const canvas = canvasRef.current;
const displaySize = { width: video.width, height: video.height };
faceapi.matchDimensions(canvas, displaySize);
setInterval(async () => {
if (!cameraActive) return ;
const detections = await faceapi.detectAllFaces(
video,
new faceapi.TinyFaceDetectorOptions()
);
const resizedDetections = faceapi.resizeResults(detections, displaySize);
canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height);
faceapi.draw.drawDetections(canvas, resizedDetections);
const detected = detections.length > 0;
if (detected && !faceDetected) {
captureSnapshot();
}
setFaceDetected(detections.length > 0);
}, 100);
};
In the code above, we start by declaring a useState hook that tracks whether the user's face has been detected during the signup process. Next, the startVideo function activates the browser webcam. With that in place, the handleVideoPlay function takes over; it oversees face detection, since the face models have already been loaded. The stopVid function is then triggered once the user's face has been detected successfully.
In this section, we also activated the browser webcam in our application to provide a real-time video feed. The captureSnapshot function grabs a snapshot from the current video stream.
const RegSubmit = async (e) => {
e.preventDefault();
console.log("hello");
try {
const res = await axios.post(BACKEND_URL, {
username: userRef.current.value,
email: emailRef.current.value,
FullName: FullRef.current.value,
password: passwordRef.current.value,
faceDescriptor: descriptionValue,
});
console.log(res.data);
setError(false);
navigate("/login");
console.log("help");
} catch (err) {
console.error(err);
setError(true);
}
};
With all the values in place, the RegSubmit function is then defined. When it runs, it stores the user details, along with the face descriptor, on our backend server, after which the user can move on to the login page for verification.
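On the backend, the registration payload needs a matching endpoint. The route below is a minimal sketch, assuming Express, a Mongoose User model like the one sketched earlier, and bcrypt for password hashing; the path mirrors the http://localhost:5000/v1/users URL used in the frontend code, but the cloned code base may structure this differently:
// routes/users.js - illustrative registration endpoint sketch
const router = require('express').Router();
const bcrypt = require('bcrypt');
const User = require('../models/User');

router.post('/v1/users', async (req, res) => {
try {
const { username, email, FullName, password, faceDescriptor } = req.body;
// Never store the raw password; hash it first.
const hashedPassword = await bcrypt.hash(password, 10);
// A Float32Array posted as JSON arrives as an object keyed by index,
// so normalize it back into a plain number array before saving.
const user = await User.create({
username,
email,
FullName,
password: hashedPassword,
faceDescriptor: Object.values(faceDescriptor),
});
res.status(201).json({ id: user._id, username: user.username });
} catch (err) {
res.status(500).json({ message: 'Registration failed' });
}
});

module.exports = router;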
The following is the complete registration code.
import React, { useState, useRef, useEffect } from 'react';
import * as faceapi from 'face-api.js';
import { useNavigate } from 'react-router-dom';
import axios from 'axios';
const Register = () => {
const navigate = useNavigate();
const userRef = useRef();
const passwordRef = useRef();
const emailRef = useRef();
const FullRef = useRef();
const snapshotRef = useRef(null);
const videoRef = useRef(null);
const canvasRef = useRef(null);
const [modelIsLoaded, setModelIsLoaded] = useState(false);
const [detections, setDetections] = useState([]);
const [error, setError] = useState(false);
const [snapshot, setSnapshot] = useState(null);
const [cameraActive, setCameraActive] = useState(true);
const [submitDisabled, setSubmitDisabled] = useState(true);
const [descriptionValue, setDescriptionValue] = useState(null);
const [faceDetected, setFaceDetected] = useState(false);
useEffect(() => {
const loadModels = async () => {
await faceapi.nets.tinyFaceDetector.loadFromUri('/models');
await faceapi.nets.faceLandmark68Net.loadFromUri('/models');
await faceapi.nets.faceRecognitionNet.loadFromUri('/models');
await faceapi.nets.faceExpressionNet.loadFromUri('/models');
setModelIsLoaded(true);
startVideo();
};
loadModels();
}, []);
const RegSubmit = async (e) => {
e.preventDefault();
console.log("hello");
try {
const res = await axios.post('http://localhost:5000/v1/users', {
username: userRef.current.value,
email: emailRef.current.value,
FullName: FullRef.current.value,
password: passwordRef.current.value,
faceDescriptor: descriptionValue
});
console.log(res.data);
setError(false);
navigate("/login");
console.log("help");
} catch (err) {
console.log(err);
setError(true);
}
};
const startVideo = () => {
navigator.mediaDevices
.getUserMedia({ video: true })
.then((stream) => {
videoRef.current.srcObject = stream;
})
.catch((err) => console.error("Error accessing webcam: ", err));
};
const stopVid = () => {
const stream = videoRef?.current?.srcObject;
if (stream) {
stream.getTracks().forEach((track) => track.stop());
videoRef.current.srcObject = null;
setCameraActive(false);
}
};
const captureSnapshot = async () => {
const canvas = snapshotRef.current;
const context = canvas.getContext('2d');
context.drawImage(videoRef.current, 0, 0, canvas.width, canvas.height);
const dataUrl = canvas.toDataURL('image/jpeg');
setSnapshot(dataUrl);
const detection = await faceapi
.detectSingleFace(canvas, new faceapi.TinyFaceDetectorOptions())
.withFaceLandmarks()
.withFaceDescriptor();
if (detection) {
const newDescriptor = detection.descriptor;
setDescriptionValue(newDescriptor);
console.log(newDescriptor);
setSubmitDisabled(false);
stopVid();
} else {
console.error("No face detected in snapshot");
}
};
const handleVideoPlay = async () => {
const video = videoRef.current;
const canvas = canvasRef.current;
const displaySize = { width: video.width, height: video.height };
faceapi.matchDimensions(canvas, displaySize);
setInterval(async () => {
if (!cameraActive) return;
const detections = await faceapi.detectAllFaces(
video,
new faceapi.TinyFaceDetectorOptions()
);
const resizedDetections = faceapi.resizeResults(detections, displaySize);
canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height);
faceapi.draw.drawDetections(canvas, resizedDetections);
const detected = detections.length > 0;
if (detected && !faceDetected) {
captureSnapshot();
}
setFaceDetected(detected);
}, 100);
};
return (
<div className="flex flex-col w-full h-screen justify-center">
<div className="flex flex-col">
<form className="flex flex-col mb-2 w-full" onSubmit={RegSubmit}>
<h3 className="flex flex-col mx-auto mb-5">Registration Page</h3>
<div className="flex flex-col mb-2 w-[50%] mx-auto items-center">
<input
type="text"
placeholder="Email"
className="w-full rounded-2xl h-[50px] border-2 p-2 mb-2 border-gray-900"
required
ref={emailRef}
/>
<input
type="text"
placeholder="Username"
className="w-full rounded-2xl h-[50px] border-2 p-2 mb-2 border-gray-900"
required
ref={userRef}
/>
<input
type="text"
placeholder="Full Name"
className="w-full rounded-2xl h-[50px] border-2 p-2 mb-2 border-gray-900"
required
ref={FullRef}
/>
<input
type="password"
placeholder="Password"
className="w-full rounded-2xl h-[50px] border-2 p-2 mb-2 border-gray-900"
required
ref={passwordRef}
/>
<div>
{!modelIsLoaded && cameraActive && !descriptionValue ? (
<p>Loading</p>
) : (
<>
{!descriptionValue && (
<>
<video
ref={videoRef}
width="200"
height="160"
onPlay={handleVideoPlay}
autoPlay
muted
/>
<canvas
ref={canvasRef}
width="200"
height="160"
style={{ position: 'absolute', top: 0, left: 0 }}
/>
<p>
{faceDetected ? (
<span style={{ color: 'green' }}>Face Detected</span>
) : (
<span style={{ color: 'red' }}>No Face Detected</span>
)}
</p>
<canvas
ref={snapshotRef}
width="480"
height="360"
style={{ display: 'none' }}
/>
</>
)}
</>
)}
{snapshot && (
<div style={{ marginTop: '20px' }}>
<h4>Face Snapshot:</h4>
<img
src={snapshot}
alt="Face Snapshot"
width="200"
height="160"
/>
</div>
)}
</div>
<div className="mt-2">
<button type="button" onClick={stopVid}>
Stop Video
button>
div>
<button
disabled={submitDisabled}
className="mx-auto mt-4 rounded-2xl cursor-pointer text-white bg-primary w-(80%) lg:w-(50%) h-(40px) text-center items-center justify-center"
type="submit"
>
Register
button>
div>
<div className="flex flex-col mt-1 w-full">
<p className="flex justify-center">
Registered previously?Â
<a href="/login" className="text-blue-600 underline">
Login
a>
p>
div>
{error && (
<p className="text-red-600 text-center mt-2">
Error while registering, try again
p>
)}
form>