Build a React Native MRZ Scanner using Vision Camera
In the previous articles of this series, we created a react-native-vision-camera frame processor plugin based on Dynamsoft Label Recognizer to recognize text. In this article, we are going to build an MRZ scanner with this plugin to better illustrate how to use it.
MRZ stands for machine-readable zone, which is usually located at the bottom of the identity page at the beginning of a passport. It can be read by a computing device with a camera to extract information such as document type, name, document number, nationality, date of birth, sex, and document expiration date.
A demo video of the final result:
In the above demo, we can see that the two-line MRZ code, with 44 characters per line, is recognized and parsed.
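For reference, the MRZ of a passport (the TD3 format) consists of two 44-character lines. The specimen below comes from the ICAO Doc 9303 specification and encodes the kinds of fields listed above:

P<UTOERIKSSON<<ANNA<MARIA<<<<<<<<<<<<<<<<<<<
L898902C36UTO7408122F1204159ZE184226B<<<<<10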
This article is Part 3 in a 3-Part Series.
Build a React Native MRZ Scanner
New Project
npx react-native init MRZScanner
If you need to enable TypeScript:
npx react-native init MRZScanner --template react-native-template-typescript
Add Camera Permission
For Android, add the following to android\app\src\main\AndroidManifest.xml:

<uses-permission android:name="android.permission.CAMERA" />
For iOS, add the following to ios\MRZScanner\Info.plist:

<key>NSCameraUsageDescription</key>
<string>For mrz scanning</string>
Install Dependencies
npm install vision-camera-dynamsoft-label-recognizer react-native-worklets-core react-native-vision-camera react-native-svg
Extra steps are needed:

- Update the babel.config.js file for the frame processor plugin:

  module.exports = {
    presets: ['module:metro-react-native-babel-preset'],
  + plugins: [['react-native-worklets-core/plugin']],
  };

- For iOS, run pod install in the ios folder.
You can run the project using the following commands:
npx react-native run-android # for Android
npx react-native run-ios # for iOS
Open the Camera
Open App.tsx and replace its content with the following:
import * as React from 'react';
import { SafeAreaView, StyleSheet, Text } from 'react-native';
import { Camera, useCameraDevice } from 'react-native-vision-camera';
export default function App() {
const [hasPermission, setHasPermission] = React.useState(false);
const device = useCameraDevice('back')
React.useEffect(() => {
(async () => {
const status = await Camera.requestCameraPermission();
setHasPermission(status === 'granted');
})();
}, []);
return (
<SafeAreaView style={styles.container}>
{device != null &&
hasPermission && (
<>
<Camera
style={StyleSheet.absoluteFill}
device={device}
isActive={true}
/>
</>
)}
</SafeAreaView>
);
}
const styles = StyleSheet.create({
container: {
flex: 1,
},
barcodeText: {
fontSize: 20,
color: 'white',
fontWeight: 'bold',
},
});
After we run the app, we can see that the camera is opened.
Set up a Scan Region and Recognize Text from that Region
Normally, we don't want to read text from the entire camera frame. We can set up a scan region so that the frame processor crops the camera frame and sends only that region to Dynamsoft Label Recognizer for text extraction. In the case of MRZ scanning, we only need to recognize the region where the MRZ code exists.
- Define a scan region. The values are in percent.

  import { initLicense, recognize, ScanConfig, ScanRegion, DLRLineResult, DLRResult } from 'vision-camera-dynamsoft-label-recognizer';

  const scanRegion:ScanRegion = {
    left: 5,
    top: 40,
    width: 90,
    height: 10
  };
- Initialize the license for Dynamsoft Label Recognizer. Here, we use a one-day trial license. You can apply for your license here.

  const license = "DLS2eyJoYW5kc2hha2VDb2RlIjoiMjAwMDAxLTE2NDk4Mjk3OTI2MzUiLCJvcmdhbml6YXRpb25JRCI6IjIwMDAwMSIsInNlc3Npb25QYXNzd29yZCI6IndTcGR6Vm05WDJrcEQ5YUoifQ=="; //public trial
  const result = await initLicense(license);
- Define the frame processor for the vision camera. Note that useFrameProcessor and runAtTargetFps are additional imports from react-native-vision-camera.

  const frameProcessor = useFrameProcessor((frame) => {
    'worklet'
    runAtTargetFps(1, () => {
      'worklet'
      // Crop the frame to the scan region and recognize text from it
      let config:ScanConfig = {};
      config.scanRegion = scanRegion;
      let scanResult = recognize(frame,config);
      console.log(scanResult);
    })
  }, [])
  Add the frameProcessor prop to the Camera component:

  <Camera
    style={StyleSheet.absoluteFill}
    device={device}
    isActive={true}
  + frameProcessor={frameProcessor}
  />
- Draw a rectangle using react-native-svg to indicate which region will be processed. Note that Svg and Rect are imported from react-native-svg.

  <Svg preserveAspectRatio='xMidYMid slice' style={StyleSheet.absoluteFill} viewBox={getViewBox()}>
    <Rect
      x={scanRegion.left/100*getFrameSize().width}
      y={scanRegion.top/100*getFrameSize().height}
      width={scanRegion.width/100*getFrameSize().width}
      height={scanRegion.height/100*getFrameSize().height}
      strokeWidth="2"
      stroke="red"
    />
  </Svg>
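The getViewBox and getFrameSize helpers used above are not shown in this excerpt; they are defined in the full source code. A minimal sketch of what they could look like, assuming a fixed 1280x720 camera frame displayed in portrait mode (these values and the rotation handling are assumptions for illustration):

// Hypothetical helpers for the SVG overlay. The frame size is assumed to be
// 1280x720 (landscape sensor) shown in portrait, so width and height are swapped.
const frameWidth = 1280;  // assumed camera frame width
const frameHeight = 720;  // assumed camera frame height

const getFrameSize = (): {width: number, height: number} => {
  // Swap the dimensions because the preview is displayed in portrait mode.
  return { width: frameHeight, height: frameWidth };
};

const getViewBox = () => {
  const frameSize = getFrameSize();
  return "0 0 " + frameSize.width + " " + frameSize.height;
};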
All right, the app can now recognize MRZ text from the camera.
We can go a step further and parse the MRZ result. Third-party libraries like mrz can do this.
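A rough sketch of how such parsing could look, assuming the mrz npm package is installed and the two recognized lines are available as strings (the field names follow that package's documentation but may vary between versions):

// Parse recognized MRZ lines with the third-party "mrz" package (assumed installed).
import { parse } from 'mrz';

// The two 44-character lines recognized from the camera (ICAO specimen shown here).
const mrzLines = [
  'P<UTOERIKSSON<<ANNA<MARIA<<<<<<<<<<<<<<<<<<<',
  'L898902C36UTO7408122F1204159ZE184226B<<<<<10',
];

const parsed = parse(mrzLines);
console.log(parsed.valid);                  // true if all check digits pass
console.log(parsed.fields.documentNumber);  // e.g. "L898902C3"
console.log(parsed.fields.birthDate);       // e.g. "740812" (YYMMDD)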
Result Validation
Sometimes, there may be misrecognized characters. We can validate the result in two ways.
- Characters recognized by Dynamsoft Label Recognizer come with confidence values. We can check the confidence values to find out which characters may be misrecognized.
- The MRZ contains check digits (validation codes). We can calculate the check digits ourselves and verify that they match the recognized ones.
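To illustrate the second approach, here is a minimal sketch of the standard MRZ check digit calculation: a weighted sum with repeating weights 7, 3, 1, taken modulo 10, where digits keep their value, letters A to Z map to 10 to 35, and the filler character < counts as 0. In a TD3 passport MRZ, check digits follow the document number, the date of birth, the expiration date and the optional data field, plus a final composite check digit.

// Compute an MRZ check digit: weighted sum with weights 7,3,1 repeating, modulo 10.
// Digits keep their value, 'A'-'Z' map to 10-35, and '<' counts as 0.
const computeCheckDigit = (input: string): number => {
  const weights = [7, 3, 1];
  let sum = 0;
  for (let i = 0; i < input.length; i++) {
    const char = input[i];
    let value = 0;
    if (char >= '0' && char <= '9') {
      value = char.charCodeAt(0) - '0'.charCodeAt(0);
    } else if (char >= 'A' && char <= 'Z') {
      value = char.charCodeAt(0) - 'A'.charCodeAt(0) + 10;
    } // '<' and any other character counts as 0
    sum += value * weights[i % 3];
  }
  return sum % 10;
};

// Example: the document number "L898902C3" in the ICAO specimen has check digit 6.
console.log(computeCheckDigit('L898902C3')); // 6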
Source Code
Check out the source code and give it a try:
https://github.com/tony-xlh/react-native-mrz-scanner