Build a React Native MRZ Scanner using Vision Camera

In a previous series of articles, we created a react-native-vision-camera frame processor plugin for Dynamsoft Label Recognizer to recognize text. In this article, we are going to build an MRZ scanner with this plugin to better illustrate how to use it.

MRZ stands for machine-readable zone, which usually appears at the bottom of the identity page at the beginning of a passport. It can be read by a computing device with a camera to extract information such as document type, name, document number, nationality, date of birth, sex, and document expiration date.
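
To make the fixed layout concrete, here is a small TypeScript sketch that slices the second line of the specimen passport MRZ published in ICAO Doc 9303 (a published sample, not real personal data) into some of its fields:

```typescript
// Second line of the TD3 (passport) specimen MRZ from ICAO Doc 9303.
// Every field sits at a fixed character position, which is what makes
// the zone "machine readable".
const line2 = "L898902C36UTO7408122F1204159ZE184226B<<<<<10";

const fields = {
  documentNumber: line2.slice(0, 9),  // "L898902C3"
  nationality: line2.slice(10, 13),   // "UTO"
  birthDate: line2.slice(13, 19),     // "740812" (YYMMDD)
  sex: line2.slice(20, 21),           // "F"
  expiryDate: line2.slice(21, 27),    // "120415" (YYMMDD)
};

console.log(fields);
```

The characters at positions 9, 19, and 27 (skipped above) are check digits, which we will come back to later.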

A demo video of the final result:

In the demo above, we can see that the two-line MRZ code, with 44 characters per line, is recognized and parsed.

Build a React Native MRZ Scanner

New Project

npx react-native init MRZScanner

If you need to enable TypeScript:

npx react-native init MRZScanner --template react-native-template-typescript

Add Camera Permission

For Android, add the following to android\app\src\main\AndroidManifest.xml:

<uses-permission android:name="android.permission.CAMERA" />

For iOS, add the following to ios/MRZScanner/Info.plist:

<key>NSCameraUsageDescription</key>
<string>For MRZ scanning</string>

Install Dependencies

npm install vision-camera-dynamsoft-label-recognizer react-native-reanimated react-native-vision-camera react-native-svg

Extra steps are needed:

  1. Update the babel.config.js file for the frame processor plugin:

    module.exports = {
      presets: ['module:metro-react-native-babel-preset'],
    +  plugins: [
    +    [
    +      'react-native-reanimated/plugin',
    +      {
    +        globals: ['__recognize'],
    +      },
    +    ],
    +  ]
    };
    
  2. For iOS, run pod install in the ios folder.
  3. You can run the project using the following commands:

    npx react-native run-android # for Android
    npx react-native run-ios     # for iOS
    

Open the Camera

Open App.tsx and replace its content with the following:

import * as React from 'react';
import { SafeAreaView, StyleSheet, Text } from 'react-native';
import { Camera, useCameraDevices, useFrameProcessor } from 'react-native-vision-camera';
import { ScanConfig, ScanRegion, recognize } from 'vision-camera-dynamsoft-label-recognizer';
import * as REA from 'react-native-reanimated';

export default function App() {
  const [hasPermission, setHasPermission] = React.useState(false);
  const devices = useCameraDevices();
  const device = devices.back;

  React.useEffect(() => {
    (async () => {
      const status = await Camera.requestCameraPermission();
      setHasPermission(status === 'authorized');
    })();
  }, []);

  return (
    <SafeAreaView style={styles.container}>
      {device != null &&
      hasPermission && (
        <>
          <Camera
            style={StyleSheet.absoluteFill}
            device={device}
            isActive={true}
          />
        </>
      )}
    </SafeAreaView>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
  },
  barcodeText: {
    fontSize: 20,
    color: 'white',
    fontWeight: 'bold',
  },
});

After running the app, we can see the camera preview.

Set up a Scan Region and Recognize Text from that Region

Normally, we don’t want to read text from the entire camera frame. We can set up a scan region so that the frame processor will crop the camera frame and send it for Dynamsoft Label Recognizer to extract text. In the case of MRZ scanning, we only need to recognize the region where the MRZ code exists.

  1. Define a scan region. The values are percentages of the frame size.

    const scanRegion:ScanRegion = {
      left: 5,
      top: 40,
      width: 90,
      height: 10
    }
    
  2. Define the frame processor for the vision camera.

    const frameProcessor = useFrameProcessor((frame) => {
      'worklet';
      const config: ScanConfig = {};
      config.license = "DLS2eyJoYW5kc2hha2VDb2RlIjoiMjAwMDAxLTE2NDk4Mjk3OTI2MzUiLCJvcmdhbml6YXRpb25JRCI6IjIwMDAwMSIsInNlc3Npb25QYXNzd29yZCI6IndTcGR6Vm05WDJrcEQ5YUoifQ=="; //public trial
      config.scanRegion = scanRegion;
      const scanResult = recognize(frame, config);
      console.log(scanResult);
    }, []);
    

    Add the props for the vision camera:

     <Camera
       style={StyleSheet.absoluteFill}
       device={device}
       isActive={true}
    +  frameProcessor={frameProcessor}
    +  frameProcessorFps={1}
     />
    
  3. Draw a rectangle using react-native-svg to indicate which region will be processed.

    <Svg preserveAspectRatio='xMidYMid slice' style={StyleSheet.absoluteFill} viewBox={getViewBox()}>
      <Rect 
        x={scanRegion.left/100*getFrameSize().width}
        y={scanRegion.top/100*getFrameSize().height}
        width={scanRegion.width/100*getFrameSize().width}
        height={scanRegion.height/100*getFrameSize().height}
        strokeWidth="2"
        stroke="red"
      />
    </Svg>
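
The `getViewBox` and `getFrameSize` helpers are not provided by any library; we have to supply them ourselves. A minimal sketch, assuming the camera delivers 1080x1920 portrait frames (in a real app, read the size from the frame or the selected camera format instead of hard-coding it):

```typescript
// Assumed frame size for this sketch; replace with the actual camera
// frame dimensions in a real app.
const FRAME_WIDTH = 1080;
const FRAME_HEIGHT = 1920;

function getFrameSize(): { width: number; height: number } {
  return { width: FRAME_WIDTH, height: FRAME_HEIGHT };
}

// The viewBox spans the whole frame, so Rect coordinates computed in frame
// pixels line up with the preview (preserveAspectRatio="xMidYMid slice"
// crops the SVG the same way the camera preview is cropped).
function getViewBox(): string {
  const frameSize = getFrameSize();
  return "0 0 " + frameSize.width + " " + frameSize.height;
}

console.log(getViewBox()); // "0 0 1080 1920"
```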
    

All right, the app can now recognize text from the camera.
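
Because the scan region is expressed in percent, it is resolution independent. As a rough sketch of the mapping the plugin performs internally (an assumption about its behavior, not its actual code), the region translates to pixels like this:

```typescript
interface ScanRegion {
  left: number;
  top: number;
  width: number;
  height: number;
}

// Convert a percent-based scan region into a pixel rectangle for a given
// frame size. Values are rounded down to stay inside the frame.
function regionToPixels(region: ScanRegion, frameWidth: number, frameHeight: number) {
  return {
    x: Math.floor(region.left / 100 * frameWidth),
    y: Math.floor(region.top / 100 * frameHeight),
    width: Math.floor(region.width / 100 * frameWidth),
    height: Math.floor(region.height / 100 * frameHeight),
  };
}

// For a 1080x1920 portrait frame, the region used in this article maps to
// a thin horizontal band near the middle of the screen.
const band = regionToPixels({ left: 5, top: 40, width: 90, height: 10 }, 1080, 1920);
console.log(band); // { x: 54, y: 768, width: 972, height: 192 }
```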

Load the Model and Template for MRZ

The MRZ code has a limited character set: digits, uppercase A-Z, and the filler character <. We can train a custom OCR model and use a custom template to optimize Dynamsoft Label Recognizer for reading MRZ code. Pre-trained models and template files have already been made. You can find them here.

  1. Put the model files in a folder named MRZ. For Android, put it under android\app\src\main\assets\. For iOS, add it to the Xcode project as a folder reference.

  2. Update App.tsx to specify the custom model folder and which model files to load.

    config.customModelConfig = {
      customModelFolder: "MRZ",
      customModelFileNames: [
        "NumberUppercase",
        "NumberUppercase_Assist_1lIJ",
        "NumberUppercase_Assist_8B",
        "NumberUppercase_Assist_8BHR",
        "NumberUppercase_Assist_number",
        "NumberUppercase_Assist_O0DQ",
        "NumberUppercase_Assist_upcase"
      ]
    };
    
  3. Update App.tsx to use a custom template to read MRZ code.

    config.template = "{\"CharacterModelArray\":[{\"DirectoryPath\":\"\",\"FilterFilePath\":\"\",\"Name\":\"NumberUppercase\"}],\"LabelRecognizerParameterArray\":[{\"BinarizationModes\":[{\"BlockSizeX\":0,\"BlockSizeY\":0,\"EnableFillBinaryVacancy\":1,\"LibraryFileName\":\"\",\"LibraryParameters\":\"\",\"Mode\":\"BM_LOCAL_BLOCK\",\"ThreshValueCoefficient\":15}],\"CharacterModelName\":\"NumberUppercase\",\"LetterHeightRange\":[5,1000,1],\"LineStringLengthRange\":[44,44],\"MaxLineCharacterSpacing\":130,\"LineStringRegExPattern\":\"(P[OM<][A-Z]{3}([A-Z<]{0,35}[A-Z]{1,3}[(<<)][A-Z]{1,3}[A-Z<]{0,35}<{0,35}){(39)}){(44)}|([A-Z0-9<]{9}[0-9][A-Z]{3}[0-9]{2}[(01-12)][(01-31)][0-9][MF][0-9]{2}[(01-12)][(01-31)][0-9][A-Z0-9<]{14}[0-9<][0-9]){(44)}\",\"MaxThreadCount\":4,\"Name\":\"locr\",\"TextureDetectionModes\":[{\"Mode\":\"TDM_GENERAL_WIDTH_CONCENTRATION\",\"Sensitivity\":8}],\"ReferenceRegionNameArray\":[\"DRRegion\"]}],\"LineSpecificationArray\":[{\"Name\":\"L0\",\"LineNumber\":\"\",\"BinarizationModes\":[{\"BlockSizeX\":30,\"BlockSizeY\":30,\"Mode\":\"BM_LOCAL_BLOCK\"}]}],\"ReferenceRegionArray\":[{\"Localization\":{\"FirstPoint\":[0,0],\"SecondPoint\":[100,0],\"ThirdPoint\":[100,100],\"FourthPoint\":[0,100],\"MeasuredByPercentage\":1,\"SourceType\":\"LST_MANUAL_SPECIFICATION\"},\"Name\":\"DRRegion\",\"TextAreaNameArray\":[\"DTArea\"]}],\"TextAreaArray\":[{\"LineSpecificationNameArray\":[\"L0\"],\"Name\":\"DTArea\",\"FirstPoint\":[0,0],\"SecondPoint\":[100,0],\"ThirdPoint\":[100,100],\"FourthPoint\":[0,100]}]}";
    config.templateName = "locr";
    

After doing the above settings, the app can now successfully recognize MRZ text.

We can take this a step further and parse the MRZ result. Third-party libraries like mrz can do this.

Result Validation

Sometimes, there may be misrecognized characters. We can validate the result in two ways.

  1. Characters recognized by Dynamsoft Label Recognizer come with confidence values. We can check these values to find characters that may have been misrecognized.
  2. The MRZ includes check digits. We can compute the check digits ourselves and verify that they match the recognized ones.
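
MRZ check digits follow the ICAO Doc 9303 scheme: each character is mapped to a value (digits keep their value, A-Z map to 10-35, the filler < counts as 0), multiplied by the repeating weights 7, 3, 1, and the sum is taken modulo 10. A minimal sketch:

```typescript
// Compute an ICAO 9303 check digit over an MRZ field.
function checkDigit(field: string): number {
  const weights = [7, 3, 1];
  let sum = 0;
  for (let i = 0; i < field.length; i++) {
    const c = field[i];
    let value: number;
    if (c >= '0' && c <= '9') {
      value = c.charCodeAt(0) - '0'.charCodeAt(0);
    } else if (c >= 'A' && c <= 'Z') {
      value = c.charCodeAt(0) - 'A'.charCodeAt(0) + 10;
    } else {
      value = 0; // the filler character '<'
    }
    sum += value * weights[i % 3];
  }
  return sum % 10;
}

// Document number from the ICAO Doc 9303 specimen passport; the MRZ
// character following it is the check digit 6.
console.log(checkDigit("L898902C3")); // 6
```

If the computed digit does not match the digit read from the image, at least one character in that field was misrecognized and the scan should be retried.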

Source Code

Check out the source code to have a try:

https://github.com/xulihang/react-native-mrz-scanner
