How to Build a PWA Document Scanner with Ionic Vue
In this article, we are going to create a mobile document scanning app using Ionic Vue. Ionic is a cross-platform framework for developing mobile apps with web technologies. An Ionic app can run not only as a progressive web app (PWA) in the browser but also as a native app on Android and iOS.
Dynamsoft Document Normalizer is used to detect the document boundaries and perform a perspective transformation to crop the image.
An online demo is hosted on Netlify.
Build an Ionic Vue Document Scanner
Let’s build the app in steps.
Overview of the App
The app has three major interfaces:

- The home page, which lists the scanned images and provides operations on them.
- The scanner, where the document is detected and automatically captured.
- The cropper, where we can adjust the detected polygon of the document.
New Project
First, install Ionic according to its guide.
After installation, create a new project:
ionic start documentScanner blank --type vue
We can then run ionic serve to test the app in a browser.
Install Dependencies
- Install capacitor-plugin-camera to open the camera.

  npm install capacitor-plugin-camera

- Install capacitor-plugin-dynamsoft-document-normalizer to detect documents.

  npm install capacitor-plugin-dynamsoft-document-normalizer

- Install @capacitor/filesystem and @capacitor/share to save the scanned document on native platforms.

  npm install @capacitor/filesystem @capacitor/share

- Install jsPDF to generate a PDF file.

  npm install jspdf
Create a Document Scanner Component
Next, let’s create a document scanner component (DocumentScanner.vue) that opens the camera, detects the document, and takes a photo automatically once the detection is steady.
<template>
<div ref="container" class="container">
<div class="dce-video-container"></div>
<SVGOverlay :viewBox="viewBox" :quad="quadResultItem"></SVGOverlay>
</div>
<ion-fab slot="fixed" vertical="bottom" horizontal="end">
<ion-fab-button>
<ion-icon :icon="chevronUpCircle"></ion-icon>
</ion-fab-button>
<ion-fab-list side="top">
<ion-fab-button @click="stopCamera">
<ion-icon :icon="stop"></ion-icon>
</ion-fab-button>
<ion-fab-button @click="switchCamera">
<ion-icon :icon="cameraReverse"></ion-icon>
</ion-fab-button>
<ion-fab-button @click="toggleTorch">
<ion-icon :icon="flashlight"></ion-icon>
</ion-fab-button>
</ion-fab-list>
</ion-fab>
<ion-loading :is-open="!initialized" message="Loading..." />
</template>
<script lang="ts" setup>
import { onBeforeUnmount, onMounted, ref } from 'vue';
import { IonFab, IonFabButton, IonIcon, IonFabList, IonLoading } from '@ionic/vue';
import { CameraPreview } from 'capacitor-plugin-camera';
import { DocumentNormalizer, intersectionOverUnion } from 'capacitor-plugin-dynamsoft-document-normalizer';
import { DetectedQuadResultItem } from 'dynamsoft-document-normalizer'
import { Capacitor, PluginListenerHandle } from "@capacitor/core";
import SVGOverlay from '@/components/SVGOverlay.vue';
import {
chevronUpCircle,
flashlight,
stop,
cameraReverse,
} from 'ionicons/icons';
const previousResults:DetectedQuadResultItem[] = [];
const quadResultItem = ref<undefined|DetectedQuadResultItem>()
const viewBox = ref("0 0 1280 720");
const emit = defineEmits<{
(e: 'onStopped'): void
(e: 'onScanned',blob:Blob,path:string|undefined,detectionResults:DetectedQuadResultItem[]): void
}>();
const container = ref<HTMLDivElement|undefined>();
const initialized = ref(false);
let torchOn = false;
let onPlayedListener:PluginListenerHandle|undefined;
let interval:any;
let detecting = false;
onMounted(async () => {
try {
if (container.value && Capacitor.isNativePlatform() === false) {
await CameraPreview.setElement(container.value);
}
await CameraPreview.initialize();
await CameraPreview.requestCameraPermission();
await DocumentNormalizer.initialize();
if (onPlayedListener) {
onPlayedListener.remove();
}
onPlayedListener = await CameraPreview.addListener("onPlayed", () => {
updateViewBox();
startScanning();
});
await CameraPreview.startCamera();
} catch (error) {
alert(error);
}
initialized.value = true;
});
onBeforeUnmount(async () => {
if (onPlayedListener) {
onPlayedListener.remove();
}
stopScanning();
await CameraPreview.stopCamera();
});
const startScanning = () => {
stopScanning();
interval = setInterval(captureAndDetect,100);
}
const stopScanning = () => {
clearInterval(interval);
}
const stopCamera = async () => {
if (onPlayedListener) {
onPlayedListener.remove();
}
stopScanning();
await CameraPreview.stopCamera();
emit("onStopped");
}
const captureAndDetect = async () => {
if (detecting === true) {
return;
}
let results:DetectedQuadResultItem[] = [];
detecting = true;
try {
if (Capacitor.isNativePlatform()) {
await CameraPreview.saveFrame();
results = (await DocumentNormalizer.detectBitmap({})).results;
}else{
if (container.value) {
const video = container.value.getElementsByTagName("video")[0] as any;
const response = await DocumentNormalizer.detect({source:video});
results = response.results;
}
}
if (results.length>0) {
quadResultItem.value = results[0];
checkIfSteady(results);
}else{
quadResultItem.value = undefined;
}
console.log(results);
} catch (error) {
console.log(error);
}
detecting = false;
}
const updateViewBox = async () => {
const res = (await CameraPreview.getResolution()).resolution;
const width = parseInt(res.split("x")[0]);
const height = parseInt(res.split("x")[1]);
const orientation = (await CameraPreview.getOrientation()).orientation;
let box:string;
if (orientation === "PORTRAIT") {
if (!Capacitor.isNativePlatform()) {
box = "0 0 "+width+" "+height;
}else{
box = "0 0 "+height+" "+width;
}
}else{
box = "0 0 "+width+" "+height;
}
console.log(box);
viewBox.value = box;
}
const checkIfSteady = (results:DetectedQuadResultItem[]) => {
console.log(results);
if (results.length>0) {
const result = results[0];
if (previousResults.length >= 3) {
if (steady() == true) {
console.log("steady");
takePhotoAndStop();
}else{
console.log("shift and add result");
previousResults.shift();
previousResults.push(result);
}
}else{
console.log("add result");
previousResults.push(result);
}
}
}
const steady = () => {
if (previousResults[0] && previousResults[1] && previousResults[2]) {
const iou1 = intersectionOverUnion(previousResults[0].location.points,previousResults[1].location.points);
const iou2 = intersectionOverUnion(previousResults[1].location.points,previousResults[2].location.points);
const iou3 = intersectionOverUnion(previousResults[2].location.points,previousResults[0].location.points);
if (iou1>0.9 && iou2>0.9 && iou3>0.9) {
return true;
}else{
return false;
}
}
return false;
}
const takePhotoAndStop = async () => {
stopScanning();
let blob:Blob|undefined;
let detectionResults:DetectedQuadResultItem[] = [];
let path;
if (Capacitor.isNativePlatform()) {
const photo = await CameraPreview.takePhoto({includeBase64:true});
blob = await getBlobFromBase64(photo.base64!);
detectionResults = (await DocumentNormalizer.detect({path:photo.path})).results;
path = photo.path;
console.log(detectionResults);
}else{
const photo = await CameraPreview.takePhoto({});
console.log(photo);
if (photo.blob) {
blob = photo.blob;
}else if (photo.base64) {
blob = await getBlobFromBase64(photo.base64);
}
const img = await loadBlobAsImage(blob!);
console.log(img);
detectionResults = (await DocumentNormalizer.detect({source:img})).results;
}
if (blob && detectionResults) {
emit("onScanned", blob, path, detectionResults);
}
}
const getBlobFromBase64 = async (base64:string):Promise<Blob> => {
if (!base64.startsWith("data")) {
base64 = "data:image/jpeg;base64," + base64;
}
const response = await fetch(base64);
const blob = await response.blob();
return blob;
}
const loadBlobAsImage = (blob:Blob):Promise<HTMLImageElement> => {
return new Promise((resolve) => {
const img = document.createElement("img");
img.onload = function(){
resolve(img);
};
img.src = URL.createObjectURL(blob);
});
}
const switchCamera = async () => {
const currentCamera = (await CameraPreview.getSelectedCamera()).selectedCamera;
const result = await CameraPreview.getAllCameras();
const cameras = result.cameras;
const currentCameraIndex = cameras.indexOf(currentCamera);
let desiredIndex = 0
if (currentCameraIndex < cameras.length - 1) {
desiredIndex = currentCameraIndex + 1;
}
await CameraPreview.selectCamera({cameraID:cameras[desiredIndex]});
}
const toggleTorch = () => {
if (initialized.value) {
torchOn = !torchOn;
CameraPreview.toggleTorch({on:torchOn});
}
}
</script>
<style scoped>
.container {
width:100%;
height:100%;
}
</style>
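A note on the steadiness check: steady() treats the detection as stable when the pairwise intersection-over-union (IoU) of the last three detected quads all exceed 0.9. To build intuition for that criterion, here is a simplified sketch that computes IoU on the axis-aligned bounding boxes of two quads. This is only an illustrative approximation; the plugin's intersectionOverUnion works on the polygons themselves.

```typescript
// Simplified IoU on axis-aligned bounding boxes of two quads.
// Illustrative approximation only, not the plugin's implementation.
interface Point { x: number; y: number; }

function boundingBoxIoU(a: Point[], b: Point[]): number {
  const box = (pts: Point[]) => {
    const xs = pts.map(p => p.x);
    const ys = pts.map(p => p.y);
    return {
      left: Math.min(...xs), right: Math.max(...xs),
      top: Math.min(...ys), bottom: Math.max(...ys),
    };
  };
  const A = box(a), B = box(b);
  // Overlapping region (zero width/height if the boxes do not intersect)
  const w = Math.max(0, Math.min(A.right, B.right) - Math.max(A.left, B.left));
  const h = Math.max(0, Math.min(A.bottom, B.bottom) - Math.max(A.top, B.top));
  const intersection = w * h;
  const areaA = (A.right - A.left) * (A.bottom - A.top);
  const areaB = (B.right - B.left) * (B.bottom - B.top);
  return intersection / (areaA + areaB - intersection);
}

// Two identical quads have IoU 1; a small camera jitter of a couple of
// pixels still keeps the IoU above the 0.9 threshold used in steady().
const quad: Point[] = [{x:0,y:0},{x:100,y:0},{x:100,y:100},{x:0,y:100}];
const shifted: Point[] = quad.map(p => ({ x: p.x + 2, y: p.y + 2 }));
console.log(boundingBoxIoU(quad, quad));          // 1
console.log(boundingBoxIoU(quad, shifted) > 0.9); // true
```

Requiring three consecutive high-IoU detections filters out both hand shake and single-frame detection glitches before the photo is taken.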
An SVG overlay component is used to highlight the detected document:
<script lang="tsx">
// See https://vuejs.org/guide/typescript/overview for using JSX/TSX with Vue
import { Capacitor } from '@capacitor/core';
import { defineComponent, PropType } from 'vue';
import { type DetectedQuadResultItem } from 'dynamsoft-document-normalizer'
export default defineComponent({
name: 'SVGOverlay',
props: {
quad: {
type: Object as PropType<DetectedQuadResultItem|undefined>,
required: true,
},
viewBox: {
type: String,
required: true,
},
},
setup(props) {
const getPointsData = (quad:DetectedQuadResultItem) => {
const points = quad.location.points;
let pointsData = points[0].x+","+ points[0].y + " ";
pointsData = pointsData+ points[1].x +"," + points[1].y + " ";
pointsData = pointsData+ points[2].x +"," + points[2].y + " ";
pointsData = pointsData+ points[3].x +"," + points[3].y;
return pointsData;
}
return () => (
<svg
id="overlay"
class={Capacitor.isNativePlatform() ? "fixed" : "absolute"}
preserveAspectRatio="xMidYMid slice"
viewBox={props.viewBox}
xmlns="http://www.w3.org/2000/svg">
{props.quad && (
<polygon xmlns="http://www.w3.org/2000/svg"
points={getPointsData(props.quad)}
/>
)}
</svg>
);
},
});
</script>
<style scoped>
#overlay {
left: 0;
top: 0;
width: 100%;
height: 100%;
}
.fixed {
position: fixed;
}
.absolute {
position: absolute;
}
#overlay polygon {
fill: rgba(85,240,40,0.5);
stroke: green;
stroke-width: 2px;
}
</style>
Here, we are using JSX/TSX for Vue, so we have to install the @vitejs/plugin-vue-jsx plugin:
npm install --save-dev @vitejs/plugin-vue-jsx
Then, update vite.config.ts
:
import legacy from '@vitejs/plugin-legacy'
import vue from '@vitejs/plugin-vue'
import vueJsx from '@vitejs/plugin-vue-jsx'
import { defineConfig } from 'vite'

export default defineConfig({
  plugins: [
    vue(),
    vueJsx(),
    legacy()
  ],
})
Start Document Scanning in the Home Page
- Initialize Document Normalizer’s license when the page is mounted. You can apply for a license here.

  const initialized = ref<boolean>(false);
  onMounted(async () => {
    try {
      await DocumentNormalizer.initLicense({license:"LICENSE-KEY"});
    } catch(error) {
      alert(error);
    }
    initialized.value = true;
  });
- Add the following to HomePage.vue’s template to include the document scanner component.

  <div :class="'footer'+(mode!='normal'?' hidden':'')">
    <button class="shutter-button round" @click="startScanning">Scan</button>
  </div>
  <div class="scanner fullscreen" v-if="mode==='scanning'">
    <DocumentScanner @on-scanned="onScanned" @on-stopped="onStopped"></DocumentScanner>
  </div>
- Add code to control the scanning and receive the scanning result. We have to set the background to transparent so that the native camera view behind the webview is visible.

  const mode = ref<"scanning"|"cropping"|"normal">("normal");
  const startScanning = () => {
    document.documentElement.style.setProperty('--ion-background-color', 'transparent');
    mode.value = "scanning";
  }
  const onStopped = () => {
    mode.value = "normal";
    document.documentElement.style.setProperty('--ion-background-color', ionBackground);
  }
  const onScanned = (blob:Blob,path:string|undefined,results:DetectedQuadResultItem[]) => {}
Use Cropper to Adjust the Detected Polygon
Next, let’s use the image cropper web component to adjust the detected polygon of the scanned document.
- Add the image cropper web component through CDN by adding the following to index.html.

  <script type="module">
    import { defineCustomElements } from 'https://cdn.jsdelivr.net/npm/image-cropper-component/dist/esm/loader.js';
    defineCustomElements();
  </script>
- Update vite.config.ts to treat image-cropper as a custom element.

  vue({
    template: {
      compilerOptions: {
        // treat image cropper as a custom element
        isCustomElement: (tag) => tag.includes('image-cropper')
      }
    }
  }),
- Add the component in the template.

  <div :class="'cropper fullscreen'+(mode!='cropping'?' hidden':'')">
    <image-cropper :img="img" v-on:canceled="onCanceled" v-on:confirmed="onConfirmed"></image-cropper>
  </div>
- Display the photo taken and set the detected polygon for the cropper in the onScanned event.

  const onScanned = (blob:Blob,path:string|undefined,results:DetectedQuadResultItem[]) => {
    photoPath = path;
    document.documentElement.style.setProperty('--ion-background-color', ionBackground);
    const url = URL.createObjectURL(blob);
    const image = document.createElement("img");
    image.src = url;
    image.onload = function(){
      img.value = image;
      if (results.length>0) {
        const item = results[0];
        const quadItem:Quad = {points:item.location.points};
        document.querySelector("image-cropper")!.quad = quadItem;
        console.log(quadItem);
      }
    }
    console.log(results);
    mode.value = "cropping";
  }
- Get the cropped image if the result is confirmed.

  const scannedImages = ref<string[]>([]);
  const onCanceled = () => {
    mode.value = "normal";
  }
  const onConfirmed = () => {
    mode.value = "normal";
    loadCroppedImage();
  }
  const loadCroppedImage = async () => {
    const cropper = document.querySelector("image-cropper");
    if (cropper) {
      const quad = await cropper.getQuad();
      const quadItem:any = quad;
      quadItem.area = 0;
      let response;
      if (Capacitor.isNativePlatform()) {
        if (photoPath) {
          response = await DocumentNormalizer.normalize({path:photoPath,quad:quadItem,template:"NormalizeDocument_Color",includeBase64:true});
        }
      }else{
        response = await DocumentNormalizer.normalize({source:cropper.img,quad:quadItem,template:"NormalizeDocument_Color",includeBase64:true});
      }
      if (response) {
        let base64 = response.result.base64;
        if (base64) {
          if (!base64.startsWith("data")) {
            base64 = "data:image/jpeg;base64," + base64;
          }
          const newList:string[] = [];
          for (let index = 0; index < scannedImages.value.length; index++) {
            const element = scannedImages.value[index];
            newList.push(element);
          }
          newList.push(base64);
          scannedImages.value = newList;
        }
      }
    }
  }
Display Scanned Images
Display scanned images in a viewer.
Template:
<div class="documentViewer" ref="viewer">
<div class="image" v-for="(dataURL,index) in scannedImages" :key="index" >
<img :src="dataURL" alt="scanned" />
</div>
</div>
CSS:
.documentViewer {
width: 100%;
height: calc(100% - 50px);
display: grid;
grid-template-columns: repeat(auto-fill, 48%);
grid-row-gap: 10px;
grid-column-gap: 4%;
overflow: auto;
}
.image {
display: flex;
align-items: center;
justify-content: center;
margin: 20px;
height: 200px;
border: 1px solid gray;
}
.image img {
width: 100%;
height: 100%;
object-fit: contain;
}
Download the Images as PDF
Add a button to the toolbar to save the images as PDF.
Template:
<ion-toolbar>
<ion-title>Docs Scan</ion-title>
<ion-buttons slot="primary">
<ion-button @click="saveImages">
<ion-icon :icon="save"></ion-icon>
</ion-button>
</ion-buttons>
</ion-toolbar>
JavaScript:
const saveImages = async () => {
if (!viewer.value) {
return;
}
const images = viewer.value.getElementsByTagName("img");
if (images.length === 0) {
alert("No images");
return;
}
let orientation:"p" | "portrait" | "l" | "landscape" | undefined = "p";
if (images[0].naturalWidth>images[0].naturalHeight) {
orientation = "l";
}
const options:jsPDFOptions = {orientation:orientation,unit: "px", format: [images[0].naturalWidth, images[0].naturalHeight]};
const doc = new jsPDF(options);
for (let index = 0; index < images.length; index++) {
const image = images[index];
if (index > 0) {
if (image.naturalWidth>image.naturalHeight) {
orientation = "l";
}else{
orientation = "p";
}
doc.addPage([image.naturalWidth,image.naturalHeight],orientation);
}
doc.addImage(image, 0, 0, image.naturalWidth, image.naturalHeight);
}
if (Capacitor.isNativePlatform()) {
const data = doc.output("datauristring");
const fileName = "scanned.pdf";
const writingResult = await Filesystem.writeFile({
path: fileName,
data: data,
directory: Directory.Cache
});
Share.share({
title: fileName,
text: fileName,
url: writingResult.uri,
});
}else{
doc.save("scanned.pdf");
}
}
All right, we’ve now completed the demo.
Add Progressive Web App Support
We can add Progressive Web App (PWA) support for the app so that we can install it like an app and use it offline.
We need to add a service worker and a Web manifest. While it’s possible to add both of these to an app manually, we can use the Vite plugin to conveniently add them.
- Run the following to install the PWA plugin for an Ionic Vue project:

  npm install -D vite-plugin-pwa
- Next, update your vite.config.js or vite.config.ts file and add vite-plugin-pwa:

  import { defineConfig } from 'vite';
  import vue from '@vitejs/plugin-vue';
  import { VitePWA } from 'vite-plugin-pwa';
  export default defineConfig({
    plugins: [vue(), VitePWA({ registerType: 'autoUpdate' })],
  });
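Beyond offline caching, the plugin can also generate the Web manifest that controls the install experience (app name, icons, colors). Below is a sketch of passing an inline manifest to VitePWA; the names, colors, and icon file paths are placeholder assumptions you would replace with your own assets:

```typescript
// vite.config.ts — a sketch of VitePWA with an inline manifest.
// The name, colors, and icon files below are hypothetical placeholders.
import { defineConfig } from 'vite';
import vue from '@vitejs/plugin-vue';
import { VitePWA } from 'vite-plugin-pwa';

export default defineConfig({
  plugins: [
    vue(),
    VitePWA({
      registerType: 'autoUpdate',
      manifest: {
        name: 'Document Scanner',
        short_name: 'DocScan',
        theme_color: '#3880ff',
        background_color: '#ffffff',
        display: 'standalone',
        icons: [
          // These icon files are assumed to exist in the public/ folder.
          { src: 'pwa-192x192.png', sizes: '192x192', type: 'image/png' },
          { src: 'pwa-512x512.png', sizes: '512x512', type: 'image/png' },
        ],
      },
    }),
  ],
});
```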
Turn the App to an Android or iOS App
With Capacitor, we can turn the app into an Android or iOS app.
- Add the platforms.

  ionic cap add android
  ionic cap add ios

- Sync files to the native projects.

  ionic cap sync

- Run the app.

  ionic cap run android
  ionic cap run ios
Native Quirks
There are some native quirks we have to handle.
- Camera permission.

  For Android, add the following to AndroidManifest.xml.

  <uses-permission android:name="android.permission.CAMERA" />

  For iOS, add the following to Info.plist.

  <key>NSCameraUsageDescription</key>
  <string>For document scanning</string>
- Safe area.

  The fullscreen scanner and cropper may be blocked by the status bar. We can set their top position to env(safe-area-inset-top).

  .fullscreen {
    position: absolute;
    left: 0;
    top: 0;
    top: env(safe-area-inset-top);
    width: 100%;
    height: 100%;
    height: calc(100% - env(safe-area-inset-top));
  }
Source Code
Check out the demo’s source code to have a try:
https://github.com/tony-xlh/Ionic-Vue-Document-Scanner
Disclaimer:
The wrappers and sample code on Dynamsoft Codepool are community editions, shared as-is and not fully tested. Dynamsoft is happy to provide technical support for users exploring these solutions but makes no guarantees.