# Getting Started with iOS

- System Requirements
- Build Your First Application
## System Requirements

- Supported OS: iOS 11 or higher (iOS 13 or higher recommended).
- Supported ABI: arm64 and x86_64.
- Development Environment: Xcode 13 or above (Xcode 14.1+ recommended).
## Build Your First Application

This guide walks you through creating a HelloWorld app that normalizes documents from a camera video input.

Note:

- Xcode 14.0 is used in this guide.
- You can get the source code of the HelloWorld app from the following link.
### Create a New Project

1. Open Xcode and select **Create a new project**.
2. Select **iOS -> App** for your application.
3. Input your product name (HelloWorld), interface (Storyboard), and language (Objective-C/Swift).
4. Click the **Next** button and select the location to save the project.
5. Click the **Create** button to finish.
### Add the SDK

There are three ways to add the SDK to your project: manually, via CocoaPods, or via Swift Package Manager.
#### Add the Frameworks Manually

1. Download the SDK package from the Dynamsoft website. After unzipping, you can find the following xcframeworks under the Dynamsoft\Frameworks directory:

   | File | Description | Mandatory/Optional |
   | ---- | ----------- | ------------------ |
   | DynamsoftDocumentNormalizer.xcframework | The Dynamsoft Document Normalizer module extracts structural information from document images, including document boundaries, shadow areas, and text areas. It uses this information to generate normalized document images through processes such as deskewing, shadow removal, and distortion correction. | Mandatory |
   | DynamsoftCore.xcframework | The Dynamsoft Core module lays the foundation for Dynamsoft SDKs based on the DCV (Dynamsoft Capture Vision) architecture. It encapsulates the basic classes, interfaces, and enumerations shared by these SDKs. | Mandatory |
   | DynamsoftCaptureVisionRouter.xcframework | The Dynamsoft Capture Vision Router module is the cornerstone of the DCV architecture. It focuses on coordinating batch image processing and provides APIs for setting up image sources and result receivers, configuring workflows with parameters, and controlling processes. | Mandatory |
   | DynamsoftImageProcessing.xcframework | The Dynamsoft Image Processing module facilitates digital image processing and supports operations for other modules, including the Barcode Reader, Label Recognizer, and Document Normalizer. | Mandatory |
   | DynamsoftLicense.xcframework | The Dynamsoft License module manages the licensing aspects of Dynamsoft SDKs based on the DCV architecture. | Mandatory |
   | DynamsoftCameraEnhancer.xcframework | The Dynamsoft Camera Enhancer module controls the camera, transforming it into an image source for the DCV architecture through an ISA implementation. It also enhances image quality during acquisition and provides basic viewers for user interaction. | Mandatory |
   | DynamsoftUtility.xcframework | The Dynamsoft Utility module defines auxiliary classes, including the ImageManager, and implementations of the CRF (Captured Result Filter) and ISA (Image Source Adapter), which are shared by all Dynamsoft SDKs based on the DCV architecture. | Optional |

2. Drag and drop the above six xcframeworks (seven if the Utility framework is included) into your Xcode project. Make sure to check **Copy items if needed** and **Create groups** to properly copy the frameworks into your project's folder.

3. Click on the project settings, then go to **General -> Frameworks, Libraries, and Embedded Content**. Set the **Embed** field to **Embed & Sign** for all included xcframeworks.
#### Add the Frameworks via CocoaPods

1. Add the frameworks to your Podfile:

   ```ruby
   target 'HelloWorld' do
     use_frameworks!
     pod 'DynamsoftDocumentNormalizerBundle', '2.2.1101'
   end
   ```

2. Execute the pod command to install the frameworks and generate the workspace (HelloWorld.xcworkspace):

   ```bash
   pod install
   ```
#### Add the xcframeworks via Swift Package Manager

1. In your Xcode project, go to **File -> Add Packages**.
2. In the top-right section of the window, search "https://github.com/Dynamsoft/document-normalizer-spm".
3. Select document-normalizer-spm, then click **Add Package**.
4. Check all the frameworks and add them.
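If your project is itself an SPM package and you manage dependencies through a Package.swift manifest rather than the Xcode UI, the same repository can be declared there. The sketch below is illustrative only: the product name, target layout, and version requirement are assumptions (the version mirrors the CocoaPods example above); check the repository's releases for the actual products and tags.

```swift
// swift-tools-version:5.5
// Illustrative Package.swift sketch. The product name "DynamsoftDocumentNormalizer",
// the target name, and the version requirement are assumptions, not confirmed values.
import PackageDescription

let package = Package(
    name: "HelloWorld",
    dependencies: [
        // Pull the SDK from the same repository used in the Xcode UI flow above.
        .package(url: "https://github.com/Dynamsoft/document-normalizer-spm", from: "2.2.1101")
    ],
    targets: [
        .target(
            name: "HelloWorld",
            dependencies: [
                .product(name: "DynamsoftDocumentNormalizer", package: "document-normalizer-spm")
            ]
        )
    ]
)
```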
### Initialize License

Initialize the license first. It is suggested to initialize the license in the AppDelegate file.

**Objective-C:**

```objc
// Import the DynamsoftLicense module to initialize the license.
#import <DynamsoftLicense/DynamsoftLicense.h>

// Add LicenseVerificationListener to the interface.
@interface AppDelegate ()<LicenseVerificationListener>
@end

@implementation AppDelegate

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // Override point for customization after application launch.
    [DSLicenseManager initLicense:@"Put your license key here." verificationDelegate:self];
    return YES;
}

// Implement the callback method of LicenseVerificationListener.
- (void)onLicenseVerified:(BOOL)isSuccess error:(NSError *)error {
    // Add your code to run when the license server returns.
}
...
@end
```

**Swift:**

```swift
// Import the DynamsoftLicense module to initialize the license.
import DynamsoftLicense

// Add LicenseVerificationListener to the class.
class AppDelegate: UIResponder, UIApplicationDelegate, LicenseVerificationListener {
    var window: UIWindow?

    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Override point for customization after application launch.
        LicenseManager.initLicense("Put your license key here.", verificationDelegate: self)
        return true
    }

    // Implement the callback method of LicenseVerificationListener.
    func onLicenseVerified(_ isSuccess: Bool, error: Error?) {
        // Add your code to run when the license server returns.
    }
    ...
}
```
Note:

- A network connection is required for the license to work.
- The license string here grants a time-limited trial license.
- You can request a 30-day trial license via the Request a Trial License link. An offline trial license is also available by contacting us.
### Main ViewController for Real-time Detection of Quads

In the main view controller, your app will scan documents via video streaming and display the detected quadrilateral area on the screen. First of all, import the headers in the ViewController file.

**Objective-C:**

```objc
#import <DynamsoftCore/DynamsoftCore.h>
#import <DynamsoftCaptureVisionRouter/DynamsoftCaptureVisionRouter.h>
#import <DynamsoftDocumentNormalizer/DynamsoftDocumentNormalizer.h>
#import <DynamsoftImageProcessing/DynamsoftImageProcessing.h>
#import <DynamsoftCameraEnhancer/DynamsoftCameraEnhancer.h>
```

**Swift:**

```swift
import DynamsoftCore
import DynamsoftCaptureVisionRouter
import DynamsoftDocumentNormalizer
import DynamsoftUtility
import DynamsoftCameraEnhancer
```
#### Get Prepared with the Camera Module

Create the instances of CameraEnhancer and CameraView.

**Objective-C:**

```objc
@property (nonatomic, strong) DSCameraEnhancer *dce;
@property (nonatomic, strong) DSCameraView *cameraView;
...
- (void)setUpCamera {
    // Create a camera view and add it as a subview of the current view.
    _cameraView = [[DSCameraView alloc] initWithFrame:self.view.bounds];
    [_cameraView setAutoresizingMask:UIViewAutoresizingFlexibleWidth];
    [self.view addSubview:_cameraView];
    [_cameraView addSubview:_captureButton];
    // Bind the camera enhancer with the camera view.
    _dce = [[DSCameraEnhancer alloc] init];
    [_dce setCameraView:_cameraView];
    // Additional step: Highlight the detected document boundary.
    DSDrawingLayer *layer = [_cameraView getDrawingLayer:DSDrawingLayerIdDDN];
    [layer setVisible:true];
    // You can enable the frame filter feature of Dynamsoft Camera Enhancer.
    //[_dce enableEnhancedFeatures:DSEnhancerFeatureFrameFilter];
}
```

**Swift:**

```swift
var cameraView: CameraView!
var dce: CameraEnhancer!
...
func setUpCamera() {
    // Create a camera view and add it as a subview of the current view.
    cameraView = .init(frame: view.bounds)
    cameraView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
    view.insertSubview(cameraView, at: 0)
    // Bind the camera enhancer with the camera view.
    dce = CameraEnhancer()
    dce.cameraView = cameraView
    // Additional step: Highlight the detected document boundary.
    let layer = cameraView.getDrawingLayer(DrawingLayerId.DDN.rawValue)
    layer?.visible = true
    // You can enable the frame filter feature of Dynamsoft Camera Enhancer.
    // dce.enableEnhancedFeatures(.frameFilter)
}
```
#### Initialize Capture Vision Router

Once the camera component is set up, declare and create an instance of CaptureVisionRouter and set its input to the CameraEnhancer object you created in the last step.

**Objective-C:**

```objc
@property (nonatomic, strong) DSCaptureVisionRouter *cvr;
...
- (void)setUpCvr {
    _cvr = [[DSCaptureVisionRouter alloc] init];
}
```

**Swift:**

```swift
var cvr: CaptureVisionRouter!

func setUpCvr() {
    cvr = CaptureVisionRouter()
}
```
Bind your CaptureVisionRouter instance with the created CameraEnhancer instance.

**Objective-C:**

```objc
- (void)setUpCvr {
    ...
    NSError *cvrError;
    [_cvr setInput:_dce error:&cvrError];
}
```

**Swift:**

```swift
func setUpCvr() {
    ...
    try? cvr.setInput(dce)
}
```
#### Set up Result Receiver

1. Add CapturedResultReceiver to your ViewController.

   **Objective-C:**

   ```objc
   @interface ViewController ()<DSCapturedResultReceiver>
   ```

   **Swift:**

   ```swift
   class ViewController: UIViewController, CapturedResultReceiver {
       ...
   }
   ```
2. Implement the onNormalizedImagesReceived method to receive the normalized images as the captured results.

   **Objective-C:**

   ```objc
   - (void)onNormalizedImagesReceived:(DSNormalizedImagesResult *)result {
       if (_implementCapture && result != nil && result.items[0].imageData != nil) {
           _implementCapture = false;
           ImageViewController *imageViewController = [[ImageViewController alloc] init];
           NSError *error;
           imageViewController.normalizedImage = [result.items[0].imageData toUIImage:&error];
           dispatch_async(dispatch_get_main_queue(), ^{
               [self.navigationController pushViewController:imageViewController animated:YES];
           });
       }
   }
   ```

   **Swift:**

   ```swift
   func onNormalizedImagesReceived(_ result: NormalizedImagesResult) {
       if let items = result.items, items.count > 0 {
           guard let data = items[0].imageData else { return }
           let resultView = ImageViewController()
           resultView.normalizedImage = try? data.toUIImage()
           if implementCapture {
               implementCapture = false
               DispatchQueue.main.async {
                   self.present(resultView, animated: true)
               }
           }
       }
   }
   ```
3. Add the result receiver to the CaptureVisionRouter.

   **Objective-C:**

   ```objc
   - (void)setUpCvr {
       ...
       NSError *cvrError;
       [_cvr addResultReceiver:self error:&cvrError];
       // Verify results across multiple frames to filter out unstable ones.
       DSMultiFrameResultCrossFilter *filter = [[DSMultiFrameResultCrossFilter alloc] init];
       [filter enableResultCrossVerification:DSCapturedResultItemTypeNormalizedImage isEnabled:true];
       [_cvr addResultFilter:filter error:&cvrError];
   }
   ```

   **Swift:**

   ```swift
   func setUpCvr() {
       ...
       try? cvr.addResultReceiver(self)
       // Verify results across multiple frames to filter out unstable ones.
       let filter = MultiFrameResultCrossFilter()
       filter.enableResultCrossVerification(.normalizedImage, isEnabled: true)
       try? cvr.addResultFilter(filter)
   }
   ```
4. Add a confirmCapture button to confirm the result.

   **Objective-C:**

   ```objc
   @property (nonatomic, strong) UIButton *captureButton;
   @property (nonatomic, assign) BOOL implementCapture;
   ...
   - (void)addCaptureButton {
       [self.view addSubview:self.captureButton];
   }

   - (UIButton *)captureButton {
       CGFloat screenWidth = [UIScreen mainScreen].bounds.size.width;
       CGFloat screenHeight = [UIScreen mainScreen].bounds.size.height;
       if (!_captureButton) {
           _captureButton = [UIButton buttonWithType:UIButtonTypeCustom];
           _captureButton.frame = CGRectMake((screenWidth - 150) / 2.0, screenHeight - 100, 150, 50);
           _captureButton.backgroundColor = [UIColor grayColor];
           _captureButton.layer.cornerRadius = 10;
           _captureButton.layer.borderColor = [UIColor darkGrayColor].CGColor;
           [_captureButton setTitle:@"Capture" forState:UIControlStateNormal];
           [_captureButton setTitleColor:[UIColor whiteColor] forState:UIControlStateNormal];
           [_captureButton addTarget:self action:@selector(confirmCapture) forControlEvents:UIControlEventTouchUpInside];
       }
       return _captureButton;
   }

   - (void)confirmCapture {
       _implementCapture = true;
   }
   ```

   **Swift:**

   ```swift
   var captureButton: UIButton!
   var implementCapture: Bool = false
   ...
   func addCaptureButton() {
       let w = UIScreen.main.bounds.size.width
       let h = UIScreen.main.bounds.size.height
       let safeAreaBottomHeight: CGFloat = UIApplication.shared.statusBarFrame.size.height > 20 ? 34 : 0
       captureButton = UIButton(frame: CGRect(x: w / 2 - 60, y: h - 100 - safeAreaBottomHeight, width: 120, height: 60))
       captureButton.setTitle("Capture", for: .normal)
       captureButton.backgroundColor = UIColor.green
       captureButton.addTarget(self, action: #selector(confirmCapture), for: .touchUpInside)
       DispatchQueue.main.async { [self] in
           view.addSubview(captureButton)
       }
   }

   @objc func confirmCapture() {
       implementCapture = true
   }
   ```
#### Configure the Methods viewDidLoad, viewWillAppear, and viewWillDisappear

**Objective-C:**

```objc
- (void)viewDidLoad {
    [super viewDidLoad];
    [self setUpCamera];
    [self setUpCvr];
    [self addCaptureButton];
}

- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    // Open the camera and start capturing with the preset document template.
    [_dce open];
    NSError *cvrError;
    [_cvr startCapturing:DSPresetTemplateDetectAndNormalizeDocument error:&cvrError];
}

- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    [_dce close];
}
```

**Swift:**

```swift
override func viewDidLoad() {
    super.viewDidLoad()
    setUpCamera()
    setUpCvr()
    addCaptureButton()
}

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    // Open the camera and start capturing with the preset document template.
    dce.open()
    try? cvr.startCapturing(PresetTemplate.detectAndNormalizeDocument.rawValue)
}

override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    dce.close()
}
```
#### Display the Normalized Image

1. Create a new UIViewController class ImageViewController.

2. Add a property normalizedImage to the header file of ImageViewController (Objective-C only).

   ```objc
   @property (nonatomic, strong) UIImage *normalizedImage;
   ```

3. Configure the ImageViewController to display the normalized image.

   **Objective-C:**

   ```objc
   #import "ImageViewController.h"
   #import <DynamsoftCameraEnhancer/DynamsoftCameraEnhancer.h>

   @interface ImageViewController ()
   @property (nonatomic, strong) UIImageView *imageView;
   @end

   @implementation ImageViewController

   - (void)viewDidLoad {
       [super viewDidLoad];
       [self setUpView];
   }

   - (void)setUpView {
       _imageView = [[UIImageView alloc] initWithFrame:self.view.bounds];
       [self.view addSubview:_imageView];
       [_imageView setContentMode:UIViewContentModeScaleAspectFit];
       [_imageView setImage:self.normalizedImage];
   }
   @end
   ```

   **Swift:**

   ```swift
   class ImageViewController: UIViewController {
       var normalizedImage: UIImage!
       var imageView: UIImageView!

       override func viewDidLoad() {
           super.viewDidLoad()
           setUpView()
       }

       func setUpView() {
           imageView = UIImageView(frame: view.bounds)
           imageView.contentMode = .scaleAspectFit
           view.addSubview(imageView)
           DispatchQueue.main.async { [self] in
               imageView.image = normalizedImage
           }
       }
   }
   ```
### Configure Camera Permissions

Add **Privacy - Camera Usage Description** to the Info.plist of your project to request camera permission. An easy way to do this is to open your project settings, go to **Info**, and add this privacy property to the iOS target properties list.
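Equivalently, you can open Info.plist as source code and add the raw key directly. A minimal sketch; the description string is only an example and should explain your app's actual use of the camera:

```xml
<!-- Raw key behind "Privacy - Camera Usage Description" (example description string). -->
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to scan and normalize documents.</string>
```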
### Additional Steps for iOS 12.x or Lower Versions

If your iOS version is 12.x or lower, please take the following additional steps:

1. Remove the methods application:didDiscardSceneSessions: and application:configurationForConnectingSceneSession:options: from your AppDelegate file.
2. Remove the SceneDelegate.swift file (SceneDelegate.h & SceneDelegate.m for Objective-C).
3. Remove the Application Scene Manifest entry from your Info.plist file.
4. Declare the window in your AppDelegate.swift file (AppDelegate.h for Objective-C).

   **Objective-C:**

   ```objc
   @interface AppDelegate : UIResponder <UIApplicationDelegate>

   @property (strong, nonatomic) UIWindow *window;

   @end
   ```

   **Swift:**

   ```swift
   import UIKit

   @main
   class AppDelegate: UIResponder, UIApplicationDelegate {
       var window: UIWindow?
   }
   ```
### Build and Run the Project

1. Select the device that you want to run your app on.
2. Run the project; your app will then be installed on your device.

Note:

- You can get the source code of the HelloWorld app from the following link.