Which Barcode Scanner SDK Is Most Accurate? Dynamsoft vs Scandit vs Scanbot vs Strich — 83 Real-World Images

This is Part 3 of our SDK benchmark series. Part 1 compared Dynamsoft and Scandit on rotated barcodes, where Dynamsoft achieved up to 12.9% higher accuracy. Part 2 tested skewed DataMatrix codes, where Dynamsoft found 51% more codes. Part 3 broadens the comparison to four SDKs across Dynamsoft’s full open real-world test dataset.

On this static-image dataset, Dynamsoft Barcode Reader detected 428 total barcodes and 292 unique codes — the highest counts across all four SDKs — while averaging 278 ms per image, the fastest of the four. Strich found 58 total, Scanbot 242, and Scandit 178. Across all four SDKs combined, 337 unique barcodes were discoverable: Dynamsoft found 86.6% of them.

The previous benchmarks in this series focused on specific challenges — rotation and perspective distortion — against a single competitor. This benchmark takes a different approach: use Dynamsoft’s own publicly available barcode test dataset (83 images spanning 16 categories of real-world difficulty) and run all four major commercial barcode SDKs against it cold, with default configurations, no pre-processing, and no tuning.

Scope note: This test feeds static images to each SDK’s decode API in a web browser. It measures image-decoding performance only. Real-time camera scanning involves additional components — frame acquisition, preview rendering, autofocus, and SDK-side stream optimization — that each vendor tunes independently. The findings here should not be extrapolated directly to camera-mode scenarios.

Try the Dynamsoft Barcode Reader online demo to upload any image and see results immediately. Or start a free 30-day trial to integrate it into your own application.

What You’ll Find in This Report

  • Aggregate detection results across all 83 images: total barcodes, unique barcodes, speed
  • Per-category analysis across air travel, batch scanning, low light, shadows, crumpled codes, and more
  • A detailed look at the batch-scanning gap: 68 barcodes (Dynamsoft) vs 12 (Scanbot), 10 (Scandit), and 1 (Strich) from one image
  • Speed comparison: Dynamsoft at 278 ms/image versus Strich (646 ms), Scanbot (790 ms), and Scandit (11,210 ms)
  • Edge cases where competing SDKs detected barcodes that Dynamsoft missed — there are some

Key Takeaways

  • On this 83-image static dataset, Dynamsoft detected 428 total barcodes and 292 unique barcodes — more than any other SDK tested.
  • Scanbot found 242 total, Scandit 178, and Strich 58. Strich is architecturally limited to one barcode per call by design, making direct comparison of its totals with multi-barcode SDKs somewhat misleading.
  • Dynamsoft averaged 278 ms per image in the browser — the lowest of the four. Strich averaged 646 ms, Scanbot 790 ms, and Scandit 11,210 ms under default configuration.
  • The largest single gap is on high-density batch images: one shelf image yielded 68 barcodes from Dynamsoft, 12 from Scanbot, 10 from Scandit, and 1 from Strich.
  • No SDK is perfect: Scanbot and Scandit each detected one Code 128 on a shadow-obstructed image that Dynamsoft and Strich both missed.
  • These results apply to static image decoding via the web SDK. Camera-mode and real-time stream performance are separate concerns not covered by this test.

Common Developer Questions

Which barcode SDK is most accurate on static real-world images via web SDK?

Based on this particular benchmark — 83 static images, four web SDKs, default settings, browser environment — Dynamsoft Barcode Reader detected the most total barcodes (428) and the most unique barcodes (292). Whether the same ranking holds for camera-mode or native SDK deployments would require a separate evaluation.

Strich is a web-focused SDK primarily designed for real-time single-barcode scanning in live camera streams. Its single-barcode-per-call architecture is a known design constraint, which explains why it returns 1 result on images containing 68 or 94 codes. Comparing its totals directly to multi-barcode SDKs on a batch dataset is not entirely fair to Strich’s intended use case. For what this benchmark does measure — decoding static images via a browser API — Strich also returned no results on PDF417 and some shadow-obstructed images, which suggests limitations beyond just the multi-barcode constraint. Teams should evaluate Strich separately on camera-stream scenarios where it is purpose-built.

Why is Scandit’s average processing time so high in this benchmark?

Scandit’s web SDK averaged 11,210 ms per image under default configuration in the browser, including a 26-second processing time on a single batch-scanning image containing many small codes. This likely reflects the SDK’s default exhaustive scanning mode on large images. Speed-accuracy trade-offs can sometimes be tuned, but default settings are what most integration teams encounter first.

Is the test dataset public and reproducible?

Yes. All 83 images are freely downloadable from the Dynamsoft barcode test sheet (linked in the methodology section below). The dataset covers air travel, low-light conditions, strong-light conditions, shadows, batch warehouse scanning, crumpled barcodes, dense barcodes, healthcare labels, Data Matrix codes, retail products, and more. The benchmark code is not publicly released; only the image dataset is available for download.

Benchmark Methodology: How We Tested Four Barcode SDKs

Test Dataset

The 83-image dataset is sourced from Dynamsoft’s public barcode test sheet, covering 16 categories of real-world barcode difficulty:

| Category | Images | Barcode Types |
| Air Travel | 2 | ITF, PDF417 |
| Barcodes in Low Light | 1 | Code 128 |
| Barcodes in Strong Light | 4 | EAN-13, QR Code |
| Barcode with Shadow | 4 | UPC-A, QR Code, Code 128 |
| Batch Scanning | 2 | UPC-A, QR Code (high-density) |
| Crumpled Barcodes | 5 | EAN-13, Code 128 |
| Custom Scan Parameters | 12 | Mixed |
| DataMatrix | 15 | Data Matrix |
| Dense Barcodes | 3 | Mixed |
| Healthcare | 2 | Mixed |
| Incomplete Barcodes | 2 | Mixed |
| Multiple Barcode Scanning | 16 | Mixed multi-symbology |
| Off-Screen | 2 | Mixed |
| Poorly Printed | 2 | Code 128, Code 39 |
| Retail | 3 | EAN-13, UPC-A |
| Tiny Barcodes | 4 | Mixed |

SDKs Under Test

| SDK | Platform |
| Dynamsoft Barcode Reader | Web (browser, JavaScript SDK) |
| Strich.io | Web (browser, JavaScript SDK) |
| Scanbot | Web (browser, JavaScript SDK) |
| Scandit | Web (browser, JavaScript SDK) |

All four SDKs were tested in a web browser with default configurations: no custom tuning, no pre-processing, and no SDK-specific optimizations. Test images were fed to each SDK’s standard decode API, and timing was measured per image.
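The measurement loop can be sketched as follows. This is a hypothetical harness, not the benchmark’s actual code (which is not released): `decodeImage` and `benchmark` are illustrative names, and the stub stands in for whatever image-decoding call each vendor’s SDK exposes.

```javascript
// Hypothetical per-image benchmark harness mirroring the methodology above.
// `decodeImage` stands in for an SDK's image-decoding API (each vendor's
// real call differs); it is stubbed here so the sketch is runnable.
async function decodeImage(image) {
  // Stub: pretend every image yields exactly one barcode payload.
  return [{ text: `payload-${image}` }];
}

async function benchmark(images, decode) {
  const perImage = [];
  let totalBarcodes = 0;
  for (const image of images) {
    const start = Date.now();
    const results = await decode(image); // default config, no pre-processing
    const ms = Date.now() - start;
    totalBarcodes += results.length;
    perImage.push({ image, found: results.length, ms });
  }
  const totalMs = perImage.reduce((sum, r) => sum + r.ms, 0);
  return { perImage, totalBarcodes, totalMs, avgMs: totalMs / images.length };
}
```

Swapping the stub for a real SDK call (and running once per SDK) yields the total-time and average-time figures reported below.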

What this test does not cover: real-time camera stream performance, native (non-browser) SDK variants, or any vendor-specific camera-mode optimizations. In industrial and logistics deployments, camera-mode scanning — where the SDK processes a continuous video feed rather than discrete images — is equally common. All four vendors provide camera-mode APIs with their own frame-selection, autofocus, and low-light optimizations. A separate benchmark would be needed to evaluate those paths.

Barcode SDK Accuracy Results: All 83 Images Compared

Overall Summary

| SDK | Total Barcodes | Unique Barcodes | Total Time | Avg Time/Image |
| Dynamsoft | 428 | 292 | 23,051 ms | 278 ms |
| Strich.io | 58 | 55 | 53,617 ms | 646 ms |
| Scanbot | 242 | 182 | 65,587 ms | 790 ms |
| Scandit | 178 | 139 | 930,452 ms | 11,210 ms |

Across all four SDKs combined, 337 unique barcodes were found. Dynamsoft accounted for 292 of them — 86.6% of everything that was discoverable by any SDK.

[Figure: Bar chart comparing total barcodes detected across four SDKs: Dynamsoft 428, Scanbot 242, Scandit 178, Strich 58]

Why Total vs. Unique Both Matter

“Total barcodes” counts every decode, including duplicates within multi-barcode images. “Unique barcodes” counts distinct payload values. On batch-scanning images, Dynamsoft decodes far more individual codes per image, which drives both counts higher. On single-code images, both metrics converge. The 337 unique-across-all-SDKs number is the most useful upper bound: it represents the set of codes that at least one SDK found, and Dynamsoft covers 86.6% of that set without any tuning.
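As a concrete illustration of these three definitions (using hypothetical payload lists, not data from the benchmark), total, unique, and cross-SDK coverage can be computed like this:

```javascript
// "Total" counts every decode, duplicates included; "unique" counts
// distinct payload values; "coverage" is the share of the union of all
// SDKs' unique payloads that one SDK found. Names are illustrative.
function totalCount(payloads) {
  return payloads.length;
}

function uniqueCount(payloads) {
  return new Set(payloads).size;
}

function coverage(sdkPayloads, allSdkPayloads) {
  const union = new Set(allSdkPayloads.flat());
  const found = new Set(sdkPayloads);
  let hits = 0;
  for (const code of union) {
    if (found.has(code)) hits += 1;
  }
  return hits / union.size;
}
```

By these definitions, 292 unique codes against the 337-code union across all four SDKs gives 292 / 337 ≈ 86.6%.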

Category-by-Category Analysis

Batch Scanning: The Widest Gap

The two batch-scanning images expose the largest performance difference in the dataset.

batch-scanning-1.jpg (a warehouse shelf of UPC-A codes):

| SDK | Barcodes Found | Time |
| Dynamsoft | 68 | 4,248 ms |
| Strich.io | 1 | 708 ms |
| Scanbot | 12 | 3,041 ms |
| Scandit | 10 | 26,053 ms |

batch-scanning-2.jpg (a tray of QR codes):

| SDK | Barcodes Found | Time |
| Dynamsoft | 94 | 1,057 ms |
| Strich.io | 1 | 592 ms |
| Scanbot | 19 | 2,980 ms |
| Scandit | 10 | 21,022 ms |

For applications that process static images containing multiple barcodes — uploaded photos, document scans, archived images — the gap between 94 and 1 is a significant practical difference. Note that in industrial warehouse and logistics deployments, camera-mode scanning is also widely used; in those workflows, the SDK controls frame selection and can accumulate codes across multiple frames, which changes the comparison dynamic substantially.

Air Travel Barcodes

Air travel labels combine ITF (luggage) and PDF417 (boarding passes) in a single test category.

  • air-travel-luggage-bag.jpg (ITF barcode): Dynamsoft, Scanbot, and Scandit all found 2 barcodes; Strich found 0.
  • air-travel-ticket.jpg (PDF417 boarding pass): Dynamsoft and Scanbot found the barcode; Strich and Scandit returned nothing.

PDF417 is the standard for boarding-pass data (the BCBP format). For developers working on aviation or travel applications, it is worth noting that both Strich and Scandit missed this image — at least under the web SDKs at the default settings tested here.

Shadow and Lighting Challenges

These categories test whether SDKs can handle non-ideal capture conditions.

barcodes-in-strong-light-1.jpg (EAN-13 in bright light): Dynamsoft, Strich, and Scandit succeeded; Scanbot failed.

barcode-with-shadow-1.jpg (UPC-A with Code 128 partially in shadow):

| SDK | Barcodes Found |
| Dynamsoft | 1 (UPC-A only) |
| Strich.io | 0 |
| Scanbot | 2 (Code 128 + UPC-A) |
| Scandit | 2 (Code 128 + UPC-A) |

This is one of the images where Dynamsoft was not the top scorer. Both Scanbot and Scandit detected the Code 128 that Dynamsoft missed, while Strich detected nothing. Benchmark integrity requires reporting this. Across the shadow category overall, Dynamsoft and competing SDKs (excluding Strich) performed comparably on simpler images; the difference emerges on the batch images where the dataset grows in complexity.

barcodes-in-strong-light-3.jpg (QR code with Chinese Unicode content): Dynamsoft, Scanbot, and Scandit all read it successfully; Strich returned nothing.

Crumpled and Damaged Barcodes

For crumpled-barcodes-1.jpg and crumpled-barcodes-2.jpg, all four SDKs successfully read the codes. crumpled-barcodes-3.jpg and crumpled-barcodes-4.jpg returned zero results from every SDK — evidence that the damage in those images is beyond what any of the four handles at default settings.

Multi-Barcode Scanning

The multiple-barcode-scanning category contains the most complex images: mixed symbologies (QR, Code 128, Data Matrix, EAN, PDF417) across 16 images. This is where the 428 vs 242 vs 178 vs 58 aggregate gap is largely built. Dynamsoft consistently decoded more codes per image across mixed-symbology scenes, particularly where codes appear at varying sizes or partial occlusion is present.

DataMatrix

The 15 DataMatrix images test standalone DM codes independent of the skewed-DataMatrix problem covered in Part 2 of this series. Dynamsoft’s lead on DataMatrix detection remains consistent across the standard test images in this dataset, as does its speed advantage on these typically smaller files.

Speed Analysis: Processing Time per Image

| SDK | Avg Time/Image | Total Barcodes Found |
| Dynamsoft | 278 ms | 428 |
| Strich.io | 646 ms | 58 |
| Scanbot | 790 ms | 242 |
| Scandit | 11,210 ms | 178 |

On this dataset, Dynamsoft completed the 83 images in 23 seconds total. Scandit’s average of 11,210 ms includes its two slowest results, on the high-density batch images (26 s and 21 s respectively), which likely triggered its default exhaustive-search mode on large files. Strich and Scanbot were within the same order of magnitude as Dynamsoft on most images; for single-code images the average difference is modest.

These timings are specific to the browser environment and image types in this dataset. Camera-mode latency — where the SDK processes video frames in real time — is a different metric and is not measured here.
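The averages above follow directly from the total times in the overall summary table divided by the 83-image count; a quick consistency check:

```javascript
// Recompute each SDK's average per-image time from its total time over
// the 83-image dataset (totals taken from the overall summary table).
const IMAGE_COUNT = 83;
const totals = [
  { sdk: 'Dynamsoft', totalMs: 23051 },
  { sdk: 'Strich.io', totalMs: 53617 },
  { sdk: 'Scanbot', totalMs: 65587 },
  { sdk: 'Scandit', totalMs: 930452 },
];
const averages = totals.map((t) => ({
  sdk: t.sdk,
  avgMs: Math.round(t.totalMs / IMAGE_COUNT),
}));
// averages: 278, 646, 790, and 11,210 ms respectively
```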

How to Choose a Barcode Scanner SDK Based on These Results

The findings below are scoped to static image decoding via web SDK at default settings. Camera-mode performance, native SDK performance, and tuned configurations can all yield different results — each vendor provides optimization knobs, and camera-mode APIs involve additional pipeline stages not captured here.

Static Image Processing (Document Scanners, Upload Pipelines)

For applications where the input is a discrete image — a scanned document, a photo upload, a batch archive — the dataset results are most directly applicable. Dynamsoft’s 86.6% coverage of all discoverable unique codes on a diverse 83-image set, at 278 ms per image, provides a strong out-of-the-box baseline. Other SDKs may close the gap with tuning.

Real-Time Camera Scanning

This benchmark does not evaluate camera-mode performance. In retail, logistics, and industrial settings, real-time scanning over a video stream is a common (sometimes primary) deployment mode. Each SDK vendor optimizes its camera pipeline differently — frame selection strategies, resolution scaling, autofocus integration, and multi-frame accumulation all affect real-world detection rates. Teams building camera-based workflows should run a dedicated live-capture evaluation against their target hardware and barcode types.

Multi-Barcode Detection

The batch-scanning results show a meaningful difference for applications that need to decode many barcodes from a single image (warehouse shelf photos, medical tray scans, document archives). Dynamsoft found 68 and 94 codes on the two batch images; Scanbot found 12 and 19. If multi-barcode detection from static images is a core requirement, this dataset provides a reproducible starting reference.

PDF417 Support

Dynamsoft and Scanbot decoded the boarding-pass PDF417 image; Strich and Scandit (web SDK, default config) returned nothing. For applications that must read PDF417 — boarding passes, driver’s licenses, shipping labels — this is a relevant data point, though it reflects a single test image.

Processing Speed

Scandit’s 11-second average on this dataset is an outlier among the four SDKs, driven in part by batch images with many small codes. On simpler single-code images all four SDKs are broadly comparable in speed. If Scandit’s default exhaustive mode is a concern for your image types, its SDK offers configurable search strategies.

Comparison to the Series So Far

| Benchmark | Scope | Dynamsoft Advantage |
| Part 1: Rotated barcodes | 186 images, 2 SDKs | +9.7% avg accuracy (up to +12.9% at 135°) |
| Part 2: Skewed DataMatrix | 42 images, 2 SDKs | +50.9% more codes detected |
| Part 3: Open real-world dataset | 83 images, 4 SDKs | +77% over Scanbot, +140% over Scandit, +638% over Strich |

Across all three benchmarks, Dynamsoft has scored highest on total codes found. The advantage is most pronounced on geometrically complex inputs (skewed DataMatrix, high-density batches) and narrows on simple single-code images, where the SDKs converge. All three benchmarks use static images; camera-mode performance has not been evaluated in this series.
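The Part 3 advantage figures in the table are plain relative differences in total barcodes detected, which can be reproduced from the totals reported earlier:

```javascript
// Relative advantage in total detections: (ours - theirs) / theirs,
// rounded to a whole percent. 428 is Dynamsoft's total; 242, 178, and
// 58 are Scanbot's, Scandit's, and Strich's respectively.
function relativeAdvantagePct(ours, theirs) {
  return Math.round(((ours - theirs) / theirs) * 100);
}

const vsScanbot = relativeAdvantagePct(428, 242); // +77
const vsScandit = relativeAdvantagePct(428, 178); // +140
const vsStrich = relativeAdvantagePct(428, 58);   // +638
```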

Conclusion

Tested against 83 publicly available real-world images in a web browser at default settings:

  • Dynamsoft detected 428 total and 292 unique barcodes — the highest in both categories.
  • Dynamsoft averaged 278 ms per image — the lowest processing time of the four.
  • Scanbot was the closest competitor on detection count (242 total). Scandit found 178; Strich, limited to single-barcode output by design, found 58.
  • No SDK decoded every image correctly. This benchmark does not determine which SDK is universally “best” — it reports what each one does on this specific dataset, in this specific environment.

What this test does not answer: how each SDK performs in real-time camera mode, with native (non-browser) libraries, or with per-SDK tuning applied. Camera-mode scanning is a significantly different evaluation path, and all four vendors invest in optimizing it. If your deployment is primarily camera-based, a separate live-capture benchmark on your target hardware is the appropriate next step.

For static image decoding through a web SDK at default settings, Dynamsoft Barcode Reader is the highest-scoring solution on this dataset across both detection count and processing speed.

The test image dataset is freely available at the link in the methodology section above. To evaluate Dynamsoft Barcode Reader in your own application, start a free 30-day trial — no credit card required.