PDF417.mobi SDK for Android


PDF417.mobi SDK for Android is an SDK that enables you to scan various barcodes in your app. You can simply integrate the SDK into your app by following the instructions below, and your app will be able to use the scanning feature for the supported barcode standards.

Using PDF417.mobi in your app requires a valid license key. You can obtain a trial license key by registering on the Microblink dashboard. After registering, you will be able to generate a license key for your app. The license key is bound to the package name of your app, so please make sure you enter the correct package name when asked.

See below for more information about how to integrate PDF417.mobi SDK into your app and also check latest [Release notes](Release notes.md).


Android PDF417.mobi integration instructions

The package contains an Android Archive (AAR) with everything you need to use the PDF417.mobi library. Besides the AAR, the package also contains a demo project with the following modules:

  • Pdf417MobiDemo shows how to use simple Intent-based API to scan single barcode.
  • Pdf417MobiDemoCustomUI demonstrates advanced integration within custom scan activity.
  • Pdf417MobiDirectAPIDemo demonstrates how to perform scanning of Android Bitmaps

The source code of all demo apps is provided to show you how to integrate the PDF417.mobi SDK into your app. You can use this source code and all resources as you wish: use the demo apps as a basis for creating your own app, or copy/paste code and resources from them into your app without even asking us for permission.

PDF417.mobi is supported on Android SDK version 10 (Android 2.3.3) or later.

The library contains one activity: Pdf417ScanActivity. It is responsible for camera control and recognition. You can also create your own scanning UI - you just need to embed RecognizerView into your activity and pass activity's lifecycle events to it and it will control the camera and recognition process. For more information, see Embedding RecognizerView into custom scan activity.

Quick Start

Quick start with demo app

  1. Open Android Studio.
  2. In Quick Start dialog choose Import project (Eclipse ADT, Gradle, etc.).
  3. In File dialog select Pdf417MobiDemo folder.
  4. Wait for the project to load. If Android Studio asks you to reload the project on startup, select Yes.

Integrating PDF417.mobi into your project using Maven

Maven repository for PDF417.mobi SDK is: http://maven.microblink.com. If you do not want to perform integration via Maven, simply skip to Android Studio integration instructions or Eclipse integration instructions.

Using gradle or Android Studio

In your build.gradle you first need to add PDF417.mobi maven repository to repositories list:

repositories {
	maven { url 'http://maven.microblink.com' }
}

After that, you just need to add PDF417.mobi as a dependency to your application (make sure transitive is set to true):

dependencies {
    compile('com.microblink:pdf417.mobi:6.0.1@aar') {
    	transitive = true
    }
}

If you plan to use ProGuard, add the following lines to your proguard-rules.pro:

-keep class com.microblink.** { *; }
-keepclassmembers class com.microblink.** { *; }
-dontwarn android.hardware.**
-dontwarn android.support.v4.**

Import Javadoc to Android Studio

The current version of Android Studio will not automatically import Javadoc from a Maven dependency, so you have to do that manually. To do that, follow these steps:

  1. In Android Studio project sidebar, ensure project view is enabled
  2. Expand External Libraries entry (usually this is the last entry in project view)
  3. Locate pdf417.mobi-6.0.1 entry, right click on it and select Library Properties...
  4. A Library Properties pop-up window will appear
  5. Click the second + button in bottom left corner of the window (the one that contains + with little globe)
  6. A window for defining the documentation URL will appear
  7. Enter following address: https://pdf417.github.io/pdf417-android/
  8. Click OK

Using android-maven-plugin

Android Maven Plugin v4.0.0 or newer is required.

Open your pom.xml file and add these directives as appropriate:

<repositories>
   	<repository>
       	<id>MicroblinkRepo</id>
       	<url>http://maven.microblink.com</url>
   	</repository>
</repositories>

<dependencies>
	<dependency>
		  <groupId>com.microblink</groupId>
		  <artifactId>pdf417.mobi</artifactId>
		  <version>6.0.1</version>
		  <type>aar</type>
  	</dependency>
</dependencies>

Android Studio integration instructions

  1. In Android Studio menu, click File, select New and then select Module.

  2. In new window, select Import .JAR or .AAR Package, and click Next.

  3. In File name field, enter the path to LibPdf417Mobi.aar and click Finish.

  4. In your app's build.gradle, add a dependency on LibRecognizer and appcompat-v7:

    dependencies {
    	compile project(':LibRecognizer')
    	compile "com.android.support:appcompat-v7:25.0.0"
    }
    
  5. If you plan to use ProGuard, add the following lines to your proguard-rules.pro:

    -keep class com.microblink.** { *; }
    -keepclassmembers class com.microblink.** { *; }
    -dontwarn android.hardware.**
    -dontwarn android.support.v4.**
    

Import Javadoc to Android Studio

  1. In Android Studio project sidebar, ensure project view is enabled
  2. Expand External Libraries entry (usually this is the last entry in project view)
  3. Locate LibRecognizer-unspecified entry, right click on it and select Library Properties...
  4. A Library Properties pop-up window will appear
  5. Click the + button in bottom left corner of the window
  6. Window for choosing JAR file will appear
  7. Find and select LibRecognizer-javadoc.jar file which is located in root folder of the SDK distribution
  8. Click OK

Eclipse integration instructions

We do not provide Eclipse integration demo apps. We encourage you to use Android Studio. We also do not test integrating PDF417.mobi with Eclipse. If you are having problems with PDF417.mobi, make sure you have tried integrating it with Android Studio prior to contacting us.

However, if you still want to use Eclipse, you will need to convert the AAR archive to the Eclipse library project format. You can do this as follows:

  1. In Eclipse, create a new Android library project in your workspace.
  2. Clear the src and res folders.
  3. Unzip the LibPdf417Mobi.aar file. You can rename it to zip and then unzip it using any tool.
  4. Copy the classes.jar to libs folder of your Eclipse library project. If libs folder does not exist, create it.
  5. Copy the contents of jni folder to libs folder of your Eclipse library project.
  6. Replace the res folder in the library project with the res folder from the LibPdf417Mobi.aar file.

You’ve already created the project that contains almost everything you need. Now let’s see how to configure your project to reference this library project.

  1. In the project in which you want to use the library (henceforth, "target project"), add the library project as a dependency.
  2. Open the AndroidManifest.xml file inside LibPdf417Mobi.aar file and make sure to copy all permissions, features and activities to the AndroidManifest.xml file of the target project.
  3. Copy the contents of assets folder from LibPdf417Mobi.aar into assets folder of target project. If assets folder in target project does not exist, create it.
  4. Clean and Rebuild your target project
  5. If you plan to use ProGuard, add same statements as in Android studio guide to your ProGuard configuration file.
  6. Add appcompat-v7 library to your workspace and reference it by target project (modern ADT plugin for Eclipse does this automatically for all new android projects).

Performing your first scan

  1. You can start recognition process by starting Pdf417ScanActivity activity with Intent initialized in the following way:

    // Intent for Pdf417ScanActivity Activity
    Intent intent = new Intent(this, Pdf417ScanActivity.class);
    
    // set your licence key
    // obtain your licence key at http://microblink.com/login or
    // contact us at http://help.microblink.com
    intent.putExtra(Pdf417ScanActivity.EXTRAS_LICENSE_KEY, "Add your licence key here");
    
    RecognitionSettings settings = new RecognitionSettings();
    // setup array of recognition settings (described in chapter "Recognition 
    // settings and results")
    settings.setRecognizerSettingsArray(setupSettingsArray());
    intent.putExtra(Pdf417ScanActivity.EXTRAS_RECOGNITION_SETTINGS, settings);
    
    // Starting Activity
    startActivityForResult(intent, MY_REQUEST_CODE);
  2. After Pdf417ScanActivity finishes the scan, it will return to the calling activity and call its onActivityResult method. You can obtain the scanning results in that method.

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    	super.onActivityResult(requestCode, resultCode, data);
    	
    	if (requestCode == MY_REQUEST_CODE) {
    		if (resultCode == Pdf417ScanActivity.RESULT_OK && data != null) {
    			// perform processing of the data here
    			
    			// for example, obtain parcelable recognition result
    			Bundle extras = data.getExtras();
    			RecognitionResults result = data.getParcelableExtra(Pdf417ScanActivity.EXTRAS_RECOGNITION_RESULTS);
    
    			// get array of recognition results
    			BaseRecognitionResult[] resultArray = result.getRecognitionResults();				
    			// Each element in resultArray inherits BaseRecognitionResult class and
    			// represents the scan result of one of activated recognizers that have
    			// been set up. More information about this can be found in 
    			// "Recognition settings and results" chapter
    					
    			// Or, you can pass the intent to another activity
    			data.setComponent(new ComponentName(this, ResultActivity.class));
    			startActivity(data);
    		}
    	}
    }

    For more information about defining recognition settings and obtaining scan results see Recognition settings and results.

Advanced PDF417.mobi integration instructions

This section will cover more advanced details of PDF417.mobi integration. The first part discusses methods for checking whether PDF417.mobi is supported on the current device. The second part covers possible customization of the built-in Pdf417ScanActivity activity, the third part describes how to embed RecognizerView into your activity, and the fourth part describes how to use the direct API to recognize Android Bitmaps directly, without the camera.

Checking if PDF417.mobi is supported

PDF417.mobi requirements

Even before starting the scan activity, you should check whether PDF417.mobi is supported on the current device. In order to be supported, the device needs to have a camera.

Android 2.3 is the minimum android version on which PDF417.mobi is supported. For best performance and compatibility, we recommend Android 5.0 or newer.

Camera video preview resolution also matters. In order to perform successful scans, camera preview resolution cannot be too low. PDF417.mobi requires minimum 320p camera preview resolution in order to perform scan. It must be noted that camera preview resolution is not the same as the video record resolution, although on most devices those are the same. However, there are some devices that allow recording of HD video (720p resolution), but do not allow high enough camera preview resolution (for example, Sony Xperia Go supports video record resolution at 720p, but camera preview resolution is only 320p - PDF417.mobi does not work on that device).

PDF417.mobi is a native library, written in C++ and available for multiple platforms. Because of this, PDF417.mobi cannot work on devices with obscure hardware architectures. We have compiled the PDF417.mobi native code only for the most popular Android ABIs. See Processor architecture considerations for more information about native libraries in PDF417.mobi and instructions on how to disable certain architectures in order to reduce the size of the final app.

Checking for PDF417.mobi support in your app

To check whether PDF417.mobi is supported on the device, use the following code:

// check if PDF417.mobi is supported on the device
RecognizerCompatibilityStatus status = RecognizerCompatibility.getRecognizerCompatibilityStatus(this);
if(status == RecognizerCompatibilityStatus.RECOGNIZER_SUPPORTED) {
	Toast.makeText(this, "PDF417.mobi is supported!", Toast.LENGTH_LONG).show();
} else {
	Toast.makeText(this, "PDF417.mobi is not supported! Reason: " + status.name(), Toast.LENGTH_LONG).show();
}

However, some recognizers require a camera with autofocus. If you try to start recognition with such recognizers on a device that does not have a camera with autofocus, you will get an error. To prevent that, when you prepare the array with recognition settings (see Recognition settings and results for settings reference), you can easily filter out all settings that require autofocus using the following code snippet:

// setup array of recognition settings (described in chapter "Recognition 
// settings and results")
RecognizerSettings[] settArray = setupSettingsArray();
if(!RecognizerCompatibility.cameraHasAutofocus(CameraType.CAMERA_BACKFACE, this)) {
	settArray = RecognizerSettingsUtils.filterOutRecognizersThatRequireAutofocus(settArray);
}

Customization of Pdf417ScanActivity activity

Pdf417ScanActivity intent extras

This section discusses the parameters that can be sent over the Intent for Pdf417ScanActivity to customize its default behaviour. There are several intent extras that can be sent to the Pdf417ScanActivity activity:

  • Pdf417ScanActivity.EXTRAS_CAMERA_TYPE - with this extra you can define which camera on device will be used. To set the extra to intent, use the following code snippet:

     intent.putExtra(Pdf417ScanActivity.EXTRAS_CAMERA_TYPE, (Parcelable)CameraType.CAMERA_FRONTFACE);
  • Pdf417ScanActivity.EXTRAS_CAMERA_ASPECT_MODE - with this extra you can define which camera aspect mode will be used. If set to ASPECT_FIT (default), then camera preview will be letterboxed inside available view space. If set to ASPECT_FILL, camera preview will be zoomed and cropped to use the entire view space. To set the extra to intent, use the following code snippet:

     intent.putExtra(Pdf417ScanActivity.EXTRAS_CAMERA_ASPECT_MODE, (Parcelable)CameraAspectMode.ASPECT_FIT);
  • Pdf417ScanActivity.EXTRAS_RECOGNITION_SETTINGS - with this extra you can define settings that affect whole recognition process. This includes both array of recognizer settings and global recognition settings. More information about recognition settings can be found in chapter Recognition settings and results. To set the extra to intent, use the following code snippet:

     RecognitionSettings recognitionSettings = new RecognitionSettings();
     // define additional settings; e.g set timeout to 10 seconds
     recognitionSettings.setNumMsBeforeTimeout(10000);
     // setup recognizer settings array
     recognitionSettings.setRecognizerSettingsArray(setupSettingsArray());
     intent.putExtra(Pdf417ScanActivity.EXTRAS_RECOGNITION_SETTINGS, recognitionSettings);
  • Pdf417ScanActivity.EXTRAS_RECOGNITION_RESULTS - you can use this extra in onActivityResult method of calling activity to obtain recognition results. For more information about recognition settings and result, see Recognition settings and results. You can use the following snippet to obtain scan results:

     RecognitionResults results = data.getParcelableExtra(Pdf417ScanActivity.EXTRAS_RECOGNITION_RESULTS);
  • Pdf417ScanActivity.EXTRAS_OPTIMIZE_CAMERA_FOR_NEAR_SCANNING - with this extra you can give a hint to PDF417.mobi to optimize camera parameters for near object scanning. When camera parameters are optimized for near object scanning, the macro focus mode will be preferred over autofocus mode, so the camera will have an easier time focusing on near objects, but might have a harder time focusing on far objects. If you expect that most of your scans will be performed by holding the device very near the object, turn this parameter on. By default, this parameter is set to false.
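
     This is a boolean extra; assuming the same putExtra pattern as the other extras above, you can enable it like this:

     intent.putExtra(Pdf417ScanActivity.EXTRAS_OPTIMIZE_CAMERA_FOR_NEAR_SCANNING, true);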

  • Pdf417ScanActivity.EXTRAS_BEEP_RESOURCE - with this extra you can set the resource ID of the sound to be played when the scan completes. You can use the following snippet to set this extra:

     intent.putExtra(Pdf417ScanActivity.EXTRAS_BEEP_RESOURCE, R.raw.beep);
  • Pdf417ScanActivity.EXTRAS_SPLASH_SCREEN_LAYOUT_RESOURCE - with this extra you can set the resource ID of the layout that will be used as camera splash screen while camera is being initialized. You can use following snippet to set this extra:

     intent.putExtra(Pdf417ScanActivity.EXTRAS_SPLASH_SCREEN_LAYOUT_RESOURCE, R.layout.camera_splash);
  • Pdf417ScanActivity.EXTRAS_SHOW_FOCUS_RECTANGLE - with this extra you can enable showing of the rectangle that displays the area the camera uses to measure focus and brightness when automatically adjusting its parameters. You can enable showing of this rectangle with the following code snippet:

     intent.putExtra(Pdf417ScanActivity.EXTRAS_SHOW_FOCUS_RECTANGLE, true);
  • Pdf417ScanActivity.EXTRAS_ALLOW_PINCH_TO_ZOOM - with this extra you can set whether pinch to zoom will be allowed on camera activity. Default is false. To enable pinch to zoom gesture on camera activity, use the following code snippet:

     intent.putExtra(Pdf417ScanActivity.EXTRAS_ALLOW_PINCH_TO_ZOOM, true);
  • Pdf417ScanActivity.EXTRAS_CAMERA_VIDEO_PRESET - with this extra you can set the video resolution preset that will be used when choosing camera resolution for scanning. For more information, see javadoc. For example, to use 720p video resolution preset, use the following code snippet:

     intent.putExtra(Pdf417ScanActivity.EXTRAS_CAMERA_VIDEO_PRESET, (Parcelable)VideoResolutionPreset.VIDEO_RESOLUTION_720p);
  • Pdf417ScanActivity.EXTRAS_SET_FLAG_SECURE - with this extra you can request setting of FLAG_SECURE on activity window which indicates that the display has a secure video output and supports compositing secure surfaces. Use this to prevent taking screenshots of the activity window content and to prevent content from being viewed on non-secure displays. To set FLAG_SECURE on camera activity, use the following code snippet:

     intent.putExtra(Pdf417ScanActivity.EXTRAS_SET_FLAG_SECURE, true);
  • Pdf417ScanActivity.EXTRAS_LICENSE_KEY - with this extra you can set the license key for PDF417.mobi. You can obtain your licence key from the Microblink website or contact us at http://help.microblink.com. Once you obtain a license key, you can set it with the following snippet:

     // set the license key
     intent.putExtra(Pdf417ScanActivity.EXTRAS_LICENSE_KEY, "Enter_License_Key_Here");

    The licence key is bound to the package name of your application. For example, if you have a licence key that is bound to the mobi.pdf417.demo app package, you cannot use the same key in other applications. However, if you purchase a Premium licence, you will get a licence key that can be used in multiple applications. This licence key will then not be bound to the package name of the app. Instead, it will be bound to the licensee string that needs to be provided to the library together with the licence key. To provide the licensee string, use the EXTRAS_LICENSEE intent extra like this:

     // set the license key
     intent.putExtra(Pdf417ScanActivity.EXTRAS_LICENSE_KEY, "Enter_License_Key_Here");
     intent.putExtra(Pdf417ScanActivity.EXTRAS_LICENSEE, "Enter_Licensee_Here");
  • Pdf417ScanActivity.EXTRAS_IMAGE_LISTENER - with this extra you can set your implementation of the ImageListener interface that will obtain the images that are being processed. Make sure that your ImageListener implementation correctly implements the Parcelable interface with a static CREATOR field. Without this, you might encounter a runtime error. For more information and an example, see Using ImageListener to obtain images that are being processed. By default, ImageListener will receive all possible images that become available during the recognition process. This introduces a performance penalty because most of those images will probably not be used, so sending them just wastes time. To control which images should become available to ImageListener, you can also set ImageMetadata settings with Pdf417ScanActivity.EXTRAS_IMAGE_METADATA_SETTINGS

  • Pdf417ScanActivity.EXTRAS_IMAGE_METADATA_SETTINGS - with this extra you can set ImageMetadataSettings, which defines which images will be sent to the ImageListener interface given via the Pdf417ScanActivity.EXTRAS_IMAGE_LISTENER extra. If ImageListener is not given via Intent, this extra has no effect. You can see example usage of ImageMetadataSettings in the chapter Obtaining various metadata with MetadataListener and in the provided demo apps.
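
     A short sketch (assuming ImageMetadataSettings is Parcelable, as implied by it being passed through an intent extra; the setters are the same ones used in the MetadataSettings example later in this document):

     MetadataSettings.ImageMetadataSettings ims = new MetadataSettings.ImageMetadataSettings();
     // enable returning of dewarped images, if they are available
     ims.setDewarpedImageEnabled(true);
     intent.putExtra(Pdf417ScanActivity.EXTRAS_IMAGE_METADATA_SETTINGS, ims);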

  • Pdf417ScanActivity.EXTRAS_SHOW_DIALOG_AFTER_SCAN - with this extra you can prevent showing of a dialog after each barcode scan. By default, each time the scanner finds and decodes a barcode, a dialog with the barcode's contents will be shown. To prevent this, use the following snippet:

     // disable showing of dialog after scan
     intent.putExtra(Pdf417ScanActivity.EXTRAS_SHOW_DIALOG_AFTER_SCAN, false);

Customizing Pdf417ScanActivity appearance

Besides the possibility to pass various intent extras for customizing Pdf417ScanActivity behaviour, you can also change the strings it displays. The procedure for changing strings in the Pdf417ScanActivity activity is explained in the Translation and localization section.

Modifying other resources

Generally, you can also change other resources that Pdf417ScanActivity uses, but you are encouraged to create your own custom scan activity instead (see Embedding RecognizerView into custom scan activity).

Changing viewfinder appearance

To change the colour of viewfinder in Pdf417ScanActivity, change or override the colours defined in res/values/colors.xml (colours default_frame and recognized_frame).

Embedding RecognizerView into custom scan activity

This section will discuss how to embed RecognizerView into your scan activity and perform scan.

  1. First make sure that RecognizerView is a member field in your activity. This is required because you will need to pass all activity's lifecycle events to RecognizerView.
  2. It is recommended to keep your scan activity in one orientation, such as portrait or landscape. Setting sensor as the scan activity's orientation will trigger a full restart of the activity whenever the device orientation changes. This will provide a very poor user experience because both the camera and the PDF417.mobi native library will have to be restarted every time. There are ways to mitigate this behaviour, which are discussed later.
  3. In your activity's onCreate method, create a new RecognizerView, define its settings and listeners, and then call its create method. After that, add the views that should be laid out on top of the camera view.
  4. Override your activity's onStart, onResume, onPause, onStop and onDestroy methods and call RecognizerView's lifecycle methods start, resume, pause, stop and destroy. This will ensure correct camera and native resource management. If you plan to manage RecognizerView's lifecycle independently of host activity's lifecycle, make sure the order of calls to lifecycle methods is the same as is with activities (i.e. you should not call resume method if create and start were not called first).

Here is the minimum example of integration of RecognizerView as the only view in your activity:

public class MyScanActivity extends Activity implements ScanResultListener, CameraEventsListener {
	private static final int PERMISSION_CAMERA_REQUEST_CODE = 69;
	private RecognizerView mRecognizerView;
		
	@Override
	protected void onCreate(Bundle savedInstanceState) {
		super.onCreate(savedInstanceState);
		// create RecognizerView
		mRecognizerView = new RecognizerView(this);
		   
		RecognitionSettings settings = new RecognitionSettings();
		// setup array of recognition settings (described in chapter "Recognition 
		// settings and results")
		RecognizerSettings[] settArray = setupSettingsArray();
		if(!RecognizerCompatibility.cameraHasAutofocus(CameraType.CAMERA_BACKFACE, this)) {
			settArray = RecognizerSettingsUtils.filterOutRecognizersThatRequireAutofocus(settArray);
		}
		settings.setRecognizerSettingsArray(settArray);
		mRecognizerView.setRecognitionSettings(settings);
		
		try {
		    // set license key
		    mRecognizerView.setLicenseKey(this, "your license key");
		} catch (InvalidLicenceKeyException exc) {
		    finish();
		    return;
		}
		
		// scan result listener will be notified when scan result gets available
		mRecognizerView.setScanResultListener(this);
		// camera events listener will be notified about camera lifecycle and errors
		mRecognizerView.setCameraEventsListener(this);
		
		// set camera aspect mode
		// ASPECT_FIT will fit the camera preview inside the view
		// ASPECT_FILL will zoom and crop the camera preview, but will use the
		// entire view surface
		mRecognizerView.setAspectMode(CameraAspectMode.ASPECT_FILL);
		   
		mRecognizerView.create();
		
		setContentView(mRecognizerView);
	}
	
	@Override
	protected void onStart() {
	   super.onStart();
	   // you need to pass all activity's lifecycle methods to RecognizerView
	   mRecognizerView.start();
	}
	
	@Override
	protected void onResume() {
	   	super.onResume();
	   	// you need to pass all activity's lifecycle methods to RecognizerView
       mRecognizerView.resume();
	}

	@Override
	protected void onPause() {
	   	super.onPause();
	   	// you need to pass all activity's lifecycle methods to RecognizerView
		mRecognizerView.pause();
	}

	@Override
	protected void onStop() {
	   super.onStop();
	   // you need to pass all activity's lifecycle methods to RecognizerView
	   mRecognizerView.stop();
	}
	
	@Override
	protected void onDestroy() {
	   super.onDestroy();
	   // you need to pass all activity's lifecycle methods to RecognizerView
	   mRecognizerView.destroy();
	}

	@Override
	public void onConfigurationChanged(Configuration newConfig) {
	   super.onConfigurationChanged(newConfig);
	   // you need to pass all activity's lifecycle methods to RecognizerView
	   mRecognizerView.changeConfiguration(newConfig);
	}
		
    @Override
    public void onScanningDone(RecognitionResults results) {
    	// this method is from ScanResultListener and will be called when scanning completes
    	// RecognitionResults may contain multiple results in array returned
    	// by method getRecognitionResults().
    	// This depends on settings in RecognitionSettings object that was
    	// given to RecognizerView.
    	// For more information, see chapter "Recognition settings and results")
    	
    	// After this method ends, scanning will be resumed and recognition
    	// state will be retained. If you want to prevent that, then
    	// you should call:
    	// mRecognizerView.resetRecognitionState();

		// If you want to pause scanning to prevent receiving recognition
		// results, you should call:
		// mRecognizerView.pauseScanning();
		// After scanning is paused, you will have to resume it with:
		// mRecognizerView.resumeScanning(true);
		// boolean in resumeScanning method indicates whether recognition
		// state should be automatically reset when resuming scanning
    }
    
    @Override
    public void onCameraPreviewStarted() {
        // this method is from CameraEventsListener and will be called when camera preview starts
    }
    
    @Override
    public void onCameraPreviewStopped() {
        // this method is from CameraEventsListener and will be called when camera preview stops
    }

    @Override
    public void onError(Throwable exc) {
        /** 
         * This method is from CameraEventsListener and will be called when 
         * opening of camera resulted in exception or recognition process
         * encountered an error. The error details will be given in exc
         * parameter.
         */
    }
    
    @Override
    @TargetApi(23)
    public void onCameraPermissionDenied() {
    	/**
    	 * Called on Android 6.0 and newer if camera permission is not given
    	 * by user. You should request permission from user to access camera.
    	 */
    	 requestPermissions(new String[]{Manifest.permission.CAMERA}, PERMISSION_CAMERA_REQUEST_CODE);
    	 /**
    	  * Please note that user might have not given permission to use 
    	  * camera. In that case, you have to explain to user that without
    	  * camera permissions scanning will not work.
    	  * For more information about requesting permissions at runtime, check
    	  * this article:
    	  * https://developer.android.com/training/permissions/requesting.html
    	  */
    }
    
    @Override
    public void onAutofocusFailed() {
	    /**
	     * This method is from CameraEventsListener and will be called when camera focusing has failed. 
	     * Camera manager usually tries different focusing strategies and this method is called when all 
	     * those strategies fail to indicate that either object on which camera is being focused is too 
	     * close or ambient light conditions are poor.
	     */
    }
    
    @Override
    public void onAutofocusStarted(Rect[] areas) {
	    /**
	     * This method is from CameraEventsListener and will be called when camera focusing has started.
	     * You can utilize this method to draw focusing animation on UI.
	     * Areas parameter is array of rectangles where focus is being measured. 
	     * It can be null on devices that do not support fine-grained camera control.
	     */
    }

    @Override
    public void onAutofocusStopped(Rect[] areas) {
	    /**
	     * This method is from CameraEventsListener and will be called when camera focusing has stopped.
	     * You can utilize this method to remove focusing animation on UI.
	     * Areas parameter is array of rectangles where focus is being measured. 
	     * It can be null on devices that do not support fine-grained camera control.
	     */
    }
}

Scan activity's orientation

If the activity's screenOrientation property in AndroidManifest.xml is set to sensor, fullSensor or similar, the activity will be restarted every time the device changes orientation from portrait to landscape and vice versa. While restarting the activity, its onPause, onStop and onDestroy methods will be called and then a new activity will be created anew. This is a potential problem for the scan activity because in its lifecycle it controls both the camera and the native library - restarting the activity will restart both. As a result, changing orientation from landscape to portrait and vice versa will be very slow, degrading the user experience. We do not recommend such a setting.

For that reason, we recommend setting your scan activity to either portrait or landscape mode and handling device orientation changes manually. To help you with this, RecognizerView supports adding child views that will be rotated regardless of the activity's screenOrientation. You add a view you wish to be rotated (such as a view that contains buttons, status messages, etc.) to RecognizerView with the addChildView method. The second parameter of the method is a boolean that defines whether the view you are adding will be rotated with the device. To define the allowed orientations, implement the OrientationAllowedListener interface and add it to RecognizerView with the setOrientationAllowedListener method. This is the recommended way of rotating the camera overlay.
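
Here is a minimal sketch of this approach. The addChildView and setOrientationAllowedListener calls are the ones described above; the isOrientationAllowed callback name and the ORIENTATION_PORTRAIT constant are assumptions based on the SDK Javadoc, and R.layout.camera_overlay is a hypothetical layout:

// inflate a hypothetical overlay layout and add it on top of the camera preview;
// the second parameter makes the view rotate together with allowed device orientations
View overlay = getLayoutInflater().inflate(R.layout.camera_overlay, null);
mRecognizerView.addChildView(overlay, true);

// allow recognition (and overlay rotation) only in portrait orientation
mRecognizerView.setOrientationAllowedListener(new OrientationAllowedListener() {
    @Override
    public boolean isOrientationAllowed(Orientation orientation) { // callback name assumed
        return orientation == Orientation.ORIENTATION_PORTRAIT;
    }
});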

However, if you really want to set screenOrientation property to sensor or similar and want Android to handle orientation changes of your scan activity, then we recommend to set configChanges property of your activity to orientation|screenSize. This will tell Android not to restart your activity when device orientation changes. Instead, activity's onConfigurationChanged method will be called so that activity can be notified of the configuration change. In your implementation of this method, you should call changeConfiguration method of RecognizerView so it can adapt its camera surface and child views to new configuration. Note that on Android versions older than 4.0 changing of configuration will require restart of camera, which can be slow.

RecognizerView reference

The complete reference of RecognizerView is available in Javadoc. The usage example is provided in pdf417MobiDemoCustomUI demo app provided with SDK. This section just gives a quick overview of RecognizerView's most important methods.

This method should be called in activity's onCreate method. It will initialize RecognizerView's internal fields and will initialize camera control thread. This method must be called after all other settings are already defined, such as listeners and recognition settings. After calling this method, you can add child views to RecognizerView with method addChildView(View, boolean).

This method should be called in activity's onStart method. It will initialize background processing thread and start native library initialization on that thread.

This method should be called in activity's onResume method. It will trigger background initialization of camera. After camera is loaded, it will start camera frame recognition, except if scanning loop is paused.

This method should be called in activity's onPause method. It will stop the camera, but will keep native library loaded.

This method should be called in activity's onStop method. It will deinitialize native library, terminate background processing thread and free all resources that are no longer necessary.

This method should be called in activity's onDestroy method. It will free all resources allocated in create() and will terminate camera control thread.

This method should be called in activity's onConfigurationChanged method. It will adapt camera surface to new configuration without the restart of the activity. See Scan activity's orientation for more information.

With this method you can define which camera on device will be used. Default camera used is back facing camera.

Define the aspect mode of camera. If set to ASPECT_FIT (default), then camera preview will be letterboxed inside available view space. If set to ASPECT_FILL, camera preview will be zoomed and cropped to use the entire view space.

Define the video resolution preset that will be used when choosing camera resolution for scanning.

With this method you can set the recognition settings that contain information about what will be scanned and how the scan will be performed. For more information about recognition settings and results see Recognition settings and results. This method must be called before create().

With this method you can reconfigure the recognition process while recognizer is active. Unlike setRecognitionSettings, this method must be called while recognizer is active (i.e. after resume was called). For more information about recognition settings see Recognition settings and results.

With this method you can set a OrientationAllowedListener which will be asked if current orientation is allowed. If orientation is allowed, it will be used to rotate rotatable views to it and it will be passed to native library so that recognizers can be aware of the new orientation. If you do not set this listener, recognition will be performed only in orientation defined by current activity's orientation.

With this method you can set a ScanResultListener which will be notified when recognition completes. After recognition completes, RecognizerView will pause its scanning loop and to continue the scanning you will have to call resumeScanning method. In this method you can obtain data from scanning results. For more information see Recognition settings and results.

With this method you can set a CameraEventsListener which will be notified when various camera events occur, such as when camera preview has started, autofocus has failed or there has been an error while using the camera or performing the recognition.

This method pauses the scanning loop, but keeps both camera and native library initialized. Pause and resume scanning methods count the number of calls, so if you called pauseScanning() twice, you will have to call resumeScanning twice to actually resume scanning.

With this method you can resume the paused scanning loop. If called with true, it implicitly calls resetRecognitionState(). If called with false, the old recognition state will not be reset, so it can be reused to boost the recognition result. This may not always be the desired behaviour. Pause and resume scanning methods count the number of calls, so if you called pauseScanning() twice, you will have to call resumeScanning twice to actually resume the scanning loop.
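
For example, a typical pattern when showing your own result dialog (using the pauseScanning and resumeScanning methods described above):

// pause the scanning loop while a result dialog is displayed
mRecognizerView.pauseScanning();
// ... later, when the user dismisses the dialog, resume scanning
// and reset the recognition state
mRecognizerView.resumeScanning(true);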

This method lets you set up RecognizerView to not automatically resume scanning the first time resume is called. An example use case is displaying onboarding help when the camera is opened for the first time, while preventing scanning in the background as the onboarding is shown over the camera preview.

With this method you can reset internal recognition state. State is usually kept to improve recognition quality over time, but without resetting recognition state sometimes you might get poorer results (for example if you scan one object and then another without resetting state you might end up with result that contains properties from both scanned objects).

With this method you can add your own view on top of RecognizerView. RecognizerView will ensure that your view will be layouted exactly above camera preview surface (which can be letterboxed if aspect ratio of camera preview size does not match the aspect ratio of RecognizerView and camera aspect mode is set to ASPECT_FIT). Boolean parameter defines whether your view should be rotated with device orientation changes. The rotation is independent of host activity's orientation changes and allowed orientations will be determined from OrientationAllowedListener. See also Scan activity's orientation for more information why you should rotate your views independently of activity.

This method returns true if camera thinks it has focused on object. Note that camera has to be active for this method to work. If camera is not active, returns false.

This method requests camera to perform autofocus. If camera does not support autofocus feature, method does nothing. Note that camera has to be active for this method to work.

This method returns true if camera supports torch flash mode. Note that camera has to be active for this method to work. If camera is not active, returns false.

If torch flash mode is supported on camera, this method can be used to enable/disable torch flash mode. After operation is performed, SuccessCallback will be called with boolean indicating whether operation has succeeded or not. Note that camera has to be active for this method to work and that callback might be called on background non-UI thread.

You can use this method to define the scanning region and define whether this scanning region will be rotated with device if OrientationAllowedListener determines that orientation is allowed. This is useful if you have your own camera overlay on top of RecognizerView that is set as rotatable view - you can thus synchronize the rotation of the view with the rotation of the scanning region native code will scan.

Scanning region is defined as Rectangle. First parameter of rectangle is x-coordinate represented as percentage of view width, second parameter is y-coordinate represented as percentage of view height, third parameter is region width represented as percentage of view width and fourth parameter is region height represented as percentage of view height.

View width and height are defined in current context, i.e. they depend on screen orientation. If you allow your ROI view to be rotated, then in portrait view width will be smaller than height, whilst in landscape orientation width will be larger than height. This complies with view designer preview. If you choose not to rotate your ROI view, then your ROI view will be laid out either in portrait or landscape, depending on setting for your scan activity in AndroidManifest.xml

Note that the scanning region only affects the native code - it has no impact on the user interface. You are required to create a matching user interface that visualizes the same scanning region you set here.
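
As an illustration, here is a sketch that limits scanning to a centered strip of the view (the setScanningRegion method name and the Rectangle class are assumptions based on the SDK Javadoc; the parameters follow the description above):

// x = 10% of view width, y = 34% of view height,
// width = 80% of view width, height = 33% of view height;
// second parameter: do not rotate the scanning region with the device
mRecognizerView.setScanningRegion(new Rectangle(0.1f, 0.34f, 0.8f, 0.33f), false);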

This method can only be called when camera is active. You can use this method to define regions which camera will use to perform meterings for focus, white balance and exposure corrections. On devices that do not support metering areas, this will be ignored. Some devices support multiple metering areas and some support only one. If device supports only one metering area, only the first rectangle from array will be used.

Each region is defined as Rectangle. First parameter of rectangle is x-coordinate represented as percentage of view width, second parameter is y-coordinate represented as percentage of view height, third parameter is region width represented as percentage of view width and fourth parameter is region height represented as percentage of view height.

View width and height are defined in current context, i.e. they depend on current device orientation. If you have custom OrientationAllowedListener, then device orientation will be the last orientation that you have allowed in your listener. If you don't have it set, orientation will be the orientation of activity as defined in AndroidManifest.xml. In portrait orientation view width will be smaller than height, whilst in landscape orientation width will be larger than height. This complies with view designer preview.

Second boolean parameter indicates whether or not metering areas should be automatically updated when device orientation changes.
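
A sketch along the same lines (the setMeteringAreas method name is an assumption; the parameters follow the description above):

// call this only while the camera is active, e.g. from onCameraPreviewStarted();
// a single centered metering region: x = 25%, y = 25%, width = 50%, height = 50% of the view;
// second parameter: update metering areas automatically when device orientation changes
mRecognizerView.setMeteringAreas(new Rectangle[] { new Rectangle(0.25f, 0.25f, 0.5f, 0.5f) }, true);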

You can use this method to define metadata listener that will obtain various metadata from the current recognition process. Which metadata will be available depends on metadata settings. For more information and examples, check demo applications and section Obtaining various metadata with MetadataListener.

This method sets the license key that will unlock all features of the native library. You can obtain your license key from Microblink website.

Use this method to set a license key that is bound to a licensee, not the application package name. You will use this method when you obtain a license key that allows you to use PDF417.mobi SDK in multiple applications. You can obtain your license key from Microblink website.

Using direct API for recognition of Android Bitmaps

This section will describe how to use direct API to recognize android Bitmaps without the need for camera. You can use direct API anywhere from your application, not just from activities.

  1. First, you need to obtain reference to Recognizer singleton using getSingletonInstance.
  2. Second, you need to initialize the recognizer.
  3. After initialization, you can use singleton to process images. You cannot process multiple images in parallel.
  4. Do not forget to terminate the recognizer after usage (it is a shared resource).

Here is the minimum example of usage of direct API for recognizing android Bitmap:

public class DirectAPIActivity extends Activity implements ScanResultListener {
	private Recognizer mRecognizer;
		
	@Override
	protected void onCreate(Bundle savedInstanceState) {
		super.onCreate(savedInstanceState);
		// initialize your activity here
	}
	
	@Override
	protected void onStart() {
	   super.onStart();
	   try {
		   mRecognizer = Recognizer.getSingletonInstance();
		} catch (FeatureNotSupportedException e) {
			Toast.makeText(this, "Feature not supported! Reason: " + e.getReason().getDescription(), Toast.LENGTH_LONG).show();
			finish();
			return;
		}
	   try {
	       // set license key
	       mRecognizer.setLicenseKey(this, "your license key");
	   } catch (InvalidLicenceKeyException exc) {
	       finish();
	       return;
	   }
		RecognitionSettings settings = new RecognitionSettings();
		// setupSettingsArray method is described in chapter "Recognition 
		// settings and results")
		settings.setRecognizerSettingsArray(setupSettingsArray());
		mRecognizer.initialize(this, settings, new DirectApiErrorListener() {
			@Override
			public void onRecognizerError(Throwable t) {
				Toast.makeText(DirectAPIActivity.this, "There was an error in initialization of Recognizer: " + t.getMessage(), Toast.LENGTH_SHORT).show();
				finish();
			}
		});
	}
	
	@Override
	protected void onResume() {
	   super.onResume();
		// start recognition
		Bitmap bitmap = BitmapFactory.decodeFile("/path/to/some/file.jpg");
		mRecognizer.recognize(bitmap, Orientation.ORIENTATION_LANDSCAPE_RIGHT, this);
	}

	@Override
	protected void onStop() {
	   super.onStop();
	   mRecognizer.terminate();
	}

    @Override
    public void onScanningDone(RecognitionResults results) {
    	// this method is from ScanResultListener and will be called 
    	// when scanning completes
    	// RecognitionResults may contain multiple results in array returned
    	// by method getRecognitionResults().
    	// This depends on settings in RecognitionSettings object that was
    	// given to the Recognizer.
    	// For more information, see chapter "Recognition settings and results")
    	    	
    	finish(); // in this example, just finish the activity
    }
    
}

Understanding DirectAPI's state machine

DirectAPI's Recognizer singleton is actually a state machine which can be in one of 4 states: OFFLINE, UNLOCKED, READY and WORKING.

  • When you obtain the reference to Recognizer singleton, it will be in OFFLINE state.
  • First you need to unlock the Recognizer by providing a valid licence key using setLicenseKey method. If you attempt to call setLicenseKey while Recognizer is not in OFFLINE state, you will get IllegalStateException.
  • After successful unlocking, Recognizer singleton will move to UNLOCKED state.
  • Once in UNLOCKED state, you can initialize Recognizer by calling initialize method. If you call initialize method while Recognizer is not in UNLOCKED state, you will get IllegalStateException.
  • After successful initialization, Recognizer will move to READY state. Now you can call any of the recognize* methods.
  • When starting recognition with any of the recognize* methods, Recognizer will move to WORKING state. If you attempt to call these methods while Recognizer is not in READY state, you will get IllegalStateException.
  • Recognition is performed on a background thread, so it is safe to call all of Recognizer's methods from the UI thread.
  • When recognition is finished, Recognizer first moves back to READY state and then returns the result via the provided ScanResultListener.
  • Please note that ScanResultListener's onScanningDone method will be called on the background processing thread, so make sure you do not perform UI operations in this callback.
  • By calling the terminate method, Recognizer singleton will release all its internal resources and will request the processing thread to terminate. Note that even after calling terminate you might receive an onScanningDone event if there was work in progress when terminate was called.
  • The terminate method can be called from any of Recognizer singleton's states.
  • You can observe Recognizer singleton's state with the getCurrentState method.
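
As a compact illustration of these transitions, here is the sequence of calls from the Direct API example above, annotated with the state Recognizer is in after each call (exception handling from the earlier example is omitted for brevity; context, settings, errorListener, bitmap and scanResultListener are placeholders):

Recognizer recognizer = Recognizer.getSingletonInstance();   // OFFLINE
recognizer.setLicenseKey(context, "your license key");       // OFFLINE -> UNLOCKED
recognizer.initialize(context, settings, errorListener);     // UNLOCKED -> READY
// READY -> WORKING while processing; back to READY before onScanningDone is called
recognizer.recognize(bitmap, Orientation.ORIENTATION_LANDSCAPE_RIGHT, scanResultListener);
recognizer.terminate();                                       // allowed from any state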

Using DirectAPI while RecognizerView is active

Both RecognizerView and the DirectAPI Recognizer use the same internal singleton that manages the native code. This singleton handles initialization and termination of the native library and propagation of recognition settings to it. It is possible to use RecognizerView and DirectAPI together, as the internal singleton will ensure correct synchronization and that correct recognition settings are used. If you run into problems while using DirectAPI in combination with RecognizerView, let us know!

Obtaining various metadata with MetadataListener

This section gives an example of how to use MetadataListener to obtain various metadata, such as the object detection location, the images that are being processed and much more. Which metadata will be obtainable is configured with MetadataSettings. You must set both MetadataSettings and your implementation of MetadataListener before calling the create method of RecognizerView. Setting them afterwards causes undefined behaviour.

The following code snippet shows how to configure MetadataSettings to obtain the detection location, the video frame that was used to obtain a valid scan result, and the dewarped image of the document being scanned (NOTE: the availability of metadata depends on the currently active recognizers and their settings. Not all recognizers can produce all types of metadata. Check the Recognition settings and results article for more information about recognizers and their settings):

// this snippet should be in onCreate method of your scanning activity

MetadataSettings ms = new MetadataSettings();
// enable receiving of detection location
ms.setDetectionMetadataAllowed(true);

// ImageMetadataSettings contains settings for defining which images will be returned
MetadataSettings.ImageMetadataSettings ims = new MetadataSettings.ImageMetadataSettings();
// enable returning of dewarped images, if they are available
ims.setDewarpedImageEnabled(true);
// enable returning of image that was used to obtain valid scanning result
ims.setSuccessfulScanFrameEnabled(true);

// set ImageMetadataSettings to MetadataSettings object
ms.setImageMetadataSettings(ims);

// this line must be called before mRecognizerView.create()
mRecognizerView.setMetadataListener(myMetadataListener, ms);

The following snippet shows one possible implementation of MetadataListener:

public class MyMetadataListener implements MetadataListener {

	/**
	 * Called when metadata is available.
	 */
    @Override
    public void onMetadataAvailable(Metadata metadata) {
    	// detection location will be available as DetectionMetadata
        if (metadata instanceof DetectionMetadata) {
        	// DetectionMetadata contains DetectorResult which is null if object detection
        	// has failed and non-null otherwise
        	// Let's assume that we have a QuadViewManager which can display animated frame
        	// around detected object (for reference, please check javadoc and demo apps)
            DetectorResult dr = ((DetectionMetadata) metadata).getDetectionResult();
            if (dr == null) {
            	// animate frame to default location if detection has failed
                mQuadViewManager.animateQuadToDefaultPosition();
            } else if (dr instanceof QuadDetectorResult) {
            	// otherwise, animate frame to detected location
                mQuadViewManager.animateQuadToDetectionPosition((QuadDetectorResult) dr);
            }
        // images will be available inside ImageMetadata
        } else if (metadata instanceof ImageMetadata) {
        	// obtain image
        	
        	// Please note that Image's internal buffers are valid only
        	// until this method ends. If you want to save image for later,
        	// obtain a cloned image with image.clone().
        	
            Image image = ((ImageMetadata) metadata).getImage();
            // to convert the image to Bitmap, call image.convertToBitmap()
            
            // after this line, image gets disposed. If you want to save it
            // for later, you need to clone it with image.clone()
        }
    }
}

Javadoc documentation for all classes that appear in the previous code snippet is available in the SDK's online Javadoc.

Using ImageListener to obtain images that are being processed

There are two ways of obtaining images that are being processed: by passing an ImageListener to Pdf417ScanActivity via Intent, or by registering a MetadataListener with appropriate image metadata settings on RecognizerView (see the previous section).

This section gives an example of how to implement the ImageListener interface that will obtain images that are being processed. ImageListener has only one method that needs to be implemented: onImageAvailable(Image). This method is called whenever the library has an image available for the current processing step. Image is a class that contains all information about the available image, including a buffer with the image pixels. An image can be in one of several formats and of several types. ImageFormat defines the pixel format of the image, while ImageType defines the type of the image. The ImageListener interface extends Android's Parcelable interface, so it is possible to send implementations via intents.

Here is an example implementation of the ImageListener interface. This implementation saves all images into the myImages folder on the device's external storage:

public class MyImageListener implements ImageListener {

   /**
    * Called when library has image available.
    */
    @Override
    public void onImageAvailable(Image image) {
        // we will save images to 'myImages' folder on external storage
        // image filenames will be 'imageType - currentTimestamp.jpg'
        String output = Environment.getExternalStorageDirectory().getAbsolutePath() + "/myImages";
        File f = new File(output);
        if(!f.exists()) {
            f.mkdirs();
        }
        DateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd-HH-mm-ss");
        String dateString = dateFormat.format(new Date());
        String filename = null;
        switch(image.getImageFormat()) {
            case ALPHA_8: {
                filename = output + "/alpha_8 - " + image.getImageName() + " - " + dateString + ".jpg";
                break;
            }
            case BGRA_8888: {
                filename = output + "/bgra - " + image.getImageName() + " - " + dateString + ".jpg";
                break;
            }
            case YUV_NV21: {
                filename = output + "/yuv - " + image.getImageName()+ " - " + dateString + ".jpg";
                break;
            }
        }
        Bitmap b = image.convertToBitmap();
        FileOutputStream fos = null;
        try {
            fos = new FileOutputStream(filename);
            boolean success = b.compress(Bitmap.CompressFormat.JPEG, 100, fos);
            if(!success) {
                Log.e(this, "Failed to compress bitmap!");
                if(fos != null) {
                    try {
                        fos.close();
                    } catch (IOException ignored) {
                    } finally {
                        fos = null;
                    }
                    new File(filename).delete();
                }
            }
        } catch (FileNotFoundException e) {
            Log.e(this, e, "Failed to save image");
        } finally {
            if(fos != null) {
                try {
                    fos.close();
                } catch (IOException ignored) {
                }
            }
        }
        // after this line, image gets disposed. If you want to save it
        // for later, you need to clone it with image.clone()
    }

    /**
     * ImageListener interface extends Parcelable interface, so we also need to implement
     * that interface. The implementation of Parcelable interface is below this line.
     */

    @Override
    public int describeContents() {
        return 0;
    }

    @Override
    public void writeToParcel(Parcel dest, int flags) {
    }

    public static final Creator<MyImageListener> CREATOR = new Creator<MyImageListener>() {
        @Override
        public MyImageListener createFromParcel(Parcel source) {
            return new MyImageListener();
        }

        @Override
        public MyImageListener[] newArray(int size) {
            return new MyImageListener[size];
        }
    };
}

Note that an ImageListener can only be given to Pdf417ScanActivity via Intent, while for RecognizerView you need to provide a MetadataListener and MetadataSettings that define which metadata should be obtained. When you give an ImageListener to Pdf417ScanActivity via Intent, it internally registers a MetadataListener that enables obtaining of all available image types and invokes the ImageListener given via Intent with the result. For more information and examples of how to use MetadataListener for obtaining images, refer to the demo applications.
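
For example, to use the MyImageListener implementation above with the intent-based API, pass it via the EXTRAS_IMAGE_LISTENER extra described in the intent extras section:

// pass the ImageListener implementation to the scan activity
intent.putExtra(Pdf417ScanActivity.EXTRAS_IMAGE_LISTENER, new MyImageListener());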

Recognition settings and results

This chapter will discuss various recognition settings used to configure different recognizers and scan results generated by them.

Recognition settings define what will be scanned and how the recognition process will be performed. Here is the list of the most relevant methods:

Sets whether or not outputting of multiple scan results from same image is allowed. If that is true, it is possible to return multiple recognition results produced by different recognizers from same image. However, single recognizer can still produce only a single result from single image. If this option is false, the array of BaseRecognitionResults will contain at most 1 element. The upside of setting that option to false is the speed - if you enable lots of recognizers, as soon as the first recognizer succeeds in scanning, recognition chain will be terminated and other recognizers will not get a chance to analyze the image. The downside is that you are then unable to obtain multiple results from different recognizers from single image. By default, this option is true.

Sets the number of milliseconds PDF417.mobi will attempt to perform the scan before it exits with a timeout error. On timeout, the returned array of BaseRecognitionResults inside RecognitionResults might be null, empty, or contain only elements that are not valid (isValid returns false) or are empty (isEmpty returns true).

NOTE: Please be aware that time counting does not start from the moment when scanning starts. Instead, it starts from the moment when at least one BaseRecognitionResult becomes available which is neither empty nor valid, i.e. a partial result.

The reason for this is better user experience. For example, if the timeout is set to 10 seconds and the user starts scanning, leaves the device lying on the table for 9 seconds and only then points it towards the object to scan, it is better to let the user scan that object than to complete the scan with an empty result as soon as the 10-second timeout elapses.

Sets the mode of frame quality estimation. Frame quality estimation is the process of estimating the quality of a video frame so that only the best frames are chosen for processing and no time is wasted on frames of too poor quality to contain any meaningful information. It is not used when performing recognition of Android Bitmaps using the Direct API. You can choose between 3 frame quality estimation modes: automatic, always on and always off.

  • In automatic mode (default), frame quality estimation will be used if the device has multiple processor cores, or, on a single-core device, if at least one active recognizer requires frame quality estimation.
  • In always on mode, frame quality estimation will always be used, regardless of device or active recognizers.
  • In always off mode, frame quality estimation will always be disabled, regardless of device or active recognizers. This setting is not recommended because it can significantly decrease the quality of the scanning process.

Sets the array of RecognizerSettings that defines which recognizers should be activated and how they should be set up. The list of available RecognizerSettings and their specifics is given below; an illustrative example of combining several of them into one array follows.
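
For illustration, a settings array that activates more than one recognizer (using recognizer settings classes described in the following sections) could be built as in the sketch below; how this array is passed to Pdf417ScanActivity or RecognizerView is covered in the integration chapters above.

private RecognizerSettings[] setupSettingsArray() {
    // activate PDF417 recognizer with default settings
    Pdf417RecognizerSettings pdf417Settings = new Pdf417RecognizerSettings();
    // additionally activate PDF417.mobi's 1D barcode recognizer for Code128
    BarDecoderRecognizerSettings oneDimSettings = new BarDecoderRecognizerSettings();
    oneDimSettings.setScanCode128(true);
    // both recognizers will be active during recognition
    return new RecognizerSettings[] { pdf417Settings, oneDimSettings };
}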

Scanning PDF417 barcodes

This section discusses the settings for setting up PDF417 recognizer and explains how to obtain results from PDF417 recognizer.

Setting up PDF417 recognizer

To activate the PDF417 recognizer, you need to create a Pdf417RecognizerSettings object and add it to the RecognizerSettings array. You can do this using the following code snippet:

private RecognizerSettings[] setupSettingsArray() {
	Pdf417RecognizerSettings sett = new Pdf417RecognizerSettings();
	// disable scanning of white barcodes on black background
	sett.setInverseScanning(false);
	// allow scanning of barcodes that have invalid checksum
	sett.setUncertainScanning(true);
	// disable scanning of barcodes that do not have quiet zone
	// as defined by the standard
	sett.setNullQuietZoneAllowed(false);

	// now add sett to recognizer settings array that is used to configure
	// recognition
	return new RecognizerSettings[] { sett };
}

As can be seen from the example, you can tweak PDF417 recognition parameters with methods of Pdf417RecognizerSettings.

setUncertainScanning(boolean)

By setting this to true, you will enable scanning of non-standard elements, but there is no guarantee that all data will be read. This option is used when multiple rows are missing (e.g. the whole barcode is not printed). Default is false.

setNullQuietZoneAllowed(boolean)

By setting this to true, you will allow scanning of barcodes which don't have a quiet zone surrounding them (e.g. text concatenated with the barcode). This option can significantly increase recognition time. Default is false.

setInverseScanning(boolean)

By setting this to true, you will enable scanning of barcodes with inverse intensity values (i.e. white barcodes on dark background). This option can significantly increase recognition time. Default is false.

Obtaining results from PDF417 recognizer

The PDF417 recognizer produces a Pdf417ScanResult. You can use the instanceof operator to check if an element in the results array is an instance of the Pdf417ScanResult class. See the following snippet for an example:

@Override
public void onScanningDone(RecognitionResults results) {
	BaseRecognitionResult[] dataArray = results.getRecognitionResults();
	for(BaseRecognitionResult baseResult : dataArray) {
		if(baseResult instanceof Pdf417ScanResult) {
			Pdf417ScanResult result = (Pdf417ScanResult) baseResult;
			
	        // getStringData getter will return the string version of barcode contents
			String barcodeData = result.getStringData();
			// isUncertain getter will tell you if scanned barcode is uncertain
			boolean uncertainData = result.isUncertain();
			// getRawData getter will return the raw data information object of barcode contents
			BarcodeDetailedData rawData = result.getRawData();
			// BarcodeDetailedData contains information about barcode's binary layout, if you
			// are only interested in raw bytes, you can obtain them with getAllData getter
			byte[] rawDataBuffer = rawData.getAllData();
		}
	}
}

As you can see from the example, obtaining data is rather simple. You just need to call several methods of the Pdf417ScanResult object:

String getStringData()

This method will return the string representation of barcode contents. Note that a PDF417 barcode can contain binary data, so sometimes it makes little sense to obtain only the string representation of barcode contents.

boolean isUncertain()

This method will return the boolean indicating if scanned barcode is uncertain. This can return true only if scanning of uncertain barcodes is allowed, as explained earlier.

BarcodeDetailedData getRawData()

This method will return the object that contains information about the barcode's binary layout. You can find information about that object in the javadoc. However, if you only need the byte array containing raw barcode data, you can call the getAllData method of the BarcodeDetailedData object.
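
If you need more than the raw bytes, a minimal sketch of walking through the barcode layout might look like the snippet below. It assumes BarcodeDetailedData exposes its elements via getElements() and that each BarcodeElement offers getElementType() and getElementBytes() - check the javadoc for the exact signatures.

// walk through the binary layout of the scanned barcode
BarcodeDetailedData rawData = result.getRawData();
// NOTE: getElements(), getElementType() and getElementBytes() are assumptions - see javadoc
List<BarcodeElement> elements = rawData.getElements();
for (BarcodeElement element : elements) {
    // each element carries its type (e.g. text or byte data) ...
    ElementType type = element.getElementType();
    // ... and its raw bytes
    byte[] bytes = element.getElementBytes();
    // process the element here
}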

Quadrilateral getPositionOnImage()

Returns the position of the barcode on the image. Note that the returned coordinates are in the image's coordinate system, which is not related to the view coordinate system used for the UI.
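
If you want to visualize where the barcode was found, a rough sketch of reading the detected quadrilateral is given below. The corner getters (getUpperLeft() etc.) are assumptions about the Quadrilateral API - verify them in the javadoc, and remember that the points must still be mapped from image coordinates to your view's coordinates before drawing.

// read the four corners of the detected barcode (image coordinate system)
Quadrilateral position = result.getPositionOnImage();
// NOTE: corner getter names are assumptions - check the javadoc
Point upperLeft = position.getUpperLeft();
Point upperRight = position.getUpperRight();
Point lowerLeft = position.getLowerLeft();
Point lowerRight = position.getLowerRight();
// map these image coordinates to view coordinates before drawing an overlay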

Scanning one dimensional barcodes with PDF417.mobi's implementation

This section discusses the settings for setting up the 1D barcode recognizer that uses PDF417.mobi's implementation of scanning algorithms and explains how to obtain results from that recognizer. Henceforth, the 1D barcode recognizer that uses PDF417.mobi's implementation of scanning algorithms will be referred to as the "Bardecoder recognizer".

Setting up Bardecoder recognizer

To activate the Bardecoder recognizer, you need to create a BarDecoderRecognizerSettings object and add it to the RecognizerSettings array. You can do this using the following code snippet:

private RecognizerSettings[] setupSettingsArray() {
	BarDecoderRecognizerSettings sett = new BarDecoderRecognizerSettings();
	// activate scanning of Code39 barcodes
	sett.setScanCode39(true);
	// activate scanning of Code128 barcodes
	sett.setScanCode128(true);
	// disable scanning of white barcodes on black background
	sett.setInverseScanning(false);
	// disable slower algorithm for low resolution barcodes
	sett.setTryHarder(false);

	// now add sett to recognizer settings array that is used to configure
	// recognition
	return new RecognizerSettings[] { sett };
}

As can be seen from the example, you can tweak Bardecoder recognition parameters with methods of BarDecoderRecognizerSettings.

setScanCode128(boolean)

Method activates or deactivates the scanning of Code128 1D barcodes. Default (initial) value is false.

setScanCode39(boolean)

Method activates or deactivates the scanning of Code39 1D barcodes. Default (initial) value is false.

setInverseScanning(boolean)

By setting this to true, you will enable scanning of barcodes with inverse intensity values (i.e. white barcodes on dark background). This option can significantly increase recognition time. Default is false.

setTryHarder(boolean)

By setting this to true, you will enable scanning of lower resolution barcodes at the cost of additional processing time. This option can significantly increase recognition time. Default is false.

Obtaining results from Bardecoder recognizer

The Bardecoder recognizer produces a BarDecoderScanResult. You can use the instanceof operator to check if an element in the results array is an instance of the BarDecoderScanResult class. See the following snippet for an example:

@Override
public void onScanningDone(RecognitionResults results) {
	BaseRecognitionResult[] dataArray = results.getRecognitionResults();
	for(BaseRecognitionResult baseResult : dataArray) {
		if(baseResult instanceof BarDecoderScanResult) {
			BarDecoderScanResult result = (BarDecoderScanResult) baseResult;
			
			// getBarcodeType getter will return a BarcodeType enum that will define
			// the type of the barcode scanned
			BarcodeType barType = result.getBarcodeType();
	        // getStringData getter will return the string version of barcode contents
			String barcodeData = result.getStringData();
			// getRawData getter will return the raw data information object of barcode contents
			BarcodeDetailedData rawData = result.getRawData();
			// BarcodeDetailedData contains information about barcode's binary layout, if you
			// are only interested in raw bytes, you can obtain them with getAllData getter
			byte[] rawDataBuffer = rawData.getAllData();
		}
	}
}

As you can see from the example, obtaining data is rather simple. You just need to call several methods of the BarDecoderScanResult object:

String getStringData()

This method will return the string representation of barcode contents.

BarcodeDetailedData getRawData()

This method will return the object that contains information about the barcode's binary layout. You can find information about that object in the javadoc. However, if you only need the byte array containing raw barcode data, you can call the getAllData method of the BarcodeDetailedData object.

String getExtendedStringData()

This method will return the string representation of extended barcode contents. This is available only if a barcode that supports extended encoding mode was scanned (e.g. Code 39).

BarcodeDetailedData getExtendedRawData()

This method will return the object that contains information about the barcode's binary layout when decoded in extended mode. You can find information about that object in the javadoc. However, if you only need the byte array containing raw barcode data, you can call the getAllData method of the BarcodeDetailedData object. This is available only if a barcode that supports extended encoding mode was scanned (e.g. Code 39).

getBarcodeType()

This method will return a BarcodeType enum that defines the type of barcode scanned.
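
Putting these getters together, a minimal sketch for preferring extended contents when they are available is shown below. It assumes the extended getter returns null for barcodes that were not decoded in extended mode - verify this behaviour in the javadoc.

// prefer extended contents when the barcode was decoded in extended mode
// (assumption: the getter returns null when extended contents are not available)
String contents = result.getExtendedStringData();
if (contents == null) {
    contents = result.getStringData();
}
BarcodeType type = result.getBarcodeType();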

Scanning barcodes with ZXing implementation

This section discusses the settings for setting up the barcode recognizer that uses ZXing's implementation of scanning algorithms and explains how to obtain results from it. PDF417.mobi uses ZXing's C++ port to support barcodes for which we still do not have our own scanning algorithms. Since ZXing's C++ port is not maintained anymore, we also provide updates and bugfixes to it inside our codebase.

Setting up ZXing recognizer

To activate the ZXing recognizer, you need to create a ZXingRecognizerSettings object and add it to the RecognizerSettings array. You can do this using the following code snippet:

private RecognizerSettings[] setupSettingsArray() {
	ZXingRecognizerSettings sett = new ZXingRecognizerSettings();
	// disable scanning of white barcodes on black background
	sett.setInverseScanning(false);
	// activate scanning of QR codes
	sett.setScanQRCode(true);

	// now add sett to recognizer settings array that is used to configure
	// recognition
	return new RecognizerSettings[] { sett };
}

As can be seen from the example, you can tweak ZXing recognition parameters with methods of ZXingRecognizerSettings. Note that some barcodes, such as Code 39, are also available for scanning with PDF417.mobi's implementation. You can choose to use only one implementation or both (just put both settings objects into the RecognizerSettings array). Using both implementations increases the chance of correct barcode recognition, but requires more processing time. Of course, we recommend using PDF417.mobi's implementation for supported barcodes.

setScanAztecCode(boolean)

Method activates or deactivates the scanning of Aztec 2D barcodes. Default (initial) value is false.

setScanCode128(boolean)

Method activates or deactivates the scanning of Code128 1D barcodes. Default (initial) value is false.

setScanCode39(boolean)

Method activates or deactivates the scanning of Code39 1D barcodes. Default (initial) value is false.

setScanDataMatrixCode(boolean)

Method activates or deactivates the scanning of Data Matrix 2D barcodes. Default (initial) value is false.

setScanEAN13Code(boolean)

Method activates or deactivates the scanning of EAN 13 1D barcodes. Default (initial) value is false.

setScanEAN8Code(boolean)

Method activates or deactivates the scanning of EAN 8 1D barcodes. Default (initial) value is false.

shouldScanITFCode(boolean)

Method activates or deactivates the scanning of ITF 1D barcodes. Default (initial) value is false.

setScanQRCode(boolean)

Method activates or deactivates the scanning of QR 2D barcodes. Default (initial) value is false.

setScanUPCACode(boolean)

Method activates or deactivates the scanning of UPC A 1D barcodes. Default (initial) value is false.

setScanUPCECode(boolean)

Method activates or deactivates the scanning of UPC E 1D barcodes. Default (initial) value is false.

setInverseScanning(boolean)

By setting this to true, you will enable scanning of barcodes with inverse intensity values (i.e. white barcodes on dark background). This option can significantly increase recognition time. Default is false.

setSlowThoroughScan(boolean)

Use this method to enable slower, but more thorough scan procedure when scanning barcodes. By default, this option is turned on.
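
For example, a configuration that scans several symbologies with a single ZXing recognizer, using only the setters listed above, might look like this:

private RecognizerSettings[] setupSettingsArray() {
    ZXingRecognizerSettings sett = new ZXingRecognizerSettings();
    // scan several symbologies with the single ZXing recognizer
    sett.setScanQRCode(true);
    sett.setScanDataMatrixCode(true);
    sett.setScanEAN13Code(true);
    sett.setScanUPCACode(true);
    // keep the slower but more thorough scan procedure (on by default)
    sett.setSlowThoroughScan(true);

    // now add sett to recognizer settings array that is used to configure
    // recognition
    return new RecognizerSettings[] { sett };
}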

Obtaining results from ZXing recognizer

The ZXing recognizer produces a ZXingScanResult. You can use the instanceof operator to check if an element in the results array is an instance of the ZXingScanResult class. See the following snippet for an example:

@Override
public void onScanningDone(RecognitionResults results) {
	BaseRecognitionResult[] dataArray = results.getRecognitionResults();
	for(BaseRecognitionResult baseResult : dataArray) {
		if(baseResult instanceof ZXingScanResult) {
			ZXingScanResult result = (ZXingScanResult) baseResult;
			
			// getBarcodeType getter will return a BarcodeType enum that will define
			// the type of the barcode scanned
			BarcodeType barType = result.getBarcodeType();
	        // getStringData getter will return the string version of barcode contents
			String barcodeData = result.getStringData();
		}
	}
}

As you can see from the example, obtaining data is rather simple. You just need to call several methods of the ZXingScanResult object:

String getStringData()

This method will return the string representation of barcode contents.

getBarcodeType()

This method will return a BarcodeType enum that defines the type of barcode scanned.

Translation and localization

PDF417.mobi can be localized to any language. If you are using RecognizerView in your custom scan activity, you should handle localization as in any other Android app - RecognizerView does not use strings or drawables, it only uses assets from the assets/microblink folder. Those assets must not be touched as they are required for recognition to work correctly.

However, if you use our builtin Pdf417ScanActivity, it will use resources packed with the library project to display strings and images on top of the camera view. We have already prepared strings in several languages which you can use out of the box. You can also modify those strings, or you can add your own language.

To use a language, you have to enable it from the code:

  • To enable usage of predefined language you should call method LanguageUtils.setLanguage(language, context). For example, you can set language like this:

     // define PDF417.mobi language
     LanguageUtils.setLanguage(Language.Croatian, this);
  • To enable usage of language that is not available in predefined language enum (for example, if you added your own language), you should call method LanguageUtils.setLanguageAndCountry(language, country, context). For example, you can set language like this:

     // define PDF417.mobi language
     LanguageUtils.setLanguageAndCountry("hr", "", this);

Adding new language

PDF417.mobi can easily be translated to other languages. The res folder in the LibPdf417Mobi.aar archive has a values folder which contains strings.xml - this file contains the English strings. In order to make e.g. a Croatian translation, create a folder values-hr in your project and put a copy of strings.xml inside it (you might need to extract the LibPdf417Mobi.aar archive to get access to those files). Then open that file and change the English strings into their Croatian versions.

Changing strings in the existing language

To modify an existing string, the best approach would be to:

  1. choose a language which you want to modify. For example Croatian ('hr').
  2. find strings.xml in the LibPdf417Mobi.aar archive, in the folder res/values-hr
  3. choose a string key which you want to change. For example, <string name="PhotoPayHelp">Help</string>
  4. in your project create a file strings.xml in the folder res/values-hr, if it doesn't already exist
  5. create an entry in that file with the value you want for the string. For example <string name="PhotoPayHelp">Pomoć</string>
  6. repeat for all the strings you wish to change

Embedding PDF417.mobi inside another SDK

When creating your own SDK which depends on PDF417.mobi, you should consider the following cases:

PDF417.mobi licensing model

PDF417.mobi supports two types of licenses:

  • application licenses
  • library licenses.

Application licenses

Application license keys are bound to the application's package name. This means that each app must have its own license key in order to be able to use PDF417.mobi. This model is appropriate when integrating PDF417.mobi directly into an app; however, if you are creating an SDK that depends on PDF417.mobi, you would need a separate PDF417.mobi license key for each of your clients using your SDK. This is not practical, so you should contact us at help.microblink.com and we can provide you with a library license key.

Library licenses

Library license keys are bound to the licensee name. You will provide your licensee name with your inquiry for a library license key. Unlike application license keys, library license keys must be set together with the licensee name:

  • when using Pdf417ScanActivity, you should provide the licensee name via the extra Pdf417ScanActivity.EXTRAS_LICENSEE, for example:

     // set the license key
     intent.putExtra(Pdf417ScanActivity.EXTRAS_LICENSE_KEY, "Enter_License_Key_Here");
     intent.putExtra(Pdf417ScanActivity.EXTRAS_LICENSEE, "Enter_Licensee_Here");
  • when using RecognizerView, you should use method that accepts both license key and licensee, for example:

     mRecognizerView.setLicenseKey("Enter_License_Key_Here", "Enter_Licensee_Here");

Ensuring the final app gets all resources required by PDF417.mobi

At the time of writing this documentation, Android does not have support for combining multiple AAR libraries into a single fat AAR. The problem is that resource merging is done while building the application, not while building the AAR, so the application must be aware of all its dependencies. There is no official Android way of "hiding" a third party AAR within your AAR.

This problem is usually solved with transitive Maven dependencies, i.e. when publishing your AAR to Maven you specify dependencies of your AAR so they are automatically referenced by app using your AAR. Besides this, there are also several other approaches you can try:

  • you can ask your clients to reference PDF417.mobi in their app when integrating your SDK
  • since the problem lies in the resource merging step, you can try avoiding it by ensuring your library does not use any component from PDF417.mobi that uses resources (i.e. Pdf417ScanActivity). You can perform custom UI integration while taking care that all resources (strings, layouts, images, ...) used are solely from your AAR, not from PDF417.mobi. Then, in your AAR you should not reference LibPdf417Mobi.aar as a gradle dependency; instead you should unzip it and copy its assets to your AAR’s assets folder, its classes.jar to your AAR’s lib folder (which should be referenced by gradle as a jar dependency) and the contents of its jni folder to your AAR’s src/main/jniLibs folder.
  • another approach is to use a third-party unofficial gradle script that aims to combine multiple AARs into a single fat AAR. Use such a script at your own risk.

Processor architecture considerations

PDF417.mobi is distributed with native library binaries for all processor architectures supported by Android.

ARMv7 architecture gives the ability to take advantage of hardware accelerated floating point operations and SIMD processing with NEON. This gives PDF417.mobi a huge performance boost on devices that have ARMv7 processors. Most new devices (all since 2012) have an ARMv7 processor, so it makes little sense not to take advantage of the performance boost those processors can give.

ARM64 is the new processor architecture that some new high end devices use. ARM64 processors are very powerful and can also take advantage of the new NEON64 SIMD instruction set to quickly process multiple pixels with a single instruction.

x86 architecture gives the ability to obtain native speed on x86 Android devices, like the Prestigio 5430. Without it, PDF417.mobi will not work on such devices, or it will run on top of the ARM emulator shipped with the device - this gives a huge performance penalty.

x86_64 architecture gives better performance than x86 on devices that use a 64-bit Intel Atom processor.

MIPS and MIPS64 architectures are used for devices with MIPS-compatible processors.

However, there are some issues to be considered:

  • ARMv7 processors understand the ARMv6 instruction set, but ARMv6 processors do not understand ARMv7 instructions.
  • if an ARMv7 processor executes ARMv6 code, it does not take advantage of hardware floating point acceleration and does not use SIMD operations
  • an ARMv7 build of the native library cannot be run on devices that do not have an ARMv7-compatible processor (a list of those old devices can be found here)
  • neither ARMv6 nor ARMv7 processors understand the x86 instruction set
  • x86 processors understand neither ARMv6 nor ARMv7 instruction sets
  • however, some x86 Android devices ship with a builtin ARM emulator - such devices are able to run ARM binaries (both ARMv6 and ARMv7), but with a performance penalty. There is also a risk that the builtin ARM emulator will not understand some specific ARM instruction and will crash.
  • ARM64 processors understand both ARMv6 and ARMv7 instruction sets, but neither ARMv6 nor ARMv7 processors understand ARM64 instructions
  • if an ARM64 processor executes ARMv6 code, it does not take advantage of hardware floating point acceleration and does not use SIMD operations
  • if an ARM64 processor executes ARMv7 code, it does not take advantage of modern NEON64 SIMD operations and does not take advantage of its 64-bit registers - it runs in emulation mode
  • x86_64 processors understand the x86 instruction set, but x86 processors do not understand the x86_64 instruction set
  • if an x86_64 processor executes x86 code, it does not take advantage of 64-bit registers and uses two instructions instead of one for 64-bit operations
  • MIPS processors understand only the MIPS instruction set, while MIPS64 processors understand both MIPS and MIPS64 instruction sets

LibPdf417Mobi.aar archive contains builds of native library for all available architectures. By default, when you integrate PDF417.mobi into your app, your app will contain native builds for all processor architectures. Thus, PDF417.mobi will work on all devices and will use specific processor features where it can, e.g. ARMv7 features on ARMv7 devices and ARM64 features on ARM64 devices. However, the size of your application will be rather large.

Reducing the final size of your app

If your final app is too large because of PDF417.mobi, you can decide to create multiple flavors of your app - one flavor for each architecture. With gradle and Android studio this is very easy - just add the following code to build.gradle file of your app:

android {
  ...
  splits {
    abi {
      enable true
      reset()
      include 'x86', 'armeabi-v7a', 'armeabi', 'arm64-v8a', 'mips', 'mips64', 'x86_64'
      universalApk true
    }
  }
}

With these build instructions, gradle will build a separate APK for each architecture listed in the include directive, plus one universal APK that contains all architectures. In order for Google Play to accept multiple APKs of the same app, you need to ensure that each APK has a different version code. This can easily be done by defining a version code prefix that depends on the architecture and adding the real version code number to it, as in the following gradle script:

// map for the version code
def abiVersionCodes = ['armeabi':1, 'armeabi-v7a':2, 'arm64-v8a':3, 'mips':4, 'mips64':5, 'x86':6, 'x86_64':7]

import com.android.build.OutputFile

android.applicationVariants.all { variant ->
    // assign different version code for each output
    variant.outputs.each { output ->
        def filter = output.getFilter(OutputFile.ABI)
        if(filter != null) {
            output.versionCodeOverride = abiVersionCodes.get(output.getFilter(OutputFile.ABI)) * 1000000 + android.defaultConfig.versionCode
        }
    }
}

For more information about creating APK splits with gradle, check this article from Google.

After generating multiple APKs, you need to upload them to Google Play. For a tutorial and rules about uploading multiple APKs to Google Play, please read the official Google article about multiple APKs.

Removing processor architecture support in gradle without using APK splits

If you will not be distributing your app via Google Play, or if for some other reason you want a single APK of smaller size, you can completely remove support for a certain CPU architecture from your APK. This is not recommended because it has the consequences described below.

To remove a certain CPU architecture, add the following statement to your android block inside build.gradle:

android {
	...
	packagingOptions {
		exclude 'lib/<ABI>/libPdf417Mobi.so'
	}
}

where <ABI> represents the CPU architecture you want to remove:

  • to remove ARMv6 support, use exclude 'lib/armeabi/libPdf417Mobi.so'
  • to remove ARMv7 support, use exclude 'lib/armeabi-v7a/libPdf417Mobi.so'
  • to remove x86 support, use exclude 'lib/x86/libPdf417Mobi.so'
  • to remove ARM64 support, use exclude 'lib/arm64-v8a/libPdf417Mobi.so'
  • to remove x86_64 support, use exclude 'lib/x86_64/libPdf417Mobi.so'
  • to remove MIPS support, use exclude 'lib/mips/libPdf417Mobi.so'
  • to remove MIPS64 support, use exclude 'lib/mips64/libPdf417Mobi.so'

You can also remove multiple processor architectures by specifying the exclude directive multiple times. Just bear in mind that removing a processor architecture will have side effects on the performance and stability of your app. Please read this for more information.

Removing processor architecture support in Eclipse

This section assumes that you have set up and prepared your Eclipse project from LibPdf417Mobi.aar as described in chapter Eclipse integration instructions.

If you are using Eclipse, removing processor architecture support gets really complicated. Eclipse does not support build flavors, so you will either need to remove support for some processors or create several different library projects from LibPdf417Mobi.aar - each one for a specific processor architecture.

Native libraries in the Eclipse library project are located in the subfolder libs:

  • libs/armeabi contains native libraries for ARMv6 processor architecture
  • libs/armeabi-v7a contains native libraries for ARMv7 processor architecture
  • libs/x86 contains native libraries for x86 processor architecture
  • libs/arm64-v8a contains native libraries for ARM64 processor architecture
  • libs/x86_64 contains native libraries for x86_64 processor architecture
  • libs/mips contains native libraries for MIPS processor architecture
  • libs/mips64 contains native libraries for MIPS64 processor architecture

To remove support for a processor architecture, you should simply delete the appropriate folder inside the Eclipse library project:

  • to remove ARMv6 support, delete folder libs/armeabi
  • to remove ARMv7 support, delete folder libs/armeabi-v7a
  • to remove x86 support, delete folder libs/x86
  • to remove ARM64 support, delete folder libs/arm64-v8a
  • to remove x86_64 support, delete folder libs/x86_64
  • to remove MIPS support, delete folder libs/mips
  • to remove MIPS64 support, delete folder libs/mips64

Consequences of removing processor architecture

However, removing a processor architecture has some consequences:

  • by removing ARMv6 support PDF417.mobi will not work on devices that have ARMv6 processors.
  • by removing ARMv7 support, PDF417.mobi will still work on devices that have ARMv6, ARMv7 or ARM64 processors. However, on ARMv7 and ARM64 processors, hardware floating point and SIMD acceleration will not be used, making PDF417.mobi much slower. Our internal tests have shown that running the ARMv7 version of PDF417.mobi on an ARMv7 device is more than 50% faster than running the ARMv6 version on the same device.
  • by removing ARM64 support, PDF417.mobi will not use ARM64 features on ARM64 device
  • by removing x86 support, PDF417.mobi will not work on devices that have an x86 processor, except in situations when the device has an ARM emulator - in that case, PDF417.mobi will work, but will be slow
  • by removing x86_64 support, PDF417.mobi will not use 64-bit optimizations on x86_64 processor, but if x86 support is not removed, PDF417.mobi should work
  • by removing MIPS support, PDF417.mobi will not work on MIPS processors
  • by removing MIPS64 support, PDF417.mobi will not utilize MIPS64 optimizations on MIPS64 processor, but if MIPS support is not removed, PDF417.mobi should work

Our recommendation is to include all architectures in your app - it will work on all devices and will provide the best user experience. However, if you really need to reduce the size of your app, we recommend releasing a separate version of your app for each processor architecture. It is easiest to do that with APK splits.

Combining PDF417.mobi with other native libraries

If you are combining the PDF417.mobi library with other libraries that contain native code in your application, make sure you match the architectures of all native libraries. For example, if a third party library ships only ARMv6 and x86 versions, you must use exactly the ARMv6 and x86 versions of PDF417.mobi with that library, and not ARMv7, ARM64 or any other. Using mismatched architectures will crash your app during initialization, because the JVM tries to load all native dependencies for the same preferred architecture - for example, if the device prefers ARMv7 native libraries, it will see that there is a PDF417.mobi ARMv7 native library and will load it. After that, it will try to load the ARMv7 version of your third party library, which does not exist - therefore the app will crash with UnsatisfiedLinkError.

Troubleshooting

Integration problems

In case of problems with integration of the SDK, first make sure that you have tried integrating it into Android Studio by following the integration instructions. Although we do provide Eclipse ADT integration instructions, we officially do not support Eclipse ADT anymore. For any other IDEs, unfortunately, you are on your own.

If you have followed Android Studio integration instructions and are still having integration problems, please contact us at help.microblink.com.

SDK problems

In case of problems with using the SDK, you should do as follows:

Licensing problems

If you are getting an "invalid licence key" error or having other licence-related problems (e.g. a feature that should be enabled is not, or there is a watermark on top of the camera view), first check the ADB logcat. All licence-related problems are logged to the error log, so it is easy to determine what went wrong.

When you have determined what the licence-related problem is, or if you simply do not understand the log, contact us at help.microblink.com. When contacting us, please make sure you provide the following information:

  • exact package name of your app (from your AndroidManifest.xml and/or your build.gradle file)
  • licence key that is causing problems
  • please stress that you are reporting a problem related to the Android version of the PDF417.mobi SDK
  • if unsure about the problem, you should also provide an excerpt from ADB logcat containing the licence error

Other problems

If you are having problems with scanning certain items, undesired behaviour on specific device(s), crashes inside PDF417.mobi or anything else not mentioned above, please do as follows:

  • enable logging so you can see what the library is doing. To enable logging, put this line in your application:

     com.microblink.util.Log.setLogLevel(com.microblink.util.Log.LogLevel.LOG_VERBOSE);

    After this line, the library will log as much information about its work as possible. Please save the entire log of the scanning session to a file that you will send to us. It is important to send the entire log, not just the part where the crash occurred, because crashes are sometimes caused by unexpected behaviour in the early stage of library initialization.

  • Contact us at help.microblink.com describing your problem and provide the following information:

    • log file obtained in the previous step
    • high resolution scan/photo of the item that you are trying to scan
    • information about the device you are using - we need the exact model name of the device. You can obtain that information with this app
    • please stress that you are reporting a problem related to the Android version of the PDF417.mobi SDK

Additional info

Complete API reference can be found in Javadoc.

For any other questions, feel free to contact us at help.microblink.com.