Building a Barcode and a QR Code Reader Application for iOS using Swift 4: PT2

Category: Swift

This is the second part of a guide on building a barcode and QR code reader application with Swift 4. In the previous article we defined an AVCaptureSession, passed it the input source (the camera device), and defined the output using AVCaptureMetadataOutput, which detects metadata types such as barcodes and QR codes.

In this part we will display the camera input on the screen via AVCaptureVideoPreviewLayer, then implement the method that AVCaptureMetadataOutput triggers when it detects the metadata we are looking for, and translate the contents of a barcode or QR code into human-readable data.

Before we implement any of the above, we need to set permissions for the camera. If you tried launching the application now, it would crash, because iOS requires that we ask users for permission before the application uses the camera. You would also see an error log in the console explaining exactly that.

To gain permission we need to edit the Info.plist file by adding a new property, Privacy - Camera Usage Description, with a string value such as Gain access to camera for scanning. You can enter whatever message you want; iOS shows it to the user in the permission prompt.

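If you prefer editing Info.plist as source code (right-click the file in Xcode and choose Open As > Source Code), the same property lives under its raw key, NSCameraUsageDescription:

<key>NSCameraUsageDescription</key>
<string>Gain access to camera for scanning</string>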

Now that we have permission to access the camera, let's display the camera capture on the screen so that the user knows where they are pointing it.

First let's define an instance property on our class of type AVCaptureVideoPreviewLayer:

var videoPreviewLayer: AVCaptureVideoPreviewLayer?

The AVCaptureVideoPreviewLayer is a Core Animation layer that can display video as it is being captured. You use this preview layer in conjunction with an AV capture session: when we initialize the preview layer, we have to pass it the capture session.

The layer itself is a rectangle, and we give it a specific size by setting its .frame property, for example to view.layer.bounds so that the video input fills the whole screen. The AVCaptureVideoPreviewLayer class also has a .videoGravity property that indicates how the video is displayed within the layer's bounds rect. These two properties are required to set it up properly. Then we add it to the view's layer as a sublayer so it is displayed on the screen:

videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
videoPreviewLayer?.videoGravity = .resizeAspectFill
videoPreviewLayer?.frame = view.layer.bounds
view.layer.addSublayer(videoPreviewLayer!)

The whole code so far is given below. If you run it on an actual device you should see the input source from the camera on the screen. Our application already detects the barcode, but we are not yet telling the user that it has been detected.

//
//  ScannerViewController.swift
//  BarcodeScanner
//
//  Created by Mikheil Gotiashvili on 7/14/17.
//  Copyright © 2017 Mikheil Gotiashvili. All rights reserved.
//

import UIKit
import AVFoundation

class ScannerViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {

    var captureDevice: AVCaptureDevice?
    var videoPreviewLayer: AVCaptureVideoPreviewLayer?
    
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        navigationItem.title = "Scanner"
        view.backgroundColor = .white
        
        captureDevice = AVCaptureDevice.default(for: .video)
        // Check if captureDevice returns a value and unwrap it
        if let captureDevice = captureDevice {
        
            do {
                let input = try AVCaptureDeviceInput(device: captureDevice)
                
                let captureSession = AVCaptureSession()
                captureSession.addInput(input)
                
                let captureMetadataOutput = AVCaptureMetadataOutput()
                captureSession.addOutput(captureMetadataOutput)
                
                captureMetadataOutput.setMetadataObjectsDelegate(self, queue: .main)
                captureMetadataOutput.metadataObjectTypes = [.code128, .qr, .ean13, .ean8, .code39] // AVMetadataObject.ObjectType values we want to detect
                
                captureSession.startRunning()
                
                videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                videoPreviewLayer?.videoGravity = .resizeAspectFill
                videoPreviewLayer?.frame = view.layer.bounds
                view.layer.addSublayer(videoPreviewLayer!)
                
            } catch {
                print("Error creating device input: \(error)")
            }
            
        }
        
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

}
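A side note on the session setup above: addInput(_:) and addOutput(_:) raise an exception if the session cannot accept the input or output. A slightly more defensive variant of those lines (our own adjustment, not part of the original tutorial) checks first:

let input = try AVCaptureDeviceInput(device: captureDevice)

let captureSession = AVCaptureSession()
// Only add the input and output if the session accepts them
if captureSession.canAddInput(input) {
    captureSession.addInput(input)
}

let captureMetadataOutput = AVCaptureMetadataOutput()
if captureSession.canAddOutput(captureMetadataOutput) {
    captureSession.addOutput(captureMetadataOutput)
}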

Now it's time to implement the method from AVCaptureMetadataOutputObjectsDelegate that informs the delegate (our ScannerViewController) that the capture output emitted new metadata objects.

The single method is called metadataOutput(_:didOutput:from:) and looks like this:

func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) { ... }

If you want to take a look at the documentation of this optional method, check the Apple Docs. Our point of interest is metadataObjects, an array of AVMetadataObject instances representing the newly emitted metadata. Put briefly, this array will contain the code that has been read from the barcode or QR code, and from it we can get the string value for further processing.
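One more thing worth knowing before we write it: every AVMetadataMachineReadableCodeObject also carries a type property, so you can distinguish, say, a QR code from an EAN-13 barcode. Here is a minimal sketch (the per-type handling is just an illustration; the cases shown are among the types we registered earlier):

if let readableObject = metadataObjects.first as? AVMetadataMachineReadableCodeObject {
    switch readableObject.type {
    case .qr:
        print("QR Code: \(readableObject.stringValue ?? "")")
    case .ean13:
        print("EAN-13 barcode: \(readableObject.stringValue ?? "")")
    default:
        print("Other code type: \(readableObject.type.rawValue)")
    }
}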

Back to our implementation: inside the metadataOutput function we will first check that the returned array of metadata objects contains any data, and if it does, we will output the string value to a text label:

    func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
        if metadataObjects.isEmpty {
            print("No Input Detected")
            return
        }
        
        // Cast safely instead of force-casting, and make sure the code carries a string value
        guard let metadataObject = metadataObjects[0] as? AVMetadataMachineReadableCodeObject,
            let stringCodeValue = metadataObject.stringValue else { return }
        
        // Assign the decoded string to a label (one way to create codeLabel is sketched below)
        codeLabel.text = stringCodeValue
        // Perform further logic as needed (e.g. redirect to another view controller)
    }
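The codeLabel above is referenced but never created in this guide, so here is a minimal sketch of one way to define it (the name codeLabel and the layout values are our own choices, not from the original code). Add it as a subview in viewDidLoad after the preview layer, so that it sits on top of the video:

let codeLabel: UILabel = {
    let label = UILabel()
    label.translatesAutoresizingMaskIntoConstraints = false
    label.textAlignment = .center
    label.textColor = .white
    label.backgroundColor = UIColor.black.withAlphaComponent(0.6)
    return label
}()

// In viewDidLoad, after view.layer.addSublayer(videoPreviewLayer!):
view.addSubview(codeLabel)
NSLayoutConstraint.activate([
    codeLabel.leadingAnchor.constraint(equalTo: view.leadingAnchor),
    codeLabel.trailingAnchor.constraint(equalTo: view.trailingAnchor),
    codeLabel.bottomAnchor.constraint(equalTo: view.bottomAnchor, constant: -40),
    codeLabel.heightAnchor.constraint(equalToConstant: 50)
])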

So far so good. Now your application should be able to preview and detect input from the camera.

Conclusion

We have pretty much implemented barcode and QR code scanning in our application, but there are a few more things to fix and polish before we call it a final product. For example, we would love to display a new view where we pass the captured code, stop running the capture session, and have some kind of notification that the data has been read besides the message label. We will implement a visual notification by outlining the barcode or QR code on the screen, so that the user knows which barcode has been detected in case there are multiple in the view, and add a sound notification and vibration when the data has been read. If you have any comments, suggestions, or criticism, please reach me by email or follow me on Twitter. Cheers!