Building a Barcode and QR Code Reader Application for iOS using Swift 4: PT1

Category: Swift

This article gives you an overview, by example, of how a barcode and QR code application is developed in Swift 4 using Apple’s built-in AVFoundation framework. Let’s get started!

Many articles and video tutorials have been published on developing a barcode and QR code reader for iOS devices. What I don’t like about many of them is the lack of general explanation. Most tutorials give you sample code with very few explanations, and you are expected to just copy and paste it into your project. That is very helpful if copying and pasting is all you want, but what I love when reading and learning something new is to have a general idea of what I’m about to do.

INFO: I’m planning to expand this article into a series and write follow-up articles on checking inventory against an existing database - filtering the data, reading from the database, and many more features that a real-world asset management scanning application should have. So fingers crossed for that.

Let’s create a Single View Application and start building the barcode and QR code reader in the existing view controller. I renamed ViewController.swift to ScannerViewController.swift.

I prefer building the UI of my applications programmatically because it’s very simple to do, and I feel I have more flexibility writing everything myself than using Storyboards. If you want to explore building cool UIs programmatically, I highly recommend visiting LetsBuildThatApp on YouTube. Brian is phenomenal at building really cool stuff and will give you all the tools you need to start building your own applications programmatically - and it’s very simple!

As already mentioned, the application uses Apple’s AVFoundation framework, which provides an Objective-C interface for managing and playing audio-visual media in iOS and macOS applications. We need to import AVFoundation into ScannerViewController.swift.
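At the top of ScannerViewController.swift, add the import alongside UIKit:

import UIKit
import AVFoundation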

The main object we will be defining in this project is:

AVCaptureSession - An object that performs capture activity and coordinates the flow of data from input devices to capture outputs. We invoke startRunning() to start the flow of data from the inputs to the outputs, and invoke stopRunning() to stop the flow. According to Apple’s documentation the startRunning() method is a blocking call which can take some time, therefore you should perform session setup on a serial queue so that the main queue, which keeps the UI responsive, isn't blocked.
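As a minimal sketch of Apple’s advice (the queue label is my own example, not part of this project’s final code), setting up the session on a serial queue could look like this:

let captureSession = AVCaptureSession()
// A private serial queue keeps session work off the main thread
let sessionQueue = DispatchQueue(label: "com.example.scanner.sessionQueue")

sessionQueue.async {
    // Perform configuration and the blocking startRunning() call off the main queue
    captureSession.startRunning()
}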

To work with AVCaptureSession we need to tell it the input source from which it will be grabbing data - whether that is an audio (microphone) or video (camera) device.

To reference a physical capture device (such as a microphone or camera) we use an AVCaptureDevice object. To provide media from a capture device to a capture session we use AVCaptureDeviceInput. In other words, it LINKS the device and the capture session together.

Here’s how you implement it:
First you declare a variable of type AVCaptureDevice as an optional - var captureDevice: AVCaptureDevice? - which we will assign later.

Inside the viewDidLoad function (it could be any other function in your code that you call when you want to start reading barcodes; in our case we have a single view where all the reading happens), place the following code:

captureDevice = AVCaptureDevice.default(for: .video)

With this line of code we assign the default AV capture device for video - that is, the camera (AVMediaType.video) - to the captureDevice variable. Before we proceed we need to unwrap captureDevice, because we declared it earlier as an optional.

Next we try to create the capture input from the device we initialized above, like this:

let input = try AVCaptureDeviceInput(device: captureDevice)

The AVCaptureDeviceInput initializer can throw an error, so we have to mark the call with try and wrap the whole thing in a do/catch block.
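In isolation, that wrapper looks roughly like this (the body will grow as we add the session code below):

do {
    let input = try AVCaptureDeviceInput(device: captureDevice)
    // ... use input here ...
} catch {
    // The initializer threw, e.g. because the device is unavailable
    print(error)
}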

And, as already mentioned, we pass the input variable to the AVCaptureSession.

let captureSession = AVCaptureSession()
captureSession.addInput(input)

At this point we could invoke startRunning() to start the flow of data from the AVCaptureDevice through AVCaptureDeviceInput to AVCaptureSession, but we still need to configure an output that tells the application we are looking for a barcode or a QR code; otherwise it would just capture useless video from the camera.

What we want to do next is tell AVCaptureSession what kind of output data we expect from the camera. In our case the output has to be metadata captured while pointing the camera at a barcode. AVCaptureMetadataOutput is used to process timed metadata produced by a capture session.

Apple’s documentation says: “An AVCaptureMetadataOutput object intercepts metadata objects emitted by its associated capture connection and forwards them to a delegate object for processing. You can use instances of this class to process specific types of metadata included with the input data. You use this class the way you do other output objects, typically by adding it as an output to an AVCaptureSession object.”

So what’s going on?
When we point the camera at an item, we have to specify that we are looking for barcode or QR code information and trigger some event when it is detected.

The AVCaptureMetadataOutputObjectsDelegate protocol must be adopted by the delegate of an AVCaptureMetadataOutput object. It contains a single optional method that allows a delegate to respond when a capture metadata output object receives relevant metadata objects through its connection.

Hence we have to tell AVCaptureMetadataOutput that its delegate will be our class, and our class has to adopt the AVCaptureMetadataOutputObjectsDelegate protocol.

To adopt the above-mentioned protocol, simply add AVCaptureMetadataOutputObjectsDelegate to our class definition:

class ScannerViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate { … }
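For reference, the protocol’s single optional method has the following Swift 4 signature; we will implement its body in the next part:

func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
    // Called on the delegate queue whenever relevant metadata (e.g. a barcode) is detected
}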

Now let’s define AVCaptureMetadataOutput as an output for AVCaptureSession and then tell it what kind of metadata we expect to read through the camera. In this case I have selected a few common types - .code128 (the Code 128 barcode type), .qr (QR codes), .ean13, .ean8 and .code39. The final code will include all possible barcode types as well as QR codes.

let captureMetadataOutput = AVCaptureMetadataOutput()
captureSession.addOutput(captureMetadataOutput)
                
captureMetadataOutput.setMetadataObjectsDelegate(self, queue: .main)
captureMetadataOutput.metadataObjectTypes = [.code128, .qr, .ean13, .ean8, .code39] // values of AVMetadataObject.ObjectType

Once again, if we look through the code we see that we first initialized the metadata output and passed it to AVCaptureSession as an output. Then we set our class as the delegate to be notified when the metadata output receives relevant metadata, and we specified the dispatch queue on which to execute the delegate’s methods. A dispatch queue can be either serial or concurrent; according to Apple’s documentation, this queue must be serial. So we used DispatchQueue.main, the serial queue associated with the main thread.
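If you would rather keep the delegate callbacks off the main queue, you could pass your own serial queue instead (the label below is just an example):

// A queue created with DispatchQueue(label:) is serial by default
let metadataQueue = DispatchQueue(label: "com.example.scanner.metadataQueue")
captureMetadataOutput.setMetadataObjectsDelegate(self, queue: metadataQueue)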

And now you can invoke startRunning() to let AVCaptureSession start capturing the video and outputting the captured data by simply writing the following:

captureSession.startRunning()

NOTE: If you try launching the application on a real device you will see a huge error in the console, and your application might crash. The reason is that we haven’t requested permission to access the camera (since iOS 10 an app must also declare its camera usage with the NSCameraUsageDescription key in Info.plist). We’ll be doing that in the next article.
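As a quick preview of what that will involve (this is not the final implementation), after adding the NSCameraUsageDescription entry to Info.plist you ask the user for access at runtime:

AVCaptureDevice.requestAccess(for: .video) { granted in
    if granted {
        // Safe to configure and start the capture session
    }
}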

The code so far looks like this:

//
//  ScannerViewController.swift
//  BarcodeScanner
//
//  Created by Mikheil Gotiashvili on 7/14/17.
//  Copyright © 2017 Mikheil Gotiashvili. All rights reserved.
//

import UIKit
import AVFoundation

class ScannerViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {

    var captureDevice: AVCaptureDevice?
    
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        navigationItem.title = "Scanner"
        view.backgroundColor = .white
        
        captureDevice = AVCaptureDevice.default(for: .video)
        // Check if captureDevice returns a value and unwrap it
        if let captureDevice = captureDevice {
        
            do {
                let input = try AVCaptureDeviceInput(device: captureDevice)
                
                let captureSession = AVCaptureSession()
                captureSession.addInput(input)
                
                let captureMetadataOutput = AVCaptureMetadataOutput()
                captureSession.addOutput(captureMetadataOutput)
                
                captureMetadataOutput.setMetadataObjectsDelegate(self, queue: .main)
                captureMetadataOutput.metadataObjectTypes = [.code128, .qr, .ean13, .ean8, .code39] // values of AVMetadataObject.ObjectType
                
                captureSession.startRunning()
                
            } catch {
                // The AVCaptureDeviceInput initializer threw - log the actual error
                print("Error creating device input: \(error)")
            }
            
        }
        
    }
}

Conclusion

We are halfway there! You have reached the end of the first part of this article on building a barcode and QR code reader application for iOS using Swift 4. So far our application is able to start a capture session from the camera and knows what kind of metadata to look for. In the next article we will display the captured video on the screen and implement the method that lets us respond when the capture metadata output receives relevant metadata objects, so that we can display the information encoded in the barcode on the application’s screen.