Implement the InceptionV3 model in a UIKit project with Swift

Using the InceptionV3 model, we will detect objects in a photo and check what they are.

Step 1: First of all, open Xcode and create a new project. Then download the InceptionV3 model from the link below:

https://docs-assets.developer.apple.com/coreml/models/Inceptionv3.mlmodel

You can find more models at this link: https://developer.apple.com/machine-learning/build-run-models/

Step 2: Now add the Inceptionv3.mlmodel file to your project. Make sure the model's target membership includes your app target; otherwise Xcode will not generate the Inceptionv3 class we use later.

Step 3: Now we will design our demo app. We will add a button that opens the camera and an image view where we will show the captured photo.

class ViewController: UIViewController {

    var cameraButton = UIButton(frame: CGRect(x: 200, y: 100, width: 80, height: 50))
    var imageView = UIImageView(frame: CGRect(x: 10, y: 350, width: 400, height: 400))

    override func viewDidLoad() {
        super.viewDidLoad()
        uiSetup()
    }

    private func uiSetup() {
        self.view.addSubview(cameraButton)
        cameraButton.backgroundColor = .red
        cameraButton.setTitle("Camera", for: .normal)
        cameraButton.addTarget(self, action: #selector(cameraButtonTapped), for: .touchUpInside)

        self.view.addSubview(imageView)
        imageView.backgroundColor = .red
    }

    // Empty for now; we will present the camera here in Step 4.
    @objc func cameraButtonTapped() {
    }
}

Step 4: It’s time to show the camera. When the user taps the Camera button, the camera will open and the user can capture a picture. Conform the ViewController class to two delegate protocols (UIImagePickerControllerDelegate and UINavigationControllerDelegate), then set up the code as below:

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    
    var cameraButton = UIButton(frame: CGRect(x: 200, y: 100, width: 80, height: 50))
    var imageView = UIImageView(frame: CGRect(x: 10, y: 350, width: 400, height: 400))
    
    let imagePicker = UIImagePickerController()
    
    override func viewDidLoad() {
        super.viewDidLoad()
        
        imagePicker.delegate = self
        imagePicker.sourceType = .camera
        imagePicker.allowsEditing = false
        
        uiSetup()
        
    }
    
    private func uiSetup() {
        self.view.addSubview(cameraButton)
        cameraButton.backgroundColor = .red
        cameraButton.setTitle("Camera", for: .normal)
        cameraButton.addTarget(self, action: #selector(cameraButtonTapped), for: .touchUpInside)
        
        self.view.addSubview(imageView)
        imageView.backgroundColor = .red
    }
    
    @objc func cameraButtonTapped() {
        print("✅ cameraButtonTapped")
        present(imagePicker, animated: true, completion: nil)
    }
}
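Note that the `.camera` source type is unavailable on the Simulator, and assigning it on a device without a camera raises an exception. A more defensive tap handler (a sketch; the photo-library fallback is my assumption, not part of the original tutorial) could look like this:

```swift
@objc func cameraButtonTapped() {
    // The Simulator has no camera; fall back to the photo library so the demo still runs.
    if UIImagePickerController.isSourceTypeAvailable(.camera) {
        imagePicker.sourceType = .camera
    } else {
        imagePicker.sourceType = .photoLibrary
    }
    present(imagePicker, animated: true, completion: nil)
}
```

With this variant you would also move the `sourceType` assignment out of viewDidLoad, since it is now decided at tap time.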

After tapping the Camera button you will get a crash, because you haven’t set the camera privacy string. Add the NSCameraUsageDescription key (shown as “Privacy – Camera Usage Description” in the editor) to your Info.plist file.
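If you prefer to edit Info.plist as source code, the entry looks like this (the description string below is just an example; write your own user-facing reason):

```xml
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to capture a photo for object detection.</string>
```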

Step 5: Before working with the model, we first need to receive the captured photo, show it in the image view, and convert it to a CIImage. Implement the delegate method below:

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        if let userPickedImage = info[UIImagePickerController.InfoKey.originalImage] as? UIImage {
            imageView.image = userPickedImage
            
            // Convert the UIImage to a CIImage so Vision can process it
            guard let ciimage = CIImage(image: userPickedImage) else {
                fatalError("Could not convert UIImage into CIImage")
            }
            // Pass the image to the detect(image:) function we write in the next step
            detect(image: ciimage)
        }
        picker.dismiss(animated: true)
    }

Step 6: Now import Vision and CoreML to work with the model. We will write a function that takes the photo, runs it through the classifier, and tells us what the model thinks it is.

    func detect(image: CIImage) {
        // Wrap the Core ML model so Vision can drive it
        guard let model = try? VNCoreMLModel(for: Inceptionv3().model) else {
            fatalError("Loading CoreML model failed")
        }
        
        let request = VNCoreMLRequest(model: model) { request, error in
            guard let results = request.results as? [VNClassificationObservation] else {
                fatalError("Model failed to process image")
            }
            print(results)
            // Observations are sorted by confidence; the first one is the best guess
            if let firstResult = results.first {
                if firstResult.identifier.contains("banana") {
                    print("✅ This is a banana")
                } else {
                    print("🥹 This is not a banana")
                }
            }
        }
        
        let handler = VNImageRequestHandler(ciImage: image)
        
        do {
            try handler.perform([request])
        } catch {
            print(error)
        }
    }
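If you want more than a yes/no answer, the completion handler can also report the top few classifications with their confidence scores. A small sketch (the count of three and the formatting are arbitrary choices, not part of the original code):

```swift
// Inside the VNCoreMLRequest completion handler, after the guard:
let top3 = results.prefix(3).map { observation in
    String(format: "%@ (%.1f%%)", observation.identifier, observation.confidence * 100)
}
print(top3.joined(separator: ", "))
```

Each VNClassificationObservation carries an `identifier` (the ImageNet label) and a `confidence` between 0 and 1, so this prints a readable summary of the model's best guesses.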

GitHub source code: https://github.com/Joynal279/Object-Detect-ML-model
