This article is the 16th day of the Flutter #1 Advent Calendar 2020.
2020 was an eventful year. Looking back at it from a technical perspective, I spent most of the year immersed in Flutter, both for business and for pleasure.
I made a lot of apps, but I had never built one involving machine learning, so I took the Advent Calendar as an opportunity to make the following object detection app.
This article explains how to create a real-time object detection application using Flutter.
TensorFlow Lite will be used as the machine learning framework.
TensorFlow Lite (hereinafter referred to as “TFLite”) is a deep learning framework for performing inference on mobile devices.
tflite_flutter is a library that binds the C++ API of TFLite with dart:ffi and makes it available from Flutter.
In addition, tflite_flutter_helper is a library for preprocessing images in TFLite.
Here, we will use tflite_flutter and tflite_flutter_helper to call TFLite from Flutter.
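As a quick orientation before setting anything up, here is roughly what the tflite_flutter API looks like. This is a minimal sketch; the model name and tensor shapes are placeholders, not from this app.

import 'package:tflite_flutter/tflite_flutter.dart';

// Load a model bundled under assets/ and run inference once.
// 'model.tflite' and the tensor shapes below are placeholders.
Future<void> runOnce() async {
  final interpreter = await Interpreter.fromAsset('model.tflite');
  final input = [List<double>.filled(10, 0.0)]; // shape [1, 10]
  final output = [List<double>.filled(2, 0.0)]; // shape [1, 2]
  interpreter.run(input, output);
  print(output);
}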
First, add the list of libraries to be used in this application to pubspec.yaml.
dependencies:
  flutter:
    sdk: flutter
  hooks_riverpod: ">=0.12.0 <1.0.0"
  flutter_hooks: ">=0.14.0 <1.0.0"
  tflite_flutter: ">=0.5.0 <1.0.0"
  tflite_flutter_helper: ">=0.1.2 <1.0.0"
  camera: ">=0.5.8 <1.0.0"
After adding these, install them with the following command.
flutter pub get
Once the packages are installed, the next step is to download the TFLite binaries. To do this, first download the following script: install.sh for Linux and Mac, or install.bat for Windows.
Place the downloaded script in the root of your project.
Note that this script uses wget, so if you are using a Mac, install wget with the following command.
brew install wget
Then you can install TFLite with the following command.
# Linux, Mac
sh install.sh

# Windows
install.bat
If you want to use a Delegate in TFLite, run the install script with the -d argument: sh install.sh -d or install.bat -d. A Delegate is a mechanism that hands off part or all of the TFLite graph to an executor other than the CPU, which can improve inference latency and power efficiency.
# Linux, Mac
sh install.sh -d

# Windows
install.bat -d
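On the code side, a delegate is attached through InterpreterOptions when the interpreter is created. The sketch below assumes the GpuDelegateV2 class that tflite_flutter provides for Android; the exact option classes may differ by library version, so check the package documentation.

import 'package:tflite_flutter/tflite_flutter.dart';

// Attach the Android GPU delegate when creating the interpreter, so that
// part or all of the graph runs on the GPU instead of the CPU.
// 'detect.tflite' is the model we place under assets/ in the next step.
Future<Interpreter> createInterpreterWithDelegate() async {
  final options = InterpreterOptions()..addDelegate(GpuDelegateV2());
  return Interpreter.fromAsset('detect.tflite', options: options);
}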
Once you have finished downloading TFLite, click on the following link to download the model and labels.
coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip
Unzip the downloaded coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip and place the extracted detect.tflite and labelmap.txt under <project>/assets.
Also, add the following to pubspec.yaml so that you can refer to the files in assets.
flutter:
  assets:
    - assets/
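Once these are in place, the model and labels can be loaded from Dart, as in this minimal sketch. Note the asymmetry I am assuming here: Interpreter.fromAsset takes a path relative to assets/, while tflite_flutter_helper's FileUtil.loadLabels takes the full asset path.

import 'package:tflite_flutter/tflite_flutter.dart';
import 'package:tflite_flutter_helper/tflite_flutter_helper.dart';

Future<void> loadModelAndLabels() async {
  // Path is resolved relative to the assets/ directory.
  final interpreter = await Interpreter.fromAsset('detect.tflite');
  // loadLabels takes the full asset path.
  final labels = await FileUtil.loadLabels('assets/labelmap.txt');
  print('input shape: ${interpreter.getInputTensor(0).shape}, '
      '${labels.length} labels');
}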
We have now finished downloading the necessary files.
Now, open <project>/android/app/build.gradle and change minSdkVersion to 21, since the camera package requires API Level 21 (Android 5.0) or higher.
android {
    ...
    defaultConfig {
        ...
        minSdkVersion 21
    }
}
By creating the following classifier, we can perform object detection from the image stream.
The classifier is based on the following article by the author of tflite_flutter.
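The embedded gist with the full classifier may not render here, so below is a condensed sketch of its core flow. It assumes the SSD MobileNet model downloaded above (300x300 input, four output tensors) and a simplified Recognition result class; treat the details as illustrative rather than as the article's exact code.

import 'dart:ui';

import 'package:image/image.dart' as image_lib;
import 'package:tflite_flutter/tflite_flutter.dart';
import 'package:tflite_flutter_helper/tflite_flutter_helper.dart';

// Simplified detection result (the full version in the linked article is richer).
class Recognition {
  Recognition(this.label, this.score, this.location);

  final String label;
  final double score;
  final Rect location; // in model input (300x300) coordinates
}

class Classifier {
  Classifier({this.interpreter, this.labels});

  final Interpreter interpreter;
  final List<String> labels;

  static const int inputSize = 300; // SSD MobileNet input size

  List<Recognition> predict(image_lib.Image image) {
    // Resize the camera frame to the model's 300x300 input.
    final imageProcessor = ImageProcessorBuilder()
        .add(ResizeOp(inputSize, inputSize, ResizeMethod.BILINEAR))
        .build();
    final tensorImage = imageProcessor.process(TensorImage.fromImage(image));

    // The model has four output tensors:
    // bounding boxes, class indices, scores, number of detections.
    final outputLocations = TensorBufferFloat([1, 10, 4]);
    final outputClasses = TensorBufferFloat([1, 10]);
    final outputScores = TensorBufferFloat([1, 10]);
    final numDetections = TensorBufferFloat([1]);

    interpreter.runForMultipleInputs([
      tensorImage.buffer
    ], {
      0: outputLocations.buffer,
      1: outputClasses.buffer,
      2: outputScores.buffer,
      3: numDetections.buffer,
    });

    final results = <Recognition>[];
    final count = numDetections.getDoubleValue(0).toInt();
    for (var i = 0; i < count; i++) {
      final score = outputScores.getDoubleValue(i);
      if (score < 0.5) continue; // drop low-confidence detections
      // +1 skips the background placeholder on the first line of labelmap.txt.
      final label = labels[outputClasses.getDoubleValue(i).toInt() + 1];
      // Locations come as [ymin, xmin, ymax, xmax], normalized to 0..1.
      final location = Rect.fromLTRB(
        outputLocations.getDoubleValue(i * 4 + 1) * inputSize,
        outputLocations.getDoubleValue(i * 4) * inputSize,
        outputLocations.getDoubleValue(i * 4 + 3) * inputSize,
        outputLocations.getDoubleValue(i * 4 + 2) * inputSize,
      );
      results.add(Recognition(label, score, location));
    }
    return results;
  }
}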
After creating a classifier, the next step is to use this classifier to make inferences from the image stream.
In this section, we will use package:camera to access the image stream. With package:camera, we can process the image stream as follows.
import 'package:camera/camera.dart';

final cameras = await availableCameras();
final cameraController = CameraController(
  cameras[0],
  ResolutionPreset.low,
  enableAudio: false,
);
await cameraController.initialize();

cameraController.startImageStream((CameraImage cameraImage) {
  // Write the processing for the image stream here
});
By writing the inference process in the callback passed to cameraController.startImageStream(), we can perform object detection on the image stream.
However, Dart’s execution model is a single-threaded event loop.
If you run inference and other heavy processing on the main isolate, the event loop gets clogged and other events are not processed, causing the screen to freeze.
Therefore, the inference process should be run in a separate Isolate and processed in parallel there.
In Flutter, parallel processing can be easily handled by using the compute function.
// Parallel processing is possible by using compute.
// The first argument is the callback function to run in parallel;
// the second argument is the value to pass to that callback.
await compute(inference, isolateData);

// Inference process. It is declared static (it could also be a
// top-level function) so that it can be passed to compute.
static Future<List<Recognition>> inference(
    IsolateData isolateData) async {
  // Convert the YUV420 camera frame to an RGB image.
  var image = ImageUtils.convertYUV420ToImage(
    isolateData.cameraImage,
  );
  // On Android, the camera frame arrives rotated by 90 degrees.
  if (Platform.isAndroid) {
    image = image_lib.copyRotate(image, 90);
  }
  // Rebuild the interpreter inside this isolate from its raw address.
  final classifier = Classifier(
    interpreter: Interpreter.fromAddress(
      isolateData.interpreterAddress,
    ),
    labels: isolateData.labels,
  );
  return classifier.predict(image);
}

// Argument object to pass to inference.
// The callback passed to compute must be a top-level function or a
// static method, and its argument must only contain values that can
// be sent to another isolate. This is why the interpreter is passed
// as a raw address (an int) rather than as an Interpreter instance.
class IsolateData {
  IsolateData({
    this.cameraImage,
    this.interpreterAddress,
    this.labels,
  });

  final CameraImage cameraImage;
  final int interpreterAddress;
  final List<String> labels;
}
Finally, here is the completed code for the part that handles the image stream.
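The embedded gist may not render here, so below is a condensed sketch of that handler, assuming the Classifier, inference, and IsolateData shown above. The frame-dropping flag and the onResults callback are illustrative details, not the article's exact code.

import 'package:camera/camera.dart';
import 'package:flutter/foundation.dart';
import 'package:tflite_flutter/tflite_flutter.dart';
import 'package:tflite_flutter_helper/tflite_flutter_helper.dart';

class DetectionService {
  Interpreter interpreter;
  List<String> labels;
  bool isProcessing = false;

  Future<void> start(
    CameraController cameraController,
    void Function(List<Recognition>) onResults,
  ) async {
    interpreter = await Interpreter.fromAsset('detect.tflite');
    labels = await FileUtil.loadLabels('assets/labelmap.txt');

    cameraController.startImageStream((CameraImage cameraImage) async {
      // Drop frames that arrive while the previous one is still in flight.
      if (isProcessing) return;
      isProcessing = true;
      // inference is the static/top-level function defined above.
      final recognitions = await compute(
        inference,
        IsolateData(
          cameraImage: cameraImage,
          interpreterAddress: interpreter.address,
          labels: labels,
        ),
      );
      // Hand the results to the UI layer (e.g. a Riverpod provider).
      onResults(recognitions);
      isProcessing = false;
    });
  }
}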
So far, we have implemented the inference side but not the UI.
Finally, we will create the UI display part as shown below and combine it with the processing above to complete the object detection application.
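The full UI is in the repository linked below; here is a minimal sketch of the idea: stack one labeled box per detection on top of CameraPreview. It assumes the Recognition class sketched above, and it skips the conversion from the model's 300x300 coordinates to the preview size.

import 'package:camera/camera.dart';
import 'package:flutter/material.dart';

class DetectionPage extends StatelessWidget {
  const DetectionPage({Key key, this.cameraController, this.recognitions})
      : super(key: key);

  final CameraController cameraController;
  final List<Recognition> recognitions; // Recognition is defined above

  @override
  Widget build(BuildContext context) {
    return Stack(
      children: [
        CameraPreview(cameraController),
        // Draw one labeled box per detection. The location should be
        // scaled from model coordinates to the preview size; that
        // conversion is omitted here.
        for (final r in recognitions)
          Positioned(
            left: r.location.left,
            top: r.location.top,
            width: r.location.width,
            height: r.location.height,
            child: Container(
              decoration: BoxDecoration(
                border: Border.all(color: Colors.red, width: 2),
              ),
              child: Text(
                '${r.label} ${(r.score * 100).toStringAsFixed(0)}%',
                style: const TextStyle(color: Colors.red, fontSize: 12),
              ),
            ),
          ),
      ],
    );
  }
}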
Now, if you run all of this, you can perform object detection as shown in the demo at the beginning of this article.
All the source code of the app we have created is available in the following repository.
So, I have made an object detection application using Flutter. I think Flutter is a highly productive framework; I was able to get a working demo running in about a day after deciding to make it.
Above all, it’s fun to write, so I’d like to make more things with Flutter next year.
Reference: https://www.slideshare.net/ssuser479fa3/tensorflow-lite-delegate