Idiomatic TensorFlow on Android: Get Started with the TensorFlow Support Library

黄英韶
2023-12-01

Working with data on Android is inconvenient!

If you’ve used TensorFlow Lite on Android before, chances are you’ve had to deal with the tedious work of pre-processing data: wrangling Float arrays in a statically typed language, and resizing, transforming, and normalizing inputs, along with all the other standard tasks required before the data is fit for consumption by the model.

Well, no more! The TFLite support library nightly is now available, and in this post, we’ll go over its usage and build a wrapper around a tflite model.

Note: A companion repository for this post is available here. Follow along, or jump straight into the source!

A big thank you to Tanmay Thakur and Ubaid Usmani for their help in the python implementation. Without their assistance, this wouldn’t have been possible.

Scope of this post

This post is limited in scope to loading and creating a wrapper class around a tflite model; however, you can see a fully functional project in the repository linked above. The code is liberally commented and very straightforward.

If you still have any queries, please don’t hesitate to reach out to me and drop a comment. I’ll be glad to help you out.

Setting up the project

We’re going to be deploying a TFLite version of the popular YOLOv3 object detection model on an Android device. Without further ado, let’s jump into it.

Create a new project using Android Studio, name it anything you like, and wait for the initial Gradle sync to complete. Next, we’ll install the dependencies.

Adding dependencies

Add the following dependencies to your app-level build.gradle.

// Permissions handling    
implementation 'com.github.quickpermissions:quickpermissions-kotlin:0.4.0'


// Tensorflow lite    
implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly'    
implementation 'org.tensorflow:tensorflow-lite-support:0.0.0-nightly'


// CameraView
implementation 'com.otaliastudios:cameraview:2.6.2'

Let’s go through what they are, and why we need them.

  1. Quick Permissions: This is a great library to make granting permissions quick and easy.

  2. Tensorflow Lite: This is the core TFLite library.

  3. Tensorflow Lite Support: This is the support library we’ll be using to make our data-related tasks simpler.

  4. CameraView: This is a library that provides a simple API for accessing the camera.

Configuring the gradle project

Our project still needs a little more configuration before we’re ready for the code. In the app-level build.gradle file, add the following options under the android block.

android {
    // other stuff
    aaptOptions {
        noCompress "tflite"    
    }
}

We need to add this because we’ll be shipping the model inside our assets, which are compressed by default. That’s a problem, because the interpreter cannot load a compressed model.

Note: After this initial configuration, run the gradle sync again to fetch all dependencies.

Jumping into the code

First things first; we need a model to load. The one I used can be found here. Place the model inside app/src/main/assets. This will enable us to load it at runtime.

The labels for detected objects can be found here. Place them in the same directory as the model.

Warning: If you plan to use your own custom models, a word of caution: the input and output shapes may not match the ones used in this project.

Creating a wrapper class

We’re going to wrap our model and its associated methods inside a class called YOLO. The initial code is as follows.

class YOLO(private val context: Context) {
    private val interpreter: Interpreter

    companion object {
        private const val MODEL_FILE = "detect.tflite"
        private const val LABEL_FILE = "labelmap.txt"
    }

    init {
        val options = Interpreter.Options()
        interpreter = Interpreter(FileUtil.loadMappedFile(context, MODEL_FILE), options)
    }
}

Let’s break this class down into its core functionality and behaviour.

  1. First, upon being created, the class loads the model from the app assets through the FileUtil class provided by the support library.

  2. Next, we have a class member. The interpreter is self-explanatory: it’s an instance of a TFLite interpreter.

  3. Finally, we have some static variables. These are just the file names of the model and the labels inside our assets.

Moving on, let’s add a convenience method to load our labels from the assets.

class YOLO(private val context: Context) {
    // other stuff

    // lazily load object labels
    private val labelList by lazy { loadLabelList(context.assets) }

    private fun loadLabelList(assetManager: AssetManager): List<String> =
        assetManager.open(LABEL_FILE).bufferedReader().use { it.readLines() }
}

Here we’ve declared a method that loads the label file, plus a lazily initialized member that holds the returned list.

Let’s get down to brass tacks. We’re now going to define a method that takes in a bitmap, passes it into the model, and returns the detected object classes.

class YOLO(private val context: Context) {
    private val interpreter: Interpreter

    // lazily load object labels
    private val labelList by lazy { loadLabelList(context.assets) }

    // create image processor to resize images to the input dimensions
    private val imageProcessor by lazy {
        ImageProcessor.Builder()
            .add(ResizeOp(300, 300, ResizeOp.ResizeMethod.BILINEAR))
            .build()
    }

    // create tensorflow representation of an image
    private val tensorImage by lazy { TensorImage(DataType.UINT8) }

    fun detectObjects(bitmap: Bitmap): List<String> {
        tensorImage.load(bitmap)

        // resize image using processor
        val processedImage = imageProcessor.process(tensorImage)

        // load image data into input buffer
        val inputBuffer = TensorBuffer.createFixedSize(intArrayOf(1, 300, 300, 3), DataType.UINT8)
        inputBuffer.loadBuffer(processedImage.buffer, intArrayOf(1, 300, 300, 3))

        // create output buffers
        val boundBuffer = TensorBuffer.createFixedSize(intArrayOf(1, 10, 4), DataType.FLOAT32)
        val classBuffer = TensorBuffer.createFixedSize(intArrayOf(1, 10), DataType.FLOAT32)
        val classProbBuffer = TensorBuffer.createFixedSize(intArrayOf(1, 10), DataType.FLOAT32)
        val numBoxBuffer = TensorBuffer.createFixedSize(intArrayOf(1), DataType.FLOAT32)

        // run interpreter
        interpreter.runForMultipleInputsOutputs(
            arrayOf(inputBuffer.buffer), mapOf(
                0 to boundBuffer.buffer,
                1 to classBuffer.buffer,
                2 to classProbBuffer.buffer,
                3 to numBoxBuffer.buffer
            )
        )

        // map detected class indices to their label names and return them
        return classBuffer.floatArray.map { labelList[it.toInt() + 1] }
    }
}

Whoa, that’s a wall of code! Let’s go through it and break it down.

We’ve declared some new lazily initialized variables: an ImageProcessor and a TensorImage. These are classes exposed by the support library to make loading and processing images much simpler.

As is shown here, we can load a bitmap directly into the TensorImage and then pass it on to the ImageProcessor for further processing.

The ImageProcessor has several operations available; the one we’ve used here resizes our input images to 300 * 300, because that’s the input size our model expects.
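As a side note, the same builder pattern extends to other pre-processing steps. For example, if you were to switch to a model with FLOAT32 inputs, you could chain a normalization op after the resize. This is only a sketch: the 127.5 mean/stddev values are an assumption for a model expecting inputs in [-1, 1], not something used in this project.

```kotlin
// Hypothetical processor for a FLOAT32 model, not used in this project.
// The 127.5f mean/stddev values assume a model trained on inputs
// normalized to [-1, 1]; check your own model's documentation.
val floatImageProcessor = ImageProcessor.Builder()
    .add(ResizeOp(300, 300, ResizeOp.ResizeMethod.BILINEAR))
    .add(NormalizeOp(127.5f, 127.5f))
    .build()
```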

After processing the image, we create several TensorBuffers. These are representations of tensors that we can manipulate and access easily. The shapes of these TensorBuffers are determined by the model; take a look at the model summary to figure out the appropriate shapes.
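If you don’t have a model summary handy, the interpreter itself can report these shapes at runtime. Here’s a rough sketch, assuming it runs inside our wrapper class where interpreter is in scope; the log tag is just a placeholder.

```kotlin
// Sketch: query input/output tensor shapes from the interpreter at runtime.
val inputShape = interpreter.getInputTensor(0).shape()
Log.d("YOLO", "input shape: ${inputShape.contentToString()}")
for (i in 0 until interpreter.outputTensorCount) {
    val tensor = interpreter.getOutputTensor(i)
    Log.d("YOLO", "output $i: ${tensor.shape().contentToString()} (${tensor.dataType()})")
}
```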

We load the TensorImage into the input TensorBuffer, and then pass the input and output buffers into the interpreter.

Note: The YOLOv3 model has multiple outputs. This is the reason why we had to use multiple output buffers.

After running inference, the interpreter sets the internal FloatArrays of the output buffers. Right now, we’re only interested in the one that contains the predicted classes. Using the convenient kotlin map function, we map labels to the numerical classes output by the model and return them.

This class can now be used by our application to run inference on a bitmap. How convenient!
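To make that concrete, here’s a rough sketch of how the wrapper might be called from an Activity. Where the bitmap comes from (a camera frame, the gallery, etc.) and the log tag are placeholders, not code from this project.

```kotlin
// Sketch: using the wrapper from elsewhere in the app.
val yolo = YOLO(applicationContext)
val detections: List<String> = yolo.detectObjects(bitmap)
detections.forEach { label -> Log.d("Detections", label) }
```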

Conclusion

And that’s it! Without the support library, we’d have written much more code: resizing the image, converting bitmaps to float arrays, manually allocating float arrays to store the output, and so on.

The TensorFlow support library thus makes life simpler for a developer, and despite being a nightly build, it’s been pretty stable in my experience.

To find out more, view the support library readme here. As of now, there aren’t any formal docs available, but the readme contains all the information a developer would need to get started quickly.

Stay safe, and have fun!

Translated from: https://levelup.gitconnected.com/idiomatic-tensorflow-on-android-get-started-with-the-tensorflow-support-library-c12fe96bc029
