In recent weeks, there have been some changes to the Firebase machine learning products. Firstly, we had a name change: Firebase MLKit is no more, and it is now known as Firebase Machine Learning. MLKit does still exist, but it is now its own product and is simply known as MLKit. So you are probably wondering: what is the difference between these two products?
Well, MLKit is now the solution for on-device machine learning for mobile apps. It is a standalone SDK for Android and iOS, and you no longer need a Firebase project in order to use these offline models.
Firebase Machine Learning, on the other hand, covers all the machine learning SDKs for mobile that require cloud-based APIs in order to make predictions. This includes text recognition, image labeling and landmark recognition as cloud API calls made through an SDK. AutoML Vision Edge for creating custom machine learning models, and serving your custom .tflite models dynamically to your app, are also still available in Firebase.
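To make the cloud side concrete, here is a minimal Kotlin sketch of calling the cloud text recognition API through the firebase-ml-vision SDK. Treat it as indicative only: the exact artifact and class names depend on the SDK version you are on, and the bitmap is assumed to come from your own image-loading code. The cloud APIs also require network access and a billing-enabled Firebase project.

import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

fun recognizeTextInCloud(bitmap: Bitmap) {
    // Wrap the bitmap for the Firebase ML Vision SDK.
    val image = FirebaseVisionImage.fromBitmap(bitmap)

    // The cloud text recognizer sends the image to the Cloud Vision API,
    // so predictions happen in the cloud rather than on the device.
    val recognizer = FirebaseVision.getInstance().cloudTextRecognizer

    recognizer.processImage(image)
        .addOnSuccessListener { result -> Log.d("FirebaseML", result.text) }
        .addOnFailureListener { e -> Log.e("FirebaseML", "Cloud text recognition failed", e) }
}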
For both Firebase Machine Learning and MLKit, you will need to migrate to the new SDKs, as the Gradle dependencies/pods have changed. Your current solutions will still work, but they will not receive any updates because the old SDKs have been deprecated. Let's look at this migration process in more detail.
Migration
Migrating to the new SDKs is very simple, and it is well documented on the MLKit website. It mostly comes down to Gradle/pod changes in your app and renaming some classes. Something new I did notice when migrating an app is that there are now bundled models and thin models.
The bundled models are packaged as part of your app and can be used immediately when called. This results in fast inference but increases the size of your app.
implementation 'com.google.mlkit:barcode-scanning:16.0.0'
The thin models are not bundled with your app; they are downloaded to your app via Google Play services the first time you try to run inference. This is great because it reduces the size of your app, but there is a bit of work on the developer's side: you need to add some metadata to your manifest so that your app downloads the model on first install.
Dependencies:
implementation 'com.google.android.gms:play-services-mlkit-barcode-scanning:16.0.0'
Manifest:
<application>
    <meta-data
        android:name="com.google.mlkit.vision.DEPENDENCIES"
        android:value="barcode" />
</application>
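Whichever dependency you choose, the calling code looks the same. Here is a minimal Kotlin sketch of scanning a bitmap with the barcode API from the 16.0.0 artifacts above; the bitmap is assumed to come from your own camera or image-loading code, and the rotation value of 0 assumes an upright image.

import android.graphics.Bitmap
import android.util.Log
import com.google.mlkit.vision.barcode.Barcode
import com.google.mlkit.vision.barcode.BarcodeScannerOptions
import com.google.mlkit.vision.barcode.BarcodeScanning
import com.google.mlkit.vision.common.InputImage

fun scanBarcodes(bitmap: Bitmap) {
    // Restricting the formats speeds up detection; drop the options to scan all formats.
    val options = BarcodeScannerOptions.Builder()
        .setBarcodeFormats(Barcode.FORMAT_QR_CODE, Barcode.FORMAT_EAN_13)
        .build()

    val scanner = BarcodeScanning.getClient(options)
    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)

    scanner.process(image)
        .addOnSuccessListener { barcodes ->
            for (barcode in barcodes) {
                Log.d("MLKit", "Found barcode: ${barcode.rawValue}")
            }
        }
        .addOnFailureListener { e -> Log.e("MLKit", "Barcode scanning failed", e) }
}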
I recently migrated this demo app, which uses AutoML Vision Edge, from MLKit to Firebase Machine Learning, and it proved to be very simple.
There are also a few new things in MLKit to explore for Android.
New in MLKit
With the new SDK there were a few other updates to MLKit. Firstly, MLKit is now lifecycle-aware, which helps it work a lot better with CameraX. All detectors, labelers and translators will automatically invoke their close method when they are no longer being used.
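As a rough Kotlin sketch of what that looks like in practice (assuming the detectors expose this behaviour by implementing LifecycleObserver), you can register a detector on an Activity's lifecycle and let it close itself:

import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import com.google.mlkit.vision.barcode.BarcodeScanning

class ScannerActivity : AppCompatActivity() {

    // Created lazily so the client lives for the lifetime of the Activity.
    private val scanner by lazy { BarcodeScanning.getClient() }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Registering the detector as a lifecycle observer means close() is
        // invoked for you when the Activity is destroyed, instead of you
        // having to call scanner.close() manually in onDestroy().
        lifecycle.addObserver(scanner)
    }
}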
Object Detection and Tracking
The Object Detection and Tracking library now supports custom models. This is great, as previously you were stuck with the base model that Google provided. That base model is still available, but if you would like to detect objects that it does not pick up, you can now supply your own model.
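Here is a rough Kotlin sketch of plugging a custom classifier into the object detector. The asset file name my_custom_model.tflite and the option values are made up for illustration, and the exact artifact you need (the custom object detection dependency) may differ by version.

import com.google.mlkit.common.model.LocalModel
import com.google.mlkit.vision.objects.ObjectDetection
import com.google.mlkit.vision.objects.custom.CustomObjectDetectorOptions

// A TensorFlow Lite classification model bundled in the app's assets folder
// (hypothetical file name).
val localModel = LocalModel.Builder()
    .setAssetFilePath("my_custom_model.tflite")
    .build()

// Use the custom model to classify the objects that the detector finds.
val options = CustomObjectDetectorOptions.Builder(localModel)
    .setDetectorMode(CustomObjectDetectorOptions.STREAM_MODE)
    .enableClassification()
    .setClassificationConfidenceThreshold(0.5f)
    .setMaxPerObjectLabelCount(3)
    .build()

val objectDetector = ObjectDetection.getClient(options)
// objectDetector.process(inputImage) then returns detected objects labeled by your model.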
MLKit also has two new products, Entity Extraction and Pose Detection, for both Android and iOS. These models are part of an early access program that you need to sign up for.
Entity Extraction
The Entity Extraction model allows you to extract different entities, such as addresses, phone numbers and currencies, out of a paragraph of text. The model also supports multiple languages, which is great if you have an app that supports localization. This lets you give your users a better experience with text you might be getting back from your back-end service: instead of just displaying plain text, you can show an address on a map, or let users tap email addresses to send a mail and phone numbers to make calls.
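The Kotlin sketch below is based on the publicly documented MLKit entity extraction API; since the product was in early access at the time of writing, the names you get in the program may differ, and the sample text is made up.

import android.util.Log
import com.google.mlkit.nl.entityextraction.EntityExtraction
import com.google.mlkit.nl.entityextraction.EntityExtractionParams
import com.google.mlkit.nl.entityextraction.EntityExtractorOptions

fun extractEntities(text: String) {
    val extractor = EntityExtraction.getClient(
        EntityExtractorOptions.Builder(EntityExtractorOptions.ENGLISH).build()
    )

    // The language model is downloaded on demand, so make sure it is present
    // before asking for annotations.
    extractor.downloadModelIfNeeded()
        .onSuccessTask {
            extractor.annotate(EntityExtractionParams.Builder(text).build())
        }
        .addOnSuccessListener { annotations ->
            for (annotation in annotations) {
                // Each annotation covers a span of text and may contain several
                // entities, e.g. an address, a phone number or a currency amount.
                Log.d("MLKit", "${annotation.annotatedText} -> ${annotation.entities}")
            }
        }
        .addOnFailureListener { e -> Log.e("MLKit", "Entity extraction failed", e) }
}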
Pose Detection
The Pose Detection model allows you to track the physical actions of different subjects and display these data points in an augmented reality view through your app's camera. This works well for fitness or dancing apps that need to track people's movements. At Google I/O '19 there were some great examples of pose detection applications; you can see them in the video below.
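Again, this is an early access product, so the Kotlin sketch below is based on the publicly documented pose detection API and is only indicative; the bitmap is assumed to come from your own camera feed.

import android.graphics.Bitmap
import android.util.Log
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.pose.PoseDetection
import com.google.mlkit.vision.pose.PoseLandmark
import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions

fun detectPose(bitmap: Bitmap) {
    // STREAM_MODE is intended for live camera frames; use SINGLE_IMAGE_MODE for stills.
    val options = PoseDetectorOptions.Builder()
        .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
        .build()
    val poseDetector = PoseDetection.getClient(options)

    poseDetector.process(InputImage.fromBitmap(bitmap, 0))
        .addOnSuccessListener { pose ->
            // Each landmark is a tracked body point you could draw on an AR overlay.
            val leftShoulder = pose.getPoseLandmark(PoseLandmark.LEFT_SHOULDER)
            Log.d("MLKit", "Left shoulder at ${leftShoulder?.position}")
        }
        .addOnFailureListener { e -> Log.e("MLKit", "Pose detection failed", e) }
}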
A great repository was also released along with the new MLKit SDK, which shows Material Design and MLKit working together to deliver a great user experience.
People + AI
Another great resource is the People + AI guidebook, which provides great guidelines on how we should be thinking about and building ML-enabled features in our apps. I definitely recommend reading through this guidebook if you are looking at building any ML features with Firebase ML, MLKit or TensorFlow Lite.
Final Thoughts
If you are using Firebase MLKit in your app, make sure that you migrate to the new standalone SDK so that you receive updates to the SDK in the future. If you use any of the cloud models, AutoML Vision Edge or TensorFlow custom model serving, make sure you update your dependencies.
If you are new to these SDKs, the documentation can be found below.
If you have any thoughts about MLKit and Firebase Machine Learning, comment below.
Stay in touch.
Translated from: https://proandroiddev.com/whats-new-in-firebase-machine-learning-and-mlkit-a138023b73a8