Deep Dive Into Google’s ML Kit and Apple’s Core ML: Which one to use?



Have you decided to integrate machine learning into your new mobile app but don't know which service to use? Or do you simply want to learn more about Apple's Core ML and Google's ML Kit? Then this article is for you.

This article is the second part of the “Which one to use?” series. In the previous article we talked about how machine learning works and what the possible approaches are for getting started with it. In this article we focus on the recently introduced SDKs from Google and Apple.

Recently Google and Apple introduced SDKs that let developers integrate machine learning into their apps more easily than ever. Their goal is to address the challenges mobile developers face every day when trying to leverage deep learning models on mobile devices. As mobile apps have become great contributors of data points, machine learning has become critical to mobile development. So it is more essential than ever for such services to exist, so that developers of all levels, whether experienced ML practitioners or complete beginners, can get started without trouble.

Today, most mobile apps leverage deep learning capabilities exclusively by calling centralized APIs hosted on a cloud platform. While that model is relatively simple from a developer's standpoint, it has some tangible limitations, which we discussed in detail in the previous article. The most notable ones are:

  1. Latency: Executing computationally intensive deep learning models imposes a latency penalty that is sometimes prohibitive for mobile apps. For instance, if your app has to make real-time predictions on large inputs, relying on a centralized API in the cloud becomes impractical.
  2. Long Training Cycles: Mobile devices are great collectors of data that can be used in the training of deep learning models. However, the lifecycle of collecting data from a mobile app, training a deep learning model on a cloud platform, and using that model from the mobile application is both computationally expensive and relatively long.
  3. Mobile Adaptability: Most deep learning runtimes have been natively designed to produce models that require large hardware infrastructures to execute. As a result, adapting many of the existing deep learning models to mobile applications remains incredibly challenging.

The opening sentences of Google's “Introducing ML Kit” announcement are:

In today’s fast-moving world, consumers have come to expect mobile apps to not only be intuitive, but also to be able to provide powerful features and adapt to new information. That’s why machine learning has become critical to mobile development. Developers are increasingly relying on machine learning to enhance their app’s user experience, and only with fine-tuned machine learning models can they deliver those powerful features to delight users…

Fine-tuned machine learning models: depending on your app and use case, in some situations Apple's Core ML may be the service that offers the fine-tuned machine learning models you need, and in others it may be Google's ML Kit.


Core ML

Core ML was first introduced at the WWDC conference in June 2017, in the form of a framework launched alongside iOS 11, to make it possible for developers to integrate trained machine learning models into their iOS, macOS and tvOS apps more easily than ever. Trained models are loaded into Apple's Xcode IDE and packaged in the app bundle.
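To get a sense of what that looks like in practice, here is a rough sketch of how an app might run a bundled classification model through the Vision framework. The model name FruitClassifier and the cgImage variable are hypothetical placeholders for this illustration, not something from the original announcement:

import CoreML
import Vision

// Load a hypothetical classifier compiled into the app bundle as "FruitClassifier.mlmodelc"
guard let modelURL = Bundle.main.url(forResource: "FruitClassifier", withExtension: "mlmodelc"),
      let visionModel = try? VNCoreMLModel(for: MLModel(contentsOf: modelURL)) else {
    fatalError("Model not found in the app bundle")
}

// Ask Vision to run the model and report the top classification
let request = VNCoreMLRequest(model: visionModel) { request, _ in
    if let best = (request.results as? [VNClassificationObservation])?.first {
        print("\(best.identifier): \(best.confidence)")
    }
}

// `cgImage` is assumed to be the photo we want to classify
try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])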

Core ML supports:

You should note that Core ML only works within the Apple ecosystem and cannot be used for Android apps.

Unlike Google's ML Kit, Core ML runs strictly on-device (offline) to ensure the privacy of the user's data, which has its own benefits and drawbacks. The main reasons why Apple chose the on-device approach over the cloud are:

A year later, at WWDC 2018, Apple introduced Core ML 2. During the conference Apple said:

“Core ML 2 is 30 percent faster thanks to batch prediction, and it can compress machine learning models by up to 75 percent with the help of quantisation.”

The new version of Core ML can update models from a cloud service like Amazon Web Services or Microsoft Azure at runtime. It also comes with a converter that works with Caffe, Caffe2, Keras, scikit-learn, XGBoost, LibSVM and TensorFlow Lite.

They also introduced Create ML, a new GPU-accelerated tool for native AI model training on Mac computers. Since it is written in Swift, developers can use drag-and-drop programming interfaces like Xcode Playgrounds to train models. The trained models can perform tasks like recognizing images, extracting meaning from text or finding relationships between numerical values.

Apple says the developers of Memrise, a language-learning app, previously took 24 hours to train a model using 20,000 images. Create ML and Core ML 2 reduced that to 48 minutes on a MacBook Pro and 18 minutes on an iMac Pro. Furthermore, the size of the model was reduced from 90MB to just 3MB.

But the question is, how do these really work?

Create ML

Let's say you want to create a model that can recognize certain types of fruit in a picture, such as bananas, strawberries and oranges.

So the question for the picture below would be: is there a banana in it?

~ 2007

Hope computers can answer some day, but for now — just ask someone.

~ 2012

Wow — Computers can answer this! Cutting-edge deep learning research is exciting.

~ 2018

Train a model with Create ML — the UI is great!

So how do we start? It's very simple. We just need to prepare the required data, in this case different pictures of bananas, strawberries and oranges, and group them into separate folders so that all the pictures of bananas go into a “banana” folder and so on. After that we put all of these folders into one folder that we can call “Training Data”.

Then we open Xcode, go to File/New/Playground, choose macOS, start with a blank playground, set the name of our file and click Create.

// Create the UI-based image classifier builder and show it in the playground's live view
import CreateMLUI

let builder = MLImageClassifierBuilder()
builder.showInLiveView()

Now if we run the code we get this view:

and after that we can drag and drop our “Training Data” folder onto the specified area, and in a matter of seconds our trained model is created. Then, to evaluate the model, we can create another folder with a different set of pictures of the same fruits, organized in the same way as before, and test the model before saving it to check whether we are satisfied with the results and it is accurate enough.
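If you prefer to script the same workflow instead of using the drag-and-drop UI, the CreateML framework exposes the same functionality in code. The sketch below is a minimal example; the folder and file paths are placeholders you would replace with your own:

import CreateML
import Foundation

// Train an image classifier from a folder whose subfolders are the class labels
let trainingDir = URL(fileURLWithPath: "/path/to/Training Data")
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir))

// Evaluate on a separate folder organized the same way
let testingDir = URL(fileURLWithPath: "/path/to/Testing Data")
let evaluation = classifier.evaluation(on: .labeledDirectories(at: testingDir))
print("Test classification error: \(evaluation.classificationError)")

// Save the trained model so it can be added to an app bundle
try classifier.write(to: URL(fileURLWithPath: "/path/to/FruitClassifier.mlmodel"))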

Core ML 2

This year, Apple focused on three main points to help Core ML developers: model size, performance and customization.

So now let's explore these three points!

Model Size

One of the main things about Core ML, as we discussed previously, is that it is on-device and fully offline. The way it works is that the app has to ship the models in its bundle so it can use them for predictions. One of the biggest drawbacks of this method is the increase in app size.

So imagine you have an app that works perfectly and all your customers are happy, and then you decide to integrate machine learning into your app to increase their satisfaction. Sometimes this means adding 100MB to the size of your app, or, if you want a more accurate model, even 250MB or more.

So with Core ML 2, Apple decided to give developers the tools to quantize their Core ML models. Quantization is a technique for storing and calculating numbers in a more compact form. This can lead to reduced runtime memory usage and faster calculations.

There are 3 main factors when it comes to Core ML app size:

Quantization reduces model size by reducing the size of the weights. In the first version of Core ML, weights were stored as 32-bit floating point values. Core ML 2 now supports 16-bit floating point and all levels of quantization down to 1-bit, which can greatly reduce the size of a model.

In case you aren't familiar with what weights are, here's a good analogy. Say you're going from your house to the supermarket. The first time, you may take a certain path. The second time, you'll try to find a shorter path, since you already know the way to the market. The third time, you'll take an even shorter route because you have the knowledge of the previous two trips. Each time you go to the market, you'll keep taking a shorter path as you learn over time. This accumulated knowledge of which route to take plays the role of the weights, and the most accurate path is the one backed by the most learned weight.

But we should keep in mind that this reduction in size doesn't come without tradeoffs. When we quantize our model, it means, in simple terms, that the new quantized model is an approximation of the original one, so it won't be as accurate as the original model. That's not necessarily a bad thing, since it depends entirely on our model and on how these approximations affect our overall result. For example, let's look at the picture below:

In this picture we can see that the 8-bit version of the model gives the same output as the 32-bit version, even though the size decreased from 6.7MB to 1.7MB, and the output only starts to change from 4-bit onwards, which might still be acceptable.
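To get an intuition for why lower bit-widths only gradually degrade the output, here is a toy Swift sketch of linear weight quantization. This is purely illustrative and is not Core ML's actual tooling; real .mlmodel quantization is done offline with Apple's conversion tools:

import Foundation

// Map each 32-bit weight onto a limited number of levels, then back to a Float.
// Fewer bits means fewer levels, so the reconstructed weights drift further from
// the originals, which is exactly the accuracy/size tradeoff described above.
func quantized(_ weights: [Float], bits: Int) -> [Float] {
    guard let minW = weights.min(), let maxW = weights.max(), maxW > minW else { return weights }
    let levels = Float((1 << bits) - 1)
    let step = (maxW - minW) / levels
    return weights.map { w in
        let index = ((w - minW) / step).rounded()   // small integer actually stored
        return minW + index * step                  // approximate weight seen at runtime
    }
}

let weights: [Float] = [0.137, -0.522, 0.891, 0.004, -0.250]
print(quantized(weights, bits: 8))   // nearly identical to the originals
print(quantized(weights, bits: 2))   // a much coarser approximation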

Performance

Since we are running the ML computations on-device, we want it to be fast and accurate.

Let's consider a realtime style transfer app that can transform a given image into different styles.

If we were to look into the neural network behind style transfer, this is what we would notice: there is a certain set of inputs for this algorithm, and each layer in the neural network applies a certain transformation to the original image. This means the model has to take every input, push it through the layers and their weights, and produce an output prediction. What would this look like in code?

// Loop over inputs, asking the model for one prediction at a time
for i in 0..<modelInputs.count {
    modelOutputs[i] = try model.prediction(from: modelInputs[i], options: options)
}

In the above code, you can see that for each input we ask the model to generate a prediction. However, iterating over every input one by one can take a long time.

To solve this problem, Apple has introduced a new batch API. Unlike a for loop, a batch feeds all the inputs to the model at once, so the framework does not have to wait for one prediction to finish before starting the next, as it would in a for loop. This takes much less time and also less code.

For Loop Vs. Batch

And if we rewrite the above for loop using the new batch API, it looks like this:

modelOutputs = try model.predictions(from: modelInputs, options: options)
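Under the hood, the batch call takes an MLBatchProvider rather than a plain array, so in practice the inputs are wrapped first. A minimal sketch, assuming model and modelInputs (an array of MLFeatureProvider values) already exist:

import CoreML

// Wrap the individual inputs into a single batch and run one prediction call
let batch = MLArrayBatchProvider(array: modelInputs)
let options = MLPredictionOptions()
let batchOutputs = try model.predictions(from: batch, options: options)

// Read the results back, one feature provider per input
for i in 0..<batchOutputs.count {
    let output = batchOutputs.features(at: i)
    // use output.featureValue(for: "...") as needed
}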

Customization

Neural networks consist of many layers, and there might be cases in which we want to convert a neural network from a framework like TensorFlow or Keras to Core ML. Even though Core ML now comes with a converter that works with many different frameworks, there might be a scenario where Core ML simply does not have the tools to convert the model correctly. Don't worry if you don't understand this; let's take a look at the example below to see what it means.

So for example, imagine you have a model that uses a Convolutional Neural Network (CNN). CNNs consist of a series of highly optimized layers, and when you convert a neural network from another format to Core ML, you are translating each of those layers. But there might be a scenario where Core ML does not provide the tools to convert a particular layer. Before Core ML 2 you could not do anything about it, but in this version Apple's engineers have introduced the MLCustomLayer protocol, which solves this issue and allows developers to create their own layers in Swift, as sketched below.
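To give a flavor of what that looks like, here is a minimal sketch of a custom layer. The layer itself (a hypothetical “Swish” activation) and its class name are made up for illustration; the class name just has to match the one referenced by the converted model:

import CoreML
import Foundation

@objc(SwishLayer)
class SwishLayer: NSObject, MLCustomLayer {

    required init(parameters: [String: Any]) throws {
        super.init()
    }

    // This toy layer has no learned weights to load
    func setWeightData(_ weights: [Data]) throws {}

    // The output keeps the same shape as the input
    func outputShapes(forInputShapes inputShapes: [[NSNumber]]) throws -> [[NSNumber]] {
        return inputShapes
    }

    // CPU evaluation: swish(x) = x * sigmoid(x), element by element
    func evaluate(inputs: [MLMultiArray], outputs: [MLMultiArray]) throws {
        for (input, output) in zip(inputs, outputs) {
            for i in 0..<input.count {
                let x = input[i].doubleValue
                output[i] = NSNumber(value: x / (1.0 + exp(-x)))
            }
        }
    }
}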

This topic is quite complicated and usually requires a skilled data scientist or a specialist. It is beyond the scope of this article, so we won't delve into it further.


ML Kit

ML Kit was first introduced at Google I/O in May 2018, also in the form of a framework, and it is Google's answer for bringing the deep learning universe to Android and iOS developers. With ML Kit, Google aims to simplify the execution of deep learning models in mobile applications, and it includes a bunch of ready-to-use deep learning APIs divided into five categories: text recognition, face detection, barcode scanning, image labeling and landmark recognition.

Google is also working on a Smart Reply API, which it says will be released later.

One of the biggest advantages of ML Kit is that it offers many options and can be used anywhere. Unlike Apple's solution, it is cross-platform and works on both iOS and Android. Also, with ML Kit you can either call the APIs in the cloud or perform the calls entirely on the device (offline).

At the conference, Google introduced ML Kit around four main pillars: an iOS and Android SDK, base APIs with custom model support, on-device and Cloud AI APIs, and deep integration with Firebase. So now let's look at each of them one by one.

iOS and Android SDK

ML Kit uses the standard Neural Networks API on Android and Metal on iOS. Now let's see how to get started.

  • On iOS:

First we need to include the ML Kit libraries in the app’s Podfile:

pod 'Firebase/Core'
pod 'Firebase/MLVision'
pod 'Firebase/MLVisionLabelModel' # for on-device model

Then, for instance, for on-device image labeling we can call the API like this:

// Get a Vision instance and an on-device label detector
let vision = Vision.vision()
let labelDetector = vision.labelDetector()

// set up image metadata...
let image = VisionImage(image: uiImage)

labelDetector.detect(in: image) { (labels, error) in
    guard error == nil, let labels = labels, !labels.isEmpty else {
        // ...
        return
    }
    // handle extracted entities
}
  • On Android:

First we need to include the ML Kit libraries in the app’s build.gradle:

dependencies {
// ...
implementation 'com.google.firebase:firebase-core:15.+'
implementation 'com.google.firebase:firebase-ml-vision:16.+'
implementation 'com.google.firebase:firebase-ml-vision-image-label-model:15.+'
}

Then, for instance, for on-device image labeling we can call the API like this:

// Get an on-device label detector and create an image from a file on the device
FirebaseVisionLabelDetector detector = FirebaseVision.getInstance().getVisionLabelDetector();
FirebaseVisionImage image = FirebaseVisionImage.fromFilePath(context, uri);

Task<List<FirebaseVisionLabel>> result =
    detector.detectInImage(image)
        .addOnSuccessListener(
            new OnSuccessListener<List<FirebaseVisionLabel>>() {
                @Override
                public void onSuccess(List<FirebaseVisionLabel> labels) {
                    // handle extracted entities
                }
            })
        .addOnFailureListener(...);

The above code is for on-device inference. But what if we want to call the APIs in the cloud? No problem: we just need to swap the on-device detector and label classes for their Cloud counterparts, and the rest stays the same:

// Same flow as before, but with the cloud-based detector and label classes
FirebaseVisionCloudLabelDetector detector = FirebaseVision.getInstance().getVisionCloudLabelDetector();
FirebaseVisionImage image = FirebaseVisionImage.fromFilePath(context, uri);

Task<List<FirebaseVisionCloudLabel>> result =
    detector.detectInImage(image)
        .addOnSuccessListener(
            new OnSuccessListener<List<FirebaseVisionCloudLabel>>() {
                @Override
                public void onSuccess(List<FirebaseVisionCloudLabel> labels) {
                    // handle extracted entities
                }
            })
        .addOnFailureListener(...);
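On iOS the switch is just as small. Here is a rough Swift sketch of the equivalent change, assuming the cloud label detector is exposed on the same Vision instance as cloudLabelDetector() (treat the exact names here as an assumption rather than a guarantee):

// Cloud-based image labeling on iOS: same flow, different detector
let cloudDetector = Vision.vision().cloudLabelDetector()   // assumed cloud counterpart of labelDetector()
cloudDetector.detect(in: image) { (labels, error) in
    guard error == nil, let labels = labels, !labels.isEmpty else { return }
    // handle extracted entities
}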

Base APIs and custom model support

Now, what about custom models? If ML Kit's base APIs don't cover your use cases, you can bring your own existing TensorFlow Lite models. You just upload the TensorFlow Lite model to the service, which then acts as an API layer to your custom model.

In addition to deploying your own custom model, Google is releasing an experimental model compression flow that can help reduce model size. To get early access to this experimental feature, you can sign up here.

On-device and Google Cloud AI interface APIs

As we discussed before, with ML Kit it is possible to perform the calls both locally on the device (offline) and in the cloud (online). Unsurprisingly, there is a trade-off here: the models that run on the device are smaller and hence offer a lower level of accuracy, while if you decide to use the models in the cloud, neither model size nor available compute power is an issue, so the models can be larger and hence more accurate.

(You can read about the benefits and trade-offs of the two methods, and which one is more suitable for your use case, in this article.)

For instance, in the case of image labeling, the on-device model features about 400 labels, while the cloud-based one features more than 10,000.

As an example, Google showed the case below where we can see the clear difference between the accuracy of the two methods:

Deeply integrated into Firebase

ML Kit is offered as part of Firebase, and one of its main advantages is that this makes it easy for developers to get started. Since it is part of the Firebase platform, it also works seamlessly with the other Firebase services.


Conclusion

Choosing between Core ML 2 and ML Kit mostly depends on your preference and use case scenario.

Core ML 2 doesn't support Android and only runs on-device (offline); on the other hand, ML Kit is still in beta and is not as mature and polished as Core ML 2.

Developers familiar with Google’s Firebase are more likely to prefer ML Kit. Likewise, longtime Xcode users will probably tend toward Core ML 2.
