MLflow Part 2: Deploying a Tracking Server to Minikube!

Creating a point for logging and tracking model artifacts in a single server running on Minikube

Welcome back, friends! We’re back with our continued mini-series on MLflow. In case you missed part one, be sure to check it out here. That first post was a super basic introduction to logging parameters, metrics, and artifacts with MLflow. We were just logging those items to a spot on our local machine, which isn’t an ideal practice. In a company context, you ideally want all those things logged to a central, reusable location. That’s what we’ll be tackling in today’s post! And of course, you can find all my code on GitHub at this link.

So to be clear, we’re going to be covering some advanced topics that require a bit of foreknowledge about Docker and Kubernetes. I personally plan to write posts on those at a later date, but for now, I’d recommend the following resources if you want to get a quick start on working with Docker and Kubernetes:

Now, if you know Kubernetes, chances are you’re familiar with Minikube, but in case you aren’t, Minikube is basically a small VM you can run on your local machine to spin up a sandbox environment for testing out Kubernetes concepts. Once Minikube is up and running, it’ll look very familiar to those of you who have worked in legit Kubernetes environments. The instructions to set up Minikube are nicely documented on this page, BUT in order to get Minikube working for our purposes, we’ll need to add a couple of additional things later on in this post.
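(For reference, once Minikube is installed, spinning up the sandbox itself should be as simple as a single command, though depending on your setup you may also need to specify a VM driver:

minikube start

That’s all we’ll assume is running for the rest of this post.)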

Before going further, I think a picture is worth a thousand words, so below is a tiny picture of the architecture we’ll be building here.

Alrighty, so on the right there we have our Minikube environment. Again, Minikube is highly representative of a legit Kubernetes environment, so the pieces inside Minikube are all things we’d see in any Kubernetes workspace. As such, we can see that MLflow’s tracking server is deployed inside a Deployment. That Deployment interacts with the outside world by connecting a service to an ingress (which is why the ingress spans both the inside and outside in our picture), and then we can view the tracking server interface inside our web browser. Simple enough, right?

Okay, so step 1 is going to be to create a Docker image that runs the MLflow tracking server. This is really simple, and I personally have uploaded my public image in case you want to skip this first step. (Here is that image in my personal Docker Hub.) The Dockerfile is simply going to build on top of a basic Python image, install MLflow, and set the proper entrypoint command. That looks like this:

# Defining base image
FROM python:3.8.2-slim
# Installing MLflow from PyPi
RUN pip install mlflow[extras]==1.9.1
# Exposing the port the tracking server will run on
EXPOSE 5000
# Defining start up command
ENTRYPOINT ["mlflow", "server"]

You know the drill from here: build and push out to Docker Hub! (Or just use mine.)
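(If you’d rather build and push it yourself, those commands look something like the snippet below. Swap in your own Docker Hub username and tag; mine are shown just as an example.)

# Building the image from the Dockerfile above
docker build -t dkhundley/mlflow-server:1.0.1 .

# Pushing the image out to Docker Hub
docker push dkhundley/mlflow-server:1.0.1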

The next step is to define our Kubernetes manifest files. I’m primarily going to stick to the Deployment manifest here. Most of this syntax will look pretty familiar to you. The only things to be mindful of here are the arguments we’ll pass to the Docker image we just built. Let me show you what my Deployment manifest looks like first.

# Creating MLflow deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mlflow-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mlflow-deployment
  template:
    metadata:
      labels:
        app: mlflow-deployment
    spec:
      volumes:
      - name: mlflow-pvc
        persistentVolumeClaim:
          claimName: mlflow-pvc
      containers:
      - name: mlflow-deployment
        image: dkhundley/mlflow-server:1.0.1
        imagePullPolicy: Always
        args:
        - --host=0.0.0.0
        - --port=5000
        - --backend-store-uri=/opt/mlflow/backend/
        - --default-artifact-root=/opt/mlflow/artifacts/
        - --workers=2
        ports:
        - name: http
          containerPort: 5000
          protocol: TCP
        volumeMounts:
        - name: mlflow-pvc
          mountPath: /opt/mlflow/

The “host” and “port” arguments are probably pretty familiar to you, but what might be new are the latter two. The --backend-store-uri argument tells MLflow where to store your run and experiment metadata, and --default-artifact-root tells it where to store the model artifacts themselves. Now, I’m using a simple Persistent Volume Claim (PVC) setup here, but the cool thing is that MLflow supports lots of different options for these, including cloud services. So if you wanted to store all your model artifacts in an S3 bucket on AWS, you can absolutely do that. Neat!
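As a quick illustration of what that might look like, the args block below swaps the artifact root over to an S3 bucket. The bucket name here is just a hypothetical placeholder, and keep in mind you’d also need to get AWS credentials into the pod for this to actually work.

args:
- --host=0.0.0.0
- --port=5000
- --backend-store-uri=/opt/mlflow/backend/
- --default-artifact-root=s3://my-mlflow-artifacts/
- --workers=2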

The Kubernetes manifests to build the service and PVC are pretty straightforward, but where things get tricky is with the ingress. Now honestly, you probably won’t have this issue if you’re working in a legit Kubernetes environment, but Minikube can be a bit tricky here. Honestly, this last part took me several days to figure out, so I’m glad to finally pass this knowledge along to you!
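For completeness, here’s a rough sketch of what that service and PVC could look like. The names line up with what the Deployment and ingress expect (mlflow-service and mlflow-pvc), but treat the storage size as a placeholder you’d tune for yourself.

# Creating the MLflow service
apiVersion: v1
kind: Service
metadata:
  name: mlflow-service
spec:
  selector:
    app: mlflow-deployment
  ports:
  - name: http
    port: 5000
    targetPort: 5000
    protocol: TCP
---
# Creating the MLflow persistent volume claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mlflow-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi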

Now let’s take a glance at the ingress YAML:

# Creating the Minikube ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mlflow-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/add-base-url: "true"
spec:
  rules:
  - host: mlflow-server.local
    http:
      paths:
      - backend:
          serviceName: mlflow-service
          servicePort: 5000
        path: /

Most of this should be familiar to you. In our example here, we’ll be serving the MLflow tracking server’s UI at mlflow-server.local. One thing that might be new to you is the pair of annotations, and they are absolutely necessary. Without them, your ingress will not work properly. I specifically posted the image below to Twitter to try to get folks to help me out with my blank screen issue. It was quite frustrating.

Bleh, talk about a mess! After much trial and error, I finally figured out that the specific annotation configuration provided above worked. I honestly can’t tell you why though. ¯\_(ツ)_/¯

But wait, there’s more! By default, Minikube isn’t set up to handle ingress right out of the box. In order to do that, you’ll need to do a few things. First up, after your Minikube server is running, run the following command:

minikube addons enable ingress

Easy enough. Now, you need to set up your computer to reference the Minikube cluster’s IP through the mlflow-server.local host we’ve set up in the ingress. To get your Minikube’s IP address, simply run this command:

minikube ip

Copy that to your clipboard. Now, this next part might be totally new to you. (At least, it was to me!) Just like you can create alias commands in Linux, you can also map host names to IP addresses on your own machine. It’s very interesting because I learned that this is the same file where your machine translates “localhost” into your local IP address.

To navigate to where you need to do that, run the following command:

sudo nano /etc/hosts

You should be greeted with a screen that looks like this:

So you can see here at the top what I was just referencing with the localhost thing. With this interface open, add a new line at the bottom with your Minikube’s IP address (which is 192.168.64.4 in my case) followed by the host name, which is mlflow-server.local in this case.
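In other words, the new line at the bottom of the file should look something like this (with your own Minikube IP, of course):

192.168.64.4    mlflow-server.local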

Alright, if you did everything properly, you should be pretty much all set! Do a kubectl apply with all your YAMLs and watch the resources come to life. Navigate on over to your browser of choice and open up http://mlflow-server.local. If all goes well, you should see a familiar looking screen.
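In case it’s helpful, the kubectl apply step can look something like the snippet below. The file names here are just how I’d name my own manifests, so adjust them to match yours.

# Applying all of the MLflow Kubernetes manifests
kubectl apply -f mlflow-pvc.yaml
kubectl apply -f mlflow-deployment.yaml
kubectl apply -f mlflow-service.yaml
kubectl apply -f mlflow-ingress.yaml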

If you look at my screenshot above, you’ll notice we’re definitely accessing this via the mlflow-server.local address, and if you notice the “Artifact Location,” it’s also correctly displaying where those artifacts are being stored in our PVC. Nice!

That’s it for this post, folks! I don’t want to overload you all with too much, so in our next post, we’ll pick up from here by logging a practice model or two to this shared tracking server just to see that it’s working. And two posts from now, we’ll keep the ball rolling even further by showing how to deploy models for real usage straight from this tracking server. So to be honest, the content of this post might not have been that glamorous, but we’re laying down the train tracks that are going to make everything really fly in the next couple of posts.

Until then, thanks for reading this post! Be sure to check out my previous posts on other data science-related topics, and we’ll see you next week for more MLflow content!