Source: Deep Learning on Medium
A GPU Server for the AI Racing League
We have been working on designing a low-cost GPU server for our AI Racing League events. This blog covers the design decisions our team made to create a reasonably priced GPU server, around $1,200, that should support 10 teams. For larger events, we also have options for upgraded systems.
The best way to give you an overview of what these events look like is to show you a typical floor plan.
We realized that we can’t count on any reliable network access at these events. Some of the events are in high school gyms with little or no hard-wired Internet or WiFi connections to the outside world. So all the model training needs to be done locally. And we need cost-effective hardware that supports our TensorFlow deep learning training needs.
The goal is to have ten teams compete at these events. Each team consists of 2–5 people. They are given a DonkeyCar and, if they are new to these events, a mentor to work with. Their job is to manually drive around the track ten times, gather the training images, transfer the data to our GPU server, build a model on the GPU server, and then transfer the model back to their car. We can use low-cost microSD cards to move the data between the cars and the GPU server.
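For readers unfamiliar with the DonkeyCar tooling, the per-team loop looks roughly like the following. The paths and mount points are hypothetical placeholders, and the `donkey train` syntax assumes a recent DonkeyCar release; check the version on your server.

```shell
# Hypothetical workflow for one team. Paths such as /media/sdcard
# and the model name team1.h5 are placeholders, not our exact setup.

# 1. Copy the team's tub (training images + records) from their
#    microSD card onto the GPU server.
mkdir -p ~/mycar/data
cp -r /media/sdcard/data/tub_1 ~/mycar/data/

# 2. Train a pilot model on the GPU server (DonkeyCar 4.x-style syntax).
cd ~/mycar
donkey train --tub data/tub_1 --model models/team1.h5

# 3. Copy the trained model back to the microSD card for the car.
cp ~/mycar/models/team1.h5 /media/sdcard/models/
```

With ten teams cycling through, keeping step 2 under five minutes is what drives the GPU requirement discussed below.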
There are typically about 10,000 224×224 images in a training set (called a tub). Our goal is to build a model for a training set of this size in under five minutes. If training takes longer than that, a backlog of teams waiting to train their models can build up.
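A quick back-of-envelope check shows what that five-minute budget implies. The epoch count and per-image JPEG size below are illustrative assumptions, not measurements from our events.

```python
# Back-of-envelope sizing for one tub. The epoch count and JPEG size
# are assumptions for illustration only.

images = 10_000                 # images per tub
w, h, channels = 224, 224, 3    # image dimensions, RGB

# Raw (uncompressed) size of one tub.
raw_bytes = images * w * h * channels
print(f"Uncompressed tub size: {raw_bytes / 1e9:.2f} GB")  # 1.51 GB

# On disk the tub is JPEG-compressed; assume ~10 KB per image.
jpeg_bytes = images * 10_000
print(f"Approx. on-disk tub size: {jpeg_bytes / 1e6:.0f} MB")  # 100 MB

# If training runs, say, 20 epochs over the tub, the trainer must
# consume this many images per second to finish in five minutes.
epochs = 20
budget_seconds = 5 * 60
rate = images * epochs / budget_seconds
print(f"Required throughput: {rate:.0f} images/s")  # 667 images/s
```

A laptop CPU struggles to sustain hundreds of images per second through a convolutional network, which is exactly the bottleneck described next.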
From experience, we found that building a model on a standard laptop took too long: training typically ran over an hour. With a reasonably high-powered GPU server, however, training can be done in under five minutes.
We originally “borrowed” some older GPU servers that a data center was no longer using. Our first server weighed almost 70 pounds and required a cart to move around without straining our backs. We then realized that most of this weight could be eliminated by custom-building a GPU server in a small case. We found a small case with a handle that could easily be hauled around to various events.
We started configuring a system around a high-end Nvidia GPU. The RTX 2080 Ti was our first purchase; its list price was around $1,200. We chose a small case with a glass side panel so we could show the students the GPU board, and added some RGB RAM and an RGB water cooler for a little extra bling. The small case required a small motherboard, and we decided we didn’t need more than 32 GB of RAM. We also used a 1 TB SSD, which has become very cost-effective in the last year. Since most of the work is done on the GPU, we didn’t really need a fast CPU or a lot of RAM for our application. The total build price was $2,310.02.
Now the question is, could we optimize this design and make it affordable for the high schools we plan to give grants to?
Here is a sample parts list for what we call the “Cost-Effective GPU Server” we are suggesting for these events:
You can see Jon Herke’s powerful original Tiny Monster GPU server parts on the PC Parts Picker list here. Note that considerable effort is required to fit all the parts in our small case. You can view our cost-effective parts list on PC Parts Picker here. I would welcome your suggestions if you see any additional room for improvement in price or performance.
My next blogs will include photos from the assembly process and directions for setting up the Nvidia cards under Ubuntu.