ODSC West 2018 — A Brief Recap

ODSC (Open Data Science Conference) is currently one of the biggest data science conferences around, and it has definitely earned its high search ranking and reputation. After being fortunate enough to receive a partial scholarship to ODSC West 2018 (thanks, ODSC!), I had high hopes for the talks based on their titles and descriptions alone. The workshops and talks I attended on Friday, November 2, and Saturday, November 3, met, if not exceeded, those expectations. So for anyone considering checking it out in the future who also has the spare money and time, I would definitely recommend attending.

But why mention this? Well, not everyone can attend for various reasons, so I’ve decided to recap some of the workshops I attended, since they had some fairly interesting information worth sharing. I’ll briefly discuss the workshops entitled “How to Use Satellite Imagery to be a Machine Learning Mantis Shrimp”, “Open Source Random Variables: Building a Prediction Web”, and “Deep Learning on Mobile”, providing whatever resources and references I can. I personally walked out of these talks with some amazement and inspiration, so I’ll attempt to pass some of those feelings along with the knowledge.

How to Use Satellite Imagery to be a Machine Learning Mantis Shrimp

I found Steven Pousty’s talk quite illuminating and his enthusiasm infectious, especially since I hadn’t dealt with satellite imagery before. Here are the slides for reference; inside are links to prepped Jupyter notebooks that cover the major concepts and details of working with satellite imagery, including finding and ingesting imagery, preprocessing it, manipulating and segmenting images, and extracting vectors.

So, some points I’d like to highlight:

  • Finding satellite imagery with less cloud cover makes analyzing the images a lot easier, since you can avoid atmospheric compensation (acomp), which complicates image analysis for a couple of reasons.
  • Images from different sources/satellites can have different numbers of bands/wavelengths, which affects what kinds of analysis you can do. I didn’t know that vegetation reflects near-infrared light and water doesn’t, making it easy to detect either if you have that infrared band (there’s a small sketch of this after the list). There are plenty of other amazing, interesting factoids about what is or isn’t reflected at certain wavelengths that are worth exploring.
  • Remote sensing indices are useful for figuring out which bands/wavelengths can help you find what you’re looking for. This remote sensing index database is a good place to start once you have an idea of what you’re after.
  • Understanding how to convert between rasters and vectors is important if you want to work with GIS software; GeoJSON is a nice format that makes that conversion easier.
  • Converting between 3D and 2D objects will introduce distortion of some sort; whatever projection and datum you choose, understand them well.
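
To make a few of these points concrete, here’s a minimal sketch in Python, assuming rasterio and NumPy are installed and a hypothetical four-band GeoTIFF, scene.tif, where band 3 is red and band 4 is near-infrared (band order varies by sensor, so check your imagery’s metadata). It computes NDVI, a classic remote sensing index, thresholds it into a rough vegetation mask, and traces that mask into GeoJSON-style polygons:

    import json

    import numpy as np
    import rasterio
    from rasterio import features

    # Hypothetical four-band scene; check your sensor's band order first.
    with rasterio.open("scene.tif") as src:
        red = src.read(3).astype("float32")
        nir = src.read(4).astype("float32")
        transform = src.transform

    # NDVI = (NIR - Red) / (NIR + Red): vegetation reflects near-infrared
    # strongly while water doesn't, so NDVI runs high over plants and low
    # over water. The small epsilon guards against division by zero.
    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)

    # Threshold into a rough vegetation mask (0.3 is a common rule of thumb).
    veg_mask = (ndvi > 0.3).astype("uint8")

    # Raster -> vector: trace mask regions into GeoJSON-style polygons,
    # using the scene's affine transform to get real-world coordinates.
    polygons = [
        {"type": "Feature", "geometry": geom, "properties": {}}
        for geom, value in features.shapes(veg_mask, transform=transform)
        if value == 1  # keep vegetated regions only
    ]
    collection = {"type": "FeatureCollection", "features": polygons}
    print(json.dumps(collection)[:200])

The 0.3 threshold and the band order are illustrative assumptions; a real pipeline would pull both from the sensor’s documentation.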

Steven Pousty took what could have been a very dry talk and made it engaging and accessible to laymen. He deserves full credit for how amazing this talk was.

Open Source Random Variables: Building a Prediction Web

This talk was interesting in that it was essentially a preview of and discussion about a service, Roar, that is in the works at J.P. Morgan. Unfortunately, the slides don’t seem to be available (or at least I can’t find them), and as of this writing the website seems to be down as well, so I’ll try to capture the thrust of the presentation.

After the speakers (Rusty Conover and Peter Cotton) introduced themselves and asked a few questions to feel out their audience, they launched into what exactly their Roar platform is and what motivated them to create it. The gist is that they want to build a platform that offers data science as a service. They see today’s marketplace as one where companies commission boutique, custom-tailored solutions to their problems, a kind of pre-Ford, pre-industrial era of data science, and they consider that system inefficient.

Rather than having a single data scientist or team solve each individual problem, they envision a platform where a business can pose a problem and data scientists, or the bots that serve them, can put a price on the accuracy of their models. Perhaps one data scientist offers a model with 80% accuracy at a fixed price while another offers 90% accuracy at a similar price; companies would benefit from having multiple people vying for their dollar, and competition would most likely drive the price of data science down. Roar would rely on the invisible hand of the market to set the price of each additional percentage point of accuracy, or whatever other metric is used for a given model.

They envision a system of incentives in which multiple parties get paid if their models capture aspects of the data not already incorporated by existing (submitted) models. As an example, say a business was trying to predict ice cream sales. Multiple data scientists created models, each incorporating different aspects: one used temperature as the main factor and reached 80% accuracy, while another used the scarcity of water and other thirst-quenching alternatives and also reached 80% accuracy. Under the Roar system, each data scientist would be compensated for their marginal contribution to what would ideally be an even better model incorporating both aspects, even though each model individually had approximately the same accuracy.
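
To make the incentive scheme concrete, here’s a toy sketch of one way such marginal contributions could be scored: a Shapley-style split averaged over the orders in which models join. To be clear, both the numbers and the mechanism are my own illustration of the idea, not Roar’s actual payout rule, which wasn’t spelled out in the talk:

    from itertools import permutations

    # Toy Shapley-style payout: average each contributor's marginal
    # improvement over every order in which the models could be combined.
    # The accuracies below are made up for the ice cream example; this is
    # NOT Roar's actual mechanism.
    value = {
        frozenset(): 0.50,                    # baseline guess
        frozenset({"temp"}): 0.80,            # temperature model alone
        frozenset({"water"}): 0.80,           # water-scarcity model alone
        frozenset({"temp", "water"}): 0.90,   # both combined
    }
    players = ["temp", "water"]

    def shapley(player):
        orders = list(permutations(players))
        total = 0.0
        for order in orders:
            before = frozenset(order[: order.index(player)])
            total += value[before | {player}] - value[before]
        return total / len(orders)

    for p in players:
        print(p, shapley(p))  # each earns credit for 0.20 of the 0.40 lift

Under a split like this, both data scientists get paid even though neither model alone beats the other, because each captures information the other misses.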

One point raised was that someone could create a bunch of models that latch onto spurious correlations, submit them, and get paid even though the correlations are irrelevant. Peter’s and Rusty’s response was that they wouldn’t restrict people from doing that; rather, the invisible hand of the market would naturally correct it. Models that chase spurious correlations take time and computational power to run, and those commissioning a model could require that it be interpretable before they’ll pay for it. In that respect, running such models would incur a cost that may never be recovered, which would discourage the behavior.

Through Roar, J.P. Morgan hopes to become a real-time broker of machine learning. The data that appears on the platform would be the responsibility of the company that provided it; Roar wouldn’t try to manage it in any way. J.P. Morgan seems to want to build a community around the new platform and increase both buy-in and feedback from the data science community at large.

I found the talk quite interesting because it gave me a look into the future that J.P. Morgan envisions, one in which data science is more commoditized and less artisanal. This talk gave me the broadest perspective on the data science field of any I attended, and for that alone it was definitely worth it.

Deep Learning on Mobile

This was a dense talk chock-full of fantastic, detailed information. For reference, here are the slides. Anirudh Koul, a fantastic, fast-paced presenter, structured the talk around what you could do given a certain amount of time to create a deep learning app (an hour, a day, a week, etc.) and provided not only background knowledge and resources to get you started, but also tips born of experience on best practices for building deep learning mobile apps.

Some of the points I’d like to reinforce here:

  • There are plenty of services that can perform text recognition or image tagging if you need to quickly create an MVP; between Google’s Cloud Vision API, Microsoft’s Cognitive Services, IBM Watson’s Visual Recognition, Amazon’s Rekognition, and Clarifai, you should have plenty of options. Also, it’s harder than it seems to do apples-to-apples comparisons between the services, because they return varying numbers of tags at different levels of specificity.
  • Deep learning for mobile has become significantly easier as binaries get smaller, developers can import models directly into projects, and formats like ONNX (Open Neural Network Exchange) increase interoperability between frameworks (there’s a small sketch of this after the list).
  • When developing deep learning mobile applications, size matters, especially since Android and iOS limit app size; downloading and compiling models on the phone has some benefits, and model-management frameworks (e.g., Google ML Kit or Fritz) make model updates easier.
  • Sometimes, fine-tuning an existing pre-trained model is appropriate; other times, it might make sense to build a CNN from scratch. Consider the size of your dataset and how similar it is to the original dataset when making that choice.
  • There are services for training your own classifier without coding. Microsoft’s CustomVision.ai, Google’s AutoML, IBM’s Watson Visual Recognition, and Baidu’s EZDL are all pretty good, each with its own strengths; if you decide to use one, 30 images should be enough to get a prototype running, while 200 images would be better for a more robust production model.
  • Crowdsourcing your data collection can get you a lot of good training data if you do it right; Anirudh gave a great example of this with currency detection for the Seeing AI app.
  • Anirudh provided a panoply of tips and tricks for getting the maximum efficiency from your CNN, including but not limited to: choosing the right architecture, designing efficient layers, pruning, network binarization, and especially quantization. There’s far more detail than I can do justice to in a summary; you should really check out the slides for this portion if you haven’t already.
  • If you want to see the state of the art for machine learning on embedded platforms, check out the results of the LPIRC (Low Power Image Recognition Challenge) and the System Design Contest at the Design Automation Conference.
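
As a small taste of the interoperability and quantization points above, here’s a minimal sketch using PyTorch (my choice of framework for illustration, not necessarily what the talk used): it exports a toy model to ONNX and then applies dynamic quantization to shrink it:

    import torch
    import torch.nn as nn

    # Toy model standing in for something you'd actually ship.
    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
    model.eval()

    # ONNX export: a framework-neutral graph that other runtimes and
    # converters (e.g., for mobile targets) can consume.
    dummy_input = torch.randn(1, 128)
    torch.onnx.export(model, dummy_input, "model.onnx")

    # Dynamic quantization: store Linear weights as int8 rather than
    # float32, roughly a 4x size reduction with little accuracy loss.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )
    torch.save(quantized.state_dict(), "model_int8.pt")

The exported ONNX graph can then be converted for whatever runtime your target platform uses, which is exactly the interoperability win mentioned above.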

While all of that information alone was enough to keep me satisfied and engaged for a long time, the presenters even managed to demo a mostly-prepped Coke vs. Pepsi detector using CustomVision.ai and images the audience took of the Coke and Pepsi cans that were passed around at the beginning of the presentation. It was so simple, yet so satisfying to be a part of.

All’s Well That Ends Well

While I attended more talks and workshops than the ones recapped here, there was so much good information that it was hard to retain everything, despite the copious amounts of caffeine available throughout the event. ODSC has built up quite a community, and I’m both glad and impressed that I had such a pleasant experience throughout.

Anyways, I hope that I’ve passed on both the cool information and sense of amazement that I was fortunate enough to have. Enjoy!