Supervisely goes beyond annotation – latest Deep Learning models out of the box



Since our first release in August 2017, a lot of amazing things have happened around Supervisely. The community keeps growing, and today around 5,000 people are using our platform. We have come a long way: we received positive feedback, implemented new features, fixed bugs, and now understand people’s needs much better.

Supervisely makes AI available for everyone

Solving the GPU puzzle

But there was one thing that made us really sad:

For a long time, Community Edition remained very limited compared to Enterprise Edition. Our enterprise customers could use Supervisely as a complete computer vision platform, whereas Community Edition was more like a set of annotation tools. Why was that not enough?

When you build an AI product, you have to go through three main stages, one by one — annotation, data preparation and neural network building. If any stage is missing or inefficient, the development process slows down significantly.

Supervisely was designed to cover these main stages:

Fundamental concepts behind Supervisely

In our case, the neural network module was missing from Community Edition, and the user experience was incomplete. But we managed to find a solution.

The final piece

Today we are happy to announce the final piece of our computer vision platform — now everyone can train and run the latest neural networks with Supervisely.

To make it possible, we had to overcome one huge challenge: “Where do we get tons of GPU resources so that our community can use neural networks?”

The solution itself turned out to be trivial — we have made it extremely easy to connect your own PC or cloud machine to Supervisely. Just run a single command in your terminal to install Supervisely Agent and start experimenting with neural networks right away: UNet V2, YOLO V3, Faster R-CNN, Mask R-CNN, DeepLab V3 and many others are already there, with many more coming.

Our solution to the “GPU puzzle” has the following benefits:

1. It’s Free!

We gave too much power to NVIDIA — they forbid datacenters from using cheaper GeForce video cards in servers. As a result, cloud GPUs will cost you an arm and a leg: a high-end GTX 1080 Ti with Pascal architecture costs less than a single month of a p2.xlarge instance on AWS.

Be smart and use bare metal — it’s OK; we personally know plenty of large companies doing this. But, again, you can always deploy an agent in AWS or Azure if you don’t have a GPU on hand.

2. No lock-in

Google, Amazon, Microsoft — they all offer cloud AI services. The problem is that, essentially, they force you to use their computational resources, store your data on their servers and use their software — to lock you in forever.

With Supervisely you can run models and store data anywhere you want — on your local PC or on an AWS server — the choice is yours.

3. Efficient use of GPUs

Have you ever had that unsatisfied feeling that too many research ideas were left untested? Not anymore! Connect as many computers as you want, run training processes with various data samples or training hyperparameters, and then aggregate and evaluate the results.

In other words, connect computers to your private computational cluster and conduct tons of experiments at no cost.

4. Reproducible research

Building a computer vision product based on neural networks implies a lot of experiments, especially in the early stages of development. These experiments involve annotating objects in different ways, playing with different neural network architectures, figuring out how to better augment the data, and so on.

Supervisely was designed to keep track of the experiments users perform. Scripts for data transformations and training hyperparameters are stored and organized so that it is easy to see and reproduce the actions that led to “the most accurate model”.


Not enough? Well, we have another surprise for you.

Supervisely goes open source

We are open-sourcing (github link) every NN we have, tools like DTL, our Python library and, of course, the agent. But why should you even bother? Here are a couple of reasons:

1. Single format to rule ’em all

Deep Learning community is awesome. Every week people share state of the art NNs. Better, Faster, Stronger!

But it takes a lot of time to figure out how to use them, because open-source projects are not standardized: they use different data formats and training/inference procedures. And unfortunately, these projects sometimes have bugs that are hard to find.

We adapt the most popular NNs for you — every model we release is standardized, tested on real projects and rewritten to a single format to work with Supervisely. Even if you don’t want to use Supervisely at all (but why?), it is still a good collection of the best models, ready to use.

Just take ‘em!
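To illustrate the value of a single format: once every model reads and writes one annotation schema, tooling becomes trivial. The JSON below is a sketch in the spirit of the Supervisely project format — field names like `classTitle` and `points` are illustrative and may differ between versions:

```python
import json

# A hypothetical annotation file in a single, Supervisely-style format.
# The exact schema is illustrative; consult the open-sourced library for details.
ann_json = """
{
  "size": {"height": 600, "width": 800},
  "objects": [
    {"classTitle": "car",
     "points": {"exterior": [[10, 20], [110, 20], [110, 80], [10, 80]],
                "interior": []}},
    {"classTitle": "person",
     "points": {"exterior": [[300, 100], [340, 100], [340, 200], [300, 200]],
                "interior": []}}
  ]
}
"""

def summarize(ann_text):
    """Return the image size and the list of object classes from one annotation."""
    ann = json.loads(ann_text)
    size = (ann["size"]["height"], ann["size"]["width"])
    classes = [obj["classTitle"] for obj in ann["objects"]]
    return size, classes

size, classes = summarize(ann_json)
print(size, classes)  # → (600, 800) ['car', 'person']
```

Because every adapted model consumes the same schema, a helper like this works unchanged whether the labels came from the annotation tool, a DTL transformation or a model’s inference output.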

2. Safety first!

Precaution is better than cure. You don’t have to blindly trust us and run arbitrary commands on your computer — with the agent open-sourced, you can check the code yourself and see that there are no hidden parts: we are not going to mine some sweet dogecoins, and we don’t touch your computer at all.

3. Custom models

The best part — you are not limited to the models we provide: with the Supervisely Library you can implement any architecture you want and add it to Supervisely as a Docker image. Boo-yah! Now you can run it with a single click, track experiments, and try different datasets and data augmentation strategies — everything you love Supervisely for.

And, of course, your models are yours — push them to a private registry and provide the credentials in environment variables: we don’t have any access to them. Or share them with the community if you want.
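Packaging a custom model as a Docker image could look roughly like the sketch below. The base image, file layout and entrypoint here are hypothetical — the open-sourced agent and library define the actual contract:

```dockerfile
# Hypothetical Dockerfile for a custom model plugin.
# Base image, paths and entrypoint are illustrative, not a required layout.
FROM nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04

# Install Python plus your model's dependencies (assumed requirements).
RUN apt-get update && apt-get install -y python3-pip && \
    pip3 install -U pip

# Copy the model code into the image.
COPY src /workdir/src
WORKDIR /workdir

# The agent launches the container for training/inference tasks.
ENTRYPOINT ["python3", "src/main.py"]
```

Build and push it like any other image — e.g. `docker build -t registry.example.com/team/my-model .` followed by `docker push` (where `registry.example.com` is a placeholder for your private registry).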

Other improvements

Besides neural network support and bug fixes, this release has a bunch of cool new features:

  • Shareable links
    Now you can share projects and neural networks between accounts via a simple link. The sharing mechanism works much like Google Docs: you create a shareable link and give it to anyone you want. When a user clicks the link, the corresponding project or model is copied to their account.
  • Simplified user permissions
    New permission controls let you create several users and give them granular access to selected portions of your data.
  • Customizable hotkeys
    The annotation tool has become even more convenient — every action now has a hotkey that you can reassign at any moment to make your workflow exactly the way you want it.

How to get started

Installation

To start using neural networks in your project, please follow these steps:

Step 1. Go to new.supervise.ly and sign up.
Step 2. Install the agent on your machine with a GPU by following these steps.

After that, your GPU machine is connected to Supervisely and can be used to run training / inference tasks.

Tutorials

The easiest way to start is to go through our tutorials step by step. We recommend starting with the following one: Multi-class image segmentation using UNet V2.

Below is a list of tutorials with toy examples that will help you understand the basic concepts and train the most popular Deep Learning models yourself.

Here are tutorials for more complicated real-world cases:

We believe you will find a lot of interesting stuff there.


Post scriptum

The new version of Supervisely is still in beta, so some issues are possible. To make the transition as smooth as possible, we have released it under a new link: new.supervise.ly. Your accounts are already there, so just log in, attach an agent and start experimenting with neural networks! After beta testing, we will migrate your datasets from the previous version to the new one.

Feedback and suggestions would be greatly appreciated! If you have any technical or general questions, feel free to ask in our Slack.

If you found this article interesting, help others find it too — more people will see it if you give it some 👏.

Source: Deep Learning on Medium