Zero Shot Super Resolution Part 2: Implementing ZSSR with Keras and MissingLink

Source: Deep Learning on Medium

By Shahar Guigui

In part 1, we reviewed Single Image Super Resolution (SISR) methods and Zero-Shot Super Resolution in particular. These methods aim to obtain a high-resolution (HR) output from a low-resolution (LR) version. As a quick recap, applications range from medical imaging and compressed-image enhancement to agriculture analysis, autonomous driving, satellite imagery, reconnaissance and more. In part 2, we'll set up our environment for running the code, and in part 3 we'll take a deep dive into implementing it with Keras and MissingLink.

Super Resolution is the act of creating a high-resolution image from a low-resolution image. Most Deep Learning methods tackle this problem by using a large neural network which is trained on big sets of data for long periods of time. Zero Shot Super Resolution tries instead to train a small ‘image-specific’ neural network on the single target image and its augmentations.

Here is an example of what we hope to achieve using this technique:

Original image:

Super-resolution image from ZSSR:

Our goal for this project is to create a model that can take different low-resolution images and upscale them as a proof of concept that illustrates how this technique works in practice.

To get started, we'll run the example implementation I've created with Keras on a TensorFlow backend. We'll also be using OpenCV.

Keras is a high-level deep learning library in python. It’s an abstraction of TensorFlow that enables us to speed up the process of writing working deep learning code. Likewise, OpenCV is a library of programming functions mainly aimed at real-time computer vision. We will need to run our code example with Python 3.
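As a taste of what Keras buys us: the actual ZSSR architecture is covered in part 3, but a small fully-convolutional network of the general shape ZSSR trains per image can be defined in a few lines. The layer counts and names here are my own placeholder choices.

```python
# Illustrative sketch only -- the real ZSSR architecture is covered in part 3.
# A small fully-convolutional Keras model: because there are no dense layers,
# it accepts images of any height and width, which suits per-image training.
from tensorflow.keras import layers, models

def build_sketch_model(depth=8, filters=64, channels=3):
    inp = layers.Input(shape=(None, None, channels))  # any spatial size
    x = inp
    for _ in range(depth - 1):
        x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    out = layers.Conv2D(channels, 3, padding='same')(x)  # same-sized output
    return models.Model(inp, out)

model = build_sketch_model()
model.compile(optimizer='adam', loss='mae')
```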

The final component of this project is MissingLink, a deep learning workflow automation platform. It lets us train multiple versions of our model, utilize cloud services with ease, compare and analyze experiments even as they run, manage data queries and versioning, and much more.

You’ll want to set up a free account before you start by doing the following steps:

  1. Visit the MissingLink website and click on the “Start Now” button.
  2. Enter your email and follow the new-account wizard.
  3. Once you have an account you can create a new project called ZSSR Test and configure it for Keras.

Once you have an account and new project set up, you’ll be able to use MissingLink’s Experiment Management page to track the progress of each experiment you run.

When you have everything you need, you’ll be ready to move onto downloading and setting up the actual code itself.

I’ve published a ZSSR project on GitHub which you can clone or download to run through the code example. The ReadMe file contains everything you need to get up and running quickly. Here is a high-level overview of what you’ll need to do:

Step 1. Clone this repo:
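The repository URL did not survive this copy of the article, so the path below is a placeholder; use the link from the original post.

```shell
# Placeholder URL -- substitute the actual repo link from the article.
git clone https://github.com/<username>/zssr.git
cd zssr
```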

Step 2. You are strongly recommended to use virtualenv to create a sandboxed environment for individual Python projects:

Step 3. Create and activate the virtual environment:
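A typical way to do this (the environment name `.venv` is my own choice):

```shell
# Create an isolated Python 3 environment and activate it.
python3 -m venv .venv          # or: virtualenv -p python3 .venv
source .venv/bin/activate
```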

Step 4. Install dependency libraries:
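The individual package names below are assumptions based on the libraries named in this article; the repo's requirements.txt is the authoritative list.

```shell
# Install everything the project declares:
pip install -r requirements.txt
# which should roughly correspond to:
# pip install keras tensorflow opencv-python missinglink-sdk
```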

Step 5. Authenticate your username from the CLI:
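The exact MissingLink CLI invocation is not preserved here, so the command below is an assumption; check the repo's ReadMe for the authoritative one.

```shell
# Assumed MissingLink CLI authentication command -- verify against the ReadMe.
ml auth init
```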

Step 6. Run the code:

At this point, you may need to also select the project you’d like to link to:

Once the project is running, you’ll be able to monitor its progress in the log file and in real-time on your dashboard.

Here you can see the loss and learning rate of our model over 2000 epochs. When the experiment is done, you’ll have a folder called “output” with the original image and the super-resolution image examples.

At this point, you can experiment with the different input images by adding --subdir DIRNAME to the run command with any of the following optional directory names:

Here is an example to run this with 2000 epochs in subdirectory 1:
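The original command was lost from this copy of the article; the script name and `--epochs` flag below are assumptions, while `--subdir` comes from the text above.

```shell
# Assumed entry point and epochs flag -- --subdir is documented above.
python main.py --subdir 1 --epochs 2000
```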

At this point you should have everything you need to download the code, run the project and make changes. In the next part, we’ll go through the code and dig deeper into what is going on under the hood.
