The Intra-MIC Hack: A Word from the Teams (Part 2)

This article was originally published by the SRM Machine Intelligence Community in Deep Learning on Medium.


Team Carbon

Team members: Pooja Ravi, Anushka Choudhury

What was your hack about?

We wanted to do something meaningful by finding a way to bridge communication gaps for people with hearing or speech related disabilities. Unless a person is in touch with someone who has a hearing or speech impairment, they generally wouldn’t be familiar with any sign language. Hence we picked a project we decided to call ‘Sign Language Reader’: as the title suggests, it is a model that translates 37 characters (26 letters, 10 digits and the special character ‘_’) of American Sign Language into English text.

What did you learn along the way?

This was the first time we’ve worked on a full-fledged computer vision project. We learnt about image preprocessing, performing binary masking on images and implementing a CNN, and, being entirely new to Streamlit, we learnt how to deploy a model as a web application. Besides the technical skills, what we truly loved was getting to learn a bit of American Sign Language. We strongly believe that everyone should be included, no one left out, and communication can’t be allowed to become a barrier there.
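As an aside, the binary masking the team mentions can be sketched in a few lines. This is a generic illustration of the technique, not Team Carbon’s actual preprocessing code, and the threshold value is an assumption:

```python
import numpy as np

def binary_mask(gray_image: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Return a 0/1 mask: pixels brighter than `threshold` become 1.

    In a sign-language pipeline, a mask like this helps isolate the
    hand from the background before the image is fed to a CNN.
    """
    return (gray_image > threshold).astype(np.uint8)

# Toy 2x3 "grayscale image"
img = np.array([[10, 200, 130],
                [255, 50, 90]], dtype=np.uint8)
mask = binary_mask(img)
print(mask)  # [[0 1 1]
             #  [1 0 0]]
```

In practice the threshold is often chosen adaptively (e.g. Otsu’s method) rather than hard-coded.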

What difficulties did you face?

Technical issues and the lack of a proper dataset restricted our project to a great extent. We wanted to recognise whole words instead of single characters but couldn’t find a suitable dataset. We also ran into technical difficulties while taking input from a live video feed, and deployment on Heroku caused some issues with the slug size.

Any special comments?

The hack was a great learning experience and we learnt a lot more than we expected to, in such a short period of time.

Yet another language barrier broken down!

Check out the hack here!

Team Nitrogen

Team members: Aradhya Tripathi, Astha Vijayvargiya, Aayush Upadhyay

What was your hack about?

Our project “Colab Connect” essentially aimed at bridging the gap between a cloud GPU (Google Colab) and a local machine. The inspiration was people whose laptops have no GPU but who badly need the compute. We abstracted away the process of connecting to Google Colab’s VM so that its GPU can be used from a local machine, which turned out great.

What did you learn along the way?

Primarily, we learnt not to give up on a project even when many people are working on the same idea simultaneously. We learnt how to get two servers to communicate, and about tunnels and key bindings. We got to know about ‘ngrok’, a cross-platform application that lets developers expose a local development server to the internet with minimal effort, which fit our idea really well. Besides that, the project taught us a great deal about JSON, file handling, OOP, networking, remote servers and production difficulties.
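The “getting two servers to communicate” part the team describes boils down to the kind of socket plumbing that tools like ngrok automate. A minimal sketch of the idea, purely illustrative and not Colab Connect’s actual code:

```python
import socket
import threading

def start_echo_server(host: str = "127.0.0.1", port: int = 0) -> tuple:
    """Start a tiny TCP server that echoes one message back, then exits.

    Returns (actual_port, thread). Passing port 0 asks the OS
    for a free port, so the sketch never collides with anything.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    actual_port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"echo: " + data)
        srv.close()

    t = threading.Thread(target=serve, daemon=True)
    t.start()
    return actual_port, t

def send_message(port: int, message: bytes) -> bytes:
    """Connect to the server and return its reply."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(message)
        return c.recv(1024)

port, thread = start_echo_server()
reply = send_message(port, b"hello from the local machine")
thread.join()
print(reply)  # b'echo: hello from the local machine'
```

A tunnel like ngrok essentially does this across the public internet: it relays bytes between a remote endpoint and a server bound to localhost.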

What difficulties did you face?

Colab does not expose an API, while Drive requires a GCP connection. We wanted to abstract away both of these parts but couldn’t, as a result of these constraints. We also faced difficulty in mounting Google Drive.

Any special comments?

The MIC Hack was a fantastic idea in itself, to say nothing of the perfect execution. Each and every project presented carved out a niche of its own. The quality of MIC was very evident, and events like this should become a regular affair.

Goodbye Colab, hello VSC

Check out the hack here!

Team Oxygen

Team members: Pranjal Datta

What was your hack about?

The hack, titled “Nocode.ai”, was about building neural networks without writing a single line of code! I created a wrapper around PyTorch to build complex custom models in a matter of minutes, with minimal knowledge of the framework itself. Building and prototyping AI should be easy, quick and hassle-free.

What did you learn along the way?

Representing DL architectures in YAML, parsing them efficiently, and building dynamic graphs plus an internal graph engine that facilitates efficient model building, forward passes and non-linear flow control; essentially way too much for a single hack.
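The YAML-to-model idea can be sketched with a registry that maps declarative layer entries to constructed layer objects. This is a dependency-free illustration of the general pattern, not Nocode.ai’s actual schema or engine; the layer names and fields here are made up, and strings stand in for what a real wrapper would build as e.g. torch.nn modules:

```python
# What yaml.safe_load would return for a small sequential architecture
# (hypothetical schema, for illustration only).
spec = {
    "model": [
        {"layer": "linear", "in": 4, "out": 8},
        {"layer": "relu"},
        {"layer": "linear", "in": 8, "out": 2},
    ]
}

# Registry: layer name -> constructor. A real wrapper would return
# framework modules; strings keep the sketch dependency-free.
LAYER_REGISTRY = {
    "linear": lambda cfg: f"Linear({cfg['in']} -> {cfg['out']})",
    "relu": lambda cfg: "ReLU()",
}

def build_model(spec: dict) -> list:
    """Turn the declarative spec into an ordered list of layer objects."""
    layers = []
    for cfg in spec["model"]:
        kind = cfg["layer"]
        if kind not in LAYER_REGISTRY:
            raise ValueError(f"unknown layer type: {kind}")
        layers.append(LAYER_REGISTRY[kind](cfg))
    return layers

print(build_model(spec))
# ['Linear(4 -> 8)', 'ReLU()', 'Linear(8 -> 2)']
```

Non-linear flow (skip connections, branches) is where this gets hard, since a flat list no longer suffices and the spec has to describe a graph, which is the “internal graph engine” part of the hack.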

What difficulties did you face?

Non-linear flow control and memory efficiency were the key challenges. For the wrapper to be useful to advanced users as well, it could not cater only to beginner-level sequential model architectures.

Any special comments?

It was a great hack! I am really looking forward to bringing Nocode.ai to the next level and removing a massive barrier to entry into deep learning!

You don’t know how to code? Say no more, we got you covered

Check out the hack here!

Team Fluorine

Team members: Srijarko Ray, Tanmay Khot

What was your hack about?

The project hack by Team Fluorine was an image classifier integrated with adversarial attacks. We built a web app that lets the user upload an image and classify it with a model and adversarial attack of their choice from the available options. Self-driving cars may well be the future, and correct recognition of road signs and traffic boards is essential for them; adversarial attacks are particularly relevant in that field, and to some extent in medicine and drones as well. This is basically what inspired us to take up the topic for the hack.
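To give a flavour of what an adversarial attack does, here is a minimal sketch of the Fast Gradient Sign Method (one of the classic attacks implemented by libraries such as the one the team used) applied to a toy logistic classifier. The weights and inputs are made up for illustration; this is not the team’s model or code:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, x) -> float:
    """P(label = +1) under a toy logistic classifier."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def fgsm(w, x, y: int, eps: float):
    """Fast Gradient Sign Method on the logistic loss.

    Loss = -log sigmoid(y * w.x); its gradient w.r.t. x has the
    sign of (-y * w), so each input feature is nudged by eps in
    the direction that increases the loss the fastest.
    """
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign(-y * wi) for wi, xi in zip(w, x)]

w = [2.0, -1.0, 0.5]
x = [1.0, 0.5, 1.0]          # correctly classified as +1
x_adv = fgsm(w, x, y=1, eps=0.9)

print(round(predict(w, x), 3))   # 0.881 (confident, correct)
print(predict(w, x_adv) < 0.5)   # True: the small perturbation flips it
```

The unsettling part, and the reason this matters for road signs, is that each feature moved by at most eps, yet the prediction flipped.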

What did you learn along the way?

Collectively, we learnt a lot. Neither of us had built a web app before, so learning Streamlit was one of the major gains. We integrated the Adversarial Robustness Toolbox library for our model and for the implementation of various attacks. We also gained great experience working on a collaborative project, and it was amazing.

What difficulties did you face?

The main difficulty was understanding the code base of the Adversarial Robustness Toolbox library in order to integrate it with our model. Initially we also had some issues producing the adversarial samples, as well as with the final deployment on Streamlit, but with some solid perseverance we overcame them soon enough.

Any special comments?

This hack helped us solidify our learnings as well as participate in a team of equals. We just had great fun overall!

Who would win: A 121 layer deep CNN with billions of params or one thicc pixel boi?

Check out the hack here!