100 Days of ML — Day 3 — A Brief Intro to Neural Networks and Why I’m Probably Not Disrupting…



[Image caption: This comes up in a Pexels search for Neural Network]

Since I don’t know who’s going to read this, let’s start from scratch. Humans have wanted to easily predict things since the beginning of time. It’s informed everything from timing our hunts around sabre-toothed tigers back in the day to stacking the best Fantasy Football team today (which, by the way, does not star Alex Jones as QB, and I totally had Ryan Fitzpatrick on the table).

In mathematics, the easiest way to look at how we’ve learned these relationships is the old y = x deal. You might have done slope? y = mx + b. If I work x hours at a rate of money (m) per hour, plus a bonus (b), then I get y, my income (like, y is my income so low, I should study AI).
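In Python, that whole equation is basically one line. Here’s a quick sketch (the rate and bonus numbers are made up, obviously):

```python
def income(hours, rate=15.0, bonus=50.0):
    """y = m*x + b: hours worked (x) times hourly rate (m), plus a bonus (b)."""
    return rate * hours + bonus

print(income(40))  # 40 hours at $15/hr plus a $50 bonus -> 650.0
```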

This is a linear equation relying on one set of x values to give us one set of y values. At a high level, it looks like this:

[Image caption: I’m so happy I learned math when they switched to white boards.]

We can generalize this to deal with all sorts of real-life scenarios that can’t be measured linearly. The equation would be something like y = sin(mx^n + b). Here are some terrible graphs that I’m sorry aren’t better, but I don’t have a graphic designer or a layout person right now.

[Image caption: You did all this in Windows Paint?!?]
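If you’d rather see the wiggle without my Paint skills, here’s a quick numpy sketch (m, n, and b are arbitrary picks, nothing special about them):

```python
import numpy as np

# y = sin(m * x**n + b), with made-up constants
m, n, b = 2.0, 2, 0.5
x = np.linspace(0, 3, 10)
y = np.sin(m * x**n + b)

for xi, yi in zip(x, y):
    print(f"x={xi:.2f} -> y={yi:.2f}")
```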

At the highest level, these graphs all ultimately yield the same “neural network”: an input goes in, an output comes out.

[Image caption: If THEN then! NICE]

In the real world, two or more variables (inputs) will yield a y variable (output). There are problems in the statistical world when measuring more and more variables, but that’s outside the scope of this article. It’s also why we have neural networks, and why they look like this:

The hidden layer is my favorite thing in neural networks because our understanding of it is so limited. Basically, it’s all the if-then relationships that we don’t have to manually code, which is great, but it’s not something I can explain to my grandmother, so voodoo magic it is.
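Here’s roughly what that picture looks like in code: a bare-bones numpy sketch of one forward pass, with random untrained weights just to show the shape of the thing (none of these numbers mean anything yet):

```python
import numpy as np

rng = np.random.default_rng(42)

# Two inputs -> three hidden neurons -> one output.
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)  # input layer -> hidden layer
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)  # hidden layer -> output

def relu(z):
    # The nonlinearity is what lets the hidden layer pick up
    # if-then style relationships instead of just straight lines.
    return np.maximum(0.0, z)

def forward(x):
    hidden = relu(x @ W1 + b1)  # the voodoo-magic layer
    return hidden @ W2 + b2

print(forward(np.array([1.0, 2.0])))  # nonsense until trained, but it runs
```

Training is just the process of nudging W1, b1, W2, and b2 until the outputs stop being nonsense.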

So as I’m working on my daily podcast (recording, editing, uploading, and trying to find every little trick), I think to myself: can’t a neural network do the normalizing, compressing, and EQ way better than I could? Is it possible to automate my audio engineering?

It is! But it doesn’t need a neural network.
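Plain old signal processing covers it. Here’s a sketch using the pydub library (my pick for the example; the file names are hypothetical, and the threshold and ratio are just reasonable-sounding defaults):

```python
from pydub import AudioSegment
from pydub.effects import normalize, compress_dynamic_range

# Hypothetical file names; swap in your own episode.
episode = AudioSegment.from_file("raw_episode.wav")

episode = episode.high_pass_filter(80)             # cut low-end rumble
episode = compress_dynamic_range(episode,
                                 threshold=-20.0,  # dBFS
                                 ratio=4.0)        # tame the loud peaks
episode = normalize(episode)                       # bring it up to full level

episode.export("edited_episode.mp3", format="mp3")
```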

If we did go the neural network route, we’d be looking at me recording about 1,000 samples of my voice and then doing the post-production work on each one. We’d feed in a training set, a validation set, and a testing set and get the neural network trained.
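The splitting part is the easy bit. A sketch with scikit-learn, where the 1,000 pairs are stand-in file names for raw and edited clips:

```python
from sklearn.model_selection import train_test_split

# Stand-ins for 1,000 (raw clip, edited clip) pairs of my voice.
pairs = [(f"raw_{i}.wav", f"edited_{i}.wav") for i in range(1000)]

# Hold out 20% for testing, then 20% of what's left for validation.
train_val, test = train_test_split(pairs, test_size=0.2, random_state=42)
train, val = train_test_split(train_val, test_size=0.2, random_state=42)

print(len(train), len(val), len(test))  # 640 160 200
```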

But it’d be overkill.

At a high level, the data set of my voice is just one x variable yielding a y variable. Where neural networks could come in handy is if I had 1,000 voice actors recorded 1,000 ways. Then I could feed a neural network a ton of data, and it could generalize audio engineering for nearly the whole planet.

This is data that I can’t wrangle cheaply, but it’s a billion-dollar idea if you can do it.

Jimmy Murray is a Florida-based comedian who studied Marketing and Film before finding himself homeless. Resourceful, he taught himself coding, which led to a ton of opportunities in many fields, the most recent of which is coding away his podcast editing. His entrepreneurial skills and love of automation have led to a sheer love of all things related to AI.

#100DaysOfML

#ArtificialIntelligence

#MachineLearning

#DeepLearning
