Source: Deep Learning on Medium
Understand the Basics of Deep Learning in a Hurry
What is Keras?
Keras is a deep learning framework that sits on top of backend frameworks like TensorFlow.
Why use Keras?
Keras is excellent because it lets you experiment with different neural networks quickly! It sits atop other excellent frameworks like TensorFlow, works well for novice and experienced data scientists alike, and requires far less code to get up and running.
Keras gives you the flexibility to build all kinds of architectures: simple neural networks, deep neural networks, convolutional neural networks, recurrent neural networks, and so on.
What’s the difference?
You may be asking yourself what the difference is between Keras and TensorFlow… let's clear that up! Keras is actually integrated into TensorFlow. It's a wrapper around the TensorFlow backend (technically, Keras can run on a variety of backends). What does that mean? Essentially, you can make any Keras call you need from within TensorFlow. You get to enjoy the TensorFlow backend while leveraging the simplicity of Keras.
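As a quick illustration (assuming you have TensorFlow installed), the same Sequential API is reachable through TensorFlow's bundled copy of Keras; the tiny layer sizes here are just placeholders:

```python
from tensorflow import keras

# tf.keras exposes the same Sequential API as standalone Keras
model = keras.Sequential([
    keras.Input(shape=(2,)),                   # two input features
    keras.layers.Dense(4, activation="relu"),  # four hidden units
])

# 2 inputs * 4 units + 4 biases = 12 trainable parameters
print(model.count_params())  # 12
```

Nothing about the model changes; only the import path does.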
What problems do neural nets work best with?
What is the main difference between a neural network and traditional machine learning? Feature extraction! In traditional machine learning, whoever builds the model is responsible for all of the feature-extraction work. What makes neural networks different is that they are very good at performing that step for you.
When data isn't tabular by nature and arrives in a very unstructured format (e.g., audio or video), feature engineering by hand is difficult. Your handy-dandy neural net is going to perform far better on this type of task.
When you don’t need interpretation
When it comes to a neural net, you don't have much visibility into how your model arrives at its results. Depending on the application, this can be fine or it can be tricky. If you can correctly classify an image as a horse, then great! You did it; you don't really need to know how your neural net figured it out. For other problems, though, interpretability may be a key aspect of the model's value.
What does a neural network look like?
Your most basic neural network is going to consist of three main layers:
Your input layer, which is going to consist of all of your training data,
Your hidden layer(s), this is where all parameter weighting will take place,
Then finally your output layer — where your prediction will be served up!
When it comes to the weights in the hidden layers of a neural network, there are a couple of main things we use to help the network learn the right values. One of them is the activation function, which transforms each neuron's weighted sum into its output. Activation functions help your network capture complex non-linear patterns in your data. You might find yourself using sigmoid, tanh, relu, or softmax.
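To see what those activations actually do to a raw score, here's a standalone sketch using NumPy rather than Keras itself (the input values are made up for illustration):

```python
import numpy as np

def sigmoid(x):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # zeroes out negatives, passes positives through unchanged
    return np.maximum(0.0, x)

def softmax(x):
    # turns a vector of scores into probabilities that sum to 1
    e = np.exp(x - np.max(x))
    return e / e.sum()

scores = np.array([-1.0, 0.0, 2.0])
print(sigmoid(scores))   # each value between 0 and 1
print(np.tanh(scores))   # each value between -1 and 1
print(relu(scores))      # [0. 0. 2.]
print(softmax(scores))   # probabilities summing to 1
```

In a Keras layer you never call these by hand; you just pass the name, e.g. `activation="relu"`.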
Get your hands dirty!
You’ll want to import the required packages from Keras. Sequential allows you to instantiate your model, and the layers module lets you add each layer: input, hidden, and output. The model.add() calls are how we add each layer to the network, all the way through to the final output layer.
```python
# load libraries
from keras.models import Sequential
from keras.layers import Dense

# instantiate model
model = Sequential()

# here you can add your hidden layer
model.add(Dense(4, input_shape=(2,), activation="relu"))

# one neuron output! (sigmoid shown here for a binary prediction)
model.add(Dense(1, activation="sigmoid"))
```
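To actually run the network end to end, you would compile it with a loss and an optimizer and then call fit. Here's a minimal sketch on made-up data; the layer sizes, random inputs, and toy labels are placeholders, not a real dataset:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# the same tiny network: 2 inputs -> 4 hidden units -> 1 output
model = Sequential()
model.add(Dense(4, input_shape=(2,), activation="relu"))
model.add(Dense(1, activation="sigmoid"))

# compile with a loss and optimizer before training
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# toy data: label is 1 when the two features sum past 1.0
X = np.random.rand(100, 2)
y = (X.sum(axis=1) > 1.0).astype(int)

model.fit(X, y, epochs=5, verbose=0)
print(model.predict(X[:3], verbose=0))  # three probabilities in (0, 1)
```

The exact loss and optimizer depend on your problem; binary_crossentropy and adam are just common defaults for a two-class prediction like this one.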
If you’ve made it all the way down here, then you’ve successfully built your first neural network! I hope this quick intro to Keras was informative and helpful. Let me know if there are other topics or principles you’d like to hear more about. Until then, happy data-sciencing!