Udemy Course Notes – TensorFlow 2.0 (Week 1)

Source: Deep Learning on Medium

The following code uses TensorFlow + Python + Keras.

The simplest possible neural network is one that has only one neuron in it, and it can be defined in a single line of code.

In Keras, you use the word Dense to define a layer of connected neurons. There's only one Dense layer here.

So there's only one layer, and there's only one unit in it, so it's a single neuron. Successive layers are defined in sequence, hence the word Sequential.

You can see that our input shape is super simple: it's just one value.
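The description above can be sketched as follows. This is a minimal sketch of the model the notes describe, using the `tf.keras` API (the exact variable names are my own):

```python
import tensorflow as tf

# The simplest possible network: one Dense layer holding a single neuron.
# input_shape=[1] means each input sample is just one value.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=[1])
])
```

With a single input and a single neuron, the model has only two trainable parameters: one weight and one bias.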


There are two function roles that you should be aware of, though, and these are loss functions and optimizers.

The neural network has no idea of the relationship between X and Y, so it makes a guess. Then it uses the set of Xs and Ys that we've already seen to measure how good or how bad its guess was.

The loss function measures how far the guess is from the known Y data; the optimizer figures out the next guess.

Then the logic is that each guess should be better than the one before. As the guesses get better and better, the accuracy approaches 100 percent; this process is called convergence.

The np.array is using a Python library called numpy that makes data representation, particularly of lists, much easier.
The training takes place in the fit command. Here we’re asking the model to figure out how to fit the X values to the Y values.
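Putting the pieces together, a runnable sketch might look like this. The training data (points on the line y = 2x − 1) and the epoch count are assumptions based on the classic introductory example this course uses; they are not shown in the notes themselves:

```python
import numpy as np
import tensorflow as tf

# Assumed example data: points satisfying y = 2x - 1.
# The network is never told this rule; it has to discover it.
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=[1])
])
# sgd = stochastic gradient descent (the optimizer),
# mean squared error is the loss function.
model.compile(optimizer='sgd', loss='mean_squared_error')

# The training takes place in the fit command: figure out
# how to fit the X values to the Y values.
model.fit(xs, ys, epochs=500, verbose=0)

# The prediction for X=10 comes out close to, but not exactly, 19.
print(model.predict(np.array([[10.0]]), verbose=0))
```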


We have already described this training loop: make a guess, measure how good or bad the guess is with the loss function, then use the optimizer and the data to make another guess, and repeat.
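That loop can be written out by hand in pure numpy. This is a sketch of what Keras does for us under the hood, using plain gradient descent as the optimizer and mean squared error as the loss (data and hyperparameters are illustrative assumptions):

```python
import numpy as np

# Assumed example data: points satisfying y = 2x - 1.
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0])

w, b = 0.0, 0.0   # the initial "guess" for weight and bias
lr = 0.01         # learning rate for plain gradient descent

for _ in range(5000):
    guess = w * xs + b                 # 1. make a guess
    error = guess - ys
    loss = np.mean(error ** 2)         # 2. loss function: mean squared error
    # 3. optimizer step: move w and b against the gradient of the loss
    w -= lr * np.mean(2 * error * xs)
    b -= lr * np.mean(2 * error)
    # 4. repeat

print(w, b)  # converges toward w = 2, b = -1
```

Each pass through the loop produces a slightly better guess than the one before, which is exactly the convergence behaviour described above.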




The prediction is very close to 19, but not exactly 19. This is because: 1. there is very little training data, and 2. the neural network works with floats, not integers.

When using neural networks, as they try to figure out the answers for everything, they deal in probabilities.