Part 1: Building your Deep Neural Network: Step by Step

1 – Packages

Let's first import all the packages that you will need during this assignment.
– numpy is the main package for scientific computing with Python. 
– matplotlib is a library to plot graphs in Python. 
– dnn_utils provides some necessary functions for this notebook. 
– testCases provides some test cases to assess the correctness of your functions. 
– np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don’t change the seed.

import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v3 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward

%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

%load_ext autoreload
%autoreload 2

np.random.seed(1)

2 – Outline of the Assignment

To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment; you will:

  • Initialize the parameters for a two-layer network and for an L-layer neural network.

  • Implement the forward propagation module (shown in purple in the figure below).

    • Complete the LINEAR part of a layer's forward propagation step (resulting in Z[l]).

    • We give you the ACTIVATION function (relu/sigmoid).

    • Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.

    • Stack the [LINEAR->ACTIVATION] forward function and add a [LINEAR->SIGMOID] at the last layer. This gives you a new L_model_forward function.

  • Compute the loss.

  • Implement the backward propagation module (denoted in red in the figure below).

    • Complete the LINEAR part of a layer's backward propagation step.

    • We give you the gradient of the ACTIVATION function (relu_backward/sigmoid_backward).

    • Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.

    • Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function.

  • Finally update the parameters.

[Figure 1: Outline of the assignment. The forward propagation module is shown in purple and the backward propagation module in red.]

Note: For every forward function, there is a corresponding backward function. That is why at every step of your forward module you will store some values in a cache. These cached values are useful for computing gradients. In the backward propagation module, you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps.

3 – Initialization

In this part, you will write two helper functions to initialize the parameters of your model. The first function will be used to initialize the parameters of a two-layer model, and the second one will be used to initialize the parameters of an L-layer model.

3.1 – 2-layer Neural Network

Exercise: Create and initialize the parameters of the 2-layer neural network.

Instructions:

  • The model's structure is: LINEAR -> RELU -> LINEAR -> SIGMOID.

  • Use random initialization for the weight matrices. Use np.random.randn(shape)*0.01 with the correct shape.

  • Use zero initialization for the biases. Use np.zeros(shape).

This code was already written in the Week 3 assignment, so it is not repeated here; a minimal reference sketch is given below.
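As a reminder, here is a minimal sketch of that 2-layer initialization (the function name initialize_parameters and the argument names n_x, n_h, n_y follow the Week 3 convention and are assumptions here; treat this as an illustrative sketch, not the graded solution):

def initialize_parameters(n_x, n_h, n_y):
    """
    n_x -- size of the input layer
    n_h -- size of the hidden layer
    n_y -- size of the output layer
    """
    np.random.seed(1)
    # Small random weights, zero biases, as described in the instructions above
    W1 = np.random.randn(n_h, n_x) * 0.01
    b1 = np.zeros((n_h, 1))
    W2 = np.random.randn(n_y, n_h) * 0.01
    b2 = np.zeros((n_y, 1))

    parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2}
    return parameters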

3.2 – L-layer Neural Network

The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing initialize_parameters_deep, you should make sure that your dimensions match between each layer. Recall that n[l] is the number of units (nodes) in layer l. Thus, for example, if the size of our input X is (12288, 209) (with m = 209 examples), then:

[Table: shapes of W[l], b[l] and the activations for each layer when the input X has shape (12288, 209)]
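In general (this note simply restates the shape conventions used by the code below): W[l] has shape (n[l], n[l−1]), b[l] has shape (n[l], 1), and Z[l] = W[l]A[l−1] + b[l] and A[l] both have shape (n[l], 209) when m = 209 (b[l] is broadcast across the 209 columns).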

Exercise: Implement initialization for an L-layer Neural Network.

Instructions:

  • The model's structure is [LINEAR -> RELU] ×(L-1) -> LINEAR -> SIGMOID. I.e., it has L−1 layers using a ReLU activation function followed by an output layer with a sigmoid activation function.

  • Use random initialization for the weight matrices. Use np.random.randn(shape) * 0.01.

  • Use zero initialization for the biases. Use np.zeros(shape).

  • We will store n[l], the number of units in different layers, in a variable layer_dims. For example, the layer_dims for the "Planar Data classification model" from last week would have been [2,4,1]: there were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means W1's shape was (4,2), b1 was (4,1), W2 was (1,4) and b2 was (1,1). Now you will generalize this to L layers!

  • Here is the implementation for L=1 (one layer neural network). It should inspire you to implement the general case (L-layer neural network).

if L == 1:
    parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01
    parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))
# GRADED FUNCTION: initialize_parameters_deep

def initialize_parameters_deep(layer_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the dimensions of each layer in our network

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                    Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
                    bl -- bias vector of shape (layer_dims[l], 1)
    """

    np.random.seed(3)
    parameters = {}
    L = len(layer_dims)            # number of layers in the network

    for l in range(1, L):
        ### START CODE HERE ### (≈ 2 lines of code)
        parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
        ### END CODE HERE ###

        assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
        assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))

    return parameters
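For example, an illustrative call (the layer sizes here are arbitrary, not part of the graded notebook):

parameters = initialize_parameters_deep([5, 4, 3])
print(parameters["W1"].shape)   # (4, 5)
print(parameters["b1"].shape)   # (4, 1)
print(parameters["W2"].shape)   # (3, 4)
print(parameters["b2"].shape)   # (3, 1)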

4 – Forward Propagation Module

4.1 – Linear Forward

Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:

  • LINEAR

  • LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid.

  • [LINEAR -> RELU] × (L-1) -> LINEAR -> SIGMOID (whole model)

The linear forward module (vectorized over all the examples) computes the following equations:

Z[l] = W[l]A[l−1] + b[l]

where A[0] = X.

Exercise: Build the linear part of forward propagation.

Reminder: The mathematical representation of this unit is Z[l]=W[l]A[l−1]+b[l]. You may also find np.dot() useful. If your dimensions don't match, printing W.shape may help.

# GRADED FUNCTION: linear_forward

def linear_forward(A, W, b):
    """
    Implement the linear part of a layer's forward propagation.

    Arguments:
    A -- activations from previous layer (or input data): (size of previous layer, number of examples)
    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
    b -- bias vector, numpy array of shape (size of the current layer, 1)

    Returns:
    Z -- the input of the activation function, also called pre-activation parameter
    cache -- a python tuple containing "A", "W" and "b"; stored for computing the backward pass efficiently
    """

    ### START CODE HERE ### (≈ 1 line of code)
    Z = np.dot(W, A) + b
    ### END CODE HERE ###

    assert(Z.shape == (W.shape[0], A.shape[1]))
    cache = (A, W, b)

    return Z, cache
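A quick sanity check with arbitrary small arrays (purely illustrative; the shapes are assumptions chosen for this example):

np.random.seed(1)
A = np.random.randn(3, 2)   # activations of a 3-unit previous layer for 2 examples
W = np.random.randn(1, 3)   # current layer has 1 unit
b = np.random.randn(1, 1)
Z, linear_cache = linear_forward(A, W, b)
print(Z.shape)              # (1, 2)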

4.2 – Linear-Activation Forward

In this notebook, you will use two activation functions:

  • Sigmoid: σ(Z) = σ(WA + b) = 1 / (1 + e^(−(WA + b))). We have provided you with the sigmoid function. This function returns two items: the activation value "a" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call:

A, activation_cache = sigmoid(Z)

  • ReLU (Rectified Linear Unit): The mathematical formula for ReLU is A = RELU(Z) = max(0, Z). We have provided you with the relu function. This function returns two items: the activation value "A" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call:

A, activation_cache = relu(Z)

For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.

Exercise: Implement the forward propagation of the LINEAR->ACTIVATION layer. The mathematical relation is: A[l] = g(Z[l]) = g(W[l]A[l−1] + b[l]), where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.

# GRADED FUNCTION: linear_activation_forward

def linear_activation_forward(A_prev, W, b, activation):
    """
    Implement the forward propagation for the LINEAR->ACTIVATION layer

    Arguments:
    A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
    b -- bias vector, numpy array of shape (size of the current layer, 1)
    activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"

    Returns:
    A -- the output of the activation function, also called the post-activation value
    cache -- a python tuple containing "linear_cache" and "activation_cache";
             stored for computing the backward pass efficiently
    """

    if activation == "sigmoid":
        # Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
        ### START CODE HERE ### (≈ 2 lines of code)
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = sigmoid(Z)
        ### END CODE HERE ###

    elif activation == "relu":
        # Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
        ### START CODE HERE ### (≈ 2 lines of code)
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = relu(Z)
        ### END CODE HERE ###

    assert(A.shape == (W.shape[0], A_prev.shape[1]))
    cache = (linear_cache, activation_cache)

    return A, cache
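And a similarly illustrative check of both activation branches (arbitrary arrays, chosen only for this example):

np.random.seed(2)
A_prev = np.random.randn(3, 2)
W = np.random.randn(1, 3)
b = np.random.randn(1, 1)
A_sig, _ = linear_activation_forward(A_prev, W, b, activation="sigmoid")
A_relu, _ = linear_activation_forward(A_prev, W, b, activation="relu")
print(A_sig.shape, A_relu.shape)   # (1, 2) (1, 2)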

4.3 – L-Layer Model

For even more convenience when implementing the L-layer Neural Net, you will need a function that replicates the previous one (linear_activation_forward with RELU) L−1 times, then follows that with one linear_activation_forward with SIGMOID.

[Figure 2: [LINEAR -> RELU] × (L-1) -> LINEAR -> SIGMOID model]

 

Exercise: Implement the forward propagation of the above model.

Instruction: In the code below, the variable AL will denote A[L] = σ(Z[L]) = σ(W[L]A[L−1] + b[L]). (This is sometimes also called Yhat, i.e., this is Ŷ.)

Tips:

  • Use the functions you had previously written.

  • Use a for loop to replicate [LINEAR->RELU] (L-1) times.

  • Don't forget to keep track of the caches in the "caches" list. To add a new value c to a list, you can use list.append(c). See the sketch after this list.
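Since the code for this function is not reproduced in this write-up, here is a minimal sketch of L_model_forward along the lines of the tips above (it assumes a parameters dictionary of the form produced by initialize_parameters_deep; treat it as an illustrative sketch, not the graded solution):

def L_model_forward(X, parameters):
    """
    Forward propagation for [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID.

    X -- data, numpy array of shape (input size, number of examples)
    parameters -- output of initialize_parameters_deep()

    Returns:
    AL -- last post-activation value (the prediction Yhat)
    caches -- list of every cache returned by linear_activation_forward()
    """
    caches = []
    A = X
    L = len(parameters) // 2            # number of layers in the network

    # [LINEAR -> RELU] for layers 1 .. L-1
    for l in range(1, L):
        A_prev = A
        A, cache = linear_activation_forward(A_prev, parameters["W" + str(l)],
                                             parameters["b" + str(l)], activation="relu")
        caches.append(cache)

    # LINEAR -> SIGMOID for the output layer L
    AL, cache = linear_activation_forward(A, parameters["W" + str(L)],
                                          parameters["b" + str(L)], activation="sigmoid")
    caches.append(cache)

    assert(AL.shape == (1, X.shape[1]))
    return AL, caches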

5 – Cost Function

Now you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.

Exercise: Compute the cross-entropy cost J, using the following formula:

−(1/m) Σ_{i=1}^{m} ( y(i) log(a[L](i)) + (1 − y(i)) log(1 − a[L](i)) )

# GRADED FUNCTION: compute_cost

def compute_cost(AL, Y):
    """
    Implement the cost function defined by equation (7).

    Arguments:
    AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
    Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)

    Returns:
    cost -- cross-entropy cost
    """

    m = Y.shape[1]

    # Compute loss from aL and y.
    ### START CODE HERE ### (≈ 1 lines of code)
    cost = -1 / m * np.sum(Y * np.log(AL) + (1 - Y) * np.log(1 - AL), axis=1, keepdims=True)
    ### END CODE HERE ###

    cost = np.squeeze(cost)    # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
    assert(cost.shape == ())

    return cost
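For instance, with values chosen arbitrarily for illustration:

Y = np.array([[1, 1, 0]])
AL = np.array([[0.8, 0.9, 0.4]])
print(compute_cost(AL, Y))   # roughly 0.28 for these three predictions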

6 – Backward Propagation Module

Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters.

Reminder:

[Figure 3: Forward and backward propagation for LINEAR -> RELU -> LINEAR -> SIGMOID. The purple blocks represent the forward propagation, and the red blocks represent the backward propagation.]

Now, similar to forward propagation, you are going to build the backward propagation in three steps:

  • LINEAR backward

  • LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation

  • [LINEAR -> RELU] × (L-1) -> LINEAR -> SIGMOID backward (whole model)

6.1 – Linear Backward

[Figure: the LINEAR backward step for layer l. Given dZ[l], the three outputs (dW[l], db[l], dA[l−1]) are computed as follows:]

dW[l] = (1/m) dZ[l] A[l−1]T

db[l] = (1/m) Σ_{i=1}^{m} dZ[l](i)

dA[l−1] = W[l]T dZ[l]

Exercise: Use the 3 formulas above to implement linear_backward().

# GRADED FUNCTION: linear_backward

def linear_backward(dZ, cache):
    """
    Implement the linear portion of backward propagation for a single layer (layer l)

    Arguments:
    dZ -- Gradient of the cost with respect to the linear output (of current layer l)
    cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer

    Returns:
    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
    dW -- Gradient of the cost with respect to W (current layer l), same shape as W
    db -- Gradient of the cost with respect to b (current layer l), same shape as b
    """
    A_prev, W, b = cache
    m = A_prev.shape[1]

    ### START CODE HERE ### (≈ 3 lines of code)
    dW = 1 / m * np.dot(dZ, A_prev.T)
    db = 1 / m * np.sum(dZ, axis=1, keepdims=True)
    dA_prev = np.dot(W.T, dZ)
    ### END CODE HERE ###

    assert(dA_prev.shape == A_prev.shape)
    assert(dW.shape == W.shape)
    assert(db.shape == b.shape)

    return dA_prev, dW, db

6.2 – Linear-Activation Backward

Next, you will create a function that merges the two helper functions: linear_backward and the backward step for the activation. This merged function is called linear_activation_backward.

To help you implement linear_activation_backward, we provided two backward functions:

  • sigmoid_backward: Implements the backward propagation for the SIGMOID unit. You can call it as follows:

dZ = sigmoid_backward(dA, activation_cache)

  • relu_backward: Implements the backward propagation for the RELU unit. You can call it as follows:

dZ = relu_backward(dA, activation_cache)

[Formula: if g(·) is the activation function, sigmoid_backward and relu_backward compute dZ[l] = dA[l] * g′(Z[l]).]

# GRADED FUNCTION: linear_activation_backward

def linear_activation_backward(dA, cache, activation):
    """
    Implement the backward propagation for the LINEAR->ACTIVATION layer.

    Arguments:
    dA -- post-activation gradient for current layer l
    cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
    activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"

    Returns:
    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
    dW -- Gradient of the cost with respect to W (current layer l), same shape as W
    db -- Gradient of the cost with respect to b (current layer l), same shape as b
    """
    linear_cache, activation_cache = cache

    if activation == "relu":
        ### START CODE HERE ### (≈ 2 lines of code)
        dZ = relu_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)
        ### END CODE HERE ###

    elif activation == "sigmoid":
        ### START CODE HERE ### (≈ 2 lines of code)
        dZ = sigmoid_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)
        ### END CODE HERE ###

    return dA_prev, dW, db

6.3 – L-Model Backward

Now you will implement the backward function for the whole network. Recall that when you implemented the L_model_forward function, at each iteration you stored a cache which contains (X, W, b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the L_model_backward function, you will iterate through all the hidden layers backward, starting from layer L. On each step, you will use the cached values for layer l to backpropagate through layer l. Figure 5 below shows the backward pass.

[Figure 5: the backward pass for the [LINEAR -> RELU] × (L-1) -> LINEAR -> SIGMOID model]

[Formula for initializing backpropagation: dAL = −(Y/AL − (1 − Y)/(1 − AL)), the derivative of the cross-entropy cost with respect to AL]
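Since the code for this step is not reproduced in this write-up, here is a minimal sketch of L_model_backward (it assumes the cache layout used above and starts backpropagation from dAL = −(Y/AL − (1 − Y)/(1 − AL)); treat it as an illustrative sketch, not the graded solution):

def L_model_backward(AL, Y, caches):
    """
    Backward propagation for [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID.

    AL -- probability vector, output of the forward propagation (L_model_forward())
    Y -- true "label" vector, same shape as AL
    caches -- list of caches: relu caches for layers 1..L-1, then the sigmoid cache for layer L

    Returns:
    grads -- dictionary with the gradients dA, dW, db for every layer
    """
    grads = {}
    L = len(caches)                     # number of layers
    Y = Y.reshape(AL.shape)

    # Derivative of the cross-entropy cost with respect to AL
    dAL = -(np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))

    # Output layer: LINEAR -> SIGMOID backward
    current_cache = caches[L - 1]
    grads["dA" + str(L - 1)], grads["dW" + str(L)], grads["db" + str(L)] = \
        linear_activation_backward(dAL, current_cache, activation="sigmoid")

    # Hidden layers: LINEAR -> RELU backward, from layer L-1 down to layer 1
    for l in reversed(range(L - 1)):
        current_cache = caches[l]
        dA_prev, dW, db = linear_activation_backward(grads["dA" + str(l + 1)],
                                                     current_cache, activation="relu")
        grads["dA" + str(l)] = dA_prev
        grads["dW" + str(l + 1)] = dW
        grads["db" + str(l + 1)] = db

    return grads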

6.4 – Update Parameters

In this section, you will update the parameters of the model using gradient descent:

W[l] = W[l] − α dW[l]

b[l] = b[l] − α db[l]

where α is the learning rate.

# GRADED FUNCTION: update_parameters

def update_parameters(parameters, grads, learning_rate):
    """
    Update parameters using gradient descent

    Arguments:
    parameters -- python dictionary containing your parameters
    grads -- python dictionary containing your gradients, output of L_model_backward

    Returns:
    parameters -- python dictionary containing your updated parameters
                  parameters["W" + str(l)] = ...
                  parameters["b" + str(l)] = ...
    """
    L = len(parameters) // 2  # number of layers in the neural network

    # Update rule for each parameter. Use a for loop.
    ### START CODE HERE ### (≈ 3 lines of code)
    for l in range(L):
        parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW" + str(l+1)]
        parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db" + str(l+1)]
    ### END CODE HERE ###

    return parameters
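Called, for example, as follows (the learning rate here is an arbitrary illustrative value):

# parameters from initialize_parameters_deep(), grads from L_model_backward()
parameters = update_parameters(parameters, grads, learning_rate=0.01)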

7 – Conclusion

Congratulations on implementing all the functions required for building a deep neural network!

We know this was a long assignment, but it only gets better from here. The next part of the assignment is easier.

In the next part, you will put all of these functions together to build two models:

  • A two-layer neural network

  • An L-layer neural network

You will be able to use these models to classify cat vs. non-cat images!