Optimization Methods
Until now, you have always used gradient descent to update your parameters and minimize the cost. In this assignment, you will learn more advanced optimization methods that can speed up learning and perhaps even get you to a better final value of the cost function. Having a good optimization algorithm can be the difference between waiting days versus just a few hours for a good result.
You can picture gradient descent as walking downhill on the cost function J.
To get started, first load the packages needed for this assignment.
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
import math
import sklearn
import sklearn.datasets

from opt_utils import load_params_and_grads, initialize_parameters
from opt_utils import forward_propagation, backward_propagation
from opt_utils import compute_cost, predict, load_dataset
from opt_utils import predict_dec, plot_decision_boundary
from testCases import *

%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
1 – Gradient Descent
The algorithm itself has already been described several times before, so we will not rederive it here. The update rule is recalled below for reference, followed directly by the code.
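For each layer l = 1, …, L, one step of gradient descent updates the parameters as follows, where dW^{[l]} and db^{[l]} are the gradients of the cost with respect to W^{[l]} and b^{[l]}, and α is the learning rate (this is exactly what the function below computes):

$$W^{[l]} = W^{[l]} - \alpha \, dW^{[l]}$$
$$b^{[l]} = b^{[l]} - \alpha \, db^{[l]}$$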
# GRADED FUNCTION: update_parameters_with_gd

def update_parameters_with_gd(parameters, grads, learning_rate):
    """
    Update parameters using one step of gradient descent

    Arguments:
    parameters -- python dictionary containing your parameters to be updated:
                    parameters['W' + str(l)] = Wl
                    parameters['b' + str(l)] = bl
    grads -- python dictionary containing your gradients to update each parameter:
                    grads['dW' + str(l)] = dWl
                    grads['db' + str(l)] = dbl
    learning_rate -- the learning rate, scalar.

    Returns:
    parameters -- python dictionary containing your updated parameters
    """
    L = len(parameters) // 2  # number of layers in the neural network

    # Update rule for each parameter
    for l in range(L):
        ### START CODE HERE ### (approx. 2 lines)
        parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW" + str(l+1)]
        parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db" + str(l+1)]
        ### END CODE HERE ###

    return parameters
Test it:
parameters, grads, learning_rate = update_parameters_with_gd_test_case()

parameters = update_parameters_with_gd(parameters, grads, learning_rate)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Output:
W1 = [[ 1.63535156 -0.62320365 -0.53718766]
[-1.07799357 0.85639907 -2.29470142]]
b1 = [[ 1.74604067]
[-0.75184921]]
W2 = [[ 0.32171798 -0.25467393 1.46902454]
[-2.05617317 -0.31554548 -0.3756023 ]
[ 1.1404819 -1.09976462 -0.1612551 ]]
b2 = [[-0.88020257]
[ 0.02561572]
[ 0.57539477]]
A variant of gradient descent is Stochastic Gradient Descent (SGD), which is equivalent to mini-batch gradient descent where each mini-batch contains just one example. The update rule you just implemented does not change. What changes is that you compute the gradients on a single training example at a time, rather than on the whole training set. The code examples below illustrate the difference between stochastic gradient descent and (batch) gradient descent.
(Batch) Gradient Descent:
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
    # Forward propagation
    a, caches = forward_propagation(X, parameters)
    # Compute cost.
    cost = compute_cost(a, Y)
    # Backward propagation.
    grads = backward_propagation(a, caches, parameters)
    # Update parameters.
    parameters = update_parameters(parameters, grads)
Stochastic Gradient Descent:
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
    for j in range(0, m):
        # Forward propagation
        a, caches = forward_propagation(X[:, j], parameters)
        # Compute cost
        cost = compute_cost(a, Y[:, j])
        # Backward propagation
        grads = backward_propagation(a, caches, parameters)
        # Update parameters.
        parameters = update_parameters(parameters, grads)
In SGD, you use only one training example before updating the parameters. When the training set is large, SGD can be much faster. But the parameters will "oscillate" toward the minimum rather than converge smoothly. Here is an illustration:
Figure 1: SGD vs GD
"+" denotes a minimum of the cost. SGD leads to many oscillations to reach convergence. But each step is a lot faster to compute for SGD than for GD, as it uses only one training example (vs. the whole batch for GD).
2 – Mini-Batch Gradient Descent
Let's now learn how to build mini-batches from the training set (X, Y).
There are two steps:
Shuffle: Create a shuffled version of the training set (X, Y) as shown below. Each column of X and Y represents a training example. Note that the random shuffling is done synchronously between X and Y, so that after the shuffling the i-th column of X is the example corresponding to the i-th label in Y. The shuffling step ensures that examples will be split randomly into different mini-batches.
Partition: Partition the shuffled (X, Y) into mini-batches of size mini_batch_size (here 64). Note that the number of training examples is not always divisible by mini_batch_size, so the last mini-batch might be smaller; you don't need to worry about this, it simply contains the remaining examples (a quick arithmetic check follows this list).
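As a quick arithmetic check of the partitioning, here is a small illustrative sketch (m = 148 and mini_batch_size = 64 are just example numbers, chosen to match the shapes printed by the test further below):

import math

m, mini_batch_size = 148, 64
num_complete_minibatches = math.floor(m / mini_batch_size)  # 2 full mini-batches of 64 examples
last_batch_size = m % mini_batch_size                       # 20 examples left for the final, smaller mini-batch
print(num_complete_minibatches, last_batch_size)            # prints: 2 20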
Exercise: Implement random_mini_batches. We coded the shuffling part for you. To help you with the partitioning step, we give you the following code that selects the indexes for the 1st and 2nd mini-batches:
first_mini_batch_X = shuffled_X[:, 0 : mini_batch_size]
second_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size]
…
# GRADED FUNCTION: random_mini_batches

def random_mini_batches(X, Y, mini_batch_size=64, seed=0):
    """
    Creates a list of random minibatches from (X, Y)

    Arguments:
    X -- input data, of shape (input size, number of examples)
    Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
    mini_batch_size -- size of the mini-batches, integer

    Returns:
    mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
    """
    np.random.seed(seed)  # To make your "random" minibatches the same as ours
    m = X.shape[1]        # number of training examples
    mini_batches = []

    # Step 1: Shuffle (X, Y)
    permutation = list(np.random.permutation(m))
    shuffled_X = X[:, permutation]
    shuffled_Y = Y[:, permutation].reshape((1, m))

    # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
    num_complete_minibatches = math.floor(m / mini_batch_size)  # number of mini batches of size mini_batch_size in your partitioning
    for k in range(0, num_complete_minibatches):
        ### START CODE HERE ### (approx. 2 lines)
        mini_batch_X = shuffled_X[:, k * mini_batch_size : (k+1) * mini_batch_size]
        mini_batch_Y = shuffled_Y[:, k * mini_batch_size : (k+1) * mini_batch_size]
        ### END CODE HERE ###
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    # Handling the end case (last mini-batch < mini_batch_size)
    if m % mini_batch_size != 0:
        ### START CODE HERE ### (approx. 2 lines)
        mini_batch_X = shuffled_X[:, mini_batch_size * num_complete_minibatches : m]
        mini_batch_Y = shuffled_Y[:, mini_batch_size * num_complete_minibatches : m]
        ### END CODE HERE ###
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    return mini_batches

Test it:

X_assess, Y_assess, mini_batch_size = random_mini_batches_test_case()
mini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size)

print("shape of the 1st mini_batch_X: " + str(mini_batches[0][0].shape))
print("shape of the 2nd mini_batch_X: " + str(mini_batches[1][0].shape))
print("shape of the 3rd mini_batch_X: " + str(mini_batches[2][0].shape))
print("shape of the 1st mini_batch_Y: " + str(mini_batches[0][1].shape))
print("shape of the 2nd mini_batch_Y: " + str(mini_batches[1][1].shape))
print("shape of the 3rd mini_batch_Y: " + str(mini_batches[2][1].shape))
print("mini batch sanity check: " + str(mini_batches[0][0][0][0:3]))
Output:
shape of the 1st mini_batch_X: (12288, 64)
shape of the 2nd mini_batch_X: (12288, 64)
shape of the 3rd mini_batch_X: (12288, 20)
shape of the 1st mini_batch_Y: (1, 64)
shape of the 2nd mini_batch_Y: (1, 64)
shape of the 3rd mini_batch_Y: (1, 20)
mini batch sanity check: [ 0.90085595 -0.7612069 0.2344157 ]
3 – Gradient Descent with Momentum
Exercise: Initialize the velocity v. The velocity is a Python dictionary whose values are arrays of zeros, with the same keys (and shapes) as the grads dictionary.
# GRADED FUNCTION: initialize_velocity

def initialize_velocity(parameters):
    """
    Initializes the velocity as a python dictionary with:
        - keys: "dW1", "db1", ..., "dWL", "dbL"
        - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.

    Arguments:
    parameters -- python dictionary containing your parameters.
                    parameters['W' + str(l)] = Wl
                    parameters['b' + str(l)] = bl

    Returns:
    v -- python dictionary containing the current velocity.
                    v['dW' + str(l)] = velocity of dWl
                    v['db' + str(l)] = velocity of dbl
    """
    L = len(parameters) // 2  # number of layers in the neural network
    v = {}

    # Initialize velocity
    for l in range(L):
        ### START CODE HERE ### (approx. 2 lines)
        v["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
        v["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
        ### END CODE HERE ###

    return v
Test it:
parameters = initialize_velocity_test_case()

v = initialize_velocity(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
Exercise: Now, implement the parameters update with momentum. The momentum update rule is, for l = 1, …, L:

$$v_{dW^{[l]}} = \beta \, v_{dW^{[l]}} + (1-\beta) \, dW^{[l]}$$
$$W^{[l]} = W^{[l]} - \alpha \, v_{dW^{[l]}}$$
$$v_{db^{[l]}} = \beta \, v_{db^{[l]}} + (1-\beta) \, db^{[l]}$$
$$b^{[l]} = b^{[l]} - \alpha \, v_{db^{[l]}}$$

where β is the momentum hyperparameter and α is the learning rate.
# GRADED FUNCTION: update_parameters_with_momentum

def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
    """
    Update parameters using Momentum

    Arguments:
    parameters -- python dictionary containing your parameters:
                    parameters['W' + str(l)] = Wl
                    parameters['b' + str(l)] = bl
    grads -- python dictionary containing your gradients for each parameters:
                    grads['dW' + str(l)] = dWl
                    grads['db' + str(l)] = dbl
    v -- python dictionary containing the current velocity:
                    v['dW' + str(l)] = ...
                    v['db' + str(l)] = ...
    beta -- the momentum hyperparameter, scalar
    learning_rate -- the learning rate, scalar

    Returns:
    parameters -- python dictionary containing your updated parameters
    v -- python dictionary containing your updated velocities
    """
    L = len(parameters) // 2  # number of layers in the neural network

    # Momentum update for each parameter
    for l in range(L):
        ### START CODE HERE ### (approx. 4 lines)
        # compute velocities
        v["dW" + str(l+1)] = beta * v["dW" + str(l+1)] + (1. - beta) * grads["dW" + str(l+1)]
        v["db" + str(l+1)] = beta * v["db" + str(l+1)] + (1. - beta) * grads["db" + str(l+1)]
        # update parameters
        parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * v["dW" + str(l+1)]
        parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * v["db" + str(l+1)]
        ### END CODE HERE ###

    return parameters, v
Test it:
parameters, grads, v = update_parameters_with_momentum_test_case()

parameters, v = update_parameters_with_momentum(parameters, grads, v, beta = 0.9, learning_rate = 0.01)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
4 – Adam
Adam is one of the most effective optimization algorithms for training neural networks. It combines ideas from RMSProp (described in lecture) and Momentum.
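For reference, the update implemented by update_parameters_with_adam further below is, for each layer l = 1, …, L (t counts the number of Adam steps taken so far, β₁ and β₂ are decay hyperparameters, α is the learning rate, and ε is a small constant preventing division by zero):

$$v_{dW^{[l]}} = \beta_1 \, v_{dW^{[l]}} + (1-\beta_1) \, dW^{[l]}, \qquad v^{corrected}_{dW^{[l]}} = \frac{v_{dW^{[l]}}}{1-\beta_1^t}$$
$$s_{dW^{[l]}} = \beta_2 \, s_{dW^{[l]}} + (1-\beta_2) \, (dW^{[l]})^2, \qquad s^{corrected}_{dW^{[l]}} = \frac{s_{dW^{[l]}}}{1-\beta_2^t}$$
$$W^{[l]} = W^{[l]} - \alpha \, \frac{v^{corrected}_{dW^{[l]}}}{\sqrt{s^{corrected}_{dW^{[l]}}} + \varepsilon}$$

The updates for b^{[l]} are analogous. As with momentum, v and s must first be initialized to zeros, which is what initialize_adam does next.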
# GRADED FUNCTION: initialize_adam

def initialize_adam(parameters):
    """
    Initializes v and s as two python dictionaries with:
        - keys: "dW1", "db1", ..., "dWL", "dbL"
        - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.

    Arguments:
    parameters -- python dictionary containing your parameters.
                    parameters["W" + str(l)] = Wl
                    parameters["b" + str(l)] = bl

    Returns:
    v -- python dictionary that will contain the exponentially weighted average of the gradient.
                    v["dW" + str(l)] = ...
                    v["db" + str(l)] = ...
    s -- python dictionary that will contain the exponentially weighted average of the squared gradient.
                    s["dW" + str(l)] = ...
                    s["db" + str(l)] = ...
    """
    L = len(parameters) // 2  # number of layers in the neural network
    v = {}
    s = {}

    # Initialize v, s. Input: "parameters". Outputs: "v, s".
    for l in range(L):
        ### START CODE HERE ### (approx. 4 lines)
        v["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
        v["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
        s["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
        s["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
        ### END CODE HERE ###

    return v, s
Test it:
parameters = initialize_adam_test_case()

v, s = initialize_adam(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"]))
Exercise: Now, implement the parameters update with Adam.
# GRADED FUNCTION: update_parameters_with_adam

def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01,
                                beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8):
    """
    Update parameters using Adam

    Arguments:
    parameters -- python dictionary containing your parameters:
                    parameters['W' + str(l)] = Wl
                    parameters['b' + str(l)] = bl
    grads -- python dictionary containing your gradients for each parameters:
                    grads['dW' + str(l)] = dWl
                    grads['db' + str(l)] = dbl
    v -- Adam variable, moving average of the first gradient, python dictionary
    s -- Adam variable, moving average of the squared gradient, python dictionary
    learning_rate -- the learning rate, scalar.
    beta1 -- Exponential decay hyperparameter for the first moment estimates
    beta2 -- Exponential decay hyperparameter for the second moment estimates
    epsilon -- hyperparameter preventing division by zero in Adam updates

    Returns:
    parameters -- python dictionary containing your updated parameters
    v -- Adam variable, moving average of the first gradient, python dictionary
    s -- Adam variable, moving average of the squared gradient, python dictionary
    """
    L = len(parameters) // 2  # number of layers in the neural network
    v_corrected = {}          # Initializing first moment estimate, python dictionary
    s_corrected = {}          # Initializing second moment estimate, python dictionary

    # Perform Adam update on all parameters
    for l in range(L):
        # Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v".
        ### START CODE HERE ### (approx. 2 lines)
        v["dW" + str(l+1)] = beta1 * v["dW" + str(l+1)] + (1. - beta1) * grads["dW" + str(l+1)]
        v["db" + str(l+1)] = beta1 * v["db" + str(l+1)] + (1. - beta1) * grads["db" + str(l+1)]
        ### END CODE HERE ###

        # Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected".
        ### START CODE HERE ### (approx. 2 lines)
        v_corrected["dW" + str(l+1)] = v["dW" + str(l+1)] / (1 - beta1**t)
        v_corrected["db" + str(l+1)] = v["db" + str(l+1)] / (1 - beta1**t)
        ### END CODE HERE ###

        # Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s".
        ### START CODE HERE ### (approx. 2 lines)
        s["dW" + str(l+1)] = beta2 * s["dW" + str(l+1)] + (1. - beta2) * grads["dW" + str(l+1)]**2
        s["db" + str(l+1)] = beta2 * s["db" + str(l+1)] + (1. - beta2) * grads["db" + str(l+1)]**2
        ### END CODE HERE ###

        # Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected".
        ### START CODE HERE ### (approx. 2 lines)
        s_corrected["dW" + str(l+1)] = s["dW" + str(l+1)] / (1 - beta2**t)
        s_corrected["db" + str(l+1)] = s["db" + str(l+1)] / (1 - beta2**t)
        ### END CODE HERE ###

        # Update parameters. Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters".
        ### START CODE HERE ### (approx. 2 lines)
        parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * v_corrected["dW" + str(l+1)] / \
            (np.sqrt(s_corrected["dW" + str(l+1)]) + epsilon)
        parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * v_corrected["db" + str(l+1)] / \
            (np.sqrt(s_corrected["db" + str(l+1)]) + epsilon)
        ### END CODE HERE ###

    return parameters, v, s
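Test it. The sketch below assumes testCases provides an update_parameters_with_adam_test_case analogous to the earlier test cases; if your version of the file does not, you can skip this check or build the dictionaries by hand:

parameters, grads, v, s = update_parameters_with_adam_test_case()

parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t = 2)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("s[\"dW1\"] = " + str(s["dW1"]))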
5 – Model with Different Optimization Algorithms
Let's use the "moons" dataset to compare the different optimization methods; the data is loaded as sketched below.
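A minimal loading step, using the load_dataset helper imported from opt_utils at the top of this assignment (this assumes the helper returns the training split, and it may also plot the points, depending on your copy of opt_utils):

train_X, train_Y = load_dataset()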
def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9,
          beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8, num_epochs = 10000, print_cost = True):
    """
    3-layer neural network model which can be run in different optimizer modes.

    Arguments:
    X -- input data, of shape (2, number of examples)
    Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
    layers_dims -- python list, containing the size of each layer
    optimizer -- the optimizer to use: "gd", "momentum" or "adam"
    learning_rate -- the learning rate, scalar.
    mini_batch_size -- the size of a mini batch
    beta -- Momentum hyperparameter
    beta1 -- Exponential decay hyperparameter for the past gradients estimates
    beta2 -- Exponential decay hyperparameter for the past squared gradients estimates
    epsilon -- hyperparameter preventing division by zero in Adam updates
    num_epochs -- number of epochs
    print_cost -- True to print the cost every 1000 epochs

    Returns:
    parameters -- python dictionary containing your updated parameters
    """
    L = len(layers_dims)  # number of layers in the neural network
    costs = []            # to keep track of the cost
    t = 0                 # initializing the counter required for Adam update
    seed = 10             # For grading purposes, so that your "random" minibatches are the same as ours

    # Initialize parameters
    parameters = initialize_parameters(layers_dims)

    # Initialize the optimizer
    if optimizer == "gd":
        pass  # no initialization required for gradient descent
    elif optimizer == "momentum":
        v = initialize_velocity(parameters)
    elif optimizer == "adam":
        v, s = initialize_adam(parameters)

    # Optimization loop
    for i in range(num_epochs):

        # Define the random minibatches. We increment the seed to reshuffle differently the dataset after each epoch
        seed = seed + 1
        minibatches = random_mini_batches(X, Y, mini_batch_size, seed)

        for minibatch in minibatches:

            # Select a minibatch
            (minibatch_X, minibatch_Y) = minibatch

            # Forward propagation
            a3, caches = forward_propagation(minibatch_X, parameters)

            # Compute cost
            cost = compute_cost(a3, minibatch_Y)

            # Backward propagation
            grads = backward_propagation(minibatch_X, minibatch_Y, caches)

            # Update parameters
            if optimizer == "gd":
                parameters = update_parameters_with_gd(parameters, grads, learning_rate)
            elif optimizer == "momentum":
                parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)
            elif optimizer == "adam":
                t = t + 1  # Adam counter
                parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,
                                                               t, learning_rate, beta1, beta2, epsilon)

        # Print the cost every 1000 epochs
        if print_cost and i % 1000 == 0:
            print("Cost after epoch %i: %f" % (i, cost))
        if print_cost and i % 100 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(costs)
    plt.ylabel('cost')
    plt.xlabel('epochs (per 100)')
    plt.title("Learning rate = " + str(learning_rate))
    plt.show()

    return parameters
Now let's run this 3-layer neural network with each of the three optimization methods.
5.1 – Mini-batch Gradient descent
Run the following code to see how the model does with mini-batch gradient descent.
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "gd")

# Predict
predictions = predict(train_X, train_Y, parameters)

# Plot decision boundary
plt.title("Model with Gradient Descent optimization")
axes = plt.gca()
axes.set_xlim([-1.5, 2.5])
axes.set_ylim([-1, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Output:
Cost after epoch 0: 0.690736
Cost after epoch 1000: 0.685273
Cost after epoch 2000: 0.647072
Cost after epoch 3000: 0.619525
Cost after epoch 4000: 0.576584
Cost after epoch 5000: 0.607243
Cost after epoch 6000: 0.529403
Cost after epoch 7000: 0.460768
Cost after epoch 8000: 0.465586
Cost after epoch 9000: 0.464518
5.2 – Mini-batch gradient descent with momentum
Run the following code to see how the model does with momentum. Because this example is relatively simple, the gains from using momentum are small; but for more complex problems you might see bigger gains.
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, beta = 0.9, optimizer = "momentum")

# Predict
predictions = predict(train_X, train_Y, parameters)

# Plot decision boundary
plt.title("Model with Momentum optimization")
axes = plt.gca()
axes.set_xlim([-1.5, 2.5])
axes.set_ylim([-1, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Output:
Cost after epoch 0: 0.690741
Cost after epoch 1000: 0.685341
Cost after epoch 2000: 0.647145
Cost after epoch 3000: 0.619594
Cost after epoch 4000: 0.576665
Cost after epoch 5000: 0.607324
Cost after epoch 6000: 0.529476
Cost after epoch 7000: 0.460936
Cost after epoch 8000: 0.465780
Cost after epoch 9000: 0.464740
5.3 – Mini-batch with Adam mode
Run the following code to see how the model does with Adam.
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "adam")

# Predict
predictions = predict(train_X, train_Y, parameters)

# Plot decision boundary
plt.title("Model with Adam optimization")
axes = plt.gca()
axes.set_xlim([-1.5, 2.5])
axes.set_ylim([-1, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Output:
Cost after epoch 0: 0.690552
Cost after epoch 1000: 0.185567
Cost after epoch 2000: 0.150852
Cost after epoch 3000: 0.074454
Cost after epoch 4000: 0.125936
Cost after epoch 5000: 0.104235
Cost after epoch 6000: 0.100552
Cost after epoch 7000: 0.031601
Cost after epoch 8000: 0.111709
Cost after epoch 9000: 0.197648
5.4 – Summary
optimization method | accuracy | cost shape
Gradient descent    | 79.7%    | oscillations
Momentum            | 79.7%    | oscillations
Adam                | 94%      | smoother
Momentum usually helps, but given the small learning rate and the simplistic dataset, its impact here is almost negligible. Also, the large oscillations you see in the cost come from the fact that some mini-batches are more difficult than others for the optimization algorithm.
Adam, on the other hand, clearly outperforms plain mini-batch gradient descent and momentum. If you ran the model for more epochs on this simple dataset, all three methods would eventually give very good results. However, you've seen that Adam converges much faster.
Some advantages of Adam include:
(1) Relatively low memory requirements (though higher than gradient descent and gradient descent with momentum)
(2) It usually works well even with little tuning of the hyperparameters (except α, the learning rate).