Part 1: Python Basics with Numpy (optional assignment)

1 – Building basic functions with numpy

Numpy is the main package for scientific computing in Python. It is maintained by a large community. In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.

1.1 – sigmoid function, np.exp()

Exercise: Build a function that returns the sigmoid of a real number x, using numpy.

1.2 – Sigmoid gradient

Exercise: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. 

You often code this function in two steps: 
1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful. 
2. Compute σ′(x)=s(1−s)
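The two steps above can be sketched as follows (the function names match the exercise; the body is one common way to write it):

```python
import numpy as np

def sigmoid(x):
    """Compute the sigmoid of x (a scalar or a numpy array)."""
    return 1 / (1 + np.exp(-x))

def sigmoid_grad(x):
    """Compute the gradient of the sigmoid: sigma'(x) = s * (1 - s)."""
    s = sigmoid(x)       # step 1: s = sigmoid(x)
    return s * (1 - s)   # step 2: sigma'(x) = s(1 - s)
```

Because np.exp works element-wise, both functions automatically work on arrays as well as scalars.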

1.3 – Reshaping arrays

Two common numpy functions used in deep learning are np.shape and np.reshape().
– X.shape is used to get the shape (dimension) of a matrix/vector X. 
– X.reshape(…) is used to reshape X into some other dimension.

Exercise: Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b, c), you would call v.reshape((a*b, c)).

  • Please don’t hardcode the dimensions of image as a constant. Instead look up the quantities you need with image.shape[0], etc.
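A minimal sketch of image2vector, reading the dimensions from image.shape rather than hardcoding them (the dummy array in the usage example is illustrative only):

```python
import numpy as np

def image2vector(image):
    """Reshape an array of shape (length, height, 3) into (length*height*3, 1)."""
    return image.reshape(image.shape[0] * image.shape[1] * image.shape[2], 1)

# usage on a dummy "image" of shape (4, 3, 3)
v = image2vector(np.zeros((4, 3, 3)))
# v has shape (4*3*3, 1) = (36, 1)
```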


1.4 – Normalizing rows

Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to x/‖x‖ (dividing each row vector of x by its norm).
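One common way to implement this uses np.linalg.norm with axis=1 and keepdims=True, so that broadcasting divides each row by its own norm (a sketch, not the only possible implementation):

```python
import numpy as np

def normalize_rows(x):
    """Divide each row of x by its L2 norm."""
    x_norm = np.linalg.norm(x, axis=1, keepdims=True)  # shape (n, 1)
    return x / x_norm                                   # broadcasting divides row-wise

# usage: the first row has norm 5, so it becomes [0, 0.6, 0.8]
x = np.array([[0., 3., 4.],
              [1., 6., 4.]])
x_normalized = normalize_rows(x)
```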

What you need to remember: 
– np.exp(x) works for any np.array x and applies the exponential function to every coordinate 
– the sigmoid function and its gradient 
– image2vector is commonly used in deep learning 
– np.reshape is widely used. In the future, you’ll see that keeping your matrix/vector dimensions straight goes a long way toward eliminating bugs. 
– numpy has efficient built-in functions 
– broadcasting is extremely useful

2 – Vectorization

In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. 

Note that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which perform an element-wise multiplication.
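A small demonstration of the difference on two 2x2 arrays:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

matmul = np.dot(a, b)    # matrix-matrix product, like * in Matlab/Octave
elementwise = a * b      # same as np.multiply(a, b), like .* in Matlab/Octave

# matmul      is [[19, 22], [43, 50]]
# elementwise is [[ 5, 12], [21, 32]]
```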

2.1 – Implement the L1 and L2 loss functions

Exercise: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.

– The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions (ŷ) are from the true values (y). In deep learning, you use optimization algorithms like Gradient Descent to train your model and minimize the cost. 
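A vectorized L1 sketch, with an illustrative prediction/label pair (the example values are made up):

```python
import numpy as np

def L1(yhat, y):
    """Vectorized L1 loss: sum of absolute differences."""
    return np.sum(np.abs(y - yhat))

yhat = np.array([.9, .2, .1, .4, .9])  # illustrative predictions
y = np.array([1, 0, 0, 1, 1])          # illustrative true labels
# L1(yhat, y) is 0.1 + 0.2 + 0.1 + 0.6 + 0.1 = 1.1
```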

Exercise: Implement the numpy vectorized version of the L2 loss. There are several ways of implementing the L2 loss, but you may find the function np.dot() useful. As a reminder, if x = [x1, x2, ..., xn], then np.dot(x, x) = ∑_{j=0}^{n} x_j².
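One way to write the L2 loss with np.dot, reusing the same illustrative values as above (this is a sketch; np.sum((y - yhat)**2) works equally well):

```python
import numpy as np

def L2(yhat, y):
    """Vectorized L2 loss: the squared error summed via a dot product."""
    diff = y - yhat
    return np.dot(diff, diff)

yhat = np.array([.9, .2, .1, .4, .9])  # illustrative predictions
y = np.array([1, 0, 0, 1, 1])          # illustrative true labels
# L2(yhat, y) is 0.01 + 0.04 + 0.01 + 0.36 + 0.01 = 0.43
```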

What to remember: 
– Vectorization is very important in deep learning. It provides computational efficiency and clarity. 
– You have reviewed the L1 and L2 loss. 
– You are familiar with many numpy functions such as np.sum, np.multiply, np.maximum, etc.

Part 2: Logistic Regression with a Neural Network mindset

You will learn to: 
– Build the general architecture of a learning algorithm, including: 
– Initializing parameters 
– Calculating the cost function and its gradient 
– Using an optimization algorithm (gradient descent) 
– Gather all three functions above into a main model function, in the right order.

1 – Packages

First, let’s run the cell below to import all the packages that you will need during this assignment. 
– numpy is the fundamental package for scientific computing with Python. 
– h5py is a common package to interact with a dataset stored in an H5 file. 
– matplotlib is a famous library to plot graphs in Python. 
– PIL and scipy are used here to test your model with your own picture at the end.

2 – Overview of the Problem set

Problem Statement: You are given a dataset (“data.h5”) containing: 
– a training set of m_train images labeled as cat (y=1) or non-cat (y=0) 
– a test set of m_test images labeled as cat or non-cat 
– each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).

You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.

Let’s get more familiar with the dataset. Load the data by running the following code.

train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()

We added “_orig” at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don’t need any preprocessing).

Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the index value and re-run to see other images.

# Example of a picture

Many software bugs in deep learning come from having matrix/vector dimensions that don’t fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.

Exercise: Find the values for: 
– m_train (number of training examples) 
– m_test (number of test examples) 
– num_px (= height = width of a training image) 
Remember that 
train_set_x_orig is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access m_train by writing train_set_x_orig.shape[0]
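A sketch of how these values are looked up from the array shapes. The arrays below are dummy stand-ins for the real output of load_dataset(), and the specific sizes (209, 50, 64) are illustrative only:

```python
import numpy as np

# dummy stand-ins for the real dataset (shapes chosen for illustration)
train_set_x_orig = np.zeros((209, 64, 64, 3))
test_set_x_orig = np.zeros((50, 64, 64, 3))

m_train = train_set_x_orig.shape[0]  # number of training examples
m_test = test_set_x_orig.shape[0]    # number of test examples
num_px = train_set_x_orig.shape[1]   # height (= width) of each image
```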

For convenience, you should now reshape images of shape (num_px, num_px, 3) into a numpy array of shape (num_px * num_px * 3, 1). After this, our training (and test) dataset is a numpy array where each column represents a flattened image. There should be m_train (respectively m_test) columns.

Exercise: Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num_px * num_px * 3, 1).
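A common trick is to pass -1 to reshape so numpy infers num_px * num_px * 3 on its own, then transpose so each column is one image. A sketch on dummy data (the shape (209, 64, 64, 3) is illustrative only):

```python
import numpy as np

# dummy stand-in for train_set_x_orig, shape (m_train, num_px, num_px, 3)
train_set_x_orig = np.zeros((209, 64, 64, 3))

# -1 lets reshape infer num_px*num_px*3; .T puts one flattened image per column
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
# train_set_x_flatten has shape (num_px*num_px*3, m_train) = (12288, 209)
```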

To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.

One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you subtract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient, and works almost as well, to just divide every row of the dataset by 255 (the maximum value of a pixel channel).

Let’s standardize our dataset.
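The standardization step is a single broadcasted division. A sketch on a small dummy array (the values are illustrative):

```python
import numpy as np

# dummy flattened dataset: 3 pixel values x 2 images, each value in [0, 255]
train_set_x_flatten = np.array([[0., 255.],
                                [127.5, 51.],
                                [255., 0.]])

# divide every entry by 255 so all values land in [0, 1]
train_set_x = train_set_x_flatten / 255.
```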

What you need to remember:

Common steps for pre-processing a new dataset are: 
– Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, …) 
– Reshape the datasets such that each example is now a vector of size (num_px * num_px * 3, 1) 
– “Standardize” the data