COMP9444 Neural Networks and Deep Learning
Term 3, 2020

Exercises 2: Backpropagation


  1. Identical Inputs

    Consider a degenerate case where the training set consists of just a single input, repeated 100 times. In 80 of the 100 cases, the target output value is 1; in the other 20, it is 0. What will a back-propagation neural network predict for this input, assuming training reaches a global optimum? (Hint: to find the global optimum, differentiate the error function and set the derivative to zero.)
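
    One way to sanity-check an answer to this question is numerically. The sketch below is only an illustration, assuming a sum-of-squared-errors loss; since every input is identical, the trained network can only produce a single constant value for all 100 cases, and the variable p stands for that value.

      import numpy as np

      # 80 of the 100 cases have target 1, the other 20 have target 0
      targets = np.array([1.0] * 80 + [0.0] * 20)

      # Search for the constant output p that minimises the
      # sum-of-squared-errors  E(p) = sum_i (p - t_i)^2
      candidates = np.linspace(0.0, 1.0, 1001)
      errors = [np.sum((p - targets) ** 2) for p in candidates]
      print("error-minimising constant output:", candidates[np.argmin(errors)])

      # Setting dE/dp = 2 * (100*p - 80) = 0 analytically gives the same value.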

  2. Linear Transfer Functions

    Suppose you had a neural network with linear transfer functions. That is, for each unit the activation is some constant c times the weighted sum of the inputs.

    (a) Assume that the network has one hidden layer. We can write the weights from the input to the hidden layer as a matrix W_HI, the weights from the hidden layer to the output layer as W_OH, and the biases at the hidden and output layers as vectors b_H and b_O. Using matrix notation, write down an equation for the vector O of output unit activations as a function of these weights and biases and the input I. Show that, for any given assignment of values to these weights and biases, there is a simpler network with no hidden layer that computes the same function. (A numerical sketch of this collapse is given after part (b).)

    (b) Repeat the calculation in part (a), this time for a network with any number of hidden layers. What can you say about the usefulness of linear transfer functions?
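
    The equivalence asked for in part (a) can be checked numerically. The sketch below is only an illustration, not a derivation: it assumes an activation of the form c times the weighted sum with an arbitrarily chosen c, uses random weights, and the names W_HI, W_OH, b_H, b_O simply mirror the notation above.

      import numpy as np

      rng = np.random.default_rng(0)
      c = 0.5                              # linear transfer: activation = c * (weighted sum)

      I    = rng.normal(size=(3,))         # arbitrary input vector
      W_HI = rng.normal(size=(4, 3))       # input -> hidden weights
      b_H  = rng.normal(size=(4,))         # hidden biases
      W_OH = rng.normal(size=(2, 4))       # hidden -> output weights
      b_O  = rng.normal(size=(2,))         # output biases

      # two-layer linear network:  H = c (W_HI I + b_H),  O = c (W_OH H + b_O)
      H = c * (W_HI @ I + b_H)
      O_two_layer = c * (W_OH @ H + b_O)

      # equivalent network with no hidden layer:  O = W I + b, where
      #   W = c^2 W_OH W_HI   and   b = c^2 W_OH b_H + c b_O
      W = c * c * (W_OH @ W_HI)
      b = c * c * (W_OH @ b_H) + c * b_O
      O_no_hidden = W @ I + b

      print(np.allclose(O_two_layer, O_no_hidden))   # True: both networks agree

    Repeating the same substitution one layer at a time collapses any stack of linear layers into a single linear layer, which is the calculation part (b) asks you to carry out.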