Development, Education

Backpropagation Tutorial

The PhD thesis of Paul J. Werbos at Harvard in 1974 described backpropagation as a method of training feed-forward artificial neural networks (ANNs). In the words of Wikipedia, it led to a "renaissance" in ANN research in the 1980s.

As we will see later, it is an extremely straightforward technique, yet most of the tutorials online seem to skip a fair amount of detail. Here's a simple (yet still thorough and mathematical) tutorial of how backpropagation works from the ground up, together with a couple of example applets. Feel free to play with them (and watch the videos) to get a better understanding of the methods described below!

Training a single perceptron

Training a multilayer neural network

1. Background

To start with, imagine that you have gathered some empirical data relevant to the situation that you are trying to predict - be it fluctuations in the stock market, the chance that a tumour is benign, the likelihood that the picture you are seeing is a face, or (as in the applets above) the coordinates of red and blue points.

We will call this data the training examples and we will describe the i-th training example as a tuple (\vec{x_i}, y_i), where \vec{x_i} \in \mathbb{R}^n is a vector of inputs and y_i \in \mathbb{R} is the observed output.

Ideally, our neural network should output y_i when given \vec{x_i} as an input. Since that will not always be the case, let's define the error measure as the sum of squared differences between the observed outputs and the network's predictions: E := \sum_i (h(\vec{x_i}) - y_i)^2, where h(\vec{x_i}) is the output of the network.
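As a quick illustrative sketch of this definition (not code from the applets above), the error measure can be computed directly; here `h` and `training_examples` are assumed names for the network's prediction function and a list of (x_i, y_i) pairs:

```python
# A minimal sketch of the error measure E = sum_i (h(x_i) - y_i)^2.
# `h` is a placeholder for the network's prediction function and
# `training_examples` is an assumed list of (x_i, y_i) tuples.
def squared_error(h, training_examples):
    return sum((h(x_i) - y_i) ** 2 for x_i, y_i in training_examples)
```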

2. Perceptrons (building-blocks)

The simplest classifiers out of which we will build our neural network are perceptrons (a fancy name thanks to Frank Rosenblatt). In reality, a perceptron is a plain-vanilla linear classifier which takes a number of inputs a_1, ..., a_n, scales them using some weights w_1, ..., w_n, adds them all up (together with some bias b) and feeds everything through an activation function \sigma : \mathbb{R} \rightarrow \mathbb{R}.

A picture is worth a thousand equations:

Perceptron (linear classifier)

To slightly simplify the equations, define w_0 := b and a_0 := 1. Then the behaviour of the perceptron can be described as \sigma(\vec{a} \cdot \vec{w}), where \vec{a} := (a_0, a_1, ..., a_n) and \vec{w} := (w_0, w_1, ..., w_n).
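For illustration, here is a rough Python sketch of that forward pass (the function name and argument layout are my own, not taken from the applets); the activation \sigma is left as a parameter and is filled in with one of the functions listed next:

```python
# Hypothetical sketch of a single perceptron's output, sigma(a . w).
# `w` is the full weight vector (w_0, w_1, ..., w_n) with w_0 = b,
# `a` holds the raw inputs (a_1, ..., a_n), and `sigma` is the activation.
def perceptron_output(w, a, sigma):
    a = (1.0,) + tuple(a)  # prepend a_0 = 1 so the bias becomes w_0 * a_0
    return sigma(sum(w_i * a_i for w_i, a_i in zip(w, a)))
```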

To complete our definition, here are a few examples of typical activation functions:

  • sigmoid: \sigma(x) = \frac{1}{1 + \exp(-x)},
  • hyperbolic tangent: \sigma(x) = \tanh(x),
  • plain linear: \sigma(x) = x, and so on.
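For concreteness, here are those three activations written out as plain Python functions, as a sketch to pair with the hypothetical `perceptron_output` above:

```python
import math

# The activation functions listed above.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hyperbolic_tangent(x):
    return math.tanh(x)

def linear(x):
    return x

# Example usage with the sketch above: a perceptron with bias b = -1.0,
# weights (0.5, 2.0) and sigmoid activation, evaluated on inputs (1.0, 3.0):
# perceptron_output((-1.0, 0.5, 2.0), (1.0, 3.0), sigmoid)
```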

Now we can finally start building neural networks.

 
Development, Education, Life

Halfway There

Another term in Cambridge has gone by - four out of nine to go. In the meantime, here's a quick update on what I've been up to in the past few months.

1. Microsoft internship

Redmond, WA, 2011

In January I had the opportunity to visit Microsoft's headquarters in Redmond, WA, to interview for the Software Development Engineer in Test intern position in the Office team. In short - a great trip, in every aspect.

I left London Heathrow on January 11th at 2:20 PM and landed at Seattle-Tacoma at 4:10 PM (I suspect that there might have been a few time zones in between those two points). I arrived at the Marriott Redmond roughly an hour later, which meant that, because of my anti-jetlag technique ("do not go to bed until 10-11 PM in the new timezone no matter what"), I had a few hours to kill. Ample time to unpack, grab dinner in the Marriott's restaurant and go for a short stroll around Redmond before going to sleep.

The next day I had four interviews arranged. The interviews themselves were absolutely stress-free; it felt more like a chance to meet and have a chat with some properly smart (and down-to-earth) folks.

Top of the Space Needle. Seattle, WA, 2011

The structure of the interviews seemed fairly typical: each interview consisted of some algorithm/data structure problems, a short discussion about my past experience and the opportunity to ask questions (obviously a great chance to learn more about the team/company/company culture, etc.). Since this was my third round of summer internship applications (I had worked as a software engineer for Wolfson Microelectronics in '09 and Morgan Stanley in '10), everything made sense and was pretty much what I expected.

My trip ended with a quick visit to Seattle on the next day: a few pictures of the Space Needle, a cup of Seattle's Best Coffee and there I was on my flight back to London, having spent $0.00 (yep, Microsoft paid for everything - flights, hotel, meals, taxis, etc.). Even so, the best thing about Microsoft definitely seemed to be the people working there; since I have received and accepted the offer, we'll see whether my opinion remains unchanged after this summer!

2. Lent term v2.0

Well, things are still picking up speed. Seven courses with twenty-eight supervisions in under two months, plus managing a group project (crowd-sourcing mobile network signal strength) and a few basketball practices each week on top of that, and you'll see why this blog has not been updated for a couple of months.

It's not all doom and gloom, of course. The courses themselves are great, the lecturers make some decently convoluted material understandable in minutes, and an occasional formal hall (e.g. the one below) also helps.

All in all, my opinion that Cambridge provides a great opportunity to learn a huge amount of material in a very short timeframe remains unchanged.

There will be more to come in separate posts about some cool things that I've learnt, but now, speaking of learning - it's revision time... :-)

Me and Ada at the CompSci formal hall. Cambridge, England, 2011

 