
### Problem Set 2 (22.4 - 29.4.07)

This problem set deals with vector quantisation as it was presented in the lecture (another reference can be found here).

#### Exercise 1

1. Implement both algorithms for vector quantisation that were presented in the lecture (i.e. the batch-update LBG algorithm and the online algorithm). The input to each algorithm should be the data points, the number of reference vectors, and the number of iterations.
To find the nearest reference vector for a given data vector, you can simply compute its distance to every reference vector.
2. Generate a set of two-dimensional data points and test both algorithms on it. Plot the data points and the reference vectors (e.g. as colored points). Visualize the development of the reference vectors during learning (e.g. with a picture sequence or an animation).
3. Plot the quantisation error during learning.
4. Modify the algorithms so that they stop automatically once the reference vectors have become more or less stable (e.g. by defining a threshold on how far they move between iterations).
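The two algorithms and the stopping criterion might be sketched as follows. This is a minimal sketch, not the lecture's reference implementation: the function names, the brute-force distance computation, and the movement-based threshold `tol` are my own choices, and the plotting parts of the exercise are omitted here.

```python
import numpy as np

def batch_lbg(data, k, max_iters=100, tol=1e-6, rng=None):
    """Batch LBG: assign every data point to its nearest reference
    vector, then move each reference vector to the mean of its
    assigned points. Stops early when the vectors move less than tol."""
    rng = np.random.default_rng(rng)
    refs = data[rng.choice(len(data), k, replace=False)].copy()
    for _ in range(max_iters):
        # distance of every data point to every reference vector
        d = np.linalg.norm(data[:, None, :] - refs[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        new_refs = refs.copy()
        for j in range(k):
            members = data[nearest == j]
            if len(members):
                new_refs[j] = members.mean(axis=0)
        moved = np.linalg.norm(new_refs - refs)
        refs = new_refs
        if moved < tol:          # reference vectors more or less stable
            break
    return refs

def online_vq(data, k, epochs=20, lr=0.1, tol=1e-6, rng=None):
    """Online VQ: for each data point in turn, move only the winning
    (nearest) reference vector a fraction lr towards that point."""
    rng = np.random.default_rng(rng)
    refs = data[rng.choice(len(data), k, replace=False)].copy()
    for _ in range(epochs):
        moved = 0.0
        for x in rng.permutation(data):
            w = np.linalg.norm(refs - x, axis=1).argmin()
            step = lr * (x - refs[w])
            refs[w] += step
            moved += np.linalg.norm(step)
        if moved < tol:          # total movement in this epoch below threshold
            break
    return refs

def quantisation_error(data, refs):
    """Mean squared distance of each data point to its nearest reference vector."""
    d = np.linalg.norm(data[:, None, :] - refs[None, :, :], axis=2)
    return (d.min(axis=1) ** 2).mean()
```

For the animation, one can simply record a copy of `refs` after every iteration/epoch and plot the snapshots; likewise, recording `quantisation_error(data, refs)` per iteration gives the error curve of part 3.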

#### Exercise 2 (optional)

1. Extend your implementation from exercise 1 so that it adjusts the number of reference vectors until a given quantisation error is reached. What would be a good way to increase the number of reference vectors, i.e. how do you pick the new ones? An example can be found here.
2. Think about faster ways of getting the closest reference vector for a given data vector. Take a look at algorithms for the calculation of the Voronoi tessellation and the Delaunay triangulation (you can use Wikipedia as a starting point).
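For part 1, one reasonable strategy (an assumption on my part, not the course's prescribed method) is to start with a single reference vector and repeatedly split the cell that contributes most to the quantisation error, placing the new vector halfway towards that cell's farthest member before re-running batch LBG. A sketch:

```python
import numpy as np

def lbg(data, refs, iters=50):
    """Plain batch-LBG passes, used as a subroutine (cf. exercise 1)."""
    for _ in range(iters):
        d = np.linalg.norm(data[:, None, :] - refs[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        for j in range(len(refs)):
            members = data[nearest == j]
            if len(members):
                refs[j] = members.mean(axis=0)
    return refs

def grow_until(data, target, max_refs=32):
    """Grow the codebook until the mean squared quantisation error
    drops below `target` (or max_refs is reached)."""
    refs = data.mean(axis=0, keepdims=True).copy()
    while True:
        refs = lbg(data, refs)
        d = np.linalg.norm(data[:, None, :] - refs[None, :, :], axis=2)
        nearest, dist = d.argmin(axis=1), d.min(axis=1)
        if (dist ** 2).mean() <= target or len(refs) >= max_refs:
            return refs
        # split the cell with the largest summed squared error: put the
        # new reference vector halfway towards that cell's farthest member
        cell_err = np.bincount(nearest, weights=dist ** 2, minlength=len(refs))
        worst = cell_err.argmax()
        members = data[nearest == worst]
        far = members[np.linalg.norm(members - refs[worst], axis=1).argmax()]
        refs = np.vstack([refs, refs[worst] + 0.5 * (far - refs[worst])])
```

Splitting towards the farthest member (rather than duplicating the vector with a random perturbation, another common choice) makes the first reassignment after the split separate the overloaded region deterministically.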
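For part 2, a k-d tree is one standard way to avoid computing the distance to every reference vector: it answers nearest-neighbour queries in roughly logarithmic time by recursively splitting space along alternating axes and pruning subtrees whose splitting plane is farther away than the best distance found so far. A minimal sketch (the function names are my own; libraries such as `scipy.spatial` provide production-quality versions):

```python
import numpy as np

def build_kdtree(points, depth=0):
    """Recursively build a k-d tree; each node is
    (point, left_subtree, right_subtree, split_axis)."""
    if len(points) == 0:
        return None
    axis = depth % points.shape[1]       # cycle through the axes
    points = points[points[:, axis].argsort()]
    mid = len(points) // 2               # median point becomes the node
    return (points[mid],
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1),
            axis)

def nearest(tree, q, best=None):
    """Return (point, distance) of the tree point nearest to query q."""
    if tree is None:
        return best
    point, left, right, axis = tree
    d = np.linalg.norm(point - q)
    if best is None or d < best[1]:
        best = (point, d)
    diff = q[axis] - point[axis]
    near, far = (left, right) if diff < 0 else (right, left)
    best = nearest(near, q, best)
    if abs(diff) < best[1]:              # hypersphere crosses the split plane
        best = nearest(far, q, best)     # so the far side must be checked too
    return best
```

The Voronoi tessellation of the reference vectors and its dual, the Delaunay triangulation, offer another route: once you know which Voronoi cell a point falls into, the nearest reference vector is that cell's generator, and for slowly moving points the search can be restricted to the Delaunay neighbours of the previous winner.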