Gradient as a Guide : A Simple Game

  The Backpropagation algorithm is the powerhouse of Deep Learning models: it is a method for efficiently computing gradients for billions of parameters. It is therefore vital to understand how gradient information guides a function's parameters as they traverse the loss landscape and eventually halt in a valley. I built a simple game centred around these concepts so that learners can check whether they have understood them. Here it is for you to explore.

Goal

  Reach the minimum (or maximum) of the function using the Gradient as a guide, by repeatedly applying the update rule

$$ w_{t+1} = w_t - \eta \cdot dw_t $$

where,

  • \( w_t \rightarrow \) the weight at iteration \(t\)

  • \( \eta \rightarrow \) the learning rate, default: 1, range: \( 0 \leq \eta \leq 1 \)

  • \( dw_t \rightarrow \) the gradient at \(w_t\)
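If you prefer code to notation, here is a minimal sketch of the same update loop in Python. It assumes a simple quadratic loss \(f(w) = w^2\) purely for illustration; the applet's actual function is not reproduced here.

```python
# A minimal sketch of the update rule w_{t+1} = w_t - eta * dw_t.
# The loss f(w) = w**2 is an assumption for illustration; its
# gradient is dw = 2 * w.

def grad(w):
    """Gradient of the assumed loss f(w) = w**2."""
    return 2 * w

w = 1.2    # initial weight, matching the applet's default
eta = 0.1  # learning rate, chosen well below 1

for t in range(10):
    dw = grad(w)
    w = w - eta * dw  # the gradient-descent update
    print(f"iteration {t + 1}: w = {w:.4f}, eta*dw = {eta * dw:.4f}")
```

Each iteration moves \(w\) a small step against the gradient, so the printed values shrink toward the minimum at \(w = 0\).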

Listed below is some information about the applet to help you get started quickly.

  1. By default, the weight \(w\) is initialized to 1.2 and the learning rate is set to \(\eta = 1\). You can modify \(w\) by replacing the number in the input box; the learning rate \(\eta\) can be changed using the slider.

  2. The gradient \(dw\) at the weight \(w\) is computed and then scaled by \(\eta\). It is displayed as \(\eta \cdot dw = 0.31\).

  3. The direction of the negative gradient (toward the minimum) is represented by an arrow originating at the point \((2,2)\). Therefore, it currently points to the left.

  4. You can update the weight \(w\) by adding or subtracting the scaled-gradient value, \(\eta \cdot dw\), directly in the input box, like \([1.2 - 0.31]\); a worked step is shown after this list.
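For instance, with the default values above (\(w = 1.2\), \(\eta = 1\), \(\eta \cdot dw = 0.31\)), the first manual update works out to

$$ w_{t+1} = w_t - \eta \cdot dw_t = 1.2 - 1 \cdot 0.31 = 0.89 $$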

The video below is my attempt to reach the minimum of the function. As you can see, the process enters a vicious cycle after the \(5^{th}\) iteration! To avoid it, you must choose a learning rate less than 1. Try it yourself!
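To see why this happens, consider a hedged sketch, again assuming the loss \(f(w) = w^2\) (not necessarily the applet's function): with \(\eta = 1\) the update becomes \(w \leftarrow w - 2w = -w\), so the weight flips sign every iteration and never settles, whereas any \(\eta < 1\) shrinks it toward the minimum.

```python
# Comparing eta = 1 (cycle) with eta = 0.4 (convergence), assuming
# the loss f(w) = w**2 with gradient dw = 2 * w. With eta = 1 the
# update is w <- w - 2w = -w, so w flips sign forever.

def step(w, eta):
    dw = 2 * w           # gradient of the assumed loss
    return w - eta * dw  # the gradient-descent update

for eta in (1.0, 0.4):
    w = 1.2
    trace = [w]
    for _ in range(5):
        w = step(w, eta)
        trace.append(round(w, 4))
    print(f"eta = {eta}: {trace}")

# eta = 1.0: [1.2, -1.2, 1.2, -1.2, 1.2, -1.2]          <- vicious cycle
# eta = 0.4: [1.2, 0.24, 0.048, 0.0096, 0.0019, 0.0004] <- converges
```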

Deep Learning is a game of Gradients at scale!