Deep Neural Network Visualization
In this post I present a "toy tool" that shows how the connections between the neurons of a Deep Neural Network (DNN) change during training. These connections are also called weights. For those with technical knowledge: the "bias" unit is not shown (for lack of time; I will add it later), and I haven't implemented the predict option yet, though I will probably add it later too.
Technical Notes:
- This DNN uses standard backpropagation without optimizations such as the Adam optimizer or regularization.
- I am using Math.js for the matrix operations. There are some errors I haven't figured out how to solve yet, especially NaN values that sometimes appear in the visualization (it only happens occasionally, no worries =) ).
- The last (output) layer is a sigmoid layer by default, so for the moment the tool only supports binary classification. I may later implement a linear output function for regression analysis.
- The weights are randomly initialized between 0 and 1. The weights you see are multiplied by 100 and rounded to an integer, and they reflect the magnitude (absolute value) only. This is because the Google Sankey chart accepts neither floats nor numbers below 1 as link weights.
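As a rough sketch of the two weight-related notes above, here is how a plain gradient-descent update (no Adam, no regularization) and the Sankey display transform could look in JavaScript. The function names are my own for illustration, not the tool's actual code, and the clamp to 1 is my assumption for weights that would round down to 0:

```javascript
// One vanilla gradient-descent update: w <- w - learningRate * grad,
// applied element-wise to a weight matrix stored as an array of rows.
function gradientStep(weights, grads, learningRate) {
  return weights.map((row, i) =>
    row.map((w, j) => w - learningRate * grads[i][j]));
}

// Sankey link values must be integers >= 1, so each weight is displayed
// as round(|w| * 100); clamping to 1 (my assumption) avoids zero links.
function displayWeight(w) {
  return Math.max(1, Math.round(Math.abs(w) * 100));
}
```

For example, a weight of -0.5 is displayed as 50, and a tiny weight like 0.001 still shows as 1 rather than disappearing from the chart.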
Some instructions:
- In the X input text box, write your n-by-m matrix: n features and m training examples. Separate each row with a ";" and each column with a ",". There is an example in the input box. The same applies to the Y matrix, although I have only tested it with a 1-by-m shape; if you try another shape, let me know what happens.
- The layer size input defines the DNN architecture. So if you want a layer with 3 hidden units followed by a layer with 2 hidden units, you should type 3,2.
- The activation function option has two choices: sigmoid or tanh. I will implement ReLU soon.
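To make the input conventions above concrete, here is a small JavaScript sketch of how a matrix string like "1,2,3;4,5,6" (rows split by ";", columns by ",") could be parsed, along with the two available hidden-layer activations. The names are my own illustration, not the tool's actual code:

```javascript
// Parse "r1c1,r1c2;r2c1,r2c2" into an array of numeric rows
// (features as rows, training examples as columns).
function parseMatrix(text) {
  return text.split(';').map(row => row.split(',').map(Number));
}

// The two hidden-layer activation choices mentioned above.
const sigmoid = z => 1 / (1 + Math.exp(-z));
const tanhAct = z => Math.tanh(z);
```

So "1,2,3;4,5,6" becomes a 2-by-3 matrix: 2 features and 3 training examples.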
Input data X. Separate samples by commas "," and features by ";". X:
Input data Y. Set the target binary class. Separate samples by commas "," and classes by ";". Y:
Learning rate:
Number of iterations:
Layer sizes (values separated by commas ","):
Type of activation function for hidden layers:
Update the graph each "n" iterations. n:
Make an iteration each "m" milliseconds. m:
Process
Stop
