Neural Network

Simple model

In this model we have some inputs $x_1, \dots, x_n$ acting as our dendrites, and the output is the result of our hypothesis function (the axon).

In other words, our input nodes (the input layer) feed into other nodes (the hidden layer), and the result comes out as the hypothesis function (the output layer).

For a fully connected network with forward propagation, we need to understand backpropagation:

Suppose we have some input data $x$, target $y$, output $o$, and initial weights $w$. The aim of the neural network is to make the estimated output close to the true output.

Input Layer to Hidden Layer

$$result = w \cdot x + b$$

Note that these weights $(w_1, w_2, w_3, w_4)$ sit between the input layer and the hidden layer.

The output of the hidden layer, after applying the sigmoid function, is

$$out_{hidden} = \frac{1}{1+e^{-result}}$$
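As a concrete illustration, here is a minimal NumPy sketch of the input-to-hidden step. The network size (two input nodes, two hidden nodes) and the specific values for $x$, the weights $w_1 \dots w_4$, and the bias $b$ are made up for the example, not taken from the post:

```python
import numpy as np

def sigmoid(z):
    """Sigmoid activation: 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical example values: 2 input nodes and 2 hidden nodes.
x = np.array([0.05, 0.10])              # input data x
w_hidden = np.array([[0.15, 0.20],      # w1, w2 -> hidden node 1
                     [0.25, 0.30]])     # w3, w4 -> hidden node 2
b_hidden = 0.35                         # bias b

result_hidden = w_hidden @ x + b_hidden  # result = w . x + b
out_hidden = sigmoid(result_hidden)      # squash with the sigmoid
print(out_hidden)                        # output of the hidden layer
```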

Hidden Layer to Output Layer

Similarly to the above, we can get $out_{output}$.

With the initial weights, $out_{output}$ is usually far away from the true output, so we need backpropagation to update the weights.
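Continuing the sketch above, the hidden-to-output step is the same computation with a second weight matrix and bias (again hypothetical values):

```python
# Hypothetical weights and bias between the hidden layer and the output layer.
w_output = np.array([[0.40, 0.45],
                     [0.50, 0.55]])
b_output = 0.60

result_output = w_output @ out_hidden + b_output
out_output = sigmoid(result_output)
print(out_output)  # first estimate; with the initial weights it is usually far from the target
```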

BackPropagation

Find the total variance of the output:

$$Var_{total} = E\left[(target - out_{output})^2\right]$$

For example, in the figure above with two output nodes $o_1$ and $o_2$, $Var_{total}$ can also be written as $Var_{o1} + Var_{o2}$.
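Continuing the sketch, the total variance for two output nodes could be computed by summing the per-node squared errors, matching $Var_{o1} + Var_{o2}$; the target values below are invented for illustration:

```python
target = np.array([0.01, 0.99])            # hypothetical true outputs for o1 and o2

var_per_node = (target - out_output) ** 2  # Var_o1 and Var_o2
var_total = var_per_node.sum()             # Var_total = Var_o1 + Var_o2
print(var_per_node, var_total)
```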

To update a weight, we want to know how much influence this weight has on the total variance. In mathematics, this is the partial derivative:

$$\frac{\partial Var_{total}}{\partial w} = \frac{\partial Var_{total}}{\partial out_{output}} \cdot \frac{\partial out_{output}}{\partial out_{hidden}} \cdot \frac{\partial out_{hidden}}{\partial w}$$
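To make the chain rule concrete, here is a sketch of one backpropagation step on the toy network from the earlier blocks. The learning rate and the gradient-descent update $w \leftarrow w - \eta \cdot \frac{\partial Var_{total}}{\partial w}$ are standard choices that the derivation implies but does not spell out here:

```python
learning_rate = 0.5                                  # hypothetical step size

# dVar_total / d out_output: derivative of (target - out_output)^2
d_var_d_out = -2.0 * (target - out_output)

# d out_output / d result_output: derivative of the sigmoid
d_out_d_result = out_output * (1.0 - out_output)

# Error signal at the output layer
delta_output = d_var_d_out * d_out_d_result

# Gradient for the hidden-to-output weights
grad_w_output = np.outer(delta_output, out_hidden)

# Propagate the error back through w_output and the hidden sigmoid
delta_hidden = (w_output.T @ delta_output) * out_hidden * (1.0 - out_hidden)

# Gradient for the input-to-hidden weights (w1 .. w4), i.e. dVar_total/dw from the chain rule
grad_w_hidden = np.outer(delta_hidden, x)

# Gradient descent: renew every weight
w_output -= learning_rate * grad_w_output
w_hidden -= learning_rate * grad_w_hidden
```

Running the forward pass again with the updated weights should give a smaller $Var_{total}$ (for a small enough learning rate), which is why backpropagation is applied repeatedly.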

Author: shixuan liu
Link: http://tedlsx.github.io/2019/08/30/neural-network/
Copyright Notice: All articles in this blog are licensed under CC BY-NC-SA 4.0 unless stated otherwise.