Tag Archive for machine-learning, gradient-descent

What exactly am I doing wrong in this weight update?

I’m trying to implement backpropagation from scratch, updating the weights of a network with multiple inputs and multiple outputs but no hidden layers. The logic I implemented seems fine to me, yet the weights are not converging; instead they keep increasing, and I can’t work out why.
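The asker's code isn't shown, but weights that grow instead of converging usually come from a sign error (adding the gradient instead of subtracting it) or a gradient that isn't averaged over the batch. A minimal sketch of a correct update for a no-hidden-layer network, with hypothetical shapes and data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: 3 inputs, 2 outputs, no hidden layer (shapes are assumptions).
X = rng.normal(size=(100, 3))
true_W = rng.normal(size=(3, 2))
Y = X @ true_W

W = np.zeros((3, 2))
lr = 0.1

for _ in range(500):
    pred = X @ W
    error = pred - Y                 # (100, 2) residuals
    grad = X.T @ error / len(X)      # MSE gradient, averaged over the batch
    W -= lr * grad                   # subtract; W += lr * grad would diverge

print(np.allclose(W, true_W, atol=1e-3))
```

If the update were `W += lr * grad`, each step would move uphill on the loss and the weights would blow up, which matches the symptom described.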

Why does the learning rate need to be so low?

I am new to ML, so I tried to implement gradient descent for multiple linear regression. The function is simple: z = x + y. But for some reason the learning rate needs to be 1e-16 for it to work. Any ideas why?
Here is the code:
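The original code isn't included here, but a learning rate that small is usually a scale problem: the gradient is summed over all samples instead of averaged, and/or the features are large, so a normal learning rate overshoots and diverges. A sketch under those assumptions, fitting z = x + y with averaged gradients so an ordinary learning rate works:

```python
import numpy as np

rng = np.random.default_rng(1)

# Target function z = x + y, with large-scale features (an assumption).
n = 10_000
X = rng.uniform(0, 100, size=(n, 2))
z = X[:, 0] + X[:, 1]

w = np.zeros(2)
lr = 1e-4  # workable once the gradient is averaged rather than summed

for _ in range(2000):
    error = X @ w - z
    # Summing over 10,000 samples without the 1/n makes the gradient
    # ~10,000x larger, which is what forces a microscopic learning rate.
    grad = X.T @ error / n
    w -= lr * grad

print(np.allclose(w, [1.0, 1.0], atol=1e-2))
```

Standardizing the features (zero mean, unit variance) would shrink the gradient scale further and allow a learning rate like 0.1; either fix removes the need for 1e-16.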