def train():
    # Compute the loss inside the tape so its operations are recorded
    with tf.GradientTape() as tape:
        loss_f = loss()
    trainable_variables = list(weights.values()) + list(biases.values())
    gradients = tape.gradient(loss_f, trainable_variables)
    optimizer.apply_gradients(zip(gradients, trainable_variables))

l1 = []
l3 = []
for i in range(2000):
    train()
    if i % 10 == 0:
        l1.append(loss())
        l3.append((1 / n) * tf.math.reduce_sum(tf.pow(tf.ones(n) - NN(X)[0][0][:n], 2)))
        print('loss', i, loss())
        # print('error=', (1 / n) * tf.math.reduce_sum(tf.pow(tf.ones(n) - NN(X)[0][0][:n], 2)))

l2 = tf.convert_to_tensor(l1)
l4 = tf.convert_to_tensor(l3)
# print(NN(x))
I designed a neural network to solve Fredholm integral equations. The exact solution is identically 1, which is why the error term compares the network's output against tf.ones, but the loss stays very large during training and the network's output does not converge to the exact answer.
Thank you in advance for your help.
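As a sanity check that a constant target of 1 is easy to reach by gradient descent, here is a minimal plain-NumPy sketch of the same training pattern: a linear model fitted to the constant target 1 by hand-computed gradient descent on the mean squared error. All names here are illustrative placeholders, not the NN, loss, or weights from my code above.

```python
import numpy as np

def mse_loss(w, b, X, y):
    # mean squared error of the linear model X @ w + b against y
    pred = X @ w + b
    return np.mean((pred - y) ** 2)

def train_step(w, b, X, y, lr=0.1):
    # gradients of the MSE with respect to w and b, computed by hand
    pred = X @ w + b
    err = pred - y                       # shape (n,)
    grad_w = 2 * X.T @ err / len(y)
    grad_b = 2 * err.mean()
    return w - lr * grad_w, b - lr * grad_b

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = np.ones(50)                          # exact answer is 1 everywhere, as in the question
w, b = np.zeros(3), 0.0
for step in range(500):
    w, b = train_step(w, b, X, y)
print(mse_loss(w, b, X, y))              # loss should be near 0 after training
```

If the loss in a setup like this also failed to shrink, the problem would be in the optimization loop; if it shrinks here but not in the TensorFlow version, the issue is more likely in how the Fredholm loss or the network output is defined.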