I’m trying to run the following chunk of code, and that’s when the error occurs.
import tensorflow as tf
import matplotlib.pyplot as plt

# Initialize a random value for our initial x
x = tf.Variable([tf.random.normal([1])])
print("Initializing x={}".format(x.numpy()))

learning_rate = 1e-2  # learning rate for SGD
history = []
# Define the target value
x_f = 4

# We will run SGD for a number of iterations. At each iteration, we compute the loss,
# compute the derivative of the loss with respect to x, and perform the SGD update.
for i in range(500):
    with tf.GradientTape() as tape:
        '''TODO: define the loss as described above'''
        loss = (x - x_f)**2  # TODO

    # loss minimization using gradient tape
    grad = tape.gradient(loss, x)  # compute the derivative of the loss with respect to x
    new_x = x - learning_rate * grad  # SGD update
    x.assign(new_x)  # update the value of x
    history.append(x.numpy()[0])

# Plot the evolution of x as we optimize towards x_f!
plt.plot(history)
plt.plot([0, 500], [x_f, x_f])
plt.legend(('Predicted', 'True'))
plt.xlabel('Iteration')
plt.ylabel('x value')
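As a sanity check on the math itself, independent of TensorFlow: the gradient of the loss (x - x_f)**2 with respect to x is 2 * (x - x_f), so the same SGD loop can be written by hand in plain Python (the starting value 1.7 below is an arbitrary stand-in for the random initialization):

```python
# Hand-rolled version of the same SGD loop, with the gradient computed
# analytically instead of via tf.GradientTape.
x = 1.7            # arbitrary starting point (stand-in for tf.random.normal)
x_f = 4            # target value, as in the question
learning_rate = 1e-2

for _ in range(500):
    grad = 2 * (x - x_f)          # d/dx of (x - x_f)**2
    x = x - learning_rate * grad  # SGD update

print(x)  # converges to approximately 4.0
```

This confirms the loss and update rule are correct, so any error from the TensorFlow version is coming from the environment rather than the math.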
I tried the solutions provided for other similar questions, such as installing mkl and installing tensorflow through conda, but none of them helped.
Just to check whether the system was running out of memory, I closed all memory-heavy applications. My system has 16 GB of physical memory, and the problem happens even when there is over 8 GB free. (I’ve run intensive code before, but this is the first time I’m facing this issue. I had uninstalled Anaconda Navigator and reinstalled it recently.)
Conda version: 24.5.0
Python version: 3.9
Tensorflow version: 2.10.0