I’m working on a project where I need to model a non-linear relationship using a neural network. The relationship is y = 3 * x1^2 * x2^3. The network setup is as follows:
- Preprocessing: Natural logarithm of inputs
- Network Design: Single layer with one neuron
- Activation Function: Exponential
- Loss Function: MAE (Mean Absolute Error)
- Optimizer: Adam
- Epochs: 50
- Batch Size: 32
Input and Expected Output:
- Input: [x1, x2]
- Correct weights: [2, 3]
- Correct bias: ln(3)
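These target parameters come from taking the natural log of both sides: log(y) = log(3) + 2·log(x1) + 3·log(x2), which is a purely linear relationship in the log-inputs. A quick standalone sanity check of that identity (the sample values here are arbitrary; any positive inputs work):

import numpy as np

# Verify: ln(y) = ln(3) + 2*ln(x1) + 3*ln(x2) for y = 3 * x1^2 * x2^3
x1, x2 = 5.0, 7.0
lhs = np.log(3 * x1**2 * x2**3)
rhs = np.log(3) + 2 * np.log(x1) + 3 * np.log(x2)
assert np.isclose(lhs, rhs)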
Despite these settings, I can’t get the model to fit the relationship exactly: the predictions never quite match the true values. I’ve tried initializing the weights and bias both randomly and with specific values.
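By “specific values” I mean seeding the layer with the analytically correct parameters. A minimal sketch of that, assuming the model defined below has already been built (the Dense layer’s kernel has shape (2, 1) and its bias has shape (1,)):

# Seed the single Dense layer with the known-correct parameters
model.layers[0].set_weights([
    np.array([[2.0], [3.0]]),   # kernel: coefficients for log(x1) and log(x2)
    np.array([np.log(3.0)]),    # bias: ln(3)
])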
Here is the code:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
# Generate data
x1 = np.random.randint(1, 21, size=(1000, 1))  # integers in [1, 20], so log(x) is defined
x2 = np.random.randint(1, 21, size=(1000, 1))
y = 3 * (x1 ** 2) * (x2 ** 3)
# Preprocess data
log_x1 = np.log(x1)
log_x2 = np.log(x2)
log_inputs = np.hstack((log_x1, log_x2))
# Define model
model = Sequential()
model.add(Dense(1, input_dim=2, activation='exponential',
                kernel_initializer='ones', bias_initializer='zeros'))
# Compile model
model.compile(optimizer=Adam(learning_rate=0.01), loss='mae')
# Train model
model.fit(log_inputs, np.log(y), epochs=50, batch_size=32)  # targets are the log-transformed outputs
# Evaluate model
test_x1 = np.array([[2], [4], [5]])
test_x2 = np.array([[3], [7], [19]])
test_inputs = np.hstack((np.log(test_x1), np.log(test_x2)))
predicted = model.predict(test_inputs)
print(np.exp(predicted))
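For comparison, the ground truth for those three test points can be computed directly from the formula (appended to the script above):

# Ground truth from y = 3 * x1^2 * x2^3 for the test points
true_y = 3 * (test_x1 ** 2) * (test_x2 ** 3)
print(true_y.ravel())  # [324, 16464, 514425]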
Does anyone have suggestions on how to improve the accuracy of this model?