I can't share the full data on Stack Overflow, so I will just share the code below. I have been working on this data for the past few days, but the loss stays around 15000 or 18000, which is too high. My colleagues told me to use a neural network for this data.
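For context, this is roughly how the data goes in; the file name and column names below are just placeholders, since I can't post the real dataset:

import pandas as pd
from sklearn.model_selection import train_test_split

# Placeholder file and column names -- the real dataset can't be posted here.
df = pd.read_csv("data.csv")
x = df.drop(columns=["target"]).values.astype("float32")
y = df["target"].values.astype("float32")

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)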
from tensorflow.keras import Sequential, layers

# Fully connected network for the regression target
model = Sequential([
    layers.Dense(128, activation='relu'),
    layers.Dense(64, activation='relu'),
    layers.Dense(32, activation='relu'),
    layers.Dense(1, activation='relu')  # single output unit
])
model.compile(loss='mse', optimizer='adam', metrics=['mae'])
model.fit(x_train, y_train, epochs=100, batch_size=80)
Epoch 1/100
3/3 ━━━━━━━━━━━━━━━━━━━━ 1s 78ms/step - loss: 242343.5312 - mae: 465.7755
Epoch 2/100
3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 20ms/step - loss: 243483.7969 - mae: 468.0649
Epoch 3/100
3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 29ms/step - loss: 246071.8281 - mae: 468.4903
Epoch 4/100
3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 20ms/step - loss: 250888.7188 - mae: 476.0695
Epoch 5/100
3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 15ms/step - loss: 242264.2188 - mae: 465.4283
Epoch 6/100
3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - loss: 242441.9062 - mae: 464.7845
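I am also not sure how to judge whether this loss is bad in absolute terms. I guess comparing against a naive baseline that always predicts the training mean would be one way to check (a quick sketch, assuming y_train is a float NumPy array):

import numpy as np

# Naive baseline: always predict the mean of the training targets.
baseline_pred = np.full_like(y_train, y_train.mean())
baseline_mse = np.mean((y_train - baseline_pred) ** 2)
baseline_mae = np.mean(np.abs(y_train - baseline_pred))
print(f"baseline MSE: {baseline_mse:.1f}, baseline MAE: {baseline_mae:.1f}")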
Can someone please help me get this loss as low as possible? It would be very much appreciated.