I am trying to build a Keras-based model for DEM improvement, where the target image is a high-resolution DSM and the input is a 3-band stack of a coarse-resolution DSM, LULC, and building footprint.
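For context, the three input rasters are stacked along the channel axis into a (512, 512, 3) array; the variable names below are only illustrative placeholders, not my actual preprocessing code:

import numpy as np

# Each band is assumed to be a 512 x 512 float array resampled to the same grid (placeholders here).
coarse_dsm = np.zeros((512, 512), dtype=np.float32)  # coarse-resolution DSM
lulc = np.zeros((512, 512), dtype=np.float32)        # land-use / land-cover raster
footprint = np.zeros((512, 512), dtype=np.float32)   # building-footprint mask

# Stack into the (height, width, channels) layout that the model's Input layer expects.
x_sample = np.stack([coarse_dsm, lulc, footprint], axis=-1)  # shape (512, 512, 3)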
This is the U-Net model I tried:
# Build U-Net
import tensorflow as tf
from tensorflow.keras import Input, Model

x_inputs = Input(shape=(512, 512, 3))
#Contraction path
c1 = tf.keras.layers.Conv2D(16, (3, 3), activation='leaky_relu', kernel_initializer='he_normal', padding='same')(x_inputs)
c1 = tf.keras.layers.Dropout(0.1)(c1)
c1 = tf.keras.layers.Conv2D(16, (3, 3), activation='leaky_relu', kernel_initializer='he_normal', padding='same')(c1)
p1 = tf.keras.layers.MaxPooling2D((2, 2))(c1)

c2 = tf.keras.layers.Conv2D(32, (3, 3), activation='leaky_relu', kernel_initializer='he_normal', padding='same')(p1)
c2 = tf.keras.layers.Dropout(0.1)(c2)
c2 = tf.keras.layers.Conv2D(32, (3, 3), activation='leaky_relu', kernel_initializer='he_normal', padding='same')(c2)
p2 = tf.keras.layers.MaxPooling2D((2, 2))(c2)

c3 = tf.keras.layers.Conv2D(64, (3, 3), activation='leaky_relu', kernel_initializer='he_normal', padding='same')(p2)
c3 = tf.keras.layers.Dropout(0.2)(c3)
c3 = tf.keras.layers.Conv2D(64, (3, 3), activation='leaky_relu', kernel_initializer='he_normal', padding='same')(c3)
p3 = tf.keras.layers.MaxPooling2D((2, 2))(c3)

c4 = tf.keras.layers.Conv2D(128, (3, 3), activation='leaky_relu', kernel_initializer='he_normal', padding='same')(p3)
c4 = tf.keras.layers.Dropout(0.2)(c4)
c4 = tf.keras.layers.Conv2D(128, (3, 3), activation='leaky_relu', kernel_initializer='he_normal', padding='same')(c4)
p4 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(c4)

c5 = tf.keras.layers.Conv2D(256, (3, 3), activation='leaky_relu', kernel_initializer='he_normal', padding='same')(p4)
c5 = tf.keras.layers.Dropout(0.3)(c5)
c5 = tf.keras.layers.Conv2D(256, (3, 3), activation='leaky_relu', kernel_initializer='he_normal', padding='same')(c5)
#Expansive path
u6 = tf.keras.layers.Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(c5)
u6 = tf.keras.layers.concatenate([u6, c4])
c6 = tf.keras.layers.Conv2D(128, (3, 3), activation='leaky_relu', kernel_initializer='he_normal', padding='same')(u6)
c6 = tf.keras.layers.Dropout(0.2)(c6)
c6 = tf.keras.layers.Conv2D(128, (3, 3), activation='leaky_relu', kernel_initializer='he_normal', padding='same')(c6)

u7 = tf.keras.layers.Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(c6)
u7 = tf.keras.layers.concatenate([u7, c3])
c7 = tf.keras.layers.Conv2D(64, (3, 3), activation='leaky_relu', kernel_initializer='he_normal', padding='same')(u7)
c7 = tf.keras.layers.Dropout(0.2)(c7)
c7 = tf.keras.layers.Conv2D(64, (3, 3), activation='leaky_relu', kernel_initializer='he_normal', padding='same')(c7)

u8 = tf.keras.layers.Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same')(c7)
u8 = tf.keras.layers.concatenate([u8, c2])
c8 = tf.keras.layers.Conv2D(32, (3, 3), activation='leaky_relu', kernel_initializer='he_normal', padding='same')(u8)
c8 = tf.keras.layers.Dropout(0.1)(c8)
c8 = tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c8)

u9 = tf.keras.layers.Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same')(c8)
u9 = tf.keras.layers.concatenate([u9, c1], axis=3)
c9 = tf.keras.layers.Conv2D(16, (3, 3), activation='leaky_relu', kernel_initializer='he_normal', padding='same')(u9)
c9 = tf.keras.layers.Dropout(0.1)(c9)
c9 = tf.keras.layers.Conv2D(16, (3, 3), activation='leaky_relu', kernel_initializer='he_normal', padding='same')(c9)
# 1x1 convolution producing the single-band DSM output
x_outputs = tf.keras.layers.Conv2D(1, (1, 1), activation='linear', padding='same')(c9)
callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)
model = Model(inputs=x_inputs, outputs=x_outputs, name="U-Net")
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.0001), loss='binary_crossentropy', metrics=['accuracy'])
model.summary()

history = model.fit(X_train_, y_train, batch_size=16, verbose=1, epochs=100, validation_data=(X_test_, y_test), shuffle=False)
After training it gives:
Epoch 1/100
28/28 [==============================] - 426s 15s/step - loss: 0.9158 - accuracy: 1.7300e-08 - val_loss: 0.6581 - val_accuracy: 0.0000e+00
Epoch 2/100
28/28 [==============================] - 420s 15s/step - loss: 0.7135 - accuracy: 8.6501e-09 - val_loss: 0.6451 - val_accuracy: 0.0000e+00
Epoch 3/100
28/28 [==============================] - 421s 15s/step - loss: 0.6699 - accuracy: 8.6501e-09 - val_loss: 0.6321 - val_accuracy: 0.0000e+00
Epoch 4/100
28/28 [==============================] - 419s 15s/step - loss: 0.6465 - accuracy: 8.6501e-09 - val_loss: 0.6194 - val_accuracy: 0.0000e+00
Epoch 5/100
28/28 [==============================] - 421s 15s/step - loss: 0.6339 - accuracy: 8.6501e-09 - val_loss: 0.6133 - val_accuracy: 0.0000e+00
Epoch 6/100
28/28 [==============================] - 420s 15s/step - loss: 0.6271 - accuracy: 1.7300e-08 - val_loss: 0.6132 - val_accuracy: 0.0000e+00
The training and validation loss curves flatten out after a few epochs, and the accuracy stays essentially at zero. The model is failing to predict anything useful.
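The curves come from the History object returned by model.fit, and I check a prediction on a single test tile; the code below is only a rough sketch of that inspection (the tile index 0 is just an example):

import numpy as np
import matplotlib.pyplot as plt

# Training / validation loss from the History object.
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.xlabel('epoch')
plt.legend()
plt.show()

# Predict one test tile and compare it with the target DSM.
pred = model.predict(X_test_[0:1])  # expected shape: (1, 512, 512, 1)
plt.subplot(1, 2, 1)
plt.imshow(np.squeeze(pred[0]))
plt.title('predicted DSM')
plt.subplot(1, 2, 2)
plt.imshow(np.squeeze(y_test[0]))
plt.title('target DSM')
plt.show()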
How can I solve this? How can I train the model properly?