I’m trying to fine-tune the InceptionV3 model on my dataset, with the layers from the 250th layer onward unfrozen, and I’m using the RMSprop optimizer. During training there is a huge gap between the training loss and the validation loss, and the model overfits. How can I reduce the validation loss and improve the validation accuracy during training?
I have a grayscale image dataset with 4 classes, split into training and validation sets:
- 26115 images for training
- 6528 images for validation
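For context, `inception_base` (used in the model below) is the InceptionV3 convolutional base with everything before layer 250 frozen and layer 250 onward left trainable, roughly like this (a sketch of the setup described above; `weights=None` keeps the snippet self-contained offline, in practice it would be `weights='imagenet'`, and the input shape here is an assumption):

```python
import tensorflow as tf

# InceptionV3 convolutional base without the ImageNet classifier head.
# weights=None avoids downloading pretrained weights in this sketch;
# the real setup would use weights='imagenet'.
inception_base = tf.keras.applications.InceptionV3(
    include_top=False,
    weights=None,
    input_shape=(299, 299, 3),  # assumed; InceptionV3 expects 3 channels
)

# Freeze layers before index 250; fine-tune from layer 250 onward.
for layer in inception_base.layers[:250]:
    layer.trainable = False
for layer in inception_base.layers[250:]:
    layer.trainable = True
```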
Following is the code:
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import BatchNormalization, Flatten, Dense, Dropout
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
from tensorflow.keras.optimizers import RMSprop

datagen = ImageDataGenerator(
    rescale=1.0 / 255.0,
    validation_split=0.2
)

train_generator = datagen.flow_from_directory(
    data_dir,
    target_size=(img_width, img_height),
    batch_size=100,
    color_mode='grayscale',
    class_mode='categorical',
    shuffle=True,
    subset='training',
    seed=10
)

validation_generator = datagen.flow_from_directory(
    data_dir,
    target_size=(img_width, img_height),
    batch_size=10,
    color_mode='grayscale',
    class_mode='categorical',
    shuffle=False,
    subset='validation'
)
model = Sequential()
# 1->3 channel stem so the grayscale input matches InceptionV3's 3-channel input
model.add(tf.keras.layers.Conv2D(3, (3, 3), activation='relu', padding='same', input_shape=(img_width, img_height, 1)))
model.add(BatchNormalization())
model.add(inception_base)
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dense(512, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(4, activation='softmax'))
# In TF2 the tracked metric is named 'val_accuracy' (not 'val_acc'),
# and ModelCheckpoint's 'period' argument is deprecated.
checkpoint = ModelCheckpoint("inception_v3_1.h5", monitor='val_accuracy', verbose=2, save_best_only=True, save_weights_only=False, mode='auto')
early = EarlyStopping(monitor='val_accuracy', min_delta=0, patience=3, verbose=2, mode='auto')
model.compile(
    loss='categorical_crossentropy',
    optimizer=RMSprop(learning_rate=0.0001),
    metrics=['accuracy']
)
history = model.fit(
    train_generator,
    epochs=epochs,
    validation_data=validation_generator,
    callbacks=[checkpoint, early]
)