I have a problem in Keras where I train two separate models one after the other. However, the second model seems to reuse the trained state of the first model.
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers

def create_model():
    image_size = 150
    input_shape = (image_size, image_size, 3)
    pre_trained_model = VGG16(input_shape=input_shape, include_top=False, weights="imagenet")
    # Freeze the first 15 layers, fine-tune the rest
    for layer in pre_trained_model.layers[:15]:
        layer.trainable = False
    for layer in pre_trained_model.layers[15:]:
        layer.trainable = True
    last_layer = pre_trained_model.get_layer('block5_pool')
    last_output = last_layer.output
    ........
    model = Model(pre_trained_model.input, x)
    model.compile(loss='binary_crossentropy',
                  optimizer=optimizers.SGD(learning_rate=1e-4, momentum=0.9),
                  metrics=['acc'])
    model.summary()
    return model
Init:
model = create_model()
Datagen:
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
)
train_generator = train_datagen.flow_from_directory(
    data_path+"/training_set",
    target_size=(150, 150),
    batch_size=5,
    class_mode='binary'
)
test_datagen = ImageDataGenerator(rescale=1./255)
validation_generator = test_datagen.flow_from_directory(
    data_path+"/test_set",
    target_size=(150, 150),
    batch_size=5,
    class_mode='binary'
)
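As a side note, steps_per_epoch=70 with batch_size=5 implies roughly 350 training images per epoch. A quick sanity check of that arithmetic (using a hypothetical sample count of 350; in Keras you can read the real values from train_generator.samples, or just use len(train_generator)) would be:

```python
import math

# Hypothetical counts for illustration -- with a real Keras generator,
# read train_generator.samples and the batch_size you passed in.
samples = 350
batch_size = 5

# One epoch should cover every image once: ceil(samples / batch_size) steps.
steps_per_epoch = math.ceil(samples / batch_size)
print(steps_per_epoch)  # 70
```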
Training:
history = model.fit(
    train_generator,
    steps_per_epoch=70,  # number of batches per epoch
    epochs=20,
    validation_data=validation_generator,
    validation_steps=20  # number of validation batches
)
After the first training run, I initialize everything in the same way:
model1 = create_model()
…….
history1 = model1.fit(
    train_generator,
    steps_per_epoch=70,  # number of batches per epoch
    epochs=20,
    validation_data=validation_generator,
    validation_steps=20  # number of validation batches
)
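One detail worth flagging: both fit() calls consume the same train_generator object. Keras directory iterators loop indefinitely, so no data goes missing, but any Python iterator shared between two consumers keeps its position, so the second run resumes at whatever batch the first run stopped at. A minimal sketch of that behavior (plain Python, no Keras) is:

```python
# A shared iterator keeps its position between consumers: the second
# "training run" below resumes where the first one stopped.
def batches():
    for i in range(10):
        yield i

shared = batches()
first_run = [next(shared) for _ in range(4)]   # "first fit" consumes 4 batches
second_run = [next(shared) for _ in range(4)]  # "second fit" resumes at batch 4
print(first_run, second_run)  # [0, 1, 2, 3] [4, 5, 6, 7]
```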
This happens even though I call
tf.keras.backend.clear_session()
and instantiate each model in a separate function. In the second run I already reach an acc of 1.00 by the second epoch.
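To rule out weight leakage, one way to check is to compare the second model's initial weights against the first model's post-training weights. A minimal sketch of that idea, using a tiny stand-in model instead of the VGG16 setup and tf.keras.utils.set_random_seed (TF ≥ 2.7; the seed value and model architecture here are illustrative assumptions, not from the original code):

```python
import numpy as np
import tensorflow as tf

def fresh_model(seed):
    # Reset Keras' global state before building a new model.
    tf.keras.backend.clear_session()
    # Fix Python/NumPy/TF seeds so both builds initialize identically.
    tf.keras.utils.set_random_seed(seed)
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(8,)),
        tf.keras.layers.Dense(4, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(loss="binary_crossentropy", optimizer="sgd")
    return model

m1 = fresh_model(seed=1)
w_init = [w.copy() for w in m1.get_weights()]
m1.fit(np.random.rand(16, 8), np.random.randint(0, 2, 16), epochs=1, verbose=0)

m2 = fresh_model(seed=1)
# If the second model truly starts fresh, its weights match m1's
# *initial* weights, not m1's trained weights.
same_init = all(np.allclose(a, b) for a, b in zip(w_init, m2.get_weights()))
print(same_init)
```

With the VGG16 setup above, both models are re-initialized from the same ImageNet weights anyway, so identical starting weights between runs would be expected and not by itself a sign of leakage.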
How can I solve this?