I would like to implement a DANN (domain-adversarial neural network) using the ADAPT package for Python; however, there seems to be an issue with the input shape of the task classifier or the discriminator.
Below, you can find my code.
As input I am using EEG data, meaning that the arrays have the following shapes:
Xs: (254, 60, 257, 1) (meaning: (#trials, #channels, #timepoints, 1))
Xt: (255, 60, 257, 1)
ys: (254,)
yt: (255,)
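For anyone who wants to reproduce the shapes without my actual recordings, the arrays can be mocked with random placeholder data (these are hypothetical stand-ins, not my real EEG data):

```python
import numpy as np

rng = np.random.default_rng(304)

# Placeholder arrays with the same shapes as my EEG data
Xs = rng.standard_normal((254, 60, 257, 1))  # source: (#trials, #channels, #timepoints, 1)
Xt = rng.standard_normal((255, 60, 257, 1))  # target
ys = rng.integers(0, 4, size=254)            # source labels (4 classes)
yt = rng.integers(0, 4, size=255)            # target labels

print(Xs.shape, Xt.shape, ys.shape, yt.shape)
```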
I want to use EEGNet as the encoder (feature extractor) and have also defined the task classifier and the discriminator. I have not yet done anything fancy with the optimizers and have not run a hyperparameter search either, as I keep getting the same error when trying to run the code.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Activation, AveragePooling2D,
                                     BatchNormalization, Conv2D, Dense,
                                     DepthwiseConv2D, Dropout, Flatten, Input,
                                     SeparableConv2D, SpatialDropout2D)
from tensorflow.keras.constraints import max_norm
from tensorflow.keras.optimizers import SGD
from adapt.feature_based import DANN

channels = Xs.shape[1]
timepoints = Xs.shape[2]

def get_EEGNet_feature_encoder(Chans=channels, Samples=timepoints,
                               dropoutRate=0.3, kernLength=64, F1=6,
                               D=1, F2=6, dropoutType='Dropout'):
    if dropoutType == 'SpatialDropout2D':
        dropoutType = SpatialDropout2D
    elif dropoutType == 'Dropout':
        dropoutType = Dropout
    else:
        raise ValueError('dropoutType must be one of SpatialDropout2D '
                         'or Dropout, passed as a string.')
    model = Sequential()
    model.add(Conv2D(F1, (1, kernLength), padding='valid',
                     input_shape=(Chans, Samples, 1),
                     use_bias=False))
    model.add(BatchNormalization())
    model.add(DepthwiseConv2D((Chans, 1), use_bias=False,
                              depth_multiplier=D,
                              depthwise_constraint=max_norm(1.),
                              padding='valid'))
    model.add(BatchNormalization())
    model.add(Activation('elu'))
    model.add(AveragePooling2D((1, 4)))
    model.add(dropoutType(dropoutRate))
    model.add(SeparableConv2D(F2, (1, 16),
                              use_bias=False, padding='valid'))
    model.add(BatchNormalization())
    model.add(Activation('elu'))
    model.add(AveragePooling2D((1, 8)))
    model.add(dropoutType(dropoutRate))
    model.add(Flatten())
    model.compile(optimizer=SGD(0.01), loss='mse')
    return model
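To double-check what the Flatten layer should emit, I traced the shape arithmetic of the encoder by hand (a sketch, assuming the standard formulas: out = in - kernel + 1 for 'valid' convolutions and floor division for pooling):

```python
# Shape bookkeeping for the encoder above, all layers with 'valid' padding.
chans, samples = 60, 257
F1, kern_length, D, F2 = 6, 64, 1, 6

# Conv2D (1, 64): time axis shrinks, channel axis untouched
h, w, c = chans, samples - kern_length + 1, F1  # (60, 194, 6)
# DepthwiseConv2D (Chans, 1): collapses the spatial channel axis
h, w, c = h - chans + 1, w, c * D               # (1, 194, 6)
# AveragePooling2D (1, 4): floor division on the time axis
w //= 4                                         # (1, 48, 6)
# SeparableConv2D (1, 16)
w, c = w - 16 + 1, F2                           # (1, 33, 6)
# AveragePooling2D (1, 8)
w //= 8                                         # (1, 4, 6)

flat = h * w * c
print(flat)  # size of the Flatten output → 24
```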
def get_EEGNet_task(input_shape, nb_classes=4, norm_rate=0.25):
    model = Sequential()
    model.add(Input(shape=input_shape))
    model.add(Dense(nb_classes, name='task_dense',
                    kernel_constraint=max_norm(norm_rate),
                    activation='softmax'))
    model.compile(optimizer=SGD(0.01), loss='mse')
    return model

def get_EEGNet_discriminator(input_shape):
    model = Sequential()
    model.add(Input(shape=input_shape))
    model.add(Dense(10, activation='elu'))
    model.add(Dense(1, name='disc_dense', activation='sigmoid'))
    model.compile(optimizer=SGD(0.01), loss='mse')
    return model
encoder = get_EEGNet_feature_encoder()
encoder_output_shape = encoder.output_shape[1:]

model = DANN(encoder=get_EEGNet_feature_encoder(),
             task=get_EEGNet_task(input_shape=encoder_output_shape),
             discriminator=get_EEGNet_discriminator(input_shape=encoder_output_shape),
             lambda_=0.01, Xt=Xt, metrics=["acc"],
             optimizer=SGD(0.001), random_state=304)

# Train the model
model.fit(Xs, ys, Xt, epochs=100)

# Evaluate the model
model.score(Xt, yt)
Error message when executing model.fit():
Input to reshape is a tensor with 128 values, but the requested shape has 32
[[{{node Reshape}}]] [Op:__inference_train_function_3508]
I cannot even say whether the issue lies with the task classifier or the discriminator (it could also be both). I thought I had solved it by extracting the encoder's output shape and using it as the input shape for the task classifier and discriminator, but apparently not. Removing the Flatten layer from the encoder did not work either.
I'd be happy if someone could help me out here. My question might be easy to solve, but when you stare at your own code long enough, you go blind to your own mistakes.
Thanks in advance for any advice!