Getting the most powerful online machine to run my DNN model
I have a model with three layers, each with 2048 nodes. The input size is 2048, and I want to train on 5×10^6 (5,000,000) samples, making the training data 5,000,000×2048.
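Before choosing a machine, it helps to estimate the sizes involved. A quick back-of-the-envelope calculation (assuming float32, i.e. 4 bytes per value) shows the model itself is tiny but the dataset is not:

```python
# Rough size estimate for the setup described above, assuming float32
# (4 bytes per value); halve the numbers for float16.
samples = 5_000_000
features = 2048
bytes_per_value = 4

# Full training set held in memory at once:
dataset_bytes = samples * features * bytes_per_value
print(f"dataset: {dataset_bytes / 1e9:.2f} GB")  # ~40.96 GB

# Three dense layers of 2048 units each (weights + biases):
params = 3 * (features * features + features)
print(f"parameters: {params:,} (~{params * bytes_per_value / 1e6:.1f} MB)")
```

So the model weights fit in any GPU easily; the constraint is streaming ~41 GB of training data, which points toward batched loading (e.g. a generator or `tf.data` pipeline) rather than raw machine power.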
How is Keras categorical cross-entropy computed?
I am using Keras categorical cross-entropy, but I want to understand exactly how it is calculated.
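As I understand the Keras implementation, for probability inputs the per-sample loss is `-sum_i y_true[i] * log(y_pred[i])`, with predictions clipped away from 0 to avoid `log(0)`, and the batch loss is the mean over samples. A minimal NumPy reproduction:

```python
import numpy as np

# Manual computation mirroring Keras categorical cross-entropy for
# probability inputs (not logits): per-sample loss is
# -sum_i y_true[i] * log(y_pred[i]), averaged over the batch.
def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # guard against log(0)
    per_sample = -np.sum(y_true * np.log(y_pred), axis=-1)
    return per_sample.mean()

y_true = np.array([[0.0, 1.0, 0.0]])   # one-hot target: class 1
y_pred = np.array([[0.05, 0.90, 0.05]])
loss = categorical_crossentropy(y_true, y_pred)
print(round(float(loss), 5))  # -log(0.9) ≈ 0.10536
```

With a one-hot target, only the predicted probability of the true class matters, so the loss reduces to `-log(p_true_class)`. Note that `from_logits=True` in Keras changes this: a softmax is applied first.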
How do I get rid of this error and unfreeze the top half of the layers for fine-tuning?
# Function to build the model
def build_model(hp):
    base_model = MobileNetV3Large(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
    base_model.trainable = False  # Freeze the base model
    x = base_model.output
    x = GlobalAveragePooling2D()(x)
    x = Dense(hp.Int('units', min_value=64, max_value=512, step=64), activation='relu')(x)
    x = Dense(1, activation='sigmoid')(x)
    model = Model(inputs=base_model.input, outputs=x)
    model.compile(optimizer=Adam(hp.Choice('learning_rate', [1e-2, 1e-3, 1e-4])),
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model

# Hyperparameter tuning […]
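For the "unfreeze the top half" part, the usual pattern is to iterate over `base_model.layers`, flip `trainable` per layer, and then re-compile the model (changing `trainable` only takes effect after `compile()`). A minimal sketch of that slicing logic, with Keras layers mocked by a tiny stand-in class so the mechanics are visible:

```python
# Sketch of unfreezing the top half of a pretrained base model's layers.
# MockLayer is a stand-in for Keras layer objects; with a real model you
# would iterate base_model.layers the same way, then call model.compile()
# again afterwards.
class MockLayer:
    def __init__(self, name):
        self.name = name
        self.trainable = False

layers = [MockLayer(f"block_{i}") for i in range(10)]  # stand-in for base_model.layers

cutoff = len(layers) // 2          # index where the "top half" begins
for layer in layers[:cutoff]:      # keep the bottom half frozen
    layer.trainable = False
for layer in layers[cutoff:]:      # unfreeze the top half
    layer.trainable = True

print(sum(l.trainable for l in layers))  # 5 of the 10 mock layers are trainable
```

One caveat with this setup: `base_model.trainable = False` is set inside `build_model`, so the unfreezing loop must run on the built model (followed by a re-compile, typically with a lower learning rate) rather than before it.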
y_true.shape=(None, None) with custom TimeseriesGenerator for multiple inputs for LSTM model
I'm building an LSTM model that requires multiple inputs (each with different features), and because of RAM constraints I have to create a custom generator for it:
ValueError: Data cardinality is ambiguous:
  x sizes: 19824, 19824
  y sizes: 45312
Make sure all arrays contain the same number of samples.
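The error says the generator is yielding 19,824 input samples against 45,312 targets in the same batch; every input array and the target array must be sliced over the same sample axis. A minimal sketch of a multi-input generator that keeps the counts aligned (in Keras you would subclass `keras.utils.Sequence` with the same `__len__`/`__getitem__`; the class name and array shapes here are illustrative):

```python
import numpy as np

# Sketch of a multi-input batch generator. The key point for the
# cardinality error: x1, x2, and y are all sliced with the SAME slice,
# so every batch yields matching sample counts.
class MultiInputGenerator:
    def __init__(self, x1, x2, y, batch_size):
        # Fail fast on mismatched sample counts instead of inside fit().
        assert len(x1) == len(x2) == len(y), "inputs and targets must share a sample count"
        self.x1, self.x2, self.y = x1, x2, y
        self.batch_size = batch_size

    def __len__(self):
        return -(-len(self.y) // self.batch_size)  # ceil division: number of batches

    def __getitem__(self, idx):
        s = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        return [self.x1[s], self.x2[s]], self.y[s]

gen = MultiInputGenerator(np.zeros((100, 5)), np.zeros((100, 3)), np.zeros(100), batch_size=32)
(x1_b, x2_b), y_b = gen[0]
print(len(gen), x1_b.shape, y_b.shape)  # 4 batches; first batch has 32 samples
```

If your x and y counts legitimately differ (e.g. 45312 = 19824 + window overlap from windowing the series), the windowing step is where they need to be reconciled before batching.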
An error occurred when I executed the following statement
How to save a keras model just for inference?
I trained a CNN model and saved it as a .keras file. Now I want other people to use it for making predictions. I am planning to deploy it with a Flask server and package the whole thing as an exe. The problem is that when I call .summary() after loading it back, the entire model architecture is visible, along with the hyperparameter values I used during training.
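One commonly suggested option (a sketch, not the only approach): don't distribute the full `.keras` file at all, since it bundles the architecture and training config that `summary()` exposes. Instead, export just the weight arrays (e.g. `model.get_weights()` saved with `np.savez`) and ship a minimal inference script that rebuilds only the forward pass. The single dense layer below is a mock stand-in for that inference code:

```python
import numpy as np

# Stand-in for a minimal inference-only forward pass. On the training
# machine you would dump model.get_weights() with np.savez; the deployed
# script loads those arrays and applies them, with no .keras file shipped.
def dense_forward(x, w, b):
    return np.maximum(x @ w + b, 0.0)  # Dense layer + ReLU

# Pretend these arrays came from np.load("weights.npz") on the client side.
rng = np.random.default_rng(0)
w, b = rng.standard_normal((4, 2)), np.zeros(2)

x = np.ones((1, 4))          # one input sample
y = dense_forward(x, w, b)
print(y.shape)               # (1, 2): a prediction served without the model file
```

Two caveats: the weight values themselves are still readable by anyone holding the files, and packaging as an exe is obfuscation rather than protection. If you need the architecture fully hidden, serving predictions from your own server (so clients only see the Flask API) is the more reliable route.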