Fine-tuning EfficientNetB0 with pretrained 'imagenet' weights is not reproducible (i.e., saving and loading the model) #797
Comments
I have managed to resolve part of the issue. The problem stemmed from my use of the Keras ModelCheckpoint callback (https://keras.io/api/callbacks/model_checkpoint/) during training, which was inadvertently overwriting the saved file. After removing this callback, the model's behavior aligns with expectations. However, I remain puzzled as to why the callback saves incorrect model settings. I am still exploring a possible solution, because my objective is to save the weights of the best-performing model, which is exactly what ModelCheckpoint is meant to do.
Hi @Qasim-Latrobe, I got stuck today while trying to create a full repro and found another bug that is now fixed.
If you do, please install from master. Otherwise just use keras-nightly and any other dataset. If you can't, I'll try tomorrow.
Hi @ghsanti, thanks for the prompt response. I believe there is a possible bug in the Keras ModelCheckpoint callback that triggers only with EfficientNetBx models. I am able to work around it by writing a manual ModelCheckpoint callback.
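The "manual ModelCheckpoint" workaround mentioned above is not shown in the thread; a minimal sketch of the idea is below. It assumes the goal stated earlier (keep the weights of the best-performing model) and sidesteps on-disk overwrites by holding the best weights in memory and saving once at the end of training. The class name, file path, and monitored metric are placeholders, not the author's actual code.

```python
import numpy as np
import tensorflow as tf

class ManualBestCheckpoint(tf.keras.callbacks.Callback):
    """Track the best weights in memory; write the model once when training ends."""

    def __init__(self, filepath, monitor="val_accuracy"):
        super().__init__()
        self.filepath = filepath
        self.monitor = monitor
        self.best = -np.inf
        self.best_weights = None

    def on_epoch_end(self, epoch, logs=None):
        current = (logs or {}).get(self.monitor)
        if current is not None and current > self.best:
            self.best = current
            # Snapshot weights in memory instead of writing to disk every epoch
            self.best_weights = self.model.get_weights()

    def on_train_end(self, logs=None):
        if self.best_weights is not None:
            # Restore the best weights, then save the full model a single time
            self.model.set_weights(self.best_weights)
            self.model.save(self.filepath)
```

Usage is the same as any Keras callback: pass an instance to `model.fit(..., callbacks=[ManualBestCheckpoint("best_model.keras")])`.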
This Gist uses CIFAR-10. No difference shows up, though (the last line in the image is the evaluation step):
Thank you for your efforts in understanding and reproducing the issue. I will explore resolving the problem with tf-nightly, as you suggested. At this point I have no further comments, as I have implemented a workaround. Thank you once again for your assistance.
Nightly was only used to grab a fix for the CIFAR-10 dataset (and to test whether it works); it's not mandatory. Sticking to a stable version is preferable ('nightly' is not a fixed version, so you could see breaking changes further down your project).
- tensorflow 2.16.1
- similar behavior is observed in tensorflow 2.17.0
I am encountering an issue with fine-tuning an EfficientNetB0 model that was originally pretrained on ImageNet.
Model Training and Fine-Tuning: I start with an EfficientNetB0 model pretrained on ImageNet and fine-tune it on my specific dataset.
Saving the Model: After fine-tuning, I save the model using model.save() with the .keras format.
Loading the Model: When I later load the model using load_model(), the performance of the model does not match the performance achieved during the fine-tuning phase. The results appear to be inconsistent or random.
I initialize the random states with a fixed seed for reproducibility.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import EfficientNetB0
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D

# Set seeds for reproducibility
tf.random.set_seed(42)
np.random.seed(42)

# Define and compile the model
base_model = EfficientNetB0(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)  # adjust the number of classes as needed
model = Model(inputs=base_model.input, outputs=predictions)
# Note: with a multi-class softmax output, use categorical (not binary) cross-entropy
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Train and fine-tune the model (x_train/y_train: your dataset)
model.fit(x_train, y_train, epochs=10)

# Save the fine-tuned model
model.save('fine_tuned_model.keras')

# Load the model later
loaded_model = load_model('fine_tuned_model.keras')

# Evaluate performance
results = loaded_model.evaluate(x_test, y_test)
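One way to isolate this kind of save/load discrepancy from any data-pipeline or metric effects is to compare raw predictions on the same inputs before and after the round trip. The sketch below uses a tiny stand-in model (not EfficientNetB0, to keep it fast); the filename is a placeholder:

```python
import numpy as np
import tensorflow as tf

# Small stand-in model; the same check applies to a fine-tuned EfficientNetB0
model = tf.keras.Sequential([tf.keras.Input(shape=(8,)),
                             tf.keras.layers.Dense(4, activation="softmax")])
x = np.random.rand(5, 8).astype("float32")

before = model.predict(x, verbose=0)
model.save("roundtrip_check.keras")  # placeholder filename
reloaded = tf.keras.models.load_model("roundtrip_check.keras")
after = reloaded.predict(x, verbose=0)

# If serialization is faithful, outputs agree to floating-point tolerance
assert np.allclose(before, after, atol=1e-6), "save/load round trip diverged"
```

If this assertion fails for EfficientNetB0 but passes for other architectures, that points at serialization of something architecture-specific (e.g. layer state that is not part of `get_weights()`), rather than at the training or evaluation code.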
I have observed that when saving the model weights in HDF5 format (.h5) and subsequently loading them within the same session, the validation performance is consistently reproduced. However, when the .h5 weights are loaded in a different session, the validation performance does not match the original results and returns seemingly random accuracies.
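For the weights-only (.h5) path described above, one thing worth checking in the new session is that the architecture is rebuilt identically before `load_weights`; a silent layer mismatch can produce exactly this "random accuracy" symptom. A minimal sketch of the pattern, with placeholder shapes and filename (Keras 3 expects the `.weights.h5` suffix for weights-only files):

```python
import numpy as np
import tensorflow as tf

def build_model():
    # Must reproduce exactly the architecture that produced the weights
    inputs = tf.keras.Input(shape=(8,))
    outputs = tf.keras.layers.Dense(4, activation="softmax")(inputs)
    return tf.keras.Model(inputs, outputs)

# Session 1 (conceptually): train, then save weights only
model = build_model()
model.save_weights("weights_only.weights.h5")  # placeholder filename

# Session 2: rebuild the same graph, then load
fresh = build_model()
fresh.load_weights("weights_only.weights.h5")

x = np.random.rand(3, 8).astype("float32")
assert np.allclose(model.predict(x, verbose=0), fresh.predict(x, verbose=0))
```

If the rebuilt graph matches and this still fails only for EfficientNetBx, the mismatch is more likely in how that architecture's non-trainable state is serialized.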
Additionally, when using EfficientNetB0 with weights=None (no pretrained weights), the model's performance remains consistent regardless of the session, and its results are reproducible.
Other models such as ResNet50 and VGG16 run as expected; only the EfficientNetBx models exhibit this issue.