loss_weights depending on epoch number #20294

LucaSCostanzo opened this issue Sep 26, 2024 · 2 comments

@LucaSCostanzo

Hi,
I'm trying to train a multi-output NN, and I need to change the weight of each loss component depending on the epoch number. In previous versions of Keras I implemented this mechanism by defining each weight in the loss_weights parameter as a K.variable(), and changing its value with K.set_value() in the on_epoch_begin() method of a custom callback.
In Keras 3.4 this is no longer allowed, as loss_weights must be a list/dict of floats, so within the callback I can't change the values in place with K.set_value(). Is there something I can do to overcome this issue?
Thanks

@mehtamansi29
Collaborator

Hi @LucaSCostanzo -

Can you share sample code that reproduces the error?

@LucaSCostanzo
Author

Hi @mehtamansi29,
thanks for the answer.
Here is an example of what I meant.
This is the custom callback to update the weights after each epoch:

import tensorflow.keras.backend as K
from tensorflow.keras import callbacks

class WeightAdjuster(callbacks.Callback):
    def __init__(self, weights: dict):
        super().__init__()
        # References to the K.variable() handles passed to compile()
        self.w1 = weights['loss1']
        self.w2 = weights['loss2']

    def on_epoch_end(self, epoch, logs=None):
        # Mutate the variables in place; the new values are used next epoch
        K.set_value(self.w1, K.get_value(self.w1) + 1)
        K.set_value(self.w2, K.get_value(self.w2) - 1)

and the training pipeline I used in a previous version of Keras:

loss = {'loss1': BinaryCrossentropy(),  # or BinaryFocalCrossentropy()
        'loss2': BinaryCrossentropy()}
loss_weights = {'loss1': K.variable(1.),
                'loss2': K.variable(0.)}
metrics = {'loss1': 'accuracy',
           'loss2': 'accuracy'}
nn.compile(loss=loss, loss_weights=loss_weights, optimizer=optimizer, metrics=metrics)
nn.fit(training_generator, epochs=epochs, validation_data=validation_generator, steps_per_epoch=None)
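(The fit() call above omits the callback registration; presumably it was wired in via something like callbacks=[WeightAdjuster(loss_weights)].)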

But now this is not allowed, and I receive this message:

ValueError: For a model with multiple outputs, when providing the `loss_weights` argument as a dict, each dict entry should be a Python float (the weighting coefficient corresponding to the loss for that output). At key 'loss1', received invalid type:
<tf.Variable 'Variable:0' shape=() dtype=float32>

So now I'm forced to define loss_weights and the callback as:

loss_weights = {'loss1': 1., 'loss2': 0.}

class WeightAdjuster(callbacks.Callback):
    def __init__(self, weights: dict):
        super().__init__()
        self.w1 = weights['loss1']
        self.w2 = weights['loss2']

    def on_epoch_end(self, epoch, logs=None):
        # Only rebinds the attributes; the floats compiled into the model are untouched
        self.w1 = self.w1 + 1
        self.w2 = self.w2 - 1

but now changing w1 and w2 has no effect on the loss_weights dict: Python floats are immutable, so rebinding the callback's attributes never reaches the values that were captured at compile time.

I hope the issue is clearer now.
Thanks
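
For anyone hitting the same wall, here is one possible workaround: an untested sketch, not an official Keras API, assuming the TensorFlow backend. The idea is to fold each weight into the loss function itself as a non-trainable tf.Variable, so the loss_weights argument is not needed at all and the variables can be mutated from a callback with assign(). The names weighted_bce and WeightAdjuster are illustrative, and nn, optimizer, metrics, training_generator and validation_generator are reused from the snippets above:

import tensorflow as tf
from tensorflow import keras

# Sketch (assumed approach, untested): non-trainable variables hold the
# current weight of each loss term and are captured by the loss closures.
w1 = tf.Variable(1.0, trainable=False, dtype=tf.float32)
w2 = tf.Variable(0.0, trainable=False, dtype=tf.float32)

def weighted_bce(weight):
    # Binary cross-entropy scaled by a mutable tf.Variable
    def loss_fn(y_true, y_pred):
        return weight * keras.losses.binary_crossentropy(y_true, y_pred)
    return loss_fn

class WeightAdjuster(keras.callbacks.Callback):
    def __init__(self, w1, w2):
        super().__init__()
        self.w1 = w1
        self.w2 = w2

    def on_epoch_end(self, epoch, logs=None):
        # assign() mutates the variables the loss closures captured,
        # so the new weights take effect from the next epoch
        self.w1.assign(self.w1 + 1.0)
        self.w2.assign(self.w2 - 1.0)

loss = {'loss1': weighted_bce(w1), 'loss2': weighted_bce(w2)}
nn.compile(loss=loss, optimizer=optimizer, metrics=metrics)
nn.fit(training_generator, epochs=epochs,
       validation_data=validation_generator,
       callbacks=[WeightAdjuster(w1, w2)])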
