I have a question about using lora for fine-tuning #64

Open
h0ngc opened this issue May 22, 2023 · 4 comments

Comments

@h0ngc

h0ngc commented May 22, 2023

I have trained a VITS model, but when I apply LoRA to the attention layers, fine-tuning does not work properly. Could you please tell me which layers you applied LoRA to when fine-tuning the VITS model, and what values you used for rank and alpha?
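For context, standard LoRA freezes a pretrained linear layer and learns a low-rank update scaled by alpha / r. A minimal PyTorch sketch of that idea follows; the rank and alpha values here are illustrative assumptions, not values from this repo:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Generic LoRA wrapper: freeze the base layer, learn a low-rank
    update B @ A scaled by alpha / r. r=8 and alpha=16 are illustrative
    defaults, not values taken from this repository."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the pretrained weights
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B is zero-initialized so training starts from the pretrained behavior
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling
```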

@MaxMax2016
Collaborator

There is no VITS here, just a BigVGAN. After the upsample layers, the speaker information is used to modulate x with learned weights and biases.
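Reading that description as FiLM-style conditioning (an assumption on my part; the repo's actual adaptor may differ), a minimal sketch could look like this, with hypothetical names and shapes:

```python
import torch
import torch.nn as nn

class SpeakerAdaptor(nn.Module):
    """Hypothetical sketch: project a speaker embedding to per-channel
    scale (weight) and shift (bias), applied to x after an upsample
    layer. Names and shapes are assumptions, not the repo's code."""
    def __init__(self, spk_dim: int, channels: int):
        super().__init__()
        self.to_weight = nn.Linear(spk_dim, channels)
        self.to_bias = nn.Linear(spk_dim, channels)

    def forward(self, x: torch.Tensor, spk: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time), spk: (batch, spk_dim)
        w = self.to_weight(spk).unsqueeze(-1)  # (batch, channels, 1)
        b = self.to_bias(spk).unsqueeze(-1)
        return x * w + b
```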

@h0ngc
Author

h0ngc commented May 23, 2023

Thanks for your reply. Can I ask one more thing?
While checking your repo, I noticed that you set conv_post, the activations, and speaker_adaptor to be trainable.
As I understand it, LoRA attaches low-rank linear layers to adapt existing weights, but your repo appears to fine-tune part of the model directly.
Is this some other adaptation of LoRA?
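For contrast with the LoRA sketch above, what is described here is plain selective fine-tuning: freeze everything, then re-enable gradients only for a few named modules. A minimal sketch, assuming the module names mentioned in this thread:

```python
import torch.nn as nn

def freeze_except(model: nn.Module,
                  trainable_keys=("conv_post", "speaker_adaptor")):
    """Selective fine-tuning: disable gradients everywhere, then
    re-enable them only for parameters whose names contain one of the
    given substrings. The names come from this discussion and are
    assumptions about the repo's actual module layout."""
    for name, param in model.named_parameters():
        param.requires_grad = any(k in name for k in trainable_keys)
```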

@MaxMax2016
Collaborator

MaxMax2016 commented May 23, 2023

lora_svc is not real LoRA; the name was chosen just to get SVC developers thinking about LoRA.
