
[Q]: Quantize a Pre-Trained Model Using QLoRA or LoRA (PEFT Technique) #13

Open
deep-matter opened this issue Aug 19, 2023 · 0 comments
deep-matter commented Aug 19, 2023

Hey all, I hope you are having a good day.
I would like to ask a question, please:
Q: How do I quantize a pre-trained model using QLoRA or LoRA (the PEFT technique)?
Specifically, how can I use QLoRA or parameter-efficient fine-tuning with a model that is not registered on Hugging Face, but is instead based on OFA?

Here is the repo of the model: GitHub

I am trying to quantize the Tiny version, but I don't know whether I need LoRA, or in which way to apply it for parameter-efficient fine-tuning.
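For context on what QLoRA-style fine-tuning means independent of the Hugging Face PEFT library: the frozen pre-trained weights are stored quantized, and only small low-rank adapter matrices are trained. Below is a minimal NumPy sketch of that idea for a single linear layer. All names and dimensions here are made up for illustration, and int8 quantization is used as a simplification (QLoRA proper uses 4-bit NF4 with blockwise scales); this is a sketch of the technique, not the actual PEFT or OFA implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 16, 2, 4  # illustrative layer sizes and LoRA rank

# Frozen pre-trained weight, stored quantized (symmetric int8 here for
# illustration; QLoRA proper uses 4-bit NF4 with blockwise quantization).
W = rng.standard_normal((d_out, d_in)).astype(np.float32)
scale = np.abs(W).max() / 127.0
W_q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)

# Trainable low-rank adapters: B is initialized to zero so the adapted
# layer initially matches the frozen base layer exactly.
A = (rng.standard_normal((r, d_in)) * 0.01).astype(np.float32)
B = np.zeros((d_out, r), dtype=np.float32)

def lora_forward(x):
    # Dequantize the frozen weight on the fly, then add the scaled
    # low-rank update: y = x W^T + (alpha / r) * x A^T B^T
    W_deq = W_q.astype(np.float32) * scale
    return x @ W_deq.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.standard_normal((1, d_in)).astype(np.float32)
y = lora_forward(x)  # during fine-tuning, only A and B would receive gradients
```

Since this only relies on wrapping individual linear layers, the same pattern can in principle be applied to any PyTorch model (such as one based on OFA) without it being registered on Hugging Face, by replacing the attention/projection `nn.Linear` modules with an adapted equivalent.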
