Add oobabooga text generation webui api completer #13
Conversation
Just opened this; will see what I can do in the meantime.
Yes, thanks, @rizerphe! It was a real problem.
Just opened this as well; currently working under the assumption that my changes get accepted.
Overall this seems to be working for me. I hope the text generation web UI gets CORS support; beyond that, the only real problem left is model switching. I don't like the idea of it in general, and I like even less how clumsily I had to implement it. Oobabooga only allows one model to be loaded at a time, and switching takes a long time, which isn't convenient. For now, before every generate call I check which model is loaded (through another API call) and load the one I need if necessary. A smarter approach is possible, but it would require a bit of a rewrite.
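The check-then-switch logic described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: the base URL, the `/api/v1/model` endpoint, and the `{"action": "load", "model_name": ...}` payload shape are assumptions about the web UI's API, so adjust them to whatever the server actually exposes.

```python
import json
import urllib.request

# Hypothetical default address of a locally running text generation web UI.
API_BASE = "http://localhost:5000/api/v1"


def get_current_model(base: str = API_BASE) -> str:
    """Ask the web UI which model is currently loaded (the extra API call)."""
    with urllib.request.urlopen(f"{base}/model") as resp:
        return json.load(resp)["result"]


def load_model(name: str, base: str = API_BASE) -> None:
    """Request a model switch; this blocks while the UI reloads weights."""
    payload = json.dumps({"action": "load", "model_name": name}).encode()
    req = urllib.request.Request(
        f"{base}/model",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req).close()


def ensure_model(name: str, base: str = API_BASE) -> None:
    """Run before every generate call: switch only if the wrong model is loaded,
    since switching is expensive but the status check is cheap."""
    if get_current_model(base) != name:
        load_model(name, base)
```

The upside of this design is that the caller never has to think about which model is loaded; the downside is one extra status request per generation, plus an unpredictable delay whenever a switch actually happens.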
Quick note: oobabooga merged my pull request, so you no longer need to use my fork.
Connects to local models: LLaMA (including llama.cpp) and local GPT derivatives via oobabooga's text generation webui
Closes #11