Local LLM issue #382
-
Hey there, @chemical-accuracy! I'm here to help you with any bugs, questions, or contributions while we wait for a human maintainer. I found a similar issue that might help you resolve the error you're encountering. Here is the expected structure of `local_llm_config`:

```python
local_llm_config = dict(
    model_list=[
        dict(
            model_name="my_llm_model",
            litellm_params=dict(
                model="my-llm-model",
                api_base="http://localhost:8080/v1",
                api_key="sk-no-key-required",
                temperature=0.1,
                frequency_penalty=1.5,
                max_tokens=512,
            ),
        )
    ],
    router_kwargs=dict(
        some_router_param="value"
    ),
)
```

Additionally, there was a similar closed issue where the solution involved ensuring the correct structure and parameters for the local LLM configuration [1]. Try updating your `local_llm_config` to match this structure.
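Since this dictionary follows litellm's Router `model_list` format, one way to narrow down the problem is to exercise the same `model_list` with a litellm `Router` directly, outside the library. The sketch below is only a sanity check under a few assumptions: the llamafile server is listening on `http://localhost:8080/v1`, it was started with the `-a my-llm-model` alias, and, depending on your litellm version, the `model` field may need an explicit provider prefix such as `"openai/my-llm-model"` to route to an OpenAI-compatible endpoint.

```python
# Minimal sanity check of the same model_list against the local llamafile server.
# Assumptions: server on http://localhost:8080/v1, started with `-a my-llm-model`;
# some litellm versions require "openai/my-llm-model" instead of "my-llm-model".
from litellm import Router

model_list = [
    dict(
        model_name="my_llm_model",
        litellm_params=dict(
            model="my-llm-model",
            api_base="http://localhost:8080/v1",
            api_key="sk-no-key-required",
            temperature=0.1,
            frequency_penalty=1.5,
            max_tokens=512,
        ),
    )
]

router = Router(model_list=model_list)
response = router.completion(
    model="my_llm_model",  # must match model_name above
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

If this call succeeds but the library still fails, the problem is likely in how the configuration is passed to it rather than in the local server.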
-
Hi there,
I am trying to use a local Llama model, so I downloaded llava-v1.5-7b-q4.llamafile (4.29 GB), made it executable, and ran:

```sh
./llava-v1.5-7b-q4.llamafile -cb -np 4 -a my-llm-model --embedding
```
I can see the LLM running in the browser and it seems to work.
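For reference, a request like the following should succeed against llamafile's OpenAI-compatible endpoint; this is just a sketch assuming the default port 8080 and the `my-llm-model` alias passed via `-a`:

```python
# Quick check that the llamafile server answers on its OpenAI-compatible endpoint.
# Assumes the default port 8080 and the alias passed with `-a my-llm-model`.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    headers={"Authorization": "Bearer sk-no-key-required"},
    json={
        "model": "my-llm-model",
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```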
However, when I try to run the example code:
I get the following error: