
Add llama-cpp-python server #452

Open · @ericcurtin wants to merge 1 commit into main from llama-cpp-python

Conversation

@ericcurtin (Collaborator) commented Nov 13, 2024

Added 'llama-cpp-python' as a runtime option for the --runtime flag and changed the default runtime from 'llama.cpp' to 'llama-cpp-python', giving users more choice in how models are served.

Summary by Sourcery

Add 'llama-cpp-python' as a new runtime option and set it as the default runtime, enhancing flexibility in model serving.

New Features:

  • Introduce 'llama-cpp-python' as a new runtime option for serving models, providing more flexibility with the '--runtime' flag.

Enhancements:

  • Change the default runtime from 'llama.cpp' to 'llama-cpp-python' for improved flexibility.
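
As a rough sketch of the ramalama/cli.py side of this change (the exact wiring below is an illustrative assumption, not the PR's verbatim diff), the flag could be defined along these lines:

    import argparse

    parser = argparse.ArgumentParser(prog="ramalama")
    # 'llama-cpp-python' joins the existing runtimes and becomes the default
    parser.add_argument(
        "--runtime",
        default="llama-cpp-python",
        choices=["llama-cpp-python", "llama.cpp", "vllm"],
        help="specify the runtime to use; valid options are "
        "'llama-cpp-python', 'llama.cpp' and 'vllm'",
    )

With something like this in place, users can opt back into the previous behavior with ramalama --runtime llama.cpp serve <model> (exact invocation may differ).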

@ericcurtin self-assigned this Nov 13, 2024
sourcery-ai bot commented Nov 13, 2024

Reviewer's Guide by Sourcery

This PR changes the default runtime from 'llama.cpp' to 'llama-cpp-python' and adds support for the 'llama-cpp-python' server implementation. The changes involve modifying the server execution logic and updating the CLI configuration to accommodate the new runtime option.

Sequence diagram for server execution logic

sequenceDiagram
    participant User
    participant CLI
    participant Model
    User->>CLI: Run command with --runtime flag
    CLI->>Model: Pass runtime argument
    alt Runtime is vllm
        Model->>CLI: Execute vllm server
    else Runtime is llama.cpp
        Model->>CLI: Execute llama-server
    else Runtime is llama-cpp-python
        Model->>CLI: Execute llama_cpp.server
    end
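
Reconstructing from the diagram, the dispatch in ramalama/model.py could look roughly like the sketch below. Treat it as an illustration under assumptions (argument names, port handling, and the exec mechanics are guesses rather than the PR's actual code); the grounded detail is that the llama-cpp-python runtime launches the llama_cpp.server module:

    import os
    import sys

    def serve(args, model_path):
        # Build the server command for the selected runtime.
        if args.runtime == "vllm":
            exec_args = ["vllm", "serve", "--port", str(args.port), model_path]
        elif args.runtime == "llama.cpp":
            exec_args = ["llama-server", "--port", str(args.port), "-m", model_path]
        else:
            # llama-cpp-python ships its OpenAI-compatible server as a
            # runnable module
            exec_args = [sys.executable, "-m", "llama_cpp.server",
                         "--port", str(args.port), "--model", model_path]
        # Replace the current process with the server process.
        os.execvp(exec_args[0], exec_args)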

Updated class diagram for runtime configuration

classDiagram
    class CLI {
        -runtime: String
        +configure_arguments(parser)
    }
    class Model {
        +serve(args)
    }
    CLI --> Model: uses
    note for CLI "Updated default runtime to 'llama-cpp-python' and added it as a choice"

File-Level Changes

ramalama/cli.py: Added llama-cpp-python as a new runtime option and made it the default
  • Added 'llama-cpp-python' as a new choice in runtime options
  • Changed default runtime from 'llama.cpp' to 'llama-cpp-python'
  • Updated help text to include the new runtime option

ramalama/model.py: Implemented server execution logic for llama-cpp-python runtime
  • Restructured server execution logic to handle three different runtimes
  • Added specific command construction for the llama-cpp-python server
  • Maintained existing logic for vllm and llama.cpp runtimes


sourcery-ai bot left a comment

Hey @ericcurtin - I've reviewed your changes - here's some feedback:

Overall Comments:

  • Please provide justification for changing the default runtime from llama.cpp to llama-cpp-python. What are the benefits that led to this decision?
  • Consider updating any additional documentation to reflect the new runtime option and explain the differences between the available runtimes
Here's what I looked at during the review
  • 🟢 General issues: all looks good
  • 🟢 Security: all looks good
  • 🟢 Testing: all looks good
  • 🟢 Complexity: all looks good
  • 🟢 Documentation: all looks good


@ericcurtin marked this pull request as draft November 13, 2024 19:24
@ericcurtin marked this pull request as ready for review November 15, 2024 13:02
sourcery-ai bot left a comment

We have skipped reviewing this pull request. It looks like we've already reviewed the commit 47fdf9a in this pull request.

@ericcurtin force-pushed the llama-cpp-python branch 4 times, most recently from a401718 to c4f91bc on November 15, 2024 16:41
Changed default runtime from 'llama.cpp' to 'llama-cpp-python'.
Added 'llama-cpp-python' as a runtime option for better
flexibility with the `--runtime` flag.

Signed-off-by: Eric Curtin <[email protected]>
@ygalblum (Collaborator) left a comment

LGTM for the code.
But I think I'm missing something: is llama-cpp-python already a dependency?

@ericcurtin (Collaborator, Author) commented

@ygalblum we probably need to push some container images before merging this, but when we do that, we should be all good.
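
On the dependency question: a minimal sketch, assuming the check happens on the serve path, of how the CLI could fail early when llama-cpp-python is missing (the function name and error text are hypothetical, and this is not part of the PR):

    import importlib.util
    import sys

    def ensure_llama_cpp_python():
        # The pip package 'llama-cpp-python' imports as 'llama_cpp';
        # find_spec returns None when the module cannot be imported.
        if importlib.util.find_spec("llama_cpp") is None:
            print("llama-cpp-python is not installed; install it or use a "
                  "container image that bundles it", file=sys.stderr)
            sys.exit(1)

Inside container images that bundle the package, the check simply passes.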
