Set gpu per model #3085
Closed
Conversation
…ilar to Text2Image and Image2Video (livepeer#3092)
…3093) This commit ensures that the I2I pipeline latency score calculation now considers the number of images.
…ivepeer#3099) This commit adds support for the `num_inference_steps` parameter to the I2I, I2V, and upscale pipelines. It also fixes an incorrect latencyScore calculation for the bytedance model.
* Add speech-to-text pipeline; refactor processAIRequest and handleAIRequest to allow for various response types
* Pin gomod to ai-runner for testing
* Revert "Pin gomod to ai-runner for testing" (reverts commit d4ba500)
* Update go mod dep for ai-worker
* Calculate pixel value of audio file
* Fix go-mod deps
* Adjust price calculation
* One second per pixel
* Cleanup, fix missing duration
* Add supported file types, calculate price by milliseconds
* Add bad request response for unsupported file types
* Update name of function
* Update go mod to ai-runner
* Use ffmpeg to get duration
* Update install_ffmpeg.sh to parse audio better
* Check for audio codec instead of video codec
* gomod edits
* Add docker file
* Update install_ffmpeg.sh to improve audio support, add duration validation and logging, pin lpms
* Rename speech-to-text to audio-to-text
* Update go-mod
* Cleanup
* Update go mod
* Remove comment
* Update gomod
* Update lpms mod
* Update to latest lpms
* Update lpms
* feat(ai): apply code improvements to AudioToText pipeline. This commit applies several code improvements to the AudioToText codebase.
* Remove unnecessary logic
* Remove unused error
* Fix missing err
* Update go.mod and tidy
* chore(ai): update ai-worker and lpms to latest version. This commit ensures that ai-worker and lpms are at the latest versions, which contain the changes needed for the audio-to-text pipeline.

Co-authored-by: 0xb79orch <[email protected]>
Co-authored-by: Rick Staa <[email protected]>
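The duration-based pricing described above ("one second per pixel", later refined to pricing by milliseconds) can be sketched as follows. This is a minimal illustration of the idea, not the actual go-livepeer implementation; the function names and the wei price are hypothetical:

```go
package main

import "fmt"

// audioDurationToPixels maps audio duration onto the existing
// per-pixel pricing machinery: one "pixel" per millisecond,
// as the commit notes describe. Illustrative only.
func audioDurationToPixels(durationMs int64) int64 {
	return durationMs
}

// jobPrice computes a total price from duration and a per-unit price.
func jobPrice(durationMs int64, pricePerUnit int64) int64 {
	return audioDurationToPixels(durationMs) * pricePerUnit
}

func main() {
	// A 90-second clip priced at 100 wei per unit.
	fmt.Println(jobPrice(90_000, 100))
}
```

Reusing the pixel-count abstraction lets audio jobs flow through the same payment code paths as image and video jobs without a new pricing dimension.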
* Add gateway metric for roundtrip AI times by model and pipeline
* Rename metrics and add unique manifest
* Fix name mismatch
* modelsRequested not working correctly
* feat: add initial POC AI gateway metrics. This commit adds the initial AI gateway metrics so that they can be reviewed by others. The code still needs to be cleaned up and the buckets adjusted.
* feat: improve AI metrics. This commit improves the AI metrics so that they are easier to work with.
* feat(ai): log no capacity error to metrics. This commit ensures that an error is logged when the Gateway could not find orchestrators for a given model and capability.
* feat(ai): add TicketValueSent and TicketsSent metrics. This commit ensures that the `ticket_value_sent` and `tickets_sent` metrics are also created for an AI Gateway.
* fix(ai): ensure that AI metrics have orch address label. This commit ensures that the AI gateway metrics contain the orch address label.
* fix(ai): fix incorrect Gateway pricing metric. This commit ensures that the AI job pricing is calculated correctly and cleans up the codebase.
* refactor(ai): remove Orch label from ai_request_price metric. This commit removes the Orch label from the `ai_request_price` metric since that information is better retrieved from another endpoint.

Co-authored-by: Elite Encoder <[email protected]>
This commit adds the gateway metrics to the Audio-to-text pipeline.
* Add gateway metric for roundtrip AI times by model and pipeline
* Rename metrics and add unique manifest
* Fix name mismatch
* modelsRequested not working correctly
* feat: add initial POC AI gateway metrics. This commit adds the initial AI gateway metrics so that they can be reviewed by others. The code still needs to be cleaned up and the buckets adjusted.
* feat: improve AI metrics. This commit improves the AI metrics so that they are easier to work with.
* feat(ai): log no capacity error to metrics. This commit ensures that an error is logged when the Gateway could not find orchestrators for a given model and capability.
* feat(ai): add TicketValueSent and TicketsSent metrics. This commit ensures that the `ticket_value_sent` and `tickets_sent` metrics are also created for an AI Gateway.
* fix(ai): ensure that AI metrics have orch address label. This commit ensures that the AI gateway metrics contain the orch address label.
* feat(ai): add orchestrator AI census metrics. This commit introduces a suite of AI orchestrator metrics to the census module, mirroring those received by the Gateway. The newly added metrics include `ai_models_requested`, `ai_request_latency_score`, `ai_request_price`, and `ai_request_errors`, facilitating comprehensive tracking and analysis of AI request handling performance on the orchestrator side.
* refactor: improve orchestrator metrics tags. This commit ensures that the right tags are attached to the Orchestrator AI metrics.
* refactor(ai): improve latency score calculations. This commit ensures that no divide-by-zero errors can occur in the latency score calculations.

Co-authored-by: Elite Encoder <[email protected]>
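The divide-by-zero guard mentioned in the last commit above can be sketched like this. The function name and score formula are illustrative assumptions, not the actual go-livepeer code; the point is only that a zero-sized output must short-circuit before the division:

```go
package main

import "fmt"

// latencyScore returns latency normalized by output size
// (e.g. number of images or pixels produced). A hypothetical
// sketch of guarding the calculation against divide-by-zero.
func latencyScore(latencySec float64, outputUnits int64) float64 {
	if outputUnits <= 0 {
		// Empty or invalid output: report a zero score rather
		// than dividing by zero and producing +Inf or NaN.
		return 0
	}
	return latencySec / float64(outputUnits)
}

func main() {
	fmt.Println(latencyScore(2.0, 4)) // normal case
	fmt.Println(latencyScore(2.0, 0)) // guarded case
}
```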
This commit applies some small comment changes to ease the conflicts between the main and ai-video branch.
Closing in favor of the changes coming in #3106.
What does this pull request do? Explain your changes. (required)
Adds the ability to set a preferred GPU per model in the aimodels config.
This currently only works for warm models; further changes are needed to honor the GPU flag when loading cold models.
Linked to changes in livepeer/ai-worker#111
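A minimal sketch of what a per-model GPU assignment might look like in the aimodels JSON config. The `gpu` field and the specific pipeline/model values below are illustrative, based on the linked ai-worker changes, and may not match the final implementation:

```json
[
  {
    "pipeline": "text-to-image",
    "model_id": "ByteDance/SDXL-Lightning",
    "warm": true,
    "gpu": "0"
  },
  {
    "pipeline": "image-to-video",
    "model_id": "stabilityai/stable-video-diffusion-img2vid-xt",
    "warm": true,
    "gpu": "1"
  }
]
```

Per the description above, the GPU preference would only take effect for warm models in this iteration.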
Specific updates (required)
How did you test each of these updates (required)
Does this pull request close any open issues?
AI-134
Checklist:
- `make` runs successfully
- Tests in `./test.sh` pass