
Enable stable-fast pretracing in T2I, I2I and upscale pipelines [30 LPT] #39

Open
rickstaa opened this issue Jul 28, 2024 · 3 comments
Labels: AI SPE bounties, bounty (Software bounties)

Comments

rickstaa (Collaborator) commented Jul 28, 2024

Overview

Since its introduction, the stable-fast optimization has provided a roughly 50% speed-up in the image-to-video pipeline. Its primary drawback is that it relies on dynamic pre-tracing to optimize requests, which lengthens the response time of the first request for each new image size. However, if the first request uses the largest supported image size, all smaller image sizes are pre-traced as well.

This optimization feature was initially not added to the text-to-image, image-to-image, and upscale pipelines due to the support for batch requests at that time, which required pre-tracing each batch size. Since then, we've transitioned from batch requests to sequential requests to mitigate memory issues and improve performance. Consequently, pre-tracing for these pipelines has become feasible again 🚀.

We are seeking a dedicated community member to apply the existing code from this pull request to the text-to-image, image-to-image, and upscale pipelines. Completing this bounty will significantly enhance network performance by enabling orchestrators to achieve a 50% speed-up in these pipelines ⚡🙏.
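The pre-tracing behavior described above can be sketched with a toy model (illustrative only, not the actual stable-fast API): once the largest resolution has been traced, all smaller resolutions are already "warm".

```python
class TracedPipeline:
    """Toy stand-in for a stable-fast-optimized pipeline (hypothetical
    class, not the real API): the first request at a new, larger shape
    pays the dynamic tracing cost; any request at that shape or smaller
    then reuses the traced graph."""

    def __init__(self) -> None:
        self._traced_max = (0, 0)  # largest (height, width) traced so far

    def __call__(self, height: int, width: int) -> str:
        max_h, max_w = self._traced_max
        if height > max_h or width > max_w:
            # Slow path: dynamic pre-tracing for a shape not yet covered.
            self._traced_max = (max(height, max_h), max(width, max_w))
            return "traced (slow first request)"
        # Fast path: covered by an earlier trace at an equal or larger size.
        return "cached (fast)"


pipe = TracedPipeline()
pipe(1024, 1024)  # warm-up at the largest supported size (slow, once)
pipe(512, 512)    # already covered by the warm-up: fast
```

This is why a single warm-up request at startup, at the maximum resolution, is enough to give every subsequent request the fast path.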

Required Skillset

Bounty Requirements

  1. Pre-Tracing Implementation:
    • For each pipeline (text-to-image, image-to-image, and upscale), ensure that when orchestrators have the SFAST setting enabled, the largest image size request is pre-traced during startup.
    • Implement this by setting SFAST_WARMUP=true, ensuring orchestrators achieve the quickest response times.
  2. Performance Comparison Table:
    • Create a comparison table for each pipeline.
    • The table should include:
      • Pre-trace time
      • Resulting inference time
      • Comparison with non-optimized time
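A minimal sketch of requirement 1, assuming SFAST and SFAST_WARMUP are read as plain environment variables and the pipeline is callable with height/width keywords (the helper names and the 1024×1024 default below are assumptions for illustration, not the existing ai-worker code):

```python
import os


def env_flag(name: str, default: str = "false") -> bool:
    # Hypothetical helper: interpret SFAST / SFAST_WARMUP style flags.
    return os.environ.get(name, default).strip().lower() in ("1", "true", "yes")


def warmup(pipeline_call, max_height: int = 1024, max_width: int = 1024) -> bool:
    """If both SFAST and SFAST_WARMUP are enabled, issue one request at
    the largest supported resolution during startup so stable-fast
    pre-traces the model; smaller sizes then reuse the traced graph.

    Returns True when a warm-up request was issued.
    """
    if env_flag("SFAST") and env_flag("SFAST_WARMUP"):
        pipeline_call(height=max_height, width=max_width)
        return True
    return False
```

Each pipeline (text-to-image, image-to-image, upscale) would call such a warm-up once at container startup, using its own largest supported resolution rather than the placeholder default shown here.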

Implementation Tips

How to Apply

  1. Express Your Interest: Comment on this issue to indicate your interest and explain why you're the ideal candidate for the task.
  2. Wait for Review: Our team will review expressions of interest and select the best candidate.
  3. Get Assigned: If selected, we'll assign the GitHub issue to you.
  4. Start Working: Dive into your task! If you need assistance or guidance, comment on the issue or join the discussions in the #🛋│developer-lounge channel on our Discord server.
  5. Submit Your Work: Create a pull request in the relevant repository and request a review.
  6. Notify Us: Comment on this GitHub issue when your pull request is ready for review.
  7. Receive Your Bounty: We'll arrange the bounty payment once your pull request is approved.
  8. Gain Recognition: Your valuable contributions will be showcased in our project's changelog.

Thank you for your interest in contributing to our project 💛!

Warning

Please wait for the issue to be assigned to you before starting work. To prevent duplication of effort, submissions for unassigned issues will not be accepted.

rickstaa added the AI SPE bounties label on Jul 28, 2024
rickstaa added the bounty (Software bounties) label on Aug 26, 2024
rickstaa (Collaborator, Author) commented

This bounty was implemented by @lukiod in livepeer/ai-worker#134.

JJassonn69 (Collaborator) commented

> This bounty was implemented by @lukiod in livepeer/ai-worker#134.

I think you mean @JJassonn69.

RaghavArora14 commented
Hi, I'm interested in this bounty.
