
Compare different models for the same dataset #30

Open · aarora79 opened this issue Feb 17, 2024 · 0 comments
Labels: enhancement (New feature or request)

@aarora79 (Contributor):

There is nothing in FMBench that prevents different experiments in the same config file from using different models, but the generated report is not architected for that: it is built to compare the same model across serving stacks, not to compare different models, so the report generation would need to change. Multiple customers have requested this. The idea is that once we have identified several models that are fit for the task, we want to find the model and serving stack combination that provides the best price:performance.
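A minimal sketch of what such a cross-model comparison could boil down to (the field names, stack labels, and dollar figures below are illustrative assumptions, not FMBench's actual report schema): given one summary row per (model, serving stack) experiment, rank the combinations by cost per generated token.

```python
from dataclasses import dataclass

@dataclass
class ExperimentResult:
    # Hypothetical per-experiment summary; FMBench's real report fields may differ.
    model: str
    serving_stack: str          # e.g. instance type + inference container
    cost_per_hour_usd: float    # hourly price of the serving stack
    tokens_per_second: float    # observed generation throughput

def cost_per_million_tokens(r: ExperimentResult) -> float:
    """Dollars to generate one million tokens on this model/stack combination."""
    tokens_per_hour = r.tokens_per_second * 3600
    return r.cost_per_hour_usd / tokens_per_hour * 1_000_000

def best_price_performance(results: list[ExperimentResult]) -> ExperimentResult:
    """Pick the (model, serving stack) pair with the lowest cost per million tokens."""
    return min(results, key=cost_per_million_tokens)

if __name__ == "__main__":
    # Illustrative numbers only.
    results = [
        ExperimentResult("model-A", "g5.2xlarge/DJL", cost_per_hour_usd=1.21, tokens_per_second=45.0),
        ExperimentResult("model-B", "g5.12xlarge/TGI", cost_per_hour_usd=5.67, tokens_per_second=210.0),
    ]
    for r in sorted(results, key=cost_per_million_tokens):
        print(f"{r.model} on {r.serving_stack}: ${cost_per_million_tokens(r):,.2f} per 1M tokens")
    winner = best_price_performance(results)
    print(f"Best price:performance: {winner.model} on {winner.serving_stack}")
```

The report would then present this ranking across all experiments in the config file, rather than only comparing serving stacks for a single model.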

aarora79 self-assigned this Feb 17, 2024
aarora79 added the enhancement (New feature or request) label Feb 17, 2024