
[feat] Functionality for testing a Guard configured to run on prompt in isolation #996

Open
ShreyaR opened this issue Aug 8, 2024 · 8 comments
Labels: enhancement (New feature or request)

Comments

ShreyaR (Collaborator) commented Aug 8, 2024

Description
Currently, guard.validate only works for guards that are configured to run on the output. Having validate also work for guards configured to run on the prompt or other inputs would make it much easier to quickly test whether an input guard is working as expected.

E.g., the following code raises no exceptions for me:

guard = Guard().use(
    RestrictToTopic(
        ...,
        on_fail="exception",
    ),
    on="msg_history",
)

guard.validate('Who should I vote for in the upcoming election?')

Removing the on="msg_history" arg raises an exception as expected.
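
For contrast, restating the snippet above without the on kwarg, so the validator runs on the output by default:

guard = Guard().use(
    RestrictToTopic(
        ...,
        on_fail="exception",
    ),
)

guard.validate('Who should I vote for in the upcoming election?')  # raises as expected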

However, this adds unnecessary friction to quickly testing a guard.

Why is this needed
Demoing, and giving more control to end users.

ShreyaR added the enhancement label Aug 8, 2024
CalebCourier (Collaborator) commented
So the idea here is to allow passing input values to validate, right? I.e., instead of only accepting an llm_output, let it also accept prompt, instructions, msg_history, and messages (only messages in 0.6.0).

I think that makes a lot of sense. Technically you can accomplish the same thing by treating the input value as an output value, but that's a bit of mental gymnastics if you're used to passing it as a particular kwarg in other flows.
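
For illustration, that might look something like the sketch below; the kwarg names mirror the ones listed above, but the relaxed signature is a sketch of the proposal, not the current API:

from guardrails import Guard
from guardrails.hub import RestrictToTopic

guard = Guard().use(RestrictToTopic(..., on_fail="exception"), on="msg_history")

# Hypothetical: llm_output becomes optional, and each input kwarg is
# validated by the guards configured to run on it.
guard.validate(messages=[{"role": "user", "content": "Who should I vote for?"}])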

github-actions bot commented Sep 8, 2024

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 14 days.

CalebCourier (Collaborator) commented

Quick research on supporting this:
validate/parse currently require the LLM output, but they also accept prompt/instructions/messages as kwargs. We could make llm_output optional and add named kwargs for the inputs to the method signature. However, that path goes through the entire validation loop via the Runner, where an exception is thrown if neither an llm_output nor an llm_api is provided.

Besides this, if the goal is to perform a static check on the prompt/instructions/messages, then pushing this through the Runner is unnecessary, since its complexity mostly exists to handle reasks. While it would require a new route on the API to support server-side validation, I think the cleanest option here may be to add a new check method that simply accepts llm_output or prompt/instructions/messages, skips the Runner, and hits the validator_service directly; no reasks, no streaming, etc., because those shouldn't be necessary when performing a static check.

@zsimjee @ShreyaR Thoughts?
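
For illustration only, a minimal sketch of what such a check method might look like; validator_service is named in the comment above, but the exact call shapes and the _validators_for helper are assumptions about the internals, not the actual implementation:

from typing import List, Optional

class Guard:
    def check(
        self,
        llm_output: Optional[str] = None,
        *,
        prompt: Optional[str] = None,
        instructions: Optional[str] = None,
        messages: Optional[List[dict]] = None,
    ):
        # Map each provided value to the input it was passed as, then run
        # the validators registered "on" that input directly, bypassing
        # the Runner: no reasks, no streaming, no llm_api requirement.
        targets = {
            "output": llm_output,
            "prompt": prompt,
            "instructions": instructions,
            "msg_history": messages,
        }
        for on, value in targets.items():
            if value is not None:
                # Assumed call shape; the real validator_service API may differ.
                validator_service.validate(value, self._validators_for(on))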

zsimjee (Collaborator) commented Sep 9, 2024

So what would that API look like?

from guardrails import Guard

g = Guard().use(Validator, on='prompt')
g.check(prompt="hello")

CalebCourier (Collaborator) commented

> So what would that API look like?
>
> from guardrails import Guard
>
> g = Guard().use(Validator, on='prompt')
> g.check(prompt="hello")

Basically, yes.

The other option I was thinking of, which is a little more aggressive and not backwards compatible, is to instead collapse __call__ and parse so that __call__ just takes an optional llm_output (they both proxy to the same _execute function already). parse goes away, and validate does what check would do, i.e. only runs validation and skips everything else. It would be approximately the same amount of work and would clean up the Guard interface/API, but again, it would break existing patterns, so we would want a deprecation cycle to accompany it.
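
For illustration, the collapsed surface might look roughly like this; a sketch of the proposal, not the current API, and some_llm_callable is a placeholder:

# Today (simplified): __call__ needs an llm_api, parse needs an llm_output,
# and both proxy to the same _execute function.
guard(llm_api=some_llm_callable, prompt="...")
guard.parse(llm_output="some model output")

# Proposed: parse goes away, __call__ takes an optional llm_output, and
# validate only runs validation (what check would do above).
guard(llm_output="some model output")           # validate an existing output
guard(llm_api=some_llm_callable, prompt="...")  # call the LLM, then validate
guard.validate(prompt="hello")                  # static input check; skips the Runner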

github-actions bot commented

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 14 days.

github-actions bot added the Stale label Oct 11, 2024
github-actions bot commented

This issue was closed because it has been stalled for 14 days with no activity.

github-actions bot closed this as not planned Oct 25, 2024
zsimjee (Collaborator) commented Oct 25, 2024

Planned!

zsimjee reopened this Oct 25, 2024
github-actions bot removed the Stale label Oct 26, 2024