
[bug] #1099

Closed
abhishek9sharma opened this issue Sep 26, 2024 · 2 comments
Labels
bug Something isn't working

Comments

abhishek9sharma commented Sep 26, 2024

Describe the bug
Using await on an AsyncGuard with the callable litellm.acompletion results in the following error when using guardrails==0.5.11:

   async for chunk in resp_gen:
  File "/opt/miniconda3/envs/GUARDENV/lib/python3.9/site-packages/guardrails/telemetry/guard_tracing.py", line 197, in trace_async_stream_guard
    res = await anext(result)  # type: ignore
NameError: name 'anext' is not defined
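
For context: anext() only became a builtin in Python 3.10, and the path in the traceback points at a Python 3.9 environment, so the bare anext(result) call raises NameError there. A compatibility shim along these lines makes the same call work on 3.9 (a minimal sketch of the general workaround, not the actual patch guardrails shipped):

import sys

# anext() was added as a builtin in Python 3.10; define a fallback on 3.9.
if sys.version_info < (3, 10):
    async def anext(async_iterator):
        # Await and return the next item from an async iterator.
        return await async_iterator.__anext__()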

To Reproduce

  • Install guardrails-ai==0.5.11
  • Install the validator: guardrails hub install hub://guardrails/profanity_free
  • Set the environment variables below:
   export OPENAI_ENDPOINT_URL=https://api.openai.com/v1/chat/completions
   export OPENAI_API_KEY=sk-...
  • Copy the script below to demo.py and run it with python demo.py
import asyncio
import json

import guardrails as gd
import litellm
from guardrails.hub import ProfanityFree

guard = gd.AsyncGuard().use(ProfanityFree, on_fail="exception")


def outcome_to_stream_response(validation_outcome):
    # Wrap the validated output in an OpenAI-style streaming chunk,
    # with guardrails metadata alongside the standard "choices" field.
    stream_chunk = {
        "choices": [
            {
                "delta": {
                    "content": validation_outcome.validated_output,
                },
            }
        ],
        "guardrails": {
            "reask": validation_outcome.reask or None,
            "validation_passed": validation_outcome.validation_passed,
            "error": validation_outcome.error or None,
        },
    }
    return stream_chunk


async def guarded(messages):
    # Call the OpenAI API through litellm and guard
    fragment_generator = await guard(
        litellm.acompletion,
        model="gpt-4o-mini",
        messages=messages,
        max_tokens=1024,
        temperature=0,
        stream=True,
    )
    return fragment_generator


async def process_chunks(resp_gen):
    # Print each validated chunk as a server-sent-events style data line.
    async for chunk in resp_gen:
        chunk_string = f"data: {json.dumps(outcome_to_stream_response(chunk))}\n\n"
        print(chunk_string)


user_message = "tell me about singapore"
messages = [{"content": user_message, "role": "user"}]
resp_gen = asyncio.run(guarded(messages))
print(resp_gen)
asyncio.run(process_chunks(resp_gen))
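
Note that the script above enters the event loop twice: once to create the stream and once to consume it. A single asyncio.run entry point keeps both on the same loop, which is closer to typical usage (a minor variant of the same script, not required to reproduce the bug):

async def main():
    # Create and consume the guarded stream inside one event loop.
    resp_gen = await guarded(messages)
    await process_chunks(resp_gen)

asyncio.run(main())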

The code above is adapted from

Expected behavior
The code should not error out.

Library version:
0.5.11
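
To double-check which version is actually installed, a standard-library query works (assuming the distribution name guardrails-ai, as used in the install step above):

from importlib.metadata import version

# Prints the installed distribution version, e.g. "0.5.11"
print(version("guardrails-ai"))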

Additional context
The same script works fine on guardrails-ai==0.5.1.

abhishek9sharma added the bug label Sep 26, 2024
dtam (Contributor) commented Oct 1, 2024

@abhishek9sharma this should have been addressed in our latest release. Please try 0.5.12 and let me know if you're still running into issues.
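
The upgrade itself is a standard pip invocation:

pip install --upgrade "guardrails-ai>=0.5.12"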

abhishek9sharma (Author) commented

@dtam I tested it. Works with 0.5.12. Thanks for the quick fix.

zsimjee closed this as completed Oct 2, 2024