Skip to content

Commit

Permalink
fix pre-commit fail
Browse files Browse the repository at this point in the history
  • Loading branch information
Frawa Vetterli committed Jan 10, 2024
1 parent 4d3a5aa commit 6055ef3
Show file tree
Hide file tree
Showing 2 changed files with 11 additions and 2 deletions.
2 changes: 1 addition & 1 deletion README.md
Original file line number Diff line number Diff line change
Expand Up @@ -141,7 +141,7 @@ LakeraGuardError: Lakera Guard detected prompt_injection.
## Features
With **ChainGuard**, you can guard:
With **Lakera ChainGuard**, you can guard:
- LLM and ChatLLM by chaining with Lakera Guard so that an error will be raised upon risk detection
- alternatively, you can run the Lakera Guard component and the LLM in parallel and decide what to do upon risk detection
Expand Down
11 changes: 10 additions & 1 deletion lakera_chainguard/lakera_chainguard.py
Original file line number Diff line number Diff line change
Expand Up @@ -48,7 +48,7 @@ def __init__(self, message: str, lakera_guard_response: dict):
class LakeraChainGuard:
def __init__(
self,
api_key: str = os.environ.get("LAKERA_GUARD_API_KEY", ""),
api_key: str = "",
classifier: str = "prompt_injection",
raise_error: bool = True,
):
Expand All @@ -64,6 +64,15 @@ def __init__(
Returns:
"""
# We cannot set default value for api_key to
# os.environ.get("LAKERA_GUARD_API_KEY", "") because this would only be
# evaluated once when the class is created. This would mean that if the
# user sets the environment variable after creating the class, the class
# would not use the environment variable.
if api_key == "":
self.api_key = os.environ.get("LAKERA_GUARD_API_KEY", "")
else:
self.api_key
self.api_key = api_key
self.classifier = classifier
self.raise_error = raise_error
Expand Down

0 comments on commit 6055ef3

Please sign in to comment.