
Commit

guided-conversation-assistant for exploring assistant guided experiences (microsoft#94)

This is a work-in-progress merge. The assistant is functional and works
with the default config, but further work on a better config experience
for workbench users will follow.

Also fixes some issues with the drawers in the UX.
bkrabach authored Oct 6, 2024
1 parent 65368ef commit bd4537a
Showing 52 changed files with 7,533 additions and 216 deletions.
11 changes: 11 additions & 0 deletions assistants/guided-conversation-assistant/.env.example
@@ -0,0 +1,11 @@
# Description: Example of .env file
# Usage: Copy this file to .env and set the values

# NOTE:
# - Environment variables in the host environment will take precedence over values in this file.
# - When running with VS Code, you must 'stop' and 'start' the process for changes to take effect.
# It is not enough to just use the VS Code 'restart' button

# Assistant Service
ASSISTANT__AZURE_OPENAI_ENDPOINT=https://<YOUR-RESOURCE-NAME>.openai.azure.com/
ASSISTANT__AZURE_CONTENT_SAFETY_ENDPOINT=https://<YOUR-RESOURCE-NAME>.cognitiveservices.azure.com/
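The precedence rule in the NOTE above can be sketched as follows: a typical dotenv loader only fills in variables that the host environment has not already set. This is a standalone illustration with hypothetical endpoint values, not the loader the assistant actually uses:

```python
import os

# Values as they would be parsed from the .env file (hypothetical).
dotenv_values = {"ASSISTANT__AZURE_OPENAI_ENDPOINT": "https://from-dotenv.openai.azure.com/"}

# The same variable already set in the host environment takes precedence.
os.environ["ASSISTANT__AZURE_OPENAI_ENDPOINT"] = "https://from-host.openai.azure.com/"

for key, value in dotenv_values.items():
    os.environ.setdefault(key, value)  # only sets keys that are not already present

print(os.environ["ASSISTANT__AZURE_OPENAI_ENDPOINT"])  # https://from-host.openai.azure.com/
```

This is also why a VS Code 'restart' is not enough: the process inherits its environment once at startup, so a full stop/start is needed to pick up changes.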
14 changes: 14 additions & 0 deletions assistants/guided-conversation-assistant/.vscode/launch.json
@@ -0,0 +1,14 @@
{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "debugpy",
            "request": "launch",
            "name": "assistants: guided-conversation-assistant",
            "cwd": "${workspaceFolder}",
            "module": "semantic_workbench_assistant.start",
            "args": ["assistant.chat:app"],
            "consoleTitle": "${workspaceFolderBasename}"
        }
    ]
}
70 changes: 70 additions & 0 deletions assistants/guided-conversation-assistant/.vscode/settings.json
@@ -0,0 +1,70 @@
{
    "editor.bracketPairColorization.enabled": true,
    "editor.codeActionsOnSave": {
        "source.organizeImports": "explicit",
        "source.fixAll": "explicit"
    },
    "editor.guides.bracketPairs": "active",
    "editor.formatOnPaste": true,
    "editor.formatOnType": true,
    "editor.formatOnSave": true,
    "files.eol": "\n",
    "files.trimTrailingWhitespace": true,
    "[json]": {
        "editor.defaultFormatter": "esbenp.prettier-vscode",
        "editor.formatOnSave": true
    },
    "[jsonc]": {
        "editor.defaultFormatter": "esbenp.prettier-vscode",
        "editor.formatOnSave": true
    },
    "python.analysis.autoFormatStrings": true,
    "python.analysis.autoImportCompletions": true,
    "python.analysis.diagnosticMode": "workspace",
    "python.analysis.exclude": [
        "**/.venv/**",
        "**/.data/**",
        "**/__pycache__/**"
    ],
    "python.analysis.fixAll": ["source.unusedImports"],
    "python.analysis.inlayHints.functionReturnTypes": true,
    "python.analysis.typeCheckingMode": "basic",
    "python.defaultInterpreterPath": "${workspaceFolder}/.venv",
    "[python]": {
        "editor.defaultFormatter": "charliermarsh.ruff",
        "editor.formatOnSave": true,
        "editor.codeActionsOnSave": {
            "source.fixAll": "explicit",
            "source.unusedImports": "explicit",
            "source.organizeImports": "explicit",
            "source.formatDocument": "explicit"
        }
    },
    "ruff.nativeServer": "on",
    "search.exclude": {
        "**/.venv": true,
        "**/.data": true,
        "**/__pycache__": true
    },
    // For use with optional extension: "streetsidesoftware.code-spell-checker"
    "cSpell.words": [
        "Codespaces",
        "contentsafety",
        "deepmerge",
        "devcontainer",
        "dotenv",
        "endregion",
        "Excalidraw",
        "fastapi",
        "jsonschema",
        "Langchain",
        "moderations",
        "openai",
        "pdfplumber",
        "pydantic",
        "pyproject",
        "tiktoken",
        "updown",
        "virtualenvs"
    ]
}
3 changes: 3 additions & 0 deletions assistants/guided-conversation-assistant/Makefile
@@ -0,0 +1,3 @@
repo_root = $(shell git rev-parse --show-toplevel)
include $(repo_root)/tools/makefiles/python.mk
include $(repo_root)/tools/makefiles/docker-assistant.mk
75 changes: 75 additions & 0 deletions assistants/guided-conversation-assistant/README.md
@@ -0,0 +1,75 @@
# Using Semantic Workbench with Python assistants

This project provides an assistant to demonstrate how to guide a user towards a goal, leveraging the [guided-conversation library](../../libraries/python/guided-conversation/), which is a modified copy of the [guided-conversation](https://github.com/microsoft/semantic-kernel/tree/main/python/samples/demos/guided_conversations) library from the [Semantic Kernel](https://github.com/microsoft/semantic-kernel) repository.

## Responsible AI

The chatbot includes some important best practices for AI development, such as:

- **System prompt safety**, i.e. a set of LLM guardrails to protect users. As a developer you should understand how
  these guardrails work in your scenarios, and how to change them if needed. The system prompt and the prompt safety
  guardrails are kept separate to help with testing. When talking to LLM models, the prompt safety guardrails are
  injected before the system prompt.
  - See https://learn.microsoft.com/azure/ai-services/openai/concepts/system-message for more details
    about protecting the application and its users in different scenarios.
- **Content moderation**, via [Azure AI Content Safety](https://azure.microsoft.com/products/ai-services/ai-content-safety)
or [OpenAI Content Moderation](https://platform.openai.com/docs/guides/moderation).

See the [Responsible AI FAQ](../../RESPONSIBLE_AI_FAQ.md) for more information.

# Suggested Development Environment

- Use GitHub Codespaces for a quick, turn-key dev environment: [/.devcontainer/README.md](../../.devcontainer/README.md)
- VS Code is recommended for development

## Prerequisites

- Set up your dev environment
- SUGGESTED: Use GitHub Codespaces for a quick, easy, and consistent dev
environment: [/.devcontainer/README.md](../../.devcontainer/README.md)
- ALTERNATIVE: Local setup following the [main README](../../README.md#quick-start---local-development-environment)
- Set up and verify that the workbench app and service are running using the [semantic-workbench.code-workspace](../../semantic-workbench.code-workspace)
- If using Azure OpenAI, set up an Azure account and create a Content Safety resource
- See [Azure AI Content Safety](https://azure.microsoft.com/products/ai-services/ai-content-safety) for more information
- Copy the `.env.example` to `.env` and update the `ASSISTANT__AZURE_CONTENT_SAFETY_ENDPOINT` value with the endpoint of your Azure Content Safety resource
- From VS Code > `Terminal`, run `az login` to authenticate with Azure prior to starting the assistant

## Steps

- Use VS Code > `Run and Debug` (ctrl/cmd+shift+d) > `semantic-workbench` to start the app and service from this workspace
- Use VS Code > `Run and Debug` (ctrl/cmd+shift+d) > `launch assistant` to start the assistant
- If running in a devcontainer, follow the instructions in [.devcontainer/POST_SETUP_README.md](../../.devcontainer/POST_SETUP_README.md#start-the-app-and-service) for any additional steps.
- Return to the workbench app to interact with the assistant
- Add a new assistant from the main menu of the app, choose the assistant name as defined by the `service_name` in [chat.py](./assistant/chat.py)
- Click the newly created assistant to configure and interact with it

## Starting the example from CLI

If you're not using VS Code and/or Codespaces, you can also work from the
command line, using `uv`:

```
cd <PATH TO THIS FOLDER>
uv sync
uv run start-semantic-workbench-assistant assistant.chat:app
```

## Create your own assistant

Copy the contents of this folder to your project.

- The paths are already set if you place it in the same repo root, at a relative path of `/<your_projects>/<your_assistant_name>`
- If placed in a different location, update the references in the `pyproject.toml` to point to the appropriate locations for the `semantic-workbench-*` packages

## From Development to Production

Semantic Workbench is a development tool; it is not designed to host agents in a production environment. The workbench
helps with testing and debugging in an isolated development environment, usually your localhost.

The core of your assistant/AI application, e.g. how it reacts to users, how it invokes tools, how it stores data, can be
developed with any framework, such as Semantic Kernel, Langchain, OpenAI assistants, etc. That is typically the code
you will add to `chat.py`.

**Semantic Workbench is not a framework**. Dependencies on the `semantic-workbench-assistant` package are used only to test and debug your code in Semantic Workbench. **When an assistant is fully developed and ready for production, configurable settings should be hard-coded, and dependencies on `semantic-workbench-assistant` and similar packages should be removed**.
11 changes: 11 additions & 0 deletions assistants/guided-conversation-assistant/assistant.code-workspace
@@ -0,0 +1,11 @@
{
    "folders": [
        {
            "path": ".",
            "name": "assistants/guided-conversation-assistant"
        },
        {
            "path": "../.."
        }
    ]
}
@@ -0,0 +1,4 @@
from .chat import app
from .config import AssistantConfigModel

__all__ = ["app", "AssistantConfigModel"]
@@ -0,0 +1,144 @@
import json
from typing import Annotated, Any, Dict, List, Type, get_type_hints

from guided_conversation.utils.resources import ResourceConstraint, ResourceConstraintMode, ResourceConstraintUnit
from pydantic import BaseModel, Field, create_model
from pydantic_core import PydanticUndefinedType
from semantic_workbench_assistant.config import UISchema

from . import config_defaults as config_defaults


#
# region Helpers
#


def determine_type(type_str: str) -> Type:
    type_mapping = {"str": str, "int": int, "float": float, "bool": bool, "list": List[Any], "dict": Dict[str, Any]}
    return type_mapping.get(type_str, Any)


def create_pydantic_model_from_json(json_data: str) -> Type[BaseModel]:
    data = json.loads(json_data)

    def create_fields(data: Dict[str, Any]) -> Dict[str, Any]:
        fields = {}
        for key, value in data.items():
            if value["type"] == "dict":
                nested_model = create_pydantic_model_from_json(json.dumps(value["value"]))
                fields[key] = (nested_model, Field(description=value["description"]))
            else:
                fields[key] = (
                    determine_type(value["type"]),
                    Field(default=value["value"], description=value["description"]),
                )
        return fields

    fields = create_fields(data)
    return create_model("DynamicModel", **fields)


def pydantic_model_to_json(model: BaseModel) -> Dict[str, Any]:
    def get_type_str(py_type: Any) -> str:
        type_mapping = {str: "str", int: "int", float: "float", bool: "bool", list: "list", dict: "dict"}
        return type_mapping.get(py_type, "any")

    json_dict = {}
    for field_name, field in model.model_fields.items():
        field_type = get_type_hints(model)[field_name]
        default_value = field.default if not isinstance(field.default, PydanticUndefinedType) else ""
        json_dict[field_name] = {
            "value": default_value,
            "type": get_type_str(field_type),
            "description": field.description or "",
        }
    return json_dict


# endregion
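As a minimal illustration of the artifact JSON shape these helpers consume (standard library only; the field names are hypothetical, and the mapping mirrors `determine_type` above):

```python
import json
from typing import Any, Dict, List

# Mirror of the string-to-type mapping used by determine_type above.
TYPE_MAPPING: Dict[str, Any] = {
    "str": str, "int": int, "float": float, "bool": bool,
    "list": List[Any], "dict": Dict[str, Any],
}

# Each artifact field carries a default value, a type string, and a description.
artifact_json = json.dumps({
    "name": {"value": "", "type": "str", "description": "The user's name."},
    "age": {"value": 0, "type": "int", "description": "The user's age."},
})

data = json.loads(artifact_json)
resolved = {key: TYPE_MAPPING.get(spec["type"], Any) for key, spec in data.items()}
print(resolved)  # {'name': <class 'str'>, 'age': <class 'int'>}
```

`create_pydantic_model_from_json` performs the same resolution, then hands each `(type, Field(...))` pair to pydantic to build a model class.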


#
# region Models
#


class GuidedConversationAgentConfigModel(BaseModel):
    artifact: Annotated[
        str,
        Field(
            title="Artifact",
            description="The artifact that the agent will manage.",
        ),
        UISchema(widget="textarea"),
    ] = json.dumps(pydantic_model_to_json(config_defaults.ArtifactModel), indent=2)  # type: ignore

    rules: Annotated[
        list[str],
        Field(title="Rules", description="Do's and don'ts that the agent should attempt to follow"),
        UISchema(schema={"items": {"ui:widget": "textarea"}}),
    ] = config_defaults.rules

    conversation_flow: Annotated[
        str,
        Field(
            title="Conversation Flow",
            description="A loose natural language description of the steps of the conversation",
        ),
        UISchema(widget="textarea", placeholder="[optional]"),
    ] = config_defaults.conversation_flow

    context: Annotated[
        str,
        Field(
            title="Context",
            description="General background context for the conversation.",
        ),
        UISchema(widget="textarea", placeholder="[optional]"),
    ] = config_defaults.context

    class ResourceConstraint(ResourceConstraint):
        mode: Annotated[
            ResourceConstraintMode,
            Field(
                title="Resource Mode",
                description=(
                    'If "exact", the agents will try to pace the conversation to use exactly the resource quantity. If'
                    ' "maximum", the agents will try to pace the conversation to use at most the resource quantity.'
                ),
            ),
        ] = config_defaults.resource_constraint.mode

        unit: Annotated[
            ResourceConstraintUnit,
            Field(
                title="Resource Unit",
                description="The unit for the resource constraint.",
            ),
        ] = config_defaults.resource_constraint.unit

        quantity: Annotated[
            float,
            Field(
                title="Resource Quantity",
                description="The quantity for the resource constraint. If <=0, the resource constraint is disabled.",
            ),
        ] = config_defaults.resource_constraint.quantity

    resource_constraint: Annotated[
        ResourceConstraint,
        Field(
            title="Resource Constraint",
        ),
        UISchema(schema={"quantity": {"ui:widget": "updown"}}),
    ] = ResourceConstraint()

    def get_artifact_model(self) -> Type[BaseModel]:
        return create_pydantic_model_from_json(self.artifact)


# endregion
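A short sketch of what `get_artifact_model` produces from the artifact JSON, assuming pydantic v2 is installed. The field here is hypothetical; the `(type, Field(...))` tuple form mirrors what `create_pydantic_model_from_json` builds for each entry:

```python
from pydantic import Field, create_model

# For an artifact entry like
#   {"name": {"value": "", "type": "str", "description": "The user's name."}}
# the helper effectively calls:
ArtifactModel = create_model(
    "DynamicModel",
    name=(str, Field(default="", description="The user's name.")),
)

instance = ArtifactModel()
print(instance.name == "")  # True
```

The resulting class is a normal pydantic model, so the agent can validate and update the artifact field-by-field as the conversation progresses.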