import skill-assistant and related libraries (#72)
Co-authored-by: Brian Krabach <[email protected]>
markwaddle and bkrabach authored Oct 1, 2024
1 parent c5bc84f commit ff004c7
Showing 131 changed files with 6,414 additions and 0 deletions.
9 changes: 9 additions & 0 deletions assistants/skill-assistant/.env.example
@@ -0,0 +1,9 @@
# Description: Example of .env file
# Usage: Copy this file to .env and set the values

# NOTE: Changes to this file will not take effect until the project service is 'stopped' and 'started'
# It is not enough to just use the VS Code 'restart' button

# Assistant Service
ASSISTANT__AZURE_OPENAI_ENDPOINT=https://<YOUR-RESOURCE-NAME>.openai.azure.com/
ASSISTANT__AZURE_CONTENT_SAFETY_ENDPOINT=https://<YOUR-RESOURCE-NAME>.cognitiveservices.azure.com/
9 changes: 9 additions & 0 deletions assistants/skill-assistant/.gitignore
@@ -0,0 +1,9 @@
__pycache__
.pytest_cache
*.egg*
.data
.venv
venv
.env

poetry.lock
14 changes: 14 additions & 0 deletions assistants/skill-assistant/.vscode/launch.json
@@ -0,0 +1,14 @@
{
"version": "0.2.0",
"configurations": [
{
"type": "debugpy",
"request": "launch",
"name": "assistants: skill-assistant",
"cwd": "${workspaceFolder}",
"module": "semantic_workbench_assistant.start",
"args": ["assistant.skill_assistant:app", "--port", "3012"],
"consoleTitle": "${workspaceFolderBasename}"
}
]
}
72 changes: 72 additions & 0 deletions assistants/skill-assistant/.vscode/settings.json
@@ -0,0 +1,72 @@
{
"editor.bracketPairColorization.enabled": true,
"editor.codeActionsOnSave": {
"source.organizeImports": "explicit",
"source.fixAll": "explicit"
},
"editor.guides.bracketPairs": "active",
"editor.formatOnPaste": true,
"editor.formatOnType": true,
"editor.formatOnSave": true,
"files.eol": "\n",
"files.trimTrailingWhitespace": true,
"flake8.ignorePatterns": ["**/*.py"], // disable flake8 in favor of ruff
"[json]": {
"editor.defaultFormatter": "esbenp.prettier-vscode",
"editor.formatOnSave": true
},
"[jsonc]": {
"editor.defaultFormatter": "esbenp.prettier-vscode",
"editor.formatOnSave": true
},
"python.analysis.autoFormatStrings": true,
"python.analysis.autoImportCompletions": true,
"python.analysis.diagnosticMode": "workspace",
"python.analysis.exclude": [
"**/.venv/**",
"**/.data/**",
"**/__pycache__/**",
"**/.pytest_cache/**"
],
"python.analysis.fixAll": ["source.unusedImports"],
"python.analysis.inlayHints.functionReturnTypes": true,
"python.analysis.typeCheckingMode": "basic",
"python.defaultInterpreterPath": "${workspaceFolder}/.venv/bin/python",
"python.testing.pytestEnabled": false,
"[python]": {
"editor.defaultFormatter": "charliermarsh.ruff",
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.fixAll": "explicit",
"source.unusedImports": "explicit",
"source.organizeImports": "explicit",
"source.formatDocument": "explicit"
}
},
"ruff.nativeServer": "on",
"search.exclude": {
"**/.venv": true,
"**/.data": true,
"**/__pycache__": true
},
"cSpell.words": [
"Cmder",
"Codespaces",
"contentsafety",
"devcontainer",
"dotenv",
"endregion",
"fastapi",
"httpx",
"jsonschema",
"Langchain",
"openai",
"pdfs",
"Posix",
"pydantic",
"pypdf",
"pyproject",
"quickstart",
"tiktoken"
]
}
43 changes: 43 additions & 0 deletions assistants/skill-assistant/Dockerfile
@@ -0,0 +1,43 @@
ARG python_image=python:3.11-slim

FROM ${python_image} AS build

RUN apt-get update && \
apt-get install -y gcc git

RUN python3 -m venv /venv
ENV PATH=/venv/bin:$PATH

RUN pip3 install --no-cache-dir --upgrade pip


# copy the directory structure so the relative paths remain the same

# semantic workbench packages
COPY ./libraries/python/semantic-workbench-api-model /packages/libraries/python/semantic-workbench-api-model
COPY ./libraries/python/semantic-workbench-assistant /packages/libraries/python/semantic-workbench-assistant

# content safety
COPY ./libraries/python/content-safety /packages/libraries/python/content-safety

# skills
COPY ./libraries/python/chat-driver /packages/libraries/python/chat-driver
COPY ./libraries/python/context /packages/libraries/python/context
COPY ./libraries/python/events /packages/libraries/python/events
COPY ./libraries/python/function-registry /packages/libraries/python/function-registry
COPY ./libraries/python/skills/skill-library /packages/libraries/python/skills/skill-library
COPY ./libraries/python/skills/skills /packages/libraries/python/skills/skills

# this assistant package
COPY ./assistants/skill-assistant/ /packages/assistants/skill-assistant

# install the assistant package dependencies - ensure the path matches the path in the COPY command above
RUN pip3 install --no-cache-dir /packages/assistants/skill-assistant


FROM ${python_image}

COPY --from=build /venv /venv
ENV PATH=/venv/bin:$PATH

ENTRYPOINT ["start-semantic-workbench-assistant", "assistant.skill_assistant:app", "--host", "0.0.0.0"]
11 changes: 11 additions & 0 deletions assistants/skill-assistant/Makefile
@@ -0,0 +1,11 @@
repo_root = $(shell git rev-parse --show-toplevel)

.DEFAULT_GOAL := venv

include $(repo_root)/tools/makefiles/poetry.mk

include $(repo_root)/tools/makefiles/docker.mk

DOCKER_PATH = $(repo_root)

DOCKER_IMAGE_NAME := skill_assistant
94 changes: 94 additions & 0 deletions assistants/skill-assistant/README.md
@@ -0,0 +1,94 @@
# Skill Assistant

The Skill Assistant serves as a demonstration of integrating the Skill Library within an Assistant in the Semantic Workbench. Specifically, this assistant showcases the Posix skill and the chat driver. The [Posix skill](../../libraries/python/skills/skills/posix-skill/README.md) demonstrates file system management by allowing the assistant to perform posix-style actions. The [chat driver](../../libraries/python/chat-driver/README.md) handles conversations and interacts with underlying AI models like OpenAI and Azure OpenAI.

## Overview

The [skill_controller.py](assistant/skill_controller.py) file is responsible for managing assistant instances. It includes functionality to create and retrieve assistants, configure chat drivers, and map skill events to the Semantic Workbench.

- AssistantRegistry: Manages multiple assistant instances, each associated with a unique conversation.
- \_event_mapper: Maps skill events to message types understood by the Semantic Workbench.
- create_assistant: Defines how to create and configure a new assistant.
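As a rough sketch, a registry like this can be modeled as a mapping from conversation id to assistant instance. The names below are illustrative only, not the actual API in `skill_controller.py`:

```python
from typing import Any, Callable, Dict


class AssistantRegistry:
    """Hypothetical sketch: one assistant instance per conversation."""

    def __init__(self, create_assistant: Callable[[str], Any]) -> None:
        # create_assistant is a factory invoked on first use of a conversation.
        self._assistants: Dict[str, Any] = {}
        self._create_assistant = create_assistant

    def get_or_create(self, conversation_id: str) -> Any:
        # Reuse the assistant bound to this conversation, creating it lazily.
        if conversation_id not in self._assistants:
            self._assistants[conversation_id] = self._create_assistant(conversation_id)
        return self._assistants[conversation_id]
```

In this sketch the factory callback plays the role of `create_assistant`, so each conversation gets its own configured assistant that is reused across events.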

The [skill_assistant.py](assistant/skill_assistant.py) file defines the main Skill Assistant class that integrates with the Semantic Workbench. It handles workbench events and coordinates the assistant's responses based on the conversation state.

- SkillAssistant Class: The main class that integrates with the Semantic Workbench.
- on_workbench_event: Handles various workbench events to drive the assistant's behavior.
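The event-driven flow can be pictured as a dispatch on the event type. This is a simplified, hypothetical sketch, not the real `semantic-workbench-assistant` API:

```python
from dataclasses import dataclass


@dataclass
class WorkbenchEvent:
    # Simplified stand-in for a workbench event; fields are hypothetical.
    kind: str
    content: str


def on_workbench_event(event: WorkbenchEvent) -> str:
    # Route each event kind to the matching assistant behavior.
    if event.kind == "conversation_created":
        return "send welcome message"
    if event.kind == "message_created":
        return f"generate response to: {event.content}"
    return "ignore"
```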

The [config.py](assistant/config.py) file defines the configuration model for the Skill Assistant. It includes settings for both Azure OpenAI and OpenAI services, along with request-specific settings such as `max_tokens` and `response_tokens`.

- AzureOpenAIServiceConfig: Configuration for Azure OpenAI services.
- OpenAIServiceConfig: Configuration for OpenAI services.
- RequestConfig: Defines parameters for generating responses, including tokens settings.
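Such a configuration model might look like the following. This is a sketch using plain dataclasses; the field names and default values are assumptions for illustration, not the actual contents of `config.py`:

```python
from dataclasses import dataclass


@dataclass
class RequestConfig:
    # Illustrative request settings; defaults are assumed values.
    max_tokens: int = 4096
    response_tokens: int = 1024


@dataclass
class AzureOpenAIServiceConfig:
    # Hypothetical fields mirroring the .env.example endpoint setting.
    endpoint: str = "https://<YOUR-RESOURCE-NAME>.openai.azure.com/"
    deployment: str = "gpt-4o"
```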

## Responsible AI

The assistant includes some important best practices for AI development, such as:

- **System prompt safety**, i.e. a set of LLM guardrails to protect users. As a developer, you should understand how these
  guardrails work in your scenarios and how to change them if needed. The system prompt and the prompt safety
  guardrails are kept separate to help with testing. When talking to LLM models, the prompt safety guardrails are injected
  before the system prompt.
  - See https://learn.microsoft.com/azure/ai-services/openai/concepts/system-message for more details
    about protecting applications and users in different scenarios.
- **Content moderation**, via [Azure AI Content Safety](https://azure.microsoft.com/products/ai-services/ai-content-safety)
or [OpenAI Content Moderation](https://platform.openai.com/docs/guides/moderation).

See the [Responsible AI FAQ](../../RESPONSIBLE_AI_FAQ.md) for more information.

## Suggested Development Environment

- Use GitHub Codespaces for a quick, turn-key dev environment: [/.devcontainer/README.md](../../.devcontainer/README.md)
- VS Code is recommended for development

## Pre-requisites

- Set up your dev environment
- SUGGESTED: Use GitHub Codespaces for a quick, easy, and consistent dev
environment: [/.devcontainer/README.md](../../.devcontainer/README.md)
- ALTERNATIVE: Local setup following the [main README](../../README.md#quick-start---local-development-environment)
- Set up and verify that the workbench app and service are running using the [semantic-workbench.code-workspace](../../semantic-workbench.code-workspace)
- If using Azure OpenAI, set up an Azure account and create a Content Safety resource
- See [Azure AI Content Safety](https://azure.microsoft.com/products/ai-services/ai-content-safety) for more information
- Copy the `.env.example` to `.env` and update the `ASSISTANT__AZURE_CONTENT_SAFETY_ENDPOINT` value with the endpoint of your Azure Content Safety resource
- From VS Code > `Terminal`, run `az login` to authenticate with Azure prior to starting the assistant
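The Azure setup steps above can be condensed into a few commands (a sketch; adjust paths and values for your environment):

```shell
# From the repo root: copy the example env file and fill in your endpoints
cd assistants/skill-assistant
cp .env.example .env
# edit .env to set ASSISTANT__AZURE_CONTENT_SAFETY_ENDPOINT, then authenticate:
az login
```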

## Steps

- Use VS Code > `Run and Debug` (ctrl/cmd+shift+d) > `semantic-workbench` to start the app and service from this workspace
- Use VS Code > `Run and Debug` (ctrl/cmd+shift+d) > `launch assistant` to start the assistant.
- If running in a devcontainer, follow the instructions in [GitHub Codespaces / devcontainer README](../../.devcontainer/README.md#start-the-app-and-service) for any additional steps.
- Return to the workbench app to interact with the assistant
- Add a new assistant from the main menu of the app, choosing the assistant name as defined by the `service_name` in [skill_assistant.py](./assistant/skill_assistant.py)
- Click the newly created assistant to configure and interact with it

## Starting the example from CLI

If you're not using VS Code or Codespaces, you can also work from the
command line using `poetry`:

```bash
cd <PATH TO THIS FOLDER>
poetry install
poetry run start-semantic-workbench-assistant assistant.skill_assistant:app
```

## Create your own assistant

Copy the contents of this folder to your project.

- The paths are already set if you place the folder in the same repo, at a relative path of `/<your_projects>/<your_assistant_name>`
- If placed in a different location, update the references in the `pyproject.toml` to point to the appropriate locations for the `semantic-workbench-*` packages

## From Development to Production

It's important to note that Semantic Workbench is a development tool; it is not designed to host agents in
a production environment. The workbench helps with testing and debugging in an isolated development environment, usually your localhost.

The core of your assistant/AI application, e.g. how it reacts to users, how it invokes tools, and how it stores data, can be
developed with any framework, such as Semantic Kernel, Langchain, OpenAI assistants, etc. That is typically the code
you will add to [skill_assistant.py](./assistant/skill_assistant.py).

**Semantic Workbench is not a framework**. Dependencies on the `semantic-workbench-assistant` package are used only to test and debug your code in Semantic Workbench. **When an assistant is fully developed and ready for production, configurable settings should be hard-coded, and dependencies on `semantic-workbench-assistant` and similar packages should be removed**.