[C#] chore: Bump dotnet library to v1.0.1 #1156

Merged · 3 commits · Jan 11, 2024
@@ -14,7 +14,7 @@ namespace Microsoft.Teams.AI.AI
/// </summary>
/// <remarks>
/// The AI system is responsible for generating plans, moderating input and output, and
- /// generating prompts. It can be used free standing or routed to by the Application object.
+ /// generating prompts. It can be used free standing or routed to by the <see cref="Application{TState}"/> object.
/// </remarks>
/// <typeparam name="TState">Optional. Type of the turn state.</typeparam>
public class AI<TState> where TState : TurnState
@@ -65,7 +65,7 @@ public AI(AIOptions<TState> options, ILoggerFactory? loggerFactory = null)
/// Registers a handler for a named action.
/// </summary>
/// <remarks>
- /// The AI systems planner returns plans that are made up of a series of commands or actions
+ /// The AI system's planner returns plans that are made up of a series of commands or actions
/// that should be performed. Registering a handler lets you provide code that should be run in
/// response to one of the predicted actions.
///
@@ -112,7 +112,7 @@ public AI<TState> RegisterAction(string name, IActionHandler<TState> handler)
/// Registers the default handler for a named action.
/// </summary>
/// <remarks>
- /// Default handlers can be replaced by calling the RegisterAction() method with the same name.
+ /// Default handlers can be replaced by calling the <see cref="RegisterAction(string, IActionHandler{TState})"/> method with the same name.
/// </remarks>
/// <param name="name">The name of the action.</param>
/// <param name="handler">The action handler function.</param>
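To ground the registration flow described above: a minimal sketch of a handler and its registration. The handler's method shape and the namespaces are assumptions inferred from the `RegisterAction(string, IActionHandler<TState>)` signature shown in this diff; the action name `CreateList` and the body are hypothetical.

```csharp
using Microsoft.Bot.Builder;         // namespaces assumed from the
using Microsoft.Teams.AI.AI.Action;  // Microsoft.Teams.AI.AI layout in this diff
using Microsoft.Teams.AI.State;

// Hypothetical handler for a planner-predicted "CreateList" action.
public class CreateListHandler : IActionHandler<TurnState>
{
    // Method shape assumed; the returned string is fed back into the
    // AI system's plan-execution loop.
    public Task<string> PerformActionAsync(
        ITurnContext turnContext,
        TurnState turnState,
        object? entities = null,
        string? action = null,
        CancellationToken cancellationToken = default)
    {
        // Run your application code for the predicted action here.
        return Task.FromResult("list created");
    }
}
```

Registration is then a single call, e.g. `ai.RegisterAction("CreateList", new CreateListHandler());`, replacing any default handler registered under the same name, per the remarks above.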
@@ -13,49 +13,42 @@ namespace Microsoft.Teams.AI.AI.Clients
/// LLMClient class that's used to complete prompts.
/// </summary>
/// <remarks>
- /// Each wave, at a minimum needs to be configured with a `client`, `prompt`, and `prompt_options`.
+ /// Each LLMClient, at a minimum, needs to be configured with a <see cref="LLMClientOptions{TContent}.Model"/> and <see cref="LLMClientOptions{TContent}.Template"/>.
///
- /// Configuring the wave to use a `validator` is optional but recommended. The primary benefit to
+ /// Configuring the LLMClient to use a <see cref="LLMClientOptions{TContent}.Validator"/> is optional but recommended. The primary benefit to
/// using LLMClient is its response validation and automatic response repair features. The
/// validator acts as a guard and guarantees that you never get a malformed response back from the
/// model. At least not without it being flagged as an `invalid_response`.
///
- /// Using the `JsonResponseValidator`, for example, guarantees that you only ever get a valid
- /// object back from `CompletePromptAsync()`. In fact, you'll get back a fully parsed object and any
- /// additional response text from the model will be dropped. If you give the `JsonResponseValidator`
+ /// Using the <see cref="JsonResponseValidator"/>, for example, guarantees that you only ever get a valid
+ /// object back from <see cref="CompletePromptAsync(ITurnContext, IMemory, IPromptFunctions{List{string}}, string?, CancellationToken)"/>. In fact, you'll get back a fully parsed object and any
+ /// additional response text from the model will be dropped. If you give the <see cref="JsonResponseValidator"/>
/// a JSON Schema, you will get back a strongly typed and validated instance of an object in
/// the returned `response.message.content`.
///
/// When a validator detects a bad response from the model, it gives the model "feedback" as to the
/// problem it detected with its response and, more importantly, an instruction that tells the model
- /// how it should repair the problem. This puts the wave into a special repair mode where it first
+ /// how it should repair the problem. This puts the LLMClient into a special repair mode where it first
/// forks the memory for the conversation and then has a side conversation with the model in an
/// effort to get it to repair its response. By forking the conversation, this isolates the bad
/// response and prevents it from contaminating the main conversation history. If the response can
- /// be repaired, the wave will un-fork the memory and use the repaired response in place of the
+ /// be repaired, the LLMClient will un-fork the memory and use the repaired response in place of the
/// original bad response. To the model it's as if it never made a mistake, which is important for
/// future turns with the model. If the response can't be repaired, a response status of
/// `invalid_response` will be returned.
///
- /// When using a well designed validator, like the `JsonResponseValidator`, the wave can typically
+ /// When using a well designed validator, like the <see cref="JsonResponseValidator"/>, the LLMClient can typically
/// repair a bad response in a single additional model call. Sometimes it takes a couple of calls
/// to effect a repair and occasionally it won't be able to repair it at all. If your prompt is
/// well designed and you only occasionally see failed repair attempts, I'd recommend just calling
- /// the wave a second time. Given the stochastic nature of these models, there's a decent chance
+ /// the LLMClient a second time. Given the stochastic nature of these models, there's a decent chance
/// it won't make the same mistake on the second call. A well designed prompt coupled with a well
/// designed validator should make calling these models somewhere close to 99% reliable.
///
/// This "feedback" technique works with all the GPT-3 generation of models and I've tested it with
/// `text-davinci-003`, `gpt-3.5-turbo`, and `gpt-4`. There's a good chance it will work with other
/// open source models like `LLaMA` and Googles `Bard` but I have yet to test it with those models.
///
/// LLMClient supports OpenAI's functions feature and can validate the models response against the
/// schema for the supported functions. When an LLMClient is configured with both a `OpenAIModel`
/// and a `FunctionResponseValidator`, the model will be cloned and configured to send the
/// validators configured list of functions with the request. There's no need to separately
/// configure the models `functions` list, but if you do, the models functions list will be sent
/// instead.
/// </remarks>
/// <typeparam name="TContent">
/// Type of message content returned for a 'success' response. The `response.message.content` field will be of type TContent.
@@ -120,7 +113,7 @@ public void AddFunctionResultToHistory(IMemory memory, string name, object resul
/// conversation history and formatted like `{ role: 'user', content: input }`.
///
/// It's important to note that if you want the user's input sent to the model as part of the
- /// prompt, you will need to add a `UserMessageSection` to your prompt. The wave does not do
+ /// prompt, you will need to add a `UserMessageSection` to your prompt. The LLMClient does not do
/// anything to modify your prompt, except when performing repairs and those changes are
/// temporary.
///
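To make the validation and repair flow concrete, here is a minimal sketch of wiring a `JsonResponseValidator` into an LLMClient. The `CompletePromptAsync` parameter list follows the `<see cref>` above; `model`, `promptTemplate`, `memory`, and `promptFunctions` are placeholder variables, and the constructor shapes are assumptions rather than code from this PR:

```csharp
// With a validator configured, CompletePromptAsync only ever returns content
// that parsed as valid JSON; a bad response triggers the forked-memory repair
// loop described above and, failing that, comes back flagged invalid_response.
LLMClientOptions<object> options = new(model, promptTemplate) // shape assumed
{
    Validator = new JsonResponseValidator()
};
LLMClient<object> client = new(options);

// Parameter order per the <see cref="CompletePromptAsync(...)"/> above.
var response = await client.CompletePromptAsync(
    turnContext, memory, promptFunctions, null, cancellationToken);

// On success, response.message.content (C#: response.Message.Content) holds
// the fully parsed object; any extra text from the model has been dropped.
```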
@@ -14,15 +14,15 @@ namespace Microsoft.Teams.AI.AI.Planners
/// The ActionPlanner is a powerful planner that uses an LLM to generate plans. The planner can
/// trigger parameterized actions and send text-based responses to the user. The ActionPlanner
/// supports the following advanced features:
- /// - //////Augmentations:////// Augmentations virtually eliminate the need for prompt engineering. Prompts
+ /// - Augmentations: Augmentations virtually eliminate the need for prompt engineering. Prompts
/// can be configured to use a named augmentation which will be automatically appended to the outgoing
/// prompt. Augmentations let the developer specify whether they want to support multi-step plans (sequence),
/// use OpenAI's functions support (functions), or create an AutoGPT style agent (monologue).
- /// - //////Validations:////// Validators are used to validate the response returned by the LLM and can guarantee
+ /// - Validations: Validators are used to validate the response returned by the LLM and can guarantee
/// that the parameters passed to an action match a supplied schema. The validator used is automatically
/// selected based on the augmentation being used. Validators also prevent hallucinated action names,
/// making it impossible for the LLM to trigger an action that doesn't exist.
- /// - //////Repair:////// The ActionPlanner will automatically attempt to repair invalid responses returned by the
+ /// - Repair: The ActionPlanner will automatically attempt to repair invalid responses returned by the
/// LLM using a feedback loop. When a validation fails, the ActionPlanner sends the error back to the
/// model, along with an instruction asking it to fix its mistake. This feedback technique leads to a
/// dramatic reduction in the number of invalid responses returned by the model.
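A hedged sketch of standing up an ActionPlanner with these features. The constructor and option shapes are assumptions modeled on the library's samples, and the prompt folder and prompt name are placeholders; the augmentation type itself (sequence, functions, or monologue) lives in the prompt's configuration, so no extra code is needed to enable it:

```csharp
// Prompt manager pointing at a folder of prompt templates (path is a
// placeholder); each template's config names its augmentation, and the
// matching validator is then selected automatically.
PromptManager prompts = new(new() { PromptFolder = "./Prompts" });

// Option shapes assumed for illustration; "model" would be a prompt
// completion model such as an OpenAIModel instance.
ActionPlanner<TurnState> planner = new(new(
    model: model,
    prompts: prompts,
    defaultPrompt: async (context, state, planner) =>
        await Task.FromResult(prompts.GetPrompt("Chat"))
));
```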
@@ -44,7 +44,7 @@ public class ActionPlannerOptions<TState> where TState : TurnState, IMemory
/// tokenizer to use.
/// </summary>
/// <remarks>
- /// If not specified, a new `GPTTokenizer` instance will be created.
+ /// If not specified, a new <see cref="GPTTokenizer"/> instance will be created.
/// </remarks>
public ITokenizer Tokenizer { get; set; } = new GPTTokenizer();

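Since `Tokenizer` is an ordinary settable property with a `GPTTokenizer` default, overriding it is a one-line assignment; `MyTokenizer` below is a hypothetical `ITokenizer` implementation and `plannerOptions` a placeholder options instance:

```csharp
// Leave unset to get the default GPTTokenizer described above, or supply
// your own ITokenizer (MyTokenizer is a hypothetical stand-in).
plannerOptions.Tokenizer = new MyTokenizer();
```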
@@ -16,15 +16,15 @@ public class ApplicationOptions<TState>
/// Optional. Bot adapter being used.
/// </summary>
/// <remarks>
- /// If using the LongRunningMessages option or calling the ContinueConversationAsync method, this property is required.
+ /// If using the <see cref="ApplicationOptions{TState}.LongRunningMessages"/> option, calling the <see cref="CloudAdapterBase.ContinueConversationAsync(string, Bot.Schema.Activity, BotCallbackHandler, CancellationToken)"/> method, or configuring user authentication, this property is required.
/// </remarks>
public BotAdapter? Adapter { get; set; }

/// <summary>
/// Optional. Application ID of the bot.
/// </summary>
/// <remarks>
- /// If using the <see cref="ApplicationOptions{TState}.LongRunningMessages"/> option or calling the <see cref="CloudAdapterBase.ContinueConversationAsync(string, Bot.Schema.Activity, BotCallbackHandler, CancellationToken)"/> method, this property is required.
+ /// If using the <see cref="ApplicationOptions{TState}.LongRunningMessages"/> option, calling the <see cref="CloudAdapterBase.ContinueConversationAsync(string, Bot.Schema.Activity, BotCallbackHandler, CancellationToken)"/> method, or configuring user authentication, this property is required.
/// </remarks>
public string? BotAppId { get; set; }

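Tying the two remarks together: a sketch of options for an app that enables long-running messages and therefore must supply both properties. The `adapter` and `configuration` values are placeholders, and any shapes beyond the properties shown in this diff are assumptions:

```csharp
// Adapter and BotAppId are both required here because LongRunningMessages is
// enabled; per the remarks above, calling ContinueConversationAsync or
// configuring user authentication would likewise require them.
ApplicationOptions<TurnState> appOptions = new()
{
    Adapter = adapter,                          // e.g. a CloudAdapter
    BotAppId = configuration["MicrosoftAppId"], // placeholder lookup
    LongRunningMessages = true
};

Application<TurnState> app = new(appOptions);
```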
@@ -7,7 +7,7 @@
<Nullable>enable</Nullable>
<PackageId>Microsoft.Teams.AI</PackageId>
<Product>Microsoft Teams AI SDK</Product>
- <Version>1.0.0</Version>
+ <Version>1.0.1</Version>
<Authors>Microsoft</Authors>
<Company>Microsoft</Company>
<Copyright>© Microsoft Corporation. All rights reserved.</Copyright>