Merge pull request #1418 from samchon/feat/llm-of-validate
New function `typia.llm.applicationOfValidate()`.
samchon authored Dec 9, 2024
2 parents 9916e9a + 9bde368 commit 97589c7
Showing 512 changed files with 13,164 additions and 92 deletions.
2 changes: 1 addition & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "typia",
"version": "7.0.2",
"version": "7.1.0",
"description": "Superfast runtime validators with only one line",
"main": "lib/index.js",
"typings": "lib/index.d.ts",
4 changes: 2 additions & 2 deletions packages/typescript-json/package.json
@@ -1,6 +1,6 @@
{
"name": "typescript-json",
"version": "7.0.2-dev.20241205",
"version": "7.1.0-dev.20241209",
"description": "Superfast runtime validators with only one line",
"main": "lib/index.js",
"typings": "lib/index.d.ts",
@@ -37,7 +37,7 @@
},
"homepage": "https://typia.io",
"dependencies": {
"typia": "7.0.2-dev.20241205"
"typia": "7.1.0-dev.20241209"
},
"peerDependencies": {
"typescript": ">=4.8.0 <5.8.0",
2 changes: 2 additions & 0 deletions src/factories/LiteralFactory.ts
@@ -6,6 +6,8 @@ import { IdentifierFactory } from "./IdentifierFactory";
export namespace LiteralFactory {
export const write = (input: any): ts.Expression => {
if (input === null) return ts.factory.createNull();
+ else if (ts.isArrowFunction(input)) return input;
+ else if (ts.isCallExpression(input)) return input;
else if (ts.isIdentifier(input)) return input;
else if (input instanceof Array) return writeArray(input);
else if (typeof input === "object") return writeObject(input);
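For context, `LiteralFactory.write()` serializes a literal value into a TypeScript AST expression; the two new branches let pre-built AST nodes (arrow functions and call expressions) pass through untouched, which is what allows each function's generated validator to be embedded into the application object. A minimal sketch of the idea, assuming the compiled import path `typia/lib/factories/LiteralFactory` and a hypothetical `createValidator` identifier:

```ts
import ts from "typescript";
import { LiteralFactory } from "typia/lib/factories/LiteralFactory";

// A prepared call expression, standing in for a generated validator body.
const validator: ts.CallExpression = ts.factory.createCallExpression(
  ts.factory.createIdentifier("createValidator"), // hypothetical identifier
  undefined,
  [],
);

// Before this commit, write() only handled plain JSON-like values;
// now AST expressions inside the object are embedded as-is.
const literal: ts.Expression = LiteralFactory.write({
  name: "create",
  validate: validator,
});
```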
@@ -58,6 +58,7 @@ export const iterate_metadata_function = (
},
intersected: false,
}),
+ tsType: props.checker.getTypeOfSymbol(p),
description: CommentFactory.description(p) ?? null,
jsDocTags: p?.getJsDocTags() ?? [],
}),
143 changes: 141 additions & 2 deletions src/llm.ts
@@ -1,12 +1,151 @@
import { ILlmApplication, ILlmSchema } from "@samchon/openapi";

import { ILlmApplicationOfValidate } from "./module";

/**
* > You must configure the generic argument `App`.
*
* TypeScript functions to LLM function calling application with validators.
*
* Creates an LLM (Large Language Model) function calling application
* from a TypeScript class or interface type containing the target functions to be
* called by the LLM function calling feature.
*
* If you provide the returned {@link ILlmApplicationOfValidate.functions} objects to an
* LLM provider like [OpenAI (ChatGPT)](https://openai.com/), the LLM will automatically
* select the proper function and fill its arguments from the conversation
* (maybe chatting text) with the user (human). This is the concept of LLM function calling.
*
* Additionally, LLM function calling sometimes makes a mistake and composes wrongly
* typed {@link ILlmFunctionOfValidate.parameters}. In that case, deliver the return
* value of the {@link ILlmFunctionOfValidate.validate} function back to the LLM
* provider, and it will correct the parameters in the next conversation. The
* {@link ILlmFunctionOfValidate.validate} function is a validator reporting detailed
* information about the wrongly typed parameters.
*
* By the way, there can be some parameters (or their nested properties) which must be
* composed by a human, not by the LLM. File uploading features or sensitive information
* like secret keys (passwords) are examples. In that case, you can separate the
* function parameters into LLM and human sides by configuring the
* {@link ILlmApplicationOfValidate.IOptions.separate} property. The separated parameters
* are assigned to the {@link ILlmFunctionOfValidate.separated} property.
*
* For reference, the actual function call is executed not by the LLM, but by you.
* When the LLM selects the proper function and fills its arguments, you just call
* the function with the LLM-prepared arguments, and then inform the LLM of the
* return value through a system prompt. The LLM will continue the next conversation
* based on the return value.
*
* Additionally, if you've configured {@link ILlmApplicationOfValidate.IOptions.separate}
* so that the parameters are separated into human and LLM sides, you can merge the
* human and LLM sides' parameters into one through {@link HttpLlm.mergeParameters}
* before the actual LLM function call execution.
*
* Here is the list of available `Model` types with their corresponding LLM schemas.
* Read the following list and determine the `Model` type considering the
* characteristics of the target LLM provider.
*
* - LLM provider schemas
* - `chatgpt`: [`IChatGptSchema`](https://github.com/samchon/openapi/blob/master/src/structures/IChatGptSchema.ts)
* - `claude`: [`IClaudeSchema`](https://github.com/samchon/openapi/blob/master/src/structures/IClaudeSchema.ts)
* - `gemini`: [`IGeminiSchema`](https://github.com/samchon/openapi/blob/master/src/structures/IGeminiSchema.ts)
* - `llama`: [`ILlamaSchema`](https://github.com/samchon/openapi/blob/master/src/structures/ILlamaSchema.ts)
* - Middle layer schemas
* - `3.0`: [`ILlmSchemaV3`](https://github.com/samchon/openapi/blob/master/src/structures/ILlmSchemaV3.ts)
* - `3.1`: [`ILlmSchemaV3_1`](https://github.com/samchon/openapi/blob/master/src/structures/ILlmSchemaV3_1.ts)
*
* @template App Target class or interface type collecting the functions to call
* @template Model LLM schema model
* @template Config Configuration of LLM schema composition
* @param options Options for the LLM application construction
* @returns Application of LLM function calling schemas
* @reference https://platform.openai.com/docs/guides/function-calling
* @author Jeongho Nam - https://github.com/samchon
*/
export function applicationOfValidate(
options?: Partial<Pick<ILlmApplicationOfValidate.IOptions<any>, "separate">>,
): never;

/**
* TypeScript functions to LLM function calling application with validators.
*
* Creates an LLM (Large Language Model) function calling application
* from a TypeScript class or interface type containing the target functions to be
* called by the LLM function calling feature.
*
* If you provide the returned {@link ILlmApplicationOfValidate.functions} objects to an
* LLM provider like [OpenAI (ChatGPT)](https://openai.com/), the LLM will automatically
* select the proper function and fill its arguments from the conversation
* (maybe chatting text) with the user (human). This is the concept of LLM function calling.
*
* Additionally, LLM function calling sometimes makes a mistake and composes wrongly
* typed {@link ILlmFunctionOfValidate.parameters}. In that case, deliver the return
* value of the {@link ILlmFunctionOfValidate.validate} function back to the LLM
* provider, and it will correct the parameters in the next conversation. The
* {@link ILlmFunctionOfValidate.validate} function is a validator reporting detailed
* information about the wrongly typed parameters.
*
* By the way, there can be some parameters (or their nested properties) which must be
* composed by a human, not by the LLM. File uploading features or sensitive information
* like secret keys (passwords) are examples. In that case, you can separate the
* function parameters into LLM and human sides by configuring the
* {@link ILlmApplicationOfValidate.IOptions.separate} property. The separated parameters
* are assigned to the {@link ILlmFunctionOfValidate.separated} property.
*
* For reference, the actual function call is executed not by the LLM, but by you.
* When the LLM selects the proper function and fills its arguments, you just call
* the function with the LLM-prepared arguments, and then inform the LLM of the
* return value through a system prompt. The LLM will continue the next conversation
* based on the return value.
*
* Additionally, if you've configured {@link ILlmApplicationOfValidate.IOptions.separate}
* so that the parameters are separated into human and LLM sides, you can merge the
* human and LLM sides' parameters into one through {@link HttpLlm.mergeParameters}
* before the actual LLM function call execution.
*
* Here is the list of available `Model` types with their corresponding LLM schemas.
* Read the following list and determine the `Model` type considering the
* characteristics of the target LLM provider.
*
* - LLM provider schemas
* - `chatgpt`: [`IChatGptSchema`](https://github.com/samchon/openapi/blob/master/src/structures/IChatGptSchema.ts)
* - `claude`: [`IClaudeSchema`](https://github.com/samchon/openapi/blob/master/src/structures/IClaudeSchema.ts)
* - `gemini`: [`IGeminiSchema`](https://github.com/samchon/openapi/blob/master/src/structures/IGeminiSchema.ts)
* - `llama`: [`ILlamaSchema`](https://github.com/samchon/openapi/blob/master/src/structures/ILlamaSchema.ts)
* - Middle layer schemas
* - `3.0`: [`ILlmSchemaV3`](https://github.com/samchon/openapi/blob/master/src/structures/ILlmSchemaV3.ts)
* - `3.1`: [`ILlmSchemaV3_1`](https://github.com/samchon/openapi/blob/master/src/structures/ILlmSchemaV3_1.ts)
*
* @template App Target class or interface type collecting the functions to call
* @template Model LLM schema model
* @template Config Configuration of LLM schema composition
* @param options Options for the LLM application construction
* @returns Application of LLM function calling schemas
* @reference https://platform.openai.com/docs/guides/function-calling
* @author Jeongho Nam - https://github.com/samchon
*/
export function applicationOfValidate<
App extends Record<string, any>,
Model extends ILlmSchema.Model,
Config extends Partial<ILlmSchema.ModelConfig[Model]> = {},
>(
options?: Partial<
Pick<ILlmApplicationOfValidate.IOptions<Model>, "separate">
>,
): ILlmApplicationOfValidate<Model>;

/**
* @internal
*/
export function applicationOfValidate(): never {
halt("applicationOfValidate");
}

/**
* > You must configure the generic argument `App`.
*
* TypeScript functions to LLM function calling application.
*
* Creates an application of LLM (Large Language Model) function calling application
- * from a TypeScript class or interface type containig the target functions to be
+ * from a TypeScript class or interface type containing the target functions to be
* called by the LLM function calling feature.
*
* If you put the returned {@link ILlmApplication.functions} objects to the LLM provider
@@ -61,7 +200,7 @@ export function application(
* TypeScript functions to LLM function calling application.
*
* Creates an application of LLM (Large Language Model) function calling application
- * from a TypeScript class or interface type containig the target functions to be
+ * from a TypeScript class or interface type containing the target functions to be
* called by the LLM function calling feature.
*
* If you put the returned {@link ILlmApplication.functions} objects to the LLM provider
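For orientation, here is a minimal usage sketch of the new `typia.llm.applicationOfValidate()` function, assuming a transformer-enabled typia build; the `IBbsArticleService` interface and its method are hypothetical examples, not part of this commit:

```ts
import typia, { ILlmApplicationOfValidate, IValidation } from "typia";

// Hypothetical target interface; any class or interface type works.
interface IBbsArticleService {
  create(props: { title: string; body: string }): void;
}

// Compose the function calling application for the "chatgpt" schema model.
const app: ILlmApplicationOfValidate<"chatgpt"> =
  typia.llm.applicationOfValidate<IBbsArticleService, "chatgpt">();

// Every composed function carries its own parameters validator.
const func = app.functions.find((f) => f.name === "create")!;
const result: IValidation<unknown> = func.validate({
  title: "Hello",
  body: 123, // wrongly typed on purpose
});
if (result.success === false)
  console.log(result.errors); // deliver these back to the LLM provider
```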
2 changes: 2 additions & 0 deletions src/module.ts
@@ -17,6 +17,8 @@ export * as tags from "./tags";
export * from "./schemas/metadata/IJsDocTagInfo";
export * from "./schemas/json/IJsonApplication";
export * from "./schemas/json/IJsonSchemaCollection";
export * from "./schemas/llm/ILlmApplicationOfValidate";
export * from "./schemas/llm/ILlmFunctionOfValidate";
export * from "./AssertionGuard";
export * from "./IRandomGenerator";
export * from "./IValidation";
81 changes: 81 additions & 0 deletions src/programmers/llm/LlmApplicationOfValidateProgrammer.ts
@@ -0,0 +1,81 @@
import { ILlmApplication, ILlmSchema } from "@samchon/openapi";
import ts from "typescript";

import { ILlmApplicationOfValidate } from "../../schemas/llm/ILlmApplicationOfValidate";
import { Metadata } from "../../schemas/metadata/Metadata";
import { MetadataParameter } from "../../schemas/metadata/MetadataParameter";

import { ITypiaContext } from "../../transformers/ITypiaContext";

import { IValidation } from "../../IValidation";
import { ValidateProgrammer } from "../ValidateProgrammer";
import { LlmApplicationProgrammer } from "./LlmApplicationProgrammer";

export namespace LlmApplicationOfValidateProgrammer {
export const validate = (model: ILlmSchema.Model) =>
LlmApplicationProgrammer.validate(model);

export const write = <Model extends ILlmSchema.Model>(props: {
context: ITypiaContext;
modulo: ts.LeftHandSideExpression;
model: Model;
metadata: Metadata;
config?: Partial<ILlmSchema.ModelConfig[Model]>;
}): ILlmApplicationOfValidate<Model> => {
const app: ILlmApplication<Model> = LlmApplicationProgrammer.write(props);
const parameters: Record<string, MetadataParameter> = Object.fromEntries(
props.metadata.objects[0]!.type.properties.filter(
(p) =>
p.key.isSoleLiteral() &&
p.value.size() === 1 &&
p.value.nullable === false &&
p.value.isRequired() === true &&
p.value.functions.length === 1,
)
.filter(
(p) =>
p.jsDocTags.find(
(tag) => tag.name === "hidden" || tag.name === "internal",
) === undefined,
)
.map((p) => [
p.key.getSoleLiteral()!,
p.value.functions[0]!.parameters[0]!,
]),
);
return {
...app,
functions: app.functions.map((func) => ({
...func,
validate: writeValidator({
context: props.context,
modulo: props.modulo,
parameter: parameters[func.name]!,
}),
})),
};
};

const writeValidator = (props: {
context: ITypiaContext;
modulo: ts.LeftHandSideExpression;
parameter: MetadataParameter;
}): ((props: object) => IValidation<unknown>) => {
const type = props.parameter.tsType;
if (type === undefined)
// unreachable
throw new Error(
"Failed to write LLM application's function validator. You don't have to call `LlmApplicationOfValidator.write()` function by yourself, but only by the `typia.llm.applicationOfValidate()` function.",
);
return ValidateProgrammer.write({
...props,
type: props.parameter.tsType!,
config: {
equals: false,
},
name: undefined,
}) satisfies ts.CallExpression as any as (
props: object,
) => IValidation<unknown>;
};
}
55 changes: 55 additions & 0 deletions src/schemas/llm/ILlmApplicationOfValidate.ts
@@ -0,0 +1,55 @@
import { ILlmApplication, ILlmSchema } from "@samchon/openapi";

import { ILlmFunctionOfValidate } from "./ILlmFunctionOfValidate";

/**
* Application of LLM function calling with validators.
*
* `ILlmApplicationOfValidate` is a data structure representing a collection of
* {@link ILlmFunctionOfValidate LLM function calling schemas}, composed from a native
* TypeScript class (or interface) type by the `typia.llm.applicationOfValidate<App, Model>()`
* function.
*
* If you provide the returned {@link ILlmApplicationOfValidate.functions} objects to an
* LLM provider like [OpenAI (ChatGPT)](https://openai.com/), the LLM will automatically
* select the proper function and fill its arguments from the conversation
* (maybe chatting text) with the user (human). This is the concept of LLM function calling.
*
* Additionally, LLM function calling sometimes makes a mistake and composes wrongly
* typed {@link ILlmFunctionOfValidate.parameters}. In that case, deliver the return
* value of the {@link ILlmFunctionOfValidate.validate} function back to the LLM
* provider, and it will correct the parameters in the next conversation. The
* {@link ILlmFunctionOfValidate.validate} function is a validator reporting the
* detailed information about the wrongly typed parameters.
*
* By the way, there can be some parameters (or their nested properties) which must be
* composed by a human, not by the LLM. File uploading features or sensitive information
* like secret keys (passwords) are examples. In that case, you can separate the
* function parameters into LLM and human sides by configuring the
* {@link ILlmApplication.IOptions.separate} property. The separated parameters are
* assigned to the {@link ILlmFunction.separated} property.
*
* For reference, when both the LLM and the human have filled in their parameter values,
* you can merge them by calling the {@link HttpLlm.mergeParameters} function. In other
* words, if you've configured the {@link ILlmApplication.IOptions.separate} property,
* you have to merge the separated parameters before the function call execution.
*
* @reference https://platform.openai.com/docs/guides/function-calling
* @author Jeongho Nam - https://github.com/samchon
*/
export interface ILlmApplicationOfValidate<Model extends ILlmSchema.Model>
extends ILlmApplication<Model> {
/**
* List of function metadata.
*
* List of function metadata that can be used for the LLM function call.
*
* Also, every function has its own parameters validator
* {@link ILlmFunctionOfValidate.validate}. If the LLM composes wrongly typed
* parameters, deliver the validator's return value back to it, and the LLM will
* correct the parameters in the next conversation.
*/
functions: ILlmFunctionOfValidate<Model>[];
}
export namespace ILlmApplicationOfValidate {
export import IOptions = ILlmApplication.IOptions;
}
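The correction loop described above might look like the following sketch; `getLlmArguments` and `reportToLlm` are hypothetical stand-ins for an actual LLM provider integration:

```ts
import { ILlmFunctionOfValidate, IValidation } from "typia";

// Hypothetical provider hooks; replace with your real integration.
declare function getLlmArguments(
  func: ILlmFunctionOfValidate<"chatgpt">,
): Promise<object>;
declare function reportToLlm(errors: IValidation.IError[]): Promise<void>;

// Repeat until the LLM composes correctly typed arguments.
async function callWithFeedback(
  func: ILlmFunctionOfValidate<"chatgpt">,
  execute: (args: object) => Promise<unknown>,
): Promise<unknown> {
  while (true) {
    const args: object = await getLlmArguments(func);
    const result: IValidation<unknown> = func.validate(args);
    if (result.success === true) return execute(args);
    await reportToLlm(result.errors); // the LLM corrects them next turn
  }
}
```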
39 changes: 39 additions & 0 deletions src/schemas/llm/ILlmFunctionOfValidate.ts
@@ -0,0 +1,39 @@
import { ILlmFunction, ILlmSchema } from "@samchon/openapi";

import { IValidation } from "../../IValidation";

/**
* LLM function metadata with validator.
*
* `ILlmFunctionOfValidate` is an interface representing function metadata
* used for LLM (Large Language Model) function
* calling. It is a function structure containing the function
* {@link name}, {@link parameters} and {@link output return type}.
*
* If you provide this `ILlmFunctionOfValidate` data to an LLM provider like "OpenAI",
* the provider will compose the function's arguments by analyzing conversations
* with the user. With the LLM-composed arguments, you can execute the function
* and get the result.
*
* If the LLM makes a mistake and composes wrongly typed
* {@link parameters}, you can correct them by delivering the return
* value of the {@link validate} function. The {@link validate} function is a
* validator reporting detailed information about the wrongly typed
* {@link parameters}.
*
* By the way, do not assume that the LLM will always provide correct arguments.
* Present-day LLMs are not perfect, and sometimes make the mistake of composing
* wrongly typed {@link parameters}. In that case, correct the parameters by
* delivering the return value of the {@link validate} function, which reports
* detailed information about the wrongly typed {@link parameters}.
*
* @reference https://platform.openai.com/docs/guides/function-calling
* @author Jeongho Nam - https://github.com/samchon
*/
export interface ILlmFunctionOfValidate<Model extends ILlmSchema.Model>
extends ILlmFunction<Model> {
validate(props: object): IValidation<unknown>;
}
export namespace ILlmFunctionOfValidate {
export import ISeparated = ILlmFunction.ISeparated;
}
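And when the `separate` option has been configured, merging the human and LLM sides before execution might look like this sketch; the exact `HttpLlm.mergeParameters` call shape should be verified against the installed `@samchon/openapi` version, and both argument objects are hypothetical placeholders:

```ts
import { HttpLlm } from "@samchon/openapi";
import { ILlmFunctionOfValidate } from "typia";

// Merge LLM-composed and human-composed parameters into one object
// before the actual function call execution.
function mergeForCall(
  func: ILlmFunctionOfValidate<"chatgpt">,
  llmSideArgs: object, // composed by the model
  humanSideArgs: object, // collected from your own UI
): object {
  return HttpLlm.mergeParameters({
    function: func,
    llm: llmSideArgs,
    human: humanSideArgs,
  });
}
```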