Change supported models (#9271)
- add support for `gpt-4-1106-preview` and `gpt-3.5-turbo-1106`
- remove support for `text-davinci-003` and `text-davinci-002`
simicvm authored Nov 16, 2023
1 parent 1394b52 commit 7ec7470
Showing 4 changed files with 25 additions and 11 deletions.
6 changes: 6 additions & 0 deletions extensions/openai-gpt/CHANGELOG.md
@@ -1,5 +1,11 @@
# AI Assistant Changelog

+## [Version: 1.6.0] - 2023-11-16
+
+- Add support for `gpt-4-1106-preview` and `gpt-3.5-turbo-1106` models
+- Remove support for `text-davinci-003` and `text-davinci-002` models
+- Update README.md
+
## [Version: 1.5.0] - 2023-03-30

- Change name from `OpenAI GPT3` to `OpenAI GPT`
10 changes: 5 additions & 5 deletions extensions/openai-gpt/README.md
@@ -33,7 +33,7 @@ The interface of the extension follows the interface of the OpenAI Playground.

You can set different parameters for the AI model:

-`AI Model`: type of the model you want to use. `gpt-4` is the most powerful one for now, but `gpt-3.5-turbo` is cheaper, faster, and almost as capable.
+`AI Model`: type of the model you want to use. `gpt-4-1106-preview` is the most powerful one for now, but `gpt-3.5-turbo-1106` is cheaper, faster, and almost as capable.

`Temperature`: controls randomness of the AI model. The lower it is, the less random (and "creative") the results will be.

@@ -47,10 +47,10 @@ You can set different parameters for the AI model:

### Supported AI Models

-1. `gpt-4`
-2. `gpt-3.5-turbo`
-3. `text-davinci-003`
-4. `text-davinci-002`
+1. `gpt-4-1106-preview`
+2. `gpt-3.5-turbo-1106`
+3. `gpt-4`
+4. `gpt-3.5-turbo`
5. `text-curie-001`
6. `text-babbage-001`
7. `text-ada-001`
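The accompanying change in `src/ai.tsx` routes the first four models through OpenAI's chat completions endpoint and the legacy `text-*` models through the older completions endpoint. A minimal sketch of that routing decision, assuming the hypothetical names `chatModels` and `usesChatEndpoint` (the extension itself inlines the comparisons):

```typescript
// Chat-capable models call /chat/completions; legacy text-* models
// call the older /completions endpoint. Names here are illustrative,
// not taken from the extension's source.
const chatModels = new Set<string>([
  "gpt-4-1106-preview",
  "gpt-3.5-turbo-1106",
  "gpt-4",
  "gpt-3.5-turbo",
]);

function usesChatEndpoint(model: string): boolean {
  return chatModels.has(model);
}
```

Any model outside the set (e.g. `text-curie-001`) falls through to the legacy completions path.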
16 changes: 12 additions & 4 deletions extensions/openai-gpt/src/ai.tsx
@@ -46,6 +46,8 @@ const configuration = new Configuration({
const openai = new OpenAIApi(configuration);

export default function Command() {
+const maxTokensGPT41106Preview = 128000;
+const maxTokensGPT35Turbo1106 = 16385;
const maxTokensGPT4 = 8192;
const maxTokensGPT35Turbo = 4096;
const maxTokensDavinci = 4000;
@@ -64,10 +66,10 @@ export default function Command() {
const [maxModelTokens, setMaxModelTokens] = useState<number>(maxTokensDavinci);

const modelLimit = {} as modelTokenLimit;
+modelLimit["gpt-4-1106-preview"] = maxTokensGPT41106Preview;
+modelLimit["gpt-3.5-turbo-1106"] = maxTokensGPT35Turbo1106;
modelLimit["gpt-4"] = maxTokensGPT4;
modelLimit["gpt-3.5-turbo"] = maxTokensGPT35Turbo;
-modelLimit["text-davinci-003"] = maxTokensDavinci;
-modelLimit["text-davinci-002"] = maxTokensDavinci;
modelLimit["text-curie-001"] = maxTokensAdaBabbageCurie;
modelLimit["text-babbage-001"] = maxTokensAdaBabbageCurie;
modelLimit["text-ada-001"] = maxTokensAdaBabbageCurie;
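The `modelLimit` map above ties each model to its context window; one way such a lookup is typically used is to cap a requested completion length so that prompt plus completion stays inside the window. A sketch under the assumption of a `clampMaxTokens` helper (not part of this diff):

```typescript
// Context-window sizes from the diff above (in tokens).
const modelLimit: Record<string, number> = {
  "gpt-4-1106-preview": 128000,
  "gpt-3.5-turbo-1106": 16385,
  "gpt-4": 8192,
  "gpt-3.5-turbo": 4096,
};

// Hypothetical helper: the token count of the prompt plus the
// requested completion tokens must not exceed the model's limit.
function clampMaxTokens(model: string, promptTokens: number, requested: number): number {
  const limit = modelLimit[model] ?? 4096; // conservative fallback (assumption)
  return Math.max(0, Math.min(requested, limit - promptTokens));
}
```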
@@ -120,7 +122,10 @@ export default function Command() {
setIsLoading(true);
try {
const completion: gptCompletion =
-formRequest.model === "gpt-3.5-turbo" || formRequest.model === "gpt-4"
+formRequest.model === "gpt-3.5-turbo" ||
+formRequest.model === "gpt-4" ||
+formRequest.model === "gpt-4-1106-preview" ||
+formRequest.model === "gpt-3.5-turbo-1106"
? await openai.createChatCompletion({
model: formRequest.model,
messages: [
@@ -146,7 +151,10 @@
});
await showToast({ title: "Answer Received" });
const response =
-formRequest.model === "gpt-3.5-turbo" || formRequest.model === "gpt-4"
+formRequest.model === "gpt-3.5-turbo" ||
+formRequest.model === "gpt-4" ||
+formRequest.model === "gpt-4-1106-preview" ||
+formRequest.model === "gpt-3.5-turbo-1106"
? `\n\n${completion.data.choices[0].message.content}`
: completion.data.choices[0].text;
setTextPrompt(textPrompt + response);
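The repeated four-way model check in this hunk exists because the two endpoints return differently shaped choices: chat completions nest the answer under `choices[0].message.content`, while legacy completions expose `choices[0].text`. A sketch of that distinction with simplified types (illustrative, not the openai SDK's own):

```typescript
// Simplified response-choice shapes for the two endpoints.
type ChatChoice = { message: { content: string } };
type TextChoice = { text: string };

// The `in` check narrows the union, mirroring the ternary in the diff.
function extractAnswer(choice: ChatChoice | TextChoice): string {
  return "message" in choice ? choice.message.content : choice.text;
}
```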
4 changes: 2 additions & 2 deletions extensions/openai-gpt/src/info-messages.ts
@@ -4,11 +4,11 @@ We generally recommend altering this or top_p but not both.
Default: 0.7`;

-export const model = `The model which will generate the completion. Some models are more suitable for certain tasks than others. "gpt-4" is the most powerful one, but "gpt-3.5-turbo" is cheaper, faster, and almost as capable`;
+export const model = `The model which will generate the completion. Some models are more suitable for certain tasks than others. "gpt-4-1106-preview" is the most powerful one, but "gpt-3.5-turbo-1106" is cheaper, faster, and almost as capable`;

export const maxTokens = `The maximum number of tokens to generate in the completion.
-The token count of your prompt plus this parameter cannot exceed the model's context length. "text-davinci-002" and "text-davinci-003" models have a context length of 4000 tokens, while the others have 2048.
+The token count of your prompt plus this parameter cannot exceed the model's context length. Please consult OpenAI documentation for the token limits.
Default: 256`;

