---
output: github_document
---
<!-- README.md is generated from README.Rmd. Please edit that file -->
```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "man/figures/README-",
  out.width = "100%"
)
```
# rtiktoken
<!-- badges: start -->
[![R-CMD-check](https://github.com/DavZim/rtiktoken/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/DavZim/rtiktoken/actions/workflows/R-CMD-check.yaml)
[![CRAN status](https://www.r-pkg.org/badges/version/rtiktoken)](https://CRAN.R-project.org/package=rtiktoken)
<!-- badges: end -->
`{rtiktoken}` is a thin wrapper around [`tiktoken-rs`](https://github.com/zurawiki/tiktoken-rs) (and in turn around [OpenAI's Python library `tiktoken`](https://github.com/openai/tiktoken)).
It provides functions to encode text into tokens used by OpenAI's models and decode tokens back into text using [BPE](https://en.wikipedia.org/wiki/Byte_pair_encoding) tokenizers.
It is also useful for counting the number of tokens in a text, for example to estimate how expensive a call to OpenAI's API would be.
Note that all the tokenization happens offline and no internet connection is required.
Another use-case is to compute similarity scores between texts using tokens.
Other use-cases can be found in OpenAI's cookbook [How to Count Tokens with `tiktoken`](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb).
To verify the outputs of the functions, see also [OpenAI's Tokenizer Platform](https://platform.openai.com/tokenizer).
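The token-based similarity use-case mentioned above can be sketched as follows. This is a minimal illustration using a Jaccard index over sets of token IDs; `jaccard_similarity` is a hypothetical helper written for this sketch, not part of the package API:

``` r
library(rtiktoken)

# hypothetical helper (not part of rtiktoken): Jaccard similarity
# between two texts, computed on their sets of token IDs
jaccard_similarity <- function(a, b, model = "gpt-4o") {
  # unlist() so this works whether get_tokens() returns a vector or a list
  ta <- unique(unlist(get_tokens(a, model)))
  tb <- unique(unlist(get_tokens(b, model)))
  length(intersect(ta, tb)) / length(union(ta, tb))
}

jaccard_similarity("The quick brown fox", "The quick red fox")
```

Because tokenization happens offline, such comparisons are cheap and require no API calls.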
## Installation
You can install `rtiktoken` like so:
``` r
# Dev version
# install.packages("devtools")
# devtools::install_github("DavZim/rtiktoken")
# CRAN version
install.packages("rtiktoken")
```
## Example
```{r example}
library(rtiktoken)
# 1. Encode text into tokens
text <- c(
"Hello World, this is a text that we are going to use in rtiktoken!",
"Note that the functions are vectorized! Yay!"
)
tokens <- get_tokens(text, "gpt-4o")
tokens
# 2. Decode tokens back into text
decoded_text <- decode_tokens(tokens, "gpt-4o")
decoded_text
# Note that decoding is not guaranteed to reproduce the identical text,
# as parts of the text may be dropped if no matching token is found.
identical(text, decoded_text)
# 3. Count the number of tokens in a text
n_tokens <- get_token_count(text, "gpt-4o")
n_tokens
```
### Models & Tokenizers
The different OpenAI models use different tokenizers (see also the [source code of `tiktoken-rs`](https://github.com/zurawiki/tiktoken-rs/blob/main/tiktoken-rs/src/tokenizer.rs) for a full list).
The following models use the following tokenizers (note that all functions of this package accept both model names and tokenizer names):
| Model Name | Tokenizer Name |
|------------|----------------|
| GPT-4o models | `o200k_base` |
| ChatGPT models, e.g., `text-embedding-ada-002`, `gpt-3.5-turbo`, `gpt-4-` | `cl100k_base` |
| Code models, e.g., `text-davinci-002`, `text-davinci-003` | `p50k_base` |
| Edit models, e.g., `text-davinci-edit-001`, `code-davinci-edit-001` | `p50k_edit` |
| GPT-3 models, e.g., `davinci` | `r50k_base` or `gpt2` |
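Since both model names and tokenizer names are accepted, the following two calls should be interchangeable (a small sketch based on the table above):

``` r
library(rtiktoken)

# the model name and its corresponding tokenizer name
# should yield the same token count
get_token_count("Hello World", "gpt-4o")
get_token_count("Hello World", "o200k_base")
```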
## Development
`rtiktoken` is built using `extendr` and `Rust`.
To build the package, you need to have `Rust` installed on your machine.
```r
# compile the Rust code and regenerate the extendr wrapper files
rextendr::document()
# regenerate the R documentation
devtools::document()
```