Leaderboard Update, in sync with BFCL April 27th Release (#391)
As mentioned in #390, this PR fixes inconsistencies in the cost and
latency calculation for open-source models. Both are now measured when
serving each model with
[vLLM](https://github.com/vllm-project/vllm) on 8 V100 GPUs:
$$\text{Cost} = \text{Latency per 1000 function calls} \times \frac{\text{8xV100 Azure pay-as-you-go price per hour}}{3600}$$
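The formula above can be sketched in Python. Note the hourly rate in the
example call is a hypothetical placeholder, not the actual Azure price
used by the leaderboard:

```python
HOURS_TO_SECONDS = 3600

def cost_per_1000_calls(latency_per_1000_calls_s: float,
                        cluster_price_per_hour_usd: float) -> float:
    """Estimated cost (USD) per 1000 function calls.

    latency_per_1000_calls_s: wall-clock seconds to serve 1000 calls.
    cluster_price_per_hour_usd: hourly price of the 8xV100 cluster
    (a hypothetical value is used in the example below).
    """
    # Convert the hourly cluster price to a per-second price, then
    # multiply by the time spent serving 1000 calls.
    return latency_per_1000_calls_s * (cluster_price_per_hour_usd / HOURS_TO_SECONDS)

# Example: 500 s per 1000 calls at a hypothetical $20/hour for 8xV100.
print(round(cost_per_1000_calls(500, 20.0), 2))  # 2.78
```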

We want to thank the community for pointing out this oversight. Thanks to
[@abacaj](https://twitter.com/abacaj) and
[@teknium1](https://twitter.com/Teknium1) for initially raising the
issue, and to [@natikgadzhi](https://twitter.com/natikgadzhi),
[@HamelHusain](https://twitter.com/HamelHusain),
[@nicoritschel](https://twitter.com/nicoritschel),
[@winglian](https://twitter.com/winglian),
[@olafgeibig](https://twitter.com/olafgeibig), and many others for
joining the conversation. We are listening to community feedback and
continuously improving the Berkeley Function Calling Leaderboard.
Discussions like
[this one](https://twitter.com/abacaj/status/1784003306508980250) are
great examples. Let us know what you want us to include next!

This PR DOES change the leaderboard scores for `costs` and `latency`,
but not `accuracy`.

---------

Co-authored-by: Charlie Cheng-Jie Ji
[[email protected]](mailto:[email protected])
Co-authored-by: Fanjia Yan
[[email protected]](mailto:[email protected])
HuanzhiMao authored Apr 27, 2024
1 parent 46e959b commit 2c87d43
Showing 4 changed files with 18 additions and 16 deletions.
Binary file modified assets/img/blog_post_8_gpu_formula.jpg
11 changes: 6 additions & 5 deletions blogs/8_berkeley_function_calling_leaderboard.html
Original file line number Diff line number Diff line change
Expand Up @@ -99,7 +99,7 @@ <h4 class="text-center" style="margin: 0px;">
<p></p>
</h4>
</div>
<b><i style="font-size: 1.0em;">Last updated: 2024-04-19 <a
<b><i style="font-size: 1.0em;">Last updated: 2024-04-27 <a
href="https://github.com/ShishirPatil/gorilla/tree/main/berkeley-function-call-leaderboard#changelog">[Change
Log]</a></i></b>
<br></br>
Expand Down Expand Up @@ -881,7 +881,8 @@ <h5>Executable Function (<code>Non-REST</code>) Evaluation:
<ul>
<li><b>Exact match</b>: The output must exactly match the expected result.
</li>
<li><b>Real-time match</b>: A looser form of exact match that only applies to numerical execution result, where the execution result
<li><b>Real-time match</b>: A looser form of exact match that applies only to
numerical execution results, where the result
must be within a certain percentage threshold (20%) of the
expected result to accommodate live updates of API responses.
</li>
Expand Down Expand Up @@ -1011,14 +1012,14 @@ <h3 id="cost">Cost & Latency</h3>
<li>For models that we evaluate using local hosting (this includes Deepseek,
Gemma, etc.):
<ul>
<li><b>Latency</b>: For ~7B Models, we evaluated on single A100 40GB. For ~40B
Models, we evaluated on 4*A100 40GB. Since we batched and evaluated the
<li><b>Latency</b>: We measured latency when serving the model with vLLM
on 8 V100 GPUs. Since we batched and evaluated the
model, we derive latency by dividing the total time by the number of
evaluation dataset entries.
</li>
<li><b>Cost</b>: Since the open source model does not have a price tag, we
estimate the cost by:
<img src="../assets/img/blog_post_8_gpu_formula.jpg" width="50%">
<img src="../assets/img/blog_post_8_gpu_formula.jpg" width="90%">
</li>
</ul>
</li>
Expand Down
20 changes: 10 additions & 10 deletions data.csv
Original file line number Diff line number Diff line change
Expand Up @@ -3,17 +3,17 @@ Rank,Overall Acc,Model,Model Link,Organization,License,AST Summary,Exec Summary,
2,86.29%,Claude-3-Opus-20240229 (Prompt),https://www.anthropic.com/news/claude-3-family,Anthropic,Proprietary,86.09%,86.66%,86.36%,93.25%,66.00%,72.00%,93.50%,86.00%,78.50%,97.65%,98.00%,97.14%,94.00%,80.00%,75.00%,80.42%,10.85,4.96,1.79,7.43
3,83.94%,GPT-4-turbo-2024-04-09 (Prompt),https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo,OpenAI,Proprietary,86.83%,86.04%,85.82%,93.00%,60.00%,80.00%,94.50%,90.00%,77.00%,97.65%,97.00%,98.57%,94.00%,80.00%,72.50%,62.50%,5.25,2.6,2.34,5.81
4,83.65%,GPT-4-1106-Preview (FC),https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo,OpenAI,Proprietary,84.75%,81.73%,82.00%,90.25%,61.00%,58.00%,91.00%,91.50%,74.50%,89.41%,95.00%,81.43%,92.00%,78.00%,67.50%,80.42%,5.08,6.27,6.17,18.14
5,83.12%,Gorilla-OpenFunctions-v2 (FC),https://gorilla.cs.berkeley.edu/blogs/7_open_functions_v2.html,Gorilla LLM,Apache 2.0,86.16%,81.55%,87.64%,94.25%,67.00%,76.00%,94.50%,87.50%,75.00%,94.71%,94.00%,95.71%,94.00%,70.00%,67.50%,61.25%,1.65,2.57,2.21,6.01
5,83.12%,Gorilla-OpenFunctions-v2 (FC),https://gorilla.cs.berkeley.edu/blogs/7_open_functions_v2.html,Gorilla LLM,Apache 2.0,86.16%,81.55%,87.64%,94.25%,67.00%,76.00%,94.50%,87.50%,75.00%,94.71%,94.00%,95.71%,94.00%,70.00%,67.50%,61.25%,0.31,0.05,N/A,N/A
6,83.00%,GPT-4-0125-Preview (FC),https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo,OpenAI,Proprietary,83.75%,84.76%,80.00%,90.25%,53.00%,52.00%,92.50%,90.00%,72.50%,83.53%,98.00%,62.86%,92.00%,86.00%,77.50%,82.92%,4.83,4.79,5.73,18.58
7,82.24%,Meta-Llama-3-70B-Instruct (Prompt),https://llama.meta.com/llama3,Meta,Meta Llama 3 Community,84.49%,85.90%,81.45%,92.25%,49.00%,60.00%,92.50%,91.00%,73.00%,94.12%,97.00%,90.00%,90.00%,82.00%,77.50%,66.67%,0.07,0.18,N/A,N/A
7,82.24%,Meta-Llama-3-70B-Instruct (Prompt),https://llama.meta.com/llama3,Meta,Meta Llama 3 Community,84.49%,85.90%,81.45%,92.25%,49.00%,60.00%,92.50%,91.00%,73.00%,94.12%,97.00%,90.00%,90.00%,82.00%,77.50%,66.67%,1.1,0.18,N/A,N/A
8,80.88%,GPT-4-turbo-2024-04-09 (FC),https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo,OpenAI,Proprietary,81.70%,78.61%,73.82%,90.00%,33.00%,26.00%,89.50%,89.00%,74.50%,82.94%,93.00%,68.57%,88.00%,76.00%,67.50%,88.75%,4.78,5.48,6.42,19.13
9,80.53%,Claude-3-Sonnet-20240229 (Prompt),https://www.anthropic.com/news/claude-3-family,Anthropic,Proprietary,85.20%,86.76%,81.82%,90.25%,53.00%,72.00%,88.50%,88.00%,82.50%,93.53%,96.00%,90.00%,92.00%,84.00%,77.50%,51.25%,2.13,2.07,1.13,3.1
10,80.00%,Mistral-Medium-2312 (Prompt),https://docs.mistral.ai/guides/model-selection/,Mistral AI,Proprietary,81.44%,73.47%,79.27%,88.50%,54.00%,56.00%,92.50%,84.00%,70.00%,65.88%,96.00%,22.86%,76.00%,82.00%,70.00%,88.33%,1.76,2.81,2.36,6.38
11,78.94%,Functionary-Medium-v2.4 (FC),https://huggingface.co/meetkai/functionary-medium-v2.4,MeetKai,MIT,82.36%,75.71%,79.45%,88.00%,55.00%,60.00%,90.00%,87.50%,72.50%,68.82%,85.00%,45.71%,84.00%,80.00%,70.00%,74.17%,1.63,2.54,2.64,7.38
11,78.94%,Functionary-Medium-v2.4 (FC),https://huggingface.co/meetkai/functionary-medium-v2.4,MeetKai,MIT,82.36%,75.71%,79.45%,88.00%,55.00%,60.00%,90.00%,87.50%,72.50%,68.82%,85.00%,45.71%,84.00%,80.00%,70.00%,74.17%,N/A,2.54,2.64,7.38
12,78.65%,Command-R-Plus (Prompt) (Original),https://txt.cohere.com/command-r-plus-microsoft-azure,Cohere For AI,cc-by-nc-4.0,80.78%,86.24%,81.64%,88.50%,63.00%,64.00%,89.50%,80.00%,72.00%,92.94%,98.00%,85.71%,88.00%,84.00%,80.00%,53.75%,1.9,1.33,0.94,3.25
13,78.65%,Command-R-Plus (Prompt) (Optimized),https://txt.cohere.com/command-r-plus-microsoft-azure,Cohere For AI,cc-by-nc-4.0,80.45%,86.74%,81.82%,88.25%,64.00%,66.00%,88.00%,81.00%,71.00%,92.94%,97.00%,87.14%,90.00%,84.00%,80.00%,54.17%,1.9,1.28,0.93,3.24
14,78.00%,Command-R-Plus (FC) (Optimized),https://txt.cohere.com/command-r-plus-microsoft-azure,Cohere For AI,cc-by-nc-4.0,81.56%,77.17%,78.73%,89.25%,46.00%,60.00%,90.50%,87.50%,69.50%,81.18%,95.00%,61.43%,86.00%,74.00%,67.50%,63.75%,1.06,1.86,1.35,3.98
15,77.82%,Functionary-Small-v2.4 (FC),https://huggingface.co/meetkai/functionary-small-v2.4,MeetKai,MIT,80.00%,76.31%,80.00%,88.75%,56.00%,58.00%,89.00%,82.00%,69.00%,78.24%,96.00%,52.86%,82.00%,80.00%,65.00%,67.92%,1.66,2.58,2.47,7.26
15,77.82%,Functionary-Small-v2.4 (FC),https://huggingface.co/meetkai/functionary-small-v2.4,MeetKai,MIT,80.00%,76.31%,80.00%,88.75%,56.00%,58.00%,89.00%,82.00%,69.00%,78.24%,96.00%,52.86%,82.00%,80.00%,65.00%,67.92%,N/A,2.58,2.47,7.26
16,75.82%,Claude-3-Opus-20240229 (FC tools-2024-04-04),https://www.anthropic.com/news/claude-3-family,Anthropic,Proprietary,70.35%,71.27%,80.91%,87.00%,61.00%,72.00%,91.00%,58.00%,51.50%,90.59%,97.00%,81.43%,94.00%,38.00%,62.50%,82.50%,30.89,13.04,3.89,20.48
17,75.41%,Claude-instant-1.2 (Prompt),https://www.anthropic.com/news/releasing-claude-instant-1-2,Anthropic,Proprietary,76.63%,77.93%,80.00%,87.25%,56.00%,70.00%,86.00%,83.00%,57.50%,84.71%,94.00%,71.43%,80.00%,82.00%,65.00%,57.50%,0.95,1.32,0.65,2.22
18,73.59%,Claude-3-Haiku-20240307 (Prompt),https://www.anthropic.com/news/claude-3-family,Anthropic,Proprietary,77.36%,70.49%,85.45%,94.25%,55.00%,76.00%,92.00%,84.00%,48.00%,92.94%,100.00%,82.86%,94.00%,70.00%,25.00%,34.58%,0.18,1.0,0.49,1.77
Expand All @@ -24,17 +24,17 @@ Rank,Overall Acc,Model,Model Link,Organization,License,AST Summary,Exec Summary,
23,63.47%,Mistral-large-2402 (FC Any),https://docs.mistral.ai/guides/model-selection/,Mistral AI,Proprietary,68.98%,64.93%,82.91%,91.50%,62.00%,56.00%,93.00%,31.50%,68.50%,94.71%,95.00%,94.29%,92.00%,8.00%,65.00%,0.00%,3.94,2.04,1.31,4.88
24,61.00%,GPT-3.5-Turbo-0125 (FC),https://platform.openai.com/docs/models/gpt-3-5-turbo,OpenAI,Proprietary,70.52%,81.38%,57.09%,57.50%,53.00%,62.00%,65.50%,90.00%,69.50%,93.53%,95.00%,91.43%,80.00%,82.00%,70.00%,2.08%,0.43,1.28,0.76,2.49
25,59.88%,Mistral-small-2402 (FC Any),https://docs.mistral.ai/guides/model-selection/,Mistral AI,Proprietary,64.27%,52.62%,81.09%,90.25%,56.00%,58.00%,95.50%,39.00%,41.50%,96.47%,100.00%,91.43%,92.00%,12.00%,10.00%,0.00%,0.96,1.11,0.93,2.8
26,59.24%,Meta-Llama-3-8B-Instruct (Prompt),https://llama.meta.com/llama3,Meta,Meta Llama 3 Community,59.68%,70.01%,58.73%,63.00%,44.00%,54.00%,73.00%,58.50%,48.50%,67.06%,67.00%,67.14%,82.00%,66.00%,65.00%,45.83%,0.02,0.04,N/A,N/A
26,59.24%,Meta-Llama-3-8B-Instruct (Prompt),https://llama.meta.com/llama3,Meta,Meta Llama 3 Community,59.68%,70.01%,58.73%,63.00%,44.00%,54.00%,73.00%,58.50%,48.50%,67.06%,67.00%,67.14%,82.00%,66.00%,65.00%,45.83%,0.24,0.04,N/A,N/A
27,59.18%,Claude-3-Sonnet-20240229 (FC tools-2024-04-04),https://www.anthropic.com/news/claude-3-family,Anthropic,Proprietary,44.06%,43.32%,76.73%,86.00%,49.00%,58.00%,87.50%,6.00%,6.00%,85.29%,96.00%,70.00%,88.00%,0.00%,0.00%,81.67%,3.43,3.32,1.45,6.91
28,58.53%,Hermes-2-Pro-Mistral-7B (FC),https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B,NousResearch,apache-2.0,67.99%,55.62%,71.45%,81.00%,42.00%,54.00%,81.00%,66.50%,53.00%,56.47%,78.00%,25.71%,70.00%,56.00%,40.00%,10.83%,0.15,0.39,N/A,N/A
29,56.24%,Gemini-1.5-Pro (FC),https://deepmind.google/technologies/gemini/#introduction,Google,Proprietary,43.11%,44.26%,81.45%,91.25%,51.00%,64.00%,91.00%,0.00%,0.00%,87.06%,97.00%,72.86%,90.00%,0.00%,0.00%,55.42%,1.51,2.35,3.19,3.78
28,58.53%,Hermes-2-Pro-Mistral-7B (FC),https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B,NousResearch,apache-2.0,67.99%,55.62%,71.45%,81.00%,42.00%,54.00%,81.00%,66.50%,53.00%,56.47%,78.00%,25.71%,70.00%,56.00%,40.00%,10.83%,0.49,0.08,N/A,N/A
29,56.24%,Gemini-1.5-Pro (FC),https://deepmind.google/technologies/gemini/#introduction,Google,Proprietary,43.11%,44.26%,81.45%,91.25%,51.00%,64.00%,91.00%,0.00%,0.00%,87.06%,97.00%,72.86%,90.00%,0.00%,0.00%,55.42%,1.28,2.35,3.19,3.78
30,53.47%,Claude-3-Haiku-20240307 (FC tools-2024-04-04),https://www.anthropic.com/news/claude-3-family,Anthropic,Proprietary,44.69%,46.79%,85.27%,94.25%,60.00%,64.00%,93.00%,0.50%,0.00%,91.18%,96.00%,84.29%,94.00%,2.00%,0.00%,20.83%,0.29,1.54,0.61,2.46
31,52.82%,GPT-4-0613 (FC),https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo,OpenAI,Proprietary,38.53%,38.53%,61.64%,83.50%,4.00%,2.00%,92.50%,0.00%,0.00%,64.12%,95.00%,20.00%,90.00%,0.00%,0.00%,91.67%,10.39,3.41,3.21,10.69
32,52.76%,Gemini-1.0-Pro (FC),https://deepmind.google/technologies/gemini/#introduction,Google,Proprietary,39.58%,35.79%,67.82%,76.75%,37.00%,58.00%,90.50%,0.00%,0.00%,71.18%,74.00%,67.14%,72.00%,0.00%,0.00%,77.50%,0.2,1.16,0.74,1.93
33,52.71%,FireFunction-v1 (FC),https://huggingface.co/fireworks-ai/firefunction-v1,Fireworks,Apache 2.0,39.94%,39.79%,67.27%,86.50%,13.00%,22.00%,92.50%,0.00%,0.00%,71.18%,95.00%,37.14%,88.00%,0.00%,0.00%,73.33%,N/A,1.5,1.51,4.49
34,52.18%,Nexusflow-Raven-v2 (FC),https://huggingface.co/Nexusflow/NexusRaven-V2-13B,Nexusflow,Apache 2.0,55.09%,61.78%,70.36%,76.25%,52.00%,60.00%,75.50%,30.50%,44.00%,64.12%,93.00%,22.86%,82.00%,46.00%,55.00%,2.08%,N/A,1.85,1.39,4.47
35,50.12%,Mistral-tiny-2312 (Prompt),https://docs.mistral.ai/guides/model-selection/,Mistral AI,Proprietary,46.91%,36.16%,49.64%,61.75%,26.00%,0.00%,56.50%,47.50%,34.00%,27.65%,46.00%,1.43%,20.00%,62.00%,35.00%,83.75%,0.13,1.66,1.53,4.99
36,42.76%,Gemma-7b-it (Prompt),https://blog.google/technology/developers/gemma-open-models/,Google,gemma-terms-of-use,39.05%,31.75%,42.18%,47.75%,29.00%,24.00%,48.00%,30.00%,36.00%,30.00%,44.00%,10.00%,32.00%,40.00%,25.00%,70.83%,0.03,0.09,N/A,N/A
37,39.65%,Deepseek-v1.5 (Prompt),https://huggingface.co/deepseek-ai/deepseek-coder-7b-instruct-v1.5,Deepseek,Deepseek License,36.98%,30.89%,38.91%,49.50%,4.00%,24.00%,48.50%,37.00%,23.50%,37.06%,38.00%,35.71%,38.00%,36.00%,12.50%,57.08%,0.45,1.2,N/A,N/A
38,39.53%,Mistral-Small-2402 (Prompt),https://docs.mistral.ai/guides/model-selection/,Mistral AI,Proprietary,37.78%,38.03%,5.64%,5.75%,6.00%,4.00%,8.00%,79.00%,58.50%,34.12%,6.00%,74.29%,20.00%,68.00%,30.00%,98.33%,0.71,1.11,0.95,3.03
36,42.76%,Gemma-7b-it (Prompt),https://blog.google/technology/developers/gemma-open-models/,Google,gemma-terms-of-use,39.05%,31.75%,42.18%,47.75%,29.00%,24.00%,48.00%,30.00%,36.00%,30.00%,44.00%,10.00%,32.00%,40.00%,25.00%,70.83%,0.37,0.06,N/A,N/A
37,39.65%,Deepseek-v1.5 (Prompt),https://huggingface.co/deepseek-ai/deepseek-coder-7b-instruct-v1.5,Deepseek,Deepseek License,36.98%,30.89%,38.91%,49.50%,4.00%,24.00%,48.50%,37.00%,23.50%,37.06%,38.00%,35.71%,38.00%,36.00%,12.50%,57.08%,3.24,0.53,N/A,N/A
38,39.53%,Mistral-Small-2402 (Prompt),https://docs.mistral.ai/guides/model-selection/,Mistral AI,Proprietary,37.78%,38.03%,5.64%,5.75%,6.00%,4.00%,8.00%,79.00%,58.50%,34.12%,6.00%,74.29%,20.00%,68.00%,30.00%,98.33%,2.26,1.11,0.95,3.03
39,23.59%,Mistral-small-2402 (FC Auto),https://docs.mistral.ai/guides/model-selection/,Mistral AI,Proprietary,2.53%,34.37%,1.64%,2.25%,0.00%,0.00%,2.50%,3.00%,3.00%,56.47%,79.00%,24.29%,70.00%,6.00%,5.00%,99.58%,1.95,2.93,1.9,6.23
3 changes: 2 additions & 1 deletion leaderboard.html
Original file line number Diff line number Diff line change
Expand Up @@ -95,7 +95,7 @@ <h2>Leaderboard</h2>
</p>
<div style="margin-bottom: 15px;">
<button id="expand-btn" onclick="toggleExpand()">Expand/Collapse Table</button>
<span style="margin-left: 10px;"><b><i style="font-size: 1.0em;">Last updated: 2024-04-25 <a
<span style="margin-left: 10px;"><b><i style="font-size: 1.0em;">Last updated: 2024-04-27 <a
href="https://github.com/ShishirPatil/gorilla/tree/main/berkeley-function-call-leaderboard#changelog">[Change
Log]</a></i></b></span>
</div>
Expand Down Expand Up @@ -151,6 +151,7 @@ <h2>Leaderboard</h2>
<p>
<b>Cost</b> is calculated as an estimate of the cost per 1000 function calls, in USD.
<b>Latency</b> is measured in seconds.
For <b>Open-Source Models</b>, cost and latency are calculated when serving with <a href="https://github.com/vllm-project/vllm">vLLM</a> on 8 V100 GPUs.
</p>
<p>
<b>AST Summary</b> is the unweighted average of the four test categories under AST Evaluation.
Expand Down
