
feat: Code for function calling using o1/o3 models #152

Merged

merged 3 commits into main on Feb 27, 2025
Conversation

dhruvj07
Contributor

No description provided.

@gemini-code-assist (bot) left a comment


Hello @dhruvj07, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

This pull request introduces code for function calling using OpenAI's o1/o3 models within the Javelin SDK. It adds a new Jupyter notebook example demonstrating synchronous and asynchronous OpenAI endpoints integrated with Javelin. The notebook initializes both synchronous and asynchronous OpenAI clients, registers them with the Javelin client, and then calls various OpenAI endpoints such as chat completions, completions, embeddings, and streaming. The example shows how to use the Javelin SDK to monitor and manage OpenAI API calls.
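
As a rough sketch of the flow just described (not the notebook's exact code): the OpenAI client construction below uses the standard openai package, while the JavelinConfig, JavelinClient, and register_openai names are assumptions inferred from this summary, so the exact Javelin SDK registration API may differ.

import os

from openai import OpenAI, AsyncOpenAI
from javelin_sdk import JavelinClient, JavelinConfig  # assumed import path

# Initialize the synchronous and asynchronous OpenAI clients.
openai_client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
openai_async_client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Register both clients with Javelin so their calls are routed and monitored.
# JavelinConfig/JavelinClient/register_openai are assumed names; check the
# Javelin SDK documentation for the exact registration API.
config = JavelinConfig(
    base_url=os.getenv("JAVELIN_BASE_URL"),
    javelin_api_key=os.getenv("JAVELIN_API_KEY"),
)
javelin_client = JavelinClient(config)
javelin_client.register_openai(openai_client, route_name="openai")
javelin_client.register_openai(openai_async_client, route_name="openai")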

Highlights

  • Function Calling with OpenAI Models: Implements function calling using OpenAI's o1/o3 models, enabling the use of tools and functions within chat completions (a sketch follows this list).
  • Jupyter Notebook Example: Provides a comprehensive Jupyter notebook demonstrating synchronous and asynchronous OpenAI endpoint integration with Javelin.
  • OpenAI Endpoint Integration: Demonstrates the usage of various OpenAI endpoints including chat completions, completions, embeddings, and streaming within the Javelin framework.
  • Asynchronous OpenAI Client: Includes an example of using an asynchronous OpenAI client to perform chat completions.
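
As an illustration of the function-calling highlight (a sketch, not the notebook's code): the tool definition and the get_weather name below are hypothetical, and the model name is a placeholder for whichever o1/o3 model the notebook's route exposes.

import os
from openai import OpenAI

openai_client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))  # as in the earlier sketch

# A hypothetical tool the model may choose to call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = openai_client.chat.completions.create(
    model="o3-mini",  # placeholder; substitute the o1/o3 model used in the notebook
    messages=[{"role": "user", "content": "What's the weather in Paris today?"}],
    tools=tools,
)

# If the model decided to call the tool, the structured call appears here.
print(response.choices[0].message.tool_calls)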

Changelog

  • examples/openai/openai-azure-fun_calling.ipynb
    • Added a new Jupyter notebook demonstrating synchronous and asynchronous OpenAI endpoints integrated with Javelin.
    • Initialized synchronous and asynchronous OpenAI clients.
    • Registered OpenAI clients with the Javelin client.
    • Called OpenAI chat completions endpoint with a simple query.
    • Demonstrated OpenAI completions endpoint.
    • Showcased OpenAI embeddings endpoint.
    • Implemented OpenAI streaming endpoint.
    • Included an asynchronous OpenAI chat completions example (a sketch of these endpoint calls follows this list).
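
A condensed sketch of the endpoint calls listed above, using the client names from the earlier sketch; the model names are placeholders, not necessarily those used in the notebook.

import asyncio
import os

from openai import OpenAI, AsyncOpenAI

openai_client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
openai_async_client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Embeddings endpoint.
embeddings = openai_client.embeddings.create(
    model="text-embedding-3-small",  # placeholder model name
    input=["What is machine learning?"],
)
print(len(embeddings.data[0].embedding))

# Streaming chat completions endpoint.
stream = openai_client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": "Explain streaming in one sentence."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

# Asynchronous chat completions endpoint.
async def async_chat() -> None:
    response = await openai_async_client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": "What is machine learning?"}],
    )
    print(response.choices[0].message.content)

asyncio.run(async_chat())  # inside a notebook, use: await async_chat()
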
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review of the pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the root of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its suggestions are incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.


Did you know?

The name 'Jupyter' is a combination of Julia, Python, and R, which are the core programming languages initially supported by the Jupyter project.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces code for function calling using o1/o3 models in an OpenAI notebook. The notebook demonstrates synchronous and asynchronous OpenAI endpoints integrated with Javelin. Overall, the code provides a good starting point for users to explore function calling with Javelin and OpenAI. However, there are a few areas that could be improved for clarity and error handling.

Summary of Findings

  • Error Handling: The notebook currently displays a traceback due to an APIConnectionError. This needs to be addressed to ensure the notebook runs smoothly for users.
  • API Key Security: The notebook prints the OpenAI API key to stdout, which is a security risk. This should be removed.
  • Clarity of comments: Some comments could be improved to better explain the purpose of the code.

Assessment

The pull request introduces a valuable feature by demonstrating function calling with o1/o3 models in an OpenAI notebook integrated with Javelin. However, the presence of an APIConnectionError traceback and the insecure printing of the OpenAI API key are significant issues that need to be addressed before merging. Additionally, improving the clarity of comments would enhance the user experience. I recommend addressing these comments before requesting a review from someone else. Please also ensure that others review and approve this code before merging.

"\n",
"# Create OpenAI client using the API key from the environment variable\n",
"openai_api_key = os.getenv(\"OPENAI_API_KEY\")\n",
"print(openai_api_key)\n",


critical

Printing the OpenAI API key to stdout is a security risk. Remove this line to prevent accidental exposure of the key.
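
A minimal sketch of the safer pattern: read the key from the environment and fail fast if it is missing, without ever echoing its value.

import os

# Load the key without printing it; fail early with an actionable message if unset.
openai_api_key = os.getenv("OPENAI_API_KEY")
if not openai_api_key:
    raise RuntimeError("OPENAI_API_KEY is not set; export it before running this cell.")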

Comment on lines +37 to +40
"outputs": [
{
"name": "stdout",
"output_type": "stream",


high

The notebook is throwing an APIConnectionError. It would be useful to add error handling so this case fails gracefully and the user gets a more informative message. It would also be useful to add a comment explaining the purpose of this code block.

# Call the chat completions endpoint through the Javelin-registered OpenAI client,
# surfacing connection errors instead of letting the notebook crash.
try:
    chat_completions = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "What is machine learning?"}],
    )
    print(chat_completions.model_dump_json(indent=2))
except Exception as e:
    print(f"Error calling chat completions: {e}")

@dhruvj07 dhruvj07 merged commit d59b63c into main Feb 27, 2025
5 of 6 checks passed