Add support for thinking LLMs directly in gr.ChatInterface #10305

Merged: 98 commits, Jan 10, 2025
cc5c2cb
ungroup thoughts from messages
hannahblair Dec 18, 2024
773db7f
rename messagebox to thought
hannahblair Dec 18, 2024
2b8da48
refactor
hannahblair Dec 18, 2024
281ff2a
* add metadata typing
hannahblair Dec 19, 2024
915439e
tweaks
hannahblair Dec 19, 2024
134a7b7
tweak
hannahblair Dec 19, 2024
462f394
add changeset
gradio-pr-bot Dec 19, 2024
894d5bd
Merge branch 'thought-ui' of github.com:gradio-app/gradio into though…
hannahblair Dec 19, 2024
56fcac2
fix expanded rotation
hannahblair Dec 19, 2024
ea9eaf3
border radius
hannahblair Dec 19, 2024
1e731b7
update thought design
hannahblair Jan 6, 2025
bde1c69
move spinner
hannahblair Jan 6, 2025
983059a
Merge branch 'main' into thought-ui
hannahblair Jan 7, 2025
9aedecc
Merge branch 'main' into thought-ui
hannahblair Jan 7, 2025
8d2aa02
prevent circular reference
hannahblair Jan 7, 2025
653f66e
revert border removal
hannahblair Jan 7, 2025
220804c
css tweaks
hannahblair Jan 7, 2025
e40f69a
border tweak
hannahblair Jan 7, 2025
9d4e00a
move chevron to the left
hannahblair Jan 7, 2025
bd47db8
tweak nesting logic
hannahblair Jan 7, 2025
811429d
thought group spacing
hannahblair Jan 7, 2025
fa2caa4
update run.py
hannahblair Jan 7, 2025
1cdaf40
icon changes
abidlabs Jan 7, 2025
1d2129d
format
abidlabs Jan 7, 2025
cc9ef78
add changeset
gradio-pr-bot Jan 7, 2025
cc31a37
Merge branch 'main' into thought-ui
abidlabs Jan 7, 2025
259a8ea
add nested thought demo
abidlabs Jan 7, 2025
4e117eb
changes
abidlabs Jan 7, 2025
ebafced
changes
abidlabs Jan 7, 2025
b928677
changes
abidlabs Jan 7, 2025
bc1ba8c
add demo
abidlabs Jan 7, 2025
105009b
docs
abidlabs Jan 7, 2025
e77db20
guide
abidlabs Jan 7, 2025
87d79b5
refactor styles and clean up logic
hannahblair Jan 7, 2025
8914bc5
Merge branch 'thought-ui' of github.com:gradio-app/gradio into though…
hannahblair Jan 8, 2025
863a794
revert demo change and and deeper nested thought to demo
hannahblair Jan 8, 2025
0469e48
add optional duration to message types
hannahblair Jan 8, 2025
340d980
add nested thoughts story
hannahblair Jan 8, 2025
39e3e0c
format
hannahblair Jan 8, 2025
6fa2005
guide
abidlabs Jan 8, 2025
e4c6fc2
Merge branch 'thought-ui' into thought-ci
abidlabs Jan 8, 2025
ce07e83
change dropdown icon button
hannahblair Jan 8, 2025
0f7515a
remove requirement for id's in nested thoughts
hannahblair Jan 8, 2025
93972a8
support markdown in thought title
hannahblair Jan 8, 2025
7de8b1d
get thought content in copied value
hannahblair Jan 8, 2025
02eefbc
add funcs to utils
hannahblair Jan 8, 2025
fc993c4
move is_all_text
hannahblair Jan 8, 2025
e85b242
remove comment
hannahblair Jan 8, 2025
519351a
Merge branch 'main' into thought-ui
hannahblair Jan 8, 2025
af59ddc
Merge branch 'main' into thought-ui
hannahblair Jan 8, 2025
9cf679d
Merge branch 'thought-ui' into thought-ci
abidlabs Jan 8, 2025
7bf0e7d
notebook
abidlabs Jan 8, 2025
57afd35
Merge branch 'thought-ui' into thought-ci
abidlabs Jan 8, 2025
8f85a10
change bot padding
hannahblair Jan 8, 2025
7c0bb48
Merge branch 'thought-ui' into thought-ci
abidlabs Jan 8, 2025
cc224db
changes
abidlabs Jan 8, 2025
c8c6668
Merge branch 'thought-ci' of github.com:gradio-app/gradio into though…
abidlabs Jan 8, 2025
8ed7aa7
changes
abidlabs Jan 8, 2025
5c77033
changes
abidlabs Jan 8, 2025
0846df2
panel css fix
hannahblair Jan 8, 2025
52615f9
Merge branch 'thought-ui' into thought-ci
abidlabs Jan 8, 2025
7a8b4e9
changes
abidlabs Jan 8, 2025
2b0f98e
Merge branch 'thought-ci' of github.com:gradio-app/gradio into though…
abidlabs Jan 8, 2025
b2a23a3
changes
abidlabs Jan 8, 2025
294ebe6
changes
abidlabs Jan 8, 2025
2811955
changes
abidlabs Jan 8, 2025
369d7a3
tweak thought content opacity
hannahblair Jan 8, 2025
fc05ff3
more changes
abidlabs Jan 8, 2025
ef41371
Merge branch 'main' into thought-ci
abidlabs Jan 8, 2025
74ac11e
add changeset
gradio-pr-bot Jan 8, 2025
ea0300f
changes
abidlabs Jan 8, 2025
54b20c5
Merge branch 'thought-ci' of github.com:gradio-app/gradio into though…
abidlabs Jan 8, 2025
86062fe
restore
abidlabs Jan 9, 2025
1cd931e
changes
abidlabs Jan 9, 2025
dc95cbe
changes
abidlabs Jan 9, 2025
430a6ba
revert everythign
abidlabs Jan 9, 2025
94d2053
revert everythign
abidlabs Jan 9, 2025
2290fdf
revert
abidlabs Jan 9, 2025
ec7ea8e
changes
abidlabs Jan 9, 2025
793fc4d
revert
abidlabs Jan 9, 2025
82f77e5
make changes to demo
abidlabs Jan 9, 2025
dd21c41
notebooks
abidlabs Jan 9, 2025
2b180c3
more docs
abidlabs Jan 9, 2025
75b74ae
format
abidlabs Jan 9, 2025
1d74544
changes
abidlabs Jan 9, 2025
eea54bf
changes
abidlabs Jan 9, 2025
fd0969c
update demo
abidlabs Jan 9, 2025
085ee60
fix typing issues
abidlabs Jan 9, 2025
8f0dbce
chatbot
abidlabs Jan 9, 2025
d1a0d76
document chatmessage helper class
aliabd Jan 9, 2025
edb7b40
add changeset
gradio-pr-bot Jan 9, 2025
0c4cfb0
Merge branches 'thought-ci' and 'thought-ci' of github.com:gradio-app…
hannahblair Jan 9, 2025
e061be1
changes
abidlabs Jan 9, 2025
8ad9dbd
format
abidlabs Jan 9, 2025
edb057b
docs
abidlabs Jan 9, 2025
8607dcc
Merge branch 'main' into thought-ci
abidlabs Jan 9, 2025
19c2b1e
fix issue with chatmessage
aliabd Jan 9, 2025
e229dfa
Merge branch 'thought-ci' of https://github.com/gradio-app/gradio int…
aliabd Jan 9, 2025
7 changes: 7 additions & 0 deletions .changeset/rich-ducks-grow.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,7 @@
---
"@gradio/chatbot": minor
"gradio": minor
"website": minor
---

feat:Add support for thinking LLMs directly in `gr.ChatInterface`
1 change: 1 addition & 0 deletions demo/chatinterface_nested_thoughts/run.ipynb
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": "302934307671667531413257853548643485645", "metadata": {}, "source": ["# Gradio Demo: chatinterface_nested_thoughts"]}, {"cell_type": "code", "execution_count": null, "id": "272996653310673477252411125948039410165", "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": "288918539441861185822528903084949547379", "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "from gradio import ChatMessage\n", "import time\n", "\n", "sleep_time = 0.1\n", "long_sleep_time = 1\n", "\n", "def generate_response(message, history):\n", "    start_time = time.time()\n", "    responses = [\n", "        ChatMessage(\n", "            content=\"In order to find the current weather in San Francisco, I will need to use my weather tool.\",\n", "        )\n", "    ]\n", "    yield responses\n", "    time.sleep(sleep_time)\n", "\n", "    main_thought = ChatMessage(\n", "        content=\"\",\n", "        metadata={\"title\": \"Using Weather Tool\", \"id\": 1, \"status\": \"pending\"},\n", "    )\n", "\n", "    responses.append(main_thought)\n", "\n", "    yield responses\n", "    time.sleep(long_sleep_time)\n", "    responses[-1].content = \"Will check: weather.com and sunny.org\"\n", "    yield responses\n", "    time.sleep(sleep_time)\n", "    responses.append(\n", "        ChatMessage(\n", "            content=\"Received weather from weather.com.\",\n", "            metadata={\"title\": \"Checking weather.com\", \"parent_id\": 1, \"id\": 2, \"duration\": 0.05},\n", "        )\n", "    )\n", "    yield responses\n", "\n", "    sunny_start_time = time.time()\n", "    time.sleep(sleep_time)\n", "    sunny_thought = ChatMessage(\n", "        content=\"API Error when connecting to sunny.org \ud83d\udca5\",\n", "        metadata={\"title\": \"Checking sunny.org\", \"parent_id\": 1, \"id\": 3, \"status\": \"pending\"},\n", "    )\n", "\n", "    responses.append(sunny_thought)\n", "    yield responses\n", "\n", "    time.sleep(sleep_time)\n", "    responses.append(\n", "        ChatMessage(\n", "            content=\"Failed again\",\n", "            metadata={\"title\": \"I will try again\", \"id\": 4, \"parent_id\": 3, \"duration\": 0.1},\n", "\n", "        )\n", "    )\n", "    sunny_thought.metadata[\"status\"] = \"done\"\n", "    sunny_thought.metadata[\"duration\"] = time.time() - sunny_start_time\n", "\n", "    main_thought.metadata[\"status\"] = \"done\"\n", "    main_thought.metadata[\"duration\"] = time.time() - start_time\n", "\n", "    yield responses\n", "\n", "    time.sleep(long_sleep_time)\n", "\n", "    responses.append(\n", "        ChatMessage(\n", "            content=\"Based on the data only from weather.com, the current weather in San Francisco is 60 degrees and sunny.\",\n", "        )\n", "    )\n", "    yield responses\n", "\n", "demo = gr.ChatInterface(\n", "    generate_response,\n", "    type=\"messages\",\n", "    title=\"Nested Thoughts Chat Interface\",\n", "    examples=[\"What is the weather in San Francisco right now?\"]\n", ")\n", "\n", "if __name__ == \"__main__\":\n", "    demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
81 changes: 81 additions & 0 deletions demo/chatinterface_nested_thoughts/run.py
@@ -0,0 +1,81 @@
import gradio as gr
from gradio import ChatMessage
import time

sleep_time = 0.1
long_sleep_time = 1

def generate_response(message, history):
start_time = time.time()
responses = [
ChatMessage(
content="In order to find the current weather in San Francisco, I will need to use my weather tool.",
)
]
yield responses
time.sleep(sleep_time)

main_thought = ChatMessage(
content="",
metadata={"title": "Using Weather Tool", "id": 1, "status": "pending"},
)

responses.append(main_thought)

yield responses
time.sleep(long_sleep_time)
responses[-1].content = "Will check: weather.com and sunny.org"
yield responses
time.sleep(sleep_time)
responses.append(
ChatMessage(
content="Received weather from weather.com.",
metadata={"title": "Checking weather.com", "parent_id": 1, "id": 2, "duration": 0.05},
)
)
yield responses

sunny_start_time = time.time()
time.sleep(sleep_time)
sunny_thought = ChatMessage(
content="API Error when connecting to sunny.org 💥",
metadata={"title": "Checking sunny.org", "parent_id": 1, "id": 3, "status": "pending"},
)

responses.append(sunny_thought)
yield responses

time.sleep(sleep_time)
responses.append(
ChatMessage(
content="Failed again",
metadata={"title": "I will try again", "id": 4, "parent_id": 3, "duration": 0.1},

)
)
sunny_thought.metadata["status"] = "done"
sunny_thought.metadata["duration"] = time.time() - sunny_start_time

main_thought.metadata["status"] = "done"
main_thought.metadata["duration"] = time.time() - start_time

yield responses

time.sleep(long_sleep_time)

responses.append(
ChatMessage(
content="Based on the data only from weather.com, the current weather in San Francisco is 60 degrees and sunny.",
)
)
yield responses

demo = gr.ChatInterface(
generate_response,
type="messages",
title="Nested Thoughts Chat Interface",
examples=["What is the weather in San Francisco right now?"]
)

if __name__ == "__main__":
demo.launch()
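The nested-thoughts demo above links thoughts purely through the `id` and `parent_id` keys in each message's metadata. A stdlib-only sketch of how such a flat message list resolves into a tree (`build_thought_tree` is a hypothetical helper for illustration, not part of Gradio):

```python
def build_thought_tree(messages):
    """Group a flat list of message dicts into nested thoughts.

    A message whose metadata carries "parent_id" is attached under the
    message whose metadata "id" matches it; everything else is a root.
    """
    nodes = {}   # id -> node, so later messages can find their parent
    roots = []
    for msg in messages:
        meta = msg.get("metadata", {})
        node = {"title": meta.get("title", msg.get("content", "")), "children": []}
        if "id" in meta:
            nodes[meta["id"]] = node
        parent = nodes.get(meta.get("parent_id"))
        (parent["children"] if parent else roots).append(node)
    return roots

# The same id/parent_id values used in the demo above:
messages = [
    {"metadata": {"title": "Using Weather Tool", "id": 1}},
    {"metadata": {"title": "Checking weather.com", "id": 2, "parent_id": 1}},
    {"metadata": {"title": "Checking sunny.org", "id": 3, "parent_id": 1}},
    {"metadata": {"title": "I will try again", "id": 4, "parent_id": 3}},
]
tree = build_thought_tree(messages)
```

Note this assumes parents are yielded before their children, which is how the demo streams its messages.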
2 changes: 1 addition & 1 deletion demo/chatinterface_options/run.ipynb
@@ -1 +1 @@
{"cells": [{"cell_type": "markdown", "id": "302934307671667531413257853548643485645", "metadata": {}, "source": ["# Gradio Demo: chatinterface_options"]}, {"cell_type": "code", "execution_count": null, "id": "272996653310673477252411125948039410165", "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": "288918539441861185822528903084949547379", "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "import random\n", "\n", "example_code = \"\"\"\n", "Here's an example Python lambda function:\n", "\n", "lambda x: x + {}\n", "\n", "Is this correct?\n", "\"\"\"\n", "\n", "def chat(message, history):\n", " if message == \"Yes, that's correct.\":\n", " return \"Great!\"\n", " else:\n", " return {\n", " \"role\": \"assistant\",\n", " \"content\": example_code.format(random.randint(1, 100)),\n", " \"options\": [\n", " {\"value\": \"Yes, that's correct.\", \"label\": \"Yes\"},\n", " {\"value\": \"No\"}\n", " ]\n", " }\n", "\n", "demo = gr.ChatInterface(\n", " chat,\n", " type=\"messages\",\n", " examples=[\"Write an example Python lambda function.\"]\n", ")\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
{"cells": [{"cell_type": "markdown", "id": "302934307671667531413257853548643485645", "metadata": {}, "source": ["# Gradio Demo: chatinterface_options"]}, {"cell_type": "code", "execution_count": null, "id": "272996653310673477252411125948039410165", "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": "288918539441861185822528903084949547379", "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "import random\n", "\n", "example_code = \"\"\"\n", "Here's an example Python lambda function:\n", "\n", "lambda x: x + {}\n", "\n", "Is this correct?\n", "\"\"\"\n", "\n", "def chat(message, history):\n", " if message == \"Yes, that's correct.\":\n", " return \"Great!\"\n", " else:\n", " return gr.ChatMessage(\n", " content=example_code.format(random.randint(1, 100)),\n", " options=[\n", " {\"value\": \"Yes, that's correct.\", \"label\": \"Yes\"},\n", " {\"value\": \"No\"}\n", " ]\n", " )\n", "\n", "demo = gr.ChatInterface(\n", " chat,\n", " type=\"messages\",\n", " examples=[\"Write an example Python lambda function.\"]\n", ")\n", "\n", "if __name__ == \"__main__\":\n", " demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
11 changes: 5 additions & 6 deletions demo/chatinterface_options/run.py
@@ -13,14 +13,13 @@ def chat(message, history):
if message == "Yes, that's correct.":
return "Great!"
else:
return {
"role": "assistant",
"content": example_code.format(random.randint(1, 100)),
"options": [
return gr.ChatMessage(
content=example_code.format(random.randint(1, 100)),
options=[
{"value": "Yes, that's correct.", "label": "Yes"},
{"value": "No"}
]
}
]
)

demo = gr.ChatInterface(
chat,
1 change: 1 addition & 0 deletions demo/chatinterface_thoughts/run.ipynb
@@ -0,0 +1 @@
{"cells": [{"cell_type": "markdown", "id": "302934307671667531413257853548643485645", "metadata": {}, "source": ["# Gradio Demo: chatinterface_thoughts"]}, {"cell_type": "code", "execution_count": null, "id": "272996653310673477252411125948039410165", "metadata": {}, "outputs": [], "source": ["!pip install -q gradio "]}, {"cell_type": "code", "execution_count": null, "id": "288918539441861185822528903084949547379", "metadata": {}, "outputs": [], "source": ["import gradio as gr\n", "from gradio import ChatMessage\n", "import time\n", "\n", "sleep_time = 0.5\n", "\n", "def simulate_thinking_chat(message, history):\n", "    start_time = time.time()\n", "    response = ChatMessage(\n", "        content=\"\",\n", "        metadata={\"title\": \"_Thinking_ step-by-step\", \"id\": 0, \"status\": \"pending\"}\n", "    )\n", "    yield response\n", "\n", "    thoughts = [\n", "        \"First, I need to understand the core aspects of the query...\",\n", "        \"Now, considering the broader context and implications...\",\n", "        \"Analyzing potential approaches to formulate a comprehensive answer...\",\n", "        \"Finally, structuring the response for clarity and completeness...\"\n", "    ]\n", "\n", "    accumulated_thoughts = \"\"\n", "    for thought in thoughts:\n", "        time.sleep(sleep_time)\n", "        accumulated_thoughts += f\"- {thought}\\n\\n\"\n", "        response.content = accumulated_thoughts.strip()\n", "        yield response\n", "\n", "    response.metadata[\"status\"] = \"done\"\n", "    response.metadata[\"duration\"] = time.time() - start_time\n", "    yield response\n", "\n", "    response = [\n", "        response,\n", "        ChatMessage(\n", "            content=\"Based on my thoughts and analysis above, my response is: This dummy repro shows how thoughts of a thinking LLM can be progressively shown before providing its final answer.\"\n", "        )\n", "    ]\n", "    yield response\n", "\n", "\n", "demo = gr.ChatInterface(\n", "    simulate_thinking_chat,\n", "    title=\"Thinking LLM Chat Interface \ud83e\udd14\",\n", "    type=\"messages\",\n", ")\n", "\n", "if __name__ == \"__main__\":\n", "    demo.launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
49 changes: 49 additions & 0 deletions demo/chatinterface_thoughts/run.py
@@ -0,0 +1,49 @@
import gradio as gr
from gradio import ChatMessage
import time

sleep_time = 0.5

def simulate_thinking_chat(message, history):
start_time = time.time()
response = ChatMessage(
content="",
metadata={"title": "_Thinking_ step-by-step", "id": 0, "status": "pending"}
)
yield response

thoughts = [
"First, I need to understand the core aspects of the query...",
"Now, considering the broader context and implications...",
"Analyzing potential approaches to formulate a comprehensive answer...",
"Finally, structuring the response for clarity and completeness..."
]

accumulated_thoughts = ""
for thought in thoughts:
time.sleep(sleep_time)
accumulated_thoughts += f"- {thought}\n\n"
response.content = accumulated_thoughts.strip()
yield response

response.metadata["status"] = "done"
response.metadata["duration"] = time.time() - start_time
yield response

response = [
response,
ChatMessage(
content="Based on my thoughts and analysis above, my response is: This dummy repro shows how thoughts of a thinking LLM can be progressively shown before providing its final answer."
)
]
yield response


demo = gr.ChatInterface(
simulate_thinking_chat,
title="Thinking LLM Chat Interface 🤔",
type="messages",
)

if __name__ == "__main__":
demo.launch()
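The loop in this demo rebuilds `response.content` from an accumulated string before every yield. The same streaming pattern in isolation, with Gradio and the sleeps stripped out (the function name is illustrative):

```python
def stream_thoughts(thoughts):
    # Yield the progressively growing bullet list, mirroring how the demo
    # mutates response.content before each yield.
    accumulated = ""
    for thought in thoughts:
        accumulated += f"- {thought}\n\n"
        yield accumulated.strip()

# Each yielded value contains all thoughts seen so far.
steps = list(stream_thoughts(["understand the query", "draft an answer"]))
```

Because the same `ChatMessage` object is mutated in place, the UI re-renders one thought bubble whose body grows, rather than appending a new message per step.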
7 changes: 7 additions & 0 deletions gradio/chat_interface.py
@@ -6,6 +6,7 @@

import builtins
import copy
import dataclasses
import inspect
import os
import warnings
@@ -32,6 +33,7 @@
get_component_instance,
)
from gradio.components.chatbot import (
ChatMessage,
ExampleMessage,
Message,
MessageDict,
@@ -808,6 +810,11 @@ def _message_as_message_dict(
for msg in message:
if isinstance(msg, Message):
message_dicts.append(msg.model_dump())
elif isinstance(msg, ChatMessage):
msg.role = role
message_dicts.append(
dataclasses.asdict(msg, dict_factory=utils.dict_factory)
)
elif isinstance(msg, (str, Component)):
message_dicts.append({"role": role, "content": msg})
elif (
26 changes: 21 additions & 5 deletions gradio/components/chatbot.py
@@ -37,6 +37,8 @@ class MetadataDict(TypedDict):
title: Union[str, None]
id: NotRequired[int | str]
parent_id: NotRequired[int | str]
duration: NotRequired[float]
status: NotRequired[Literal["pending", "done"]]
Member Author: Note that I moved `duration` inside of `metadata`, where I think it makes more sense. I also added a `status` for more control over the loading spinner, as it improves the UI a bit (cc @yvrjsharma)

Collaborator: Yep, that makes sense!
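The change discussed here — `duration` and `status` now living inside `metadata` — can be illustrated with plain dicts matching the `MetadataDict` shape (the values below are made up):

```python
# A thought starts out "pending", which shows the loading spinner.
pending = {"title": "Using Weather Tool", "id": 1, "status": "pending"}

# When it finishes, it is marked "done" with an elapsed time in seconds;
# duration is a float, per the review discussion below.
done = {**pending, "status": "done", "duration": 1.5}
```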



class Option(TypedDict):
@@ -59,7 +61,6 @@ class MessageDict(TypedDict):
role: Literal["user", "assistant", "system"]
metadata: NotRequired[MetadataDict]
options: NotRequired[list[Option]]
duration: NotRequired[int]


class FileMessage(GradioModel):
@@ -87,14 +88,21 @@ class Metadata(GradioModel):
title: Optional[str] = None
id: Optional[int | str] = None
parent_id: Optional[int | str] = None
duration: Optional[float] = None
Collaborator: Should it be float or int? It's an int on line 40.

Member Author: It can be a float in the general case, I'll update the other reference!

status: Optional[Literal["pending", "done"]] = None

def __setitem__(self, key: str, value: Any) -> None:
setattr(self, key, value)

def __getitem__(self, key: str) -> Any:
return getattr(self, key)


class Message(GradioModel):
role: str
metadata: Metadata = Field(default_factory=Metadata)
content: Union[str, FileMessage, ComponentMessage]
options: Optional[list[Option]] = None
duration: Optional[int] = None


class ExampleMessage(TypedDict):
@@ -110,13 +118,22 @@ class ExampleMessage(TypedDict):
] # list of file paths or URLs to be added to chatbot when example is clicked


@document()
@dataclass
class ChatMessage:
role: Literal["user", "assistant", "system"]
"""
A dataclass to represent a message in the Chatbot component (type="messages").
Parameters:
content: The content of the message. Can be a string or a Gradio component.
role: The role of the message, which determines the alignment of the message in the chatbot. Can be "user", "assistant", or "system". Defaults to "assistant".
metadata: The metadata of the message, which is used to display intermediate thoughts / tool usage. Should be a dictionary with the following keys: "title" (required to display the thought), and optionally: "id" and "parent_id" (to nest thoughts), "duration" (to display the duration of the thought), "status" (to display the status of the thought).
options: The options of the message. A list of Option objects, which are dictionaries with the following keys: "label" (the text to display in the option), and optionally "value" (the value to return when the option is selected if different from the label).
"""

content: str | FileData | Component | FileDataDict | tuple | list
role: Literal["user", "assistant", "system"] = "assistant"
metadata: MetadataDict | Metadata = field(default_factory=Metadata)
options: Optional[list[Option]] = None
duration: Optional[int] = None


class ChatbotDataMessages(GradioRootModel):
@@ -545,7 +562,6 @@ def _postprocess_message_messages(
content=message.content, # type: ignore
metadata=message.metadata, # type: ignore
options=message.options,
duration=message.duration,
)
elif isinstance(message, Message):
return message
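The `__setitem__`/`__getitem__` pair added to `Metadata` above is what lets the demos write `thought.metadata["status"] = "done"` on a model instance as if it were a dict. The mechanism in miniature (class names here are stand-ins, not Gradio's):

```python
class DictStyle:
    # Route item access to attribute access, as the PR does for Metadata.
    def __setitem__(self, key, value):
        setattr(self, key, value)

    def __getitem__(self, key):
        return getattr(self, key)

class Meta(DictStyle):
    def __init__(self):
        self.status = "pending"
        self.duration = None

meta = Meta()
meta["status"] = "done"   # equivalent to meta.status = "done"
meta["duration"] = 0.5
```

This keeps dict-style demo code working even though `Metadata` is a model with typed fields rather than a plain dict.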
2 changes: 1 addition & 1 deletion gradio/monitoring_dashboard.py
@@ -60,7 +60,7 @@ def gen_plot(start, end, selected_fn):
if selected_fn != "All":
df = df[df["function"] == selected_fn]
df = df[(df["time"] >= start) & (df["time"] <= end)]
df["time"] = pd.to_datetime(df["time"], unit="s")
df["time"] = pd.to_datetime(df["time"], unit="s") # type: ignore

unique_users = len(df["session_hash"].unique()) # type: ignore
total_requests = len(df)
13 changes: 13 additions & 0 deletions gradio/utils.py
@@ -1609,3 +1609,16 @@ def get_icon_path(icon_name: str) -> str:
set_static_paths(icon_path)
return icon_path
raise ValueError(f"Icon file not found: {icon_name}")


def dict_factory(items):
"""
A utility function to convert a dataclass that includes pydantic fields to a dictionary.
"""
d = {}
for key, value in items:
if hasattr(value, "model_dump"):
d[key] = value.model_dump()
else:
d[key] = value
return d
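`dict_factory` exists because `ChatMessage` is a dataclass whose `metadata` field may hold a pydantic model, and `dataclasses.asdict` alone would leave that model un-serialized. A self-contained sketch of the round trip — `FakeMetadata` and `Msg` are stand-ins for the real pydantic `Metadata` model and `ChatMessage`:

```python
import dataclasses

class FakeMetadata:
    """Stand-in for a pydantic model: exposes model_dump() like Metadata."""
    def __init__(self, title=None, status=None):
        self.title = title
        self.status = status

    def model_dump(self):
        return {"title": self.title, "status": self.status}

def dict_factory(items):
    # Mirrors the utility above: pydantic-style fields are serialized via
    # model_dump(); everything else passes through unchanged.
    d = {}
    for key, value in items:
        d[key] = value.model_dump() if hasattr(value, "model_dump") else value
    return d

@dataclasses.dataclass
class Msg:
    content: str
    metadata: FakeMetadata

m = Msg("hi", FakeMetadata(title="Thinking", status="done"))
result = dataclasses.asdict(m, dict_factory=dict_factory)
```

`asdict` hands `dict_factory` the list of `(field, value)` pairs, so non-dataclass values like the metadata model reach it intact and can be dumped there.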