Add back api docs (#62)
samuelcolvin authored Nov 18, 2024
1 parent 3c3041f commit cd47b0f
Showing 24 changed files with 200 additions and 98 deletions.
5 changes: 3 additions & 2 deletions .github/workflows/ci.yml
@@ -59,8 +59,9 @@ jobs:
enable-cache: true

- run: uv sync --python 3.12 --frozen --group docs
- run: make docs
if: github.repository_owner != 'pydantic'

# always build docs to check it works without insiders packages
- run: make docs

- run: make docs-insiders
if: github.repository_owner == 'pydantic'
7 changes: 5 additions & 2 deletions Makefile
@@ -71,6 +71,9 @@ docs-serve:
.docs-insiders-install:
ifeq ($(shell uv pip show mkdocs-material | grep -q insiders && echo 'installed'), installed)
@echo 'insiders packages already installed'
else ifeq ($(PPPR_TOKEN),)
@echo "Error: PPPR_TOKEN is not set, can't install insiders packages"
@exit 1
else
@echo 'installing insiders packages...'
@uv pip install -U mkdocs-material mkdocstrings-python \
@@ -79,11 +82,11 @@ endif

.PHONY: docs-insiders # Build the documentation using insiders packages
docs-insiders: .docs-insiders-install
uv run --no-sync mkdocs build
uv run --no-sync mkdocs build -f mkdocs.insiders.yml

.PHONY: docs-serve-insiders # Build and serve the documentation using insiders packages
docs-serve-insiders: .docs-insiders-install
uv run --no-sync mkdocs serve
uv run --no-sync mkdocs serve -f mkdocs.insiders.yml

.PHONY: cf-pages-build # Install uv, install dependencies and build the docs, used on CloudFlare Pages
cf-pages-build:
26 changes: 4 additions & 22 deletions docs/agents.md
@@ -11,7 +11,7 @@ The [`Agent`][pydantic_ai.Agent] class is well documented, but in essence you can
* One or more [retrievers](#retrievers) — functions that the LLM may call to get information while generating a response
* An optional structured [result type](results.md) — the structured datatype the LLM must return at the end of a run
* A [dependency](dependencies.md) type constraint — system prompt functions, retrievers and result validators may all use dependencies when they're run
* Agents may optionally also have a default [model](models/index.md) associated with them; the model to use can also be defined when running the agent
* Agents may optionally also have a default [model](api/models/base.md) associated with them; the model to use can also be defined when running the agent

In typing terms, agents are generic in their dependency and result types, e.g. an agent which requires `#!python Foobar` dependencies and returns results of type `#!python list[str]` would have type `#!python Agent[Foobar, list[str]]`.
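The generic shape can be sketched with hypothetical stand-in types (this is not the real `Agent` implementation, just an illustration of the typing):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

AgentDeps = TypeVar('AgentDeps')
ResultData = TypeVar('ResultData')


class Agent(Generic[AgentDeps, ResultData]):
    """Stand-in for pydantic_ai.Agent, generic over dependency and result types."""


@dataclass
class Foobar:
    """Hypothetical dependency type."""
    x: int


# An agent requiring Foobar dependencies and returning list[str] results:
agent: Agent[Foobar, list[str]] = Agent()
```

Type checkers can then flag a run whose dependencies or result type don't match the agent's type parameters.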

@@ -350,7 +350,9 @@ agent.run_sync('hello', model=FunctionModel(print_schema))

_(This example is complete, it can be run "as is")_

The return type of a retriever can be any valid JSON object ([`JsonData`][pydantic_ai.dependencies.JsonData]): some models (e.g. Gemini) support semi-structured return values, while others (e.g. OpenAI) expect text but seem to be just as good at extracting meaning from the data. If a Python object is returned and the model expects a string, the value will be serialized to JSON.
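That last step can be sketched with plain stdlib code (this is an illustration of the idea, not PydanticAI internals; `get_user` is a hypothetical retriever):

```python
import json


def get_user(user_id: int) -> dict:
    """Hypothetical retriever returning structured JSON data."""
    return {'id': user_id, 'name': 'Alice', 'tags': ['admin']}


result = get_user(1)

# For a model that expects text, a non-string return value
# can be serialized to JSON before being sent back:
payload = result if isinstance(result, str) else json.dumps(result)
print(payload)  # {"id": 1, "name": "Alice", "tags": ["admin"]}
```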

If a retriever has a single parameter that can be represented as an object in JSON schema (e.g. dataclass, TypedDict, pydantic model), the schema for the retriever is simplified to be just that object. (TODO example)
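A rough sketch of what that simplification means, using plain Pydantic (assuming Pydantic v2; this is not the framework's internal code):

```python
from pydantic import BaseModel


class Location(BaseModel):
    city: str
    country: str


# A retriever like `def weather(loc: Location) -> str` has a single
# object-shaped parameter, so the tool schema can simply be the
# schema of that object rather than a wrapper with one field:
schema = Location.model_json_schema()
print(schema['type'])                # object
print(sorted(schema['properties']))  # ['city', 'country']
```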

## Reflection and self-correction

@@ -478,23 +480,3 @@ else:
1. Define a retriever that will raise `ModelRetry` repeatedly in this case.

_(This example is complete, it can be run "as is")_

## API Reference

::: pydantic_ai.Agent
options:
members:
- __init__
- run
- run_sync
- run_stream
- model
- override_deps
- override_model
- last_run_messages
- system_prompt
- retriever_plain
- retriever_context
- result_validator

::: pydantic_ai.exceptions
17 changes: 17 additions & 0 deletions docs/api/agent.md
@@ -0,0 +1,17 @@
# `pydantic_ai.Agent`

::: pydantic_ai.Agent
options:
members:
- __init__
- run
- run_sync
- run_stream
- model
- override_deps
- override_model
- last_run_messages
- system_prompt
- retriever_plain
- retriever_context
- result_validator
3 changes: 3 additions & 0 deletions docs/api/dependencies.md
@@ -0,0 +1,3 @@
# `pydantic_ai.dependencies`

::: pydantic_ai.dependencies
3 changes: 3 additions & 0 deletions docs/api/exceptions.md
@@ -0,0 +1,3 @@
# `pydantic_ai.exceptions`

::: pydantic_ai.exceptions
17 changes: 17 additions & 0 deletions docs/api/messages.md
@@ -0,0 +1,17 @@
# `pydantic_ai.messages`

::: pydantic_ai.messages
options:
members:
- Message
- SystemPrompt
- UserPrompt
- ToolReturn
- RetryPrompt
- ModelAnyResponse
- ModelTextResponse
- ModelStructuredResponse
- ToolCall
- ArgsJson
- ArgsObject
- MessagesTypeAdapter
File renamed without changes.
2 changes: 1 addition & 1 deletion docs/models/function.md → docs/api/models/function.md
@@ -1,3 +1,3 @@
# FunctionModel
# `pydantic_ai.models.function`

::: pydantic_ai.models.function
2 changes: 1 addition & 1 deletion docs/models/gemini.md → docs/api/models/gemini.md
@@ -1,3 +1,3 @@
# Gemini
# `pydantic_ai.models.gemini`

::: pydantic_ai.models.gemini
2 changes: 1 addition & 1 deletion docs/models/openai.md → docs/api/models/openai.md
@@ -1,3 +1,3 @@
# OpenAI
# `pydantic_ai.models.openai`

::: pydantic_ai.models.openai
2 changes: 1 addition & 1 deletion docs/models/test.md → docs/api/models/test.md
@@ -1,3 +1,3 @@
# TestModel
# `pydantic_ai.models.test`

::: pydantic_ai.models.test
10 changes: 10 additions & 0 deletions docs/api/result.md
@@ -0,0 +1,10 @@
# `pydantic_ai.result`

::: pydantic_ai.result
options:
inherited_members: true
members:
- ResultData
- RunResult
- StreamedRunResult
- Cost
4 changes: 0 additions & 4 deletions docs/dependencies.md
@@ -342,7 +342,3 @@ The following examples demonstrate how to use dependencies in PydanticAI:
- [Weather Agent](examples/weather-agent.md)
- [SQL Generation](examples/sql-gen.md)
- [RAG](examples/rag.md)

## API Reference

::: pydantic_ai.dependencies
Binary file added docs/img/logfire-weather-agent.png
26 changes: 26 additions & 0 deletions docs/logfire.md
@@ -0,0 +1,26 @@
# Monitoring and Performance

Applications that use LLMs have some challenges that are well known and understood: LLMs are **slow**, **unreliable** and **expensive**.
These applications also have some challenges that most developers have encountered much less often: they're **fickle** and **non-deterministic**. Subtle changes in a prompt can completely change a model's performance, and there's no `EXPLAIN` query you can run to understand why.

From a software engineer's point of view, you can think of LLMs as the worst database you've ever heard of, but worse.

To build successful applications with LLMs, we need new tools to understand both model performance, and the behavior of applications that rely on them.

LLM observability tools that just let you understand how your model is performing are useless: making API calls to an LLM is easy; it's building that into an application that's hard.

## Pydantic Logfire

[Pydantic Logfire](https://pydantic.dev/logfire) is an observability platform from the developers of Pydantic and PydanticAI, that aims to let you understand your entire application: Gen AI, classic predictive AI, HTTP traffic, database queries and everything else a modern application needs.

!!! note "Pydantic Logfire is a commercial product"
Logfire is a commercially supported, hosted platform with an extremely generous and perpetual free tier.
You can sign up and start using Logfire in a couple of minutes.

PydanticAI has built-in (but optional) support for Logfire via the [`logfire-api`](https://github.com/pydantic/logfire/tree/main/logfire-api) no-op package.

That means if the `logfire` package is installed, detailed information about agent runs is sent to Logfire. But if the `logfire` package is not installed, there's no overhead and nothing is sent.
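The no-op pattern behind this can be sketched roughly as follows (an illustration of the idea, not the actual `logfire-api` package code):

```python
from contextlib import contextmanager

try:
    import logfire  # real package: spans are recorded and sent to Logfire
except ImportError:
    class _NoOpLogfire:
        """Fallback with the same surface API but zero overhead."""

        @contextmanager
        def span(self, name, **attributes):
            yield  # do nothing

    logfire = _NoOpLogfire()
```

Instrumented code like `with logfire.span('agent run'): ...` then works identically whether or not `logfire` is installed.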

Here's an example showing details of running the [Weather Agent](examples/weather-agent.md) in Logfire:

![Weather Agent Logfire](img/logfire-weather-agent.png)
24 changes: 3 additions & 21 deletions docs/message-history.md
@@ -25,7 +25,7 @@ and [`StreamedRunResult`][pydantic_ai.result.StreamedRunResult] (returned by [`A

Example of accessing methods on a [`RunResult`][pydantic_ai.result.RunResult] :

```python title="Accessing messages from a RunResult" hl_lines="9 12"
```python title="run_result_messages.py" hl_lines="10 28"
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')
@@ -73,7 +73,7 @@ _(This example is complete, it can be run "as is")_

Example of accessing methods on a [`StreamedRunResult`][pydantic_ai.result.StreamedRunResult] :

```python title="Accessing messages from a StreamedRunResult" hl_lines="7 13"
```python title="streamed_run_result_messages.py" hl_lines="9 31"
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')
@@ -142,7 +142,7 @@ To use existing messages in a run, pass them to the `message_history` parameter
[`all_messages()`][pydantic_ai.result.RunResult.all_messages] or [`new_messages()`][pydantic_ai.result.RunResult.new_messages].


```py title="Reusing messages in a conversation" hl_lines="8 11"
```py title="Reusing messages in a conversation" hl_lines="9 13"
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')
@@ -236,21 +236,3 @@ print(result2.all_messages())
## Examples

For a more complete example of using messages in conversations, see the [chat app](examples/chat-app.md) example.

## API Reference

::: pydantic_ai.messages
options:
members:
- Message
- SystemPrompt
- UserPrompt
- ToolReturn
- RetryPrompt
- ModelAnyResponse
- ModelTextResponse
- ModelStructuredResponse
- ToolCall
- ArgsJson
- ArgsObject
- MessagesTypeAdapter
41 changes: 27 additions & 14 deletions docs/results.md
@@ -1,28 +1,41 @@
## Ending runs

TODO
**TODO**

* runs end when either a plain text response is received or the model calls a tool associated with one of the structured result types
* example
* we should add `message_limit` (number of model messages) and `cost_limit` to `run()` etc.

## Structured result validation

**TODO**

* structured results (like retrievers) use Pydantic, Pydantic builds the JSON schema and does the validation
* PydanticAI tries hard to simplify the schema, this means:
* if the return type is `str` or a union including `str`, plain text responses are enabled
* if the schema is a union (after removing `str` from the members), each member is registered as its own tool call
* if the schema is not an object, the result type is wrapped in a single element object
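The last point can be illustrated with plain Pydantic (the wrapping logic here is an assumed sketch, not the framework's actual code):

```python
from pydantic import TypeAdapter

# A result type like list[str] has a non-object JSON schema...
inner = TypeAdapter(list[str]).json_schema()
print(inner)  # {'items': {'type': 'string'}, 'type': 'array'}

# ...so it can be wrapped in a single-element object for the tool call:
wrapped = {
    'type': 'object',
    'properties': {'response': inner},
    'required': ['response'],
}
```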

## Result validator functions

TODO
**TODO**

* Some validation is inconvenient or impossible to do in Pydantic validators, in particular when the validation requires IO and is asynchronous. PydanticAI provides a way to add validation functions via the [`agent.result_validator`][pydantic_ai.Agent.result_validator] decorator.
* example

## Streamed Results

TODO
**TODO**

## Cost
Streamed responses provide a unique challenge:
* validating the partial result is both practically and semantically complex, but Pydantic can do this
* we don't know if a result will be the final result of a run until we start streaming it, so PydanticAI has to start streaming just enough of the response to sniff out if it's the final response, then either stream the rest of the response to call a retriever, or return an object that lets the rest of the response be streamed by the user
* examples including: streaming text, streaming validated data, streaming the raw data to do validation inside a try/except block when necessary
* explanation of how streamed responses are "debounced"
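The "debounced" point can be sketched as grouping chunks that arrive close together (an illustrative asyncio sketch, not PydanticAI's implementation):

```python
import asyncio


async def debounced(source, interval: float = 0.05):
    """Yield lists of items, flushing when no new item arrives within `interval`."""
    it = source.__aiter__()
    batch = []
    while True:
        try:
            batch.append(await asyncio.wait_for(it.__anext__(), timeout=interval))
        except asyncio.TimeoutError:
            if batch:  # quiet period: flush what we have so far
                yield batch
                batch = []
        except StopAsyncIteration:
            if batch:
                yield batch
            return


async def chunks():
    """Hypothetical stream of response fragments."""
    for part in ['The answer', ' is', ' 42']:
        yield part


async def main():
    async for batch in debounced(chunks()):
        print(''.join(batch))


asyncio.run(main())
```

Batching like this means validation runs once per quiet period rather than once per token.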

TODO
## Cost

## API Reference
**TODO**

::: pydantic_ai.result
options:
inherited_members: true
members:
- ResultData
- RunResult
- StreamedRunResult
- Cost
* counts tokens, not dollars
* example
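A sketch of what such a token-count object might look like (illustrative; the field names are assumptions, not necessarily the real `Cost` API):

```python
from dataclasses import dataclass


@dataclass
class Cost:
    """Token counts for one or more model calls: tokens, not dollars."""

    request_tokens: int = 0
    response_tokens: int = 0

    @property
    def total_tokens(self) -> int:
        return self.request_tokens + self.response_tokens

    def __add__(self, other: 'Cost') -> 'Cost':
        # Costs from successive runs can be summed into a running total.
        return Cost(
            self.request_tokens + other.request_tokens,
            self.response_tokens + other.response_tokens,
        )


total = Cost(50, 12) + Cost(70, 25)
print(total.total_tokens)  # 157
```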
35 changes: 35 additions & 0 deletions mkdocs.insiders.yml
@@ -0,0 +1,35 @@
INHERIT: mkdocs.yml

markdown_extensions:
- tables
- admonition
- attr_list
- md_in_html
- pymdownx.details
- pymdownx.caret
- pymdownx.critic
- pymdownx.mark
- pymdownx.superfences
- pymdownx.snippets
- pymdownx.tilde
- pymdownx.inlinehilite
- pymdownx.highlight:
pygments_lang_class: true
- pymdownx.extra:
pymdownx.superfences:
custom_fences:
- name: mermaid
class: mermaid
format: !!python/name:pymdownx.superfences.fence_code_format
- pymdownx.emoji:
emoji_index: !!python/name:material.extensions.emoji.twemoji
emoji_generator: !!python/name:material.extensions.emoji.to_svg
- pymdownx.tabbed:
alternate_style: true
- pymdownx.tasklist:
custom_checkbox: true
- sane_lists # this means you can start a list from any number
- material.extensions.preview:
targets:
include:
- '*'
23 changes: 13 additions & 10 deletions mkdocs.yml
@@ -18,12 +18,7 @@ nav:
- results.md
- message-history.md
- testing-evals.md
- Models:
- models/index.md
- models/openai.md
- models/gemini.md
- models/test.md
- models/function.md
- logfire.md
- Examples:
- examples/index.md
- examples/pydantic-model.md
@@ -33,6 +28,17 @@ nav:
- examples/stream-markdown.md
- examples/stream-whales.md
- examples/chat-app.md
- API Reference:
- api/agent.md
- api/result.md
- api/messages.md
- api/dependencies.md
- api/exceptions.md
- api/models/base.md
- api/models/openai.md
- api/models/gemini.md
- api/models/test.md
- api/models/function.md

extra:
# hide the "Made with Material for MkDocs" message
@@ -120,10 +126,6 @@ markdown_extensions:
- pymdownx.tasklist:
custom_checkbox: true
- sane_lists # this means you can start a list from any number
- material.extensions.preview:
targets:
include:
- '*'

watch:
- pydantic_ai
@@ -132,6 +134,7 @@ watch:
plugins:
- search
- social
- glightbox
- mkdocstrings:
handlers:
python: