fix(doc): fixed all broken links #2121

Merged · 1 commit · Oct 15, 2024
4 changes: 2 additions & 2 deletions docs/blog/main.mdx
@@ -111,7 +111,7 @@ We're excited to announce two major features this week:

1. We've integrated [RAGAS evaluators](https://docs.ragas.io/) into agenta. Two new evaluators have been added: **RAG Faithfulness** (measuring how consistent the LLM output is with the context) and **Context Relevancy** (assessing how relevant the retrieved context is to the question). Both evaluators use intermediate outputs within the trace to calculate the final score.

-[Check out the tutorial](evaluation/rag_evaluators) to learn how to use RAG evaluators.
+[Check out the tutorial](/evaluation/evaluators/rag-evaluators) to learn how to use RAG evaluators.
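
For readers landing on the corrected tutorial link, here is a minimal, hedged sketch of what these evaluators compute under the hood, using RAGAS directly (assuming its 0.1-style `evaluate` API; inside Agenta the trace values are wired in for you):

```python
# Hedged sketch: scoring faithfulness with RAGAS directly, outside Agenta.
# Assumes the 0.1-style `ragas.evaluate` API and an OPENAI_API_KEY in the env.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness

sample = Dataset.from_dict({
    "question": ["Recommend a good sci-fi movie."],
    "answer": ["Dune (2021) is a strong pick for epic sci-fi."],
    "contexts": [["Dune (2021) is an epic science-fiction film directed by Denis Villeneuve."]],
})

# Faithfulness checks each claim in `answer` against `contexts`, returning a 0-1 score.
result = evaluate(sample, metrics=[faithfulness])
print(result)  # e.g. {'faithfulness': 1.0}
```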

{" "}

@@ -259,7 +259,7 @@ config = agenta.get_config(base_id="xxxxx", environment="production", cache_time

```

-You can find additional documentation [here](/prompt_management/integrating).
+You can find additional documentation [here](/prompt-management/integration/how-to-integrate-with-agenta).
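
The hunk context above truncates the `get_config` call; a fuller sketch of the fetch pattern the new link documents might look like this (the `cache_timeout` name and the `init()` setup are assumptions, not taken from the diff):

```python
# Hedged sketch of fetching a deployed configuration with the Agenta SDK.
# `cache_timeout` is an assumed completion of the truncated `cache_time...`.
import agenta

agenta.init()  # assumes host/API key are configured via environment variables

config = agenta.get_config(
    base_id="xxxxx",           # placeholder id, as in the doc snippet
    environment="production",  # fetch the config deployed to production
    cache_timeout=300,         # assumed: re-fetch at most every 300 seconds
)
print(config)
```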

**Improvements**

Expand Down
4 changes: 2 additions & 2 deletions docs/docs/evaluation/evaluators/05-rag-evaluators.mdx
@@ -124,9 +124,9 @@ On the evaluators page, click on `RAG Faithfulness`, for example.
RAG evaluators, based on [RAGAS](https://docs.ragas.io/), are different from
other evaluators in Agenta in that they often require internal variables. For
instance,
-[Faithfulness](https://docs.ragas.io/en/stable/concepts/metrics/faithfulness.html)
+[Faithfulness](https://docs.ragas.io/en/stable/concepts/metrics/available_metrics/faithfulness/)
and [Context
-Relevancy](https://docs.ragas.io/en/stable/concepts/metrics/context_precision.html)
+Relevancy](https://docs.ragas.io/en/stable/concepts/metrics/available_metrics/context_precision/)
both require `question`, `answer`, and `contexts`.

From the trace we saw before, we could say that `answer` maps to the second rag summarizer output report, denoted by `rag.summarizer[1].outputs.report`. Similarly, the `contexts` map to the rag retriever output movies, denoted by `rag.retriever.outputs.movies`.
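
To make the mapping concrete, a hypothetical evaluator configuration following the paragraph above could look like this (the `question` path is an assumption; the other two come from the trace described):

```python
# Illustrative mapping of evaluator inputs to trace keys (names hypothetical).
evaluator_settings = {
    "answer":   "rag.summarizer[1].outputs.report",  # second summarizer's report
    "contexts": "rag.retriever.outputs.movies",      # retrieved movies as context
    "question": "rag.inputs.question",               # assumed path for the user question
}
```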
8 changes: 4 additions & 4 deletions docs/docs/getting-started/01-introduction.mdx
@@ -18,12 +18,12 @@ management and evaluation**.

### With Agenta, you can:

-1. Rapidly [**experiment** and **compare** prompts](/prompt_management/prompt_engineering) on [any LLM workflow](/prompt_management/setting_up/custom_applications) (chain-of-prompts, Retrieval Augmented Generation (RAG), LLM agents...)
-2. Rapidly [**create test sets**](/evaluation/test_sets) and **golden datasets** for evaluation
+1. Rapidly [**experiment** and **compare** prompts](/prompt-management/overview) on [any LLM workflow](/prompt-management/creating-a-custom-template) (chain-of-prompts, Retrieval Augmented Generation (RAG), LLM agents...)
+2. Rapidly [**create test sets**](/evaluation/create-test-sets) and **golden datasets** for evaluation
3. **Evaluate** your application with pre-existing or **custom evaluators**
4. **Annotate** and **A/B test** your applications with **human feedback**
5. [**Collaborate with product teams**](/misc/team_management) for prompt engineering and evaluation
-6. [**Deploy your application**](/prompt_management/deployment) in one click from the UI, through the CLI, or through GitHub workflows.
+6. [**Deploy your application**](/concepts/concepts#environments) in one click from the UI, through the CLI, or through GitHub workflows.

Agenta focuses on shortening the development cycle of LLM applications by speeding up experimentation.

@@ -33,7 +33,7 @@ Agenta focuses on increasing the speed of the development cycle of LLM applicati

Agenta enables prompt engineering and evaluation on any LLM app architecture, such as **Chain of Prompts**, **RAG**, or **LLM agents**. It is compatible with any framework like **Langchain** or **LlamaIndex**, and works with any model provider, such as **OpenAI**, **Cohere**, or **local models**.

-[Jump here](/prompt_management/setting_up/custom_applications) to see how to use your own custom application with Agenta and [here](/guides/how_does_agenta_work) to learn more about how Agenta works.
+[Jump here](/prompt-management/creating-a-custom-template) to see how to use your own custom application with Agenta and [here](/concepts/architecture) to learn more about how Agenta works.

### Enable collaboration between developers and product teams

4 changes: 2 additions & 2 deletions docs/docs/getting-started/02-quick-start.mdx
@@ -5,7 +5,7 @@ description: "Create and deploy your first LLM app in one minute"

:::note
This tutorial helps users create LLM apps using templates within the UI. For more complex applications involving code
-in Agenta, please refer to [Using code in Agenta](/prompt_management/setting_up/custom_applications){" "}
+in Agenta, please refer to [Using code in Agenta](/prompt-management/creating-a-custom-template){" "}
:::

Want a video tutorial instead? We have a 4-minute video for you. [Watch it here](https://youtu.be/plPVrHXQ-DU).
@@ -76,6 +76,6 @@ You can now find the API endpoint in the "Endpoints" menu. Copy and paste the co

:::info
Congratulations! You've created your first LLM application. Feel free to modify it, explore its parameters, and discover
-Agenta's features. Your next steps could include [building an application using your own code](/prompt_management/custom_applications),
+Agenta's features. Your next steps could include [building an application using your own code](/prompt-management/creating-a-custom-template),
or following one of our UI-based tutorials.
:::
6 changes: 3 additions & 3 deletions docs/docs/prompt-management/01-overview.mdx
@@ -81,7 +81,7 @@ Agenta enables you to version the entire **configuration** of the LLM app as a u
<DocCard
item={{
type: "link",
href: "/prompt-management/prompt-engineering-in-the-playground",
href: "/prompt-management/using-the-playground",
label: "Using the playground",
description: "Perform prompt engineering in the playground",
}}
@@ -97,7 +97,7 @@ Agenta enables you to version the entire **configuration** of the LLM app as a u
<DocCard
item={{
type: "link",
href: "/prompt-management/how-to-publish-a-prompt",
href: "/prompt-management/quick-start#2-publish-a-variant",
label: "Publishing a prompt",
description: "How to publish a prompt to an endpoint from the web UI.",
}}
@@ -108,7 +108,7 @@ Agenta enables you to version the entire **configuration** of the LLM app as a u
<DocCard
item={{
type: "link",
href: "/prompt-management/how-to-use-a-prompt",
href: "/prompt-management/quick-start#3-integrate-with-your-code",
label: "How to use a prompt",
description: "How to use a published a prompt in your code",
}}
4 changes: 2 additions & 2 deletions docs/docs/prompt-management/02-quick-start.mdx
@@ -7,7 +7,7 @@ title: "Quick Start"
In this tutorial, we will **create a prompt** in the web UI, **publish** it to a deployment, and **integrate** it with our codebase.

:::note
-If you want to do this whole process programmatically, jump to [this guide](/prompt-management/prompt-management-from-sdk)
+If you want to do this whole process programmatically, jump to [this guide](/prompt-management/integration/how-to-integrate-with-agenta)
:::

## 1. Create a prompt
@@ -95,4 +95,4 @@ Optionally, you may want to revert to a previously published commit. For this c

## Next steps

-Now that you've created and published your first prompt, you can learn how to do [prompt engineering in the playground](/prompt_management/prompt_engineering) or dive deeper into [the capabilities of the prompt management SDK](/prompt-management/the-sdk)
+Now that you've created and published your first prompt, you can learn how to do [prompt engineering in the playground](/prompt-management/using-the-playground) or dive deeper into [the capabilities of the prompt management SDK](/prompt-management/creating-a-custom-template)
@@ -8,7 +8,7 @@ Agenta comes with several pre-built template LLM applications for common use cas
This guide will show you how to create a custom application and use it with Agenta.

:::tip
-We recommend reading ["How does Agenta work"](/guides/how_does_agenta_work) beforehand to familiarize yourself with the main concepts of Agenta.
+We recommend reading ["How does Agenta work"](/concepts/architecture) beforehand to familiarize yourself with the main concepts of Agenta.
:::

## How to create a custom application in Agenta?
@@ -13,7 +13,7 @@ In addition to prompt management, agenta provides observability features.
If you're using Agenta [as a proxy](#2-as-a-middleware--model-proxy), all your calls are traced automatically without any additional setup. However, if you're using Agenta [as a prompt management system](#1-as-a-prompt-management-system) (i.e., only fetching the prompts), you need to integrate observability manually into your codebase. You can learn how to do this [here](/observability/quickstart).
:::
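
Where manual integration is needed, a hedged sketch of instrumenting a function might look like the following (the `ag.instrument` decorator name is assumed from the observability quickstart, not shown in this diff):

```python
# Hedged sketch: manually instrumenting your code when Agenta is used only
# as a prompt management system. Verify names against /observability/quickstart.
import agenta as ag

ag.init()  # assumes API key/host come from environment variables

@ag.instrument()  # assumed: records this function's inputs/outputs as a trace span
def answer_question(question: str) -> str:
    # ... call your LLM here using the fetched prompt ...
    return "stubbed answer"

print(answer_question("Recommend a sci-fi movie."))
```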

-### [1. As a prompt management system](/prompt-management/integration/02-fetch-prompts):
+### [1. As a prompt management system](/prompt-management/integration/fetch-prompts):

In this approach, prompts are managed and stored in the Agenta backend. You use the Agenta SDK to fetch the latest deployed version of your prompt and use it in your application.

@@ -33,7 +33,7 @@ In this approach, prompts are managed and stored in the Agenta backend. You use
alt="A sequence diagram showing how to integrate with Agenta as a prompt management system"
/>

-### **[2. As a middleware / model proxy](/prompt-management/integration/03-proxy-calls)**:
+### **[2. As a middleware / model proxy](/prompt-management/integration/proxy-calls)**:

In this setup, Agenta provides you with an endpoint that forwards requests to the LLM on your behalf.
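
As a rough, hypothetical sketch of this proxy pattern (the endpoint URL and payload shape below are illustrative placeholders, not the documented API):

```python
# Hedged sketch: calling a hypothetical Agenta proxy endpoint, which forwards
# the request to the underlying LLM and traces the call automatically.
import os
import requests

response = requests.post(
    "https://cloud.agenta.ai/api/<app-slug>/generate",  # hypothetical URL
    headers={"Authorization": f"Bearer {os.environ['AGENTA_API_KEY']}"},
    json={"inputs": {"question": "Recommend a sci-fi movie."}},
    timeout=30,
)
print(response.json())
```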

2 changes: 1 addition & 1 deletion docs/docs/reference/api/agenta-backend.info.mdx
@@ -21,7 +21,7 @@ import Export from "@theme/ApiExplorer/Export";
</span>

<Export
url={"https://raw.githubusercontent.com/PaloAltoNetworks/docusaurus-template-openapi-docs/main/examples/agenta.yaml"}
url={"https://raw.githubusercontent.com/Agenta-AI/agenta/refs/heads/main/docs/docs/reference/openapi.json"}
proxy={undefined}
>

2 changes: 1 addition & 1 deletion docs/docs/self-host/deploy_remotly/host-on-gcp.mdx
@@ -47,7 +47,7 @@ Steps 2 through 6 can be skipped if you already have a project with billing enabled

To SSH into the instance, you need to:

-1. Uncomment these lines in the [Terraform instance file](https://github.com/Agenta-AI/agenta/blob/main/self-host/gcp/agenta_instance.tf)
+1. Uncomment these lines in the [Terraform instance file](https://github.com/Agenta-AI/agenta/blob/main/self-host/gcp/agenta-instance.tf)

```bash
metadata = {
2 changes: 1 addition & 1 deletion docs/docusaurus.config.ts
@@ -249,7 +249,7 @@ const config: Config = {
specPath: "docs/reference/openapi.json",
outputDir: "docs/reference/api",
downloadUrl:
"https://raw.githubusercontent.com/PaloAltoNetworks/docusaurus-template-openapi-docs/main/examples/agenta.yaml",
"https://raw.githubusercontent.com/Agenta-AI/agenta/refs/heads/main/docs/docs/reference/openapi.json",
sidebarOptions: {
groupPathsBy: "tag",
categoryLinkSource: "tag",