diff --git a/docs/blog/main.mdx b/docs/blog/main.mdx
index 4f77259937..7cd807ccbf 100644
--- a/docs/blog/main.mdx
+++ b/docs/blog/main.mdx
@@ -111,7 +111,7 @@ We're excited to announce two major features this week:
 1. We've integrated [RAGAS evaluators](https://docs.ragas.io/) into agenta. Two new evaluators have been added: **RAG Faithfulness** (measuring how consistent the LLM output is with the context) and **Context Relevancy** (assessing how relevant the retrieved context is to the question). Both evaluators use intermediate outputs within the trace to calculate the final score.
- [Check out the tutorial](evaluation/rag_evaluators) to learn how to use RAG evaluators.
+ [Check out the tutorial](/evaluation/evaluators/rag-evaluators) to learn how to use RAG evaluators.
 {" "}
@@ -259,7 +259,7 @@ config = agenta.get_config(base_id="xxxxx", environment="production", cache_time
 ```
-You can find additional documentation [here](/prompt_management/integrating).
+You can find additional documentation [here](/prompt-management/integration/how-to-integrate-with-agenta).

 **Improvements**

diff --git a/docs/docs/evaluation/evaluators/05-rag-evaluators.mdx b/docs/docs/evaluation/evaluators/05-rag-evaluators.mdx
index 4563ca3c5a..a8cceb011a 100644
--- a/docs/docs/evaluation/evaluators/05-rag-evaluators.mdx
+++ b/docs/docs/evaluation/evaluators/05-rag-evaluators.mdx
@@ -124,9 +124,9 @@ On the evaluators page, click on `RAG Faithfulness` for example.
 RAG evaluators, based on [RAGAS](https://docs.ragas.io/) are different from other evaluators in Agenta in that they often require internal variables.
 For instance,
-[Faithfulness](https://docs.ragas.io/en/stable/concepts/metrics/faithfulness.html)
+[Faithfulness](https://docs.ragas.io/en/stable/concepts/metrics/available_metrics/faithfulness/)
 and [Context
-Relevancy](https://docs.ragas.io/en/stable/concepts/metrics/context_precision.html)
+Relevancy](https://docs.ragas.io/en/stable/concepts/metrics/available_metrics/context_precision/)
 both require `question`, `answer`, and `contexts`.
 From the trace we saw before, we could say that `answer` maps to the second rag summarizer output report, denoted by `rag.summarizer[1].outputs.report`. Similarly, the `contexts` map to the rag retriever output movies, denoted by `rag.retriever.outputs.movies`.

diff --git a/docs/docs/getting-started/01-introduction.mdx b/docs/docs/getting-started/01-introduction.mdx
index 497ff096b7..57c2282e66 100644
--- a/docs/docs/getting-started/01-introduction.mdx
+++ b/docs/docs/getting-started/01-introduction.mdx
@@ -18,12 +18,12 @@ management and evaluation**.
 ### With Agenta, you can:

-1. Rapidly [**experiment** and **compare** prompts](/prompt_management/prompt_engineering) on [any LLM workflow](/prompt_management/setting_up/custom_applications) (chain-of-prompts, Retrieval Augmented Generation (RAG), LLM agents...)
-2. Rapidly [**create test sets**](/evaluation/test_sets) and **golden datasets** for evaluation
+1. Rapidly [**experiment** and **compare** prompts](/prompt-management/overview) on [any LLM workflow](/prompt-management/creating-a-custom-template) (chain-of-prompts, Retrieval Augmented Generation (RAG), LLM agents...)
+2. Rapidly [**create test sets**](/evaluation/create-test-sets) and **golden datasets** for evaluation
 3. **Evaluate** your application with pre-existing or **custom evaluators**
 4. **Annotate** and **A/B test** your applications with **human feedback**
 5. [**Collaborate with product teams**](/misc/team_management) for prompt engineering and evaluation
-6. [**Deploy your application**](/prompt_management/deployment) in one-click in the UI, through CLI, or through github workflows.
+6. [**Deploy your application**](/concepts/concepts#environments) in one click in the UI, through the CLI, or through GitHub workflows.

 Agenta focuses on increasing the speed of the development cycle of LLM applications by increasing the speed of experimentation.

@@ -33,7 +33,7 @@ Agenta focuses on increasing the speed of the development cycle of LLM applicati
 Agenta enables prompt engineering and evaluation on any LLM app architecture, such as **Chain of Prompts**, **RAG**, or **LLM agents**. It is compatible with any framework like **Langchain** or **LlamaIndex**, and works with any model provider, such as **OpenAI**, **Cohere**, or **local models**.

-[Jump here](/prompt_management/setting_up/custom_applications) to see how to use your own custom application with Agenta and [here](/guides/how_does_agenta_work) to understand more how Agenta works.
+[Jump here](/prompt-management/creating-a-custom-template) to see how to use your own custom application with Agenta and [here](/concepts/architecture) to learn more about how Agenta works.

 ### Enable collaboration between developers and product teams

diff --git a/docs/docs/getting-started/02-quick-start.mdx b/docs/docs/getting-started/02-quick-start.mdx
index b5d8d3019f..8d380965f9 100644
--- a/docs/docs/getting-started/02-quick-start.mdx
+++ b/docs/docs/getting-started/02-quick-start.mdx
@@ -5,7 +5,7 @@ description: "Create and deploy your first LLM app in one minute"
 :::note
 This tutorial helps users create LLM apps using templates within the UI. For more complex applications involving code
-in Agenta, please refer to Using code in Agenta [Using code in agenta](/prompt_management/setting_up/custom_applications){" "}
+in Agenta, please refer to [Using code in Agenta](/prompt-management/creating-a-custom-template){" "}
 :::

 Want a video tutorial instead? We have a 4-minute video for you. [Watch it here](https://youtu.be/plPVrHXQ-DU).

@@ -76,6 +76,6 @@ You can now find the API endpoint in the "Endpoints" menu. Copy and paste the co
 :::info
 Congratulations! You've created your first LLM application. Feel free to modify it, explore its parameters, and discover
-Agenta's features. Your next steps could include [building an application using your own code](/prompt_management/custom_applications),
+Agenta's features. Your next steps could include [building an application using your own code](/prompt-management/creating-a-custom-template),
 or following one of our UI-based tutorials.
 :::

diff --git a/docs/docs/prompt-management/01-overview.mdx b/docs/docs/prompt-management/01-overview.mdx
index bc112418ce..95dbe8a213 100644
--- a/docs/docs/prompt-management/01-overview.mdx
+++ b/docs/docs/prompt-management/01-overview.mdx
@@ -81,7 +81,7 @@ Agenta enables you to version the entire **configuration** of the LLM app as a u
-### **[2. As a middleware / model proxy](/prompt-management/integration/03-proxy-calls)**:
+### **[2. As a middleware / model proxy](/prompt-management/integration/proxy-calls)**:

 In this setup, Agenta provides you with an endpoint that forwards requests to the LLM on your behalf.

diff --git a/docs/docs/reference/api/agenta-backend.info.mdx b/docs/docs/reference/api/agenta-backend.info.mdx
index aa6c266b45..17ebb4ec4a 100644
@@ -21,7 +21,7 @@ import Export from "@theme/ApiExplorer/Export";

diff --git a/docs/docs/self-host/deploy_remotly/host-on-gcp.mdx b/docs/docs/self-host/deploy_remotly/host-on-gcp.mdx
index 4498b32f17..395e15f0ea 100644
--- a/docs/docs/self-host/deploy_remotly/host-on-gcp.mdx
+++ b/docs/docs/self-host/deploy_remotly/host-on-gcp.mdx
@@ -47,7 +47,7 @@ Step 2 until 6 can be skipped if you already have a project with billing enabled
 In order to ssh into the instance you need to:

-1. Uncomment these lines in the [Terraform instance file](https://github.com/Agenta-AI/agenta/blob/main/self-host/gcp/agenta_instance.tf)
+1. Uncomment these lines in the [Terraform instance file](https://github.com/Agenta-AI/agenta/blob/main/self-host/gcp/agenta-instance.tf)

 ```bash
 metadata = {

diff --git a/docs/docusaurus.config.ts b/docs/docusaurus.config.ts
index becc4044f0..6324d7596a 100644
--- a/docs/docusaurus.config.ts
+++ b/docs/docusaurus.config.ts
@@ -249,7 +249,7 @@ const config: Config = {
         specPath: "docs/reference/openapi.json",
         outputDir: "docs/reference/api",
         downloadUrl:
-          "https://raw.githubusercontent.com/PaloAltoNetworks/docusaurus-template-openapi-docs/main/examples/agenta.yaml",
+          "https://raw.githubusercontent.com/Agenta-AI/agenta/refs/heads/main/docs/docs/reference/openapi.json",
         sidebarOptions: {
           groupPathsBy: "tag",
           categoryLinkSource: "tag",
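Note on the RAG-evaluators hunk above: the mapping strings it documents, such as `rag.summarizer[1].outputs.report` and `rag.retriever.outputs.movies`, read as dotted paths into the trace, with `[i]` selecting an element when a span repeats. A minimal sketch of how such a path can be resolved — the nested-dict trace shape and the `resolve` helper here are illustrative assumptions, not Agenta's actual internals:

```python
import re

def resolve(trace: dict, path: str):
    """Walk a dotted path like 'rag.summarizer[1].outputs.report'
    through a nested structure, treating '[i]' as a list index."""
    for part in path.split("."):
        indexed = re.fullmatch(r"(\w+)\[(\d+)\]", part)
        if indexed:
            # 'summarizer[1]' -> key 'summarizer', then list index 1
            trace = trace[indexed.group(1)][int(indexed.group(2))]
        else:
            trace = trace[part]
    return trace

# Hypothetical trace, shaped to mirror the paths mentioned in the docs
trace = {
    "rag": {
        "retriever": {"outputs": {"movies": ["Alien", "Blade Runner"]}},
        "summarizer": [
            {"outputs": {"report": "draft"}},
            {"outputs": {"report": "final summary"}},
        ],
    }
}

print(resolve(trace, "rag.retriever.outputs.movies"))      # → ['Alien', 'Blade Runner']
print(resolve(trace, "rag.summarizer[1].outputs.report"))  # → final summary
```

This is why the second summarizer call is written `rag.summarizer[1]`: the index disambiguates repeated spans under the same name.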