Correct grammar in RAG and Function Calling sections
John Blum committed Jul 17, 2024
1 parent feb036d commit ed494ca
Showing 1 changed file with 9 additions and 9 deletions.
18 changes: 9 additions & 9 deletions spring-ai-docs/src/main/antora/modules/ROOT/pages/concepts.adoc
@@ -70,11 +70,11 @@ Initially starting as simple strings, prompts have evolved to include multiple m

== Embeddings

Embeddings are numerical representations of text, images, or videos that capture relationships between inputs.

Embeddings work by converting text, image, and video into arrays of floating point numbers, called vectors.
These vectors are designed to capture the meaning of the text, images, and videos.
The length of the embedding array is called the vector's dimensionality.

By calculating the numerical distance between the vector representations of two pieces of text, an application can determine the similarity between the objects used to generate the embedding vectors.
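
As a minimal illustration of that distance calculation, the sketch below compares two hypothetical embedding vectors using cosine similarity; the values and the 4-element dimensionality are made up for brevity, whereas real embeddings typically have hundreds or thousands of dimensions and are produced by an embedding model.

[source,java]
----
// Minimal sketch: cosine similarity between two embedding vectors.
// The vectors here are hypothetical; in practice an embedding model produces them.
public class EmbeddingSimilarity {

	static double cosineSimilarity(float[] a, float[] b) {
		double dot = 0.0, normA = 0.0, normB = 0.0;
		for (int i = 0; i < a.length; i++) {
			dot += a[i] * b[i];
			normA += a[i] * a[i];
			normB += b[i] * b[i];
		}
		// Values near 1.0 mean the inputs are semantically similar; near 0 means unrelated.
		return dot / (Math.sqrt(normA) * Math.sqrt(normB));
	}

	public static void main(String[] args) {
		float[] catEmbedding = { 0.21f, 0.74f, 0.05f, 0.33f };
		float[] kittenEmbedding = { 0.19f, 0.71f, 0.08f, 0.30f };
		System.out.println(cosineSimilarity(catEmbedding, kittenEmbedding)); // close to 1.0
	}
}
----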

@@ -169,13 +169,13 @@ This is the reason to use a vector database. It is very good at finding similar

image::spring-ai-rag.jpg[Spring AI RAG, width=1000, align="center"]

* The xref::api/etl-pipeline.adoc[ETL pipeline] provides further information about orchestrating the flow of extracting data from the data sources and store it in a structured vector store, ensuring data is in the optimal format for retrieval by the AI model.
* The xref::api/chatclient.adoc#_retrieval_augmented_generation[ChatClient - RAG] explains how to use the `QuestionAnswerAdvisor` advisor to enable the RAG capability to your application.
* The xref::api/etl-pipeline.adoc[ETL pipeline] provides further information about orchestrating the flow of extracting data from data sources and storing it in a structured vector store, ensuring data is in the optimal format for retrieval when passing it to the AI model.
* The xref::api/chatclient.adoc#_retrieval_augmented_generation[ChatClient - RAG] explains how to use the `QuestionAnswerAdvisor` advisor to enable the RAG capability in your application.
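
A sketch of wiring the `QuestionAnswerAdvisor` into a `ChatClient` is shown below. It assumes `chatModel` and `vectorStore` are Spring-managed beans and that documents were already loaded into the vector store by the ETL pipeline; exact package names and builder methods may differ between Spring AI versions.

[source,java]
----
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.chat.client.advisor.QuestionAnswerAdvisor;

// The advisor retrieves documents similar to the user question from the
// vector store and appends them to the prompt before it reaches the AI model.
ChatClient chatClient = ChatClient.builder(chatModel)
		.defaultAdvisors(new QuestionAnswerAdvisor(vectorStore))
		.build();

String answer = chatClient.prompt()
		.user("How do I run the ETL pipeline on a nightly schedule?")
		.call()
		.content();
----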

[[concept-fc]]
=== Function Calling

Large Language Models (LLMs) are frozen after training, leading to stale knowledge and they are unable to access or modify external data.
Large Language Models (LLMs) are frozen after training, leading to stale knowledge, and they are unable to access or modify external data.

The xref::api/functions.adoc[Function Calling] mechanism addresses these shortcomings.
It allows you to register your own functions to connect the large language models to the APIs of external systems.
@@ -188,8 +188,8 @@ Additionally, you can define and reference multiple functions in a single prompt

image::function-calling-basic-flow.jpg[Function calling, width=700, align="center"]

* (1) perform a chat request along with a function definition information.
Later provides the `name`, `description` (e.g. explaining when the Model should call the function), and `input parameters` (e.g. the function's input parameters schema).
* (1) perform a chat request sending along function definition information.
The latter provides the `name`, `description` (e.g. explaining when the Model should call the function), and `input parameters` (e.g. the function's input parameters schema).
* (2) when the Model decides to call the function, it will call the function with the input parameters and return the output to the model.
* (3) Spring AI handles this conversation for you.
It dispatches the function call to the appropriate function and returns the result to the model.
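
A sketch of how this flow might look in code is shown below. The `currentWeather` bean name, the request/response records, and the hard-coded temperature are illustrative only, and the exact `ChatClient` and `@Description` wiring may vary between Spring AI versions.

[source,java]
----
import java.util.function.Function;

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Description;

@Configuration
class WeatherFunctions {

	record WeatherRequest(String location) {}

	record WeatherResponse(double temperatureCelsius) {}

	// (1) The bean name, description, and request type supply the function's
	// name, description, and input parameter schema sent with the chat request.
	@Bean
	@Description("Get the current temperature for a location")
	Function<WeatherRequest, WeatherResponse> currentWeather() {
		// (2) Invoked when the model decides to call the function;
		// a real implementation would call an external weather API here.
		return request -> new WeatherResponse(22.0);
	}
}

// (3) Spring AI dispatches the model's function call to the bean above and
// returns the result to the model, which then generates the final answer.
String answer = ChatClient.create(chatModel)
		.prompt()
		.user("What is the current temperature in Amsterdam?")
		.functions("currentWeather")
		.call()
		.content();
----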
