diff --git a/README.md b/README.md
index 489257a41c..ed4808b409 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@

-
Connecting data science teams seamlessly to cloud infrastructure.
-
+
Beyond The Demo: Production-Grade AI Systems
+
ZenML brings battle-tested MLOps practices to your AI applications, handling evaluation, monitoring, and deployment at scale.
@@ -100,40 +100,44 @@ Take a tour with the guided quickstart by running:
zenml go
```
-## 🪄 Simple, integrated, End-to-end MLOps
+## 🪄 From Prototype to Production: AI Made Simple
-### Create machine learning pipelines with minimal code changes
+### Create AI pipelines with minimal code changes
-ZenML is a MLOps framework intended for data scientists or ML engineers looking to standardize machine learning practices. Just add `@step` and `@pipeline` to your existing Python functions to get going. Here is a toy example:
+ZenML is an open-source framework that handles MLOps and LLMOps for engineers scaling AI beyond prototypes. Automate evaluation loops, track performance, and deploy updates across hundreds of pipelines, all while your RAG apps run like clockwork.
```python
from zenml import pipeline, step
-@step # Just add this decorator
-def load_data() -> dict:
- training_data = [[1, 2], [3, 4], [5, 6]]
- labels = [0, 1, 0]
- return {'features': training_data, 'labels': labels}
+@step
+def load_rag_documents() -> dict:
+ # Load and chunk documents for RAG pipeline
+ documents = extract_web_content(url="https://www.zenml.io/")
+ return {"chunks": chunk_documents(documents)}
@step
-def train_model(data: dict) -> None:
- total_features = sum(map(sum, data['features']))
- total_labels = sum(data['labels'])
-
- print(f"Trained model using {len(data['features'])} data points. "
- f"Feature sum is {total_features}, label sum is {total_labels}")
+def generate_embeddings(data: dict) -> dict:
+ # Generate embeddings for RAG pipeline
+ embeddings = embed_documents(data['chunks'])
+ return {"embeddings": embeddings}
-@pipeline # This function combines steps together
-def simple_ml_pipeline():
- dataset = load_data()
- train_model(dataset)
+@step
+def index_generator(
+ embeddings: dict,
+) -> str:
+ # Generate index for RAG pipeline
+ index = create_index(embeddings)
+ return index.id
+
-if __name__ == "__main__":
- run = simple_ml_pipeline() # call this to run the pipeline
-
+@pipeline
+def rag_pipeline() -> str:
+ documents = load_rag_documents()
+ embeddings = generate_embeddings(documents)
+ index = index_generator(embeddings)
+ return index
```
-
-
+
### Easily provision an MLOps stack or reuse your existing infrastructure
@@ -185,18 +189,47 @@ def training(...):
Create a complete lineage of who produced which data and models, where, and when.
-Youโll be able to find out who produced which model, at what time, with which data, and on which version of the code. This guarantees full reproducibility and auditability.
+You'll be able to find out who produced which model, at what time, with which data, and on which version of the code. This guarantees full reproducibility and auditability.
```python
from zenml import Model
-@step(model=Model(name="classification"))
-def trainer(training_df: pd.DataFrame) -> Annotated["model", torch.nn.Module]:
- ...
+@step(model=Model(name="rag_llm", tags=["staging"]))
+def deploy_rag(index_id: str) -> str:
+ deployment_id = deploy_to_endpoint(index_id)
+ return deployment_id
```
+## 🚀 Key LLMOps Capabilities
+
+### Continual RAG Improvement
+**Build production-ready retrieval systems**
+
+
+![RAG pipeline in the ZenML dashboard](docs/book/.gitbook/assets/rag_zenml_home.png)
+
+ZenML tracks document ingestion, embedding versions, and query patterns. Implement feedback loops to:
+- Fix your RAG logic based on production logs
+- Automatically re-ingest updated documents
+- A/B test different embedding models
+- Monitor retrieval quality metrics
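The last bullet is easy to make concrete: hit rate and mean reciprocal rank (MRR) are two standard retrieval-quality metrics you could compute inside an evaluation step. This is an illustrative sketch, not part of the ZenML API:

```python
def retrieval_metrics(results: list[list[str]], expected: list[str]) -> dict:
    """Compute hit rate and mean reciprocal rank (MRR) over a query batch.

    results[i] is the ranked list of document ids retrieved for query i;
    expected[i] is the id of the document that should have been retrieved.
    """
    hits = 0
    reciprocal_ranks = []
    for retrieved, target in zip(results, expected):
        if target in retrieved:
            hits += 1
            # Rank is 1-based, so reciprocal rank is 1 / (index + 1).
            reciprocal_ranks.append(1.0 / (retrieved.index(target) + 1))
        else:
            reciprocal_ranks.append(0.0)
    n = len(expected)
    return {"hit_rate": hits / n, "mrr": sum(reciprocal_ranks) / n}
```

Logging these numbers per pipeline run is what makes regressions visible when you swap embedding models or re-chunk documents.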
+
+### Reproducible Model Fine-Tuning
+**Confidence in model updates**
+
+
+![Fine-tuning pipeline in the ZenML dashboard](docs/book/.gitbook/assets/finetune_zenml_home.png)
+
+Maintain full lineage of SLM/LLM training runs:
+- Version training data and hyperparameters
+- Track performance across iterations
+- Automatically promote validated models
+- Roll back to previous versions if needed
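The promote-or-keep decision in the last two bullets can be sketched as a simple gate in a validation step. The rule and threshold below are illustrative assumptions, not ZenML API calls:

```python
def promotion_decision(candidate_score: float,
                       production_score: float,
                       min_improvement: float = 0.01) -> str:
    """Decide whether a newly fine-tuned model should replace production.

    Promote only when the candidate beats the current production model by
    at least `min_improvement` on your evaluation metric; otherwise keep
    (i.e. effectively roll back to) the existing version.
    """
    if candidate_score >= production_score + min_improvement:
        return "promote"
    return "keep_current"
```

Because every run's metrics and data versions are tracked, a gate like this can run automatically at the end of each fine-tuning pipeline.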
+
### Purpose built for machine learning with integrations to your favorite tools
While ZenML brings a lot of value out of the box, it also integrates into your existing tooling and infrastructure without you having to be locked in.
@@ -213,6 +246,14 @@ def train_and_deploy(training_df: pd.DataFrame) -> bento.Bento
+## 🔄 Your LLM Framework Isn't Enough for Production
+
+While tools like LangChain and LlamaIndex help you **build** LLM workflows, ZenML helps you **productionize** them by adding:
+
+✅ **Artifact Tracking** - Every vector store index, fine-tuned model, and evaluation result versioned automatically
+✅ **Pipeline History** - See exactly what code/data produced each version of your RAG system
+✅ **Stage Promotion** - Move validated pipelines from staging → production with one click
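As a rough mental model for automatic artifact versioning (an illustration only, not ZenML's actual implementation), identical artifacts can be recognized by a content hash of their canonical serialization:

```python
import hashlib
import json


def artifact_fingerprint(artifact: dict) -> str:
    """Fingerprint an artifact (e.g. an index manifest) by hashing its
    canonical JSON form; identical content yields an identical id,
    so unchanged artifacts need not produce new versions."""
    canonical = json.dumps(artifact, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()[:12]
```

Content addressing like this is one common way a framework can tell "same code and data, same artifact" apart from a genuinely new version.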
+
## 🖼️ Learning
The best way to learn about ZenML is the [docs](https://docs.zenml.io/). We recommend beginning with the [Starter Guide](https://docs.zenml.io/user-guide/starter-guide) to get up and running quickly.
@@ -297,13 +338,23 @@ Or, if you
prefer, [open an issue](https://github.com/zenml-io/zenml/issues/new/choose) on
our GitHub repo.
-## ⭐️ Show Your Support
+## 📚 LLM-focused Learning Resources
-If you find ZenML helpful or interesting, please consider giving us a star on GitHub. Your support helps promote the project and lets others know that it's worth checking out.
+1. [LLM Complete Guide - Full RAG Pipeline](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) - Document ingestion, embedding management, and query serving
+2. [LLM Fine-Tuning Pipeline](https://github.com/zenml-io/zenml-projects/tree/main/llm-finetuning) - From data prep to deployed model
+3. [LLM Agents Example](https://github.com/zenml-io/zenml-projects/tree/main/llm-agents) - Track conversation quality and tool usage
-Thank you for your support! 🙏
+## 🤖 AI-Friendly Documentation with llms.txt
+ZenML implements the llms.txt standard to make our documentation more accessible to AI assistants and LLMs. Our implementation includes:
+
+- Base documentation at [zenml.io/llms.txt](https://zenml.io/llms.txt) with core user guides
+- Specialized files for different documentation aspects:
+ - [Component guides](https://zenml.io/component-guide.txt) for integration details
+ - [How-to guides](https://zenml.io/how-to-guides.txt) for practical implementations
+ - [Complete documentation corpus](https://zenml.io/llms-full.txt) for comprehensive access
+
+This structured approach helps AI tools better understand and utilize ZenML's documentation, enabling more accurate code suggestions and improved documentation search.
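Because llms.txt files are plain markdown, AI tools can consume them programmatically with very little code. This minimal parser assumes the common `- [title](url): description` bullet shape used by the llms.txt convention:

```python
import re

# One link bullet per line; the ": description" tail is optional.
LINK_RE = re.compile(
    r"-\s*\[(?P<title>[^\]]+)\]\((?P<url>[^)]+)\)(?::\s*(?P<desc>.*))?"
)


def parse_llms_txt(text: str) -> list[dict]:
    """Extract {title, url, desc} entries from llms.txt link bullets."""
    entries = []
    for line in text.splitlines():
        match = LINK_RE.match(line.strip())
        if match:
            entries.append({
                "title": match.group("title"),
                "url": match.group("url"),
                "desc": match.group("desc") or "",
            })
    return entries
```

A tool could run this over [zenml.io/llms.txt](https://zenml.io/llms.txt) to discover the specialized documentation files listed above.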
## 📜 License
diff --git a/docs/book/.gitbook/assets/finetune_zenml_home.png b/docs/book/.gitbook/assets/finetune_zenml_home.png
new file mode 100644
index 0000000000..0294dd7f15
Binary files /dev/null and b/docs/book/.gitbook/assets/finetune_zenml_home.png differ
diff --git a/docs/book/.gitbook/assets/rag_zenml_home.png b/docs/book/.gitbook/assets/rag_zenml_home.png
new file mode 100644
index 0000000000..a67d1fa9b8
Binary files /dev/null and b/docs/book/.gitbook/assets/rag_zenml_home.png differ
diff --git a/docs/book/.gitbook/assets/readme_simple_pipeline.gif b/docs/book/.gitbook/assets/readme_simple_pipeline.gif
new file mode 100755
index 0000000000..4ca922b228
Binary files /dev/null and b/docs/book/.gitbook/assets/readme_simple_pipeline.gif differ