From 22ccdb1641a73f66b3053cd45b4dc630610838ac Mon Sep 17 00:00:00 2001
From: Enis Necipoglu
Date: Tue, 17 Mar 2026 12:44:46 +0300
Subject: [PATCH] Potential fix for pull request finding

Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
---
 docs/en/modules/ai-management/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/en/modules/ai-management/index.md b/docs/en/modules/ai-management/index.md
index 6176d9c015..a551d5beef 100644
--- a/docs/en/modules/ai-management/index.md
+++ b/docs/en/modules/ai-management/index.md
@@ -319,7 +319,7 @@ RAG requires an **embedder** and a **vector store** to be configured on the work
 * **Vector Store**: Stores and retrieves vector embeddings. Supported providers: **MongoDb**, **Pgvector**, and **Qdrant**.
 
 > [!IMPORTANT]
-> If the workspace uses Ollama for chat or embeddings, the configured model names must exist in the local Ollama instance first. For example, if you configure `ModelName = llama3.2` and `EmbedderModelName = nomic-embed-text`, pull both models before using the workspace:
+> If the workspace uses Ollama for chat or embeddings, the configured model names must exist in the local Ollama instance first. For example, if you configure `ModelName = "llama3.2"` and `EmbedderModelName = "nomic-embed-text"`, pull both models before using the workspace:
 >
 > ```bash
 > ollama pull llama3.2