From 6af345610b4e05966b8b70489739f076d184f310 Mon Sep 17 00:00:00 2001
From: enisn
Date: Tue, 17 Mar 2026 12:02:02 +0300
Subject: [PATCH 1/2] Add Ollama setup and model pull instructions

Add IMPORTANT notes explaining that the Ollama server must be installed
and running and that models referenced by a workspace must be pulled
locally before configuring. Include example commands (e.g. ollama pull
llama3.2 and ollama pull nomic-embed-text) and note that
nomic-embed-text is embedding-only. Also add a similar reminder in the
RAG section to pull both chat and embedding models when using Ollama.
---
 docs/en/modules/ai-management/index.md | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/docs/en/modules/ai-management/index.md b/docs/en/modules/ai-management/index.md
index 5e529853df..6176d9c015 100644
--- a/docs/en/modules/ai-management/index.md
+++ b/docs/en/modules/ai-management/index.md
@@ -44,6 +44,16 @@ abp add-package Volo.AIManagement.OpenAI
 abp add-package Volo.AIManagement.Ollama
 ```
 
+> [!IMPORTANT]
+> If you use Ollama, make sure the Ollama server is installed and running, and that the models referenced by your workspace are already available locally. Before configuring an Ollama workspace, pull the chat model and any embedding model you plan to use. For example:
+>
+> ```bash
+> ollama pull llama3.2
+> ollama pull nomic-embed-text
+> ```
+>
+> Replace the model names with the exact models you configure in the workspace. `nomic-embed-text` is an embedding-only model and can't be used as a chat model.
+
 > [!TIP]
 > You can install multiple provider packages to support different AI providers simultaneously in your workspaces.
 
@@ -308,6 +318,14 @@ RAG requires an **embedder** and a **vector store** to be configured on the work
 * **Embedder**: Converts documents and queries into vector embeddings. You can use any provider that supports embedding generation (e.g., OpenAI `text-embedding-3-small`, Ollama `nomic-embed-text`).
 * **Vector Store**: Stores and retrieves vector embeddings. Supported providers: **MongoDb**, **Pgvector**, and **Qdrant**.
 
+> [!IMPORTANT]
+> If the workspace uses Ollama for chat or embeddings, the configured model names must exist in the local Ollama instance first. For example, if you configure `ModelName = llama3.2` and `EmbedderModelName = nomic-embed-text`, pull both models before using the workspace:
+>
+> ```bash
+> ollama pull llama3.2
+> ollama pull nomic-embed-text
+> ```
+
 ### Configuring RAG on a Workspace
 
 To enable RAG for a workspace, configure the embedder and vector store settings in the workspace edit page.

From 22ccdb1641a73f66b3053cd45b4dc630610838ac Mon Sep 17 00:00:00 2001
From: Enis Necipoglu
Date: Tue, 17 Mar 2026 12:44:46 +0300
Subject: [PATCH 2/2] Potential fix for pull request finding

Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
---
 docs/en/modules/ai-management/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/en/modules/ai-management/index.md b/docs/en/modules/ai-management/index.md
index 6176d9c015..a551d5beef 100644
--- a/docs/en/modules/ai-management/index.md
+++ b/docs/en/modules/ai-management/index.md
@@ -319,7 +319,7 @@ RAG requires an **embedder** and a **vector store** to be configured on the work
 * **Vector Store**: Stores and retrieves vector embeddings. Supported providers: **MongoDb**, **Pgvector**, and **Qdrant**.
 
 > [!IMPORTANT]
-> If the workspace uses Ollama for chat or embeddings, the configured model names must exist in the local Ollama instance first. For example, if you configure `ModelName = llama3.2` and `EmbedderModelName = nomic-embed-text`, pull both models before using the workspace:
+> If the workspace uses Ollama for chat or embeddings, the configured model names must exist in the local Ollama instance first. For example, if you configure `ModelName = "llama3.2"` and `EmbedderModelName = "nomic-embed-text"`, pull both models before using the workspace:
 >
 > ```bash
 > ollama pull llama3.2