diff --git a/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/coverimage.png b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/coverimage.png new file mode 100644 index 0000000000..de77177842 Binary files /dev/null and b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/coverimage.png differ diff --git a/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/chat-history-hybrid.svg b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/chat-history-hybrid.svg new file mode 100644 index 0000000000..ab1bb36114 --- /dev/null +++ b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/chat-history-hybrid.svg @@ -0,0 +1,114 @@ + + + + + + + Hybrid Chat History: Truncation + RAG on History + + + + + + Full Chat History + + + (100 messages, 20K tokens) + + + + + Messages 1-10 (1 day ago) + + + Messages 11-20 (12 hours ago) + + ... 
+ + + Messages 81-90 + + + Messages 91-100 (Last 10) + + + + + + + + + + + + Old Messages + Recent Messages + + + + + Vector DB + + + (Long-term Memory) + + + Messages 1-90 with embeddings + + + Tool: SearchChatHistory() + + + + + + Prompt (Short-term) + + + Messages 91-100 + + + Truncation (Last 10 messages) + + + Low tokens, fast + + + + + + + + + LLM + + + Short-term context + + + + Long-term memory via tool + + + access when needed + + + + + + βœ… Hybrid Approach Benefits + + + + + + Low Cost: Only last 10 messages in prompt per request (truncation) + + + + + + + High Fidelity: LLM can access old messages via SearchChatHistory tool when needed + + + \ No newline at end of file diff --git a/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/mcp-architecture.svg b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/mcp-architecture.svg new file mode 100644 index 0000000000..ee590d27eb --- /dev/null +++ b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/mcp-architecture.svg @@ -0,0 +1,150 @@ + + + + + + + Model Context Protocol (MCP): Out-of-Process Tools + + + + + MCP Hosts (Clients) + + + + + + Semantic Kernel + + + (.NET Agent) + + + + + + + VS Code Copilot + + + (.vscode/mcp.json) + + + + + + + Claude Desktop + + + (Anthropic) + + + + + + + + + + + + + + + + stdio/http + + + JSON-RPC + + + + + + MCP Protocol + + + (Standardized Interface) + + + ModelContextProtocol SDK + + + + + + + + + + MCP Servers (Tools) + + + + + + filesystem.mcp.exe + + + ReadFile(), ListFiles() + + + (.NET Console App) + + + + + + + sqlserver.mcp.exe + + + ExecuteQuery(), GetSchema() + + + (.NET Console App) + + + + + + + github.mcp.js + + + CreateIssue(), GetPR() + + + (Node.js / TypeScript) + + + + + + + βœ… MCP Benefits + + + + + + Reusability: Write once, use everywhere (SK, VS Code, Claude) + + + + + + + Independence: MCP server runs separately, doesn't affect main app 
(out-of-process) + + + + + + + Language Agnostic: Can be written in C#, Python, Node.js, everyone speaks same protocol + + + \ No newline at end of file diff --git a/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/multilingual-rag.svg b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/multilingual-rag.svg new file mode 100644 index 0000000000..81173091f0 --- /dev/null +++ b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/multilingual-rag.svg @@ -0,0 +1,135 @@ + + + + + + + Multilingual RAG: Query Translation Pattern + + + + + + User Query + + + πŸ‡ΉπŸ‡· "YazΔ±cΔ±yΔ± ağa + + + nasΔ±l bağlarΔ±m?" + + + + + + + + + + + + Tool 1 + + + + + + TranslationPlugin + + + TranslateText() + + + Target: English + + + + + + Tool 2 + + + + + RAGPlugin + + + πŸ‡¬πŸ‡§ "How do I connect + + + the printer to network?" + + + + + + Vector Search + + + + + Vector DB + + + (English Docs) + + + "Navigate to Settings + + + > Network > Wi-Fi..." + + + + + + + + Retrieved Context + + + πŸ‡¬πŸ‡§ English text + + + (Manual excerpt) + + + + + + + + + LLM (GPT-5) + + + Context: [English] + + + Generates: [Turkish Response] + + + + + + + + + Response to User + + + πŸ‡ΉπŸ‡· "Ayarlar > Ağ > + + + Wi-Fi bΓΆlΓΌmΓΌne gidin..." 
+ + + + + + βœ… Benefit: Single language (English) docs, multi-language query support + + + Tool Chain: TranslationPlugin β†’ RAGPlugin β†’ LLM Final Generation (Original language) + + \ No newline at end of file diff --git a/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/pgvector-integration.svg b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/pgvector-integration.svg new file mode 100644 index 0000000000..2903740e57 --- /dev/null +++ b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/pgvector-integration.svg @@ -0,0 +1,112 @@ + + + + + + + PostgreSQL + pgvector: Integrated RAG with EF Core + + + + + + .NET Application + + + (EF Core DbContext) + + + + + + + + + + + + LINQ Query + + + + + + Pgvector.EntityFrameworkCore + + + CosineDistance(), L2Distance() + + + EF Core Extensions + + + + + + SQL Query + + + + + + PostgreSQL + pgvector + + + + + + + + id + content + embedding + + + + + 1 + Contoso... + [0.2, -0.1,...] + + 2 + Revenue... + [0.5, 0.3,...] + + + + + + βœ… Benefits + + + + + + Existing SQL Knowledge: PostgreSQL is already a familiar database + + + + + + + EF Core Integration: Vector queries with LINQ (.OrderBy(), .Where()) + + + + + + + Metadata JOIN: Vector + Relational data in same query (tenant_id, user_id...) 
+ + + + + + + ACID Compliant: Transaction support (rollback, commit) + + + + + + \ No newline at end of file diff --git a/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/rag-parent-child.svg b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/rag-parent-child.svg new file mode 100644 index 0000000000..752c2c42b1 --- /dev/null +++ b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/rag-parent-child.svg @@ -0,0 +1,118 @@ + + + + + + + Parent-Child RAG Pattern: Search Small, Respond Large + + + + + Original Document + + + + Parent 1 (800 token) + + + Parent 2 (800 token) + + + Parent 3... + + + + + + + + + + + + + + + + + + + Child Chunks + (In Vector DB) + + + + + Child 1.1 (100 token) [ParentID=1] + + + + + Child 1.2 (100 token) [ParentID=1] + + + + + Child 1.3 (100 token) [ParentID=1] + + + + + Child 2.1 (100 token) [ParentID=2] + + + + + Child 2.2... + + + + + User Query + "What was Contoso's + 2024 revenue?" + + + + 1. Vector Search + (On Child chunks) + + + + Best Match + Child 1.2 (Score: 0.95) + + + + + + 2. Fetch Parent via + ParentID + + + + Retrieved Parent Chunk + Parent 1 (800 tokens) + Full context + details + + + + 3. Send to LLM + + + + LLM Response + "Contoso's 2024 + revenue was $2.5 billion + as reported." 
+ + + + + βœ… Benefit: Precise search (Child) + Rich context (Parent) = Optimal quality + + + Alternative: Only large chunks β†’ Lower precision | Only small chunks β†’ Insufficient context + + \ No newline at end of file diff --git a/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/reasoning-effort-diagram.svg b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/reasoning-effort-diagram.svg new file mode 100644 index 0000000000..fc6a18d68d --- /dev/null +++ b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/reasoning-effort-diagram.svg @@ -0,0 +1,60 @@ + + + + + + + ReasoningEffortLevel: Cost vs Quality + + + + + + + + High + Medium + Low + + + + Quality / Cost + + + + + + Minimal + Fast + Cheap + + + + + Low + Simple Queries + + + + + Medium + Standard + + + + + High + Complex + Coding + + + + + + + + + + + Increasing Cost (Reasoning Tokens ↑) + + \ No newline at end of file diff --git a/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/svg-diagram-example.svg b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/svg-diagram-example.svg new file mode 100644 index 0000000000..6087893702 --- /dev/null +++ b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/svg-diagram-example.svg @@ -0,0 +1,149 @@ + + + + + + + PostgreSQL + pgvector Architecture + + + + + + + .NET Application + + + + + + Web API / + + + Controllers + + + + + + Business Logic / + + + Services + + + + + + Data Access + + + Layer + + + + + + + + + + + + ORM + + + + + + Entity Framework + + + Core + + + DbContext + + + LINQ Queries + + + + + + Npgsql + + + + + + PostgreSQL + + + + + + Relational Tables + + + (Standard Data) + + + + + + pgvector + + + Vector Storage + + + + + + Vector Search + + + Similarity Queries + + + (<=>, <->, <#>) + + + + + + + + + + + Search Results + + + 
β€’ Embeddings + + + β€’ Similarity Score + + + β€’ Ranked Results + + + + + + + + + + Data Flow: + + + 1. .NET β†’ EF Core β†’ PostgreSQL (Data Operations) + + + 2. Vector Similarity Search with pgvector + + + \ No newline at end of file diff --git a/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/post.md b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/post.md new file mode 100644 index 0000000000..8fa2067d01 --- /dev/null +++ b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/post.md @@ -0,0 +1,414 @@ +# Building Production-Ready LLM Applications with .NET: A Practical Guide + +Large Language Models (LLMs) have evolved rapidly, and integrating them into production .NET applications requires staying current with the latest approaches. In this article, I'll share practical tips and patterns I've learned while building LLM-powered systems, covering everything from API changes in GPT-5 to implementing efficient RAG (Retrieval Augmented Generation) architectures. + +Whether you're building a chatbot, a knowledge base assistant, or integrating AI into your enterprise applications, these production-tested insights will help you avoid common pitfalls and build more reliable systems. + +## The Temperature Paradigm Shift: GPT-5 Changes Everything + +If you've been working with GPT-4 or earlier models, you're familiar with the `temperature` and `top_p` parameters for controlling response randomness. **Here's the critical update**: GPT-5 no longer supports these parameters! 
### The Old Way (GPT-4)
```csharp
var chatRequest = new ChatOptions
{
    Temperature = 0.7f, // βœ… Worked with GPT-4 (note the f suffix: Temperature is a float?)
    TopP = 0.9f         // βœ… Worked with GPT-4
};
```

### The New Way (GPT-5)
```csharp
var chatRequest = new ChatOptions
{
    RawRepresentationFactory = client => new ChatCompletionOptions()
    {
#pragma warning disable OPENAI001
        ReasoningEffortLevel = "minimal",
#pragma warning restore OPENAI001
    }
};
```

**Why the change?** GPT-5 incorporates an internal reasoning and verification process. Instead of controlling randomness, you now specify how much computational effort the model should invest in reasoning through the problem.

![Reasoning Effort Levels](images/reasoning-effort-diagram.svg)

### Choosing the Right Reasoning Level

- **Low**: Quick responses for simple queries (e.g., "What's the capital of France?")
- **Medium**: Balanced approach for most use cases
- **High**: Complex reasoning tasks (e.g., code generation, multi-step problem solving)

> **Pro Tip**: Reasoning tokens are included in your API costs. Use "High" only when necessary to optimize your budget.

## System Prompts: The "Lost in the Middle" Problem

Here's a critical insight that can save you hours of debugging: **Important rules must be repeated at the END of your prompt!**

### ❌ What Doesn't Work
```
You are a helpful assistant.
RULE: Never share passwords or sensitive information.

[User Input]
```

### βœ… What Actually Works
```
You are a helpful assistant.
RULE: Never share passwords or sensitive information.

[User Input]

⚠️ REMINDER: Apply the rules above strictly, ESPECIALLY regarding passwords.
```

**Why?** LLMs suffer from the "Lost in the Middle" phenomenon: they pay more attention to the beginning and end of the context window. Critical instructions buried in the middle are often ignored.
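Because the rules now live in two places, it is easy for the opening statement and the closing reminder to drift apart. One pragmatic option is to generate both from a single rule list. Here's a minimal sketch (the `BuildPrompt` helper and its exact layout are illustrative, not part of any SDK):

```csharp
using System;
using System.Text;

// Build the prompt once; the same rule list feeds both the header and the
// end-of-prompt reminder, so the two copies can never drift out of sync.
var prompt = BuildPrompt(
    instructions: "You are a helpful assistant.",
    criticalRules: new[] { "Never share passwords or sensitive information." },
    userInput: "[User Input]");

Console.WriteLine(prompt);

static string BuildPrompt(string instructions, string[] criticalRules, string userInput)
{
    var sb = new StringBuilder();
    sb.AppendLine(instructions);
    sb.AppendLine("RULES:");
    foreach (var rule in criticalRules) sb.AppendLine($"- {rule}");
    sb.AppendLine();
    sb.AppendLine(userInput);
    sb.AppendLine();
    // Repeat the rules at the END, where the model pays the most attention
    sb.AppendLine("⚠️ REMINDER: Apply the rules above strictly:");
    foreach (var rule in criticalRules) sb.AppendLine($"- {rule}");
    return sb.ToString();
}
```

The same idea works with a `ChatMessage` list: append a short system reminder message after the user message instead of concatenating strings.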
## RAG Architecture: The Parent-Child Pattern

Retrieval Augmented Generation (RAG) is essential for grounding LLM responses in your own data. The most effective pattern I've found is the **Parent-Child approach**.

![RAG Parent-Child Architecture](images/rag-parent-child.svg)

### How It Works

1. **Split documents into hierarchies**:
   - **Parent chunks**: Large sections (1000-2000 tokens) for context
   - **Child chunks**: Small segments (200-500 tokens) for precise retrieval

2. **Store both in the vector database** with references

3. **Query flow**:
   - Search using child chunks (higher precision)
   - Return parent chunks to the LLM (richer context)

### The Overlap Strategy

Always use overlapping chunks to prevent information loss at boundaries!

```
Chunk 1: Token 0-500
Chunk 2: Token 400-900 ← 100 token overlap
Chunk 3: Token 800-1300 ← 100 token overlap
```

**Standard recommendation**: 10-20% overlap (for 500-token chunks, use a 50-100 token overlap)

### Implementation with Semantic Kernel

```csharp
using Microsoft.SemanticKernel.Text;

// SplitPlainTextParagraphs expects pre-split lines, not a raw string,
// so split the document into lines first
var lines = TextChunker.SplitPlainTextLines(documentText, maxTokensPerLine: 100);

var chunks = TextChunker.SplitPlainTextParagraphs(
    lines,
    maxTokensPerParagraph: 500,
    overlapTokens: 50
);

foreach (var chunk in chunks)
{
    var embedding = await embeddingService.GenerateEmbeddingAsync(chunk);
    await vectorDb.StoreAsync(chunk, embedding);
}
```

## PostgreSQL + pgvector: The Pragmatic Choice

For .NET developers, choosing a vector database can be overwhelming. After evaluating multiple options, **PostgreSQL with pgvector** is the most practical choice for most scenarios.

![pgvector Integration](images/pgvector-integration.svg)

### Why pgvector?

βœ… **Use existing SQL knowledge** - No new query language to learn
βœ… **EF Core integration** - Works with your existing data access layer
βœ… **JOIN with metadata** - Combine vector search with traditional queries
βœ… **WHERE clause filtering** - Filter by tenant, user, date, etc.
βœ… **ACID compliance** - Transaction support for data consistency
βœ… **No separate infrastructure** - One database for everything

### Setting Up pgvector with EF Core

First, install the NuGet package:

```bash
dotnet add package Pgvector.EntityFrameworkCore
```

Define your entity:

```csharp
using Pgvector;
using Pgvector.EntityFrameworkCore;

public class DocumentChunk
{
    public Guid Id { get; set; }
    public string Content { get; set; }
    public Vector Embedding { get; set; } // πŸ‘ˆ pgvector type
    public Guid ParentChunkId { get; set; }
    public DateTime CreatedAt { get; set; }
}
```

Configure in DbContext:

```csharp
protected override void OnModelCreating(ModelBuilder builder)
{
    builder.HasPostgresExtension("vector");

    builder.Entity<DocumentChunk>()
        .Property(e => e.Embedding)
        .HasColumnType("vector(1536)"); // πŸ‘ˆ OpenAI embedding dimension

    builder.Entity<DocumentChunk>()
        .HasIndex(e => e.Embedding)
        .HasMethod("hnsw") // πŸ‘ˆ Fast approximate search
        .HasOperators("vector_cosine_ops");
}
```

### Performing Vector Search

```csharp
using Pgvector.EntityFrameworkCore;

public async Task<List<DocumentChunk>> SearchAsync(string query)
{
    // 1. Convert query to embedding
    var queryVector = await _embeddingService.GetEmbeddingAsync(query);

    // 2. Search (CosineDistance matches the vector_cosine_ops index above)
    return await _context.DocumentChunks
        .OrderBy(c => c.Embedding.CosineDistance(queryVector)) // πŸ‘ˆ Lower is better
        .Take(5)
        .ToListAsync();
}
```

**Source**: [Pgvector.NET on GitHub](https://github.com/pgvector/pgvector-dotnet?tab=readme-ov-file#entity-framework-core)

## Smart Tool Usage: Make RAG a Tool, Not a Tax

A common mistake is calling RAG on every single user message. This wastes tokens and money. Instead, **make RAG a tool** and let the LLM decide when to use it.
### ❌ Expensive Approach
```csharp
// Always call RAG, even for "Hello"
var context = await PerformRAG(userMessage);
var response = await chatClient.CompleteAsync($"{context}\n\n{userMessage}");
```

### βœ… Smart Approach
```csharp
[KernelFunction]
[Description("Search the company knowledge base for information")]
public async Task<string> SearchKnowledgeBase(
    [Description("The search query")] string query)
{
    var results = await _vectorDb.SearchAsync(query);
    return string.Join("\n---\n", results.Select(r => r.Content));
}
```

The LLM will call `SearchKnowledgeBase` only when needed:
- "Hello" β†’ No tool call
- "What was our 2024 revenue?" β†’ Calls tool
- "Tell me a joke" β†’ No tool call

## Multilingual RAG: Query Translation Strategy

When your documents are in one language (e.g., English) but users query in another (e.g., Turkish), you need a translation strategy.

![Multilingual RAG Architecture](images/multilingual-rag.svg)

### Solution Options

**Option 1**: Use an LLM that automatically calls tools in English
- Many modern LLMs can do this if properly instructed

**Option 2**: Tool chain approach
```csharp
[KernelFunction]
[Description("Translate text to English")]
public async Task<string> TranslateToEnglish(string text)
{
    // Translation logic
}

[KernelFunction]
[Description("Search knowledge base (English only)")]
public async Task<string> SearchKnowledgeBase(string englishQuery)
{
    // Search logic
}
```

The LLM will:
1. Call `TranslateToEnglish("2024 geliri nedir?")`
2. Get "What was 2024 revenue?"
3. Call `SearchKnowledgeBase("What was 2024 revenue?")`
4. Return the results and respond in Turkish

## Model Context Protocol (MCP): Beyond In-Process Tools

Microsoft and Anthropic recently released an official C# SDK for the Model Context Protocol (MCP). This is a game-changer for tool reusability.

![MCP Architecture](images/mcp-architecture.svg)

### MCP vs.
Semantic Kernel Plugins

| Feature | SK Plugins | MCP Servers |
|---------|-----------|-------------|
| **Process** | In-process | Out-of-process (stdio/http) |
| **Reusability** | Application-specific | Cross-application |
| **Examples** | Used within your app | VS Code Copilot, Claude Desktop |

### Creating an MCP Server

```csharp
using Microsoft.Extensions.Hosting;
using ModelContextProtocol.Extensions.Hosting;

var builder = Host.CreateEmptyApplicationBuilder(settings: null);

builder.Services.AddMcpServer()
    .WithStdioServerTransport()
    .WithToolsFromAssembly();

await builder.Build().RunAsync();
```

Define your tools:

```csharp
[McpServerToolType]
public static class FileSystemTools
{
    [McpServerTool, Description("Read a file from the file system")]
    public static async Task<string> ReadFile(string path)
    {
        // ⚠️ SECURITY: Always validate paths!
        if (!IsPathSafe(path))
            throw new SecurityException("Invalid path");

        return await File.ReadAllTextAsync(path);
    }

    private static bool IsPathSafe(string path)
    {
        // Implement path traversal prevention
        var fullPath = Path.GetFullPath(path);
        return fullPath.StartsWith(AllowedDirectory);
    }
}
```

Your MCP server can now be used by VS Code Copilot, Claude Desktop, or any other MCP client!

## Chat History Management: Truncation + RAG Hybrid

For long conversations, storing all history in the context window becomes impractical. Here's the pattern that works:

![Chat History Hybrid Strategy](images/chat-history-hybrid.svg)

### ❌ Lossy Approach
```
First 50 messages β†’ Summarize with LLM β†’ Single summary message
```
**Problem**: Detail loss (fidelity loss)

### βœ… Hybrid Approach
1. **Recent messages** (last 5-10): Keep in prompt for immediate context
2.
**Older messages**: Store in a vector database, exposed through a search tool

```csharp
[KernelFunction]
[Description("Search conversation history for past discussions")]
public async Task<string> SearchChatHistory(
    [Description("What to search for")] string query)
{
    var relevantMessages = await _vectorDb.SearchAsync(query);
    return string.Join("\n", relevantMessages.Select(m =>
        $"[{m.Timestamp}] {m.Role}: {m.Content}"));
}
```

The LLM retrieves only relevant past context when needed, avoiding summary-induced information loss.

## RAG vs. Fine-Tuning: Choose Wisely

A common misconception is using fine-tuning for knowledge injection. Here's when to use each:

| Purpose | RAG | Fine-Tuning |
|---------|-----|-------------|
| **Goal** | Memory (provide facts) | Behavior (teach style) |
| **Updates** | Dynamic (add docs anytime) | Static (requires retraining) |
| **Cost** | Low dev, higher inference | High dev, lower inference |
| **Hallucination** | Reduces | Doesn't reduce |
| **Use Case** | Company docs, FAQs | Brand voice, specific format |

**Common mistake**: "Let's fine-tune on our company documents" ❌
**Better approach**: Use RAG! βœ…

Fine-tuning is for teaching the model *how* to respond, not *what* to know.

**Source**: [Oracle - RAG vs Fine-Tuning](https://www.oracle.com/artificial-intelligence/generative-ai/retrieval-augmented-generation-rag/rag-fine-tuning/)

## Bonus: Why SVG is Superior for LLM-Generated Images

When using LLMs to generate diagrams and visualizations, always request SVG format instead of PNG or JPG.

### Why SVG?

βœ… **Text-based** β†’ LLMs produce better results
βœ… **Lower cost** β†’ Fewer tokens than base64-encoded images
βœ… **Editable** β†’ Easy to modify after generation
βœ… **Scalable** β†’ Perfect quality at any size
βœ… **Version control friendly** β†’ Works great in Git

### Example Prompt

```
Create an architecture diagram showing PostgreSQL with pgvector integration.
Format: SVG, 800x400 pixels.
Show: .NET Application β†’ EF Core β†’ PostgreSQL β†’ Vector Search. +Use arrows to connect stages. Color scheme: Blue tones. +``` + +![SVG Diagram Example](images/svg-diagram-example.svg) + +All diagrams in this article were generated as SVG, resulting in excellent quality and lower token costs! + +> **Pro Tip**: If you don't need photographs or complex renders, always choose SVG. + +## Architecture Roadmap: Putting It All Together + +Here's the recommended stack for building production LLM applications with .NET: + +1. **Orchestration**: Microsoft.Extensions.AI + Semantic Kernel (when needed) +2. **Vector Database**: PostgreSQL + Pgvector.EntityFrameworkCore +3. **RAG Pattern**: Parent-Child chunks with 10-20% overlap +4. **Tools**: MCP servers for reusability +5. **Reasoning**: ReasoningEffortLevel instead of temperature +6. **Prompting**: Critical rules at the end +7. **Cost Optimization**: Make RAG a tool, not automatic + +## Key Takeaways + +Let me summarize the most important production tips: + +1. **Temperature is gone** β†’ Use `ReasoningEffortLevel` with GPT-5 +2. **Rules at the end** β†’ Combat "Lost in the Middle" +3. **RAG as a tool** β†’ Reduce costs significantly +4. **Parent-Child pattern** β†’ Search small, respond with large +5. **Always use overlap** β†’ 10-20% is the standard +6. **pgvector for most cases** β†’ Unless you have billions of vectors +7. **MCP for reusability** β†’ One codebase, works everywhere +8. **SVG for diagrams** β†’ Better results, lower cost +9. **Hybrid chat history** β†’ Recent in prompt, old in vector DB +10. **RAG > Fine-tuning** β†’ For knowledge, not behavior + +Happy coding! 
πŸš€ \ No newline at end of file diff --git a/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/summary.md b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/summary.md new file mode 100644 index 0000000000..fb1d41af5c --- /dev/null +++ b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/summary.md @@ -0,0 +1 @@ +Learn how to build production-ready LLM applications with .NET. This comprehensive guide covers GPT-5 API changes, advanced RAG architectures with parent-child patterns, PostgreSQL pgvector integration, smart tool usage strategies, multilingual query handling, Model Context Protocol (MCP) for cross-application tool reusability, and chat history management techniques for enterprise applications.