diff --git a/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/coverimage.png b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/coverimage.png
new file mode 100644
index 0000000000..de77177842
Binary files /dev/null and b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/coverimage.png differ
diff --git a/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/chat-history-hybrid.svg b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/chat-history-hybrid.svg
new file mode 100644
index 0000000000..ab1bb36114
--- /dev/null
+++ b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/chat-history-hybrid.svg
@@ -0,0 +1,114 @@
+
\ No newline at end of file
diff --git a/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/mcp-architecture.svg b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/mcp-architecture.svg
new file mode 100644
index 0000000000..ee590d27eb
--- /dev/null
+++ b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/mcp-architecture.svg
@@ -0,0 +1,150 @@
+
\ No newline at end of file
diff --git a/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/multilingual-rag.svg b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/multilingual-rag.svg
new file mode 100644
index 0000000000..81173091f0
--- /dev/null
+++ b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/multilingual-rag.svg
@@ -0,0 +1,135 @@
+
\ No newline at end of file
diff --git a/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/pgvector-integration.svg b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/pgvector-integration.svg
new file mode 100644
index 0000000000..2903740e57
--- /dev/null
+++ b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/pgvector-integration.svg
@@ -0,0 +1,112 @@
+
\ No newline at end of file
diff --git a/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/rag-parent-child.svg b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/rag-parent-child.svg
new file mode 100644
index 0000000000..752c2c42b1
--- /dev/null
+++ b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/rag-parent-child.svg
@@ -0,0 +1,118 @@
+
\ No newline at end of file
diff --git a/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/reasoning-effort-diagram.svg b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/reasoning-effort-diagram.svg
new file mode 100644
index 0000000000..fc6a18d68d
--- /dev/null
+++ b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/reasoning-effort-diagram.svg
@@ -0,0 +1,60 @@
+
\ No newline at end of file
diff --git a/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/svg-diagram-example.svg b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/svg-diagram-example.svg
new file mode 100644
index 0000000000..6087893702
--- /dev/null
+++ b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/images/svg-diagram-example.svg
@@ -0,0 +1,149 @@
+
\ No newline at end of file
diff --git a/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/post.md b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/post.md
new file mode 100644
index 0000000000..8fa2067d01
--- /dev/null
+++ b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/post.md
@@ -0,0 +1,414 @@
+# Building Production-Ready LLM Applications with .NET: A Practical Guide
+
+Large Language Models (LLMs) have evolved rapidly, and integrating them into production .NET applications requires staying current with the latest approaches. In this article, I'll share practical tips and patterns I've learned while building LLM-powered systems, covering everything from API changes in GPT-5 to implementing efficient RAG (Retrieval Augmented Generation) architectures.
+
+Whether you're building a chatbot, a knowledge base assistant, or integrating AI into your enterprise applications, these production-tested insights will help you avoid common pitfalls and build more reliable systems.
+
+## The Temperature Paradigm Shift: GPT-5 Changes Everything
+
+If you've been working with GPT-4 or earlier models, you're familiar with the `temperature` and `top_p` parameters for controlling response randomness. **Here's the critical update**: GPT-5 no longer supports these parameters!
+
+### The Old Way (GPT-4)
+```csharp
+var chatRequest = new ChatOptions
+{
+    Temperature = 0.7f, // ✅ Worked with GPT-4
+    TopP = 0.9f         // ✅ Worked with GPT-4
+};
+```
+
+### The New Way (GPT-5)
+```csharp
+using Microsoft.Extensions.AI;
+using OpenAI.Chat;
+
+var chatRequest = new ChatOptions
+{
+    RawRepresentationFactory = client => new ChatCompletionOptions
+    {
+#pragma warning disable OPENAI001
+        ReasoningEffortLevel = "minimal",
+#pragma warning restore OPENAI001
+    }
+};
+```
+
+**Why the change?** GPT-5 incorporates an internal reasoning and verification process. Instead of controlling randomness, you now specify how much computational effort the model should invest in reasoning through the problem.
+
+![GPT-5 reasoning effort levels](images/reasoning-effort-diagram.svg)
+
+### Choosing the Right Reasoning Level
+
+- **Minimal**: Little to no deliberate reasoning, for latency-sensitive calls (the level used in the snippet above)
+- **Low**: Quick responses for simple queries (e.g., "What's the capital of France?")
+- **Medium**: Balanced approach for most use cases
+- **High**: Complex reasoning tasks (e.g., code generation, multi-step problem solving)
+
+> **Pro Tip**: Reasoning tokens are included in your API costs. Use "High" only when necessary to optimize your budget.
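+
+Because the effort level lives in `ChatOptions`, it is easy to vary per request. The sketch below is a minimal illustration of that idea; the `isComplex` flag stands in for whatever request classification your application already does:
+
+```csharp
+using Microsoft.Extensions.AI;
+using OpenAI.Chat;
+
+// Pick a reasoning effort level per request instead of a sampling temperature.
+static ChatOptions OptionsForTask(bool isComplex) => new()
+{
+    RawRepresentationFactory = _ => new ChatCompletionOptions
+    {
+#pragma warning disable OPENAI001 // ReasoningEffortLevel is still experimental
+        ReasoningEffortLevel = isComplex
+            ? ChatReasoningEffortLevel.High
+            : ChatReasoningEffortLevel.Low,
+#pragma warning restore OPENAI001
+    },
+};
+```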
+
+## System Prompts: The "Lost in the Middle" Problem
+
+Here's a critical insight that can save you hours of debugging: **Important rules must be repeated at the END of your prompt!**
+
+### ❌ What Doesn't Work
+```
+You are a helpful assistant.
+RULE: Never share passwords or sensitive information.
+
+[User Input]
+```
+
+### ✅ What Actually Works
+```
+You are a helpful assistant.
+RULE: Never share passwords or sensitive information.
+
+[User Input]
+
+⚠️ REMINDER: Apply the rules above strictly, ESPECIALLY regarding passwords.
+```
+
+**Why?** LLMs suffer from the "Lost in the Middle" phenomenon: they pay more attention to the beginning and end of the context window. Critical instructions buried in the middle are often ignored.
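+
+A minimal sketch of applying this with a Microsoft.Extensions.AI message list; the rule text is the example above, and whether a trailing system message or an appended user suffix works best varies by model:
+
+```csharp
+using Microsoft.Extensions.AI;
+
+const string Rules = "RULE: Never share passwords or sensitive information.";
+
+List<ChatMessage> BuildMessages(string userInput) =>
+[
+    new(ChatRole.System, $"You are a helpful assistant.\n{Rules}"),
+    new(ChatRole.User, userInput),
+    // Repeat the critical rule at the very end, where attention is strongest.
+    new(ChatRole.System, $"REMINDER: Apply the rules above strictly. {Rules}"),
+];
+```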
+
+## RAG Architecture: The Parent-Child Pattern
+
+Retrieval Augmented Generation (RAG) is essential for grounding LLM responses in your own data. The most effective pattern I've found is the **Parent-Child approach**.
+
+![Parent-Child RAG pattern](images/rag-parent-child.svg)
+
+### How It Works
+
+1. **Split documents into hierarchies**:
+ - **Parent chunks**: Large sections (1000-2000 tokens) for context
+ - **Child chunks**: Small segments (200-500 tokens) for precise retrieval
+
+2. **Store both in vector database** with references
+
+3. **Query flow**:
+ - Search using child chunks (higher precision)
+ - Return parent chunks to LLM (richer context)
+
+### The Overlap Strategy
+
+Always use overlapping chunks to prevent information loss at boundaries!
+
+```
+Chunk 1: Token 0-500
+Chunk 2: Token 400-900   ← 100 token overlap
+Chunk 3: Token 800-1300  ← 100 token overlap
+```
+
+**Standard recommendation**: 10-20% overlap (for 500 tokens, use 50-100 token overlap)
+
+### Implementation with Semantic Kernel
+
+```csharp
+using Microsoft.SemanticKernel.Text;
+
+// TextChunker (experimental, SKEXP0050) expects pre-split lines, not raw text
+#pragma warning disable SKEXP0050
+var lines = TextChunker.SplitPlainTextLines(documentText, maxTokensPerLine: 100);
+
+var chunks = TextChunker.SplitPlainTextParagraphs(
+    lines,
+    maxTokensPerParagraph: 500,
+    overlapTokens: 50
+);
+#pragma warning restore SKEXP0050
+
+foreach (var chunk in chunks)
+{
+    var embedding = await embeddingService.GenerateEmbeddingAsync(chunk);
+    await vectorDb.StoreAsync(chunk, embedding);
+}
+```
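+
+To turn flat chunks into the Parent-Child hierarchy, one option is to chunk twice: large parents first, then small children within each parent. The sketch below assumes the `DocumentChunk` entity from the next section; the `ParentDocument` table and the `_db` EF Core context are hypothetical additions:
+
+```csharp
+#pragma warning disable SKEXP0050
+// Large parent chunks carry the context handed to the LLM later.
+var parentTexts = TextChunker.SplitPlainTextParagraphs(lines, maxTokensPerParagraph: 1500);
+
+foreach (var parentText in parentTexts)
+{
+    var parent = new ParentDocument { Id = Guid.NewGuid(), Content = parentText };
+    _db.ParentDocuments.Add(parent);
+
+    // Small, overlapping child chunks are what we actually embed and search.
+    var childTexts = TextChunker.SplitPlainTextParagraphs(
+        TextChunker.SplitPlainTextLines(parentText, maxTokensPerLine: 100),
+        maxTokensPerParagraph: 300,
+        overlapTokens: 40);
+
+    foreach (var childText in childTexts)
+    {
+        var embedding = await embeddingService.GenerateEmbeddingAsync(childText);
+        _db.DocumentChunks.Add(new DocumentChunk
+        {
+            Id = Guid.NewGuid(),
+            Content = childText,
+            Embedding = new Vector(embedding), // assumes the service returns float[]
+            ParentChunkId = parent.Id,
+            CreatedAt = DateTime.UtcNow,
+        });
+    }
+}
+#pragma warning restore SKEXP0050
+
+await _db.SaveChangesAsync();
+```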
+
+## PostgreSQL + pgvector: The Pragmatic Choice
+
+For .NET developers, choosing a vector database can be overwhelming. After evaluating multiple options, **PostgreSQL with pgvector** is the most practical choice for most scenarios.
+
+![PostgreSQL pgvector integration](images/pgvector-integration.svg)
+
+### Why pgvector?
+
+✅ **Use existing SQL knowledge** - No new query language to learn
+✅ **EF Core integration** - Works with your existing data access layer
+✅ **JOIN with metadata** - Combine vector search with traditional queries
+✅ **WHERE clause filtering** - Filter by tenant, user, date, etc.
+✅ **ACID compliance** - Transaction support for data consistency
+✅ **No separate infrastructure** - One database for everything
+
+### Setting Up pgvector with EF Core
+
+First, install the NuGet package:
+
+```bash
+dotnet add package Pgvector.EntityFrameworkCore
+```
+
+Define your entity:
+
+```csharp
+using Pgvector;
+using Pgvector.EntityFrameworkCore;
+
+public class DocumentChunk
+{
+    public Guid Id { get; set; }
+    public string Content { get; set; }
+    public Vector Embedding { get; set; } // 👈 pgvector type
+    public Guid ParentChunkId { get; set; }
+    public DateTime CreatedAt { get; set; }
+}
+```
+
+Configure in DbContext:
+
+```csharp
+protected override void OnModelCreating(ModelBuilder builder)
+{
+    builder.HasPostgresExtension("vector");
+
+    builder.Entity<DocumentChunk>()
+        .Property(e => e.Embedding)
+        .HasColumnType("vector(1536)"); // 👈 OpenAI embedding dimension
+
+    builder.Entity<DocumentChunk>()
+        .HasIndex(e => e.Embedding)
+        .HasMethod("hnsw") // 👈 Fast approximate search
+        .HasOperators("vector_cosine_ops");
+}
+```
+
+### Performing Vector Search
+
+```csharp
+using Pgvector.EntityFrameworkCore;
+
+public async Task<List<DocumentChunk>> SearchAsync(string query)
+{
+    // 1. Convert query to embedding (wrapped in a Pgvector.Vector)
+    var queryVector = new Vector(await _embeddingService.GetEmbeddingAsync(query));
+
+    // 2. Search with cosine distance so the vector_cosine_ops HNSW index is used
+    return await _context.DocumentChunks
+        .OrderBy(c => c.Embedding.CosineDistance(queryVector)) // 👈 Lower is better
+        .Take(5)
+        .ToListAsync();
+}
+```
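+
+Combining this with the Parent-Child pattern and relational filtering (one of pgvector's main draws) might look like the sketch below; `TenantId` and the `ParentDocuments` table are assumptions layered on top of the entities above:
+
+```csharp
+public async Task<List<ParentDocument>> SearchWithParentsAsync(string query, Guid tenantId)
+{
+    var queryVector = new Vector(await _embeddingService.GetEmbeddingAsync(query));
+
+    // 1. Retrieve with the small, precise child chunks...
+    var parentIds = await _context.DocumentChunks
+        .Where(c => c.TenantId == tenantId) // 👈 ordinary WHERE filtering
+        .OrderBy(c => c.Embedding.CosineDistance(queryVector))
+        .Take(5)
+        .Select(c => c.ParentChunkId)
+        .ToListAsync();
+
+    // 2. ...but hand the LLM the larger parent chunks for richer context.
+    return await _context.ParentDocuments
+        .Where(p => parentIds.Contains(p.Id))
+        .ToListAsync();
+}
+```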
+
+**Source**: [Pgvector.NET on GitHub](https://github.com/pgvector/pgvector-dotnet?tab=readme-ov-file#entity-framework-core)
+
+## Smart Tool Usage: Make RAG a Tool, Not a Tax
+
+A common mistake is calling RAG on every single user message. This wastes tokens and money. Instead, **make RAG a tool** and let the LLM decide when to use it.
+
+### ❌ Expensive Approach
+```csharp
+// Always call RAG, even for "Hello"
+var context = await PerformRAG(userMessage);
+var response = await chatClient.CompleteAsync($"{context}\n\n{userMessage}");
+```
+
+### ✅ Smart Approach
+```csharp
+[KernelFunction]
+[Description("Search the company knowledge base for information")]
+public async Task<string> SearchKnowledgeBase(
+    [Description("The search query")] string query)
+{
+    var results = await _vectorDb.SearchAsync(query);
+    return string.Join("\n---\n", results.Select(r => r.Content));
+}
+```
+
+The LLM will call `SearchKnowledgeBase` only when needed:
+- "Hello" β No tool call
+- "What was our 2024 revenue?" β Calls tool
+- "Tell me a joke" β No tool call
+
+## Multilingual RAG: Query Translation Strategy
+
+When your documents are in one language (e.g., English) but users query in another (e.g., Turkish), you need a translation strategy.
+
+![Multilingual RAG flow](images/multilingual-rag.svg)
+
+### Solution Options
+
+**Option 1**: Use an LLM that automatically calls tools in English
+- Many modern LLMs can do this if properly instructed
+
+**Option 2**: Tool chain approach
+```csharp
+[KernelFunction]
+[Description("Translate text to English")]
+public async Task<string> TranslateToEnglish(string text)
+{
+    // Hypothetical translation step: a lightweight LLM call or translation API
+    return await _translator.TranslateAsync(text, targetLanguage: "en");
+}
+
+[KernelFunction]
+[Description("Search knowledge base (English only)")]
+public async Task<string> SearchKnowledgeBase(string englishQuery)
+{
+    var results = await _vectorDb.SearchAsync(englishQuery);
+    return string.Join("\n---\n", results.Select(r => r.Content));
+}
+```
+
+The LLM will:
+1. Call `TranslateToEnglish("2024 geliri nedir?")`
+2. Get "What was 2024 revenue?"
+3. Call `SearchKnowledgeBase("What was 2024 revenue?")`
+4. Return results and respond in Turkish
+
+## Model Context Protocol (MCP): Beyond In-Process Tools
+
+Microsoft, in collaboration with Anthropic, recently released an official C# SDK for the Model Context Protocol (MCP). This is a game-changer for tool reusability.
+
+![MCP architecture](images/mcp-architecture.svg)
+
+### MCP vs. Semantic Kernel Plugins
+
+| Feature | SK Plugins | MCP Servers |
+|---------|-----------|-------------|
+| **Process** | In-process | Out-of-process (stdio/http) |
+| **Reusability** | Application-specific | Cross-application |
+| **Examples** | Used within your app | VS Code Copilot, Claude Desktop |
+
+### Creating an MCP Server
+
+```csharp
+using Microsoft.Extensions.DependencyInjection;
+using Microsoft.Extensions.Hosting;
+
+var builder = Host.CreateEmptyApplicationBuilder(settings: null);
+
+builder.Services.AddMcpServer()
+    .WithStdioServerTransport()
+    .WithToolsFromAssembly();
+
+await builder.Build().RunAsync();
+```
+
+Define your tools:
+
+```csharp
+using System.ComponentModel;
+using System.IO;
+using System.Security;
+using ModelContextProtocol.Server;
+
+[McpServerToolType]
+public static class FileSystemTools
+{
+    // Example sandbox root; adjust for your deployment
+    private static readonly string AllowedDirectory = Path.GetFullPath("data");
+
+    [McpServerTool, Description("Read a file from the file system")]
+    public static async Task<string> ReadFile(string path)
+    {
+        // ⚠️ SECURITY: Always validate paths!
+        if (!IsPathSafe(path))
+            throw new SecurityException("Invalid path");
+
+        return await File.ReadAllTextAsync(path);
+    }
+
+    private static bool IsPathSafe(string path)
+    {
+        // Prevent path traversal outside the sandbox root
+        var fullPath = Path.GetFullPath(path);
+        return fullPath.StartsWith(AllowedDirectory);
+    }
+}
+```
+
+Your MCP server can now be used by VS Code Copilot, Claude Desktop, or any other MCP client!
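+
+On the consuming side, the same SDK ships a client. A rough sketch, with the caveat that the SDK is in preview and its surface may shift; the command and project path are placeholders:
+
+```csharp
+using ModelContextProtocol.Client;
+
+// Launch the server as a child process and talk to it over stdio.
+var transport = new StdioClientTransport(new StdioClientTransportOptions
+{
+    Name = "filesystem",
+    Command = "dotnet",
+    Arguments = ["run", "--project", "./MyMcpServer"]
+});
+
+var client = await McpClientFactory.CreateAsync(transport);
+
+foreach (var tool in await client.ListToolsAsync())
+    Console.WriteLine($"{tool.Name}: {tool.Description}");
+```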
+
+## Chat History Management: Truncation + RAG Hybrid
+
+For long conversations, storing all history in the context window becomes impractical. Here's the pattern that works:
+
+![Hybrid chat history management](images/chat-history-hybrid.svg)
+
+### ❌ Lossy Approach
+```
+First 50 messages → Summarize with LLM → Single summary message
+```
+**Problem**: Fine-grained details are lost for good (fidelity loss)
+
+### ✅ Hybrid Approach
+1. **Recent messages** (last 5-10): Keep in prompt for immediate context
+2. **Older messages**: Store in vector database as a tool
+
+```csharp
+[KernelFunction]
+[Description("Search conversation history for past discussions")]
+public async Task<string> SearchChatHistory(
+    [Description("What to search for")] string query)
+{
+    var relevantMessages = await _vectorDb.SearchAsync(query);
+    return string.Join("\n", relevantMessages.Select(m =>
+        $"[{m.Timestamp}] {m.Role}: {m.Content}"));
+}
+```
+
+The LLM retrieves only relevant past context when needed, avoiding summary-induced information loss.
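+
+The write side of the hybrid is a small archiving step. A minimal sketch, assuming the embedding service and vector store used earlier plus a hypothetical `ChatHistoryEntry` record for the archived metadata:
+
+```csharp
+using Microsoft.Extensions.AI;
+
+const int KeepRecent = 10;
+
+// Once the live history grows past the window, move the overflow
+// into the vector store and drop it from the prompt.
+async Task ArchiveOldMessagesAsync(List<ChatMessage> history)
+{
+    while (history.Count > KeepRecent)
+    {
+        var oldest = history[0];
+        var embedding = await _embeddingService.GetEmbeddingAsync(oldest.Text);
+
+        var entry = new ChatHistoryEntry
+        {
+            Role = oldest.Role.ToString(),
+            Content = oldest.Text,
+            Timestamp = DateTime.UtcNow
+        };
+        await _vectorDb.StoreAsync(entry, embedding);
+
+        history.RemoveAt(0);
+    }
+}
+```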
+
+## RAG vs. Fine-Tuning: Choose Wisely
+
+A common misconception is using fine-tuning for knowledge injection. Here's when to use each:
+
+| Purpose | RAG | Fine-Tuning |
+|---------|-----|-------------|
+| **Goal** | Memory (provide facts) | Behavior (teach style) |
+| **Updates** | Dynamic (add docs anytime) | Static (requires retraining) |
+| **Cost** | Low dev, higher inference | High dev, lower inference |
+| **Hallucination** | Reduces | Doesn't reduce |
+| **Use Case** | Company docs, FAQs | Brand voice, specific format |
+
+**Common mistake**: "Let's fine-tune on our company documents" ❌
+**Better approach**: Use RAG! ✅
+
+Fine-tuning is for teaching the model *how* to respond, not *what* to know.
+
+**Source**: [Oracle - RAG vs Fine-Tuning](https://www.oracle.com/artificial-intelligence/generative-ai/retrieval-augmented-generation-rag/rag-fine-tuning/)
+
+## Bonus: Why SVG is Superior for LLM-Generated Images
+
+When using LLMs to generate diagrams and visualizations, always request SVG format instead of PNG or JPG.
+
+### Why SVG?
+
+✅ **Text-based** → LLMs produce better results
+✅ **Lower cost** → Fewer tokens than base64-encoded images
+✅ **Editable** → Easy to modify after generation
+✅ **Scalable** → Perfect quality at any size
+✅ **Version control friendly** → Works great in Git
+
+### Example Prompt
+
+```
+Create an architecture diagram showing PostgreSQL with pgvector integration.
+Format: SVG, 800x400 pixels. Show: .NET Application → EF Core → PostgreSQL → Vector Search.
+Use arrows to connect stages. Color scheme: Blue tones.
+```
+
+![Example LLM-generated SVG diagram](images/svg-diagram-example.svg)
+
+All diagrams in this article were generated as SVG, resulting in excellent quality and lower token costs!
+
+> **Pro Tip**: If you don't need photographs or complex renders, always choose SVG.
+
+## Architecture Roadmap: Putting It All Together
+
+Here's the recommended stack for building production LLM applications with .NET:
+
+1. **Orchestration**: Microsoft.Extensions.AI + Semantic Kernel (when needed)
+2. **Vector Database**: PostgreSQL + Pgvector.EntityFrameworkCore
+3. **RAG Pattern**: Parent-Child chunks with 10-20% overlap
+4. **Tools**: MCP servers for reusability
+5. **Reasoning**: ReasoningEffortLevel instead of temperature
+6. **Prompting**: Critical rules at the end
+7. **Cost Optimization**: Make RAG a tool, not automatic
+
+## Key Takeaways
+
+Let me summarize the most important production tips:
+
+1. **Temperature is gone** → Use `ReasoningEffortLevel` with GPT-5
+2. **Rules at the end** → Combat "Lost in the Middle"
+3. **RAG as a tool** → Reduce costs significantly
+4. **Parent-Child pattern** → Search small, respond with large
+5. **Always use overlap** → 10-20% is the standard
+6. **pgvector for most cases** → Unless you have billions of vectors
+7. **MCP for reusability** → One codebase, works everywhere
+8. **SVG for diagrams** → Better results, lower cost
+9. **Hybrid chat history** → Recent in prompt, old in vector DB
+10. **RAG > Fine-tuning** → For knowledge, not behavior
+
+Happy coding! 🚀
\ No newline at end of file
diff --git a/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/summary.md b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/summary.md
new file mode 100644
index 0000000000..fb1d41af5c
--- /dev/null
+++ b/docs/en/Community-Articles/2025-11-22-building-production-ready-llm-applications/summary.md
@@ -0,0 +1 @@
+Learn how to build production-ready LLM applications with .NET. This comprehensive guide covers GPT-5 API changes, advanced RAG architectures with parent-child patterns, PostgreSQL pgvector integration, smart tool usage strategies, multilingual query handling, Model Context Protocol (MCP) for cross-application tool reusability, and chat history management techniques for enterprise applications.