Merge branch 'dev' into issue-24714

pull/24725/head
Fahri Gedik, 1 month ago (committed by GitHub)
commit 2bf1c5155d
  1. 406  .github/scripts/test_update_dependency_changes.py
  2. 331  .github/scripts/update_dependency_changes.py
  3. 1  .github/workflows/auto-pr.yml
  4. 71  .github/workflows/nuget-packages-version-change-detector.yml
  5. 658  .github/workflows/update-studio-docs.yml
  6. 2  abp_io/AbpIoLocalization/AbpIoLocalization/Admin/Localization/Resources/de.json
  7. 2  abp_io/AbpIoLocalization/AbpIoLocalization/Commercial/Localization/Resources/de.json
  8. 2  abp_io/AbpIoLocalization/AbpIoLocalization/Www/Localization/Resources/de.json
  9. 6  ai-rules/common/application-layer.mdc
  10. 5  ai-rules/common/authorization.mdc
  11. 4  ai-rules/common/cli-commands.mdc
  12. 5  ai-rules/common/ddd-patterns.mdc
  13. 4  ai-rules/common/dependency-rules.mdc
  14. 10  ai-rules/common/development-flow.mdc
  15. 7  ai-rules/common/infrastructure.mdc
  16. 5  ai-rules/common/multi-tenancy.mdc
  17. 5  ai-rules/data/ef-core.mdc
  18. 5  ai-rules/data/mongodb.mdc
  19. 6  ai-rules/template-specific/app-nolayers.mdc
  20. 6  ai-rules/testing/patterns.mdc
  21. 5  ai-rules/ui/angular.mdc
  22. 5  ai-rules/ui/blazor.mdc
  23. 6  ai-rules/ui/mvc.mdc
  24. 1  delete-bin-obj.ps1
  25. BIN  docs/en/Blog-Posts/2026-01-08 v10_1_Preview/dotnet-conf-china-2025.png
  26. BIN  docs/en/Blog-Posts/2026-01-08 v10_1_Preview/my-passkey.png
  27. BIN  docs/en/Blog-Posts/2026-01-08 v10_1_Preview/passkey-login.png
  28. BIN  docs/en/Blog-Posts/2026-01-08 v10_1_Preview/passkey-registration.png
  29. BIN  docs/en/Blog-Posts/2026-01-08 v10_1_Preview/passkey-setting.png
  30. BIN  docs/en/Blog-Posts/2026-01-08 v10_1_Preview/password-history-settings.png
  31. BIN  docs/en/Blog-Posts/2026-01-08 v10_1_Preview/password-history-warning.png
  32. BIN  docs/en/Blog-Posts/2026-01-08 v10_1_Preview/reset-password-error-modal.png
  33. BIN  docs/en/Blog-Posts/2026-01-08 v10_1_Preview/set-password-error-modal.png
  34. BIN  docs/en/Blog-Posts/2026-01-08 v10_1_Preview/studio-switch-to-preview.png
  35. 377  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/articles.md
  36. BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/agent-context.png
  37. BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/agent-ecosystem.png
  38. BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/agent-state-flow.png
  39. BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image-1.png
  40. BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image-2.png
  41. BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image-3.png
  42. BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image-4.png
  43. BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image.png
  44. BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/mcp-client-server-1200x700.png
  45. BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/orchestrator-a2a-routing-1200x700.png
  46. BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/orchestrator-researcher-seq-1200x700.png
  47. BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/sequential-agent-context-flow-1200x700.png
  48. BIN  docs/en/Community-Articles/2025-12-18-Implementing-Multiple-Global-Query-Filters-With-Entity-Framework-Core/images/cover.png
  49. 728  docs/en/Community-Articles/2025-12-18-Implementing-Multiple-Global-Query-Filters-With-Entity-Framework-Core/post.md
  50. 1  docs/en/Community-Articles/2025-12-18-Implementing-Multiple-Global-Query-Filters-With-Entity-Framework-Core/summary.md
  51. 167  docs/en/Community-Articles/2026-01-24-How-AI-Is-Changing-Developers/POST.md
  52. BIN  docs/en/Community-Articles/2026-01-24-How-AI-Is-Changing-Developers/image.png
  53. 50  docs/en/Community-Articles/2026-02-02-ndc-london-article/post.md
  54. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/0.png
  55. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/1.png
  56. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/2.png
  57. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/3.png
  58. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/4.png
  59. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/4_1.png
  60. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/4_2.png
  61. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/5.png
  62. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/6.png
  63. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/7.png
  64. 325  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/Post.md
  65. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/cover.png
  66. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/image-20260206003328436.png
  67. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/image-20260206004046914.png
  68. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/image-20260206012506799.png
  69. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk.png
  70. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_1.png
  71. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_10.png
  72. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_11.png
  73. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_12.png
  74. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_2.png
  75. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_3.png
  76. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_4.png
  77. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_5.png
  78. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_6.png
  79. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_7.png
  80. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_8.png
  81. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_9.png
  82. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/youtube-cover-1.png
  83. BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/youtube-cover-2.png
  84. BIN  docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/demo.gif
  85. BIN  docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/images/abp-studio-ai-management.png
  86. BIN  docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/images/ai-management-widget.png
  87. BIN  docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/images/ai-management-workspaces.png
  88. BIN  docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/images/example-comment.png
  89. 488  docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/post.md
  90. 30  docs/en/cli/index.md
  91. 2  docs/en/deployment/configuring-production.md
  92. 22  docs/en/docs-nav.json
  93. 2  docs/en/framework/api-development/standard-apis/configuration.md
  94. 2  docs/en/framework/architecture/domain-driven-design/application-services.md
  95. 23  docs/en/framework/architecture/domain-driven-design/entities.md
  96. 2  docs/en/framework/architecture/modularity/extending/customizing-application-modules-guide.md
  97. 97  docs/en/framework/fundamentals/authorization/index.md
  98. 241  docs/en/framework/fundamentals/authorization/resource-based-authorization.md
  99. 2  docs/en/framework/fundamentals/dynamic-claims.md
  100. 2  docs/en/framework/fundamentals/exception-handling.md

406
.github/scripts/test_update_dependency_changes.py

@@ -0,0 +1,406 @@
#!/usr/bin/env python3
"""
Comprehensive test suite for update_dependency_changes.py

Tests cover:
- Basic update/add/remove scenarios
- Version revert scenarios
- Complex multi-step change sequences
- Edge cases and duplicate operations
- Document format validation
"""
import sys
import os

sys.path.insert(0, os.path.dirname(__file__))

from update_dependency_changes import merge_changes, render_section


def test_update_then_revert():
    """Test: PR1 updates A->B, PR2 reverts B->A. Should be removed."""
    print("Test 1: Update then revert")
    existing = (
        {"PackageA": ("1.0.0", "2.0.0", "#1")},  # updated
        {},  # added
        {}   # removed
    )
    new = (
        {"PackageA": ("2.0.0", "1.0.0", "#2")},  # updated back
        {},
        {}
    )
    updated, added, removed = merge_changes(existing, new)
    assert "PackageA" not in updated, f"Expected PackageA removed, got: {updated}"
    assert len(added) == 0 and len(removed) == 0
    print("✓ Passed: Package correctly removed from updates\n")


def test_add_then_remove_same_version():
    """Test: PR1 adds v1.0, PR2 removes v1.0. Should be completely removed."""
    print("Test 2: Add then remove same version")
    existing = (
        {},
        {"PackageB": ("1.0.0", "#1")},  # added
        {}
    )
    new = (
        {},
        {},
        {"PackageB": ("1.0.0", "#2")}  # removed
    )
    updated, added, removed = merge_changes(existing, new)
    assert "PackageB" not in added, f"Expected PackageB removed from added, got: {added}"
    assert "PackageB" not in removed, f"Expected PackageB removed from removed, got: {removed}"
    assert "PackageB" not in updated
    print("✓ Passed: Package correctly removed from all sections\n")


def test_remove_then_add_same_version():
    """Test: PR1 removes v1.0, PR2 adds v1.0. Should be removed."""
    print("Test 3: Remove then add same version")
    existing = (
        {},
        {},
        {"PackageC": ("1.0.0", "#1")}  # removed
    )
    new = (
        {},
        {"PackageC": ("1.0.0", "#2")},  # added back
        {}
    )
    updated, added, removed = merge_changes(existing, new)
    assert "PackageC" not in updated, f"Expected PackageC removed from updated, got: {updated}"
    assert "PackageC" not in added, f"Expected PackageC removed from added, got: {added}"
    assert "PackageC" not in removed, f"Expected PackageC removed from removed, got: {removed}"
    print("✓ Passed: Package correctly removed from all sections\n")


def test_add_then_remove_different_version():
    """Test: PR1 adds v1.0, PR2 removes v2.0. Should show as removed v2.0."""
    print("Test 4: Add then remove different version")
    existing = (
        {},
        {"PackageD": ("1.0.0", "#1")},  # added
        {}
    )
    new = (
        {},
        {},
        {"PackageD": ("2.0.0", "#2")}  # removed different version
    )
    updated, added, removed = merge_changes(existing, new)
    assert "PackageD" not in added, f"Expected PackageD removed from added, got: {added}"
    assert "PackageD" in removed, f"Expected PackageD in removed, got: {removed}"
    assert removed["PackageD"][0] == "2.0.0", f"Expected version 2.0.0, got: {removed['PackageD']}"
    print(f"✓ Passed: Package correctly tracked as removed with version {removed['PackageD'][0]}\n")


def test_update_in_added():
    """Test: PR1 adds v1.0, PR2 updates to v2.0. Should show as updated 1.0->2.0."""
    print("Test 5: Update a package that was added")
    existing = (
        {},
        {"PackageE": ("1.0.0", "#1")},  # added
        {}
    )
    new = (
        {"PackageE": ("1.0.0", "2.0.0", "#2")},  # updated
        {},
        {}
    )
    updated, added, removed = merge_changes(existing, new)
    assert "PackageE" not in added, f"Expected PackageE removed from added, got: {added}"
    assert "PackageE" in updated, f"Expected PackageE in updated, got: {updated}"
    assert updated["PackageE"] == ("1.0.0", "2.0.0", "#1, #2"), \
        f"Expected ('1.0.0', '2.0.0', '#1, #2'), got: {updated['PackageE']}"
    print(f"✓ Passed: Package correctly converted to updated: {updated['PackageE']}\n")


def test_multiple_updates():
    """Test: PR1 updates A->B, PR2 updates B->C. Should show A->C."""
    print("Test 6: Multiple updates")
    existing = (
        {"PackageF": ("1.0.0", "2.0.0", "#1")},  # updated
        {},
        {}
    )
    new = (
        {"PackageF": ("2.0.0", "3.0.0", "#2")},  # updated again
        {},
        {}
    )
    updated, added, removed = merge_changes(existing, new)
    assert "PackageF" in updated
    assert updated["PackageF"] == ("1.0.0", "3.0.0", "#1, #2"), \
        f"Expected ('1.0.0', '3.0.0', '#1, #2'), got: {updated['PackageF']}"
    print(f"✓ Passed: Package correctly shows full range: {updated['PackageF']}\n")


def test_multiple_updates_back_to_original():
    """Test: PR1 updates 1->2, PR2 updates 2->3, PR3 updates 3->1. Should be removed."""
    print("Test 7: Multiple updates ending back at original version")
    # Simulate PR1 and PR2 already merged
    existing = (
        {"PackageG": ("1.0.0", "3.0.0", "#1, #2")},  # updated through PR1 and PR2
        {},
        {}
    )
    # PR3 changes back to 1.0.0
    new = (
        {"PackageG": ("3.0.0", "1.0.0", "#3")},  # updated back to original
        {},
        {}
    )
    updated, added, removed = merge_changes(existing, new)
    assert "PackageG" not in updated, f"Expected PackageG removed, got: {updated}"
    assert len(added) == 0 and len(removed) == 0
    print("✓ Passed: Package correctly removed (version returned to original)\n")


def test_update_remove_add_same_version():
    """Test: PR1 updates 1->2, PR2 updates 2->3, PR3 removes, PR4 adds v3. Should show updated 1->3."""
    print("Test 8: Update-Update-Remove-Add same version")
    # After PR1, PR2, PR3
    existing = (
        {},
        {},
        {"PackageH": ("1.0.0", "#1, #2, #3")}  # removed (original was 1.0.0)
    )
    # PR4 adds back the same version that was removed
    new = (
        {},
        {"PackageH": ("3.0.0", "#4")},  # added
        {}
    )
    updated, added, removed = merge_changes(existing, new)
    assert "PackageH" in updated, f"Expected PackageH in updated, got: updated={updated}, added={added}, removed={removed}"
    assert updated["PackageH"] == ("1.0.0", "3.0.0", "#1, #2, #3, #4"), \
        f"Expected ('1.0.0', '3.0.0', '#1, #2, #3, #4'), got: {updated['PackageH']}"
    print(f"✓ Passed: Package correctly shows as updated: {updated['PackageH']}\n")


def test_update_remove_add_original_version():
    """Test: PR1 updates 1->2, PR2 updates 2->3, PR3 removes, PR4 adds v1. Should be removed."""
    print("Test 9: Update-Update-Remove-Add original version")
    # After PR1, PR2, PR3
    existing = (
        {},
        {},
        {"PackageI": ("1.0.0", "#1, #2, #3")}  # removed (original was 1.0.0)
    )
    # PR4 adds back the original version
    new = (
        {},
        {"PackageI": ("1.0.0", "#4")},  # added back to original
        {}
    )
    updated, added, removed = merge_changes(existing, new)
    assert "PackageI" not in updated, f"Expected PackageI removed, got: updated={updated}"
    assert "PackageI" not in added, f"Expected PackageI removed, got: added={added}"
    assert "PackageI" not in removed, f"Expected PackageI removed, got: removed={removed}"
    print("✓ Passed: Package correctly removed (added back to original version)\n")


def test_update_remove_add_different_version():
    """Test: PR1 updates 1->2, PR2 updates 2->3, PR3 removes, PR4 adds v4. Should show updated 1->4."""
    print("Test 10: Update-Update-Remove-Add different version")
    # After PR1, PR2, PR3
    existing = (
        {},
        {},
        {"PackageJ": ("1.0.0", "#1, #2, #3")}  # removed (original was 1.0.0)
    )
    # PR4 adds a completely different version
    new = (
        {},
        {"PackageJ": ("4.0.0", "#4")},  # added new version
        {}
    )
    updated, added, removed = merge_changes(existing, new)
    assert "PackageJ" in updated, f"Expected PackageJ in updated, got: updated={updated}, added={added}, removed={removed}"
    assert updated["PackageJ"] == ("1.0.0", "4.0.0", "#1, #2, #3, #4"), \
        f"Expected ('1.0.0', '4.0.0', '#1, #2, #3, #4'), got: {updated['PackageJ']}"
    print(f"✓ Passed: Package correctly shows as updated: {updated['PackageJ']}\n")


def test_add_update_remove():
    """Test: PR1 adds v1, PR2 updates to v2, PR3 removes v2. Should show as removed from v1."""
    print("Test 11: Add-Update-Remove")
    # After PR1 and PR2
    existing = (
        {"PackageK": ("1.0.0", "2.0.0", "#1, #2")},  # updated (was added in PR1, updated in PR2)
        {},
        {}
    )
    # PR3 removes v2
    new = (
        {},
        {},
        {"PackageK": ("2.0.0", "#3")}  # removed
    )
    updated, added, removed = merge_changes(existing, new)
    assert "PackageK" not in updated, f"Expected PackageK removed from updated, got: {updated}"
    assert "PackageK" not in added, f"Expected PackageK removed from added, got: {added}"
    assert "PackageK" in removed, f"Expected PackageK in removed, got: {removed}"
    # The removed entry should track from the original first version
    assert removed["PackageK"][0] == "1.0.0", f"Expected removed from 1.0.0, got: {removed['PackageK']}"
    print(f"✓ Passed: Package correctly shows as removed from original: {removed['PackageK']}\n")


def test_add_remove_add_same_version():
    """Test: PR1 adds v1, PR2 removes v1, PR3 adds v1 again. Should show as added v1."""
    print("Test 12: Add-Remove-Add same version")
    # After PR1 and PR2 (added then removed)
    existing = (
        {},
        {},
        {}  # Completely removed after PR2
    )
    # PR3 adds v1 again
    new = (
        {},
        {"PackageL": ("1.0.0", "#3")},  # added
        {}
    )
    updated, added, removed = merge_changes(existing, new)
    assert "PackageL" in added, f"Expected PackageL in added, got: added={added}"
    assert added["PackageL"] == ("1.0.0", "#3"), f"Expected ('1.0.0', '#3'), got: {added['PackageL']}"
    print(f"✓ Passed: Package correctly shows as added: {added['PackageL']}\n")


def test_update_remove_remove():
    """Test: PR1 updates 1->2, PR2 removes v2, PR3 tries to remove again. Should show removed from v1."""
    print("Test 13: Update-Remove (duplicate remove)")
    # After PR1 and PR2
    existing = (
        {},
        {},
        {"PackageM": ("1.0.0", "#1, #2")}  # removed (original was 1.0.0)
    )
    # PR3 tries to remove again (edge case, might not happen in practice)
    new = (
        {},
        {},
        {"PackageM": ("1.0.0", "#3")}  # removed again
    )
    updated, added, removed = merge_changes(existing, new)
    assert "PackageM" in removed, f"Expected PackageM in removed, got: {removed}"
    # Should keep the original information
    assert removed["PackageM"][0] == "1.0.0", f"Expected removed from 1.0.0, got: {removed['PackageM']}"
    print(f"✓ Passed: Package correctly maintains removed state: {removed['PackageM']}\n")


def test_add_add():
    """Test: PR1 adds v1, PR2 adds v2 (version changed externally). Should show added v2."""
    print("Test 14: Add-Add (version changed between PRs)")
    # After PR1
    existing = (
        {},
        {"PackageN": ("1.0.0", "#1")},  # added
        {}
    )
    # PR2 adds different version (edge case)
    new = (
        {},
        {"PackageN": ("2.0.0", "#2")},  # added different version
        {}
    )
    updated, added, removed = merge_changes(existing, new)
    assert "PackageN" in added, f"Expected PackageN in added, got: {added}"
    assert added["PackageN"][0] == "2.0.0", f"Expected version 2.0.0, got: {added['PackageN']}"
    print(f"✓ Passed: Package correctly shows latest added version: {added['PackageN']}\n")


def test_complex_chain_ending_in_original():
    """Test: Complex chain - Add v1, Update to v2, Remove, Add v2, Update to v1. Should be removed."""
    print("Test 15: Complex chain ending at nothing changed")
    # After PR1 (add), PR2 (update), PR3 (remove), PR4 (add back)
    existing = (
        {"PackageO": ("1.0.0", "2.0.0", "#1, #2, #3, #4")},  # Complex history
        {},
        {}
    )
    # PR5 updates back to v1 (original from perspective of first state)
    new = (
        {"PackageO": ("2.0.0", "1.0.0", "#5")},  # back to start
        {},
        {}
    )
    updated, added, removed = merge_changes(existing, new)
    assert "PackageO" not in updated, f"Expected PackageO removed, got: {updated}"
    print("✓ Passed: Complex chain correctly removed when ending at original\n")


def test_document_format():
    """Test: Verify the document rendering format."""
    print("Test 16: Document format validation")
    updated = {
        "Microsoft.Extensions.Logging": ("8.0.0", "8.0.1", "#123"),
        "Newtonsoft.Json": ("13.0.1", "13.0.3", "#456, #789"),
    }
    added = {
        "Azure.Identity": ("1.10.0", "#567"),
    }
    removed = {
        "System.Text.Json": ("7.0.0", "#890"),
    }
    document = render_section("9.0.0", updated, added, removed)
    # Verify document structure
    assert "## 9.0.0" in document, "Version header missing"
    assert "| Package | Old Version | New Version | PR |" in document, "Updated table header missing"
    assert "Microsoft.Extensions.Logging" in document, "Updated package missing"
    assert "**Added:**" in document, "Added section missing"
    assert "Azure.Identity" in document, "Added package missing"
    assert "**Removed:**" in document, "Removed section missing"
    assert "System.Text.Json" in document, "Removed package missing"
    print("✓ Passed: Document format is correct")
    print("\nSample output:")
    print("-" * 60)
    print(document)
    print("-" * 60 + "\n")


def run_all_tests():
    """Run all test cases."""
    print("=" * 70)
    print("Testing update_dependency_changes.py")
    print("=" * 70 + "\n")
    test_update_then_revert()
    test_add_then_remove_same_version()
    test_remove_then_add_same_version()
    test_add_then_remove_different_version()
    test_update_in_added()
    test_multiple_updates()
    test_multiple_updates_back_to_original()
    test_update_remove_add_same_version()
    test_update_remove_add_original_version()
    test_update_remove_add_different_version()
    test_add_update_remove()
    test_add_remove_add_same_version()
    test_update_remove_remove()
    test_add_add()
    test_complex_chain_ending_in_original()
    test_document_format()
    print("=" * 70)
    print("All 16 tests passed! ✓")
    print("=" * 70)
    print("\nTest coverage summary:")
    print("  ✓ Basic scenarios (update, add, remove)")
    print("  ✓ Version revert handling")
    print("  ✓ Complex multi-step sequences")
    print("  ✓ Edge cases and duplicates")
    print("  ✓ Document format validation")
    print("=" * 70)


if __name__ == "__main__":
    run_all_tests()
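For orientation, the three-tuple contract these tests exercise fits in a few lines. A minimal sketch, not part of the diff; the package name is hypothetical, and it assumes the script's directory is on sys.path as the test file arranges:

from update_dependency_changes import merge_changes

# Each state is (updated, added, removed), keyed by package id.
existing = ({"Demo.Pkg": ("1.0.0", "2.0.0", "#1")}, {}, {})  # already recorded on the base branch
new = ({"Demo.Pkg": ("2.0.0", "3.0.0", "#2")}, {}, {})       # contributed by the current PR
updated, added, removed = merge_changes(existing, new)
print(updated)  # {'Demo.Pkg': ('1.0.0', '3.0.0', '#1, #2')}: chained range, merged PR list

This mirrors Test 6 above; running the file directly (python test_update_dependency_changes.py) executes all 16 tests.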

331
.github/scripts/update_dependency_changes.py

@@ -0,0 +1,331 @@
import subprocess
import re
import os
import sys
import xml.etree.ElementTree as ET

HEADER = "# Package Version Changes\n"
DOC_PATH = os.environ.get("DOC_PATH", "docs/en/package-version-changes.md")


def get_version():
    """Read the current version from common.props."""
    try:
        tree = ET.parse("common.props")
        root = tree.getroot()
        version_elem = root.find(".//Version")
        if version_elem is not None:
            return version_elem.text
    except FileNotFoundError:
        print("Error: 'common.props' file not found.", file=sys.stderr)
    except ET.ParseError as ex:
        print(f"Error: Failed to parse 'common.props': {ex}", file=sys.stderr)
    return None


def get_diff(base_ref):
    """Get diff of Directory.Packages.props against the base branch."""
    result = subprocess.run(
        ["git", "diff", f"origin/{base_ref}", "--", "Directory.Packages.props"],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        raise RuntimeError(
            f"Failed to get diff for base ref 'origin/{base_ref}': {result.stderr}"
        )
    return result.stdout


def get_existing_doc_from_base(base_ref):
    """Read the existing document from the base branch."""
    result = subprocess.run(
        ["git", "show", f"origin/{base_ref}:{DOC_PATH}"],
        capture_output=True,
        text=True,
    )
    if result.returncode == 0:
        return result.stdout
    return ""


def parse_diff_packages(lines, prefix):
    """Parse package versions from diff lines with the given prefix (+ or -)."""
    packages = {}
    # Use separate patterns to handle different attribute orders
    include_pattern = re.compile(r'Include="([^"]+)"')
    version_pattern = re.compile(r'Version="([^"]+)"')
    for line in lines:
        if line.startswith(prefix) and "PackageVersion" in line and not line.startswith(prefix * 3):
            include_match = include_pattern.search(line)
            version_match = version_pattern.search(line)
            if include_match and version_match:
                packages[include_match.group(1)] = version_match.group(1)
    return packages


def classify_changes(old_packages, new_packages, pr_number):
    """Classify diff into updated, added, and removed with PR attribution."""
    updated = {}
    added = {}
    removed = {}
    all_packages = sorted(set(list(old_packages.keys()) + list(new_packages.keys())))
    for pkg in all_packages:
        if pkg in old_packages and pkg in new_packages:
            if old_packages[pkg] != new_packages[pkg]:
                updated[pkg] = (old_packages[pkg], new_packages[pkg], pr_number)
        elif pkg in new_packages:
            added[pkg] = (new_packages[pkg], pr_number)
        else:
            removed[pkg] = (old_packages[pkg], pr_number)
    return updated, added, removed


def parse_existing_section(section_text):
    """Parse an existing markdown section to extract package records with PR info."""
    updated = {}
    added = {}
    removed = {}
    mode = "updated"
    for line in section_text.split("\n"):
        if "**Added:**" in line:
            mode = "added"
            continue
        if "**Removed:**" in line:
            mode = "removed"
            continue
        if not line.startswith("|") or line.startswith("| Package") or line.startswith("|---"):
            continue
        parts = [p.strip() for p in line.split("|")[1:-1]]
        if mode == "updated" and len(parts) >= 3:
            pr = parts[3] if len(parts) >= 4 else ""
            updated[parts[0]] = (parts[1], parts[2], pr)
        elif len(parts) >= 2:
            pr = parts[2] if len(parts) >= 3 else ""
            if mode == "added":
                added[parts[0]] = (parts[1], pr)
            else:
                removed[parts[0]] = (parts[1], pr)
    return updated, added, removed


def merge_prs(existing_pr, new_pr):
    """Merge PR numbers, avoiding duplicates."""
    if not existing_pr or not existing_pr.strip():
        return new_pr
    if not new_pr or not new_pr.strip():
        return existing_pr
    # Parse existing PRs
    existing_prs = [p.strip() for p in existing_pr.split(",") if p.strip()]
    # Add new PR if not already present
    if new_pr not in existing_prs:
        existing_prs.append(new_pr)
    return ", ".join(existing_prs)


def merge_changes(existing, new):
    """Merge new changes into existing records for the same version."""
    ex_updated, ex_added, ex_removed = existing
    new_updated, new_added, new_removed = new
    merged_updated = dict(ex_updated)
    merged_added = dict(ex_added)
    merged_removed = dict(ex_removed)
    for pkg, (old_ver, new_ver, pr) in new_updated.items():
        if pkg in merged_updated:
            existing_old_ver, existing_new_ver, existing_pr = merged_updated[pkg]
            merged_pr = merge_prs(existing_pr, pr)
            merged_updated[pkg] = (existing_old_ver, new_ver, merged_pr)
        elif pkg in merged_added:
            existing_ver, existing_pr = merged_added[pkg]
            merged_pr = merge_prs(existing_pr, pr)
            # Convert added to updated since the version changed again
            del merged_added[pkg]
            merged_updated[pkg] = (existing_ver, new_ver, merged_pr)
        else:
            merged_updated[pkg] = (old_ver, new_ver, pr)
    for pkg, (ver, pr) in new_added.items():
        if pkg in merged_removed:
            removed_ver, removed_pr = merged_removed.pop(pkg)
            merged_pr = merge_prs(removed_pr, pr)
            merged_updated[pkg] = (removed_ver, ver, merged_pr)
        elif pkg in merged_added:
            existing_ver, existing_pr = merged_added[pkg]
            merged_pr = merge_prs(existing_pr, pr)
            merged_added[pkg] = (ver, merged_pr)
        else:
            merged_added[pkg] = (ver, pr)
    for pkg, (ver, pr) in new_removed.items():
        if pkg in merged_added:
            existing_ver, existing_pr = merged_added[pkg]
            # Only delete if versions match (added then removed the same version)
            if existing_ver == ver:
                del merged_added[pkg]
            else:
                # Version changed between add and remove; record it as removed at that version
                del merged_added[pkg]
                merged_removed[pkg] = (ver, merge_prs(existing_pr, pr))
        elif pkg in merged_updated:
            old_ver, new_ver, existing_pr = merged_updated.pop(pkg)
            merged_pr = merge_prs(existing_pr, pr)
            # Record the removal from the original (pre-update) version
            merged_removed[pkg] = (old_ver, merged_pr)
        else:
            merged_removed[pkg] = (ver, pr)
    # Remove updated entries where old and new versions are the same
    merged_updated = {k: v for k, v in merged_updated.items() if v[0] != v[1]}
    # Remove added entries that are also in removed with the same version
    for pkg in list(merged_added.keys()):
        if pkg in merged_removed:
            added_ver, added_pr = merged_added[pkg]
            removed_ver, removed_pr = merged_removed[pkg]
            if added_ver == removed_ver:
                # Package was added and removed at the same version, cancel out
                del merged_added[pkg]
                del merged_removed[pkg]
    return merged_updated, merged_added, merged_removed


def render_section(version, updated, added, removed):
    """Render a version section as markdown."""
    lines = [f"## {version}\n"]
    if updated:
        lines.append("| Package | Old Version | New Version | PR |")
        lines.append("|---------|-------------|-------------|-----|")
        for pkg in sorted(updated):
            old_ver, new_ver, pr = updated[pkg]
            lines.append(f"| {pkg} | {old_ver} | {new_ver} | {pr} |")
        lines.append("")
    if added:
        lines.append("**Added:**\n")
        lines.append("| Package | Version | PR |")
        lines.append("|---------|---------|-----|")
        for pkg in sorted(added):
            ver, pr = added[pkg]
            lines.append(f"| {pkg} | {ver} | {pr} |")
        lines.append("")
    if removed:
        lines.append("**Removed:**\n")
        lines.append("| Package | Version | PR |")
        lines.append("|---------|---------|-----|")
        for pkg in sorted(removed):
            ver, pr = removed[pkg]
            lines.append(f"| {pkg} | {ver} | {pr} |")
        lines.append("")
    return "\n".join(lines)


def parse_document(content):
    """Split document into a list of (version, section_text) tuples."""
    sections = []
    current_version = None
    current_lines = []
    for line in content.split("\n"):
        match = re.match(r"^## (.+)$", line)
        if match:
            if current_version:
                sections.append((current_version, "\n".join(current_lines)))
            current_version = match.group(1).strip()
            current_lines = [line]
        elif current_version:
            current_lines.append(line)
    if current_version:
        sections.append((current_version, "\n".join(current_lines)))
    return sections


def main():
    if len(sys.argv) < 3:
        print("Usage: update_dependency_changes.py <base-ref> <pr-number>")
        sys.exit(1)
    base_ref = sys.argv[1]
    pr_arg = sys.argv[2]
    # Validate PR number is numeric
    if not re.fullmatch(r"\d+", pr_arg):
        print("Invalid PR number; must be numeric.")
        sys.exit(1)
    # Validate base_ref doesn't contain dangerous characters
    if not re.fullmatch(r"[a-zA-Z0-9/_.-]+", base_ref):
        print("Invalid base ref; contains invalid characters.")
        sys.exit(1)
    pr_number = f"#{pr_arg}"
    version = get_version()
    if not version:
        print("Could not read version from common.props.")
        sys.exit(1)
    diff = get_diff(base_ref)
    if not diff:
        print("No diff found for Directory.Packages.props.")
        sys.exit(0)
    diff_lines = diff.split("\n")
    old_packages = parse_diff_packages(diff_lines, "-")
    new_packages = parse_diff_packages(diff_lines, "+")
    new_updated, new_added, new_removed = classify_changes(old_packages, new_packages, pr_number)
    if not new_updated and not new_added and not new_removed:
        print("No package version changes detected.")
        sys.exit(0)
    # Load existing document from the base branch
    existing_content = get_existing_doc_from_base(base_ref)
    sections = parse_document(existing_content) if existing_content else []
    # Find existing section for this version
    version_index = None
    for i, (v, _) in enumerate(sections):
        if v == version:
            version_index = i
            break
    if version_index is not None:
        existing = parse_existing_section(sections[version_index][1])
        merged = merge_changes(existing, (new_updated, new_added, new_removed))
        section_text = render_section(version, *merged)
        sections[version_index] = (version, section_text)
    else:
        section_text = render_section(version, new_updated, new_added, new_removed)
        sections.insert(0, (version, section_text))
    # Write document
    doc_dir = os.path.dirname(DOC_PATH)
    if doc_dir:
        os.makedirs(doc_dir, exist_ok=True)
    with open(DOC_PATH, "w") as f:
        f.write(HEADER + "\n")
        for _, text in sections:
            f.write(text.rstrip("\n") + "\n\n")
    print(f"Updated {DOC_PATH} for version {version}")


if __name__ == "__main__":
    main()
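For reference, the front half of the pipeline (diff parsing and classification) behaves like this. A small sketch with an illustrative package name and versions, not part of the diff:

from update_dependency_changes import parse_diff_packages, classify_changes

diff_lines = [
    '-    <PackageVersion Include="Demo.Package" Version="13.0.1" />',
    '+    <PackageVersion Include="Demo.Package" Version="13.0.3" />',
]
old_pkgs = parse_diff_packages(diff_lines, "-")  # {'Demo.Package': '13.0.1'}
new_pkgs = parse_diff_packages(diff_lines, "+")  # {'Demo.Package': '13.0.3'}
print(classify_changes(old_pkgs, new_pkgs, "#123"))
# ({'Demo.Package': ('13.0.1', '13.0.3', '#123')}, {}, {})

The result then flows through merge_changes and render_section to produce the markdown tables written to DOC_PATH.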

1
.github/workflows/auto-pr.yml

@@ -26,7 +26,6 @@ jobs:
          branch: auto-merge/rel-10-1/${{github.run_number}}
          title: Merge branch dev with rel-10.1
          body: This PR generated automatically to merge dev with rel-10.1. Please review the changed files before merging to prevent any errors that may occur.
          reviewers: maliming
          draft: true
          token: ${{ github.token }}
      - name: Merge Pull Request

71
.github/workflows/nuget-packages-version-change-detector.yml

@@ -0,0 +1,71 @@
# Automatically detects and documents NuGet package version changes in PRs.
# Triggers on changes to Directory.Packages.props and:
# - Adds 'dependency-change' label to the PR
# - Updates docs/en/package-version-changes.md with version changes
# - Commits the documentation back to the PR branch
# Note: Only runs for PRs from the same repository (not forks) to ensure write permissions.
name: Nuget Packages Version Change Detector

on:
  pull_request:
    paths:
      - 'Directory.Packages.props'
    types:
      - opened
      - synchronize
      - reopened
      - ready_for_review

permissions:
  contents: read

concurrency:
  group: dependency-changes-${{ github.event.pull_request.number }}
  cancel-in-progress: false

jobs:
  label:
    if: ${{ !github.event.pull_request.draft && !startsWith(github.head_ref, 'auto-merge/') && github.event.pull_request.head.repo.full_name == github.repository && !contains(github.event.head_commit.message, '[skip ci]') }}
    permissions:
      contents: write
      pull-requests: write
    runs-on: ubuntu-latest
    env:
      DOC_PATH: docs/en/package-version-changes.md
    steps:
      - run: gh pr edit "$PR_NUMBER" --add-label "dependency-change"
        env:
          PR_NUMBER: ${{ github.event.pull_request.number }}
          GH_TOKEN: ${{ secrets.BOT_SECRET }}
          GH_REPO: ${{ github.repository }}
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.ref }}
          fetch-depth: 1
      - name: Fetch base branch
        run: git fetch origin ${{ github.event.pull_request.base.ref }}:refs/remotes/origin/${{ github.event.pull_request.base.ref }} --depth=1
      - uses: actions/setup-python@v5
        with:
          python-version: '3.x'
      - run: python .github/scripts/update_dependency_changes.py ${{ github.event.pull_request.base.ref }} ${{ github.event.pull_request.number }}
      - name: Commit changes
        run: |
          set -e
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add "$DOC_PATH"
          if git diff --staged --quiet; then
            echo "No changes to commit."
          else
            git commit -m "docs: update package version changes [skip ci]"
            if ! git push; then
              echo "Error: Failed to push changes. This may be due to conflicts or permission issues."
              exit 1
            fi
            echo "Successfully committed and pushed documentation changes."
          fi

658
.github/workflows/update-studio-docs.yml

@@ -0,0 +1,658 @@
name: Update ABP Studio Docs

on:
  repository_dispatch:
    types: [update_studio_docs]
  workflow_dispatch:
    inputs:
      version:
        description: 'Studio version (e.g., 2.1.10)'
        required: true
      name:
        description: 'Release name'
        required: true
      notes:
        description: 'Raw release notes'
        required: true
      url:
        description: 'Release URL'
        required: true
      target_branch:
        description: 'Target branch (default: dev)'
        required: false
        default: 'dev'

jobs:
  update-docs:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
      models: read
    steps:
      # -------------------------------------------------
      # Extract payload (repository_dispatch or workflow_dispatch)
      # -------------------------------------------------
      - name: Extract payload
        id: payload
        run: |
          if [ "${{ github.event_name }}" = "repository_dispatch" ]; then
            echo "version=${{ github.event.client_payload.version }}" >> $GITHUB_OUTPUT
            echo "name=${{ github.event.client_payload.name }}" >> $GITHUB_OUTPUT
            echo "url=${{ github.event.client_payload.url }}" >> $GITHUB_OUTPUT
            echo "target_branch=${{ github.event.client_payload.target_branch || 'dev' }}" >> $GITHUB_OUTPUT
            # Save notes to environment variable (multiline)
            {
              echo "RAW_NOTES<<NOTES_DELIMITER_EOF"
              jq -r '.client_payload.notes' "$GITHUB_EVENT_PATH"
              echo "NOTES_DELIMITER_EOF"
            } >> $GITHUB_ENV
          else
            echo "version=${{ github.event.inputs.version }}" >> $GITHUB_OUTPUT
            echo "name=${{ github.event.inputs.name }}" >> $GITHUB_OUTPUT
            echo "url=${{ github.event.inputs.url }}" >> $GITHUB_OUTPUT
            echo "target_branch=${{ github.event.inputs.target_branch || 'dev' }}" >> $GITHUB_OUTPUT
            # Save notes to environment variable (multiline)
            {
              echo "RAW_NOTES<<NOTES_DELIMITER_EOF"
              echo "${{ github.event.inputs.notes }}"
              echo "NOTES_DELIMITER_EOF"
            } >> $GITHUB_ENV
          fi

      - name: Validate payload
        env:
          VERSION: ${{ steps.payload.outputs.version }}
          NAME: ${{ steps.payload.outputs.name }}
          URL: ${{ steps.payload.outputs.url }}
          TARGET_BRANCH: ${{ steps.payload.outputs.target_branch }}
        run: |
          if [ -z "$VERSION" ] || [ "$VERSION" = "null" ]; then
            echo "❌ Missing: version"
            exit 1
          fi
          if [ -z "$NAME" ] || [ "$NAME" = "null" ]; then
            echo "❌ Missing: name"
            exit 1
          fi
          if [ -z "$URL" ] || [ "$URL" = "null" ]; then
            echo "❌ Missing: url"
            exit 1
          fi
          if [ -z "$RAW_NOTES" ]; then
            echo "❌ Missing: release notes"
            exit 1
          fi
          echo "✅ Payload validated"
          echo "   Version: $VERSION"
          echo "   Name: $NAME"
          echo "   Target Branch: $TARGET_BRANCH"

      # -------------------------------------------------
      # Checkout target branch
      # -------------------------------------------------
      - name: Checkout
        uses: actions/checkout@v4
        with:
          ref: ${{ steps.payload.outputs.target_branch }}
          fetch-depth: 0

      - name: Configure git
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"

      # -------------------------------------------------
      # Create working branch
      # -------------------------------------------------
      - name: Create branch
        env:
          VERSION: ${{ steps.payload.outputs.version }}
        run: |
          BRANCH="docs/studio-${VERSION}"
          # Delete remote branch if exists (idempotent)
          git push origin --delete "$BRANCH" 2>/dev/null || true
          git checkout -B "$BRANCH"
          echo "BRANCH=$BRANCH" >> $GITHUB_ENV

      # -------------------------------------------------
      # Analyze existing release notes format
      # -------------------------------------------------
      - name: Analyze existing format
        id: analyze
        run: |
          FILE="docs/en/studio/release-notes.md"
          if [ -f "$FILE" ] && [ -s "$FILE" ]; then
            {
              echo "EXISTING_FORMAT<<DELIMITER_EOF"
              head -50 "$FILE" | sed 's/DELIMITER_EOF/DELIMITER_E0F/g'
              echo "DELIMITER_EOF"
            } >> $GITHUB_OUTPUT
          else
            {
              echo "EXISTING_FORMAT<<DELIMITER_EOF"
              echo "# ABP Studio Release Notes"
              echo ""
              echo "## 2.1.0 (2025-12-08) Latest"
              echo "- Enhanced Module Installation UI"
              echo "- Added AI Management option"
              echo "DELIMITER_EOF"
            } >> $GITHUB_OUTPUT
          fi

      # -------------------------------------------------
      # Try AI formatting (OPTIONAL - never fails workflow)
      # -------------------------------------------------
      - name: Format release notes with AI
        id: ai
        continue-on-error: true
        uses: actions/ai-inference@v1
        with:
          model: openai/gpt-4.1
          prompt: |
            You are a technical writer for ABP Studio release notes.

            Existing release notes format:
            ${{ steps.analyze.outputs.EXISTING_FORMAT }}

            New release:
            Version: ${{ steps.payload.outputs.version }}
            Name: ${{ steps.payload.outputs.name }}

            Raw notes:
            ${{ env.RAW_NOTES }}

            CRITICAL RULES:
            1. Extract ONLY essential, user-facing changes
            2. Format as bullet points starting with "- "
            3. Keep it concise and professional
            4. Match the style of existing release notes
            5. Skip internal/technical details unless critical
            6. Return ONLY the bullet points (no version header, no date)
            7. One change per line

            Output example:
            - Fixed books sample for blazor-webapp tiered solution
            - Enhanced Module Installation UI
            - Added AI Management option to Startup Templates

            Return ONLY the formatted bullet points.

      # -------------------------------------------------
      # Fallback: Use raw notes if AI unavailable
      # -------------------------------------------------
      - name: Prepare final release notes
        run: |
          mkdir -p .tmp
          AI_RESPONSE="${{ steps.ai.outputs.response }}"
          if [ -n "$AI_RESPONSE" ] && [ "$AI_RESPONSE" != "null" ]; then
            echo "✅ Using AI-formatted release notes"
            echo "$AI_RESPONSE" > .tmp/final-notes.txt
          else
            echo "⚠️ AI unavailable - using aggressive cleaning on raw release notes"
            # Clean and format raw notes with aggressive filtering
            echo "$RAW_NOTES" | while IFS= read -r line; do
              # Skip empty lines
              [ -z "$line" ] && continue
              # Skip section headers
              [[ "$line" =~ ^#+.*What.*Changed ]] && continue
              [[ "$line" =~ ^##[[:space:]] ]] && continue
              # Skip full changelog links
              [[ "$line" =~ ^\*\*Full\ Changelog ]] && continue
              [[ "$line" =~ ^Full\ Changelog ]] && continue
              # Remove leading bullet/asterisk
              line=$(echo "$line" | sed 's/^[[:space:]]*[*-][[:space:]]*//')
              # Aggressive cleaning: remove entire " by @user in https://..." suffix
              line=$(echo "$line" | sed 's/[[:space:]]*by @[a-zA-Z0-9_-]*[[:space:]]*in https:\/\/github\.com\/[^[:space:]]*//g')
              # Remove remaining "by @username" or "by username"
              line=$(echo "$line" | sed 's/[[:space:]]*by @[a-zA-Z0-9_-]*[[:space:]]*$//g')
              line=$(echo "$line" | sed 's/[[:space:]]*by [a-zA-Z0-9_-]*[[:space:]]*$//g')
              # Remove standalone @mentions
              line=$(echo "$line" | sed 's/@[a-zA-Z0-9_-]*//g')
              # Clean trailing periods if orphaned
              line=$(echo "$line" | sed 's/\.[[:space:]]*$//')
              # Trim all whitespace
              line=$(echo "$line" | sed 's/^[[:space:]]*//;s/[[:space:]]*$//')
              # Skip if line is empty or too short
              [ -z "$line" ] && continue
              [ ${#line} -lt 5 ] && continue
              # Capitalize first letter if lowercase
              line="$(echo ${line:0:1} | tr '[:lower:]' '[:upper:]')${line:1}"
              # Add clean bullet and output
              echo "- $line"
            done > .tmp/final-notes.txt
          fi
          # Safety check: verify we have content
          if [ ! -s .tmp/final-notes.txt ]; then
            echo "⚠️ No valid release notes extracted, using minimal fallback"
            echo "- Release ${{ steps.payload.outputs.version }}" > .tmp/final-notes.txt
          fi
          echo "=== Final release notes ==="
          cat .tmp/final-notes.txt
          echo "==========================="

      # -------------------------------------------------
      # Update release-notes.md (move "Latest" tag correctly)
      # -------------------------------------------------
      - name: Update release-notes.md
        env:
          VERSION: ${{ steps.payload.outputs.version }}
          NAME: ${{ steps.payload.outputs.name }}
          URL: ${{ steps.payload.outputs.url }}
        run: |
          FILE="docs/en/studio/release-notes.md"
          DATE="$(date +%Y-%m-%d)"
          mkdir -p docs/en/studio
          # Check if version already exists (idempotent)
          if [ -f "$FILE" ] && grep -q "^## $VERSION " "$FILE"; then
            echo "⚠️ Version $VERSION already exists in release notes - skipping update"
            echo "VERSION_UPDATED=false" >> $GITHUB_ENV
            exit 0
          fi
          # Read final notes
          NOTES_CONTENT="$(cat .tmp/final-notes.txt)"
          # Create new entry
          NEW_ENTRY="## $VERSION ($DATE) Latest
          $NOTES_CONTENT
          "
          # Process file
          if [ ! -f "$FILE" ]; then
            # Create new file
            cat > "$FILE" <<EOF
          # ABP Studio Release Notes
          $NEW_ENTRY
          EOF
          else
            # Remove "Latest" tag from existing entries and insert new one
            awk -v new_entry="$NEW_ENTRY" '
              BEGIN { inserted = 0 }
              # Remove "Latest" from existing entries
              /^## [0-9]/ {
                gsub(/ Latest$/, "", $0)
              }
              # Insert after first "## " (version heading) or after title
              /^## / && !inserted {
                print new_entry
                inserted = 1
              }
              # Print current line
              { print }
              # If we reach end without inserting, add at end
              END {
                if (!inserted) {
                  print ""
                  print new_entry
                }
              }
            ' "$FILE" > "$FILE.new"
            mv "$FILE.new" "$FILE"
          fi
          echo "VERSION_UPDATED=true" >> $GITHUB_ENV
          echo "=== Updated release-notes.md preview ==="
          head -30 "$FILE"
          echo "========================================"

      # -------------------------------------------------
      # Fetch latest stable ABP version (no preview/rc/beta)
      # -------------------------------------------------
      - name: Fetch latest stable ABP version
        id: abp
        run: |
          # Fetch all releases
          RELEASES=$(curl -fsS \
            -H "Accept: application/vnd.github+json" \
            -H "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" \
            "https://api.github.com/repos/abpframework/abp/releases?per_page=20")
          # Filter stable releases (exclude preview, rc, beta, dev)
          ABP_VERSION=$(echo "$RELEASES" | jq -r '
            [.[] | select(
              (.prerelease == false) and
              (.tag_name | test("preview|rc|beta|dev"; "i") | not)
            )] | first | .tag_name
          ')
          if [ -z "$ABP_VERSION" ] || [ "$ABP_VERSION" = "null" ]; then
            echo "❌ Could not determine latest stable ABP version"
            exit 1
          fi
          echo "✅ Latest stable ABP version: $ABP_VERSION"
          echo "ABP_VERSION=$ABP_VERSION" >> $GITHUB_ENV

      # -------------------------------------------------
      # Update version-mapping.md (smart range expansion)
      # -------------------------------------------------
      - name: Update version-mapping.md
        env:
          STUDIO_VERSION: ${{ steps.payload.outputs.version }}
        run: |
          FILE="docs/en/studio/version-mapping.md"
          ABP_VERSION="${{ env.ABP_VERSION }}"
          mkdir -p docs/en/studio
          # Create file if doesn't exist
          if [ ! -f "$FILE" ]; then
            cat > "$FILE" <<EOF
          # ABP Studio and ABP Startup Template Version Mappings
          | **ABP Studio Version** | **ABP Version of Startup Template** |
          |------------------------|-------------------------------------|
          | $STUDIO_VERSION | $ABP_VERSION |
          EOF
            echo "MAPPING_UPDATED=true" >> $GITHUB_ENV
            exit 0
          fi
          # Use Python for smart version range handling
          python3 <<'PYTHON_EOF'
          import os
          import re
          from packaging.version import Version, InvalidVersion

          studio_ver = os.environ["STUDIO_VERSION"]
          abp_ver = os.environ["ABP_VERSION"]
          file_path = "docs/en/studio/version-mapping.md"

          try:
              studio = Version(studio_ver)
          except InvalidVersion:
              print(f"❌ Invalid Studio version: {studio_ver}")
              exit(1)

          with open(file_path, 'r') as f:
              lines = f.readlines()

          # Find table start (skip SEO and headers)
          table_start = 0
          table_end = 0
          for i, line in enumerate(lines):
              if line.strip().startswith('|') and '**ABP Studio Version**' in line:
                  table_start = i
              elif table_start > 0 and line.strip() and not line.strip().startswith('|'):
                  table_end = i
                  break
          if table_start == 0:
              print("❌ Could not find version mapping table")
              exit(1)
          # If no end found, table goes to end of file
          if table_end == 0:
              table_end = len(lines)

          # Extract sections
          before_table = lines[:table_start]  # Everything before table
          table_header = lines[table_start:table_start+2]  # Header + separator
          data_rows = [l for l in lines[table_start+2:table_end] if l.strip().startswith('|')]  # Data rows
          after_table = lines[table_end:]  # Everything after table

          new_rows = []
          handled = False

          def parse_version_range(version_str):
              """Parse '2.1.5 - 2.1.9' or '2.1.5' into (start, end)"""
              version_str = version_str.strip()
              if '–' in version_str or '-' in version_str:
                  # Handle both em-dash and hyphen
                  parts = re.split(r'\s*[–-]\s*', version_str)
                  if len(parts) == 2:
                      try:
                          return Version(parts[0].strip()), Version(parts[1].strip())
                      except InvalidVersion:
                          return None, None
              try:
                  v = Version(version_str)
                  return v, v
              except InvalidVersion:
                  return None, None

          def format_row(studio_range, abp_version):
              """Format a table row with proper spacing"""
              return f"| {studio_range:<22} | {abp_version:<27} |\n"

          # Process existing rows
          for row in data_rows:
              match = re.match(r'\|\s*(.+?)\s*\|\s*(.+?)\s*\|', row)
              if not match:
                  continue
              existing_studio_range = match.group(1).strip()
              existing_abp = match.group(2).strip()
              # Only consider rows with matching ABP version
              if existing_abp != abp_ver:
                  new_rows.append(row)
                  continue
              start_ver, end_ver = parse_version_range(existing_studio_range)
              if start_ver is None or end_ver is None:
                  new_rows.append(row)
                  continue
              # Check if current studio version is in this range
              if start_ver <= studio <= end_ver:
                  print(f"✅ Studio version {studio_ver} already covered in range {existing_studio_range}")
                  handled = True
                  new_rows.append(row)
              # Check if we should extend the range
              elif end_ver < studio:
                  # Calculate if studio is the next logical version
                  # For patch versions: 2.1.9 -> 2.1.10
                  # For minor versions: 2.1.9 -> 2.2.0
                  # Simple heuristic: if major.minor match and patch increments, extend range
                  if (start_ver.major == studio.major and
                          start_ver.minor == studio.minor and
                          studio.micro <= end_ver.micro + 5):  # Allow small gaps
                      new_range = f"{start_ver} - {studio}"
                      new_rows.append(format_row(new_range, abp_ver))
                      print(f"✅ Extended range: {new_range}")
                      handled = True
                  else:
                      new_rows.append(row)
              else:
                  new_rows.append(row)

          # If not handled, add new row at top of data
          if not handled:
              new_row = format_row(str(studio), abp_ver)
              new_rows.insert(0, new_row)
              print(f"✅ Added new mapping: {studio_ver} -> {abp_ver}")

          # Write updated file - preserve ALL content
          with open(file_path, 'w') as f:
              f.writelines(before_table)  # SEO, title, intro text
              f.writelines(table_header)  # Table header
              f.writelines(new_rows)      # Updated data rows
              f.writelines(after_table)   # Content after table (preview section, etc.)
          print("MAPPING_UPDATED=true")
          PYTHON_EOF
          echo "MAPPING_UPDATED=true" >> $GITHUB_ENV
          echo "=== Updated version-mapping.md preview ==="
          head -35 "$FILE"
          echo "=========================================="

      # -------------------------------------------------
      # Check for changes
      # -------------------------------------------------
      - name: Check for changes
        id: changes
        run: |
          git add docs/en/studio/
          if git diff --cached --quiet; then
            echo "has_changes=false" >> $GITHUB_OUTPUT
            echo "⚠️ No changes detected"
          else
            echo "has_changes=true" >> $GITHUB_OUTPUT
            echo "✅ Changes detected:"
            git diff --cached --stat
          fi

      # -------------------------------------------------
      # Commit & push
      # -------------------------------------------------
      - name: Commit and push
        if: steps.changes.outputs.has_changes == 'true'
        env:
          VERSION: ${{ steps.payload.outputs.version }}
          NAME: ${{ steps.payload.outputs.name }}
        run: |
          git commit -m "docs(studio): update documentation for release $VERSION
          - Updated release notes for $VERSION
          - Updated version mapping with ABP ${{ env.ABP_VERSION }}
          Release: $NAME"
          git push -f origin "$BRANCH"

      # -------------------------------------------------
      # Create or update PR
      # -------------------------------------------------
      - name: Create or update PR
        if: steps.changes.outputs.has_changes == 'true'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          VERSION: ${{ steps.payload.outputs.version }}
          NAME: ${{ steps.payload.outputs.name }}
          URL: ${{ steps.payload.outputs.url }}
          TARGET_BRANCH: ${{ steps.payload.outputs.target_branch }}
        run: |
          # Check for existing PR
          EXISTING_PR=$(gh pr list \
            --head "$BRANCH" \
            --base "$TARGET_BRANCH" \
            --json number \
            --jq '.[0].number' 2>/dev/null || echo "")
          PR_BODY="Automated documentation update for ABP Studio release **$VERSION**.
          ## Release Information
          - **Version**: $VERSION
          - **Name**: $NAME
          - **Release**: [View on GitHub]($URL)
          - **ABP Framework Version**: ${{ env.ABP_VERSION }}
          ## Changes
          - ✅ Updated [release-notes.md](docs/en/studio/release-notes.md)
          - ✅ Updated [version-mapping.md](docs/en/studio/version-mapping.md)
          ---
          *This PR was automatically generated by the [update-studio-docs workflow](.github/workflows/update-studio-docs.yml)*"
          if [ -n "$EXISTING_PR" ]; then
            echo "🔄 Updating existing PR #$EXISTING_PR"
            gh pr edit "$EXISTING_PR" \
              --title "docs(studio): release $VERSION - $NAME" \
              --body "$PR_BODY"
            echo "PR_NUMBER=$EXISTING_PR" >> $GITHUB_ENV
          else
            echo "📝 Creating new PR"
            sleep 2  # Wait for GitHub to sync
            PR_URL=$(gh pr create \
              --title "docs(studio): release $VERSION - $NAME" \
              --body "$PR_BODY" \
              --base "$TARGET_BRANCH" \
              --head "$BRANCH")
            PR_NUMBER=$(echo "$PR_URL" | grep -oE '[0-9]+$')
            echo "PR_NUMBER=$PR_NUMBER" >> $GITHUB_ENV
            echo "✅ Created PR #$PR_NUMBER: $PR_URL"
          fi

      # -------------------------------------------------
      # Enable auto-merge (safe with branch protection)
      # -------------------------------------------------
      - name: Enable auto-merge
        if: steps.changes.outputs.has_changes == 'true'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        continue-on-error: true
        run: |
          echo "🔄 Attempting to enable auto-merge for PR #$PR_NUMBER"
          gh pr merge "$PR_NUMBER" \
            --auto \
            --squash \
            --delete-branch || {
              echo "⚠️ Auto-merge not available (branch protection or permissions)"
              echo "   PR #$PR_NUMBER is ready for manual review"
            }

      # -------------------------------------------------
      # Summary
      # -------------------------------------------------
      - name: Workflow summary
        if: always()
        env:
          VERSION: ${{ steps.payload.outputs.version }}
        run: |
          echo "## 📚 ABP Studio Docs Update Summary" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "**Version**: $VERSION" >> $GITHUB_STEP_SUMMARY
          echo "**Release**: ${{ steps.payload.outputs.name }}" >> $GITHUB_STEP_SUMMARY
          echo "**Target Branch**: ${{ steps.payload.outputs.target_branch }}" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          if [ "${{ steps.changes.outputs.has_changes }}" = "true" ]; then
            echo "### ✅ Changes Applied" >> $GITHUB_STEP_SUMMARY
            echo "- Release notes updated: ${{ env.VERSION_UPDATED }}" >> $GITHUB_STEP_SUMMARY
            echo "- Version mapping updated: ${{ env.MAPPING_UPDATED }}" >> $GITHUB_STEP_SUMMARY
            echo "- ABP Framework version: ${{ env.ABP_VERSION }}" >> $GITHUB_STEP_SUMMARY
            echo "- PR: #${{ env.PR_NUMBER }}" >> $GITHUB_STEP_SUMMARY
          else
            echo "### ⚠️ No Changes" >> $GITHUB_STEP_SUMMARY
            echo "Version $VERSION already exists in documentation." >> $GITHUB_STEP_SUMMARY
          fi
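The range-expansion heuristic in the inline Python above reduces to a three-part check. A sketch with illustrative version numbers, assuming the packaging library is available on the runner (the inline script imports it):

from packaging.version import Version

start, end = Version("2.1.5"), Version("2.1.9")  # existing mapped row, e.g. "2.1.5 - 2.1.9"
studio = Version("2.1.10")                       # newly released Studio version

# Extend only within the same major.minor and for small patch gaps (at most 5).
extend = (start.major == studio.major
          and start.minor == studio.minor
          and studio.micro <= end.micro + 5)
print(extend)  # True, so the row becomes "2.1.5 - 2.1.10"

Anything outside that window falls through to a brand-new row inserted at the top of the table.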

2
abp_io/AbpIoLocalization/AbpIoLocalization/Admin/Localization/Resources/de.json

@@ -261,7 +261,7 @@
   "Enum:EntityChangeType:0": "Erstellt",
   "Enum:EntityChangeType:1": "Aktualisiert",
   "Enum:EntityChangeType:2": "Gelöscht",
-  "TenantId": "Mieter-ID",
+  "TenantId": "Mandanten-ID",
   "ChangeTime": "Zeit ändern",
   "EntityTypeFullName": "Vollständiger Name des Entitätstyps",
   "AuditLogsFor{0}Organization": "Audit-Logs für die Organisation \"{0}\"",

2
abp_io/AbpIoLocalization/AbpIoLocalization/Commercial/Localization/Resources/de.json

@@ -162,7 +162,7 @@
   "WhatIsTheABPCommercial": "Was ist der ABP-Werbespot?",
   "WhatAreDifferencesThanAbpFramework": "Was sind die Unterschiede zwischen dem Open Source ABP Framework und dem ABP Commercial?",
   "ABPCommercialExplanation": "ABP Commercial ist eine Reihe von Premium-Modulen, Tools, Themen und Diensten, die auf dem Open-Source-<a target=\"_blank\" href=\"{0}\">ABP-Framework</a> aufbauen. ABP Commercial wird von demselben Team entwickelt und unterstützt, das hinter dem ABP-Framework steht.",
-  "WhatAreDifferencesThanABPFrameworkExplanation": "<p> <a target=\"_blank\" href=\"{0}\">ABP-Framework</a> ist ein modulares, thematisches, Microservice-kompatibles Anwendungsentwicklungsframework für ASP.NET Core. Es bietet eine vollständige Architektur und eine starke Infrastruktur, damit Sie sich auf Ihren eigenen Geschäftscode konzentrieren können, anstatt sich für jedes neue Projekt zu wiederholen. Es basiert auf Best Practices für die Softwareentwicklung und beliebten Tools, die Sie bereits kennen. </p> <p> Das ABP-Framework ist völlig kostenlos, Open Source und wird von der Community betrieben. Es bietet auch ein kostenloses Thema und einige vorgefertigte Module (z. B. Identitätsmanagement und Mieterverwaltung).</p>",
+  "WhatAreDifferencesThanABPFrameworkExplanation": "<p> <a target=\"_blank\" href=\"{0}\">ABP-Framework</a> ist ein modulares, thematisches, Microservice-kompatibles Anwendungsentwicklungsframework für ASP.NET Core. Es bietet eine vollständige Architektur und eine starke Infrastruktur, damit Sie sich auf Ihren eigenen Geschäftscode konzentrieren können, anstatt sich für jedes neue Projekt zu wiederholen. Es basiert auf Best Practices für die Softwareentwicklung und beliebten Tools, die Sie bereits kennen. </p> <p> Das ABP-Framework ist völlig kostenlos, Open Source und wird von der Community betrieben. Es bietet auch ein kostenloses Thema und einige vorgefertigte Module (z. B. Identitätsmanagement und Mandanten-Verwaltung).</p>",
   "VisitTheFrameworkVSCommercialDocument": "Besuchen Sie den folgenden Link für weitere Informationen <a href=\"{0}\" target=\"_blank\"> {1} </a>",
   "ABPCommercialFollowingBenefits": "ABP Commercial fügt dem ABP-Framework die folgenden Vorteile hinzu;",
   "Professional": "Fachmann",

2
abp_io/AbpIoLocalization/AbpIoLocalization/Www/Localization/Resources/de.json

@@ -332,7 +332,7 @@
   "ConnectionResolver": "Verbindungslöser",
   "TenantBasedDataFilter": "Mandantenbasierter Datenfilter",
   "ApplicationCode": "Anwendungscode",
-  "TenantResolution": "Mieterbeschluss",
+  "TenantResolution": "Mandanten-Ermittlung",
   "TenantUser": "Mandant {0} Benutzer",
   "CardTitle": "Kartentitel",
   "View": "Sicht",

6
ai-rules/common/application-layer.mdc

@@ -1,6 +1,10 @@
 ---
 description: "ABP Application Services, DTOs, validation, and error handling patterns"
-globs: "**/*.Application/**/*.cs,**/Application/**/*.cs,**/*AppService*.cs,**/*Dto*.cs"
+globs:
+  - "**/*.Application/**/*.cs"
+  - "**/Application/**/*.cs"
+  - "**/*AppService*.cs"
+  - "**/*Dto*.cs"
 alwaysApply: false
 ---

5
ai-rules/common/authorization.mdc

@@ -1,6 +1,9 @@
 ---
 description: "ABP permission system and authorization patterns"
-globs: "**/*Permission*.cs,**/*AppService*.cs,**/*Controller*.cs"
+globs:
+  - "**/*Permission*.cs"
+  - "**/*AppService*.cs"
+  - "**/*Controller*.cs"
 alwaysApply: false
 ---

4
ai-rules/common/cli-commands.mdc

@ -1,6 +1,8 @@
---
description: "ABP CLI commands: generate-proxy, install-libs, add-package-ref, new-module, install-module, update, clean, suite generate (CRUD pages)"
globs: "**/*.csproj,**/appsettings*.json"
globs:
- "**/*.csproj"
- "**/appsettings*.json"
alwaysApply: false
---

5
ai-rules/common/ddd-patterns.mdc

@ -1,6 +1,9 @@
---
description: "ABP DDD patterns - Entities, Aggregate Roots, Repositories, Domain Services"
globs: "**/*.Domain/**/*.cs,**/Domain/**/*.cs,**/Entities/**/*.cs"
globs:
- "**/*.Domain/**/*.cs"
- "**/Domain/**/*.cs"
- "**/Entities/**/*.cs"
alwaysApply: false
---

4
ai-rules/common/dependency-rules.mdc

@ -1,6 +1,8 @@
---
description: "ABP layer dependency rules and project structure guardrails"
globs: "**/*.csproj,**/*Module*.cs"
globs:
- "**/*.csproj"
- "**/*Module*.cs"
alwaysApply: false
---

10
ai-rules/common/development-flow.mdc

@ -1,6 +1,14 @@
---
description: "ABP development workflow - adding features, entities, and migrations"
globs: "**/*AppService*.cs,**/*Application*/**/*.cs,**/*Application.Contracts*/**/*.cs,**/*Dto*.cs,**/*DbContext*.cs,**/*.EntityFrameworkCore/**/*.cs,**/*.MongoDB/**/*.cs,**/*Permission*.cs"
globs:
- "**/*AppService*.cs"
- "**/*Application*/**/*.cs"
- "**/*Application.Contracts*/**/*.cs"
- "**/*Dto*.cs"
- "**/*DbContext*.cs"
- "**/*.EntityFrameworkCore/**/*.cs"
- "**/*.MongoDB/**/*.cs"
- "**/*Permission*.cs"
alwaysApply: false
---

7
ai-rules/common/infrastructure.mdc

@ -1,6 +1,11 @@
---
description: "ABP infrastructure services - Settings, Features, Caching, Events, Background Jobs"
globs: "**/*Setting*.cs,**/*Feature*.cs,**/*Cache*.cs,**/*Event*.cs,**/*Job*.cs"
globs:
- "**/*Setting*.cs"
- "**/*Feature*.cs"
- "**/*Cache*.cs"
- "**/*Event*.cs"
- "**/*Job*.cs"
alwaysApply: false
---

5
ai-rules/common/multi-tenancy.mdc

@ -1,6 +1,9 @@
---
description: "ABP Multi-Tenancy patterns - tenant-aware entities, data isolation, and tenant switching"
globs: "**/*Tenant*.cs,**/*MultiTenant*.cs,**/Entities/**/*.cs"
globs:
- "**/*Tenant*.cs"
- "**/*MultiTenant*.cs"
- "**/Entities/**/*.cs"
alwaysApply: false
---

5
ai-rules/data/ef-core.mdc

@ -1,6 +1,9 @@
---
description: "ABP Entity Framework Core patterns - DbContext, migrations, repositories"
globs: "**/*.EntityFrameworkCore/**/*.cs,**/EntityFrameworkCore/**/*.cs,**/*DbContext*.cs"
globs:
- "**/*.EntityFrameworkCore/**/*.cs"
- "**/EntityFrameworkCore/**/*.cs"
- "**/*DbContext*.cs"
alwaysApply: false
---

5
ai-rules/data/mongodb.mdc

@ -1,6 +1,9 @@
---
description: "ABP MongoDB patterns - MongoDbContext and repositories"
globs: "**/*.MongoDB/**/*.cs,**/MongoDB/**/*.cs,**/*MongoDb*.cs"
globs:
- "**/*.MongoDB/**/*.cs"
- "**/MongoDB/**/*.cs"
- "**/*MongoDb*.cs"
alwaysApply: false
---

6
ai-rules/template-specific/app-nolayers.mdc

@ -1,6 +1,10 @@
---
description: "ABP Single-Layer (No-Layers) application template specific patterns"
globs: "**/src/*/*Module.cs,**/src/*/Entities/**/*.cs,**/src/*/Services/**/*.cs,**/src/*/Data/**/*.cs"
globs:
- "**/src/*/*Module.cs"
- "**/src/*/Entities/**/*.cs"
- "**/src/*/Services/**/*.cs"
- "**/src/*/Data/**/*.cs"
alwaysApply: false
---

6
ai-rules/testing/patterns.mdc

@ -1,6 +1,10 @@
---
description: "ABP testing patterns - unit tests and integration tests"
globs: "test/**/*.cs,tests/**/*.cs,**/*Tests*/**/*.cs,**/*Test*.cs"
globs:
- "test/**/*.cs"
- "tests/**/*.cs"
- "**/*Tests*/**/*.cs"
- "**/*Test*.cs"
alwaysApply: false
---

5
ai-rules/ui/angular.mdc

@ -1,6 +1,9 @@
---
description: "ABP Angular UI patterns and best practices"
globs: "**/angular/**/*.ts,**/angular/**/*.html,**/*.component.ts"
globs:
- "**/angular/**/*.ts"
- "**/angular/**/*.html"
- "**/*.component.ts"
alwaysApply: false
---

5
ai-rules/ui/blazor.mdc

@ -1,6 +1,9 @@
---
description: "ABP Blazor UI patterns and components"
globs: "**/*.razor,**/Blazor/**/*.cs,**/*.Blazor*/**/*.cs"
globs:
- "**/*.razor"
- "**/Blazor/**/*.cs"
- "**/*.Blazor*/**/*.cs"
alwaysApply: false
---

6
ai-rules/ui/mvc.mdc

@ -1,6 +1,10 @@
---
description: "ABP MVC and Razor Pages UI patterns"
globs: "**/*.cshtml,**/Pages/**/*.cs,**/Views/**/*.cs,**/Controllers/**/*.cs"
globs:
- "**/*.cshtml"
- "**/Pages/**/*.cs"
- "**/Views/**/*.cs"
- "**/Controllers/**/*.cs"
alwaysApply: false
---

1
delete-bin-obj.ps1

@ -10,4 +10,3 @@ Get-ChildItem -Path . -Include bin,obj -Recurse -Directory | ForEach-Object {
}
Write-Host "BIN and OBJ folders have been successfully deleted." -ForegroundColor Green

BIN  docs/en/Blog-Posts/2026-01-08 v10_1_Preview/dotnet-conf-china-2025.png (binary file not shown; 194 KiB → 160 KiB)
BIN  docs/en/Blog-Posts/2026-01-08 v10_1_Preview/my-passkey.png (binary file not shown; 21 KiB → 19 KiB)
BIN  docs/en/Blog-Posts/2026-01-08 v10_1_Preview/passkey-login.png (binary file not shown; 18 KiB → 15 KiB)
BIN  docs/en/Blog-Posts/2026-01-08 v10_1_Preview/passkey-registration.png (binary file not shown; 16 KiB → 15 KiB)
BIN  docs/en/Blog-Posts/2026-01-08 v10_1_Preview/passkey-setting.png (binary file not shown; 23 KiB → 18 KiB)
BIN  docs/en/Blog-Posts/2026-01-08 v10_1_Preview/password-history-settings.png (binary file not shown; 15 KiB → 13 KiB)
BIN  docs/en/Blog-Posts/2026-01-08 v10_1_Preview/password-history-warning.png (binary file not shown; 24 KiB → 21 KiB)
BIN  docs/en/Blog-Posts/2026-01-08 v10_1_Preview/reset-password-error-modal.png (binary file not shown; 24 KiB → 21 KiB)
BIN  docs/en/Blog-Posts/2026-01-08 v10_1_Preview/set-password-error-modal.png (binary file not shown; 7.6 KiB → 6.7 KiB)
BIN  docs/en/Blog-Posts/2026-01-08 v10_1_Preview/studio-switch-to-preview.png (binary file not shown; 22 KiB → 20 KiB)

377
docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/articles.md

@ -0,0 +1,377 @@
# Building a Multi-Agent AI System with A2A, MCP, and ADK in .NET
> How we combined three open AI protocols — Google's A2A & ADK with Anthropic's MCP — to build a production-ready Multi-Agent Research Assistant using .NET 10.
---
## Introduction
The AI space is constantly changing and improving. We've now moved past single LLM calls and into the era of **Multi-Agent Systems**, in which specialist AI agents work together as a collaborative team.
But here is the problem: **How do you make agents communicate with each other? How do you equip agents with tools? How do you control them?**
Three open protocols have emerged for answering these questions:
- **MCP (Model Context Protocol)** by Anthropic — The "USB-C for AI"
- **A2A (Agent-to-Agent Protocol)** by Google — The "phone line between agents"
- **ADK (Agent Development Kit)** by Google — The "organizational chart for agents"
In this article, I will briefly describe each protocol, highlight the benefits of combining them, and walk you through our own project: a **Multi-Agent Research Assistant** developed with the ABP Framework.
---
## The Problem: Why Single-Agent Isn't Enough
Imagine you ask an AI: *"Research the latest AI agent frameworks and give me a comprehensive analysis report."*
A single LLM call would:
- Hallucinate search results (can't actually browse the web)
- Produce a shallow analysis (no structured research pipeline)
- Lose context between steps (no state management)
- Be unable to save results anywhere (no tool access)
What you actually need is a **team of specialists**:
1. A **Researcher** who searches the web and gathers raw data
2. An **Analyst** who processes that data into a structured report
3. **Tools** that let agents interact with the real world (web, database, filesystem)
4. An **Orchestrator** that coordinates everything
This is exactly what we built.
!["single-vs-multiagent system"](images/image.png)
---
## Protocol #1: MCP — Giving Agents Superpowers
### What is MCP?
**MCP (Model Context Protocol)** is Anthropic's standardized protocol for connecting AI models to external tools and data sources. Think of MCP as **the USB-C of AI** – one port compatible with everything.
Before MCP, if you wanted your LLM to search the web, query a database, or store files, you had to write custom integration code for each capability. MCP lets you define your tools once, and any MCP-compatible agent can use them.
!["mcp"](images/image-1.png)
### How MCP Works
MCP follows a simple **Client-Server architecture**:
![mcp client server](images/mcp-client-server-1200x700.png)
The flow is straightforward:
1. **Discovery**: The agent asks "What tools do you have?" (`tools/list`)
2. **Invocation**: The agent calls a specific tool (`tools/call`)
3. **Result**: The tool returns data back to the agent
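To make this concrete, here is a minimal sketch of what an MCP tool server can look like with the preview `ModelContextProtocol` C# SDK. The attribute-based registration (`[McpServerToolType]`, `[McpServerTool]`) follows the SDK's quickstart; the stdio transport choice and the `WebSearch` body are illustrative placeholders, not our production setup:
```csharp
using System.ComponentModel;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using ModelContextProtocol.Server;

var builder = Host.CreateApplicationBuilder(args);

// Register an MCP server and expose every [McpServerTool] in this assembly.
builder.Services
    .AddMcpServer()
    .WithStdioServerTransport()
    .WithToolsFromAssembly();

await builder.Build().RunAsync();

[McpServerToolType]
public static class ResearchTools
{
    // Agents discover this tool via tools/list and invoke it via tools/call.
    [McpServerTool, Description("Searches the web on a given topic and returns raw results.")]
    public static string WebSearch(string query)
    {
        // Placeholder body: in the real project this would call the Tavily API.
        return $"(search results for '{query}')";
    }
}
```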
### MCP in Our Project
We built MCP tool servers exposing five tools:
| MCP Tool | Purpose | Used By |
|----------|---------|---------|
| `web_search` | Searches the web via Tavily API | Researcher Agent |
| `fetch_url_content` | Fetches content from a URL | Researcher Agent |
| `save_research_to_file` | Saves reports to the filesystem | Analysis Agent |
| `save_research_to_database` | Persists results in SQL Server | Analysis Agent |
| `search_past_research` | Queries historical research | Analysis Agent |
The beauty of MCP is that agents do not need to know how these tools are implemented. They simply discover them by name and description and call them.
---
## Protocol #2: A2A — Making Agents Talk to Each Other
### What is A2A?
**A2A (Agent-to-Agent)**, originally proposed by Google and now hosted under the Linux Foundation, is a protocol that allows **one AI agent to discover another and exchange tasks**. Where MCP helps agents acquire tools, A2A gives them the ability to speak to each other.
Think of it this way:
- **MCP** = "What can this agent *do*?" (capabilities)
- **A2A** = "How do agents *talk*?" (communication)
### The Agent Card: Your Agent's Business Card
Every A2A-compatible agent publishes an **Agent Card** — a JSON document that describes who it is and what it can do. It's like a business card for AI agents:
```json
{
"name": "Researcher Agent",
"description": "Searches the web to collect comprehensive research data",
"url": "https://localhost:44331/a2a/researcher",
"version": "1.0.0",
"capabilities": {
"streaming": false,
"pushNotifications": false
},
"skills": [
{
"id": "web-research",
"name": "Web Research",
"description": "Searches the web on a given topic and collects raw data",
"tags": ["research", "web-search", "data-collection"]
}
]
}
```
Other agents can discover this card at `/.well-known/agent.json` and immediately know:
- What this agent does
- Where to reach it
- What skills it has
![What is A2A?](images/image-2.png)
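Discovery is then just an HTTP GET plus JSON parsing. Below is a minimal, hypothetical helper; the `AgentCard` record mirrors only a few fields of the real card, and the A2A .NET SDK provides its own richer types:
```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Simplified card shape for illustration; the actual A2A card has more fields.
public sealed record AgentCard(string Name, string Description, string Url, string Version);

public static class AgentDiscovery
{
    private static readonly HttpClient Http = new();

    public static Task<AgentCard?> DiscoverAsync(string agentBaseUrl)
    {
        // A2A agents publish their card at this well-known path.
        return Http.GetFromJsonAsync<AgentCard>(
            $"{agentBaseUrl.TrimEnd('/')}/.well-known/agent.json");
    }
}
```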
### How A2A Task Exchange Works
Once an agent discovers another agent, it can send tasks:
![orchestrator](images/orchestrator-researcher-seq-1200x700.png)
The key concepts:
- **Task**: A unit of work sent between agents (like an email with instructions)
- **Artifact**: The output produced by an agent (like an attachment in the reply)
- **Task State**: `Submitted → Working → Completed/Failed`
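To give a feel for the shapes involved, here is an illustrative simplification of these concepts in C#. These are not the `A2A` SDK's actual types; the SDK defines its own task, artifact, and state classes:
```csharp
using System;
using System.Collections.Generic;

// Illustrative only: the A2A .NET SDK ships its own task/artifact types.
public enum AgentTaskState { Submitted, Working, InputRequired, Completed, Failed }

public sealed record AgentArtifact(string Name, string Content);

public sealed record AgentTask(
    Guid Id,
    string Instruction,                      // the "email with instructions"
    AgentTaskState State,
    IReadOnlyList<AgentArtifact> Artifacts); // the "attachments in the reply"
```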
### A2A in Our Project
Agent communication in our system uses A2A:
- The **Orchestrator** finds all agents through the Agent Cards
- It sends a research task to the **Researcher Agent**
- The Researcher’s output (artifacts) is used as input by the **Analysis Agent**
- The Analysis Agent creates the final structured report
---
## Protocol #3: ADK — Organizing Your Agent Team
### What is ADK?
**ADK (Agent Development Kit)**, created by Google, provides patterns for **organizing and orchestrating multiple agents**. It answers the question: "How do you build a team of agents that work together efficiently?"
ADK gives you:
- **BaseAgent**: A foundation every agent inherits from
- **SequentialAgent**: Runs agents one after another (pipeline)
- **ParallelAgent**: Runs agents simultaneously
- **AgentContext**: Shared state that flows through the pipeline
- **AgentEvent**: Control flow signals (escalate, transfer, state updates)
> **Note**: ADK's official SDK is Python-only. We ported the core patterns to .NET for our project.
### The Pipeline Pattern
The most powerful ADK pattern is the **Sequential Pipeline**. Think of it as an assembly line in a factory:
![agent state flow](images/agent-state-flow.png)
Each agent:
1. Receives the shared **AgentContext** (with state from previous agents)
2. Does its work
3. Updates the state
4. Passes it to the next agent
### AgentContext: The Shared Memory
`AgentContext` is like a shared whiteboard that all agents can read from and write to:
![agent context](images/agent-context.png)
This pattern eliminates the need for complex inter-agent messaging — agents simply read and write to a shared context.
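Since ADK's official SDK is Python-only, our .NET port is hand-rolled. The sketch below shows the general shape of the pattern, simplified from our code; the names follow ADK conventions but this is not an official API:
```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Shared "whiteboard": each agent reads prior results and writes its own.
public class AgentContext
{
    public Dictionary<string, object> State { get; } = new();
    public List<string> Events { get; } = new();
}

public abstract class BaseAgent
{
    public abstract string Name { get; }
    public abstract Task RunAsync(AgentContext context);
}

public class SequentialAgent : BaseAgent
{
    private readonly IReadOnlyList<BaseAgent> _subAgents;

    public SequentialAgent(params BaseAgent[] subAgents) => _subAgents = subAgents;

    public override string Name => "SequentialAgent";

    public override async Task RunAsync(AgentContext context)
    {
        // The pipeline: every sub-agent receives the same context, so state
        // written by the Researcher is visible to the Analyst.
        foreach (var agent in _subAgents)
        {
            context.Events.Add($"{agent.Name} started");
            await agent.RunAsync(context);
            context.Events.Add($"{agent.Name} completed");
        }
    }
}
```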
### ADK Orchestration Patterns
ADK supports multiple orchestration patterns:
| Pattern | Description | Use Case |
|---------|-------------|----------|
| **Sequential** | A → B → C | Research → Analysis pipeline |
| **Parallel** | A, B, C simultaneously | Multiple searches at once |
| **Fan-Out/Fan-In** | Split → Process → Merge | Distributed research |
| **Conditional Routing** | If/else agent selection | Route by query type |
---
## How the Three Protocols Work Together
Here's the key insight: **MCP, A2A, and ADK are not competitors — they're complementary layers of a complete agent system.**
![agent ecosystem](images/agent-ecosystem.png)
Each protocol handles a different concern:
| Layer | Protocol | Question It Answers |
|-------|----------|-------------------|
| **Top** | ADK | "How are agents organized?" |
| **Middle** | A2A | "How do agents communicate?" |
| **Bottom** | MCP | "What tools can agents use?" |
---
## Our Project: Multi-Agent Research Assistant
### Built With
- **.NET 10.0** — Latest runtime
- **ABP Framework 10.0.2** — Enterprise .NET application framework
- **Semantic Kernel 1.70.0** — Microsoft's AI orchestration SDK
- **Azure OpenAI (GPT)** — LLM backbone
- **Tavily Search API** — Real-time web search
- **SQL Server** — Research persistence
- **MCP SDK** (`ModelContextProtocol` 0.8.0-preview.1)
- **A2A SDK** (`A2A` 0.3.3-preview)
### How It Works (Step by Step)
**Step 1: User Submits a Query**
For example, the user enters a research topic in the dashboard: *“Compare the latest AI agent frameworks: LangChain, Semantic Kernel, and AutoGen”*, and then selects the execution mode: ADK-Sequential or A2A.
**Step 2: Orchestrator Activates**
The `ResearchOrchestrator` receives the query and constructs the `AgentContext`. In ADK mode, it constructs a `SequentialAgent` with two sub-agents; in A2A mode, it uses the `A2AServer` to send the tasks.
**Step 3: Researcher Agent Goes to Work**
The Researcher Agent:
- Receives the query from the context
- Uses GPT to formulate optimal search queries
- Calls the `web_search` MCP tool (powered by Tavily API)
- Collects and synthesizes raw research data
- Stores results in the shared `AgentContext`
**Step 4: Analysis Agent Takes Over**
The Analysis Agent:
- Reads the Researcher's raw data from `AgentContext`
- Uses GPT to perform deep analysis
- Generates a structured Markdown report with sections:
- Executive Summary
- Key Findings
- Detailed Analysis
- Comparative Assessment
- Conclusion and Recommendations
- Calls MCP tools to save the report to both filesystem and database
**Step 5: Results Returned**
The orchestrator collects all results and returns them to the user via the REST API. The dashboard displays the research report, analysis report, agent event timeline, and raw data.
### Two Execution Modes
Our system supports two execution modes, demonstrating both ADK and A2A approaches:
#### Mode 1: ADK Sequential Pipeline
Agents are organized as a `SequentialAgent`. State flows automatically through the pipeline via `AgentContext`. This is an in-process approach — fast and simple.
![sequential agent context flow](images/sequential-agent-context-flow-1200x700.png)
#### Mode 2: A2A Protocol-Based
Agents communicate via the A2A protocol. The Orchestrator sends `AgentTask` objects to each agent through the `A2AServer`. Each agent has its own `AgentCard` for discovery.
![orchestrator a2a routing](images/orchestrator-a2a-routing-1200x700.png)
### The Dashboard
The UI provides a complete research experience:
- **Hero Section** with system description and protocol badges
- **Architecture Cards** showing all four components (Researcher, Analyst, MCP Tools, Orchestrator)
- **Research Form** with query input and mode selection
- **Live Pipeline Status** tracking each stage of execution
- **Tabbed Results** view: Research Report, Analysis Report, Raw Data, Agent Events
- **Research History** table with past queries and their results
![Dashboard 1](images/image-3.png)
![Dashboard 2](images/image-4.png)
---
## Why ABP Framework?
We chose ABP Framework as our .NET application foundation. Here's why it was a natural fit:
| ABP Feature | How We Used It |
|-------------|---------------|
| **Auto API Controllers** | `ResearchAppService` automatically becomes REST API endpoints |
| **Dependency Injection** | Clean registration of agents, tools, orchestrator, Semantic Kernel |
| **Repository Pattern** | `IRepository<ResearchRecord>` for database operations in MCP tools |
| **Module System** | All agent ecosystem config encapsulated in `AgentEcosystemModule` |
| **Entity Framework Core** | Research record persistence with code-first migrations |
| **Built-in Auth** | OpenIddict integration for securing agent endpoints |
| **Health Checks** | Monitoring agent ecosystem health |
ABP's single-layer template gave us a solid .NET foundation: all the enterprise features without unnecessary complexity for a focused AI project. Of course, the agent architecture (MCP, A2A, ADK) is framework-agnostic and can be implemented in any .NET application.
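As an example of the first row, an ABP application service becomes a REST endpoint without any controller code. The sketch below is illustrative: the method name, DTOs, and `IResearchOrchestrator` interface are assumptions standing in for the real project types:
```csharp
using System.Threading.Tasks;
using Volo.Abp.Application.Services;

// Hypothetical contracts standing in for the project's real types.
public record StartResearchDto(string Query, string Mode);
public record ResearchResultDto(string ResearchReport, string AnalysisReport);

public interface IResearchOrchestrator
{
    Task<ResearchResultDto> RunAsync(string query, string mode);
}

public class ResearchAppService : ApplicationService
{
    private readonly IResearchOrchestrator _orchestrator;

    public ResearchAppService(IResearchOrchestrator orchestrator)
    {
        _orchestrator = orchestrator;
    }

    // By ABP convention this is exposed as something like
    // POST /api/app/research/start, with no controller class needed.
    public Task<ResearchResultDto> StartAsync(StartResearchDto input)
    {
        return _orchestrator.RunAsync(input.Query, input.Mode);
    }
}
```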
---
## Key Takeaways
### 1. Protocols Are Complementary, Not Competing
MCP, A2A, and ADK solve different problems. Using them together creates a complete agent system:
- **MCP**: Standardize tool access
- **A2A**: Standardize inter-agent communication
- **ADK**: Standardize agent orchestration
### 2. Start Simple, Scale Later
Our current setup runs everything in a single process (in-process A2A). Designing around A2A means each agent can later be extracted into its own microservice without changing the core logic.
### 3. Shared State > Message Passing (For Simple Cases)
ADK's `AgentContext` with shared state is simpler and faster than A2A message passing for in-process scenarios. Use A2A when agents need to run as separate services.
### 4. MCP is the Real Game-Changer
The ability to define tools once and have any agent use them — with automatic discovery and structured invocations — eliminates enormous amounts of boilerplate code.
### 5. LLM Abstraction is Critical
Using Semantic Kernel's `IChatCompletionService` lets you swap between Azure OpenAI, OpenAI, Ollama, or any provider without touching agent code.
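With Semantic Kernel, the provider is chosen once at kernel construction, and agent code only ever sees `IChatCompletionService`. A minimal sketch; the endpoint, key, and deployment name are placeholders:
```csharp
using System;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var builder = Kernel.CreateBuilder();

// Swap this single registration to change providers; agent code is untouched.
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "gpt-4o",                           // placeholder
    endpoint: "https://my-resource.openai.azure.com",   // placeholder
    apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!);

var kernel = builder.Build();
var chat = kernel.GetRequiredService<IChatCompletionService>();

var history = new ChatHistory();
history.AddUserMessage("Summarize the research findings.");
var reply = await chat.GetChatMessageContentAsync(history);
Console.WriteLine(reply.Content);
```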
---
## What's Next?
This project demonstrates the foundation of a multi-agent system. Future enhancements could include:
- **Streaming responses** — Real-time updates as agents work (A2A supports this)
- **More specialized agents** — Code analysis, translation, fact-checking agents
- **Distributed deployment** — Each agent as a separate microservice with HTTP-based A2A
- **Agent marketplace** — Discover and integrate third-party agents via A2A Agent Cards
- **Human-in-the-loop** — Using A2A's `InputRequired` state for human approval steps
- **RAG integration** — MCP tools for vector database search
---
## Resources
| Resource | Link |
|----------|------|
| **MCP Specification** | [modelcontextprotocol.io](https://modelcontextprotocol.io) |
| **A2A Specification** | [google.github.io/A2A](https://google.github.io/A2A) |
| **ADK Documentation** | [google.github.io/adk-docs](https://google.github.io/adk-docs) |
| **ABP Framework** | [abp.io](https://abp.io) |
| **Semantic Kernel** | [github.com/microsoft/semantic-kernel](https://github.com/microsoft/semantic-kernel) |
| **MCP .NET SDK** | [NuGet: ModelContextProtocol](https://www.nuget.org/packages/ModelContextProtocol) |
| **A2A .NET SDK** | [NuGet: A2A](https://www.nuget.org/packages/A2A) |
| **Our Source Code** | [GitHub Repository](https://github.com/fahrigedik/agent-ecosystem-in-abp) |
---
## Conclusion
Developing a multi-agent AI system is no longer a futuristic dream; it’s something you can build today with open protocols and available frameworks. By using **MCP** for tool access, **A2A** for inter-agent communication, and **ADK** for orchestration, we built a working Research Assistant.
ABP Framework and .NET turned out to be an excellent choice, delivering the infrastructure we needed (DI, repositories, auto APIs, modules) so we could focus entirely on the AI agent architecture.
The era of single LLM calls is ending, and the era of agent ecosystems begins now.
---

BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/agent-context.png (binary file not shown; added, 48 KiB)
BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/agent-ecosystem.png (binary file not shown; added, 52 KiB)
BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/agent-state-flow.png (binary file not shown; added, 22 KiB)
BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image-1.png (binary file not shown; added, 86 KiB)
BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image-2.png (binary file not shown; added, 126 KiB)
BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image-3.png (binary file not shown; added, 54 KiB)
BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image-4.png (binary file not shown; added, 61 KiB)
BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image.png (binary file not shown; added, 169 KiB)
BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/mcp-client-server-1200x700.png (binary file not shown; added, 16 KiB)
BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/orchestrator-a2a-routing-1200x700.png (binary file not shown; added, 17 KiB)
BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/orchestrator-researcher-seq-1200x700.png (binary file not shown; added, 14 KiB)
BIN  docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/sequential-agent-context-flow-1200x700.png (binary file not shown; added, 17 KiB)
BIN  docs/en/Community-Articles/2025-12-18-Implementing-Multiple-Global-Query-Filters-With-Entity-Framework-Core/images/cover.png (binary file not shown; added, 358 KiB)

728
docs/en/Community-Articles/2025-12-18-Implementing-Multiple-Global-Query-Filters-With-Entity-Framework-Core/post.md

@ -0,0 +1,728 @@
# Implementing Multiple Global Query Filters with Entity Framework Core
Global query filters are one of Entity Framework Core's most powerful features for automatically filtering data based on certain conditions. They allow you to define filter criteria at the entity level that are automatically applied to all LINQ queries, making it impossible for developers to accidentally forget to include important filtering logic. In this article, we'll explore how to implement multiple global query filters in ABP Framework, covering built-in filters, custom filters, and performance optimization techniques.
By the end of this guide, you'll understand how ABP Framework's data filtering system works, how to create custom global query filters for your specific business requirements, how to combine multiple filters effectively, and how to optimize filter performance using user-defined functions.
## Understanding Global Query Filters in EF Core
Global query filters were introduced in EF Core 2.0 and allow you to automatically append LINQ predicates to queries generated for an entity type. This is particularly useful for scenarios like multi-tenancy, soft delete, data isolation, and row-level security.
In traditional applications, developers must remember to add filter conditions manually to every query:
```csharp
// Manual filtering - error-prone and tedious
var activeBooks = await _bookRepository
.GetListAsync(b => b.IsDeleted == false && b.TenantId == currentTenantId);
```
With global query filters, this logic is applied automatically:
```csharp
// Filter is applied automatically - no manual filtering needed
var activeBooks = await _bookRepository.GetListAsync();
```
ABP Framework provides a sophisticated data filtering system built on top of EF Core's global query filters, with built-in support for soft delete, multi-tenancy, and the ability to easily create custom filters.
### Important: Plain EF Core vs ABP Composition
In plain EF Core, calling `HasQueryFilter` multiple times for the same entity does **not** create multiple active filters. The last call replaces the previous one (unless you use newer named-filter APIs in recent EF Core versions).
ABP provides `HasAbpQueryFilter` to compose query filters safely. This method combines your custom filter with ABP's built-in filters (such as `ISoftDelete` and `IMultiTenant`) and with other `HasAbpQueryFilter` calls.
## ABP Framework's Data Filtering System
ABP's data filtering system is defined in the `Volo.Abp.Data` namespace and provides a consistent way to manage filters across your application. The core interface is `IDataFilter<TFilter>`, which allows you to enable or disable filters programmatically.
### Built-in Filters
ABP Framework comes with several built-in filters:
1. **ISoftDelete**: Automatically filters out soft-deleted entities
2. **IMultiTenant**: Automatically filters entities by current tenant (for SaaS applications)
3. **IIsActive**: Filters entities based on active status
Let's look at how these are implemented in the ABP framework:
The `ISoftDelete` interface is straightforward:
```csharp
namespace Volo.Abp;
public interface ISoftDelete
{
bool IsDeleted { get; }
}
```
Any entity implementing this interface will automatically have deleted records filtered out of queries.
### Enabling and Disabling Filters
ABP provides the `IDataFilter<TFilter>` service to control filter behavior at runtime:
```csharp
public class BookAppService : ApplicationService
{
private readonly IDataFilter<ISoftDelete> _softDeleteFilter;
private readonly IRepository<Book, Guid> _bookRepository;
public BookAppService(
IDataFilter<ISoftDelete> softDeleteFilter,
IRepository<Book, Guid> bookRepository)
{
_softDeleteFilter = softDeleteFilter;
_bookRepository = bookRepository;
}
public async Task<List<Book>> GetAllBooksIncludingDeletedAsync()
{
// Temporarily disable the soft delete filter
using (_softDeleteFilter.Disable())
{
return await _bookRepository.GetListAsync();
}
}
public async Task<List<Book>> GetActiveBooksAsync()
{
// Filter is enabled by default - soft-deleted items are excluded
return await _bookRepository.GetListAsync();
}
}
```
You can also check if a filter is enabled and enable/disable it programmatically:
```csharp
public async Task ProcessBooksAsync()
{
// Check if filter is enabled
if (_softDeleteFilter.IsEnabled)
{
// Enable or disable explicitly
_softDeleteFilter.Enable();
// or
_softDeleteFilter.Disable();
}
}
```
## Creating Custom Global Query Filters
Now let's create custom global query filters for a real-world scenario. Imagine we have a library management system where we need to filter books based on:
1. **Publication Status**: Only show published books in public areas
2. **User's Department**: Users can only see books from their department
3. **Approval Status**: Only show approved content
### Step 1: Define Filter Interfaces
First, create the filter interfaces. You can define them in the same file as your entity or in separate files:
```csharp
// Can be placed in the same file as Book entity or in separate files
namespace Library;
public interface IPublishable
{
bool IsPublished { get; }
DateTime PublishDate { get; set; }
}
public interface IDepartmentRestricted
{
Guid DepartmentId { get; }
}
public interface IApproveable
{
bool IsApproved { get; }
}
public interface IPublishedFilter
{
}
public interface IApprovedFilter
{
}
```
`IPublishable` / `IApproveable` are implemented by entities and define entity properties.
`IPublishedFilter` / `IApprovedFilter` are filter-state interfaces used with `IDataFilter` so you can enable/disable those filters at runtime.
### Step 2: Add Filter Expressions to DbContext
Now let's add the filter expressions to your existing DbContext. First, here's how to use `HasAbpQueryFilter` to create **always-on** filters (they cannot be toggled at runtime):
```csharp
// MyProjectDbContext.cs
using Microsoft.EntityFrameworkCore;
using Volo.Abp.EntityFrameworkCore;
using Volo.Abp.GlobalFeatures;
using Volo.Abp.MultiTenancy;
using Volo.Abp.Authorization;
using Volo.Abp.Data;
using Volo.Abp.EntityFrameworkCore.Modeling;
namespace Library;
public class LibraryDbContext : AbpDbContext<LibraryDbContext>
{
public DbSet<Book> Books { get; set; }
public DbSet<Department> Departments { get; set; }
public DbSet<Author> Authors { get; set; }
public LibraryDbContext(DbContextOptions<LibraryDbContext> options)
: base(options)
{
}
protected override void OnModelCreating(ModelBuilder builder)
{
base.OnModelCreating(builder);
builder.Entity<Book>(b =>
{
b.ToTable("Books");
b.ConfigureByConvention();
// HasAbpQueryFilter creates ALWAYS-ACTIVE filters
// These cannot be toggled at runtime via IDataFilter
b.HasAbpQueryFilter(book =>
book.IsPublished &&
book.PublishDate <= DateTime.UtcNow);
b.HasAbpQueryFilter(book => book.IsApproved);
});
builder.Entity<Department>(b =>
{
b.ToTable("Departments");
b.ConfigureByConvention();
});
}
}
```
> **Note:** Using `HasAbpQueryFilter` alone creates filters that are always active and cannot be toggled at runtime. This approach is simpler but less flexible. For toggleable filters, see Step 3 below.
### Step 3: Make Filters Toggleable (Optional)
If you need filters that can be enabled/disabled at runtime via `IDataFilter<T>`, override `ShouldFilterEntity` and `CreateFilterExpression` instead of (or in addition to) `HasAbpQueryFilter`:
```csharp
// MyProjectDbContext.cs
using System;
using System.Linq.Expressions;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata;
using Microsoft.EntityFrameworkCore.Metadata.Builders;
using Volo.Abp.EntityFrameworkCore;
namespace Library;
public class LibraryDbContext : AbpDbContext<LibraryDbContext>
{
protected bool IsPublishedFilterEnabled => DataFilter?.IsEnabled<IPublishedFilter>() ?? false;
protected bool IsApprovedFilterEnabled => DataFilter?.IsEnabled<IApprovedFilter>() ?? false;
protected override bool ShouldFilterEntity<TEntity>(IMutableEntityType entityType)
{
if (typeof(IPublishable).IsAssignableFrom(typeof(TEntity)))
{
return true;
}
if (typeof(IApproveable).IsAssignableFrom(typeof(TEntity)))
{
return true;
}
return base.ShouldFilterEntity<TEntity>(entityType);
}
protected override Expression<Func<TEntity, bool>>? CreateFilterExpression<TEntity>(
ModelBuilder modelBuilder,
EntityTypeBuilder<TEntity> entityTypeBuilder)
where TEntity : class
{
var expression = base.CreateFilterExpression<TEntity>(modelBuilder, entityTypeBuilder);
if (typeof(IPublishable).IsAssignableFrom(typeof(TEntity)))
{
Expression<Func<TEntity, bool>> publishFilter = e =>
!IsPublishedFilterEnabled ||
(
EF.Property<bool>(e, nameof(IPublishable.IsPublished)) &&
EF.Property<DateTime>(e, nameof(IPublishable.PublishDate)) <= DateTime.UtcNow
);
expression = expression == null
? publishFilter
: QueryFilterExpressionHelper.CombineExpressions(expression, publishFilter);
}
if (typeof(IApproveable).IsAssignableFrom(typeof(TEntity)))
{
Expression<Func<TEntity, bool>> approvalFilter = e =>
!IsApprovedFilterEnabled || EF.Property<bool>(e, nameof(IApproveable.IsApproved));
expression = expression == null
? approvalFilter
: QueryFilterExpressionHelper.CombineExpressions(expression, approvalFilter);
}
return expression;
}
}
```
This mapping step is what connects `IDataFilter<IPublishedFilter>` and `IDataFilter<IApprovedFilter>` to entity-level predicates. Without this step, `HasAbpQueryFilter` expressions remain always active.
> **Important:** Note that we use `DateTime` (not `DateTime?`) in the filter expression to match the entity property type. Adjust accordingly if your entity uses nullable `DateTime?`.
### Step 4: Disable Custom Filters with IDataFilter
Once custom filters are mapped to the ABP data-filter pipeline, you can disable them just like built-in filters:
```csharp
public class BookAppService : ApplicationService
{
private readonly IRepository<Book, Guid> _bookRepository;
private readonly IDataFilter<IPublishedFilter> _publishedFilter;
private readonly IDataFilter<IApprovedFilter> _approvedFilter;
public BookAppService(
IRepository<Book, Guid> bookRepository,
IDataFilter<IPublishedFilter> publishedFilter,
IDataFilter<IApprovedFilter> approvedFilter)
{
_bookRepository = bookRepository;
_publishedFilter = publishedFilter;
_approvedFilter = approvedFilter;
}
public async Task<List<Book>> GetIncludingUnpublishedAndUnapprovedAsync()
{
using (_publishedFilter.Disable())
using (_approvedFilter.Disable())
{
return await _bookRepository.GetListAsync();
}
}
}
```
## Advanced: Multiple Filters with User-Defined Functions
Starting from ABP v8.3, you can use user-defined function (UDF) mapping for better performance. This approach generates more efficient SQL and allows EF Core to create better execution plans.
### Step 1: Enable UDF Mapping
First, configure your module to use UDF mapping:
```csharp
// MyProjectModule.cs
using Volo.Abp.EntityFrameworkCore;
using Volo.Abp.EntityFrameworkCore.GlobalFilters;
using Microsoft.Extensions.DependencyInjection;
namespace Library;
[DependsOn(
typeof(AbpEntityFrameworkCoreModule),
typeof(AbpDddDomainModule)
)]
public class LibraryModule : AbpModule
{
public override void ConfigureServices(ServiceConfigurationContext context)
{
Configure<AbpEfCoreGlobalFilterOptions>(options =>
{
options.UseDbFunction = true; // Enable UDF mapping
});
}
}
```
### Step 2: Define DbFunctions
Create static methods that EF Core will map to database functions:
```csharp
// LibraryDbFunctions.cs
using System;
using Microsoft.EntityFrameworkCore;
namespace Library;
public static class LibraryDbFunctions
{
public static bool IsPublishedFilter(bool isPublished, DateTime? publishDate)
{
return isPublished && (publishDate == null || publishDate <= DateTime.UtcNow);
}
public static bool IsApprovedFilter(bool isApproved)
{
return isApproved;
}
public static bool DepartmentFilter(Guid entityDepartmentId, Guid userDepartmentId)
{
return entityDepartmentId == userDepartmentId;
}
}
```
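EF Core maps these CLR methods to SQL functions, but the functions themselves must exist in the database. You can create them in a migration. Below is a sketch for SQL Server; the function names match the method names (EF's default mapping), and you should treat the SQL as a starting point rather than a final implementation:
```csharp
using Microsoft.EntityFrameworkCore.Migrations;

public partial class AddFilterFunctions : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // Scalar function backing LibraryDbFunctions.IsPublishedFilter
        migrationBuilder.Sql(@"
            CREATE FUNCTION [dbo].[IsPublishedFilter](@isPublished BIT, @publishDate DATETIME2)
            RETURNS BIT
            AS
            BEGIN
                RETURN CASE
                    WHEN @isPublished = 1 AND (@publishDate IS NULL OR @publishDate <= GETUTCDATE())
                    THEN CAST(1 AS BIT) ELSE CAST(0 AS BIT)
                END
            END");

        // Scalar function backing LibraryDbFunctions.IsApprovedFilter
        migrationBuilder.Sql(@"
            CREATE FUNCTION [dbo].[IsApprovedFilter](@isApproved BIT)
            RETURNS BIT
            AS
            BEGIN
                RETURN @isApproved
            END");
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.Sql("DROP FUNCTION [dbo].[IsPublishedFilter]");
        migrationBuilder.Sql("DROP FUNCTION [dbo].[IsApprovedFilter]");
    }
}
```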
### Step 3: Apply UDF Filters
Update your DbContext to use the UDF-based filters:
```csharp
// MyProjectDbContext.cs
protected override void OnModelCreating(ModelBuilder builder)
{
base.OnModelCreating(builder);
// Map CLR methods to SQL scalar functions.
// Create matching SQL functions in a migration.
var isPublishedMethod = typeof(LibraryDbFunctions).GetMethod(
nameof(LibraryDbFunctions.IsPublishedFilter),
new[] { typeof(bool), typeof(DateTime?) })!;
builder.HasDbFunction(isPublishedMethod);
var isApprovedMethod = typeof(LibraryDbFunctions).GetMethod(
nameof(LibraryDbFunctions.IsApprovedFilter),
new[] { typeof(bool) })!;
builder.HasDbFunction(isApprovedMethod);
builder.Entity<Book>(b =>
{
b.ToTable("Books");
b.ConfigureByConvention();
// ABP way: define separate filters. HasAbpQueryFilter composes them.
b.HasAbpQueryFilter(book =>
LibraryDbFunctions.IsPublishedFilter(book.IsPublished, book.PublishDate));
b.HasAbpQueryFilter(book =>
LibraryDbFunctions.IsApprovedFilter(book.IsApproved));
});
}
```
This approach generates cleaner SQL and improves query performance, especially in complex scenarios with multiple filters.
## Working with Complex Filter Combinations
When combining multiple filters, it's important to understand how they interact. Let's explore some common scenarios.
### Combining Tenant and Department Filters
In a multi-tenant application, you might need to combine tenant isolation with department-level access control:
```csharp
public class BookAppService : ApplicationService
{
private readonly IRepository<Book, Guid> _bookRepository;
private readonly IDataFilter<IMultiTenant> _tenantFilter;
private readonly ICurrentUser _currentUser;
public BookAppService(
IRepository<Book, Guid> bookRepository,
IDataFilter<IMultiTenant> tenantFilter,
ICurrentUser currentUser)
{
_bookRepository = bookRepository;
_tenantFilter = tenantFilter;
_currentUser = currentUser;
}
public async Task<List<BookDto>> GetMyDepartmentBooksAsync()
{
var currentUser = _currentUser;
var userDepartmentId = GetUserDepartmentId(currentUser);
// Get all books without department filter, then filter in memory
// (for scenarios where you need custom filter logic)
using (_tenantFilter.Disable()) // Optional: disable tenant filter if needed
{
var allBooks = await _bookRepository.GetListAsync();
// Apply department filter in memory (custom logic)
var departmentBooks = allBooks
.Where(b => b.DepartmentId == userDepartmentId)
.ToList();
return ObjectMapper.Map<List<Book>, List<BookDto>>(departmentBooks);
}
}
private Guid GetUserDepartmentId(ICurrentUser currentUser)
{
// Get user's department from claims or database
var departmentClaim = currentUser.FindClaim("DepartmentId");
return Guid.Parse(departmentClaim.Value);
}
}
```
### Filter Priority and Override
Sometimes you need to override filters in specific scenarios. ABP provides a flexible way to handle this:
```csharp
public async Task<Book> GetBookForEditingAsync(Guid id)
{
// Disable soft delete filter to get deleted records for restoration
using (DataFilter.Disable<ISoftDelete>())
{
return await _bookRepository.GetAsync(id);
}
}
public async Task<Book> GetBookIncludingUnpublishedAsync(Guid id)
{
// Use GetQueryableAsync to customize the query
var query = await _bookRepository.GetQueryableAsync();
// Manually apply or bypass filters
var book = await query
.FirstOrDefaultAsync(b => b.Id == id);
return book;
}
```
## Best Practices for Multiple Global Query Filters
When implementing multiple global query filters, consider these best practices:
### 1. Keep Filters Simple
Complex filter expressions can significantly impact query performance. Keep each condition focused on a single concern. In ABP, you can define them separately with `HasAbpQueryFilter`, which composes with ABP's built-in filters:
```csharp
// Good (ABP): separate, focused filters composed by HasAbpQueryFilter
b.HasAbpQueryFilter(book => book.IsPublished);
b.HasAbpQueryFilter(book => book.IsApproved);
b.HasAbpQueryFilter(book => book.DepartmentId == userDeptId);
// Avoid: calling HasQueryFilter multiple times for the same entity
// in plain EF Core (the last call replaces the previous one)
b.HasQueryFilter(book => book.IsPublished);
b.HasQueryFilter(book => book.IsApproved);
```
### 2. Use Indexing
Ensure your database has appropriate indexes for filtered columns:
```csharp
builder.Entity<Book>(b =>
{
b.HasIndex(book => book.IsPublished);
b.HasIndex(book => book.IsApproved);
b.HasIndex(book => book.DepartmentId);
b.HasIndex(book => new { book.IsPublished, book.PublishDate });
});
```
### 3. Consider Performance Impact
Use UDF mapping for better performance with complex filters. Profile your queries and analyze execution plans.
### 4. Document Filter Behavior
Clearly document which filters are applied to each entity to help developers understand the behavior:
```csharp
/// <summary>
/// Book entity with the following global query filters:
/// - ISoftDelete: Automatically excludes soft-deleted books
/// - IMultiTenant: Automatically filters by current tenant
/// - IPublishable: Excludes unpublished books (based on IsPublished and PublishDate)
/// - IApproveable: Excludes unapproved books (based on IsApproved)
/// </summary>
/// <remarks>
/// Filter interfaces (IPublishable, IApproveable, IPublishedFilter, IApprovedFilter)
/// are defined in Step 1: Define Filter Interfaces
/// </remarks>
public class Book : AuditedAggregateRoot<Guid>, ISoftDelete, IMultiTenant, IPublishable, IApproveable
{
public string Name { get; set; }
public BookType Type { get; set; }
public DateTime PublishDate { get; set; }
public float Price { get; set; }
public bool IsPublished { get; set; }
public bool IsApproved { get; set; }
public Guid? TenantId { get; set; }
public bool IsDeleted { get; set; }
public Guid DepartmentId { get; set; }
}
```
## Testing Global Query Filters
Testing with global query filters can be challenging. Here's how to do it effectively:
### Unit Testing Filters
```csharp
[Fact]
public void Book_QueryFilter_Should_Filter_Unpublished()
{
var options = new DbContextOptionsBuilder<LibraryDbContext>()
.UseInMemoryDatabase(databaseName: "TestDb")
.Options;
using (var context = new LibraryDbContext(options))
{
context.Books.Add(new Book { Name = "Published Book", IsPublished = true });
context.Books.Add(new Book { Name = "Unpublished Book", IsPublished = false });
context.SaveChanges();
}
using (var context = new LibraryDbContext(options))
{
// Query with filter enabled (default)
var publishedBooks = context.Books.ToList();
Assert.Single(publishedBooks);
Assert.Equal("Published Book", publishedBooks[0].Name);
}
}
```
### Integration Testing with Filter Control
```csharp
[Fact]
public async Task Should_Get_Deleted_Book_When_Filter_Disabled()
{
var dataFilter = GetRequiredService<IDataFilter>();
// Arrange
var book = await _bookRepository.InsertAsync(
new Book { Name = "Test Book" },
autoSave: true
);
await _bookRepository.DeleteAsync(book);
// Act - with filter disabled
using (dataFilter.Disable<ISoftDelete>())
{
var deletedBook = await _bookRepository
.FirstOrDefaultAsync(b => b.Id == book.Id);
deletedBook.ShouldNotBeNull();
deletedBook.IsDeleted.ShouldBeTrue();
}
}
```
### Testing Custom Global Query Filters
Here's a complete example of testing custom toggleable filters:
```csharp
[Fact]
public async Task Should_Filter_Unpublished_Books_By_Default()
{
// Default: filters are enabled
var result = await WithUnitOfWorkAsync(async () =>
{
var bookRepository = GetRequiredService<IRepository<Book, Guid>>();
return await bookRepository.GetListAsync();
});
// Only published and approved books should be returned
result.All(b => b.IsPublished).ShouldBeTrue();
result.All(b => b.IsApproved).ShouldBeTrue();
}
[Fact]
public async Task Should_Return_All_Books_When_Filter_Disabled()
{
var result = await WithUnitOfWorkAsync(async () =>
{
// Disable the published filter to see unpublished books
using (_publishedFilter.Disable())
{
var bookRepository = GetRequiredService<IRepository<Book, Guid>>();
return await bookRepository.GetListAsync();
}
});
// Should include unpublished books
result.Any(b => b.Name == "Unpublished Book").ShouldBeTrue();
}
[Fact]
public async Task Should_Combine_Filters_Correctly()
{
// Test combining multiple filter disables
using (_publishedFilter.Disable())
using (_approvedFilter.Disable())
{
var bookRepository = GetRequiredService<IRepository<Book, Guid>>();
var allBooks = await bookRepository.GetListAsync();
// All books should be visible
allBooks.Count.ShouldBe(5);
}
}
```
> **Tip:** When using ABP's test base, inject `IDataFilter<IPublishedFilter>` and `IDataFilter<IApprovedFilter>` to control filters in your tests.
## Key Takeaways
**Global query filters automatically apply filter criteria to all queries**, reducing developer error and ensuring consistent data filtering across your application.
**ABP Framework provides a sophisticated data filtering system** with built-in support for soft delete (`ISoftDelete`) and multi-tenancy (`IMultiTenant`), plus the ability to create custom filters.
**Use `IDataFilter<TFilter>` to control filters at runtime**, enabling or disabling filters as needed for specific operations.
**To make custom filters toggleable, override `ShouldFilterEntity` and `CreateFilterExpression`** in your DbContext. Using only `HasAbpQueryFilter` creates filters that are always active.
**Combine multiple filters carefully** and consider performance implications, especially with complex filter expressions.
**Leverage user-defined function (UDF) mapping** for better SQL generation and query performance, available since ABP v8.3.
**Always test filter behavior** to ensure filters work as expected in different scenarios, including edge cases.
## Conclusion
Global query filters are essential for building secure, well-isolated applications. ABP Framework's data filtering system provides a robust foundation that builds on EF Core's capabilities while adding convenient features like runtime filter control and UDF mapping optimization.
By implementing multiple global query filters strategically, you can ensure data isolation, simplify your query logic, and reduce the risk of accidentally exposing unauthorized data. Remember to keep filters simple, add appropriate database indexes, and test thoroughly to maintain optimal performance.
Start implementing global query filters in your ABP applications today to leverage automatic data filtering across all your repositories and queries.
### See Also
- [ABP Data Filtering Documentation](https://abp.io/docs/latest/framework/fundamentals/data-filtering)
- [EF Core Global Query Filters](https://learn.microsoft.com/en-us/ef/core/querying/filters)
- [ABP Multi-Tenancy Documentation](https://abp.io/docs/latest/framework/fundamentals/multi-tenancy)
- [Using User-defined function mapping for global filters](https://abp.io/docs/latest/framework/infrastructure/data-filtering#using-user-defined-function-mapping-for-global-filters)
---
## References
- [ABP Framework Documentation](https://docs.abp.io)
- [Entity Framework Core Documentation](https://docs.microsoft.com/en-us/ef/core/)
- [EF Core Global Query Filters](https://learn.microsoft.com/en-us/ef/core/querying/filters)
- [User-defined Function Mapping](https://learn.microsoft.com/en-us/ef/core/querying/user-defined-function-mapping)

1
docs/en/Community-Articles/2025-12-18-Implementing-Multiple-Global-Query-Filters-With-Entity-Framework-Core/summary.md

@ -0,0 +1 @@
Global query filters in Entity Framework Core allow automatic data filtering at the entity level. This article covers ABP Framework's data filtering system, including built-in filters (ISoftDelete, IMultiTenant), custom filter implementation, and performance optimization using user-defined functions.

167
docs/en/Community-Articles/2026-01-24-How-AI-Is-Changing-Developers/POST.md

@ -0,0 +1,167 @@
# How AI Is Changing Developers
In the last few years, AI has moved from “nice to have” to “hard to live without” for developers. At first it was just code completion and smart hints. Now it’s getting deep into how we build software: the methods, the toolchain, and even the job itself.
Here are some structured thoughts on how AI is affecting developers, based on trends and personal experience.
## Every library will have AI-first docs
Future libraries and frameworks won’t just have docs for humans. They’ll also have a manual for AI:
- How to use
- Why it is designed this way
- What NOT to do
- Conventions & Best Practices
Once these rules are written in a structured way, AI can onboard to a library faster and more consistently than a junior developer.
Docs won’t just be knowledge anymore. They’ll be instructions AI can execute.
## AI will be a must-have for developers
Soon, “writing code without AI” will feel as strange as “writing code without an IDE.”
- It won’t be about whether you use AI
- It’ll be about how well you use it and where
AI will become:
- A standard productivity tool
- An extension of a developer’s thinking
- A second brain
Developers who don’t use AI will fall behind in both speed and understanding.
## As AI gets smarter, it replaces “time”
AI isn’t replacing developers right away. It’s replacing:
- Lots of repetitive work
- Basic development costs
The result is higher output per hour.
Boilerplate, CRUD, basic validation, simple logic — all of that will get swallowed fast.
It’s not people being replaced. It’s waste.
## Orchestrating multiple AIs becomes real
The future isn’t “one AI does everything.” It’s more like:
- Claude writes core code
- Copilot generates and maintains unit tests
- Codex and similar tools write docs and examples
- Other AIs handle refactoring, performance analysis, security checks
The dev process itself becomes an AI orchestration system.
The developer’s role looks more like:
Architect + conductor + quality gatekeeper
## Only great infrastructure gets amplified by AI
Even if AI can teach you “how to use it correctly,” it still can’t invent mature infrastructure for you.
We still rely on:
- Stable base frameworks (like [ABP](https://abp.io))
- Engineering capability proven by many projects
- Long-term maintenance and evolution
AI is an accelerator, not the foundation.
For open source, AI is actually a better companion:
- Helps understand the source code
- Helps learn design thinking
- Helps ship faster
The stronger the infrastructure, the more value AI can amplify.
## Frontend feels mature; backend still evolving
From personal experience:
- AI is already very strong in frontend work (Bootstrap / UI components, layout, styling, interaction)
- Backend is still learning and improving (business boundaries, architecture trade-offs, implicit constraints)
This shows: the clearer the rules and the faster the feedback, the faster AI improves.
## Writing rules for AI is productivity itself
In the ABP libraries, we’ve already written lots of rules for AI:
- Conventions
- Usage limits
- Recommended patterns
As rules grow:
- AI becomes more stable
- More predictable
- Base development work can be largely automated
Future engineering skill will be, in large part: how to design a rules system for AI.
## The real advantage is better feedback loops
AI gets much stronger when there’s clear feedback:
- Tests that run fast and fail loudly
- Logs and metrics that explain behavior
- Code review that checks for edge cases and security
The teams that win are the ones who can quickly verify, correct, and learn.
## About a developer’s career
Sometimes I think: I’m glad I didn’t enter the software industry just in the last few years.
If you’re just starting out, you really feel:
- The barrier is lower
- The competition is tougher
But whenever I see AI generate confident but wrong code, I’m reminded:
- The industry still has a future
- It still needs judgment, taste, and experience
There will always be people who love coding. If AI does it and we watch, that’s fine too.
## Chaos everywhere, but the experience is moving fast
Big companies, platforms, tools:
- GitHub
- OpenAI
- Claude
- All kinds of IDEs / agents
New AI tools, apps, and platforms keep popping up. New concepts show up almost every week. It’s noisy, but the big picture is clear: AI keeps getting better, and the overall developer experience is improving fast.
## Get ready for the AI revolution
Looking back at personal experience:
- Before: Google
- Now: ChatGPT
- Before: manual translation
- Now: fully automatic
- Before: writing unit tests by hand
- Now: AI does it all
- Before: human replies to customers
- Now: AI-assisted or even AI-led
From code completion to agents running tasks, and now deep IDE integration — the pace is shocking.
## Closing
AI is not the end of software engineering. It is:
- A leap in cognition
- A restructure of how work gets done
- An upgrade of roles
What matters most isn’t how much code AI can write, but how we redefine the value of “developers” in the AI era.

BIN  docs/en/Community-Articles/2026-01-24-How-AI-Is-Changing-Developers/image.png (binary file not shown; added, 672 KiB)

50
docs/en/Community-Articles/2026-02-02-ndc-london-article/post.md

@ -0,0 +1,50 @@
The software development world converged on the **Queen Elizabeth II Centre** in Westminster from **January 26-30** for **NDC London 2026**. As one of the most anticipated tech conferences in Europe, this year’s event delivered a masterclass in the future of the stack.
We spent five days immersed in workshops and sessions. Here is our comprehensive recap of the highlights and the technical shifts that will define 2026.
![enter image description here](https://abp.io/api/file-management/file-descriptor/share?shareToken=CfDJ8NqaJZr2oLpIuRyHVjJk1BBjsk292Ejh%2b5X2yeS2pD9uibmq8qxh50b9eOg5U5Ib2jAFaeCHItbTyOpajIeaUzNKg/p0WHohjf1iac2%2bVL6kT/Y3ORSKpRQrdE22QJTwAxBMUryUgTQJ989hYtsvF%2bkReDR03k0gIl4ApUaji6Tg)
## **1\. High-Performance .NET and C\# Evolution**
A major focus this year was the continued evolution of the .NET ecosystem. Experts delivered standout sessions on high-performance coding patterns, and it's clear that efficiency and "Native AOT" (Ahead-of-Time compilation) are no longer niche topics; they are becoming industry standards.
## **2\. Moving Beyond the AI Hype**
If 2025 was about experimenting with LLMs, NDC London 2026 was about AI integration. Sessions from experts showcased how developers are moving past simple chatbots and integrating AI directly into the CI/CD pipeline and automated testing suites.
![enter image description here](https://abp.io/api/file-management/file-descriptor/share?shareToken=CfDJ8NqaJZr2oLpIuRyHVjJk1BDxx%2FqqZ08tgIxCPsAnDDD2w5yJPjVXwUJrbGHpSln3npfpJEBQ78chKoSlZS1cz1nbigNQtRq60dlbyMLwnAgE52tBwUJz481PcBgNtyFMW7rm7oKhFV9c7tK8bEcK%2FscRudaV8w7%2FPO8U5KJv%2BQal)
![enter image description here](https://abp.io/api/file-management/file-descriptor/share?shareToken=CfDJ8NqaJZr2oLpIuRyHVjJk1BBdNXgjnu7HIGgX//VJrh3XzjPns4ODHMUhZ%2bDQCcZa2Nc0%2b%2bshyt2UXqaIKEJMPHh6JIDGBtUrdQZ1EzmGn3pingGKiw7YTbh0Z%2bLRZSmcY6pEXkd1S/7VVncmICIHrQgjg%2b7eb2uO28qadIWGbD99)
## **3\. The "Hallway Track" and Community Networking**
One of the biggest draws of **NDC London** is the community. Between the 100+ sessions, the exhibitor hall was buzzing with live demos and networking.
Watch the video:
[![Watch the Hallway Track video](https://img.youtube.com/vi/yb-FILkqL7U/hqdefault.jpg)](https://www.youtube.com/watch?v=yb-FILkqL7U)
![enter image description here](https://abp.io/api/file-management/file-descriptor/share?shareToken=CfDJ8NqaJZr2oLpIuRyHVjJk1BCLbkSK3YZDZZhBGi/IBZOCXgcWHwTyS/s5v6U%2bSeQnY5yCTzMJFTu/mA4xX%2bL5tjbMPfEI8gvCwmVEfSymGFIiJLtAbP8T2zFZev%2bm74sTsQ%2b4sdsLKbdijiae3G%2b45ijWep7yFJx9BWMgV263zzvI)
![enter image description here](https://abp.io/api/file-management/file-descriptor/share?shareToken=CfDJ8NqaJZr2oLpIuRyHVjJk1BCrCACVWDlDjOgl9ASMeZNMVBGye%2bfya4aO6UW5Kyg9MCVLswzckRWS%2bT71AcQuWMGfiousZlSCrKNAGrosPXzuWAsxnNai3xBcj061TWjGAGX4u1AtrD0eknRxuKe2ba%2bVO7r0sZqle%2bUyZa305hhO)
## **4\. The Big Giveaway: Our Xbox Series S Raffle**
One of our favorite moments of the week was our Raffle Session. We love giving back to the community that inspires us, and this year, the energy at our booth was higher than ever.
We were thrilled to give away a brand-new Xbox Series S to one lucky winner\! It was fantastic to meet so many of you who stopped by to enter, chat about your current projects, and share your thoughts on the future of the industry.
**Congratulations again to our 2026 winner\!** We hope you enjoy some well-deserved gaming time after a long week of learning.
![enter image description here](https://abp.io/api/file-management/file-descriptor/share?shareToken=CfDJ8NqaJZr2oLpIuRyHVjJk1BBozHxXhCL7qMtx5LAxvafvPOKaZJepGlR7tgHVvw6wGpuR4Ervipym%2busZ7eMl3uook15K1874RYEwUenBfoZSJBm33MdaHFduha9iJ7tnfTmW12QbdYM77yqfVJ7EonuJsRrNySdYrQuRI0H2RkZr)
Watch the video:
[![Watch the Xbox Series S giveaway](https://img.youtube.com/vi/W5HRwys8dpE/hqdefault.jpg)](https://www.youtube.com/watch?v=W5HRwys8dpE)
## **Final Thoughts: See You at NDC London 2027\!**
NDC London 2026 proved once again why it is a cornerstone event for the global developer community. We are returning to our projects with a refreshed roadmap and a deeper understanding of the tools shaping our industry.
![enter image description here](https://abp.io/api/file-management/file-descriptor/share?shareToken=CfDJ8NqaJZr2oLpIuRyHVjJk1BDJq%2bG7yg1jtoY3gGH8mFMZen%2bncuL%2bKrQHY4/FPOF2KXcLyEjJymhk0JAVwJ76lPeqBchrfsAK3TOUTKY15tC7jm3uwgcH9IWRxCM2ouqxVGqGPd8YIRdG7H7QgyuknBkS4wsdYI9gl1EGqgPtTXJd)

BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/0.png (binary file not shown; 4.0 MiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/1.png (binary file not shown; 3.9 MiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/2.png (binary file not shown; 4.8 MiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/3.png (binary file not shown; 3.7 MiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/4.png (binary file not shown; 3.2 MiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/4_1.png (binary file not shown; 2.6 MiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/4_2.png (binary file not shown; 3.0 MiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/5.png (binary file not shown; 3.0 MiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/6.png (binary file not shown; 644 KiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/7.png (binary file not shown; 600 KiB)

325
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/Post.md

@ -0,0 +1,325 @@
![Cover](0.png)
This year we attended NDC London as a sponsor for [ABP](https://abp.io). The conference was held at the same venue as previous years, the [Queen Elizabeth II Centre](https://qeiicentre.london/). I think this is the best conference for .NET developers around the world (thanks to the NDC team), and we have attended for the last 5 years. It ran for 3 full days, from 28 to 30 January 2026. As an exhibitor, we talked a lot with attendees who stopped by our booth, over meals, or in the conference rooms.
This is the best opportunity to learn what everyone is doing in the software community. While explaining ABP to people who were hearing about it for the first time, I also asked what they do in their work. Developers mostly work on web platforms. And as you know, there's an AI transformation underway in our sector, so I wondered whether other people are also following the latest AI trend. Well... not as much as I expected. At Volosoft, we follow AI trends closely, use them in our daily development, inject this new technology into our product, and try to benefit from it as much as possible.
![Our booth](1.png)
This new AI trend is as significant as the invention of printing (by Johannes Gutenberg in 1450) or the invention of the calculator (by William S. Burroughs in 1886). The countries that benefited from those inventions saw a huge increase in their welfare. So, we welcome this new AI invention in software development, design, DevOps, and testing. I also see it as a big wave in the ocean: if you are prepared and develop your skills, you can play with it 🌊 (it's called surfing), or you'll go under the AI wave in this ocean. But not all companies react to this transformation quickly. Many developers use it like a ChatGPT conversation (copy-pasting from it) or use GitHub Copilot in a limited manner. But as I heard in Steve Sanderson's session and from other Microsoft employees, they are already using it to reproduce bugs reported in issues and even to create feature PRs via Copilot. That's good!
Here are some pictures from the conference, and that's me on the left side with the brown shoes :)
![Alper & Halil](2.png)
Another thing I noticed: there was a decrease in the number of attendees. I don't know the real reason, but probably IT companies have cut their conference budgets. As you also hear, many companies are laying people off because AI is replacing some positions.
The food was great during the conference; it felt more like eating sessions to me. Lots of good meals from different countries' cuisines. On the second day, there was a party. People grabbed their beers, wines, and other beverages and did some networking.
I was expecting more AI-oriented sessions, but there were fewer than I expected. Even though I was an exhibitor, I tried to attend some of the sessions. Here are my notes.
---
Here's a quick video from the exhibitors' area on the 3rd floor and our ABP booth's Xbox raffle:
**Video 1: NDC Conference 2026 Environment** 👉 [https://youtu.be/U1kiYG12KgA](https://youtu.be/U1kiYG12KgA)
[![Video 1](youtube-cover-1.png)](https://youtu.be/U1kiYG12KgA)
**Video 2: Our raffle for XBOX** 👉 [https://youtu.be/7o0WX70qYw0](https://youtu.be/7o0WX70qYw0)
[![Video 2](youtube-cover-2.png)](https://youtu.be/7o0WX70qYw0)
---
## Sessions / Talks
### The Dangers of Probably-Working Software | Damian Brady
![Damian Session](3.png)
The first session and keynote was from Damian Brady. He's part of the Developer Advocacy team at GitHub, and the topic was "The Dangers of Probably-Working Software". He started with the negative side, how generative AI can kill software quality, and ended on a more hopeful note: it's not so bad, and we can benefit from the AI transformation. It was the first time I heard the term "sleepwalking" used for development: when we generate code via AI and don't review it well enough, we're sleepwalkers. And that's correct, and a good analogy for this case. This talk centers on a powerful lesson: *“**Don’t ship code you don’t truly understand.**”*
Damian tells a personal story from his early .NET days when he implemented a **Huffman compression algorithm** based largely on Wikipedia. The code **“worked” in small tests** but **failed in production**. The experience forced him to deeply understand the algorithm rather than relying on copied solutions. Through this story, he explores themes of trust, complexity, testing, and mental models in software engineering.
#### Notes From This Session
- “It seems to work” is not the same as “I understand it.”
- Code copied from Wikipedia or StackOverflow or AI platforms is inherently risky in production.
- Passing tests on small datasets does not guarantee real-world reliability (a passing happy path can still hide unhappy results).
- Performance issues often surface only in edge cases.
- Delivery pressure can discourage deep understanding — to the detriment of quality.
- Always ask: “**When does this fail?**” — not just “**Why does this work?**”
---
### Playing The Long Game | Sheena O'Connell
![Sheena Session](4.png)
Sheena is a former software engineer who now trains and supports tech educators. She talks about AI tools...
AI tools are everywhere but poorly understood; there’s hype, risks, and mixed results. The key question is how individuals and organisations should play the long game (long-term strategy) so skilled human engineers—especially juniors—can still grow and thrive.
She showed some statistics on how job postings for software developers on the Indeed platform are dramatically decreasing. About AI-generated code, she noted that it's often less secure, that there may be logical problems or interesting bugs, that humans might not read the code very carefully, and that understanding/debugging it can sometimes take much longer.
Being an engineer is about much more than a job title — it requires systems thinking, clear communication, dealing with uncertainty, continuous learning, discipline, and good knowledge management. The job market is shifting: demand for AI-skilled workers is rising quickly and paying premiums, and required skills are changing faster in AI-exposed roles. There’s strength in using a diversity of models instead of locking into one provider, and guardrails improve reliability.
AI is creating new roles (like AI security, observability, and operations) and new kinds of work, while routine attrition also opens opportunities. At the same time, heavy AI use can have negative cognitive effects: people may think less, feel lonelier, and prefer talking to AI over humans.
Organizations are becoming more dynamic and project-based, with shorter planning cycles, higher trust, and more experimentation — but also risk of “shiny new toy” syndrome. Research shows AI can boost productivity by 15–20% in many cases, especially in simpler, greenfield projects and popular languages, but it can actually reduce productivity on very complex work. Overall, the recommendation is to focus on using AI well (not just the newest model), add monitoring and guardrails, keep flexibility, and build tools that allow safe experimentation.
![Sheena Session 2](4_1.png)
We’re in a messy, fast-moving AI era where LLM tools are everywhere but poorly understood. There’s a lot of hype and marketing noise, making it hard even for technical people to separate reality from fantasy. Different archetypes have emerged — from AI-optimists to skeptics — and both extremes have risks. AI is great for quick prototyping but unreliable for complex work, so teams need guardrails, better practices, and a focus on learning rather than “writing more code faster.” The key question is how individuals and organizations can play the long game so strong human engineers — especially juniors — can still grow and thrive in an AI-driven world.
![Sheena Session 3](4_2.png)
---
### Crafting Intelligent Agents with Context Engineering | Carly Richmond
![Carly Session](5.png)
Carly is a Developer Advocate Lead at Elastic in London, with deep experience in web development and agile delivery from her years in investment banking. A practical UI engineer, she brings a clear, hands-on perspective to building real-world AI systems. In her talk, **“Crafting Intelligent Agents with Context Engineering,”** she argues that prompt engineering isn’t enough and shows how carefully shaping context across data, tools, and systems is key to creating reliable, useful AI agents. She also described the context of an AI process: it consists of instructions, short-term memory, long-term memory, RAG, user prompts, tools, and structured output.
---
### Modular Monoliths | Kevlin Henney
![Kevlin Session](6.png)
Kevlin frames the “microservices vs monolith” debate as a false dichotomy. His core argument is simple but powerful: problems rarely come from *being a monolith* — they come from being a **poorly structured one**. Modularity is not a deployment choice; it is an architectural discipline.
#### **Notes from the Talk**
- A monolith is not inherently bad; a tangled (intertwined, complex) monolith is.
- Architecture is mostly about **boundaries**, not boxes.
- If you cannot draw clean internal boundaries, you are not ready for microservices.
- Dependencies reveal your real architecture better than diagrams.
- Teams shape systems more than tools do.
- Splitting systems prematurely increases complexity without increasing clarity.
- Good modular design makes systems **easier to change, not just easier to scale**.
#### **So As a Developer;**
- Start with a well-structured modular monolith before considering microservices.
- Treat modules as real first-class citizens: clear ownership, clear contracts.
- Make dependency direction explicit — no circular graphs.
- Use internal architectural tests to prevent boundary violations.
- Organize code by *capability*, not by technical layer.
- If your team structure is messy, your architecture will be messy — fix people, not tech.
---
### AI Coding Agents & Skills | Steve Sanderson
**Being productive with AI Agents**
![Steve Session](steve-sanderson-talk.png)
In this session, Steve started by describing how heavily Microsoft uses AI tools for PRs, reproducing reported bugs, and more. He now works on the **GitHub Copilot Coding Agent Runtime team**. He says we use our brains and hands less than at any time before.
![image-20260206004021726](steve-sanderson-talk_1.png)
**In one week, 293 PRs were opened with the help of AI**
![image-20260206004403643](steve-sanderson-talk_2.png)
**He added a new feature to Copilot, with the help of Copilot, in minutes**
![Steve](steve-sanderson-talk_3.png)
> Code is cheap! Prototypes are almost free!
And he summarized AI-assisted development under 10 headings: Subagents, Plan Mode, Skills, Delegate, Memories, Hooks, MCP, Infinite Sessions, Plugins, and Git Workflow. Let's see his statements for each of these headings:
#### **1. Subagents**
![image-20260206005620904](steve-sanderson-talk_4.png)
- Break big problems into smaller, specialized agents.
- Each subagent should have a clear responsibility and limited scope.
- Parallel work is better than one “smart but slow” agent.
- Reduces hallucination by narrowing context per agent.
- Easier to debug: you can inspect each agent’s output separately.
------
#### **2. Plan Mode**
![steve-sanderson-talk_6](steve-sanderson-talk_6.png)
- Always start with a plan before generating code.
- The plan should be explicit, human-readable, and reviewable.
- You'll align your expectations with the AI's next steps.
- Prevents wasted effort on wrong directions.
- Encourages structured thinking instead of trial-and-error coding.
------
#### **3. Skills**
![steve-sanderson-talk_7](steve-sanderson-talk_7.png)
- These are typically just Markdown files (but they can be tools and scripts as well); a sketch of such a file follows this list.
- Skills are reusable capabilities for AI agents.
- You can't just give all the info (as Markdown) to the AI context, which is limited; instead, skills are pulled in only when necessary, selected by their Description field.
- Treat skills like APIs: versioned, documented, and shareable.
- Prefer many small skills over one big skill set.
- Store skills in Git, not in chat history.
- Skills should integrate with real tools (CI, GitHub, browsers, etc.).
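For a rough idea of the format, here is a minimal sketch of what a skill file might look like (hypothetical content; the exact front-matter fields depend on the tool you use):

```markdown
---
name: test-your-project
description: Runs the project's test suite and summarizes failures. Use when the user asks to verify a change.
---

# Test Your Project

1. Run `dotnet test` from the repository root.
2. If any test fails, collect the failing test names and error messages.
3. Summarize the failures and suggest the most likely cause.
```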
#### 3.1 Skill > Test Your Project Skill
![steve-sanderson-talk_8](steve-sanderson-talk_8.png)
------
#### **4. Delegate**
> didn't mention much about this topic
- “Delegate” refers to **offloading local work to the cloud**.
- It means using remote computers for AI workloads instead of your local resources (the agent continues the task remotely).
##### **The "Ralph" Technique: Force a Do-While Loop, Over and Over, Until It Finishes**
https://awesomeclaude.ai/ralph-wiggum
> Who knows how many tokens it uses :)
![image-20260206010621010](steve-sanderson-talk_5.png)
------
#### **5. Memories**
> didn't mention much about this topic
- It's like telling the agent "don't write tests like this, write them like that", and the AI will remember it across your team members.
- Copilot Memory allows Copilot to learn about your codebase, helping Copilot coding agent, Copilot code review, and Copilot CLI to work more effectively in a repository.
- Treat memory like documentation that evolves over time.
- Copilot Memory is **turned off by default**
- https://docs.github.com/en/copilot/how-tos/use-copilot-agents/copilot-memory
------
#### **6. Hooks**
> didn't mention much about this topic
![image-20260206015638169](steve-sanderson-talk_10.png)
- Execute custom shell commands at key points during agent execution.
- Examples: pre-commit checks, PR reviews, test triggers.
- Hooks make AI proactive instead of reactive.
- They reduce manual context switching for developers.
- https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/use-hooks
------
#### **7. MCP**
- Talk to external tools.
- Enables safe, controlled access to systems (files, APIs, databases).
- Prevents random tool usage; everything is explicit.
------
#### **8. Infinite Sessions**
![Infinite Sessions](steve-sanderson-talk_11.png)
- AI should remember the “project context,” not just the last message.
- Reduces repetition and re-explaining.
- Enables deeper reasoning over time.
- Memory + skills + hooks together make “infinite sessions” possible.
- https://docs.github.com/en/copilot/how-tos/copilot-cli/cli-best-practices#3-leverage-infinite-sessions
------
#### **9. Plugins**
![Plugins](steve-sanderson-talk_12.png)
- Extend AI capabilities beyond core model features.
- https://github.com/marketplace?type=apps&copilot_app=true
------
#### **10. Git Workflow**
- AI should operate inside your existing Git process.
- Generate small, focused commits — not giant changes.
- Use AI for PR descriptions and code reviews.
- Keep humans in the loop for design decisions.
- Branching strategy still matters; AI doesn’t replace it.
- Treat AI like a junior teammate: helpful, but needs supervision.
- CI + tests remain your primary safety net, not the model.
- Keep feedback loops fast: generate → test → review → refine.
**Copilot as SDK**
You can wrap GitHub Copilot into your app as shown below:
![steve-sanderson-talk_9](steve-sanderson-talk_9.png)
#### **As a Developer What You Need to Get from Steve's Talk;**
- Coding agents work best when you treat them like programmable teammates, not autocomplete tools.
- “Skills” are the right abstraction for scaling AI assistants across a team.
- Treat skills like shared APIs: version them, review them, and store them in source control.
- Skills can be installed from Git repos (marketplaces), not just created locally.
- Slash commands make skills fast, explicit, and reproducible in daily workflow.
- Use skills to bridge AI ↔ real systems (e.g., GitHub Actions, Playwright, build status).
- Automation skills are most valuable when they handle end-to-end flows (browser + app + data).
- Let the agent *discover* the right skill rather than hard-coding every step.
- Skills reduce hallucination risk by constraining what the agent is allowed to do.
---
### My Personal Notes about AI
- This is the code tech stack for a basic .NET project:
  - Assembly > MSIL > C# > ASP.NET Core > ...ABP... > NuGet + NPM > Your Handmade Business Code
When we ask an AI-assisted IDE for a piece of development, the AI never starts from assembly, nor does it rewrite an existing NPM package. It basically uses what's already on the market. So we know that frameworks like ASP.NET Core and ABP will still be there after the AI evolution.
- A software engineer doesn't just write syntactically correct code to explain a program to a computer. As an engineer, you need to understand the requirements, design for the problem, make proper decisions, and resolve uncertainty. Asking AI the right questions is very critical these days.
- Tesla cars have already started to drive autonomously. As a driver, you don't need to care about how the car is driven. You need to choose the right way to get there in the shortest time, without hassle.
- I talked with the owners of other software companies; they also say visits to their docs websites are down. I talked to another guy who makes video tutorials for Pluralsight; he says learning from video is decreasing nowadays...
- Nowadays, **developers' big new task is reviewing AI-generated code.** In the future, the developers who use AI, who inspect AI-generated code well, and who tell the AI exactly what's needed will be the most valuable. Others (who only type code) will be naturally eliminated. Invest your time in these topics.
- We see that our brains are getting lazier and our coding muscles weaker day by day. Just like after the invention of the calculator, when we stopped calculating big numbers by hand, we'll eventually forget coding. But maybe that's how it needs to be!
- Also, I don't think AI will replace developers. Think about washing machines: since they came out, they have still needed humans to put the clothes in, pick the best program, take the clothes out, and iron them. From now on, AI is our assistant in every aspect of our lives, from shopping, medical issues, and learning to coding. Let's benefit from it.
#### Software and service stocks shed $830 billion in market value in six trading days
Software stocks fell on AI disruption fears on Feb 4, 2026, on NASDAQ: software and service stocks shed $830 billion in market value in six trading days, as investors scrambled to shield their portfolios while AI muddies valuations and business prospects.
![Reuters](7.png)
**We need to be well prepared for this war.**

BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/cover.png (binary file not shown; 2.2 MiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/image-20260206003328436.png (binary file not shown; 495 KiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/image-20260206004046914.png (binary file not shown; 155 KiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/image-20260206012506799.png (binary file not shown; 430 KiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk.png (binary file not shown; 1.9 MiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_1.png (binary file not shown; 348 KiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_10.png (binary file not shown; 30 KiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_11.png (binary file not shown; 34 KiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_12.png (binary file not shown; 142 KiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_2.png (binary file not shown; 203 KiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_3.png (binary file not shown; 315 KiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_4.png (binary file not shown; 477 KiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_5.png (binary file not shown; 81 KiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_6.png (binary file not shown; 260 KiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_7.png (binary file not shown; 631 KiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_8.png (binary file not shown; 1.8 MiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_9.png (binary file not shown; 903 KiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/youtube-cover-1.png (binary file not shown; 300 KiB)
BIN  docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/youtube-cover-2.png (binary file not shown; 355 KiB)

BIN  docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/demo.gif (binary file not shown; 471 KiB)
BIN  docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/images/abp-studio-ai-management.png (binary file not shown; 56 KiB)
BIN  docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/images/ai-management-widget.png (binary file not shown; 3.7 KiB)
BIN  docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/images/ai-management-workspaces.png (binary file not shown; 41 KiB)
BIN  docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/images/example-comment.png (binary file not shown; 19 KiB)

488
docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/post.md

@ -0,0 +1,488 @@
# Using OpenAI's Moderation API in an ABP Application with the AI Management Module
If your application accepts user-generated content (comments, reviews, forum posts), you likely need some form of content moderation. Building one from scratch typically means training ML models, maintaining datasets, and writing a lot of code. OpenAI's `omni-moderation-latest` model offers a practical shortcut: it's free, requires no training data, and covers 13+ harm categories across text and images in 40+ languages.
In this article, I'll show you how to integrate this model into an ABP application using the [**AI Management Module**](https://abp.io/docs/latest/modules/ai-management). We'll wire it into the [CMS Kit Module's Comment Feature](https://abp.io/docs/latest/modules/cms-kit/comments) so every comment is automatically screened before it's published. The **AI Management Module** handles the OpenAI configuration (API keys, model selection, etc.) through a runtime UI, so you won't need to hardcode any of that into your `appsettings.json` or redeploy when something changes.
By the end, you'll have a working content moderation pipeline you can adapt for any entity in your ABP project.
## Understanding OpenAI's Omni-Moderation Model
Before diving into the implementation, let's understand what makes OpenAI's `omni-moderation-latest` model a game-changer for content moderation.
### What is it?
OpenAI's `omni-moderation-latest` is a next-generation multimodal content moderation model built on the foundation of GPT-4o. Released in September 2024, this model represents a significant leap forward in automated content moderation capabilities.
The most remarkable aspect? **It's completely free to use** through OpenAI's Moderation API: there are no token costs, no usage limits for reasonable use cases, and no hidden fees.
### Key Capabilities
The **omni-moderation** model offers several compelling features that make it ideal for production applications:
- **Multimodal Understanding**: Unlike text-only moderation systems, this model *can process both text and image inputs*, making it suitable for applications where users can upload images alongside their comments or posts.
- **High Accuracy**: Built on GPT-4o's advanced understanding capabilities, the model achieves significantly higher accuracy in detecting nuanced harmful content compared to rule-based systems or simpler ML models.
- **Multilingual Support**: The model demonstrates enhanced performance across more than 40 languages, making it suitable for global applications without requiring separate moderation systems for each language.
- **Comprehensive Category Coverage**: Rather than just detecting "spam" or "not spam," the model classifies content across 13+ distinct categories of potentially harmful content.
### Content Categories
The model evaluates content against the following categories, each designed to catch specific types of harmful content:
| Category | What It Detects |
|----------|-----------------|
| `harassment` | Content that expresses, incites, or promotes harassing language towards any individual or group |
| `harassment/threatening` | Harassment content that additionally includes threats of violence or serious harm |
| `hate` | Content that promotes hate based on race, gender, ethnicity, religion, nationality, sexual orientation, disability, or caste |
| `hate/threatening` | Hateful content that includes threats of violence or serious harm towards the targeted group |
| `self-harm` | Content that promotes, encourages, or depicts acts of self-harm such as suicide, cutting, or eating disorders |
| `self-harm/intent` | Content where the speaker expresses intent to engage in self-harm |
| `self-harm/instructions` | Content that provides instructions or advice on how to commit acts of self-harm |
| `sexual` | Content meant to arouse sexual excitement, including descriptions of sexual activity or promotion of sexual services |
| `sexual/minors` | Sexual content that involves individuals under 18 years of age |
| `violence` | Content that depicts death, violence, or physical injury in graphic detail |
| `violence/graphic` | Content depicting violence or physical injury in extremely graphic, disturbing detail |
| `illicit` | Content that provides advice or instructions for committing illegal activities |
| `illicit/violent` | Illicit content that specifically involves violence or weapons |
### API Response Structure
When you send content to the Moderation API (through an SDK or directly via the API), you receive a structured response containing the fields below; an abbreviated sample follows the list:
- **`flagged`**: A boolean indicating whether the content violates any of OpenAI's usage policies. This is your primary indicator for whether to block content.
- **`categories`**: A dictionary containing boolean flags for each category, telling you exactly which policies were violated.
- **`category_scores`**: Confidence scores ranging from 0 to 1 for each category, allowing you to implement custom thresholds if needed.
- **`category_applied_input_types`**: A dictionary containing information on which input types were flagged for each category. For example, if both the image and text inputs to the model are flagged for "violence/graphic", the `violence/graphic` property will be set to `["image", "text"]`. This is only available on omni models.
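Putting these fields together, an abbreviated response looks like the sketch below (illustrative values; a real response includes an entry for every category):

```json
{
  "id": "modr-...",
  "model": "omni-moderation-latest",
  "results": [
    {
      "flagged": true,
      "categories": {
        "violence": true,
        "harassment": false
      },
      "category_scores": {
        "violence": 0.86,
        "harassment": 0.001
      },
      "category_applied_input_types": {
        "violence": ["text"],
        "harassment": []
      }
    }
  ]
}
```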
> For more detailed information about the model's capabilities and best practices, refer to the [OpenAI Moderation Guide](https://platform.openai.com/docs/guides/moderation).
## The AI Management Module: Your Dynamic AI Configuration Hub
The [AI Management Module](https://abp.io/docs/latest/modules/ai-management) is a powerful addition to the ABP Platform that transforms how you integrate and manage AI capabilities in your applications. Built on top of the [ABP Framework's AI infrastructure](https://abp.io/docs/latest/framework/infrastructure/artificial-intelligence), it provides a complete solution for managing AI workspaces dynamically—without requiring code changes or application redeployment.
### Why Use the AI Management Module?
Traditional AI integrations often suffer from several pain points:
1. **Hardcoded Configuration**: API keys, model names, and endpoints are typically stored in configuration files, requiring redeployment for any changes.
2. **No Runtime Flexibility**: Switching between AI providers or models requires code changes.
3. **Security Concerns**: Managing API keys across environments is cumbersome and error-prone.
4. **Limited Visibility**: There's no easy way to see which AI configurations are active or test them without writing code.
The AI Management Module addresses all these concerns by providing:
- **Dynamic Workspace Management**: Create, configure, and update AI workspaces directly from a user-friendly administrative interface—no code changes required.
- **Provider Flexibility**: Seamlessly switch between different AI providers (OpenAI, Gemini, Anthropic, Azure OpenAI, Ollama, and custom providers) without modifying your application code.
- **Built-in Testing**: Test your AI configurations immediately using the included chat interface playground before deploying to production.
- **Permission-Based Access Control**: Define granular permissions to control who can manage AI workspaces and who can use specific AI features.
- **Multi-Framework Support**: Full support for MVC/Razor Pages, Blazor (Server & WebAssembly), and Angular UI frameworks.
### Built-in Provider Support
The **AI Management Module** comes with built-in support for popular AI providers through dedicated NuGet packages:
- **`Volo.AIManagement.OpenAI`**: Provides seamless integration with OpenAI's APIs, including GPT models and the *Moderation API*.
- Custom providers can be added by implementing the `IChatClientFactory` interface. (If you configured Ollama while creating your project, you can see an example implementation for Ollama.)
## Building the Demo Application
Now let's put theory into practice by building a complete content moderation system. We'll create an ABP application with the **AI Management Module**, configure OpenAI as our provider, set up the CMS Kit Comment Feature, and implement automatic content moderation for all user comments.
### Step 1: Creating an Application with AI Management Module
> In this tutorial, I'll create a **layered MVC application** named **ContentModeration**. If you already have an existing solution, you can follow along by replacing the namespaces accordingly. Otherwise, feel free to follow the solution creation steps below.
The most straightforward way to create an application with the AI Management Module is through **ABP Studio**. When you create a new project, you'll encounter an **AI Integration** step in the project creation wizard. This wizard allows you to:
- Enable the AI Management Module with a single checkbox
- Configure your preferred AI provider (OpenAI and Ollama)
- Set up initial workspace configurations
- Automatically install all required NuGet packages
> **Note:** The AI Integration tab in ABP Studio currently only supports the **MVC/Razor Pages** UI. Support for **Angular** and **Blazor** UIs will be added in upcoming versions.
![ABP Studio AI Management](images/abp-studio-ai-management.png)
During the wizard, select **OpenAI** as your AI provider, set the model name to `omni-moderation-latest`, and provide your API key. The wizard will automatically:
1. Install the `Volo.AIManagement.*` packages across your solution
2. Install the `Volo.AIManagement.OpenAI` package for OpenAI provider support (you can use any OpenAI-compatible model here, including Gemini, Claude, and GPT models)
3. Configure the necessary module dependencies
4. Set up initial database migrations
**Alternative Installation Method:**
If you have an existing project or prefer manual installation, you can add the module using the ABP CLI:
```bash
abp add-module Volo.AIManagement
```
Or through ABP Studio by right-clicking on your solution, selecting **Import Module**, and choosing `Volo.AIManagement` from the NuGet tab.
### Step 2: Understanding the OpenAI Workspace Configuration
After creating your project and running the application for the first time, navigate to **AI Management > Workspaces** in the admin menu. Here you'll find the workspace management interface where you can view, create, and modify AI workspaces.
![AI Management Workspaces](images/ai-management-workspaces.png)
If you configured OpenAI during the project creation wizard, you'll already have a workspace set up. Otherwise, you can create a new workspace with the following configuration:
| Property | Value | Description |
|----------|-------|-------------|
| **Name** | `OpenAIAssistant` | A unique identifier for this workspace (no spaces allowed) |
| **Provider** | `OpenAI` | The AI provider to use |
| **Model** | `omni-moderation-latest` | The specific model for content moderation |
| **API Key** | `<Your-OpenAI-API-key>` | Authentication credential for the OpenAI API |
| **Description** | `Workspace for content moderation` | A helpful description for administrators |
The beauty of this approach is that you can modify any of these settings at runtime through the UI. Need to rotate your API key? Just update it in the workspace configuration. Want to test a different model? Change it without touching your code.
### Step 3: Setting Up the CMS Kit Comment Feature
Now let's add the CMS Kit Module to enable the Comment Feature. The CMS Kit provides a robust, production-ready commenting system that we'll enhance with our content moderation.
**Install the CMS Kit Module:**
Run the following command in your solution directory:
```bash
abp add-module Volo.CmsKit --skip-db-migrations
```
> Also, you can add the related module through ABP Studio UI.
**Enable the Comment Feature:**
By default, CMS Kit features are disabled to keep your application lean. Open the `GlobalFeatureConfigurator` class in your `*.Domain.Shared` project and enable the Comment Feature:
```csharp
using Volo.Abp.GlobalFeatures;
using Volo.Abp.Threading;
namespace ContentModeration;
public static class ContentModerationGlobalFeatureConfigurator
{
private static readonly OneTimeRunner OneTimeRunner = new OneTimeRunner();
public static void Configure()
{
OneTimeRunner.Run(() =>
{
GlobalFeatureManager.Instance.Modules.CmsKit(cmsKit =>
{
//only enable the Comment Feature
cmsKit.Comments.Enable();
});
});
}
}
```
**Configure the Comment Entity Types:**
Open your `*DomainModule` class and configure which entity types can have comments. For our demo, we'll enable comments on "Article" entities:
```csharp
using Volo.CmsKit.Comments;
// In your ConfigureServices method:
Configure<CmsKitCommentOptions>(options =>
{
options.EntityTypes.Add(new CommentEntityTypeDefinition("Article"));
});
```
**Add the Comment Component to a Page:**
Finally, let's add the commenting interface to a page. Open the `Index.cshtml` file in your `*.Web` project and add the Comment component (replace with the following content):
```html
@page
@using Volo.CmsKit.Public.Web.Pages.CmsKit.Shared.Components.Commenting
@model ContentModeration.Web.Pages.IndexModel
<div class="container mt-4">
<div class="card">
<div class="card-header">
<h3>Welcome to Our Community</h3>
</div>
<div class="card-body">
<p class="lead">
Share your thoughts in the comments below. Our AI-powered moderation system
automatically reviews all comments to ensure a safe and respectful environment
for everyone.
</p>
<hr/>
<h4>Comments</h4>
@await Component.InvokeAsync(typeof(CommentingViewComponent), new
{
entityType = "Article",
entityId = "welcome-article",
isReadOnly = false
})
</div>
</div>
</div>
```
At this point, you have a fully functional commenting system. Users can post comments, reply to existing comments, and interact with the community.
![](./images/example-comment.png)
However, there's no content moderation yet, so any content, including harmful content, would be accepted. Let's fix that!
## Implementing the Content Moderation Service
**Now comes the exciting part:** implementing the content moderation service that leverages OpenAI's `omni-moderation` model to automatically screen all comments before they're published.
### Understanding the Architecture
Our implementation follows a clean, modular architecture:
1. **`IContentModerator` Interface**: Defines the contract for content moderation, making our implementation testable and replaceable.
2. **`ContentModerator` Service**: The concrete implementation that calls OpenAI's Moderation API using the configuration from the AI Management Module.
3. **`MyCommentAppService`**: An override of the CMS Kit's comment service that integrates our moderation logic.
This separation of concerns ensures that:
- The moderation logic is isolated and can be unit tested independently
- You can easily swap the moderation implementation (e.g., switch to a different provider)
- The integration with CMS Kit is clean and maintainable
### Creating the Content Moderator Interface
First, let's define the interface in your `*.Application.Contracts` project. The interface is intentionally simple: it takes a text input and throws an exception if the content is harmful:
```csharp
using System.Threading.Tasks;
namespace ContentModeration.Moderation;
public interface IContentModerator
{
Task CheckAsync(string text);
}
```
### Implementing the Content Moderator Service
Now let's implement the service in your `*.Application` project. This implementation uses the `IWorkspaceConfigurationStore` from the AI Management Module to dynamically retrieve the OpenAI configuration:
```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using OpenAI.Moderations;
using Volo.Abp;
using Volo.Abp.DependencyInjection;
using Volo.AIManagement.Workspaces.Configuration;
namespace ContentModeration.Moderation;
public class ContentModerator : IContentModerator, ITransientDependency
{
private readonly IWorkspaceConfigurationStore _workspaceConfigurationStore;
public ContentModerator(IWorkspaceConfigurationStore workspaceConfigurationStore)
{
_workspaceConfigurationStore = workspaceConfigurationStore;
}
public async Task CheckAsync(string text)
{
// Skip moderation for empty content
if (string.IsNullOrWhiteSpace(text))
{
return;
}
// Retrieve the workspace configuration from AI Management Module
// This allows runtime configuration changes without redeployment
var config = await _workspaceConfigurationStore.GetOrNullAsync<OpenAIAssistantWorkspace>();
if(config == null)
{
throw new UserFriendlyException("Could not find the 'OpenAIAssistant' workspace!");
}
var client = new ModerationClient(
model: config.Model,
apiKey: config.ApiKey
);
// Send the text to OpenAI's Moderation API
var result = await client.ClassifyTextAsync(text);
var moderationResult = result.Value;
// If the content is flagged, throw a user-friendly exception
if (moderationResult.Flagged)
{
var flaggedCategories = GetFlaggedCategories(moderationResult);
throw new UserFriendlyException(
$"Your comment contains content that violates our community guidelines. " +
$"Detected issues: {string.Join(", ", flaggedCategories)}. " +
$"Please revise your comment and try again."
);
}
}
private static List<string> GetFlaggedCategories(ModerationResult result)
{
var flaggedCategories = new List<string>();
if (result.Harassment.Flagged)
{
flaggedCategories.Add("harassment");
}
if (result.HarassmentThreatening.Flagged)
{
flaggedCategories.Add("threatening harassment");
}
//other categories...
return flaggedCategories;
}
}
```
> **Note**: The `ModerationResult` class from the OpenAI .NET SDK provides properties for each moderation category (e.g., `Harassment`, `Violence`, `Sexual`), each with a `Flagged` boolean and a `Score` float (0-1). The exact property names may vary slightly between SDK versions, so check the [OpenAI .NET SDK documentation](https://github.com/openai/openai-dotnet) for the latest API.
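If the binary `Flagged` decision is too strict or too lenient for your community, the per-category `Score` values let you apply your own cut-offs inside `CheckAsync`, right after the classification call. Here is a minimal sketch (the `0.5` threshold is an arbitrary illustrative value, and the property names follow the note above; verify them against your SDK version):

```csharp
// Illustrative custom threshold: block content whose violence score exceeds 0.5,
// even when OpenAI's default "flagged" heuristic does not trip.
const float ViolenceThreshold = 0.5f;

if (!moderationResult.Flagged && moderationResult.Violence.Score > ViolenceThreshold)
{
    throw new UserFriendlyException(
        "Your comment was blocked by our custom moderation thresholds. " +
        "Please revise your comment and try again.");
}
```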
### Integrating with CMS Kit Comments
The final piece of the puzzle is integrating our moderation service with the CMS Kit's comment system. We'll override the `CommentPublicAppService` to intercept all comment creation and update requests:
```csharp
using System;
using System.Threading.Tasks;
using ContentModeration.Moderation;
using Microsoft.Extensions.Options;
using Volo.Abp.DependencyInjection;
using Volo.Abp.EventBus.Distributed;
using Volo.CmsKit.Comments;
using Volo.CmsKit.Public.Comments;
using Volo.CmsKit.Users;
using Volo.Abp.SettingManagement;
namespace ContentModeration.Comments;
[Dependency(ReplaceServices = true)]
[ExposeServices(typeof(ICommentPublicAppService), typeof(CommentPublicAppService), typeof(MyCommentAppService))]
public class MyCommentAppService : CommentPublicAppService
{
protected IContentModerator ContentModerator { get; }
public MyCommentAppService(
ICommentRepository commentRepository,
ICmsUserLookupService cmsUserLookupService,
IDistributedEventBus distributedEventBus,
CommentManager commentManager,
IOptionsSnapshot<CmsKitCommentOptions> cmsCommentOptions,
ISettingManager settingManager,
IContentModerator contentModerator)
: base(commentRepository, cmsUserLookupService, distributedEventBus, commentManager, cmsCommentOptions, settingManager)
{
ContentModerator = contentModerator;
}
public override async Task<CommentDto> CreateAsync(string entityType, string entityId, CreateCommentInput input)
{
// Check for harmful content BEFORE creating the comment
// If harmful content is detected, an exception is thrown and the comment is not saved
await ContentModerator.CheckAsync(input.Text);
return await base.CreateAsync(entityType, entityId, input);
}
public override async Task<CommentDto> UpdateAsync(Guid id, UpdateCommentInput input)
{
// Check for harmful content BEFORE updating the comment
// This prevents users from editing approved comments to add harmful content
await ContentModerator.CheckAsync(input.Text);
return await base.UpdateAsync(id, input);
}
}
```
**How This Works:**
1. When a user submits a new comment, the `CreateAsync` method is called.
2. Before the comment is saved to the database, we call `ContentModerator.CheckAsync()` with the comment text.
3. The moderation service sends the text to OpenAI's Moderation API.
4. If the content is flagged as harmful, a `UserFriendlyException` is thrown with a descriptive message.
5. The exception is caught by ABP's exception handling middleware and displayed to the user as a friendly error message.
6. If the content passes moderation, the comment is saved normally.
The same flow applies to comment updates, ensuring users can't circumvent moderation by editing previously approved comments.
Here's the full flow in action — submitting a comment with harmful content and seeing the moderation kick in:
![Content moderation demo](demo.gif)
## The Power of Dynamic Configuration: What Does the AI Management Module Provide?
One of the most significant advantages of using the AI Management Module is the ability to manage your AI configurations dynamically. Let's explore what this means in practice.
### Runtime Configuration Changes
With the AI Management Module, you can:
- **Rotate API Keys**: Update your OpenAI API key through the admin UI without any downtime or redeployment. This is crucial for security compliance and key rotation policies.
- **Switch Models**: Want to test a newer moderation model? Simply update the model name in the workspace configuration. Your application will immediately start using the new model.
- **Adjust Settings**: Fine-tune settings like temperature or system prompts (for chat-based workspaces) without touching your codebase.
- **Enable/Disable Workspaces**: Temporarily disable a workspace for maintenance or testing without affecting other parts of your application.
### Multi-Environment Management
The dynamic configuration approach shines in multi-environment scenarios:
- **Development**: Use a test API key with lower rate limits
- **Staging**: Use a separate API key for integration testing
- **Production**: Use your production API key with appropriate security measures
All these configurations can be managed through the UI or via data seeding, without environment-specific code changes.
### Actively Maintained & What's Coming Next
The AI Management Module is **actively maintained** and continuously evolving. The team is working on exciting new capabilities that will further expand what you can do with AI in your ABP applications:
- **MCP (Model Context Protocol) Support** — Coming in **v10.2**, MCP support will allow your AI workspaces to interact with external tools and data sources, enabling more sophisticated AI-powered workflows.
- **RAG (Retrieval-Augmented Generation) System** — Also planned for **v10.2**, the built-in RAG system will let you ground AI responses in your own data, making AI features more accurate and context-aware.
- **And More** — Additional features and improvements are on the roadmap to make AI integration even more seamless.
Since the module is built on ABP's modular architecture, adopting these new capabilities will be straightforward — you can simply update the module and start using the new features without rewriting your existing AI integrations.
### Permission-Based Access Control
The AI Management Module integrates with ABP's permission system, allowing you to:
- Restrict who can view AI workspace configurations
- Control who can create or modify workspaces
- Limit access to specific workspaces based on user roles
This ensures that sensitive configurations like API keys are only accessible to authorized administrators.
## Conclusion
In this comprehensive guide, we've built a production-ready content moderation system that combines the power of OpenAI's `omni-moderation-latest` model with the flexibility of ABP's AI Management Module. Let's recap what makes this approach powerful:
### Key Takeaways
1. **Zero Training Required**: Unlike traditional ML approaches that require collecting datasets, training models, and ongoing maintenance, OpenAI's Moderation API works out of the box with state-of-the-art accuracy.
2. **Completely Free**: OpenAI's Moderation API has no token costs, making it economically viable for applications of any scale.
3. **Comprehensive Detection**: With 13+ categories of harmful content detection, you get protection against harassment, hate speech, violence, sexual content, self-harm, and more—all from a single API call.
4. **Dynamic Configuration**: The AI Management Module allows you to manage API keys, switch providers, and adjust settings at runtime without code changes or redeployment.
5. **Clean Integration**: By following ABP's service override pattern, we integrated moderation seamlessly into the existing CMS Kit comment system without modifying the original module.
6. **Production Ready**: The implementation includes proper error handling, graceful degradation, and user-friendly error messages suitable for production use.
### Resources
- [AI Management Module Documentation](https://abp.io/docs/latest/modules/ai-management)
- [OpenAI Moderation Guide](https://platform.openai.com/docs/guides/moderation)
- [CMS Kit Comments Feature](https://abp.io/docs/latest/modules/cms-kit/comments)
- [ABP Framework AI Infrastructure](https://abp.io/docs/latest/framework/infrastructure/artificial-intelligence)

30
docs/en/cli/index.md

@ -73,6 +73,7 @@ Here is the list of all available commands before explaining their details:
* **[`clear-download-cache`](../cli#clear-download-cache)**: Clears the templates download cache.
* **[`check-extensions`](../cli#check-extensions)**: Checks the latest version of the ABP CLI extensions.
* **[`install-old-cli`](../cli#install-old-cli)**: Installs old ABP CLI.
* **[`mcp-studio`](../cli#mcp-studio)**: Starts ABP Studio MCP bridge for AI tools (requires ABP Studio running).
* **[`generate-razor-page`](../cli#generate-razor-page)**: Generates a page class that you can use in the ASP.NET Core pipeline to return an HTML page.
### help
@ -981,6 +982,35 @@ Usage:
abp install-old-cli [options]
```
### mcp-studio
Starts an MCP stdio bridge for AI tools (Cursor, Claude Desktop, VS Code, etc.) that connects to the local ABP Studio instance. ABP Studio must be running for this command to work.
> You do not need to run this command manually. It is invoked automatically by your AI tool once you add the MCP configuration to your IDE. See the [Configuration](#configuration) examples below.
> This command connects to the **local ABP Studio** instance. It is separate from the `abp mcp` command, which connects to the ABP.IO cloud MCP service and requires an active license.
Usage:
```bash
abp mcp-studio [options]
```
Options:
* `--endpoint` or `-e`: Overrides ABP Studio MCP endpoint. Default value is `http://localhost:38280/mcp/`.
Example:
```bash
abp mcp-studio
abp mcp-studio --endpoint http://localhost:38280/mcp/
```
For detailed configuration examples (Cursor, Claude Desktop, VS Code) and the full list of available MCP tools, see the [Model Context Protocol (MCP)](../studio/model-context-protocol.md) documentation.
> You can also run `abp help mcp-studio` to see available options and example IDE configuration snippets directly in your terminal.
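For orientation, the MCP server entry you add to your AI tool typically looks like the following sketch (the file name and location, e.g. `.cursor/mcp.json` for Cursor, depend on your tool; treat this as an assumption and prefer the examples in the linked documentation):

```json
{
  "mcpServers": {
    "abp-studio": {
      "command": "abp",
      "args": ["mcp-studio"]
    }
  }
}
```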
### generate-razor-page
The `generate-razor-page` command generates a page class that you can then use in the ASP.NET Core pipeline to return an HTML page.

2
docs/en/deployment/configuring-production.md

@ -113,6 +113,6 @@ ABP uses .NET's standard [Logging services](../framework/fundamentals/logging.md
ABP's startup solution templates come with [Swagger UI](https://swagger.io/) pre-installed. Swagger is a pretty standard and useful tool to discover and test your HTTP APIs on a built-in UI that is embedded into your application or service. It is typically used in development environment, but you may want to enable it on staging or production environments too.
While you will always secure your HTTP APIs with other techniques (like the [Authorization](../framework/fundamentals/authorization.md) system), allowing malicious software and people to easily discover your HTTP API endpoint details can be considered as a security problem for some systems. So, be careful while taking the decision of enabling or disabling Swagger for the production environment.
While you will always secure your HTTP APIs with other techniques (like the [Authorization](../framework/fundamentals/authorization/index.md) system), allowing malicious software and people to easily discover your HTTP API endpoint details can be considered as a security problem for some systems. So, be careful while taking the decision of enabling or disabling Swagger for the production environment.
> You may also want to see the [ABP Swagger integration](../framework/api-development/swagger.md) document.

22
docs/en/docs-nav.json

@ -333,13 +333,17 @@
"path": "studio/solution-explorer.md"
},
{
"text": "Running Applications",
"text": "Solution Runner",
"path": "studio/running-applications.md"
},
{
"text": "Monitoring Applications",
"path": "studio/monitoring-applications.md"
},
{
"text": "Model Context Protocol (MCP)",
"path": "studio/model-context-protocol.md"
},
{
"text": "Working with Kubernetes",
"path": "studio/kubernetes.md"
@ -347,6 +351,10 @@
{
"text": "Working with ABP Suite",
"path": "studio/working-with-suite.md"
},
{
"text": "Custom Commands",
"path": "studio/custom-commands.md"
}
]
},
@ -458,12 +466,16 @@
"items": [
{
"text": "Overview",
"path": "framework/fundamentals/authorization.md",
"path": "framework/fundamentals/authorization/index.md",
"isIndex": true
},
{
"text": "Dynamic Claims",
"path": "framework/fundamentals/dynamic-claims.md"
},
{
"text": "Resource Based Authorization",
"path": "framework/fundamentals/authorization/resource-based-authorization.md"
}
]
},
@ -689,6 +701,10 @@
"text": "Concurrency Check",
"path": "framework/infrastructure/concurrency-check.md"
},
{
"text": "Correlation ID",
"path": "framework/infrastructure/correlation-id.md"
},
{
"text": "Current User",
"path": "framework/infrastructure/current-user.md"
@ -1284,7 +1300,7 @@
},
{
"text": "LeptonX Lite",
"path": "ui-themes/lepton-x-lite/mvc.md"
"path": "ui-themes/lepton-x-lite/asp-net-core.md"
},
{
"text": "LeptonX",

2
docs/en/framework/api-development/standard-apis/configuration.md

@ -9,7 +9,7 @@
ABP provides a pre-built and standard endpoint that contains some useful information about the application/service. Here, is the list of some fundamental information at this endpoint:
* Granted [policies](../../fundamentals/authorization.md) (permissions) for the current user.
* Granted [policies](../../fundamentals/authorization/index.md) (permissions) for the current user.
* [Setting](../../infrastructure/settings.md) values for the current user.
* Info about the [current user](../../infrastructure/current-user.md) (like id and user name).
* Info about the current [tenant](../../architecture/multi-tenancy) (like id and name).

2
docs/en/framework/architecture/domain-driven-design/application-services.md

@ -218,7 +218,7 @@ See the [validation document](../../fundamentals/validation.md) for more.
It's possible to use declarative and imperative authorization for application service methods.
See the [authorization document](../../fundamentals/authorization.md) for more.
See the [authorization document](../../fundamentals/authorization/index.md) for more.
## CRUD Application Services

23
docs/en/framework/architecture/domain-driven-design/entities.md

@ -135,6 +135,29 @@ if (book1.EntityEquals(book2)) //Check equality
}
```
### `IKeyedObject` Interface
ABP entities implement the `IKeyedObject` interface, which provides a way to get the entity's primary key as a string:
```csharp
public interface IKeyedObject
{
string? GetObjectKey();
}
```
The `GetObjectKey()` method returns a string representation of the entity's primary key. For entities with a single key (like `Entity<Guid>` or `Entity<int>`), it returns the `Id` property converted to a string. For entities with composite keys, it returns the keys combined with a comma separator.
This interface is particularly useful for scenarios where you need to identify an entity by its key in a type-agnostic way, such as:
* **Resource-based authorization**: When checking or granting permissions for specific entity instances
* **Caching**: When creating cache keys based on entity identifiers
* **Logging and auditing**: When recording entity identifiers in a consistent format
Since all ABP entities implement this interface through the `IEntity` interface, you can use `GetObjectKey()` on any entity without additional implementation.
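To make this concrete, below is a minimal sketch of reading the key, assuming a hypothetical `Book` entity deriving from `Entity<Guid>` (the entity name and property are for illustration only):

```csharp
using Volo.Abp.Domain.Entities;

var book = new Book();

// For a single-key entity, GetObjectKey() returns the Id converted to a string,
// e.g. "3fa85f64-5717-4562-b3fc-2c963f66afa6" (the default Guid for a new, unsaved entity).
string? key = book.GetObjectKey();

public class Book : Entity<Guid> // hypothetical entity, for illustration only
{
    public string Title { get; set; } = string.Empty;
}
```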
> See the [Resource-Based Authorization](../../fundamentals/authorization/resource-based-authorization.md) documentation for a practical example of using `IKeyedObject` with the permission system.
## AggregateRoot Class
"*Aggregate is a pattern in Domain-Driven Design. A DDD aggregate is a cluster of domain objects that can be treated as a single unit. An example may be an order and its line-items, these will be separate objects, but it's useful to treat the order (together with its line items) as a single aggregate.*" (see the [full description](http://martinfowler.com/bliki/DDD_Aggregate.html))

2
docs/en/framework/architecture/modularity/extending/customizing-application-modules-guide.md

@ -112,4 +112,4 @@ Also, see the following documents:
* See [the localization document](../../../fundamentals/localization.md) to learn how to extend existing localization resources.
* See [the settings document](../../../infrastructure/settings.md) to learn how to change setting definitions of a depended module.
* See [the authorization document](../../../fundamentals/authorization.md) to learn how to change permission definitions of a depended module.
* See [the authorization document](../../../fundamentals/authorization/index.md) to learn how to change permission definitions of a depended module.

97
docs/en/framework/fundamentals/authorization.md → docs/en/framework/fundamentals/authorization/index.md

@ -9,13 +9,15 @@
Authorization is used to check if a user is allowed to perform some specific operations in the application.
ABP extends [ASP.NET Core Authorization](https://docs.microsoft.com/en-us/aspnet/core/security/authorization/introduction) by adding **permissions** as auto [policies](https://docs.microsoft.com/en-us/aspnet/core/security/authorization/policies) and allowing authorization system to be usable in the **[application services](../architecture/domain-driven-design/application-services.md)** too.
ABP extends [ASP.NET Core Authorization](https://docs.microsoft.com/en-us/aspnet/core/security/authorization/introduction) by adding **permissions** as auto [policies](https://docs.microsoft.com/en-us/aspnet/core/security/authorization/policies) and allowing authorization system to be usable in the **[application services](../../architecture/domain-driven-design/application-services.md)** too.
So, all the ASP.NET Core authorization features and the documentation are valid in an ABP based application. This document focuses on the features that are added on top of ASP.NET Core authorization features.
ABP supports two types of permissions: **Standard permissions** apply globally (e.g., "can create documents"), while **resource-based permissions** target specific instances (e.g., "can edit Document #123"). This document covers standard permissions; see [Resource-Based Authorization](./resource-based-authorization.md) for fine-grained, per-resource access control.
## Authorize Attribute
ASP.NET Core defines the [**Authorize**](https://docs.microsoft.com/en-us/aspnet/core/security/authorization/simple) attribute that can be used for an action, a controller or a page. ABP allows you to use the same attribute for an [application service](../architecture/domain-driven-design/application-services.md) too.
ASP.NET Core defines the [**Authorize**](https://docs.microsoft.com/en-us/aspnet/core/security/authorization/simple) attribute that can be used for an action, a controller or a page. ABP allows you to use the same attribute for an [application service](../../architecture/domain-driven-design/application-services.md) too.
Example:
@ -87,9 +89,11 @@ namespace Acme.BookStore.Permissions
> ABP automatically discovers this class. No additional configuration required!
> You typically define this class inside the `Application.Contracts` project of your [application](../../solution-templates/layered-web-application). The startup template already comes with an empty class named *YourProjectNamePermissionDefinitionProvider* that you can start with.
> You typically define this class inside the `Application.Contracts` project of your [application](../../../solution-templates/layered-web-application/index.md). The startup template already comes with an empty class named *YourProjectNamePermissionDefinitionProvider* that you can start with.
In the `Define` method, you first need to add a **permission group** (or get an existing group), then add **permissions** to this group using the `AddPermission` method.
In the `Define` method, you first need to add a **permission group** (or get an existing group), then add **permissions** to this group.
> For resource-specific fine-grained permissions, use the `AddResourcePermission` method instead. See [Resource-Based Authorization](./resource-based-authorization.md) for details.
When you define a permission, it becomes usable in the ASP.NET Core authorization system as a **policy** name. It also becomes visible in the UI. See the permissions dialog for a role:
@ -100,6 +104,8 @@ When you define a permission, it becomes usable in the ASP.NET Core authorizatio
When you save the dialog, it is saved to the database and used in the authorization system.
> **Note:** Only standard (global) permissions are shown in this dialog. Resource-based permissions are managed through the [Resource Permission Management Dialog](../../../modules/permission-management.md#resource-permission-management-dialog) on individual resource instances.
> The screen above is available when you have installed the identity module, which is basically used for user and role management. Startup templates come with the identity module pre-installed.
#### Localizing the Permission Name
@ -125,15 +131,15 @@ Then you can define texts for "BookStore" and "Permission:BookStore_Author_Creat
"Permission:BookStore_Author_Create": "Creating a new author"
```
> For more information, see the [localization document](./localization.md) on the localization system.
> For more information, see the [localization document](../localization.md) on the localization system.
The localized UI will be as seen below:
![authorization-new-permission-ui-localized](../../images/authorization-new-permission-ui-localized.png)
![authorization-new-permission-ui-localized](../../../images/authorization-new-permission-ui-localized.png)
#### Multi-Tenancy
ABP supports [multi-tenancy](../architecture/multi-tenancy) as a first class citizen. You can define multi-tenancy side option while defining a new permission. It gets one of the three values defined below:
ABP supports [multi-tenancy](../../architecture/multi-tenancy/index.md) as a first class citizen. You can define multi-tenancy side option while defining a new permission. It gets one of the three values defined below:
- **Host**: The permission is available only for the host side.
- **Tenant**: The permission is available only for the tenant side.
@ -180,7 +186,7 @@ authorManagement.AddChild("Author_Management_Delete_Books");
The result on the UI is shown below (you probably want to localize permissions for your application):
![authorization-new-permission-ui-hierarcy](../../images/authorization-new-permission-ui-hierarcy.png)
![authorization-new-permission-ui-hierarcy](../../../images/authorization-new-permission-ui-hierarcy.png)
For the example code, it is assumed that a role/user with "Author_Management" permission granted may have additional permissions. Then a typical application service that checks permissions can be defined as shown below:
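A minimal sketch of such a check, assuming a hypothetical `AuthorAppService` and the child permission names defined above:

```csharp
using System;
using System.Threading.Tasks;
using Volo.Abp.Application.Services;
using Volo.Abp.Authorization;

public class AuthorAppService : ApplicationService
{
    public async Task DeleteBooksAsync(Guid authorId)
    {
        // The user/role is expected to hold this specific child permission,
        // in addition to the parent "Author_Management" permission.
        if (!await AuthorizationService.IsGrantedAsync("Author_Management_Delete_Books"))
        {
            throw new AbpAuthorizationException(
                "You are not allowed to delete the books of an author!");
        }

        // ... delete the author's books ...
    }
}
```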
@ -229,7 +235,7 @@ See [policy based authorization](https://docs.microsoft.com/en-us/aspnet/core/se
### Changing Permission Definitions of a Depended Module
A class deriving from the `PermissionDefinitionProvider` (just like the example above) can also get existing permission definitions (defined by the depended [modules](../architecture/modularity/basics.md)) and change their definitions.
A class deriving from the `PermissionDefinitionProvider` (just like the example above) can also get existing permission definitions (defined by the depended [modules](../../architecture/modularity/basics.md)) and change their definitions.
Example:
@ -247,12 +253,12 @@ When you write this code inside your permission definition provider, it finds th
You may want to disable a permission based on a condition. Disabled permissions are not visible on the UI and always return `prohibited` when you check them. There are two built-in conditional dependencies for a permission definition:
* A permission can be automatically disabled if a [Feature](../infrastructure/features.md) was disabled.
* A permission can be automatically disabled if a [Global Feature](../infrastructure/global-features.md) was disabled.
* A permission can be automatically disabled if a [Feature](../../infrastructure/features.md) was disabled.
* A permission can be automatically disabled if a [Global Feature](../../infrastructure/global-features.md) was disabled.
In addition, you can create your custom extensions.
#### Depending on a Features
#### Depending on Features
Use the `RequireFeatures` extension method on your permission definition to make the permission available only if a given feature is enabled:
@ -261,7 +267,7 @@ myGroup.AddPermission("Book_Creation")
    .RequireFeatures("BookManagement");
````
#### Depending on a Global Feature
#### Depending on Global Features
Use the `RequireGlobalFeatures` extension method on your permission definition to make the permission available only if a given global feature is enabled:
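A minimal sketch, assuming a hypothetical global feature named "Ecommerce":

```csharp
myGroup.AddPermission("Book_Creation")
    .RequireGlobalFeatures("Ecommerce");
```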
@ -272,13 +278,13 @@ myGroup.AddPermission("Book_Creation")
#### Creating a Custom Permission Dependency
`PermissionDefinition` supports state check, Please refer to [Simple State Checker's documentation](../infrastructure/simple-state-checker.md)
`PermissionDefinition` supports state checks; please refer to the [Simple State Checker documentation](../../infrastructure/simple-state-checker.md).
## IAuthorizationService
ASP.NET Core provides the `IAuthorizationService` that can be used to check for authorization. Once you inject, you can use it in your code to conditionally control the authorization.
ASP.NET Core provides the `IAuthorizationService` that can be used to check for authorization. Once you inject it, you can use it in your code to conditionally control the authorization.
Example:
**Example:**
```csharp
public async Task CreateAsync(CreateAuthorDto input)
@ -295,7 +301,7 @@ public async Task CreateAsync(CreateAuthorDto input)
}
```
> `AuthorizationService` is available as a property when you derive from ABP's `ApplicationService` base class. Since it is widely used in application services, `ApplicationService` pre-injects it for you. Otherwise, you can directly [inject](./dependency-injection.md) it into your class.
> `AuthorizationService` is available as a property when you derive from ABP's `ApplicationService` base class. Since it is widely used in application services, `ApplicationService` pre-injects it for you. Otherwise, you can directly [inject](../dependency-injection.md) it into your class.
Since this is a typical code block, ABP provides extension methods to simplify it.
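For example, the `CheckAsync` extension method performs the check and throws the exception in a single call; a minimal sketch:

```csharp
public async Task CreateAsync(CreateAuthorDto input)
{
    // Throws AbpAuthorizationException if the current user does not
    // have the "Author_Management_Create_Books" permission.
    await AuthorizationService.CheckAsync("Author_Management_Create_Books");

    // ... continue with the create operation ...
}
```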
@ -320,15 +326,15 @@ public async Task CreateAsync(CreateAuthorDto input)
See the following documents to learn how to re-use the authorization system on the client side:
* [ASP.NET Core MVC / Razor Pages UI: Authorization](../ui/mvc-razor-pages/javascript-api/auth.md)
* [Angular UI Authorization](../ui/angular/authorization.md)
* [Blazor UI Authorization](../ui/blazor/authorization.md)
* [ASP.NET Core MVC / Razor Pages UI: Authorization](../../ui/mvc-razor-pages/javascript-api/auth.md)
* [Angular UI Authorization](../../ui/angular/authorization.md)
* [Blazor UI Authorization](../../ui/blazor/authorization.md)
## Permission Management
Permission management is normally done by an admin user using the permission management modal:
![authorization-new-permission-ui-localized](../../images/authorization-new-permission-ui-localized.png)
![authorization-new-permission-ui-localized](../../../images/authorization-new-permission-ui-localized.png)
If you need to manage permissions by code, inject the `IPermissionManager` and use it as shown below:
@ -356,13 +362,13 @@ public class MyService : ITransientDependency
`SetForUserAsync` sets the value (true/false) for a permission of a user. There are more extension methods like `SetForRoleAsync` and `SetForClientAsync`.
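A minimal sketch of granting a permission to a user by code (the service and method names around the `IPermissionManager` call are illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Volo.Abp.DependencyInjection;
using Volo.Abp.PermissionManagement;

public class MyService : ITransientDependency
{
    private readonly IPermissionManager _permissionManager;

    public MyService(IPermissionManager permissionManager)
    {
        _permissionManager = permissionManager;
    }

    public async Task GrantAuthorManagementAsync(Guid userId)
    {
        // Grants (true) or revokes (false) the permission for the given user.
        await _permissionManager.SetForUserAsync(userId, "Author_Management", true);
    }
}
```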
`IPermissionManager` is defined by the permission management module. See the [permission management module documentation](../../modules/permission-management.md) for more information.
`IPermissionManager` is defined by the Permission Management module. For resource-based permissions, use `IResourcePermissionManager` instead. See the [Permission Management Module documentation](../../../modules/permission-management.md) for more information.
## Advanced Topics
### Permission Value Providers
Permission checking system is extensible. Any class derived from `PermissionValueProvider` (or implements `IPermissionValueProvider`) can contribute to the permission check. There are three pre-defined value providers:
The permission checking system is extensible. Any class derived from `PermissionValueProvider` (or implements `IPermissionValueProvider`) can contribute to the permission check. There are three pre-defined value providers:
- `UserPermissionValueProvider` checks if the current user is granted for the given permission. It gets user id from the current claims. User claim name is defined with the `AbpClaimTypes.UserId` static property.
- `RolePermissionValueProvider` checks if any of the roles of the current user is granted for the given permission. It gets role names from the current claims. The role claim name is defined with the `AbpClaimTypes.Role` static property.
@ -412,15 +418,35 @@ Configure<AbpPermissionOptions>(options =>
});
```
### Resource Permission Value Providers
Similar to standard permission value providers, you can extend the resource permission checking system by creating custom **resource permission value providers**. ABP provides two built-in resource permission value providers:
* `UserResourcePermissionValueProvider`: Checks permissions granted directly to users for a specific resource.
* `RoleResourcePermissionValueProvider`: Checks permissions granted to roles for a specific resource.
You can create custom providers by implementing `IResourcePermissionValueProvider` or inheriting from `ResourcePermissionValueProvider`. Register them using:
```csharp
Configure<AbpPermissionOptions>(options =>
{
    options.ResourceValueProviders.Add<YourCustomResourcePermissionValueProvider>();
});
```
> See the [Permission Management Module](../../../modules/permission-management.md#resource-permission-value-providers) documentation for detailed examples.
### Permission Store
`IPermissionStore` is the only interface that needs to be implemented to read the value of permissions from a persistence source, generally a database system. The Permission Management module implements it and pre-installed in the application startup template. See the [permission management module documentation](../../modules/permission-management.md) for more information
`IPermissionStore` is the interface that needs to be implemented to read the value of permissions from a persistence source, generally a database system. The Permission Management module implements it and is pre-installed in the application startup template. See the [Permission Management Module documentation](../../../modules/permission-management.md) for more information.
For resource-based permissions, `IResourcePermissionStore` serves the same purpose, storing and retrieving permissions for specific resource instances.
### AlwaysAllowAuthorizationService
`AlwaysAllowAuthorizationService` is a class that is used to bypass the authorization service. It is generally used in integration tests where you may want to disable the authorization system.
Use `IServiceCollection.AddAlwaysAllowAuthorization()` extension method to register the `AlwaysAllowAuthorizationService` to the [dependency injection](./dependency-injection.md) system:
Use `IServiceCollection.AddAlwaysAllowAuthorization()` extension method to register the `AlwaysAllowAuthorizationService` to the [dependency injection](../../dependency-injection.md) system:
```csharp
public override void ConfigureServices(ServiceConfigurationContext context)
@ -466,11 +492,24 @@ public static class CurrentUserExtensions
}
```
> If you use OpenIddict please see [Updating Claims in Access Token and ID Token](../../modules/openiddict#updating-claims-in-access_token-and-id_token).
> If you use OpenIddict please see [Updating Claims in Access Token and ID Token](../../../modules/openiddict#updating-claims-in-access_token-and-id_token).
## Resource-Based Authorization
While this document covers standard (global) permissions, ABP also supports **resource-based authorization** for fine-grained access control on specific resource instances. Resource-based authorization allows you to grant permissions for a specific document, project, or any other entity rather than granting a permission for all resources of that type.
**Example scenarios:**
* Allow users to edit **only their own** blog posts or documents
* Grant access to **specific projects** based on team membership
* Implement document sharing where **different users have different access levels** to the same document
> See the [Resource-Based Authorization](./resource-based-authorization.md) document for implementation details.
## See Also
* [Permission Management Module](../../modules/permission-management.md)
* [ASP.NET Core MVC / Razor Pages JavaScript Auth API](../ui/mvc-razor-pages/javascript-api/auth.md)
* [Permission Management in Angular UI](../ui/angular/Permission-Management.md)
* [Resource-Based Authorization](./resource-based-authorization.md)
* [Permission Management Module](../../../modules/permission-management.md)
* [ASP.NET Core MVC / Razor Pages JavaScript Auth API](../../ui/mvc-razor-pages/javascript-api/auth.md)
* [Permission Management in Angular UI](../../ui/angular/Permission-Management.md)
* [Video tutorial](https://abp.io/video-courses/essentials/authorization)

241
docs/en/framework/fundamentals/authorization/resource-based-authorization.md

@ -0,0 +1,241 @@
```json
//[doc-seo]
{
"Description": "Learn how to implement resource-based authorization in ABP Framework for fine-grained access control on specific resource instances like documents, projects, or any entity."
}
```
# Resource-Based Authorization
**Resource-Based Authorization** is a powerful feature that enables fine-grained access control based on specific resource instances. While the standard [authorization system](./index.md) grants permissions at a general level (e.g., "can edit documents"), resource-based authorization allows you to grant permissions for a **specific** document, project, or any other entity rather than granting a permission for all of them.
## When to Use Resource-Based Authorization?
Consider resource-based authorization when you need to:
* Allow users to edit **only their own blog posts or documents**
* Grant access to **specific projects** based on team membership
* Implement document sharing **where different users have different access levels to the same document**
* Control access to resources based on ownership or custom sharing rules
**Example Scenarios:**
Imagine a document management system where:
- User A can view and edit Document 1
- User B can only view Document 1
- User A has no access to Document 2
- User C can manage permissions for Document 2
This level of granular control is what resource-based authorization provides.
## Usage
Implementing resource-based authorization involves three main steps:
1. **Define** resource permissions in your `PermissionDefinitionProvider`
2. **Check** permissions using `IResourcePermissionChecker`
3. **Manage** permissions via the UI or programmatically using `IResourcePermissionManager`
### Defining Resource Permissions
Define resource permissions in your `PermissionDefinitionProvider` class using the `AddResourcePermission` method:
```csharp
namespace Acme.BookStore.Permissions;

public static class BookStorePermissions
{
    public const string GroupName = "BookStore";

    public static class Books
    {
        public const string Default = GroupName + ".Books";
        public const string ManagePermissions = Default + ".ManagePermissions";

        public static class Resources
        {
            public const string Name = "Acme.BookStore.Books.Book";
            public const string View = Name + ".View";
            public const string Edit = Name + ".Edit";
            public const string Delete = Name + ".Delete";
        }
    }
}
```
```csharp
using Acme.BookStore.Localization;
using Volo.Abp.Authorization.Permissions;
using Volo.Abp.Localization;
using Volo.Abp.MultiTenancy;

namespace Acme.BookStore.Permissions
{
    public class BookStorePermissionDefinitionProvider : PermissionDefinitionProvider
    {
        public override void Define(IPermissionDefinitionContext context)
        {
            var myGroup = context.AddGroup("BookStore");

            // Standard permissions
            myGroup.AddPermission(BookStorePermissions.Books.Default, L("Permission:Books"));

            // Permission to manage resource permissions (required)
            myGroup.AddPermission(BookStorePermissions.Books.ManagePermissions, L("Permission:Books:ManagePermissions"));

            // Resource-based permissions
            context.AddResourcePermission(
                name: BookStorePermissions.Books.Resources.View,
                resourceName: BookStorePermissions.Books.Resources.Name,
                managementPermissionName: BookStorePermissions.Books.ManagePermissions,
                displayName: L("Permission:Books:View")
            );

            context.AddResourcePermission(
                name: BookStorePermissions.Books.Resources.Edit,
                resourceName: BookStorePermissions.Books.Resources.Name,
                managementPermissionName: BookStorePermissions.Books.ManagePermissions,
                displayName: L("Permission:Books:Edit")
            );

            context.AddResourcePermission(
                name: BookStorePermissions.Books.Resources.Delete,
                resourceName: BookStorePermissions.Books.Resources.Name,
                managementPermissionName: BookStorePermissions.Books.ManagePermissions,
                displayName: L("Permission:Books:Delete"),
                multiTenancySide: MultiTenancySides.Host
            );
        }

        private static LocalizableString L(string name)
        {
            return LocalizableString.Create<BookStoreResource>(name);
        }
    }
}
```
The `AddResourcePermission` method takes the following parameters:
* `name`: A unique name for the resource permission.
* `resourceName`: An identifier for the resource type. This is typically the full name of the entity class (e.g., `Acme.BookStore.Books.Book`).
* `managementPermissionName`: A standard permission that controls who can manage resource permissions. Users with this permission can grant/revoke resource permissions for specific resources.
* `displayName`: (Optional) A localized display name shown in the UI.
* `multiTenancySide`: (Optional) Specifies on which side of a multi-tenant application this permission can be used. Accepts `MultiTenancySides.Host` (only for the host side), `MultiTenancySides.Tenant` (only for tenants), or `MultiTenancySides.Both` (default, available on both sides).
### Checking Resource Permissions
Use the `IAuthorizationService` service to check if a user/role/client has a specific permission for a resource:
```csharp
using System;
using System.Threading.Tasks;
using Volo.Abp.Application.Services;
using Volo.Abp.Authorization;
using Volo.Abp.Authorization.Permissions.Resources;

namespace Acme.BookStore.Books
{
    public class BookAppService : ApplicationService, IBookAppService
    {
        private readonly IBookRepository _bookRepository;

        public BookAppService(IBookRepository bookRepository)
        {
            _bookRepository = bookRepository;
        }

        public virtual async Task<BookDto> GetAsync(Guid id)
        {
            var book = await _bookRepository.GetAsync(id);

            // Check if the current user can view this specific book.
            // AuthorizationService is a property of the ApplicationService base class.
            var isGranted = await AuthorizationService.IsGrantedAsync(book, BookStorePermissions.Books.Resources.View);
            if (!isGranted)
            {
                throw new AbpAuthorizationException("You don't have permission to view this book.");
            }

            return ObjectMapper.Map<Book, BookDto>(book);
        }

        public virtual async Task UpdateAsync(Guid id, UpdateBookDto input)
        {
            var book = await _bookRepository.GetAsync(id);

            // Check if the current user can edit this specific book.
            var isGranted = await AuthorizationService.IsGrantedAsync(book, BookStorePermissions.Books.Resources.Edit);
            if (!isGranted)
            {
                throw new AbpAuthorizationException("You don't have permission to edit this book.");
            }

            book.Title = input.Title;
            book.Content = input.Content;

            await _bookRepository.UpdateAsync(book);
        }
    }
}
```
In this example, the `BookAppService` uses `IAuthorizationService` to check if the current user has the required permission for a specific book before performing the operation. The method takes the `Book` entity object and resource permission name as parameters.
#### IKeyedObject
The `IAuthorizationService` internally uses `IResourcePermissionChecker` to check resource permissions, and gets the resource key by calling the `GetObjectKey()` method of the `IKeyedObject` interface. All ABP entities implement the `IKeyedObject` interface, so you can directly pass entity objects to the `IsGrantedAsync` method.
> See the [Entities documentation](../../architecture/domain-driven-design/entities.md) for more information about the `IKeyedObject` interface.
#### IResourcePermissionChecker
You can also use the `IResourcePermissionChecker` service directly to check resource permissions. It provides more advanced capabilities, such as checking multiple permissions at once:
> You have to pass the resource key (obtained via `GetObjectKey()`) explicitly when using `IResourcePermissionChecker`.
```csharp
public class BookAppService : ApplicationService, IBookAppService
{
    private readonly IBookRepository _bookRepository;
    private readonly IResourcePermissionChecker _resourcePermissionChecker;

    public BookAppService(IBookRepository bookRepository, IResourcePermissionChecker resourcePermissionChecker)
    {
        _bookRepository = bookRepository;
        _resourcePermissionChecker = resourcePermissionChecker;
    }

    public async Task<BookPermissionsDto> GetPermissionsAsync(Guid id)
    {
        var book = await _bookRepository.GetAsync(id);

        var result = await _resourcePermissionChecker.IsGrantedAsync(
            new[]
            {
                BookStorePermissions.Books.Resources.View,
                BookStorePermissions.Books.Resources.Edit,
                BookStorePermissions.Books.Resources.Delete
            },
            BookStorePermissions.Books.Resources.Name,
            book.GetObjectKey()!);

        return new BookPermissionsDto
        {
            CanView = result.Result[BookStorePermissions.Books.Resources.View] == PermissionGrantResult.Granted,
            CanEdit = result.Result[BookStorePermissions.Books.Resources.Edit] == PermissionGrantResult.Granted,
            CanDelete = result.Result[BookStorePermissions.Books.Resources.Delete] == PermissionGrantResult.Granted
        };
    }
}
```
### Managing Resource Permissions
Once you have defined resource permissions, you need a way to grant or revoke them for specific users, roles, or clients. The [Permission Management Module](../../../modules/permission-management.md) provides the infrastructure for managing resource permissions:
- **UI Components**: Built-in modal dialogs for managing resource permissions on all supported UI frameworks (MVC/Razor Pages, Blazor, and Angular). These components allow administrators to grant or revoke permissions for users and roles on specific resource instances through a user-friendly interface.
- **`IResourcePermissionManager` Service**: A service for programmatically granting, revoking, and querying resource permissions at runtime. This is useful for scenarios like automatically granting permissions when a resource is created, implementing sharing functionality, or integrating with external systems.
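As a rough sketch of the programmatic side, granting the owner of a newly created book edit access might look like the following. Note that the exact `IResourcePermissionManager` method names and signatures are assumptions made here by analogy with `IPermissionManager`; check the Permission Management Module documentation for the actual API:

```csharp
using System;
using System.Threading.Tasks;
using Volo.Abp.Domain.Services;

public class BookManager : DomainService
{
    private readonly IResourcePermissionManager _resourcePermissionManager;

    public BookManager(IResourcePermissionManager resourcePermissionManager)
    {
        _resourcePermissionManager = resourcePermissionManager;
    }

    public async Task GrantOwnerPermissionsAsync(Book book, Guid ownerId)
    {
        // Hypothetical call (signature assumed by analogy with
        // IPermissionManager.SetForUserAsync): grants the "Edit"
        // permission on this specific book to the given user.
        await _resourcePermissionManager.SetForUserAsync(
            ownerId,
            BookStorePermissions.Books.Resources.Edit,
            BookStorePermissions.Books.Resources.Name,
            book.GetObjectKey()!,
            true
        );
    }
}
```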
> See the [Permission Management Module](../../../modules/permission-management.md#resource-permission-management-dialog) documentation for detailed information on using the UI components and the `IResourcePermissionManager` service.
## See Also
* [Authorization](./index.md)
* [Permission Management Module](../../../modules/permission-management.md)
* [Entities](../../architecture/domain-driven-design/entities.md)

2
docs/en/framework/fundamentals/dynamic-claims.md

@ -94,6 +94,6 @@ If you want to add your own dynamic claims contributor, you can create a class t
## See Also
* [Authorization](./authorization.md)
* [Authorization](./authorization/index.md)
* [Claims-based authorization in ASP.NET Core](https://learn.microsoft.com/en-us/aspnet/core/security/authorization/claims)
* [Mapping, customizing, and transforming claims in ASP.NET Core](https://learn.microsoft.com/en-us/aspnet/core/security/authentication/claims)

2
docs/en/framework/fundamentals/exception-handling.md

@ -322,7 +322,7 @@ The `context` object contains necessary information about the exception occurred
Some exception types are automatically thrown by the framework:
- `AbpAuthorizationException` is thrown if the current user has no permission to perform the requested operation. See [authorization](./authorization.md) for more.
- `AbpAuthorizationException` is thrown if the current user has no permission to perform the requested operation. See [authorization](./authorization/index.md) for more.
- `AbpValidationException` is thrown if the input of the current request is not valid. See [validation](./validation.md) for more.
- `EntityNotFoundException` is thrown if the requested entity is not available. This is mostly thrown by [repositories](../architecture/domain-driven-design/repositories.md).

Some files were not shown because too many files changed in this diff
