
Merge branch 'rel-10.2' into auto-merge/rel-10-1/4399

pull/25001/head
selman koc 4 weeks ago
committed by GitHub
parent
commit
16ee73ef43
No known key found for this signature in database GPG Key ID: B5690EEEBB952194
  1. 270
      .cursorrules
  2. 372
      .github/copilot-instructions.md
  3. 22
      .github/workflows/auto-pr.yml
  4. 658
      .github/workflows/update-studio-docs.yml
  5. 18
      Directory.Packages.props
  6. 2
      abp_io/AbpIoLocalization/AbpIoLocalization/Admin/Localization/Resources/de.json
  7. 2
      abp_io/AbpIoLocalization/AbpIoLocalization/Commercial/Localization/Resources/de.json
  8. 2
      abp_io/AbpIoLocalization/AbpIoLocalization/Www/Localization/Resources/de.json
  9. 3
      abp_io/AbpIoLocalization/AbpIoLocalization/Www/Localization/Resources/en.json
  10. 4
      common.props
  11. 342
      docs/en/Blog-Posts/2026-01-08 v10_1_Preview/POST.md
  12. BIN
      docs/en/Blog-Posts/2026-01-08 v10_1_Preview/ai-management-demo.gif
  13. BIN
      docs/en/Blog-Posts/2026-01-08 v10_1_Preview/ai-management-workspaces.png
  14. BIN
      docs/en/Blog-Posts/2026-01-08 v10_1_Preview/community-talk-2025-10-ai.png
  15. BIN
      docs/en/Blog-Posts/2026-01-08 v10_1_Preview/cover-image.png
  16. BIN
      docs/en/Blog-Posts/2026-01-08 v10_1_Preview/dotnet-conf-china-2025.png
  17. BIN
      docs/en/Blog-Posts/2026-01-08 v10_1_Preview/file-sharing.gif
  18. BIN
      docs/en/Blog-Posts/2026-01-08 v10_1_Preview/live-training-discount.png
  19. BIN
      docs/en/Blog-Posts/2026-01-08 v10_1_Preview/my-passkey.png
  20. BIN
      docs/en/Blog-Posts/2026-01-08 v10_1_Preview/passkey-login.png
  21. BIN
      docs/en/Blog-Posts/2026-01-08 v10_1_Preview/passkey-registration.png
  22. BIN
      docs/en/Blog-Posts/2026-01-08 v10_1_Preview/passkey-setting.png
  23. BIN
      docs/en/Blog-Posts/2026-01-08 v10_1_Preview/password-history-settings.png
  24. BIN
      docs/en/Blog-Posts/2026-01-08 v10_1_Preview/password-history-warning.png
  25. BIN
      docs/en/Blog-Posts/2026-01-08 v10_1_Preview/referral-program.png
  26. BIN
      docs/en/Blog-Posts/2026-01-08 v10_1_Preview/reset-password-error-modal.png
  27. BIN
      docs/en/Blog-Posts/2026-01-08 v10_1_Preview/set-password-error-modal.png
  28. BIN
      docs/en/Blog-Posts/2026-01-08 v10_1_Preview/studio-switch-to-preview.png
  29. 82
      docs/en/Blog-Posts/2026-02-23 v10_1_Release_Stable/POST.md
  30. BIN
      docs/en/Blog-Posts/2026-02-23 v10_1_Release_Stable/cover-image.png
  31. BIN
      docs/en/Blog-Posts/2026-02-23 v10_1_Release_Stable/upgrade-abp-packages.png
  32. 377
      docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/articles.md
  33. BIN
      docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/agent-context.png
  34. BIN
      docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/agent-ecosystem.png
  35. BIN
      docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/agent-state-flow.png
  36. BIN
      docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image-1.png
  37. BIN
      docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image-2.png
  38. BIN
      docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image-3.png
  39. BIN
      docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image-4.png
  40. BIN
      docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image.png
  41. BIN
      docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/mcp-client-server-1200x700.png
  42. BIN
      docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/orchestrator-a2a-routing-1200x700.png
  43. BIN
      docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/orchestrator-researcher-seq-1200x700.png
  44. BIN
      docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/sequential-agent-context-flow-1200x700.png
  45. 2
      docs/en/Community-Articles/2025-09-02-training-campaign/post.md
  46. 4
      docs/en/Community-Articles/2025-12-18-Announcement-AIMAnagement/post.md
  47. BIN
      docs/en/Community-Articles/2025-12-18-Implementing-Multiple-Global-Query-Filters-With-Entity-Framework-Core/images/cover.png
  48. 728
      docs/en/Community-Articles/2025-12-18-Implementing-Multiple-Global-Query-Filters-With-Entity-Framework-Core/post.md
  49. 1
      docs/en/Community-Articles/2025-12-18-Implementing-Multiple-Global-Query-Filters-With-Entity-Framework-Core/summary.md
  50. 170
      docs/en/Community-Articles/2026-01-11/article.md
  51. BIN
      docs/en/Community-Articles/2026-01-11/event-driven-systems.png
  52. BIN
      docs/en/Community-Articles/2026-01-11/message-driven-systems.png
  53. 17
      docs/en/Community-Articles/2026-01-16-meet-abio-at-ndc-london/post.md
  54. BIN
      docs/en/Community-Articles/2026-01-19-Trend-PDF-Libraries-For-CSharp/PuppeteerSharp.png
  55. BIN
      docs/en/Community-Articles/2026-01-19-Trend-PDF-Libraries-For-CSharp/QuestPDF.png
  56. 153
      docs/en/Community-Articles/2026-01-19-Trend-PDF-Libraries-For-CSharp/article.md
  57. BIN
      docs/en/Community-Articles/2026-01-19-Trend-PDF-Libraries-For-CSharp/cover.png
  58. BIN
      docs/en/Community-Articles/2026-01-19-Trend-PDF-Libraries-For-CSharp/itext.jpg
  59. BIN
      docs/en/Community-Articles/2026-01-19-Trend-PDF-Libraries-For-CSharp/pdfsharp.png
  60. BIN
      docs/en/Community-Articles/2026-01-19-Trend-PDF-Libraries-For-CSharp/playwright.png
  61. 167
      docs/en/Community-Articles/2026-01-24-How-AI-Is-Changing-Developers/POST.md
  62. BIN
      docs/en/Community-Articles/2026-01-24-How-AI-Is-Changing-Developers/image.png
  63. 50
      docs/en/Community-Articles/2026-02-02-ndc-london-article/post.md
  64. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/0.png
  65. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/1.png
  66. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/2.png
  67. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/3.png
  68. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/4.png
  69. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/4_1.png
  70. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/4_2.png
  71. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/5.png
  72. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/6.png
  73. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/7.png
  74. 325
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/Post.md
  75. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/cover.png
  76. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/image-20260206003328436.png
  77. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/image-20260206004046914.png
  78. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/image-20260206012506799.png
  79. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk.png
  80. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_1.png
  81. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_10.png
  82. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_11.png
  83. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_12.png
  84. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_2.png
  85. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_3.png
  86. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_4.png
  87. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_5.png
  88. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_6.png
  89. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_7.png
  90. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_8.png
  91. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_9.png
  92. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/youtube-cover-1.png
  93. BIN
      docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/youtube-cover-2.png
  94. BIN
      docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/demo.gif
  95. BIN
      docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/images/abp-studio-ai-management.png
  96. BIN
      docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/images/ai-management-widget.png
  97. BIN
      docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/images/ai-management-workspaces.png
  98. BIN
      docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/images/example-comment.png
  99. 488
      docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/post.md
  100. BIN
      docs/en/Community-Articles/2026-02-19-ABP-Framework-Hidden-Magic/images/cover.png

270
.cursorrules

@ -0,0 +1,270 @@
# ABP Framework – Cursor Rules
# Scope: ABP Framework repository (abpframework/abp) — for developing ABP itself, not ABP-based applications.
# Goal: Enforce ABP module architecture best practices (DDD, layering, DB/ORM independence),
# maintain backward compatibility, ensure extensibility, and align with ABP contribution guidelines.
## Global Defaults
- Follow existing patterns in this repository first. Before generating new code, search for similar implementations and mirror their structure, naming, and conventions.
- Prefer minimal, focused diffs. Avoid drive-by refactors and formatting churn.
- Preserve public APIs. Avoid breaking changes unless explicitly requested and justified.
- Keep layers clean. Do not introduce forbidden dependencies between packages.
## Module / Package Architecture (Layering)
- Use a layered module structure with explicit dependencies:
  - *.Domain.Shared: constants, enums, shared types safe for all layers and 3rd-party clients. MUST NOT contain entities, repositories, domain services, or business objects.
  - *.Domain: entities/aggregate roots, repository interfaces, domain services.
  - *.Application.Contracts: application service interfaces and DTOs.
  - *.Application: application service implementations.
  - *.EntityFrameworkCore / *.MongoDb: ORM integration packages depend on *.Domain only. MUST NOT depend on other layers.
  - *.HttpApi: REST controllers. MUST depend ONLY on *.Application.Contracts (NOT *.Application).
  - *.HttpApi.Client: remote client proxies. MUST depend ONLY on *.Application.Contracts.
  - *.Web: UI. MUST depend ONLY on *.HttpApi.
- Enforce dependency direction:
  - Web -> HttpApi -> Application.Contracts
  - Application -> Domain + Application.Contracts
  - Domain -> Domain.Shared
  - ORM integration -> Domain
- Do not leak web concerns into application/domain.
## Domain Layer – Entities & Aggregate Roots
- Define entities in the domain layer.
- Entities must be valid at creation:
  - Provide a primary constructor that enforces invariants.
  - Always include a protected parameterless constructor for ORMs.
  - Always initialize sub-collections in the primary constructor.
  - Do NOT generate Guid keys inside constructors; accept `id` and generate using `IGuidGenerator` from the calling code.
- Make members `virtual` where appropriate (ORM/proxy compatibility).
- Protect consistency:
  - Use non-public setters (private/protected/internal) when needed.
  - Provide meaningful domain methods for state transitions; prefer returning `this` from setters when applicable.
- Aggregate roots:
  - Always use a single `Id` property. Do NOT use composite keys.
  - Prefer `Guid` keys for aggregate roots.
  - Inherit from `AggregateRoot<TKey>` or audited base classes as required.
- Aggregate boundaries:
  - Keep aggregates small. Avoid large sub-collections unless necessary.
- References:
  - Reference other aggregate roots by Id only.
  - Do NOT add navigation properties to other aggregate roots.
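A minimal sketch of an aggregate root following the rules above. The `Issue`/`IssueLabel` names are illustrative, not from this repository; `AggregateRoot<TKey>` and `Check` are real ABP types:

```csharp
public class Issue : AggregateRoot<Guid>
{
    public virtual string Title { get; private set; }        // non-public setter protects consistency
    public virtual Guid RepositoryId { get; private set; }   // reference another aggregate by Id only
    public virtual ICollection<IssueLabel> Labels { get; private set; }

    protected Issue() { } // parameterless constructor for the ORM

    public Issue(Guid id, Guid repositoryId, string title)
        : base(id) // id is generated by the caller via IGuidGenerator, never inside this constructor
    {
        RepositoryId = repositoryId;
        SetTitle(title);
        Labels = new Collection<IssueLabel>(); // sub-collections initialized in the primary constructor
    }

    public virtual Issue SetTitle(string title)
    {
        Title = Check.NotNullOrWhiteSpace(title, nameof(title)); // invariant enforced on every transition
        return this; // returning 'this' enables fluent state transitions
    }
}
```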
## Repositories
- Define repository interfaces in the domain layer.
- Create one dedicated repository interface per aggregate root (e.g., `IProductRepository`).
- Public repository interfaces exposed by modules:
  - SHOULD inherit from `IBasicRepository<TEntity, TKey>` (or `IReadOnlyRepository<...>` when suitable).
  - SHOULD NOT expose `IQueryable` in the public contract.
  - Internal implementations MAY use `IRepository<TEntity, TKey>` and `IQueryable` as needed.
- Do NOT define repositories for non-aggregate-root entities.
- Repository method conventions:
  - All methods async.
  - Include optional `CancellationToken cancellationToken = default` in every method.
  - For single-entity returning methods: include `bool includeDetails = true`.
  - For list returning methods: include `bool includeDetails = false`.
  - Do NOT return composite projection classes like `UserWithRoles`. Use `includeDetails` for eager-loading.
  - Avoid projection-only view models from repositories by default; only allow when performance is critical.
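These method conventions can be sketched as a repository contract (the `Issue` entity and method names are illustrative):

```csharp
public interface IIssueRepository : IBasicRepository<Issue, Guid>
{
    Task<Issue> FindByTitleAsync(
        string title,
        bool includeDetails = true,            // single-entity methods: details loaded by default
        CancellationToken cancellationToken = default);

    Task<List<Issue>> GetListByRepositoryIdAsync(
        Guid repositoryId,
        bool includeDetails = false,           // list methods: details off by default
        CancellationToken cancellationToken = default);
}
```

Note there is no `IQueryable` in the contract and no composite projection type; callers opt into eager loading via `includeDetails`.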
## Domain Services
- Define domain services in the domain layer.
- Default: do NOT create interfaces for domain services unless necessary (mocking/multiple implementations).
- Naming: use `*Manager` suffix.
- Domain service methods:
  - Focus on operations that enforce domain invariants and business rules.
  - Query methods are acceptable when they encapsulate domain-specific lookup logic (e.g., normalized lookups, caching, complex resolution). Simple queries belong in repositories.
  - Define methods that mutate state and enforce domain rules.
  - Use specific, intention-revealing names (avoid generic `UpdateXAsync`).
  - Accept valid domain objects as parameters; do NOT accept/return DTOs.
  - On rule violations, throw `BusinessException` (or custom business exceptions).
  - Use unique, namespaced error codes suitable for localization (e.g., `IssueTracking:ConcurrentOpenIssueLimit`).
  - Do NOT depend on authenticated user logic; pass required values from application layer.
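A hedged sketch of a domain service applying these rules; `DomainService` and `BusinessException` are real ABP types, while `IIssueRepository`, `GetOpenCountForUserAsync`, and `AssignTo` are assumed members for illustration:

```csharp
public class IssueManager : DomainService // *Manager suffix, no interface by default
{
    private readonly IIssueRepository _issueRepository;

    public IssueManager(IIssueRepository issueRepository)
    {
        _issueRepository = issueRepository;
    }

    // Intention-revealing name; accepts domain objects, not DTOs.
    // The user id is passed in: no dependency on the authenticated user here.
    public virtual async Task AssignToAsync(Issue issue, Guid assigneeUserId)
    {
        var openCount = await _issueRepository.GetOpenCountForUserAsync(assigneeUserId);
        if (openCount >= 3)
        {
            // namespaced error code, suitable for localization
            throw new BusinessException("IssueTracking:ConcurrentOpenIssueLimit")
                .WithData("UserId", assigneeUserId);
        }
        issue.AssignTo(assigneeUserId);
    }
}
```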
## Application Services (Contracts + Implementation)
### Contracts
- Define one interface per application service in *.Application.Contracts.
- Interfaces must inherit from `IApplicationService`.
- Naming: `I*AppService`.
- Do NOT accept/return entities. Use DTOs and primitive parameters.
### Method Naming & Shapes
- All service methods async and end with `Async`.
- Do not repeat entity names in method names (use `GetAsync`, not `GetProductAsync`).
- Standard CRUD:
  - `GetAsync(Guid id)` returns a detailed DTO.
  - `GetListAsync(QueryDto queryDto)` returns a list of detailed DTOs.
  - `CreateAsync(CreateDto dto)` returns detailed DTO.
  - `UpdateAsync(Guid id, UpdateDto dto)` returns detailed DTO (id MUST NOT be inside update DTO).
  - `DeleteAsync(Guid id)` returns void/Task.
- `GetListAsync` query DTO:
  - Filtering/sorting/paging fields optional with defaults.
  - Enforce a maximum page size for performance.
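The standard CRUD shape can be sketched as a contract (names are illustrative; `IApplicationService`, `PagedResultDto`, and `PagedAndSortedResultRequestDto` are real ABP types):

```csharp
public interface IIssueAppService : IApplicationService
{
    Task<IssueDto> GetAsync(Guid id);
    Task<PagedResultDto<IssueDto>> GetListAsync(GetIssueListDto input);
    Task<IssueDto> CreateAsync(CreateIssueDto input);
    Task<IssueDto> UpdateAsync(Guid id, UpdateIssueDto input); // id stays outside the update DTO
    Task DeleteAsync(Guid id);
}

public class GetIssueListDto : PagedAndSortedResultRequestDto // paging/sorting fields with defaults
{
    public string? Filter { get; set; } // optional filtering; MaxResultCount caps the page size
}
```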
### DTO Usage
- Inputs:
  - Do not include unused properties.
  - Do NOT share input DTOs between methods.
  - Do NOT use inheritance between input DTOs (except rare abstract base DTO cases; be very cautious).
### Implementation
- Application layer must be independent of web.
- Implement interfaces in *.Application, name `ProductAppService` for `IProductAppService`.
- Inherit from `ApplicationService`.
- Make all public methods `virtual`.
- Avoid private helper methods; prefer `protected virtual` helpers for extensibility.
- Data access:
  - Use dedicated repositories (e.g., `IProductRepository`).
  - Do NOT use generic repositories.
  - Do NOT put LINQ/SQL queries inside application service methods; repositories perform queries.
- Entity mutation:
  - Load required entities from repositories.
  - Mutate using domain methods.
  - Call repository `UpdateAsync` after updates (do not assume change tracking).
- Extra properties:
  - Use `MapExtraPropertiesTo` or configure object mapper for `MapExtraProperties`.
- Files:
  - Do NOT use web types like `IFormFile` or `Stream` in application services.
  - Controllers handle upload; pass `byte[]` (or similar) to application services.
- Cross-application-service calls:
  - Do NOT call other application services within the same module.
  - For reuse, push logic into domain layer or extract shared helpers carefully.
  - You MAY call other modules’ application services only via their Application.Contracts.
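The implementation rules above can be sketched as follows (`ApplicationService` and `ObjectMapper` are real ABP members; the `Issue*` names and `MapToDto` helper are illustrative):

```csharp
public class IssueAppService : ApplicationService, IIssueAppService
{
    private readonly IIssueRepository _issueRepository; // dedicated repository, not a generic one

    public IssueAppService(IIssueRepository issueRepository)
    {
        _issueRepository = issueRepository;
    }

    public virtual async Task<IssueDto> UpdateAsync(Guid id, UpdateIssueDto input) // public methods are virtual
    {
        var issue = await _issueRepository.GetAsync(id); // load via repository; no LINQ in the app service
        issue.SetTitle(input.Title);                     // mutate through a domain method
        await _issueRepository.UpdateAsync(issue);       // explicit update; do not assume change tracking
        return MapToDto(issue);
    }

    protected virtual IssueDto MapToDto(Issue issue) // protected virtual helper, overridable by users
    {
        return ObjectMapper.Map<Issue, IssueDto>(issue);
    }
}
```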
## DTO Conventions
- Define DTOs in *.Application.Contracts.
- Prefer ABP base DTO types (`EntityDto<TKey>`, audited DTOs).
- For aggregate roots, prefer extensible DTO base types so extra properties can map.
- DTO properties: public getters/setters.
- Input DTO validation:
  - Use data annotations.
  - Reuse constants from Domain.Shared wherever possible.
- Avoid logic in DTOs; only implement `IValidatableObject` when necessary.
- Do NOT use `[Serializable]` attribute (BinaryFormatter is obsolete); ABP uses JSON serialization.
- Output DTO strategy:
  - Prefer a Basic DTO and a Detailed DTO; avoid many variants.
  - Detailed DTOs: include reference details as nested basic DTOs; avoid duplicating raw FK ids unnecessarily.
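A hedged sketch of the Basic/Detailed DTO split; `EntityDto<TKey>` and `ExtensibleEntityDto<TKey>` are real ABP base types, while `MilestoneDto` and `IssueConsts` are assumed for illustration:

```csharp
// Basic DTO: for list views and as a nested reference.
public class IssueDto : EntityDto<Guid>
{
    public string Title { get; set; }
}

// Detailed DTO: extensible so extra properties can map; reference details
// appear as a nested basic DTO rather than a raw FK id.
public class IssueWithDetailsDto : ExtensibleEntityDto<Guid>
{
    public string Title { get; set; }
    public MilestoneDto Milestone { get; set; }
}

public class CreateIssueDto // validation via data annotations, no logic
{
    [Required]
    [StringLength(IssueConsts.MaxTitleLength)] // constant reused from Domain.Shared
    public string Title { get; set; }
}
```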
## EF Core Integration
- Define a separate DbContext interface + class per module.
- Do NOT rely on lazy loading; do NOT enable lazy loading.
- DbContext interface:
  - Inherit from `IEfCoreDbContext`.
  - Add `[ConnectionStringName("...")]`.
  - Expose `DbSet<TEntity>` ONLY for aggregate roots.
  - Do NOT include setters in the interface.
- DbContext class:
  - Inherit `AbpDbContext<TDbContext>`.
  - Add `[ConnectionStringName("...")]` and implement the interface.
- Table prefix/schema:
  - Provide static `TablePrefix` and `Schema` defaulted from constants.
  - Use short prefixes; `Abp` prefix reserved for ABP core modules.
  - Default schema should be `null`.
- Model mapping:
  - Do NOT configure entities directly inside `OnModelCreating`.
  - Create `ModelBuilder` extension method `ConfigureX()` and call it.
  - Call `b.ConfigureByConvention()` for each entity.
- Repository implementations:
  - Inherit from `EfCoreRepository<TDbContextInterface, TEntity, TKey>`.
  - Use DbContext interface as generic parameter.
  - Pass cancellation tokens using `GetCancellationToken(cancellationToken)`.
  - Implement `IncludeDetails(include)` extension per aggregate root with sub-collections.
  - Override `WithDetailsAsync()` where needed.
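A sketch of the model-mapping and `IncludeDetails` patterns, under illustrative names (`IssueTracking*`, `Labels`, `IssueId` are assumptions; `ConfigureByConvention` is a real ABP extension):

```csharp
public static class IssueTrackingDbContextModelCreatingExtensions
{
    public static void ConfigureIssueTracking(this ModelBuilder builder) // called from OnModelCreating
    {
        builder.Entity<Issue>(b =>
        {
            b.ToTable(IssueTrackingDbContext.TablePrefix + "Issues", IssueTrackingDbContext.Schema);
            b.ConfigureByConvention(); // base mappings: audit fields, extra properties, concurrency stamp
            b.HasMany(x => x.Labels).WithOne().HasForeignKey(x => x.IssueId);
        });
    }
}

public static class IssueQueryableExtensions
{
    // One IncludeDetails extension per aggregate root with sub-collections.
    public static IQueryable<Issue> IncludeDetails(this IQueryable<Issue> queryable, bool include = true)
    {
        return include ? queryable.Include(x => x.Labels) : queryable;
    }
}
```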
## MongoDB Integration
- Define a separate MongoDbContext interface + class per module.
- MongoDbContext interface:
  - Inherit from `IAbpMongoDbContext`.
  - Add `[ConnectionStringName("...")]`.
  - Expose `IMongoCollection<TEntity>` ONLY for aggregate roots.
- MongoDbContext class:
  - Inherit `AbpMongoDbContext` and implement the interface.
- Collection prefix:
  - Provide static `CollectionPrefix` defaulted from constants.
  - Use short prefixes; `Abp` prefix reserved for ABP core modules.
- Mapping:
  - Do NOT configure directly inside `CreateModel`.
  - Create `IMongoModelBuilder` extension method `ConfigureX()` and call it.
- Repository implementations:
  - Inherit from `MongoDbRepository<TMongoDbContextInterface, TEntity, TKey>`.
  - Pass cancellation tokens using `GetCancellationToken(cancellationToken)`.
  - Ignore `includeDetails` for MongoDB in most cases (documents load sub-collections).
  - Prefer `GetQueryableAsync()` to ensure ABP data filters are applied.
## ABP Module Classes
- Every package must have exactly one `AbpModule` class.
- Naming: `Abp[ModuleName][Layer]Module` (e.g., `AbpIdentityDomainModule`, `AbpIdentityApplicationModule`).
- Use `[DependsOn(typeof(...))]` to declare module dependencies explicitly.
- Override `ConfigureServices` for DI registration and configuration.
- Override `OnApplicationInitialization` sparingly; prefer `ConfigureServices` when possible.
- Each module must be usable standalone; avoid hidden cross-module coupling.
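A hedged sketch of a module class for an illustrative `IssueTracking` application layer (`AbpModule`, `DependsOn`, and `AbpAutoMapperOptions` are real ABP types; the module names are assumptions):

```csharp
[DependsOn(
    typeof(IssueTrackingDomainModule),                // explicit module dependencies
    typeof(IssueTrackingApplicationContractsModule))]
public class IssueTrackingApplicationModule : AbpModule
{
    public override void ConfigureServices(ServiceConfigurationContext context)
    {
        // Prefer doing registration/configuration here rather than in OnApplicationInitialization.
        Configure<AbpAutoMapperOptions>(options =>
        {
            options.AddMaps<IssueTrackingApplicationModule>();
        });
    }
}
```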
## Framework Extensibility
- All public and protected members should be `virtual` for inheritance-based extensibility.
- Prefer `protected virtual` over `private` for helper methods to allow overriding.
- Use `[Dependency(ReplaceServices = true)]` patterns for services intended to be replaceable.
- Provide extension points via interfaces and virtual methods.
- Document extension points with XML comments explaining intended usage.
- Consider providing `*Options` classes for configuration-based extensibility.
## Backward Compatibility
- Do NOT remove or rename public API members without a deprecation cycle.
- Use `[Obsolete("Message. Use X instead.")]` with clear migration guidance before removal.
- Maintain binary and source compatibility within major versions.
- Add new optional parameters with defaults; do not change existing method signatures.
- When adding new abstract members to base classes, provide default implementations if possible.
- Prefer adding new interfaces over modifying existing ones.
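The deprecation-cycle and optional-parameter rules might look like this in practice (a sketch; the `IssueManager` members are illustrative):

```csharp
public class IssueManager : DomainService
{
    // Old member kept through a deprecation cycle, with clear migration guidance.
    [Obsolete("Use AssignToAsync instead. This overload will be removed in a future major version.")]
    public virtual Task AssignAsync(Issue issue, Guid userId)
    {
        return AssignToAsync(issue, userId); // forward to the replacement
    }

    // New behavior added as an optional parameter with a default,
    // so existing call sites keep compiling (source compatibility).
    public virtual Task AssignToAsync(Issue issue, Guid userId, bool notify = true)
    {
        // ... assignment logic elided ...
        return Task.CompletedTask;
    }
}
```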
## Localization Resources
- Define localization resources in Domain.Shared.
- Resource class naming: `[ModuleName]Resource` (e.g., `IdentityResource`, `PermissionManagementResource`).
- JSON files under `/Localization/[ModuleName]/` directory.
- Use `LocalizableString.Create<TResource>("Key")` for localizable exceptions and messages.
- All user-facing strings must be localized; no hardcoded English text in code.
- Error codes should be namespaced: `ModuleName:ErrorCode` (e.g., `Identity:UserNameAlreadyExists`).
## Settings & Features
- Define settings in `*SettingDefinitionProvider` in Domain.Shared or Domain.
- Setting names must follow `Abp.[ModuleName].[SettingName]` convention.
- Define features in `*FeatureDefinitionProvider` in Domain.Shared.
- Feature names must follow `[ModuleName].[FeatureName]` convention.
- Use constants for setting/feature names; never hardcode strings.
## Permissions
- Define permissions in `*PermissionDefinitionProvider` in Application.Contracts.
- Permission names must follow `[ModuleName].[Permission]` convention.
- Use constants for permission names (e.g., `IdentityPermissions.Users.Create`).
- Group related permissions logically.
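A sketch of a permission definition provider following these conventions (`PermissionDefinitionProvider` and its context API are real ABP types; the `IssueTrackingPermissions` constants class is an assumption):

```csharp
public class IssueTrackingPermissionDefinitionProvider : PermissionDefinitionProvider
{
    public override void Define(IPermissionDefinitionContext context)
    {
        // Group related permissions; all names come from constants, never hardcoded strings.
        var group = context.AddGroup(IssueTrackingPermissions.GroupName);

        var issues = group.AddPermission(IssueTrackingPermissions.Issues.Default); // "IssueTracking.Issues"
        issues.AddChild(IssueTrackingPermissions.Issues.Create);                   // "IssueTracking.Issues.Create"
        issues.AddChild(IssueTrackingPermissions.Issues.Delete);                   // "IssueTracking.Issues.Delete"
    }
}
```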
## Event Bus & Distributed Events
- Use `ILocalEventBus` for intra-module communication within the same process.
- Use `IDistributedEventBus` for cross-module or cross-service communication.
- Define Event Transfer Objects (ETOs) in Domain.Shared for distributed events.
- ETO naming: `[EntityName][Action]Eto` (e.g., `UserCreatedEto`, `OrderCompletedEto`).
- Event handlers belong in the Application layer.
- ETOs should be simple, serializable, and contain only primitive types or nested ETOs.
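An ETO and handler pair sketching these rules (`IDistributedEventHandler`, `EventName`, and `ITransientDependency` are real ABP types; the event name and payload are illustrative):

```csharp
// Defined in Domain.Shared; simple, serializable, primitives only.
[EventName("IssueTracking.Issue.Created")]
public class IssueCreatedEto
{
    public Guid Id { get; set; }
    public string Title { get; set; }
}

// Handler lives in the Application layer.
public class IssueCreatedEventHandler
    : IDistributedEventHandler<IssueCreatedEto>, ITransientDependency
{
    public Task HandleEventAsync(IssueCreatedEto eventData)
    {
        // React to the event, e.g., send a notification or update a read model.
        return Task.CompletedTask;
    }
}
```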
## Testing
- Unit tests: `*.Tests` projects for isolated logic testing with mocked dependencies.
- Integration tests: `*.EntityFrameworkCore.Tests` / `*.MongoDB.Tests` for repository and DB tests.
- Use `AbpIntegratedTest<TModule>` or `AbpApplicationTestBase<TModule>` base classes.
- Test modules should use `[DependsOn]` on the module under test.
- Use `Shouldly` assertions (ABP convention).
- Test both EF Core and MongoDB implementations when the module supports both.
- Include tests for permission checks, validation, and edge cases.
- Name test methods: `MethodName_Scenario_ExpectedResult` or `Should_ExpectedBehavior_When_Condition`.
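An integration-test sketch under these conventions; Shouldly's assertion API is real, while `IssueTrackingApplicationTestBase` and the service under test are illustrative:

```csharp
public class IssueAppService_Tests : IssueTrackingApplicationTestBase // module-specific test base
{
    private readonly IIssueAppService _issueAppService;

    public IssueAppService_Tests()
    {
        _issueAppService = GetRequiredService<IIssueAppService>();
    }

    [Fact]
    public async Task Should_Create_Issue_When_Input_Is_Valid() // Should_ExpectedBehavior_When_Condition
    {
        var result = await _issueAppService.CreateAsync(new CreateIssueDto { Title = "Test issue" });

        result.ShouldNotBeNull();            // Shouldly assertions, per ABP convention
        result.Title.ShouldBe("Test issue");
    }
}
```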
## Contribution Discipline (PR / Issues / Tests)
- Before significant changes, align via GitHub issue/discussion.
- PRs:
  - Keep changes scoped and reviewable.
  - Add/update unit/integration tests relevant to the change.
  - Build and run tests for the impacted area when possible.
- Localization:
  - Prefer the `abp translate` workflow for adding missing translations (generate `abp-translation.json`, fill, apply, then PR).
## Review Checklist
- Layer dependencies respected (no forbidden references).
- No `IQueryable` or generic repository usage leaking into application/domain.
- Entities maintain invariants; Guid id generation not inside constructors.
- Repositories follow async + CancellationToken + includeDetails conventions.
- No web types in application services.
- DTOs in contracts, serializable, validated, minimal, no logic.
- EF/Mongo integration follows context + mapping + repository patterns.
- Minimal diff; no unnecessary API surface expansion.

372
.github/copilot-instructions.md

@ -0,0 +1,372 @@
# ABP Framework – GitHub Copilot Instructions
> **Scope**: ABP Framework repository (abpframework/abp) — for developing ABP itself, not ABP-based applications.
>
> **Goal**: Enforce ABP module architecture best practices (DDD, layering, DB/ORM independence), maintain backward compatibility, ensure extensibility, and align with ABP contribution guidelines.
---
## Global Defaults
- Follow existing patterns in this repository first. Before generating new code, search for similar implementations and mirror their structure, naming, and conventions.
- Prefer minimal, focused diffs. Avoid drive-by refactors and formatting churn.
- Preserve public APIs. Avoid breaking changes unless explicitly requested and justified.
- Keep layers clean. Do not introduce forbidden dependencies between packages.
---
## Module / Package Architecture (Layering)
Use a layered module structure with explicit dependencies:
| Layer | Purpose | Allowed Dependencies |
|-------|---------|---------------------|
| `*.Domain.Shared` | Constants, enums, shared types safe for all layers and 3rd-party clients. MUST NOT contain entities, repositories, domain services, or business objects. | None |
| `*.Domain` | Entities/aggregate roots, repository interfaces, domain services. | Domain.Shared |
| `*.Application.Contracts` | Application service interfaces and DTOs. | Domain.Shared |
| `*.Application` | Application service implementations. | Domain, Application.Contracts |
| `*.EntityFrameworkCore` / `*.MongoDb` | ORM integration packages. MUST NOT depend on other layers. | Domain only |
| `*.HttpApi` | REST controllers. MUST depend ONLY on Application.Contracts (NOT Application). | Application.Contracts |
| `*.HttpApi.Client` | Remote client proxies. MUST depend ONLY on Application.Contracts. | Application.Contracts |
| `*.Web` | UI layer. MUST depend ONLY on HttpApi. | HttpApi |
### Dependency Direction
```
Web -> HttpApi -> Application.Contracts
Application -> Domain + Application.Contracts
Domain -> Domain.Shared
ORM integration -> Domain
```
Do not leak web concerns into application/domain.
---
## Domain Layer – Entities & Aggregate Roots
- Define entities in the domain layer.
- Entities must be valid at creation:
  - Provide a primary constructor that enforces invariants.
  - Always include a `protected` parameterless constructor for ORMs.
  - Always initialize sub-collections in the primary constructor.
  - Do NOT generate Guid keys inside constructors; accept `id` and generate using `IGuidGenerator` from the calling code.
- Make members `virtual` where appropriate (ORM/proxy compatibility).
- Protect consistency:
  - Use non-public setters (`private`/`protected`/`internal`) when needed.
  - Provide meaningful domain methods for state transitions.
### Aggregate Roots
- Always use a single `Id` property. Do NOT use composite keys.
- Prefer `Guid` keys for aggregate roots.
- Inherit from `AggregateRoot<TKey>` or audited base classes as required.
- Keep aggregates small. Avoid large sub-collections unless necessary.
### References
- Reference other aggregate roots by Id only.
- Do NOT add navigation properties to other aggregate roots.
---
## Repositories
- Define repository interfaces in the domain layer.
- Create one dedicated repository interface per aggregate root (e.g., `IProductRepository`).
- Public repository interfaces exposed by modules:
  - SHOULD inherit from `IBasicRepository<TEntity, TKey>` (or `IReadOnlyRepository<...>` when suitable).
  - SHOULD NOT expose `IQueryable` in the public contract.
  - Internal implementations MAY use `IRepository<TEntity, TKey>` and `IQueryable` as needed.
- Do NOT define repositories for non-aggregate-root entities.
### Method Conventions
- All methods async.
- Include optional `CancellationToken cancellationToken = default` in every method.
- For single-entity returning methods: include `bool includeDetails = true`.
- For list returning methods: include `bool includeDetails = false`.
- Do NOT return composite projection classes like `UserWithRoles`. Use `includeDetails` for eager-loading.
- Avoid projection-only view models from repositories by default; only allow when performance is critical.
---
## Domain Services
- Define domain services in the domain layer.
- Default: do NOT create interfaces for domain services unless necessary (mocking/multiple implementations).
- Naming: use `*Manager` suffix.
### Method Guidelines
- Focus on operations that enforce domain invariants and business rules.
- Query methods are acceptable when they encapsulate domain-specific lookup logic (e.g., normalized lookups, caching, complex resolution). Simple queries belong in repositories.
- Define methods that mutate state and enforce domain rules.
- Use specific, intention-revealing names (avoid generic `UpdateXAsync`).
- Accept valid domain objects as parameters; do NOT accept/return DTOs.
- On rule violations, throw `BusinessException` (or custom business exceptions).
- Use unique, namespaced error codes suitable for localization (e.g., `IssueTracking:ConcurrentOpenIssueLimit`).
- Do NOT depend on authenticated user logic; pass required values from application layer.
---
## Application Services
### Contracts
- Define one interface per application service in `*.Application.Contracts`.
- Interfaces must inherit from `IApplicationService`.
- Naming: `I*AppService`.
- Do NOT accept/return entities. Use DTOs and primitive parameters.
### Method Naming & Shapes
- All service methods async and end with `Async`.
- Do not repeat entity names in method names (use `GetAsync`, not `GetProductAsync`).
**Standard CRUD:**
```csharp
Task<ProductDto> GetAsync(Guid id);
Task<PagedResultDto<ProductDto>> GetListAsync(GetProductListInput input);
Task<ProductDto> CreateAsync(CreateProductInput input);
Task<ProductDto> UpdateAsync(Guid id, UpdateProductInput input); // id NOT inside DTO
Task DeleteAsync(Guid id);
```
### DTO Usage (Inputs)
- Do not include unused properties.
- Do NOT share input DTOs between methods.
- Do NOT use inheritance between input DTOs (except rare abstract base DTO cases; be very cautious).
### Implementation
- Application layer must be independent of web.
- Implement interfaces in `*.Application`, name `ProductAppService` for `IProductAppService`.
- Inherit from `ApplicationService`.
- Make all public methods `virtual`.
- Avoid private helper methods; prefer `protected virtual` helpers for extensibility.
### Data Access
- Use dedicated repositories (e.g., `IProductRepository`).
- Do NOT put LINQ/SQL queries inside application service methods; repositories perform queries.
### Entity Mutation
- Load required entities from repositories.
- Mutate using domain methods.
- Call repository `UpdateAsync` after updates (do not assume change tracking).
### Files
- Do NOT use web types like `IFormFile` or `Stream` in application services.
- Controllers handle upload; pass `byte[]` (or similar) to application services.
### Cross-Service Calls
- Do NOT call other application services within the same module.
- For reuse, push logic into domain layer or extract shared helpers carefully.
- You MAY call other modules' application services only via their Application.Contracts.
---
## DTO Conventions
- Define DTOs in `*.Application.Contracts`.
- Prefer ABP base DTO types (`EntityDto<TKey>`, audited DTOs).
- For aggregate roots, prefer extensible DTO base types so extra properties can be mapped.
- DTO properties: public getters/setters.
### Input DTO Validation
- Use data annotations.
- Reuse constants from Domain.Shared wherever possible.
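For example (hypothetical `Product` DTOs; `ProductConsts` would live in Domain.Shared so the same length limits apply to the entity configuration and the DTO):

```csharp
public class CreateProductInput
{
    [Required]
    [StringLength(ProductConsts.MaxNameLength)] // constant from Domain.Shared
    public string Name { get; set; }

    [Range(0, float.MaxValue)]
    public float Price { get; set; }
}

// A separate class for Update, even if the properties are currently identical:
// shared or inherited input DTOs couple methods that evolve independently.
public class UpdateProductInput
{
    [Required]
    [StringLength(ProductConsts.MaxNameLength)]
    public string Name { get; set; }

    [Range(0, float.MaxValue)]
    public float Price { get; set; }
}
```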
### General Rules
- Avoid logic in DTOs; only implement `IValidatableObject` when necessary.
- Do NOT use `[Serializable]` attribute (BinaryFormatter is obsolete); ABP uses JSON serialization.
### Output DTO Strategy
- Prefer a Basic DTO and a Detailed DTO; avoid many variants.
- Detailed DTOs: include reference details as nested basic DTOs; avoid duplicating raw FK ids unnecessarily.
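A sketch of the two-DTO strategy with hypothetical `Order` types:

```csharp
// Basic DTO: flat, suitable for lists.
public class OrderDto : AuditedEntityDto<Guid>
{
    public string OrderNumber { get; set; }
    public Guid CustomerId { get; set; }
}

// Detailed DTO: references included as nested basic DTOs, so clients do not
// have to resolve raw FK ids with extra calls.
public class OrderWithDetailsDto : AuditedEntityDto<Guid>
{
    public string OrderNumber { get; set; }
    public CustomerDto Customer { get; set; }
    public List<OrderLineDto> Lines { get; set; }
}
```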
---
## EF Core Integration
- Define a separate DbContext interface + class per module.
- Do NOT rely on lazy loading; do NOT enable lazy loading.
### DbContext Interface
```csharp
[ConnectionStringName("ModuleName")]
public interface IModuleNameDbContext : IEfCoreDbContext
{
    DbSet<Product> Products { get; } // No setters, aggregate roots only
}
```
### DbContext Class
```csharp
[ConnectionStringName("ModuleName")]
public class ModuleNameDbContext : AbpDbContext<ModuleNameDbContext>, IModuleNameDbContext
{
    public static string TablePrefix { get; set; } = ModuleNameConsts.DefaultDbTablePrefix;
    public static string? Schema { get; set; } = ModuleNameConsts.DefaultDbSchema;

    public DbSet<Product> Products { get; set; }
}
```
### Table Prefix/Schema
- Provide static `TablePrefix` and `Schema` defaulted from constants.
- Use short prefixes; `Abp` prefix reserved for ABP core modules.
- Default schema should be `null`.
### Model Mapping
- Do NOT configure entities directly inside `OnModelCreating`.
- Create `ModelBuilder` extension method `ConfigureX()` and call it.
- Call `b.ConfigureByConvention()` for each entity.
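A typical shape for this, assuming a hypothetical `Product` entity in a module named `ModuleName` (`ProductConsts` is an illustrative Domain.Shared constants class):

```csharp
public static class ModuleNameDbContextModelCreatingExtensions
{
    public static void ConfigureModuleName(this ModelBuilder builder)
    {
        builder.Entity<Product>(b =>
        {
            b.ToTable(ModuleNameDbContext.TablePrefix + "Products", ModuleNameDbContext.Schema);
            b.ConfigureByConvention(); // applies ABP conventions (auditing, extra properties, etc.)
            b.Property(x => x.Name).IsRequired().HasMaxLength(ProductConsts.MaxNameLength);
        });
    }
}

// In the DbContext — no inline entity configuration here:
protected override void OnModelCreating(ModelBuilder builder)
{
    base.OnModelCreating(builder);
    builder.ConfigureModuleName();
}
```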
### Repository Implementations
- Inherit from `EfCoreRepository<TDbContextInterface, TEntity, TKey>`.
- Use DbContext interface as generic parameter.
- Pass cancellation tokens using `GetCancellationToken(cancellationToken)`.
- Implement `IncludeDetails(include)` extension per aggregate root with sub-collections.
- Override `WithDetailsAsync()` where needed.
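These conventions can be sketched as follows (hypothetical `Product` aggregate; `IncludeDetails` is the per-aggregate extension method mentioned above, not a framework API):

```csharp
public class EfCoreProductRepository
    : EfCoreRepository<IModuleNameDbContext, Product, Guid>, IProductRepository
{
    public EfCoreProductRepository(IDbContextProvider<IModuleNameDbContext> dbContextProvider)
        : base(dbContextProvider)
    {
    }

    public virtual async Task<Product> FindByNameAsync(
        string name,
        bool includeDetails = true,
        CancellationToken cancellationToken = default)
    {
        var dbSet = await GetDbSetAsync();
        return await dbSet
            .IncludeDetails(includeDetails) // per-aggregate extension for sub-collections
            .FirstOrDefaultAsync(p => p.Name == name, GetCancellationToken(cancellationToken));
    }

    // Ensures WithDetailsAsync() consumers also get the sub-collections.
    public override async Task<IQueryable<Product>> WithDetailsAsync()
    {
        return (await GetQueryableAsync()).IncludeDetails();
    }
}
```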
---
## MongoDB Integration
- Define a separate MongoDbContext interface + class per module.
### MongoDbContext Interface
```csharp
[ConnectionStringName("ModuleName")]
public interface IModuleNameMongoDbContext : IAbpMongoDbContext
{
    IMongoCollection<Product> Products { get; } // Aggregate roots only
}
```
### MongoDbContext Class
```csharp
public class ModuleNameMongoDbContext : AbpMongoDbContext, IModuleNameMongoDbContext
{
    public static string CollectionPrefix { get; set; } = ModuleNameConsts.DefaultDbTablePrefix;

    // Implements the interface's collection property via AbpMongoDbContext.Collection<T>().
    public IMongoCollection<Product> Products => Collection<Product>();
}
```
### Mapping
- Do NOT configure directly inside `CreateModel`.
- Create `IMongoModelBuilder` extension method `ConfigureX()` and call it.
### Repository Implementations
- Inherit from `MongoDbRepository<TMongoDbContextInterface, TEntity, TKey>`.
- Pass cancellation tokens using `GetCancellationToken(cancellationToken)`.
- Ignore `includeDetails` for MongoDB in most cases (documents load sub-collections).
- Prefer `GetQueryableAsync()` to ensure ABP data filters are applied.
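A sketch of the MongoDB counterpart (hypothetical `Product` aggregate; assumes MongoDB.Driver's LINQ async extensions are in scope):

```csharp
public class MongoProductRepository
    : MongoDbRepository<IModuleNameMongoDbContext, Product, Guid>, IProductRepository
{
    public MongoProductRepository(IMongoDbContextProvider<IModuleNameMongoDbContext> dbContextProvider)
        : base(dbContextProvider)
    {
    }

    public virtual async Task<Product> FindByNameAsync(
        string name,
        bool includeDetails = true, // ignored: the document already contains sub-collections
        CancellationToken cancellationToken = default)
    {
        // GetQueryableAsync() (rather than the raw collection) keeps ABP's
        // data filters (soft delete, multi-tenancy) applied.
        var queryable = await GetQueryableAsync();
        return await queryable
            .FirstOrDefaultAsync(p => p.Name == name, GetCancellationToken(cancellationToken));
    }
}
```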
---
## ABP Module Classes
- Every package must have exactly one `AbpModule` class.
- Naming: `Abp[ModuleName][Layer]Module` (e.g., `AbpIdentityDomainModule`, `AbpIdentityApplicationModule`).
- Use `[DependsOn(typeof(...))]` to declare module dependencies explicitly.
- Override `ConfigureServices` for DI registration and configuration.
- Override `OnApplicationInitialization` sparingly; prefer `ConfigureServices` when possible.
- Each module must be usable standalone; avoid hidden cross-module coupling.
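For instance, a hypothetical application-layer module (the exact dependency list varies per module):

```csharp
[DependsOn(
    typeof(AbpModuleNameDomainModule), // same module, lower layer
    typeof(AbpDddApplicationModule),
    typeof(AbpAutoMapperModule)
)]
public class AbpModuleNameApplicationModule : AbpModule
{
    public override void ConfigureServices(ServiceConfigurationContext context)
    {
        // DI registration and configuration belong here, not in
        // OnApplicationInitialization.
        context.Services.AddAutoMapperObjectMapper<AbpModuleNameApplicationModule>();
        Configure<AbpAutoMapperOptions>(options =>
        {
            options.AddMaps<AbpModuleNameApplicationModule>(validate: true);
        });
    }
}
```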
---
## Framework Extensibility
- All public and protected members should be `virtual` for inheritance-based extensibility.
- Prefer `protected virtual` over `private` for helper methods to allow overriding.
- Use `[Dependency(ReplaceServices = true)]` patterns for services intended to be replaceable.
- Provide extension points via interfaces and virtual methods.
- Document extension points with XML comments explaining intended usage.
- Consider providing `*Options` classes for configuration-based extensibility.
---
## Backward Compatibility
- Do NOT remove or rename public API members without a deprecation cycle.
- Use `[Obsolete("Message. Use X instead.")]` with clear migration guidance before removal.
- Maintain binary and source compatibility within major versions.
- Add new optional parameters with defaults; do not change existing method signatures.
- When adding new abstract members to base classes, provide default implementations if possible.
- Prefer adding new interfaces over modifying existing ones.
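A sketch of the deprecation pattern (hypothetical `ProductManager` domain service):

```csharp
public class ProductManager : DomainService
{
    [Obsolete("Use CreateAsync(string name, float price) instead. Will be removed in a future major version.")]
    public virtual Task<Product> CreateAsync(string name)
    {
        // The old overload stays and forwards to the new one,
        // so existing callers keep compiling during the deprecation cycle.
        return CreateAsync(name, price: 0f);
    }

    // New overload added alongside, instead of changing the existing signature.
    public virtual Task<Product> CreateAsync(string name, float price)
    {
        return Task.FromResult(new Product(GuidGenerator.Create(), name, price));
    }
}
```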
---
## Localization Resources
- Define localization resources in Domain.Shared.
- Resource class naming: `[ModuleName]Resource` (e.g., `IdentityResource`, `PermissionManagementResource`).
- JSON files under `/Localization/[ModuleName]/` directory.
- Use `LocalizableString.Create<TResource>("Key")` for localizable exceptions and messages.
- All user-facing strings must be localized; no hardcoded English text in code.
- Error codes should be namespaced: `ModuleName:ErrorCode` (e.g., `Identity:UserNameAlreadyExists`).
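For example (hypothetical `ModuleName` module and `Product` concept; the JSON key shown in the comment is illustrative):

```csharp
// Domain.Shared: namespaced, constant error codes.
public static class ModuleNameErrorCodes
{
    public const string ProductNameAlreadyExists = "ModuleName:ProductNameAlreadyExists";
}

public class ProductManager : DomainService
{
    protected virtual void CheckProductName(bool nameAlreadyExists, string name)
    {
        if (nameAlreadyExists)
        {
            // The message text lives in /Localization/ModuleName/en.json, e.g.
            //   "ModuleName:ProductNameAlreadyExists": "A product named '{Name}' already exists!"
            throw new BusinessException(ModuleNameErrorCodes.ProductNameAlreadyExists)
                .WithData("Name", name);
        }
    }
}
```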
---
## Settings & Features
- Define settings in `*SettingDefinitionProvider` in Domain.Shared or Domain.
- Setting names must follow `Abp.[ModuleName].[SettingName]` convention.
- Define features in `*FeatureDefinitionProvider` in Domain.Shared.
- Feature names must follow `[ModuleName].[FeatureName]` convention.
- Use constants for setting/feature names; never hardcode strings.
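A sketch of the provider and its constants class (hypothetical `ModuleName` module and `MaxProductCount` setting):

```csharp
public class ModuleNameSettingDefinitionProvider : SettingDefinitionProvider
{
    public override void Define(ISettingDefinitionContext context)
    {
        context.Add(
            new SettingDefinition(
                ModuleNameSettings.MaxProductCount, // constant, never a hardcoded string
                defaultValue: "100"
            )
        );
    }
}

// Constants following the Abp.[ModuleName].[SettingName] convention.
public static class ModuleNameSettings
{
    private const string Prefix = "Abp.ModuleName";

    public const string MaxProductCount = Prefix + ".MaxProductCount";
}
```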
---
## Permissions
- Define permissions in `*PermissionDefinitionProvider` in Application.Contracts.
- Permission names must follow `[ModuleName].[Permission]` convention.
- Use constants for permission names (e.g., `IdentityPermissions.Users.Create`).
- Group related permissions logically.
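These rules can be sketched as follows (hypothetical `ModuleName` module with a `Products` permission group):

```csharp
// Application.Contracts: constants, never hardcoded strings.
public static class ModuleNamePermissions
{
    public const string GroupName = "ModuleName";

    public static class Products
    {
        public const string Default = GroupName + ".Products";
        public const string Create = Default + ".Create";
        public const string Update = Default + ".Update";
        public const string Delete = Default + ".Delete";
    }
}

public class ModuleNamePermissionDefinitionProvider : PermissionDefinitionProvider
{
    public override void Define(IPermissionDefinitionContext context)
    {
        var group = context.AddGroup(ModuleNamePermissions.GroupName);

        // Related permissions grouped logically under a parent.
        var products = group.AddPermission(ModuleNamePermissions.Products.Default);
        products.AddChild(ModuleNamePermissions.Products.Create);
        products.AddChild(ModuleNamePermissions.Products.Update);
        products.AddChild(ModuleNamePermissions.Products.Delete);
    }
}
```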
---
## Event Bus & Distributed Events
- Use `ILocalEventBus` for intra-module communication within the same process.
- Use `IDistributedEventBus` for cross-module or cross-service communication.
- Define Event Transfer Objects (ETOs) in Domain.Shared for distributed events.
- ETO naming: `[EntityName][Action]Eto` (e.g., `UserCreatedEto`, `OrderCompletedEto`).
- Event handlers belong in the Application layer.
- ETOs should be simple, serializable, and contain only primitive types or nested ETOs.
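A sketch of the ETO and its handler (hypothetical `Order` concept):

```csharp
// Domain.Shared: a simple, serializable ETO with primitive members only.
public class OrderCompletedEto
{
    public Guid OrderId { get; set; }
    public Guid CustomerId { get; set; }
    public decimal TotalAmount { get; set; }
}

// Application layer: the handler reacts to the distributed event.
public class OrderCompletedEventHandler
    : IDistributedEventHandler<OrderCompletedEto>, ITransientDependency
{
    public Task HandleEventAsync(OrderCompletedEto eventData)
    {
        // React to the event, e.g. update statistics or send a notification.
        return Task.CompletedTask;
    }
}
```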
---
## Testing
- Unit tests: `*.Tests` projects for isolated logic testing with mocked dependencies.
- Integration tests: `*.EntityFrameworkCore.Tests` / `*.MongoDB.Tests` for repository and DB tests.
- Use `AbpIntegratedTest<TModule>` or `AbpApplicationTestBase<TModule>` base classes.
- Test modules should use `[DependsOn]` on the module under test.
- Use `Shouldly` assertions (ABP convention).
- Test both EF Core and MongoDB implementations when the module supports both.
- Include tests for permission checks, validation, and edge cases.
- Name test methods: `MethodName_Scenario_ExpectedResult` or `Should_ExpectedBehavior_When_Condition`.
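An illustrative integration test following these conventions (the test base class, app service, and seeded "Existing" product are hypothetical):

```csharp
public class ProductAppService_Tests : ModuleNameApplicationTestBase
{
    private readonly IProductAppService _productAppService;

    public ProductAppService_Tests()
    {
        _productAppService = GetRequiredService<IProductAppService>();
    }

    [Fact]
    public async Task Should_Not_Allow_Duplicate_Names_When_Creating()
    {
        var exception = await Assert.ThrowsAsync<BusinessException>(async () =>
        {
            // Assumes the test data seeder already created a product named "Existing".
            await _productAppService.CreateAsync(new CreateProductInput { Name = "Existing" });
        });

        // Shouldly assertions, per ABP convention.
        exception.Code.ShouldBe("ModuleName:ProductNameAlreadyExists");
    }
}
```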
---
## Contribution Discipline (PR / Issues / Tests)
- Before significant changes, align via GitHub issue/discussion.
### PRs
- Keep changes scoped and reviewable.
- Add/update unit/integration tests relevant to the change.
- Build and run tests for the impacted area when possible.
### Localization
- Prefer the `abp translate` workflow for adding missing translations (generate `abp-translation.json`, fill, apply, then PR).
---
## Review Checklist
- [ ] Layer dependencies respected (no forbidden references).
- [ ] No `IQueryable` leaking into public repository contracts.
- [ ] Entities maintain invariants; Guid id generation not inside constructors.
- [ ] Repositories follow async + CancellationToken + includeDetails conventions.
- [ ] No web types in application services.
- [ ] DTOs in contracts, validated, minimal, no logic.
- [ ] EF/Mongo integration follows context + mapping + repository patterns.
- [ ] Public members are `virtual` for extensibility.
- [ ] Backward compatibility maintained; no breaking changes without deprecation.
- [ ] Minimal diff; no unnecessary API surface expansion.

22
.github/workflows/auto-pr.yml

@@ -1,13 +1,13 @@
-name: Merge branch rel-10.2 with rel-10.1
+name: Merge branch dev with rel-10.2
 on:
   push:
     branches:
-      - rel-10.1
+      - rel-10.2
 permissions:
   contents: read
 jobs:
-  merge-rel-10-2-with-rel-10-1:
+  merge-dev-with-rel-10-2:
     permissions:
       contents: write # for peter-evans/create-pull-request to create branch
       pull-requests: write # for peter-evans/create-pull-request to create a PR
@@ -15,17 +15,17 @@ jobs:
     steps:
     - uses: actions/checkout@v2
       with:
-        ref: rel-10.2
+        ref: dev
     - name: Reset promotion branch
       run: |
-        git fetch origin rel-10.1:rel-10.1
-        git reset --hard rel-10.1
+        git fetch origin rel-10.2:rel-10.2
+        git reset --hard rel-10.2
     - name: Create Pull Request
       uses: peter-evans/create-pull-request@v3
       with:
-        branch: auto-merge/rel-10-1/${{github.run_number}}
-        title: Merge branch rel-10.2 with rel-10.1
-        body: This PR generated automatically to merge rel-10.2 with rel-10.1. Please review the changed files before merging to prevent any errors that may occur.
+        branch: auto-merge/rel-10-2/${{github.run_number}}
+        title: Merge branch dev with rel-10.2
+        body: This PR generated automatically to merge dev with rel-10.2. Please review the changed files before merging to prevent any errors that may occur.
         draft: true
         token: ${{ github.token }}
     - name: Merge Pull Request
@@ -33,5 +33,5 @@ jobs:
         GH_TOKEN: ${{ secrets.BOT_SECRET }}
       run: |
         gh pr ready
-        gh pr review auto-merge/rel-10-1/${{github.run_number}} --approve
-        gh pr merge auto-merge/rel-10-1/${{github.run_number}} --merge --auto --delete-branch
+        gh pr review auto-merge/rel-10-2/${{github.run_number}} --approve
+        gh pr merge auto-merge/rel-10-2/${{github.run_number}} --merge --auto --delete-branch

658
.github/workflows/update-studio-docs.yml

@@ -0,0 +1,658 @@
name: Update ABP Studio Docs
on:
repository_dispatch:
types: [update_studio_docs]
workflow_dispatch:
inputs:
version:
description: 'Studio version (e.g., 2.1.10)'
required: true
name:
description: 'Release name'
required: true
notes:
description: 'Raw release notes'
required: true
url:
description: 'Release URL'
required: true
target_branch:
description: 'Target branch (default: dev)'
required: false
default: 'dev'
jobs:
update-docs:
runs-on: ubuntu-latest
permissions:
contents: write
pull-requests: write
models: read
steps:
# -------------------------------------------------
# Extract payload (repository_dispatch or workflow_dispatch)
# -------------------------------------------------
- name: Extract payload
id: payload
run: |
if [ "${{ github.event_name }}" = "repository_dispatch" ]; then
echo "version=${{ github.event.client_payload.version }}" >> $GITHUB_OUTPUT
echo "name=${{ github.event.client_payload.name }}" >> $GITHUB_OUTPUT
echo "url=${{ github.event.client_payload.url }}" >> $GITHUB_OUTPUT
echo "target_branch=${{ github.event.client_payload.target_branch || 'dev' }}" >> $GITHUB_OUTPUT
# Save notes to environment variable (multiline)
{
echo "RAW_NOTES<<NOTES_DELIMITER_EOF"
jq -r '.client_payload.notes' "$GITHUB_EVENT_PATH"
echo "NOTES_DELIMITER_EOF"
} >> $GITHUB_ENV
else
echo "version=${{ github.event.inputs.version }}" >> $GITHUB_OUTPUT
echo "name=${{ github.event.inputs.name }}" >> $GITHUB_OUTPUT
echo "url=${{ github.event.inputs.url }}" >> $GITHUB_OUTPUT
echo "target_branch=${{ github.event.inputs.target_branch || 'dev' }}" >> $GITHUB_OUTPUT
# Save notes to environment variable (multiline)
{
echo "RAW_NOTES<<NOTES_DELIMITER_EOF"
echo "${{ github.event.inputs.notes }}"
echo "NOTES_DELIMITER_EOF"
} >> $GITHUB_ENV
fi
- name: Validate payload
env:
VERSION: ${{ steps.payload.outputs.version }}
NAME: ${{ steps.payload.outputs.name }}
URL: ${{ steps.payload.outputs.url }}
TARGET_BRANCH: ${{ steps.payload.outputs.target_branch }}
run: |
if [ -z "$VERSION" ] || [ "$VERSION" = "null" ]; then
echo "❌ Missing: version"
exit 1
fi
if [ -z "$NAME" ] || [ "$NAME" = "null" ]; then
echo "❌ Missing: name"
exit 1
fi
if [ -z "$URL" ] || [ "$URL" = "null" ]; then
echo "❌ Missing: url"
exit 1
fi
if [ -z "$RAW_NOTES" ]; then
echo "❌ Missing: release notes"
exit 1
fi
echo "✅ Payload validated"
echo " Version: $VERSION"
echo " Name: $NAME"
echo " Target Branch: $TARGET_BRANCH"
# -------------------------------------------------
# Checkout target branch
# -------------------------------------------------
- name: Checkout
uses: actions/checkout@v4
with:
ref: ${{ steps.payload.outputs.target_branch }}
fetch-depth: 0
- name: Configure git
run: |
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
# -------------------------------------------------
# Create working branch
# -------------------------------------------------
- name: Create branch
env:
VERSION: ${{ steps.payload.outputs.version }}
run: |
BRANCH="docs/studio-${VERSION}"
# Delete remote branch if exists (idempotent)
git push origin --delete "$BRANCH" 2>/dev/null || true
git checkout -B "$BRANCH"
echo "BRANCH=$BRANCH" >> $GITHUB_ENV
# -------------------------------------------------
# Analyze existing release notes format
# -------------------------------------------------
- name: Analyze existing format
id: analyze
run: |
FILE="docs/en/studio/release-notes.md"
if [ -f "$FILE" ] && [ -s "$FILE" ]; then
{
echo "EXISTING_FORMAT<<DELIMITER_EOF"
head -50 "$FILE" | sed 's/DELIMITER_EOF/DELIMITER_E0F/g'
echo "DELIMITER_EOF"
} >> $GITHUB_OUTPUT
else
{
echo "EXISTING_FORMAT<<DELIMITER_EOF"
echo "# ABP Studio Release Notes"
echo ""
echo "## 2.1.0 (2025-12-08) Latest"
echo "- Enhanced Module Installation UI"
echo "- Added AI Management option"
echo "DELIMITER_EOF"
} >> $GITHUB_OUTPUT
fi
# -------------------------------------------------
# Try AI formatting (OPTIONAL - never fails workflow)
# -------------------------------------------------
- name: Format release notes with AI
id: ai
continue-on-error: true
uses: actions/ai-inference@v1
with:
model: openai/gpt-4.1
prompt: |
You are a technical writer for ABP Studio release notes.
Existing release notes format:
${{ steps.analyze.outputs.EXISTING_FORMAT }}
New release:
Version: ${{ steps.payload.outputs.version }}
Name: ${{ steps.payload.outputs.name }}
Raw notes:
${{ env.RAW_NOTES }}
CRITICAL RULES:
1. Extract ONLY essential, user-facing changes
2. Format as bullet points starting with "- "
3. Keep it concise and professional
4. Match the style of existing release notes
5. Skip internal/technical details unless critical
6. Return ONLY the bullet points (no version header, no date)
7. One change per line
Output example:
- Fixed books sample for blazor-webapp tiered solution
- Enhanced Module Installation UI
- Added AI Management option to Startup Templates
Return ONLY the formatted bullet points.
# -------------------------------------------------
# Fallback: Use raw notes if AI unavailable
# -------------------------------------------------
- name: Prepare final release notes
run: |
mkdir -p .tmp
AI_RESPONSE="${{ steps.ai.outputs.response }}"
if [ -n "$AI_RESPONSE" ] && [ "$AI_RESPONSE" != "null" ]; then
echo "✅ Using AI-formatted release notes"
echo "$AI_RESPONSE" > .tmp/final-notes.txt
else
echo "⚠️ AI unavailable - using aggressive cleaning on raw release notes"
# Clean and format raw notes with aggressive filtering
echo "$RAW_NOTES" | while IFS= read -r line; do
# Skip empty lines
[ -z "$line" ] && continue
# Skip section headers
[[ "$line" =~ ^#+.*What.*Changed ]] && continue
[[ "$line" =~ ^##[[:space:]] ]] && continue
# Skip full changelog links
[[ "$line" =~ ^\*\*Full\ Changelog ]] && continue
[[ "$line" =~ ^Full\ Changelog ]] && continue
# Remove leading bullet/asterisk
line=$(echo "$line" | sed 's/^[[:space:]]*[*-][[:space:]]*//')
# Aggressive cleaning: remove entire " by @user in https://..." suffix
line=$(echo "$line" | sed 's/[[:space:]]*by @[a-zA-Z0-9_-]*[[:space:]]*in https:\/\/github\.com\/[^[:space:]]*//g')
# Remove remaining "by @username" or "by username"
line=$(echo "$line" | sed 's/[[:space:]]*by @[a-zA-Z0-9_-]*[[:space:]]*$//g')
line=$(echo "$line" | sed 's/[[:space:]]*by [a-zA-Z0-9_-]*[[:space:]]*$//g')
# Remove standalone @mentions
line=$(echo "$line" | sed 's/@[a-zA-Z0-9_-]*//g')
# Clean trailing periods if orphaned
line=$(echo "$line" | sed 's/\.[[:space:]]*$//')
# Trim all whitespace
line=$(echo "$line" | sed 's/^[[:space:]]*//;s/[[:space:]]*$//')
# Skip if line is empty or too short
[ -z "$line" ] && continue
[ ${#line} -lt 5 ] && continue
# Capitalize first letter if lowercase
line="$(echo ${line:0:1} | tr '[:lower:]' '[:upper:]')${line:1}"
# Add clean bullet and output
echo "- $line"
done > .tmp/final-notes.txt
fi
# Safety check: verify we have content
if [ ! -s .tmp/final-notes.txt ]; then
echo "⚠️ No valid release notes extracted, using minimal fallback"
echo "- Release ${{ steps.payload.outputs.version }}" > .tmp/final-notes.txt
fi
echo "=== Final release notes ==="
cat .tmp/final-notes.txt
echo "==========================="
# -------------------------------------------------
# Update release-notes.md (move "Latest" tag correctly)
# -------------------------------------------------
- name: Update release-notes.md
env:
VERSION: ${{ steps.payload.outputs.version }}
NAME: ${{ steps.payload.outputs.name }}
URL: ${{ steps.payload.outputs.url }}
run: |
FILE="docs/en/studio/release-notes.md"
DATE="$(date +%Y-%m-%d)"
mkdir -p docs/en/studio
# Check if version already exists (idempotent)
if [ -f "$FILE" ] && grep -q "^## $VERSION " "$FILE"; then
echo "⚠️ Version $VERSION already exists in release notes - skipping update"
echo "VERSION_UPDATED=false" >> $GITHUB_ENV
exit 0
fi
# Read final notes
NOTES_CONTENT="$(cat .tmp/final-notes.txt)"
# Create new entry
NEW_ENTRY="## $VERSION ($DATE) Latest
$NOTES_CONTENT
"
# Process file
if [ ! -f "$FILE" ]; then
# Create new file
cat > "$FILE" <<EOF
# ABP Studio Release Notes
$NEW_ENTRY
EOF
else
# Remove "Latest" tag from existing entries and insert new one
awk -v new_entry="$NEW_ENTRY" '
BEGIN { inserted = 0 }
# Remove "Latest" from existing entries
/^## [0-9]/ {
gsub(/ Latest$/, "", $0)
}
# Insert after first "## " (version heading) or after title
/^## / && !inserted {
print new_entry
inserted = 1
}
# Print current line
{ print }
# If we reach end without inserting, add at end
END {
if (!inserted) {
print ""
print new_entry
}
}
' "$FILE" > "$FILE.new"
mv "$FILE.new" "$FILE"
fi
echo "VERSION_UPDATED=true" >> $GITHUB_ENV
echo "=== Updated release-notes.md preview ==="
head -30 "$FILE"
echo "========================================"
# -------------------------------------------------
# Fetch latest stable ABP version (no preview/rc/beta)
# -------------------------------------------------
- name: Fetch latest stable ABP version
id: abp
run: |
# Fetch all releases
RELEASES=$(curl -fsS \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" \
"https://api.github.com/repos/abpframework/abp/releases?per_page=20")
# Filter stable releases (exclude preview, rc, beta, dev)
ABP_VERSION=$(echo "$RELEASES" | jq -r '
[.[] | select(
(.prerelease == false) and
(.tag_name | test("preview|rc|beta|dev"; "i") | not)
)] | first | .tag_name
')
if [ -z "$ABP_VERSION" ] || [ "$ABP_VERSION" = "null" ]; then
echo "❌ Could not determine latest stable ABP version"
exit 1
fi
echo "✅ Latest stable ABP version: $ABP_VERSION"
echo "ABP_VERSION=$ABP_VERSION" >> $GITHUB_ENV
# -------------------------------------------------
# Update version-mapping.md (smart range expansion)
# -------------------------------------------------
- name: Update version-mapping.md
env:
STUDIO_VERSION: ${{ steps.payload.outputs.version }}
run: |
FILE="docs/en/studio/version-mapping.md"
ABP_VERSION="${{ env.ABP_VERSION }}"
mkdir -p docs/en/studio
# Create file if doesn't exist
if [ ! -f "$FILE" ]; then
cat > "$FILE" <<EOF
# ABP Studio and ABP Startup Template Version Mappings
| **ABP Studio Version** | **ABP Version of Startup Template** |
|------------------------|-------------------------------------|
| $STUDIO_VERSION | $ABP_VERSION |
EOF
echo "MAPPING_UPDATED=true" >> $GITHUB_ENV
exit 0
fi
# Use Python for smart version range handling
python3 <<'PYTHON_EOF'
import os
import re
from packaging.version import Version, InvalidVersion
studio_ver = os.environ["STUDIO_VERSION"]
abp_ver = os.environ["ABP_VERSION"]
file_path = "docs/en/studio/version-mapping.md"
try:
studio = Version(studio_ver)
except InvalidVersion:
print(f"❌ Invalid Studio version: {studio_ver}")
exit(1)
with open(file_path, 'r') as f:
lines = f.readlines()
# Find table start (skip SEO and headers)
table_start = 0
table_end = 0
for i, line in enumerate(lines):
if line.strip().startswith('|') and '**ABP Studio Version**' in line:
table_start = i
elif table_start > 0 and line.strip() and not line.strip().startswith('|'):
table_end = i
break
if table_start == 0:
print("❌ Could not find version mapping table")
exit(1)
# If no end found, table goes to end of file
if table_end == 0:
table_end = len(lines)
# Extract sections
before_table = lines[:table_start] # Everything before table
table_header = lines[table_start:table_start+2] # Header + separator
data_rows = [l for l in lines[table_start+2:table_end] if l.strip().startswith('|')] # Data rows
after_table = lines[table_end:] # Everything after table
new_rows = []
handled = False
def parse_version_range(version_str):
"""Parse '2.1.5 - 2.1.9' or '2.1.5' into (start, end)"""
version_str = version_str.strip()
if '–' in version_str or '-' in version_str:
# Handle both em-dash and hyphen
parts = re.split(r'\s*[–-]\s*', version_str)
if len(parts) == 2:
try:
return Version(parts[0].strip()), Version(parts[1].strip())
except InvalidVersion:
return None, None
try:
v = Version(version_str)
return v, v
except InvalidVersion:
return None, None
def format_row(studio_range, abp_version):
"""Format a table row with proper spacing"""
return f"| {studio_range:<22} | {abp_version:<27} |\n"
# Process existing rows
for row in data_rows:
match = re.match(r'\|\s*(.+?)\s*\|\s*(.+?)\s*\|', row)
if not match:
continue
existing_studio_range = match.group(1).strip()
existing_abp = match.group(2).strip()
# Only consider rows with matching ABP version
if existing_abp != abp_ver:
new_rows.append(row)
continue
start_ver, end_ver = parse_version_range(existing_studio_range)
if start_ver is None or end_ver is None:
new_rows.append(row)
continue
# Check if current studio version is in this range
if start_ver <= studio <= end_ver:
print(f"✅ Studio version {studio_ver} already covered in range {existing_studio_range}")
handled = True
new_rows.append(row)
# Check if we should extend the range
elif end_ver < studio:
# Calculate if studio is the next logical version
# For patch versions: 2.1.9 -> 2.1.10
# For minor versions: 2.1.9 -> 2.2.0
# Simple heuristic: if major.minor match and patch increments, extend range
if (start_ver.major == studio.major and
start_ver.minor == studio.minor and
studio.micro <= end_ver.micro + 5): # Allow small gaps
new_range = f"{start_ver} - {studio}"
new_rows.append(format_row(new_range, abp_ver))
print(f"✅ Extended range: {new_range}")
handled = True
else:
new_rows.append(row)
else:
new_rows.append(row)
# If not handled, add new row at top of data
if not handled:
new_row = format_row(str(studio), abp_ver)
new_rows.insert(0, new_row)
print(f"✅ Added new mapping: {studio_ver} -> {abp_ver}")
# Write updated file - preserve ALL content
with open(file_path, 'w') as f:
f.writelines(before_table) # SEO, title, intro text
f.writelines(table_header) # Table header
f.writelines(new_rows) # Updated data rows
f.writelines(after_table) # Content after table (preview section, etc.)
print("MAPPING_UPDATED=true")
PYTHON_EOF
echo "MAPPING_UPDATED=true" >> $GITHUB_ENV
echo "=== Updated version-mapping.md preview ==="
head -35 "$FILE"
echo "=========================================="
# -------------------------------------------------
# Check for changes
# -------------------------------------------------
- name: Check for changes
id: changes
run: |
git add docs/en/studio/
if git diff --cached --quiet; then
echo "has_changes=false" >> $GITHUB_OUTPUT
echo "⚠️ No changes detected"
else
echo "has_changes=true" >> $GITHUB_OUTPUT
echo "✅ Changes detected:"
git diff --cached --stat
fi
# -------------------------------------------------
# Commit & push
# -------------------------------------------------
- name: Commit and push
if: steps.changes.outputs.has_changes == 'true'
env:
VERSION: ${{ steps.payload.outputs.version }}
NAME: ${{ steps.payload.outputs.name }}
run: |
git commit -m "docs(studio): update documentation for release $VERSION
- Updated release notes for $VERSION
- Updated version mapping with ABP ${{ env.ABP_VERSION }}
Release: $NAME"
git push -f origin "$BRANCH"
# -------------------------------------------------
# Create or update PR
# -------------------------------------------------
- name: Create or update PR
if: steps.changes.outputs.has_changes == 'true'
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
VERSION: ${{ steps.payload.outputs.version }}
NAME: ${{ steps.payload.outputs.name }}
URL: ${{ steps.payload.outputs.url }}
TARGET_BRANCH: ${{ steps.payload.outputs.target_branch }}
run: |
# Check for existing PR
EXISTING_PR=$(gh pr list \
--head "$BRANCH" \
--base "$TARGET_BRANCH" \
--json number \
--jq '.[0].number' 2>/dev/null || echo "")
PR_BODY="Automated documentation update for ABP Studio release **$VERSION**.
## Release Information
- **Version**: $VERSION
- **Name**: $NAME
- **Release**: [View on GitHub]($URL)
- **ABP Framework Version**: ${{ env.ABP_VERSION }}
## Changes
- ✅ Updated [release-notes.md](docs/en/studio/release-notes.md)
- ✅ Updated [version-mapping.md](docs/en/studio/version-mapping.md)
---
*This PR was automatically generated by the [update-studio-docs workflow](.github/workflows/update-studio-docs.yml)*"
if [ -n "$EXISTING_PR" ]; then
echo "🔄 Updating existing PR #$EXISTING_PR"
gh pr edit "$EXISTING_PR" \
--title "docs(studio): release $VERSION - $NAME" \
--body "$PR_BODY"
echo "PR_NUMBER=$EXISTING_PR" >> $GITHUB_ENV
else
echo "📝 Creating new PR"
sleep 2 # Wait for GitHub to sync
PR_URL=$(gh pr create \
--title "docs(studio): release $VERSION - $NAME" \
--body "$PR_BODY" \
--base "$TARGET_BRANCH" \
--head "$BRANCH")
PR_NUMBER=$(echo "$PR_URL" | grep -oE '[0-9]+$')
echo "PR_NUMBER=$PR_NUMBER" >> $GITHUB_ENV
echo "✅ Created PR #$PR_NUMBER: $PR_URL"
fi
# -------------------------------------------------
# Enable auto-merge (safe with branch protection)
# -------------------------------------------------
- name: Enable auto-merge
if: steps.changes.outputs.has_changes == 'true'
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
continue-on-error: true
run: |
echo "🔄 Attempting to enable auto-merge for PR #$PR_NUMBER"
gh pr merge "$PR_NUMBER" \
--auto \
--squash \
--delete-branch || {
echo "⚠️ Auto-merge not available (branch protection or permissions)"
echo " PR #$PR_NUMBER is ready for manual review"
}
# -------------------------------------------------
# Summary
# -------------------------------------------------
- name: Workflow summary
if: always()
env:
VERSION: ${{ steps.payload.outputs.version }}
run: |
echo "## 📚 ABP Studio Docs Update Summary" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Version**: $VERSION" >> $GITHUB_STEP_SUMMARY
echo "**Release**: ${{ steps.payload.outputs.name }}" >> $GITHUB_STEP_SUMMARY
echo "**Target Branch**: ${{ steps.payload.outputs.target_branch }}" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
if [ "${{ steps.changes.outputs.has_changes }}" = "true" ]; then
echo "### ✅ Changes Applied" >> $GITHUB_STEP_SUMMARY
echo "- Release notes updated: ${{ env.VERSION_UPDATED }}" >> $GITHUB_STEP_SUMMARY
echo "- Version mapping updated: ${{ env.MAPPING_UPDATED }}" >> $GITHUB_STEP_SUMMARY
echo "- ABP Framework version: ${{ env.ABP_VERSION }}" >> $GITHUB_STEP_SUMMARY
echo "- PR: #${{ env.PR_NUMBER }}" >> $GITHUB_STEP_SUMMARY
else
echo "### ⚠️ No Changes" >> $GITHUB_STEP_SUMMARY
echo "Version $VERSION already exists in documentation." >> $GITHUB_STEP_SUMMARY
fi

18
Directory.Packages.props

@@ -19,10 +19,10 @@
     <PackageVersion Include="Azure.Identity" Version="1.14.2" />
     <PackageVersion Include="Azure.Messaging.ServiceBus" Version="7.20.1" />
     <PackageVersion Include="Azure.Storage.Blobs" Version="12.25.0" />
-    <PackageVersion Include="Blazorise" Version="1.8.8" />
-    <PackageVersion Include="Blazorise.Components" Version="1.8.8" />
-    <PackageVersion Include="Blazorise.DataGrid" Version="1.8.8" />
-    <PackageVersion Include="Blazorise.Snackbar" Version="1.8.8" />
+    <PackageVersion Include="Blazorise" Version="2.0.0" />
+    <PackageVersion Include="Blazorise.Components" Version="2.0.0" />
+    <PackageVersion Include="Blazorise.DataGrid" Version="2.0.0" />
+    <PackageVersion Include="Blazorise.Snackbar" Version="2.0.0" />
     <PackageVersion Include="Castle.Core" Version="5.2.1" />
     <PackageVersion Include="Castle.Core.AsyncInterceptor" Version="2.1.0" />
     <PackageVersion Include="CommonMark.NET" Version="0.15.1" />
@@ -122,7 +122,7 @@
     <PackageVersion Include="Microsoft.IdentityModel.Tokens" Version="8.14.0" />
     <PackageVersion Include="Microsoft.IdentityModel.JsonWebTokens" Version="8.14.0" />
     <PackageVersion Include="Minio" Version="6.0.5" />
-    <PackageVersion Include="MongoDB.Driver" Version="3.5.2" />
+    <PackageVersion Include="MongoDB.Driver" Version="3.6.0" />
     <PackageVersion Include="NEST" Version="7.17.5" />
     <PackageVersion Include="Newtonsoft.Json" Version="13.0.4" />
     <PackageVersion Include="Nito.AsyncEx.Context" Version="5.1.2" />
@@ -183,10 +183,10 @@
     <PackageVersion Include="System.Threading.Tasks.Extensions" Version="4.6.3" />
     <PackageVersion Include="TencentCloudSDK.Sms" Version="3.0.1273" />
     <PackageVersion Include="TimeZoneConverter" Version="7.2.0" />
-    <PackageVersion Include="TickerQ" Version="2.5.3" />
-    <PackageVersion Include="TickerQ.Dashboard" Version="2.5.3" />
-    <PackageVersion Include="TickerQ.Utilities" Version="2.5.3" />
-    <PackageVersion Include="TickerQ.EntityFrameworkCore" Version="2.5.3" />
+    <PackageVersion Include="TickerQ" Version="10.1.1" />
+    <PackageVersion Include="TickerQ.Dashboard" Version="10.1.1" />
+    <PackageVersion Include="TickerQ.Utilities" Version="10.1.1" />
+    <PackageVersion Include="TickerQ.EntityFrameworkCore" Version="10.1.1" />
     <PackageVersion Include="Unidecode.NET" Version="2.1.0" />
     <PackageVersion Include="xunit" Version="2.9.3" />
     <PackageVersion Include="xunit.extensibility.execution" Version="2.9.3" />

2
abp_io/AbpIoLocalization/AbpIoLocalization/Admin/Localization/Resources/de.json

@@ -261,7 +261,7 @@
     "Enum:EntityChangeType:0": "Erstellt",
     "Enum:EntityChangeType:1": "Aktualisiert",
     "Enum:EntityChangeType:2": "Gelöscht",
-    "TenantId": "Mieter-ID",
+    "TenantId": "Mandanten-ID",
     "ChangeTime": "Zeit ändern",
     "EntityTypeFullName": "Vollständiger Name des Entitätstyps",
     "AuditLogsFor{0}Organization": "Audit-Logs für die Organisation \"{0}\"",

2
abp_io/AbpIoLocalization/AbpIoLocalization/Commercial/Localization/Resources/de.json

@@ -162,7 +162,7 @@
     "WhatIsTheABPCommercial": "Was ist der ABP-Werbespot?",
     "WhatAreDifferencesThanAbpFramework": "Was sind die Unterschiede zwischen dem Open Source ABP Framework und dem ABP Commercial?",
     "ABPCommercialExplanation": "ABP Commercial ist eine Reihe von Premium-Modulen, Tools, Themen und Diensten, die auf dem Open-Source-<a target=\"_blank\" href=\"{0}\">ABP-Framework</a> aufbauen. ABP Commercial wird von demselben Team entwickelt und unterstützt, das hinter dem ABP-Framework steht.",
-    "WhatAreDifferencesThanABPFrameworkExplanation": "<p> <a target=\"_blank\" href=\"{0}\">ABP-Framework</a> ist ein modulares, thematisches, Microservice-kompatibles Anwendungsentwicklungsframework für ASP.NET Core. Es bietet eine vollständige Architektur und eine starke Infrastruktur, damit Sie sich auf Ihren eigenen Geschäftscode konzentrieren können, anstatt sich für jedes neue Projekt zu wiederholen. Es basiert auf Best Practices für die Softwareentwicklung und beliebten Tools, die Sie bereits kennen. </p> <p> Das ABP-Framework ist völlig kostenlos, Open Source und wird von der Community betrieben. Es bietet auch ein kostenloses Thema und einige vorgefertigte Module (z. B. Identitätsmanagement und Mieterverwaltung).</p>",
+    "WhatAreDifferencesThanABPFrameworkExplanation": "<p> <a target=\"_blank\" href=\"{0}\">ABP-Framework</a> ist ein modulares, thematisches, Microservice-kompatibles Anwendungsentwicklungsframework für ASP.NET Core. Es bietet eine vollständige Architektur und eine starke Infrastruktur, damit Sie sich auf Ihren eigenen Geschäftscode konzentrieren können, anstatt sich für jedes neue Projekt zu wiederholen. Es basiert auf Best Practices für die Softwareentwicklung und beliebten Tools, die Sie bereits kennen. </p> <p> Das ABP-Framework ist völlig kostenlos, Open Source und wird von der Community betrieben. Es bietet auch ein kostenloses Thema und einige vorgefertigte Module (z. B. Identitätsmanagement und Mandanten-Verwaltung).</p>",
     "VisitTheFrameworkVSCommercialDocument": "Besuchen Sie den folgenden Link für weitere Informationen <a href=\"{0}\" target=\"_blank\"> {1} </a>",
     "ABPCommercialFollowingBenefits": "ABP Commercial fügt dem ABP-Framework die folgenden Vorteile hinzu;",
     "Professional": "Fachmann",

2
abp_io/AbpIoLocalization/AbpIoLocalization/Www/Localization/Resources/de.json

@@ -332,7 +332,7 @@
"ConnectionResolver": "Verbindungslöser",
"TenantBasedDataFilter": "Mandantenbasierter Datenfilter",
"ApplicationCode": "Anwendungscode",
"TenantResolution": "Mieterbeschluss",
"TenantResolution": "Mandanten-Ermittlung",
"TenantUser": "Mandant {0} Benutzer",
"CardTitle": "Kartentitel",
"View": "Sicht",

3
abp_io/AbpIoLocalization/AbpIoLocalization/Www/Localization/Resources/en.json

@@ -1229,6 +1229,7 @@
"Pricing_Page_HurryUp": "Hurry Up!",
"Pricing_Page_BuyLicense": "Buy a license at <strong>2021 prices</strong> until January 16!",
"Pricing_Page_ValidForExistingCustomers": "Also valid for existing customers and license renewals.",
"Pricing_Page_AdditionalDevCost": "The cost of an additional developer seat for the {0} License is {1}.",
"Pricing_Page_Hint1": "The license price includes a certain number of developer seats. If you have more developers, you can always purchase additional seats.",
"Pricing_Page_Hint2": "You can purchase more developer licenses now or in the future. Licenses are seat-based, so you can transfer a seat from one developer to another.",
"Pricing_Page_Hint3": "You can develop an unlimited count of different products with your license.",
@@ -1434,6 +1435,8 @@
"Facebook": "Facebook",
"Youtube": "YouTube",
"Google": "Google",
"GoogleOrganic": "Google Organic",
"GoogleAds": "Google Ads",
"Github": "GitHub",
"Friend": " From a friend",
"Other": "Other",

4
common.props

@@ -1,8 +1,8 @@
<Project>
<PropertyGroup>
<LangVersion>latest</LangVersion>
<Version>10.1.1</Version>
<LeptonXVersion>5.1.1</LeptonXVersion>
<Version>10.2.0-rc.1</Version>
<LeptonXVersion>5.2.0-rc.1</LeptonXVersion>
<NoWarn>$(NoWarn);CS1591;CS0436</NoWarn>
<PackageIconUrl>https://abp.io/assets/abp_nupkg.png</PackageIconUrl>
<PackageProjectUrl>https://abp.io/</PackageProjectUrl>

342
docs/en/Blog-Posts/2026-01-08 v10_1_Preview/POST.md

@@ -0,0 +1,342 @@
# ABP Platform 10.1 RC Has Been Released
We are happy to release [ABP](https://abp.io) version **10.1 RC** (Release Candidate). This blog post introduces the new features and important changes in this new version.
Try this version and provide feedback to help us ship a more stable ABP v10.1! Thank you in advance.
## Get Started with the 10.1 RC
You can check the [Get Started page](https://abp.io/get-started) to see how to get started with ABP. You can either download [ABP Studio](https://abp.io/get-started#abp-studio-tab) (**recommended** if you prefer a user-friendly desktop GUI application) or use the [ABP CLI](https://abp.io/docs/latest/cli).
By default, ABP Studio uses stable versions to create solutions. Therefore, if you want to create a solution with a preview version, first you need to create a solution and then switch your solution to the preview version from the ABP Studio UI:
![studio-switch-to-preview.png](studio-switch-to-preview.png)
## Migration Guide
There are a few breaking changes in this version that may affect your application. Please read the migration guide carefully, if you are upgrading from v10.0 or earlier: [ABP Version 10.1 Migration Guide](https://abp.io/docs/10.1/release-info/migration-guides/abp-10-1).
## What's New with ABP v10.1?
In this section, I will introduce some major features released in this version.
Here is a brief list of titles explained in the next sections:
- Resource-Based Authorization
- Introducing the TickerQ Background Worker Provider
- Angular UI: Improving Authentication Token Handling
- Angular Version Upgrade to v21
- File Management Module: Public File Sharing Support
- Payment Module: Public Page Implementation for Blazor & Angular UIs
- AI Management Module: Blazor & Angular UIs
- Identity PRO Module: Password History Support
- Account PRO Module: Introducing WebAuthn Passkeys
### Resource-Based Authorization
ABP v10.1 introduces **Resource-Based Authorization**, a powerful feature that enables fine-grained access control based on specific resource instances. This enhancement addresses a long-requested feature ([#236](https://github.com/abpframework/abp/issues/236)) that allows you to implement authorization logic that depends on the resource being accessed, not just static roles or permissions.
**What is Resource-Based Authorization?**
Unlike traditional permission-based authorization where you check if a user has a general permission (like "CanEditDocuments"), resource-based authorization allows you to make authorization decisions based on the specific resource instance. For example:
- Allow users to edit only their own blog posts
- Grant access to documents based on ownership or sharing settings
- Implement complex authorization rules that depend on resource properties
![](ai-management-demo.gif)
#### How It Works
**1. Define resource permissions (`AddResourcePermission`)**:
```csharp
public class MyPermissionDefinitionProvider : PermissionDefinitionProvider
{
    public override void Define(IPermissionDefinitionContext context)
    {
        //other permissions...

        context.AddResourcePermission(
            name: BookManagementPermissions.Manage.Resources.Consume,
            resourceName: BookManagementPermissions.Manage.Resources.Name,
            managementPermissionName: BookManagementPermissions.Manage.ManagePermissions,
            L("LocalizedPermissionDisplayName")
        );
    }
}
```
**2. Use `IResourcePermissionChecker.IsGrantedAsync` in your code to perform the resource permission check**:
```csharp
protected IResourcePermissionChecker ResourcePermissionChecker { get; }

public async Task MyService()
{
    if (await ResourcePermissionChecker.IsGrantedAsync(
        BookManagementPermissions.Manage.Resources.Consume,
        BookManagementPermissions.Manage.Resources.Name,
        workspaceConfiguration.WorkspaceId!.Value.ToString()))
    {
        return;
    }

    //...
}
```
**3. Use the relevant `ResourcePermissionManagementModel` in your UI:**
> The following code block demonstrates its usage in the Blazor UI, but the same component is also implemented for MVC & Angular UIs (however, component name might be different, please refer to the documentation before using the component).
```razor
<ResourcePermissionManagementModal @ref="PermissionManagementModal" />

@code {
    ResourcePermissionManagementModal PermissionManagementModal { get; set; } = null!;

    private async Task OpenResourcePermissionModalAsync()
    {
        await PermissionManagementModal.OpenAsync(
            resourceName: BookManagementPermissions.Manage.Resources.Name,
            resourceKey: entity.Id.ToString(),
            resourceDisplayName: entity.Name
        );
    }
}
```
This feature integrates perfectly with ABP's existing authorization infrastructure and provides a standard way to implement complex, context-aware authorization scenarios in your applications.
### Introducing the TickerQ Background Worker Provider
ABP v10.1 now includes **[TickerQ](https://tickerq.net/)** as a new background job and background worker provider option. TickerQ is a fast, reflection-free background task scheduler for .NET — built with source generators, EF Core integration, cron + time-based execution, and a real-time dashboard. It offers reliable job execution with built-in retry mechanisms, persistent job storage, and efficient resource usage.
To use TickerQ in your ABP-based solution, refer to the following documentation:
- [TickerQ Background Job Integration](https://abp.io/docs/10.1/framework/infrastructure/background-jobs/tickerq)
- [TickerQ Background Worker Integration](https://abp.io/docs/10.1/framework/infrastructure/background-workers/tickerq)
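TickerQ plugs into ABP's standard background worker abstraction, so a worker written against the framework API stays provider-agnostic. As a minimal sketch (the `CleanupWorker` name and 60-second period are illustrative, not from the release notes), a periodic worker built on ABP's `AsyncPeriodicBackgroundWorkerBase` looks like this; once the TickerQ provider module is installed as described in the documents above, such workers are executed by TickerQ:

```csharp
public class CleanupWorker : AsyncPeriodicBackgroundWorkerBase
{
    public CleanupWorker(AbpAsyncTimer timer, IServiceScopeFactory serviceScopeFactory)
        : base(timer, serviceScopeFactory)
    {
        Timer.Period = 60_000; // milliseconds; runs every minute
    }

    protected override Task DoWorkAsync(PeriodicBackgroundWorkerContext workerContext)
    {
        // Resolve scoped services from workerContext.ServiceProvider
        // and perform the periodic work here.
        Logger.LogInformation("CleanupWorker executed.");
        return Task.CompletedTask;
    }
}
```

Remember that a worker still needs to be registered in your module class (e.g. `await context.AddBackgroundWorkerAsync<CleanupWorker>();` in `OnApplicationInitializationAsync`), exactly as with the default worker manager.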
### Angular UI: Improving Authentication Token Handling
ABP v10.1 brings significant improvements to **Angular authentication token handling**, making token refresh more reliable and providing better error handling for expired or invalid tokens.
#### What's Improved?
Prior to this version, access tokens issued by the auth-server were stored in localStorage, making them vulnerable to XSS attacks. We've made the following enhancements to improve safety and reduce security risks:
- Store sensitive tokens in memory
- Use web-workers for state sharing between tabs
These enhancements are automatically available in new Angular projects and can be applied to existing projects by updating ABP packages.
> See [#23930](https://github.com/abpframework/abp/issues/23930) for more details.
### Angular Version Upgrade to v21
ABP v10.1 **upgrades Angular to version 21**, bringing the latest improvements and features from the Angular ecosystem to your ABP applications. We've upgraded the relevant core Angular packages and 3rd party packages such as **angular-oauth2-oidc** and **ng-bootstrap**. We will also update the ABP Studio templates along with the stable v10.1 release.
> See [#24384](https://github.com/abpframework/abp/issues/24384) for the complete change list.
### File Management Module: Public File Sharing Support
_This is a **PRO** feature available for ABP Commercial customers._
The **File Management Module** now supports **public file sharing** via shareable links, similar to popular cloud storage services like Google Drive or Dropbox. This feature enables you to generate public URLs for files that can be accessed without authentication.
![](file-sharing.gif)
**Example Share URL:**
```text
https://abp.io/api/file-management/file-descriptor/share?shareToken=CfDJ8AK%2BOEpCD...
```
**Configuration:**
You can configure the public share domain through options:
```csharp
Configure<FileManagementWebOptions>(options =>
{
    options.FileDownloadRootUrl = "https://files.yourdomain.com";
});
```
This feature is available for all supported UI types (MVC, Angular, Blazor) and integrates seamlessly with the existing [File Management Module](https://abp.io/docs/latest/modules/file-management).
### Payment Module: Public Page Implementation for Blazor & Angular UIs
The **Payment Module** now includes **public page implementations for Angular and Blazor UIs**, completing UI coverage across all ABP-supported frameworks. Previously, public payment pages (payment gateway selection, pre-payment, and post-payment pages) were only available for MVC/Razor Pages UI. With this version, both admin and public pages are now available for MVC, Angular, and Blazor UIs.
The public payment pages seamlessly integrate with ABP's [Payment Module](https://abp.io/docs/latest/modules/payment) and support all configured payment gateways. The documentation will be updated soon with detailed integration guides and examples at [abp.io/docs/latest/modules/payment](https://abp.io/docs/latest/modules/payment).
### AI Management Module: Blazor & Angular UIs
With this version, Angular and Blazor UIs for the [AI Management module](https://abp.io/docs/latest/modules/ai-management) have been implemented, completing the cross-platform support for this powerful AI integration module.
![AI Management Workspaces](ai-management-workspaces.png)
The AI Management Module builds on top of [ABP's AI Infrastructure](https://abp.io/docs/latest/framework/infrastructure/artificial-intelligence) and provides:
- **Multi-Provider Support**: Integrate with OpenAI, Google Gemini, Anthropic Claude, and more from a unified API
- **Workspace-Based Organization**: Organize AI capabilities into separate workspaces for different use cases
- **Built-In Chat Interface**: Ready-to-use chat UI for conversational AI
- **Chat Widget**: Drop-in chat widget component for customer support or AI assistance
- **Resource-Based Permissions**: Control access to specific AI workspaces for users, roles, or clients
Learn more about the AI Management Module in the [announcement post](https://abp.io/community/announcements/introducing-the-ai-management-module-nz9404a9) and [official documentation](https://abp.io/docs/latest/modules/ai-management).
### Identity PRO Module: Password History Support
The [**Identity PRO Module**](https://abp.io/docs/latest/modules/identity-pro) now includes **Password History** support, preventing users from reusing previous passwords. This security feature helps enforce stronger password policies and meet compliance requirements for your organization.
Administrators can enable password reuse prevention by toggling the related setting on the _Administration -> Settings -> Identity Management_ page:
![Password History Settings](password-history-settings.png)
When changing a password, the system checks the specified number of previous passwords and displays an error message if the new password matches any of them:
![](set-password-error-modal.png)
![](reset-password-error-modal.png)
### Account PRO Module: Introducing WebAuthn Passkeys
ABP v10.1 introduces **Passkey authentication**, enabling passwordless sign-in using modern biometric authentication methods. Built on the **WebAuthn standard (FIDO2)**, this feature allows users to authenticate using Face ID, Touch ID, Windows Hello, Android biometrics, security keys, or other platform authenticators.
**What are Passkeys?**
Passkeys are a modern, phishing-resistant authentication method that replaces traditional passwords:
- **Passwordless**: No passwords to remember, type, or manage
- **Secure**: Uses public/private key cryptography stored on the user's device
- **Convenient**: Sign in with a fingerprint, face scan, or device PIN
- **Cross-Platform**: Can sync across devices depending on platform support (Apple, Google, Microsoft)
**How It Works:**
**1. Enable or disable the WebAuthn passkeys feature in the _Settings -> Account -> Passkeys_ page:**
![Passkey Setting](passkey-setting.png)
**2. Add your passkeys in the _Account/Manage_ page:**
![My Passkeys](my-passkey.png)
![Passkey registration](passkey-registration.png)
**3. Use the _Passkey login_ option for passwordless authentication the next time you log in:**
![Passkey Login](passkey-login.png)
> For more information, refer to the [Web Authentication API (WebAuthn) passkeys](https://abp.io/docs/10.1/modules/account/passkey) documentation.
## Community News
### Special Offer: Level Up Your ABP Skills with 33% Off Live Trainings!
![ABP Live Training Discount](./live-training-discount.png)
We're excited to announce a special limited-time offer for developers looking to master the ABP Platform! Get **33% OFF** on all ABP live training sessions and accelerate your learning journey with hands-on guidance from ABP experts.
**Why Join ABP Live Trainings?**
Our live training sessions provide an immersive learning experience where you can:
- **Learn from the Experts**: Get direct instruction from ABP team members and experienced trainers who know the platform inside and out.
- **Hands-On Practice**: Work through real-world scenarios and build actual applications during the sessions.
- **Interactive Q&A**: Ask questions in real-time and get immediate answers to your specific challenges.
- **Comprehensive Coverage**: From fundamentals to advanced topics, our trainings cover everything you need to build production-ready applications with ABP.
- **Certificate of Completion**: Receive a certificate upon completing the training to showcase your ABP expertise.
Don't miss this opportunity to invest in your skills and career. Whether you're new to ABP or looking to advance your expertise, our live trainings provide the structured learning path you need to succeed.
> 👉 [Learn more and claim your discount here](https://abp.io/community/announcements/improve-your-abp-skills-with-33-off-live-trainings-hjnw57xu)
### Introducing the ABP Referral Program
![ABP.IO Referral Program](./referral-program.png)
We're thrilled to announce the launch of the **ABP.IO Referral Program**, a new way for our community members to earn rewards while helping others discover the ABP Platform!
**How It Works:**
ABP's Referral Program is simple and rewarding:
1. **Get Your Unique Referral Link**: Sign up for the program and receive your personalized referral link.
2. **Share with Your Network**: Share your link with colleagues, friends, and fellow developers who could benefit from ABP.
3. **Earn Rewards**: When someone purchases an ABP Commercial license through your referral link, **you earn 5% commission**!
By joining the referral program, you're not just earning rewards; you're also helping other developers discover a platform that can significantly improve their productivity and project success.
> 👉 [Join the ABP.IO Referral Program](https://abp.io/community/announcements/introducing-abp.io-referral-program-b59obhe7)
### Announcing AI Management Module
We are excited to announce the [AI Management Module](https://abp.io/docs/10.0/modules/ai-management), a powerful new addition to the ABP Platform that makes managing AI capabilities in your applications easier than ever!
![ABP - AI Management Module Workspaces](ai-management-workspaces.png)
**What is the AI Management Module?**
Built on top of the [ABP Framework's AI infrastructure](https://abp.io/docs/latest/framework/infrastructure/artificial-intelligence), the **AI Management Module** allows you to manage AI workspaces dynamically without touching your code. Whether you're building a customer support chatbot, adding AI-powered search, or creating intelligent automation workflows, this module provides everything you need to manage AI integrations through a user-friendly interface.
**Key Features:**
- **Multi-Provider Support**: Allows integrating with multiple AI providers including OpenAI, Google Gemini, Anthropic Claude, and more from a single unified API.
- **Built-In Chat Interface**
- **Ready to Use Chat Widget**
- and more... (RAG & MCP support is on the way!)
👉 [Read the announcement post for more...](https://abp.io/community/announcements/introducing-the-ai-management-module-nz9404a9)
### We Were At .NET Conf China 2025!
![.NET Conf China 2025](./dotnet-conf-china-2025.png)
The ABP team participated in **.NET Conf China 2025** in Shanghai, celebrating the release of .NET 10 (LTS) and the achievements of the .NET community in China.
**Event Highlights:**
The conference brought together hundreds of developers and featured Scott Hanselman's opening keynote announcing .NET 10's availability, with a focus on four pillars: AI, cloud-native, cross-platform, and performance. The event covered three main themes: performance improvements, AI integration, and cross-platform development, with in-depth sessions on topics ranging from Avalonia and Blazor to AI agents and enterprise adoption.
**ABP's Participation:**
At the ABP booth, we showcased our developer platform with live demonstrations of modular architecture, multi-tenancy support, and built-in authentication systems. We hosted interactive raffles with prizes including ABP stickers, the _Mastering ABP Framework_ book, and Bluetooth headphones. The booth was a hub for sharing experiences, impromptu code walkthroughs, and meaningful conversations with Chinese developers about ABP's future.
> 👉 [Read the full event recap](https://abp.io/community/announcements/.net-conf-china-2025-fz03gfge)
### Community Talks 2025.10: AI-Powered .NET Apps with ABP & Microsoft Agent Framework
![ABP Community Talks - AI-Powered .NET Apps](./community-talk-2025-10-ai.png)
In our latest ABP Community Talks session, we dove deep into the world of **Artificial Intelligence** and its integration with the ABP Framework. This session explored Microsoft's cutting-edge AI libraries: **Extensions AI**, **Semantic Kernel**, and the **Microsoft Agent Framework**.
**What We Covered:**
We introduced the new **AI Management Module**, discussing its current status and roadmap. The session included practical demonstrations on building intelligent applications with the Microsoft Agent Framework within ABP projects, showing how these technologies empower developers to create AI-powered .NET applications.
> 👉 [Missed the live session? Click here to watch the full session](https://www.youtube.com/live/tEcd2H6yXQk)
### New ABP Community Articles
As always, the ABP community has contributed exciting articles. I will highlight some of them here:
- [Salih Özkara](https://github.com/salihozkara) has published 3 new articles:
- [Building Dynamic XML Sitemaps with ABP Framework](https://abp.io/community/articles/building-dynamic-xml-sitemaps-with-abp-framework-n3q6schd)
- [Implement Automatic Method-Level Caching in ABP Framework](https://abp.io/community/articles/implement-automatic-methodlevel-caching-in-abp-framework-4uzd3wx8)
- [Building Production-Ready LLM Applications with .NET: A Practical Guide](https://abp.io/community/articles/building-production-ready-llm-applications-with-net-ya7qemfa)
- [Adnan Ali](https://abp.io/community/members/adnanaldaim) has published 2 new articles:
- [Integrating AI into ABP.IO Applications: The Complete Guide to Volo.Abp.AI and AI Management Module](https://abp.io/community/articles/integrating-ai-into-abp.io-applications-the-complete-guide-jc9fbjq0)
- [How ABP.IO Framework Cuts Your MVP Development Time by 60%](https://abp.io/community/articles/how-abp.io-framework-cuts-your-mvp-development-time-by-60-8l7m3ugj)
- [My First Look and Experience with Google AntiGravity](https://abp.io/community/articles/my-first-look-and-experience-with-google-antigravity-0hr4sjtf) by [Alper Ebiçoğlu](https://twitter.com/alperebicoglu)
- [TOON vs JSON for LLM Prompts in ABP: Token-Efficient Structured Context](https://abp.io/community/articles/toon-vs-json-b4rn2avd) by [Suhaib Mousa](https://abp.io/community/members/suhaib-mousa)
Thanks to the ABP Community for all the content they have published. You can also [post your ABP-related (text or video) content](https://abp.io/community/posts/create) to the ABP Community.
## Conclusion
This version comes with some new features and a lot of enhancements to the existing features. You can see the [Road Map](https://abp.io/docs/10.1/release-info/road-map) documentation to learn about the release schedule and planned features for the next releases. Please try ABP v10.1 RC and provide feedback to help us release a more stable version.
Thanks for being a part of this community!

BIN docs/en/Blog-Posts/2026-01-08 v10_1_Preview/ai-management-demo.gif (added; 1.8 MiB)
BIN docs/en/Blog-Posts/2026-01-08 v10_1_Preview/ai-management-workspaces.png (added; 20 KiB)
BIN docs/en/Blog-Posts/2026-01-08 v10_1_Preview/community-talk-2025-10-ai.png (added; 35 KiB)
BIN docs/en/Blog-Posts/2026-01-08 v10_1_Preview/cover-image.png (added; 129 KiB)
BIN docs/en/Blog-Posts/2026-01-08 v10_1_Preview/dotnet-conf-china-2025.png (added; 160 KiB)
BIN docs/en/Blog-Posts/2026-01-08 v10_1_Preview/file-sharing.gif (added; 665 KiB)
BIN docs/en/Blog-Posts/2026-01-08 v10_1_Preview/live-training-discount.png (added; 76 KiB)
BIN docs/en/Blog-Posts/2026-01-08 v10_1_Preview/my-passkey.png (added; 19 KiB)
BIN docs/en/Blog-Posts/2026-01-08 v10_1_Preview/passkey-login.png (added; 15 KiB)
BIN docs/en/Blog-Posts/2026-01-08 v10_1_Preview/passkey-registration.png (added; 15 KiB)
BIN docs/en/Blog-Posts/2026-01-08 v10_1_Preview/passkey-setting.png (added; 18 KiB)
BIN docs/en/Blog-Posts/2026-01-08 v10_1_Preview/password-history-settings.png (added; 13 KiB)
BIN docs/en/Blog-Posts/2026-01-08 v10_1_Preview/password-history-warning.png (added; 21 KiB)
BIN docs/en/Blog-Posts/2026-01-08 v10_1_Preview/referral-program.png (added; 112 KiB)
BIN docs/en/Blog-Posts/2026-01-08 v10_1_Preview/reset-password-error-modal.png (added; 21 KiB)
BIN docs/en/Blog-Posts/2026-01-08 v10_1_Preview/set-password-error-modal.png (added; 6.7 KiB)
BIN docs/en/Blog-Posts/2026-01-08 v10_1_Preview/studio-switch-to-preview.png (added; 20 KiB)

82
docs/en/Blog-Posts/2026-02-23 v10_1_Release_Stable/POST.md

@@ -0,0 +1,82 @@
# ABP.IO Platform 10.1 Final Has Been Released!
We are glad to announce that [ABP](https://abp.io/) 10.1 stable version has been released.
## What's New With Version 10.1?
All the new features were explained in detail in the [10.1 RC Announcement Post](https://abp.io/community/announcements/announcing-abp-10-1-release-candidate-cyqui19d), so there is no need to review them again. You can check it out for more details.
## Getting Started with 10.1
### How to Upgrade an Existing Solution
You can upgrade your existing solutions with either ABP Studio or ABP CLI. In the following sections, both approaches are explained:
### Upgrading via ABP Studio
If you are already using ABP Studio, you can upgrade it to the latest version. ABP Studio periodically checks for updates in the background, and when a new version is available, you will be notified through a modal; confirm it to update. See [the documentation](https://abp.io/docs/latest/studio/installation#upgrading) for more info.
After upgrading ABP Studio, you can open your solution in the application and simply click the **Upgrade ABP Packages** action button to instantly upgrade your solution:
![](upgrade-abp-packages.png)
### Upgrading via ABP CLI
Alternatively, you can upgrade your existing solution via ABP CLI. First, you need to install the ABP CLI or upgrade it to the latest version.
If you haven't installed it yet, you can run the following command:
```bash
dotnet tool install -g Volo.Abp.Studio.Cli
```
Or to update the existing CLI, you can run the following command:
```bash
dotnet tool update -g Volo.Abp.Studio.Cli
```
After installing/updating the ABP CLI, you can use the [`update` command](https://abp.io/docs/latest/CLI#update) to update all the ABP-related NuGet and NPM packages in your solution as follows:
```bash
abp update
```
You can run this command in the root folder of your solution to update all ABP related packages.
## Migration Guides
There are a few breaking changes in this version that may affect your application. Please read the migration guide carefully, if you are upgrading from v10.0 or earlier versions: [ABP Version 10.1 Migration Guide](https://abp.io/docs/latest/release-info/migration-guides/abp-10-1)
## Community News
### New ABP Community Articles
As always, exciting articles have been contributed by the ABP community. I will highlight some of them here:
* [Enis Necipoğlu](https://abp.io/community/members/enisn):
* [ABP Framework's Hidden Magic: Things That Just Work Without You Knowing](https://abp.io/community/articles/hidden-magic-things-that-just-work-without-you-knowing-vw6osmyt)
* [Implementing Multiple Global Query Filters with Entity Framework Core](https://abp.io/community/articles/implementing-multiple-global-query-filters-with-entity-ugnsmf6i)
* [Suhaib Mousa](https://abp.io/community/members/suhaib-mousa):
* [.NET 11 Preview 1 Highlights: Faster Runtime, Smarter JIT, and AI-Ready Improvements](https://abp.io/community/articles/dotnet-11-preview-1-highlights-hspp3o5x)
* [TOON vs JSON for LLM Prompts in ABP: Token-Efficient Structured Context](https://abp.io/community/articles/toon-vs-json-b4rn2avd)
* [Fahri Gedik](https://abp.io/community/members/fahrigedik):
* [Building a Multi-Agent AI System with A2A, MCP, and ADK in .NET](https://abp.io/community/articles/building-a-multiagent-ai-system-with-a2a-mcp-iefdehyx)
* [Async Chain of Persistence Pattern: Designing for Failure in Event-Driven Systems](https://abp.io/community/articles/async-chain-of-persistence-pattern-wzjuy4gl)
* [Alper Ebiçoğlu](https://abp.io/community/members/alper):
* [NDC London 2026: From a Developer's Perspective and My Personal Notes about AI](https://abp.io/community/articles/ndc-london-2026-a-.net-conf-from-a-developers-perspective-07wp50yl)
* [Which Open-Source PDF Libraries Are Recently Popular? A Data-Driven Look At PDF Topic](https://abp.io/community/articles/which-opensource-pdf-libraries-are-recently-popular-a-g68q78it)
* [Engincan Veske](https://abp.io/community/members/EngincanV):
* [Stop Spam and Toxic Users in Your App with AI](https://abp.io/community/articles/stop-spam-and-toxic-users-in-your-app-with-ai-3i0xxh0y)
* [Liming Ma](https://abp.io/community/members/maliming):
* [How AI Is Changing Developers](https://abp.io/community/articles/how-ai-is-changing-developers-e8y4a85f)
* [Tarık Özdemir](https://abp.io/community/members/mtozdemir):
* [JetBrains State of Developer Ecosystem Report 2025 — Key Insights](https://abp.io/community/articles/jetbrains-state-of-developer-ecosystem-report-2025-key-z0638q5e)
* [Adnan Ali](https://abp.io/community/members/adnanaldaim):
* [Integrating AI into ABP.IO Applications: The Complete Guide to Volo.Abp.AI and AI Management Module](https://abp.io/community/articles/integrating-ai-into-abp.io-applications-the-complete-guide-jc9fbjq0)
Thanks to the ABP Community for all the content they have published. You can also [post your ABP-related (text or video) content](https://abp.io/community/posts/create) to the ABP Community.
## About the Next Version
The next feature version will be 10.2. You can follow the [release planning here](https://github.com/abpframework/abp/milestones). Please [submit an issue](https://github.com/abpframework/abp/issues/new) if you have any problems with this version.

BIN docs/en/Blog-Posts/2026-02-23 v10_1_Release_Stable/cover-image.png (added; 471 KiB)
BIN docs/en/Blog-Posts/2026-02-23 v10_1_Release_Stable/upgrade-abp-packages.png (added; 16 KiB)

377
docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/articles.md

@@ -0,0 +1,377 @@
# Building a Multi-Agent AI System with A2A, MCP, and ADK in .NET
> How we combined three open AI protocols — Google's A2A & ADK with Anthropic's MCP — to build a production-ready Multi-Agent Research Assistant using .NET 10.
---
## Introduction
The AI space is constantly changing and improving. We've moved past single LLM calls and into the era of **Multi-Agent Systems**, in which specialist AI agents work together as a collaborative team.
But here is the problem: **How do you make agents communicate with each other? How do you equip agents with tools? How do you control them?**
Three open protocols have emerged for answering these questions:
- **MCP (Model Context Protocol)** by Anthropic — The "USB-C for AI"
- **A2A (Agent-to-Agent Protocol)** by Google — The "phone line between agents"
- **ADK (Agent Development Kit)** by Google — The "organizational chart for agents"
In this article, I will briefly describe each protocol, highlight the benefits of combining them, and walk you through our own project: a **Multi-Agent Research Assistant** built with the ABP Framework.
---
## The Problem: Why Single-Agent Isn't Enough
Imagine you ask an AI: *"Research the latest AI agent frameworks and give me a comprehensive analysis report."*
A single LLM call would:
- Hallucinate search results (it can't actually browse the web)
- Produce a shallow analysis (no structured research pipeline)
- Lose context between steps (no state management)
- Fail to save results anywhere (no tool access)
What you actually need is a **team of specialists**:
1. A **Researcher** who searches the web and gathers raw data
2. An **Analyst** who processes that data into a structured report
3. **Tools** that let agents interact with the real world (web, database, filesystem)
4. An **Orchestrator** that coordinates everything
This is exactly what we built.
!["single-vs-multiagent system"](images/image.png)
---
## Protocol #1: MCP — Giving Agents Superpowers
### What is MCP?
**MCP (Model Context Protocol)** is Anthropic's open standard for connecting AI models to external tools and data sources. Think of MCP as **the USB-C of AI** – one port compatible with everything.
Before MCP, if you wanted your LLM to search the web, query a database, and store files, you had to write custom integration code for each capability. MCP lets you define your tools once, and any MCP-compatible agent can use them.
!["mcp"](images/image-1.png)
### How MCP Works
MCP follows a simple **Client-Server architecture**:
![mcp client server](images/mcp-client-server-1200x700.png)
The flow is straightforward:
1. **Discovery**: The agent asks "What tools do you have?" (`tools/list`)
2. **Invocation**: The agent calls a specific tool (`tools/call`)
3. **Result**: The tool returns data back to the agent
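On the wire, these steps are plain JSON-RPC 2.0 messages. As an illustration, a `tools/call` request for the `web_search` tool might look roughly like this (field values are made up for the example):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "web_search",
    "arguments": {
      "query": "latest AI agent frameworks"
    }
  }
}
```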
### MCP in Our Project
We built MCP tool servers exposing five tools:
| MCP Tool | Purpose | Used By |
|----------|---------|---------|
| `web_search` | Searches the web via Tavily API | Researcher Agent |
| `fetch_url_content` | Fetches content from a URL | Researcher Agent |
| `save_research_to_file` | Saves reports to the filesystem | Analysis Agent |
| `save_research_to_database` | Persists results in SQL Server | Analysis Agent |
| `search_past_research` | Queries historical research | Analysis Agent |
The beauty of MCP is that agents do not need to know how these tools are implemented. They discover each tool by its name and description, and simply call it.
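With the `ModelContextProtocol` .NET SDK, a tool can be exposed declaratively. The sketch below is illustrative only: it follows the preview SDK's attribute model, whose exact names may differ between preview versions, and `TavilyClient` is a hypothetical wrapper around the Tavily API:

```csharp
using System.ComponentModel;
using System.Threading.Tasks;
using ModelContextProtocol.Server;

[McpServerToolType]
public class ResearchTools
{
    private readonly TavilyClient _tavily; // hypothetical Tavily API wrapper

    public ResearchTools(TavilyClient tavily) => _tavily = tavily;

    [McpServerTool(Name = "web_search")]
    [Description("Searches the web via the Tavily API and returns raw results.")]
    public async Task<string> WebSearchAsync(
        [Description("The search query")] string query)
    {
        // The agent never sees this implementation; it only sees
        // the tool name and description via tools/list.
        return await _tavily.SearchAsync(query);
    }
}
```

The server advertises this tool in response to `tools/list`, and any MCP-compatible agent can then invoke it with `tools/call`.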
---
## Protocol #2: A2A — Making Agents Talk to Each Other
### What is A2A?
**A2A (Agent-to-Agent)**, originally proposed by Google and now governed under the Linux Foundation, is a protocol that lets **one AI agent discover another and exchange tasks with it**. Where MCP gives agents tools, A2A gives them the ability to talk to each other.
Think of it this way:
- **MCP** = "What can this agent *do*?" (capabilities)
- **A2A** = "How do agents *talk*?" (communication)
### The Agent Card: Your Agent's Business Card
Every A2A-compatible agent publishes an **Agent Card** — a JSON document that describes who it is and what it can do. It's like a business card for AI agents:
```json
{
  "name": "Researcher Agent",
  "description": "Searches the web to collect comprehensive research data",
  "url": "https://localhost:44331/a2a/researcher",
  "version": "1.0.0",
  "capabilities": {
    "streaming": false,
    "pushNotifications": false
  },
  "skills": [
    {
      "id": "web-research",
      "name": "Web Research",
      "description": "Searches the web on a given topic and collects raw data",
      "tags": ["research", "web-search", "data-collection"]
    }
  ]
}
```
Other agents can discover this card at `/.well-known/agent.json` and immediately know:
- What this agent does
- Where to reach it
- What skills it has
![What is A2A?](images/image-2.png)
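Discovery needs nothing more than an HTTP GET against the well-known path. A minimal sketch (the record shape below only models the card fields we care about and is an assumption, not the full A2A schema):

```csharp
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Minimal projection of the Agent Card fields used for discovery.
public record AgentCard(string Name, string Description, string Url);

public static class AgentDiscovery
{
    public static async Task<AgentCard?> DiscoverAsync(HttpClient http, string baseUrl)
    {
        // Every A2A-compatible agent publishes its card at this well-known path.
        return await http.GetFromJsonAsync<AgentCard>(
            $"{baseUrl}/.well-known/agent.json");
    }
}
```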
### How A2A Task Exchange Works
Once an agent discovers another agent, it can send tasks:
![orchestrator](images/orchestrator-researcher-seq-1200x700.png)
The key concepts:
- **Task**: A unit of work sent between agents (like an email with instructions)
- **Artifact**: The output produced by an agent (like an attachment in the reply)
- **Task State**: `Submitted → Working → Completed/Failed`
### A2A in Our Project
Agent communication in our system uses A2A:
- The **Orchestrator** finds all agents through the Agent Cards
- It sends a research task to the **Researcher Agent**
- The Researcher's output (artifacts) is used as input by the **Analysis Agent**
- The Analysis Agent creates the final structured report
---
## Protocol #3: ADK — Organizing Your Agent Team
### What is ADK?
**ADK (Agent Development Kit)**, created by Google, provides patterns for **organizing and orchestrating multiple agents**. It answers the question: "How do you build a team of agents that work together efficiently?"
ADK gives you:
- **BaseAgent**: A foundation every agent inherits from
- **SequentialAgent**: Runs agents one after another (pipeline)
- **ParallelAgent**: Runs agents simultaneously
- **AgentContext**: Shared state that flows through the pipeline
- **AgentEvent**: Control flow signals (escalate, transfer, state updates)
> **Note**: ADK's official SDK is Python-only. We ported the core patterns to .NET for our project.
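Since there is no official .NET ADK, our port defines these building blocks ourselves. A simplified sketch of the core types (the names mirror the ADK concepts; the real implementation adds events and error handling):

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Shared whiteboard that flows through the pipeline.
public class AgentContext
{
    public Dictionary<string, object> State { get; } = new();
}

public abstract class BaseAgent
{
    public abstract string Name { get; }
    public abstract Task RunAsync(AgentContext context, CancellationToken ct = default);
}

// Runs sub-agents one after another, passing the same context along.
public class SequentialAgent : BaseAgent
{
    private readonly IReadOnlyList<BaseAgent> _subAgents;

    public SequentialAgent(string name, params BaseAgent[] subAgents)
    {
        Name = name;
        _subAgents = subAgents;
    }

    public override string Name { get; }

    public override async Task RunAsync(AgentContext context, CancellationToken ct = default)
    {
        foreach (var agent in _subAgents)
        {
            // Each agent reads the state left by its predecessors and writes its own.
            await agent.RunAsync(context, ct);
        }
    }
}
```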
### The Pipeline Pattern
The most powerful ADK pattern is the **Sequential Pipeline**. Think of it as an assembly line in a factory:
![agent state flow](images/agent-state-flow.png)
Each agent:
1. Receives the shared **AgentContext** (with state from previous agents)
2. Does its work
3. Updates the state
4. Passes it to the next agent
### AgentContext: The Shared Memory
`AgentContext` is like a shared whiteboard that all agents can read from and write to:
![agent context](images/agent-context.png)
This pattern eliminates the need for complex inter-agent messaging — agents simply read and write to a shared context.
### ADK Orchestration Patterns
ADK supports multiple orchestration patterns:
| Pattern | Description | Use Case |
|---------|-------------|----------|
| **Sequential** | A → B → C | Research → Analysis pipeline |
| **Parallel** | A, B, C simultaneously | Multiple searches at once |
| **Fan-Out/Fan-In** | Split → Process → Merge | Distributed research |
| **Conditional Routing** | If/else agent selection | Route by query type |
---
## How the Three Protocols Work Together
Here's the key insight: **MCP, A2A, and ADK are not competitors — they're complementary layers of a complete agent system.**
![agent ecosystem](images/agent-ecosystem.png)
Each protocol handles a different concern:
| Layer | Protocol | Question It Answers |
|-------|----------|-------------------|
| **Top** | ADK | "How are agents organized?" |
| **Middle** | A2A | "How do agents communicate?" |
| **Bottom** | MCP | "What tools can agents use?" |
---
## Our Project: Multi-Agent Research Assistant
### Built With
- **.NET 10.0** — Latest runtime
- **ABP Framework 10.0.2** — Enterprise .NET application framework
- **Semantic Kernel 1.70.0** — Microsoft's AI orchestration SDK
- **Azure OpenAI (GPT)** — LLM backbone
- **Tavily Search API** — Real-time web search
- **SQL Server** — Research persistence
- **MCP SDK** (`ModelContextProtocol` 0.8.0-preview.1)
- **A2A SDK** (`A2A` 0.3.3-preview)
### How It Works (Step by Step)
**Step 1: User Submits a Query**
For example, the user enters a research topic in the dashboard, such as *"Compare the latest AI agent frameworks: LangChain, Semantic Kernel, and AutoGen"*, and selects the execution mode: ADK-Sequential or A2A.
**Step 2: Orchestrator Activates**
The `ResearchOrchestrator` receives the query and constructs the `AgentContext`. In ADK mode, it constructs a `SequentialAgent` with two sub-agents; in A2A mode, it uses the `A2AServer` to send the tasks.
**Step 3: Researcher Agent Goes to Work**
The Researcher Agent:
- Receives the query from the context
- Uses GPT to formulate optimal search queries
- Calls the `web_search` MCP tool (powered by Tavily API)
- Collects and synthesizes raw research data
- Stores results in the shared `AgentContext`
**Step 4: Analysis Agent Takes Over**
The Analysis Agent:
- Reads the Researcher's raw data from `AgentContext`
- Uses GPT to perform deep analysis
- Generates a structured Markdown report with sections:
- Executive Summary
- Key Findings
- Detailed Analysis
- Comparative Assessment
- Conclusion and Recommendations
- Calls MCP tools to save the report to both filesystem and database
**Step 5: Results Returned**
The orchestrator collects all results and returns them to the user via the REST API. The dashboard displays the research report, analysis report, agent event timeline, and raw data.
### Two Execution Modes
Our system supports two execution modes, demonstrating both ADK and A2A approaches:
#### Mode 1: ADK Sequential Pipeline
Agents are organized as a `SequentialAgent`. State flows automatically through the pipeline via `AgentContext`. This is an in-process approach — fast and simple.
![sequential agent context flow](images/sequential-agent-context-flow-1200x700.png)
#### Mode 2: A2A Protocol-Based
Agents communicate via the A2A protocol. The Orchestrator sends `AgentTask` objects to each agent through the `A2AServer`. Each agent has its own `AgentCard` for discovery.
![orchestrator a2a routing](images/orchestrator-a2a-routing-1200x700.png)
### The Dashboard
The UI provides a complete research experience:
- **Hero Section** with system description and protocol badges
- **Architecture Cards** showing all four components (Researcher, Analyst, MCP Tools, Orchestrator)
- **Research Form** with query input and mode selection
- **Live Pipeline Status** tracking each stage of execution
- **Tabbed Results** view: Research Report, Analysis Report, Raw Data, Agent Events
- **Research History** table with past queries and their results
![Dashboard 1](images/image-3.png)
![Dashboard 2](images/image-4.png)
---
## Why ABP Framework?
We chose ABP Framework as our .NET application foundation. Here's why it was a natural fit:
| ABP Feature | How We Used It |
|-------------|---------------|
| **Auto API Controllers** | `ResearchAppService` automatically becomes REST API endpoints |
| **Dependency Injection** | Clean registration of agents, tools, orchestrator, Semantic Kernel |
| **Repository Pattern** | `IRepository<ResearchRecord>` for database operations in MCP tools |
| **Module System** | All agent ecosystem config encapsulated in `AgentEcosystemModule` |
| **Entity Framework Core** | Research record persistence with code-first migrations |
| **Built-in Auth** | OpenIddict integration for securing agent endpoints |
| **Health Checks** | Monitoring agent ecosystem health |
ABP's single-layer template gave us a solid .NET foundation with all the enterprise features and none of the unnecessary complexity for a focused AI project. That said, the agent architecture (MCP, A2A, ADK) is framework-agnostic and can be implemented in any .NET application.
---
## Key Takeaways
### 1. Protocols Are Complementary, Not Competing
MCP, A2A, and ADK solve different problems. Using them together creates a complete agent system:
- **MCP**: Standardize tool access
- **A2A**: Standardize inter-agent communication
- **ADK**: Standardize agent orchestration
### 2. Start Simple, Scale Later
Our implementation runs everything in a single process (in-process A2A). Because the communication is designed around A2A, each agent can later be extracted into its own microservice without changing the agent logic.
### 3. Shared State > Message Passing (For Simple Cases)
ADK's `AgentContext` with shared state is simpler and faster than A2A message passing for in-process scenarios. Use A2A when agents need to run as separate services.
### 4. MCP is the Real Game-Changer
The ability to define tools once and have any agent use them — with automatic discovery and structured invocations — eliminates enormous amounts of boilerplate code.
### 5. LLM Abstraction is Critical
Using Semantic Kernel's `IChatCompletionService` lets you swap between Azure OpenAI, OpenAI, Ollama, or any provider without touching agent code.
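With Semantic Kernel, the swap is a one-line change at registration time. A sketch (the deployment name, endpoint, and key are placeholders):

```csharp
using System;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var builder = Kernel.CreateBuilder();

// Azure OpenAI today...
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "gpt-4o",
    endpoint: "https://my-resource.openai.azure.com/",
    apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!);

// ...or swap the provider without touching agent code, e.g.:
// builder.AddOpenAIChatCompletion(modelId: "gpt-4o", apiKey: "...");

var kernel = builder.Build();

// Agents depend only on this abstraction, never on a concrete provider.
var chat = kernel.GetRequiredService<IChatCompletionService>();
```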
---
## What's Next?
This project demonstrates the foundation of a multi-agent system. Future enhancements could include:
- **Streaming responses** — Real-time updates as agents work (A2A supports this)
- **More specialized agents** — Code analysis, translation, fact-checking agents
- **Distributed deployment** — Each agent as a separate microservice with HTTP-based A2A
- **Agent marketplace** — Discover and integrate third-party agents via A2A Agent Cards
- **Human-in-the-loop** — Using A2A's `InputRequired` state for human approval steps
- **RAG integration** — MCP tools for vector database search
---
## Resources
| Resource | Link |
|----------|------|
| **MCP Specification** | [modelcontextprotocol.io](https://modelcontextprotocol.io) |
| **A2A Specification** | [google.github.io/A2A](https://google.github.io/A2A) |
| **ADK Documentation** | [google.github.io/adk-docs](https://google.github.io/adk-docs) |
| **ABP Framework** | [abp.io](https://abp.io) |
| **Semantic Kernel** | [github.com/microsoft/semantic-kernel](https://github.com/microsoft/semantic-kernel) |
| **MCP .NET SDK** | [NuGet: ModelContextProtocol](https://www.nuget.org/packages/ModelContextProtocol) |
| **A2A .NET SDK** | [NuGet: A2A](https://www.nuget.org/packages/A2A) |
| **Our Source Code** | [GitHub Repository](https://github.com/fahrigedik/agent-ecosystem-in-abp) |
---
## Conclusion
Building a multi-agent AI system is no longer a futuristic dream; it is achievable today with open protocols and available frameworks. By using **MCP** for tool access, **A2A** for inter-agent communication, and **ADK** for orchestration, we built a working Research Assistant.
The ABP Framework and .NET proved to be an excellent choice, providing the infrastructure we needed (DI, repositories, auto APIs, modules) and letting us focus entirely on the AI agent architecture.
The era of single LLM calls is ending, and the era of agent ecosystems begins now.
---

BIN
docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/agent-context.png

Binary file not shown.

After

Width:  |  Height:  |  Size: 48 KiB

BIN
docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/agent-ecosystem.png

Binary file not shown.

After

Width:  |  Height:  |  Size: 52 KiB

BIN
docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/agent-state-flow.png

Binary file not shown.

After

Width:  |  Height:  |  Size: 22 KiB

BIN
docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image-1.png

Binary file not shown.

After

Width:  |  Height:  |  Size: 86 KiB

BIN
docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image-2.png

Binary file not shown.

After

Width:  |  Height:  |  Size: 126 KiB

BIN
docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image-3.png

Binary file not shown.

After

Width:  |  Height:  |  Size: 54 KiB

BIN
docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image-4.png

Binary file not shown.

After

Width:  |  Height:  |  Size: 61 KiB

BIN
docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image.png

Binary file not shown.

After

Width:  |  Height:  |  Size: 169 KiB

BIN
docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/mcp-client-server-1200x700.png

Binary file not shown.

After

Width:  |  Height:  |  Size: 16 KiB

BIN
docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/orchestrator-a2a-routing-1200x700.png

Binary file not shown.

After

Width:  |  Height:  |  Size: 17 KiB

BIN
docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/orchestrator-researcher-seq-1200x700.png

Binary file not shown.

After

Width:  |  Height:  |  Size: 14 KiB

BIN
docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/sequential-agent-context-flow-1200x700.png

Binary file not shown.

After

Width:  |  Height:  |  Size: 17 KiB

2
docs/en/Community-Articles/2025-09-02-training-campaign/post.md

@ -1,6 +1,6 @@
# IMPROVE YOUR ABP SKILLS WITH 33% OFF LIVE TRAININGS!
We have exciting news to share\! As you know, we offer live training packages to help you improve your skills and knowledge of ABP. From September 8th to 19th, we are giving you 33% OFF our live trainings, so you can learn more about the product at a discounted price\!
We have exciting news to share\! As you know, we offer live training packages to help you improve your skills and knowledge of ABP. For a limited time, we are giving you 33% OFF our live trainings, so you can learn more about the product at a discounted price\!
#### Why Join ABP.IO Training?

4
docs/en/Community-Articles/2025-12-18-Announcement-AIMAnagement/post.md

@ -76,7 +76,7 @@ Installation is straightforward using the [ABP Studio](https://abp.io/studio). Y
- Client Components
- Integration to Startup Templates
### v10.1
### v10.1
- Blazor UI
- Angular UI
- Resource based authorization on Workspaces
@ -103,4 +103,4 @@ The AI Management Module is available now for ABP Team and higher license holder
---
*The AI Management Module is currently in preview. We're excited to hear your feedback as we continue to improve and add new features!*
*The AI Management Module is currently in preview. We're excited to hear your feedback as we continue to improve and add new features!*

BIN
docs/en/Community-Articles/2025-12-18-Implementing-Multiple-Global-Query-Filters-With-Entity-Framework-Core/images/cover.png

Binary file not shown.

After

Width:  |  Height:  |  Size: 358 KiB

728
docs/en/Community-Articles/2025-12-18-Implementing-Multiple-Global-Query-Filters-With-Entity-Framework-Core/post.md

@ -0,0 +1,728 @@
# Implementing Multiple Global Query Filters with Entity Framework Core
Global query filters are one of Entity Framework Core's most powerful features for automatically filtering data based on certain conditions. They allow you to define filter criteria at the entity level that are automatically applied to all LINQ queries, making it impossible for developers to accidentally forget to include important filtering logic. In this article, we'll explore how to implement multiple global query filters in ABP Framework, covering built-in filters, custom filters, and performance optimization techniques.
By the end of this guide, you'll understand how ABP Framework's data filtering system works, how to create custom global query filters for your specific business requirements, how to combine multiple filters effectively, and how to optimize filter performance using user-defined functions.
## Understanding Global Query Filters in EF Core
Global query filters were introduced in EF Core 2.0 and allow you to automatically append LINQ predicates to queries generated for an entity type. This is particularly useful for scenarios like multi-tenancy, soft delete, data isolation, and row-level security.
In traditional applications, developers must remember to add filter conditions manually to every query:
```csharp
// Manual filtering - error-prone and tedious
var activeBooks = await _bookRepository
    .GetListAsync(b => b.IsDeleted == false && b.TenantId == currentTenantId);
```
With global query filters, this logic is applied automatically:
```csharp
// Filter is applied automatically - no manual filtering needed
var activeBooks = await _bookRepository.GetListAsync();
```
ABP Framework provides a sophisticated data filtering system built on top of EF Core's global query filters, with built-in support for soft delete, multi-tenancy, and the ability to easily create custom filters.
### Important: Plain EF Core vs ABP Composition
In plain EF Core, calling `HasQueryFilter` multiple times for the same entity does **not** create multiple active filters. The last call replaces the previous one (unless you use newer named-filter APIs in recent EF Core versions).
ABP provides `HasAbpQueryFilter` to compose query filters safely. This method combines your custom filter with ABP's built-in filters (such as `ISoftDelete` and `IMultiTenant`) and with other `HasAbpQueryFilter` calls.
## ABP Framework's Data Filtering System
ABP's data filtering system is defined in the `Volo.Abp.Data` namespace and provides a consistent way to manage filters across your application. The core interface is `IDataFilter<TFilter>`, which allows you to enable or disable filters programmatically.
### Built-in Filters
ABP Framework comes with several built-in filters:
1. **ISoftDelete**: Automatically filters out soft-deleted entities
2. **IMultiTenant**: Automatically filters entities by current tenant (for SaaS applications)
3. **IIsActive**: Filters entities based on active status
Let's look at how these are implemented in the ABP framework:
The `ISoftDelete` interface is straightforward:
```csharp
namespace Volo.Abp;

public interface ISoftDelete
{
    bool IsDeleted { get; }
}
```
Any entity implementing this interface will automatically have deleted records filtered out of queries.
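For example, a minimal Book entity (hypothetical, for illustration) opts into soft delete just by implementing the interface; ABP then sets `IsDeleted` to `true` on delete instead of removing the row, and the global query filter hides such rows:

```csharp
using System;
using Volo.Abp;
using Volo.Abp.Domain.Entities;

public class Book : AggregateRoot<Guid>, ISoftDelete
{
    public string Name { get; set; }

    // Set by ABP when the entity is "deleted"; filtered out of all queries.
    public bool IsDeleted { get; set; }
}
```

In practice, base classes like `FullAuditedAggregateRoot<TKey>` already implement `ISoftDelete` for you.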
### Enabling and Disabling Filters
ABP provides the `IDataFilter<TFilter>` service to control filter behavior at runtime:
```csharp
public class BookAppService : ApplicationService
{
    private readonly IDataFilter<ISoftDelete> _softDeleteFilter;
    private readonly IRepository<Book, Guid> _bookRepository;

    public BookAppService(
        IDataFilter<ISoftDelete> softDeleteFilter,
        IRepository<Book, Guid> bookRepository)
    {
        _softDeleteFilter = softDeleteFilter;
        _bookRepository = bookRepository;
    }

    public async Task<List<Book>> GetAllBooksIncludingDeletedAsync()
    {
        // Temporarily disable the soft delete filter
        using (_softDeleteFilter.Disable())
        {
            return await _bookRepository.GetListAsync();
        }
    }

    public async Task<List<Book>> GetActiveBooksAsync()
    {
        // Filter is enabled by default - soft-deleted items are excluded
        return await _bookRepository.GetListAsync();
    }
}
```
You can also check if a filter is enabled and enable/disable it programmatically:
```csharp
public async Task ProcessBooksAsync()
{
    // Check if filter is enabled
    if (_softDeleteFilter.IsEnabled)
    {
        // Enable or disable explicitly
        _softDeleteFilter.Enable();
        // or
        _softDeleteFilter.Disable();
    }
}
```
## Creating Custom Global Query Filters
Now let's create custom global query filters for a real-world scenario. Imagine we have a library management system where we need to filter books based on:
1. **Publication Status**: Only show published books in public areas
2. **User's Department**: Users can only see books from their department
3. **Approval Status**: Only show approved content
### Step 1: Define Filter Interfaces
First, create the filter interfaces. You can define them in the same file as your entity or in separate files:
```csharp
// Can be placed in the same file as Book entity or in separate files
namespace Library;

public interface IPublishable
{
    bool IsPublished { get; }
    DateTime PublishDate { get; set; }
}

public interface IDepartmentRestricted
{
    Guid DepartmentId { get; }
}

public interface IApproveable
{
    bool IsApproved { get; }
}

public interface IPublishedFilter
{
}

public interface IApprovedFilter
{
}
```
`IPublishable` / `IApproveable` are implemented by entities and define entity properties.
`IPublishedFilter` / `IApprovedFilter` are filter-state interfaces used with `IDataFilter` so you can enable/disable those filters at runtime.
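A Book entity for this scenario then implements the entity-side interfaces. A sketch (the property set is illustrative):

```csharp
using System;
using Volo.Abp.Domain.Entities.Auditing;

namespace Library;

public class Book : FullAuditedAggregateRoot<Guid>,
    IPublishable, IDepartmentRestricted, IApproveable
{
    public string Title { get; set; }

    // IPublishable
    public bool IsPublished { get; set; }
    public DateTime PublishDate { get; set; }

    // IDepartmentRestricted
    public Guid DepartmentId { get; set; }

    // IApproveable
    public bool IsApproved { get; set; }
}
```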
### Step 2: Add Filter Expressions to DbContext
Now let's add the filter expressions to your existing DbContext. First, here's how to use `HasAbpQueryFilter` to create **always-on** filters (they cannot be toggled at runtime):
```csharp
// MyProjectDbContext.cs
using System;
using Microsoft.EntityFrameworkCore;
using Volo.Abp.EntityFrameworkCore;
using Volo.Abp.Data;
using Volo.Abp.EntityFrameworkCore.Modeling;

namespace Library;

public class LibraryDbContext : AbpDbContext<LibraryDbContext>
{
    public DbSet<Book> Books { get; set; }
    public DbSet<Department> Departments { get; set; }
    public DbSet<Author> Authors { get; set; }

    public LibraryDbContext(DbContextOptions<LibraryDbContext> options)
        : base(options)
    {
    }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        base.OnModelCreating(builder);

        builder.Entity<Book>(b =>
        {
            b.ToTable("Books");
            b.ConfigureByConvention();

            // HasAbpQueryFilter creates ALWAYS-ACTIVE filters.
            // These cannot be toggled at runtime via IDataFilter.
            b.HasAbpQueryFilter(book =>
                book.IsPublished &&
                book.PublishDate <= DateTime.UtcNow);

            b.HasAbpQueryFilter(book => book.IsApproved);
        });

        builder.Entity<Department>(b =>
        {
            b.ToTable("Departments");
            b.ConfigureByConvention();
        });
    }
}
```
> **Note:** Using `HasAbpQueryFilter` alone creates filters that are always active and cannot be toggled at runtime. This approach is simpler but less flexible. For toggleable filters, see Step 3 below.
### Step 3: Make Filters Toggleable (Optional)
If you need filters that can be enabled/disabled at runtime via `IDataFilter<T>`, override `ShouldFilterEntity` and `CreateFilterExpression` instead of (or in addition to) `HasAbpQueryFilter`:
```csharp
// MyProjectDbContext.cs
using System;
using System.Linq.Expressions;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata;
using Microsoft.EntityFrameworkCore.Metadata.Builders;
using Volo.Abp.EntityFrameworkCore;

namespace Library;

public class LibraryDbContext : AbpDbContext<LibraryDbContext>
{
    protected bool IsPublishedFilterEnabled => DataFilter?.IsEnabled<IPublishedFilter>() ?? false;
    protected bool IsApprovedFilterEnabled => DataFilter?.IsEnabled<IApprovedFilter>() ?? false;

    protected override bool ShouldFilterEntity<TEntity>(IMutableEntityType entityType)
    {
        if (typeof(IPublishable).IsAssignableFrom(typeof(TEntity)))
        {
            return true;
        }

        if (typeof(IApproveable).IsAssignableFrom(typeof(TEntity)))
        {
            return true;
        }

        return base.ShouldFilterEntity<TEntity>(entityType);
    }

    protected override Expression<Func<TEntity, bool>>? CreateFilterExpression<TEntity>(
        ModelBuilder modelBuilder,
        EntityTypeBuilder<TEntity> entityTypeBuilder)
        where TEntity : class
    {
        var expression = base.CreateFilterExpression<TEntity>(modelBuilder, entityTypeBuilder);

        if (typeof(IPublishable).IsAssignableFrom(typeof(TEntity)))
        {
            Expression<Func<TEntity, bool>> publishFilter = e =>
                !IsPublishedFilterEnabled ||
                (
                    EF.Property<bool>(e, nameof(IPublishable.IsPublished)) &&
                    EF.Property<DateTime>(e, nameof(IPublishable.PublishDate)) <= DateTime.UtcNow
                );

            expression = expression == null
                ? publishFilter
                : QueryFilterExpressionHelper.CombineExpressions(expression, publishFilter);
        }

        if (typeof(IApproveable).IsAssignableFrom(typeof(TEntity)))
        {
            Expression<Func<TEntity, bool>> approvalFilter = e =>
                !IsApprovedFilterEnabled || EF.Property<bool>(e, nameof(IApproveable.IsApproved));

            expression = expression == null
                ? approvalFilter
                : QueryFilterExpressionHelper.CombineExpressions(expression, approvalFilter);
        }

        return expression;
    }
}
```
This mapping step is what connects `IDataFilter<IPublishedFilter>` and `IDataFilter<IApprovedFilter>` to entity-level predicates. Without this step, `HasAbpQueryFilter` expressions remain always active.
> **Important:** Note that we use `DateTime` (not `DateTime?`) in the filter expression to match the entity property type. Adjust accordingly if your entity uses nullable `DateTime?`.
### Step 4: Disable Custom Filters with IDataFilter
Once custom filters are mapped to the ABP data-filter pipeline, you can disable them just like built-in filters:
```csharp
public class BookAppService : ApplicationService
{
    private readonly IRepository<Book, Guid> _bookRepository;
    private readonly IDataFilter<IPublishedFilter> _publishedFilter;
    private readonly IDataFilter<IApprovedFilter> _approvedFilter;

    public BookAppService(
        IRepository<Book, Guid> bookRepository,
        IDataFilter<IPublishedFilter> publishedFilter,
        IDataFilter<IApprovedFilter> approvedFilter)
    {
        _bookRepository = bookRepository;
        _publishedFilter = publishedFilter;
        _approvedFilter = approvedFilter;
    }

    public async Task<List<Book>> GetIncludingUnpublishedAndUnapprovedAsync()
    {
        using (_publishedFilter.Disable())
        using (_approvedFilter.Disable())
        {
            return await _bookRepository.GetListAsync();
        }
    }
}
```
## Advanced: Multiple Filters with User-Defined Functions
Starting from ABP v8.3, you can use user-defined function (UDF) mapping for better performance. This approach generates more efficient SQL and allows EF Core to create better execution plans.
### Step 1: Enable UDF Mapping
First, configure your module to use UDF mapping:
```csharp
// MyProjectModule.cs
using Volo.Abp.EntityFrameworkCore;
using Volo.Abp.EntityFrameworkCore.GlobalFilters;
using Microsoft.Extensions.DependencyInjection;

namespace Library;

[DependsOn(
    typeof(AbpEntityFrameworkCoreModule),
    typeof(AbpDddDomainModule)
)]
public class LibraryModule : AbpModule
{
    public override void ConfigureServices(ServiceConfigurationContext context)
    {
        Configure<AbpEfCoreGlobalFilterOptions>(options =>
        {
            options.UseDbFunction = true; // Enable UDF mapping
        });
    }
}
```
```
### Step 2: Define DbFunctions
Create static methods that EF Core will map to database functions:
```csharp
// LibraryDbFunctions.cs
using System;
using Microsoft.EntityFrameworkCore;

namespace Library;

public static class LibraryDbFunctions
{
    public static bool IsPublishedFilter(bool isPublished, DateTime? publishDate)
    {
        return isPublished && (publishDate == null || publishDate <= DateTime.UtcNow);
    }

    public static bool IsApprovedFilter(bool isApproved)
    {
        return isApproved;
    }

    public static bool DepartmentFilter(Guid entityDepartmentId, Guid userDepartmentId)
    {
        return entityDepartmentId == userDepartmentId;
    }
}
```
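EF Core maps these CLR methods to SQL functions, but the functions themselves must exist in the database. A hedged sketch of a migration creating them for SQL Server (the function names must match what EF Core maps via `HasDbFunction`, which defaults to the CLR method name; verify against your generated model before relying on this):

```csharp
using Microsoft.EntityFrameworkCore.Migrations;

public partial class AddFilterFunctions : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // Scalar function backing LibraryDbFunctions.IsPublishedFilter.
        migrationBuilder.Sql(@"
CREATE FUNCTION [dbo].[IsPublishedFilter](@isPublished BIT, @publishDate DATETIME2)
RETURNS BIT
AS
BEGIN
    RETURN CASE
        WHEN @isPublished = 1 AND (@publishDate IS NULL OR @publishDate <= GETUTCDATE())
        THEN 1 ELSE 0 END
END");

        // Scalar function backing LibraryDbFunctions.IsApprovedFilter.
        migrationBuilder.Sql(@"
CREATE FUNCTION [dbo].[IsApprovedFilter](@isApproved BIT)
RETURNS BIT
AS
BEGIN
    RETURN @isApproved
END");
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.Sql("DROP FUNCTION [dbo].[IsPublishedFilter]");
        migrationBuilder.Sql("DROP FUNCTION [dbo].[IsApprovedFilter]");
    }
}
```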
### Step 3: Apply UDF Filters
Update your DbContext to use the UDF-based filters:
```csharp
// MyProjectDbContext.cs
protected override void OnModelCreating(ModelBuilder builder)
{
    base.OnModelCreating(builder);

    // Map CLR methods to SQL scalar functions.
    // Create matching SQL functions in a migration.
    var isPublishedMethod = typeof(LibraryDbFunctions).GetMethod(
        nameof(LibraryDbFunctions.IsPublishedFilter),
        new[] { typeof(bool), typeof(DateTime?) })!;
    builder.HasDbFunction(isPublishedMethod);

    var isApprovedMethod = typeof(LibraryDbFunctions).GetMethod(
        nameof(LibraryDbFunctions.IsApprovedFilter),
        new[] { typeof(bool) })!;
    builder.HasDbFunction(isApprovedMethod);

    builder.Entity<Book>(b =>
    {
        b.ToTable("Books");
        b.ConfigureByConvention();

        // ABP way: define separate filters. HasAbpQueryFilter composes them.
        b.HasAbpQueryFilter(book =>
            LibraryDbFunctions.IsPublishedFilter(book.IsPublished, book.PublishDate));
        b.HasAbpQueryFilter(book =>
            LibraryDbFunctions.IsApprovedFilter(book.IsApproved));
    });
}
```
This approach generates cleaner SQL and improves query performance, especially in complex scenarios with multiple filters.
## Working with Complex Filter Combinations
When combining multiple filters, it's important to understand how they interact. Let's explore some common scenarios.
### Combining Tenant and Department Filters
In a multi-tenant application, you might need to combine tenant isolation with department-level access control:
```csharp
public class BookAppService : ApplicationService
{
private readonly IRepository<Book, Guid> _bookRepository;
private readonly IDataFilter<IMultiTenant> _tenantFilter;
private readonly ICurrentUser _currentUser;
public BookAppService(
IRepository<Book, Guid> bookRepository,
IDataFilter<IMultiTenant> tenantFilter,
ICurrentUser currentUser)
{
_bookRepository = bookRepository;
_tenantFilter = tenantFilter;
_currentUser = currentUser;
}
public async Task<List<BookDto>> GetMyDepartmentBooksAsync()
{
var userDepartmentId = GetUserDepartmentId(_currentUser);
// Get all books without department filter, then filter in memory
// (for scenarios where you need custom filter logic)
using (_tenantFilter.Disable()) // Optional: disable tenant filter if needed
{
var allBooks = await _bookRepository.GetListAsync();
// Apply department filter in memory (custom logic)
var departmentBooks = allBooks
.Where(b => b.DepartmentId == userDepartmentId)
.ToList();
return ObjectMapper.Map<List<Book>, List<BookDto>>(departmentBooks);
}
}
private Guid GetUserDepartmentId(ICurrentUser currentUser)
{
    // Get the user's department from claims (guard against a missing claim)
    var departmentClaim = currentUser.FindClaim("DepartmentId")
        ?? throw new BusinessException("Current user has no DepartmentId claim.");
    return Guid.Parse(departmentClaim.Value);
}
}
```
### Filter Priority and Override
Sometimes you need to override filters in specific scenarios. ABP provides a flexible way to handle this:
```csharp
public async Task<Book> GetBookForEditingAsync(Guid id)
{
// Disable soft delete filter to get deleted records for restoration
using (DataFilter.Disable<ISoftDelete>())
{
return await _bookRepository.GetAsync(id);
}
}
public async Task<Book> GetBookIncludingUnpublishedAsync(Guid id)
{
    // Use GetQueryableAsync to customize the query
    var query = await _bookRepository.GetQueryableAsync();
    // IgnoreQueryFilters() bypasses ALL global query filters
    // (including soft delete and multi-tenancy), so use it with care
    var book = await query
        .IgnoreQueryFilters()
        .FirstOrDefaultAsync(b => b.Id == id);
    return book;
}
```
## Best Practices for Multiple Global Query Filters
When implementing multiple global query filters, consider these best practices:
### 1. Keep Filters Simple
Complex filter expressions can significantly impact query performance. Keep each condition focused on a single concern. In ABP, you can define them separately with `HasAbpQueryFilter`, which composes with ABP's built-in filters:
```csharp
// Good (ABP): separate, focused filters composed by HasAbpQueryFilter
// (lambda parameter renamed to avoid shadowing the entity builder 'b')
b.HasAbpQueryFilter(book => book.IsPublished);
b.HasAbpQueryFilter(book => book.IsApproved);
b.HasAbpQueryFilter(book => book.DepartmentId == userDeptId);

// Avoid: calling HasQueryFilter multiple times for the same entity
// in plain EF Core (the last call replaces the previous one)
b.HasQueryFilter(book => book.IsPublished);
b.HasQueryFilter(book => book.IsApproved);
```
### 2. Use Indexing
Ensure your database has appropriate indexes for filtered columns:
```csharp
builder.Entity<Book>(b =>
{
    b.HasIndex(book => book.IsPublished);
    b.HasIndex(book => book.IsApproved);
    b.HasIndex(book => book.DepartmentId);
    b.HasIndex(book => new { book.IsPublished, book.PublishDate });
});
```
### 3. Consider Performance Impact
Use UDF mapping for better performance with complex filters. Profile your queries and analyze execution plans.
### 4. Document Filter Behavior
Clearly document which filters are applied to each entity to help developers understand the behavior:
```csharp
/// <summary>
/// Book entity with the following global query filters:
/// - ISoftDelete: Automatically excludes soft-deleted books
/// - IMultiTenant: Automatically filters by current tenant
/// - IPublishable: Excludes unpublished books (based on IsPublished and PublishDate)
/// - IApproveable: Excludes unapproved books (based on IsApproved)
/// </summary>
/// <remarks>
/// Filter interfaces (IPublishable, IApproveable, IPublishedFilter, IApprovedFilter)
/// are defined in Step 1: Define Filter Interfaces
/// </remarks>
public class Book : AuditedAggregateRoot<Guid>, ISoftDelete, IMultiTenant, IPublishable, IApproveable
{
public string Name { get; set; }
public BookType Type { get; set; }
public DateTime PublishDate { get; set; }
public float Price { get; set; }
public bool IsPublished { get; set; }
public bool IsApproved { get; set; }
public Guid? TenantId { get; set; }
public bool IsDeleted { get; set; }
public Guid DepartmentId { get; set; }
}
```
## Testing Global Query Filters
Testing with global query filters can be challenging. Here's how to do it effectively:
### Unit Testing Filters
```csharp
[Fact]
public void Book_QueryFilter_Should_Filter_Unpublished()
{
var options = new DbContextOptionsBuilder<BookStoreDbContext>()
.UseInMemoryDatabase(databaseName: Guid.NewGuid().ToString()) // unique DB name isolates each test
.Options;
using (var context = new BookStoreDbContext(options))
{
context.Books.Add(new Book { Name = "Published Book", IsPublished = true });
context.Books.Add(new Book { Name = "Unpublished Book", IsPublished = false });
context.SaveChanges();
}
using (var context = new BookStoreDbContext(options))
{
// Query with filter enabled (default)
var publishedBooks = context.Books.ToList();
Assert.Single(publishedBooks);
Assert.Equal("Published Book", publishedBooks[0].Name);
}
}
```
### Integration Testing with Filter Control
```csharp
[Fact]
public async Task Should_Get_Deleted_Book_When_Filter_Disabled()
{
var dataFilter = GetRequiredService<IDataFilter>();
// Arrange
var book = await _bookRepository.InsertAsync(
new Book { Name = "Test Book" },
autoSave: true
);
await _bookRepository.DeleteAsync(book);
// Act - with filter disabled
using (dataFilter.Disable<ISoftDelete>())
{
var deletedBook = await _bookRepository
.FirstOrDefaultAsync(b => b.Id == book.Id);
deletedBook.ShouldNotBeNull();
deletedBook.IsDeleted.ShouldBeTrue();
}
}
```
### Testing Custom Global Query Filters
Here's a complete example of testing custom toggleable filters:
```csharp
[Fact]
public async Task Should_Filter_Unpublished_Books_By_Default()
{
// Default: filters are enabled
var result = await WithUnitOfWorkAsync(async () =>
{
var bookRepository = GetRequiredService<IRepository<Book, Guid>>();
return await bookRepository.GetListAsync();
});
// Only published and approved books should be returned
result.All(b => b.IsPublished).ShouldBeTrue();
result.All(b => b.IsApproved).ShouldBeTrue();
}
[Fact]
public async Task Should_Return_All_Books_When_Filter_Disabled()
{
var result = await WithUnitOfWorkAsync(async () =>
{
// Disable the published filter to see unpublished books
using (_publishedFilter.Disable())
{
var bookRepository = GetRequiredService<IRepository<Book, Guid>>();
return await bookRepository.GetListAsync();
}
});
// Should include unpublished books
result.Any(b => b.Name == "Unpublished Book").ShouldBeTrue();
}
[Fact]
public async Task Should_Combine_Filters_Correctly()
{
// Test combining multiple filter disables
using (_publishedFilter.Disable())
using (_approvedFilter.Disable())
{
var bookRepository = GetRequiredService<IRepository<Book, Guid>>();
var allBooks = await bookRepository.GetListAsync();
// All books should be visible
allBooks.Count.ShouldBe(5);
}
}
```
> **Tip:** When using ABP's test base, inject `IDataFilter<IPublishedFilter>` and `IDataFilter<IApprovedFilter>` to control filters in your tests.
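For reference, the `_publishedFilter` and `_approvedFilter` fields used in the tests above can be resolved in the test class constructor. A minimal sketch, where the test class and base class names are illustrative:

```csharp
public class BookFilterTests : BookStoreApplicationTestBase // illustrative base class
{
    private readonly IDataFilter<IPublishedFilter> _publishedFilter;
    private readonly IDataFilter<IApprovedFilter> _approvedFilter;

    public BookFilterTests()
    {
        // Resolve the typed filter controllers from the test service provider
        _publishedFilter = GetRequiredService<IDataFilter<IPublishedFilter>>();
        _approvedFilter = GetRequiredService<IDataFilter<IApprovedFilter>>();
    }

    // ... the [Fact] methods shown above go here ...
}
```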
## Key Takeaways
**Global query filters automatically apply filter criteria to all queries**, reducing developer error and ensuring consistent data filtering across your application.
**ABP Framework provides a sophisticated data filtering system** with built-in support for soft delete (`ISoftDelete`) and multi-tenancy (`IMultiTenant`), plus the ability to create custom filters.
**Use `IDataFilter<TFilter>` to control filters at runtime**, enabling or disabling filters as needed for specific operations.
**To make custom filters toggleable, override `ShouldFilterEntity` and `CreateFilterExpression`** in your DbContext. Using only `HasAbpQueryFilter` creates filters that are always active.
**Combine multiple filters carefully** and consider performance implications, especially with complex filter expressions.
**Leverage user-defined function (UDF) mapping** for better SQL generation and query performance, available since ABP v8.3.
**Always test filter behavior** to ensure filters work as expected in different scenarios, including edge cases.
## Conclusion
Global query filters are essential for building secure, well-isolated applications. ABP Framework's data filtering system provides a robust foundation that builds on EF Core's capabilities while adding convenient features like runtime filter control and UDF mapping optimization.
By implementing multiple global query filters strategically, you can ensure data isolation, simplify your query logic, and reduce the risk of accidentally exposing unauthorized data. Remember to keep filters simple, add appropriate database indexes, and test thoroughly to maintain optimal performance.
Start implementing global query filters in your ABP applications today to leverage automatic data filtering across all your repositories and queries.
### See Also
- [ABP Data Filtering Documentation](https://abp.io/docs/latest/framework/fundamentals/data-filtering)
- [EF Core Global Query Filters](https://learn.microsoft.com/en-us/ef/core/querying/filters)
- [ABP Multi-Tenancy Documentation](https://abp.io/docs/latest/framework/fundamentals/multi-tenancy)
- [Using User-defined function mapping for global filters](https://abp.io/docs/latest/framework/infrastructure/data-filtering#using-user-defined-function-mapping-for-global-filters)
---
## References
- [ABP Framework Documentation](https://docs.abp.io)
- [Entity Framework Core Documentation](https://docs.microsoft.com/en-us/ef/core/)
- [EF Core Global Query Filters](https://learn.microsoft.com/en-us/ef/core/querying/filters)
- [User-defined Function Mapping](https://learn.microsoft.com/en-us/ef/core/querying/user-defined-function-mapping)

docs/en/Community-Articles/2025-12-18-Implementing-Multiple-Global-Query-Filters-With-Entity-Framework-Core/summary.md
Global query filters in Entity Framework Core allow automatic data filtering at the entity level. This article covers ABP Framework's data filtering system, including built-in filters (ISoftDelete, IMultiTenant), custom filter implementation, and performance optimization using user-defined functions.

docs/en/Community-Articles/2026-01-11/article.md
# Async Chain of Persistence Pattern: Designing for Failure in Event-Driven Systems
## Introduction
Messages can be lost during processing when you use asynchronous messaging or event handling.
The Async Chain of Persistence Pattern prevents this by ensuring that the message is persistently stored at every step of the workflow.
## The Fundamental Principle of the Pattern
The Async Chain of Persistence Pattern guarantees that no message is ever lost by ensuring that the message is always persistently stored at every step of the workflow. This is where the pattern gets its name. A message cannot be removed from its previous location until it is confirmed to be persistently stored in the subsequent stages of the chain.
It is commonly used in event-driven systems and message-driven systems.
### Event-Driven versus Message-Driven Systems
To understand the pattern, it's important to know the differences between events and messages.
#### The Core Difference
**Event:** Says "something happened", describes the past. Example: `OrderPlaced`, `PaymentCompleted`
**Message:** Says "do this", commands for the future. Example: `CreateOrder`, `SendEmail`
| Property | Event | Message |
|----------|-------|---------|
| **Coupling** | Loose - no one knows who's listening | Tighter - there's a specific receiver |
| **Publishing** | Pub/Sub - 0-N services listen | Point-to-Point - usually 1 service |
| **Tense** | Past tense, immutable | Present/future tense |
| **Error Handling** | If one consumer fails, others continue | If not processed, system breaks |
### Relationship with Async Chain of Persistence
**In event-driven systems:** Each service receives the event → persists it → publishes a new event
![Event-Driven Systems](event-driven-systems.png)
**In message-driven systems:** At each step, the message is kept safe on queue + disk
In both systems, the goal is the same: no message should be lost!
![Message-Driven Systems](message-driven-systems.png)
---
## When Do Messages Get Lost?
There are 3 main scenarios for message loss:
### 1. While Processing a Message (Receiving a Message)
By default, a message is automatically acknowledged, and therefore deleted from the queue, as soon as it is delivered. If a fatal or unrecoverable error occurs during processing, or the service instance crashes, that message is lost.
### 2. Message Broker Crashes
Most message brokers default to non-persistent delivery: messages are held only in the broker's memory, which gives faster responses and higher throughput. However, if the broker process fails, every non-persistent message is lost permanently.
### 3. Event Chaining
In event-driven systems, a service publishes a derived event after completing an operation. A message can be lost in two ways in this scenario:
1. **Risk of Asynchronous Send:** The publish operation is often carried out asynchronously. If a fatal error occurs before the publishing service receives an acknowledgement for the publish, there is no way to know whether the message actually reached the message broker.
2. **Transaction Coordination:** If an error occurs after the database commit but before the derived event is published, the derived event is lost.
## Implementing the Pattern: 4 Critical Steps
Four critical steps are required to implement the Async Chain of Persistence Pattern:
### 1. Message Persistence
The first step in ensuring that no messages are lost is to mark messages as PERSISTENT: when the message broker receives such a message, it saves it to disk.
```javascript
var delivery_mode = PERSISTENT
var producer = create_producer(delivery_mode)
// All sent messages are persisted on the message broker
producer.send_message(APPLY_PAYMENT)
// Alternatively
var delivery_mode = PERSISTENT
var producer = create_producer()
producer.send_message(APPLY_PAYMENT, delivery_mode)
```
With this approach, even if the message broker crashes, all messages will still be there when it comes back up.
### 2. Client Acknowledgement Mode
Instead of the default "auto acknowledge" mode, "client acknowledgement" mode should be used. In this mode, a received message is retained in the queue until the processing service explicitly acknowledges it.
Client acknowledgement mode ensures that a message is not lost while the service is processing it. If a fatal error occurs during processing, the service exits without sending an acknowledgement, and the broker redelivers the message.
**Important Note:** The message should be acknowledged as soon as the processing operation completes, so that it is not redelivered and processed a second time.
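Following the article's pseudocode style, the receive side under client acknowledgement might look like this (the consumer API names are illustrative):

```javascript
var consumer = create_consumer(ack_mode = CLIENT_ACKNOWLEDGE)
var message = consumer.receive_message()
process_message(message)
// Acknowledge only after processing has fully completed;
// if the service crashes before this line, the broker redelivers the message
message.acknowledge()
```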
### 3. Synchronous Send
Synchronous send should be preferred over asynchronous send.
Although a synchronous send takes longer because it is a blocking call, it guarantees that the message broker has received the message and persisted it to disk before the sender continues.
```javascript
// Blocking call to publish the derived event
var ack = publish_event(PAYMENT_APPLIED)
if (ack not_successful) {
retry_or_persist(PAYMENT_APPLIED)
}
```
With this approach, the risk of message loss during event chaining is eliminated.
### 4. Last Participant Support
This is the most complex step of the Async Chain of Persistence pattern. It determines when the message should be acknowledged.
#### For Message-Driven Systems:
```javascript
var message = receive_message()
process_message(message)
database.commit()
message.acknowledge()
```
Recommended order: **commit first, ack last.** Otherwise, if the database operation fails, the message will be lost because it has already been removed from the queue.
#### For Event-Driven Systems
In event-driven systems, there are two distinct responsibilities: acknowledging the incoming event and publishing the derived event. The party that publishes the derived event should be treated as the "last participant."
```javascript
var event = receive_event()
process_event(event)
database.commit()
event.acknowledge()
var ack = publish_event(PAYMENT_APPLIED)
if (ack not_successful) {
retry_or_persist(PAYMENT_APPLIED)
}
```
In this sequence, the original event is acknowledged after it is completed. If publishing the derived event fails, it can be retried or persisted for later delivery.
---
## Trade-offs
### Advantages
**Preventing Message Loss:**
The major benefit of this pattern is that messages cannot be lost while they are being processed. This is a serious concern in asynchronous systems, and the pattern rules it out.
### Disadvantages
#### 1. Possible Duplicate Messages
Enabling the client-acknowledgement mode can result in the processing of the same message multiple times. If the service instance fails after the database commit but before the message could be acknowledged, the message will be repeated and duplicates can be processed.
**Solution:** To detect whether an arriving message has already been processed, record processed message IDs and check them on arrival (an idempotent consumer). The drawback is one extra read operation per message.
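In the same pseudocode style, an idempotent consumer can be sketched as follows (the `processed_ids` store is an assumption; in practice it would be a database table updated in the same transaction as the business data):

```javascript
var message = receive_message()
if (processed_ids.contains(message.id)) {
  // Duplicate redelivery: skip processing, just acknowledge
  message.acknowledge()
} else {
  process_message(message)
  // Record the ID in the same transaction as the business data
  processed_ids.add(message.id)
  database.commit()
  message.acknowledge()
}
```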
#### 2. Performance and Throughput
It has been noted that sending a persistent message can take **up to four times longer** than sending a non-persistent one.
Persistent messages also affect the performance of sending and reading. Message brokers commonly keep message data in memory for fast reads, but there is no guarantee that a message stays resident in memory; that depends on memory size, among other factors.
#### 3. Impact of Synchronous Send
Because a synchronous send blocks until a confirmation is received from the broker, the sender can do no other work in the meantime. Message persistence makes this delay even more pronounced.
#### 4. Overall Scalability
Upon receipt, the message broker must spend extra time persisting each message to disk, which negatively affects scalability. Persistent messages always yield lower total throughput, which can limit scalability under heavy load and high message volumes.
---
## Conclusion
The Async Chain of Persistence Pattern provides a powerful solution for preventing message loss. Although it has negative effects on performance, throughput, and scalability, these trade-offs are generally acceptable in systems where data loss is critical.
Before implementing the pattern, carefully analyze your system requirements:
- **How critical is message loss?**
- **What are the performance and throughput requirements?**
- **How should the system behave in case of duplicate processing?**
The answers to these questions will help you determine whether the Async Chain of Persistence Pattern is suitable for your system.
## Sample Project
To see an example project where this pattern is implemented, you can check out the repository:
🔗 **[GitHub Repository](https://github.com/fahrigedik/SoftwareArchitecturePatterns)**

docs/en/Community-Articles/2026-01-16-meet-abio-at-ndc-london/post.md
We are thrilled to announce that **ABP.IO will be sponsoring [NDC London 2026](https://ndclondon.com/),** making the start of 2026 a very exciting time for us!
NDC London takes place from **26th-30th January 2026 at the Queen Elizabeth II Centre.** This five-day event for software developers will feature over 90 speakers and 100 sessions. We are excited to be part of this amazing event once more as devoted supporters of the software development community!
## Conference Tracks, Topics, and What Developers Can Expect
Developers attending **NDC London 2026** can expect five focused tracks packed with practical, real-world sessions. The conference covers the modern development stack, including **.NET, JavaScript, Cloud, DevOps, Security, Testing, UX, Web**, and emerging technologies, delivered by industry experts with actionable insights developers can apply immediately.
## Discover Previous NDC Events
We have shared **our takeaways from past NDC events** and other conferences [**here**](https://abp.io/community/events/sponsored#gsc.tab=0). You can check them out to learn what we discovered along the way!
## Stop By Our Booth and Say Hello
We can’t wait to meet fellow developers at NDC London 2026, have meaningful conversations, and connect in person. **If you are stopping by our booth, don’t miss our raffle!** We will be giving away a nice surprise during the event!
We are looking forward to meeting you there and sharing a few great days focused on software development. See you there!

docs/en/Community-Articles/2026-01-19-Trend-PDF-Libraries-For-CSharp/article.md
# Which Open-Source PDF Libraries Are Popular Right Now? A Data-Driven Look at the .NET PDF Space
So you're looking for a PDF library in .NET, right? Here's the thing - just because something has a million downloads doesn't mean it's what you should use *today*. I'm looking at **recent download momentum** (how many people are actually using it NOW via NuGet) and **GitHub activity** (are they still maintaining this thing or did they abandon it?).
I pulled data from the last ~90 days for the main players in the .NET PDF space. Here's what's actually happening:
## Popularity Comparison of .NET PDF Libraries (*ordered by score*)
| Library | GitHub Stars | Avg Daily NuGet Downloads | Total NuGet Downloads | **Popularity Score** |
|---------|---------------|-----------------------------|----------------------------|---------------------|
| **[Microsoft.Playwright](https://github.com/microsoft/playwright-dotnet)** | [2.9k](https://github.com/microsoft/playwright-dotnet) | [23k](https://www.nuget.org/packages/Microsoft.Playwright) | 39M | **71/100** |
| **[QuestPDF](https://github.com/QuestPDF/QuestPDF)** | [13.7k](https://github.com/QuestPDF/QuestPDF) | [8.2k](https://www.nuget.org/packages/QuestPDF) | 15M | **54/100** |
| **[PDFsharp](https://github.com/empira/PDFsharp)** | [862](https://github.com/empira/PDFsharp) | [9k](https://www.nuget.org/packages/PdfSharp) | 47M | **48/100** |
| **[iText](https://github.com/itext/itext-dotnet)** | [1.9k](https://github.com/itext/itext-dotnet) | [17.2k](https://www.nuget.org/packages/itext) | 16M | **44/100** |
| **[PuppeteerSharp](https://github.com/hardkoded/puppeteer-sharp)** | [3.8k](https://github.com/hardkoded/puppeteer-sharp) | [8.7k](https://www.nuget.org/packages/PuppeteerSharp) | 26M | **40/100** |
**How I calculated the score:** I weighted GitHub Stars (30%), Daily Downloads (40% - because that's what matters NOW), and Total Downloads (30% - for historical context). Everything normalized to 0-100 before weighting. Higher = better momentum overall.
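The weighting above can be sketched as min-max normalizing each metric to 0-100 and then applying the 30/40/30 weights. The helper names and bounds below are illustrative assumptions, and since the exact normalization details aren't published, this sketch won't reproduce the table's numbers precisely:

```javascript
// Min-max normalize a raw metric into the 0-100 range
function normalize(value, min, max) {
  return ((value - min) / (max - min)) * 100;
}

// Weighted popularity score: stars 30%, daily downloads 40%, total downloads 30%
function popularityScore(lib, mins, maxs) {
  const stars = normalize(lib.stars, mins.stars, maxs.stars);
  const daily = normalize(lib.daily, mins.daily, maxs.daily);
  const total = normalize(lib.total, mins.total, maxs.total);
  return Math.round(0.3 * stars + 0.4 * daily + 0.3 * total);
}
```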
## The Breakdown - What You Actually Need to Know
### [PDFsharp](https://docs.pdfsharp.net/)
![pdfsharp](pdfsharp.png)
**NuGet:** [PdfSharp](https://www.nuget.org/packages/PdfSharp) | **GitHub:** [empira/PDFsharp](https://github.com/empira/PDFsharp)
**What it does:** Code-first PDF stuff - drawing, manipulating, merging, that kind of thing. Not for HTML/browser rendering though, so don't try to convert your React app to PDF with this.
**What's the vibe?** **Stable, but kinda old school.** It's got the biggest total download count (47M!) but only pulling ~9k/day now. They updated it 2 weeks ago (Jan 6) so it's alive, and it supports .NET 8-10 which is nice. The GitHub stars (862) are pretty low compared to the shiny new kids, but honestly? It's been around forever and people still use it. It's the reliable old workhorse.
**Pick this if:**
- You need to build PDFs from scratch with code (not HTML)
- You want to draw graphics, manipulate existing PDFs, merge files
- You don't want browser engines anywhere near your project
---
### [iText](https://itextpdf.com/)
![iText Logo](itext.jpg)
**NuGet:** [itext](https://www.nuget.org/packages/itext/) | **GitHub:** [itext/itext-dotnet](https://github.com/itext/itext-dotnet)
**What it does:** The enterprise beast. Digital signatures, PDF compliance (PDF/A, PDF/UA), forms, all that fancy stuff. Can do HTML-to-PDF too if you need it.
**What's the vibe?** **Actually doing pretty well!** ~17.2k downloads/day (highest for code-first libs), updated literally yesterday (Jan 18). They're moving fast. 1.9k stars isn't huge but the community seems active. The catch? This is the enterprise option - check the licensing before you commit if you're doing commercial work.
**Pick this if:**
- You need digital signatures, PDF compliance, or advanced form stuff
- Your company is cool with licensing fees (or you're doing open source)
- You need serious PDF manipulation features
- You want HTML-to-PDF AND code-based generation in one package
---
### [Microsoft.Playwright](https://playwright.dev/dotnet/)
![Playwright Logo](playwright.png)
**NuGet:** [Microsoft.Playwright](https://www.nuget.org/packages/Microsoft.Playwright) | **GitHub:** [microsoft/playwright-dotnet](https://github.com/microsoft/playwright-dotnet)
**What it does:** Browser automation that can turn HTML/CSS/JS into PDFs. Uses real browser engines (Chromium, WebKit, Firefox) so your PDFs look exactly like they would in a browser.
**What's the vibe?** **Killing it.** ~23k downloads/day (highest in this whole list!). It's Microsoft-backed so you know they're not gonna abandon it anytime soon. Last commit was December 3rd but honestly that's fine, they're actively maintaining. 2.9k stars and climbing. If you need to turn web pages into PDFs, this is probably your best bet right now.
**Pick this if:**
- You need to convert HTML/CSS/JS to PDF and want it to look EXACTLY like the browser
- You're working with SPAs, dynamic content, or web templates
- You also need browser automation/testing (bonus!)
- Layout accuracy is critical (forms, dashboards, etc.)
---
### [PuppeteerSharp](https://www.puppeteersharp.com/)
![PuppeteerSharp Logo](PuppeteerSharp.png)
**NuGet:** [PuppeteerSharp](https://www.nuget.org/packages/PuppeteerSharp) | **GitHub:** [hardkoded/puppeteer-sharp](https://github.com/hardkoded/puppeteer-sharp)
**What it does:** Basically Playwright's older sibling. Uses headless Chromium to turn HTML into PDFs. Same idea, different API.
**What's the vibe?** **Stable but losing ground.** Got updated last week (Jan 12) so it's maintained, but ~8.7k/day is way less than Playwright's ~23k. 3.8k stars is decent though. It works fine, but Playwright is eating its lunch. Still, if you know Puppeteer already or only need Chromium, this might be fine.
**Pick this if:**
- You already know Puppeteer from Node.js and want the same vibe in .NET
- You only need Chromium (don't care about Firefox/WebKit)
- You have existing Puppeteer code you're porting
---
### [QuestPDF](https://github.com/QuestPDF/QuestPDF)
![QuestPDF Logo](QuestPDF.png)
**NuGet:** [QuestPDF](https://www.nuget.org/packages/QuestPDF) | **GitHub:** [QuestPDF/QuestPDF](https://github.com/QuestPDF/QuestPDF)
**What it does:** Build PDFs with fluent C# APIs. Think of it like building a UI layout, but for PDFs. No HTML needed - it's all code, all .NET.
**What's the vibe?** **The community favorite.** 13.7k stars (most by far!), updated yesterday (Jan 18). ~8.2k downloads/day isn't the highest but the community is clearly excited about it. Modern API, active dev, people seem to actually enjoy using it. If you're building reports/invoices from code and want something that feels modern, this is it.
**Pick this if:**
- You want to build PDFs with code (not HTML) and you like fluent APIs
- You're generating reports, invoices, structured documents
- You want zero browser dependencies
- You care about type safety and maintainable code
- You want something that feels modern and well-designed
## Who's Winning Right Now?
Here's what the numbers are telling us:
### Code-First Libraries (Building PDFs with Code)
**[QuestPDF](https://github.com/QuestPDF/QuestPDF)** - Score: 54/100
The people's choice. Most GitHub stars (13.7k), updated yesterday, community loves it. Downloads aren't the highest but the engagement is real. This is what people are excited about.
**[PDFsharp](https://github.com/empira/PDFsharp)** - Score: 48/100
The old reliable. 47M total downloads but only ~9k/day now. It works, it's stable, but it's not where the momentum is. Still a solid choice if you need something battle-tested.
**[iText](https://github.com/itext/itext-dotnet)** - Score: 44/100
Actually pulling the most daily downloads (~17.2k/day) for code-first libs, also updated yesterday. The enterprise crowd is still using this heavily. Just watch that licensing.
### HTML/Browser-Based Libraries (Turning Web Pages into PDFs)
**[Microsoft.Playwright](https://github.com/microsoft/playwright-dotnet)** - Score: 71/100
Winner winner. ~23k downloads/day (highest overall), Microsoft backing, actively maintained. If you need HTML-to-PDF, this is probably the move.
**[PuppeteerSharp](https://github.com/hardkoded/puppeteer-sharp)** - Score: 40/100
Still kicking around at ~8.7k/day but Playwright is clearly the future. Updated last week so it's not dead, just... less popular.
## TL;DR - What Should You Actually Use?
**Building PDFs from code (not HTML):**
- **QuestPDF** - If you want something modern and the community is raving about it (13.7k stars!)
- **iText** - If you need enterprise features and can handle the licensing
- **PDFsharp** - If you want the battle-tested option that's been around forever
**Converting HTML/web pages to PDF:**
- **Playwright** - Just use this. It's winning right now (~23k/day), Microsoft-backed, actively maintained. Game over.
- **PuppeteerSharp** - Only if you really need Chromium-only or you're migrating from Node.js Puppeteer
**Bottom line:** For HTML-to-PDF, Playwright is dominating. For code-first, QuestPDF has the hype but iText has the downloads. Choose your fighter.
---
*Numbers from GitHub and NuGet as of January 19, 2026. Daily downloads are from the last 90 days.*

docs/en/Community-Articles/2026-01-24-How-AI-Is-Changing-Developers/POST.md
# How AI Is Changing Developers
In the last few years, AI has moved from “nice to have” to “hard to live without” for developers. At first it was just code completion and smart hints. Now it’s getting deep into how we build software: the methods, the toolchain, and even the job itself.
Here are some structured thoughts on how AI is affecting developers, based on trends and personal experience.
## Every library will have AI-first docs
Future libraries and frameworks won’t just have docs for humans. They’ll also have a manual for AI:
- How to use
- Why it is designed this way
- What NOT to do
- Conventions & Best Practices
Once these rules are written in a structured way, AI can onboard to a library faster and more consistently than a junior developer.
Docs won’t just be knowledge anymore. They’ll be instructions AI can execute.
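As an illustration, such AI-first instructions might look like the following (a hypothetical snippet in the style of a `.cursorrules` or `copilot-instructions.md` file; the library name and rules are invented for the example):

```markdown
# AI instructions for MyPaymentLib (hypothetical example)

## How to use
- Create charges through `PaymentService`; it handles retries and idempotency.

## Why it is designed this way
- Money amounts are integers in minor units, to avoid floating-point errors.

## What NOT to do
- Never log full card numbers; use the masked form returned by the API.

## Conventions & Best Practices
- One aggregate per transaction; raise domain events instead of calling other modules directly.
```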
## AI will be a must-have for developers
Soon, “writing code without AI” will feel as strange as “writing code without an IDE.”
- It won’t be about whether you use AI
- It’ll be about how well you use it and where
AI will become:
- A standard productivity tool
- An extension of a developer’s thinking
- A second brain
Developers who don’t use AI will fall behind in both speed and understanding.
## As AI gets smarter, it replaces “time”
AI isn’t replacing developers right away. It’s replacing:
- Lots of repetitive time
- Basic development costs
The result is higher output per hour.
Boilerplate, CRUD, basic validation, simple logic — all of that will get swallowed fast.
It’s not people being replaced. It’s waste.
## Orchestrating multiple AIs becomes real
The future isn’t “one AI does everything.” It’s more like:
- Claude writes core code
- Copilot generates and maintains unit tests
- Codex and similar tools write docs and examples
- Other AIs handle refactoring, performance analysis, security checks
The dev process itself becomes an AI orchestration system.
The developer’s role looks more like:
Architect + conductor + quality gatekeeper
## Only great infrastructure gets amplified by AI
Even if AI can teach you “how to use it correctly,” it still can’t invent mature infrastructure for you.
We still rely on:
- Stable base frameworks (like [ABP](https://abp.io))
- Engineering capability proven by many projects
- Long-term maintenance and evolution
AI is an accelerator, not the foundation.
For open source, AI is actually a better companion:
- Helps understand the source code
- Helps learn design thinking
- Helps ship faster
The stronger the infrastructure, the more value AI can amplify.
## Frontend feels mature; backend still evolving
From personal experience:
- AI is already very strong in frontend work (Bootstrap / UI components, layout, styling, interaction)
- Backend is still learning and improving (business boundaries, architecture trade-offs, implicit constraints)
This shows: the clearer the rules and the faster the feedback, the faster AI improves.
## Writing rules for AI is productivity itself
In the ABP libraries, we’ve already written lots of rules for AI:
- Conventions
- Usage limits
- Recommended patterns
As rules grow:
- AI becomes more stable
- More predictable
- Base development work can be largely automated
Future engineering skill will be, in large part: how to design a rules system for AI.
## The real advantage is better feedback loops
AI gets much stronger when there’s clear feedback:
- Tests that run fast and fail loudly
- Logs and metrics that explain behavior
- Code review that checks for edge cases and security
The teams that win are the ones who can quickly verify, correct, and learn.
## About a developer’s career
Sometimes I think: I’m glad I didn’t enter the software industry just in the last few years.
If you’re just starting out, you really feel:
- The barrier is lower
- The competition is tougher
But whenever I see AI generate confident but wrong code, I’m reminded:
- The industry still has a future
- It still needs judgment, taste, and experience
There will always be people who love coding. If AI does it and we watch, that’s fine too.
## Chaos everywhere, but the experience is moving fast
Big companies, platforms, tools:
- GitHub
- OpenAI
- Claude
- All kinds of IDEs / agents
New AI tools, apps, and platforms keep popping up. New concepts show up almost every week. It’s noisy, but the big picture is clear: AI keeps getting better, and the overall developer experience is improving fast.
## Get ready for the AI revolution
Looking back at personal experience:
- Before: Google
- Now: ChatGPT
- Before: manual translation
- Now: fully automatic
- Before: writing unit tests by hand
- Now: AI does it all
- Before: human replies to customers
- Now: AI-assisted or even AI-led
From code completion to agents running tasks, and now deep IDE integration — the pace is shocking.
## Closing
AI is not the end of software engineering. It is:
- A leap in cognition
- A restructure of how work gets done
- An upgrade of roles
What matters most isn’t how much code AI can write, but how we redefine the value of “developers” in the AI era.

BIN
docs/en/Community-Articles/2026-01-24-How-AI-Is-Changing-Developers/image.png

50
docs/en/Community-Articles/2026-02-02-ndc-london-article/post.md

@ -0,0 +1,50 @@
The software development world converged on the **Queen Elizabeth II Centre** in Westminster from **January 26-30** for **NDC London 2026**. As one of the most anticipated tech conferences in Europe, this year’s event delivered a masterclass in the future of the stack.
We spent five days immersed in workshops and sessions. Here is our comprehensive recap of the highlights and the technical shifts that will define 2026\.
![enter image description here](https://abp.io/api/file-management/file-descriptor/share?shareToken=CfDJ8NqaJZr2oLpIuRyHVjJk1BBjsk292Ejh%2b5X2yeS2pD9uibmq8qxh50b9eOg5U5Ib2jAFaeCHItbTyOpajIeaUzNKg/p0WHohjf1iac2%2bVL6kT/Y3ORSKpRQrdE22QJTwAxBMUryUgTQJ989hYtsvF%2bkReDR03k0gIl4ApUaji6Tg)
## **1\. High-Performance .NET and C\# Evolution**
A major focus this year was the continued evolution of the .NET ecosystem. Experts delivered standout sessions on high-performance coding patterns, and it's clear that efficiency and "Native AOT" (Ahead-of-Time compilation) are no longer niche topics; they are becoming industry standards.
## **2\. Moving Beyond the AI Hype**
If 2025 was about experimenting with LLMs, NDC London 2026 was about AI integration. Sessions from experts showcased how developers are moving past simple chatbots and integrating AI directly into the CI/CD pipeline and automated testing suites.
![enter image description here](https://abp.io/api/file-management/file-descriptor/share?shareToken=CfDJ8NqaJZr2oLpIuRyHVjJk1BDxx%2FqqZ08tgIxCPsAnDDD2w5yJPjVXwUJrbGHpSln3npfpJEBQ78chKoSlZS1cz1nbigNQtRq60dlbyMLwnAgE52tBwUJz481PcBgNtyFMW7rm7oKhFV9c7tK8bEcK%2FscRudaV8w7%2FPO8U5KJv%2BQal)
![enter image description here](https://abp.io/api/file-management/file-descriptor/share?shareToken=CfDJ8NqaJZr2oLpIuRyHVjJk1BBdNXgjnu7HIGgX//VJrh3XzjPns4ODHMUhZ%2bDQCcZa2Nc0%2b%2bshyt2UXqaIKEJMPHh6JIDGBtUrdQZ1EzmGn3pingGKiw7YTbh0Z%2bLRZSmcY6pEXkd1S/7VVncmICIHrQgjg%2b7eb2uO28qadIWGbD99)
## **3\. The "Hallway Track" and Community Networking**
One of the biggest draws of **NDC London** is the community. Between the 100+ sessions, the exhibitor hall was buzzing with live demos and networking.
Watch the video:
[![Watch the Hallway Track video](https://img.youtube.com/vi/yb-FILkqL7U/hqdefault.jpg)](https://www.youtube.com/watch?v=yb-FILkqL7U)
![enter image description here](https://abp.io/api/file-management/file-descriptor/share?shareToken=CfDJ8NqaJZr2oLpIuRyHVjJk1BCLbkSK3YZDZZhBGi/IBZOCXgcWHwTyS/s5v6U%2bSeQnY5yCTzMJFTu/mA4xX%2bL5tjbMPfEI8gvCwmVEfSymGFIiJLtAbP8T2zFZev%2bm74sTsQ%2b4sdsLKbdijiae3G%2b45ijWep7yFJx9BWMgV263zzvI)
![enter image description here](https://abp.io/api/file-management/file-descriptor/share?shareToken=CfDJ8NqaJZr2oLpIuRyHVjJk1BCrCACVWDlDjOgl9ASMeZNMVBGye%2bfya4aO6UW5Kyg9MCVLswzckRWS%2bT71AcQuWMGfiousZlSCrKNAGrosPXzuWAsxnNai3xBcj061TWjGAGX4u1AtrD0eknRxuKe2ba%2bVO7r0sZqle%2bUyZa305hhO)
## **4\. The Big Giveaway: Our Xbox Series S Raffle**
One of our favorite moments of the week was our Raffle Session. We love giving back to the community that inspires us, and this year, the energy at our booth was higher than ever.
We were thrilled to give away a brand-new Xbox Series S to one lucky winner\! It was fantastic to meet so many of you who stopped by to enter, chat about your current projects, and share your thoughts on the future of the industry.
**Congratulations again to our 2026 winner\!** We hope you enjoy some well-deserved gaming time after a long week of learning.
![enter image description here](https://abp.io/api/file-management/file-descriptor/share?shareToken=CfDJ8NqaJZr2oLpIuRyHVjJk1BBozHxXhCL7qMtx5LAxvafvPOKaZJepGlR7tgHVvw6wGpuR4Ervipym%2busZ7eMl3uook15K1874RYEwUenBfoZSJBm33MdaHFduha9iJ7tnfTmW12QbdYM77yqfVJ7EonuJsRrNySdYrQuRI0H2RkZr)
Watch the video:
[![Watch the Xbox Series S giveaway](https://img.youtube.com/vi/W5HRwys8dpE/hqdefault.jpg)](https://www.youtube.com/watch?v=W5HRwys8dpE)
## **Final Thoughts: See You at NDC London 2027\!**
NDC London 2026 proved once again why it is a cornerstone event for the global developer community. We are returning to our projects with a refreshed roadmap and a deeper understanding of the tools shaping our industry.
![enter image description here](https://abp.io/api/file-management/file-descriptor/share?shareToken=CfDJ8NqaJZr2oLpIuRyHVjJk1BDJq%2bG7yg1jtoY3gGH8mFMZen%2bncuL%2bKrQHY4/FPOF2KXcLyEjJymhk0JAVwJ76lPeqBchrfsAK3TOUTKY15tC7jm3uwgcH9IWRxCM2ouqxVGqGPd8YIRdG7H7QgyuknBkS4wsdYI9gl1EGqgPtTXJd)

BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/0.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/1.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/2.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/3.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/4.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/4_1.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/4_2.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/5.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/6.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/7.png

325
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/Post.md

@ -0,0 +1,325 @@
![Cover](0.png)
This year we attended NDC London as a sponsor for [ABP](https://abp.io). The conference was held at the same venue as in previous years, the [Queen Elizabeth II Centre](https://qeiicentre.london/). I think this is the best conference for .NET developers around the world (thanks to the NDC team), and we have attended for the last 5 years. It ran for 3 full days, from 28 to 30 January 2026. As an exhibitor, we talked a lot with attendees who stopped by our booth, over meals, or in the conference rooms.
This is the best opportunity to learn what everyone is doing in the software community. While I was explaining ABP to people hearing about it for the first time, I also asked what they do in their work. Developers mostly work on web platforms. And as you know, there's an AI transformation in our sector, so I wondered whether other people are also following the latest AI trend. Well... not as much as I expected. At Volosoft, we are tightly following AI trends, using them in our daily development, injecting this new technology into our product, and trying to benefit from it as much as possible.
![Our booth](1.png)
This new AI trend is like the invention of printing (by Johannes Gutenberg in 1450) or the invention of the calculator (by William S. Burroughs in 1886). The countries that benefited from those inventions saw a huge increase in their welfare. So we welcome this new AI invention in software development, design, DevOps, and testing. I see it as a big wave in the ocean: if you are prepared and develop your skills, you can play with it 🌊 (that's surfing), or you'll go under against the AI wave. But not all companies react to this transformation quickly. Many developers use AI only like a ChatGPT conversation (copy-pasting from it) or use GitHub Copilot in a limited manner. But as I heard from Steven Sanderson's session and other Microsoft employees, they are already using it to reproduce bugs reported in issues and even to create feature PRs via Copilot. That's good!
Here are some pictures from the conference; that's me on the left side with the brown shoes :)
![Alper & Halil](2.png)
Another thing I noticed: there was a decrease in the number of attendees. I don't know the real reason, but IT companies have probably cut their conference budgets. As you have also heard, many companies are laying people off because AI is replacing some positions.
The food was great during the conference; it felt more like eating sessions to me, with lots of good meals from different countries' cuisines. On the second day, there was a party. People grabbed their beers, wines, and other beverages and did some networking.
I was expecting more AI-oriented sessions, but there were fewer than I hoped. Even though I was an exhibitor, I tried to attend some of the sessions. Here are my notes.
---
Here's a quick video from the exhibitors' area on the 3rd floor and our ABP booth's Xbox raffle:
**Video 1: NDC Conference 2026 Environment** 👉 [https://youtu.be/U1kiYG12KgA](https://youtu.be/U1kiYG12KgA)
[![Video 1](youtube-cover-1.png)](https://youtu.be/U1kiYG12KgA)
**Video 2: Our raffle for XBOX** 👉 [https://youtu.be/7o0WX70qYw0](https://youtu.be/7o0WX70qYw0)
[![Video 2](youtube-cover-2.png)](https://youtu.be/7o0WX70qYw0)
---
## Sessions / Talks
### The Dangers of Probably-Working Software | Damian Brady
![Damian Session](3.png)
The first session and keynote was from Damian Brady, part of the Developer Advocacy team at GitHub. The topic was "The Dangers of Probably-Working Software". He started with the negative view of how generative AI is killing software and ended on a more hopeful note: it's not so bad, and we can benefit from the AI transformation. It was the first time I heard the term "sleepwalking" applied to development: when we generate code via AI and don't review it well enough, we're sleepwalkers. That's correct, and a good analogy for this case. The talk centers on a powerful lesson: *“**Don’t ship code you don’t truly understand.**”*
Damian tells a personal story from his early .NET days when he implemented a **Huffman compression algorithm** based largely on Wikipedia. The code **“worked” in small tests** but **failed in production**. The experience forced him to deeply understand the algorithm rather than relying on copied solutions. Through this story, he explores themes of trust, complexity, testing, and mental models in software engineering.
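As a toy illustration of that lesson (my own sketch, not Damian's code): a naive Huffman implementation can look fine on typical inputs yet silently produce a zero-length code when the input contains only one distinct symbol, exactly the kind of edge case small tests rarely cover.

```python
import heapq
from collections import Counter

def huffman_code_lengths(text):
    """Naive Huffman: returns {symbol: code length in bits}.
    Works on typical inputs, but watch the edge cases."""
    freq = Counter(text)
    # Heap entries: (frequency, unique id, {symbol: code length}).
    # The id breaks ties so dicts are never compared.
    heap = [(f, i, {sym: 0}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, i, b = heapq.heappop(heap)
        # Merging two subtrees adds one bit to every code beneath them.
        merged = {s: depth + 1 for s, depth in {**a, **b}.items()}
        heapq.heappush(heap, (f1 + f2, i, merged))
    return heap[0][2]

print(huffman_code_lengths("aaabbc"))  # a gets 1 bit; b and c get 2 bits
print(huffman_code_lengths("aaaa"))    # {'a': 0} -- a zero-length code: broken!
```

Small happy-path tests on multi-symbol strings pass; the single-symbol input (and the empty string, which raises `IndexError`) only fails in production-like data.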
#### Notes From This Session
- “It seems to work” is not the same as “I understand it.”
- Code copied from Wikipedia or StackOverflow or AI platforms is inherently risky in production.
- Passing tests on small datasets does not guarantee real-world reliability (the happy path can hide unhappy results)
- Performance issues often surface only in edge cases.
- Delivery pressure can discourage deep understanding — to the detriment of quality.
- Always ask: “**When does this fail?**” — not just “**Why does this work?**”
---
### Playing The Long Game | Sheena O'Connell
![Sheena Session](4.png)
Sheena is a former software engineer who now trains and supports tech educators. She talks about AI tools...
AI tools are everywhere but poorly understood; there’s hype, risks, and mixed results. The key question is how individuals and organisations should play the long game (long-term strategy) so skilled human engineers—especially juniors—can still grow and thrive.
She showed statistics on how job postings for software developers on the Indeed platform are dramatically decreasing. About AI-generated code, she says it's less secure, it may contain logical problems or interesting bugs, humans might not read the code very carefully, and understanding or debugging it can sometimes take much longer.
Being an engineer is about much more than a job title — it requires systems thinking, clear communication, dealing with uncertainty, continuous learning, discipline, and good knowledge management. The job market is shifting: demand for AI-skilled workers is rising quickly and paying premiums, and required skills are changing faster in AI-exposed roles. There’s strength in using a diversity of models instead of locking into one provider, and guardrails improve reliability.
AI is creating new roles (like AI security, observability, and operations) and new kinds of work, while routine attrition also opens opportunities. At the same time, heavy AI use can have negative cognitive effects: people may think less, feel lonelier, and prefer talking to AI over humans.
Organizations are becoming more dynamic and project-based, with shorter planning cycles, higher trust, and more experimentation — but also risk of “shiny new toy” syndrome. Research shows AI can boost productivity by 15–20% in many cases, especially in simpler, greenfield projects and popular languages, but it can actually reduce productivity on very complex work. Overall, the recommendation is to focus on using AI well (not just the newest model), add monitoring and guardrails, keep flexibility, and build tools that allow safe experimentation.
![Sheena Session 2](4_1.png)
We’re in a messy, fast-moving AI era where LLM tools are everywhere but poorly understood. There’s a lot of hype and marketing noise, making it hard even for technical people to separate reality from fantasy. Different archetypes have emerged — from AI-optimists to skeptics — and both extremes have risks. AI is great for quick prototyping but unreliable for complex work, so teams need guardrails, better practices, and a focus on learning rather than “writing more code faster.” The key question is how individuals and organizations can play the long game so strong human engineers — especially juniors — can still grow and thrive in an AI-driven world.
![Sheena Session 3](4_2.png)
---
### Crafting Intelligent Agents with Context Engineering | Carly Richmond
![Carly Session](5.png)
Carly is a Developer Advocate Lead at Elastic in London with deep experience in web development and agile delivery from her years in investment banking. A practical UI engineer, she brings a clear, hands-on perspective to building real-world AI systems. In her talk on **“Crafting Intelligent Agents with Context Engineering,”** she argues that prompt engineering isn’t enough, and shows how carefully shaping context across data, tools, and systems is key to creating reliable, useful AI agents. She described the context of an AI process as consisting of instructions, short-term memory, long-term memory, RAG, user prompts, tools, and structured output.
---
### Modular Monoliths | Kevlin Henney
![Kevlin Session](6.png)
Kevlin frames the “microservices vs monolith” debate as a false dichotomy. His core argument is simple but powerful: problems rarely come from *being a monolith* — they come from being a **poorly structured one**. Modularity is not a deployment choice; it is an architectural discipline.
#### **Notes from the Talk**
- A monolith is not inherently bad; a tangled (intertwined, complex) monolith is.
- Architecture is mostly about **boundaries**, not boxes.
- If you cannot draw clean internal boundaries, you are not ready for microservices.
- Dependencies reveal your real architecture better than diagrams.
- Teams shape systems more than tools do.
- Splitting systems prematurely increases complexity without increasing clarity.
- Good modular design makes systems **easier to change, not just easier to scale**.
#### **So As a Developer;**
- Start with a well-structured modular monolith before considering microservices.
- Treat modules as real first-class citizens: clear ownership, clear contracts.
- Make dependency direction explicit — no circular graphs.
- Use internal architectural tests to prevent boundary violations.
- Organize code by *capability*, not by technical layer.
- If your team structure is messy, your architecture will be messy — fix people, not tech.
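One cheap way to make those boundaries executable is an internal architectural test that fails the build on outward dependencies. This is a hypothetical sketch (in the .NET world you would typically use a tool such as NetArchTest; module names and the layer order here are invented):

```python
# Modules may only depend "inward": domain <- application <- infrastructure.
ALLOWED = {
    "domain": set(),                              # domain depends on nothing
    "application": {"domain"},
    "infrastructure": {"domain", "application"},
}

def boundary_violations(dependencies):
    """dependencies: {module: set of modules it imports}.
    Returns (module, target) pairs that break the allowed direction."""
    violations = []
    for module, imports in dependencies.items():
        for target in imports:
            if target != module and target not in ALLOWED.get(module, set()):
                violations.append((module, target))
    return violations

deps = {
    "application": {"domain"},          # fine: depends inward
    "domain": {"infrastructure"},       # violation: domain reaches outward
}
print(boundary_violations(deps))  # [('domain', 'infrastructure')]
```

Run as part of CI, a check like this turns "no circular graphs, explicit dependency direction" from a diagram convention into a failing test.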
---
### AI Coding Agents & Skills | Steve Sanderson
**Being productive with AI Agents**
![Steve Session](steve-sanderson-talk.png)
In this session, Steve described how Microsoft is using AI tools extensively for PRs, reproducing bug reports, and more. He now works on the **GitHub Copilot Coding Agent Runtime Team**. He says we use our brains and hands less than ever.
![image-20260206004021726](steve-sanderson-talk_1.png)
**In one week, 293 PRs were opened with the help of AI**
![image-20260206004403643](steve-sanderson-talk_2.png)
**He added a new feature to Copilot, with the help of Copilot, in minutes**
![Steve](steve-sanderson-talk_3.png)
> Code is cheap! Prototypes are almost free!
And he summarized the AI assisted development into 10 outlines. These are Subagents, Plan Mode, Skills, Delegate, Memories, Hooks, MCP, Infinite Sessions, Plugins and Git Workflow. Let's see his statements for each of these headings:
#### **1. Subagents**
![image-20260206005620904](steve-sanderson-talk_4.png)
- Break big problems into smaller, specialized agents.
- Each subagent should have a clear responsibility and limited scope.
- Parallel work is better than one “smart but slow” agent.
- Reduces hallucination by narrowing context per agent.
- Easier to debug: you can inspect each agent’s output separately.
------
#### **2. Plan Mode**
![steve-sanderson-talk_6](steve-sanderson-talk_6.png)
- Always start with a plan before generating code.
- The plan should be explicit, human-readable, and reviewable.
- You'll align your expectations with the AI's next steps.
- Prevents wasted effort on wrong directions.
- Encourages structured thinking instead of trial-and-error coding.
------
#### **3. Skills**
![steve-sanderson-talk_7](steve-sanderson-talk_7.png)
- Skills are just Markdown files (but they can also be tools and scripts)
- Skills are reusable capabilities for AI agents.
- You can't put all the info (as Markdown) into the AI context (it's limited!); skills are pulled in only when necessary, matched by their Description field
- Treat skills like APIs: versioned, documented, and shareable.
- Prefer many small skills over one big skill set.
- Store skills in Git, not in chat history.
- Skills should integrate with real tools (CI, GitHub, browsers, etc.).
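For a concrete picture, a skill file might look like this (a hypothetical example; the exact file name and frontmatter fields vary by tool, this follows the common `SKILL.md` convention where the `description` is what the agent matches against):

```markdown
---
name: run-project-tests
description: Run the solution's unit tests and summarize failures. Use when the user asks to verify a change.
---

# Run Project Tests

1. Run `dotnet test` from the repository root.
2. If any test fails, collect the failing test names and error messages.
3. Report a short summary instead of pasting the full log.
```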
#### 3.1 Skill > Test Your Project Skill
![steve-sanderson-talk_8](steve-sanderson-talk_8.png)
------
#### **4. Delegate**
> didn't mention much about this topic
- “Delegate” refers to **offloading local work to the cloud**.
- The agent uses remote computers instead of your local resources and continues the task remotely
##### **Ralph: Force a Do-While Loop Over and Over Until It Finishes**
https://awesomeclaude.ai/ralph-wiggum
> Who knows how many tokens it uses :)
![image-20260206010621010](steve-sanderson-talk_5.png)
------
#### **5. Memories**
> didn't mention much about this topic
- It's like telling the AI "don't write tests like this; write them like that", and it will remember that across your team members.
- Copilot Memory allows Copilot to learn about your codebase, helping Copilot coding agent, Copilot code review, and Copilot CLI to work more effectively in a repository.
- Treat memory like documentation that evolves over time.
- Copilot Memory is **turned off by default**
- https://docs.github.com/en/copilot/how-tos/use-copilot-agents/copilot-memory
------
#### **6. Hooks**
> didn't mention much about this topic
![image-20260206015638169](steve-sanderson-talk_10.png)
- Execute custom shell commands at key points during agent execution.
- Examples: pre-commit checks, PR reviews, test triggers.
- Hooks make AI proactive instead of reactive.
- They reduce manual context switching for developers.
- https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/use-hooks
------
#### **7. MCP**
- Talk to external tools.
- Enables safe, controlled access to systems (files, APIs, databases).
- Prevents random tool usage; everything is explicit.
------
#### **8. Infinite Sessions**
![Infinite Sessions](steve-sanderson-talk_11.png)
- AI should remember the “project context,” not just the last message.
- Reduces repetition and re-explaining.
- Enables deeper reasoning over time.
- Memory + skills + hooks together make “infinite sessions” possible.
- https://docs.github.com/en/copilot/how-tos/copilot-cli/cli-best-practices#3-leverage-infinite-sessions
------
#### **9. Plugins**
![Plugins](steve-sanderson-talk_12.png)
- Extend AI capabilities beyond core model features.
- https://github.com/marketplace?type=apps&copilot_app=true
------
#### **10. Git Workflow**
- AI should operate inside your existing Git process.
- Generate small, focused commits — not giant changes.
- Use AI for PR descriptions and code reviews.
- Keep humans in the loop for design decisions.
- Branching strategy still matters; AI doesn’t replace it.
- Treat AI like a junior teammate: helpful, but needs supervision.
- CI + tests remain your primary safety net, not the model.
- Keep feedback loops fast: generate → test → review → refine.
**Copilot as SDK**
You can wrap GitHub CoPilot into your app as below:
![steve-sanderson-talk_9](steve-sanderson-talk_9.png)
#### **As a Developer What You Need to Get from Steve's Talk;**
- Coding agents work best when you treat them like programmable teammates, not autocomplete tools.
- “Skills” are the right abstraction for scaling AI assistants across a team.
- Treat skills like shared APIs: version them, review them, and store them in source control.
- Skills can be installed from Git repos (marketplaces), not just created locally.
- Slash commands make skills fast, explicit, and reproducible in daily workflow.
- Use skills to bridge AI ↔ real systems (e.g., GitHub Actions, Playwright, build status).
- Automation skills are most valuable when they handle end-to-end flows (browser + app + data).
- Let the agent *discover* the right skill rather than hard-coding every step.
- Skills reduce hallucination risk by constraining what the agent is allowed to do.
---
### My Personal Notes about AI
- This is your code tech stack for a basic .NET project:
- Assembly > MSIL > C# > ASP.NET Core > ...ABP... > NuGet + NPM > Your Handmade Business Code
When we ask an AI-assisted IDE for a development task, the AI never starts from assembly; it doesn't even rewrite an existing NPM package. It basically uses what's already on the market. So we know frameworks like ASP.NET Core and ABP will still be there after the AI evolution.
- A software engineer doesn't just write syntactically correct code to explain a program to a computer. As an engineer you need to understand the requirements, design for the problem, make proper decisions, and resolve uncertainty. Asking AI the right questions is very critical these days.
- Tesla cars have already started to go autonomous. As a driver, you don't need to care about how the car is driven; you need to choose the right way to go in the shortest time without hassle.
- I talked with owners of other software companies, and they also say visits to their docs websites are down. I talked to another person who makes video tutorials for Pluralsight; he says learning from video is decreasing nowadays...
- Nowadays, **developers' big new issue is reviewing AI-generated code.** In the future, the most valuable skills will be using AI well, inspecting AI-generated code carefully, and telling the AI exactly what's needed. Others (those who only type code) will be naturally eliminated. Invest your time in these topics.
- We can see that our brains are getting lazier and our coding muscles weaker day by day. Just as we stopped calculating big numbers after the calculator was invented, we'll eventually forget coding. But maybe that's how it needs to be!
- Also, I don't think AI will replace developers. Think about washing machines: since they came out, they have still needed humans to put the clothes in, pick the best program, take them out, and iron them. From now on, AI is our assistant in every aspect of our lives, from shopping, medical issues, and learning to coding. Let's benefit from it.
#### Software and service stocks shed $830 billion in market value in six trading days
On February 4, 2026, software stocks fell on NASDAQ on AI disruption fears. Software and service stocks shed $830 billion in market value in six trading days as investors scrambled to shield their portfolios while AI muddies valuations and business prospects.
![Reuters](7.png)
**We need to be well prepared for this war.**

BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/cover.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/image-20260206003328436.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/image-20260206004046914.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/image-20260206012506799.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_1.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_10.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_11.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_12.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_2.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_3.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_4.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_5.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_6.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_7.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_8.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/steve-sanderson-talk_9.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/youtube-cover-1.png
BIN
docs/en/Community-Articles/2026-02-03-Impressions-of-NDC-London-2026/youtube-cover-2.png
BIN
docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/demo.gif
BIN
docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/images/abp-studio-ai-management.png
BIN
docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/images/ai-management-widget.png
BIN
docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/images/ai-management-workspaces.png
BIN
docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/images/example-comment.png

488
docs/en/Community-Articles/2026-02-04-Omni-Moderation-in-AI-Management-Module/post.md

@ -0,0 +1,488 @@
# Using OpenAI's Moderation API in an ABP Application with the AI Management Module
If your application accepts user-generated content (comments, reviews, forum posts), you likely need some form of content moderation. Building one from scratch typically means training ML models, maintaining datasets, and writing a lot of code. OpenAI's `omni-moderation-latest` model offers a practical shortcut: it's free, requires no training data, and covers 13+ harm categories across text and images in 40+ languages.
In this article, I'll show you how to integrate this model into an ABP application using the [**AI Management Module**](https://abp.io/docs/latest/modules/ai-management). We'll wire it into the [CMS Kit Module's Comment Feature](https://abp.io/docs/latest/modules/cms-kit/comments) so every comment is automatically screened before it's published. The **AI Management Module** handles the OpenAI configuration (API keys, model selection, etc.) through a runtime UI, so you won't need to hardcode any of that into your `appsettings.json` or redeploy when something changes.
By the end, you'll have a working content moderation pipeline you can adapt for any entity in your ABP project.
## Understanding OpenAI's Omni-Moderation Model
Before diving into the implementation, let's understand what makes OpenAI's `omni-moderation-latest` model a game-changer for content moderation.
### What is it?
OpenAI's `omni-moderation-latest` is a next-generation multimodal content moderation model built on the foundation of GPT-4o. Released in September 2024, this model represents a significant leap forward in automated content moderation capabilities.
The most remarkable aspect? **It's completely free to use** through OpenAI's Moderation API: there are no token costs, no usage limits for reasonable use cases, and no hidden fees.
### Key Capabilities
The **omni-moderation** model offers several compelling features that make it ideal for production applications:
- **Multimodal Understanding**: Unlike text-only moderation systems, this model *can process both text and image inputs*, making it suitable for applications where users can upload images alongside their comments or posts.
- **High Accuracy**: Built on GPT-4o's advanced understanding capabilities, the model achieves significantly higher accuracy in detecting nuanced harmful content compared to rule-based systems or simpler ML models.
- **Multilingual Support**: The model demonstrates enhanced performance across more than 40 languages, making it suitable for global applications without requiring separate moderation systems for each language.
- **Comprehensive Category Coverage**: Rather than just detecting "spam" or "not spam," the model classifies content across 13+ distinct categories of potentially harmful content.
### Content Categories
The model evaluates content against the following categories, each designed to catch specific types of harmful content:
| Category | What It Detects |
|----------|-----------------|
| `harassment` | Content that expresses, incites, or promotes harassing language towards any individual or group |
| `harassment/threatening` | Harassment content that additionally includes threats of violence or serious harm |
| `hate` | Content that promotes hate based on race, gender, ethnicity, religion, nationality, sexual orientation, disability, or caste |
| `hate/threatening` | Hateful content that includes threats of violence or serious harm towards the targeted group |
| `self-harm` | Content that promotes, encourages, or depicts acts of self-harm such as suicide, cutting, or eating disorders |
| `self-harm/intent` | Content where the speaker expresses intent to engage in self-harm |
| `self-harm/instructions` | Content that provides instructions or advice on how to commit acts of self-harm |
| `sexual` | Content meant to arouse sexual excitement, including descriptions of sexual activity or promotion of sexual services |
| `sexual/minors` | Sexual content that involves individuals under 18 years of age |
| `violence` | Content that depicts death, violence, or physical injury in graphic detail |
| `violence/graphic` | Content depicting violence or physical injury in extremely graphic, disturbing detail |
| `illicit` | Content that provides advice or instructions for committing illegal activities |
| `illicit/violent` | Illicit content that specifically involves violence or weapons |
### API Response Structure
When you send content to the Moderation API (whether through an SDK client or a direct HTTP call), you receive a structured response containing:
- **`flagged`**: A boolean indicating whether the content violates any of OpenAI's usage policies. This is your primary indicator for whether to block content.
- **`categories`**: A dictionary containing boolean flags for each category, telling you exactly which policies were violated.
- **`category_scores`**: Confidence scores ranging from 0 to 1 for each category, allowing you to implement custom thresholds if needed.
- **`category_applied_input_types`**: A dictionary containing information on which input types were flagged for each category. For example, if both the image and text inputs to the model are flagged for "violence/graphic", the `violence/graphic` property will be set to `["image", "text"]`. This is only available on omni models.
> For more detailed information about the model's capabilities and best practices, refer to the [OpenAI Moderation Guide](https://platform.openai.com/docs/guides/moderation).
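To make these fields concrete, here is a trimmed sketch of a response body. The values are illustrative (not from a real API call), and only a few categories are shown; a real response includes every category:

```json
{
  "id": "modr-example",
  "model": "omni-moderation-latest",
  "results": [
    {
      "flagged": true,
      "categories": {
        "harassment": true,
        "harassment/threatening": false,
        "violence": false
      },
      "category_scores": {
        "harassment": 0.91,
        "harassment/threatening": 0.02,
        "violence": 0.01
      },
      "category_applied_input_types": {
        "harassment": ["text"]
      }
    }
  ]
}
```

In most applications, checking the top-level `flagged` value is enough; the per-category scores matter only if you want custom thresholds.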
## The AI Management Module: Your Dynamic AI Configuration Hub
The [AI Management Module](https://abp.io/docs/latest/modules/ai-management) is a powerful addition to the ABP Platform that transforms how you integrate and manage AI capabilities in your applications. Built on top of the [ABP Framework's AI infrastructure](https://abp.io/docs/latest/framework/infrastructure/artificial-intelligence), it provides a complete solution for managing AI workspaces dynamically—without requiring code changes or application redeployment.
### Why Use the AI Management Module?
Traditional AI integrations often suffer from several pain points:
1. **Hardcoded Configuration**: API keys, model names, and endpoints are typically stored in configuration files, requiring redeployment for any changes.
2. **No Runtime Flexibility**: Switching between AI providers or models requires code changes.
3. **Security Concerns**: Managing API keys across environments is cumbersome and error-prone.
4. **Limited Visibility**: There's no easy way to see which AI configurations are active or test them without writing code.
The AI Management Module addresses all these concerns by providing:
- **Dynamic Workspace Management**: Create, configure, and update AI workspaces directly from a user-friendly administrative interface—no code changes required.
- **Provider Flexibility**: Seamlessly switch between different AI providers (OpenAI, Gemini, Anthropic, Azure OpenAI, Ollama, and custom providers) without modifying your application code.
- **Built-in Testing**: Test your AI configurations immediately using the included chat interface playground before deploying to production.
- **Permission-Based Access Control**: Define granular permissions to control who can manage AI workspaces and who can use specific AI features.
- **Multi-Framework Support**: Full support for MVC/Razor Pages, Blazor (Server & WebAssembly), and Angular UI frameworks.
### Built-in Provider Support
The **AI Management Module** comes with built-in support for popular AI providers through dedicated NuGet packages:
- **`Volo.AIManagement.OpenAI`**: Provides seamless integration with OpenAI's APIs, including GPT models and the *Moderation API*.
- Custom providers can be added by implementing the `IChatClientFactory` interface. (If you selected Ollama while creating your project, you can find an example implementation for it in your solution.)
## Building the Demo Application
Now let's put theory into practice by building a complete content moderation system. We'll create an ABP application with the **AI Management Module**, configure OpenAI as our provider, set up the CMS Kit Comment Feature, and implement automatic content moderation for all user comments.
### Step 1: Creating an Application with AI Management Module
> In this tutorial, I'll create a **layered MVC application** named **ContentModeration**. If you already have an existing solution, you can follow along by replacing the namespaces accordingly. Otherwise, feel free to follow the solution creation steps below.
The most straightforward way to create an application with the AI Management Module is through **ABP Studio**. When you create a new project, you'll encounter an **AI Integration** step in the project creation wizard. This wizard allows you to:
- Enable the AI Management Module with a single checkbox
- Configure your preferred AI provider (OpenAI or Ollama)
- Set up initial workspace configurations
- Automatically install all required NuGet packages
> **Note:** The AI Integration tab in ABP Studio currently only supports the **MVC/Razor Pages** UI. Support for **Angular** and **Blazor** UIs will be added in upcoming versions.
![ABP Studio AI Management](images/abp-studio-ai-management.png)
During the wizard, select **OpenAI** as your AI provider, set the model name to `omni-moderation-latest`, and provide your API key. The wizard will automatically:
1. Install the `Volo.AIManagement.*` packages across your solution
2. Install the `Volo.AIManagement.OpenAI` package for OpenAI provider support (you can use any OpenAI-compatible model here, including Gemini, Claude, and GPT models)
3. Configure the necessary module dependencies
4. Set up initial database migrations
**Alternative Installation Method:**
If you have an existing project or prefer manual installation, you can add the module using the ABP CLI:
```bash
abp add-module Volo.AIManagement
```
Or through ABP Studio by right-clicking on your solution, selecting **Import Module**, and choosing `Volo.AIManagement` from the NuGet tab.
### Step 2: Understanding the OpenAI Workspace Configuration
After creating your project and running the application for the first time, navigate to **AI Management > Workspaces** in the admin menu. Here you'll find the workspace management interface where you can view, create, and modify AI workspaces.
![AI Management Workspaces](images/ai-management-workspaces.png)
If you configured OpenAI during the project creation wizard, you'll already have a workspace set up. Otherwise, you can create a new workspace with the following configuration:
| Property | Value | Description |
|----------|-------|-------------|
| **Name** | `OpenAIAssistant` | A unique identifier for this workspace (no spaces allowed) |
| **Provider** | `OpenAI` | The AI provider to use |
| **Model** | `omni-moderation-latest` | The specific model for content moderation |
| **API Key** | `<Your-OpenAI-API-key>` | Authentication credential for the OpenAI API |
| **Description** | `Workspace for content moderation` | A helpful description for administrators |
The beauty of this approach is that you can modify any of these settings at runtime through the UI. Need to rotate your API key? Just update it in the workspace configuration. Want to test a different model? Change it without touching your code.
### Step 3: Setting Up the CMS Kit Comment Feature
Now let's add the CMS Kit Module to enable the Comment Feature. The CMS Kit provides a robust, production-ready commenting system that we'll enhance with our content moderation.
**Install the CMS Kit Module:**
Run the following command in your solution directory:
```bash
abp add-module Volo.CmsKit --skip-db-migrations
```
> Alternatively, you can add the module through the ABP Studio UI.
**Enable the Comment Feature:**
By default, CMS Kit features are disabled to keep your application lean. Open the `GlobalFeatureConfigurator` class in your `*.Domain.Shared` project and enable the Comment Feature:
```csharp
using Volo.Abp.GlobalFeatures;
using Volo.Abp.Threading;

namespace ContentModeration;

public static class ContentModerationGlobalFeatureConfigurator
{
    private static readonly OneTimeRunner OneTimeRunner = new OneTimeRunner();

    public static void Configure()
    {
        OneTimeRunner.Run(() =>
        {
            GlobalFeatureManager.Instance.Modules.CmsKit(cmsKit =>
            {
                // Only enable the Comment feature
                cmsKit.Comments.Enable();
            });
        });
    }
}
```
**Configure the Comment Entity Types:**
Open your `*DomainModule` class and configure which entity types can have comments. For our demo, we'll enable comments on "Article" entities:
```csharp
using Volo.CmsKit.Comments;

// In your ConfigureServices method:
Configure<CmsKitCommentOptions>(options =>
{
    options.EntityTypes.Add(new CommentEntityTypeDefinition("Article"));
});
```
**Add the Comment Component to a Page:**
Finally, let's add the commenting interface to a page. Open the `Index.cshtml` file in your `*.Web` project and add the Comment component (replace with the following content):
```html
@page
@using Volo.CmsKit.Public.Web.Pages.CmsKit.Shared.Components.Commenting
@model ContentModeration.Web.Pages.IndexModel

<div class="container mt-4">
    <div class="card">
        <div class="card-header">
            <h3>Welcome to Our Community</h3>
        </div>
        <div class="card-body">
            <p class="lead">
                Share your thoughts in the comments below. Our AI-powered moderation system
                automatically reviews all comments to ensure a safe and respectful environment
                for everyone.
            </p>
            <hr/>
            <h4>Comments</h4>
            @await Component.InvokeAsync(typeof(CommentingViewComponent), new
            {
                entityType = "Article",
                entityId = "welcome-article",
                isReadOnly = false
            })
        </div>
    </div>
</div>
```
At this point, you have a fully functional commenting system. Users can post comments, reply to existing comments, and interact with the community.
![](./images/example-comment.png)
However, there's no content moderation yet: any content, including harmful content, would be accepted. Let's fix that!
## Implementing the Content Moderation Service
**Now comes the exciting part:** implementing the content moderation service that leverages OpenAI's `omni-moderation` model to automatically screen all comments before they're published.
### Understanding the Architecture
Our implementation follows a clean, modular architecture:
1. **`IContentModerator` Interface**: Defines the contract for content moderation, making our implementation testable and replaceable.
2. **`ContentModerator` Service**: The concrete implementation that calls OpenAI's Moderation API using the configuration from the AI Management Module.
3. **`MyCommentAppService`**: An override of the CMS Kit's comment service that integrates our moderation logic.
This separation of concerns ensures that:
- The moderation logic is isolated and can be unit tested independently
- You can easily swap the moderation implementation (e.g., switch to a different provider)
- The integration with CMS Kit is clean and maintainable
### Creating the Content Moderator Interface
First, let's define the interface in your `*.Application.Contracts` project. This interface is intentionally simple: it takes a text input and throws an exception if the content is harmful:
```csharp
using System.Threading.Tasks;

namespace ContentModeration.Moderation;

public interface IContentModerator
{
    Task CheckAsync(string text);
}
```
### Implementing the Content Moderator Service
Now let's implement the service in your `*.Application` project. This implementation uses the `IWorkspaceConfigurationStore` from the AI Management Module to dynamically retrieve the OpenAI configuration:
```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using OpenAI.Moderations;
using Volo.Abp;
using Volo.Abp.DependencyInjection;
using Volo.AIManagement.Workspaces.Configuration;

namespace ContentModeration.Moderation;

public class ContentModerator : IContentModerator, ITransientDependency
{
    private readonly IWorkspaceConfigurationStore _workspaceConfigurationStore;

    public ContentModerator(IWorkspaceConfigurationStore workspaceConfigurationStore)
    {
        _workspaceConfigurationStore = workspaceConfigurationStore;
    }

    public async Task CheckAsync(string text)
    {
        // Skip moderation for empty content
        if (string.IsNullOrWhiteSpace(text))
        {
            return;
        }

        // Retrieve the workspace configuration from the AI Management Module.
        // This allows runtime configuration changes without redeployment.
        var config = await _workspaceConfigurationStore.GetOrNullAsync<OpenAIAssistantWorkspace>();
        if (config == null)
        {
            throw new UserFriendlyException("Could not find the 'OpenAIAssistant' workspace!");
        }

        var client = new ModerationClient(
            model: config.Model,
            apiKey: config.ApiKey
        );

        // Send the text to OpenAI's Moderation API
        var result = await client.ClassifyTextAsync(text);
        var moderationResult = result.Value;

        // If the content is flagged, throw a user-friendly exception
        if (moderationResult.Flagged)
        {
            var flaggedCategories = GetFlaggedCategories(moderationResult);
            throw new UserFriendlyException(
                $"Your comment contains content that violates our community guidelines. " +
                $"Detected issues: {string.Join(", ", flaggedCategories)}. " +
                $"Please revise your comment and try again."
            );
        }
    }

    private static List<string> GetFlaggedCategories(ModerationResult result)
    {
        var flaggedCategories = new List<string>();

        if (result.Harassment.Flagged)
        {
            flaggedCategories.Add("harassment");
        }

        if (result.HarassmentThreatening.Flagged)
        {
            flaggedCategories.Add("threatening harassment");
        }

        // Other categories...

        return flaggedCategories;
    }
}
```
> **Note**: The `ModerationResult` class from the OpenAI .NET SDK provides properties for each moderation category (e.g., `Harassment`, `Violence`, `Sexual`), each with a `Flagged` boolean and a `Score` float (0-1). The exact property names may vary slightly between SDK versions, so check the [OpenAI .NET SDK documentation](https://github.com/openai/openai-dotnet) for the latest API.
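The `Flagged` booleans use OpenAI's default cutoffs. If you want stricter (or looser) moderation, you can apply your own thresholds to the raw category scores mentioned earlier. Below is a minimal, self-contained sketch; the score dictionary would be populated from the `category_scores` in the API response, and the `0.5` default threshold is an arbitrary example, not an OpenAI recommendation:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class ModerationThresholds
{
    // Returns the categories whose confidence score meets or exceeds the
    // given threshold. A lower threshold makes moderation stricter than
    // relying on OpenAI's default boolean flags.
    public static List<string> GetViolations(
        IReadOnlyDictionary<string, double> categoryScores,
        double threshold = 0.5)
    {
        return categoryScores
            .Where(pair => pair.Value >= threshold)
            .Select(pair => pair.Key)
            .ToList();
    }
}
```

You could call this helper with the parsed scores and treat any non-empty result as a reason to reject the comment, independently of the `flagged` value.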
### Integrating with CMS Kit Comments
The final piece of the puzzle is integrating our moderation service with the CMS Kit's comment system. We'll override the `CommentPublicAppService` to intercept all comment creation and update requests:
```csharp
using System;
using System.Threading.Tasks;
using ContentModeration.Moderation;
using Microsoft.Extensions.Options;
using Volo.Abp.DependencyInjection;
using Volo.Abp.EventBus.Distributed;
using Volo.Abp.SettingManagement;
using Volo.CmsKit.Comments;
using Volo.CmsKit.Public.Comments;
using Volo.CmsKit.Users;

namespace ContentModeration.Comments;

[Dependency(ReplaceServices = true)]
[ExposeServices(typeof(ICommentPublicAppService), typeof(CommentPublicAppService), typeof(MyCommentAppService))]
public class MyCommentAppService : CommentPublicAppService
{
    protected IContentModerator ContentModerator { get; }

    public MyCommentAppService(
        ICommentRepository commentRepository,
        ICmsUserLookupService cmsUserLookupService,
        IDistributedEventBus distributedEventBus,
        CommentManager commentManager,
        IOptionsSnapshot<CmsKitCommentOptions> cmsCommentOptions,
        ISettingManager settingManager,
        IContentModerator contentModerator)
        : base(commentRepository, cmsUserLookupService, distributedEventBus, commentManager, cmsCommentOptions, settingManager)
    {
        ContentModerator = contentModerator;
    }

    public override async Task<CommentDto> CreateAsync(string entityType, string entityId, CreateCommentInput input)
    {
        // Check for harmful content BEFORE creating the comment.
        // If harmful content is detected, an exception is thrown and the comment is not saved.
        await ContentModerator.CheckAsync(input.Text);

        return await base.CreateAsync(entityType, entityId, input);
    }

    public override async Task<CommentDto> UpdateAsync(Guid id, UpdateCommentInput input)
    {
        // Check for harmful content BEFORE updating the comment.
        // This prevents users from editing approved comments to add harmful content.
        await ContentModerator.CheckAsync(input.Text);

        return await base.UpdateAsync(id, input);
    }
}
```
**How This Works:**
1. When a user submits a new comment, the `CreateAsync` method is called.
2. Before the comment is saved to the database, we call `ContentModerator.CheckAsync()` with the comment text.
3. The moderation service sends the text to OpenAI's Moderation API.
4. If the content is flagged as harmful, a `UserFriendlyException` is thrown with a descriptive message.
5. The exception is caught by ABP's exception handling middleware and displayed to the user as a friendly error message.
6. If the content passes moderation, the comment is saved normally.
The same flow applies to comment updates, ensuring users can't circumvent moderation by editing previously approved comments.
Here's the full flow in action — submitting a comment with harmful content and seeing the moderation kick in:
![Content moderation demo](demo.gif)
## The Power of Dynamic Configuration: What the AI Management Module Provides
One of the most significant advantages of using the AI Management Module is the ability to manage your AI configurations dynamically. Let's explore what this means in practice.
### Runtime Configuration Changes
With the AI Management Module, you can:
- **Rotate API Keys**: Update your OpenAI API key through the admin UI without any downtime or redeployment. This is crucial for security compliance and key rotation policies.
- **Switch Models**: Want to test a newer moderation model? Simply update the model name in the workspace configuration. Your application will immediately start using the new model.
- **Adjust Settings**: Fine-tune settings like temperature or system prompts (for chat-based workspaces) without touching your codebase.
- **Enable/Disable Workspaces**: Temporarily disable a workspace for maintenance or testing without affecting other parts of your application.
### Multi-Environment Management
The dynamic configuration approach shines in multi-environment scenarios:
- **Development**: Use a test API key with lower rate limits
- **Staging**: Use a separate API key for integration testing
- **Production**: Use your production API key with appropriate security measures
All these configurations can be managed through the UI or via data seeding, without environment-specific code changes.
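If you prefer seeding workspace configurations per environment instead of entering them through the UI, ABP's data seeding system is a natural fit. The sketch below uses ABP's real `IDataSeedContributor` interface, but the workspace-creation call is left as a placeholder: the AI Management Module's actual application service and DTO names should be taken from its documentation, so the commented names here are hypothetical:

```csharp
using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;
using Volo.Abp.Data;
using Volo.Abp.DependencyInjection;

public class AiWorkspaceDataSeedContributor : IDataSeedContributor, ITransientDependency
{
    private readonly IConfiguration _configuration;

    public AiWorkspaceDataSeedContributor(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public Task SeedAsync(DataSeedContext context)
    {
        // Read the environment-specific key from configuration
        // (e.g., user secrets in Development, a secret vault in Production).
        var apiKey = _configuration["AI:OpenAIApiKey"];

        // Create or update the 'OpenAIAssistant' workspace here using the
        // AI Management Module's application service. The service/DTO names
        // below are hypothetical placeholders:
        // await _workspaceAppService.CreateAsync(new CreateWorkspaceInput { ... });

        return Task.CompletedTask;
    }
}
```

With this in place, each environment reads its own key at seed time, and no environment-specific code changes are needed.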
### Actively Maintained & What's Coming Next
The AI Management Module is **actively maintained** and continuously evolving. The team is working on exciting new capabilities that will further expand what you can do with AI in your ABP applications:
- **MCP (Model Context Protocol) Support** — Coming in **v10.2**, MCP support will allow your AI workspaces to interact with external tools and data sources, enabling more sophisticated AI-powered workflows.
- **RAG (Retrieval-Augmented Generation) System** — Also planned for **v10.2**, the built-in RAG system will let you ground AI responses in your own data, making AI features more accurate and context-aware.
- **And More** — Additional features and improvements are on the roadmap to make AI integration even more seamless.
Since the module is built on ABP's modular architecture, adopting these new capabilities will be straightforward — you can simply update the module and start using the new features without rewriting your existing AI integrations.
### Permission-Based Access Control
The AI Management Module integrates with ABP's permission system, allowing you to:
- Restrict who can view AI workspace configurations
- Control who can create or modify workspaces
- Limit access to specific workspaces based on user roles
This ensures that sensitive configurations like API keys are only accessible to authorized administrators.
## Conclusion
In this comprehensive guide, we've built a production-ready content moderation system that combines the power of OpenAI's `omni-moderation-latest` model with the flexibility of ABP's AI Management Module. Let's recap what makes this approach powerful:
### Key Takeaways
1. **Zero Training Required**: Unlike traditional ML approaches that require collecting datasets, training models, and ongoing maintenance, OpenAI's Moderation API works out of the box with state-of-the-art accuracy.
2. **Completely Free**: OpenAI's Moderation API has no token costs, making it economically viable for applications of any scale.
3. **Comprehensive Detection**: With 13+ categories of harmful content detection, you get protection against harassment, hate speech, violence, sexual content, self-harm, and more—all from a single API call.
4. **Dynamic Configuration**: The AI Management Module allows you to manage API keys, switch providers, and adjust settings at runtime without code changes or redeployment.
5. **Clean Integration**: By following ABP's service override pattern, we integrated moderation seamlessly into the existing CMS Kit comment system without modifying the original module.
6. **Production Ready**: The implementation includes proper error handling and user-friendly error messages suitable for production use.
### Resources
- [AI Management Module Documentation](https://abp.io/docs/latest/modules/ai-management)
- [OpenAI Moderation Guide](https://platform.openai.com/docs/guides/moderation)
- [CMS Kit Comments Feature](https://abp.io/docs/latest/modules/cms-kit/comments)
- [ABP Framework AI Infrastructure](https://abp.io/docs/latest/framework/infrastructure/artificial-intelligence)
