Overview
The Cellix framework must provide robust, first-class support for Azure Queue Storage, enabling reusable queuing and logging for distributed communications between services. This work introduces foundational packages and application integration—including legacy-based queue sender/receiver abstractions, application-configurable service registration, and functional business examples.
Built-in Logging to Blob Storage
- Every message sent or received (either direction) must be uploaded to Azure Blob Storage in a container `queue-messages`:
  - Messages sent: stored under `queue-messages/outbound/`
  - Messages received: stored under `queue-messages/inbound/`
  - File name: current timestamp (UTC, ISO8601, ms precision), e.g. `2026-02-07T14:42:03.123Z.json` (see the upload sketch at the end of this section)
- Blob Metadata and Tagging:
- Each file must be tagged and must have blob metadata for queue name and message direction.
- Developers must be able to configure additional metadata/tags per queue at the application layer (e.g., custom tags per message type / queue).
- Logging must be reliable, atomic, and must not block the send/receive pipeline (logging should not prevent the queue operation from completing; errors must be handled robustly and traced).
- Documentation must include instructions for local Azurite-based development (storage emulator).
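The following is a minimal sketch of the blob-logging behaviour described above using `@azure/storage-blob`, not the seedwork implementation itself; the function name `logQueueMessage` and the fire-and-forget call pattern are assumptions for illustration. With Azurite, the connection string can simply be `UseDevelopmentStorage=true`.

```ts
import { BlobServiceClient } from '@azure/storage-blob';

type QueueDirection = 'inbound' | 'outbound';

// Upload one queue message as a JSON blob under queue-messages/<direction>/,
// named with the current UTC ISO8601 timestamp (ms precision).
async function logQueueMessage(
  connectionString: string, // Azurite: 'UseDevelopmentStorage=true'
  queueName: string,
  direction: QueueDirection,
  message: unknown,
  extraMetadata: Record<string, string> = {}
): Promise<void> {
  const service = BlobServiceClient.fromConnectionString(connectionString);
  const container = service.getContainerClient('queue-messages');
  await container.createIfNotExists();

  const blobName = `${direction}/${new Date().toISOString()}.json`; // e.g. outbound/2026-02-07T14:42:03.123Z.json
  const body = JSON.stringify(message);
  const metadata = { queueName, direction, ...extraMetadata };

  await container.getBlockBlobClient(blobName).upload(body, Buffer.byteLength(body), {
    blobHTTPHeaders: { blobContentType: 'application/json' },
    metadata,       // blob metadata: queue name, direction, plus per-queue extras
    tags: metadata, // same values surfaced as blob index tags
  });
}

// Callers should not await this on the hot path; trace failures instead of
// letting them block the queue operation:
//   logQueueMessage(conn, 'community-created', 'outbound', payload).catch((e) => trace(e));
```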
Implementation expectations (legacy parity + improvements)
- Abstractions, sender/receiver, and service interface must provide at least the same feature completeness, reliability, and error handling as the legacy efdo implementation.
- `@cellix/queue-storage-seedwork` must enforce proper type safety using generic typings and runtime guarantees:
  - No `any` for generic queue message/payload plumbing.
  - Prefer `unknown` + validation + typed narrowing where needed.
  - Prefer generics and discriminated unions for message envelopes and payload types (see the envelope sketch below).
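A minimal sketch of what such an envelope could look like; the type names and the two concrete message shapes are assumptions based on the proof-of-concept scenarios later in this issue, not a final contract.

```ts
// Message envelope: generic over the discriminant and the payload.
interface QueueMessageEnvelope<TType extends string, TPayload> {
  type: TType;          // discriminant for narrowing
  correlationId: string;
  occurredAt: string;   // ISO8601
  payload: TPayload;
}

type CommunityCreatedMessage = QueueMessageEnvelope<
  'community-created',
  { communityId: string; name: string; createdAt: string }
>;

type MemberMessage = QueueMessageEnvelope<
  'member',
  { memberId: string; updates?: Record<string, unknown> }
>;

type KnownQueueMessage = CommunityCreatedMessage | MemberMessage;

// Raw queue input arrives as `unknown`; narrow it instead of casting through `any`.
function parseQueueMessage(raw: unknown): KnownQueueMessage {
  if (typeof raw !== 'object' || raw === null || !('type' in raw)) {
    throw new Error('Malformed queue message envelope');
  }
  const { type } = raw as { type: unknown };
  if (type === 'community-created' || type === 'member') {
    // The real receiver would run JSON schema validation here before narrowing.
    return raw as KnownQueueMessage;
  }
  throw new Error(`Unknown queue message type: ${String(type)}`);
}
```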
Deliverables & Structure
1) @cellix/queue-storage-seedwork
Create a new framework seedwork package containing reusable queue storage infrastructure code:
- Base classes / services for sending and receiving messages (typed).
- JSON schema validation for message envelopes + payloads.
- Built-in blob logging as described above (container `queue-messages`, with `inbound/` and `outbound/` prefixes).
- Extension points for:
- per-queue schema
- per-queue metadata/tags configuration
- correlation IDs / tracing integration
- error handling strategy
- Must take inspiration from the legacy implementation (see the references at the end of this issue); a rough sketch of the base shapes follows.
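A rough sketch of the shapes the seedwork package could expose, loosely modeled on the legacy efdo sender/receiver linked in the references; class, option, and method names here are assumptions, not the final API.

```ts
import { QueueClient } from '@azure/storage-queue';

interface QueueLoggingOptions {
  metadata?: Record<string, string>; // extra blob metadata for this queue
  tags?: Record<string, string>;     // extra blob index tags for this queue
}

interface QueueRegistration<TPayload> {
  queueName: string;
  schema: object;                    // JSON schema describing TPayload
  logging?: QueueLoggingOptions;
}

abstract class BaseQueueSender<TPayload> {
  constructor(
    protected readonly queueClient: QueueClient,
    protected readonly registration: QueueRegistration<TPayload>
  ) {}

  async send(payload: TPayload, correlationId: string): Promise<void> {
    // 1. validate `payload` against registration.schema (throw on failure)
    // 2. wrap in the typed envelope and base64-encode to match the Azure
    //    Functions queue trigger's default message encoding
    const envelope = { type: this.registration.queueName, correlationId, payload };
    const body = Buffer.from(JSON.stringify(envelope)).toString('base64');
    await this.queueClient.sendMessage(body);
    // 3. log the JSON envelope under queue-messages/outbound/ without awaiting
    //    on the hot path; trace any logging error
  }
}
```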
2) @ocom/service-queue-storage
Create an Owner Community application-specific package that:
- Depends on `@cellix/queue-storage-seedwork`.
- Maintains Owner Community’s queue configuration (illustrated in the sketch after this list):
  - queue names
  - direction (inbound/outbound)
  - schemas
  - logging metadata/tags configuration
- On startup, registers all configured queues for sending and receiving.
- Adheres to Cellix infrastructure service standards (startup/shutdown lifecycle, DI registration patterns).
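An illustrative sketch of how the application package might declare its queue configuration and hook it into the service lifecycle; the config entry shape and the `startUp`/`shutDown` method names are assumptions pending the seedwork API and the existing Cellix service conventions.

```ts
// Owner Community queue configuration, kept in one place.
const ownerCommunityQueues = [
  {
    queueName: 'community-created',
    direction: 'outbound',
    schema: { /* JSON schema for the community-created payload */ },
    logging: { tags: { domain: 'community' } },
  },
  {
    queueName: 'member',
    direction: 'inbound',
    schema: { /* JSON schema for the member payload */ },
    logging: { tags: { domain: 'member' } },
  },
] as const;

export class ServiceQueueStorage {
  // Cellix infrastructure service lifecycle (method names assumed)
  async startUp(): Promise<void> {
    for (const queue of ownerCommunityQueues) {
      // create the queue client and register a typed sender or receiver,
      // depending on queue.direction
      console.log(`registering ${queue.direction} queue: ${queue.queueName}`);
    }
  }

  async shutDown(): Promise<void> {
    // dispose clients / flush any pending blob log uploads
  }
}
```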
3) Extend Cellix fluent startup API to support queue triggers
In `@ocom/api` (and/or Cellix core where appropriate), expose a fluent, chained startup API to register Azure Functions queue handlers, similar to how HTTP handlers are registered today.
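A minimal, self-contained sketch of the fluent shape this could take; the builder below is illustrative only, since the real work would extend the existing Cellix startup builder rather than introduce a new one.

```ts
type QueueHandler = (message: unknown) => Promise<void>;

class CellixStartupBuilder {
  private readonly queueHandlers = new Map<string, QueueHandler>();

  // Chainable, mirroring how HTTP handlers are registered today.
  registerQueueHandler(queueName: string, handler: QueueHandler): this {
    this.queueHandlers.set(queueName, handler);
    return this;
  }

  startUp(): void {
    for (const [queueName] of this.queueHandlers) {
      // wire each handler to an Azure Functions storage-queue trigger for queueName
      console.log(`queue trigger registered: ${queueName}`);
    }
  }
}

// Usage:
new CellixStartupBuilder()
  .registerQueueHandler('member', async (message) => {
    // decode + validate with the seedwork receiver, then update the member document
  })
  .startUp();
```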
Proof-of-concept scenarios (MUST be implemented in Owner Community)
These examples are required to prove the design works with what is already functional in the repo and to provide contributors a working reference.
Outbound queue example: community-created
- On community creation, an existing integration event handler for `CommunityCreatedEvent` (already firing in the domain) must send a queue message to the outbound queue `community-created`.
- The message contract should align with the actual event and include relevant fields (e.g., `communityId`, `name`, `createdAt`, etc.).
- The send must:
  - be type-safe (generic typed payload)
  - be schema-validated at runtime
  - log the sent message as JSON to blob storage under `queue-messages/outbound/` with configured tags/metadata (see the sketch after this list)
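A sketch of the outbound hook, assuming the existing `CommunityCreatedEvent` handler receives (or resolves) a seedwork-provided sender; the event field names and the choice of correlation id are assumptions.

```ts
interface CommunityCreatedPayload {
  communityId: string;
  name: string;
  createdAt: string; // ISO8601
}

interface CommunityCreatedSender {
  send(payload: CommunityCreatedPayload, correlationId: string): Promise<void>;
}

// Inside the existing integration event handler for CommunityCreatedEvent:
async function onCommunityCreated(
  event: { aggregateId: string; name: string; occurredAt: Date }, // field names assumed
  sender: CommunityCreatedSender
): Promise<void> {
  // Typed, schema-validated send to the `community-created` queue; the sender
  // also logs the JSON body under queue-messages/outbound/ with configured tags.
  await sender.send(
    {
      communityId: event.aggregateId,
      name: event.name,
      createdAt: event.occurredAt.toISOString(),
    },
    event.aggregateId // correlation id choice is an assumption
  );
}
```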
Inbound queue example: member
- Create an inbound queue `member` that accepts a payload with `memberId: string` (required; an ObjectId).
- Select a few sensible fields from the member schema to include on the member message payload so the queue handler can update those fields in the database when it processes a message. The specific fields don't matter; we just need something that demonstrates the handler processes messages correctly and persists a change.
- Implement an Azure Function queue trigger handler that (see the sketch after this list):
  - validates and decodes the message with the seedwork receiver
  - finds the member document by `memberId` in MongoDB
  - if `updates` is present, applies those updates to that member document (simple, pragmatic update logic is fine for MVP)
  - logs the received message + outcome to blob storage under `queue-messages/inbound/` with configured tags/metadata
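A sketch of the inbound trigger using the Azure Functions v4 Node.js programming model, with the raw MongoDB driver shown for brevity; in the repo the update would go through the existing data-access layer, and the queue, connection, collection, and env var names are assumptions.

```ts
import { app, InvocationContext } from '@azure/functions';
import { MongoClient, ObjectId } from 'mongodb';

interface MemberQueueMessage {
  memberId: string;                  // required; ObjectId as string
  updates?: Record<string, unknown>; // e.g. { displayName: 'New Name' }
}

app.storageQueue('memberQueueTrigger', {
  queueName: 'member',
  connection: 'AzureWebJobsStorage', // connection setting name assumed
  handler: async (queueItem: unknown, context: InvocationContext): Promise<void> => {
    // 1. validate + narrow with the seedwork receiver (plain cast shown for brevity)
    const message = queueItem as MemberQueueMessage;

    // 2. find the member document and apply updates, if any
    const client = await MongoClient.connect(process.env.MONGODB_URI ?? 'mongodb://localhost:27017');
    try {
      if (message.updates) {
        await client
          .db()
          .collection('members')
          .updateOne({ _id: new ObjectId(message.memberId) }, { $set: message.updates });
      }
    } finally {
      await client.close();
    }

    // 3. log the received message + outcome under queue-messages/inbound/
    context.log(`Processed member queue message for ${message.memberId}`);
  },
});
```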
Acceptance Criteria
- `@cellix/queue-storage-seedwork` package exists, with tests and documentation.
- Built-in blob logging exists and writes JSON files to `queue-messages/inbound/` and `queue-messages/outbound/` with timestamp filenames, plus tags/metadata configurable per queue.
- No `any` is used for generic queue message/payload plumbing; the public API is strongly typed with generics and discriminated unions as needed.
- `@ocom/service-queue-storage` exists, registers/configures Owner Community queues at startup, and adheres to infrastructure service standards.
- Owner Community proves both scenarios end-to-end:
  - `CommunityCreatedEvent` => message sent to the `community-created` queue and logged to blob
  - `member` queue trigger => updates the member doc and logs the inbound message + outcome to blob
- `@ocom/api` exposes a fluent way to register Azure Functions queue handlers on startup.
References (legacy foundation)
- https://github.com/ECFMG/efdo/blob/7972a5f7dc6d05b1d3db2d1d312af9ab89761052/data-access/services/queue-storage/base-queue-sender.ts
- https://github.com/ECFMG/efdo/blob/7972a5f7dc6d05b1d3db2d1d312af9ab89761052/data-access/services/queue-storage/base-queue-receiver.ts
Area: infra, seedwork, azure, queue, logging, integration, example