
Implement Azure Queue Storage support in Cellix with type-safe, logged queue sender/receiver and proof-of-concept #160

@nnoce14

Description


Overview

The Cellix framework must provide robust, first-class support for Azure Queue Storage, enabling reusable queuing and logging for distributed communication between services. This work introduces foundational packages and application integration, including queue sender/receiver abstractions modeled on the legacy implementation, application-configurable service registration, and working business examples.

Built-in Logging to Blob Storage

  • Every message sent or received must be uploaded to Azure Blob Storage in a queue-messages container:
    • Messages sent: stored under queue-messages/outbound/
    • Messages received: stored under queue-messages/inbound/
    • File name: current timestamp (UTC, ISO8601, ms precision), e.g. 2026-02-07T14:42:03.123Z.json
  • Blob Metadata and Tagging:
    • Each blob must carry both index tags and blob metadata recording the queue name and message direction.
    • Developers must be able to configure additional metadata/tags per queue at the application layer (e.g., custom tags per message type / queue).
  • Logging must be reliable and atomic, and must not block the send/receive pipeline: a logging failure must never prevent the queue operation from completing, and logging errors must be handled robustly and traced.
  • Documentation must include instructions for local Azurite-based development (storage emulator).
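
As a sketch of the naming and metadata rules above (helper names are illustrative, not part of the seedwork's real API), assuming UTC ISO8601 timestamps with millisecond precision:

```typescript
// Hypothetical helpers illustrating the blob naming and metadata convention.
type QueueDirection = 'inbound' | 'outbound';

// e.g. queue-messages/outbound/2026-02-07T14:42:03.123Z.json
function queueLogBlobName(direction: QueueDirection, now: Date = new Date()): string {
  // Date.prototype.toISOString() is always UTC with millisecond precision.
  return `queue-messages/${direction}/${now.toISOString()}.json`;
}

// Required queue name + direction, merged with any per-queue tags
// configured at the application layer.
function queueLogMetadata(
  queueName: string,
  direction: QueueDirection,
  extraTags: Record<string, string> = {},
): Record<string, string> {
  return { queueName, direction, ...extraTags };
}
```

In the real implementation these values would be passed to the blob upload call; the upload itself should be fire-and-forget with its own error handling so it never blocks the queue operation.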

Implementation expectations (legacy parity + improvements)

  • Abstractions, sender/receiver, and service interface must provide at least the same feature completeness, reliability, and error handling as the legacy efdo implementation.
  • @cellix/queue-storage-seedwork must enforce proper type safety using generic typings and runtime guarantees:
    • No any for generic queue message/payload plumbing.
    • Prefer unknown + validation + typed narrowing where needed.
    • Prefer generics and discriminated unions for message envelopes and payload types.
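
One way to satisfy these rules, sketched with illustrative names (the real envelope shape is up to the package):

```typescript
// Generic envelope: the message type is a string literal, the payload is typed.
interface QueueMessageEnvelope<TType extends string, TPayload> {
  messageType: TType;
  payload: TPayload;
  sentAt: string; // ISO8601
}

// Discriminated union over messageType for the application's queues.
type CommunityCreatedMessage = QueueMessageEnvelope<'community-created', { communityId: string; name: string }>;
type MemberUpdateMessage = QueueMessageEnvelope<'member', { memberId: string }>;
type OwnerCommunityMessage = CommunityCreatedMessage | MemberUpdateMessage;

// Receive side: start from unknown and narrow with a runtime guard, never any.
function isEnvelope(value: unknown): value is QueueMessageEnvelope<string, unknown> {
  return typeof value === 'object' && value !== null
    && typeof (value as { messageType?: unknown }).messageType === 'string'
    && 'payload' in value;
}
```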

Deliverables & Structure

1) @cellix/queue-storage-seedwork

Create a new framework seedwork package containing reusable queue storage infrastructure code.

2) @ocom/service-queue-storage

Create an Owner Community application-specific package that:

  • Depends on @cellix/queue-storage-seedwork.
  • Maintains Owner Community’s queue configuration:
    • queue names
    • direction (inbound/outbound)
    • schemas
    • logging metadata/tags configuration
  • On startup, registers all configured queues for sending and receiving.
  • Adheres to Cellix infrastructure service standards (startup/shutdown lifecycle, DI registration patterns).
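
A minimal sketch of what that configuration could look like (field names and validators are assumptions, with a plain predicate standing in for a real schema library):

```typescript
// Hypothetical per-queue configuration shape maintained by the application.
interface QueueConfig {
  queueName: string;
  direction: 'inbound' | 'outbound';
  validate: (payload: unknown) => boolean; // schema check, e.g. a zod parse in practice
  loggingTags?: Record<string, string>;    // extra blob tags/metadata for this queue
}

const ownerCommunityQueues: QueueConfig[] = [
  {
    queueName: 'community-created',
    direction: 'outbound',
    validate: (p) => typeof p === 'object' && p !== null && 'communityId' in p,
    loggingTags: { source: 'owner-community' },
  },
  {
    queueName: 'member',
    direction: 'inbound',
    validate: (p) => typeof p === 'object' && p !== null && 'memberId' in p,
  },
];
```

At startup the service would iterate this list and register a sender or receiver per entry.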

3) Extend Cellix fluent startup API to support queue triggers

In @ocom/api (and/or Cellix core where appropriate), expose a fluent, chained startup API to register Azure Functions queue handlers similarly to how HTTP handlers are registered today.
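
A minimal sketch of the chaining pattern, assuming a builder similar to today's HTTP registration flow (class and method names are hypothetical):

```typescript
type QueueHandler = (message: unknown) => Promise<void> | void;

// Illustrative builder: each registration returns `this` so calls chain fluently.
class StartupBuilder {
  private readonly queueHandlers = new Map<string, QueueHandler>();

  registerQueueHandler(queueName: string, handler: QueueHandler): this {
    this.queueHandlers.set(queueName, handler);
    return this;
  }

  registeredQueues(): string[] {
    return [...this.queueHandlers.keys()];
  }
}

const app = new StartupBuilder()
  .registerQueueHandler('member', async () => { /* apply member updates */ })
  .registerQueueHandler('community-created', () => { /* illustrative second handler */ });
```

In the real API each registration would also bind an Azure Functions queue trigger for that queue name.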

Proof-of-concept scenarios (MUST be implemented in Owner Community)

These examples are required to prove the design works with what is already functional in the repo and to provide contributors a working reference.

Outbound queue example: community-created

  • On community creation, an existing integration event handler for CommunityCreatedEvent (already firing in the domain) must send a queue message to the outbound queue community-created.
  • The message contract should align with the actual event and include relevant fields (e.g., communityId, name, createdAt, etc.).
  • The send must:
    • be type-safe (generic typed payload)
    • be schema-validated at runtime
    • log the sent message as JSON to blob storage under queue-messages/outbound/ with configured tags/metadata
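
The outbound flow above can be sketched with an in-memory sender standing in for the real Azure Queue Storage sender (all names are illustrative):

```typescript
// Generic, typed sender interface as the seedwork might expose it.
interface QueueSender<T> {
  send(message: T): Promise<void>;
}

interface CommunityCreatedPayload {
  communityId: string;
  name: string;
  createdAt: string; // ISO8601
}

// Test double: the real sender would validate against the schema,
// enqueue the message, then log the JSON to blob storage.
class InMemorySender<T> implements QueueSender<T> {
  readonly sent: T[] = [];
  async send(message: T): Promise<void> {
    this.sent.push(message);
  }
}

// Integration event handler for CommunityCreatedEvent forwards to the queue.
async function onCommunityCreated(
  event: CommunityCreatedPayload,
  sender: QueueSender<CommunityCreatedPayload>,
): Promise<void> {
  await sender.send(event);
}
```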

Inbound queue example: member

  • Create an inbound queue member that accepts a payload:
    • memberId: string (required; objectId)
    • Select a few sensible fields from the member schema to include in the payload as an optional updates object; the queue handler applies them to the member document in the database when it processes a message. The specific fields don't matter; they only need to demonstrate that the handler processes messages correctly and persists a change.
  • Implement an Azure Function queue trigger handler that:
    • validates and decodes the message with the seedwork receiver
    • finds the member document by memberId in MongoDB
    • if updates is present, applies those updates to that member document (simple, pragmatic update logic is fine for MVP)
    • logs the received message + outcome to blob storage under queue-messages/inbound/ with configured tags/metadata
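
The handler's steps above can be sketched with an in-memory store standing in for MongoDB (payload shape, field names, and outcome values are assumptions, not the real Owner Community models):

```typescript
interface MemberUpdatePayload {
  memberId: string;
  updates?: { displayName?: string }; // illustrative updatable field
}

type MemberDoc = { id: string; displayName: string };

// Runtime guard: the seedwork receiver would validate/decode the raw message.
function isMemberUpdate(value: unknown): value is MemberUpdatePayload {
  return typeof value === 'object' && value !== null
    && typeof (value as { memberId?: unknown }).memberId === 'string';
}

async function handleMemberMessage(
  raw: unknown,
  store: Map<string, MemberDoc>, // MongoDB collection in reality
): Promise<'applied' | 'skipped' | 'invalid'> {
  if (!isMemberUpdate(raw)) return 'invalid';
  const doc = store.get(raw.memberId); // find the member document by memberId
  if (!doc) return 'skipped';
  if (raw.updates?.displayName) doc.displayName = raw.updates.displayName;
  // Real handler: also log message + outcome to queue-messages/inbound/.
  return 'applied';
}
```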

Acceptance Criteria

  • @cellix/queue-storage-seedwork package exists, with tests and documentation.
  • Built-in blob logging exists and writes JSON files to queue-messages/inbound/ and queue-messages/outbound/ with timestamp filenames, plus tags/metadata configurable per queue.
  • No any used for generic queue message/payload plumbing; the public API is strongly typed with generics and discriminated unions as needed.
  • @ocom/service-queue-storage exists, registers/configures Owner Community queues at startup, and adheres to infrastructure service standards.
  • Owner Community proves both scenarios end-to-end:
    • CommunityCreatedEvent => message sent to community-created queue and logged to blob
    • member queue trigger => updates member doc and logs inbound message + outcome to blob
  • @ocom/api exposes a fluent way to register Azure Functions queue handlers on startup.

Area: infra, seedwork, azure, queue, logging, integration, example
