- HTTP REST API to create/retrieve/search "work to do": `pp_backend_api`.
- Postgres DB via Docker with a SQL schema: `pp_storage`.
- `pp_lib`: Rust library sharing the source code for the business logic.
- CLI utility (`cli_01`) to interact with the HTTP API and the DB directly via `pp_lib`.
- CLI utility (`cli_02`) to open a TCP socket and perform a manual HTTP call to `pp_backend_api`.
- AMQP RabbitMQ queue to publish/subscribe to produce/consume messages.
- `task_producer`: a schedule (CLI) to produce messages as "work demand" via AMQP (see the sketch after this list).
- `task_consumer`: a backend schedule that consumes the AMQP queue, maps a "work demand" to a "work to do" in the DB, stores "events" in a DB table to track the execution of the work, and finally updates the work row in the DB with the results of the calculations.
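For illustration, here is a minimal sketch of the producer side, assuming the `lapin` AMQP client, a Tokio runtime, and a queue named `work_demand` (the crate choice, runtime, and queue name are all assumptions, not necessarily what this repo uses):

```rust
// Minimal producer sketch (assumptions: lapin 2.x, a Tokio runtime, and a
// queue named "work_demand"; the actual repo may differ on all three).
use lapin::{
    options::{BasicPublishOptions, QueueDeclareOptions},
    types::FieldTable,
    BasicProperties, Connection, ConnectionProperties,
};

#[tokio::main]
async fn main() -> Result<(), lapin::Error> {
    let conn =
        Connection::connect("amqp://127.0.0.1:5672/%2f", ConnectionProperties::default()).await?;
    let channel = conn.create_channel().await?;

    // Declare the queue so publishing also works on a fresh broker.
    channel
        .queue_declare("work_demand", QueueDeclareOptions::default(), FieldTable::default())
        .await?;

    // The payload mirrors the "work demand" JSON shown further down.
    let payload = br#"{"add_up_to": 2, "done": false}"#;
    channel
        .basic_publish(
            "",            // default exchange
            "work_demand", // routing key = queue name
            BasicPublishOptions::default(),
            payload,
            BasicProperties::default(),
        )
        .await?  // publish accepted by the channel
        .await?; // publisher confirm resolved
    Ok(())
}
```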
All the operations can be performed with a dedicated target in the Makefile.
The "work to do" is adding together numbers up to a upper bound threshold.
```json
{
  "id": 21,
  "work_code": "api-bjq8euwsEA",
  "add_up_to": 4,
  "done": false,
  "created_on": 1634115736,
  "updated_on": 1634115736
}
```

- Rust structure: `Work` (a sketch follows this list).
- The `work_code` field has a prefix of `api-*`.
- The `done` field is `false` and is supposed to be updated when a hypothetical backend schedule picks up work to do from the database table `works` with a time-range filter (e.g. last 6 hours).
- Both the `work_code` suffix and the `add_up_to` field are randomly generated before adding the row to the database.
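For illustration, a minimal sketch of how the `Work` row might be modeled and the work itself performed, assuming `serde` for (de)serialization; the field types are inferred from the JSON above, and summing inclusively from 0 is an assumption about the exact bounds:

```rust
use serde::{Deserialize, Serialize};

// Sketch of the row shown in the JSON example above; types are inferred.
#[derive(Debug, Serialize, Deserialize)]
struct Work {
    id: i64,
    work_code: String, // e.g. "api-bjq8euwsEA": fixed prefix + random suffix
    add_up_to: u64,    // upper bound threshold of the summation
    done: bool,        // flipped to true once the work has been processed
    created_on: i64,   // Unix timestamp (seconds)
    updated_on: i64,   // Unix timestamp (seconds)
}

// The "work to do": add together the numbers up to the threshold.
// Assuming an inclusive sum from 0, add_up_to = 4 yields 0+1+2+3+4 = 10.
fn add_up(work: &Work) -> u64 {
    (0..=work.add_up_to).sum()
}
```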
```json
{
  "add_up_to": 2,
  "done": false
}
```

- Rust structure: `WorkDemand`.
- This is translated into the Rust structure `Work` by the `task_consumer` (see the sketch after this list).
- The `work_code` for a message pulled from the queue has a prefix of `consumer-*`.
- The `done` field is updated to `true` once the calculations have been performed by the `task_consumer`.
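A sketch of that translation, reusing the hypothetical `Work` struct and `add_up` function from the previous sketch; the helper names (`into_work`, `random_suffix`, `now`) are illustrative and the `rand` crate is an assumption:

```rust
use rand::Rng;
use serde::{Deserialize, Serialize};

// Sketch of the queued message shown in the JSON example above.
#[derive(Debug, Serialize, Deserialize)]
struct WorkDemand {
    add_up_to: u64,
    done: bool,
}

impl WorkDemand {
    // Map a "work demand" pulled from the queue onto a "work to do" row;
    // such rows get a `consumer-*` work_code prefix.
    fn into_work(self) -> Work {
        let ts = now();
        Work {
            id: 0, // assigned by the database on insert
            work_code: format!("consumer-{}", random_suffix()),
            add_up_to: self.add_up_to,
            done: false, // set to true after the sum has been computed
            created_on: ts,
            updated_on: ts,
        }
    }
}

// Once the consumer has run the calculation, it marks the row as done
// before writing the result back to the DB.
fn complete(work: &mut Work) -> u64 {
    let sum = add_up(work);
    work.done = true;
    work.updated_on = now();
    sum
}

fn random_suffix() -> String {
    rand::thread_rng()
        .sample_iter(rand::distributions::Alphanumeric)
        .take(10)
        .map(char::from)
        .collect()
}

fn now() -> i64 {
    std::time::SystemTime::now()
        .duration_since(std::time::UNIX_EPOCH)
        .expect("system clock before Unix epoch")
        .as_secs() as i64
}
```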
These calculated rows can then be searched for and retrieved via the HTTP API, as in the hypothetical client call below.
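For example, using the `reqwest` blocking client; the `/works?done=true` route is an assumption for illustration, not necessarily the actual endpoint exposed by `pp_backend_api`:

```rust
// Hypothetical search call; the endpoint path and query parameters are
// assumed. Relies on the `Work` struct (deriving Deserialize) sketched
// earlier and reqwest's "blocking" and "json" features.
fn fetch_done_works() -> Result<Vec<Work>, reqwest::Error> {
    reqwest::blocking::get("http://localhost:8080/works?done=true")?
        .json::<Vec<Work>>()
}
```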
