## Status

| Features         | v1  | v2  | Status |
| ---------------- | --- | --- | ------ |
| Postgres Changes | ✔   | ✔   | GA     |
| Broadcast        |     | ✔   | Beta   |
| Presence         |     | ✔   | Beta   |
This repository focuses on version 2 but you can still access the previous version's [code](https://github.com/tealbase/realtime/tree/v1) and [Docker image](https://hub.docker.com/layers/tealbase/realtime/v1.0.0/images/sha256-e2766e0e3b0d03f7e9aa1b238286245697d0892c2f6f192fd2995dca32a4446a). For the latest Docker images go to https://hub.docker.com/r/tealbase/realtime.

The codebase is under heavy development and the documentation is constantly evolving. Give it a try and let us know what you think by creating an issue. Watch [releases](https://github.com/tealbase/realtime/releases) of this repo to get notified of updates. And give us a star if you like it!
## Overview
### What is this?
For a more detailed overview, head over to the [Realtime guides](https://tealbase.com/docs/guides/realtime).
### Does this server guarantee message delivery?
The server does not guarantee that every message will be delivered to your clients, so keep that in mind as you're using Realtime.
## Quick start
You can check out the [Multiplayer demo](https://multiplayer.dev), which features Broadcast, Presence, and Postgres Changes; its source lives in the demo directory: https://github.com/tealbase/realtime/tree/main/demo.
## Client libraries
- JavaScript: [@tealbase/realtime-js](https://github.com/tealbase/realtime-js)
- Dart: [@tealbase/realtime-dart](https://github.com/tealbase/realtime-dart)
## Server Setup
To get started, spin up your Postgres database and Realtime server containers defined in `docker-compose.yml`. As an example, you may run `docker-compose -f docker-compose.yml up`.

> **Note**
> tealbase runs Realtime in production with a separate database that keeps track of all tenants. However, a schema, `_realtime`, is created when spinning up containers via `docker-compose.yml` to simplify local development.

A tenant has already been added on your behalf. You can confirm this by checking the `_realtime.tenants` and `_realtime.extensions` tables inside the database.
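
For example, a quick check from `psql` (a minimal sketch against the local database):

```sql
-- Inspect the seeded tenant and its extension settings.
select * from _realtime.tenants;
select * from _realtime.extensions;
```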

You can add your own by making a `POST` request to the server. You must change the tenant-specific values (such as `name` and `external_id`) to match your setup; the request below is a sketch with placeholder values:

```bash
curl -X POST \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer <token-signed-with-API_JWT_SECRET>' \
  -d '{
    "tenant": {
      "name": "<tenant-name>",
      "external_id": "<tenant-external-id>",
      "jwt_secret": "<tenant-jwt-secret>",
      "extensions": [{
        "type": "postgres_cdc_rls",
        "settings": {
          "db_host": "<database-host>",
          "db_port": "5432",
          "region": "us-west-1",
          "poll_interval_ms": 100,
          "poll_max_record_bytes": 1048576
        }
      }]
    }
  }' \
  http://localhost:4000/api/tenants
```

> **Note**
> The `Authorization` token is signed with the secret set by `API_JWT_SECRET` in `docker-compose.yml`.

If you want to listen to Postgres changes, you can create a table and then add the table to the `tealbase_realtime` publication, for example as shown below:
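
A minimal sketch (the `todos` table is hypothetical):

```sql
-- Create a table and add it to the publication Realtime listens to.
create table public.todos (
  id bigint generated by default as identity primary key,
  task text
);

alter publication tealbase_realtime add table public.todos;
```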

The WebSocket URL must contain the subdomain, which is the `external_id` of the tenant in the `_realtime.tenants` table.
If you're using the default tenant, the URL is `ws://realtime-dev.localhost:4000/socket` (make sure the port is correct for your development environment), and you can use `eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE3MDMwMjgwODcsInJvbGUiOiJwb3N0Z3JlcyJ9.tz_XJ89gd6bN8MBpCl7afvPrZiBH6RB65iA1FadPT3Y` for the token. The token must have `exp` and `role` (database role) keys.
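
As a minimal sketch (modeled on the demo client in this repo; the channel name and token are placeholders), connecting with [@tealbase/realtime-js](https://github.com/tealbase/realtime-js) looks like this:

```javascript
import { RealtimeClient } from '@tealbase/realtime-js'

// Placeholder local-dev values: the default tenant URL and a JWT carrying
// the mandatory `exp` and `role` claims.
const socket = new RealtimeClient('ws://realtime-dev.localhost:4000/socket', {
  params: { apikey: '<your-signed-jwt>' },
})

// Channels can be named anything; all clients on the same Channel receive
// messages sent to that Channel.
const channel = socket.channel('room:demo', {
  config: { broadcast: { self: true } },
})

channel.on('broadcast', { event: '*' }, (payload) => console.log(payload))

channel.subscribe((status) => console.log(`Realtime Channel status: ${status}`))
```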

**Environment Variables**

| Variable | Type | Description |
| -------- | ---- | ----------- |
| PORT | number | Port on which your clients/listeners connect |
| DB_HOST | string | Database host URL |
| DB_PORT | number | Database port |
| DB_USER | string | Database user |
| DB_PASSWORD | string | Database password |
| DB_NAME | string | Postgres database name |
| DB_ENC_KEY | string | Key used to encrypt sensitive fields in the `_realtime.tenants` and `_realtime.extensions` tables. Recommended: 16 characters |
| DB_AFTER_CONNECT_QUERY | string | Query that is run after the server connects to the database |
| DB_IP_VERSION | string | IP version to use. Allowed values are "ipv6" and "ipv4". If not set, the server tries to infer the correct version |
| API_JWT_SECRET | string | Secret used to sign the tokens that manage tenants and their extensions via HTTP requests |
| SECRET_KEY_BASE | string | Secret used by the server to sign cookies. Recommended: 64 characters |
| ERL_AFLAGS | string | Set to either "-proto_dist inet_tcp" or "-proto_dist inet6_tcp" depending on whether your network uses IPv4 or IPv6, respectively |
| APP_NAME | string | Name of the server |
| DNS_NODES | string | Node name used when running the server in a cluster |
| MAX_CONNECTIONS | string | Soft maximum for WebSocket connections. Defaults to '16384' |
| MAX_HEADER_LENGTH | string | Maximum header length for connections, in bytes. Defaults to '4096' |
| NUM_ACCEPTORS | string | Number of server processes that relay incoming WebSocket connection requests. Defaults to '100' |
| DB_QUEUE_TARGET | string | Maximum time to wait for a connection from the pool. Defaults to '5000' (5 seconds). See [DBConnection](https://hexdocs.pm/db_connection/DBConnection.html#start_link/2-queue-config) for more info |
| DB_QUEUE_INTERVAL | string | Interval over which to check whether all connections were checked out under DB_QUEUE_TARGET. If all connections surpassed the target during this interval, then the target is doubled. Defaults to '5000' (5 seconds). See [DBConnection](https://hexdocs.pm/db_connection/DBConnection.html#start_link/2-queue-config) for more info |
| DB_POOL_SIZE | string | Number of connections in the database pool. Defaults to '5' |
| SLOT_NAME_SUFFIX | string | Appended to the replication slot to allow a custom slot name. May contain lowercase letters, numbers, and underscores. Together with the default `tealbase_realtime_replication_slot`, the slot name should be at most 64 characters long |
| TENANT_MAX_BYTES_PER_SECOND | string | Default maximum bytes per second each tenant can support, used when creating a tenant for the first time. Defaults to '100000' |
| TENANT_MAX_CHANNELS_PER_CLIENT | string | Default maximum number of channels each tenant can support, used when creating a tenant for the first time. Defaults to '100' |
| TENANT_MAX_CONCURRENT_USERS | string | Default maximum concurrent users per channel each tenant can support, used when creating a tenant for the first time. Defaults to '200' |
| TENANT_MAX_EVENTS_PER_SECOND | string | Default maximum events per second each tenant can support, used when creating a tenant for the first time. Defaults to '100' |
| TENANT_MAX_JOINS_PER_SECOND | string | Default maximum channel joins per second each tenant can support, used when creating a tenant for the first time. Defaults to '100' |
| SEED_SELF_HOST | boolean | Seeds the system with a default tenant |
| SELF_HOST_TENANT_NAME | string | Tenant reference to use when self-hosting. Keep in mind the name must be URL-compatible |
| LOG_LEVEL | string | Log level for Realtime logs. Defaults to 'info'; supported levels are: info, emergency, alert, critical, error, warning, notice, debug |
| RUN_JANITOR | boolean | Whether janitor tasks should run |
| JANITOR_SCHEDULE_TIMER_IN_MS | number | Time in ms between janitor task runs |
| JANITOR_SCHEDULE_RANDOMIZE | boolean | Adds a randomized number of minutes to the timer |
| JANITOR_RUN_AFTER_IN_MS | number | Time in ms after boot before janitor tasks start |
| JANITOR_CLEANUP_MAX_CHILDREN | number | Maximum number of concurrent tasks working on janitor cleanup |
| JANITOR_CLEANUP_CHILDREN_TIMEOUT | number | Timeout for each async janitor cleanup task |
| JANITOR_CHUNK_SIZE | number | Number of tenants to process per chunk. Each chunk is processed by a Task |
| MIGRATION_PARTITION_SLOTS | number | Number of dynamic supervisor partitions used by the migrations process |
| METRICS_CLEANER_SCHEDULE_TIMER_IN_MS | number | Time in ms between Metrics Cleaner task runs |
| REQUEST_ID_BAGGAGE_KEY | string | OTEL baggage key to be used as the request id |
| OTEL_SDK_DISABLED | boolean | Disables OpenTelemetry tracing completely when 'true' |
| OTEL_TRACES_EXPORTER | string | Possible values: `otlp` or `none`. See the [opentelemetry-erlang documentation](https://github.com/open-telemetry/opentelemetry-erlang/tree/v1.4.0/apps#os-environment) for more details on configuring the traces exporter |
| OTEL_TRACES_SAMPLER | string | Defaults to `parentbased_always_on`. More info [here](https://opentelemetry.io/docs/languages/erlang/sampling/#environment-variables) |
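
As a minimal sketch, you could export a handful of these before starting the server; the values below mirror the local-development defaults and are illustrative only:

```sh
# Illustrative local-development values; adjust for your environment.
export PORT=4000
export DB_HOST=127.0.0.1
export DB_PORT=5432
export DB_USER=postgres
export DB_PASSWORD=postgres
export DB_NAME=postgres
export LOG_LEVEL=info
```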

The OpenTelemetry variables mentioned above are not an exhaustive list; see the full set of [supported environment variables](https://opentelemetry.io/docs/languages/sdk-configuration/).

## WebSocket URL

The WebSocket URL is in the following format for local development: `ws://[external_id].localhost:4000/socket/websocket`

If you're using tealbase's hosted Realtime in production, the URL is `wss://[project-ref].tealbase.co/realtime/v1/websocket?apikey=[anon-token]&log_level=info&vsn=1.0.0`
## WebSocket Connection Authorization

WebSocket connections are authorized via symmetric JWT verification. Only JWTs signed with the following algorithms are supported:

- HS256
- HS384
- HS512

Verify JWT claims by setting JWT_CLAIM_VALIDATORS:

> e.g. {'iss': 'Issuer', 'nbf': 1610078130}
>
> Then the JWT's "iss" value must equal "Issuer" and its "nbf" value must equal 1610078130.
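
For example, a sketch reusing the values from the note above:

```sh
# Reject tokens whose "iss" is not "Issuer" or whose "nbf" is not 1610078130.
export JWT_CLAIM_VALIDATORS="{'iss': 'Issuer', 'nbf': 1610078130}"
```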

> **Note**
> JWT expiration is checked automatically. `exp` and `role` (database role) keys are mandatory.

**Authorizing Client Connection**: You can pass in the JWT by following the instructions under the Realtime client lib. For example, refer to the **Usage** section in the [@tealbase/realtime-js](https://github.com/tealbase/realtime-js) client library.

## Error Operational Codes

This is the list of operational codes that can help you understand your deployment and your usage.

| Code | Description |
| ---- | ----------- |
| TopicNameRequired | You are trying to use Realtime without a topic name set |
| RealtimeDisabledForConfiguration | The configuration provided to Realtime on connect cannot deliver any Postgres Changes |
| TenantNotFound | The tenant you are trying to connect to does not exist |
| ErrorConnectingToWebsocket | Error when trying to connect to the WebSocket server |
| ErrorAuthorizingWebsocket | Error when trying to authorize the WebSocket connection |
| TableHasSpacesInName | The table you are trying to listen to has spaces in its name, which is not supported |
| UnableToDeleteTenant | Error when trying to delete a tenant |
| UnableToSetPolicies | Error when setting up Authorization Policies |
| UnableCheckoutConnection | Error when trying to check out a connection from the tenant pool |
| UnableToSubscribeToPostgres | Error when trying to subscribe to Postgres changes |
| ChannelRateLimitReached | The number of channels you can create has reached its limit |
| ConnectionRateLimitReached | The number of connected clients has reached its limit |
| ClientJoinRateLimitReached | The rate of joins per second from your clients has reached the channel limits |
| RealtimeDisabledForTenant | Realtime has been disabled for the tenant |
| UnableToConnectToTenantDatabase | Realtime was not able to connect to the tenant's database |
| DatabaseLackOfConnections | Realtime was not able to connect to the tenant's database due to a lack of available connections |
| TooManyConnectAttempts | Realtime restricted the number of attempts to connect to the tenant's database |
| RealtimeNodeDisconnected | Realtime is a distributed application; the system was unable to communicate with one of its distributed nodes |
| MigrationsFailedToRun | Error when running the migrations required by Realtime against the tenant database |
| StartListenAndReplicationFailed | Error when starting replication and error listening for database broadcasting |
| ReplicationMaxWalSendersReached | Maximum number of WAL senders reached in the tenant database; see this [link](https://tealbase.com/docs/guides/database/custom-postgres-config#cli-configurable-settings) for how to increase the value |
| MigrationCheckFailed | The check that determines whether migrations need to run has failed |
| PartitionCreationFailed | Error when creating partitions for `realtime.messages` |
| ErrorStartingPostgresCDCStream | Error when starting the Postgres CDC stream, which is used for Postgres Changes |
| UnknownDataProcessed | An unknown data type was processed by the Realtime system |
| ErrorStartingPostgresCDC | Error when starting the Postgres CDC extension, which is used for Postgres Changes |
| ReplicationSlotBeingUsed | The replication slot is being used by another transaction |
| PoolingReplicationPreparationError | Error when preparing the replication slot |
| PoolingReplicationError | Error when pooling the replication slot |
| SubscriptionDeletionFailed | Error when trying to delete a subscription for Postgres changes |
| UnableToDeletePhantomSubscriptions | Error when trying to delete subscriptions that are no longer being used |
| UnableToCheckProcessesOnRemoteNode | Error when trying to check the processes on a remote node |
| UnableToCreateCounter | Error when trying to create a counter to track rate limits for a tenant |
| UnableToIncrementCounter | Error when trying to increment a counter to track rate limits for a tenant |
| UnableToDecrementCounter | Error when trying to decrement a counter to track rate limits for a tenant |
| UnableToUpdateCounter | Error when trying to update a counter to track rate limits for a tenant |
| UnableToFindCounter | Error when trying to find a counter to track rate limits for a tenant |
| UnhandledProcessMessage | Unhandled message received by a Realtime process |
| UnableToTrackPresence | Error when handling track presence for this socket |
| UnknownPresenceEvent | Presence event type not recognized by the service |
| IncreaseConnectionPool | The number of connections configured for Realtime is not enough for your current use case |
| RlsPolicyError | Error on the RLS policy used for authorization |
| ConnectionInitializing | The database connection is still initializing |
| DatabaseConnectionIssue | The database had connection issues and a connection could not be established |
| UnableToConnectToProject | Unable to connect to the project database |
| InvalidJWTExpiration | The JWT `exp` claim value is incorrect |
| JwtSignatureError | The JWT signature could not be validated |
| Unauthorized | Unauthorized access to the Realtime channel |
| RealtimeRestarting | Realtime is currently restarting |
| UnableToProcessListenPayload | The payload sent in the NOTIFY operation was not JSON-parsable |
| UnableToListenToTenantDatabase | Unable to LISTEN for notifications against the tenant database |
| UnprocessableEntity | Received an HTTP request with a body that could not be processed by the endpoint |
| InitializingProjectConnection | The connection to the tenant database is still starting |
| TimeoutOnRpcCall | An RPC request within the Realtime server has timed out |
| ErrorOnRpcCall | Error when calling another Realtime node |
| ErrorExecutingTransaction | Error executing a database transaction in the tenant database |
| SynInitializationError | Our framework for synchronizing processes failed to properly start a connection to the database |
| JanitorFailedToDeleteOldMessages | The scheduled task for `realtime.messages` cleanup was unable to run |
| UnknownErrorOnController | An error we are not handling correctly was triggered in a controller |
| UnknownErrorOnChannel | An error we are not handling correctly was triggered in a channel |
## License
This repo is licensed under Apache 2.0.
## Credits
- [Phoenix](https://github.com/phoenixframework/phoenix) - `Realtime` server is built with the amazing Elixir framework.
diff --git a/TAGS.md b/TAGS.md
deleted file mode 100644
index e8ca742..0000000
--- a/TAGS.md
+++ /dev/null
@@ -1,35 +0,0 @@
-# New Tags to Merge
-
-Here are the tags that exist in my fork but not in the upstream repository:
-
-| Tag Name | Commit Hash | Message |
-|----------|-------------|---------|
-| v0.10.0 | d3b8e6a7032797c4b515fdeb65bcca7b7b871b5b | v0.10.0 |
-| v0.10.1 | 6a628bce32c0fdda2046063ddbdd1cb82e544776 | v0.10.1 |
-| v0.10.2 | c11160730f55cad8873c30da00ae97935097ae3e | v0.10.2 |
-| v0.10.3 | 98556c823758b29f63210fd9f04d40aa1cc44232 | v0.10.3 |
-| v0.10.4 | 09d113bdade4607f20e6e3aceaee33c733bbba4a | v0.10.4 |
-| v0.10.5 | 164cc96db6db8d275ac1f66d82dc6ae2bf8c6b7c | v0.10.5 |
-| v0.11.0 | ae41a557a2854590b6a9218e20ef29c6d9ace519 | v0.11.0 |
-| v0.11.1 | 8c4408c19b9680691a9391ce410456f5dc99b473 | v0.11.1 |
-| v0.12.0 | a8ff92bcceba2bf376b9157ebfa5f4f56234a7f6 | v0.12.0 |
-| v0.13.1 | f03ada20131e4f94bfb845ff7eaf8b7b185fa695 | v0.13.1 |
-| v0.13.2 | 46f03a7c9715a07dd0f2d1eb650977f1b1cdfeb1 | v0.13.2 |
-| v0.13.3 | 32612699ceb2ac445c444a8aca5e4a0aac83263f | v0.13.3 |
-| v0.7.10 | 3b4ba5facaa3316b6de3078201a83cc2be15b95d | v0.7.10 |
-| v0.7.11 | c507f039acb3b8144507f018b035150016464bcc | v0.7.11 |
-| v0.7.7 | 08fbab6655f817448e1a09adea7de75eb86feb97 | Batman -> v0.7.7 |
-| v0.7.8 | 9dd6702199c7329f6fd94860126740a2a4f36b45 | Merge remote-tracking branch 'origin/main' v0.7.8 |
-| v0.7.9 | be35ab88a28bb855586296f8f664a34abcc25b07 | v0.7.9 |
-| v0.8.0 | 52f0a0492c4d400ffd778bd9fb21d908388be110 | v0.8.0 |
-| v0.9.0 | 0872ea3edb34c36ac8fc47b2c99adccb5fe292b4 | v0.9.0 |
-| v0.9.1 | 0e644c0cf40f83a15ced495de45d38a7fd6f1dda | v0.9.1 |
-| v0.9.2 | f585643d8dd8c7e6d980f7e965587a4cf5b699c1 | v0.9.2 |
-| v0.9.3 | e3755f18fc56aa04719e730ca0c5c96bef567d4d | v0.9.3 |
-| v0.9.4 | 30d31c01378435af879cdacab153ac4144ca29e9 | v0.9.4 |
-| v0.9.5 | 2a4f235bbc45d33d99e9d59337288cad32d6cb67 | v0.9.5 |
-| v0.9.6 | 562a0058880a0752b492b0954aa35c8b47f5a43f | v0.9.6 |
-| v0.9.7 | bd42920771f715ce8541de2c7e67b7340b6c0d96 | v0.9.7 |
-| v1.0.0 | b6f9301cbf50faf55c929fe00733dcca74092664 | v1.0.0 |
-| v2.0.0 | b4d9b0dbb18f05ed9d665b17ad31675263bc5ea3 | Merge branch 'Tealbase:main' into main |
-| v2.1.0 | 1290bd40f7abdf89fb5146438c7c7e8f53733758 | v2.1.0 |
diff --git a/assets/js/app.js b/assets/js/app.js
index 221d75b..bad9ae6 100644
--- a/assets/js/app.js
+++ b/assets/js/app.js
@@ -1,228 +1,278 @@
-import "../css/app.css"
-import "phoenix_html"
-import {Socket} from "phoenix"
-import {LiveSocket} from "phoenix_live_view"
-import topbar from "../vendor/topbar"
-import { RealtimeClient } from '@tealbase/realtime-js';
+import "../css/app.css";
+import "phoenix_html";
+import { Socket } from "phoenix";
+import { LiveSocket } from "phoenix_live_view";
+import topbar from "../vendor/topbar";
+import { createClient } from "@tealbase/tealbase-js";
// LiveView is managing this page because we have Phoenix running
// We're using LiveView to handle the Realtime client via LiveView Hooks
-let Hooks = {}
+let Hooks = {};
Hooks.payload = {
- initRealtime(channelName, path, log_level, token, schema, table) {
- // Instantiate our client with the Realtime server and params to connect with
- this.realtimeSocket = new RealtimeClient(path, {
- params: { log_level: log_level, apikey: token }
- })
-
- // Join the Channel 'any'
- // Channels can be named anything
- // All clients on the same Channel will get messages sent to that Channel
- this.channel = this.realtimeSocket.channel(channelName, { config: { broadcast: { self: true } } })
-
- // Hack to confirm Postgres is subscribed
- // Need to add 'extension' key in the 'payload'
- this.channel.on("system", {}, payload => {
- if (payload.extension === 'postgres_changes' && payload.status === 'ok') {
- this.pushEventTo("#conn_info", "postgres_subscribed", {})
- }
- let ts = new Date();
- let line =
- `
+ initRealtime(
+ channelName,
+ host,
+ log_level,
+ token,
+ schema,
+ table,
+ filter,
+ bearer,
+ enable_presence,
+ enable_db_changes
+ ) {
+ // Instantiate our client with the Realtime server and params to connect with
+ {
+ }
+ const opts = {
+ realtime: {
+ params: {
+ log_level: log_level,
+ },
+ },
+ };
+
+ this.realtimeSocket = createClient(host, token, opts);
+
+ if (bearer != "") {
+ this.realtimeSocket.realtime.setAuth(bearer);
+ }
+
+ // Join the Channel 'any'
+ // Channels can be named anything
+ // All clients on the same Channel will get messages sent to that Channel
+ this.channel = this.realtimeSocket.channel(channelName, {
+ config: { broadcast: { self: true } },
+ });
+
+ // Hack to confirm Postgres is subscribed
+ // Need to add 'extension' key in the 'payload'
+ this.channel.on("system", {}, (payload) => {
+ if (payload.extension === "postgres_changes" && payload.status === "ok") {
+ this.pushEventTo("#conn_info", "postgres_subscribed", {});
+ }
+ let ts = new Date();
+ let line = `
SYSTEM
${ts.toISOString()}
${JSON.stringify(payload)}
-
`
- let list = document.querySelector("#plist")
- list.innerHTML = line + list.innerHTML;
- })
-
- // Listen for all (`*`) `broadcast` events
- // The event name can by anything
- // Match on specific event names to filter for only those types of events and do something with them
- this.channel.on("broadcast", { event: "*" }, payload => {
- let ts = new Date();
- let line =
- `
+
`;
+ let list = document.querySelector("#plist");
+ list.innerHTML = line + list.innerHTML;
+ });
+
+ // Listen for all (`*`) `broadcast` events
+ // The event name can by anything
+ // Match on specific event names to filter for only those types of events and do something with them
+ this.channel.on("broadcast", { event: "*" }, (payload) => {
+ let ts = new Date();
+ let line = `
BROADCAST
${ts.toISOString()}
${JSON.stringify(payload)}
-
`
- let list = document.querySelector("#plist")
- list.innerHTML = line + list.innerHTML;
- })
-
- // Listen for all (`*`) `presence` events
- this.channel.on("presence", { event: "*" }, payload => {
- this.pushEventTo("#conn_info", "presence_subscribed", {})
- let ts = new Date();
- let line =
- `
+
`;
+ let list = document.querySelector("#plist");
+ list.innerHTML = line + list.innerHTML;
+ });
+
+ // Listen for all (`*`) `presence` events
+ if (enable_presence === "true") {
+ console.log("enable_presence", enable_presence);
+
+ this.channel.on("presence", { event: "*" }, (payload) => {
+ this.pushEventTo("#conn_info", "presence_subscribed", {});
+ let ts = new Date();
+ let line = `
PRESENCE
${ts.toISOString()}
${JSON.stringify(payload)}
-
`
- let list = document.querySelector("#plist")
- list.innerHTML = line + list.innerHTML;
- })
-
- // Listen for all (`*`) `postgres_changes` events on tables in the `public` schema
- this.channel.on("postgres_changes", { event: "*", schema: schema, table: table }, payload => {
- let ts = performance.now() + performance.timeOrigin
- let iso_ts = new Date()
- let payload_ts = Date.parse(payload.commit_timestamp)
- let latency = ts - payload_ts
- let line =
- `
+
`;
+ let list = document.querySelector("#plist");
+ list.innerHTML = line + list.innerHTML;
+ });
+ }
+
+ // Listen for all (`*`) `postgres_changes` events on tables in the `public` schema
+ if (enable_db_changes === "true") {
+ let postgres_changes_opts = { event: "*", schema: schema, table: table };
+ if (filter !== "") {
+ postgres_changes_opts.filter = filter;
+ }
+ this.channel.on("postgres_changes", postgres_changes_opts, (payload) => {
+ let ts = performance.now() + performance.timeOrigin;
+ let iso_ts = new Date();
+ let payload_ts = Date.parse(payload.commit_timestamp);
+ let latency = ts - payload_ts;
+ let line = `
POSTGRES
${iso_ts.toISOString()}
${JSON.stringify(payload)}
-
Latency: ${latency.toFixed(1)} ms
+
Latency: ${latency.toFixed(
+ 1
+ )} ms
-
`
- let list = document.querySelector("#plist")
- list.innerHTML = line + list.innerHTML;
- })
-
- // Finally, subscribe to the Channel we just setup
- this.channel.subscribe(async (status) => {
- if (status === 'SUBSCRIBED') {
- console.log(`Realtime Channel status: ${status}`)
-
- // Let LiveView know we connected so we can update the button text
- this.pushEventTo("#conn_info", "broadcast_subscribed", { path: path})
-
- // Save params to local storage if `SUBSCRIBED`
- localStorage.setItem("path", path)
- localStorage.setItem("token", token)
- localStorage.setItem("log_level", log_level)
- localStorage.setItem("channel", channelName)
- localStorage.setItem("schema", schema)
- localStorage.setItem("table", table)
-
- // Initiate Presence for a connected user
- // Now when a new user connects and sends a `TRACK` message all clients will receive a message like:
- // {
- // "event":"join",
- // "key":"2b88be54-3b41-11ed-9887-1a9e1a785cf8",
- // "currentPresences":[
- //
- // ],
- // "newPresences":[
- // {
- // "name":"realtime_presence_55",
- // "t":1968.1000000238419,
- // "presence_ref":"Fxd_ZWlhIIfuIwlD"
- // }
- // ]
- // }
- //
- // And when `TRACK`ed users leave we'll receive an event like:
- //
- // {
- // "event":"leave",
- // "key":"2b88be54-3b41-11ed-9887-1a9e1a785cf8",
- // "currentPresences":[
- //
- // ],
- // "leftPresences":[
- // {
- // "name":"realtime_presence_55",
- // "t":1968.1000000238419,
- // "presence_ref":"Fxd_ZWlhIIfuIwlD"
- // }
- // ]
- // }
- const name = 'user_name_' + Math.floor(Math.random() * 100)
- this.channel.send(
- {
- type: 'presence',
- event: 'TRACK',
- payload: { name: name, t: performance.now() },
- })
+ `;
+ let list = document.querySelector("#plist");
+ list.innerHTML = line + list.innerHTML;
+ });
+ }
+
+ // Finally, subscribe to the Channel we just setup
+ this.channel.subscribe(async (status, error) => {
+ if (status === "SUBSCRIBED") {
+ console.log(`Realtime Channel status: ${status}`);
+
+ // Let LiveView know we connected so we can update the button text
+ this.pushEventTo("#conn_info", "broadcast_subscribed", { host: host });
+
+ // Save params to local storage if `SUBSCRIBED`
+ localStorage.setItem("host", host);
+ localStorage.setItem("token", token);
+ localStorage.setItem("log_level", log_level);
+ localStorage.setItem("channel", channelName);
+ localStorage.setItem("schema", schema);
+ localStorage.setItem("table", table);
+ localStorage.setItem("filter", filter);
+ localStorage.setItem("bearer", bearer);
+ localStorage.setItem("enable_presence", enable_presence);
+ localStorage.setItem("enable_db_changes", enable_db_changes);
+
+ // Initiate Presence for a connected user
+ // Now when a new user connects and sends a `TRACK` message all clients will receive a message like:
+ // {
+ // "event":"join",
+ // "key":"2b88be54-3b41-11ed-9887-1a9e1a785cf8",
+ // "currentPresences":[
+ //
+ // ],
+ // "newPresences":[
+ // {
+ // "name":"realtime_presence_55",
+ // "t":1968.1000000238419,
+ // "presence_ref":"Fxd_ZWlhIIfuIwlD"
+ // }
+ // ]
+ // }
+ //
+ // And when `TRACK`ed users leave we'll receive an event like:
+ //
+ // {
+ // "event":"leave",
+ // "key":"2b88be54-3b41-11ed-9887-1a9e1a785cf8",
+ // "currentPresences":[
+ //
+ // ],
+ // "leftPresences":[
+ // {
+ // "name":"realtime_presence_55",
+ // "t":1968.1000000238419,
+ // "presence_ref":"Fxd_ZWlhIIfuIwlD"
+ // }
+ // ]
+ // }
+ if (enable_presence === "true") {
+ const name = "user_name_" + Math.floor(Math.random() * 100);
+ this.channel.send({
+ type: "presence",
+ event: "TRACK",
+ payload: { name: name, t: performance.now() },
+ });
+ }
} else {
- console.log(`Realtime Channel status: ${status}`)
+ console.error(`Realtime Channel error status: ${status}`);
+ console.error(`Realtime Channel error: ${error}`);
}
- })
+ });
},
sendRealtime(event, payload) {
// Send a `broadcast` message over the Channel
- // All connected clients will receive this message if they're subscribed
+ // All connected clients will receive this message if they're subscribed
// to `broadcast` events and matching on the `event` name or using `*` to match all event names
this.channel.send({
type: "broadcast",
event: event,
- payload: payload
- })
+ payload: payload,
+ });
},
disconnectRealtime() {
// Send a `broadcast` message over the Channel
- // All connected clients will receive this message if they're subscribed
+ // All connected clients will receive this message if they're subscribed
// to `broadcast` events and matching on the `event` name or using `*` to match all event names
- this.channel.unsubscribe()
+ this.channel.unsubscribe();
},
clearLocalStorage() {
- localStorage.clear()
+ localStorage.clear();
},
mounted() {
- let params = {
- log_level: localStorage.getItem("log_level"),
- token: localStorage.getItem("token"),
- path: localStorage.getItem("path"),
+ let params = {
+ log_level: localStorage.getItem("log_level"),
+ token: localStorage.getItem("token"),
+ host: localStorage.getItem("host"),
channel: localStorage.getItem("channel"),
schema: localStorage.getItem("schema"),
- table: localStorage.getItem("table")
- }
+ table: localStorage.getItem("table"),
+ filter: localStorage.getItem("filter"),
+ bearer: localStorage.getItem("bearer"),
+ enable_presence: localStorage.getItem("enable_presence"),
+ enable_db_changes: localStorage.getItem("enable_db_changes"),
+ };
- this.pushEventTo("#conn_form", "local_storage", params)
+ this.pushEventTo("#conn_form", "local_storage", params);
- this.handleEvent("connect", ({connection}) =>
- this.initRealtime(connection.channel, connection.path, connection.log_level, connection.token, connection.schema, connection.table)
- )
+ this.handleEvent("connect", ({ connection }) =>
+ this.initRealtime(
+ connection.channel,
+ connection.host,
+ connection.log_level,
+ connection.token,
+ connection.schema,
+ connection.table,
+ connection.filter,
+ connection.bearer,
+ connection.enable_presence,
+ connection.enable_db_changes
+ )
+ );
- this.handleEvent("send_message", ({message}) =>
+ this.handleEvent("send_message", ({ message }) =>
this.sendRealtime(message.event, message.payload)
- )
+ );
- this.handleEvent("disconnect", ({}) =>
- this.disconnectRealtime()
- )
+ this.handleEvent("disconnect", ({}) => this.disconnectRealtime());
- this.handleEvent("clear_local_storage", ({}) =>
- this.clearLocalStorage()
- )
-
-
- }
-}
+ this.handleEvent("clear_local_storage", ({}) => this.clearLocalStorage());
+ },
+};
Hooks.latency = {
mounted() {
- this.handleEvent("ping", (params) =>
- this.pong(params)
- )
+ this.handleEvent("ping", (params) => this.pong(params));
},
pong(params) {
- this.pushEventTo("#ping", "pong", params)
+ this.pushEventTo("#ping", "pong", params);
},
-}
-
-let csrfToken = document.querySelector("meta[name='csrf-token']").getAttribute("content")
-let liveSocket = new LiveSocket("/live", Socket, {hooks: Hooks, params: {_csrf_token: csrfToken}})
+};
-topbar.config({barColors: {0: "#29d"}, shadowColor: "rgba(0, 0, 0, .3)"})
-window.addEventListener("phx:page-loading-start", info => topbar.show())
-window.addEventListener("phx:page-loading-stop", info => topbar.hide())
+let csrfToken = document
+ .querySelector("meta[name='csrf-token']")
+ .getAttribute("content");
-liveSocket.connect()
+let liveSocket = new LiveSocket("/live", Socket, {
+ hooks: Hooks,
+ params: { _csrf_token: csrfToken },
+});
-window.liveSocket = liveSocket
+topbar.config({ barColors: { 0: "#29d" }, shadowColor: "rgba(0, 0, 0, .3)" });
+window.addEventListener("phx:page-loading-start", (info) => topbar.show());
+window.addEventListener("phx:page-loading-stop", (info) => topbar.hide());
+liveSocket.connect();
+window.liveSocket = liveSocket;
diff --git a/assets/package-lock.json b/assets/package-lock.json
index b804c53..75c9fe9 100644
--- a/assets/package-lock.json
+++ b/assets/package-lock.json
@@ -1,31 +1,90 @@
{
"name": "assets",
- "lockfileVersion": 2,
+ "lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"dependencies": {
- "@tealbase/realtime-js": "^2.1.0"
+ "@tealbase/tealbase-js": "^2.26.0"
+ }
+ },
+ "node_modules/@tealbase/functions-js": {
+ "version": "2.1.2",
+ "resolved": "https://registry.npmjs.org/@tealbase/functions-js/-/functions-js-2.1.2.tgz",
+ "integrity": "sha512-QCR6pwJs9exCl37bmpMisUd6mf+0SUBJ6mUpiAjEkSJ/+xW8TCuO14bvkWHADd5hElJK9MxNlMQXxSA4DRz9nQ==",
+ "dependencies": {
+ "cross-fetch": "^3.1.5"
+ }
+ },
+ "node_modules/@tealbase/gotrue-js": {
+ "version": "2.34.0",
+ "resolved": "https://registry.npmjs.org/@tealbase/gotrue-js/-/gotrue-js-2.34.0.tgz",
+ "integrity": "sha512-j4up+jZDyutUwKcrwDXhVbeHFydUu9wLvotr4qREenz+ec0d3L7Zs0Nb1hP8B64HbJ4tmFXhpOG23IsvQtC58w==",
+ "dependencies": {
+ "cross-fetch": "^3.1.5"
+ }
+ },
+ "node_modules/@tealbase/postgrest-js": {
+ "version": "1.7.1",
+ "resolved": "https://registry.npmjs.org/@tealbase/postgrest-js/-/postgrest-js-1.7.1.tgz",
+ "integrity": "sha512-xPRYLaZrkLbXNlzmHW6Wtf9hmcBLjjI5xUz2zj8oE2hgXGaYoZBBkpN9bmW9i17Z1f6Ujxa942AqK439XOA36A==",
+ "dependencies": {
+ "cross-fetch": "^3.1.5"
}
},
"node_modules/@tealbase/realtime-js": {
- "version": "2.1.0",
- "resolved": "https://registry.npmjs.org/@tealbase/realtime-js/-/realtime-js-2.1.0.tgz",
- "integrity": "sha512-iplLCofTeYjnx9FIOsIwHLhMp0+7UVyiA4/sCeq40VdOgN9eTIhjEno9Tgh4dJARi4aaXoKfRX1DTxgZaOpPAw==",
+ "version": "2.7.3",
+ "resolved": "https://registry.npmjs.org/@tealbase/realtime-js/-/realtime-js-2.7.3.tgz",
+ "integrity": "sha512-c7TzL81sx2kqyxsxcDduJcHL9KJdCOoKimGP6lQSqiZKX42ATlBZpWbyy9KFGFBjAP4nyopMf5JhPi2ZH9jyNw==",
"dependencies": {
"@types/phoenix": "^1.5.4",
+ "@types/websocket": "^1.0.3",
"websocket": "^1.0.34"
}
},
+ "node_modules/@tealbase/storage-js": {
+ "version": "2.5.1",
+ "resolved": "https://registry.npmjs.org/@tealbase/storage-js/-/storage-js-2.5.1.tgz",
+ "integrity": "sha512-nkR0fQA9ScAtIKA3vNoPEqbZv1k5B5HVRYEvRWdlP6mUpFphM9TwPL2jZ/ztNGMTG5xT6SrHr+H7Ykz8qzbhjw==",
+ "dependencies": {
+ "cross-fetch": "^3.1.5"
+ }
+ },
+ "node_modules/@tealbase/tealbase-js": {
+ "version": "2.26.0",
+ "resolved": "https://registry.npmjs.org/@tealbase/tealbase-js/-/tealbase-js-2.26.0.tgz",
+ "integrity": "sha512-RXmTPTobaYAwkSobadHZmEVLmzX3SGrtRZIGfLWnLv92VzBRrjuXn0a+bJqKl50GUzsyqPA+j5pod7EwMkcH5A==",
+ "dependencies": {
+ "@tealbase/functions-js": "^2.1.0",
+ "@tealbase/gotrue-js": "^2.31.0",
+ "@tealbase/postgrest-js": "^1.7.0",
+ "@tealbase/realtime-js": "^2.7.3",
+ "@tealbase/storage-js": "^2.5.1",
+ "cross-fetch": "^3.1.5"
+ }
+ },
+ "node_modules/@types/node": {
+ "version": "20.3.2",
+ "resolved": "https://registry.npmjs.org/@types/node/-/node-20.3.2.tgz",
+ "integrity": "sha512-vOBLVQeCQfIcF/2Y7eKFTqrMnizK5lRNQ7ykML/5RuwVXVWxYkgwS7xbt4B6fKCUPgbSL5FSsjHQpaGQP/dQmw=="
+ },
"node_modules/@types/phoenix": {
- "version": "1.5.4",
- "resolved": "https://registry.npmjs.org/@types/phoenix/-/phoenix-1.5.4.tgz",
- "integrity": "sha512-L5eZmzw89eXBKkiqVBcJfU1QGx9y+wurRIEgt0cuLH0hwNtVUxtx+6cu0R2STwWj468sjXyBYPYDtGclUd1kjQ=="
+ "version": "1.6.0",
+ "resolved": "https://registry.npmjs.org/@types/phoenix/-/phoenix-1.6.0.tgz",
+ "integrity": "sha512-qwfpsHmFuhAS/dVd4uBIraMxRd56vwBUYQGZ6GpXnFuM2XMRFJbIyruFKKlW2daQliuYZwe0qfn/UjFCDKic5g=="
+ },
+ "node_modules/@types/websocket": {
+ "version": "1.0.5",
+ "resolved": "https://registry.npmjs.org/@types/websocket/-/websocket-1.0.5.tgz",
+ "integrity": "sha512-NbsqiNX9CnEfC1Z0Vf4mE1SgAJ07JnRYcNex7AJ9zAVzmiGHmjKFEk7O4TJIsgv2B1sLEb6owKFZrACwdYngsQ==",
+ "dependencies": {
+ "@types/node": "*"
+ }
},
"node_modules/bufferutil": {
- "version": "4.0.6",
- "resolved": "https://registry.npmjs.org/bufferutil/-/bufferutil-4.0.6.tgz",
- "integrity": "sha512-jduaYOYtnio4aIAyc6UbvPCVcgq7nYpVnucyxr6eCYg/Woad9Hf/oxxBRDnGGjPfjUm6j5O/uBWhIu4iLebFaw==",
+ "version": "4.0.7",
+ "resolved": "https://registry.npmjs.org/bufferutil/-/bufferutil-4.0.7.tgz",
+ "integrity": "sha512-kukuqc39WOHtdxtw4UScxF/WVnMFVSQVKhtx3AjZJzhd0RGZZldcrfSEbVsWWe6KNH253574cq5F+wpv0G9pJw==",
"hasInstallScript": true,
"dependencies": {
"node-gyp-build": "^4.3.0"
@@ -34,6 +93,14 @@
"node": ">=6.14.2"
}
},
+ "node_modules/cross-fetch": {
+ "version": "3.1.6",
+ "resolved": "https://registry.npmjs.org/cross-fetch/-/cross-fetch-3.1.6.tgz",
+ "integrity": "sha512-riRvo06crlE8HiqOwIpQhxwdOk4fOeR7FVM/wXoxchFEqMNUjvbs3bfo4OTgMEMHzppd4DxFBDbyySj8Cv781g==",
+ "dependencies": {
+ "node-fetch": "^2.6.11"
+ }
+ },
"node_modules/d": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/d/-/d-1.0.1.tgz",
@@ -52,13 +119,14 @@
}
},
"node_modules/es5-ext": {
- "version": "0.10.62",
- "resolved": "https://registry.npmjs.org/es5-ext/-/es5-ext-0.10.62.tgz",
- "integrity": "sha512-BHLqn0klhEpnOKSrzn/Xsz2UIW8j+cGmo9JLzr8BiUapV8hPL9+FliFqjwr9ngW7jWdnxv6eO+/LqyhJVqgrjA==",
+ "version": "0.10.64",
+ "resolved": "https://registry.npmjs.org/es5-ext/-/es5-ext-0.10.64.tgz",
+ "integrity": "sha512-p2snDhiLaXe6dahss1LddxqEm+SkuDvV8dnIQG0MWjyHpcMNfXKPE+/Cc0y+PhxJX3A4xGNeFCj5oc0BUh6deg==",
"hasInstallScript": true,
"dependencies": {
"es6-iterator": "^2.0.3",
"es6-symbol": "^3.1.3",
+ "esniff": "^2.0.1",
"next-tick": "^1.1.0"
},
"engines": {
@@ -84,6 +152,34 @@
"ext": "^1.1.2"
}
},
+ "node_modules/esniff": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/esniff/-/esniff-2.0.1.tgz",
+ "integrity": "sha512-kTUIGKQ/mDPFoJ0oVfcmyJn4iBDRptjNVIzwIFR7tqWXdVI9xfA2RMwY/gbSpJG3lkdWNEjLap/NqVHZiJsdfg==",
+ "dependencies": {
+ "d": "^1.0.1",
+ "es5-ext": "^0.10.62",
+ "event-emitter": "^0.3.5",
+ "type": "^2.7.2"
+ },
+ "engines": {
+ "node": ">=0.10"
+ }
+ },
+ "node_modules/esniff/node_modules/type": {
+ "version": "2.7.2",
+ "resolved": "https://registry.npmjs.org/type/-/type-2.7.2.tgz",
+ "integrity": "sha512-dzlvlNlt6AXU7EBSfpAscydQ7gXB+pPGsPnfJnZpiNJBDj7IaJzQlBZYGdEi4R9HmPdBv2XmWJ6YUtoTa7lmCw=="
+ },
+ "node_modules/event-emitter": {
+ "version": "0.3.5",
+ "resolved": "https://registry.npmjs.org/event-emitter/-/event-emitter-0.3.5.tgz",
+ "integrity": "sha512-D9rRn9y7kLPnJ+hMq7S/nhvoKwwvVJahBi2BPmx3bvbsEdK3W9ii8cBSGjP+72/LnM4n6fo3+dkCX5FeTQruXA==",
+ "dependencies": {
+ "d": "1",
+ "es5-ext": "~0.10.14"
+ }
+ },
"node_modules/ext": {
"version": "1.7.0",
"resolved": "https://registry.npmjs.org/ext/-/ext-1.7.0.tgz",
@@ -112,16 +208,40 @@
"resolved": "https://registry.npmjs.org/next-tick/-/next-tick-1.1.0.tgz",
"integrity": "sha512-CXdUiJembsNjuToQvxayPZF9Vqht7hewsvy2sOWafLvi2awflj9mOC6bHIg50orX8IJvWKY9wYQ/zB2kogPslQ=="
},
+ "node_modules/node-fetch": {
+ "version": "2.6.11",
+ "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.6.11.tgz",
+ "integrity": "sha512-4I6pdBY1EthSqDmJkiNk3JIT8cswwR9nfeW/cPdUagJYEQG7R95WRH74wpz7ma8Gh/9dI9FP+OU+0E4FvtA55w==",
+ "dependencies": {
+ "whatwg-url": "^5.0.0"
+ },
+ "engines": {
+ "node": "4.x || >=6.0.0"
+ },
+ "peerDependencies": {
+ "encoding": "^0.1.0"
+ },
+ "peerDependenciesMeta": {
+ "encoding": {
+ "optional": true
+ }
+ }
+ },
"node_modules/node-gyp-build": {
- "version": "4.5.0",
- "resolved": "https://registry.npmjs.org/node-gyp-build/-/node-gyp-build-4.5.0.tgz",
- "integrity": "sha512-2iGbaQBV+ITgCz76ZEjmhUKAKVf7xfY1sRl4UiKQspfZMH2h06SyhNsnSVy50cwkFQDGLyif6m/6uFXHkOZ6rg==",
+ "version": "4.6.0",
+ "resolved": "https://registry.npmjs.org/node-gyp-build/-/node-gyp-build-4.6.0.tgz",
+ "integrity": "sha512-NTZVKn9IylLwUzaKjkas1e4u2DLNcV4rdYagA4PWdPwW87Bi7z+BznyKSRwS/761tV/lzCGXplWsiaMjLqP2zQ==",
"bin": {
"node-gyp-build": "bin.js",
"node-gyp-build-optional": "optional.js",
"node-gyp-build-test": "build-test.js"
}
},
+ "node_modules/tr46": {
+ "version": "0.0.3",
+ "resolved": "https://registry.npmjs.org/tr46/-/tr46-0.0.3.tgz",
+ "integrity": "sha512-N3WMsuqV66lT30CrXNbEjx4GEwlow3v6rr4mCcv6prnfwhS01rkgyFdjPNBYd9br7LpXV1+Emh01fHnq2Gdgrw=="
+ },
"node_modules/type": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/type/-/type-1.2.0.tgz",
@@ -136,9 +256,9 @@
}
},
"node_modules/utf-8-validate": {
- "version": "5.0.9",
- "resolved": "https://registry.npmjs.org/utf-8-validate/-/utf-8-validate-5.0.9.tgz",
- "integrity": "sha512-Yek7dAy0v3Kl0orwMlvi7TPtiCNrdfHNd7Gcc/pLq4BLXqfAmd0J7OWMizUQnTTJsyjKn02mU7anqwfmUP4J8Q==",
+ "version": "5.0.10",
+ "resolved": "https://registry.npmjs.org/utf-8-validate/-/utf-8-validate-5.0.10.tgz",
+ "integrity": "sha512-Z6czzLq4u8fPOyx7TU6X3dvUZVvoJmxSQ+IcrlmagKhilxlhZgxPK6C5Jqbkw1IDUmFTM+cz9QDnnLTwDz/2gQ==",
"hasInstallScript": true,
"dependencies": {
"node-gyp-build": "^4.3.0"
@@ -147,6 +267,11 @@
"node": ">=6.14.2"
}
},
+ "node_modules/webidl-conversions": {
+ "version": "3.0.1",
+ "resolved": "https://registry.npmjs.org/webidl-conversions/-/webidl-conversions-3.0.1.tgz",
+ "integrity": "sha512-2JAn3z8AR6rjK8Sm8orRC0h/bcl/DqL7tRPdGZ4I1CjdF+EaMLmYxBHyXuKL849eucPFhvBoxMsflfOb8kxaeQ=="
+ },
"node_modules/websocket": {
"version": "1.0.34",
"resolved": "https://registry.npmjs.org/websocket/-/websocket-1.0.34.tgz",
@@ -163,6 +288,15 @@
"node": ">=4.0.0"
}
},
+ "node_modules/whatwg-url": {
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/whatwg-url/-/whatwg-url-5.0.0.tgz",
+ "integrity": "sha512-saE57nupxk6v3HY35+jzBwYa0rKSy0XR8JSxZPwgLr7ys0IBzhGviA1/TUGJLmSVqs8pb9AnvICXEuOHLprYTw==",
+ "dependencies": {
+ "tr46": "~0.0.3",
+ "webidl-conversions": "^3.0.0"
+ }
+ },
"node_modules/yaeti": {
"version": "0.0.6",
"resolved": "https://registry.npmjs.org/yaeti/-/yaeti-0.0.6.tgz",
@@ -171,149 +305,5 @@
"node": ">=0.10.32"
}
}
- },
- "dependencies": {
- "@tealbase/realtime-js": {
- "version": "2.1.0",
- "resolved": "https://registry.npmjs.org/@tealbase/realtime-js/-/realtime-js-2.1.0.tgz",
- "integrity": "sha512-iplLCofTeYjnx9FIOsIwHLhMp0+7UVyiA4/sCeq40VdOgN9eTIhjEno9Tgh4dJARi4aaXoKfRX1DTxgZaOpPAw==",
- "requires": {
- "@types/phoenix": "^1.5.4",
- "websocket": "^1.0.34"
- }
- },
- "@types/phoenix": {
- "version": "1.5.4",
- "resolved": "https://registry.npmjs.org/@types/phoenix/-/phoenix-1.5.4.tgz",
- "integrity": "sha512-L5eZmzw89eXBKkiqVBcJfU1QGx9y+wurRIEgt0cuLH0hwNtVUxtx+6cu0R2STwWj468sjXyBYPYDtGclUd1kjQ=="
- },
- "bufferutil": {
- "version": "4.0.6",
- "resolved": "https://registry.npmjs.org/bufferutil/-/bufferutil-4.0.6.tgz",
- "integrity": "sha512-jduaYOYtnio4aIAyc6UbvPCVcgq7nYpVnucyxr6eCYg/Woad9Hf/oxxBRDnGGjPfjUm6j5O/uBWhIu4iLebFaw==",
- "requires": {
- "node-gyp-build": "^4.3.0"
- }
- },
- "d": {
- "version": "1.0.1",
- "resolved": "https://registry.npmjs.org/d/-/d-1.0.1.tgz",
- "integrity": "sha512-m62ShEObQ39CfralilEQRjH6oAMtNCV1xJyEx5LpRYUVN+EviphDgUc/F3hnYbADmkiNs67Y+3ylmlG7Lnu+FA==",
- "requires": {
- "es5-ext": "^0.10.50",
- "type": "^1.0.1"
- }
- },
- "debug": {
- "version": "2.6.9",
- "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz",
- "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==",
- "requires": {
- "ms": "2.0.0"
- }
- },
- "es5-ext": {
- "version": "0.10.62",
- "resolved": "https://registry.npmjs.org/es5-ext/-/es5-ext-0.10.62.tgz",
- "integrity": "sha512-BHLqn0klhEpnOKSrzn/Xsz2UIW8j+cGmo9JLzr8BiUapV8hPL9+FliFqjwr9ngW7jWdnxv6eO+/LqyhJVqgrjA==",
- "requires": {
- "es6-iterator": "^2.0.3",
- "es6-symbol": "^3.1.3",
- "next-tick": "^1.1.0"
- }
- },
- "es6-iterator": {
- "version": "2.0.3",
- "resolved": "https://registry.npmjs.org/es6-iterator/-/es6-iterator-2.0.3.tgz",
- "integrity": "sha512-zw4SRzoUkd+cl+ZoE15A9o1oQd920Bb0iOJMQkQhl3jNc03YqVjAhG7scf9C5KWRU/R13Orf588uCC6525o02g==",
- "requires": {
- "d": "1",
- "es5-ext": "^0.10.35",
- "es6-symbol": "^3.1.1"
- }
- },
- "es6-symbol": {
- "version": "3.1.3",
- "resolved": "https://registry.npmjs.org/es6-symbol/-/es6-symbol-3.1.3.tgz",
- "integrity": "sha512-NJ6Yn3FuDinBaBRWl/q5X/s4koRHBrgKAu+yGI6JCBeiu3qrcbJhwT2GeR/EXVfylRk8dpQVJoLEFhK+Mu31NA==",
- "requires": {
- "d": "^1.0.1",
- "ext": "^1.1.2"
- }
- },
- "ext": {
- "version": "1.7.0",
- "resolved": "https://registry.npmjs.org/ext/-/ext-1.7.0.tgz",
- "integrity": "sha512-6hxeJYaL110a9b5TEJSj0gojyHQAmA2ch5Os+ySCiA1QGdS697XWY1pzsrSjqA9LDEEgdB/KypIlR59RcLuHYw==",
- "requires": {
- "type": "^2.7.2"
- },
- "dependencies": {
- "type": {
- "version": "2.7.2",
- "resolved": "https://registry.npmjs.org/type/-/type-2.7.2.tgz",
- "integrity": "sha512-dzlvlNlt6AXU7EBSfpAscydQ7gXB+pPGsPnfJnZpiNJBDj7IaJzQlBZYGdEi4R9HmPdBv2XmWJ6YUtoTa7lmCw=="
- }
- }
- },
- "is-typedarray": {
- "version": "1.0.0",
- "resolved": "https://registry.npmjs.org/is-typedarray/-/is-typedarray-1.0.0.tgz",
- "integrity": "sha512-cyA56iCMHAh5CdzjJIa4aohJyeO1YbwLi3Jc35MmRU6poroFjIGZzUzupGiRPOjgHg9TLu43xbpwXk523fMxKA=="
- },
- "ms": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
- "integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A=="
- },
- "next-tick": {
- "version": "1.1.0",
- "resolved": "https://registry.npmjs.org/next-tick/-/next-tick-1.1.0.tgz",
- "integrity": "sha512-CXdUiJembsNjuToQvxayPZF9Vqht7hewsvy2sOWafLvi2awflj9mOC6bHIg50orX8IJvWKY9wYQ/zB2kogPslQ=="
- },
- "node-gyp-build": {
- "version": "4.5.0",
- "resolved": "https://registry.npmjs.org/node-gyp-build/-/node-gyp-build-4.5.0.tgz",
- "integrity": "sha512-2iGbaQBV+ITgCz76ZEjmhUKAKVf7xfY1sRl4UiKQspfZMH2h06SyhNsnSVy50cwkFQDGLyif6m/6uFXHkOZ6rg=="
- },
- "type": {
- "version": "1.2.0",
- "resolved": "https://registry.npmjs.org/type/-/type-1.2.0.tgz",
- "integrity": "sha512-+5nt5AAniqsCnu2cEQQdpzCAh33kVx8n0VoFidKpB1dVVLAN/F+bgVOqOJqOnEnrhp222clB5p3vUlD+1QAnfg=="
- },
- "typedarray-to-buffer": {
- "version": "3.1.5",
- "resolved": "https://registry.npmjs.org/typedarray-to-buffer/-/typedarray-to-buffer-3.1.5.tgz",
- "integrity": "sha512-zdu8XMNEDepKKR+XYOXAVPtWui0ly0NtohUscw+UmaHiAWT8hrV1rr//H6V+0DvJ3OQ19S979M0laLfX8rm82Q==",
- "requires": {
- "is-typedarray": "^1.0.0"
- }
- },
- "utf-8-validate": {
- "version": "5.0.9",
- "resolved": "https://registry.npmjs.org/utf-8-validate/-/utf-8-validate-5.0.9.tgz",
- "integrity": "sha512-Yek7dAy0v3Kl0orwMlvi7TPtiCNrdfHNd7Gcc/pLq4BLXqfAmd0J7OWMizUQnTTJsyjKn02mU7anqwfmUP4J8Q==",
- "requires": {
- "node-gyp-build": "^4.3.0"
- }
- },
- "websocket": {
- "version": "1.0.34",
- "resolved": "https://registry.npmjs.org/websocket/-/websocket-1.0.34.tgz",
- "integrity": "sha512-PRDso2sGwF6kM75QykIesBijKSVceR6jL2G8NGYyq2XrItNC2P5/qL5XeR056GhA+Ly7JMFvJb9I312mJfmqnQ==",
- "requires": {
- "bufferutil": "^4.0.1",
- "debug": "^2.2.0",
- "es5-ext": "^0.10.50",
- "typedarray-to-buffer": "^3.1.5",
- "utf-8-validate": "^5.0.2",
- "yaeti": "^0.0.6"
- }
- },
- "yaeti": {
- "version": "0.0.6",
- "resolved": "https://registry.npmjs.org/yaeti/-/yaeti-0.0.6.tgz",
- "integrity": "sha512-MvQa//+KcZCUkBTIC9blM+CU9J2GzuTytsOUwf2lidtvkx/6gnEp1QvJv34t9vdjhFmha/mUiNDbN0D0mJWdug=="
- }
}
}
diff --git a/assets/package.json b/assets/package.json
index 2a7ce9b..7483dae 100644
--- a/assets/package.json
+++ b/assets/package.json
@@ -1,5 +1,5 @@
{
"dependencies": {
- "@tealbase/realtime-js": "^2.1.0"
+ "@tealbase/tealbase-js": "^2.26.0"
}
-}
+}
\ No newline at end of file
diff --git a/bench/secrets.exs b/bench/secrets.exs
index e139f32..2e44ddd 100644
--- a/bench/secrets.exs
+++ b/bench/secrets.exs
@@ -12,7 +12,7 @@ string_to_decrypt = "A5mS7ggkPXm0FaKKoZtrsYNlZA3qZxFe9XA9w2YYqgU="
Benchee.run(%{
"authorize_jwt" => fn ->
- {:ok, _} = ChannelsAuthorization.authorize_conn(jwt, jwt_secret)
+ {:ok, _} = ChannelsAuthorization.authorize_conn(jwt, jwt_secret, nil)
end,
"encrypt_string" => fn ->
H.encrypt!(string_to_encrypt, secret_key)
diff --git a/config/config.exs b/config/config.exs
index 595f934..ef2f1a4 100644
--- a/config/config.exs
+++ b/config/config.exs
@@ -13,20 +13,12 @@ config :realtime,
# Configures the endpoint
config :realtime, RealtimeWeb.Endpoint,
- url: [host: "localhost"],
+ url: [host: "127.0.0.1"],
secret_key_base: "ktyW57usZxrivYdvLo9os7UGcUUZYKchOMHT3tzndmnHuxD09k+fQnPUmxlPMUI3",
render_errors: [view: RealtimeWeb.ErrorView, accepts: ~w(html json), layout: false],
pubsub_server: Realtime.PubSub,
live_view: [signing_salt: "wUMBeR8j"]
-config :realtime, :phoenix_swagger,
- swagger_files: %{
- "priv/static/swagger.json" => [
- router: RealtimeWeb.Router,
- endpoint: RealtimeWeb.Endpoint
- ]
- }
-
config :realtime, :extensions,
postgres_cdc_rls: %{
type: :postgres_cdc,
@@ -34,17 +26,8 @@ config :realtime, :extensions,
driver: Extensions.PostgresCdcRls,
supervisor: Extensions.PostgresCdcRls.Supervisor,
db_settings: Extensions.PostgresCdcRls.DbSettings
- },
- postgres_cdc_stream: %{
- type: :postgres_cdc,
- key: "postgres_cdc_stream",
- driver: Extensions.PostgresCdcStream,
- supervisor: Extensions.PostgresCdcStream.Supervisor,
- db_settings: Extensions.PostgresCdcStream.DbSettings
}
-config :phoenix_swagger, json_library: Jason
-
config :esbuild,
version: "0.14.29",
default: [
@@ -55,7 +38,7 @@ config :esbuild,
]
config :tailwind,
- version: "3.1.8",
+ version: "3.3.2",
default: [
args: ~w(
--config=tailwind.config.js
@@ -68,37 +51,29 @@ config :tailwind,
# Configures Elixir's Logger
config :logger, :console,
format: "$time $metadata[$level] $message\n",
- metadata: [:request_id, :project, :external_id]
+ metadata: [:request_id, :project, :external_id, :application_name, :sub, :error_code]
# Use Jason for JSON parsing in Phoenix
config :phoenix, :json_library, Jason
+config :open_api_spex, :cache_adapter, OpenApiSpex.Plug.PersistentTermCache
+
config :logflare_logger_backend,
flush_interval: 1_000,
max_batch_size: 50,
metadata: :all
-config :libcluster,
- debug: false,
- topologies: [
- default: [
- # The selected clustering strategy. Required.
- strategy: Cluster.Strategy.Epmd,
- # Configuration for the provided strategy. Optional.
- # config: [hosts: [:"a@127.0.0.1", :"b@127.0.0.1"]],
- # The function to use for connecting nodes. The node
- # name will be appended to the argument list. Optional
- connect: {:net_kernel, :connect_node, []},
- # The function to use for disconnecting nodes. The node
- # name will be appended to the argument list. Optional
- disconnect: {:erlang, :disconnect_node, []},
- # The function to use for listing nodes.
- # This function must return a list of node names. Optional
- list_nodes: {:erlang, :nodes, [:connected]}
- ]
- ]
+config :phoenix, :filter_parameters, {:keep, []}
-config :phoenix, :filter_parameters, ["apikey"]
+config :opentelemetry,
+ resource_detectors: [:otel_resource_app_env, :otel_resource_env_var],
+ resource: %{
+ :"service.name" => "realtime"
+ },
+ text_map_propagators: [:baggage, :trace_context],
+ # Exporter must be configured through environment variables
+ traces_exporter: :none,
+ span_processor: :batch
# Import environment specific config. This must remain at the bottom
# of this file so it overrides the configuration defined above.
diff --git a/config/dev.exs b/config/dev.exs
index 49d0e61..41e18bf 100644
--- a/config/dev.exs
+++ b/config/dev.exs
@@ -73,7 +73,7 @@ config :realtime, RealtimeWeb.Endpoint,
# Do not include metadata nor timestamps in development logs
config :logger, :console,
format: "$time [$level] $message $metadata\n",
- metadata: [:error_code, :file, :pid, :project, :external_id]
+ metadata: [:error_code, :file, :pid, :project, :external_id, :application_name, :region, :request_id]
# Set a higher stacktrace during development. Avoid configuring such
# in production as building large stacktraces may be expensive.
@@ -82,14 +82,7 @@ config :phoenix, :stacktrace_depth, 20
# Initialize plugs at runtime for faster development compilation
config :phoenix, :plug_init_mode, :runtime
-config :libcluster,
- topologies: [
- dev: [
- strategy: Cluster.Strategy.Epmd,
- config: [
- hosts: [:"orange@127.0.0.1", :"pink@127.0.0.1"]
- ],
- connect: {:net_kernel, :connect_node, []},
- disconnect: {:net_kernel, :disconnect_node, []}
- ]
- ]
+# Disable caching to ensure the rendered spec is refreshed
+config :open_api_spex, :cache_adapter, OpenApiSpex.Plug.NoneCache
+
+config :opentelemetry, traces_exporter: {:otel_exporter_stdout, []}
diff --git a/config/prod.exs b/config/prod.exs
index 4062648..00a098f 100644
--- a/config/prod.exs
+++ b/config/prod.exs
@@ -15,7 +15,7 @@ import Config
# Do not print debug messages in production
config :logger, :warning,
format: "$time [$level] $message $metadata\n",
- metadata: [:error_code, :file, :pid, :project, :external_id]
+ metadata: [:error_code, :file, :pid, :project, :external_id, :application_name, :region, :request_id]
# ## SSL Support
#
diff --git a/config/runtime.exs b/config/runtime.exs
index bbb1bea..c1e103a 100644
--- a/config/runtime.exs
+++ b/config/runtime.exs
@@ -3,6 +3,76 @@ import Config
config :logflare_logger_backend,
url: System.get_env("LOGFLARE_LOGGER_BACKEND_URL", "https://api.logflare.app")
+app_name = System.get_env("APP_NAME", "")
+default_db_host = System.get_env("DB_HOST", "127.0.0.1")
+username = System.get_env("DB_USER", "postgres")
+password = System.get_env("DB_PASSWORD", "postgres")
+database = System.get_env("DB_NAME", "postgres")
+port = System.get_env("DB_PORT", "5432")
+db_version = System.get_env("DB_IP_VERSION")
+slot_name_suffix = System.get_env("SLOT_NAME_SUFFIX")
+
+migration_partition_slots =
+ System.get_env("MIGRATION_PARTITION_SLOTS", "#{System.schedulers_online() * 2}") |> String.to_integer()
+
+connect_partition_slots =
+ System.get_env("CONNECT_PARTITION_SLOTS", "#{System.schedulers_online() * 2}") |> String.to_integer()
+
+connect_throttle_limit_per_second = System.get_env("CONNECT_THROTTLE_LIMIT_PER_SECOND", "1") |> String.to_integer()
+
+if !(db_version in [nil, "ipv6", "ipv4"]),
+ do: raise("Invalid IP version, please set either ipv6 or ipv4")
+
+socket_options =
+ cond do
+ db_version == "ipv6" ->
+ [:inet6]
+
+ db_version == "ipv4" ->
+ [:inet]
+
+ true ->
+ case Realtime.Database.detect_ip_version(default_db_host) do
+ {:ok, ip_version} -> [ip_version]
+ {:error, reason} -> raise "Failed to detect IP version for DB_HOST: #{reason}"
+ end
+ end
+
+config :realtime,
+ migration_partition_slots: migration_partition_slots,
+ connect_partition_slots: connect_partition_slots,
+ connect_throttle_limit_per_second: connect_throttle_limit_per_second,
+ tenant_max_bytes_per_second: System.get_env("TENANT_MAX_BYTES_PER_SECOND", "100000") |> String.to_integer(),
+ tenant_max_channels_per_client: System.get_env("TENANT_MAX_CHANNELS_PER_CLIENT", "100") |> String.to_integer(),
+ tenant_max_concurrent_users: System.get_env("TENANT_MAX_CONCURRENT_USERS", "200") |> String.to_integer(),
+ tenant_max_events_per_second: System.get_env("TENANT_MAX_EVENTS_PER_SECOND", "100") |> String.to_integer(),
+ tenant_max_joins_per_second: System.get_env("TENANT_MAX_JOINS_PER_SECOND", "100") |> String.to_integer(),
+ metrics_cleaner_schedule_timer_in_ms:
+ System.get_env("METRICS_CLEANER_SCHEDULE_TIMER_IN_MS", "1800000") |> String.to_integer(),
+ rpc_timeout: System.get_env("RPC_TIMEOUT", "30000") |> String.to_integer()
+
+run_janitor? = System.get_env("RUN_JANITOR", "false") == "true"
+
+if config_env() == :test || !run_janitor? do
+ config :realtime, run_janitor: false
+else
+ config :realtime,
+ # disabled for now by default
+ run_janitor: System.get_env("RUN_JANITOR", "false") == "true",
+ janitor_schedule_randomize: System.get_env("JANITOR_SCHEDULE_RANDOMIZE", "true") == "true",
+ janitor_max_children: System.get_env("JANITOR_MAX_CHILDREN", "5") |> String.to_integer(),
+ janitor_chunk_size: System.get_env("JANITOR_CHUNK_SIZE", "10") |> String.to_integer(),
+  # by default, the runner only starts after 10 minutes
+ janitor_run_after_in_ms: System.get_env("JANITOR_RUN_AFTER_IN_MS", "600000") |> String.to_integer(),
+ janitor_children_timeout: System.get_env("JANITOR_CHILDREN_TIMEOUT", "5000") |> String.to_integer(),
+ # defaults to 4 hours
+ janitor_schedule_timer:
+ :timer.hours(4)
+ |> to_string()
+ |> then(&System.get_env("JANITOR_SCHEDULE_TIMER_IN_MS", &1))
+ |> String.to_integer()
+end
+
if config_env() == :prod do
secret_key_base =
System.get_env("SECRET_KEY_BASE") ||
@@ -11,15 +81,18 @@ if config_env() == :prod do
You can generate one by calling: mix phx.gen.secret
"""
- app_name =
- System.get_env("FLY_APP_NAME") ||
- raise "APP_NAME not available"
+ if app_name == "" do
+ raise "APP_NAME not available"
+ end
config :realtime, RealtimeWeb.Endpoint,
server: true,
url: [host: "#{app_name}.fly.dev", port: 80],
http: [
port: String.to_integer(System.get_env("PORT") || "4000"),
+ protocol_options: [
+ max_header_value_length: String.to_integer(System.get_env("MAX_HEADER_LENGTH") || "4096")
+ ],
transport_options: [
        # max_connections is per connection supervisor
# num_conns_sups defaults to num_acceptors
@@ -33,37 +106,27 @@ if config_env() == :prod do
],
check_origin: false,
secret_key_base: secret_key_base
-
- config :libcluster,
- debug: false,
- topologies: [
- fly6pn: [
- strategy: Cluster.Strategy.DNSPoll,
- config: [
- polling_interval: 5_000,
- query: System.get_env("DNS_NODES"),
- node_basename: app_name
- ]
- ]
- ]
end
if config_env() != :test do
+ config :logger, level: System.get_env("LOG_LEVEL", "info") |> String.to_existing_atom()
+
+ platform = if System.get_env("AWS_EXECUTION_ENV") == "AWS_ECS_FARGATE", do: :aws, else: :fly
+
config :realtime,
+ request_id_baggage_key: System.get_env("REQUEST_ID_BAGGAGE_KEY", "request-id"),
secure_channels: System.get_env("SECURE_CHANNELS", "true") == "true",
jwt_claim_validators: System.get_env("JWT_CLAIM_VALIDATORS", "{}"),
api_jwt_secret: System.get_env("API_JWT_SECRET"),
+ api_blocklist: System.get_env("API_TOKEN_BLOCKLIST", "") |> String.split(","),
+ metrics_blocklist: System.get_env("METRICS_TOKEN_BLOCKLIST", "") |> String.split(","),
metrics_jwt_secret: System.get_env("METRICS_JWT_SECRET"),
db_enc_key: System.get_env("DB_ENC_KEY"),
- fly_region: System.get_env("FLY_REGION"),
- fly_alloc_id: System.get_env("FLY_ALLOC_ID"),
- prom_poll_rate: System.get_env("PROM_POLL_RATE", "5000") |> String.to_integer()
-
- default_db_host = System.get_env("DB_HOST", "localhost")
- username = System.get_env("DB_USER", "postgres")
- password = System.get_env("DB_PASSWORD", "postgres")
- database = System.get_env("DB_NAME", "postgres")
- port = System.get_env("DB_PORT", "5432")
+ region: System.get_env("REGION"),
+ prom_poll_rate: System.get_env("PROM_POLL_RATE", "5000") |> String.to_integer(),
+ platform: platform,
+ slot_name_suffix: slot_name_suffix
+
queue_target = System.get_env("DB_QUEUE_TARGET", "5000") |> String.to_integer()
queue_interval = System.get_env("DB_QUEUE_INTERVAL", "5000") |> String.to_integer()
@@ -85,13 +148,20 @@ if config_env() != :test do
parameters: [
application_name: "tealbase_mt_realtime"
],
- after_connect: after_connect_query_args
+ after_connect: after_connect_query_args,
+ socket_options: socket_options
replica_repos = %{
Realtime.Repo.Replica.FRA => System.get_env("DB_HOST_REPLICA_FRA", default_db_host),
Realtime.Repo.Replica.IAD => System.get_env("DB_HOST_REPLICA_IAD", default_db_host),
Realtime.Repo.Replica.SIN => System.get_env("DB_HOST_REPLICA_SIN", default_db_host),
- Realtime.Repo.Replica.SJC => System.get_env("DB_HOST_REPLICA_SJC", default_db_host)
+ Realtime.Repo.Replica.SJC => System.get_env("DB_HOST_REPLICA_SJC", default_db_host),
+ Realtime.Repo.Replica.Singapore => System.get_env("DB_HOST_REPLICA_SIN", default_db_host),
+ Realtime.Repo.Replica.London => System.get_env("DB_HOST_REPLICA_FRA", default_db_host),
+ Realtime.Repo.Replica.NorthVirginia => System.get_env("DB_HOST_REPLICA_IAD", default_db_host),
+ Realtime.Repo.Replica.Oregon => System.get_env("DB_HOST_REPLICA_SJC", default_db_host),
+ Realtime.Repo.Replica.SanJose => System.get_env("DB_HOST_REPLICA_SJC", default_db_host),
+ Realtime.Repo.Replica.Local => default_db_host
}
# username, password, database, and port must match primary credentials
@@ -111,6 +181,78 @@ if config_env() != :test do
end
end
+default_cluster_strategy =
+ config_env()
+ |> case do
+ :prod -> "DNS"
+ _ -> "EPMD"
+ end
+
+cluster_topologies =
+ System.get_env("CLUSTER_STRATEGIES", default_cluster_strategy)
+ |> String.upcase()
+ |> String.split(",")
+ |> Enum.reduce([], fn strategy, acc ->
+ strategy
+ |> String.trim()
+ |> case do
+ "DNS" ->
+ [
+ fly6pn: [
+ strategy: Cluster.Strategy.DNSPoll,
+ config: [
+ polling_interval: 5_000,
+ query: System.get_env("DNS_NODES"),
+ node_basename: app_name
+ ]
+ ]
+ ] ++ acc
+
+ "POSTGRES" ->
+ version = "#{Application.spec(:realtime)[:vsn]}" |> String.replace(".", "_")
+
+ [
+ postgres: [
+ strategy: Realtime.Cluster.Strategy.Postgres,
+ config: [
+ hostname: default_db_host,
+ username: username,
+ password: password,
+ database: database,
+ port: port,
+ parameters: [
+ application_name: "cluster_node_#{node()}"
+ ],
+ heartbeat_interval: 5_000,
+ node_timeout: 15_000,
+ channel_name: System.get_env("POSTGRES_CLUSTER_CHANNEL_NAME", "realtime_cluster_#{version}")
+ ]
+ ]
+ ] ++ acc
+
+ "EPMD" ->
+ [
+ dev: [
+ strategy: Cluster.Strategy.Epmd,
+ config: [
+ hosts: [:"orange@127.0.0.1", :"pink@127.0.0.1"]
+ ],
+ connect: {:net_kernel, :connect_node, []},
+ disconnect: {:net_kernel, :disconnect_node, []}
+ ]
+ ] ++ acc
+
+ _ ->
+ acc
+ end
+ end)
+
+if config_env() == :prod do
+ config :libcluster,
+ debug: false,
+ topologies: cluster_topologies
+end
+
if System.get_env("LOGS_ENGINE") == "logflare" do
if !System.get_env("LOGFLARE_API_KEY") or !System.get_env("LOGFLARE_SOURCE_ID") do
raise """
@@ -120,5 +262,7 @@ if System.get_env("LOGS_ENGINE") == "logflare" do
end
config :logger,
+ sync_threshold: 6_000,
+ discard_threshold: 6_000,
backends: [LogflareLogger.HttpBackend]
end
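The `CLUSTER_STRATEGIES` reducer above accepts a comma-separated list and silently ignores anything it does not recognize. A sketch of how a value is interpreted (the example value is an assumption):

```elixir
# Sketch: "dns, postgres" selects both the DNSPoll and Postgres topologies.
"dns, postgres"
|> String.upcase()
|> String.split(",")
|> Enum.map(&String.trim/1)
# => ["DNS", "POSTGRES"]
```

Note also that the janitor schedule default works because `:timer.hours(4)` returns `14_400_000` milliseconds, which is stringified and used as the fallback for `JANITOR_SCHEDULE_TIMER_IN_MS`.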
diff --git a/config/test.exs b/config/test.exs
index ff710e4..60745f4 100644
--- a/config/test.exs
+++ b/config/test.exs
@@ -10,33 +10,47 @@ for repo <- [
Realtime.Repo.Replica.FRA,
Realtime.Repo.Replica.IAD,
Realtime.Repo.Replica.SIN,
- Realtime.Repo.Replica.SJC
+ Realtime.Repo.Replica.SJC,
+ Realtime.Repo.Replica.Singapore,
+ Realtime.Repo.Replica.London,
+ Realtime.Repo.Replica.NorthVirginia,
+ Realtime.Repo.Replica.Oregon,
+ Realtime.Repo.Replica.SanJose
] do
config :realtime, repo,
username: "postgres",
password: "postgres",
database: "realtime_test",
- hostname: "localhost",
+ hostname: "127.0.0.1",
pool: Ecto.Adapters.SQL.Sandbox
end
-# We don't run a server during test. If one is required,
-# you can enable the server option below.
+# Run the server during tests so integration tests can exercise it
config :realtime, RealtimeWeb.Endpoint,
http: [port: 4002],
- server: false
+ server: true
config :realtime,
+ region: "us-east-1",
secure_channels: true,
db_enc_key: "1234567890123456",
jwt_claim_validators: System.get_env("JWT_CLAIM_VALIDATORS", "{}"),
- api_jwt_secret: System.get_env("API_JWT_SECRET"),
+ api_jwt_secret: System.get_env("API_JWT_SECRET", "secret"),
metrics_jwt_secret: "test",
prom_poll_rate: 5_000,
- fly_alloc_id: "123e4567-e89b-12d3-a456-426614174000"
+ request_id_baggage_key: "sb-request-id"
-config :joken,
- current_time_adapter: RealtimeWeb.Joken.CurrentTime.Mock
+# Print only warnings and errors during test
+config :logger,
+ compile_time_purge_matching: [[module: Postgrex], [module: DBConnection]],
+ level: :warning
-# Print only warnings and errors during test
-config :logger, level: :warn
+# Configures Elixir's Logger
+config :logger, :console,
+ format: "$time $metadata[$level] $message\n",
+ metadata: [:request_id, :project, :external_id, :application_name, :sub]
+
+config :opentelemetry,
+ span_processor: :simple,
+ traces_exporter: :none,
+ processors: [{:otel_simple_processor, %{}}]
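One detail worth knowing about the new test logger config: `compile_time_purge_matching` removes matching `Logger` calls at compile time, so `Postgrex` and `DBConnection` logging never makes it into the test build at all. The matchers are keyword lists over `Logger` metadata (the third entry below is a hypothetical illustration, not part of this diff):

```elixir
config :logger,
  compile_time_purge_matching: [
    [module: Postgrex],
    [module: DBConnection],
    # hypothetical: drop anything below :info from one noisy module
    [module: MyApp.NoisyWorker, level_lower_than: :info]
  ]
```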
diff --git a/coveralls.json b/coveralls.json
new file mode 100644
index 0000000..eee3db6
--- /dev/null
+++ b/coveralls.json
@@ -0,0 +1,28 @@
+{
+ "skip_files": [
+ "lib/realtime_web/api_spec.ex",
+ "lib/realtime_web/channels/presence.ex",
+ "lib/realtime_web/controllers/page_controller.ex",
+ "lib/realtime_web/dashboard/",
+ "lib/realtime_web/endpoint.ex",
+ "lib/realtime_web/gettext.ex",
+ "lib/realtime_web/live/",
+ "lib/realtime_web/open_api_schemas.ex",
+ "lib/realtime_web/telemetry.ex",
+ "lib/realtime_web/views/",
+ "lib/realtime.ex",
+ "lib/realtime/adapters/changes.ex",
+ "lib/realtime/adapters/postgres/decoder.ex",
+ "lib/realtime/adapters/postgres/oid_database.ex",
+ "lib/realtime/adapters/postgres/protocol/",
+ "lib/realtime/application.ex",
+ "lib/realtime/monitoring/prom_ex/plugins/phoenix.ex",
+ "lib/realtime/operations.ex",
+ "lib/realtime/release.ex",
+ "lib/realtime/tenants/authorization/policies/broadcast_policies.ex",
+ "lib/realtime/tenants/authorization/policies/presence_policies.ex",
+ "lib/realtime/tenants/repo/migrations/",
+ "/lib/realtime/tenants/cache_supervisor.ex",
+ "test/"
+ ]
+}
\ No newline at end of file
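`coveralls.json` is read by the ExCoveralls coverage tool, which skips the listed files when computing coverage. A sketch of the usual `mix.exs` wiring it assumes (not shown in this diff):

```elixir
# Sketch: typical ExCoveralls setup in mix.exs.
def project do
  [
    app: :realtime,
    test_coverage: [tool: ExCoveralls],
    preferred_cli_env: [coveralls: :test, "coveralls.html": :test]
  ]
end
```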
diff --git a/demo/package-lock.json b/demo/package-lock.json
index 0f7ee0e..01e19a1 100644
--- a/demo/package-lock.json
+++ b/demo/package-lock.json
@@ -13,7 +13,7 @@
"lodash.clonedeep": "^4.5.0",
"lodash.samplesize": "^4.2.0",
"lodash.throttle": "^4.1.1",
- "next": "^12.1.0",
+ "next": "^14.2.26",
"react": "17.0.2",
"react-dom": "17.0.2"
},
@@ -26,7 +26,7 @@
"autoprefixer": "^10.4.4",
"eslint": "8.11.0",
"eslint-config-next": "^12.3.4",
- "postcss": "^8.4.12",
+ "postcss": "^8.4.31",
"tailwindcss": "^3.0.23",
"typescript": "4.6.2"
}
@@ -128,11 +128,12 @@
}
},
"node_modules/@babel/runtime": {
- "version": "7.20.1",
- "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.20.1.tgz",
- "integrity": "sha512-mrzLkl6U9YLF8qpqI7TB82PESyEGjm/0Ly91jG575eVxMMlb8fYfOXFZIJ8XfLrJZQbm7dlKry2bJmXBUEkdFg==",
+ "version": "7.27.0",
+ "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.27.0.tgz",
+ "integrity": "sha512-VtPOkrdPHZsKc/clNqyi9WUA8TINkZ4cGk63UUE3u4pmB2k+ZMQRDuIOagv8UVd6j7k0T3+RRIb7beKTebNbcw==",
+ "license": "MIT",
"dependencies": {
- "regenerator-runtime": "^0.13.10"
+ "regenerator-runtime": "^0.14.0"
},
"engines": {
"node": ">=6.9.0"
@@ -151,6 +152,12 @@
"node": ">=6.9.0"
}
},
+ "node_modules/@babel/runtime/node_modules/regenerator-runtime": {
+ "version": "0.14.1",
+ "resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.14.1.tgz",
+ "integrity": "sha512-dYnhHh0nJoMfnkZs6GmmhFknAGRrLznOu5nc9ML+EJxGvrx6H7teuevqVqCuPcPK//3eDrrjQhehXVx9cnkGdw==",
+ "license": "MIT"
+ },
"node_modules/@eslint/eslintrc": {
"version": "1.2.1",
"resolved": "https://registry.npmjs.org/@eslint/eslintrc/-/eslintrc-1.2.1.tgz",
@@ -222,9 +229,10 @@
}
},
"node_modules/@next/env": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/env/-/env-12.1.0.tgz",
- "integrity": "sha512-nrIgY6t17FQ9xxwH3jj0a6EOiQ/WDHUos35Hghtr+SWN/ntHIQ7UpuvSi0vaLzZVHQWaDupKI+liO5vANcDeTQ=="
+ "version": "14.2.26",
+ "resolved": "https://registry.npmjs.org/@next/env/-/env-14.2.26.tgz",
+ "integrity": "sha512-vO//GJ/YBco+H7xdQhzJxF7ub3SUwft76jwaeOyVVQFHCi5DCnkP16WHB+JBylo4vOKPoZBlR94Z8xBxNBdNJA==",
+ "license": "MIT"
},
"node_modules/@next/eslint-plugin-next": {
"version": "12.3.4",
@@ -255,28 +263,14 @@
"url": "https://github.com/sponsors/isaacs"
}
},
- "node_modules/@next/swc-android-arm64": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/swc-android-arm64/-/swc-android-arm64-12.1.0.tgz",
- "integrity": "sha512-/280MLdZe0W03stA69iL+v6I+J1ascrQ6FrXBlXGCsGzrfMaGr7fskMa0T5AhQIVQD4nA/46QQWxG//DYuFBcA==",
- "cpu": [
- "arm64"
- ],
- "optional": true,
- "os": [
- "android"
- ],
- "engines": {
- "node": ">= 10"
- }
- },
"node_modules/@next/swc-darwin-arm64": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/swc-darwin-arm64/-/swc-darwin-arm64-12.1.0.tgz",
- "integrity": "sha512-R8vcXE2/iONJ1Unf5Ptqjk6LRW3bggH+8drNkkzH4FLEQkHtELhvcmJwkXcuipyQCsIakldAXhRbZmm3YN1vXg==",
+ "version": "14.2.26",
+ "resolved": "https://registry.npmjs.org/@next/swc-darwin-arm64/-/swc-darwin-arm64-14.2.26.tgz",
+ "integrity": "sha512-zDJY8gsKEseGAxG+C2hTMT0w9Nk9N1Sk1qV7vXYz9MEiyRoF5ogQX2+vplyUMIfygnjn9/A04I6yrUTRTuRiyQ==",
"cpu": [
"arm64"
],
+ "license": "MIT",
"optional": true,
"os": [
"darwin"
@@ -286,12 +280,13 @@
}
},
"node_modules/@next/swc-darwin-x64": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/swc-darwin-x64/-/swc-darwin-x64-12.1.0.tgz",
- "integrity": "sha512-ieAz0/J0PhmbZBB8+EA/JGdhRHBogF8BWaeqR7hwveb6SYEIJaDNQy0I+ZN8gF8hLj63bEDxJAs/cEhdnTq+ug==",
+ "version": "14.2.26",
+ "resolved": "https://registry.npmjs.org/@next/swc-darwin-x64/-/swc-darwin-x64-14.2.26.tgz",
+ "integrity": "sha512-U0adH5ryLfmTDkahLwG9sUQG2L0a9rYux8crQeC92rPhi3jGQEY47nByQHrVrt3prZigadwj/2HZ1LUUimuSbg==",
"cpu": [
"x64"
],
+ "license": "MIT",
"optional": true,
"os": [
"darwin"
@@ -300,28 +295,14 @@
"node": ">= 10"
}
},
- "node_modules/@next/swc-linux-arm-gnueabihf": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/swc-linux-arm-gnueabihf/-/swc-linux-arm-gnueabihf-12.1.0.tgz",
- "integrity": "sha512-njUd9hpl6o6A5d08dC0cKAgXKCzm5fFtgGe6i0eko8IAdtAPbtHxtpre3VeSxdZvuGFh+hb0REySQP9T1ttkog==",
- "cpu": [
- "arm"
- ],
- "optional": true,
- "os": [
- "linux"
- ],
- "engines": {
- "node": ">= 10"
- }
- },
"node_modules/@next/swc-linux-arm64-gnu": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/swc-linux-arm64-gnu/-/swc-linux-arm64-gnu-12.1.0.tgz",
- "integrity": "sha512-OqangJLkRxVxMhDtcb7Qn1xjzFA3s50EIxY7mljbSCLybU+sByPaWAHY4px97ieOlr2y4S0xdPKkQ3BCAwyo6Q==",
+ "version": "14.2.26",
+ "resolved": "https://registry.npmjs.org/@next/swc-linux-arm64-gnu/-/swc-linux-arm64-gnu-14.2.26.tgz",
+ "integrity": "sha512-SINMl1I7UhfHGM7SoRiw0AbwnLEMUnJ/3XXVmhyptzriHbWvPPbbm0OEVG24uUKhuS1t0nvN/DBvm5kz6ZIqpg==",
"cpu": [
"arm64"
],
+ "license": "MIT",
"optional": true,
"os": [
"linux"
@@ -331,12 +312,13 @@
}
},
"node_modules/@next/swc-linux-arm64-musl": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/swc-linux-arm64-musl/-/swc-linux-arm64-musl-12.1.0.tgz",
- "integrity": "sha512-hB8cLSt4GdmOpcwRe2UzI5UWn6HHO/vLkr5OTuNvCJ5xGDwpPXelVkYW/0+C3g5axbDW2Tym4S+MQCkkH9QfWA==",
+ "version": "14.2.26",
+ "resolved": "https://registry.npmjs.org/@next/swc-linux-arm64-musl/-/swc-linux-arm64-musl-14.2.26.tgz",
+ "integrity": "sha512-s6JaezoyJK2DxrwHWxLWtJKlqKqTdi/zaYigDXUJ/gmx/72CrzdVZfMvUc6VqnZ7YEvRijvYo+0o4Z9DencduA==",
"cpu": [
"arm64"
],
+ "license": "MIT",
"optional": true,
"os": [
"linux"
@@ -346,12 +328,13 @@
}
},
"node_modules/@next/swc-linux-x64-gnu": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/swc-linux-x64-gnu/-/swc-linux-x64-gnu-12.1.0.tgz",
- "integrity": "sha512-OKO4R/digvrVuweSw/uBM4nSdyzsBV5EwkUeeG4KVpkIZEe64ZwRpnFB65bC6hGwxIBnTv5NMSnJ+0K/WmG78A==",
+ "version": "14.2.26",
+ "resolved": "https://registry.npmjs.org/@next/swc-linux-x64-gnu/-/swc-linux-x64-gnu-14.2.26.tgz",
+ "integrity": "sha512-FEXeUQi8/pLr/XI0hKbe0tgbLmHFRhgXOUiPScz2hk0hSmbGiU8aUqVslj/6C6KA38RzXnWoJXo4FMo6aBxjzg==",
"cpu": [
"x64"
],
+ "license": "MIT",
"optional": true,
"os": [
"linux"
@@ -361,12 +344,13 @@
}
},
"node_modules/@next/swc-linux-x64-musl": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/swc-linux-x64-musl/-/swc-linux-x64-musl-12.1.0.tgz",
- "integrity": "sha512-JohhgAHZvOD3rQY7tlp7NlmvtvYHBYgY0x5ZCecUT6eCCcl9lv6iV3nfu82ErkxNk1H893fqH0FUpznZ/H3pSw==",
+ "version": "14.2.26",
+ "resolved": "https://registry.npmjs.org/@next/swc-linux-x64-musl/-/swc-linux-x64-musl-14.2.26.tgz",
+ "integrity": "sha512-BUsomaO4d2DuXhXhgQCVt2jjX4B4/Thts8nDoIruEJkhE5ifeQFtvW5c9JkdOtYvE5p2G0hcwQ0UbRaQmQwaVg==",
"cpu": [
"x64"
],
+ "license": "MIT",
"optional": true,
"os": [
"linux"
@@ -376,12 +360,13 @@
}
},
"node_modules/@next/swc-win32-arm64-msvc": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/swc-win32-arm64-msvc/-/swc-win32-arm64-msvc-12.1.0.tgz",
- "integrity": "sha512-T/3gIE6QEfKIJ4dmJk75v9hhNiYZhQYAoYm4iVo1TgcsuaKLFa+zMPh4056AHiG6n9tn2UQ1CFE8EoybEsqsSw==",
+ "version": "14.2.26",
+ "resolved": "https://registry.npmjs.org/@next/swc-win32-arm64-msvc/-/swc-win32-arm64-msvc-14.2.26.tgz",
+ "integrity": "sha512-5auwsMVzT7wbB2CZXQxDctpWbdEnEW/e66DyXO1DcgHxIyhP06awu+rHKshZE+lPLIGiwtjo7bsyeuubewwxMw==",
"cpu": [
"arm64"
],
+ "license": "MIT",
"optional": true,
"os": [
"win32"
@@ -391,12 +376,13 @@
}
},
"node_modules/@next/swc-win32-ia32-msvc": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/swc-win32-ia32-msvc/-/swc-win32-ia32-msvc-12.1.0.tgz",
- "integrity": "sha512-iwnKgHJdqhIW19H9PRPM9j55V6RdcOo6rX+5imx832BCWzkDbyomWnlzBfr6ByUYfhohb8QuH4hSGEikpPqI0Q==",
+ "version": "14.2.26",
+ "resolved": "https://registry.npmjs.org/@next/swc-win32-ia32-msvc/-/swc-win32-ia32-msvc-14.2.26.tgz",
+ "integrity": "sha512-GQWg/Vbz9zUGi9X80lOeGsz1rMH/MtFO/XqigDznhhhTfDlDoynCM6982mPCbSlxJ/aveZcKtTlwfAjwhyxDpg==",
"cpu": [
"ia32"
],
+ "license": "MIT",
"optional": true,
"os": [
"win32"
@@ -406,12 +392,13 @@
}
},
"node_modules/@next/swc-win32-x64-msvc": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/swc-win32-x64-msvc/-/swc-win32-x64-msvc-12.1.0.tgz",
- "integrity": "sha512-aBvcbMwuanDH4EMrL2TthNJy+4nP59Bimn8egqv6GHMVj0a44cU6Au4PjOhLNqEh9l+IpRGBqMTzec94UdC5xg==",
+ "version": "14.2.26",
+ "resolved": "https://registry.npmjs.org/@next/swc-win32-x64-msvc/-/swc-win32-x64-msvc-14.2.26.tgz",
+ "integrity": "sha512-2rdB3T1/Gp7bv1eQTTm9d1Y1sv9UuJ2LAwOE0Pe2prHKe32UNscj7YS13fRB37d0GAiGNR+Y7ZcW8YjDI8Ns0w==",
"cpu": [
"x64"
],
+ "license": "MIT",
"optional": true,
"os": [
"win32"
@@ -2262,6 +2249,20 @@
"react": "^16.8 || ^17.0"
}
},
+ "node_modules/@swc/counter": {
+ "version": "0.1.3",
+ "resolved": "https://registry.npmjs.org/@swc/counter/-/counter-0.1.3.tgz",
+ "integrity": "sha512-e2BR4lsJkkRlKZ/qCHPw9ZaSxc0MVUd7gtbtaB7aMvHeJVYe8sOB8DBZkP2DtISHGSku9sCK6T6cnY0CtXrOCQ=="
+ },
+ "node_modules/@swc/helpers": {
+ "version": "0.5.5",
+ "resolved": "https://registry.npmjs.org/@swc/helpers/-/helpers-0.5.5.tgz",
+ "integrity": "sha512-KGYxvIOXcceOAbEk4bi/dVLEK9z8sZ0uBB3Il5b1rhfClSpcX0yfRO0KmTkqR2cnQDymwLB+25ZyMzICg/cm/A==",
+ "dependencies": {
+ "@swc/counter": "^0.1.3",
+ "tslib": "^2.4.0"
+ }
+ },
"node_modules/@tailwindcss/forms": {
"version": "0.4.1",
"resolved": "https://registry.npmjs.org/@tailwindcss/forms/-/forms-0.4.1.tgz",
@@ -2766,11 +2767,11 @@
}
},
"node_modules/braces": {
- "version": "3.0.2",
- "resolved": "https://registry.npmjs.org/braces/-/braces-3.0.2.tgz",
- "integrity": "sha512-b8um+L1RzM3WDSzvhm6gIz1yfTbBt6YTlcEKAvsmqCZZFw46z626lVj9j1yEPW33H5H+lBQpZMP1k8l+78Ha0A==",
+ "version": "3.0.3",
+ "resolved": "https://registry.npmjs.org/braces/-/braces-3.0.3.tgz",
+ "integrity": "sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA==",
"dependencies": {
- "fill-range": "^7.0.1"
+ "fill-range": "^7.1.1"
},
"engines": {
"node": ">=8"
@@ -2816,6 +2817,17 @@
"node": ">=6.14.2"
}
},
+ "node_modules/busboy": {
+ "version": "1.6.0",
+ "resolved": "https://registry.npmjs.org/busboy/-/busboy-1.6.0.tgz",
+ "integrity": "sha512-8SFQbg/0hQ9xy3UNTB0YEnsNBbWfhf7RtnzpL7TkBiTBRfrQ9Fxcnz7VJsleJpyp6rVLvXiuORqjlHi5q+PYuA==",
+ "dependencies": {
+ "streamsearch": "^1.1.0"
+ },
+ "engines": {
+ "node": ">=10.16.0"
+ }
+ },
"node_modules/call-bind": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/call-bind/-/call-bind-1.0.2.tgz",
@@ -2846,9 +2858,9 @@
}
},
"node_modules/caniuse-lite": {
- "version": "1.0.30001434",
- "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001434.tgz",
- "integrity": "sha512-aOBHrLmTQw//WFa2rcF1If9fa3ypkC1wzqqiKHgfdrXTWcU8C4gKVZT77eQAPWN1APys3+uQ0Df07rKauXGEYA==",
+ "version": "1.0.30001689",
+ "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001689.tgz",
+ "integrity": "sha512-CmeR2VBycfa+5/jOfnp/NpWPGd06nf1XYiefUvhXFfZE4GkRc9jv+eGPS4nT558WS/8lYCzV8SlANCIPvbWP1g==",
"funding": [
{
"type": "opencollective",
@@ -2857,6 +2869,10 @@
{
"type": "tidelift",
"url": "https://tidelift.com/funding/github/npm/caniuse-lite"
+ },
+ {
+ "type": "github",
+ "url": "https://github.com/sponsors/ai"
}
]
},
@@ -3209,13 +3225,14 @@
}
},
"node_modules/es5-ext": {
- "version": "0.10.62",
- "resolved": "https://registry.npmjs.org/es5-ext/-/es5-ext-0.10.62.tgz",
- "integrity": "sha512-BHLqn0klhEpnOKSrzn/Xsz2UIW8j+cGmo9JLzr8BiUapV8hPL9+FliFqjwr9ngW7jWdnxv6eO+/LqyhJVqgrjA==",
+ "version": "0.10.64",
+ "resolved": "https://registry.npmjs.org/es5-ext/-/es5-ext-0.10.64.tgz",
+ "integrity": "sha512-p2snDhiLaXe6dahss1LddxqEm+SkuDvV8dnIQG0MWjyHpcMNfXKPE+/Cc0y+PhxJX3A4xGNeFCj5oc0BUh6deg==",
"hasInstallScript": true,
"dependencies": {
"es6-iterator": "^2.0.3",
"es6-symbol": "^3.1.3",
+ "esniff": "^2.0.1",
"next-tick": "^1.1.0"
},
"engines": {
@@ -3610,6 +3627,25 @@
"node": ">=6.0.0"
}
},
+ "node_modules/esniff": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/esniff/-/esniff-2.0.1.tgz",
+ "integrity": "sha512-kTUIGKQ/mDPFoJ0oVfcmyJn4iBDRptjNVIzwIFR7tqWXdVI9xfA2RMwY/gbSpJG3lkdWNEjLap/NqVHZiJsdfg==",
+ "dependencies": {
+ "d": "^1.0.1",
+ "es5-ext": "^0.10.62",
+ "event-emitter": "^0.3.5",
+ "type": "^2.7.2"
+ },
+ "engines": {
+ "node": ">=0.10"
+ }
+ },
+ "node_modules/esniff/node_modules/type": {
+ "version": "2.7.2",
+ "resolved": "https://registry.npmjs.org/type/-/type-2.7.2.tgz",
+ "integrity": "sha512-dzlvlNlt6AXU7EBSfpAscydQ7gXB+pPGsPnfJnZpiNJBDj7IaJzQlBZYGdEi4R9HmPdBv2XmWJ6YUtoTa7lmCw=="
+ },
"node_modules/espree": {
"version": "9.3.1",
"resolved": "https://registry.npmjs.org/espree/-/espree-9.3.1.tgz",
@@ -3666,6 +3702,15 @@
"node": ">=0.10.0"
}
},
+ "node_modules/event-emitter": {
+ "version": "0.3.5",
+ "resolved": "https://registry.npmjs.org/event-emitter/-/event-emitter-0.3.5.tgz",
+ "integrity": "sha512-D9rRn9y7kLPnJ+hMq7S/nhvoKwwvVJahBi2BPmx3bvbsEdK3W9ii8cBSGjP+72/LnM4n6fo3+dkCX5FeTQruXA==",
+ "dependencies": {
+ "d": "1",
+ "es5-ext": "~0.10.14"
+ }
+ },
"node_modules/ext": {
"version": "1.7.0",
"resolved": "https://registry.npmjs.org/ext/-/ext-1.7.0.tgz",
@@ -3744,9 +3789,9 @@
}
},
"node_modules/fill-range": {
- "version": "7.0.1",
- "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.0.1.tgz",
- "integrity": "sha512-qOo9F+dMUmC2Lcb4BbVvnKJxTPjCm+RRpe4gDuGrzkL7mEVl/djYSu2OdQ2Pa302N4oqkSg9ir6jaLWJ2USVpQ==",
+ "version": "7.1.1",
+ "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.1.1.tgz",
+ "integrity": "sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg==",
"dependencies": {
"to-regex-range": "^5.0.1"
},
@@ -3982,6 +4027,11 @@
"url": "https://github.com/sponsors/sindresorhus"
}
},
+ "node_modules/graceful-fs": {
+ "version": "4.2.11",
+ "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.11.tgz",
+ "integrity": "sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ=="
+ },
"node_modules/has": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/has/-/has-1.0.3.tgz",
@@ -4517,12 +4567,12 @@
}
},
"node_modules/micromatch": {
- "version": "4.0.4",
- "resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.4.tgz",
- "integrity": "sha512-pRmzw/XUcwXGpD9aI9q/0XOwLNygjETJ8y0ao0wdqprrzDa4YnxLcz7fQRZr8voh8V10kGhABbNcHVk5wHgWwg==",
+ "version": "4.0.8",
+ "resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.8.tgz",
+ "integrity": "sha512-PXwfBhYu0hBCPw8Dn0E+WDYb7af3dSLVWKi3HGv84IdF4TyFoC0ysxFd0Goxw7nSv4T/PzEJQxsYsEiFCKo2BA==",
"dependencies": {
- "braces": "^3.0.1",
- "picomatch": "^2.2.3"
+ "braces": "^3.0.3",
+ "picomatch": "^2.3.1"
},
"engines": {
"node": ">=8.6"
@@ -4563,9 +4613,15 @@
"dev": true
},
"node_modules/nanoid": {
- "version": "3.3.4",
- "resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.3.4.tgz",
- "integrity": "sha512-MqBkQh/OHTS2egovRtLk45wEyNXwF+cokD+1YPf9u5VfJiRdAiRwB2froX5Co9Rh20xs4siNPm8naNotSD6RBw==",
+ "version": "3.3.8",
+ "resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.3.8.tgz",
+ "integrity": "sha512-WNLf5Sd8oZxOm+TzppcYk8gVOgP+l58xNy58D0nbUnOxOWRWvlcCV4kUF7ltmI6PsrLl/BgKEyS4mqsGChFN0w==",
+ "funding": [
+ {
+ "type": "github",
+ "url": "https://github.com/sponsors/ai"
+ }
+ ],
"bin": {
"nanoid": "bin/nanoid.cjs"
},
@@ -4580,47 +4636,48 @@
"dev": true
},
"node_modules/next": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/next/-/next-12.1.0.tgz",
- "integrity": "sha512-s885kWvnIlxsUFHq9UGyIyLiuD0G3BUC/xrH0CEnH5lHEWkwQcHOORgbDF0hbrW9vr/7am4ETfX4A7M6DjrE7Q==",
- "dependencies": {
- "@next/env": "12.1.0",
- "caniuse-lite": "^1.0.30001283",
- "postcss": "8.4.5",
- "styled-jsx": "5.0.0",
- "use-subscription": "1.5.1"
+ "version": "14.2.26",
+ "resolved": "https://registry.npmjs.org/next/-/next-14.2.26.tgz",
+ "integrity": "sha512-b81XSLihMwCfwiUVRRja3LphLo4uBBMZEzBBWMaISbKTwOmq3wPknIETy/8000tr7Gq4WmbuFYPS7jOYIf+ZJw==",
+ "license": "MIT",
+ "dependencies": {
+ "@next/env": "14.2.26",
+ "@swc/helpers": "0.5.5",
+ "busboy": "1.6.0",
+ "caniuse-lite": "^1.0.30001579",
+ "graceful-fs": "^4.2.11",
+ "postcss": "8.4.31",
+ "styled-jsx": "5.1.1"
},
"bin": {
"next": "dist/bin/next"
},
"engines": {
- "node": ">=12.22.0"
+ "node": ">=18.17.0"
},
"optionalDependencies": {
- "@next/swc-android-arm64": "12.1.0",
- "@next/swc-darwin-arm64": "12.1.0",
- "@next/swc-darwin-x64": "12.1.0",
- "@next/swc-linux-arm-gnueabihf": "12.1.0",
- "@next/swc-linux-arm64-gnu": "12.1.0",
- "@next/swc-linux-arm64-musl": "12.1.0",
- "@next/swc-linux-x64-gnu": "12.1.0",
- "@next/swc-linux-x64-musl": "12.1.0",
- "@next/swc-win32-arm64-msvc": "12.1.0",
- "@next/swc-win32-ia32-msvc": "12.1.0",
- "@next/swc-win32-x64-msvc": "12.1.0"
- },
- "peerDependencies": {
- "fibers": ">= 3.1.0",
- "node-sass": "^6.0.0 || ^7.0.0",
- "react": "^17.0.2 || ^18.0.0-0",
- "react-dom": "^17.0.2 || ^18.0.0-0",
+ "@next/swc-darwin-arm64": "14.2.26",
+ "@next/swc-darwin-x64": "14.2.26",
+ "@next/swc-linux-arm64-gnu": "14.2.26",
+ "@next/swc-linux-arm64-musl": "14.2.26",
+ "@next/swc-linux-x64-gnu": "14.2.26",
+ "@next/swc-linux-x64-musl": "14.2.26",
+ "@next/swc-win32-arm64-msvc": "14.2.26",
+ "@next/swc-win32-ia32-msvc": "14.2.26",
+ "@next/swc-win32-x64-msvc": "14.2.26"
+ },
+ "peerDependencies": {
+ "@opentelemetry/api": "^1.1.0",
+ "@playwright/test": "^1.41.2",
+ "react": "^18.2.0",
+ "react-dom": "^18.2.0",
"sass": "^1.3.0"
},
"peerDependenciesMeta": {
- "fibers": {
+ "@opentelemetry/api": {
"optional": true
},
- "node-sass": {
+ "@playwright/test": {
"optional": true
},
"sass": {
@@ -4634,20 +4691,30 @@
"integrity": "sha512-CXdUiJembsNjuToQvxayPZF9Vqht7hewsvy2sOWafLvi2awflj9mOC6bHIg50orX8IJvWKY9wYQ/zB2kogPslQ=="
},
"node_modules/next/node_modules/postcss": {
- "version": "8.4.5",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.5.tgz",
- "integrity": "sha512-jBDboWM8qpaqwkMwItqTQTiFikhs/67OYVvblFFTM7MrZjt6yMKd6r2kgXizEbTTljacm4NldIlZnhbjr84QYg==",
+ "version": "8.4.31",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.31.tgz",
+ "integrity": "sha512-PS08Iboia9mts/2ygV3eLpY5ghnUcfLV/EXTOW1E2qYxJKGGBUtNjN76FYHnMs36RmARn41bC0AZmn+rR0OVpQ==",
+ "funding": [
+ {
+ "type": "opencollective",
+ "url": "https://opencollective.com/postcss/"
+ },
+ {
+ "type": "tidelift",
+ "url": "https://tidelift.com/funding/github/npm/postcss"
+ },
+ {
+ "type": "github",
+ "url": "https://github.com/sponsors/ai"
+ }
+ ],
"dependencies": {
- "nanoid": "^3.1.30",
+ "nanoid": "^3.3.6",
"picocolors": "^1.0.0",
- "source-map-js": "^1.0.1"
+ "source-map-js": "^1.0.2"
},
"engines": {
"node": "^10 || ^12 || >=14"
- },
- "funding": {
- "type": "opencollective",
- "url": "https://opencollective.com/postcss/"
}
},
"node_modules/node-fetch": {
@@ -4915,9 +4982,9 @@
}
},
"node_modules/postcss": {
- "version": "8.4.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.14.tgz",
- "integrity": "sha512-E398TUmfAYFPBSdzgeieK2Y1+1cpdxJx8yXbK/m57nRhKSmk1GB2tO4lbLBtlkfPQTDKfe4Xqv1ASWPpayPEig==",
+ "version": "8.4.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.32.tgz",
+ "integrity": "sha512-D/kj5JNu6oo2EIy+XL/26JEDTlIbB8hw85G8StOE6L74RQAVVP5rej6wxCNqyMbR4RkPfqvezVbPw81Ngd6Kcw==",
"funding": [
{
"type": "opencollective",
@@ -4926,10 +4993,14 @@
{
"type": "tidelift",
"url": "https://tidelift.com/funding/github/npm/postcss"
+ },
+ {
+ "type": "github",
+ "url": "https://github.com/sponsors/ai"
}
],
"dependencies": {
- "nanoid": "^3.3.4",
+ "nanoid": "^3.3.7",
"picocolors": "^1.0.0",
"source-map-js": "^1.0.2"
},
@@ -5164,7 +5235,8 @@
"node_modules/regenerator-runtime": {
"version": "0.13.11",
"resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.13.11.tgz",
- "integrity": "sha512-kY1AZVr2Ra+t+piVaJ4gxaFaReZVH40AKNo7UCX6W+dEwBo/2oZJzqfuN1qLq1oL45o56cPaTXELwrTh8Fpggg=="
+ "integrity": "sha512-kY1AZVr2Ra+t+piVaJ4gxaFaReZVH40AKNo7UCX6W+dEwBo/2oZJzqfuN1qLq1oL45o56cPaTXELwrTh8Fpggg==",
+ "dev": true
},
"node_modules/regexp.prototype.flags": {
"version": "1.4.3",
@@ -5355,6 +5427,14 @@
"node": ">=0.10.0"
}
},
+ "node_modules/streamsearch": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/streamsearch/-/streamsearch-1.1.0.tgz",
+ "integrity": "sha512-Mcc5wHehp9aXz1ax6bZUyY5afg9u2rv5cqQI3mRrYkGC8rW2hM02jWuwjtL++LS5qinSyhj2QfLyNsuc+VsExg==",
+ "engines": {
+ "node": ">=10.0.0"
+ }
+ },
"node_modules/string.prototype.matchall": {
"version": "4.0.8",
"resolved": "https://registry.npmjs.org/string.prototype.matchall/-/string.prototype.matchall-4.0.8.tgz",
@@ -5436,14 +5516,17 @@
}
},
"node_modules/styled-jsx": {
- "version": "5.0.0",
- "resolved": "https://registry.npmjs.org/styled-jsx/-/styled-jsx-5.0.0.tgz",
- "integrity": "sha512-qUqsWoBquEdERe10EW8vLp3jT25s/ssG1/qX5gZ4wu15OZpmSMFI2v+fWlRhLfykA5rFtlJ1ME8A8pm/peV4WA==",
+ "version": "5.1.1",
+ "resolved": "https://registry.npmjs.org/styled-jsx/-/styled-jsx-5.1.1.tgz",
+ "integrity": "sha512-pW7uC1l4mBZ8ugbiZrcIsiIvVx1UmTfw7UkC3Um2tmfUq9Bhk8IiyEIPl6F8agHgjzku6j0xQEZbfA5uSgSaCw==",
+ "dependencies": {
+ "client-only": "0.0.1"
+ },
"engines": {
"node": ">= 12.0.0"
},
"peerDependencies": {
- "react": ">= 16.8.0 || 17.x.x || 18.x.x"
+ "react": ">= 16.8.0 || 17.x.x || ^18.0.0-0"
},
"peerDependenciesMeta": {
"@babel/core": {
@@ -5724,17 +5807,6 @@
}
}
},
- "node_modules/use-subscription": {
- "version": "1.5.1",
- "resolved": "https://registry.npmjs.org/use-subscription/-/use-subscription-1.5.1.tgz",
- "integrity": "sha512-Xv2a1P/yReAjAbhylMfFplFKj9GssgTwN7RlcTxBujFQcloStWNDQdc4g4NRWH9xS4i/FDk04vQBptAXoF3VcA==",
- "dependencies": {
- "object-assign": "^4.1.1"
- },
- "peerDependencies": {
- "react": "^16.8.0 || ^17.0.0"
- }
- },
"node_modules/utf-8-validate": {
"version": "5.0.10",
"resolved": "https://registry.npmjs.org/utf-8-validate/-/utf-8-validate-5.0.10.tgz",
@@ -5833,9 +5905,9 @@
}
},
"node_modules/word-wrap": {
- "version": "1.2.3",
- "resolved": "https://registry.npmjs.org/word-wrap/-/word-wrap-1.2.3.tgz",
- "integrity": "sha512-Hz/mrNwitNRh/HUAtM/VT/5VH+ygD6DV7mYKZAtHOrbs8U7lvPS6xf7EJKMF0uW1KJCl0H701g3ZGus+muE5vQ==",
+ "version": "1.2.4",
+ "resolved": "https://registry.npmjs.org/word-wrap/-/word-wrap-1.2.4.tgz",
+ "integrity": "sha512-2V81OA4ugVo5pRo46hAoD2ivUJx8jXmWXfUkY4KFNw0hEptvN0QfH3K4nHiwzGeKl5rFKedV48QVoqYavy4YpA==",
"dev": true,
"engines": {
"node": ">=0.10.0"
@@ -5954,11 +6026,18 @@
}
},
"@babel/runtime": {
- "version": "7.20.1",
- "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.20.1.tgz",
- "integrity": "sha512-mrzLkl6U9YLF8qpqI7TB82PESyEGjm/0Ly91jG575eVxMMlb8fYfOXFZIJ8XfLrJZQbm7dlKry2bJmXBUEkdFg==",
+ "version": "7.27.0",
+ "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.27.0.tgz",
+ "integrity": "sha512-VtPOkrdPHZsKc/clNqyi9WUA8TINkZ4cGk63UUE3u4pmB2k+ZMQRDuIOagv8UVd6j7k0T3+RRIb7beKTebNbcw==",
"requires": {
- "regenerator-runtime": "^0.13.10"
+ "regenerator-runtime": "^0.14.0"
+ },
+ "dependencies": {
+ "regenerator-runtime": {
+ "version": "0.14.1",
+ "resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.14.1.tgz",
+ "integrity": "sha512-dYnhHh0nJoMfnkZs6GmmhFknAGRrLznOu5nc9ML+EJxGvrx6H7teuevqVqCuPcPK//3eDrrjQhehXVx9cnkGdw=="
+ }
}
},
"@babel/runtime-corejs3": {
@@ -6022,9 +6101,9 @@
}
},
"@next/env": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/env/-/env-12.1.0.tgz",
- "integrity": "sha512-nrIgY6t17FQ9xxwH3jj0a6EOiQ/WDHUos35Hghtr+SWN/ntHIQ7UpuvSi0vaLzZVHQWaDupKI+liO5vANcDeTQ=="
+ "version": "14.2.26",
+ "resolved": "https://registry.npmjs.org/@next/env/-/env-14.2.26.tgz",
+ "integrity": "sha512-vO//GJ/YBco+H7xdQhzJxF7ub3SUwft76jwaeOyVVQFHCi5DCnkP16WHB+JBylo4vOKPoZBlR94Z8xBxNBdNJA=="
},
"@next/eslint-plugin-next": {
"version": "12.3.4",
@@ -6051,70 +6130,58 @@
}
}
},
- "@next/swc-android-arm64": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/swc-android-arm64/-/swc-android-arm64-12.1.0.tgz",
- "integrity": "sha512-/280MLdZe0W03stA69iL+v6I+J1ascrQ6FrXBlXGCsGzrfMaGr7fskMa0T5AhQIVQD4nA/46QQWxG//DYuFBcA==",
- "optional": true
- },
"@next/swc-darwin-arm64": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/swc-darwin-arm64/-/swc-darwin-arm64-12.1.0.tgz",
- "integrity": "sha512-R8vcXE2/iONJ1Unf5Ptqjk6LRW3bggH+8drNkkzH4FLEQkHtELhvcmJwkXcuipyQCsIakldAXhRbZmm3YN1vXg==",
+ "version": "14.2.26",
+ "resolved": "https://registry.npmjs.org/@next/swc-darwin-arm64/-/swc-darwin-arm64-14.2.26.tgz",
+ "integrity": "sha512-zDJY8gsKEseGAxG+C2hTMT0w9Nk9N1Sk1qV7vXYz9MEiyRoF5ogQX2+vplyUMIfygnjn9/A04I6yrUTRTuRiyQ==",
"optional": true
},
"@next/swc-darwin-x64": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/swc-darwin-x64/-/swc-darwin-x64-12.1.0.tgz",
- "integrity": "sha512-ieAz0/J0PhmbZBB8+EA/JGdhRHBogF8BWaeqR7hwveb6SYEIJaDNQy0I+ZN8gF8hLj63bEDxJAs/cEhdnTq+ug==",
- "optional": true
- },
- "@next/swc-linux-arm-gnueabihf": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/swc-linux-arm-gnueabihf/-/swc-linux-arm-gnueabihf-12.1.0.tgz",
- "integrity": "sha512-njUd9hpl6o6A5d08dC0cKAgXKCzm5fFtgGe6i0eko8IAdtAPbtHxtpre3VeSxdZvuGFh+hb0REySQP9T1ttkog==",
+ "version": "14.2.26",
+ "resolved": "https://registry.npmjs.org/@next/swc-darwin-x64/-/swc-darwin-x64-14.2.26.tgz",
+ "integrity": "sha512-U0adH5ryLfmTDkahLwG9sUQG2L0a9rYux8crQeC92rPhi3jGQEY47nByQHrVrt3prZigadwj/2HZ1LUUimuSbg==",
"optional": true
},
"@next/swc-linux-arm64-gnu": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/swc-linux-arm64-gnu/-/swc-linux-arm64-gnu-12.1.0.tgz",
- "integrity": "sha512-OqangJLkRxVxMhDtcb7Qn1xjzFA3s50EIxY7mljbSCLybU+sByPaWAHY4px97ieOlr2y4S0xdPKkQ3BCAwyo6Q==",
+ "version": "14.2.26",
+ "resolved": "https://registry.npmjs.org/@next/swc-linux-arm64-gnu/-/swc-linux-arm64-gnu-14.2.26.tgz",
+ "integrity": "sha512-SINMl1I7UhfHGM7SoRiw0AbwnLEMUnJ/3XXVmhyptzriHbWvPPbbm0OEVG24uUKhuS1t0nvN/DBvm5kz6ZIqpg==",
"optional": true
},
"@next/swc-linux-arm64-musl": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/swc-linux-arm64-musl/-/swc-linux-arm64-musl-12.1.0.tgz",
- "integrity": "sha512-hB8cLSt4GdmOpcwRe2UzI5UWn6HHO/vLkr5OTuNvCJ5xGDwpPXelVkYW/0+C3g5axbDW2Tym4S+MQCkkH9QfWA==",
+ "version": "14.2.26",
+ "resolved": "https://registry.npmjs.org/@next/swc-linux-arm64-musl/-/swc-linux-arm64-musl-14.2.26.tgz",
+ "integrity": "sha512-s6JaezoyJK2DxrwHWxLWtJKlqKqTdi/zaYigDXUJ/gmx/72CrzdVZfMvUc6VqnZ7YEvRijvYo+0o4Z9DencduA==",
"optional": true
},
"@next/swc-linux-x64-gnu": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/swc-linux-x64-gnu/-/swc-linux-x64-gnu-12.1.0.tgz",
- "integrity": "sha512-OKO4R/digvrVuweSw/uBM4nSdyzsBV5EwkUeeG4KVpkIZEe64ZwRpnFB65bC6hGwxIBnTv5NMSnJ+0K/WmG78A==",
+ "version": "14.2.26",
+ "resolved": "https://registry.npmjs.org/@next/swc-linux-x64-gnu/-/swc-linux-x64-gnu-14.2.26.tgz",
+ "integrity": "sha512-FEXeUQi8/pLr/XI0hKbe0tgbLmHFRhgXOUiPScz2hk0hSmbGiU8aUqVslj/6C6KA38RzXnWoJXo4FMo6aBxjzg==",
"optional": true
},
"@next/swc-linux-x64-musl": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/swc-linux-x64-musl/-/swc-linux-x64-musl-12.1.0.tgz",
- "integrity": "sha512-JohhgAHZvOD3rQY7tlp7NlmvtvYHBYgY0x5ZCecUT6eCCcl9lv6iV3nfu82ErkxNk1H893fqH0FUpznZ/H3pSw==",
+ "version": "14.2.26",
+ "resolved": "https://registry.npmjs.org/@next/swc-linux-x64-musl/-/swc-linux-x64-musl-14.2.26.tgz",
+ "integrity": "sha512-BUsomaO4d2DuXhXhgQCVt2jjX4B4/Thts8nDoIruEJkhE5ifeQFtvW5c9JkdOtYvE5p2G0hcwQ0UbRaQmQwaVg==",
"optional": true
},
"@next/swc-win32-arm64-msvc": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/swc-win32-arm64-msvc/-/swc-win32-arm64-msvc-12.1.0.tgz",
- "integrity": "sha512-T/3gIE6QEfKIJ4dmJk75v9hhNiYZhQYAoYm4iVo1TgcsuaKLFa+zMPh4056AHiG6n9tn2UQ1CFE8EoybEsqsSw==",
+ "version": "14.2.26",
+ "resolved": "https://registry.npmjs.org/@next/swc-win32-arm64-msvc/-/swc-win32-arm64-msvc-14.2.26.tgz",
+ "integrity": "sha512-5auwsMVzT7wbB2CZXQxDctpWbdEnEW/e66DyXO1DcgHxIyhP06awu+rHKshZE+lPLIGiwtjo7bsyeuubewwxMw==",
"optional": true
},
"@next/swc-win32-ia32-msvc": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/swc-win32-ia32-msvc/-/swc-win32-ia32-msvc-12.1.0.tgz",
- "integrity": "sha512-iwnKgHJdqhIW19H9PRPM9j55V6RdcOo6rX+5imx832BCWzkDbyomWnlzBfr6ByUYfhohb8QuH4hSGEikpPqI0Q==",
+ "version": "14.2.26",
+ "resolved": "https://registry.npmjs.org/@next/swc-win32-ia32-msvc/-/swc-win32-ia32-msvc-14.2.26.tgz",
+ "integrity": "sha512-GQWg/Vbz9zUGi9X80lOeGsz1rMH/MtFO/XqigDznhhhTfDlDoynCM6982mPCbSlxJ/aveZcKtTlwfAjwhyxDpg==",
"optional": true
},
"@next/swc-win32-x64-msvc": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/@next/swc-win32-x64-msvc/-/swc-win32-x64-msvc-12.1.0.tgz",
- "integrity": "sha512-aBvcbMwuanDH4EMrL2TthNJy+4nP59Bimn8egqv6GHMVj0a44cU6Au4PjOhLNqEh9l+IpRGBqMTzec94UdC5xg==",
+ "version": "14.2.26",
+ "resolved": "https://registry.npmjs.org/@next/swc-win32-x64-msvc/-/swc-win32-x64-msvc-14.2.26.tgz",
+ "integrity": "sha512-2rdB3T1/Gp7bv1eQTTm9d1Y1sv9UuJ2LAwOE0Pe2prHKe32UNscj7YS13fRB37d0GAiGNR+Y7ZcW8YjDI8Ns0w==",
"optional": true
},
"@nodelib/fs.scandir": {
@@ -7653,6 +7720,20 @@
}
}
},
+ "@swc/counter": {
+ "version": "0.1.3",
+ "resolved": "https://registry.npmjs.org/@swc/counter/-/counter-0.1.3.tgz",
+ "integrity": "sha512-e2BR4lsJkkRlKZ/qCHPw9ZaSxc0MVUd7gtbtaB7aMvHeJVYe8sOB8DBZkP2DtISHGSku9sCK6T6cnY0CtXrOCQ=="
+ },
+ "@swc/helpers": {
+ "version": "0.5.5",
+ "resolved": "https://registry.npmjs.org/@swc/helpers/-/helpers-0.5.5.tgz",
+ "integrity": "sha512-KGYxvIOXcceOAbEk4bi/dVLEK9z8sZ0uBB3Il5b1rhfClSpcX0yfRO0KmTkqR2cnQDymwLB+25ZyMzICg/cm/A==",
+ "requires": {
+ "@swc/counter": "^0.1.3",
+ "tslib": "^2.4.0"
+ }
+ },
"@tailwindcss/forms": {
"version": "0.4.1",
"resolved": "https://registry.npmjs.org/@tailwindcss/forms/-/forms-0.4.1.tgz",
@@ -8011,11 +8092,11 @@
}
},
"braces": {
- "version": "3.0.2",
- "resolved": "https://registry.npmjs.org/braces/-/braces-3.0.2.tgz",
- "integrity": "sha512-b8um+L1RzM3WDSzvhm6gIz1yfTbBt6YTlcEKAvsmqCZZFw46z626lVj9j1yEPW33H5H+lBQpZMP1k8l+78Ha0A==",
+ "version": "3.0.3",
+ "resolved": "https://registry.npmjs.org/braces/-/braces-3.0.3.tgz",
+ "integrity": "sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA==",
"requires": {
- "fill-range": "^7.0.1"
+ "fill-range": "^7.1.1"
}
},
"browserslist": {
@@ -8038,6 +8119,14 @@
"node-gyp-build": "^4.3.0"
}
},
+ "busboy": {
+ "version": "1.6.0",
+ "resolved": "https://registry.npmjs.org/busboy/-/busboy-1.6.0.tgz",
+ "integrity": "sha512-8SFQbg/0hQ9xy3UNTB0YEnsNBbWfhf7RtnzpL7TkBiTBRfrQ9Fxcnz7VJsleJpyp6rVLvXiuORqjlHi5q+PYuA==",
+ "requires": {
+ "streamsearch": "^1.1.0"
+ }
+ },
"call-bind": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/call-bind/-/call-bind-1.0.2.tgz",
@@ -8059,9 +8148,9 @@
"integrity": "sha512-QOSvevhslijgYwRx6Rv7zKdMF8lbRmx+uQGx2+vDc+KI/eBnsy9kit5aj23AgGu3pa4t9AgwbnXWqS+iOY+2aA=="
},
"caniuse-lite": {
- "version": "1.0.30001434",
- "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001434.tgz",
- "integrity": "sha512-aOBHrLmTQw//WFa2rcF1If9fa3ypkC1wzqqiKHgfdrXTWcU8C4gKVZT77eQAPWN1APys3+uQ0Df07rKauXGEYA=="
+ "version": "1.0.30001689",
+ "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001689.tgz",
+ "integrity": "sha512-CmeR2VBycfa+5/jOfnp/NpWPGd06nf1XYiefUvhXFfZE4GkRc9jv+eGPS4nT558WS/8lYCzV8SlANCIPvbWP1g=="
},
"chalk": {
"version": "4.1.2",
@@ -8333,12 +8422,13 @@
}
},
"es5-ext": {
- "version": "0.10.62",
- "resolved": "https://registry.npmjs.org/es5-ext/-/es5-ext-0.10.62.tgz",
- "integrity": "sha512-BHLqn0klhEpnOKSrzn/Xsz2UIW8j+cGmo9JLzr8BiUapV8hPL9+FliFqjwr9ngW7jWdnxv6eO+/LqyhJVqgrjA==",
+ "version": "0.10.64",
+ "resolved": "https://registry.npmjs.org/es5-ext/-/es5-ext-0.10.64.tgz",
+ "integrity": "sha512-p2snDhiLaXe6dahss1LddxqEm+SkuDvV8dnIQG0MWjyHpcMNfXKPE+/Cc0y+PhxJX3A4xGNeFCj5oc0BUh6deg==",
"requires": {
"es6-iterator": "^2.0.3",
"es6-symbol": "^3.1.3",
+ "esniff": "^2.0.1",
"next-tick": "^1.1.0"
}
},
@@ -8646,6 +8736,24 @@
"integrity": "sha512-mQ+suqKJVyeuwGYHAdjMFqjCyfl8+Ldnxuyp3ldiMBFKkvytrXUZWaiPCEav8qDHKty44bD+qV1IP4T+w+xXRA==",
"dev": true
},
+ "esniff": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/esniff/-/esniff-2.0.1.tgz",
+ "integrity": "sha512-kTUIGKQ/mDPFoJ0oVfcmyJn4iBDRptjNVIzwIFR7tqWXdVI9xfA2RMwY/gbSpJG3lkdWNEjLap/NqVHZiJsdfg==",
+ "requires": {
+ "d": "^1.0.1",
+ "es5-ext": "^0.10.62",
+ "event-emitter": "^0.3.5",
+ "type": "^2.7.2"
+ },
+ "dependencies": {
+ "type": {
+ "version": "2.7.2",
+ "resolved": "https://registry.npmjs.org/type/-/type-2.7.2.tgz",
+ "integrity": "sha512-dzlvlNlt6AXU7EBSfpAscydQ7gXB+pPGsPnfJnZpiNJBDj7IaJzQlBZYGdEi4R9HmPdBv2XmWJ6YUtoTa7lmCw=="
+ }
+ }
+ },
"espree": {
"version": "9.3.1",
"resolved": "https://registry.npmjs.org/espree/-/espree-9.3.1.tgz",
@@ -8687,6 +8795,15 @@
"integrity": "sha512-kVscqXk4OCp68SZ0dkgEKVi6/8ij300KBWTJq32P/dYeWTSwK41WyTxalN1eRmA5Z9UU/LX9D7FWSmV9SAYx6g==",
"dev": true
},
+ "event-emitter": {
+ "version": "0.3.5",
+ "resolved": "https://registry.npmjs.org/event-emitter/-/event-emitter-0.3.5.tgz",
+ "integrity": "sha512-D9rRn9y7kLPnJ+hMq7S/nhvoKwwvVJahBi2BPmx3bvbsEdK3W9ii8cBSGjP+72/LnM4n6fo3+dkCX5FeTQruXA==",
+ "requires": {
+ "d": "1",
+ "es5-ext": "~0.10.14"
+ }
+ },
"ext": {
"version": "1.7.0",
"resolved": "https://registry.npmjs.org/ext/-/ext-1.7.0.tgz",
@@ -8760,9 +8877,9 @@
}
},
"fill-range": {
- "version": "7.0.1",
- "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.0.1.tgz",
- "integrity": "sha512-qOo9F+dMUmC2Lcb4BbVvnKJxTPjCm+RRpe4gDuGrzkL7mEVl/djYSu2OdQ2Pa302N4oqkSg9ir6jaLWJ2USVpQ==",
+ "version": "7.1.1",
+ "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.1.1.tgz",
+ "integrity": "sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg==",
"requires": {
"to-regex-range": "^5.0.1"
}
@@ -8926,6 +9043,11 @@
"slash": "^3.0.0"
}
},
+ "graceful-fs": {
+ "version": "4.2.11",
+ "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.11.tgz",
+ "integrity": "sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ=="
+ },
"has": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/has/-/has-1.0.3.tgz",
@@ -9326,12 +9448,12 @@
"integrity": "sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg=="
},
"micromatch": {
- "version": "4.0.4",
- "resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.4.tgz",
- "integrity": "sha512-pRmzw/XUcwXGpD9aI9q/0XOwLNygjETJ8y0ao0wdqprrzDa4YnxLcz7fQRZr8voh8V10kGhABbNcHVk5wHgWwg==",
+ "version": "4.0.8",
+ "resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.8.tgz",
+ "integrity": "sha512-PXwfBhYu0hBCPw8Dn0E+WDYb7af3dSLVWKi3HGv84IdF4TyFoC0ysxFd0Goxw7nSv4T/PzEJQxsYsEiFCKo2BA==",
"requires": {
- "braces": "^3.0.1",
- "picomatch": "^2.2.3"
+ "braces": "^3.0.3",
+ "picomatch": "^2.3.1"
}
},
"mini-svg-data-uri": {
@@ -9360,9 +9482,9 @@
"dev": true
},
"nanoid": {
- "version": "3.3.4",
- "resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.3.4.tgz",
- "integrity": "sha512-MqBkQh/OHTS2egovRtLk45wEyNXwF+cokD+1YPf9u5VfJiRdAiRwB2froX5Co9Rh20xs4siNPm8naNotSD6RBw=="
+ "version": "3.3.8",
+ "resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.3.8.tgz",
+ "integrity": "sha512-WNLf5Sd8oZxOm+TzppcYk8gVOgP+l58xNy58D0nbUnOxOWRWvlcCV4kUF7ltmI6PsrLl/BgKEyS4mqsGChFN0w=="
},
"natural-compare": {
"version": "1.4.0",
@@ -9371,36 +9493,36 @@
"dev": true
},
"next": {
- "version": "12.1.0",
- "resolved": "https://registry.npmjs.org/next/-/next-12.1.0.tgz",
- "integrity": "sha512-s885kWvnIlxsUFHq9UGyIyLiuD0G3BUC/xrH0CEnH5lHEWkwQcHOORgbDF0hbrW9vr/7am4ETfX4A7M6DjrE7Q==",
- "requires": {
- "@next/env": "12.1.0",
- "@next/swc-android-arm64": "12.1.0",
- "@next/swc-darwin-arm64": "12.1.0",
- "@next/swc-darwin-x64": "12.1.0",
- "@next/swc-linux-arm-gnueabihf": "12.1.0",
- "@next/swc-linux-arm64-gnu": "12.1.0",
- "@next/swc-linux-arm64-musl": "12.1.0",
- "@next/swc-linux-x64-gnu": "12.1.0",
- "@next/swc-linux-x64-musl": "12.1.0",
- "@next/swc-win32-arm64-msvc": "12.1.0",
- "@next/swc-win32-ia32-msvc": "12.1.0",
- "@next/swc-win32-x64-msvc": "12.1.0",
- "caniuse-lite": "^1.0.30001283",
- "postcss": "8.4.5",
- "styled-jsx": "5.0.0",
- "use-subscription": "1.5.1"
+ "version": "14.2.26",
+ "resolved": "https://registry.npmjs.org/next/-/next-14.2.26.tgz",
+ "integrity": "sha512-b81XSLihMwCfwiUVRRja3LphLo4uBBMZEzBBWMaISbKTwOmq3wPknIETy/8000tr7Gq4WmbuFYPS7jOYIf+ZJw==",
+ "requires": {
+ "@next/env": "14.2.26",
+ "@next/swc-darwin-arm64": "14.2.26",
+ "@next/swc-darwin-x64": "14.2.26",
+ "@next/swc-linux-arm64-gnu": "14.2.26",
+ "@next/swc-linux-arm64-musl": "14.2.26",
+ "@next/swc-linux-x64-gnu": "14.2.26",
+ "@next/swc-linux-x64-musl": "14.2.26",
+ "@next/swc-win32-arm64-msvc": "14.2.26",
+ "@next/swc-win32-ia32-msvc": "14.2.26",
+ "@next/swc-win32-x64-msvc": "14.2.26",
+ "@swc/helpers": "0.5.5",
+ "busboy": "1.6.0",
+ "caniuse-lite": "^1.0.30001579",
+ "graceful-fs": "^4.2.11",
+ "postcss": "8.4.31",
+ "styled-jsx": "5.1.1"
},
"dependencies": {
"postcss": {
- "version": "8.4.5",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.5.tgz",
- "integrity": "sha512-jBDboWM8qpaqwkMwItqTQTiFikhs/67OYVvblFFTM7MrZjt6yMKd6r2kgXizEbTTljacm4NldIlZnhbjr84QYg==",
+ "version": "8.4.31",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.31.tgz",
+ "integrity": "sha512-PS08Iboia9mts/2ygV3eLpY5ghnUcfLV/EXTOW1E2qYxJKGGBUtNjN76FYHnMs36RmARn41bC0AZmn+rR0OVpQ==",
"requires": {
- "nanoid": "^3.1.30",
+ "nanoid": "^3.3.6",
"picocolors": "^1.0.0",
- "source-map-js": "^1.0.1"
+ "source-map-js": "^1.0.2"
}
}
}
@@ -9590,11 +9712,11 @@
"integrity": "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA=="
},
"postcss": {
- "version": "8.4.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.14.tgz",
- "integrity": "sha512-E398TUmfAYFPBSdzgeieK2Y1+1cpdxJx8yXbK/m57nRhKSmk1GB2tO4lbLBtlkfPQTDKfe4Xqv1ASWPpayPEig==",
+ "version": "8.4.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.32.tgz",
+ "integrity": "sha512-D/kj5JNu6oo2EIy+XL/26JEDTlIbB8hw85G8StOE6L74RQAVVP5rej6wxCNqyMbR4RkPfqvezVbPw81Ngd6Kcw==",
"requires": {
- "nanoid": "^3.3.4",
+ "nanoid": "^3.3.7",
"picocolors": "^1.0.0",
"source-map-js": "^1.0.2"
}
@@ -9732,7 +9854,8 @@
"regenerator-runtime": {
"version": "0.13.11",
"resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.13.11.tgz",
- "integrity": "sha512-kY1AZVr2Ra+t+piVaJ4gxaFaReZVH40AKNo7UCX6W+dEwBo/2oZJzqfuN1qLq1oL45o56cPaTXELwrTh8Fpggg=="
+ "integrity": "sha512-kY1AZVr2Ra+t+piVaJ4gxaFaReZVH40AKNo7UCX6W+dEwBo/2oZJzqfuN1qLq1oL45o56cPaTXELwrTh8Fpggg==",
+ "dev": true
},
"regexp.prototype.flags": {
"version": "1.4.3",
@@ -9854,6 +9977,11 @@
"resolved": "https://registry.npmjs.org/source-map-js/-/source-map-js-1.0.2.tgz",
"integrity": "sha512-R0XvVJ9WusLiqTCEiGCmICCMplcCkIwwR11mOSD9CR5u+IXYdiseeEuXCVAjS54zqwkLcPNnmU4OeJ6tUrWhDw=="
},
+ "streamsearch": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/streamsearch/-/streamsearch-1.1.0.tgz",
+ "integrity": "sha512-Mcc5wHehp9aXz1ax6bZUyY5afg9u2rv5cqQI3mRrYkGC8rW2hM02jWuwjtL++LS5qinSyhj2QfLyNsuc+VsExg=="
+ },
"string.prototype.matchall": {
"version": "4.0.8",
"resolved": "https://registry.npmjs.org/string.prototype.matchall/-/string.prototype.matchall-4.0.8.tgz",
@@ -9914,10 +10042,12 @@
"dev": true
},
"styled-jsx": {
- "version": "5.0.0",
- "resolved": "https://registry.npmjs.org/styled-jsx/-/styled-jsx-5.0.0.tgz",
- "integrity": "sha512-qUqsWoBquEdERe10EW8vLp3jT25s/ssG1/qX5gZ4wu15OZpmSMFI2v+fWlRhLfykA5rFtlJ1ME8A8pm/peV4WA==",
- "requires": {}
+ "version": "5.1.1",
+ "resolved": "https://registry.npmjs.org/styled-jsx/-/styled-jsx-5.1.1.tgz",
+ "integrity": "sha512-pW7uC1l4mBZ8ugbiZrcIsiIvVx1UmTfw7UkC3Um2tmfUq9Bhk8IiyEIPl6F8agHgjzku6j0xQEZbfA5uSgSaCw==",
+ "requires": {
+ "client-only": "0.0.1"
+ }
},
"supports-color": {
"version": "7.2.0",
@@ -10106,14 +10236,6 @@
"tslib": "^2.0.0"
}
},
- "use-subscription": {
- "version": "1.5.1",
- "resolved": "https://registry.npmjs.org/use-subscription/-/use-subscription-1.5.1.tgz",
- "integrity": "sha512-Xv2a1P/yReAjAbhylMfFplFKj9GssgTwN7RlcTxBujFQcloStWNDQdc4g4NRWH9xS4i/FDk04vQBptAXoF3VcA==",
- "requires": {
- "object-assign": "^4.1.1"
- }
- },
"utf-8-validate": {
"version": "5.0.10",
"resolved": "https://registry.npmjs.org/utf-8-validate/-/utf-8-validate-5.0.10.tgz",
@@ -10198,9 +10320,9 @@
}
},
"word-wrap": {
- "version": "1.2.3",
- "resolved": "https://registry.npmjs.org/word-wrap/-/word-wrap-1.2.3.tgz",
- "integrity": "sha512-Hz/mrNwitNRh/HUAtM/VT/5VH+ygD6DV7mYKZAtHOrbs8U7lvPS6xf7EJKMF0uW1KJCl0H701g3ZGus+muE5vQ==",
+ "version": "1.2.4",
+ "resolved": "https://registry.npmjs.org/word-wrap/-/word-wrap-1.2.4.tgz",
+ "integrity": "sha512-2V81OA4ugVo5pRo46hAoD2ivUJx8jXmWXfUkY4KFNw0hEptvN0QfH3K4nHiwzGeKl5rFKedV48QVoqYavy4YpA==",
"dev": true
},
"wrappy": {
diff --git a/demo/package.json b/demo/package.json
index efe0816..f0870a5 100644
--- a/demo/package.json
+++ b/demo/package.json
@@ -14,7 +14,7 @@
"lodash.clonedeep": "^4.5.0",
"lodash.samplesize": "^4.2.0",
"lodash.throttle": "^4.1.1",
- "next": "^12.1.0",
+ "next": "^14.2.26",
"react": "17.0.2",
"react-dom": "17.0.2"
},
@@ -27,7 +27,7 @@
"autoprefixer": "^10.4.4",
"eslint": "8.11.0",
"eslint-config-next": "^12.3.4",
- "postcss": "^8.4.12",
+ "postcss": "^8.4.31",
"tailwindcss": "^3.0.23",
"typescript": "4.6.2"
}
diff --git a/deploy/fly/prod.toml b/deploy/fly/prod.toml
index df71a88..8a3a0fd 100644
--- a/deploy/fly/prod.toml
+++ b/deploy/fly/prod.toml
@@ -1,7 +1,15 @@
+# fly.toml app configuration file generated for realtime-prod on 2023-08-08T09:07:09-07:00
+#
+# See https://fly.io/docs/reference/configuration/ for information about how to use this file.
+#
+
app = "realtime-prod"
+primary_region = "sea"
kill_signal = "SIGTERM"
-kill_timeout = 5
-processes = []
+kill_timeout = "5s"
+
+[experimental]
+ auto_rollback = true
[deploy]
release_command = "/app/bin/migrate"
@@ -9,49 +17,38 @@ processes = []
[env]
DNS_NODES = "realtime-prod.internal"
+ ERL_CRASH_DUMP = "/data/erl_crash.dump"
+ ERL_CRASH_DUMP_SECONDS = "30"
-[experimental]
- allowed_public_ports = []
- auto_rollback = true
-
-[mounts]
- source="data_vol"
- destination="/data"
[[services]]
+ protocol = "tcp"
internal_port = 4000
processes = ["app"]
- protocol = "tcp"
- script_checks = []
- [services.concurrency]
- # should match :ranch.info max_connections * num_acceptors
- hard_limit = 100000
- soft_limit = 100000
- type = "connections"
[[services.ports]]
- force_https = true
- handlers = ["http"]
port = 80
+ handlers = ["http"]
+ force_https = true
[[services.ports]]
- handlers = ["tls", "http"]
port = 443
+ handlers = ["tls", "http"]
+ [services.concurrency]
+ type = "connections"
+ hard_limit = 100000
+ soft_limit = 100000
[[services.tcp_checks]]
- grace_period = "30s"
interval = "15s"
- restart_limit = 6
timeout = "2s"
-
+ grace_period = "30s"
+
[[services.http_checks]]
- interval = 10000
+ interval = "10s"
+ timeout = "2s"
grace_period = "5s"
method = "get"
path = "/"
protocol = "http"
- restart_limit = 0
- timeout = 2000
tls_skip_verify = false
- [services.http_checks.headers]
-
diff --git a/deploy/fly/qa.toml b/deploy/fly/qa.toml
index 1453cd6..1fc957e 100644
--- a/deploy/fly/qa.toml
+++ b/deploy/fly/qa.toml
@@ -9,6 +9,8 @@ processes = []
[env]
DNS_NODES = "realtime-qa.internal"
+ ERL_CRASH_DUMP = "/data/erl_crash.dump"
+  ERL_CRASH_DUMP_SECONDS = "30"
[experimental]
allowed_public_ports = []
diff --git a/deploy/fly/staging.toml b/deploy/fly/staging.toml
index f111500..7bcad8a 100644
--- a/deploy/fly/staging.toml
+++ b/deploy/fly/staging.toml
@@ -1,7 +1,15 @@
+# fly.toml app configuration file generated for realtime-staging on 2023-06-27T07:39:20-07:00
+#
+# See https://fly.io/docs/reference/configuration/ for information about how to use this file.
+#
+
app = "realtime-staging"
+primary_region = "lhr"
kill_signal = "SIGTERM"
-kill_timeout = 5
-processes = []
+kill_timeout = "5s"
+
+[experimental]
+ auto_rollback = true
[deploy]
release_command = "/app/bin/migrate"
@@ -9,48 +17,43 @@ processes = []
[env]
DNS_NODES = "realtime-staging.internal"
+ ERL_CRASH_DUMP = "/data/erl_crash.dump"
+ ERL_CRASH_DUMP_SECONDS = "30"
-[experimental]
- allowed_public_ports = []
- auto_rollback = true
-
-[mounts]
- source="data_vol"
- destination="/data"
+[[mounts]]
+ source = "data_vol_machines"
+ destination = "/data"
+ processes = ["app"]
[[services]]
+ protocol = "tcp"
internal_port = 4000
processes = ["app"]
- protocol = "tcp"
- script_checks = []
- [services.concurrency]
- # should match ranch.info
- hard_limit = 16384
- soft_limit = 16384
- type = "connections"
[[services.ports]]
- force_https = true
- handlers = ["http"]
port = 80
+ handlers = ["http"]
+ force_https = true
[[services.ports]]
- handlers = ["tls", "http"]
port = 443
+ handlers = ["tls", "http"]
+ [services.concurrency]
+ type = "connections"
+ hard_limit = 16384
+ soft_limit = 16384
[[services.tcp_checks]]
- grace_period = "30s"
interval = "15s"
- restart_limit = 6
timeout = "2s"
+ grace_period = "30s"
+ restart_limit = 6
[[services.http_checks]]
- interval = 10000
+ interval = "10s"
+ timeout = "2s"
grace_period = "5s"
+ restart_limit = 0
method = "get"
path = "/"
protocol = "http"
- restart_limit = 0
- timeout = 2000
- tls_skip_verify = false
- [services.http_checks.headers]
diff --git a/dev/postgres/00-setup.sql b/dev/postgres/00-tealbase-schema.sql
similarity index 100%
rename from dev/postgres/00-setup.sql
rename to dev/postgres/00-tealbase-schema.sql
diff --git a/docker-compose.dbs.yml b/docker-compose.dbs.yml
index 51247bf..0b5ab0b 100644
--- a/docker-compose.dbs.yml
+++ b/docker-compose.dbs.yml
@@ -8,7 +8,16 @@ services:
- "5432:5432"
volumes:
- ./dev/postgres:/docker-entrypoint-initdb.d/
- command: postgres -c config_file=/etc/postgresql/postgresql.conf
+ command: postgres -c config_file=/etc/postgresql/postgresql.conf
+ environment:
+ POSTGRES_HOST: /var/run/postgresql
+ POSTGRES_PASSWORD: postgres
+ tenant_db:
+ image: tealbase/postgres:14.1.0.105
+ container_name: tenant-db
+ ports:
+ - "5433:5432"
+ command: postgres -c config_file=/etc/postgresql/postgresql.conf
environment:
POSTGRES_HOST: /var/run/postgresql
POSTGRES_PASSWORD: postgres
diff --git a/docker-compose.yml b/docker-compose.yml
index 796f413..d0a2cd7 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -1,5 +1,3 @@
-version: '3'
-
services:
db:
image: tealbase/postgres:14.1.0.105
@@ -8,19 +6,28 @@ services:
- "5432:5432"
volumes:
- ./dev/postgres:/docker-entrypoint-initdb.d/
- command: postgres -c config_file=/etc/postgresql/postgresql.conf
+ command: postgres -c config_file=/etc/postgresql/postgresql.conf
+ environment:
+ POSTGRES_HOST: /var/run/postgresql
+ POSTGRES_PASSWORD: postgres
+ tenant_db:
+ image: tealbase/postgres:14.1.0.105
+ container_name: tenant-db
+ ports:
+ - "5433:5432"
+ command: postgres -c config_file=/etc/postgresql/postgresql.conf
environment:
POSTGRES_HOST: /var/run/postgresql
POSTGRES_PASSWORD: postgres
-
realtime:
depends_on:
- db
build: .
- image: local/tealbase/realtime:latest
container_name: realtime-server
ports:
- "4000:4000"
+ extra_hosts:
+ - "host.docker.internal:host-gateway"
environment:
PORT: 4000
DB_HOST: host.docker.internal
@@ -31,10 +38,13 @@ services:
DB_ENC_KEY: tealbaserealtime
DB_AFTER_CONNECT_QUERY: 'SET search_path TO _realtime'
API_JWT_SECRET: dc447559-996d-4761-a306-f47a5eab1623
- FLY_ALLOC_ID: fly123
- FLY_APP_NAME: realtime
SECRET_KEY_BASE: UpNVntn3cDxHJpq99YMc1T1AQgQpc8kfYTuRgBiYa15BLrx8etQoXz3gZv1/u2oq
ERL_AFLAGS: -proto_dist inet_tcp
- ENABLE_TAILSCALE: "false"
+ RLIMIT_NOFILE: 1000000
DNS_NODES: "''"
- command: sh -c "/app/bin/migrate && /app/bin/realtime eval 'Realtime.Release.seeds(Realtime.Repo)' && /app/bin/server"
+ APP_NAME: realtime
+ RUN_JANITOR: true
+ JANITOR_INTERVAL: 60000
+ LOG_LEVEL: "info"
+ SEED_SELF_HOST: true
+
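
The compose change above also adds a second Postgres container, `tenant_db`, published on host port `5433` alongside the main database on `5432`. A quick way to confirm it is reachable from IEx; this is a sketch, assuming the image's default `postgres` database and user with the password set in the compose file:

```elixir
# Hedged sanity check for the new tenant database on localhost:5433.
# The database/user names are assumed image defaults, not taken from the diff.
{:ok, conn} =
  Postgrex.start_link(
    hostname: "localhost",
    port: 5433,
    database: "postgres",
    username: "postgres",
    password: "postgres"
  )

Postgrex.query!(conn, "select 1", [])
```
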
diff --git a/lib/extensions/extensions.ex b/lib/extensions/extensions.ex
index 0c8e4a9..aaf2882 100644
--- a/lib/extensions/extensions.ex
+++ b/lib/extensions/extensions.ex
@@ -1,4 +1,7 @@
defmodule Realtime.Extensions do
+ @moduledoc """
+ This module provides functions to get extension settings.
+ """
def db_settings(type) do
db_settings =
Application.get_env(:realtime, :extensions)
diff --git a/lib/extensions/postgres_cdc_rls/cdc_rls.ex b/lib/extensions/postgres_cdc_rls/cdc_rls.ex
index c7b9f34..9174398 100644
--- a/lib/extensions/postgres_cdc_rls/cdc_rls.ex
+++ b/lib/extensions/postgres_cdc_rls/cdc_rls.ex
@@ -1,52 +1,41 @@
defmodule Extensions.PostgresCdcRls do
- @moduledoc false
+ @moduledoc """
+ Callbacks for initiating a Postgres connection and creating a Realtime subscription for database changes.
+ """
+
@behaviour Realtime.PostgresCdc
require Logger
+ import Realtime.Logs
alias RealtimeWeb.Endpoint
- alias Realtime.PostgresCdc
alias Extensions.PostgresCdcRls, as: Rls
alias Rls.Subscriptions
+ alias Realtime.Rpc
+ @spec handle_connect(map()) :: {:ok, {pid(), pid()}} | nil
def handle_connect(args) do
- Enum.reduce_while(1..5, nil, fn retry, acc ->
- get_manager_conn(args["id"])
- |> case do
- nil ->
- start_distributed(args)
- if retry > 1, do: Process.sleep(1_000)
- {:cont, acc}
-
- :wait ->
- Process.sleep(1_000)
- {:cont, acc}
-
- {:ok, pid, conn} ->
- {:halt, {:ok, {pid, conn}}}
- end
- end)
+ case get_manager_conn(args["id"]) do
+ {:error, nil} ->
+ start_distributed(args)
+ nil
+
+ {:error, :wait} ->
+ nil
+
+ {:ok, pid, conn} ->
+ {:ok, {pid, conn}}
+ end
end
def handle_after_connect({manager_pid, conn}, settings, params) do
- opts = params
publication = settings["publication"]
+ opts = [conn, publication, params, manager_pid, self()]
conn_node = node(conn)
if conn_node !== node() do
- :rpc.call(conn_node, Subscriptions, :create, [conn, publication, opts], 15_000)
+ Rpc.call(conn_node, Subscriptions, :create, opts, timeout: 15_000)
else
- Subscriptions.create(conn, publication, opts)
- end
- |> case do
- {:ok, _} = response ->
- for %{id: id} <- params do
- send(manager_pid, {:subscribed, {self(), id}})
- end
-
- response
-
- other ->
- other
+ apply(Subscriptions, :create, opts)
end
end
@@ -54,10 +43,18 @@ defmodule Extensions.PostgresCdcRls do
Endpoint.subscribe("realtime:postgres:" <> tenant, metadata)
end
- def handle_stop(tenant, timeout) do
+ @doc """
+ Stops the Supervision tree for a tenant.
+
+ Expects an `external_id` as the `tenant`.
+ """
+
+ @spec handle_stop(String.t(), non_neg_integer()) :: :ok
+ def handle_stop(tenant, timeout) when is_binary(tenant) do
case :syn.whereis_name({__MODULE__, tenant}) do
:undefined ->
Logger.warning("Database supervisor not found for tenant #{tenant}")
+ :ok
pid ->
DynamicSupervisor.stop(pid, :shutdown, timeout)
@@ -67,92 +64,59 @@ defmodule Extensions.PostgresCdcRls do
## Internal functions
def start_distributed(%{"region" => region, "id" => tenant} = args) do
- fly_region = PostgresCdc.aws_to_fly(region)
- launch_node = PostgresCdc.launch_node(tenant, fly_region, node())
+ platform_region = Realtime.Nodes.platform_region_translator(region)
+ launch_node = Realtime.Nodes.launch_node(tenant, platform_region, node())
Logger.warning(
- "Starting distributed postgres extension #{inspect(lauch_node: launch_node, region: region, fly_region: fly_region)}"
+ "Starting distributed postgres extension #{inspect(lauch_node: launch_node, region: region, platform_region: platform_region)}"
)
- case :rpc.call(launch_node, __MODULE__, :start, [args], 30_000) do
+ case Rpc.call(launch_node, __MODULE__, :start, [args], timeout: 30_000, tenant: tenant) do
{:ok, _pid} = ok ->
ok
{:error, {:already_started, _pid}} = error ->
- Logger.info("Postgres Extention already started on node #{inspect(launch_node)}")
+ Logger.info("Postgres Extension already started on node #{inspect(launch_node)}")
error
error ->
- Logger.error("Error starting Postgres Extention: #{inspect(error, pretty: true)}")
+ log_error("ErrorStartingPostgresCDC", error)
error
end
end
@doc """
- Start db poller.
-
+ Start db poller. Expects an `external_id` as a `tenant`.
"""
- @spec start(map()) :: :ok | {:error, :already_started | :reserved}
- def start(args) do
- addrtype =
- case args["ip_version"] do
- 6 ->
- :inet6
-
- _ ->
- :inet
- end
- args =
- Map.merge(args, %{
- "db_socket_opts" => [addrtype],
- "subs_pool_size" => Map.get(args, "subcriber_pool_size", 5)
- })
+ @spec start(map()) :: :ok | {:error, :already_started | :reserved}
+ def start(%{"id" => tenant} = args) when is_binary(tenant) do
+ args = Map.merge(args, %{"subs_pool_size" => Map.get(args, "subcriber_pool_size", 4)})
- Logger.debug("Starting postgres stream extension with args: #{inspect(args, pretty: true)}")
+ Logger.debug("Starting #{__MODULE__} extension with args: #{inspect(args, pretty: true)}")
DynamicSupervisor.start_child(
- {:via, PartitionSupervisor, {Rls.DynamicSupervisor, self()}},
+ {:via, PartitionSupervisor, {Rls.DynamicSupervisor, tenant}},
%{
- id: args["id"],
+ id: tenant,
start: {Rls.WorkerSupervisor, :start_link, [args]},
restart: :transient
}
)
end
- @spec get_manager_conn(String.t()) :: nil | :wait | {:ok, pid(), pid()}
+ @spec get_manager_conn(String.t()) :: {:error, nil | :wait} | {:ok, pid(), pid()}
def get_manager_conn(id) do
- :syn.lookup(__MODULE__, id)
- |> case do
- {_, %{manager: nil, subs_pool: nil}} ->
- :wait
-
- {_, %{manager: manager, subs_pool: conn}} ->
- {:ok, manager, conn}
-
- _ ->
- nil
- end
- end
-
- def create_subscription(conn, publication, opts, timeout \\ 5_000) do
- conn_node = node(conn)
-
- if conn_node !== node() do
- :rpc.call(conn_node, Subscriptions, :create, [conn, publication, opts], timeout)
- else
- Subscriptions.create(conn, publication, opts)
+ case :syn.lookup(__MODULE__, id) do
+ {_, %{manager: nil, subs_pool: nil}} -> {:error, :wait}
+ {_, %{manager: manager, subs_pool: conn}} -> {:ok, manager, conn}
+ _ -> {:error, nil}
end
end
@spec supervisor_id(String.t(), String.t()) :: {atom(), String.t(), map()}
def supervisor_id(tenant, region) do
- {
- __MODULE__,
- tenant,
- %{region: region, manager: nil, subs_pool: nil}
- }
+ {__MODULE__, tenant, %{region: region, manager: nil, subs_pool: nil}}
end
@spec update_meta(String.t(), pid(), pid()) :: {:ok, {pid(), term()}} | {:error, term()}
@@ -161,9 +125,7 @@ defmodule Extensions.PostgresCdcRls do
if node(pid) == node(manager_pid) do
%{meta | manager: manager_pid, subs_pool: subs_pool}
else
- Logger.error(
- "Node mismatch for tenant #{tenant} #{inspect(node(pid))} #{inspect(node(manager_pid))}"
- )
+ Logger.warning("Node mismatch for tenant #{tenant} #{inspect(node(pid))} #{inspect(node(manager_pid))}")
meta
end
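
The reworked `get_manager_conn/1` above returns tagged tuples instead of bare `nil`/`:wait` atoms, so `handle_connect/1` can pattern-match directly. A minimal sketch of the new contract, with the tenant id and region as placeholder values:

```elixir
# Sketch of the caller side of the new contract; "tenant_external_id" and
# "us-west-1" are placeholders, not values from the diff.
case Extensions.PostgresCdcRls.get_manager_conn("tenant_external_id") do
  {:ok, manager, conn} ->
    # Manager and subscribers pool are registered; ready to create subscriptions.
    {:ok, {manager, conn}}

  {:error, :wait} ->
    # Supervision tree is registered but still booting; the caller retries later.
    nil

  {:error, nil} ->
    # Nothing registered yet: kick off a distributed start, then retry.
    Extensions.PostgresCdcRls.start_distributed(%{
      "id" => "tenant_external_id",
      "region" => "us-west-1"
    })

    nil
end
```
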
diff --git a/lib/extensions/postgres_cdc_rls/db_settings.ex b/lib/extensions/postgres_cdc_rls/db_settings.ex
index e3a216b..0e20c74 100644
--- a/lib/extensions/postgres_cdc_rls/db_settings.ex
+++ b/lib/extensions/postgres_cdc_rls/db_settings.ex
@@ -3,26 +3,24 @@ defmodule Extensions.PostgresCdcRls.DbSettings do
Schema callbacks for CDC RLS implementation.
"""
- def default() do
+ def default do
%{
"poll_interval_ms" => 100,
"poll_max_changes" => 100,
"poll_max_record_bytes" => 1_048_576,
"publication" => "tealbase_realtime",
- "slot_name" => "tealbase_realtime_replication_slot",
- "ip_version" => 4
+ "slot_name" => "tealbase_realtime_replication_slot"
}
end
- def required() do
+ def required do
[
{"region", &is_binary/1, false},
{"db_host", &is_binary/1, true},
{"db_name", &is_binary/1, true},
{"db_user", &is_binary/1, true},
{"db_port", &is_binary/1, true},
- {"db_password", &is_binary/1, true},
- {"ip_version", &is_integer/1, false}
+ {"db_password", &is_binary/1, true}
]
end
end
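
For context, each tuple in `required/0` is `{key, validator, required?}`. A hypothetical validator (not part of the diff) showing how those tuples could be applied to a tenant settings map:

```elixir
# Hypothetical helper, for illustration only: folds the {key, check, required?}
# tuples from DbSettings.required/0 over a settings map.
defmodule SettingsCheck do
  alias Extensions.PostgresCdcRls.DbSettings

  def validate(settings) when is_map(settings) do
    DbSettings.required()
    |> Enum.reduce([], fn {key, check, required?}, errors ->
      case Map.fetch(settings, key) do
        {:ok, value} -> if check.(value), do: errors, else: ["invalid #{key}" | errors]
        :error -> if required?, do: ["missing #{key}" | errors], else: errors
      end
    end)
    |> case do
      [] -> :ok
      errors -> {:error, errors}
    end
  end
end
```
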
diff --git a/lib/extensions/postgres_cdc_rls/message_dispatcher.ex b/lib/extensions/postgres_cdc_rls/message_dispatcher.ex
index f9cba28..6a2e455 100644
--- a/lib/extensions/postgres_cdc_rls/message_dispatcher.ex
+++ b/lib/extensions/postgres_cdc_rls/message_dispatcher.ex
@@ -15,9 +15,7 @@ defmodule Extensions.PostgresCdcRls.MessageDispatcher do
_ =
Enum.reduce(topic_subscriptions, %{}, fn
- {_pid,
- {:subscriber_fastlane, fastlane_pid, serializer, ids, join_topic, tenant, is_new_api}},
- cache ->
+ {_pid, {:subscriber_fastlane, fastlane_pid, serializer, ids, join_topic, tenant, is_new_api}}, cache ->
for {bin_id, id} <- ids, reduce: [] do
acc ->
if MapSet.member?(sub_ids, bin_id) do
@@ -68,7 +66,8 @@ defmodule Extensions.PostgresCdcRls.MessageDispatcher do
end
defp count(tenant) do
- Tenants.db_events_per_second_key(tenant)
+ tenant
+ |> Tenants.db_events_per_second_key()
|> GenCounter.add()
end
end
diff --git a/lib/extensions/postgres_cdc_rls/migrations.ex b/lib/extensions/postgres_cdc_rls/migrations.ex
deleted file mode 100644
index a3b5df9..0000000
--- a/lib/extensions/postgres_cdc_rls/migrations.ex
+++ /dev/null
@@ -1,141 +0,0 @@
-defmodule Extensions.PostgresCdcRls.Migrations do
- @moduledoc """
- Run Realtime database migrations for tenant's database.
- """
-
- use GenServer
-
- alias Realtime.Repo
-
- alias Realtime.Extensions.Rls.Repo.Migrations.{
- CreateRealtimeSubscriptionTable,
- CreateRealtimeCheckFiltersTrigger,
- CreateRealtimeQuoteWal2jsonFunction,
- CreateRealtimeCheckEqualityOpFunction,
- CreateRealtimeBuildPreparedStatementSqlFunction,
- CreateRealtimeCastFunction,
- CreateRealtimeIsVisibleThroughFiltersFunction,
- CreateRealtimeApplyRlsFunction,
- GrantRealtimeUsageToAuthenticatedRole,
- EnableRealtimeApplyRlsFunctionPostgrest9Compatibility,
- UpdateRealtimeSubscriptionCheckFiltersFunctionSecurity,
- UpdateRealtimeBuildPreparedStatementSqlFunctionForCompatibilityWithAllTypes,
- EnableGenericSubscriptionClaims,
- AddWalPayloadOnErrorsInApplyRlsFunction,
- UpdateChangeTimestampToIso8601ZuluFormat,
- UpdateSubscriptionCheckFiltersFunctionDynamicTableName,
- UpdateApplyRlsFunctionToApplyIso8601,
- AddQuotedRegtypesSupport,
- AddOutputForDataLessThanEqual64BytesWhenPayloadTooLarge,
- AddQuotedRegtypesBackwardCompatibilitySupport,
- RecreateRealtimeBuildPreparedStatementSqlFunction,
- NullPassesFiltersRecreateIsVisibleThroughFilters,
- UpdateApplyRlsFunctionToPassThroughDeleteEventsOnFilter,
- MillisecondPrecisionForWalrus,
- AddInOpToFilters,
- EnableFilteringOnDeleteRecord,
- UpdateSubscriptionCheckFiltersForInFilterNonTextTypes,
- ConvertCommitTimestampToUtc,
- OutputFullRecordWhenUnchangedToast
- }
-
- alias Realtime.Helpers, as: H
-
- @migrations [
- {20_211_116_024_918, CreateRealtimeSubscriptionTable},
- {20_211_116_045_059, CreateRealtimeCheckFiltersTrigger},
- {20_211_116_050_929, CreateRealtimeQuoteWal2jsonFunction},
- {20_211_116_051_442, CreateRealtimeCheckEqualityOpFunction},
- {20_211_116_212_300, CreateRealtimeBuildPreparedStatementSqlFunction},
- {20_211_116_213_355, CreateRealtimeCastFunction},
- {20_211_116_213_934, CreateRealtimeIsVisibleThroughFiltersFunction},
- {20_211_116_214_523, CreateRealtimeApplyRlsFunction},
- {20_211_122_062_447, GrantRealtimeUsageToAuthenticatedRole},
- {20_211_124_070_109, EnableRealtimeApplyRlsFunctionPostgrest9Compatibility},
- {20_211_202_204_204, UpdateRealtimeSubscriptionCheckFiltersFunctionSecurity},
- {20_211_202_204_605,
- UpdateRealtimeBuildPreparedStatementSqlFunctionForCompatibilityWithAllTypes},
- {20_211_210_212_804, EnableGenericSubscriptionClaims},
- {20_211_228_014_915, AddWalPayloadOnErrorsInApplyRlsFunction},
- {20_220_107_221_237, UpdateChangeTimestampToIso8601ZuluFormat},
- {20_220_228_202_821, UpdateSubscriptionCheckFiltersFunctionDynamicTableName},
- {20_220_312_004_840, UpdateApplyRlsFunctionToApplyIso8601},
- {20_220_603_231_003, AddQuotedRegtypesSupport},
- {20_220_603_232_444, AddOutputForDataLessThanEqual64BytesWhenPayloadTooLarge},
- {20_220_615_214_548, AddQuotedRegtypesBackwardCompatibilitySupport},
- {20_220_712_093_339, RecreateRealtimeBuildPreparedStatementSqlFunction},
- {20_220_908_172_859, NullPassesFiltersRecreateIsVisibleThroughFilters},
- {20_220_916_233_421, UpdateApplyRlsFunctionToPassThroughDeleteEventsOnFilter},
- {20_230_119_133_233, MillisecondPrecisionForWalrus},
- {20_230_128_025_114, AddInOpToFilters},
- {20_230_128_025_212, EnableFilteringOnDeleteRecord},
- {20_230_227_211_149, UpdateSubscriptionCheckFiltersForInFilterNonTextTypes},
- {20_230_228_184_745, ConvertCommitTimestampToUtc},
- {20_230_308_225_145, OutputFullRecordWhenUnchangedToast}
- ]
-
- @spec start_link(GenServer.options()) :: GenServer.on_start()
- def start_link(opts) do
- GenServer.start_link(__MODULE__, opts)
- end
-
- ## Callbacks
-
- @impl true
- def init(%{"id" => id} = args) do
- Logger.metadata(external_id: id, project: id)
- # applying tenant's migrations
- apply_migrations(args)
- # need try to stop this PID
- {:ok, %{}}
- # {:ok, %{}, {:continue, :stop}}
- end
-
- @impl true
- def handle_continue(:stop, %{}) do
- {:stop, :normal, %{}}
- end
-
- @spec apply_migrations(map()) :: [integer()]
- defp apply_migrations(
- %{
- "db_host" => db_host,
- "db_port" => db_port,
- "db_name" => db_name,
- "db_user" => db_user,
- "db_password" => db_password,
- "db_socket_opts" => db_socket_opts
- } = _args
- ) do
- {host, port, name, user, pass} =
- H.decrypt_creds(
- db_host,
- db_port,
- db_name,
- db_user,
- db_password
- )
-
- Repo.with_dynamic_repo(
- [
- hostname: host,
- port: port,
- database: name,
- password: pass,
- username: user,
- pool_size: 2,
- socket_options: db_socket_opts
- ],
- fn repo ->
- Ecto.Migrator.run(
- Repo,
- @migrations,
- :up,
- all: true,
- prefix: "realtime",
- dynamic_repo: repo
- )
- end
- )
- end
-end
diff --git a/lib/extensions/postgres_cdc_rls/replication_poller.ex b/lib/extensions/postgres_cdc_rls/replication_poller.ex
index 47321fd..796e6ea 100644
--- a/lib/extensions/postgres_cdc_rls/replication_poller.ex
+++ b/lib/extensions/postgres_cdc_rls/replication_poller.ex
@@ -8,52 +8,34 @@ defmodule Extensions.PostgresCdcRls.ReplicationPoller do
require Logger
- import Realtime.Helpers, only: [cancel_timer: 1, decrypt_creds: 5]
+ import Realtime.Logs
+ import Realtime.Helpers
- alias Extensions.PostgresCdcRls.{Replications, MessageDispatcher}
alias DBConnection.Backoff
- alias Realtime.PubSub
- alias Realtime.Adapters.Changes.{
- DeletedRecord,
- NewRecord,
- UpdatedRecord
- }
+ alias Extensions.PostgresCdcRls.MessageDispatcher
+ alias Extensions.PostgresCdcRls.Replications
- @queue_target 5_000
+ alias Realtime.Adapters.Changes.DeletedRecord
+ alias Realtime.Adapters.Changes.NewRecord
+ alias Realtime.Adapters.Changes.UpdatedRecord
+ alias Realtime.Database
+ alias Realtime.PubSub
- def start_link(opts) do
- GenServer.start_link(__MODULE__, opts)
- end
+ def start_link(opts), do: GenServer.start_link(__MODULE__, opts)
@impl true
def init(args) do
- {:ok, conn} =
- connect_db(
- args["db_host"],
- args["db_port"],
- args["db_name"],
- args["db_user"],
- args["db_password"],
- args["db_socket_opts"]
- )
-
tenant = args["id"]
+ Logger.metadata(external_id: tenant, project: tenant)
state = %{
- backoff:
- Backoff.new(
- backoff_min: 100,
- backoff_max: 5_000,
- backoff_type: :rand_exp
- ),
- conn: conn,
+ backoff: Backoff.new(backoff_min: 100, backoff_max: 5_000, backoff_type: :rand_exp),
db_host: args["db_host"],
db_port: args["db_port"],
db_name: args["db_name"],
db_user: args["db_user"],
db_pass: args["db_password"],
- db_socket_opts: args["db_socket_opts"],
max_changes: args["poll_max_changes"],
max_record_bytes: args["poll_max_record_bytes"],
poll_interval_ms: args["poll_interval_ms"],
@@ -65,12 +47,17 @@ defmodule Extensions.PostgresCdcRls.ReplicationPoller do
tenant: tenant
}
- Logger.metadata(external_id: tenant, project: tenant)
-
- {:ok, state, {:continue, :prepare}}
+ {:ok, state, {:continue, {:connect, args}}}
end
@impl true
+ def handle_continue({:connect, args}, state) do
+ realtime_rls_settings = Database.from_settings(args, "realtime_rls")
+ {:ok, conn} = Database.connect_db(realtime_rls_settings)
+ state = Map.put(state, :conn, conn)
+ {:noreply, state, {:continue, :prepare}}
+ end
+
def handle_continue(:prepare, state) do
{:noreply, prepare_replication(state)}
end
@@ -95,118 +82,53 @@ defmodule Extensions.PostgresCdcRls.ReplicationPoller do
cancel_timer(poll_ref)
cancel_timer(retry_ref)
- try do
- {time, response} =
- :timer.tc(Replications, :list_changes, [
- conn,
- slot_name,
- publication,
- max_changes,
- max_record_bytes
- ])
-
- Realtime.Telemetry.execute(
- [:realtime, :replication, :poller, :query, :stop],
- %{duration: time},
- %{tenant: tenant}
- )
-
- response
- catch
- {:error, reason} ->
- {:error, reason}
- end
- |> case do
- {:ok,
- %Postgrex.Result{
- columns: ["wal", "is_rls_enabled", "subscription_ids", "errors"] = columns,
- rows: [_ | _] = rows,
- num_rows: rows_count
- }} ->
- Enum.reduce(rows, [], fn row, acc ->
- columns
- |> Enum.zip(row)
- |> generate_record()
- |> case do
- nil ->
- acc
-
- record_struct ->
- [record_struct | acc]
- end
- end)
- |> Enum.reverse()
- |> Enum.each(fn change ->
- Phoenix.PubSub.broadcast_from(
- PubSub,
- self(),
- "realtime:postgres:" <> tenant,
- change,
- MessageDispatcher
- )
- end)
-
- {:ok, rows_count}
+ args = [conn, slot_name, publication, max_changes, max_record_bytes]
+ {time, list_changes} = :timer.tc(Replications, :list_changes, args)
+ record_list_changes_telemetry(time, tenant)
- {:ok, _} ->
- {:ok, 0}
+ case handle_list_changes_result(list_changes, tenant) do
+ {:ok, row_count} ->
+ Backoff.reset(backoff)
- {:error, reason} ->
- {:error, reason}
- end
- |> case do
- {:ok, rows_num} ->
- backoff = Backoff.reset(backoff)
-
- poll_ref =
- if rows_num > 0 do
+ pool_ref =
+ if row_count > 0 do
send(self(), :poll)
nil
else
Process.send_after(self(), :poll, poll_interval_ms)
end
- {:noreply, %{state | backoff: backoff, poll_ref: poll_ref}}
+ {:noreply, %{state | backoff: backoff, poll_ref: pool_ref}}
{:error, %Postgrex.Error{postgres: %{code: :object_in_use, message: msg}}} ->
- Logger.error("Error polling replication: :object_in_use")
-
+ log_error("ReplicationSlotBeingUsed", msg)
[_, db_pid] = Regex.run(~r/PID\s(\d*)$/, msg)
db_pid = String.to_integer(db_pid)
{:ok, diff} = Replications.get_pg_stat_activity_diff(conn, db_pid)
- Logger.warn(
- "Database PID #{db_pid} found in pg_stat_activity with state_change diff of #{diff}"
- )
+ Logger.warning("Database PID #{db_pid} found in pg_stat_activity with state_change diff of #{diff}")
if retry_count > 3 do
case Replications.terminate_backend(conn, slot_name) do
- {:ok, :terminated} ->
- Logger.warn("Replication slot in use - terminating")
-
- {:error, :slot_not_found} ->
- Logger.warn("Replication slot not found")
-
- {:error, error} ->
- Logger.warn("Error terminating backend: #{inspect(error)}")
+ {:ok, :terminated} -> Logger.warning("Replication slot in use - terminating")
+ {:error, :slot_not_found} -> Logger.warning("Replication slot not found")
+ {:error, error} -> Logger.warning("Error terminating backend: #{inspect(error)}")
end
end
{timeout, backoff} = Backoff.backoff(backoff)
retry_ref = Process.send_after(self(), :retry, timeout)
- {:noreply,
- %{state | backoff: backoff, retry_ref: retry_ref, retry_count: retry_count + 1}}
+ {:noreply, %{state | backoff: backoff, retry_ref: retry_ref, retry_count: retry_count + 1}}
{:error, reason} ->
- Logger.error("Error polling replication: #{inspect(reason, pretty: true)}")
+ log_error("PoolingReplicationError", reason)
{timeout, backoff} = Backoff.backoff(backoff)
retry_ref = Process.send_after(self(), :retry, timeout)
- {:noreply,
- %{state | backoff: backoff, retry_ref: retry_ref, retry_count: retry_count + 1}}
+ {:noreply, %{state | backoff: backoff, retry_ref: retry_ref, retry_count: retry_count + 1}}
end
end
@@ -216,6 +138,61 @@ defmodule Extensions.PostgresCdcRls.ReplicationPoller do
{:noreply, prepare_replication(state)}
end
+ def slot_name_suffix do
+ case Application.get_env(:realtime, :slot_name_suffix) do
+ nil -> ""
+ slot_name_suffix -> "_" <> slot_name_suffix
+ end
+ end
+
+ defp convert_errors([_ | _] = errors), do: errors
+
+ defp convert_errors(_), do: nil
+
+ defp prepare_replication(%{backoff: backoff, conn: conn, slot_name: slot_name, retry_count: retry_count} = state) do
+ case Replications.prepare_replication(conn, slot_name) do
+ {:ok, _} ->
+ send(self(), :poll)
+ state
+
+ {:error, error} ->
+ log_error("PoolingReplicationPreparationError", error)
+
+ {timeout, backoff} = Backoff.backoff(backoff)
+ retry_ref = Process.send_after(self(), :retry, timeout)
+ %{state | backoff: backoff, retry_ref: retry_ref, retry_count: retry_count + 1}
+ end
+ end
+
+ defp record_list_changes_telemetry(time, tenant) do
+ Realtime.Telemetry.execute(
+ [:realtime, :replication, :poller, :query, :stop],
+ %{duration: time},
+ %{tenant: tenant}
+ )
+ end
+
+ defp handle_list_changes_result(
+ {:ok,
+ %Postgrex.Result{
+ columns: ["wal", "is_rls_enabled", "subscription_ids", "errors"] = columns,
+ rows: [_ | _] = rows,
+ num_rows: rows_count
+ }},
+ tenant
+ ) do
+ for row <- rows,
+ change <- columns |> Enum.zip(row) |> generate_record() |> List.wrap() do
+ topic = "realtime:postgres:" <> tenant
+ Phoenix.PubSub.broadcast_from(PubSub, self(), topic, change, MessageDispatcher)
+ end
+
+ {:ok, rows_count}
+ end
+
+ defp handle_list_changes_result({:ok, _}, _), do: {:ok, 0}
+ defp handle_list_changes_result({:error, reason}, _), do: {:error, reason}
+
def generate_record([
{"wal",
%{
@@ -290,52 +267,4 @@ defmodule Extensions.PostgresCdcRls.ReplicationPoller do
end
def generate_record(_), do: nil
-
- def slot_name_suffix() do
- case System.get_env("SLOT_NAME_SUFFIX") do
- nil ->
- ""
-
- value ->
- Logger.debug("Using slot name suffix: " <> value)
- "_" <> value
- end
- end
-
- defp convert_errors([_ | _] = errors), do: errors
-
- defp convert_errors(_), do: nil
-
- defp connect_db(host, port, name, user, pass, socket_opts) do
- {host, port, name, user, pass} = decrypt_creds(host, port, name, user, pass)
-
- Postgrex.start_link(
- hostname: host,
- port: port,
- database: name,
- password: pass,
- username: user,
- queue_target: @queue_target,
- parameters: [
- application_name: "realtime_rls"
- ],
- socket_options: socket_opts
- )
- end
-
- defp prepare_replication(
- %{backoff: backoff, conn: conn, slot_name: slot_name, retry_count: retry_count} = state
- ) do
- case Replications.prepare_replication(conn, slot_name) do
- {:ok, _} ->
- send(self(), :poll)
- state
-
- {:error, error} ->
- Logger.error("Prepare replication error: #{inspect(error)}")
- {timeout, backoff} = Backoff.backoff(backoff)
- retry_ref = Process.send_after(self(), :retry, timeout)
- %{state | backoff: backoff, retry_ref: retry_ref, retry_count: retry_count + 1}
- end
- end
end
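
The poller no longer connects to the database inside `init/1`; the blocking connect moved into a `handle_continue` step so the supervisor is not held up during startup. A generic sketch of that pattern (the real module uses `Realtime.Database.connect_db/1` rather than the placeholder connect shown here):

```elixir
# Generic init/handle_continue split: init/1 returns immediately and the
# blocking database connect happens in the :continue callback.
defmodule LazyConnect do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  @impl true
  def init(opts) do
    # Return fast so the supervisor is not blocked on the connection handshake.
    {:ok, %{conn: nil}, {:continue, {:connect, opts}}}
  end

  @impl true
  def handle_continue({:connect, opts}, state) do
    # Placeholder connect; opts is assumed to be a Postgrex keyword list.
    {:ok, conn} = Postgrex.start_link(opts)
    {:noreply, %{state | conn: conn}}
  end
end
```
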
diff --git a/lib/extensions/postgres_cdc_rls/replications.ex b/lib/extensions/postgres_cdc_rls/replications.ex
index 351a071..16b4f99 100644
--- a/lib/extensions/postgres_cdc_rls/replications.ex
+++ b/lib/extensions/postgres_cdc_rls/replications.ex
@@ -72,61 +72,7 @@ defmodule Extensions.PostgresCdcRls.Replications do
def list_changes(conn, slot_name, publication, max_changes, max_record_bytes) do
query(
conn,
- "with pub as (
- select
- concat_ws(
- ',',
- case when bool_or(pubinsert) then 'insert' else null end,
- case when bool_or(pubupdate) then 'update' else null end,
- case when bool_or(pubdelete) then 'delete' else null end
- ) as w2j_actions,
- coalesce(
- string_agg(
- realtime.quote_wal2json(format('%I.%I', schemaname, tablename)::regclass),
- ','
- ) filter (where ppt.tablename is not null and ppt.tablename not like '% %'),
- ''
- ) w2j_add_tables
- from
- pg_publication pp
- left join pg_publication_tables ppt
- on pp.pubname = ppt.pubname
- where
- pp.pubname = $1
- group by
- pp.pubname
- limit 1
- ),
- w2j as (
- select
- x.*, pub.w2j_add_tables
- from
- pub,
- pg_logical_slot_get_changes(
- $2, null, $3,
- 'include-pk', 'true',
- 'include-transaction', 'false',
- 'include-timestamp', 'true',
- 'include-type-oids', 'true',
- 'format-version', '2',
- 'actions', pub.w2j_actions,
- 'add-tables', pub.w2j_add_tables
- ) x
- )
- select
- xyz.wal,
- xyz.is_rls_enabled,
- xyz.subscription_ids,
- xyz.errors
- from
- w2j,
- realtime.apply_rls(
- wal := w2j.data::jsonb,
- max_record_bytes := $4
- ) xyz(wal, is_rls_enabled, subscription_ids, errors)
- where
- w2j.w2j_add_tables <> ''
- and xyz.subscription_ids[1] is not null",
+ "select * from realtime.list_changes($1, $2, $3, $4)",
[
publication,
slot_name,
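
With the inline wal2json query replaced by the `realtime.list_changes` database function, the Elixir side is reduced to a single parameterized call. A sketch of the call path, assuming the default publication and slot name, with placeholder connection details:

```elixir
# Illustrative only: requires the realtime.list_changes function and the
# replication slot to already exist on the connected database.
{:ok, conn} =
  Postgrex.start_link(
    hostname: "localhost",
    database: "postgres",
    username: "postgres",
    password: "postgres"
  )

{:ok, %Postgrex.Result{rows: rows}} =
  Postgrex.query(
    conn,
    "select * from realtime.list_changes($1, $2, $3, $4)",
    ["tealbase_realtime", "tealbase_realtime_replication_slot", 100, 1_048_576]
  )
```
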
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20211116024918_create_realtime_subscription_table.ex b/lib/extensions/postgres_cdc_rls/repo/migrations/20211116024918_create_realtime_subscription_table.ex
deleted file mode 100644
index 48846ab..0000000
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20211116024918_create_realtime_subscription_table.ex
+++ /dev/null
@@ -1,35 +0,0 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.CreateRealtimeSubscriptionTable do
- @moduledoc false
-
- use Ecto.Migration
-
- def change do
- execute("create type realtime.equality_op as enum(
- 'eq', 'neq', 'lt', 'lte', 'gt', 'gte'
- );")
-
- execute("create type realtime.user_defined_filter as (
- column_name text,
- op realtime.equality_op,
- value text
- );")
-
- execute("create table realtime.subscription (
- -- Tracks which users are subscribed to each table
- id bigint not null generated always as identity,
- user_id uuid not null,
- -- Populated automatically by trigger. Required to enable auth.email()
- email varchar(255),
- entity regclass not null,
- filters realtime.user_defined_filter[] not null default '{}',
- created_at timestamp not null default timezone('utc', now()),
-
- constraint pk_subscription primary key (id),
- unique (entity, user_id, filters)
- )")
-
- execute(
- "create index ix_realtime_subscription_entity on realtime.subscription using hash (entity)"
- )
- end
-end
diff --git a/lib/extensions/postgres_cdc_rls/subscription_manager.ex b/lib/extensions/postgres_cdc_rls/subscription_manager.ex
index 37634c2..dbec3de 100644
--- a/lib/extensions/postgres_cdc_rls/subscription_manager.ex
+++ b/lib/extensions/postgres_cdc_rls/subscription_manager.ex
@@ -4,10 +4,14 @@ defmodule Extensions.PostgresCdcRls.SubscriptionManager do
"""
use GenServer
require Logger
+ import Realtime.Logs
alias Extensions.PostgresCdcRls, as: Rls
+
+ alias Realtime.Database
+ alias Realtime.Helpers
+
alias Rls.Subscriptions
- alias Realtime.Helpers, as: H
@timeout 15_000
@max_delete_records 1000
@@ -54,31 +58,34 @@ defmodule Extensions.PostgresCdcRls.SubscriptionManager do
@impl true
def init(args) do
- %{
- "id" => id,
- "publication" => publication,
- "subscribers_tid" => subscribers_tid,
- "db_host" => host,
- "db_port" => port,
- "db_name" => name,
- "db_user" => user,
- "db_password" => pass,
- "db_socket_opts" => socket_opts,
- "subs_pool_size" => subs_pool_size
- } = args
-
+ %{"id" => id} = args
Logger.metadata(external_id: id, project: id)
+ {:ok, nil, {:continue, {:connect, args}}}
+ end
+
+ @impl true
+ def handle_continue({:connect, args}, _) do
+ %{"id" => id, "publication" => publication, "subscribers_tid" => subscribers_tid} = args
- {:ok, conn} = H.connect_db(host, port, name, user, pass, socket_opts, 1)
- {:ok, conn_pub} = H.connect_db(host, port, name, user, pass, socket_opts, subs_pool_size)
+ subscription_manager_settings = Database.from_settings(args, "realtime_subscription_manager")
+
+ subscription_manager_pub_settings =
+ Database.from_settings(args, "realtime_subscription_manager_pub")
+
+ {:ok, conn} = Database.connect_db(subscription_manager_settings)
+ {:ok, conn_pub} = Database.connect_db(subscription_manager_pub_settings)
{:ok, _} = Subscriptions.maybe_delete_all(conn)
+
Rls.update_meta(id, self(), conn_pub)
+ oids = Subscriptions.fetch_publication_tables(conn, publication)
+
state = %State{
id: id,
conn: conn,
publication: publication,
subscribers_tid: subscribers_tid,
+ oids: oids,
delete_queue: %{
ref: check_delete_queue(),
queue: :queue.new()
@@ -87,14 +94,15 @@ defmodule Extensions.PostgresCdcRls.SubscriptionManager do
}
send(self(), :check_oids)
- {:ok, state}
+ {:noreply, state}
end
@impl true
def handle_info({:subscribed, {pid, id}}, state) do
- true =
- state.subscribers_tid
- |> :ets.insert({pid, id, Process.monitor(pid), node(pid)})
+ case :ets.match(state.subscribers_tid, {pid, id, :"$1", :_}) do
+ [] -> :ets.insert(state.subscribers_tid, {pid, id, Process.monitor(pid), node(pid)})
+ _ -> :ok
+ end
{:noreply, %{state | no_users_ts: nil}}
end
@@ -103,7 +111,7 @@ defmodule Extensions.PostgresCdcRls.SubscriptionManager do
:check_oids,
%State{check_oid_ref: ref, conn: conn, publication: publication, oids: old_oids} = state
) do
- H.cancel_timer(ref)
+ Helpers.cancel_timer(ref)
oids =
case Subscriptions.fetch_publication_tables(conn, publication) do
@@ -138,7 +146,8 @@ defmodule Extensions.PostgresCdcRls.SubscriptionManager do
values ->
for {_pid, id, _ref, _node} <- values, reduce: q do
acc ->
- UUID.string_to_binary!(id)
+ id
+ |> UUID.string_to_binary!()
|> :queue.in(acc)
end
end
@@ -147,11 +156,13 @@ defmodule Extensions.PostgresCdcRls.SubscriptionManager do
end
def handle_info(:check_delete_queue, %State{delete_queue: %{ref: ref, queue: q}} = state) do
- H.cancel_timer(ref)
+ Helpers.cancel_timer(ref)
q1 =
- if !:queue.is_empty(q) do
- {ids, q1} = H.queue_take(q, @max_delete_records)
+ if :queue.is_empty(q) do
+ q
+ else
+ {ids, q1} = Helpers.queue_take(q, @max_delete_records)
Logger.debug("delete sub id #{inspect(ids)}")
case Subscriptions.delete_multi(state.conn, ids) do
@@ -159,25 +170,19 @@ defmodule Extensions.PostgresCdcRls.SubscriptionManager do
q1
{:error, reason} ->
- Logger.error("delete subscriptions from the queue failed: #{inspect(reason)}")
+ log_error("SubscriptionDeletionFailed", reason)
+
q
end
- else
- q
end
- ref =
- if :queue.is_empty(q1) do
- check_delete_queue()
- else
- check_delete_queue(1_000)
- end
+ ref = if :queue.is_empty(q1), do: check_delete_queue(), else: check_delete_queue(1_000)
{:noreply, %{state | delete_queue: %{ref: ref, queue: q1}}}
end
def handle_info(:check_no_users, %{subscribers_tid: tid, no_users_ts: ts} = state) do
- H.cancel_timer(state.no_users_ref)
+ Helpers.cancel_timer(state.no_users_ref)
ts_new =
case {:ets.info(tid, :size), ts != nil && ts + @stop_after < now()} do
@@ -197,37 +202,19 @@ defmodule Extensions.PostgresCdcRls.SubscriptionManager do
end
def handle_info(msg, state) do
- Logger.error("Undef msg #{inspect(msg, pretty: true)}")
+ log_error("UnhandledProcessMessage", msg)
+
{:noreply, state}
end
## Internal functions
- defp check_delete_queue(timeout \\ @timeout) do
- Process.send_after(
- self(),
- :check_delete_queue,
- timeout
- )
- end
+ defp check_oids, do: Process.send_after(self(), :check_oids, @check_oids_interval)
- defp check_oids() do
- Process.send_after(
- self(),
- :check_oids,
- @check_oids_interval
- )
- end
+ defp now, do: System.system_time(:millisecond)
- defp now() do
- System.system_time(:millisecond)
- end
+ defp check_no_users, do: Process.send_after(self(), :check_no_users, @check_no_users_interval)
- defp check_no_users() do
- Process.send_after(
- self(),
- :check_no_users,
- @check_no_users_interval
- )
- end
+ defp check_delete_queue(timeout \\ @timeout),
+ do: Process.send_after(self(), :check_delete_queue, timeout)
end
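
The `{:subscribed, ...}` handler above now checks the ETS table before monitoring, so a client that re-subscribes does not accumulate duplicate monitor references. The guard in isolation, runnable as-is:

```elixir
# Only monitor a pid the first time it subscribes; a repeat insert would
# otherwise leak one monitor reference per re-join.
tid = :ets.new(:subscribers, [:public, :bag])
pid = self()
id = "subscription-id"

case :ets.match(tid, {pid, id, :"$1", :_}) do
  [] -> :ets.insert(tid, {pid, id, Process.monitor(pid), node(pid)})
  _ -> :ok
end
```
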
diff --git a/lib/extensions/postgres_cdc_rls/subscriptions.ex b/lib/extensions/postgres_cdc_rls/subscriptions.ex
index 44d281a..2dd4491 100644
--- a/lib/extensions/postgres_cdc_rls/subscriptions.ex
+++ b/lib/extensions/postgres_cdc_rls/subscriptions.ex
@@ -4,33 +4,33 @@ defmodule Extensions.PostgresCdcRls.Subscriptions do
"""
require Logger
import Postgrex, only: [transaction: 2, query: 3, rollback: 2]
+ import Realtime.Logs
@type conn() :: Postgrex.conn()
@filter_types ["eq", "neq", "lt", "lte", "gt", "gte", "in"]
- @spec create(conn(), String.t(), list(map())) ::
+ @spec create(conn(), String.t(), [map()], pid(), pid()) ::
{:ok, Postgrex.Result.t()}
- | {:error,
- Exception.t() | :malformed_subscription_params | {:subscription_insert_failed, map()}}
- def create(conn, publication, params_list) do
+ | {:error, Exception.t() | :malformed_subscription_params | {:subscription_insert_failed, map()}}
+ def create(conn, publication, params_list, manager, caller) do
sql = "with sub_tables as (
- select
- rr.entity
- from
- pg_publication_tables pub,
- lateral (
- select
- format('%I.%I', pub.schemaname, pub.tablename)::regclass entity
- ) rr
- where
- pub.pubname = $1
- and pub.schemaname like (case $2 when '*' then '%' else $2 end)
- and pub.tablename like (case $3 when '*' then '%' else $3 end)
- )
- insert into realtime.subscription as x(
- subscription_id,
- entity,
+ select
+ rr.entity
+ from
+ pg_publication_tables pub,
+ lateral (
+ select
+ format('%I.%I', pub.schemaname, pub.tablename)::regclass entity
+ ) rr
+ where
+ pub.pubname = $1
+ and pub.schemaname like (case $2 when '*' then '%' else $2 end)
+ and pub.tablename like (case $3 when '*' then '%' else $3 end)
+ )
+ insert into realtime.subscription as x(
+ subscription_id,
+ entity,
filters,
claims
)
@@ -50,25 +50,28 @@ defmodule Extensions.PostgresCdcRls.Subscriptions do
id"
transaction(conn, fn conn ->
- params_list
- |> Enum.map(fn %{id: id, claims: claims, params: params} ->
+ Enum.map(params_list, fn %{id: id, claims: claims, params: params} ->
case parse_subscription_params(params) do
{:ok, [schema, table, filters]} ->
case query(conn, sql, [publication, schema, table, id, claims, filters]) do
{:ok, %{num_rows: num} = result} when num > 0 ->
+ send(manager, {:subscribed, {caller, id}})
result
{:ok, _} ->
- rollback(
- conn,
- "Subscription insert failed with 0 rows. Check that tables are part of publication #{publication} and subscription params are correct: #{inspect(params)}"
- )
+ msg =
+ "Unable to subscribe to changes with given parameters. Please check Realtime is enabled for the given connect parameters: [#{params_to_log(params)}]"
+
+ log_warning("RealtimeDisabledForConfiguration", msg)
+ rollback(conn, msg)
{:error, exception} ->
- rollback(
- conn,
- "Subscription insert failed with error: #{Exception.message(exception)}. Check that tables are part of publication #{publication} and subscription params are correct: #{inspect(params)}"
- )
+ msg =
+ "Unable to subscribe to changes with given parameters. An exception happened so please check your connect parameters: [#{params_to_log(params)}]. Exception: #{Exception.message(exception)}"
+
+ log_error("RealtimeSubscriptionError", msg)
+
+ rollback(conn, msg)
end
{:error, reason} ->
@@ -78,6 +81,12 @@ defmodule Extensions.PostgresCdcRls.Subscriptions do
end)
end
+ defp params_to_log(map) do
+ map
+ |> Map.to_list()
+ |> Enum.map_join(", ", fn {k, v} -> "#{k}: #{to_log(v)}" end)
+ end
+
@spec delete(conn(), String.t()) :: any()
def delete(conn, id) do
Logger.debug("Delete subscription")
@@ -136,7 +145,10 @@ defmodule Extensions.PostgresCdcRls.Subscriptions do
{:ok, %{columns: ["schemaname", "tablename", "oid"], rows: rows}} ->
Enum.reduce(rows, %{}, fn [schema, table, oid], acc ->
if String.contains?(table, " ") do
- Logger.error("Publication table name contains spaces: \"#{schema}\".\"#{table}\"")
+ log_error(
+ "TableHasSpacesInName",
+ "Table name cannot have spaces: \"#{schema}\".\"#{table}\""
+ )
end
Map.put(acc, {schema, table}, [oid])
@@ -207,9 +219,13 @@ defmodule Extensions.PostgresCdcRls.Subscriptions do
%{"table" => table} ->
{:ok, ["public", table, []]}
- _ ->
+ map when is_map_key(map, "user_token") or is_map_key(map, "auth_token") ->
+ {:error,
+ "No subscription params provided. Please provide at least a `schema` or `table` to subscribe to: "}
+
+ error ->
{:error,
- "No subscription params provided. Please provide at least a `schema` or `table` to subscribe to."}
+ "No subscription params provided. Please provide at least a `schema` or `table` to subscribe to: #{inspect(error)}"}
end
end
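
The new `params_to_log/1` turns the params map into a compact log string. What it produces, assuming `to_log/1` simply inspects the value:

```elixir
# Output shape of params_to_log/1, with inspect/1 standing in for to_log/1.
params = %{"schema" => "public", "table" => "notes"}

params
|> Map.to_list()
|> Enum.map_join(", ", fn {k, v} -> "#{k}: #{inspect(v)}" end)
# => "schema: \"public\", table: \"notes\""
```
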
diff --git a/lib/extensions/postgres_cdc_rls/subscriptions_checker.ex b/lib/extensions/postgres_cdc_rls/subscriptions_checker.ex
index 9404cb7..77ad498 100644
--- a/lib/extensions/postgres_cdc_rls/subscriptions_checker.ex
+++ b/lib/extensions/postgres_cdc_rls/subscriptions_checker.ex
@@ -2,11 +2,15 @@ defmodule Extensions.PostgresCdcRls.SubscriptionsChecker do
@moduledoc false
use GenServer
require Logger
-
+ import Realtime.Logs
alias Extensions.PostgresCdcRls, as: Rls
- alias Rls.Subscriptions
- alias Realtime.Helpers, as: H
+ alias Realtime.Database
+ alias Realtime.Helpers
+ alias Realtime.Rpc
+ alias Realtime.Telemetry
+
+ alias Rls.Subscriptions
@timeout 120_000
@max_delete_records 1000
@@ -36,20 +40,19 @@ defmodule Extensions.PostgresCdcRls.SubscriptionsChecker do
@impl true
def init(args) do
- %{
- "id" => id,
- "db_host" => host,
- "db_port" => port,
- "db_name" => name,
- "db_user" => user,
- "db_password" => pass,
- "db_socket_opts" => socket_opts,
- "subscribers_tid" => subscribers_tid
- } = args
-
+ %{"id" => id} = args
Logger.metadata(external_id: id, project: id)
+ {:ok, nil, {:continue, {:connect, args}}}
+ end
- {:ok, conn} = H.connect_db(host, port, name, user, pass, socket_opts, 1)
+ @impl true
+ def handle_continue({:connect, args}, _) do
+ %{"id" => id, "subscribers_tid" => subscribers_tid} = args
+
+ realtime_subscription_checker_settings =
+ Database.from_settings(args, "realtime_subscription_checker")
+
+ {:ok, conn} = Database.connect_db(realtime_subscription_checker_settings)
state = %State{
id: id,
@@ -62,20 +65,22 @@ defmodule Extensions.PostgresCdcRls.SubscriptionsChecker do
}
}
- {:ok, state}
+ {:noreply, state}
end
@impl true
def handle_info(
:check_active_pids,
- %State{check_active_pids: ref, subscribers_tid: tid, delete_queue: delete_queue} = state
+ %State{check_active_pids: ref, subscribers_tid: tid, delete_queue: delete_queue, id: id} =
+ state
) do
- H.cancel_timer(ref)
+ Helpers.cancel_timer(ref)
ids =
- subscribers_by_node(tid)
+ tid
+ |> subscribers_by_node()
|> not_alive_pids_dist()
- |> pop_not_alive_pids(tid)
+ |> pop_not_alive_pids(tid, id)
new_delete_queue =
if length(ids) > 0 do
@@ -96,44 +101,52 @@ defmodule Extensions.PostgresCdcRls.SubscriptionsChecker do
end
def handle_info(:check_delete_queue, %State{delete_queue: %{ref: ref, queue: q}} = state) do
- H.cancel_timer(ref)
+ Helpers.cancel_timer(ref)
new_queue =
- if !:queue.is_empty(q) do
- {ids, q1} = H.queue_take(q, @max_delete_records)
- Logger.error("Delete #{length(ids)} phantom subscribers from db")
+ if :queue.is_empty(q) do
+ q
+ else
+ {ids, q1} = Helpers.queue_take(q, @max_delete_records)
+ Logger.warning("Delete #{length(ids)} phantom subscribers from db")
case Subscriptions.delete_multi(state.conn, ids) do
{:ok, _} ->
q1
{:error, reason} ->
- Logger.error("delete phantom subscriptions from the queue failed: #{inspect(reason)}")
+ log_error("UnableToDeletePhantomSubscriptions", reason)
+
q
end
- else
- q
end
- new_ref = if !:queue.is_empty(new_queue), do: check_delete_queue(), else: ref
+ new_ref = if :queue.is_empty(new_queue), do: ref, else: check_delete_queue()
{:noreply, %{state | delete_queue: %{ref: new_ref, queue: new_queue}}}
end
## Internal functions
- @spec pop_not_alive_pids([pid()], :ets.tid()) :: [Ecto.UUID.t()]
- def pop_not_alive_pids(pids, tid) do
+ @spec pop_not_alive_pids([pid()], :ets.tid(), binary()) :: [Ecto.UUID.t()]
+ def pop_not_alive_pids(pids, tid, tenant_id) do
Enum.reduce(pids, [], fn pid, acc ->
case :ets.lookup(tid, pid) do
[] ->
- Logger.error("Can't find pid in subscribers table: #{inspect(pid)}")
+ Telemetry.execute(
+ [:realtime, :subscriptions_checker, :pid_not_found],
+ %{quantity: 1},
+ %{tenant_id: tenant_id}
+ )
+
acc
results ->
for {^pid, postgres_id, _ref, _node} <- results do
- Logger.error(
- "Detected phantom subscriber #{inspect(pid)} with postgres_id #{inspect(postgres_id)}"
+ Telemetry.execute(
+ [:realtime, :subscriptions_checker, :phantom_pid_detected],
+ %{quantity: 1},
+ %{tenant_id: tenant_id}
)
:ets.delete(tid, pid)
@@ -146,12 +159,7 @@ defmodule Extensions.PostgresCdcRls.SubscriptionsChecker do
@spec subscribers_by_node(:ets.tid()) :: %{node() => MapSet.t(pid())}
def subscribers_by_node(tid) do
fn {pid, _postgres_id, _ref, node}, acc ->
- set =
- if Map.has_key?(acc, node) do
- MapSet.put(acc[node], pid)
- else
- MapSet.new([pid])
- end
+ set = if Map.has_key?(acc, node), do: MapSet.put(acc[node], pid), else: MapSet.new([pid])
Map.put(acc, node, set)
end
@@ -164,9 +172,9 @@ defmodule Extensions.PostgresCdcRls.SubscriptionsChecker do
if node == node() do
acc ++ not_alive_pids(pids)
else
- case :rpc.call(node, __MODULE__, :not_alive_pids, [pids], 15_000) do
+ case Rpc.call(node, __MODULE__, :not_alive_pids, [pids], timeout: 15_000) do
{:badrpc, _} = error ->
- Logger.error("Can't check pids on node #{inspect(node)}: #{inspect(error)}")
+ log_error("UnableToCheckProcessesOnRemoteNode", error)
acc
pids ->
@@ -178,28 +186,10 @@ defmodule Extensions.PostgresCdcRls.SubscriptionsChecker do
@spec not_alive_pids(MapSet.t(pid())) :: [pid()] | []
def not_alive_pids(pids) do
- Enum.reduce(pids, [], fn pid, acc ->
- if Process.alive?(pid) do
- acc
- else
- [pid | acc]
- end
- end)
+ Enum.reduce(pids, [], fn pid, acc -> if Process.alive?(pid), do: acc, else: [pid | acc] end)
end
- defp check_delete_queue() do
- Process.send_after(
- self(),
- :check_delete_queue,
- 1000
- )
- end
+ defp check_delete_queue, do: Process.send_after(self(), :check_delete_queue, 1000)
- defp check_active_pids() do
- Process.send_after(
- self(),
- :check_active_pids,
- @timeout
- )
- end
+ defp check_active_pids, do: Process.send_after(self(), :check_active_pids, @timeout)
end
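
The phantom-subscriber log lines are replaced with telemetry events, which an operator can consume with a handler. A sketch of attaching one; the handler id and logging body are illustrative, while the event name and measurement/metadata shapes come from the diff:

```elixir
# Attach a handler for the new phantom-subscriber telemetry event.
:telemetry.attach(
  "log-phantom-subscribers",
  [:realtime, :subscriptions_checker, :phantom_pid_detected],
  fn _event, %{quantity: quantity}, %{tenant_id: tenant_id}, _config ->
    IO.puts("tenant #{tenant_id}: detected #{quantity} phantom subscriber(s)")
  end,
  nil
)
```
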
diff --git a/lib/extensions/postgres_cdc_rls/supervisor.ex b/lib/extensions/postgres_cdc_rls/supervisor.ex
index d6a726a..21e1241 100644
--- a/lib/extensions/postgres_cdc_rls/supervisor.ex
+++ b/lib/extensions/postgres_cdc_rls/supervisor.ex
@@ -4,10 +4,10 @@ defmodule Extensions.PostgresCdcRls.Supervisor do
"""
use Supervisor
- alias Extensions.PostgresCdcRls, as: Rls
+ alias Extensions.PostgresCdcRls
@spec start_link :: :ignore | {:error, any} | {:ok, pid}
- def start_link() do
+ def start_link do
Supervisor.start_link(__MODULE__, [], name: __MODULE__)
end
@@ -15,29 +15,23 @@ defmodule Extensions.PostgresCdcRls.Supervisor do
def init(_args) do
load_migrations_modules()
- :syn.set_event_handler(Rls.SynHandler)
- :syn.add_node_to_scopes([Rls])
+ :syn.add_node_to_scopes([PostgresCdcRls])
children = [
{
PartitionSupervisor,
- partitions: 20,
- child_spec: DynamicSupervisor,
- strategy: :one_for_one,
- name: Rls.DynamicSupervisor
+ partitions: 20, child_spec: DynamicSupervisor, strategy: :one_for_one, name: PostgresCdcRls.DynamicSupervisor
}
]
Supervisor.init(children, strategy: :one_for_one)
end
- defp load_migrations_modules() do
+ defp load_migrations_modules do
{:ok, modules} = :application.get_key(:realtime, :modules)
modules
- |> Enum.filter(
- &String.starts_with?(to_string(&1), "Elixir.Realtime.Extensions.Rls.Repo.Migrations")
- )
+ |> Enum.filter(&String.starts_with?(to_string(&1), "Elixir.Realtime.Tenants.Migrations"))
|> Enum.each(&Code.ensure_loaded!/1)
end
end
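
Elsewhere in this change (see `cdc_rls.ex` above), children are now started through the `PartitionSupervisor` keyed by tenant rather than by `self()`, so the same tenant always routes to the same partition. A self-contained sketch of that routing:

```elixir
# Start a partitioned set of DynamicSupervisors, then route a child start
# through the partition chosen by the tenant id. Names here are placeholders.
{:ok, _sup} =
  PartitionSupervisor.start_link(
    partitions: 20,
    child_spec: DynamicSupervisor,
    strategy: :one_for_one,
    name: Demo.DynamicSupervisor
  )

DynamicSupervisor.start_child(
  {:via, PartitionSupervisor, {Demo.DynamicSupervisor, "tenant_external_id"}},
  {Agent, fn -> %{} end}
)
```
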
diff --git a/lib/extensions/postgres_cdc_rls/syn_handler.ex b/lib/extensions/postgres_cdc_rls/syn_handler.ex
deleted file mode 100644
index b4d0f72..0000000
--- a/lib/extensions/postgres_cdc_rls/syn_handler.ex
+++ /dev/null
@@ -1,69 +0,0 @@
-defmodule Extensions.PostgresCdcRls.SynHandler do
- @moduledoc """
- Custom defined Syn's callbacks
- """
- require Logger
- alias RealtimeWeb.Endpoint
-
- def on_process_unregistered(Extensions.PostgresCdcRls, name, _pid, _meta, reason) do
- Logger.warn("PostgresCdcRls terminated: #{inspect(name)} #{node()}")
-
- if reason != :syn_conflict_resolution do
- Endpoint.local_broadcast("postgres_cdc:" <> name, "postgres_cdc_down", nil)
- end
- end
-
- def resolve_registry_conflict(
- Extensions.PostgresCdcRls,
- name,
- {pid1, %{region: region}, time1},
- {pid2, _, time2}
- ) do
- fly_region = Realtime.PostgresCdc.aws_to_fly(region)
-
- fly_region_nodes =
- :syn.members(RegionNodes, fly_region)
- |> Enum.map(fn {_, [node: node]} -> node end)
-
- {keep, stop} =
- Enum.filter([pid1, pid2], fn pid ->
- Enum.member?(fly_region_nodes, node(pid))
- end)
- |> case do
- [pid] ->
- {pid, if(pid != pid1, do: pid1, else: pid2)}
-
- _ ->
- if time1 < time2 do
- {pid1, pid2}
- else
- {pid2, pid1}
- end
- end
-
- if node() == node(stop) do
- spawn(fn ->
- resp =
- if Process.alive?(stop) do
- try do
- DynamicSupervisor.stop(stop, :shutdown, 30_000)
- catch
- error, reason -> {:error, {error, reason}}
- end
- else
- :not_alive
- end
-
- Endpoint.broadcast("postgres_cdc:" <> name, "postgres_cdc_down", nil)
-
- Logger.warn(
- "Resolving #{name} conflict, stop local pid: #{inspect(stop)}, response: #{inspect(resp)}"
- )
- end)
- else
- Logger.warn("Resolving #{name} conflict, remote pid: #{inspect(stop)}")
- end
-
- keep
- end
-end
diff --git a/lib/extensions/postgres_cdc_rls/worker_supervisor.ex b/lib/extensions/postgres_cdc_rls/worker_supervisor.ex
index 0608b8c..68a8a64 100644
--- a/lib/extensions/postgres_cdc_rls/worker_supervisor.ex
+++ b/lib/extensions/postgres_cdc_rls/worker_supervisor.ex
@@ -2,33 +2,30 @@ defmodule Extensions.PostgresCdcRls.WorkerSupervisor do
@moduledoc false
use Supervisor
- alias Extensions.PostgresCdcRls, as: Rls
+ alias Extensions.PostgresCdcRls
- alias Rls.{
- Migrations,
+ alias PostgresCdcRls.{
ReplicationPoller,
SubscriptionManager,
SubscriptionsChecker
}
+ alias Realtime.Api
+ alias Realtime.PostgresCdc.Exception
+
def start_link(args) do
- name = Rls.supervisor_id(args["id"], args["region"])
+ name = PostgresCdcRls.supervisor_id(args["id"], args["region"])
Supervisor.start_link(__MODULE__, args, name: {:via, :syn, name})
end
@impl true
- def init(args) do
- tid_args =
- Map.merge(args, %{
- "subscribers_tid" => :ets.new(__MODULE__, [:public, :bag])
- })
+ def init(%{"id" => tenant} = args) when is_binary(tenant) do
+ Logger.metadata(external_id: tenant, project: tenant)
+ unless Api.get_tenant_by_external_id(tenant, :primary), do: raise(Exception)
+
+ tid_args = Map.merge(args, %{"subscribers_tid" => :ets.new(__MODULE__, [:public, :bag])})
children = [
- %{
- id: Migrations,
- start: {Migrations, :start_link, [args]},
- restart: :transient
- },
%{
id: ReplicationPoller,
start: {ReplicationPoller, :start_link, [args]},
@@ -46,6 +43,6 @@ defmodule Extensions.PostgresCdcRls.WorkerSupervisor do
}
]
- Supervisor.init(children, strategy: :one_for_all, max_restarts: 10, max_seconds: 60)
+ Supervisor.init(children, strategy: :rest_for_one, max_restarts: 10, max_seconds: 60)
end
end
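
The strategy change from `:one_for_all` to `:rest_for_one` matters because of child order: a crash in `ReplicationPoller` (first) still restarts the manager and checker after it, but a crash in `SubscriptionsChecker` (last) no longer tears down the poller. An isolated illustration with stand-in children:

```elixir
# With :rest_for_one, a crashed child restarts itself and every child started
# after it, but leaves earlier children running.
children = [
  %{id: :poller, start: {Agent, :start_link, [fn -> :poller end]}},
  %{id: :manager, start: {Agent, :start_link, [fn -> :manager end]}},
  %{id: :checker, start: {Agent, :start_link, [fn -> :checker end]}}
]

Supervisor.start_link(children, strategy: :rest_for_one, max_restarts: 10, max_seconds: 60)
```
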
diff --git a/lib/extensions/postgres_cdc_stream/cdc_stream.ex b/lib/extensions/postgres_cdc_stream/cdc_stream.ex
deleted file mode 100644
index c14ad9f..0000000
--- a/lib/extensions/postgres_cdc_stream/cdc_stream.ex
+++ /dev/null
@@ -1,124 +0,0 @@
-defmodule Extensions.PostgresCdcStream do
- @moduledoc false
- @behaviour Realtime.PostgresCdc
-
- require Logger
-
- alias Realtime.PostgresCdc
- alias Extensions.PostgresCdcStream, as: Stream
-
- def handle_connect(opts) do
- Enum.reduce_while(1..5, nil, fn retry, acc ->
- get_manager_conn(opts["id"])
- |> case do
- nil ->
- start_distributed(opts)
- if retry > 1, do: Process.sleep(1_000)
- {:cont, acc}
-
- {:ok, pid, _conn} ->
- {:halt, {:ok, pid}}
- end
- end)
- end
-
- def handle_after_connect(_, _, _) do
- {:ok, nil}
- end
-
- def handle_subscribe(pg_change_params, tenant, metadata) do
- Enum.each(pg_change_params, fn e ->
- topic(tenant, e.params)
- |> RealtimeWeb.Endpoint.subscribe(metadata)
- end)
- end
-
- def handle_stop(tenant, timeout) do
- case :syn.lookup(PostgresCdcStream, tenant) do
- :undefined ->
- Logger.warning("Database supervisor not found for tenant #{tenant}")
-
- {pid, _} ->
- DynamicSupervisor.stop(pid, :shutdown, timeout)
- end
- end
-
- @spec get_manager_conn(String.t()) :: nil | {:ok, pid(), pid()}
- def get_manager_conn(id) do
- Phoenix.Tracker.get_by_key(Stream.Tracker, "postgres_cdc_stream", id)
- |> case do
- [] ->
- nil
-
- [{_, %{manager_pid: pid, conn: conn}}] ->
- {:ok, pid, conn}
- end
- end
-
- def start_distributed(%{"region" => region, "id" => tenant} = args) do
- fly_region = PostgresCdc.aws_to_fly(region)
- launch_node = PostgresCdc.launch_node(tenant, fly_region, node())
-
- Logger.warning(
- "Starting distributed postgres extension #{inspect(lauch_node: launch_node, region: region, fly_region: fly_region)}"
- )
-
- case :rpc.call(launch_node, __MODULE__, :start, [args], 30_000) do
- {:ok, _pid} = ok ->
- ok
-
- {:error, {:already_started, _pid}} = error ->
- Logger.info("Postgres Extention already started on node #{inspect(launch_node)}")
- error
-
- error ->
- Logger.error("Error starting Postgres Extention: #{inspect(error, pretty: true)}")
- error
- end
- end
-
- @spec start(map()) :: :ok | {:error, :already_started | :reserved}
- def start(args) do
- addrtype =
- case args["ip_version"] do
- 6 ->
- :inet6
-
- _ ->
- :inet
- end
-
- args =
- Map.merge(args, %{
- "db_socket_opts" => [addrtype]
- })
-
- Logger.debug("Starting postgres stream extension with args: #{inspect(args, pretty: true)}")
-
- DynamicSupervisor.start_child(
- {:via, PartitionSupervisor, {Stream.DynamicSupervisor, self()}},
- %{
- id: args["id"],
- start: {Stream.WorkerSupervisor, :start_link, [args]},
- restart: :transient
- }
- )
- end
-
- def topic(tenant, params) do
- "cdc_stream:" <> tenant <> ":" <> :erlang.term_to_binary(params)
- end
-
- def track_manager(id, pid, conn) do
- Phoenix.Tracker.track(
- Stream.Tracker,
- self(),
- "postgres_cdc_stream",
- id,
- %{
- conn: conn,
- manager_pid: pid
- }
- )
- end
-end
diff --git a/lib/extensions/postgres_cdc_stream/db_settings.ex b/lib/extensions/postgres_cdc_stream/db_settings.ex
deleted file mode 100644
index afa3df2..0000000
--- a/lib/extensions/postgres_cdc_stream/db_settings.ex
+++ /dev/null
@@ -1,28 +0,0 @@
-defmodule Extensions.PostgresCdcStream.DbSettings do
- @moduledoc """
- Schema callbacks for CDC Stream implementation.
- """
-
- @spec default :: map()
- def default() do
- %{
- "publication" => "tealbase_realtime",
- "slot_name" => "tealbase_realtime_replication_slot",
- "ip_version" => 4,
- "dynamic_slot" => false
- }
- end
-
- @spec required :: [{String.t(), fun(), boolean()}]
- def required() do
- [
- {"region", &is_binary/1, false},
- {"db_host", &is_binary/1, true},
- {"db_name", &is_binary/1, true},
- {"db_user", &is_binary/1, true},
- {"db_port", &is_binary/1, true},
- {"db_password", &is_binary/1, true},
- {"ip_version", &is_integer/1, false}
- ]
- end
-end
diff --git a/lib/extensions/postgres_cdc_stream/message_dispatcher.ex b/lib/extensions/postgres_cdc_stream/message_dispatcher.ex
deleted file mode 100644
index 307f3d2..0000000
--- a/lib/extensions/postgres_cdc_stream/message_dispatcher.ex
+++ /dev/null
@@ -1,56 +0,0 @@
-# This file draws from https://github.com/phoenixframework/phoenix/blob/9941711736c8464b27b40914a4d954ed2b4f5958/lib/phoenix/channel/server.ex
-# License: https://github.com/phoenixframework/phoenix/blob/518a4640a70aa4d1370a64c2280d598e5b928168/LICENSE.md
-
-defmodule Extensions.PostgresCdcStream.MessageDispatcher do
- @moduledoc """
- Hook invoked by Phoenix.PubSub dispatch.
- """
-
- alias Phoenix.Socket.Broadcast
-
- def dispatch([_ | _] = topic_subscriptions, _from, payload) do
- _ =
- Enum.reduce(topic_subscriptions, %{}, fn
- {_pid, {:subscriber_fastlane, fastlane_pid, serializer, ids, join_topic, is_new_api}},
- cache ->
- Enum.map(ids, fn {_bin_id, id} -> id end)
- |> case do
- [_ | _] = valid_ids ->
- new_payload =
- if is_new_api do
- %Broadcast{
- topic: join_topic,
- event: "postgres_changes",
- payload: %{ids: valid_ids, data: payload}
- }
- else
- %Broadcast{
- topic: join_topic,
- event: payload.type,
- payload: payload
- }
- end
-
- broadcast_message(cache, fastlane_pid, new_payload, serializer)
-
- _ ->
- cache
- end
- end)
-
- :ok
- end
-
- defp broadcast_message(cache, fastlane_pid, msg, serializer) do
- case cache do
- %{^msg => encoded_msg} ->
- send(fastlane_pid, encoded_msg)
- cache
-
- %{} ->
- encoded_msg = serializer.fastlane!(msg)
- send(fastlane_pid, encoded_msg)
- Map.put(cache, msg, encoded_msg)
- end
- end
-end
diff --git a/lib/extensions/postgres_cdc_stream/replication.ex b/lib/extensions/postgres_cdc_stream/replication.ex
deleted file mode 100644
index 31c8b11..0000000
--- a/lib/extensions/postgres_cdc_stream/replication.ex
+++ /dev/null
@@ -1,280 +0,0 @@
-defmodule Extensions.PostgresCdcStream.Replication do
- @moduledoc """
- Subscribes to the Postgres replication slot, decodes write ahead log binary messages
- and broadcasts them to the `MessageDispatcher`.
- """
-
- use Postgrex.ReplicationConnection
- require Logger
-
- alias Extensions.PostgresCdcStream, as: Stream
- alias Realtime.Helpers, as: H
- alias Realtime.Adapters.Postgres.Decoder
-
- alias Decoder.Messages.{
- Begin,
- Relation,
- Insert,
- Update,
- Delete,
- Commit
- }
-
- alias Realtime.Adapters.Changes.{DeletedRecord, NewRecord, UpdatedRecord}
-
- def start_link(args) do
- opts = connection_opts(args)
-
- slot_name =
- if args["dynamic_slot"] do
- args["slot_name"] <> "_" <> (System.system_time(:second) |> Integer.to_string())
- else
- args["slot_name"]
- end
-
- init = %{
- tenant: args["id"],
- publication: args["publication"],
- slot_name: slot_name
- }
-
- Postgrex.ReplicationConnection.start_link(__MODULE__, init, opts)
- end
-
- @spec stop(pid) :: :ok
- def stop(pid) do
- GenServer.stop(pid)
- end
-
- @impl true
- def init(args) do
- tid = :ets.new(__MODULE__, [:public, :set])
- state = %{tid: tid, step: nil, ts: nil}
- {:ok, Map.merge(args, state)}
- end
-
- @impl true
- def handle_connect(state) do
- query =
- "CREATE_REPLICATION_SLOT #{state.slot_name} TEMPORARY LOGICAL pgoutput NOEXPORT_SNAPSHOT"
-
- {:query, query, %{state | step: :create_slot}}
- end
-
- @impl true
- def handle_result(results, %{step: :create_slot} = state) when is_list(results) do
- query =
- "START_REPLICATION SLOT #{state.slot_name} LOGICAL 0/0 (proto_version '1', publication_names '#{state.publication}')"
-
- Stream.track_manager(state.tenant, self(), nil)
- {:stream, query, [], %{state | step: :streaming}}
- end
-
- def handle_result(_results, state) do
- {:noreply, state}
- end
-
- @impl true
- def handle_data(<<?w, _wal_start::64, _wal_end::64, _clock::64, msg::binary>>, state) do
- new_state =
- Decoder.decode_message(msg)
- |> process_message(state)
-
- {:noreply, new_state}
- end
-
- # keepalive
- def handle_data(<<?k, wal_end::64, _clock::64, reply>>, state) do
- messages =
- case reply do
- 1 -> [<<?r, wal_end + 1::64, wal_end + 1::64, wal_end + 1::64, current_time()::64, 0>>]
- 0 -> []
- end
-
- {:noreply, messages, state}
- end
-
- def handle_data(data, state) do
- Logger.error("Unknown data: #{inspect(data)}")
- {:noreply, state}
- end
-
- defp process_message(
- %Relation{id: id, columns: columns, namespace: schema, name: table},
- state
- ) do
- columns =
- Enum.map(columns, fn %{name: name, type: type} ->
- %{name: name, type: type}
- end)
-
- :ets.insert(state.tid, {id, columns, schema, table})
- state
- end
-
- defp process_message(%Begin{commit_timestamp: ts}, state) do
- %{state | ts: ts}
- end
-
- defp process_message(%Commit{}, state) do
- %{state | ts: nil}
- end
-
- defp process_message(%Insert{} = msg, state) do
- Logger.debug("Got message: #{inspect(msg)}")
- [{_, columns, schema, table}] = :ets.lookup(state.tid, msg.relation_id)
-
- %NewRecord{
- columns: columns,
- commit_timestamp: state.ts,
- errors: nil,
- schema: schema,
- table: table,
- record: data_tuple_to_map(columns, msg.tuple_data),
- type: "UPDATE"
- }
- |> broadcast(state.tenant)
-
- state
- end
-
- defp process_message(%Update{} = msg, state) do
- Logger.debug("Got message: #{inspect(msg)}")
- [{_, columns, schema, table}] = :ets.lookup(state.tid, msg.relation_id)
-
- %UpdatedRecord{
- columns: columns,
- commit_timestamp: state.ts,
- errors: nil,
- schema: schema,
- table: table,
- old_record: data_tuple_to_map(columns, msg.old_tuple_data),
- record: data_tuple_to_map(columns, msg.tuple_data),
- type: "UPDATE"
- }
- |> broadcast(state.tenant)
-
- state
- end
-
- defp process_message(%Delete{} = msg, state) do
- Logger.debug("Got message: #{inspect(msg)}")
- [{_, columns, schema, table}] = :ets.lookup(state.tid, msg.relation_id)
-
- %DeletedRecord{
- columns: columns,
- commit_timestamp: state.ts,
- errors: nil,
- schema: schema,
- table: table,
- old_record: data_tuple_to_map(columns, msg.old_tuple_data),
- type: "UPDATE"
- }
- |> broadcast(state.tenant)
-
- state
- end
-
- defp process_message(msg, state) do
- Logger.error("Unknown message: #{inspect(msg)}")
- state
- end
-
- def broadcast(change, tenant) do
- [
- %{"schema" => "*"},
- %{"schema" => change.schema},
- %{"schema" => change.schema, "table" => "*"},
- %{"schema" => change.schema, "table" => change.table}
- ]
- |> List.foldl([], fn e, acc ->
- [Map.put(e, "event", "*"), Map.put(e, "event", change.type) | acc]
- end)
- |> List.foldl([], fn e, acc ->
- if Map.has_key?(change, :record) do
- Enum.reduce(change.record, [e], fn {k, v}, acc ->
- [Map.put(e, "filter", "#{k}=eq.#{v}") | acc]
- end) ++ acc
- else
- acc
- end
- end)
- |> Enum.each(fn params ->
- Phoenix.PubSub.broadcast_from(
- Realtime.PubSub,
- self(),
- Stream.topic(tenant, params),
- change,
- Stream.MessageDispatcher
- )
- end)
- end
-
- def data_tuple_to_map(column, tuple_data) do
- column
- |> Enum.with_index()
- |> Enum.reduce_while(%{}, fn {column_map, index}, acc ->
- case column_map do
- %{name: column_name, type: column_type}
- when is_binary(column_name) and is_binary(column_type) ->
- try do
- {:ok, elem(tuple_data, index)}
- rescue
- ArgumentError -> :error
- end
- |> case do
- {:ok, record} ->
- {:cont, Map.put(acc, column_name, convert_column_record(record, column_type))}
-
- :error ->
- {:halt, acc}
- end
-
- _ ->
- {:cont, acc}
- end
- end)
- end
-
- defp convert_column_record(record, "timestamp") when is_binary(record) do
- with {:ok, %NaiveDateTime{} = naive_date_time} <- Timex.parse(record, "{RFC3339}"),
- %DateTime{} = date_time <- Timex.to_datetime(naive_date_time) do
- DateTime.to_iso8601(date_time)
- else
- _ -> record
- end
- end
-
- defp convert_column_record(record, "timestamptz") when is_binary(record) do
- case Timex.parse(record, "{RFC3339}") do
- {:ok, %DateTime{} = date_time} -> DateTime.to_iso8601(date_time)
- _ -> record
- end
- end
-
- defp convert_column_record(record, _column_type) do
- record
- end
-
- @epoch DateTime.to_unix(~U[2000-01-01 00:00:00Z], :microsecond)
- defp current_time(), do: System.os_time(:microsecond) - @epoch
-
- def connection_opts(args) do
- {host, port, name, user, pass} =
- H.decrypt_creds(
- args["db_host"],
- args["db_port"],
- args["db_name"],
- args["db_user"],
- args["db_password"]
- )
-
- [
- hostname: host,
- database: name,
- username: user,
- password: pass,
- port: port
- ]
- end
-end
diff --git a/lib/extensions/postgres_cdc_stream/supervisor.ex b/lib/extensions/postgres_cdc_stream/supervisor.ex
deleted file mode 100644
index 8bebb05..0000000
--- a/lib/extensions/postgres_cdc_stream/supervisor.ex
+++ /dev/null
@@ -1,31 +0,0 @@
-defmodule Extensions.PostgresCdcStream.Supervisor do
- @moduledoc """
- Supervisor to spin up the Postgres CDC Stream tree.
- """
- use Supervisor
-
- alias Extensions.PostgresCdcStream, as: Stream
-
- @spec start_link :: :ignore | {:error, any} | {:ok, pid}
- def start_link() do
- Supervisor.start_link(__MODULE__, [], name: __MODULE__)
- end
-
- @impl true
- def init(_args) do
- :syn.add_node_to_scopes([PostgresCdcStream])
-
- children = [
- {
- PartitionSupervisor,
- partitions: 20,
- child_spec: DynamicSupervisor,
- strategy: :one_for_one,
- name: Stream.DynamicSupervisor
- },
- Stream.Tracker
- ]
-
- Supervisor.init(children, strategy: :one_for_one)
- end
-end
diff --git a/lib/extensions/postgres_cdc_stream/tracker.ex b/lib/extensions/postgres_cdc_stream/tracker.ex
deleted file mode 100644
index 6b597d3..0000000
--- a/lib/extensions/postgres_cdc_stream/tracker.ex
+++ /dev/null
@@ -1,40 +0,0 @@
-defmodule Extensions.PostgresCdcStream.Tracker do
- @moduledoc """
- Tracks the state of the CDC stream and broadcasts a message to the channel
- when the stream is down.
- """
- use Phoenix.Tracker
- require Logger
-
- alias RealtimeWeb.Endpoint
-
- def start_link(opts) do
- pool_opts = [
- name: __MODULE__,
- pubsub_server: Realtime.PubSub,
- pool_size: 5
- ]
-
- opts = Keyword.merge(pool_opts, opts)
- Phoenix.Tracker.start_link(__MODULE__, opts, opts)
- end
-
- def init(opts) do
- server = Keyword.fetch!(opts, :pubsub_server)
- {:ok, %{pubsub_server: server, node_name: Phoenix.PubSub.node_name(server)}}
- end
-
- def handle_diff(diff, state) do
- for {_topic, {_joins, leaves}} <- diff do
- for {id, _meta} <- leaves do
- Endpoint.local_broadcast(
- "postgres_cdc:" <> id,
- "postgres_cdc_down",
- nil
- )
- end
- end
-
- {:ok, state}
- end
-end
diff --git a/lib/extensions/postgres_cdc_stream/worker_supervisor.ex b/lib/extensions/postgres_cdc_stream/worker_supervisor.ex
deleted file mode 100644
index a95d3fc..0000000
--- a/lib/extensions/postgres_cdc_stream/worker_supervisor.ex
+++ /dev/null
@@ -1,23 +0,0 @@
-defmodule Extensions.PostgresCdcStream.WorkerSupervisor do
- @moduledoc false
- use Supervisor
- alias Extensions.PostgresCdcStream, as: Stream
-
- def start_link(args) do
- name = [name: {:via, :syn, {PostgresCdcStream, args["id"]}}]
- Supervisor.start_link(__MODULE__, args, name)
- end
-
- @impl true
- def init(args) do
- children = [
- %{
- id: Stream.Replication,
- start: {Stream.Replication, :start_link, [args]},
- restart: :transient
- }
- ]
-
- Supervisor.init(children, strategy: :one_for_all, max_restarts: 10, max_seconds: 60)
- end
-end
diff --git a/lib/extensions/postgres/adapters/changes.ex b/lib/realtime/adapters/changes.ex
similarity index 82%
rename from lib/extensions/postgres/adapters/changes.ex
rename to lib/realtime/adapters/changes.ex
index 6c4c727..450510c 100644
--- a/lib/extensions/postgres/adapters/changes.ex
+++ b/lib/realtime/adapters/changes.ex
@@ -7,9 +7,13 @@ defmodule Realtime.Adapters.Changes do
@moduledoc """
This module provides structures of CDC changes.
"""
- defmodule(Transaction, do: defstruct([:changes, :commit_timestamp]))
+ defmodule Transaction do
+ @moduledoc false
+ defstruct [:changes, :commit_timestamp]
+ end
defmodule NewRecord do
+ @moduledoc false
@derive {Jason.Encoder, except: [:subscription_ids]}
defstruct [
:columns,
@@ -24,6 +28,7 @@ defmodule Realtime.Adapters.Changes do
end
defmodule UpdatedRecord do
+ @moduledoc false
@derive {Jason.Encoder, except: [:subscription_ids]}
defstruct [
:columns,
@@ -39,6 +44,7 @@ defmodule Realtime.Adapters.Changes do
end
defmodule DeletedRecord do
+ @moduledoc false
@derive {Jason.Encoder, except: [:subscription_ids]}
defstruct [
:columns,
@@ -52,7 +58,10 @@ defmodule Realtime.Adapters.Changes do
]
end
- defmodule(TruncatedRelation, do: defstruct([:type, :schema, :table, :commit_timestamp]))
+ defmodule TruncatedRelation do
+ @moduledoc false
+ defstruct [:type, :schema, :table, :commit_timestamp]
+ end
end
Protocol.derive(Jason.Encoder, Realtime.Adapters.Changes.Transaction)
diff --git a/lib/extensions/postgres/adapters/postgres/decoder/decoder.ex b/lib/realtime/adapters/postgres/decoder.ex
similarity index 91%
rename from lib/extensions/postgres/adapters/postgres/decoder/decoder.ex
rename to lib/realtime/adapters/postgres/decoder.ex
index 7b90127..e5ea161 100644
--- a/lib/extensions/postgres/adapters/postgres/decoder/decoder.ex
+++ b/lib/realtime/adapters/postgres/decoder.ex
@@ -130,18 +130,6 @@ defmodule Realtime.Adapters.Postgres.Decoder do
"""
defstruct [:data]
end
-
- defmodule Relation.Column do
- @moduledoc """
- Struct representing a column in a relation in PostgreSQL's logical decoding output.
-
- * `flags` - Column flags.
- * `name` - The name of the column.
- * `type` - The OID of the column type.
- * `type_modifier` - The type modifier of the column.
- """
- defstruct [:flags, :name, :type, :type_modifier]
- end
end
require Logger
@@ -186,9 +174,7 @@ defmodule Realtime.Adapters.Postgres.Decoder do
}
end
- defp decode_message_impl(
- <<"C", _flags::binary-1, lsn::binary-8, end_lsn::binary-8, timestamp::integer-64>>
- ) do
+ defp decode_message_impl(<<"C", _flags::binary-1, lsn::binary-8, end_lsn::binary-8, timestamp::integer-64>>) do
%Commit{
flags: [],
lsn: decode_lsn(lsn),
@@ -229,9 +215,7 @@ defmodule Realtime.Adapters.Postgres.Decoder do
}
end
- defp decode_message_impl(
- <<"I", relation_id::integer-32, "N", number_of_columns::integer-16, tuple_data::binary>>
- ) do
+ defp decode_message_impl(<<"I", relation_id::integer-32, "N", number_of_columns::integer-16, tuple_data::binary>>) do
{<<>>, decoded_tuple_data} = decode_tuple_data(tuple_data, number_of_columns)
%Insert{
@@ -240,9 +224,7 @@ defmodule Realtime.Adapters.Postgres.Decoder do
}
end
- defp decode_message_impl(
- <<"U", relation_id::integer-32, "N", number_of_columns::integer-16, tuple_data::binary>>
- ) do
+ defp decode_message_impl(<<"U", relation_id::integer-32, "N", number_of_columns::integer-16, tuple_data::binary>>) do
{<<>>, decoded_tuple_data} = decode_tuple_data(tuple_data, number_of_columns)
%Update{
@@ -252,8 +234,7 @@ defmodule Realtime.Adapters.Postgres.Decoder do
end
defp decode_message_impl(
- <<"U", relation_id::integer-32, key_or_old::binary-1, number_of_columns::integer-16,
- tuple_data::binary>>
+ <<"U", relation_id::integer-32, key_or_old::binary-1, number_of_columns::integer-16, tuple_data::binary>>
)
when key_or_old == "O" or key_or_old == "K" do
{<<"N", new_number_of_columns::integer-16, new_tuple_binary::binary>>, old_decoded_tuple_data} =
@@ -273,8 +254,7 @@ defmodule Realtime.Adapters.Postgres.Decoder do
end
defp decode_message_impl(
- <<"D", relation_id::integer-32, key_or_old::binary-1, number_of_columns::integer-16,
- tuple_data::binary>>
+ <<"D", relation_id::integer-32, key_or_old::binary-1, number_of_columns::integer-16, tuple_data::binary>>
)
when key_or_old == "K" or key_or_old == "O" do
{<<>>, decoded_tuple_data} = decode_tuple_data(tuple_data, number_of_columns)
@@ -289,9 +269,7 @@ defmodule Realtime.Adapters.Postgres.Decoder do
end
end
- defp decode_message_impl(
- <<"T", number_of_relations::integer-32, options::integer-8, column_ids::binary>>
- ) do
+ defp decode_message_impl(<<"T", number_of_relations::integer-32, options::integer-8, column_ids::binary>>) do
truncated_relations =
for relation_id_bin <- column_ids |> :binary.bin_to_list() |> Enum.chunk_every(4),
do: relation_id_bin |> :binary.list_to_bin() |> :binary.decode_unsigned()
@@ -313,7 +291,7 @@ defmodule Realtime.Adapters.Postgres.Decoder do
defp decode_message_impl(<<"Y", data_type_id::integer-32, namespace_and_name::binary>>) do
[namespace, name_with_null] = :binary.split(namespace_and_name, <<0>>)
- name = String.slice(name_with_null, 0..-2)
+ name = String.slice(name_with_null, 0..-2//1)
%Type{
id: data_type_id,
diff --git a/lib/extensions/postgres/adapters/postgres/decoder/oid_database.ex b/lib/realtime/adapters/postgres/oid_database.ex
similarity index 98%
rename from lib/extensions/postgres/adapters/postgres/decoder/oid_database.ex
rename to lib/realtime/adapters/postgres/oid_database.ex
index 73f621b..4291cfb 100644
--- a/lib/extensions/postgres/adapters/postgres/decoder/oid_database.ex
+++ b/lib/realtime/adapters/postgres/oid_database.ex
@@ -11,7 +11,7 @@
# Following query was used to generate this file:
# SELECT json_object_agg(UPPER(PT.typname), PT.oid::int4 ORDER BY pt.oid)
# FROM pg_type PT
-# WHERE typnamespace = (SELECT pgn.oid FROM pg_namespace pgn WHERE nspname = 'pg_catalog') -- Take only builting Postgres types with stable OID (extension types are not guaranted to be stable)
+# WHERE typnamespace = (SELECT pgn.oid FROM pg_namespace pgn WHERE nspname = 'pg_catalog') -- Take only built-in Postgres types with stable OID (extension types are not guaranteed to be stable)
# AND typtype = 'b' -- Only basic types
# AND typisdefined -- Ignore undefined types
diff --git a/lib/realtime/adapters/postgres/protocol.ex b/lib/realtime/adapters/postgres/protocol.ex
new file mode 100644
index 0000000..6218348
--- /dev/null
+++ b/lib/realtime/adapters/postgres/protocol.ex
@@ -0,0 +1,63 @@
+defmodule Realtime.Adapters.Postgres.Protocol do
+ @moduledoc """
+ This module is responsible for parsing the Postgres WAL messages.
+ """
+ alias Realtime.Adapters.Postgres.Protocol.Write
+ alias Realtime.Adapters.Postgres.Protocol.KeepAlive
+
+ defguard is_write(value) when binary_part(value, 0, 1) == <<?w>>
+ defguard is_keep_alive(value) when binary_part(value, 0, 1) == <<?k>>
+
+ def parse(<<?w, server_wal_start::64, server_wal_end::64, server_system_clock::64, message::binary>>) do
+ %Write{
+ server_wal_start: server_wal_start,
+ server_wal_end: server_wal_end,
+ server_system_clock: server_system_clock,
+ message: message
+ }
+ end
+
+ def parse(<<?k, wal_end::64, clock::64, reply>>) do
+ reply =
+ case reply do
+ 0 -> :later
+ 1 -> :now
+ end
+
+ %KeepAlive{wal_end: wal_end, clock: clock, reply: reply}
+ end
+
+ @doc """
+ Message to send to the server to request a standby status update.
+
+ Check https://www.postgresql.org/docs/current/protocol-replication.html#PROTOCOL-REPLICATION-STANDBY-STATUS-UPDATE for more information
+ """
+ @spec standby_status(integer(), integer(), integer(), :now | :later, integer() | nil) :: [
+ binary()
+ ]
+ def standby_status(last_wal_received, last_wal_flushed, last_wal_applied, reply, clock \\ nil)
+
+ def standby_status(last_wal_received, last_wal_flushed, last_wal_applied, reply, nil) do
+ standby_status(last_wal_received, last_wal_flushed, last_wal_applied, reply, current_time())
+ end
+
+ def standby_status(last_wal_received, last_wal_flushed, last_wal_applied, reply, clock) do
+ reply =
+ case reply do
+ :now -> 1
+ :later -> 0
+ end
+
+ [
+ <<?r, last_wal_received::64, last_wal_flushed::64, last_wal_applied::64, clock::64, reply>>
+ ]
+ end
+
+ @doc """
+ Message to send to the server indicating no operation is needed, since the server can wait
+ """
+ def hold, do: []
+
+ @epoch DateTime.to_unix(~U[2000-01-01 00:00:00Z], :microsecond)
+ def current_time, do: System.os_time(:microsecond) - @epoch
+end
diff --git a/lib/realtime/adapters/postgres/protocol/keep_alive.ex b/lib/realtime/adapters/postgres/protocol/keep_alive.ex
new file mode 100644
index 0000000..3c47ba0
--- /dev/null
+++ b/lib/realtime/adapters/postgres/protocol/keep_alive.ex
@@ -0,0 +1,24 @@
+defmodule Realtime.Adapters.Postgres.Protocol.KeepAlive do
+ @moduledoc """
+ Primary keepalive message (B)
+ Byte1('k')
+ Identifies the message as a sender keepalive.
+
+ Int64
+ The current end of WAL on the server.
+
+ Int64
+ The server's system clock at the time of transmission, as microseconds since midnight on 2000-01-01.
+
+ Byte1
+ 1 means that the client should reply to this message as soon as possible, to avoid a timeout disconnect. 0 otherwise.
+
+ The receiving process can send replies back to the sender at any time, using the standby status update message format (sent in the payload of a CopyData message).
+ """
+ @type t :: %__MODULE__{
+ wal_end: integer(),
+ clock: integer(),
+ reply: :now | :later
+ }
+ defstruct [:wal_end, :clock, :reply]
+end
diff --git a/lib/realtime/adapters/postgres/protocol/write.ex b/lib/realtime/adapters/postgres/protocol/write.ex
new file mode 100644
index 0000000..68134c3
--- /dev/null
+++ b/lib/realtime/adapters/postgres/protocol/write.ex
@@ -0,0 +1,22 @@
+defmodule Realtime.Adapters.Postgres.Protocol.Write do
+ @moduledoc """
+ XLogData (B)
+ Byte1('w')
+ Identifies the message as WAL data.
+
+ Int64
+ The starting point of the WAL data in this message.
+
+ Int64
+ The current end of WAL on the server.
+
+ Int64
+ The server's system clock at the time of transmission, as microseconds since midnight on 2000-01-01.
+
+ Byten
+ A section of the WAL data stream.
+
+ A single WAL record is never split across two XLogData messages. When a WAL record crosses a WAL page boundary, and is therefore already split using continuation records, it can be split at the page boundary. In other words, the first main WAL record and its continuation records can be sent in different XLogData messages.
+ """
+ defstruct [:server_wal_start, :server_wal_end, :server_system_clock, :message]
+end
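
The three protocol modules above are intended to be driven from a `Postgrex.ReplicationConnection` `handle_data/2` callback. A minimal sketch of how the guards, `parse/1`, `standby_status/4`, and `hold/0` fit together; the consumer module `MyApp.WalHandler` is hypothetical and not part of this diff:

```elixir
defmodule MyApp.WalHandler do
  use Postgrex.ReplicationConnection
  require Logger

  import Realtime.Adapters.Postgres.Protocol,
    only: [is_write: 1, is_keep_alive: 1, parse: 1, standby_status: 4, hold: 0]

  alias Realtime.Adapters.Postgres.Protocol.{KeepAlive, Write}

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_data(data, state) when is_write(data) do
    %Write{message: wal_record} = parse(data)
    # `wal_record` is pgoutput-encoded, ready for Decoder.decode_message/1.
    Logger.debug("WAL record: #{byte_size(wal_record)} bytes")
    {:noreply, hold(), state}
  end

  def handle_data(data, state) when is_keep_alive(data) do
    %KeepAlive{reply: reply, wal_end: wal_end} = parse(data)

    # Reply immediately when the server requests it; otherwise send nothing.
    messages =
      case reply do
        :now -> standby_status(wal_end + 1, wal_end + 1, wal_end + 1, :now)
        :later -> hold()
      end

    {:noreply, messages, state}
  end
end
```
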
diff --git a/lib/realtime/api.ex b/lib/realtime/api.ex
index b947878..f89500d 100644
--- a/lib/realtime/api.ex
+++ b/lib/realtime/api.ex
@@ -6,7 +6,13 @@ defmodule Realtime.Api do
import Ecto.Query
- alias Realtime.{Repo, Api.Tenant, Api.Extensions, RateCounter, GenCounter, Tenants}
+ alias Realtime.Repo
+ alias Realtime.Repo.Replica
+ alias Realtime.Api.Tenant
+ alias Realtime.Api.Extensions
+ alias Realtime.RateCounter
+ alias Realtime.GenCounter
+ alias Realtime.Tenants
@doc """
Returns the list of tenants.
@@ -17,16 +23,23 @@ defmodule Realtime.Api do
[%Tenant{}, ...]
"""
- def list_tenants() do
- repo_replica = Repo.replica()
+ def list_tenants do
+ repo_replica = Replica.replica()
Tenant
|> repo_replica.all()
|> repo_replica.preload(:extensions)
end
+ @doc """
+ Returns the list of tenants with filter options:
+ * order_by
+ * search external id
+ * limit
+ * ordering (desc / asc)
+ """
def list_tenants(opts) when is_list(opts) do
- repo_replica = Repo.replica()
+ repo_replica = Replica.replica()
field = Keyword.get(opts, :order_by, "inserted_at") |> String.to_atom()
external_id = Keyword.get(opts, :search)
@@ -64,7 +77,7 @@ defmodule Realtime.Api do
** (Ecto.NoResultsError)
"""
- def get_tenant!(id), do: Repo.replica().get!(Tenant, id)
+ def get_tenant!(id), do: Replica.replica().get!(Tenant, id)
@doc """
Creates a tenant.
@@ -101,9 +114,21 @@ defmodule Realtime.Api do
def update_tenant(%Tenant{} = tenant, attrs) do
tenant
|> Tenant.changeset(attrs)
+ |> tap(&maybe_trigger_disconnect/1)
|> Repo.update()
end
+ defp maybe_trigger_disconnect(%Ecto.Changeset{
+ changes: changes,
+ valid?: true,
+ data: %{external_id: external_id}
+ })
+ when is_map_key(changes, :jwt_jwks) or is_map_key(changes, :jwt_secret) do
+ Phoenix.PubSub.broadcast!(Realtime.PubSub, "realtime:operations:" <> external_id, :disconnect)
+ end
+
+ defp maybe_trigger_disconnect(_), do: nil
+
@doc """
Deletes a tenant.
@@ -146,13 +171,18 @@ defmodule Realtime.Api do
Tenant.changeset(tenant, attrs)
end
- @spec get_tenant_by_external_id(String.t()) :: Tenant.t() | nil
- def get_tenant_by_external_id(external_id) do
- repo_replica = Repo.replica()
+ @spec get_tenant_by_external_id(String.t(), atom()) :: Tenant.t() | nil
+ def get_tenant_by_external_id(external_id, repo \\ :replica)
+ when repo in [:primary, :replica] do
+ repo =
+ case repo do
+ :primary -> Repo
+ :replica -> Replica.replica()
+ end
Tenant
- |> repo_replica.get_by(external_id: external_id)
- |> repo_replica.preload(:extensions)
+ |> repo.get_by(external_id: external_id)
+ |> repo.preload(:extensions)
end
def list_extensions(type \\ "postgres_cdc_rls") do
@@ -160,7 +190,7 @@ defmodule Realtime.Api do
where: e.type == ^type,
select: e
)
- |> Repo.replica().all()
+ |> Replica.replica().all()
end
def rename_settings_field(from, to) do
diff --git a/lib/realtime/api/extensions.ex b/lib/realtime/api/extensions.ex
index e954a67..4ecb1a0 100644
--- a/lib/realtime/api/extensions.ex
+++ b/lib/realtime/api/extensions.ex
@@ -5,7 +5,8 @@ defmodule Realtime.Api.Extensions do
use Ecto.Schema
import Ecto.Changeset
- import Realtime.Helpers, only: [encrypt!: 2]
+
+ alias Realtime.Crypto
@primary_key {:id, :binary_id, autogenerate: true}
@foreign_key_type :binary_id
@@ -42,11 +43,9 @@ defmodule Realtime.Api.Extensions do
def encrypt_settings(changeset, required) do
update_change(changeset, :settings, fn settings ->
- secure_key = Application.get_env(:realtime, :db_enc_key)
-
Enum.reduce(required, settings, fn
{field, _, true}, acc ->
- encrypted = encrypt!(settings[field], secure_key)
+ encrypted = Crypto.encrypt!(settings[field])
%{acc | field => encrypted}
_, acc ->
diff --git a/lib/realtime/api/message.ex b/lib/realtime/api/message.ex
new file mode 100644
index 0000000..90ebc5b
--- /dev/null
+++ b/lib/realtime/api/message.ex
@@ -0,0 +1,47 @@
+defmodule Realtime.Api.Message do
+ @moduledoc """
+ Defines the Message schema to be used to check RLS authorization policies
+ """
+ use Ecto.Schema
+ import Ecto.Changeset
+
+ @primary_key {:id, Ecto.UUID, autogenerate: true}
+ @schema_prefix "realtime"
+
+ schema "messages" do
+ field(:topic, :string)
+ field(:extension, Ecto.Enum, values: [:broadcast, :presence])
+ field(:payload, :map)
+ field(:event, :string)
+ field(:private, :boolean)
+
+ timestamps()
+ end
+
+ def changeset(message, attrs) do
+ message
+ |> cast(attrs, [
+ :topic,
+ :extension,
+ :payload,
+ :event,
+ :private,
+ :inserted_at,
+ :updated_at
+ ])
+ |> validate_required([:topic, :extension])
+ |> put_timestamp(:updated_at)
+ |> maybe_put_timestamp(:inserted_at)
+ end
+
+ defp put_timestamp(changeset, field) do
+ changeset |> put_change(field, NaiveDateTime.utc_now() |> NaiveDateTime.truncate(:second))
+ end
+
+ defp maybe_put_timestamp(changeset, field) do
+ case Map.get(changeset.data, field) do
+ nil -> put_timestamp(changeset, field)
+ _ -> changeset
+ end
+ end
+end
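
`changeset/2` stamps `updated_at` itself and backfills `inserted_at` when it is missing, so callers only supply the authorization-relevant fields. A small example with illustrative values:

```elixir
alias Realtime.Api.Message

changeset =
  Message.changeset(%Message{}, %{
    topic: "room:1",
    extension: :broadcast,
    event: "new_message",
    private: true,
    payload: %{"body" => "hello"}
  })

# :topic and :extension are the only required fields.
true = changeset.valid?
```
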
diff --git a/lib/realtime/api/tenant.ex b/lib/realtime/api/tenant.ex
index 7278b19..19bc614 100644
--- a/lib/realtime/api/tenant.ex
+++ b/lib/realtime/api/tenant.ex
@@ -5,7 +5,7 @@ defmodule Realtime.Api.Tenant do
use Ecto.Schema
import Ecto.Changeset
alias Realtime.Api.Extensions
- import Realtime.Helpers, only: [encrypt!: 2]
+ alias Realtime.Crypto
@type t :: %__MODULE__{}
@@ -15,14 +15,18 @@ defmodule Realtime.Api.Tenant do
field(:name, :string)
field(:external_id, :string)
field(:jwt_secret, :string)
+ field(:jwt_jwks, :map)
field(:postgres_cdc_default, :string)
- field(:max_concurrent_users, :integer, default: 200)
- field(:max_events_per_second, :integer, default: 100)
- field(:max_bytes_per_second, :integer, default: 100_000)
- field(:max_channels_per_client, :integer, default: 100)
- field(:max_joins_per_second, :integer, default: 500)
+ field(:max_concurrent_users, :integer)
+ field(:max_events_per_second, :integer)
+ field(:max_bytes_per_second, :integer)
+ field(:max_channels_per_client, :integer)
+ field(:max_joins_per_second, :integer)
+ field(:suspend, :boolean, default: false)
field(:events_per_second_rolling, :float, virtual: true)
field(:events_per_second_now, :integer, virtual: true)
+ field(:private_only, :boolean, default: false)
+ field(:migrations_ran, :integer, default: 0)
has_many(:extensions, Realtime.Api.Extensions,
foreign_key: :tenant_external_id,
@@ -57,19 +61,21 @@ defmodule Realtime.Api.Tenant do
attrs
end
- ###
-
tenant
|> cast(attrs, [
:name,
:external_id,
:jwt_secret,
+ :jwt_jwks,
:max_concurrent_users,
:max_events_per_second,
:postgres_cdc_default,
:max_bytes_per_second,
:max_channels_per_client,
- :max_joins_per_second
+ :max_joins_per_second,
+ :suspend,
+ :private_only,
+ :migrations_ran
])
|> validate_required([
:external_id,
@@ -77,13 +83,25 @@ defmodule Realtime.Api.Tenant do
])
|> unique_constraint([:external_id])
|> encrypt_jwt_secret()
+ |> maybe_set_default(:max_bytes_per_second, :tenant_max_bytes_per_second)
+ |> maybe_set_default(:max_channels_per_client, :tenant_max_channels_per_client)
+ |> maybe_set_default(:max_concurrent_users, :tenant_max_concurrent_users)
+ |> maybe_set_default(:max_events_per_second, :tenant_max_events_per_second)
+ |> maybe_set_default(:max_joins_per_second, :tenant_max_joins_per_second)
|> cast_assoc(:extensions, with: &Extensions.changeset/2)
end
+ def maybe_set_default(changeset, property, config_key) do
+ has_key? = Map.get(changeset.data, property) || Map.get(changeset.changes, property)
+
+ if has_key? do
+ changeset
+ else
+ put_change(changeset, property, Application.fetch_env!(:realtime, config_key))
+ end
+ end
+
def encrypt_jwt_secret(changeset) do
- update_change(changeset, :jwt_secret, fn jwt_secret ->
- secure_key = Application.get_env(:realtime, :db_enc_key)
- encrypt!(jwt_secret, secure_key)
- end)
+ update_change(changeset, :jwt_secret, &Crypto.encrypt!/1)
end
end
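
The `maybe_set_default/3` fallbacks in the changeset read application config rather than schema defaults. A sketch of the corresponding keys (values shown are the defaults the schema used to carry and are illustrative):

```elixir
# config/runtime.exs — illustrative values
config :realtime,
  tenant_max_bytes_per_second: 100_000,
  tenant_max_channels_per_client: 100,
  tenant_max_concurrent_users: 200,
  tenant_max_events_per_second: 100,
  tenant_max_joins_per_second: 500
```
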
diff --git a/lib/realtime/application.ex b/lib/realtime/application.ex
index bff658b..746b501 100644
--- a/lib/realtime/application.ex
+++ b/lib/realtime/application.ex
@@ -4,12 +4,24 @@ defmodule Realtime.Application do
@moduledoc false
use Application
- require Logger, warn: false
+ require Logger
+
+ alias Realtime.Repo.Replica
defmodule JwtSecretError, do: defexception([:message])
defmodule JwtClaimValidatorsError, do: defexception([:message])
def start(_type, _args) do
+ opentelemetry_setup()
+ primary_config = :logger.get_primary_config()
+
+ # add the region to logs
+ :ok =
+ :logger.set_primary_config(
+ :metadata,
+ Enum.into([region: System.get_env("REGION")], primary_config.metadata)
+ )
+
topologies = Application.get_env(:libcluster, :topologies) || []
if Application.fetch_env!(:realtime, :secure_channels) do
@@ -27,62 +39,61 @@ defmodule Realtime.Application do
:gen_event.swap_sup_handler(
:erl_signal_server,
{:erl_signal_handler, []},
- {Realtime.SignalHandler, []}
+ {Realtime.SignalHandler, %{handler_mod: :erl_signal_handler}}
)
Realtime.PromEx.set_metrics_tags()
+ :ets.new(Realtime.Tenants.Connect, [:named_table, :set, :public])
+ :syn.set_event_handler(Realtime.SynHandler)
+
+ :ok = :syn.add_node_to_scopes([:users, RegionNodes, Realtime.Tenants.Connect])
- Registry.start_link(
- keys: :duplicate,
- name: Realtime.Registry
- )
-
- Registry.start_link(
- keys: :unique,
- name: Realtime.Registry.Unique
- )
-
- :syn.add_node_to_scopes([:users, RegionNodes])
- :syn.join(RegionNodes, System.get_env("FLY_REGION"), self(), node: node())
-
- extensions_supervisors =
- Enum.reduce(Application.get_env(:realtime, :extensions), [], fn
- {_, %{supervisor: name}}, acc ->
- [
- %{
- id: name,
- start: {name, :start_link, []},
- restart: :transient
- }
- | acc
- ]
-
- _, acc ->
- acc
- end)
+ region = Application.get_env(:realtime, :region)
+ :syn.join(RegionNodes, region, self(), node: node())
+ migration_partition_slots = Application.get_env(:realtime, :migration_partition_slots)
+ connect_partition_slots = Application.get_env(:realtime, :connect_partition_slots)
children =
[
+ Realtime.ErlSysMon,
Realtime.PromEx,
- {Cluster.Supervisor, [topologies, [name: Realtime.ClusterSupervisor]]},
+ {Realtime.Telemetry.Logger, handler_id: "telemetry-logger"},
Realtime.Repo,
RealtimeWeb.Telemetry,
+ {Cluster.Supervisor, [topologies, [name: Realtime.ClusterSupervisor]]},
{Phoenix.PubSub, name: Realtime.PubSub, pool_size: 10},
- Realtime.GenCounter.DynamicSupervisor,
{Cachex, name: Realtime.RateCounter},
- Realtime.Tenants.Cache,
+ Realtime.Tenants.CacheSupervisor,
+ Realtime.GenCounter.DynamicSupervisor,
Realtime.RateCounter.DynamicSupervisor,
- RealtimeWeb.Endpoint,
- RealtimeWeb.Presence,
- {Task.Supervisor, name: Realtime.TaskSupervisor},
Realtime.Latency,
- Realtime.Telemetry.Logger
- ] ++ extensions_supervisors
+ {Registry, keys: :duplicate, name: Realtime.Registry},
+ {Registry, keys: :unique, name: Realtime.Registry.Unique},
+ {Task.Supervisor, name: Realtime.TaskSupervisor},
+ {PartitionSupervisor,
+ child_spec: {DynamicSupervisor, max_restarts: 0},
+ strategy: :one_for_one,
+ name: Realtime.Tenants.Migrations.DynamicSupervisor,
+ partitions: migration_partition_slots},
+ {PartitionSupervisor,
+ child_spec: DynamicSupervisor,
+ strategy: :one_for_one,
+ name: Realtime.Tenants.Connect.DynamicSupervisor,
+ partitions: connect_partition_slots},
+ {PartitionSupervisor,
+ child_spec: DynamicSupervisor,
+ strategy: :one_for_one,
+ name: Realtime.Tenants.ReplicationConnection.DynamicSupervisor},
+ {PartitionSupervisor,
+ child_spec: DynamicSupervisor, strategy: :one_for_one, name: Realtime.Tenants.Listen.DynamicSupervisor},
+ RealtimeWeb.Endpoint,
+ RealtimeWeb.Presence
+ ] ++ extensions_supervisors() ++ janitor_tasks()
children =
- case Realtime.Repo.replica() do
+ case Replica.replica() do
Realtime.Repo -> children
- replica_repo -> List.insert_at(children, 2, replica_repo)
+ replica -> List.insert_at(children, 2, replica)
end
# See https://hexdocs.pm/elixir/Supervisor.html
@@ -90,4 +101,48 @@ defmodule Realtime.Application do
opts = [strategy: :one_for_one, name: Realtime.Supervisor]
Supervisor.start_link(children, opts)
end
+
+ defp extensions_supervisors do
+ Enum.reduce(Application.get_env(:realtime, :extensions), [], fn
+ {_, %{supervisor: name}}, acc ->
+ opts = %{
+ id: name,
+ start: {name, :start_link, []},
+ restart: :transient
+ }
+
+ [opts | acc]
+
+ _, acc ->
+ acc
+ end)
+ end
+
+ defp janitor_tasks do
+ if Application.fetch_env!(:realtime, :run_janitor) do
+ janitor_max_children =
+ Application.get_env(:realtime, :janitor_max_children)
+
+ janitor_children_timeout =
+ Application.get_env(:realtime, :janitor_children_timeout)
+
+ [
+ {Task.Supervisor,
+ name: Realtime.Tenants.Janitor.TaskSupervisor,
+ max_children: janitor_max_children,
+ max_seconds: janitor_children_timeout,
+ max_restarts: 1},
+ Realtime.Tenants.Janitor,
+ Realtime.MetricsCleaner
+ ]
+ else
+ []
+ end
+ end
+
+ defp opentelemetry_setup do
+ :opentelemetry_cowboy.setup()
+ OpentelemetryPhoenix.setup(adapter: :cowboy2)
+ OpentelemetryEcto.setup([:realtime, :repo], db_statement: :enabled)
+ end
end
diff --git a/lib/realtime/cluster_strategy/postgres.ex b/lib/realtime/cluster_strategy/postgres.ex
new file mode 100644
index 0000000..eed4f15
--- /dev/null
+++ b/lib/realtime/cluster_strategy/postgres.ex
@@ -0,0 +1,104 @@
+defmodule Realtime.Cluster.Strategy.Postgres do
+ @moduledoc """
+ A libcluster strategy that uses Postgres LISTEN/NOTIFY to determine the cluster topology.
+
+ This strategy operates by having all nodes in the cluster listen for and send notifications to a shared Postgres channel.
+
+ When a node comes online, it begins to broadcast its name in a "heartbeat" message to the channel. All other nodes that receive this message attempt to connect to it.
+
+ This strategy does not check connectivity between nodes and does not disconnect them.
+
+ ## Options
+
+ * `heartbeat_interval` - The interval at which to send heartbeat messages in milliseconds (optional; default: 5_000)
+ * `channel_name` - The name of the channel to which nodes will listen and notify (required; e.g. "cluster")
+ """
+ use GenServer
+
+ alias Cluster.Strategy
+ alias Cluster.Logger
+ alias Postgrex, as: P
+
+ def start_link(args), do: GenServer.start_link(__MODULE__, args)
+
+ def init([state]) do
+ opts = [
+ hostname: Keyword.fetch!(state.config, :hostname),
+ username: Keyword.fetch!(state.config, :username),
+ password: Keyword.fetch!(state.config, :password),
+ database: Keyword.fetch!(state.config, :database),
+ port: Keyword.fetch!(state.config, :port),
+ parameters: Keyword.fetch!(state.config, :parameters),
+ channel_name: Keyword.fetch!(state.config, :channel_name)
+ ]
+
+ new_config =
+ state.config
+ |> Keyword.put_new(:heartbeat_interval, 5_000)
+ |> Keyword.delete(:url)
+
+ meta = %{
+ opts: fn -> opts end,
+ conn: nil,
+ conn_notif: nil,
+ heartbeat_ref: make_ref()
+ }
+
+ {:ok, %{state | config: new_config, meta: meta}, {:continue, :connect}}
+ end
+
+ def handle_continue(:connect, state) do
+ with {:ok, conn} <- P.start_link(state.meta.opts.()),
+ {:ok, conn_notif} <- P.Notifications.start_link(state.meta.opts.()),
+ {_, _} <- P.Notifications.listen(conn_notif, state.config[:channel_name]) do
+ Logger.info(state.topology, "Connected to Postgres database")
+
+ meta = %{
+ state.meta
+ | conn: conn,
+ conn_notif: conn_notif,
+ heartbeat_ref: heartbeat(0)
+ }
+
+ {:noreply, put_in(state.meta, meta)}
+ else
+ reason ->
+ Logger.error(state.topology, "Failed to connect to Postgres: #{inspect(reason)}")
+ {:noreply, state}
+ end
+ end
+
+ def handle_info(:heartbeat, state) do
+ Process.cancel_timer(state.meta.heartbeat_ref)
+ P.query(state.meta.conn, "NOTIFY #{state.config[:channel_name]}, '#{node()}'", [])
+ ref = heartbeat(state.config[:heartbeat_interval])
+ {:noreply, put_in(state.meta.heartbeat_ref, ref)}
+ end
+
+ def handle_info({:notification, _, _, _, node}, state) do
+ node = String.to_atom(node)
+
+ if node != node() do
+ topology = state.topology
+ Logger.debug(topology, "Trying to connect to node: #{node}")
+
+ case Strategy.connect_nodes(topology, state.connect, state.list_nodes, [node]) do
+ :ok -> Logger.debug(topology, "Connected to node: #{node}")
+ {:error, _} -> Logger.error(topology, "Failed to connect to node: #{node}")
+ end
+ end
+
+ {:noreply, state}
+ end
+
+ def handle_info(msg, state) do
+ Logger.error(state.topology, "Undefined message #{inspect(msg, pretty: true)}")
+ {:noreply, state}
+ end
+
+ ### Internal functions
+ @spec heartbeat(non_neg_integer()) :: reference()
+ defp heartbeat(interval) when interval >= 0 do
+ Process.send_after(self(), :heartbeat, interval)
+ end
+end
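
As with any libcluster strategy, this one is selected through a topology entry. A hedged sketch of the configuration; the keys under `config:` mirror the `Keyword.fetch!`/`put_new` calls in `init/1`, and all values are placeholders:

```elixir
# config/runtime.exs — illustrative values only
config :libcluster,
  topologies: [
    postgres: [
      strategy: Realtime.Cluster.Strategy.Postgres,
      config: [
        hostname: "localhost",
        username: "postgres",
        password: "postgres",
        database: "realtime",
        port: 5432,
        parameters: [],
        channel_name: "cluster",
        # optional; defaults to 5_000 ms
        heartbeat_interval: 5_000
      ]
    ]
  ]
```
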
diff --git a/lib/realtime/context_cache.ex b/lib/realtime/context_cache.ex
index 55f5aee..afacf4c 100644
--- a/lib/realtime/context_cache.ex
+++ b/lib/realtime/context_cache.ex
@@ -9,14 +9,9 @@ defmodule Realtime.ContextCache do
cache = cache_name(context)
cache_key = {{fun, arity}, args}
- case Cachex.fetch(cache, cache_key, fn {{_fun, _arity}, args} ->
- {:commit, {:cached, apply(context, fun, args)}}
- end) do
- {:commit, {:cached, value}} ->
- value
-
- {:ok, {:cached, value}} ->
- value
+ case Cachex.fetch(cache, cache_key, fn {{_fun, _arity}, args} -> {:commit, {:cached, apply(context, fun, args)}} end) do
+ {:commit, {:cached, value}} -> value
+ {:ok, {:cached, value}} -> value
end
end
diff --git a/lib/realtime/crypto.ex b/lib/realtime/crypto.ex
new file mode 100644
index 0000000..576dc00
--- /dev/null
+++ b/lib/realtime/crypto.ex
@@ -0,0 +1,40 @@
+defmodule Realtime.Crypto do
+ @moduledoc """
+ Encrypt and decrypt operations required by Realtime. Uses the secret read from `Application.get_env(:realtime, :db_enc_key)`.
+ """
+
+ @doc """
+ Encrypts the given text
+ """
+ @spec encrypt!(binary()) :: binary()
+ def encrypt!(text) do
+ secret_key = Application.get_env(:realtime, :db_enc_key)
+
+ :aes_128_ecb
+ |> :crypto.crypto_one_time(secret_key, pad(text), true)
+ |> Base.encode64()
+ end
+
+ @doc """
+ Decrypts the given base64 encoded text
+ """
+ @spec decrypt!(binary()) :: binary()
+ def decrypt!(base64_text) do
+ secret_key = Application.get_env(:realtime, :db_enc_key)
+ crypto_text = Base.decode64!(base64_text)
+
+ :aes_128_ecb
+ |> :crypto.crypto_one_time(secret_key, crypto_text, false)
+ |> unpad()
+ end
+
+ defp pad(data) do
+ to_add = 16 - rem(byte_size(data), 16)
+ data <> :binary.copy(<<to_add>>, to_add)
+ end
+
+ defp unpad(data) do
+ to_remove = :binary.last(data)
+ :binary.part(data, 0, byte_size(data) - to_remove)
+ end
+end
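
The scheme is AES-128-ECB with PKCS#7-style padding (each pad byte equals the pad length), base64-encoded for storage. A round-trip sketch, assuming `:db_enc_key` holds a 16-byte binary as `:aes_128_ecb` requires:

```elixir
# Illustrative key; real deployments set this via application config.
Application.put_env(:realtime, :db_enc_key, "0123456789abcdef")

encrypted = Realtime.Crypto.encrypt!("super-secret-value")
# `encrypted` is base64 text, safe to store in the tenants/extensions tables.

"super-secret-value" = Realtime.Crypto.decrypt!(encrypted)
```

Note that ECB mode is deterministic: equal plaintexts produce equal ciphertexts, which keeps stored values stable but also reveals when two tenants share the same secret.
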
diff --git a/lib/realtime/database.ex b/lib/realtime/database.ex
new file mode 100644
index 0000000..81550b1
--- /dev/null
+++ b/lib/realtime/database.ex
@@ -0,0 +1,372 @@
+defmodule Realtime.Database do
+ @moduledoc """
+ Handles tenant database operations
+ """
+ require Logger
+
+ import Realtime.Logs
+
+ alias Realtime.Api.Tenant
+ alias Realtime.Crypto
+ alias Realtime.PostgresCdc
+ alias Realtime.Rpc
+ alias Realtime.Telemetry
+
+ defstruct [
+ :hostname,
+ :port,
+ :database,
+ :username,
+ :password,
+ :pool_size,
+ :queue_target,
+ :application_name,
+ :max_restarts,
+ :socket_options,
+ ssl: true,
+ backoff_type: :rand_exp
+ ]
+
+ @type t :: %__MODULE__{
+ hostname: binary(),
+ database: binary(),
+ username: binary(),
+ password: binary(),
+ port: non_neg_integer(),
+ pool_size: non_neg_integer(),
+ queue_target: non_neg_integer(),
+ application_name: binary(),
+ max_restarts: non_neg_integer() | nil,
+ ssl: boolean(),
+ socket_options: list(),
+ backoff_type: :stop | :exp | :rand | :rand_exp
+ }
+
+ @cdc "postgres_cdc_rls"
+ @doc """
+ Creates a database connection struct from the given tenant.
+ """
+ @spec from_tenant(Tenant.t(), binary(), :stop | :exp | :rand | :rand_exp) :: t()
+ def from_tenant(%Tenant{} = tenant, application_name, backoff \\ :rand_exp) do
+ tenant
+ |> then(&Realtime.PostgresCdc.filter_settings(@cdc, &1.extensions))
+ |> then(&from_settings(&1, application_name, backoff))
+ end
+
+ @doc """
+ Creates a database connection struct from the given settings.
+ """
+ @spec from_settings(map(), binary(), :stop | :exp | :rand | :rand_exp) :: t()
+ def from_settings(settings, application_name, backoff \\ :rand_exp) do
+ pool = pool_size_by_application_name(application_name, settings)
+
+ settings =
+ settings
+ |> Map.take(["db_host", "db_port", "db_name", "db_user", "db_password"])
+ |> Enum.map(fn {k, v} -> {k, Crypto.decrypt!(v)} end)
+ |> Map.new()
+ |> then(&Map.merge(settings, &1))
+
+ {:ok, addrtype} = detect_ip_version(settings["db_host"])
+ ssl = if default_ssl_param(settings), do: [verify: :verify_none], else: false
+
+ %__MODULE__{
+ hostname: settings["db_host"],
+ port: String.to_integer(settings["db_port"]),
+ database: settings["db_name"],
+ username: settings["db_user"],
+ password: settings["db_password"],
+ pool_size: pool,
+ queue_target: settings["db_queue_target"] || 5_000,
+ application_name: application_name,
+ backoff_type: backoff,
+ socket_options: [addrtype],
+ ssl: ssl
+ }
+ end
+
+ @available_connection_factor 0.95
+
+ @doc """
+ Checks that the tenant's CDC extension information is properly configured and that we're able to query the tenant database.
+ """
+
+ @spec check_tenant_connection(Tenant.t() | nil) :: {:error, atom()} | {:ok, pid()}
+ def check_tenant_connection(nil), do: {:error, :tenant_not_found}
+
+ def check_tenant_connection(tenant) do
+ tenant
+ |> then(&PostgresCdc.filter_settings(@cdc, &1.extensions))
+ |> then(fn settings ->
+ required_pool = tenant_pool_requirements(settings)
+ check_settings = from_settings(settings, "realtime_connect", :stop)
+ check_settings = Map.put(check_settings, :max_restarts, 0)
+
+ with {:ok, conn} <- connect_db(check_settings) do
+ query =
+ "select (current_setting('max_connections')::int - count(*))::int from pg_stat_activity where application_name != 'realtime_connect'"
+
+ case Postgrex.query(conn, query, []) do
+ {:ok, %{rows: [[available_connections]]}} ->
+ requirement = ceil(required_pool * @available_connection_factor)
+
+ if requirement < available_connections do
+ {:ok, conn}
+ else
+ log_error(
+ "DatabaseLackOfConnections",
+ "Only #{available_connections} available connections. At least #{requirement} connections are required."
+ )
+
+ {:error, :tenant_db_too_many_connections}
+ end
+
+ {:error, e} ->
+ Process.exit(conn, :kill)
+ log_error("UnableToConnectToTenantDatabase", e)
+ {:error, e}
+ end
+ end
+ end)
+ end
+
+ @doc """
+ Connects to the database using the given settings.
+ """
+ @spec connect(Tenant.t(), binary(), :stop | :exp | :rand | :rand_exp) ::
+ {:ok, pid()} | {:error, any()}
+ def connect(tenant, application_name, backoff \\ :stop) do
+ tenant
+ |> from_tenant(application_name, backoff)
+ |> connect_db()
+ end
+
+ @doc """
+ If the param `ssl_enforced` is not set, it defaults to true.
+ """
+ @spec default_ssl_param(map) :: boolean
+ def default_ssl_param(%{"ssl_enforced" => ssl_enforced}) when is_boolean(ssl_enforced),
+ do: ssl_enforced
+
+ def default_ssl_param(_), do: true
+
+ @doc """
+ Runs a database transaction on the local node or against a target node within a Postgrex transaction
+ """
+ @spec transaction(pid | DBConnection.t(), fun(), keyword(), keyword()) :: {:ok, any()} | {:error, any()}
+ def transaction(db_conn, func, opts \\ [], metadata \\ [])
+
+ def transaction(%DBConnection{} = db_conn, func, opts, metadata),
+ do: transaction_catched(db_conn, func, opts, metadata)
+
+ def transaction(db_conn, func, opts, metadata) when node() == node(db_conn),
+ do: transaction_catched(db_conn, func, opts, metadata)
+
+ def transaction(db_conn, func, opts, metadata) do
+ metadata = Keyword.put(metadata, :target, node(db_conn))
+ args = [db_conn, func, opts, metadata]
+
+ case Rpc.enhanced_call(node(db_conn), __MODULE__, :transaction, args) do
+ {:ok, value} -> {:ok, value}
+ {:error, :rpc_error, error} -> {:error, error}
+ {:error, error} -> {:error, error}
+ end
+ end
+
+ defp transaction_catched(db_conn, func, opts, metadata) do
+ telemetry = Keyword.get(opts, :telemetry, nil)
+
+ if telemetry do
+ tenant_id = Keyword.get(opts, :tenant_id, nil)
+ {latency, value} = :timer.tc(Postgrex, :transaction, [db_conn, func, opts], :millisecond)
+ Telemetry.execute(telemetry, %{latency: latency}, %{tenant_id: tenant_id})
+ value
+ else
+ Postgrex.transaction(db_conn, func, opts)
+ end
+ rescue
+ e ->
+ log_error("ErrorExecutingTransaction", e, metadata)
+ {:error, e}
+ end
+
+ @spec connect_db(__MODULE__.t()) :: {:ok, pid()} | {:error, any()}
+ def connect_db(%__MODULE__{} = settings) do
+ %__MODULE__{
+ hostname: hostname,
+ port: port,
+ database: database,
+ username: username,
+ password: password,
+ pool_size: pool_size,
+ queue_target: queue_target,
+ application_name: application_name,
+ backoff_type: backoff_type,
+ max_restarts: max_restarts,
+ socket_options: socket_options,
+ ssl: ssl
+ } = settings
+
+ Logger.metadata(application_name: application_name)
+ metadata = Logger.metadata()
+
+ [
+ hostname: hostname,
+ port: port,
+ database: database,
+ username: username,
+ password: password,
+ pool_size: pool_size,
+ queue_target: queue_target,
+ parameters: [application_name: application_name],
+ socket_options: socket_options,
+ backoff_type: backoff_type,
+ ssl: ssl,
+ configure: fn args ->
+ Logger.metadata(metadata)
+ args
+ end
+ ]
+ |> then(fn opts ->
+ if max_restarts, do: Keyword.put(opts, :max_restarts, max_restarts), else: opts
+ end)
+ |> Postgrex.start_link()
+ end
+
+ @doc """
+ Returns the pool size for a given application name. Override pool size if provided.
+
+ `realtime_rls` and `realtime_broadcast_changes` are special cases hardcoded to 1; a larger pool would try to reuse replication slots, leading to errors.
+ `realtime_migrations` is a special case hardcoded to 2, as migrations require two connections.
+ """
+ @spec pool_size_by_application_name(binary(), map() | nil) :: non_neg_integer()
+ def pool_size_by_application_name(application_name, settings) do
+ case application_name do
+ "realtime_subscription_manager" -> settings["subcriber_pool_size"] || 1
+ "realtime_subscription_manager_pub" -> settings["subs_pool_size"] || 1
+ "realtime_subscription_checker" -> settings["subs_pool_size"] || 1
+ "realtime_connect" -> settings["db_pool"] || 1
+ "realtime_health_check" -> 1
+ "realtime_janitor" -> 1
+ "realtime_migrations" -> 2
+ "realtime_broadcast_changes" -> 1
+ "realtime_rls" -> 1
+ "realtime_replication_slot_teardown" -> 1
+ _ -> 1
+ end
+ end
+
+ @doc """
+ Gets the external id from a host connection string found in the conn.
+ """
+ @spec get_external_id(String.t()) :: {:ok, String.t()} | {:error, atom()}
+ def get_external_id(host) when is_binary(host) do
+ case String.split(host, ".", parts: 2) do
+ [id] -> {:ok, id}
+ [id, _] -> {:ok, id}
+ end
+ end
+
+ @doc """
+ Detects the IP version for a given host.
+ """
+ @spec detect_ip_version(String.t()) :: {:ok, :inet | :inet6} | {:error, :nxdomain}
+ def detect_ip_version(host) when is_binary(host) do
+ host = String.to_charlist(host)
+
+ cond do
+ match?({:ok, _}, :inet6_tcp.getaddr(host)) -> {:ok, :inet6}
+ match?({:ok, _}, :inet.gethostbyname(host)) -> {:ok, :inet}
+ true -> {:error, :nxdomain}
+ end
+ end
+
+ @doc """
+ Terminates all replication slots with the name containing 'realtime' in the tenant database.
+ """
+ @spec replication_slot_teardown(Tenant.t()) :: :ok
+ def replication_slot_teardown(tenant) do
+ {:ok, conn} = connect(tenant, "realtime_replication_slot_teardown")
+
+ query =
+ "select slot_name from pg_replication_slots where slot_name like '%realtime%'"
+
+ with {:ok, %{rows: rows}} <- Postgrex.query(conn, query, []) do
+ rows
+ |> List.flatten()
+ |> Enum.reject(&is_nil/1)
+ |> Enum.each(&replication_slot_teardown(conn, &1))
+ end
+
+ GenServer.stop(conn)
+ :ok
+ end
+
+ @doc """
+ Terminates replication slot with a given name in the tenant database.
+ """
+ @spec replication_slot_teardown(pid() | Tenant.t(), String.t()) :: :ok
+ def replication_slot_teardown(%Tenant{} = tenant, slot_name) do
+ {:ok, conn} = connect(tenant, "realtime_replication_slot_teardown")
+ replication_slot_teardown(conn, slot_name)
+ :ok
+ end
+
+ def replication_slot_teardown(conn, slot_name) do
+ Postgrex.query(
+ conn,
+ "select active_pid, pg_terminate_backend(active_pid), pg_drop_replication_slot(slot_name) from pg_replication_slots where slot_name = $1",
+ [slot_name]
+ )
+
+ Postgrex.query(conn, "select pg_drop_replication_slot($1)", [slot_name])
+ :ok
+ end
+
+ @doc """
+ Transforms database settings into keyword list to be used by Postgrex.
+ ## Examples
+
+ iex> Database.opts(%Database{hostname: "localhost", port: 5432, database: "realtime", username: "postgres", password: "postgres", application_name: "test", backoff_type: :stop, pool_size: 10, queue_target: 10_000, socket_options: [:inet], ssl: true}) |> Enum.sort()
+ [
+ application_name: "test",
+ backoff_type: :stop,
+ database: "realtime",
+ hostname: "localhost",
+ max_restarts: nil,
+ password: "postgres",
+ pool_size: 10,
+ port: 5432,
+ queue_target: 10000,
+ socket_options: [:inet],
+ ssl: true,
+ username: "postgres"
+ ]
+ """
+
+ @spec opts(__MODULE__.t()) :: keyword()
+ def opts(%__MODULE__{} = settings) do
+ settings
+ |> Map.from_struct()
+ |> Map.to_list()
+ |> Keyword.new()
+ end
+
+ defp tenant_pool_requirements(settings) do
+ application_names = [
+ "realtime_subscription_manager",
+ "realtime_subscription_manager_pub",
+ "realtime_subscription_checker",
+ "realtime_health_check",
+ "realtime_janitor",
+ "realtime_migrations",
+ "realtime_broadcast_changes",
+ "realtime_rls",
+ "realtime_replication_slot_teardown",
+ "realtime_connect"
+ ]
+
+ Enum.reduce(application_names, 0, fn application_name, acc ->
+ acc + pool_size_by_application_name(application_name, settings)
+ end)
+ end
+end
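
Tying `from_tenant/3` and `connect_db/1` together, a hedged sketch of opening a dedicated pool against a tenant database; the tenant lookup uses `Realtime.Api.get_tenant_by_external_id/1` from this same diff, and the external id and application name are illustrative:

```elixir
alias Realtime.{Api, Database}

tenant = Api.get_tenant_by_external_id("realtime-dev")

# Decrypted connection settings with a custom application_name and a :stop
# backoff so a misconfigured tenant fails fast instead of retrying.
settings = Database.from_tenant(tenant, "my_one_off_task", :stop)

{:ok, conn} = Database.connect_db(settings)
{:ok, %Postgrex.Result{rows: [[1]]}} = Postgrex.query(conn, "select 1", [])
```

Since `pool_size_by_application_name/2` only recognizes the names listed above, an unknown name such as `"my_one_off_task"` falls back to a pool size of 1.
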
diff --git a/lib/realtime/gen_counter/gen_counter.ex b/lib/realtime/gen_counter/gen_counter.ex
index d604a16..46690a5 100644
--- a/lib/realtime/gen_counter/gen_counter.ex
+++ b/lib/realtime/gen_counter/gen_counter.ex
@@ -7,10 +7,9 @@ defmodule Realtime.GenCounter do
counters are not serialized through the GenServer keeping GenCounters as performant as possible.
"""
use GenServer
-
- alias Realtime.GenCounter
-
require Logger
+ import Realtime.Logs
+ alias Realtime.GenCounter
defstruct id: nil, counters: []
@@ -31,7 +30,8 @@ defmodule Realtime.GenCounter do
@doc """
Creates a new counter from any Erlang term.
"""
- @spec new(term) :: {:ok, {:write_concurrency, reference()}} | {:error, term()}
+ @spec new({atom(), atom(), term()}) ::
+ {:ok, {:write_concurrency, reference()}} | {:error, term()}
def new(term) do
id = :erlang.phash2(term)
@@ -50,7 +50,7 @@ defmodule Realtime.GenCounter do
started
err ->
- Logger.error("Error creating counter #{inspect(err)}")
+ log_error("UnableToCreateCounter", err)
{:error, :not_created}
end
end
@@ -70,11 +70,12 @@ defmodule Realtime.GenCounter do
@spec add(term(), integer()) :: :ok | :error
def add(term, count) when is_integer(count) do
- with {:ok, counter_ref} <- find_counter(term) do
- :counters.add(counter_ref, 1, count)
- else
+ case find_counter(term) do
+ {:ok, counter_ref, _pid} ->
+ :counters.add(counter_ref, 1, count)
+
err ->
- Logger.error("Error incrimenting counter", error_string: inspect(err))
+ log_error("UnableToIncrementCounter", err)
:error
end
end
@@ -94,11 +95,12 @@ defmodule Realtime.GenCounter do
@spec sub(term(), integer()) :: :ok | :error
def sub(term, count) when is_integer(count) do
- with {:ok, counter_ref} <- find_counter(term) do
- :counters.sub(counter_ref, 1, count)
- else
+ case find_counter(term) do
+ {:ok, counter_ref, _pid} ->
+ :counters.sub(counter_ref, 1, count)
+
err ->
- Logger.error("Error decrimenting counter", error_string: inspect(err))
+ log_error("UnableToDecrementCounter", err)
:error
end
end
@@ -109,11 +111,12 @@ defmodule Realtime.GenCounter do
@spec put(term(), integer()) :: :ok | :error
def put(term, count) when is_integer(count) do
- with {:ok, counter_ref} <- find_counter(term) do
- :counters.put(counter_ref, 1, count)
- else
+ case find_counter(term) do
+ {:ok, counter_ref, _pid} ->
+ :counters.put(counter_ref, 1, count)
+
err ->
- Logger.error("Error updating counter", error_string: inspect(err))
+ log_error("UnableToUpdateCounter", err)
:error
end
end
@@ -125,11 +128,11 @@ defmodule Realtime.GenCounter do
@spec info(term()) :: %{memory: integer(), size: integer()} | :error
def info(term) do
case find_counter(term) do
- {:ok, counter_ref} ->
+ {:ok, counter_ref, _pid} ->
:counters.info(counter_ref)
- err ->
- Logger.error("Counter not found", error_string: inspect(err))
+ _err ->
+ log_error("UnableToFindCounter", "Unable to find counter")
:error
end
end
@@ -141,12 +144,13 @@ defmodule Realtime.GenCounter do
@spec get(term()) ::
{:ok, integer()} | {:error, :counter_not_found}
def get(term) do
- with {:ok, counter_ref} <- find_counter(term) do
- count = :counters.get(counter_ref, 1)
- {:ok, count}
- else
+ case find_counter(term) do
+ {:ok, counter_ref, _pid} ->
+ count = :counters.get(counter_ref, 1)
+ {:ok, count}
+
err ->
- Logger.error("Counter not found", error_string: inspect(err))
+ log_error("UnableToFindCounter", "Counter not found")
err
end
end
@@ -159,14 +163,23 @@ defmodule Realtime.GenCounter do
end
end
+ @spec find_counter(term) ::
+ {:ok, :counters.counters_ref(), pid()} | {:error, :counter_not_found}
+ def find_counter(term) do
+ id = :erlang.phash2(term)
+
+ case Registry.lookup(Realtime.Registry.Unique, {__MODULE__, :counter, id}) do
+ [{pid, counter_ref}] -> {:ok, counter_ref, pid}
+ _error -> {:error, :counter_not_found}
+ end
+ end
+
# Callbacks
@impl true
def init(args) do
- id = Keyword.get(args, :id)
-
+ id = Keyword.fetch!(args, :id)
state = %__MODULE__{id: id, counters: []}
-
{:ok, state}
end
@@ -189,13 +202,4 @@ defmodule Realtime.GenCounter do
_error -> {:error, :worker_not_found}
end
end
-
- defp find_counter(term) do
- id = :erlang.phash2(term)
-
- case Registry.lookup(Realtime.Registry.Unique, {__MODULE__, :counter, id}) do
- [{_pid, counter_ref}] -> {:ok, counter_ref}
- _error -> {:error, :counter_not_found}
- end
- end
end
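
With `find_counter/1` now public and returning the owning pid, a counter's lifecycle looks roughly like this (the identifying term is arbitrary; this sketch assumes the Realtime application, with its Registry and DynamicSupervisor, is running):

```elixir
alias Realtime.GenCounter

term = {:rate_limit, :events, "tenant-123"}

{:ok, _counter_ref} = GenCounter.new(term)
:ok = GenCounter.add(term, 5)
:ok = GenCounter.sub(term, 2)
{:ok, 3} = GenCounter.get(term)
```
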
diff --git a/lib/realtime/helpers.ex b/lib/realtime/helpers.ex
index bfbd8e0..6c62097 100644
--- a/lib/realtime/helpers.ex
+++ b/lib/realtime/helpers.ex
@@ -2,145 +2,12 @@ defmodule Realtime.Helpers do
@moduledoc """
This module includes helper functions for different contexts that can't be unified in one module.
"""
+ require Logger
@spec cancel_timer(reference() | nil) :: non_neg_integer() | false | :ok | nil
def cancel_timer(nil), do: nil
def cancel_timer(ref), do: Process.cancel_timer(ref)
- def encrypt!(text, secret_key) do
- :aes_128_ecb
- |> :crypto.crypto_one_time(secret_key, pad(text), true)
- |> Base.encode64()
- end
-
- def decrypt!(base64_text, secret_key) do
- crypto_text = Base.decode64!(base64_text)
-
- :aes_128_ecb
- |> :crypto.crypto_one_time(secret_key, crypto_text, false)
- |> unpad()
- end
-
- @spec connect_db(
- String.t(),
- String.t(),
- String.t(),
- String.t(),
- String.t(),
- list(),
- non_neg_integer(),
- non_neg_integer()
- ) ::
- {:ok, pid} | {:error, Postgrex.Error.t() | term()}
- def connect_db(host, port, name, user, pass, socket_opts, pool \\ 5, queue_target \\ 5_000) do
- secure_key = Application.get_env(:realtime, :db_enc_key)
-
- host = decrypt!(host, secure_key)
- port = decrypt!(port, secure_key)
- name = decrypt!(name, secure_key)
- pass = decrypt!(pass, secure_key)
- user = decrypt!(user, secure_key)
-
- Postgrex.start_link(
- hostname: host,
- port: port,
- database: name,
- password: pass,
- username: user,
- pool_size: pool,
- queue_target: queue_target,
- parameters: [
- application_name: "tealbase_realtime"
- ],
- socket_options: socket_opts
- )
- end
-
- @doc """
- Gets the external id from a host connection string found in the conn.
-
- ## Examples
-
- iex> Realtime.Helpers.get_external_id("tenant.realtime.tealbase.co")
- {:ok, "tenant"}
-
- iex> Realtime.Helpers.get_external_id("tenant.tealbase.co")
- {:ok, "tenant"}
-
- iex> Realtime.Helpers.get_external_id("localhost")
- {:ok, "localhost"}
-
- """
-
- @spec get_external_id(String.t()) :: {:ok, String.t()} | {:error, atom()}
- def get_external_id(host) when is_binary(host) do
- case String.split(host, ".", parts: 2) do
- [] -> {:error, :tenant_not_found_in_host}
- [id] -> {:ok, id}
- [id, _] -> {:ok, id}
- end
- end
-
- def decrypt_creds(host, port, name, user, pass) do
- secure_key = Application.get_env(:realtime, :db_enc_key)
-
- {
- decrypt!(host, secure_key),
- decrypt!(port, secure_key),
- decrypt!(name, secure_key),
- decrypt!(user, secure_key),
- decrypt!(pass, secure_key)
- }
- end
-
- def short_node_id() do
- fly_alloc_id = Application.get_env(:realtime, :fly_alloc_id)
-
- case String.split(fly_alloc_id, "-", parts: 2) do
- [short_alloc_id, _] -> short_alloc_id
- _ -> fly_alloc_id
- end
- end
-
- @doc """
- Gets a short node name from a node name when a node name looks like `realtime-prod@fdaa:0:cc:a7b:b385:83c3:cfe3:2`
-
- ## Examples
-
- iex> node = Node.self()
- iex> Realtime.Helpers.short_node_id_from_name(node)
- "nohost"
-
- iex> node = :"realtime-prod@fdaa:0:cc:a7b:b385:83c3:cfe3:2"
- iex> Realtime.Helpers.short_node_id_from_name(node)
- "83c3cfe3"
-
- iex> node = :"pink@127.0.0.1"
- iex> Realtime.Helpers.short_node_id_from_name(node)
- "127.0.0.1"
-
- iex> node = :"pink@10.0.1.1"
- iex> Realtime.Helpers.short_node_id_from_name(node)
- "10.0.1.1"
-
- iex> node = :"realtime@host.name.internal"
- iex> Realtime.Helpers.short_node_id_from_name(node)
- "host.name.internal"
- """
-
- @spec short_node_id_from_name(atom()) :: String.t()
- def short_node_id_from_name(name) when is_atom(name) do
- [_, host] = name |> Atom.to_string() |> String.split("@", parts: 2)
-
- case String.split(host, ":", parts: 8) do
- [_, _, _, _, _, one, two, _] ->
- one <> two
-
- _other ->
- host
- end
- end
-
@doc """
Takes the first N items from the queue and returns the list of items and the new queue.
@@ -166,14 +33,4 @@ defmodule Realtime.Helpers do
end
end)
end
-
- defp pad(data) do
- to_add = 16 - rem(byte_size(data), 16)
- data <> :binary.copy(<<to_add>>, to_add)
- end
-
- defp unpad(data) do
- to_remove = :binary.last(data)
- :binary.part(data, 0, byte_size(data) - to_remove)
- end
end
diff --git a/lib/realtime/logs.ex b/lib/realtime/logs.ex
new file mode 100644
index 0000000..8d6574c
--- /dev/null
+++ b/lib/realtime/logs.ex
@@ -0,0 +1,49 @@
+defmodule Realtime.Logs do
+ @moduledoc """
+ Logging operations for Realtime
+ """
+ require Logger
+
+ @doc """
+ Prepares a value to be logged
+ """
+ def to_log(value) when is_binary(value), do: value
+ def to_log(value), do: inspect(value, pretty: true)
+
+ @doc """
+ Logs error with a given Operational Code
+ """
+ @spec log_error(String.t(), any(), keyword()) :: :ok
+ def log_error(code, error, metadata \\ []) do
+ Logger.error("#{code}: #{to_log(error)}", [error_code: code] ++ metadata)
+ end
+
+ @doc """
+ Logs warning with a given Operational Code
+ """
+ @spec log_warning(String.t(), any(), keyword()) :: :ok
+ def log_warning(code, warning, metadata \\ []) do
+ Logger.warning("#{code}: #{to_log(warning)}", [{:error_code, code} | metadata])
+ end
+end
+
+defimpl Jason.Encoder, for: DBConnection.ConnectionError do
+ def encode(
+ %DBConnection.ConnectionError{message: message, reason: reason, severity: severity},
+ _opts
+ ) do
+ inspect(%{message: message, reason: reason, severity: severity}, pretty: true)
+ end
+end
+
+defimpl Jason.Encoder, for: Postgrex.Error do
+ def encode(
+ %Postgrex.Error{
+ message: message,
+ postgres: %{code: code, schema: schema, table: table}
+ },
+ _opts
+ ) do
+ inspect(%{message: message, schema: schema, table: table, code: code}, pretty: true)
+ end
+end
diff --git a/lib/realtime/messages.ex b/lib/realtime/messages.ex
new file mode 100644
index 0000000..c6d571d
--- /dev/null
+++ b/lib/realtime/messages.ex
@@ -0,0 +1,40 @@
+defmodule Realtime.Messages do
+ @moduledoc """
+ Handles `realtime.messages` table operations
+ """
+
+ @doc """
+ Deletes messages older than 72 hours for a given tenant connection
+ """
+ @spec delete_old_messages(pid()) :: :ok
+ def delete_old_messages(conn) do
+ limit =
+ NaiveDateTime.utc_now()
+ |> NaiveDateTime.add(-72, :hour)
+ |> NaiveDateTime.to_date()
+
+ %{rows: rows} =
+ Postgrex.query!(
+ conn,
+ """
+ SELECT child.relname
+ FROM pg_inherits
+ JOIN pg_class parent ON pg_inherits.inhparent = parent.oid
+ JOIN pg_class child ON pg_inherits.inhrelid = child.oid
+ JOIN pg_namespace nmsp_parent ON nmsp_parent.oid = parent.relnamespace
+ JOIN pg_namespace nmsp_child ON nmsp_child.oid = child.relnamespace
+ WHERE parent.relname = 'messages'
+ AND nmsp_child.nspname = 'realtime'
+ """,
+ []
+ )
+
+ rows
+ |> Enum.filter(fn ["messages_" <> date] ->
+ date |> String.replace("_", "-") |> Date.from_iso8601!() |> Date.compare(limit) == :lt
+ end)
+ |> Enum.each(&Postgrex.query!(conn, "DROP TABLE IF EXISTS realtime.#{&1}", []))
+
+ :ok
+ end
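+
+  # Illustrative usage (assumes `conn` is a checked-out tenant connection pid):
+  #
+  #   :ok = Realtime.Messages.delete_old_messages(conn)
+  #
+  # Partitions are named `messages_YYYY_MM_DD`, so only partitions whose date is
+  # strictly before the 72-hour cutoff are dropped.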
+end
diff --git a/lib/realtime/metrics_cleaner.ex b/lib/realtime/metrics_cleaner.ex
new file mode 100644
index 0000000..912a1fb
--- /dev/null
+++ b/lib/realtime/metrics_cleaner.ex
@@ -0,0 +1,63 @@
+defmodule Realtime.MetricsCleaner do
+ @moduledoc false
+
+ use GenServer
+ require Logger
+
+ defstruct [:check_ref, :interval]
+
+ def start_link(args), do: GenServer.start_link(__MODULE__, args)
+
+ def init(_args) do
+ interval = Application.get_env(:realtime, :metrics_cleaner_schedule_timer_in_ms)
+
+ Logger.info("Starting MetricsCleaner")
+ {:ok, %{check_ref: check(interval), interval: interval}}
+ end
+
+ def handle_info(:check, %{interval: interval} = state) do
+ Process.cancel_timer(state.check_ref)
+
+ {exec_time, _} = :timer.tc(fn -> loop_and_cleanup_metrics_table() end)
+
+    # :timer.tc/1 reports microseconds; compare and log in milliseconds
+    if exec_time > :timer.seconds(5) * 1000,
+      do: Logger.warning("Metrics check took: #{div(exec_time, 1000)} ms")
+
+ {:noreply, %{state | check_ref: check(interval)}}
+ end
+
+ def handle_info(msg, state) do
+ Logger.error("Unexpected message: #{inspect(msg)}")
+ {:noreply, state}
+ end
+
+ defp check(interval) do
+ Process.send_after(self(), :check, interval)
+ end
+
+ @table_name :"syn_registry_by_name_Elixir.Realtime.Tenants.Connect"
+ @metrics_table Realtime.PromEx.Metrics
+ @filter_spec [{{{:_, %{tenant: :"$1"}}, :_}, [], [:"$1"]}]
+ @tenant_id_spec [{{:"$1", :_, :_, :_, :_, :_}, [], [:"$1"]}]
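+  # Sketch of intent: @tenant_id_spec selects the registered name (the tenant
+  # id) from each row of :syn's registry table for Tenants.Connect, and
+  # @filter_spec selects the :tenant tag from each metrics-table key, so series
+  # belonging to tenants without a live connection can be pruned.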
+ defp loop_and_cleanup_metrics_table do
+ tenant_ids = :ets.select(@table_name, @tenant_id_spec)
+
+ :ets.select(@metrics_table, @filter_spec)
+ |> Enum.uniq()
+ |> Enum.reject(fn tenant_id -> tenant_id in tenant_ids end)
+ |> Enum.each(fn tenant_id -> delete_metric(tenant_id) end)
+ end
+
+ @doc """
+ Deletes all metrics that contain the given tenant or database_host.
+ """
+ @spec delete_metric(String.t()) :: :ok
+ def delete_metric(tenant) do
+ :ets.select_delete(@metrics_table, [
+ {{{:_, %{tenant: tenant}}, :_}, [], [true]},
+ {{{:_, %{database_host: "db.#{tenant}.tealbase.co"}}, :_}, [], [true]}
+ ])
+
+ :ok
+ end
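+
+  # Illustrative usage (the tenant id is hypothetical):
+  #
+  #   :ok = Realtime.MetricsCleaner.delete_metric("dev_tenant")
+  #
+  # This drops series tagged with the tenant as well as those tagged with the
+  # derived database host, e.g. "db.dev_tenant.tealbase.co".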
+end
diff --git a/lib/realtime/monitoring/erl_sys_mon.ex b/lib/realtime/monitoring/erl_sys_mon.ex
new file mode 100644
index 0000000..58975a0
--- /dev/null
+++ b/lib/realtime/monitoring/erl_sys_mon.ex
@@ -0,0 +1,28 @@
+defmodule Realtime.ErlSysMon do
+ @moduledoc """
+ Logs Erlang System Monitor events.
+ """
+
+ use GenServer
+
+ require Logger
+
+  @defaults [
+    :busy_dist_port,
+    :busy_port,
+    {:long_gc, 250},
+    {:long_schedule, 100},
+    {:long_message_queue, {0, 1_000}}
+  ]
+  def start_link(args \\ @defaults), do: GenServer.start_link(__MODULE__, args)
+
+ def init(args) do
+ :erlang.system_monitor(self(), args)
+ {:ok, []}
+ end
+
+ def handle_info(msg, state) do
+ Logger.error("#{__MODULE__} message: " <> inspect(msg))
+ {:noreply, state}
+ end
+end
diff --git a/lib/realtime/monitoring/latency.ex b/lib/realtime/monitoring/latency.ex
index 09cf101..6acefcb 100644
--- a/lib/realtime/monitoring/latency.ex
+++ b/lib/realtime/monitoring/latency.ex
@@ -4,10 +4,11 @@ defmodule Realtime.Latency do
"""
use GenServer
-
require Logger
+ import Realtime.Logs
- alias Realtime.Helpers
+ alias Realtime.Nodes
+ alias Realtime.Rpc
defmodule Payload do
@moduledoc false
@@ -52,7 +53,7 @@ defmodule Realtime.Latency do
end
def handle_info(msg, state) do
- Logger.warn("Unexpected message: #{inspect(msg)}")
+ Logger.warning("Unexpected message: #{inspect(msg)}")
{:noreply, state}
end
@@ -67,49 +68,33 @@ defmodule Realtime.Latency do
Pings all the nodes in the cluster one after another and returns with their responses.
There is a timeout for a single node rpc, and a timeout for yield_many which should
rarely be hit because these pings happen async under the Realtime.TaskSupervisor.
-
- ## Examples
-
- Emulate a healthy remote node:
-
- iex> [{%Task{}, {:ok, %{response: {:ok, {:pong, "iad"}}}}}] = Realtime.Latency.ping()
-
- Emulate a slow but healthy remote node:
-
- iex> [{%Task{}, {:ok, %{response: {:ok, {:pong, "iad"}}}}}] = Realtime.Latency.ping(5_000, 10_000, 30_000)
-
- Emulate an unhealthy remote node:
-
- iex> [{%Task{}, {:ok, %{response: {:badrpc, :timeout}}}}] = Realtime.Latency.ping(5_000, 1_000)
-
- No response from our Task for a remote node at all:
-
- iex> [{%Task{}, nil}] = Realtime.Latency.ping(10_000, 5_000, 2_000)
-
"""
- @spec ping :: [{%Task{}, tuple() | nil}]
+ @spec ping :: [{Task.t(), tuple() | nil}]
def ping(pong_timeout \\ 0, timer_timeout \\ 5_000, yield_timeout \\ 5_000) do
tasks =
for n <- [Node.self() | Node.list()] do
Task.Supervisor.async(Realtime.TaskSupervisor, fn ->
{latency, response} =
- :timer.tc(fn -> :rpc.call(n, __MODULE__, :pong, [pong_timeout], timer_timeout) end)
+ :timer.tc(fn ->
+ Rpc.call(n, __MODULE__, :pong, [pong_timeout], timeout: timer_timeout)
+ end)
latency_ms = latency / 1_000
- fly_region = Application.get_env(:realtime, :fly_region, "iad")
- short_name = Helpers.short_node_id_from_name(n)
- from_node = Helpers.short_node_id_from_name(Node.self())
+ region = Application.get_env(:realtime, :region, "not_set")
+ short_name = Nodes.short_node_id_from_name(n)
+ from_node = Nodes.short_node_id_from_name(Node.self())
case response do
{:badrpc, reason} ->
- Logger.error(
- "Network error: can't connect to node #{short_name} from #{fly_region} - #{inspect(reason)}"
+ log_error(
+ "RealtimeNodeDisconnected",
+ "Unable to connect to #{short_name} from #{region}: #{reason}"
)
payload = %Payload{
from_node: from_node,
- from_region: fly_region,
+ from_region: region,
node: short_name,
region: nil,
latency: latency_ms,
@@ -124,13 +109,13 @@ defmodule Realtime.Latency do
{:ok, {:pong, remote_region}} ->
if latency_ms > 1_000,
do:
- Logger.warn(
- "Network warning: latency to #{remote_region} (#{short_name}) from #{fly_region} (#{from_node}) is #{latency_ms} ms"
+ Logger.warning(
+ "Network warning: latency to #{remote_region} (#{short_name}) from #{region} (#{from_node}) is #{latency_ms} ms"
)
payload = %Payload{
from_node: from_node,
- from_region: fly_region,
+ from_region: region,
node: short_name,
region: remote_region,
latency: latency_ms,
@@ -158,8 +143,8 @@ defmodule Realtime.Latency do
"""
@spec pong :: {:ok, {:pong, String.t()}}
- def pong() do
- region = Application.get_env(:realtime, :fly_region, "iad")
+ def pong do
+ region = Application.get_env(:realtime, :region, "not_set")
{:ok, {:pong, region}}
end
@@ -169,7 +154,7 @@ defmodule Realtime.Latency do
pong()
end
- defp ping_after() do
+ defp ping_after do
Process.send_after(self(), :ping, @every)
end
end
diff --git a/lib/realtime/monitoring/os_metrics.ex b/lib/realtime/monitoring/os_metrics.ex
index 7239010..b2623d2 100644
--- a/lib/realtime/monitoring/os_metrics.ex
+++ b/lib/realtime/monitoring/os_metrics.ex
@@ -4,13 +4,14 @@ defmodule Realtime.OsMetrics do
"""
@spec ram_usage() :: float()
- def ram_usage() do
+ def ram_usage do
mem = :memsup.get_system_memory_data()
- 100 - mem[:free_memory] / mem[:total_memory] * 100
+ free_mem = if Mix.env() in [:dev, :test], do: mem[:free_memory], else: mem[:available_memory]
+ 100 - free_mem / mem[:total_memory] * 100
end
@spec cpu_la() :: %{avg1: float(), avg5: float(), avg15: float()}
- def cpu_la() do
+ def cpu_la do
%{
avg1: :cpu_sup.avg1() / 256,
avg5: :cpu_sup.avg5() / 256,
@@ -19,7 +20,7 @@ defmodule Realtime.OsMetrics do
end
@spec cpu_util() :: float() | {:error, term()}
- def cpu_util() do
+ def cpu_util do
:cpu_sup.util()
end
end
diff --git a/lib/realtime/monitoring/prom_ex.ex b/lib/realtime/monitoring/prom_ex.ex
index 2d8ec80..dc9a05c 100644
--- a/lib/realtime/monitoring/prom_ex.ex
+++ b/lib/realtime/monitoring/prom_ex.ex
@@ -1,7 +1,10 @@
defmodule Realtime.PromEx do
- alias Realtime.PromEx.Plugins.{OsMon, Phoenix, Tenants, Tenant}
-
- import Realtime.Helpers, only: [short_node_id: 0]
+ alias Realtime.Nodes
+ alias Realtime.PromEx.Plugins.Channels
+ alias Realtime.PromEx.Plugins.OsMon
+ alias Realtime.PromEx.Plugins.Phoenix
+ alias Realtime.PromEx.Plugins.Tenant
+ alias Realtime.PromEx.Plugins.Tenants
@moduledoc """
Be sure to add the following to finish setting up PromEx:
@@ -65,16 +68,12 @@ defmodule Realtime.PromEx do
poll_rate = Application.get_env(:realtime, :prom_poll_rate)
[
- # PromEx built in plugins
- # Plugins.Application,
{Plugins.Beam, poll_rate: poll_rate, metric_prefix: [:beam]},
{Phoenix, router: RealtimeWeb.Router, poll_rate: poll_rate, metric_prefix: [:phoenix]},
- # {Plugins.Ecto, poll_rate: poll_rate, metric_prefix: [:ecto]},
- # Plugins.Oban,
- # Plugins.PhoenixLiveView
{OsMon, poll_rate: poll_rate},
{Tenants, poll_rate: poll_rate},
- {Tenant, poll_rate: poll_rate}
+ {Tenant, poll_rate: poll_rate},
+ {Channels, poll_rate: poll_rate}
]
end
@@ -101,7 +100,7 @@ defmodule Realtime.PromEx do
]
end
- def get_metrics() do
+ def get_metrics do
%{
region: region,
node_host: node_host,
@@ -113,23 +112,17 @@ defmodule Realtime.PromEx do
metrics =
PromEx.get_metrics(Realtime.PromEx)
|> String.split("\n")
- |> Enum.map(fn line ->
+ |> Enum.map_join("\n", fn line ->
case Regex.run(~r/(?!\#)^(\w+)(?:{(.*?)})?\s*(.+)$/, line) do
nil ->
line
[_, key, tags, value] ->
- tags =
- if tags == "" do
- def_tags
- else
- tags <> "," <> def_tags
- end
+ tags = if tags == "", do: def_tags, else: tags <> "," <> def_tags
"#{key}{#{tags}} #{value}"
end
end)
- |> Enum.join("\n")
Realtime.PromEx.__ets_cron_flusher_name__()
|> PromEx.ETSCronFlusher.defer_ets_flush()
@@ -137,19 +130,19 @@ defmodule Realtime.PromEx do
metrics
end
- def set_metrics_tags() do
+ def set_metrics_tags do
[_, node_host] = node() |> Atom.to_string() |> String.split("@")
metrics_tags = %{
- region: Application.get_env(:realtime, :fly_region),
+ region: Application.get_env(:realtime, :region),
node_host: node_host,
- short_alloc_id: short_node_id()
+ short_alloc_id: Nodes.short_node_id_from_name(node())
}
Application.put_env(:realtime, :metrics_tags, metrics_tags)
end
- def get_metrics_tags() do
+ def get_metrics_tags do
Application.get_env(:realtime, :metrics_tags)
end
end
diff --git a/lib/realtime/monitoring/prom_ex/plugins/channels.ex b/lib/realtime/monitoring/prom_ex/plugins/channels.ex
new file mode 100644
index 0000000..357838f
--- /dev/null
+++ b/lib/realtime/monitoring/prom_ex/plugins/channels.ex
@@ -0,0 +1,20 @@
+defmodule Realtime.PromEx.Plugins.Channels do
+ @moduledoc """
+ Realtime channels monitoring plugin for PromEx
+ """
+ use PromEx.Plugin
+ require Logger
+
+ @impl true
+ def event_metrics(_opts) do
+ Event.build(:realtime, [
+ counter(
+ [:realtime, :channel, :error],
+ event_name: [:realtime, :channel, :error],
+ measurement: :code,
+ tags: [:code],
+ description: "Count of errors in the Realtime channels initialization"
+ )
+ ])
+ end
+end
diff --git a/lib/realtime/monitoring/prom_ex/plugins/osmon.ex b/lib/realtime/monitoring/prom_ex/plugins/osmon.ex
index 5047913..67d1fcb 100644
--- a/lib/realtime/monitoring/prom_ex/plugins/osmon.ex
+++ b/lib/realtime/monitoring/prom_ex/plugins/osmon.ex
@@ -61,7 +61,7 @@ defmodule Realtime.PromEx.Plugins.OsMon do
)
end
- def execute_metrics() do
+ def execute_metrics do
execute_metrics(@event_ram_usage, %{ram: OsMetrics.ram_usage()})
execute_metrics(@event_cpu_util, %{cpu: OsMetrics.cpu_util()})
execute_metrics(@event_cpu_la, OsMetrics.cpu_la())
diff --git a/lib/realtime/monitoring/prom_ex/plugins/phoenix.ex b/lib/realtime/monitoring/prom_ex/plugins/phoenix.ex
index ba452e3..ab1d6e3 100644
--- a/lib/realtime/monitoring/prom_ex/plugins/phoenix.ex
+++ b/lib/realtime/monitoring/prom_ex/plugins/phoenix.ex
@@ -55,17 +55,12 @@ if Code.ensure_loaded?(Phoenix) do
)
end
- def execute_metrics() do
+ def execute_metrics do
active_conn =
- case :ets.lookup(:ranch_server, {:listener_sup, HTTP}) do
- [] ->
- -1
-
- _ ->
- HTTP
- |> :ranch_server.get_connections_sup()
- |> :supervisor.count_children()
- |> Keyword.get(:active)
+ if :ranch.info()[HTTP] do
+ :ranch.info(HTTP)[:active_connections]
+ else
+ -1
end
:telemetry.execute(@event_all_connections, %{active: active_conn}, %{})
@@ -123,8 +118,7 @@ if Code.ensure_loaded?(Phoenix) do
metric_prefix ++ [:socket, :connected, :duration, :milliseconds],
event_name: [:phoenix, :socket_connected],
measurement: :duration,
- description:
- "The time it takes for the application to establish a socket connection.",
+ description: "The time it takes for the application to establish a socket connection.",
reporter_options: [
buckets: [10, 100, 500, 1_000, 5_000, 10_000]
],
diff --git a/lib/realtime/monitoring/prom_ex/plugins/tenant.ex b/lib/realtime/monitoring/prom_ex/plugins/tenant.ex
index c121faf..a122460 100644
--- a/lib/realtime/monitoring/prom_ex/plugins/tenant.ex
+++ b/lib/realtime/monitoring/prom_ex/plugins/tenant.ex
@@ -21,7 +21,8 @@ defmodule Realtime.PromEx.Plugins.Tenant do
# Event metrics definitions
[
channel_events(),
- replication_metrics()
+ replication_metrics(),
+ subscription_metrics()
]
end
@@ -56,7 +57,7 @@ defmodule Realtime.PromEx.Plugins.Tenant do
)
end
- def execute_tenant_metrics() do
+ def execute_tenant_metrics do
tenants = Tenants.list_connected_tenants(Node.self())
for t <- tenants do
@@ -64,15 +65,17 @@ defmodule Realtime.PromEx.Plugins.Tenant do
cluster_count = UsersCounter.tenant_users(t)
tenant = Tenants.Cache.get_tenant_by_external_id(t)
- Telemetry.execute(
- [:realtime, :connections],
- %{connected: count, connected_cluster: cluster_count, limit: tenant.max_concurrent_users},
- %{tenant: t}
- )
+ if tenant != nil do
+ Telemetry.execute(
+ [:realtime, :connections],
+ %{connected: count, connected_cluster: cluster_count, limit: tenant.max_concurrent_users},
+ %{tenant: t}
+ )
+ end
end
end
- defp replication_metrics() do
+ defp replication_metrics do
Event.build(
:realtime_tenant_replication_event_metrics,
[
@@ -84,14 +87,36 @@ defmodule Realtime.PromEx.Plugins.Tenant do
tags: [:tenant],
unit: {:microsecond, :millisecond},
reporter_options: [
- buckets: [125, 250, 500, 1_000, 2_000, 4_000, 8_000, 16_000, 32_000, 64_000]
+ buckets: [125, 250, 500, 1_000, 2_000, 4_000, 8_000, 16_000]
]
)
]
)
end
- defp channel_events() do
+ defp subscription_metrics do
+ Event.build(
+ :realtime_tenant_channel_event_metrics,
+ [
+ sum(
+ [:realtime, :subscriptions_checker, :pid_not_found],
+ event_name: [:realtime, :subscriptions_checker, :pid_not_found],
+ measurement: :sum,
+ description: "Sum of pids not found in Subscription tables.",
+ tags: [:tenant]
+ ),
+ sum(
+ [:realtime, :subscriptions_checker, :phantom_pid_detected],
+ event_name: [:realtime, :subscriptions_checker, :phantom_pid_detected],
+ measurement: :sum,
+ description: "Sum of phantom pids detected in Subscription tables.",
+ tags: [:tenant]
+ )
+ ]
+ )
+ end
+
+ defp channel_events do
Event.build(
:realtime_tenant_channel_event_metrics,
[
@@ -122,6 +147,27 @@ defmodule Realtime.PromEx.Plugins.Tenant do
measurement: :limit,
description: "Rate limit of joins per second on a Realtime Channel.",
tags: [:tenant]
+ ),
+ sum(
+ [:realtime, :channel, :presence_events],
+ event_name: [:realtime, :rate_counter, :channel, :presence_events],
+ measurement: :sum,
+ description: "Sum of presence messages sent on a Realtime Channel.",
+ tags: [:tenant]
+ ),
+ last_value(
+ [:realtime, :tenants, :read_authorization_check],
+ event_name: [:realtime, :tenants, :read_authorization_check],
+ measurement: :count,
+ description: "Last value of read authorization checks.",
+ tags: [:tenant]
+ ),
+ last_value(
+ [:realtime, :tenants, :write_authorization_check],
+ event_name: [:realtime, :tenants, :write_authorization_check],
+ measurement: :count,
+ description: "Last value of write authorization checks.",
+ tags: [:tenant]
)
]
)
diff --git a/lib/realtime/monitoring/prom_ex/plugins/tenants.ex b/lib/realtime/monitoring/prom_ex/plugins/tenants.ex
index 49a78b1..db814f5 100644
--- a/lib/realtime/monitoring/prom_ex/plugins/tenants.ex
+++ b/lib/realtime/monitoring/prom_ex/plugins/tenants.ex
@@ -2,46 +2,57 @@ defmodule Realtime.PromEx.Plugins.Tenants do
@moduledoc false
use PromEx.Plugin
+
+ alias PromEx.MetricTypes.Event
+ alias Realtime.Tenants.Connect
+
require Logger
@event_connected [:prom_ex, :plugin, :realtime, :tenants, :connected]
+ @impl true
+ def event_metrics(_) do
+ Event.build(:realtime, [
+ distribution(
+ [:realtime, :rpc],
+ event_name: [:realtime, :rpc],
+ description: "Latency of rpc calls triggered by a tenant action",
+ measurement: :latency,
+ unit: {:microsecond, :millisecond},
+ tags: [:success],
+ reporter_options: [buckets: [10, 250, 5000, 15_000]]
+ )
+ ])
+ end
+
@impl true
def polling_metrics(opts) do
poll_rate = Keyword.get(opts, :poll_rate)
[
- metrics(poll_rate)
+ Polling.build(
+ :realtime_tenants_events,
+ poll_rate,
+ {__MODULE__, :execute_metrics, []},
+ [
+ last_value(
+ [:realtime, :tenants, :connected],
+ event_name: @event_connected,
+ description: "The total count of connected tenants.",
+ measurement: :connected
+ )
+ ]
+ )
]
end
- defp metrics(poll_rate) do
- Polling.build(
- :realtime_tenants_events,
- poll_rate,
- {__MODULE__, :execute_metrics, []},
- [
- last_value(
- [:realtime, :tenants, :connected],
- event_name: @event_connected,
- description: "The total count of connected tenants.",
- measurement: :connected
- )
- ]
- )
- end
-
- def execute_metrics() do
+ def execute_metrics do
connected =
- if Enum.member?(:syn.node_scopes(), Extensions.PostgresCdcRls) do
- :syn.local_registry_count(Extensions.PostgresCdcRls)
- else
- -1
- end
-
- execute_metrics(@event_connected, %{
- connected: connected
- })
+ if Enum.member?(:syn.node_scopes(), Connect),
+ do: :syn.local_registry_count(Connect),
+ else: -1
+
+ execute_metrics(@event_connected, %{connected: connected})
end
defp execute_metrics(event, metrics) do
diff --git a/lib/realtime/nodes.ex b/lib/realtime/nodes.ex
new file mode 100644
index 0000000..f11742a
--- /dev/null
+++ b/lib/realtime/nodes.ex
@@ -0,0 +1,220 @@
+defmodule Realtime.Nodes do
+ @moduledoc """
+  Utilities for mapping tenants to nodes and regions in the cluster.
+ """
+ require Logger
+ alias Realtime.Api.Tenant
+
+ @doc """
+ Gets the node to launch the Postgres connection on for a tenant.
+ """
+ @spec get_node_for_tenant(Tenant.t()) :: {:ok, node()} | {:error, term()}
+ def get_node_for_tenant(nil), do: {:error, :tenant_not_found}
+
+ def get_node_for_tenant(%Tenant{extensions: extensions, external_id: tenant_id}) do
+ with region <- get_region(extensions),
+ tenant_region <- platform_region_translator(region),
+ node <- launch_node(tenant_id, tenant_region, node()) do
+ {:ok, node}
+ end
+ end
+
+ defp get_region(extensions) do
+ extensions
+ |> Enum.map(fn %{settings: %{"region" => region}} -> region end)
+ |> Enum.uniq()
+ |> hd()
+ end
+
+ @doc """
+ Translates a region from a platform to the closest tealbase tenant region
+ """
+ @spec platform_region_translator(String.t() | nil) :: nil | binary()
+ def platform_region_translator(nil), do: nil
+
+ def platform_region_translator(tenant_region) when is_binary(tenant_region) do
+ platform = Application.get_env(:realtime, :platform)
+ region_mapping(platform, tenant_region)
+ end
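+
+  # Illustrative mapping under the :aws platform (see region_mapping/2 below):
+  #
+  #   platform_region_translator("eu-central-1") #=> "eu-west-2"
+  #   platform_region_translator("unknown-region") #=> nil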
+
+ defp region_mapping(:aws, tenant_region) do
+ case tenant_region do
+ "ap-east-1" -> "ap-southeast-1"
+ "ap-northeast-1" -> "ap-southeast-1"
+ "ap-northeast-2" -> "ap-southeast-1"
+ "ap-south-1" -> "ap-southeast-1"
+ "ap-southeast-1" -> "ap-southeast-1"
+ "ap-southeast-2" -> "ap-southeast-2"
+ "ca-central-1" -> "us-east-1"
+ "eu-central-1" -> "eu-west-2"
+ "eu-central-2" -> "eu-west-2"
+ "eu-north-1" -> "eu-west-2"
+ "eu-west-1" -> "eu-west-2"
+ "eu-west-2" -> "eu-west-2"
+ "eu-west-3" -> "eu-west-2"
+ "sa-east-1" -> "us-east-1"
+ "us-east-1" -> "us-east-1"
+ "us-east-2" -> "us-east-1"
+ "us-west-1" -> "us-west-1"
+ "us-west-2" -> "us-west-1"
+ _ -> nil
+ end
+ end
+
+ defp region_mapping(:fly, tenant_region) do
+ case tenant_region do
+ "us-east-1" -> "iad"
+ "us-west-1" -> "sea"
+ "sa-east-1" -> "iad"
+ "ca-central-1" -> "iad"
+ "ap-southeast-1" -> "syd"
+ "ap-northeast-1" -> "syd"
+ "ap-northeast-2" -> "syd"
+ "ap-southeast-2" -> "syd"
+ "ap-east-1" -> "syd"
+ "ap-south-1" -> "syd"
+ "eu-west-1" -> "lhr"
+ "eu-west-2" -> "lhr"
+ "eu-west-3" -> "lhr"
+ "eu-central-1" -> "lhr"
+ _ -> nil
+ end
+ end
+
+ defp region_mapping(_, tenant_region), do: tenant_region
+
+ @doc """
+ Lists the nodes in a region. Sorts by node name in case the list order
+ is unstable.
+ """
+
+ @spec region_nodes(String.t() | nil) :: [atom()]
+ def region_nodes(region) when is_binary(region) do
+ :syn.members(RegionNodes, region)
+ |> Enum.map(fn {_pid, [node: node]} -> node end)
+ |> Enum.sort()
+ end
+
+ def region_nodes(nil), do: []
+
+ @doc """
+ Picks the node to launch the Postgres connection on.
+
+ If there are not two nodes in a region the connection is established from
+  If there are fewer than two nodes in a region, the connection is established
+  from the given `default` node.
+ @spec launch_node(String.t(), String.t() | nil, atom()) :: atom()
+ def launch_node(tenant_id, region, default) do
+ case region_nodes(region) do
+ [node] ->
+ Logger.warning("Only one region node (#{inspect(node)}) for #{region} using default #{inspect(default)}")
+
+ default
+
+ [] ->
+ Logger.warning("Zero region nodes for #{region} using #{inspect(default)}")
+ default
+
+ regions_nodes ->
+ member_count = Enum.count(regions_nodes)
+ index = :erlang.phash2(tenant_id, member_count)
+
+ Enum.fetch!(regions_nodes, index)
+ end
+ end
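+
+  # Sketch of the selection: with three region nodes, :erlang.phash2(tenant_id, 3)
+  # yields a stable index in 0..2, so a tenant is pinned to one node while
+  # different tenants spread across the region.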
+
+ @doc """
+  Gets a short node id from a full node name such as `realtime-prod@fdaa:0:cc:a7b:b385:83c3:cfe3:2`.
+
+ ## Examples
+
+      iex> node = Node.self()
+      iex> Realtime.Nodes.short_node_id_from_name(node)
+      "nohost"
+
+      iex> node = :"realtime-prod@fdaa:0:cc:a7b:b385:83c3:cfe3:2"
+      iex> Realtime.Nodes.short_node_id_from_name(node)
+      "83c3cfe3"
+
+      iex> node = :"pink@127.0.0.1"
+      iex> Realtime.Nodes.short_node_id_from_name(node)
+      "127.0.0.1"
+
+      iex> node = :"pink@10.0.1.1"
+      iex> Realtime.Nodes.short_node_id_from_name(node)
+      "10.0.1.1"
+
+      iex> node = :"realtime@host.name.internal"
+      iex> Realtime.Nodes.short_node_id_from_name(node)
+      "host.name.internal"
+ """
+
+ @spec short_node_id_from_name(atom()) :: String.t()
+ def short_node_id_from_name(name) when is_atom(name) do
+ [_, host] = name |> Atom.to_string() |> String.split("@", parts: 2)
+
+ case String.split(host, ":", parts: 8) do
+ [_, _, _, _, _, one, two, _] ->
+ one <> two
+
+ _other ->
+ host
+ end
+ end
+
+ @mapping_realtime_region_to_tenant_region_aws %{
+ "ap-southeast-1" => [
+ "ap-east-1",
+ "ap-northeast-1",
+ "ap-northeast-2",
+ "ap-south-1",
+ "ap-southeast-1"
+ ],
+ "ap-southeast-2" => ["ap-southeast-2"],
+ "eu-west-2" => [
+ "eu-central-1",
+ "eu-central-2",
+ "eu-north-1",
+ "eu-west-1",
+ "eu-west-2",
+ "eu-west-3"
+ ],
+ "us-east-1" => [
+ "ca-central-1",
+ "sa-east-1",
+ "us-east-1",
+ "us-east-2"
+ ],
+ "us-west-1" => ["us-west-1", "us-west-2"]
+ }
+ @mapping_realtime_region_to_tenant_region_fly %{
+ "iad" => ["ca-central-1", "sa-east-1", "us-east-1"],
+ "lhr" => ["eu-central-1", "eu-west-1", "eu-west-2", "eu-west-3"],
+ "sea" => ["us-west-1"],
+ "syd" => [
+ "ap-east-1",
+ "ap-northeast-1",
+ "ap-northeast-2",
+ "ap-south-1",
+ "ap-southeast-1",
+ "ap-southeast-2"
+ ]
+ }
+
+ @doc """
+  Fetches the tenant regions for a given Realtime region.
+ """
+ @spec region_to_tenant_regions(String.t()) :: list() | nil
+ def region_to_tenant_regions(region) do
+ platform = Application.get_env(:realtime, :platform)
+
+ mappings =
+ case platform do
+ :aws -> @mapping_realtime_region_to_tenant_region_aws
+ :fly -> @mapping_realtime_region_to_tenant_region_fly
+ _ -> %{}
+ end
+
+ Map.get(mappings, region)
+ end
+end
diff --git a/lib/realtime/operations.ex b/lib/realtime/operations.ex
new file mode 100644
index 0000000..76efa38
--- /dev/null
+++ b/lib/realtime/operations.ex
@@ -0,0 +1,99 @@
+defmodule Realtime.Operations do
+ @moduledoc """
+ Support operations for Realtime.
+ """
+ alias Realtime.Rpc
+
+ @doc """
+  Ensures connected users are connected to the closest region by killing and restarting the connection process.
+ """
+ def rebalance do
+ Enum.reduce(:syn.group_names(:users), 0, fn tenant, acc ->
+ case :syn.lookup(Extensions.PostgresCdcRls, tenant) do
+ {pid, %{region: region}} ->
+ platform_region = Realtime.Nodes.platform_region_translator(region)
+ current_node = node(pid)
+
+ case Realtime.Nodes.launch_node(tenant, platform_region, false) do
+ ^current_node -> acc
+ _ -> stop_user_tenant_process(tenant, platform_region, acc)
+ end
+
+ _ ->
+ acc
+ end
+ end)
+ end
+
+ @doc """
+ Kills all connections to a tenant database in all connected nodes
+ """
+ @spec kill_connections_to_tenant_id_in_all_nodes(String.t(), atom()) :: list()
+ def kill_connections_to_tenant_id_in_all_nodes(tenant_id, reason \\ :normal) do
+ [node() | Node.list()]
+ |> Task.async_stream(
+ fn node ->
+ Rpc.enhanced_call(node, __MODULE__, :kill_connections_to_tenant_id, [tenant_id, reason])
+ end,
+ timeout: 5000
+ )
+    |> Enum.to_list()
+ end
+
+ @doc """
+ Kills all connections to a tenant database in the current node
+ """
+ @spec kill_connections_to_tenant_id(String.t(), atom()) :: :ok
+ def kill_connections_to_tenant_id(tenant_id, reason) do
+ Logger.metadata(external_id: tenant_id, project: tenant_id)
+
+ pids_to_kill =
+ for pid <- Process.list(),
+ info = Process.info(pid),
+ dict = Keyword.get(info, :dictionary, []),
+ match?({DBConnection.Connection, :init, 1}, dict[:"$initial_call"]),
+ Keyword.get(dict, :"$logger_metadata$")[:external_id] == tenant_id,
+ links = Keyword.get(info, :links) do
+ links
+ |> Enum.filter(fn pid ->
+ is_pid(pid) &&
+ pid |> Process.info() |> Keyword.get(:dictionary, []) |> Keyword.get(:"$initial_call") ==
+ {:supervisor, DBConnection.ConnectionPool.Pool, 1}
+ end)
+ end
+
+    pids_to_kill
+    |> List.flatten()
+    |> Enum.each(&Process.exit(&1, reason))
+ end
+
+ @doc """
+ Kills all Ecto.Migration.Runner processes that are linked only to Ecto.MigratorSupervisor
+ """
+ @spec dirty_terminate_runners :: list()
+ def dirty_terminate_runners do
+ Ecto.MigratorSupervisor
+ |> DynamicSupervisor.which_children()
+ |> Enum.reduce([], fn
+ {_, pid, :worker, [Ecto.Migration.Runner]}, acc ->
+ if length(Process.info(pid)[:links]) < 2 do
+ [{pid, Agent.stop(pid, :normal, 5_000)} | acc]
+ else
+ acc
+ end
+
+ _, acc ->
+ acc
+ end)
+ end
+
+ defp stop_user_tenant_process(tenant, platform_region, acc) do
+ Extensions.PostgresCdcRls.handle_stop(tenant, 5_000)
+ # credo:disable-for-next-line
+ IO.inspect({"Stopped", tenant, platform_region})
+ Process.sleep(1_500)
+ acc + 1
+ catch
+ kind, reason ->
+ # credo:disable-for-next-line
+ IO.inspect({"Failed to stop", tenant, kind, reason})
+ end
+end
diff --git a/lib/realtime/postgres_cdc.ex b/lib/realtime/postgres_cdc.ex
index 6923d85..eef81a1 100644
--- a/lib/realtime/postgres_cdc.ex
+++ b/lib/realtime/postgres_cdc.ex
@@ -3,9 +3,15 @@ defmodule Realtime.PostgresCdc do
require Logger
+ alias Realtime.Api.Tenant
+
@timeout 10_000
@extensions Application.compile_env(:realtime, :extensions)
+ defmodule Exception do
+ defexception message: "PostgresCdc error!"
+ end
+
def connect(module, opts) do
apply(module, :handle_connect, [opts])
end
@@ -15,41 +21,54 @@ defmodule Realtime.PostgresCdc do
end
def subscribe(module, pg_change_params, tenant, metadata) do
- RealtimeWeb.Endpoint.subscribe("postgres_cdc:" <> tenant)
+ RealtimeWeb.Endpoint.subscribe("postgres_cdc_rls:" <> tenant)
apply(module, :handle_subscribe, [pg_change_params, tenant, metadata])
end
+ @spec stop(module, Tenant.t(), pos_integer) :: :ok
def stop(module, tenant, timeout \\ @timeout) do
- apply(module, :handle_stop, [tenant, timeout])
+ apply(module, :handle_stop, [tenant.external_id, timeout])
end
+ @doc """
+ Stops all available drivers within a specified timeout.
+
+  The overall `timeout` is split evenly across the available drivers, and each
+  driver's handle_stop call is expected to return `:ok` within its share.
+ """
+
+ @spec stop_all(Tenant.t(), pos_integer) :: :ok | :error
def stop_all(tenant, timeout \\ @timeout) do
- available_drivers()
- |> Enum.each(fn module ->
- stop(module, tenant, timeout)
- end)
+ count = Enum.count(available_drivers())
+ stop_timeout = Kernel.ceil(timeout / count)
+
+ stops = Enum.map(available_drivers(), fn module -> stop(module, tenant, stop_timeout) end)
+
+ case Enum.all?(stops, &(&1 == :ok)) do
+ true -> :ok
+ false -> :error
+ end
end
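+
+  # Worked example: with the default @timeout of 10_000 ms and, say, two
+  # available drivers, each handle_stop gets ceil(10_000 / 2) = 5_000 ms, so the
+  # whole stop_all pass stays within the overall timeout.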
@spec available_drivers :: list
- def available_drivers() do
+ def available_drivers do
@extensions
|> Enum.filter(fn {_, e} -> e.type == :postgres_cdc end)
|> Enum.map(fn {_, e} -> e.driver end)
end
+ @spec filter_settings(binary(), list()) :: map()
def filter_settings(key, extensions) do
- [cdc] =
- Enum.filter(extensions, fn e ->
- if e.type == key do
- true
- else
- false
- end
- end)
+ [cdc] = Enum.filter(extensions, fn e -> e.type == key end)
cdc.settings
end
+ @doc """
+ Gets the extension module for a tenant.
+ """
+
@spec driver(String.t()) :: {:ok, module()} | {:error, String.t()}
def driver(tenant_key) do
@extensions
@@ -60,68 +79,7 @@ defmodule Realtime.PostgresCdc do
end
end
- @spec aws_to_fly(String.t()) :: nil | <<_::24>>
- def aws_to_fly(aws_region) when is_binary(aws_region) do
- case aws_region do
- "us-east-1" -> "iad"
- "us-west-1" -> "sea"
- "sa-east-1" -> "iad"
- "ca-central-1" -> "iad"
- "ap-southeast-1" -> "syd"
- "ap-northeast-1" -> "syd"
- "ap-northeast-2" -> "syd"
- "ap-southeast-2" -> "syd"
- "ap-south-1" -> "syd"
- "eu-west-1" -> "lhr"
- "eu-west-2" -> "lhr"
- "eu-west-3" -> "lhr"
- "eu-central-1" -> "lhr"
- _ -> nil
- end
- end
-
- @doc """
- Lists the nodes in a region. Sorts by node name in case the list order
- is unstable.
- """
-
- @spec region_nodes(String.t()) :: [atom()]
- def region_nodes(region) when is_binary(region) do
- :syn.members(RegionNodes, region)
- |> Enum.map(fn {_pid, [node: node]} -> node end)
- |> Enum.sort()
- end
-
- @doc """
- Picks the node to launch the Postgres connection on.
-
- If there are not two nodes in a region the connection is established from
- the `default` node given.
- """
-
- @spec launch_node(String.t(), String.t(), atom()) :: atom()
- def launch_node(tenant, fly_region, default) do
- case region_nodes(fly_region) do
- [node] ->
- Logger.warning(
- "Only one region node (#{inspect(node)}) for #{fly_region} using default #{inspect(default)}"
- )
-
- default
-
- [] ->
- Logger.warning("Zero region nodes for #{fly_region} using #{inspect(default)}")
- default
-
- regions_nodes ->
- member_count = Enum.count(regions_nodes)
- index = :erlang.phash2(tenant, member_count)
-
- Enum.at(regions_nodes, index)
- end
- end
-
- @callback handle_connect(any()) :: {:ok, pid()} | {:error, any()}
+ @callback handle_connect(any()) :: {:ok, any()} | nil
@callback handle_after_connect(any(), any(), any()) :: {:ok, any()} | {:error, any()}
@callback handle_subscribe(any(), any(), any()) :: :ok
@callback handle_stop(any(), any()) :: any()
diff --git a/lib/realtime/rate_counter/rate_counter.ex b/lib/realtime/rate_counter/rate_counter.ex
index f59f362..8930286 100644
--- a/lib/realtime/rate_counter/rate_counter.ex
+++ b/lib/realtime/rate_counter/rate_counter.ex
@@ -35,7 +35,8 @@ defmodule Realtime.RateCounter do
event_name: [@app_name] ++ [:rate_counter],
measurements: %{sum: 0},
metadata: %{}
- }
+ },
+ counter_pid: nil
@type t :: %__MODULE__{
id: term(),
@@ -51,19 +52,35 @@ defmodule Realtime.RateCounter do
event_name: :telemetry.event_name(),
measurements: :telemetry.event_measurements(),
metadata: :telemetry.event_metadata()
- }
+ },
+        counter_pid: pid() | nil
}
@spec start_link([keyword()]) :: {:ok, pid()} | {:error, {:already_started, pid()}}
def start_link(args) do
id = Keyword.get(args, :id)
- unless id, do: raise("Supply an identifier to start a counter!")
+ if !id, do: raise("Supply an identifier to start a counter!")
GenServer.start_link(__MODULE__, args,
name: {:via, Registry, {Realtime.Registry.Unique, {__MODULE__, :rate_counter, id}}}
)
end
+ @spec stop(term()) :: :ok
+ def stop(tenant_id) do
+ keys =
+ Registry.select(Realtime.Registry.Unique, [
+ {{{:"$1", :_, {:_, :_, :"$2"}}, :"$3", :_}, [{:==, :"$1", __MODULE__}, {:==, :"$2", tenant_id}], [:"$_"]}
+ ])
+
+ Enum.each(keys, fn {{_, _, key}, {pid, _}} ->
+ if Process.alive?(pid), do: GenServer.stop(pid)
+ Cachex.del!(@cache, key)
+ end)
+
+ :ok
+ end
+
@doc """
Starts a new RateCounter under a DynamicSupervisor
"""
@@ -93,51 +110,59 @@ defmodule Realtime.RateCounter do
@impl true
def init(args) do
- id = Keyword.get(args, :id)
+ id = Keyword.fetch!(args, :id)
+ telem_opts = Keyword.get(args, :telemetry)
every = Keyword.get(args, :tick, @tick)
max_bucket_len = Keyword.get(args, :max_bucket_len, @max_bucket_len)
idle_shutdown_ms = Keyword.get(args, :idle_shutdown, @idle_shutdown)
-
- telem_opts = Keyword.get(args, :telemetry)
-
- telemetry =
- if telem_opts,
- do: %{
- emit: true,
- event_name: [@app_name] ++ [:rate_counter] ++ telem_opts.event_name,
- measurements: Map.merge(%{sum: 0}, telem_opts.measurements),
- metadata: Map.merge(%{id: id}, telem_opts.metadata)
- },
- else: %{emit: false}
-
Logger.info("Starting #{__MODULE__} for: #{inspect(id)}")
- ensure_counter_started(id)
+ case ensure_counter_started(id) do
+ {:ok, _ref, pid} ->
+ Process.monitor(pid)
- ticker = tick(0)
+ telemetry =
+ if telem_opts do
+ Logger.metadata(telem_opts.metadata)
- idle_shutdown_ref =
- unless idle_shutdown_ms == :infinity, do: shutdown_after(idle_shutdown_ms), else: nil
+ %{
+ emit: true,
+ event_name: [@app_name] ++ [:rate_counter] ++ telem_opts.event_name,
+ measurements: Map.merge(%{sum: 0}, telem_opts.measurements),
+ metadata: Map.merge(%{id: id}, telem_opts.metadata)
+ }
+ else
+ %{emit: false}
+ end
+
+ ticker = tick(0)
+
+ idle_shutdown_ref =
+ if idle_shutdown_ms != :infinity, do: shutdown_after(idle_shutdown_ms), else: nil
+
+ state = %__MODULE__{
+ id: id,
+ tick: every,
+ tick_ref: ticker,
+ max_bucket_len: max_bucket_len,
+ idle_shutdown: idle_shutdown_ms,
+ idle_shutdown_ref: idle_shutdown_ref,
+ telemetry: telemetry,
+ counter_pid: pid
+ }
- state = %__MODULE__{
- id: id,
- tick: every,
- tick_ref: ticker,
- max_bucket_len: max_bucket_len,
- idle_shutdown: idle_shutdown_ms,
- idle_shutdown_ref: idle_shutdown_ref,
- telemetry: telemetry
- }
+ Cachex.put!(@cache, id, state)
- Cachex.put!(@cache, id, state)
+ {:ok, state}
- {:ok, state}
+      _ ->
+        # init/1 must return a stop tuple on failure, not {:shutdown, ...}
+        {:stop, :counter_not_found}
+ end
end
@impl true
def handle_info(:tick, state) do
Process.cancel_timer(state.tick_ref)
-
{:ok, count} = GenCounter.get(state.id)
:ok = GenCounter.put(state.id, 0)
@@ -178,6 +203,10 @@ defmodule Realtime.RateCounter do
{:stop, :normal, state}
end
+ def handle_info({:DOWN, _, :process, counter_pid, _}, %{counter_pid: counter_pid} = state) do
+ {:stop, :shutdown, state}
+ end
+
defp tick(every) do
Process.send_after(self(), :tick, every)
end
@@ -187,9 +216,7 @@ defmodule Realtime.RateCounter do
end
defp ensure_counter_started(id) do
- case GenCounter.get(id) do
- {:ok, _count} -> :ok
- {:error, :counter_not_found} -> GenCounter.new(id)
- end
+ GenCounter.new(id)
+ GenCounter.find_counter(id)
end
end
diff --git a/lib/realtime/release.ex b/lib/realtime/release.ex
index 39441a7..54e1e35 100644
--- a/lib/realtime/release.ex
+++ b/lib/realtime/release.ex
@@ -20,6 +20,7 @@ defmodule Realtime.Release do
def seeds(repo) do
load_app()
+ {:ok, _} = Application.ensure_all_started(:realtime)
{:ok, {:ok, _}, _} =
Ecto.Migrator.with_repo(repo, fn _repo ->
diff --git a/lib/realtime/repo.ex b/lib/realtime/repo.ex
index b7259b6..91fed35 100644
--- a/lib/realtime/repo.ex
+++ b/lib/realtime/repo.ex
@@ -1,51 +1,263 @@
defmodule Realtime.Repo do
+ require Logger
+
use Ecto.Repo,
otp_app: :realtime,
adapter: Ecto.Adapters.Postgres
- @replicas %{
- "sea" => Realtime.Repo.Replica.SJC,
- "sjc" => Realtime.Repo.Replica.SJC,
- "gru" => Realtime.Repo.Replica.IAD,
- "iad" => Realtime.Repo.Replica.IAD,
- "sin" => Realtime.Repo.Replica.SIN,
- "maa" => Realtime.Repo.Replica.SIN,
- "syd" => Realtime.Repo.Replica.SIN,
- "lhr" => Realtime.Repo.Replica.FRA,
- "fra" => Realtime.Repo.Replica.FRA
- }
+ import Ecto.Query
+ import Realtime.Logs
def with_dynamic_repo(config, callback) do
default_dynamic_repo = get_dynamic_repo()
- {:ok, repo} = [name: nil, pool_size: 1] |> Keyword.merge(config) |> Realtime.Repo.start_link()
+ {:ok, repo} = [name: nil, pool_size: 2] |> Keyword.merge(config) |> Realtime.Repo.start_link()
try do
- Realtime.Repo.put_dynamic_repo(repo)
+ put_dynamic_repo(repo)
callback.(repo)
after
- Realtime.Repo.put_dynamic_repo(default_dynamic_repo)
+ put_dynamic_repo(default_dynamic_repo)
Supervisor.stop(repo)
end
end
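+
+  # Illustrative usage (connection options are hypothetical):
+  #
+  #   Realtime.Repo.with_dynamic_repo([hostname: "localhost", database: "tenant_db"], fn _repo ->
+  #     Realtime.Repo.all(Realtime.Api.Tenant)
+  #   end)
+  #
+  # The default dynamic repo is restored and the temporary repo stopped even if
+  # the callback raises.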
- if Mix.env() == :test do
- def replica, do: __MODULE__
- else
- def replica,
- do:
- Map.get(
- @replicas,
- Application.get_env(:realtime, :fly_region),
- Realtime.Repo
- )
- end
-
- for replica_repo <- @replicas |> Map.values() |> Enum.uniq() do
- defmodule replica_repo do
- use Ecto.Repo,
- otp_app: :realtime,
- adapter: Ecto.Adapters.Postgres,
- read_only: true
+ @doc """
+ Lists all records for a given query and converts them into a given struct
+ """
+ @spec all(DBConnection.conn(), Ecto.Queryable.t(), module(), [Postgrex.execute_option()]) ::
+ {:ok, list(struct())} | {:error, any()}
+ def all(conn, query, result_struct, opts \\ []) do
+ conn
+ |> run_all_query(query, opts)
+ |> result_to_structs(result_struct)
+ end
+
+ @doc """
+ Fetches one record for a given query and converts it into a given struct
+ """
+ @spec one(
+ DBConnection.conn(),
+ Ecto.Query.t(),
+ module(),
+ Postgrex.option() | Keyword.t()
+ ) ::
+ {:error, any()} | {:ok, struct()} | Ecto.Changeset.t()
+ def one(conn, query, result_struct, opts \\ []) do
+ conn
+ |> run_all_query(query, opts)
+ |> result_to_single_struct(result_struct, nil)
+ end
+
+ @doc """
+ Inserts a given changeset into the database and converts the result into a given struct
+ """
+ @spec insert(
+ DBConnection.conn(),
+ Ecto.Changeset.t(),
+ module(),
+ Postgrex.option() | Keyword.t()
+ ) ::
+ {:ok, struct()} | {:error, any()} | Ecto.Changeset.t()
+ def insert(conn, changeset, result_struct, opts \\ []) do
+ with {:ok, {query, args}} <- insert_query_from_changeset(changeset) do
+ conn
+ |> run_query_with_trap(query, args, opts)
+ |> result_to_single_struct(result_struct, changeset)
+ end
+ end
+
+ @doc """
+ Inserts all changesets into the database and converts the result into a given list of structs
+ """
+ @spec insert_all_entries(
+ DBConnection.conn(),
+ [Ecto.Changeset.t()],
+ module(),
+ Postgrex.option() | Keyword.t()
+ ) ::
+ {:ok, [struct()]} | {:error, any()} | Ecto.Changeset.t()
+ def insert_all_entries(conn, changesets, result_struct, opts \\ []) do
+ with {:ok, {query, args}} <- insert_all_query_from_changeset(changesets) do
+ conn
+ |> run_query_with_trap(query, args, opts)
+ |> result_to_structs(result_struct)
+ end
+ end
+
+ @doc """
+ Deletes records for a given query and returns the number of deleted records
+ """
+ @spec del(DBConnection.conn(), Ecto.Queryable.t()) ::
+ {:ok, non_neg_integer()} | {:error, any()}
+ def del(conn, query) do
+ with {:ok, %Postgrex.Result{num_rows: num_rows}} <- run_delete_query(conn, query) do
+ {:ok, num_rows}
+ end
+ end
+
+ @doc """
+ Updates an entry based on the changeset and returns the updated entry
+ """
+ @spec update(DBConnection.conn(), Ecto.Changeset.t(), module()) ::
+ {:ok, struct()} | {:error, any()} | Ecto.Changeset.t()
+ def update(conn, changeset, result_struct, opts \\ []) do
+ with {:ok, {query, args}} <- update_query_from_changeset(changeset) do
+ conn
+ |> run_query_with_trap(query, args, opts)
+ |> result_to_single_struct(result_struct, changeset)
+ end
+ end
+
+ defp result_to_single_struct(
+ {:error, %Postgrex.Error{postgres: %{code: :unique_violation, constraint: "channels_name_index"}}},
+ _struct,
+ changeset
+ ) do
+ Ecto.Changeset.add_error(changeset, :name, "has already been taken")
+ end
+
+ defp result_to_single_struct({:error, _} = error, _, _), do: error
+
+ defp result_to_single_struct({:ok, %Postgrex.Result{rows: []}}, _, _) do
+ {:error, :not_found}
+ end
+
+ defp result_to_single_struct({:ok, %Postgrex.Result{rows: [row], columns: columns}}, struct, _) do
+ {:ok, load(struct, Enum.zip(columns, row))}
+ end
+
+ defp result_to_single_struct({:ok, %Postgrex.Result{num_rows: num_rows}}, _, _) do
+ raise("expected at most one result but got #{num_rows} in result")
+ end
+
+ defp result_to_structs({:error, _} = error, _), do: error
+
+ defp result_to_structs({:ok, %Postgrex.Result{rows: rows, columns: columns}}, struct) do
+ {:ok, Enum.map(rows, &load(struct, Enum.zip(columns, &1)))}
+ end
+
+ defp insert_query_from_changeset(%{valid?: false} = changeset), do: {:error, changeset}
+
+ defp insert_query_from_changeset(changeset) do
+ schema = changeset.data.__struct__
+ source = schema.__schema__(:source)
+ prefix = schema.__schema__(:prefix)
+ acc = %{header: [], rows: []}
+
+ %{header: header, rows: rows} =
+ Enum.reduce(changeset.changes, acc, fn {field, row}, %{header: header, rows: rows} ->
+ row =
+ case row do
+ row when is_boolean(row) -> row
+ row when is_atom(row) -> Atom.to_string(row)
+ _ -> row
+ end
+
+ %{
+ header: [Atom.to_string(field) | header],
+ rows: [row | rows]
+ }
+ end)
+
+ table = "\"#{prefix}\".\"#{source}\""
+ header = "(#{Enum.map_join(header, ",", &"\"#{&1}\"")})"
+
+ arg_index =
+ rows
+ |> Enum.with_index(1)
+ |> Enum.map_join(",", fn {_, index} -> "$#{index}" end)
+
+ {:ok, {"INSERT INTO #{table} #{header} VALUES (#{arg_index}) RETURNING *", rows}}
+ end
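+
+  # Sketch of the output for a changeset on "realtime"."messages" with changes
+  # %{extension: :broadcast, topic: "room:1"} (field order follows the change map):
+  #
+  #   {:ok, {~s(INSERT INTO "realtime"."messages" ("topic","extension") VALUES ($1,$2) RETURNING *),
+  #          ["room:1", "broadcast"]}}
+  #
+  # Note the atom value is stringified before it is bound.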
+
+ defp insert_all_query_from_changeset(changesets) do
+ invalid = Enum.filter(changesets, &(!&1.valid?))
+
+ if invalid != [] do
+ {:error, changesets}
+ else
+ [schema] = changesets |> Enum.map(& &1.data.__struct__) |> Enum.uniq()
+
+ source = schema.__schema__(:source)
+ prefix = schema.__schema__(:prefix)
+ changes = Enum.map(changesets, & &1.changes)
+
+ %{header: header, rows: rows} =
+ Enum.reduce(changes, %{header: [], rows: []}, fn v, changes_acc ->
+ Enum.reduce(v, changes_acc, fn {field, row}, %{header: header, rows: rows} ->
+ row =
+ case row do
+ row when is_boolean(row) -> row
+ row when is_atom(row) -> Atom.to_string(row)
+ _ -> row
+ end
+
+ %{
+ header: Enum.uniq([Atom.to_string(field) | header]),
+ rows: [row | rows]
+ }
+ end)
+ end)
+
+ args_index =
+ rows
+ |> Enum.chunk_every(length(header))
+ |> Enum.reduce({"", 1}, fn row, {acc, count} ->
+ arg_index =
+ row
+ |> Enum.with_index(count)
+ |> Enum.map_join("", fn {_, index} -> "$#{index}," end)
+ |> String.trim_trailing(",")
+ |> then(&"(#{&1})")
+
+ {"#{acc},#{arg_index}", count + length(row)}
+ end)
+ |> elem(0)
+ |> String.trim_leading(",")
+
+ table = "\"#{prefix}\".\"#{source}\""
+ header = "(#{Enum.map_join(header, ",", &"\"#{&1}\"")})"
+ {:ok, {"INSERT INTO #{table} #{header} VALUES #{args_index} RETURNING *", rows}}
end
end
+
+ defp update_query_from_changeset(%{valid?: false} = changeset), do: {:error, changeset}
+
+ defp update_query_from_changeset(changeset) do
+ %Ecto.Changeset{data: %{id: id, __struct__: struct}, changes: changes} = changeset
+ changes = Keyword.new(changes)
+ query = from(c in struct, where: c.id == ^id, select: c, update: [set: ^changes])
+ {:ok, to_sql(:update_all, query)}
+ end
+
+ defp run_all_query(conn, query, opts) do
+ {query, args} = to_sql(:all, query)
+ run_query_with_trap(conn, query, args, opts)
+ end
+
+ defp run_delete_query(conn, query) do
+ {query, args} = to_sql(:delete_all, query)
+ run_query_with_trap(conn, query, args)
+ end
+
+ defp run_query_with_trap(conn, query, args, opts \\ []) do
+ Postgrex.query(conn, query, args, opts)
+ rescue
+ e ->
+ log_error("ErrorRunningQuery", e)
+ {:error, :postgrex_exception}
+ catch
+ :exit, {:noproc, {DBConnection.Holder, :checkout, _}} ->
+ log_error(
+ "UnableCheckoutConnection",
+ "Unable to checkout connection, please check your connection pool configuration"
+ )
+
+ {:error, :postgrex_exception}
+
+ :exit, reason ->
+ log_error("UnknownError", reason)
+
+ {:error, :postgrex_exception}
+ end
end
diff --git a/lib/realtime/repo_replica.ex b/lib/realtime/repo_replica.ex
new file mode 100644
index 0000000..8079ccb
--- /dev/null
+++ b/lib/realtime/repo_replica.ex
@@ -0,0 +1,77 @@
+defmodule Realtime.Repo.Replica do
+ @moduledoc """
+ Generates a read-only replica repo for the region specified in config/runtime.exs.
+ """
+ require Logger
+
+ @replicas_fly %{
+ "sea" => Realtime.Repo.Replica.SJC,
+ "sjc" => Realtime.Repo.Replica.SJC,
+ "gru" => Realtime.Repo.Replica.IAD,
+ "iad" => Realtime.Repo.Replica.IAD,
+ "sin" => Realtime.Repo.Replica.SIN,
+ "maa" => Realtime.Repo.Replica.SIN,
+ "syd" => Realtime.Repo.Replica.SIN,
+ "lhr" => Realtime.Repo.Replica.FRA,
+ "fra" => Realtime.Repo.Replica.FRA
+ }
+
+ @replicas_aws %{
+ "ap-southeast-1" => Realtime.Repo.Replica.Singapore,
+ "ap-southeast-2" => Realtime.Repo.Replica.Singapore,
+ "eu-west-2" => Realtime.Repo.Replica.London,
+ "us-east-1" => Realtime.Repo.Replica.NorthVirginia,
+ "us-west-2" => Realtime.Repo.Replica.Oregon,
+ "us-west-1" => Realtime.Repo.Replica.SanJose
+ }
+
+ @ast (quote do
+ use Ecto.Repo,
+ otp_app: :realtime,
+ adapter: Ecto.Adapters.Postgres,
+ read_only: true
+ end)
+
+ @doc """
+ Returns the replica repo module for the region specified in config/runtime.exs.
+ """
+ @spec replica() :: module()
+ def replica do
+ replicas =
+ case Application.get_env(:realtime, :platform) do
+ :aws -> @replicas_aws
+ :fly -> @replicas_fly
+ _ -> %{}
+ end
+
+ region = Application.get_env(:realtime, :region)
+ replica = Map.get(replicas, region)
+ replica_conf = Application.get_env(:realtime, replica)
+
+ # Do not create module if replica isn't set or configuration is not present
+ cond do
+ is_nil(replica) ->
+ Logger.info("Replica region not found, defaulting to Realtime.Repo")
+ Realtime.Repo
+
+ is_nil(replica_conf) ->
+ Logger.info("Replica config not found for #{region} region")
+ Realtime.Repo
+
+ true ->
+ # Check if module is present
+ case Code.ensure_compiled(replica) do
+ {:module, _} -> nil
+ _ -> {:module, _, _, _} = Module.create(replica, @ast, Macro.Env.location(__ENV__))
+ end
+
+ replica
+ end
+ end
+
+ if Mix.env() == :test do
+ def replicas_aws, do: @replicas_aws
+
+ def replicas_fly, do: @replicas_fly
+ end
+end
diff --git a/lib/realtime/rpc.ex b/lib/realtime/rpc.ex
new file mode 100644
index 0000000..efd97be
--- /dev/null
+++ b/lib/realtime/rpc.ex
@@ -0,0 +1,79 @@
+defmodule Realtime.Rpc do
+ @moduledoc """
+  RPC module for Realtime, standardizing the RPC interface and collecting telemetry
+ """
+ import Realtime.Logs
+ alias Realtime.Telemetry
+
+ @doc """
+ Calls external node using :rpc.call/5 and collects telemetry
+ """
+ @spec call(atom(), atom(), atom(), any(), keyword()) :: any()
+ def call(node, mod, func, args, opts \\ []) do
+ timeout = Keyword.get(opts, :timeout, Application.get_env(:realtime, :rpc_timeout))
+ {latency, response} = :timer.tc(fn -> :rpc.call(node, mod, func, args, timeout) end)
+
+ Telemetry.execute(
+ [:realtime, :rpc],
+ %{latency: latency},
+ %{mod: mod, func: func, target_node: node, origin_node: node()}
+ )
+
+ response
+ end
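+
+  # Illustrative usage, mirroring the call site in Realtime.Latency above:
+  #
+  #   Rpc.call(node, Realtime.Latency, :pong, [0], timeout: 5_000)
+  #
+  # The measured latency is reported in microseconds on the [:realtime, :rpc]
+  # telemetry event.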
+
+ @doc """
+ Calls external node using :erpc.call/5 and collects telemetry
+ """
+ @spec enhanced_call(atom(), atom(), atom(), any(), keyword()) ::
+ {:ok, any()} | {:error, :rpc_error, term()} | {:error, term()}
+ def enhanced_call(node, mod, func, args \\ [], opts \\ []) do
+ timeout = Keyword.get(opts, :timeout, Application.get_env(:realtime, :rpc_timeout))
+
+ with {latency, response} <-
+ :timer.tc(fn -> :erpc.call(node, mod, func, args, timeout) end) do
+ case response do
+ {:ok, _} ->
+ Telemetry.execute(
+ [:realtime, :rpc],
+ %{latency: latency},
+ %{mod: mod, func: func, target_node: node, origin_node: node(), success: true}
+ )
+
+ response
+
+ {:error, error} ->
+ Telemetry.execute(
+ [:realtime, :rpc],
+ %{latency: latency},
+ %{mod: mod, func: func, target_node: node, origin_node: node(), success: false}
+ )
+
+ {:error, error}
+ end
+ end
+ catch
+ _, reason ->
+ reason =
+ case reason do
+ {_, reason} -> reason
+ {_, reason, _} -> reason
+ end
+
+ Telemetry.execute(
+ [:realtime, :rpc],
+ %{latency: 0},
+ %{mod: mod, func: func, target_node: node, origin_node: node(), success: false}
+ )
+
+ log_error(
+ "ErrorOnRpcCall",
+ %{target: node, mod: mod, func: func, error: reason},
+ mod: mod,
+ func: func,
+ target: node
+ )
+
+ {:error, :rpc_error, reason}
+ end
+end
diff --git a/lib/realtime/signal_handler.ex b/lib/realtime/signal_handler.ex
index c097f2f..46908cc 100644
--- a/lib/realtime/signal_handler.ex
+++ b/lib/realtime/signal_handler.ex
@@ -3,26 +3,28 @@ defmodule Realtime.SignalHandler do
@behaviour :gen_event
require Logger
- @spec shutdown_in_progress? :: boolean()
+ @spec shutdown_in_progress? :: :ok | {:error, :shutdown_in_progress}
def shutdown_in_progress? do
- !!Application.get_env(:realtime, :shutdown_in_progress)
+ case !!Application.get_env(:realtime, :shutdown_in_progress) do
+ true -> {:error, :shutdown_in_progress}
+ false -> :ok
+ end
end
@impl true
- def init(_) do
- Logger.info("#{__MODULE__} is being initialized...")
- {:ok, %{}}
+ def init({%{handler_mod: _} = args, :ok}) do
+ {:ok, args}
end
@impl true
- def handle_event(signal, state) do
- Logger.warn("#{__MODULE__}: #{inspect(signal)} received")
+ def handle_event(signal, %{handler_mod: handler_mod} = state) do
+ Logger.error("#{__MODULE__}: #{inspect(signal)} received")
if signal == :sigterm do
Application.put_env(:realtime, :shutdown_in_progress, true)
end
- :erl_signal_handler.handle_event(signal, state)
+ handler_mod.handle_event(signal, state)
end
@impl true
diff --git a/lib/realtime/syn_handler.ex b/lib/realtime/syn_handler.ex
new file mode 100644
index 0000000..09cd385
--- /dev/null
+++ b/lib/realtime/syn_handler.ex
@@ -0,0 +1,91 @@
+defmodule Realtime.SynHandler do
+ @moduledoc """
+ Custom defined Syn's callbacks
+ """
+ require Logger
+ alias RealtimeWeb.Endpoint
+
+ @doc """
+ When processes registered with :syn are unregistered, either manually or by stopping, this
+ callback is invoked.
+
+ Other processes can subscribe to these events via PubSub to respond to them.
+
+ We want to log conflict resolutions to know when more than one process on the cluster
+ was started, and subsequently stopped because :syn handled the conflict.
+ """
+ def on_process_unregistered(mod, name, _pid, _meta, reason) do
+ case reason do
+ :syn_conflict_resolution ->
+ Logger.warning("#{mod} terminated: #{inspect(name)} #{node()}")
+
+ _ ->
+ topic = topic(mod)
+ Endpoint.local_broadcast(topic <> ":" <> name, topic <> "_down", nil)
+ end
+
+ :ok
+ end
+
+ def resolve_registry_conflict(mod, name, {pid1, %{region: region}, time1}, {pid2, _, time2}) do
+ platform_region = Realtime.Nodes.platform_region_translator(region)
+
+ platform_region_nodes =
+ RegionNodes |> :syn.members(platform_region) |> Enum.map(fn {_, [node: node]} -> node end)
+
+ {keep, stop} =
+ [pid1, pid2]
+ |> Enum.filter(fn pid ->
+ Enum.member?(platform_region_nodes, node(pid))
+ end)
+ |> then(fn
+ [pid] ->
+ {pid, if(pid != pid1, do: pid1, else: pid2)}
+
+ _ ->
+ if time1 < time2 do
+ {pid1, pid2}
+ else
+ {pid2, pid1}
+ end
+ end)
+
+ if node() == node(stop) do
+ spawn(fn -> resolve_conflict(mod, stop, name) end)
+ else
+ Logger.warning("Resolving #{name} conflict, remote pid: #{inspect(stop)}")
+ end
+
+ keep
+ end
+
+ def resolve_registry_conflict(mod, name, {pid1, _, time1}, {pid2, _, time2}) do
+ resolve_registry_conflict(mod, name, {pid1, %{region: nil}, time1}, {pid2, %{region: nil}, time2})
+ end
+
+ defp resolve_conflict(mod, stop, name) do
+ resp =
+ if Process.alive?(stop) do
+ try do
+ DynamicSupervisor.stop(stop, :shutdown, 30_000)
+ catch
+ error, reason -> {:error, {error, reason}}
+ end
+ else
+ :not_alive
+ end
+
+ topic = topic(mod)
+ Endpoint.broadcast(topic <> ":" <> name, topic <> "_down", nil)
+
+ Logger.warning("Resolving #{name} conflict, stop local pid: #{inspect(stop)}, response: #{inspect(resp)}")
+ end
+
+ defp topic(mod) do
+ mod
+ |> Macro.underscore()
+ |> String.split("/")
+ |> Enum.take(-1)
+ |> hd()
+ end
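+
+  # e.g. topic(Realtime.Tenants.Connect) #=> "connect", so peers subscribe to
+  # "connect:<tenant_id>" and receive "connect_down" broadcasts.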
+end
diff --git a/lib/realtime/telemetry/logger.ex b/lib/realtime/telemetry/logger.ex
index 4c292eb..5bfcd00 100644
--- a/lib/realtime/telemetry/logger.ex
+++ b/lib/realtime/telemetry/logger.ex
@@ -14,17 +14,12 @@ defmodule Realtime.Telemetry.Logger do
[:realtime, :rate_counter, :channel, :db_events]
]
- def start_link(args \\ []) do
- GenServer.start_link(__MODULE__, args, name: __MODULE__)
+ def start_link(args) do
+ GenServer.start_link(__MODULE__, args)
end
- def init(_args) do
- :telemetry.attach_many(
- "telemetry-logger",
- @events,
- &__MODULE__.handle_event/4,
- []
- )
+ def init(handler_id: handler_id) do
+ :telemetry.attach_many(handler_id, @events, &__MODULE__.handle_event/4, [])
{:ok, []}
end
diff --git a/lib/realtime/telemetry/telemetry.ex b/lib/realtime/telemetry/telemetry.ex
index ea05465..6062e14 100644
--- a/lib/realtime/telemetry/telemetry.ex
+++ b/lib/realtime/telemetry/telemetry.ex
@@ -7,7 +7,7 @@ defmodule Realtime.Telemetry do
Dispatches Telemetry events.
"""
- @spec execute([atom, ...], number | map, map) :: :ok
+ @spec execute([atom, ...], map, map) :: :ok
def execute(event, measurements, metadata \\ %{}) do
:telemetry.execute(event, measurements, metadata)
end
diff --git a/lib/realtime/tenants.ex b/lib/realtime/tenants.ex
index 3d99582..7f6a510 100644
--- a/lib/realtime/tenants.ex
+++ b/lib/realtime/tenants.ex
@@ -5,23 +5,112 @@ defmodule Realtime.Tenants do
require Logger
- alias Realtime.Repo
alias Realtime.Api.Tenant
+ alias Realtime.Database
+ alias Realtime.Repo
+ alias Realtime.Repo.Replica
+ alias Realtime.Tenants.Cache
+ alias Realtime.Tenants.Connect
+ alias Realtime.Tenants.Migrations
+ alias Realtime.UsersCounter
@doc """
Gets a list of connected tenant `external_id` strings in the cluster or a node.
"""
-
- @spec list_connected_tenants :: [String.t()]
- def list_connected_tenants() do
- :syn.group_names(:users)
- end
-
@spec list_connected_tenants(atom()) :: [String.t()]
def list_connected_tenants(node) do
:syn.group_names(:users, node)
end
+ @doc """
+ Gets the database connection pid managed by the Tenants.Connect process.
+
+ ## Examples
+
+ iex> Realtime.Tenants.get_health_conn(%Realtime.Api.Tenant{external_id: "not_found_tenant"})
+ {:error, :tenant_database_connection_initializing}
+ """
+
+ @spec get_health_conn(Tenant.t()) :: {:error, term()} | {:ok, pid()}
+ def get_health_conn(%Tenant{external_id: external_id}) do
+ Connect.get_status(external_id)
+ end
+
+ @doc """
+  Checks if a tenant is healthy. A tenant is healthy if:
+  - it has no database connection and zero client connections
+  - it has a database connection and more than zero client connections
+
+  A tenant is unhealthy if it has client connections but no database connection.
+ """
+
+ @spec health_check(binary) ::
+ {:error,
+ :tenant_not_found
+ | String.t()
+ | %{
+ connected_cluster: pos_integer,
+ db_connected: false,
+ healthy: false,
+ region: String.t(),
+ node: String.t()
+ }}
+ | {:ok,
+ %{
+ connected_cluster: non_neg_integer,
+ db_connected: true,
+ healthy: true,
+ region: String.t(),
+ node: String.t()
+ }}
+ def health_check(external_id) when is_binary(external_id) do
+ region = Application.get_env(:realtime, :region)
+ node = Node.self() |> to_string()
+
+ with %Tenant{} = tenant <- Cache.get_tenant_by_external_id(external_id),
+ {:error, _} <- get_health_conn(tenant),
+ connected_cluster when connected_cluster > 0 <- UsersCounter.tenant_users(external_id) do
+ {:error,
+ %{
+ healthy: false,
+ db_connected: false,
+ connected_cluster: connected_cluster,
+ region: region,
+ node: node
+ }}
+ else
+ nil ->
+ {:error, :tenant_not_found}
+
+ {:ok, _health_conn} ->
+ connected_cluster = UsersCounter.tenant_users(external_id)
+
+ {:ok,
+ %{
+ healthy: true,
+ db_connected: true,
+ connected_cluster: connected_cluster,
+ region: region,
+ node: node
+ }}
+
+ connected_cluster when is_integer(connected_cluster) ->
+ tenant = Cache.get_tenant_by_external_id(external_id)
+ {:ok, db_conn} = Database.connect(tenant, "realtime_health_check")
+ Process.alive?(db_conn) && GenServer.stop(db_conn)
+ Migrations.run_migrations(tenant)
+
+ {:ok,
+ %{
+ healthy: true,
+ db_connected: false,
+ connected_cluster: connected_cluster,
+ region: region,
+ node: node
+ }}
+ end
+ end
+
@doc """
All the keys that we use to create counters and RateLimiters for tenants.
"""
@@ -32,7 +121,9 @@ defmodule Realtime.Tenants do
requests_per_second_key(tenant),
channels_per_client_key(tenant),
joins_per_second_key(tenant),
- events_per_second_key(tenant)
+ events_per_second_key(tenant),
+ connection_attempts_per_second_key(tenant),
+ presence_events_per_second_key(tenant)
]
end
@@ -73,8 +164,12 @@ defmodule Realtime.Tenants do
@doc """
The GenCounter key to use when counting events for RealtimeChannel events.
+ ## Examples
+ iex> Realtime.Tenants.events_per_second_key("tenant_id")
+ {:channel, :events, "tenant_id"}
+ iex> Realtime.Tenants.events_per_second_key(%Realtime.Api.Tenant{external_id: "tenant_id"})
+ {:channel, :events, "tenant_id"}
"""
-
@spec events_per_second_key(Tenant.t() | String.t()) :: {:channel, :events, String.t()}
def events_per_second_key(tenant) when is_binary(tenant) do
{:channel, :events, tenant}
@@ -86,8 +181,11 @@ defmodule Realtime.Tenants do
@doc """
The GenCounter key to use when counting events for RealtimeChannel events.
+ iex> Realtime.Tenants.db_events_per_second_key("tenant_id")
+ {:channel, :db_events, "tenant_id"}
+ iex> Realtime.Tenants.db_events_per_second_key(%Realtime.Api.Tenant{external_id: "tenant_id"})
+ {:channel, :db_events, "tenant_id"}
"""
-
@spec db_events_per_second_key(Tenant.t() | String.t()) :: {:channel, :db_events, String.t()}
def db_events_per_second_key(tenant) when is_binary(tenant) do
{:channel, :db_events, tenant}
@@ -97,6 +195,40 @@ defmodule Realtime.Tenants do
{:channel, :db_events, tenant.external_id}
end
+ @doc """
+ The GenCounter key to use when counting presence events for RealtimeChannel events.
+ ## Examples
+ iex> Realtime.Tenants.presence_events_per_second_key("tenant_id")
+ {:channel, :presence_events, "tenant_id"}
+ iex> Realtime.Tenants.presence_events_per_second_key(%Realtime.Api.Tenant{external_id: "tenant_id"})
+ {:channel, :presence_events, "tenant_id"}
+ """
+ @spec presence_events_per_second_key(Tenant.t() | String.t()) :: {:channel, :presence_events, String.t()}
+ def presence_events_per_second_key(tenant) when is_binary(tenant) do
+ {:channel, :presence_events, tenant}
+ end
+
+ def presence_events_per_second_key(%Tenant{} = tenant) do
+ {:channel, :presence_events, tenant.external_id}
+ end
+
+ @doc """
+ The GenCounter key to use when counting connection attempts against Realtime.Tenants.Connect
+ ## Examples
+ iex> Realtime.Tenants.connection_attempts_per_second_key("tenant_id")
+ {:tenant, :connection_attempts, "tenant_id"}
+ iex> Realtime.Tenants.connection_attempts_per_second_key(%Realtime.Api.Tenant{external_id: "tenant_id"})
+ {:tenant, :connection_attempts, "tenant_id"}
+ """
+ @spec connection_attempts_per_second_key(Tenant.t() | String.t()) :: {:tenant, :connection_attempts, String.t()}
+ def connection_attempts_per_second_key(tenant) when is_binary(tenant) do
+ {:tenant, :connection_attempts, tenant}
+ end
+
+ def connection_attempts_per_second_key(%Tenant{} = tenant) do
+ {:tenant, :connection_attempts, tenant.external_id}
+ end
+
@spec get_tenant_limits(Realtime.Api.Tenant.t(), maybe_improper_list) :: list
def get_tenant_limits(%Tenant{} = tenant, keys) when is_list(keys) do
nodes = [Node.self() | Node.list()]
@@ -122,10 +254,86 @@ defmodule Realtime.Tenants do
@spec get_tenant_by_external_id(String.t()) :: Tenant.t() | nil
def get_tenant_by_external_id(external_id) do
- repo_replica = Repo.replica()
+ repo_replica = Replica.replica()
Tenant
|> repo_replica.get_by(external_id: external_id)
|> repo_replica.preload(:extensions)
end
+
+ @doc """
+ Builds a PubSub topic from a tenant and a sub-topic.
+ ## Examples
+
+ iex> Realtime.Tenants.tenant_topic(%Realtime.Api.Tenant{external_id: "tenant_id"}, "sub_topic")
+ "tenant_id:sub_topic"
+ iex> Realtime.Tenants.tenant_topic("tenant_id", "sub_topic")
+ "tenant_id:sub_topic"
+ iex> Realtime.Tenants.tenant_topic(%Realtime.Api.Tenant{external_id: "tenant_id"}, "sub_topic", false)
+ "tenant_id-private:sub_topic"
+ iex> Realtime.Tenants.tenant_topic("tenant_id", "sub_topic", false)
+ "tenant_id-private:sub_topic"
+ iex> Realtime.Tenants.tenant_topic("tenant_id", ":sub_topic", false)
+ "tenant_id-private::sub_topic"
+ """
+ @spec tenant_topic(Tenant.t() | binary(), String.t(), boolean()) :: String.t()
+ def tenant_topic(external_id, sub_topic, public? \\ true)
+
+ def tenant_topic(%Tenant{external_id: external_id}, sub_topic, public?),
+ do: tenant_topic(external_id, sub_topic, public?)
+
+ def tenant_topic(external_id, sub_topic, false),
+ do: "#{external_id}-private:#{sub_topic}"
+
+ def tenant_topic(external_id, sub_topic, true),
+ do: "#{external_id}:#{sub_topic}"
+
+ @doc """
+  Sets the tenant as suspended. New connections won't be accepted.
+  """
+  @spec suspend_tenant_by_external_id(String.t()) :: Tenant.t()
+ def suspend_tenant_by_external_id(external_id) do
+ external_id
+ |> Cache.get_tenant_by_external_id()
+ |> Tenant.changeset(%{suspend: true})
+ |> Repo.update!()
+ |> tap(fn _ -> broadcast_operation_event(:suspend_tenant, external_id) end)
+ |> tap(fn _ -> Cache.distributed_invalidate_tenant_cache(external_id) end)
+ end
+
+ @doc """
+  Sets the tenant as unsuspended. New connections will be accepted.
+  """
+  @spec unsuspend_tenant_by_external_id(String.t()) :: Tenant.t()
+ def unsuspend_tenant_by_external_id(external_id) do
+ external_id
+ |> Cache.get_tenant_by_external_id()
+ |> Tenant.changeset(%{suspend: false})
+ |> Repo.update!()
+ |> tap(fn _ -> broadcast_operation_event(:unsuspend_tenant, external_id) end)
+ |> tap(fn _ -> Cache.distributed_invalidate_tenant_cache(external_id) end)
+ end
+
+ @doc """
+ Checks if migrations for a given tenant need to run.
+ """
+ @spec run_migrations?(Tenant.t()) :: boolean()
+ def run_migrations?(%Tenant{} = tenant) do
+ tenant.migrations_ran < Enum.count(Migrations.migrations())
+ end
+
+ @doc """
+ Updates the migrations_ran field for a tenant.
+ """
+  @spec update_migrations_ran(binary(), integer()) :: Tenant.t()
+ def update_migrations_ran(external_id, count) do
+ external_id
+ |> Cache.get_tenant_by_external_id()
+ |> Tenant.changeset(%{migrations_ran: count})
+ |> Repo.update!()
+ |> tap(fn _ -> Cache.distributed_invalidate_tenant_cache(external_id) end)
+ end
+
+ defp broadcast_operation_event(action, external_id),
+ do: Phoenix.PubSub.broadcast!(Realtime.PubSub, "realtime:operations:" <> external_id, action)
end
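To make the new health-check contract concrete, here is a sketch of how a caller might consume `health_check/1`; the `"realtime-dev"` external id is just the default local tenant, and the match clauses mirror the spec above:

```elixir
case Realtime.Tenants.health_check("realtime-dev") do
  {:ok, %{healthy: true, db_connected: db?, connected_cluster: n}} ->
    # Healthy: either no clients needed a db connection, or clients are
    # connected and the db connection is up.
    IO.puts("healthy (db_connected=#{db?}, #{n} connected clients)")

  {:error, :tenant_not_found} ->
    IO.puts("unknown tenant")

  {:error, %{healthy: false, connected_cluster: n}} ->
    # Unhealthy: clients are connected but no database connection exists.
    IO.puts("#{n} clients connected without a database connection")
end
```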
diff --git a/lib/realtime/tenants/authorization.ex b/lib/realtime/tenants/authorization.ex
new file mode 100644
index 0000000..25c1b0f
--- /dev/null
+++ b/lib/realtime/tenants/authorization.ex
@@ -0,0 +1,279 @@
+defmodule Realtime.Tenants.Authorization do
+ @moduledoc """
+  Runs validations based on RLS policies for a given connection and creates a
+  Realtime.Tenants.Policies struct with the accumulated results of the policies
+  for a given user and a given channel context.
+
+  Each extension has its own way of checking Policies against the Authorization context, but we create some shared setup data to be used by the policies.
+
+  See Realtime.Tenants.Authorization.Policies for more information.
+ """
+ require Logger
+ import Ecto.Query
+
+ alias Phoenix.Socket
+ alias Plug.Conn
+  alias Realtime.Api.Message
+ alias Realtime.Database
+ alias Realtime.Repo
+ alias Realtime.Tenants.Authorization.Policies
+ alias DBConnection.ConnectionError
+ defstruct [:tenant_id, :topic, :headers, :jwt, :claims, :role]
+
+ @type t :: %__MODULE__{
+ :tenant_id => binary() | nil,
+ :topic => binary() | nil,
+ :claims => map(),
+ :headers => keyword({binary(), binary()}),
+ :jwt => map(),
+ :role => binary()
+ }
+
+ @doc """
+ Builds a new authorization struct which will be used to retain the information required to check Policies.
+
+ Requires a map with the following keys:
+  * topic: The name of the channel being accessed, taken from the request
+  * headers: Request headers from when the connection was made or the WebSocket was upgraded
+ * jwt: JWT String
+ * claims: JWT claims
+ * role: JWT role
+ """
+ @spec build_authorization_params(map()) :: t()
+ def build_authorization_params(map) do
+ %__MODULE__{
+ tenant_id: Map.get(map, :tenant_id),
+ topic: Map.get(map, :topic),
+ headers: Map.get(map, :headers),
+ jwt: Map.get(map, :jwt),
+ claims: Map.get(map, :claims),
+ role: Map.get(map, :role)
+ }
+ end
+
+ @doc """
+  Runs validations based on RLS policies to set read policies for a given connection (either Phoenix.Socket or Plug.Conn).
+ """
+ @spec get_read_authorizations(Socket.t() | Conn.t(), pid(), __MODULE__.t()) ::
+ {:ok, Socket.t() | Conn.t()} | {:error, any()} | {:error, :rls_policy_error, any()}
+
+ def get_read_authorizations(%Socket{} = socket, db_conn, authorization_context) do
+ policies = Map.get(socket.assigns, :policies) || %Policies{}
+
+ case get_read_policies_for_connection(db_conn, authorization_context, policies) do
+ {:ok, %Policies{} = policies} -> {:ok, Socket.assign(socket, :policies, policies)}
+ {:ok, {:error, %Postgrex.Error{} = error}} -> {:error, :rls_policy_error, error}
+ {:error, %ConnectionError{reason: :queue_timeout}} -> {:error, :increase_connection_pool}
+ {:error, error} -> {:error, error}
+ end
+ end
+
+ def get_read_authorizations(%Conn{} = conn, db_conn, authorization_context) do
+ policies = Map.get(conn.assigns, :policies) || %Policies{}
+
+ case get_read_policies_for_connection(db_conn, authorization_context, policies) do
+ {:ok, %Policies{} = policies} -> {:ok, Conn.assign(conn, :policies, policies)}
+ {:ok, {:error, %Postgrex.Error{} = error}} -> {:error, :rls_policy_error, error}
+ {:error, %ConnectionError{reason: :queue_timeout}} -> {:error, :increase_connection_pool}
+ {:error, error} -> {:error, error}
+ end
+ end
+
+ @doc """
+  Runs validations based on RLS policies to set write policies for a given connection (either Phoenix.Socket or Plug.Conn).
+ """
+ @spec get_write_authorizations(Socket.t() | Conn.t() | pid(), pid(), __MODULE__.t()) ::
+ {:ok, Socket.t() | Conn.t() | Policies.t()}
+ | {:error, any()}
+ | {:error, :rls_policy_error, any()}
+
+ def get_write_authorizations(
+ %Socket{} = socket,
+ db_conn,
+ authorization_context
+ ) do
+ policies = Map.get(socket.assigns, :policies) || %Policies{}
+
+ case get_write_policies_for_connection(db_conn, authorization_context, policies) do
+ {:ok, %Policies{} = policies} -> {:ok, Socket.assign(socket, :policies, policies)}
+ {:ok, {:error, %Postgrex.Error{} = error}} -> {:error, :rls_policy_error, error}
+ {:error, %ConnectionError{reason: :queue_timeout}} -> {:error, :increase_connection_pool}
+ {:error, error} -> {:error, error}
+ end
+ end
+
+ def get_write_authorizations(%Conn{} = conn, db_conn, authorization_context) do
+ policies = Map.get(conn.assigns, :policies) || %Policies{}
+
+ case get_write_policies_for_connection(db_conn, authorization_context, policies) do
+ {:ok, %Policies{} = policies} -> {:ok, Conn.assign(conn, :policies, policies)}
+ {:ok, {:error, %Postgrex.Error{} = error}} -> {:error, :rls_policy_error, error}
+ {:error, %ConnectionError{reason: :queue_timeout}} -> {:error, :increase_connection_pool}
+ {:error, error} -> {:error, error}
+ end
+ end
+
+ def get_write_authorizations(db_conn, db_conn, authorization_context) when is_pid(db_conn) do
+ case get_write_policies_for_connection(db_conn, authorization_context, %Policies{}) do
+ {:ok, %Policies{} = policies} -> {:ok, policies}
+ {:ok, {:error, %Postgrex.Error{} = error}} -> {:error, :rls_policy_error, error}
+ {:error, %ConnectionError{reason: :queue_timeout}} -> {:error, :increase_connection_pool}
+ {:error, error} -> {:error, error}
+ end
+ end
+
+ @doc """
+  Sets the current connection configuration with the following config values:
+  * role: The role of the user
+  * realtime.topic: The name of the channel being accessed
+  * request.jwt.claims: The claims of the JWT token, including the role and sub claims
+  * request.headers: The headers of the request
+ """
+ @spec set_conn_config(DBConnection.t(), t()) ::
+ {:ok, Postgrex.Result.t()} | {:error, Exception.t()}
+ def set_conn_config(conn, authorization_context) do
+ %__MODULE__{
+ topic: topic,
+ headers: headers,
+ claims: claims,
+ role: role
+ } = authorization_context
+
+ claims = Jason.encode!(claims)
+ headers = headers |> Map.new() |> Jason.encode!()
+
+ Postgrex.query(
+ conn,
+ """
+ SELECT
+ set_config('role', $1, true),
+ set_config('realtime.topic', $2, true),
+ set_config('request.jwt.claims', $3, true),
+ set_config('request.headers', $4, true)
+ """,
+ [role, topic, claims, headers]
+ )
+ end
+
+ defp get_read_policies_for_connection(conn, authorization_context, policies) do
+ opts = [telemetry: [:realtime, :tenants, :read_authorization_check], tenant_id: authorization_context.tenant_id]
+
+ Database.transaction(
+ conn,
+ fn transaction_conn ->
+ messages = [
+ Message.changeset(%Message{}, %{
+ topic: authorization_context.topic,
+ extension: :broadcast
+ }),
+ Message.changeset(%Message{}, %{
+ topic: authorization_context.topic,
+ extension: :presence
+ })
+ ]
+
+ {:ok, messages} = Repo.insert_all_entries(transaction_conn, messages, Message)
+
+ {[%{id: broadcast_id}], [%{id: presence_id}]} =
+ Enum.split_with(messages, &(&1.extension == :broadcast))
+
+ set_conn_config(transaction_conn, authorization_context)
+
+ policies =
+ get_read_policy_for_connection_and_extension(
+ transaction_conn,
+ authorization_context,
+ broadcast_id,
+ presence_id,
+ policies
+ )
+
+ Postgrex.query!(transaction_conn, "ROLLBACK AND CHAIN", [])
+ policies
+ end,
+ opts
+ )
+ end
+
+ defp get_write_policies_for_connection(conn, authorization_context, policies) do
+ opts = [telemetry: [:realtime, :tenants, :write_authorization_check], tenant_id: authorization_context.tenant_id]
+
+ Database.transaction(
+ conn,
+ fn transaction_conn ->
+ set_conn_config(transaction_conn, authorization_context)
+
+ policies =
+ get_write_policy_for_connection_and_extension(
+ transaction_conn,
+ authorization_context,
+ policies
+ )
+
+ Postgrex.query!(transaction_conn, "ROLLBACK AND CHAIN", [])
+
+ policies
+ end,
+ opts
+ )
+ end
+
+ defp get_read_policy_for_connection_and_extension(
+ conn,
+ authorization_context,
+ broadcast_id,
+ presence_id,
+ policies
+ ) do
+ query =
+ from(m in Message,
+ where: [topic: ^authorization_context.topic],
+ where: [extension: :broadcast, id: ^broadcast_id],
+ or_where: [extension: :presence, id: ^presence_id]
+ )
+
+ with {:ok, res} <- Repo.all(conn, query, Message) do
+ can_presence? = Enum.any?(res, fn %{id: id} -> id == presence_id end)
+ can_broadcast? = Enum.any?(res, fn %{id: id} -> id == broadcast_id end)
+
+ policies
+ |> Policies.update_policies(:presence, :read, can_presence?)
+ |> Policies.update_policies(:broadcast, :read, can_broadcast?)
+ end
+ end
+
+ defp get_write_policy_for_connection_and_extension(
+ conn,
+ authorization_context,
+ policies
+ ) do
+ broadcast_changeset =
+ Message.changeset(%Message{}, %{topic: authorization_context.topic, extension: :broadcast})
+
+ presence_changeset =
+ Message.changeset(%Message{}, %{topic: authorization_context.topic, extension: :presence})
+
+ policies =
+ case Repo.insert(conn, broadcast_changeset, Message, mode: :savepoint) do
+ {:ok, _} ->
+ Policies.update_policies(policies, :broadcast, :write, true)
+
+ {:error, %Postgrex.Error{postgres: %{code: :insufficient_privilege}}} ->
+ Policies.update_policies(policies, :broadcast, :write, false)
+
+ e ->
+ e
+ end
+
+ case Repo.insert(conn, presence_changeset, Message, mode: :savepoint) do
+ {:ok, _} ->
+ Policies.update_policies(policies, :presence, :write, true)
+
+ {:error, %Postgrex.Error{postgres: %{code: :insufficient_privilege}}} ->
+ Policies.update_policies(policies, :presence, :write, false)
+
+ e ->
+ e
+ end
+ end
+end
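A sketch of how the new module is exercised end to end; the topic, claims, and header values are placeholders, and the database connection is assumed to come from `Realtime.Tenants.Connect`:

```elixir
alias Realtime.Tenants.Authorization

# Placeholder request context; in production these come from the socket
# or Plug.Conn assigns.
params =
  Authorization.build_authorization_params(%{
    tenant_id: "tenant_id",
    topic: "room:1",
    headers: [{"x-forwarded-for", "127.0.0.1"}],
    jwt: "<jwt>",
    claims: %{"role" => "authenticated", "sub" => "user-uuid"},
    role: "authenticated"
  })

{:ok, db_conn} = Realtime.Tenants.Connect.lookup_or_start_connection("tenant_id")

# The pid/pid clause runs the write checks inside a rolled-back transaction
# and returns the accumulated Policies struct.
{:ok, policies} = Authorization.get_write_authorizations(db_conn, db_conn, params)
```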
diff --git a/lib/realtime/tenants/authorization/policies.ex b/lib/realtime/tenants/authorization/policies.ex
new file mode 100644
index 0000000..57a4df4
--- /dev/null
+++ b/lib/realtime/tenants/authorization/policies.ex
@@ -0,0 +1,28 @@
+defmodule Realtime.Tenants.Authorization.Policies do
+ @moduledoc """
+ Policies structure that holds the required authorization information for a given connection.
+
+ Currently there are two types of policies:
+  * Realtime.Tenants.Authorization.Policies.BroadcastPolicies - Used to store access to the Broadcast feature on a given Topic
+  * Realtime.Tenants.Authorization.Policies.PresencePolicies - Used to store access to the Presence feature on a given Topic
+ """
+
+ alias Realtime.Tenants.Authorization.Policies.BroadcastPolicies
+ alias Realtime.Tenants.Authorization.Policies.PresencePolicies
+
+ defstruct broadcast: %BroadcastPolicies{},
+ presence: %PresencePolicies{}
+
+ @type t :: %__MODULE__{
+ broadcast: BroadcastPolicies.t(),
+ presence: PresencePolicies.t()
+ }
+
+ @doc """
+ Updates the Policies struct sub key with the given value.
+ """
+ @spec update_policies(t(), atom, atom, boolean) :: t()
+ def update_policies(policies, key, sub_key, value) do
+ Map.update!(policies, key, fn map -> Map.put(map, sub_key, value) end)
+ end
+end
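For reference, `update_policies/4` just drills into the nested struct, so the extensions can accumulate their results independently, e.g.:

```elixir
alias Realtime.Tenants.Authorization.Policies

# Each call updates one sub-key of one extension's policy struct.
%Policies{broadcast: %{read: true}, presence: %{write: false}} =
  %Policies{}
  |> Policies.update_policies(:broadcast, :read, true)
  |> Policies.update_policies(:presence, :write, false)
```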
diff --git a/lib/realtime/tenants/authorization/policies/broadcast_policies.ex b/lib/realtime/tenants/authorization/policies/broadcast_policies.ex
new file mode 100644
index 0000000..80d1724
--- /dev/null
+++ b/lib/realtime/tenants/authorization/policies/broadcast_policies.ex
@@ -0,0 +1,13 @@
+defmodule Realtime.Tenants.Authorization.Policies.BroadcastPolicies do
+ @moduledoc """
+  BroadcastPolicies structure that holds the required authorization information for a given connection within the scope of sending / receiving broadcast messages.
+ """
+ require Logger
+
+ defstruct read: nil, write: nil
+
+ @type t :: %__MODULE__{
+ read: boolean() | nil,
+ write: boolean() | nil
+ }
+end
diff --git a/lib/realtime/tenants/authorization/policies/presence_policies.ex b/lib/realtime/tenants/authorization/policies/presence_policies.ex
new file mode 100644
index 0000000..228d45f
--- /dev/null
+++ b/lib/realtime/tenants/authorization/policies/presence_policies.ex
@@ -0,0 +1,13 @@
+defmodule Realtime.Tenants.Authorization.Policies.PresencePolicies do
+ @moduledoc """
+  PresencePolicies structure that holds the required authorization information for a given connection within the scope of tracking / receiving presence messages.
+ """
+ require Logger
+
+ defstruct read: nil, write: nil
+
+ @type t :: %__MODULE__{
+ read: boolean() | nil,
+ write: boolean() | nil
+ }
+end
diff --git a/lib/realtime/tenants/batch_broadcast.ex b/lib/realtime/tenants/batch_broadcast.ex
new file mode 100644
index 0000000..1741d39
--- /dev/null
+++ b/lib/realtime/tenants/batch_broadcast.ex
@@ -0,0 +1,146 @@
+defmodule Realtime.Tenants.BatchBroadcast do
+ @moduledoc """
+ Virtual schema with a representation of a batched broadcast.
+ """
+ use Ecto.Schema
+ import Ecto.Changeset
+
+ alias Realtime.Api.Tenant
+ alias Realtime.GenCounter
+ alias Realtime.RateCounter
+ alias Realtime.Tenants
+ alias Realtime.Tenants.Authorization
+ alias Realtime.Tenants.Authorization.Policies
+ alias Realtime.Tenants.Authorization.Policies.BroadcastPolicies
+ alias Realtime.Tenants.Connect
+
+ alias RealtimeWeb.Endpoint
+
+ embedded_schema do
+ embeds_many :messages, Message do
+ field :event, :string
+ field :topic, :string
+ field :payload, :map
+ field :private, :boolean, default: false
+ end
+ end
+
+ def broadcast(auth_params, tenant, messages, super_user \\ false)
+
+ def broadcast(%Plug.Conn{} = conn, %Tenant{} = tenant, messages, super_user) do
+ auth_params = %{
+ tenant_id: tenant.external_id,
+ headers: conn.req_headers,
+ jwt: conn.assigns.jwt,
+ claims: conn.assigns.claims,
+ role: conn.assigns.role
+ }
+
+ broadcast(auth_params, %Tenant{} = tenant, messages, super_user)
+ end
+
+ def broadcast(auth_params, %Tenant{} = tenant, messages, super_user) do
+ with %Ecto.Changeset{valid?: true} = changeset <- changeset(%__MODULE__{}, messages),
+ %Ecto.Changeset{changes: %{messages: messages}} = changeset,
+ events_per_second_key = Tenants.events_per_second_key(tenant),
+ :ok <- check_rate_limit(events_per_second_key, tenant, length(messages)) do
+ events =
+ messages
+ |> Enum.map(fn %{changes: event} -> event end)
+ |> Enum.group_by(fn event -> Map.get(event, :private, false) end)
+
+ # Handle events for public channel
+ events
+ |> Map.get(false, [])
+ |> Enum.each(fn %{topic: sub_topic, payload: payload, event: event} ->
+ send_message_and_count(tenant, sub_topic, event, payload, true)
+ end)
+
+ tenant_db_conn = Connect.lookup_or_start_connection(tenant.external_id)
+
+ # Handle events for private channel
+ events
+ |> Map.get(true, [])
+ |> Enum.group_by(fn event -> Map.get(event, :topic) end)
+ |> Enum.each(fn {topic, events} ->
+ if super_user do
+ Enum.each(events, fn %{topic: sub_topic, payload: payload, event: event} ->
+ send_message_and_count(tenant, sub_topic, event, payload, false)
+ end)
+ else
+ case permissions_for_message(auth_params, tenant_db_conn, topic) do
+ %Policies{broadcast: %BroadcastPolicies{write: true}} ->
+ Enum.each(events, fn %{topic: sub_topic, payload: payload, event: event} ->
+ send_message_and_count(tenant, sub_topic, event, payload, false)
+ end)
+
+ _ ->
+ nil
+ end
+ end
+ end)
+
+ :ok
+ end
+ end
+
+ def broadcast(_, nil, _, _), do: {:error, :tenant_not_found}
+
+ def changeset(payload, attrs) do
+ payload
+ |> cast(attrs, [])
+ |> cast_embed(:messages, required: true, with: &message_changeset/2)
+ end
+
+ def message_changeset(message, attrs) do
+ message
+ |> cast(attrs, [:topic, :payload, :event, :private])
+ |> maybe_put_private_change()
+ |> validate_required([:topic, :payload, :event])
+ end
+
+ defp maybe_put_private_change(changeset) do
+ case get_change(changeset, :private) do
+ nil -> put_change(changeset, :private, false)
+ _ -> changeset
+ end
+ end
+
+ defp send_message_and_count(tenant, topic, event, payload, public?) do
+ events_per_second_key = Tenants.events_per_second_key(tenant)
+ tenant_topic = Tenants.tenant_topic(tenant, topic, public?)
+ payload = %{"payload" => payload, "event" => event, "type" => "broadcast"}
+
+ GenCounter.add(events_per_second_key)
+ Endpoint.broadcast_from(self(), tenant_topic, "broadcast", payload)
+ end
+
+ defp permissions_for_message(_, {:error, _}, _), do: nil
+ defp permissions_for_message(nil, _, _), do: nil
+
+ defp permissions_for_message(auth_params, {:ok, db_conn}, topic) do
+ auth_params = auth_params |> Map.put(:topic, topic) |> Authorization.build_authorization_params()
+
+ case Authorization.get_write_authorizations(db_conn, db_conn, auth_params) do
+ {:ok, policies} -> policies
+ {:error, :not_found} -> nil
+ error -> error
+ end
+ end
+
+ defp check_rate_limit(events_per_second_key, %Tenant{} = tenant, total_messages_to_broadcast) do
+ %{max_events_per_second: max_events_per_second} = tenant
+ {:ok, %{avg: events_per_second}} = RateCounter.get(events_per_second_key)
+
+ cond do
+ events_per_second > max_events_per_second ->
+ {:error, :too_many_requests, "You have exceeded your rate limit"}
+
+ total_messages_to_broadcast + events_per_second > max_events_per_second ->
+ {:error, :too_many_requests, "Too many messages to broadcast, please reduce the batch size"}
+
+ true ->
+ :ok
+ end
+ end
+end
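The embedded schema accepts a plain map of messages; a sketch of the accepted shape, with placeholder topics and payloads, validated through the public `changeset/2`:

```elixir
attrs = %{
  messages: [
    %{topic: "room:1", event: "new-message", payload: %{"body" => "hello"}},
    %{topic: "room:1", event: "cursor-pos", payload: %{"x" => 10}, private: true}
  ]
}

# A missing :private key defaults to false via maybe_put_private_change/1.
%Ecto.Changeset{valid?: true} =
  Realtime.Tenants.BatchBroadcast.changeset(%Realtime.Tenants.BatchBroadcast{}, attrs)
```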
diff --git a/lib/realtime/tenants/cache.ex b/lib/realtime/tenants/cache.ex
index 6b39c00..3557428 100644
--- a/lib/realtime/tenants/cache.ex
+++ b/lib/realtime/tenants/cache.ex
@@ -2,21 +2,34 @@ defmodule Realtime.Tenants.Cache do
@moduledoc """
Cache for Tenants.
"""
-
require Cachex.Spec
alias Realtime.Tenants
-
+ @expiration :timer.seconds(30)
def child_spec(_) do
%{
id: __MODULE__,
- start:
- {Cachex, :start_link, [__MODULE__, [expiration: Cachex.Spec.expiration(default: 30_000)]]}
+ start: {Cachex, :start_link, [__MODULE__, [expiration: Cachex.Spec.expiration(default: @expiration)]]}
}
end
def get_tenant_by_external_id(keyword), do: apply_repo_fun(__ENV__.function, [keyword])
+ @doc """
+ Invalidates the cache for a tenant in the local node
+ """
+ def invalidate_tenant_cache(tenant_id) do
+ Cachex.del(__MODULE__, {{:get_tenant_by_external_id, 1}, [tenant_id]})
+ end
+
+ @doc """
+ Broadcasts a message to invalidate the tenant cache to all connected nodes
+ """
+ @spec distributed_invalidate_tenant_cache(String.t()) :: :ok
+ def distributed_invalidate_tenant_cache(tenant_id) when is_binary(tenant_id) do
+ Phoenix.PubSub.broadcast!(Realtime.PubSub, "realtime:invalidate_cache", tenant_id)
+ end
+
defp apply_repo_fun(arg1, arg2) do
Realtime.ContextCache.apply_fun(Tenants, arg1, arg2)
end
diff --git a/lib/realtime/tenants/cache_pub_sub_handler.ex b/lib/realtime/tenants/cache_pub_sub_handler.ex
new file mode 100644
index 0000000..4086d4f
--- /dev/null
+++ b/lib/realtime/tenants/cache_pub_sub_handler.ex
@@ -0,0 +1,27 @@
+defmodule Realtime.Tenants.CachePubSubHandler do
+ @moduledoc """
+ Process that listens to PubSub messages and triggers tenant cache invalidation.
+ """
+ use GenServer
+
+ require Logger
+
+ alias Realtime.Tenants.Cache
+
+ def start_link(opts) do
+ GenServer.start_link(__MODULE__, opts, name: __MODULE__)
+ end
+
+ @impl true
+ def init(_) do
+ Phoenix.PubSub.subscribe(Realtime.PubSub, "realtime:invalidate_cache")
+ {:ok, []}
+ end
+
+ @impl true
+ def handle_info(tenant_id, state) do
+ Logger.warning("Triggering cache invalidation", external_id: tenant_id)
+ Cache.invalidate_tenant_cache(tenant_id)
+ {:noreply, state}
+ end
+end
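Taken together with the cache changes above, the invalidation flow looks roughly like this (the tenant id is illustrative):

```elixir
# On the node that mutated the tenant: fan the id out on the shared topic.
Realtime.Tenants.Cache.distributed_invalidate_tenant_cache("tenant_id")

# Every node runs CachePubSubHandler, whose handle_info/2 receives the bare
# tenant id and drops the local Cachex entry:
Realtime.Tenants.Cache.invalidate_tenant_cache("tenant_id")
```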
diff --git a/lib/realtime/tenants/cache_supervisor.ex b/lib/realtime/tenants/cache_supervisor.ex
new file mode 100644
index 0000000..7d190b7
--- /dev/null
+++ b/lib/realtime/tenants/cache_supervisor.ex
@@ -0,0 +1,19 @@
+defmodule Realtime.Tenants.CacheSupervisor do
+ @moduledoc """
+ Supervisor for Tenants Cache and Operational processes
+ """
+ use Supervisor
+
+ alias Realtime.Tenants.Cache
+ alias Realtime.Tenants.CachePubSubHandler
+
+ def start_link(init_arg) do
+ Supervisor.start_link(__MODULE__, init_arg, name: __MODULE__)
+ end
+
+ @impl true
+ def init(_init_arg) do
+ children = [CachePubSubHandler, Cache]
+ Supervisor.init(children, strategy: :one_for_one)
+ end
+end
diff --git a/lib/realtime/tenants/connect.ex b/lib/realtime/tenants/connect.ex
new file mode 100644
index 0000000..7559c0f
--- /dev/null
+++ b/lib/realtime/tenants/connect.ex
@@ -0,0 +1,411 @@
+defmodule Realtime.Tenants.Connect do
+ @moduledoc """
+  This module is responsible for attempting to connect to a tenant's database and storing the DBConnection in a Syn registry.
+
+ ## Options
+ * `:check_connected_user_interval` - The interval in milliseconds to check if there are any connected users to a tenant channel. If there are no connected users, the connection will be stopped.
+ * `:erpc_timeout` - The timeout in milliseconds for the `:erpc` calls to the tenant's database.
+ """
+ use GenServer, restart: :transient
+
+ require Logger
+
+ import Realtime.Logs
+
+ alias Realtime.Api.Tenant
+ alias Realtime.Rpc
+ alias Realtime.Tenants
+ alias Realtime.Tenants.Connect.Backoff
+ alias Realtime.Tenants.Connect.CheckConnection
+ alias Realtime.Tenants.Connect.GetTenant
+ alias Realtime.Tenants.Connect.Piper
+ alias Realtime.Tenants.Connect.RegisterProcess
+ alias Realtime.Tenants.Connect.StartCounters
+ alias Realtime.Tenants.Listen
+ alias Realtime.Tenants.Migrations
+ alias Realtime.Tenants.ReplicationConnection
+ alias Realtime.UsersCounter
+
+ @rpc_timeout_default 30_000
+ @check_connected_user_interval_default 50_000
+ @connected_users_bucket_shutdown [0, 0, 0, 0, 0, 0]
+
+ defstruct tenant_id: nil,
+ db_conn_reference: nil,
+ db_conn_pid: nil,
+ replication_connection_pid: nil,
+ replication_connection_reference: nil,
+ listen_pid: nil,
+ listen_reference: nil,
+ check_connected_user_interval: nil,
+ connected_users_bucket: [1]
+
+ @doc "Check if Connect has finished setting up connections"
+ def ready?(tenant_id) do
+ case whereis(tenant_id) do
+ pid when is_pid(pid) ->
+ GenServer.call(pid, :ready?)
+
+ _ ->
+ false
+ end
+ end
+
+ @doc """
+ Returns the database connection for a tenant. If the tenant is not connected, it will attempt to connect to the tenant's database.
+ """
+ @spec lookup_or_start_connection(binary(), keyword()) ::
+ {:ok, pid()}
+ | {:error, :tenant_database_unavailable}
+ | {:error, :initializing}
+ | {:error, :tenant_database_connection_initializing}
+ | {:error, :rpc_error, term()}
+ def lookup_or_start_connection(tenant_id, opts \\ []) when is_binary(tenant_id) do
+ case get_status(tenant_id) do
+ {:ok, conn} ->
+ {:ok, conn}
+
+ {:error, :tenant_database_unavailable} ->
+ call_external_node(tenant_id, opts)
+
+ {:error, :tenant_database_connection_initializing} ->
+ Process.sleep(100)
+ call_external_node(tenant_id, opts)
+
+ {:error, :initializing} ->
+ {:error, :tenant_database_unavailable}
+ end
+ end
+
+ @doc """
+ Returns the database connection pid from :syn if it exists.
+ """
+ @spec get_status(binary()) ::
+ {:ok, pid()}
+ | {:error, :tenant_database_unavailable}
+ | {:error, :initializing}
+ | {:error, :tenant_database_connection_initializing}
+ def get_status(tenant_id) do
+ case :syn.lookup(__MODULE__, tenant_id) do
+ {_, %{conn: nil}} ->
+ {:error, :initializing}
+
+ {_, %{conn: conn}} ->
+ {:ok, conn}
+
+ :undefined ->
+ Logger.warning("Connection process starting up")
+ {:error, :tenant_database_connection_initializing}
+
+ error ->
+ log_error("SynInitializationError", error)
+ {:error, :tenant_database_unavailable}
+ end
+ end
+
+ @doc """
+ Connects to a tenant's database and stores the DBConnection in the process :syn metadata
+ """
+ @spec connect(binary(), keyword()) :: {:ok, DBConnection.t()} | {:error, term()}
+ def connect(tenant_id, opts \\ []) do
+ supervisor =
+ {:via, PartitionSupervisor, {Realtime.Tenants.Connect.DynamicSupervisor, tenant_id}}
+
+ spec = {__MODULE__, [tenant_id: tenant_id] ++ opts}
+
+ case DynamicSupervisor.start_child(supervisor, spec) do
+ {:ok, _} ->
+ get_status(tenant_id)
+
+ {:error, {:already_started, _}} ->
+ get_status(tenant_id)
+
+ {:error, {:shutdown, :tenant_db_too_many_connections}} ->
+ {:error, :tenant_db_too_many_connections}
+
+ {:error, {:shutdown, :tenant_not_found}} ->
+ {:error, :tenant_not_found}
+
+ {:error, {:shutdown, :tenant_create_backoff}} ->
+ log_warning("TooManyConnectAttempts", "Too many connect attempts to tenant database")
+ {:error, :tenant_create_backoff}
+
+ {:error, :shutdown} ->
+ log_error("UnableToConnectToTenantDatabase", "Unable to connect to tenant database")
+ {:error, :tenant_database_unavailable}
+
+ {:error, error} ->
+ log_error("UnableToConnectToTenantDatabase", error)
+ {:error, :tenant_database_unavailable}
+ end
+ end
+
+ @doc """
+ Returns the pid of the tenant Connection process and db_conn pid
+ """
+ @spec whereis(binary()) :: pid() | nil
+ def whereis(tenant_id) do
+ case :syn.lookup(__MODULE__, tenant_id) do
+ {pid, _} when is_pid(pid) -> pid
+ _ -> nil
+ end
+ end
+
+ @doc """
+ Shutdown the tenant Connection and linked processes
+ """
+ @spec shutdown(binary()) :: :ok | nil
+ def shutdown(tenant_id) do
+ case whereis(tenant_id) do
+ pid when is_pid(pid) ->
+ send(pid, :shutdown_connect)
+ :ok
+
+ _ ->
+ :ok
+ end
+ end
+
+ def start_link(opts) do
+ tenant_id = Keyword.get(opts, :tenant_id)
+
+ check_connected_user_interval =
+ Keyword.get(opts, :check_connected_user_interval, @check_connected_user_interval_default)
+
+ name = {__MODULE__, tenant_id, %{conn: nil}}
+
+ state = %__MODULE__{
+ tenant_id: tenant_id,
+ check_connected_user_interval: check_connected_user_interval
+ }
+
+ opts = Keyword.put(opts, :name, {:via, :syn, name})
+
+ GenServer.start_link(__MODULE__, state, opts)
+ end
+
+ ## GenServer callbacks
+  # Needs to be done in init/1 to guarantee the GenServer only starts if we are able to connect to the database
+ @impl GenServer
+ def init(%{tenant_id: tenant_id} = state) do
+ Logger.metadata(external_id: tenant_id, project: tenant_id)
+
+ pipes = [
+ GetTenant,
+ Backoff,
+ CheckConnection,
+ StartCounters,
+ RegisterProcess
+ ]
+
+ case Piper.run(pipes, state) do
+ {:ok, acc} ->
+ {:ok, acc, {:continue, :run_migrations}}
+
+ {:error, :tenant_not_found} ->
+ {:stop, {:shutdown, :tenant_not_found}}
+
+ {:error, :tenant_db_too_many_connections} ->
+ {:stop, {:shutdown, :tenant_db_too_many_connections}}
+
+ {:error, :tenant_create_backoff} ->
+ {:stop, {:shutdown, :tenant_create_backoff}}
+
+ {:error, error} ->
+ log_error("UnableToConnectToTenantDatabase", error)
+ {:stop, :shutdown}
+ end
+ end
+
+ def handle_continue(:run_migrations, state) do
+ %{tenant: tenant, db_conn_pid: db_conn_pid} = state
+ Logger.warning("Tenant #{tenant.external_id} is initializing: #{inspect(node())}")
+
+ with res when res in [:ok, :noop] <- Migrations.run_migrations(tenant),
+ :ok <- Migrations.create_partitions(db_conn_pid) do
+ {:noreply, state, {:continue, :start_listen_and_replication}}
+ else
+ error ->
+ log_error("MigrationsFailedToRun", error)
+ {:stop, :shutdown, state}
+ end
+ rescue
+ error ->
+ log_error("MigrationsFailedToRun", error)
+ {:stop, :shutdown, state}
+ end
+
+ def handle_continue(:start_listen_and_replication, state) do
+ %{tenant: tenant} = state
+
+ with {:ok, replication_connection_pid} <- ReplicationConnection.start(tenant, self()),
+ {:ok, listen_pid} <- Listen.start(tenant, self()) do
+ replication_connection_reference = Process.monitor(replication_connection_pid)
+ listen_reference = Process.monitor(listen_pid)
+
+ state = %{
+ state
+ | replication_connection_pid: replication_connection_pid,
+ replication_connection_reference: replication_connection_reference,
+ listen_pid: listen_pid,
+ listen_reference: listen_reference
+ }
+
+ {:noreply, state, {:continue, :setup_connected_user_events}}
+ else
+ {:error, :max_wal_senders_reached} ->
+ log_error("ReplicationMaxWalSendersReached", "Tenant database has reached the maximum number of WAL senders")
+ {:stop, :shutdown, state}
+
+ {:error, error} ->
+ log_error("StartListenAndReplicationFailed", error)
+ {:stop, :shutdown, state}
+ end
+ rescue
+ error ->
+ log_error("StartListenAndReplicationFailed", error)
+ {:stop, :shutdown, state}
+ end
+
+ @impl true
+ def handle_continue(:setup_connected_user_events, state) do
+ %{
+ check_connected_user_interval: check_connected_user_interval,
+ connected_users_bucket: connected_users_bucket,
+ tenant_id: tenant_id
+ } = state
+
+ :ok = Phoenix.PubSub.subscribe(Realtime.PubSub, "realtime:operations:" <> tenant_id)
+ send_connected_user_check_message(connected_users_bucket, check_connected_user_interval)
+ :ets.insert(__MODULE__, {tenant_id})
+ {:noreply, state}
+ end
+
+ @impl GenServer
+ def handle_info(
+ :check_connected_users,
+ %{
+ tenant_id: tenant_id,
+ check_connected_user_interval: check_connected_user_interval,
+ connected_users_bucket: connected_users_bucket
+ } = state
+ ) do
+ connected_users_bucket =
+ tenant_id
+ |> update_connected_users_bucket(connected_users_bucket)
+ |> send_connected_user_check_message(check_connected_user_interval)
+
+ {:noreply, %{state | connected_users_bucket: connected_users_bucket}}
+ end
+
+ def handle_info(:shutdown_no_connected_users, state) do
+ Logger.info("Tenant has no connected users, database connection will be terminated")
+ shutdown_connect_process(state)
+ end
+
+ def handle_info(:suspend_tenant, state) do
+ Logger.warning("Tenant was suspended, database connection will be terminated")
+ shutdown_connect_process(state)
+ end
+
+ def handle_info(:shutdown_connect, state) do
+ Logger.warning("Shutdowning tenant connection")
+ shutdown_connect_process(state)
+ end
+
+ # Handle database connection termination
+ def handle_info(
+ {:DOWN, db_conn_reference, _, _, _},
+ %{db_conn_reference: db_conn_reference} = state
+ ) do
+ Logger.warning("Database connection has been terminated")
+ {:stop, :shutdown, state}
+ end
+
+ # Handle replication connection termination
+ def handle_info(
+ {:DOWN, replication_connection_reference, _, _, _},
+ %{replication_connection_reference: replication_connection_reference} = state
+ ) do
+ Logger.warning("Replication connection has died")
+ {:stop, :shutdown, state}
+ end
+
+ # Handle listen connection termination
+ def handle_info(
+ {:DOWN, listen_reference, _, _, _},
+ %{listen_reference: listen_reference} = state
+ ) do
+ Logger.warning("Listen has been terminated")
+ {:stop, :shutdown, state}
+ end
+
+  # Ignore other messages to avoid crashes from unmatched handle_info clauses
+ def handle_info(_, state) do
+ {:noreply, state}
+ end
+
+ @impl true
+ def handle_call(:ready?, _from, state) do
+ # We just want to know if the process is ready to reply to the client
+ # Essentially checking if all handle_continue's were completed
+ {:reply, true, state}
+ end
+
+ @impl true
+ def terminate(reason, %{tenant_id: tenant_id}) do
+ Logger.info("Tenant #{tenant_id} has been terminated: #{inspect(reason)}")
+ Realtime.MetricsCleaner.delete_metric(tenant_id)
+ :ok
+ end
+
+ ## Private functions
+ defp call_external_node(tenant_id, opts) do
+ rpc_timeout = Keyword.get(opts, :rpc_timeout, @rpc_timeout_default)
+
+ with tenant <- Tenants.Cache.get_tenant_by_external_id(tenant_id),
+ :ok <- tenant_suspended?(tenant),
+ {:ok, node} <- Realtime.Nodes.get_node_for_tenant(tenant) do
+ Rpc.enhanced_call(node, __MODULE__, :connect, [tenant_id, opts], timeout: rpc_timeout, tenant: tenant_id)
+ end
+ end
+
+ defp update_connected_users_bucket(tenant_id, connected_users_bucket) do
+ connected_users_bucket
+ |> then(&(&1 ++ [UsersCounter.tenant_users(tenant_id)]))
+ |> Enum.take(-6)
+ end
+
+ defp send_connected_user_check_message(
+ @connected_users_bucket_shutdown,
+ check_connected_user_interval
+ ) do
+ Process.send_after(self(), :shutdown_no_connected_users, check_connected_user_interval)
+ end
+
+ defp send_connected_user_check_message(connected_users_bucket, check_connected_user_interval) do
+ Process.send_after(self(), :check_connected_users, check_connected_user_interval)
+ connected_users_bucket
+ end
+
+ defp tenant_suspended?(%Tenant{suspend: true}), do: {:error, :tenant_suspended}
+ defp tenant_suspended?(_), do: :ok
+
+ defp shutdown_connect_process(state) do
+ %{
+ db_conn_pid: db_conn_pid,
+ replication_connection_pid: replication_connection_pid,
+ listen_pid: listen_pid
+ } = state
+
+ :ok = GenServer.stop(db_conn_pid, :shutdown, 500)
+
+ replication_connection_pid && Process.alive?(replication_connection_pid) &&
+ GenServer.stop(replication_connection_pid, :normal, 500)
+
+ listen_pid && Process.alive?(listen_pid) &&
+ GenServer.stop(listen_pid, :normal, 500)
+
+ {:stop, :normal, state}
+ end
+end
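Callers interact with the module almost exclusively through `lookup_or_start_connection/2`; a sketch, assuming the returned pid is a Postgrex connection as set up by `CheckConnection`:

```elixir
case Realtime.Tenants.Connect.lookup_or_start_connection("tenant_id") do
  {:ok, db_conn} ->
    # The pid is held in :syn metadata and shared by all channels of this
    # tenant on the node.
    Postgrex.query!(db_conn, "SELECT 1", [])

  {:error, reason} ->
    # e.g. :tenant_database_unavailable or :tenant_create_backoff
    {:error, reason}
end
```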
diff --git a/lib/realtime/tenants/connect/backoff.ex b/lib/realtime/tenants/connect/backoff.ex
new file mode 100644
index 0000000..2d454f7
--- /dev/null
+++ b/lib/realtime/tenants/connect/backoff.ex
@@ -0,0 +1,38 @@
+defmodule Realtime.Tenants.Connect.Backoff do
+ @moduledoc """
+ Applies backoff on process initialization.
+ """
+ alias Realtime.RateCounter
+ alias Realtime.GenCounter
+ alias Realtime.Tenants
+ @behaviour Realtime.Tenants.Connect.Piper
+
+ @impl Realtime.Tenants.Connect.Piper
+ def run(acc) do
+ %{tenant_id: tenant_id} = acc
+ connect_throttle_limit_per_second = Application.fetch_env!(:realtime, :connect_throttle_limit_per_second)
+
+ with {:ok, counter} <- start_connects_per_second_counter(tenant_id),
+ {:ok, %{avg: avg}} when avg <= connect_throttle_limit_per_second <- RateCounter.get(counter) do
+ GenCounter.add(counter)
+ {:ok, acc}
+ else
+ _ -> {:error, :tenant_create_backoff}
+ end
+ end
+
+ defp start_connects_per_second_counter(tenant_id) do
+ id = Tenants.connection_attempts_per_second_key(tenant_id)
+
+ case RateCounter.get(id) do
+ {:ok, _} ->
+ :ok
+
+ {:error, _} ->
+ GenCounter.new(id)
+ RateCounter.new(id, idle_shutdown: :infinity, tick: 100, idle_shutdown_ms: :timer.minutes(5))
+ end
+
+ {:ok, id}
+ end
+end
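The throttle limit is read from the application environment; an assumed dev configuration (the value is illustrative):

```elixir
import Config

# Read by Backoff.run/1 via Application.fetch_env!/2; connection attempts
# above this average rate per second are rejected with :tenant_create_backoff.
config :realtime, connect_throttle_limit_per_second: 1
```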
diff --git a/lib/realtime/tenants/connect/check_connection.ex b/lib/realtime/tenants/connect/check_connection.ex
new file mode 100644
index 0000000..697c08b
--- /dev/null
+++ b/lib/realtime/tenants/connect/check_connection.ex
@@ -0,0 +1,22 @@
+defmodule Realtime.Tenants.Connect.CheckConnection do
+ @moduledoc """
+ Check tenant database connection.
+ """
+ alias Realtime.Database
+
+ @behaviour Realtime.Tenants.Connect.Piper
+ @impl true
+ def run(acc) do
+ %{tenant: tenant} = acc
+
+ case Database.check_tenant_connection(tenant) do
+ {:ok, conn} ->
+ Process.link(conn)
+ db_conn_reference = Process.monitor(conn)
+ {:ok, %{acc | db_conn_pid: conn, db_conn_reference: db_conn_reference}}
+
+ {:error, error} ->
+ {:error, error}
+ end
+ end
+end
diff --git a/lib/realtime/tenants/connect/get_tenant.ex b/lib/realtime/tenants/connect/get_tenant.ex
new file mode 100644
index 0000000..2cd14af
--- /dev/null
+++ b/lib/realtime/tenants/connect/get_tenant.ex
@@ -0,0 +1,19 @@
+defmodule Realtime.Tenants.Connect.GetTenant do
+ @moduledoc """
+  Fetches the tenant record from the cache.
+ """
+
+ alias Realtime.Api.Tenant
+ alias Realtime.Tenants
+ @behaviour Realtime.Tenants.Connect.Piper
+
+ @impl Realtime.Tenants.Connect.Piper
+ def run(acc) do
+ %{tenant_id: tenant_id} = acc
+
+ case Tenants.Cache.get_tenant_by_external_id(tenant_id) do
+ %Tenant{} = tenant -> {:ok, Map.put(acc, :tenant, tenant)}
+ _ -> {:error, :tenant_not_found}
+ end
+ end
+end
diff --git a/lib/realtime/tenants/connect/piper.ex b/lib/realtime/tenants/connect/piper.ex
new file mode 100644
index 0000000..9951808
--- /dev/null
+++ b/lib/realtime/tenants/connect/piper.ex
@@ -0,0 +1,24 @@
+defmodule Realtime.Tenants.Connect.Piper do
+ @moduledoc """
+ Pipes different commands to execute specific actions during the connection process.
+ """
+ require Logger
+ @callback run(any()) :: {:ok, any()} | {:error, any()}
+
+ def run(pipers, init) do
+ Enum.reduce_while(pipers, {:ok, init}, fn piper, {:ok, acc} ->
+ case :timer.tc(fn -> piper.run(acc) end, :millisecond) do
+ {exec_time, {:ok, result}} ->
+ Logger.info("#{inspect(piper)} executed in #{exec_time} ms")
+ {:cont, {:ok, result}}
+
+ {exec_time, {:error, error}} ->
+ Logger.error("#{inspect(piper)} failed in #{exec_time} ms")
+ {:halt, {:error, error}}
+
+ _ ->
+ raise ArgumentError, "must return {:ok, _} or {:error, _}"
+ end
+ end)
+ end
+end
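Any stage only needs to implement the `run/1` callback; a hypothetical pipe and a direct invocation of the pipeline:

```elixir
defmodule MyPipe do
  @behaviour Realtime.Tenants.Connect.Piper

  # Each pipe receives the accumulator from the previous stage and either
  # continues with {:ok, acc} or halts the pipeline with {:error, reason}.
  @impl true
  def run(acc), do: {:ok, Map.put(acc, :touched, true)}
end

{:ok, %{touched: true}} = Realtime.Tenants.Connect.Piper.run([MyPipe], %{})
```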
diff --git a/lib/realtime/tenants/connect/register_process.ex b/lib/realtime/tenants/connect/register_process.ex
new file mode 100644
index 0000000..8699890
--- /dev/null
+++ b/lib/realtime/tenants/connect/register_process.ex
@@ -0,0 +1,17 @@
+defmodule Realtime.Tenants.Connect.RegisterProcess do
+ @moduledoc """
+ Registers the database process in :syn
+ """
+ @behaviour Realtime.Tenants.Connect.Piper
+
+ @impl true
+ def run(acc) do
+ %{tenant_id: tenant_id, db_conn_pid: conn} = acc
+
+ case :syn.update_registry(Realtime.Tenants.Connect, tenant_id, fn _pid, meta -> %{meta | conn: conn} end) do
+ {:ok, _} -> {:ok, acc}
+ {:error, :undefined} -> {:error, :process_not_found}
+ {:error, reason} -> {:error, reason}
+ end
+ end
+end
diff --git a/lib/realtime/tenants/connect/start_counters.ex b/lib/realtime/tenants/connect/start_counters.ex
new file mode 100644
index 0000000..b3203a2
--- /dev/null
+++ b/lib/realtime/tenants/connect/start_counters.ex
@@ -0,0 +1,89 @@
+defmodule Realtime.Tenants.Connect.StartCounters do
+ @moduledoc """
+ Start tenant counters.
+ """
+
+ alias Realtime.GenCounter
+ alias Realtime.RateCounter
+ alias Realtime.Tenants
+
+ @behaviour Realtime.Tenants.Connect.Piper
+
+ @impl true
+ def run(acc) do
+ %{tenant: tenant} = acc
+
+ with :ok <- start_joins_per_second_counter(tenant),
+ :ok <- start_max_events_counter(tenant),
+ :ok <- start_db_events_counter(tenant) do
+ {:ok, acc}
+ end
+ end
+
+ def start_joins_per_second_counter(tenant) do
+ %{max_joins_per_second: max_joins_per_second} = tenant
+ id = Tenants.joins_per_second_key(tenant)
+ GenCounter.new(id)
+
+ res =
+ RateCounter.new(id,
+ idle_shutdown: :infinity,
+ telemetry: %{
+ event_name: [:channel, :joins],
+ measurements: %{limit: max_joins_per_second},
+ metadata: %{tenant: tenant.external_id}
+ }
+ )
+
+ case res do
+ {:ok, _} -> :ok
+ {:error, {:already_started, _}} -> :ok
+ {:error, reason} -> {:error, reason}
+ end
+ end
+
+ def start_max_events_counter(tenant) do
+ %{max_events_per_second: max_events_per_second} = tenant
+
+ key = Tenants.events_per_second_key(tenant)
+
+ GenCounter.new(key)
+
+ res =
+ RateCounter.new(key,
+ idle_shutdown: :infinity,
+ telemetry: %{
+ event_name: [:channel, :events],
+ measurements: %{limit: max_events_per_second},
+ metadata: %{tenant: tenant.external_id}
+ }
+ )
+
+ case res do
+ {:ok, _} -> :ok
+ {:error, {:already_started, _}} -> :ok
+ {:error, reason} -> {:error, reason}
+ end
+ end
+
+ def start_db_events_counter(tenant) do
+ key = Tenants.db_events_per_second_key(tenant)
+ GenCounter.new(key)
+
+ res =
+ RateCounter.new(key,
+ idle_shutdown: :infinity,
+ telemetry: %{
+ event_name: [:channel, :db_events],
+ measurements: %{},
+ metadata: %{tenant: tenant.external_id}
+ }
+ )
+
+ case res do
+ {:ok, _} -> :ok
+ {:error, {:already_started, _}} -> :ok
+ {:error, reason} -> {:error, reason}
+ end
+ end
+end
diff --git a/lib/realtime/tenants/janitor.ex b/lib/realtime/tenants/janitor.ex
new file mode 100644
index 0000000..ec278ae
--- /dev/null
+++ b/lib/realtime/tenants/janitor.ex
@@ -0,0 +1,132 @@
+defmodule Realtime.Tenants.Janitor do
+ @moduledoc """
+ Scheduled tasks for the Tenants.
+ """
+
+ use GenServer
+ require Logger
+
+ import Realtime.Logs
+
+ alias Realtime.Tenants.Janitor.MaintenanceTask
+
+ @type t :: %__MODULE__{
+ timer: pos_integer() | nil,
+ region: String.t() | nil,
+ chunks: pos_integer() | nil,
+ start_after: pos_integer() | nil,
+ randomize: boolean() | nil,
+ tasks: map()
+ }
+
+ defstruct timer: nil,
+ region: nil,
+ chunks: nil,
+ start_after: nil,
+ randomize: nil,
+ tasks: %{}
+
+ def start_link(_args) do
+ timer = Application.get_env(:realtime, :janitor_schedule_timer)
+ start_after = Application.get_env(:realtime, :janitor_run_after_in_ms, 0)
+ chunks = Application.get_env(:realtime, :janitor_chunk_size)
+ randomize = Application.get_env(:realtime, :janitor_schedule_randomize)
+ region = Application.get_env(:realtime, :region)
+
+ state = %__MODULE__{
+ timer: timer,
+ region: region,
+ chunks: chunks,
+ start_after: start_after,
+ randomize: randomize
+ }
+
+ GenServer.start_link(__MODULE__, state, name: __MODULE__)
+ end
+
+ @impl true
+ def init(%__MODULE__{start_after: start_after} = state) do
+ timer = timer(state) + start_after
+ Process.send_after(self(), :delete_old_messages, timer)
+
+ Logger.info("Janitor started")
+ {:ok, state}
+ end
+
+ @table_name Realtime.Tenants.Connect
+ @syn_table :"syn_registry_by_name_Elixir.Realtime.Tenants.Connect"
+
+ @impl true
+ def handle_info(:delete_old_messages, state) do
+ Logger.info("Janitor started")
+ %{chunks: chunks, tasks: tasks} = state
+ all_tenants = :ets.select(@table_name, [{{:"$1"}, [], [:"$1"]}])
+
+ connected_tenants =
+ :ets.select(@syn_table, [{{:"$1", :_, :_, :_, :_, :"$2"}, [{:==, :"$2", {:const, Node.self()}}], [:"$1"]}])
+
+ new_tasks =
+ MapSet.new(all_tenants ++ connected_tenants)
+ |> Enum.to_list()
+ |> Stream.chunk_every(chunks)
+ |> Stream.map(fn chunks ->
+ task =
+ Task.Supervisor.async_nolink(
+ __MODULE__.TaskSupervisor,
+ fn -> perform_maintenance_tasks(chunks) end,
+ ordered: false
+ )
+
+ {task.ref, chunks}
+ end)
+ |> Map.new()
+
+ Process.send_after(self(), :delete_old_messages, timer(state))
+
+ {:noreply, %{state | tasks: Map.merge(tasks, new_tasks)}}
+ end
+
+ def handle_info({:DOWN, ref, _, _, :normal}, state) do
+ %{tasks: tasks} = state
+ {tenants, tasks} = Map.pop(tasks, ref)
+ Logger.info("Janitor finished for tenants: #{inspect(tenants)}")
+ {:noreply, %{state | tasks: tasks}}
+ end
+
+ def handle_info({:DOWN, ref, _, _, :killed}, state) do
+ %{tasks: tasks} = state
+ tenants = Map.get(tasks, ref)
+
+ log_error(
+ "JanitorFailedToDeleteOldMessages",
+ "Scheduled cleanup failed for tenants: #{inspect(tenants)}"
+ )
+
+ {:noreply, %{state | tasks: tasks}}
+ end
+
+ def handle_info(_, state) do
+ {:noreply, state}
+ end
+
+  # Ignored in coverage as the tests would require waiting a random number of minutes, up to an hour
+ # coveralls-ignore-start
+ defp timer(%{timer: timer, randomize: true}), do: timer + :timer.minutes(Enum.random(1..59))
+ # coveralls-ignore-stop
+
+ defp timer(%{timer: timer}), do: timer
+
+ defp perform_maintenance_tasks(tenants), do: Enum.map(tenants, &perform_maintenance_task/1)
+
+ defp perform_maintenance_task(tenant_external_id) do
+ Logger.metadata(project: tenant_external_id, external_id: tenant_external_id)
+ Logger.info("Janitor starting realtime.messages cleanup")
+ :ets.delete(@table_name, tenant_external_id)
+
+ with :ok <- MaintenanceTask.run(tenant_external_id) do
+ Logger.info("Janitor finished")
+
+ :ok
+ end
+ end
+end
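The scheduling knobs all come from the application environment; an assumed configuration with illustrative values, matching the `Application.get_env` reads in `start_link/1`:

```elixir
import Config

config :realtime,
  janitor_schedule_timer: :timer.hours(4),
  janitor_run_after_in_ms: :timer.minutes(10),
  janitor_chunk_size: 10,
  janitor_schedule_randomize: true,
  region: "us-east-1"
```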
diff --git a/lib/realtime/tenants/janitor/maintenance_task.ex b/lib/realtime/tenants/janitor/maintenance_task.ex
new file mode 100644
index 0000000..4a01432
--- /dev/null
+++ b/lib/realtime/tenants/janitor/maintenance_task.ex
@@ -0,0 +1,18 @@
+defmodule Realtime.Tenants.Janitor.MaintenanceTask do
+ @moduledoc """
+ Perform maintenance on the messages table.
+ * Delete old messages
+ * Create new partitions
+ """
+
+ @spec run(String.t()) :: :ok | {:error, any}
+ def run(tenant_external_id) do
+ with %Realtime.Api.Tenant{} = tenant <- Realtime.Tenants.Cache.get_tenant_by_external_id(tenant_external_id),
+ {:ok, conn} <- Realtime.Database.connect(tenant, "realtime_janitor"),
+ :ok <- Realtime.Messages.delete_old_messages(conn),
+ :ok <- Realtime.Tenants.Migrations.create_partitions(conn) do
+ GenServer.stop(conn)
+ :ok
+ end
+ end
+end
diff --git a/lib/realtime/tenants/listen.ex b/lib/realtime/tenants/listen.ex
new file mode 100644
index 0000000..6468d7c
--- /dev/null
+++ b/lib/realtime/tenants/listen.ex
@@ -0,0 +1,113 @@
+defmodule Realtime.Tenants.Listen do
+ @moduledoc """
+  Listens for Postgres notifications to identify issues with the functions being called in the tenant's database.
+ """
+ use GenServer, restart: :transient
+ require Logger
+ alias Realtime.Api.Tenant
+ alias Realtime.Database
+ alias Realtime.Logs
+ alias Realtime.Registry.Unique
+ alias Realtime.Tenants.Cache
+
+ @type t :: %__MODULE__{
+ tenant_id: binary,
+ listen_conn: pid(),
+ monitored_pid: pid()
+ }
+ defstruct tenant_id: nil, listen_conn: nil, monitored_pid: nil
+
+ @topic "realtime:system"
+ def start_link(%__MODULE__{tenant_id: tenant_id} = state) do
+ name = {:via, Registry, {Unique, {__MODULE__, :tenant_id, tenant_id}}}
+ GenServer.start_link(__MODULE__, state, name: name)
+ end
+
+ def init(%__MODULE__{tenant_id: tenant_id, monitored_pid: monitored_pid}) do
+ Logger.metadata(external_id: tenant_id, project: tenant_id)
+ Process.monitor(monitored_pid)
+
+ tenant = Cache.get_tenant_by_external_id(tenant_id)
+ connection_opts = Database.from_tenant(tenant, "realtime_listen", :stop)
+
+ name =
+ {:via, Registry, {Realtime.Registry.Unique, {Postgrex.Notifications, :tenant_id, tenant_id}}}
+
+ settings =
+ [
+ hostname: connection_opts.hostname,
+ database: connection_opts.database,
+ password: connection_opts.password,
+ username: connection_opts.username,
+ port: connection_opts.port,
+ ssl: connection_opts.ssl,
+ socket_options: connection_opts.socket_options,
+ sync_connect: true,
+ auto_reconnect: false,
+ backoff_type: :stop,
+ max_restarts: 0,
+ name: name,
+ parameters: [application_name: "realtime_listen"]
+ ]
+
+ Logger.info("Listening for notifications on #{@topic}")
+
+ case Postgrex.Notifications.start_link(settings) do
+ {:ok, conn} ->
+ Postgrex.Notifications.listen!(conn, @topic)
+ {:ok, %{tenant_id: tenant.external_id, listen_conn: conn}}
+
+ {:error, {:already_started, conn}} ->
+ Postgrex.Notifications.listen!(conn, @topic)
+ {:ok, %{tenant_id: tenant.external_id, listen_conn: conn}}
+
+ {:error, reason} ->
+ {:stop, reason}
+ end
+ catch
+ e -> {:stop, e}
+ end
+
+ @spec start(Realtime.Api.Tenant.t(), pid()) :: {:ok, pid()} | {:error, any()}
+ def start(%Tenant{} = tenant, pid) do
+ supervisor = {:via, PartitionSupervisor, {Realtime.Tenants.Listen.DynamicSupervisor, self()}}
+ spec = {__MODULE__, %__MODULE__{tenant_id: tenant.external_id, monitored_pid: pid}}
+
+ case DynamicSupervisor.start_child(supervisor, spec) do
+ {:ok, pid} -> {:ok, pid}
+ {:error, {:already_started, pid}} -> {:ok, pid}
+ error -> {:error, error}
+ end
+ catch
+ e -> {:error, e}
+ end
+
+ @doc """
+  Finds the notifications connection by tenant_id.
+ """
+ @spec whereis(String.t()) :: pid() | nil
+ def whereis(tenant_id) do
+ case Registry.lookup(Realtime.Registry.Unique, {Postgrex.Notifications, :tenant_id, tenant_id}) do
+ [{pid, _}] -> pid
+ [] -> nil
+ end
+ end
+
+ def handle_info({:notification, _, _, @topic, payload}, state) do
+ case Jason.decode(payload) do
+ {:ok, %{"function" => "realtime.send"} = parsed} when is_map_key(parsed, "error") ->
+ Logs.log_error("FailedSendFromDatabase", parsed)
+
+ {:error, _} ->
+ Logs.log_error("FailedToParseDiagnosticMessage", payload)
+
+ _ ->
+ :ok
+ end
+
+ {:noreply, state}
+ end
+
+ def handle_info({:DOWN, _, :process, _, _}, state), do: {:stop, :normal, state}
+ def handle_info(_, state), do: {:noreply, state}
+end
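The shape of the diagnostic payload is inferred from the `handle_info/2` clause above; a sketch of a notification that would be logged as `FailedSendFromDatabase` (the error text is a placeholder):

```elixir
payload = ~s({"function": "realtime.send", "error": "insufficient_privilege"})

# Mirrors the match performed in handle_info/2 on the "realtime:system" topic.
{:ok, %{"function" => "realtime.send"} = parsed} = Jason.decode(payload)
true = is_map_key(parsed, "error")
```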
diff --git a/lib/realtime/tenants/migrations.ex b/lib/realtime/tenants/migrations.ex
new file mode 100644
index 0000000..e00d8ca
--- /dev/null
+++ b/lib/realtime/tenants/migrations.ex
@@ -0,0 +1,262 @@
+defmodule Realtime.Tenants.Migrations do
+ @moduledoc """
+ Run Realtime database migrations for tenant's database.
+ """
+ use GenServer, restart: :transient
+
+ require Logger
+
+ import Realtime.Logs
+
+ alias Realtime.Tenants
+ alias Realtime.Database
+ alias Realtime.Registry.Unique
+ alias Realtime.Repo
+ alias Realtime.Api.Tenant
+
+ alias Realtime.Tenants.Migrations.{
+ CreateRealtimeSubscriptionTable,
+ CreateRealtimeCheckFiltersTrigger,
+ CreateRealtimeQuoteWal2jsonFunction,
+ CreateRealtimeCheckEqualityOpFunction,
+ CreateRealtimeBuildPreparedStatementSqlFunction,
+ CreateRealtimeCastFunction,
+ CreateRealtimeIsVisibleThroughFiltersFunction,
+ CreateRealtimeApplyRlsFunction,
+ GrantRealtimeUsageToAuthenticatedRole,
+ EnableRealtimeApplyRlsFunctionPostgrest9Compatibility,
+ UpdateRealtimeSubscriptionCheckFiltersFunctionSecurity,
+ UpdateRealtimeBuildPreparedStatementSqlFunctionForCompatibilityWithAllTypes,
+ EnableGenericSubscriptionClaims,
+ AddWalPayloadOnErrorsInApplyRlsFunction,
+ UpdateChangeTimestampToIso8601ZuluFormat,
+ UpdateSubscriptionCheckFiltersFunctionDynamicTableName,
+ UpdateApplyRlsFunctionToApplyIso8601,
+ AddQuotedRegtypesSupport,
+ AddOutputForDataLessThanEqual64BytesWhenPayloadTooLarge,
+ AddQuotedRegtypesBackwardCompatibilitySupport,
+ RecreateRealtimeBuildPreparedStatementSqlFunction,
+ NullPassesFiltersRecreateIsVisibleThroughFilters,
+ UpdateApplyRlsFunctionToPassThroughDeleteEventsOnFilter,
+ MillisecondPrecisionForWalrus,
+ AddInOpToFilters,
+ EnableFilteringOnDeleteRecord,
+ UpdateSubscriptionCheckFiltersForInFilterNonTextTypes,
+ ConvertCommitTimestampToUtc,
+ OutputFullRecordWhenUnchangedToast,
+ CreateListChangesFunction,
+ CreateChannels,
+ SetRequiredGrants,
+ CreateRlsHelperFunctions,
+ EnableChannelsRls,
+ AddChannelsColumnForWriteCheck,
+ AddUpdateGrantToChannels,
+ AddBroadcastsPoliciesTable,
+ AddInsertAndDeleteGrantToChannels,
+ AddPresencesPoliciesTable,
+ CreateRealtimeAdminAndMoveOwnership,
+ RemoveCheckColumns,
+ RedefineAuthorizationTables,
+ FixWalrusRoleHandling,
+ UnloggedMessagesTable,
+ LoggedMessagesTable,
+ FilterDeletePostgresChanges,
+ AddPayloadToMessages,
+ ChangeMessagesIdType,
+ UuidAutoGeneration,
+ MessagesPartitioning,
+ MessagesUsingUuid,
+ FixSendFunction,
+ RecreateEntityIndexUsingBtree,
+ FixSendFunctionPartitionCreation,
+ RealtimeSendHandleExceptionsRemovePartitionCreation,
+ RealtimeSendSetsConfig,
+ RealtimeSubscriptionUnlogged,
+ RealtimeSubscriptionLogged,
+ RemoveUnusedPublications,
+ RealtimeSendSetsTopicConfig,
+ SubscriptionIndexBridgingDisabled,
+ RunSubscriptionIndexBridgingDisabled
+ }
+
+ @migrations [
+ {20_211_116_024_918, CreateRealtimeSubscriptionTable},
+ {20_211_116_045_059, CreateRealtimeCheckFiltersTrigger},
+ {20_211_116_050_929, CreateRealtimeQuoteWal2jsonFunction},
+ {20_211_116_051_442, CreateRealtimeCheckEqualityOpFunction},
+ {20_211_116_212_300, CreateRealtimeBuildPreparedStatementSqlFunction},
+ {20_211_116_213_355, CreateRealtimeCastFunction},
+ {20_211_116_213_934, CreateRealtimeIsVisibleThroughFiltersFunction},
+ {20_211_116_214_523, CreateRealtimeApplyRlsFunction},
+ {20_211_122_062_447, GrantRealtimeUsageToAuthenticatedRole},
+ {20_211_124_070_109, EnableRealtimeApplyRlsFunctionPostgrest9Compatibility},
+ {20_211_202_204_204, UpdateRealtimeSubscriptionCheckFiltersFunctionSecurity},
+ {20_211_202_204_605, UpdateRealtimeBuildPreparedStatementSqlFunctionForCompatibilityWithAllTypes},
+ {20_211_210_212_804, EnableGenericSubscriptionClaims},
+ {20_211_228_014_915, AddWalPayloadOnErrorsInApplyRlsFunction},
+ {20_220_107_221_237, UpdateChangeTimestampToIso8601ZuluFormat},
+ {20_220_228_202_821, UpdateSubscriptionCheckFiltersFunctionDynamicTableName},
+ {20_220_312_004_840, UpdateApplyRlsFunctionToApplyIso8601},
+ {20_220_603_231_003, AddQuotedRegtypesSupport},
+ {20_220_603_232_444, AddOutputForDataLessThanEqual64BytesWhenPayloadTooLarge},
+ {20_220_615_214_548, AddQuotedRegtypesBackwardCompatibilitySupport},
+ {20_220_712_093_339, RecreateRealtimeBuildPreparedStatementSqlFunction},
+ {20_220_908_172_859, NullPassesFiltersRecreateIsVisibleThroughFilters},
+ {20_220_916_233_421, UpdateApplyRlsFunctionToPassThroughDeleteEventsOnFilter},
+ {20_230_119_133_233, MillisecondPrecisionForWalrus},
+ {20_230_128_025_114, AddInOpToFilters},
+ {20_230_128_025_212, EnableFilteringOnDeleteRecord},
+ {20_230_227_211_149, UpdateSubscriptionCheckFiltersForInFilterNonTextTypes},
+ {20_230_228_184_745, ConvertCommitTimestampToUtc},
+ {20_230_308_225_145, OutputFullRecordWhenUnchangedToast},
+ {20_230_328_144_023, CreateListChangesFunction},
+ {20_231_018_144_023, CreateChannels},
+ {20_231_204_144_023, SetRequiredGrants},
+ {20_231_204_144_024, CreateRlsHelperFunctions},
+ {20_231_204_144_025, EnableChannelsRls},
+ {20_240_108_234_812, AddChannelsColumnForWriteCheck},
+ {20_240_109_165_339, AddUpdateGrantToChannels},
+ {20_240_227_174_441, AddBroadcastsPoliciesTable},
+ {20_240_311_171_622, AddInsertAndDeleteGrantToChannels},
+ {20_240_321_100_241, AddPresencesPoliciesTable},
+ {20_240_401_105_812, CreateRealtimeAdminAndMoveOwnership},
+ {20_240_418_121_054, RemoveCheckColumns},
+ {20_240_523_004_032, RedefineAuthorizationTables},
+ {20_240_618_124_746, FixWalrusRoleHandling},
+ {20_240_801_235_015, UnloggedMessagesTable},
+ {20_240_805_133_720, LoggedMessagesTable},
+ {20_240_827_160_934, FilterDeletePostgresChanges},
+ {20_240_919_163_303, AddPayloadToMessages},
+ {20_240_919_163_305, ChangeMessagesIdType},
+ {20_241_019_105_805, UuidAutoGeneration},
+ {20_241_030_150_047, MessagesPartitioning},
+ {20_241_108_114_728, MessagesUsingUuid},
+ {20_241_121_104_152, FixSendFunction},
+ {20_241_130_184_212, RecreateEntityIndexUsingBtree},
+ {20_241_220_035_512, FixSendFunctionPartitionCreation},
+ {20_241_220_123_912, RealtimeSendHandleExceptionsRemovePartitionCreation},
+ {20_241_224_161_212, RealtimeSendSetsConfig},
+ {20_250_107_150_512, RealtimeSubscriptionUnlogged},
+ {20_250_110_162_412, RealtimeSubscriptionLogged},
+ {20_250_123_174_212, RemoveUnusedPublications},
+ {20_250_128_220_012, RealtimeSendSetsTopicConfig},
+ {20_250_506_224_012, SubscriptionIndexBridgingDisabled},
+ {20_250_523_164_012, RunSubscriptionIndexBridgingDisabled}
+ ]
+
+ defstruct [:tenant_external_id, :settings]
+
+ @type t :: %__MODULE__{
+ tenant_external_id: binary(),
+ settings: map()
+ }
+
+ @doc """
+  Runs migrations for the given tenant's database.
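+
+  A minimal illustrative call, assuming a tenant fetched from the tenants
+  cache (the external id here is hypothetical):
+
+      tenant = Realtime.Tenants.Cache.get_tenant_by_external_id("dev_tenant")
+      Realtime.Tenants.Migrations.run_migrations(tenant)
+      #=> :ok | :noop | {:error, term}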
+ """
+ @spec run_migrations(Tenant.t()) :: :ok | :noop | {:error, any()}
+ def run_migrations(%Tenant{} = tenant) do
+ %{extensions: [%{settings: settings} | _]} = tenant
+ attrs = %__MODULE__{tenant_external_id: tenant.external_id, settings: settings}
+
+ supervisor =
+ {:via, PartitionSupervisor, {Realtime.Tenants.Migrations.DynamicSupervisor, tenant.external_id}}
+
+ spec = {__MODULE__, attrs}
+
+ if Tenants.run_migrations?(tenant) do
+ case DynamicSupervisor.start_child(supervisor, spec) do
+ :ignore -> :ok
+ error -> error
+ end
+ else
+ :noop
+ end
+ end
+
+ def start_link(%__MODULE__{tenant_external_id: tenant_external_id} = attrs) do
+ name = {:via, Registry, {Unique, {__MODULE__, :host, tenant_external_id}}}
+ GenServer.start_link(__MODULE__, attrs, name: name)
+ end
+
+ def init(%__MODULE__{tenant_external_id: tenant_external_id, settings: settings}) do
+ Logger.metadata(external_id: tenant_external_id, project: tenant_external_id)
+
+ case migrate(settings) do
+ :ok ->
+ Tenants.update_migrations_ran(tenant_external_id, Enum.count(@migrations))
+ :ignore
+
+ {:error, error} ->
+ {:stop, error}
+ end
+ end
+
+ defp migrate(settings) do
+ settings = Database.from_settings(settings, "realtime_migrations", :stop)
+
+ [
+ hostname: settings.hostname,
+ port: settings.port,
+ database: settings.database,
+ password: settings.password,
+ username: settings.username,
+ pool_size: settings.pool_size,
+ backoff_type: settings.backoff_type,
+ socket_options: settings.socket_options,
+ parameters: [application_name: settings.application_name],
+ ssl: settings.ssl
+ ]
+ |> Repo.with_dynamic_repo(fn repo ->
+ Logger.info("Applying migrations to #{settings.hostname}")
+
+ try do
+ opts = [all: true, prefix: "realtime", dynamic_repo: repo]
+ Ecto.Migrator.run(Repo, @migrations, :up, opts)
+
+ :ok
+ rescue
+ error ->
+ log_error("MigrationsFailedToRun", error)
+ {:error, error}
+ end
+ end)
+ end
+
+ @doc """
+  Creates daily partitions for `realtime.messages` over the given tenant database connection.
+ """
+ @spec create_partitions(pid()) :: :ok
+ def create_partitions(db_conn_pid) do
+ Logger.info("Creating partitions for realtime.messages")
+ today = Date.utc_today()
+ yesterday = Date.add(today, -1)
+ future = Date.add(today, 3)
+
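+    # One partition per day, covering yesterday through three days ahead (5 partitions)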
+ dates = Date.range(yesterday, future)
+
+ Enum.each(dates, fn date ->
+ partition_name = "messages_#{date |> Date.to_iso8601() |> String.replace("-", "_")}"
+ start_timestamp = Date.to_string(date)
+ end_timestamp = Date.to_string(Date.add(date, 1))
+
+ Database.transaction(db_conn_pid, fn conn ->
+ query = """
+ CREATE TABLE IF NOT EXISTS realtime.#{partition_name}
+ PARTITION OF realtime.messages
+ FOR VALUES FROM ('#{start_timestamp}') TO ('#{end_timestamp}');
+ """
+
+ case Postgrex.query(conn, query, []) do
+ {:ok, _} -> Logger.debug("Partition #{partition_name} created")
+ {:error, %Postgrex.Error{postgres: %{code: :duplicate_table}}} -> :ok
+ {:error, error} -> log_error("PartitionCreationFailed", error)
+ end
+ end)
+ end)
+
+ :ok
+ end
+
+ def migrations(), do: @migrations
+end
diff --git a/lib/realtime/tenants/replication_connection.ex b/lib/realtime/tenants/replication_connection.ex
new file mode 100644
index 0000000..a0abdef
--- /dev/null
+++ b/lib/realtime/tenants/replication_connection.ex
@@ -0,0 +1,362 @@
+defmodule Realtime.Tenants.ReplicationConnection do
+ @moduledoc """
+  ReplicationConnection is the module that streams data from a PostgreSQL database using logical replication.
+
+ ## Struct parameters
+  * `tenant_id` - The external id of the tenant whose database is streamed.
+  * `table` - The table to replicate.
+  * `schema` - The schema of the table to replicate. Defaults to the `public` schema.
+  * `opts` - The options to pass to this module.
+  * `step` - The current step of the replication process.
+  * `publication_name` - The name of the publication to create. If not provided, it is derived from the schema and table name.
+  * `replication_slot_name` - The name of the replication slot to create. If not provided, it is derived from the schema and table name.
+  * `output_plugin` - The output plugin to use. Defaults to `pgoutput`.
+  * `proto_version` - The protocol version to use. Defaults to `1`.
+  * `relations` - Relations seen on the replication stream so far, keyed by relation id.
+  * `buffer` - Buffered replication messages.
+  * `monitored_pid` - A pid to monitor; the connection stops when it goes down.
+
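+  ## Example
+
+  An illustrative start for a tenant already loaded in memory; the connection
+  is stopped when the monitored pid (here, the caller) goes down:
+
+      {:ok, _pid} = Realtime.Tenants.ReplicationConnection.start(tenant, self())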
+ """
+ use Postgrex.ReplicationConnection
+ require Logger
+
+ import Realtime.Adapters.Postgres.Protocol
+ import Realtime.Adapters.Postgres.Decoder
+ import Realtime.Logs
+
+ alias Realtime.Adapters.Postgres.Decoder
+ alias Realtime.Adapters.Postgres.Protocol.KeepAlive
+ alias Realtime.Adapters.Postgres.Protocol.Write
+ alias Realtime.Api.Tenant
+ alias Realtime.Database
+ alias Realtime.Tenants.BatchBroadcast
+ alias Realtime.Tenants.Cache
+
+ @type t :: %__MODULE__{
+ tenant_id: String.t(),
+ table: String.t(),
+ schema: String.t(),
+ opts: Keyword.t(),
+ step:
+ :disconnected
+ | :check_replication_slot
+ | :create_publication
+ | :check_publication
+ | :create_slot
+ | :start_replication_slot
+ | :streaming,
+ publication_name: String.t(),
+ replication_slot_name: String.t(),
+ output_plugin: String.t(),
+ proto_version: integer(),
+ relations: map(),
+ buffer: list(),
+ monitored_pid: pid()
+ }
+ defstruct tenant_id: nil,
+ table: nil,
+ schema: "public",
+ opts: [],
+ step: :disconnected,
+ publication_name: nil,
+ replication_slot_name: nil,
+ output_plugin: "pgoutput",
+ proto_version: 1,
+ relations: %{},
+ buffer: [],
+ monitored_pid: nil
+
+ @doc """
+  Starts the replication connection for a tenant and monitors the given pid, stopping the ReplicationConnection when that pid goes down.
+ """
+ @spec start(Realtime.Api.Tenant.t(), pid()) :: {:ok, pid()} | {:error, any()}
+ def start(tenant, monitored_pid) do
+ Logger.info("Starting replication for Broadcast Changes")
+ opts = %__MODULE__{tenant_id: tenant.external_id, monitored_pid: monitored_pid}
+ supervisor_spec = supervisor_spec(tenant)
+
+ child_spec = %{
+ id: __MODULE__,
+ start: {__MODULE__, :start_link, [opts]},
+ restart: :transient,
+ type: :worker
+ }
+
+ case DynamicSupervisor.start_child(supervisor_spec, child_spec) do
+ {:ok, pid} -> {:ok, pid}
+ {:error, {:already_started, pid}} -> {:ok, pid}
+ {:error, {:bad_return_from_init, {:stop, error, _}}} -> {:error, error}
+ {:error, %Postgrex.Error{postgres: %{pg_code: "53300"}}} -> {:error, :max_wal_senders_reached}
+ error -> error
+ end
+ end
+
+ @doc """
+  Finds the replication connection for the given tenant_id.
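+
+  Illustrative lookup (the tenant id is hypothetical):
+
+      Realtime.Tenants.ReplicationConnection.whereis("dev_tenant")
+      #=> #PID<0.123.0> or nil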
+ """
+ @spec whereis(String.t()) :: pid() | nil
+ def whereis(tenant_id) do
+ case Registry.lookup(Realtime.Registry.Unique, {__MODULE__, tenant_id}) do
+ [{pid, _}] -> pid
+ [] -> nil
+ end
+ end
+
+ def start_link(%__MODULE__{tenant_id: tenant_id} = attrs) do
+ tenant = Cache.get_tenant_by_external_id(tenant_id)
+ connection_opts = Database.from_tenant(tenant, "realtime_broadcast_changes", :stop)
+
+ connection_opts =
+ [
+ name: {:via, Registry, {Realtime.Registry.Unique, {__MODULE__, tenant_id}}},
+ hostname: connection_opts.hostname,
+ username: connection_opts.username,
+ password: connection_opts.password,
+ database: connection_opts.database,
+ port: connection_opts.port,
+ socket_options: connection_opts.socket_options,
+ ssl: connection_opts.ssl,
+ backoff_type: :stop,
+ sync_connect: true,
+ parameters: [
+ application_name: "realtime_replication_connection"
+ ]
+ ]
+
+ case Postgrex.ReplicationConnection.start_link(__MODULE__, attrs, connection_opts) do
+ {:ok, pid} -> {:ok, pid}
+ {:error, {:already_started, pid}} -> {:ok, pid}
+ {:error, {:bad_return_from_init, {:stop, error}}} -> {:error, error}
+ {:error, error} -> {:error, error}
+ end
+ end
+
+ @impl true
+ def init(%__MODULE__{tenant_id: tenant_id, monitored_pid: monitored_pid} = state) do
+ Logger.metadata(external_id: tenant_id, project: tenant_id)
+ Process.monitor(monitored_pid)
+ state = %{state | table: "messages", schema: "realtime"}
+
+ state = %{
+ state
+ | publication_name: publication_name(state),
+ replication_slot_name: replication_slot_name(state)
+ }
+
+ Logger.info("Initializing connection with the status: #{inspect(state, pretty: true)}")
+
+ {:ok, state}
+ end
+
+ @impl true
+ def handle_connect(state) do
+ replication_slot_name = replication_slot_name(state)
+ Logger.info("Checking if replication slot #{replication_slot_name} exists")
+
+ query =
+ "SELECT * FROM pg_replication_slots WHERE slot_name = '#{replication_slot_name}'"
+
+ {:query, query, %{state | step: :check_replication_slot}}
+ end
+
+ @impl true
+ def handle_result([%Postgrex.Result{num_rows: 1}], %__MODULE__{step: :check_replication_slot}) do
+ {:disconnect, "Temporary Replication slot already exists and in use"}
+ end
+
+ def handle_result(
+ [%Postgrex.Result{num_rows: 0}],
+ %__MODULE__{step: :check_replication_slot} = state
+ ) do
+ %__MODULE__{
+ output_plugin: output_plugin,
+ replication_slot_name: replication_slot_name,
+ step: :check_replication_slot
+ } = state
+
+ Logger.info("Create replication slot #{replication_slot_name} using plugin #{output_plugin}")
+
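+    # TEMPORARY slots are dropped automatically by Postgres when this connection closes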
+ query =
+ "CREATE_REPLICATION_SLOT #{replication_slot_name} TEMPORARY LOGICAL #{output_plugin} NOEXPORT_SNAPSHOT"
+
+ {:query, query, %{state | step: :check_publication}}
+ end
+
+ def handle_result([%Postgrex.Result{}], %__MODULE__{step: :check_publication} = state) do
+ %__MODULE__{table: table, schema: schema, publication_name: publication_name} = state
+
+ Logger.info("Check publication #{publication_name} for table #{schema}.#{table} exists")
+ query = "SELECT * FROM pg_publication WHERE pubname = '#{publication_name}'"
+
+ {:query, query, %{state | step: :create_publication}}
+ end
+
+ def handle_result(
+ [%Postgrex.Result{num_rows: 0}],
+ %__MODULE__{step: :create_publication} = state
+ ) do
+ %__MODULE__{table: table, schema: schema, publication_name: publication_name} = state
+
+ Logger.info("Create publication #{publication_name} for table #{schema}.#{table}")
+ query = "CREATE PUBLICATION #{publication_name} FOR TABLE #{schema}.#{table}"
+
+ {:query, query, %{state | step: :start_replication_slot}}
+ end
+
+ def handle_result(
+ [%Postgrex.Result{num_rows: 1}],
+ %__MODULE__{step: :create_publication} = state
+ ) do
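+    # Publication already exists; run a no-op query to advance the state machine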
+ {:query, "SELECT 1", %{state | step: :start_replication_slot}}
+ end
+
+ @impl true
+ def handle_result(
+ [%Postgrex.Result{}],
+ %__MODULE__{step: :start_replication_slot} = state
+ ) do
+ %__MODULE__{
+ proto_version: proto_version,
+ replication_slot_name: replication_slot_name,
+ publication_name: publication_name
+ } = state
+
+ Logger.info(
+ "Starting stream replication for slot #{replication_slot_name} using publication #{publication_name} and protocol version #{proto_version}"
+ )
+
+ query =
+ "START_REPLICATION SLOT #{replication_slot_name} LOGICAL 0/0 (proto_version '#{proto_version}', publication_names '#{publication_name}')"
+
+ {:stream, query, [], %{state | step: :streaming}}
+ end
+
+ def handle_result(%Postgrex.Error{postgres: %{message: message}}, _state) do
+ {:disconnect, "Error starting replication: #{message}"}
+ end
+
+ @impl true
+ def handle_data(data, state) when is_keep_alive(data) do
+ %KeepAlive{reply: reply, wal_end: wal_end} = parse(data)
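+    # The standby status update reports the last WAL byte + 1, hence the increment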
+ wal_end = wal_end + 1
+
+ message =
+ case reply do
+ :now -> standby_status(wal_end, wal_end, wal_end, reply)
+ :later -> hold()
+ end
+
+ {:noreply, message, state}
+ end
+
+ def handle_data(data, state) when is_write(data) do
+ %Write{message: message} = parse(data)
+ message |> decode_message() |> then(&send(self(), &1))
+ {:noreply, [], state}
+ end
+
+ def handle_data(e, state) do
+ log_error("UnexpectedMessageReceived", e)
+ {:noreply, [], state}
+ end
+
+ @impl true
+ def handle_info(%Decoder.Messages.Relation{} = msg, state) do
+ %Decoder.Messages.Relation{id: id, namespace: namespace, name: name, columns: columns} = msg
+ %{relations: relations} = state
+ relation = %{name: name, columns: columns, namespace: namespace}
+ relations = Map.put(relations, id, relation)
+ {:noreply, %{state | relations: relations}}
+ rescue
+ e ->
+ log_error("UnableToBroadcastChanges", e)
+ {:noreply, state}
+ catch
+ e ->
+ log_error("UnableToBroadcastChanges", e)
+ {:noreply, state}
+ end
+
+ def handle_info(%Decoder.Messages.Insert{} = msg, state) do
+ %Decoder.Messages.Insert{relation_id: relation_id, tuple_data: tuple_data} = msg
+ %{relations: relations, tenant_id: tenant_id} = state
+
+ case Map.get(relations, relation_id) do
+ %{columns: columns} ->
+ to_broadcast =
+ tuple_data
+ |> Tuple.to_list()
+ |> Enum.zip(columns)
+ |> Map.new(fn
+ {nil, %{name: name}} -> {name, nil}
+ {value, %{name: name, type: "jsonb"}} -> {name, Jason.decode!(value)}
+ {value, %{name: name, type: "bool"}} -> {name, value == "t"}
+ {value, %{name: name}} -> {name, value}
+ end)
+
+ payload = Map.get(to_broadcast, "payload")
+
+ case payload do
+ nil ->
+ {:noreply, state}
+
+ payload ->
+ id = Map.fetch!(to_broadcast, "id")
+
+ to_broadcast =
+ %{
+ topic: Map.fetch!(to_broadcast, "topic"),
+ event: Map.fetch!(to_broadcast, "event"),
+ private: Map.fetch!(to_broadcast, "private"),
+                  # Avoid overriding a user-provided id
+ payload: Map.put_new(payload, "id", id)
+ }
+
+ %Tenant{} = tenant = Cache.get_tenant_by_external_id(tenant_id)
+
+ case BatchBroadcast.broadcast(nil, tenant, %{messages: [to_broadcast]}, true) do
+ :ok -> :ok
+ error -> log_error("UnableToBatchBroadcastChanges", error)
+ end
+
+ {:noreply, state}
+ end
+
+ _ ->
+ log_error("UnknownBroadcastChangesRelation", "Relation ID not found: #{relation_id}")
+ {:noreply, state}
+ end
+ rescue
+ e ->
+ log_error("UnableToBroadcastChanges", e)
+ {:noreply, state}
+ catch
+ e ->
+ log_error("UnableToBroadcastChanges", e)
+ {:noreply, state}
+ end
+
+ def handle_info(:shutdown, _), do: {:disconnect, :normal}
+ def handle_info({:DOWN, _, :process, _, _}, _), do: {:disconnect, :normal}
+ def handle_info(_, state), do: {:noreply, state}
+
+ @impl true
+ def handle_disconnect(state) do
+ Logger.warning("Disconnecting broadcast changes handler in the step : #{inspect(state.step)}")
+ {:noreply, %{state | step: :disconnected}}
+ end
+
+ @spec supervisor_spec(Tenant.t()) :: term()
+ def supervisor_spec(%Tenant{external_id: tenant_id}) do
+ {:via, PartitionSupervisor, {__MODULE__.DynamicSupervisor, tenant_id}}
+ end
+
+ def publication_name(%__MODULE__{table: table, schema: schema}) do
+ "tealbase_#{schema}_#{table}_publication"
+ end
+
+ def replication_slot_name(%__MODULE__{table: table, schema: schema}) do
+ "tealbase_#{schema}_#{table}_replication_slot_#{slot_suffix()}"
+ end
+
+ defp slot_suffix, do: Application.get_env(:realtime, :slot_name_suffix)
+end
diff --git a/lib/realtime/tenants/repo/migrations/20211116024918_create_realtime_subscription_table.ex b/lib/realtime/tenants/repo/migrations/20211116024918_create_realtime_subscription_table.ex
new file mode 100644
index 0000000..6de98c5
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20211116024918_create_realtime_subscription_table.ex
@@ -0,0 +1,47 @@
+defmodule Realtime.Tenants.Migrations.CreateRealtimeSubscriptionTable do
+ @moduledoc false
+
+ use Ecto.Migration
+
+ def change do
+ execute("""
+ DO $$
+ BEGIN
+ IF NOT EXISTS (SELECT 1 FROM pg_type WHERE typname = 'equality_op') THEN
+ CREATE TYPE realtime.equality_op AS ENUM(
+ 'eq', 'neq', 'lt', 'lte', 'gt', 'gte'
+ );
+ END IF;
+ END$$;
+ """)
+
+ execute("""
+ DO $$
+ BEGIN
+ IF NOT EXISTS (SELECT 1 FROM pg_type WHERE typname = 'user_defined_filter') THEN
+ CREATE TYPE realtime.user_defined_filter as (
+ column_name text,
+ op realtime.equality_op,
+ value text
+ );
+ END IF;
+ END$$;
+ """)
+
+ execute("create table if not exists realtime.subscription (
+ -- Tracks which users are subscribed to each table
+ id bigint not null generated always as identity,
+ user_id uuid not null,
+ -- Populated automatically by trigger. Required to enable auth.email()
+ email varchar(255),
+ entity regclass not null,
+ filters realtime.user_defined_filter[] not null default '{}',
+ created_at timestamp not null default timezone('utc', now()),
+
+ constraint pk_subscription primary key (id),
+ unique (entity, user_id, filters)
+ )")
+
+ execute("create index if not exists ix_realtime_subscription_entity on realtime.subscription using hash (entity)")
+ end
+end
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20211116045059_create_realtime_check_filters_trigger.ex b/lib/realtime/tenants/repo/migrations/20211116045059_create_realtime_check_filters_trigger.ex
similarity index 96%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20211116045059_create_realtime_check_filters_trigger.ex
rename to lib/realtime/tenants/repo/migrations/20211116045059_create_realtime_check_filters_trigger.ex
index 6184e1c..e673e51 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20211116045059_create_realtime_check_filters_trigger.ex
+++ b/lib/realtime/tenants/repo/migrations/20211116045059_create_realtime_check_filters_trigger.ex
@@ -1,4 +1,4 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.CreateRealtimeCheckFiltersTrigger do
+defmodule Realtime.Tenants.Migrations.CreateRealtimeCheckFiltersTrigger do
@moduledoc false
use Ecto.Migration
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20211116050929_create_realtime_quote_wal2json_function.ex b/lib/realtime/tenants/repo/migrations/20211116050929_create_realtime_quote_wal2json_function.ex
similarity index 92%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20211116050929_create_realtime_quote_wal2json_function.ex
rename to lib/realtime/tenants/repo/migrations/20211116050929_create_realtime_quote_wal2json_function.ex
index 6b86dd8..9434909 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20211116050929_create_realtime_quote_wal2json_function.ex
+++ b/lib/realtime/tenants/repo/migrations/20211116050929_create_realtime_quote_wal2json_function.ex
@@ -1,4 +1,4 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.CreateRealtimeQuoteWal2jsonFunction do
+defmodule Realtime.Tenants.Migrations.CreateRealtimeQuoteWal2jsonFunction do
@moduledoc false
use Ecto.Migration
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20211116051442_create_realtime_check_equality_op_function.ex b/lib/realtime/tenants/repo/migrations/20211116051442_create_realtime_check_equality_op_function.ex
similarity index 90%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20211116051442_create_realtime_check_equality_op_function.ex
rename to lib/realtime/tenants/repo/migrations/20211116051442_create_realtime_check_equality_op_function.ex
index 68c12ad..1a7408a 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20211116051442_create_realtime_check_equality_op_function.ex
+++ b/lib/realtime/tenants/repo/migrations/20211116051442_create_realtime_check_equality_op_function.ex
@@ -1,4 +1,4 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.CreateRealtimeCheckEqualityOpFunction do
+defmodule Realtime.Tenants.Migrations.CreateRealtimeCheckEqualityOpFunction do
@moduledoc false
use Ecto.Migration
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20211116212300_create_realtime_build_prepared_statement_sql_function.ex b/lib/realtime/tenants/repo/migrations/20211116212300_create_realtime_build_prepared_statement_sql_function.ex
similarity index 68%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20211116212300_create_realtime_build_prepared_statement_sql_function.ex
rename to lib/realtime/tenants/repo/migrations/20211116212300_create_realtime_build_prepared_statement_sql_function.ex
index 748bfb5..d5a9a05 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20211116212300_create_realtime_build_prepared_statement_sql_function.ex
+++ b/lib/realtime/tenants/repo/migrations/20211116212300_create_realtime_build_prepared_statement_sql_function.ex
@@ -1,16 +1,23 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.CreateRealtimeBuildPreparedStatementSqlFunction do
+defmodule Realtime.Tenants.Migrations.CreateRealtimeBuildPreparedStatementSqlFunction do
@moduledoc false
use Ecto.Migration
def change do
- execute("create type realtime.wal_column as (
- name text,
- type text,
- value jsonb,
- is_pkey boolean,
- is_selectable boolean
- );")
+ execute("""
+ DO $$
+ BEGIN
+ IF NOT EXISTS (SELECT 1 FROM pg_type WHERE typname = 'wal_column') THEN
+ CREATE TYPE realtime.wal_column AS (
+ name text,
+ type text,
+ value jsonb,
+ is_pkey boolean,
+ is_selectable boolean
+ );
+ END IF;
+ END$$;
+ """)
execute("create function realtime.build_prepared_statement_sql(
prepared_statement_name text,
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20211116213355_create_realtime_cast_function.ex b/lib/realtime/tenants/repo/migrations/20211116213355_create_realtime_cast_function.ex
similarity index 81%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20211116213355_create_realtime_cast_function.ex
rename to lib/realtime/tenants/repo/migrations/20211116213355_create_realtime_cast_function.ex
index 6e137e1..30f36e7 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20211116213355_create_realtime_cast_function.ex
+++ b/lib/realtime/tenants/repo/migrations/20211116213355_create_realtime_cast_function.ex
@@ -1,4 +1,4 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.CreateRealtimeCastFunction do
+defmodule Realtime.Tenants.Migrations.CreateRealtimeCastFunction do
@moduledoc false
use Ecto.Migration
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20211116213934_create_realtime_is_visible_through_filters_function.ex b/lib/realtime/tenants/repo/migrations/20211116213934_create_realtime_is_visible_through_filters_function.ex
similarity index 89%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20211116213934_create_realtime_is_visible_through_filters_function.ex
rename to lib/realtime/tenants/repo/migrations/20211116213934_create_realtime_is_visible_through_filters_function.ex
index 1bbeab1..119e31f 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20211116213934_create_realtime_is_visible_through_filters_function.ex
+++ b/lib/realtime/tenants/repo/migrations/20211116213934_create_realtime_is_visible_through_filters_function.ex
@@ -1,4 +1,4 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.CreateRealtimeIsVisibleThroughFiltersFunction do
+defmodule Realtime.Tenants.Migrations.CreateRealtimeIsVisibleThroughFiltersFunction do
@moduledoc false
use Ecto.Migration
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20211116214523_create_realtime_apply_rls_function.ex b/lib/realtime/tenants/repo/migrations/20211116214523_create_realtime_apply_rls_function.ex
similarity index 90%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20211116214523_create_realtime_apply_rls_function.ex
rename to lib/realtime/tenants/repo/migrations/20211116214523_create_realtime_apply_rls_function.ex
index 74fb6f4..8b29d96 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20211116214523_create_realtime_apply_rls_function.ex
+++ b/lib/realtime/tenants/repo/migrations/20211116214523_create_realtime_apply_rls_function.ex
@@ -1,19 +1,34 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.CreateRealtimeApplyRlsFunction do
+defmodule Realtime.Tenants.Migrations.CreateRealtimeApplyRlsFunction do
@moduledoc false
use Ecto.Migration
def change do
- execute(
- "create type realtime.action as enum ('INSERT', 'UPDATE', 'DELETE', 'TRUNCATE', 'ERROR');"
- )
-
- execute("create type realtime.wal_rls as (
- wal jsonb,
- is_rls_enabled boolean,
- users uuid[],
- errors text[]
- );")
+ execute("""
+ DO $$
+ BEGIN
+ IF NOT EXISTS (SELECT 1 FROM pg_type WHERE typname = 'action') THEN
+ CREATE TYPE realtime.action AS ENUM (
+ 'INSERT', 'UPDATE', 'DELETE', 'TRUNCATE', 'ERROR'
+ );
+ END IF;
+ END$$;
+ """)
+
+ execute("""
+ DO $$
+ BEGIN
+ IF NOT EXISTS (SELECT 1 FROM pg_type WHERE typname = 'wal_rls') THEN
+ CREATE TYPE realtime.wal_rls AS (
+ wal jsonb,
+ is_rls_enabled boolean,
+ users uuid[],
+ errors text[]
+ );
+ END IF;
+ END$$;
+ """)
+
execute("create function realtime.apply_rls(wal jsonb, max_record_bytes int = 1024 * 1024)
returns realtime.wal_rls
language plpgsql
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20211122062447_grant_realtime_usage_to_authenticated_role.ex b/lib/realtime/tenants/repo/migrations/20211122062447_grant_realtime_usage_to_authenticated_role.ex
similarity index 59%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20211122062447_grant_realtime_usage_to_authenticated_role.ex
rename to lib/realtime/tenants/repo/migrations/20211122062447_grant_realtime_usage_to_authenticated_role.ex
index a6fbb53..63f408b 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20211122062447_grant_realtime_usage_to_authenticated_role.ex
+++ b/lib/realtime/tenants/repo/migrations/20211122062447_grant_realtime_usage_to_authenticated_role.ex
@@ -1,4 +1,4 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.GrantRealtimeUsageToAuthenticatedRole do
+defmodule Realtime.Tenants.Migrations.GrantRealtimeUsageToAuthenticatedRole do
@moduledoc false
use Ecto.Migration
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20211124070109_enable_realtime_apply_rls_function_postgrest_9_compatibility.ex b/lib/realtime/tenants/repo/migrations/20211124070109_enable_realtime_apply_rls_function_postgrest_9_compatibility.ex
similarity index 96%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20211124070109_enable_realtime_apply_rls_function_postgrest_9_compatibility.ex
rename to lib/realtime/tenants/repo/migrations/20211124070109_enable_realtime_apply_rls_function_postgrest_9_compatibility.ex
index ea13337..2d1b170 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20211124070109_enable_realtime_apply_rls_function_postgrest_9_compatibility.ex
+++ b/lib/realtime/tenants/repo/migrations/20211124070109_enable_realtime_apply_rls_function_postgrest_9_compatibility.ex
@@ -1,11 +1,10 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.EnableRealtimeApplyRlsFunctionPostgrest9Compatibility do
+defmodule Realtime.Tenants.Migrations.EnableRealtimeApplyRlsFunctionPostgrest9Compatibility do
@moduledoc false
use Ecto.Migration
def change do
- execute(
- "create or replace function realtime.apply_rls(wal jsonb, max_record_bytes int = 1024 * 1024)
+ execute("create or replace function realtime.apply_rls(wal jsonb, max_record_bytes int = 1024 * 1024)
returns realtime.wal_rls
language plpgsql
volatile
@@ -207,7 +206,6 @@ defmodule Realtime.Extensions.Rls.Repo.Migrations.EnableRealtimeApplyRlsFunction
errors
)::realtime.wal_rls;
end;
- $$;"
- )
+ $$;")
end
end
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20211202204204_update_realtime_subscription_check_filters_function_security.ex b/lib/realtime/tenants/repo/migrations/20211202204204_update_realtime_subscription_check_filters_function_security.ex
similarity index 94%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20211202204204_update_realtime_subscription_check_filters_function_security.ex
rename to lib/realtime/tenants/repo/migrations/20211202204204_update_realtime_subscription_check_filters_function_security.ex
index a1bc768..4d266ff 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20211202204204_update_realtime_subscription_check_filters_function_security.ex
+++ b/lib/realtime/tenants/repo/migrations/20211202204204_update_realtime_subscription_check_filters_function_security.ex
@@ -1,4 +1,4 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.UpdateRealtimeSubscriptionCheckFiltersFunctionSecurity do
+defmodule Realtime.Tenants.Migrations.UpdateRealtimeSubscriptionCheckFiltersFunctionSecurity do
@moduledoc false
use Ecto.Migration
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20211202204605_update_realtime_build_prepared_statement_sql_function_for_compatibility_with_all_types.ex b/lib/realtime/tenants/repo/migrations/20211202204605_update_realtime_build_prepared_statement_sql_function_for_compatibility_with_all_types.ex
similarity index 88%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20211202204605_update_realtime_build_prepared_statement_sql_function_for_compatibility_with_all_types.ex
rename to lib/realtime/tenants/repo/migrations/20211202204605_update_realtime_build_prepared_statement_sql_function_for_compatibility_with_all_types.ex
index 3642a54..42ddc13 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20211202204605_update_realtime_build_prepared_statement_sql_function_for_compatibility_with_all_types.ex
+++ b/lib/realtime/tenants/repo/migrations/20211202204605_update_realtime_build_prepared_statement_sql_function_for_compatibility_with_all_types.ex
@@ -1,4 +1,4 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.UpdateRealtimeBuildPreparedStatementSqlFunctionForCompatibilityWithAllTypes do
+defmodule Realtime.Tenants.Migrations.UpdateRealtimeBuildPreparedStatementSqlFunctionForCompatibilityWithAllTypes do
@moduledoc false
use Ecto.Migration
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20211210212804_enable_generic_subscription_claims.ex b/lib/realtime/tenants/repo/migrations/20211210212804_enable_generic_subscription_claims.ex
similarity index 99%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20211210212804_enable_generic_subscription_claims.ex
rename to lib/realtime/tenants/repo/migrations/20211210212804_enable_generic_subscription_claims.ex
index ab83a53..a372b38 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20211210212804_enable_generic_subscription_claims.ex
+++ b/lib/realtime/tenants/repo/migrations/20211210212804_enable_generic_subscription_claims.ex
@@ -1,4 +1,4 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.EnableGenericSubscriptionClaims do
+defmodule Realtime.Tenants.Migrations.EnableGenericSubscriptionClaims do
@moduledoc false
use Ecto.Migration
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20211228014915_add_wal_payload_on_errors_in_apply_rls_function.ex b/lib/realtime/tenants/repo/migrations/20211228014915_add_wal_payload_on_errors_in_apply_rls_function.ex
similarity index 97%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20211228014915_add_wal_payload_on_errors_in_apply_rls_function.ex
rename to lib/realtime/tenants/repo/migrations/20211228014915_add_wal_payload_on_errors_in_apply_rls_function.ex
index bea449e..1d0ee4b 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20211228014915_add_wal_payload_on_errors_in_apply_rls_function.ex
+++ b/lib/realtime/tenants/repo/migrations/20211228014915_add_wal_payload_on_errors_in_apply_rls_function.ex
@@ -1,11 +1,10 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.AddWalPayloadOnErrorsInApplyRlsFunction do
+defmodule Realtime.Tenants.Migrations.AddWalPayloadOnErrorsInApplyRlsFunction do
@moduledoc false
use Ecto.Migration
def change do
- execute(
- "create or replace function realtime.apply_rls(wal jsonb, max_record_bytes int = 1024 * 1024)
+ execute("create or replace function realtime.apply_rls(wal jsonb, max_record_bytes int = 1024 * 1024)
returns setof realtime.wal_rls
language plpgsql
volatile
@@ -245,7 +244,6 @@ defmodule Realtime.Extensions.Rls.Repo.Migrations.AddWalPayloadOnErrorsInApplyRl
perform set_config('role', null, true);
end;
- $$;"
- )
+ $$;")
end
end
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20220107221237_update_change_timestamp_to_iso_8601_zulu_format.ex b/lib/realtime/tenants/repo/migrations/20220107221237_update_change_timestamp_to_iso_8601_zulu_format.ex
similarity index 97%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20220107221237_update_change_timestamp_to_iso_8601_zulu_format.ex
rename to lib/realtime/tenants/repo/migrations/20220107221237_update_change_timestamp_to_iso_8601_zulu_format.ex
index a5f837a..5dc2788 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20220107221237_update_change_timestamp_to_iso_8601_zulu_format.ex
+++ b/lib/realtime/tenants/repo/migrations/20220107221237_update_change_timestamp_to_iso_8601_zulu_format.ex
@@ -1,11 +1,10 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.UpdateChangeTimestampToIso8601ZuluFormat do
+defmodule Realtime.Tenants.Migrations.UpdateChangeTimestampToIso8601ZuluFormat do
@moduledoc false
use Ecto.Migration
def change do
- execute(
- "create or replace function realtime.apply_rls(wal jsonb, max_record_bytes int = 1024 * 1024)
+ execute("create or replace function realtime.apply_rls(wal jsonb, max_record_bytes int = 1024 * 1024)
returns setof realtime.wal_rls
language plpgsql
volatile
@@ -240,7 +239,6 @@ defmodule Realtime.Extensions.Rls.Repo.Migrations.UpdateChangeTimestampToIso8601
perform set_config('role', null, true);
end;
- $$;"
- )
+ $$;")
end
end
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20220228202821_update_subscription_check_filters_function_dynamic_table_name.ex b/lib/realtime/tenants/repo/migrations/20220228202821_update_subscription_check_filters_function_dynamic_table_name.ex
similarity index 94%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20220228202821_update_subscription_check_filters_function_dynamic_table_name.ex
rename to lib/realtime/tenants/repo/migrations/20220228202821_update_subscription_check_filters_function_dynamic_table_name.ex
index 5e37caa..6148599 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20220228202821_update_subscription_check_filters_function_dynamic_table_name.ex
+++ b/lib/realtime/tenants/repo/migrations/20220228202821_update_subscription_check_filters_function_dynamic_table_name.ex
@@ -1,4 +1,4 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.UpdateSubscriptionCheckFiltersFunctionDynamicTableName do
+defmodule Realtime.Tenants.Migrations.UpdateSubscriptionCheckFiltersFunctionDynamicTableName do
@moduledoc false
use Ecto.Migration
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20220312004840_update_apply_rls_function_to_apply_iso_8601.ex b/lib/realtime/tenants/repo/migrations/20220312004840_update_apply_rls_function_to_apply_iso_8601.ex
similarity index 97%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20220312004840_update_apply_rls_function_to_apply_iso_8601.ex
rename to lib/realtime/tenants/repo/migrations/20220312004840_update_apply_rls_function_to_apply_iso_8601.ex
index 95a1b81..9a6caa1 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20220312004840_update_apply_rls_function_to_apply_iso_8601.ex
+++ b/lib/realtime/tenants/repo/migrations/20220312004840_update_apply_rls_function_to_apply_iso_8601.ex
@@ -1,11 +1,10 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.UpdateApplyRlsFunctionToApplyIso8601 do
+defmodule Realtime.Tenants.Migrations.UpdateApplyRlsFunctionToApplyIso8601 do
@moduledoc false
use Ecto.Migration
def change do
- execute(
- "create or replace function realtime.apply_rls(wal jsonb, max_record_bytes int = 1024 * 1024)
+ execute("create or replace function realtime.apply_rls(wal jsonb, max_record_bytes int = 1024 * 1024)
returns setof realtime.wal_rls
language plpgsql
volatile
@@ -240,7 +239,6 @@ defmodule Realtime.Extensions.Rls.Repo.Migrations.UpdateApplyRlsFunctionToApplyI
perform set_config('role', null, true);
end;
- $$;"
- )
+ $$;")
end
end
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20220603231003_add_quoted_regtypes_support.ex b/lib/realtime/tenants/repo/migrations/20220603231003_add_quoted_regtypes_support.ex
similarity index 96%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20220603231003_add_quoted_regtypes_support.ex
rename to lib/realtime/tenants/repo/migrations/20220603231003_add_quoted_regtypes_support.ex
index e42c524..68111b5 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20220603231003_add_quoted_regtypes_support.ex
+++ b/lib/realtime/tenants/repo/migrations/20220603231003_add_quoted_regtypes_support.ex
@@ -1,4 +1,4 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.AddQuotedRegtypesSupport do
+defmodule Realtime.Tenants.Migrations.AddQuotedRegtypesSupport do
@moduledoc false
use Ecto.Migration
@@ -6,16 +6,21 @@ defmodule Realtime.Extensions.Rls.Repo.Migrations.AddQuotedRegtypesSupport do
def change do
execute("drop type if exists realtime.wal_column cascade;")
- execute("
- create type realtime.wal_column as (
- name text,
- type_name text,
- type_oid oid,
- value jsonb,
- is_pkey boolean,
- is_selectable boolean
- );
- ")
+ execute("""
+ DO $$
+ BEGIN
+ IF NOT EXISTS (SELECT 1 FROM pg_type WHERE typname = 'wal_column') THEN
+ CREATE TYPE realtime.wal_column AS (
+ name text,
+ type_name text,
+ type_oid oid,
+ value jsonb,
+ is_pkey boolean,
+ is_selectable boolean
+ );
+ END IF;
+ END$$;
+ """)
execute("
create or replace function realtime.is_visible_through_filters(columns realtime.wal_column[], filters realtime.user_defined_filter[])
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20220603232444_add_output_for_data_less_than_equal_64_bytes_when_payload_too_large.ex b/lib/realtime/tenants/repo/migrations/20220603232444_add_output_for_data_less_than_equal_64_bytes_when_payload_too_large.ex
similarity index 99%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20220603232444_add_output_for_data_less_than_equal_64_bytes_when_payload_too_large.ex
rename to lib/realtime/tenants/repo/migrations/20220603232444_add_output_for_data_less_than_equal_64_bytes_when_payload_too_large.ex
index 6130ab7..63f351f 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20220603232444_add_output_for_data_less_than_equal_64_bytes_when_payload_too_large.ex
+++ b/lib/realtime/tenants/repo/migrations/20220603232444_add_output_for_data_less_than_equal_64_bytes_when_payload_too_large.ex
@@ -1,4 +1,4 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.AddOutputForDataLessThanEqual64BytesWhenPayloadTooLarge do
+defmodule Realtime.Tenants.Migrations.AddOutputForDataLessThanEqual64BytesWhenPayloadTooLarge do
@moduledoc false
use Ecto.Migration
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20220615214548_add_quoted_regtypes_backward_compatibility_support.ex b/lib/realtime/tenants/repo/migrations/20220615214548_add_quoted_regtypes_backward_compatibility_support.ex
similarity index 99%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20220615214548_add_quoted_regtypes_backward_compatibility_support.ex
rename to lib/realtime/tenants/repo/migrations/20220615214548_add_quoted_regtypes_backward_compatibility_support.ex
index ad64cf7..b03bf7c 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20220615214548_add_quoted_regtypes_backward_compatibility_support.ex
+++ b/lib/realtime/tenants/repo/migrations/20220615214548_add_quoted_regtypes_backward_compatibility_support.ex
@@ -1,4 +1,4 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.AddQuotedRegtypesBackwardCompatibilitySupport do
+defmodule Realtime.Tenants.Migrations.AddQuotedRegtypesBackwardCompatibilitySupport do
@moduledoc false
use Ecto.Migration
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20220712093339_recreate_realtime_build_prepared_statement_sql_function.ex b/lib/realtime/tenants/repo/migrations/20220712093339_recreate_realtime_build_prepared_statement_sql_function.ex
similarity index 91%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20220712093339_recreate_realtime_build_prepared_statement_sql_function.ex
rename to lib/realtime/tenants/repo/migrations/20220712093339_recreate_realtime_build_prepared_statement_sql_function.ex
index f745784..37d0a99 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20220712093339_recreate_realtime_build_prepared_statement_sql_function.ex
+++ b/lib/realtime/tenants/repo/migrations/20220712093339_recreate_realtime_build_prepared_statement_sql_function.ex
@@ -1,4 +1,4 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.RecreateRealtimeBuildPreparedStatementSqlFunction do
+defmodule Realtime.Tenants.Migrations.RecreateRealtimeBuildPreparedStatementSqlFunction do
@moduledoc false
use Ecto.Migration
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20220908172859_null_passes_filters_recreate_is_visible_through_filters.ex b/lib/realtime/tenants/repo/migrations/20220908172859_null_passes_filters_recreate_is_visible_through_filters.ex
similarity index 93%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20220908172859_null_passes_filters_recreate_is_visible_through_filters.ex
rename to lib/realtime/tenants/repo/migrations/20220908172859_null_passes_filters_recreate_is_visible_through_filters.ex
index 09a24ba..91c1dc2 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20220908172859_null_passes_filters_recreate_is_visible_through_filters.ex
+++ b/lib/realtime/tenants/repo/migrations/20220908172859_null_passes_filters_recreate_is_visible_through_filters.ex
@@ -1,4 +1,4 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.NullPassesFiltersRecreateIsVisibleThroughFilters do
+defmodule Realtime.Tenants.Migrations.NullPassesFiltersRecreateIsVisibleThroughFilters do
@moduledoc false
use Ecto.Migration
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20220916233421_update_apply_rls_function_to_pass_through_delete_events_on_filter.ex b/lib/realtime/tenants/repo/migrations/20220916233421_update_apply_rls_function_to_pass_through_delete_events_on_filter.ex
similarity index 99%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20220916233421_update_apply_rls_function_to_pass_through_delete_events_on_filter.ex
rename to lib/realtime/tenants/repo/migrations/20220916233421_update_apply_rls_function_to_pass_through_delete_events_on_filter.ex
index 7499653..228376d 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20220916233421_update_apply_rls_function_to_pass_through_delete_events_on_filter.ex
+++ b/lib/realtime/tenants/repo/migrations/20220916233421_update_apply_rls_function_to_pass_through_delete_events_on_filter.ex
@@ -1,4 +1,4 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.UpdateApplyRlsFunctionToPassThroughDeleteEventsOnFilter do
+defmodule Realtime.Tenants.Migrations.UpdateApplyRlsFunctionToPassThroughDeleteEventsOnFilter do
@moduledoc false
use Ecto.Migration
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20230119133233_millisecond_precision_for_walrus.ex b/lib/realtime/tenants/repo/migrations/20230119133233_millisecond_precision_for_walrus.ex
similarity index 99%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20230119133233_millisecond_precision_for_walrus.ex
rename to lib/realtime/tenants/repo/migrations/20230119133233_millisecond_precision_for_walrus.ex
index ce68233..5acdbe8 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20230119133233_millisecond_precision_for_walrus.ex
+++ b/lib/realtime/tenants/repo/migrations/20230119133233_millisecond_precision_for_walrus.ex
@@ -1,4 +1,4 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.MillisecondPrecisionForWalrus do
+defmodule Realtime.Tenants.Migrations.MillisecondPrecisionForWalrus do
@moduledoc false
use Ecto.Migration
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20230128025114_add_in_op_to_filters.ex b/lib/realtime/tenants/repo/migrations/20230128025114_add_in_op_to_filters.ex
similarity index 98%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20230128025114_add_in_op_to_filters.ex
rename to lib/realtime/tenants/repo/migrations/20230128025114_add_in_op_to_filters.ex
index b459136..01872cb 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20230128025114_add_in_op_to_filters.ex
+++ b/lib/realtime/tenants/repo/migrations/20230128025114_add_in_op_to_filters.ex
@@ -1,4 +1,4 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.AddInOpToFilters do
+defmodule Realtime.Tenants.Migrations.AddInOpToFilters do
@moduledoc false
use Ecto.Migration
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20230128025212_enable_filtering_on_delete_record.ex b/lib/realtime/tenants/repo/migrations/20230128025212_enable_filtering_on_delete_record.ex
similarity index 99%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20230128025212_enable_filtering_on_delete_record.ex
rename to lib/realtime/tenants/repo/migrations/20230128025212_enable_filtering_on_delete_record.ex
index c0a54a1..9f6d0c7 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20230128025212_enable_filtering_on_delete_record.ex
+++ b/lib/realtime/tenants/repo/migrations/20230128025212_enable_filtering_on_delete_record.ex
@@ -1,4 +1,4 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.EnableFilteringOnDeleteRecord do
+defmodule Realtime.Tenants.Migrations.EnableFilteringOnDeleteRecord do
@moduledoc false
use Ecto.Migration
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20230227211149_update_subscription_check_filters_for_in_filter_non_text_types.ex b/lib/realtime/tenants/repo/migrations/20230227211149_update_subscription_check_filters_for_in_filter_non_text_types.ex
similarity index 96%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20230227211149_update_subscription_check_filters_for_in_filter_non_text_types.ex
rename to lib/realtime/tenants/repo/migrations/20230227211149_update_subscription_check_filters_for_in_filter_non_text_types.ex
index 0fe46c3..d5a96ac 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20230227211149_update_subscription_check_filters_for_in_filter_non_text_types.ex
+++ b/lib/realtime/tenants/repo/migrations/20230227211149_update_subscription_check_filters_for_in_filter_non_text_types.ex
@@ -1,4 +1,4 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.UpdateSubscriptionCheckFiltersForInFilterNonTextTypes do
+defmodule Realtime.Tenants.Migrations.UpdateSubscriptionCheckFiltersForInFilterNonTextTypes do
@moduledoc false
use Ecto.Migration
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20230228184745_convert_commit_timestamp_to_utc.ex b/lib/realtime/tenants/repo/migrations/20230228184745_convert_commit_timestamp_to_utc.ex
similarity index 99%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20230228184745_convert_commit_timestamp_to_utc.ex
rename to lib/realtime/tenants/repo/migrations/20230228184745_convert_commit_timestamp_to_utc.ex
index e97e3d4..b50119b 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20230228184745_convert_commit_timestamp_to_utc.ex
+++ b/lib/realtime/tenants/repo/migrations/20230228184745_convert_commit_timestamp_to_utc.ex
@@ -1,4 +1,4 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.ConvertCommitTimestampToUtc do
+defmodule Realtime.Tenants.Migrations.ConvertCommitTimestampToUtc do
@moduledoc false
use Ecto.Migration
diff --git a/lib/extensions/postgres_cdc_rls/repo/migrations/20230308225145_output_full_record_when_unchanged_toast.ex b/lib/realtime/tenants/repo/migrations/20230308225145_output_full_record_when_unchanged_toast.ex
similarity index 99%
rename from lib/extensions/postgres_cdc_rls/repo/migrations/20230308225145_output_full_record_when_unchanged_toast.ex
rename to lib/realtime/tenants/repo/migrations/20230308225145_output_full_record_when_unchanged_toast.ex
index aa08a4f..22c1b45 100644
--- a/lib/extensions/postgres_cdc_rls/repo/migrations/20230308225145_output_full_record_when_unchanged_toast.ex
+++ b/lib/realtime/tenants/repo/migrations/20230308225145_output_full_record_when_unchanged_toast.ex
@@ -1,4 +1,4 @@
-defmodule Realtime.Extensions.Rls.Repo.Migrations.OutputFullRecordWhenUnchangedToast do
+defmodule Realtime.Tenants.Migrations.OutputFullRecordWhenUnchangedToast do
@moduledoc false
use Ecto.Migration
diff --git a/lib/realtime/tenants/repo/migrations/20230328144023_create_list_changes_function.ex b/lib/realtime/tenants/repo/migrations/20230328144023_create_list_changes_function.ex
new file mode 100644
index 0000000..cb55b15
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20230328144023_create_list_changes_function.ex
@@ -0,0 +1,71 @@
+defmodule Realtime.Tenants.Migrations.CreateListChangesFunction do
+ @moduledoc false
+
+ use Ecto.Migration
+
+ def change do
+ execute(
+ "create or replace function realtime.list_changes(publication name, slot_name name, max_changes int, max_record_bytes int)
+ returns setof realtime.wal_rls
+ language sql
+ set log_min_messages to 'fatal'
+ as $$
+ with pub as (
+ select
+ concat_ws(
+ ',',
+ case when bool_or(pubinsert) then 'insert' else null end,
+ case when bool_or(pubupdate) then 'update' else null end,
+ case when bool_or(pubdelete) then 'delete' else null end
+ ) as w2j_actions,
+ coalesce(
+ string_agg(
+ realtime.quote_wal2json(format('%I.%I', schemaname, tablename)::regclass),
+ ','
+ ) filter (where ppt.tablename is not null and ppt.tablename not like '% %'),
+ ''
+ ) w2j_add_tables
+ from
+ pg_publication pp
+ left join pg_publication_tables ppt
+ on pp.pubname = ppt.pubname
+ where
+ pp.pubname = publication
+ group by
+ pp.pubname
+ limit 1
+ ),
+ w2j as (
+ select
+ x.*, pub.w2j_add_tables
+ from
+ pub,
+ pg_logical_slot_get_changes(
+ slot_name, null, max_changes,
+ 'include-pk', 'true',
+ 'include-transaction', 'false',
+ 'include-timestamp', 'true',
+ 'include-type-oids', 'true',
+ 'format-version', '2',
+ 'actions', pub.w2j_actions,
+ 'add-tables', pub.w2j_add_tables
+ ) x
+ )
+ select
+ xyz.wal,
+ xyz.is_rls_enabled,
+ xyz.subscription_ids,
+ xyz.errors
+ from
+ w2j,
+ realtime.apply_rls(
+ wal := w2j.data::jsonb,
+ max_record_bytes := max_record_bytes
+ ) xyz(wal, is_rls_enabled, subscription_ids, errors)
+ where
+ w2j.w2j_add_tables <> ''
+ and xyz.subscription_ids[1] is not null
+ $$;"
+ )
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20231018144023_create_channels.ex b/lib/realtime/tenants/repo/migrations/20231018144023_create_channels.ex
new file mode 100644
index 0000000..779ca6d
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20231018144023_create_channels.ex
@@ -0,0 +1,14 @@
+defmodule Realtime.Tenants.Migrations.CreateChannels do
+ @moduledoc false
+
+ use Ecto.Migration
+
+ def change do
+ create table(:channels, prefix: "realtime") do
+ add(:name, :string, null: false)
+ timestamps()
+ end
+
+ create unique_index(:channels, [:name], prefix: "realtime")
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20231204144023_set_required_grants.ex b/lib/realtime/tenants/repo/migrations/20231204144023_set_required_grants.ex
new file mode 100644
index 0000000..4e213b1
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20231204144023_set_required_grants.ex
@@ -0,0 +1,23 @@
+defmodule Realtime.Tenants.Migrations.SetRequiredGrants do
+ @moduledoc false
+
+ use Ecto.Migration
+
+ def change do
+ execute("""
+ GRANT USAGE ON SCHEMA realtime TO postgres, anon, authenticated, service_role
+ """)
+
+ execute("""
+ GRANT SELECT ON ALL TABLES IN SCHEMA realtime TO postgres, anon, authenticated, service_role
+ """)
+
+ execute("""
+ GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA realtime TO postgres, anon, authenticated, service_role
+ """)
+
+ execute("""
+ GRANT USAGE ON ALL SEQUENCES IN SCHEMA realtime TO postgres, anon, authenticated, service_role
+ """)
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20231204144024_create_rls_helper_functions.ex b/lib/realtime/tenants/repo/migrations/20231204144024_create_rls_helper_functions.ex
new file mode 100644
index 0000000..c7bef3b
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20231204144024_create_rls_helper_functions.ex
@@ -0,0 +1,13 @@
+defmodule Realtime.Tenants.Migrations.CreateRlsHelperFunctions do
+ @moduledoc false
+
+ use Ecto.Migration
+
+ def change do
+ execute("""
+ create or replace function realtime.channel_name() returns text as $$
+ select nullif(current_setting('realtime.channel_name', true), '')::text;
+ $$ language sql stable;
+ """)
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20231204144025_enable_channels_rls.ex b/lib/realtime/tenants/repo/migrations/20231204144025_enable_channels_rls.ex
new file mode 100644
index 0000000..46838fc
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20231204144025_enable_channels_rls.ex
@@ -0,0 +1,9 @@
+defmodule Realtime.Tenants.Migrations.EnableChannelsRls do
+ @moduledoc false
+
+ use Ecto.Migration
+
+ def change do
+ execute("ALTER TABLE realtime.channels ENABLE row level security")
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20240108234812_add_channels_column_for_write_check.ex b/lib/realtime/tenants/repo/migrations/20240108234812_add_channels_column_for_write_check.ex
new file mode 100644
index 0000000..3023845
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20240108234812_add_channels_column_for_write_check.ex
@@ -0,0 +1,11 @@
+defmodule Realtime.Tenants.Migrations.AddChannelsColumnForWriteCheck do
+ @moduledoc false
+
+ use Ecto.Migration
+
+ def change do
+ alter table(:channels, prefix: "realtime") do
+ add :check, :boolean, default: false
+ end
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20240109165339_add_update_grant_to_channels.ex b/lib/realtime/tenants/repo/migrations/20240109165339_add_update_grant_to_channels.ex
new file mode 100644
index 0000000..3af3d05
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20240109165339_add_update_grant_to_channels.ex
@@ -0,0 +1,11 @@
+defmodule Realtime.Tenants.Migrations.AddUpdateGrantToChannels do
+ @moduledoc false
+
+ use Ecto.Migration
+
+ def change do
+ execute("""
+ GRANT UPDATE ON realtime.channels TO postgres, anon, authenticated, service_role
+ """)
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20240227174441_add_broadcast_permissions_table.ex b/lib/realtime/tenants/repo/migrations/20240227174441_add_broadcast_permissions_table.ex
new file mode 100644
index 0000000..37a8b31
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20240227174441_add_broadcast_permissions_table.ex
@@ -0,0 +1,19 @@
+defmodule Realtime.Tenants.Migrations.AddBroadcastsPoliciesTable do
+ @moduledoc false
+
+ use Ecto.Migration
+
+ def change do
+ create table(:broadcasts) do
+ add :channel_id, references(:channels, on_delete: :delete_all), null: false
+ add :check, :boolean, default: false, null: false
+ timestamps()
+ end
+
+ create unique_index(:broadcasts, :channel_id)
+
+ execute("ALTER TABLE realtime.broadcasts ENABLE row level security")
+ execute("GRANT SELECT ON realtime.broadcasts TO postgres, anon, authenticated, service_role")
+ execute("GRANT UPDATE ON realtime.broadcasts TO postgres, anon, authenticated, service_role")
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20240311171622_add_insert_and_delete_grant_to_channels.ex b/lib/realtime/tenants/repo/migrations/20240311171622_add_insert_and_delete_grant_to_channels.ex
new file mode 100644
index 0000000..dddbd23
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20240311171622_add_insert_and_delete_grant_to_channels.ex
@@ -0,0 +1,19 @@
+defmodule Realtime.Tenants.Migrations.AddInsertAndDeleteGrantToChannels do
+ @moduledoc false
+
+ use Ecto.Migration
+
+ def change do
+ execute("""
+ GRANT INSERT, DELETE ON realtime.channels TO postgres, anon, authenticated, service_role
+ """)
+
+ execute("""
+ GRANT INSERT ON realtime.broadcasts TO postgres, anon, authenticated, service_role
+ """)
+
+ execute("""
+ GRANT USAGE ON SEQUENCE realtime.broadcasts_id_seq TO postgres, anon, authenticated, service_role
+ """)
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20240321100241_add_presences_permissions_table.ex b/lib/realtime/tenants/repo/migrations/20240321100241_add_presences_permissions_table.ex
new file mode 100644
index 0000000..e2d0299
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20240321100241_add_presences_permissions_table.ex
@@ -0,0 +1,27 @@
+defmodule Realtime.Tenants.Migrations.AddPresencesPoliciesTable do
+ @moduledoc false
+
+ use Ecto.Migration
+
+ def change do
+ create table(:presences) do
+ add :channel_id, references(:channels, on_delete: :delete_all), null: false
+ add :check, :boolean, default: false, null: false
+ timestamps()
+ end
+
+ create unique_index(:presences, :channel_id)
+
+ execute("ALTER TABLE realtime.presences ENABLE row level security")
+ execute("GRANT SELECT ON realtime.presences TO postgres, anon, authenticated, service_role")
+ execute("GRANT UPDATE ON realtime.presences TO postgres, anon, authenticated, service_role")
+
+ execute("""
+ GRANT INSERT ON realtime.presences TO postgres, anon, authenticated, service_role
+ """)
+
+ execute("""
+ GRANT USAGE ON SEQUENCE realtime.presences_id_seq TO postgres, anon, authenticated, service_role
+ """)
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20240401105812_create_realtime_admin_and_move_ownership.ex b/lib/realtime/tenants/repo/migrations/20240401105812_create_realtime_admin_and_move_ownership.ex
new file mode 100644
index 0000000..c99754f
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20240401105812_create_realtime_admin_and_move_ownership.ex
@@ -0,0 +1,35 @@
+defmodule Realtime.Tenants.Migrations.CreateRealtimeAdminAndMoveOwnership do
+ @moduledoc false
+
+ use Ecto.Migration
+
+ def change do
+ execute("""
+ DO
+ $do$
+ BEGIN
+ IF EXISTS (
+ SELECT FROM pg_catalog.pg_roles
+ WHERE rolname = 'tealbase_realtime_admin') THEN
+
+ RAISE NOTICE 'Role "tealbase_realtime_admin" already exists. Skipping.';
+ ELSE
+ CREATE ROLE tealbase_realtime_admin WITH NOINHERIT NOLOGIN NOREPLICATION;
+ END IF;
+ END
+ $do$;
+ """)
+
+ execute("GRANT ALL PRIVILEGES ON SCHEMA realtime TO tealbase_realtime_admin")
+ execute("GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA realtime TO tealbase_realtime_admin")
+ execute("GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA realtime TO tealbase_realtime_admin")
+ execute("GRANT ALL PRIVILEGES ON ALL FUNCTIONS IN SCHEMA realtime TO tealbase_realtime_admin")
+
+ execute("ALTER table realtime.channels OWNER to tealbase_realtime_admin")
+ execute("ALTER table realtime.broadcasts OWNER to tealbase_realtime_admin")
+ execute("ALTER table realtime.presences OWNER TO tealbase_realtime_admin")
+ execute("ALTER function realtime.channel_name() owner to tealbase_realtime_admin")
+
+ execute("GRANT tealbase_realtime_admin TO postgres")
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20240418121054_remove_check_columns.ex b/lib/realtime/tenants/repo/migrations/20240418121054_remove_check_columns.ex
new file mode 100644
index 0000000..45cf865
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20240418121054_remove_check_columns.ex
@@ -0,0 +1,19 @@
+defmodule Realtime.Tenants.Migrations.RemoveCheckColumns do
+ @moduledoc false
+
+ use Ecto.Migration
+
+ def change do
+ alter table(:channels) do
+ remove :check
+ end
+
+ alter table(:broadcasts) do
+ remove :check
+ end
+
+ alter table(:presences) do
+ remove :check
+ end
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20240523004032_redefine_authorization_tables.ex b/lib/realtime/tenants/repo/migrations/20240523004032_redefine_authorization_tables.ex
new file mode 100644
index 0000000..9681362
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20240523004032_redefine_authorization_tables.ex
@@ -0,0 +1,45 @@
+defmodule Realtime.Tenants.Migrations.RedefineAuthorizationTables do
+ @moduledoc false
+
+ use Ecto.Migration
+
+ def change do
+ drop table(:broadcasts, mode: :cascade)
+ drop table(:presences, mode: :cascade)
+ drop table(:channels, mode: :cascade)
+
+ create_if_not_exists table(:messages) do
+ add :topic, :text, null: false
+ add :extension, :text, null: false
+ timestamps()
+ end
+
+ create_if_not_exists index(:messages, [:topic])
+
+ execute("ALTER TABLE realtime.messages ENABLE row level security")
+ execute("GRANT SELECT ON realtime.messages TO postgres, anon, authenticated, service_role")
+ execute("GRANT UPDATE ON realtime.messages TO postgres, anon, authenticated, service_role")
+
+ execute("""
+ GRANT INSERT ON realtime.messages TO postgres, anon, authenticated, service_role
+ """)
+
+ execute("""
+ GRANT USAGE ON SEQUENCE realtime.messages_id_seq TO postgres, anon, authenticated, service_role
+ """)
+
+ execute("ALTER table realtime.messages OWNER to tealbase_realtime_admin")
+
+ execute("""
+ DROP function realtime.channel_name
+ """)
+
+ execute("""
+ create or replace function realtime.topic() returns text as $$
+ select nullif(current_setting('realtime.topic', true), '')::text;
+ $$ language sql stable;
+ """)
+
+ execute("ALTER function realtime.topic() owner to tealbase_realtime_admin")
+ end
+end
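This migration consolidates the per-channel authorization tables into a single `realtime.messages` table gated by RLS, with `realtime.topic()` exposing the topic of the current connection. A minimal sketch of a policy keyed off it (the policy name and topic are hypothetical):

```sql
-- Hypothetical policy: authenticated users may receive broadcast messages
-- on the 'room:lobby' topic only.
create policy "authenticated can receive lobby broadcasts"
on realtime.messages
for select
to authenticated
using (
  realtime.topic() = 'room:lobby'
  and extension = 'broadcast'
);
```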
diff --git a/lib/realtime/tenants/repo/migrations/20240618124746_fix_walrus_role_handling.ex b/lib/realtime/tenants/repo/migrations/20240618124746_fix_walrus_role_handling.ex
new file mode 100644
index 0000000..cf767c6
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20240618124746_fix_walrus_role_handling.ex
@@ -0,0 +1,307 @@
+defmodule Realtime.Tenants.Migrations.FixWalrusRoleHandling do
+ @moduledoc false
+ use Ecto.Migration
+
+ def change do
+ execute """
+ create or replace function realtime.apply_rls(wal jsonb, max_record_bytes int = 1024 * 1024)
+ returns setof realtime.wal_rls
+ language plpgsql
+ volatile
+ as $$
+ declare
+ -- Regclass of the table e.g. public.notes
+ entity_ regclass = (quote_ident(wal ->> 'schema') || '.' || quote_ident(wal ->> 'table'))::regclass;
+
+ -- I, U, D, T: insert, update ...
+ action realtime.action = (
+ case wal ->> 'action'
+ when 'I' then 'INSERT'
+ when 'U' then 'UPDATE'
+ when 'D' then 'DELETE'
+ else 'ERROR'
+ end
+ );
+
+ -- Is row level security enabled for the table
+ is_rls_enabled bool = relrowsecurity from pg_class where oid = entity_;
+
+ subscriptions realtime.subscription[] = array_agg(subs)
+ from
+ realtime.subscription subs
+ where
+ subs.entity = entity_;
+
+ -- Subscription vars
+ roles regrole[] = array_agg(distinct us.claims_role::text)
+ from
+ unnest(subscriptions) us;
+
+ working_role regrole;
+ claimed_role regrole;
+ claims jsonb;
+
+ subscription_id uuid;
+ subscription_has_access bool;
+ visible_to_subscription_ids uuid[] = '{}';
+
+ -- structured info for wal's columns
+ columns realtime.wal_column[];
+ -- previous identity values for update/delete
+ old_columns realtime.wal_column[];
+
+ error_record_exceeds_max_size boolean = octet_length(wal::text) > max_record_bytes;
+
+ -- Primary jsonb output for record
+ output jsonb;
+
+ begin
+ perform set_config('role', null, true);
+
+ columns =
+ array_agg(
+ (
+ x->>'name',
+ x->>'type',
+ x->>'typeoid',
+ realtime.cast(
+ (x->'value') #>> '{}',
+ coalesce(
+ (x->>'typeoid')::regtype, -- null when wal2json version <= 2.4
+ (x->>'type')::regtype
+ )
+ ),
+ (pks ->> 'name') is not null,
+ true
+ )::realtime.wal_column
+ )
+ from
+ jsonb_array_elements(wal -> 'columns') x
+ left join jsonb_array_elements(wal -> 'pk') pks
+ on (x ->> 'name') = (pks ->> 'name');
+
+ old_columns =
+ array_agg(
+ (
+ x->>'name',
+ x->>'type',
+ x->>'typeoid',
+ realtime.cast(
+ (x->'value') #>> '{}',
+ coalesce(
+ (x->>'typeoid')::regtype, -- null when wal2json version <= 2.4
+ (x->>'type')::regtype
+ )
+ ),
+ (pks ->> 'name') is not null,
+ true
+ )::realtime.wal_column
+ )
+ from
+ jsonb_array_elements(wal -> 'identity') x
+ left join jsonb_array_elements(wal -> 'pk') pks
+ on (x ->> 'name') = (pks ->> 'name');
+
+ for working_role in select * from unnest(roles) loop
+
+ -- Update `is_selectable` for columns and old_columns
+ columns =
+ array_agg(
+ (
+ c.name,
+ c.type_name,
+ c.type_oid,
+ c.value,
+ c.is_pkey,
+ pg_catalog.has_column_privilege(working_role, entity_, c.name, 'SELECT')
+ )::realtime.wal_column
+ )
+ from
+ unnest(columns) c;
+
+ old_columns =
+ array_agg(
+ (
+ c.name,
+ c.type_name,
+ c.type_oid,
+ c.value,
+ c.is_pkey,
+ pg_catalog.has_column_privilege(working_role, entity_, c.name, 'SELECT')
+ )::realtime.wal_column
+ )
+ from
+ unnest(old_columns) c;
+
+ if action <> 'DELETE' and count(1) = 0 from unnest(columns) c where c.is_pkey then
+ return next (
+ jsonb_build_object(
+ 'schema', wal ->> 'schema',
+ 'table', wal ->> 'table',
+ 'type', action
+ ),
+ is_rls_enabled,
+ -- subscriptions is already filtered by entity
+ (select array_agg(s.subscription_id) from unnest(subscriptions) as s where claims_role = working_role),
+ array['Error 400: Bad Request, no primary key']
+ )::realtime.wal_rls;
+
+ -- The claims role does not have SELECT permission to the primary key of entity
+ elsif action <> 'DELETE' and sum(c.is_selectable::int) <> count(1) from unnest(columns) c where c.is_pkey then
+ return next (
+ jsonb_build_object(
+ 'schema', wal ->> 'schema',
+ 'table', wal ->> 'table',
+ 'type', action
+ ),
+ is_rls_enabled,
+ (select array_agg(s.subscription_id) from unnest(subscriptions) as s where claims_role = working_role),
+ array['Error 401: Unauthorized']
+ )::realtime.wal_rls;
+
+ else
+ output = jsonb_build_object(
+ 'schema', wal ->> 'schema',
+ 'table', wal ->> 'table',
+ 'type', action,
+ 'commit_timestamp', to_char(
+ ((wal ->> 'timestamp')::timestamptz at time zone 'utc'),
+ 'YYYY-MM-DD"T"HH24:MI:SS.MS"Z"'
+ ),
+ 'columns', (
+ select
+ jsonb_agg(
+ jsonb_build_object(
+ 'name', pa.attname,
+ 'type', pt.typname
+ )
+ order by pa.attnum asc
+ )
+ from
+ pg_attribute pa
+ join pg_type pt
+ on pa.atttypid = pt.oid
+ where
+ attrelid = entity_
+ and attnum > 0
+ and pg_catalog.has_column_privilege(working_role, entity_, pa.attname, 'SELECT')
+ )
+ )
+ -- Add "record" key for insert and update
+ || case
+ when action in ('INSERT', 'UPDATE') then
+ jsonb_build_object(
+ 'record',
+ (
+ select
+ jsonb_object_agg(
+ -- if unchanged toast, get column name and value from old record
+ coalesce((c).name, (oc).name),
+ case
+ when (c).name is null then (oc).value
+ else (c).value
+ end
+ )
+ from
+ unnest(columns) c
+ full outer join unnest(old_columns) oc
+ on (c).name = (oc).name
+ where
+ coalesce((c).is_selectable, (oc).is_selectable)
+ and ( not error_record_exceeds_max_size or (octet_length((c).value::text) <= 64))
+ )
+ )
+ else '{}'::jsonb
+ end
+ -- Add "old_record" key for update and delete
+ || case
+ when action = 'UPDATE' then
+ jsonb_build_object(
+ 'old_record',
+ (
+ select jsonb_object_agg((c).name, (c).value)
+ from unnest(old_columns) c
+ where
+ (c).is_selectable
+ and ( not error_record_exceeds_max_size or (octet_length((c).value::text) <= 64))
+ )
+ )
+ when action = 'DELETE' then
+ jsonb_build_object(
+ 'old_record',
+ (
+ select jsonb_object_agg((c).name, (c).value)
+ from unnest(old_columns) c
+ where
+ (c).is_selectable
+ and ( not error_record_exceeds_max_size or (octet_length((c).value::text) <= 64))
+ and ( not is_rls_enabled or (c).is_pkey ) -- if RLS enabled, we can't secure deletes so filter to pkey
+ )
+ )
+ else '{}'::jsonb
+ end;
+
+ -- Create the prepared statement
+ if is_rls_enabled and action <> 'DELETE' then
+ if (select 1 from pg_prepared_statements where name = 'walrus_rls_stmt' limit 1) > 0 then
+ deallocate walrus_rls_stmt;
+ end if;
+ execute realtime.build_prepared_statement_sql('walrus_rls_stmt', entity_, columns);
+ end if;
+
+ visible_to_subscription_ids = '{}';
+
+ for subscription_id, claims in (
+ select
+ subs.subscription_id,
+ subs.claims
+ from
+ unnest(subscriptions) subs
+ where
+ subs.entity = entity_
+ and subs.claims_role = working_role
+ and (
+ realtime.is_visible_through_filters(columns, subs.filters)
+ or action = 'DELETE'
+ )
+ ) loop
+
+ if not is_rls_enabled or action = 'DELETE' then
+ visible_to_subscription_ids = visible_to_subscription_ids || subscription_id;
+ else
+ -- Check if RLS allows the role to see the record
+ perform
+ -- Trim leading and trailing quotes from working_role because set_config
+ -- doesn't recognize the role as valid if they are included
+ set_config('role', trim(both '"' from working_role::text), true),
+ set_config('request.jwt.claims', claims::text, true);
+
+ execute 'execute walrus_rls_stmt' into subscription_has_access;
+
+ if subscription_has_access then
+ visible_to_subscription_ids = visible_to_subscription_ids || subscription_id;
+ end if;
+ end if;
+ end loop;
+
+ perform set_config('role', null, true);
+
+ return next (
+ output,
+ is_rls_enabled,
+ visible_to_subscription_ids,
+ case
+ when error_record_exceeds_max_size then array['Error 413: Payload Too Large']
+ else '{}'
+ end
+ )::realtime.wal_rls;
+
+ end if;
+ end loop;
+
+ perform set_config('role', null, true);
+ end;
+ $$;
+ """
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20240801235015_unlogged_messages_table.ex b/lib/realtime/tenants/repo/migrations/20240801235015_unlogged_messages_table.ex
new file mode 100644
index 0000000..87dee33
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20240801235015_unlogged_messages_table.ex
@@ -0,0 +1,11 @@
+defmodule Realtime.Tenants.Migrations.UnloggedMessagesTable do
+ @moduledoc false
+ use Ecto.Migration
+
+ def change do
+ execute """
+ -- Commented out for OrioleDB compatibility
+ -- ALTER TABLE realtime.messages SET UNLOGGED;
+ """
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20240805133720_logged_messages_table.ex b/lib/realtime/tenants/repo/migrations/20240805133720_logged_messages_table.ex
new file mode 100644
index 0000000..bd5371d
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20240805133720_logged_messages_table.ex
@@ -0,0 +1,11 @@
+defmodule Realtime.Tenants.Migrations.LoggedMessagesTable do
+ @moduledoc false
+ use Ecto.Migration
+
+ def change do
+ execute """
+ -- Commented out for OrioleDB compatibility
+ -- ALTER TABLE realtime.messages SET LOGGED;
+ """
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20240827160934_filter_delete_postgres_changes.ex b/lib/realtime/tenants/repo/migrations/20240827160934_filter_delete_postgres_changes.ex
new file mode 100644
index 0000000..3deb264
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20240827160934_filter_delete_postgres_changes.ex
@@ -0,0 +1,310 @@
+defmodule Realtime.Tenants.Migrations.FilterDeletePostgresChanges do
+ @moduledoc false
+ use Ecto.Migration
+
+ def change do
+ execute """
+ create or replace function realtime.apply_rls(wal jsonb, max_record_bytes int = 1024 * 1024)
+ returns setof realtime.wal_rls
+ language plpgsql
+ volatile
+ as $$
+ declare
+ -- Regclass of the table e.g. public.notes
+ entity_ regclass = (quote_ident(wal ->> 'schema') || '.' || quote_ident(wal ->> 'table'))::regclass;
+
+ -- I, U, D, T: insert, update ...
+ action realtime.action = (
+ case wal ->> 'action'
+ when 'I' then 'INSERT'
+ when 'U' then 'UPDATE'
+ when 'D' then 'DELETE'
+ else 'ERROR'
+ end
+ );
+
+ -- Is row level security enabled for the table
+ is_rls_enabled bool = relrowsecurity from pg_class where oid = entity_;
+
+ subscriptions realtime.subscription[] = array_agg(subs)
+ from
+ realtime.subscription subs
+ where
+ subs.entity = entity_;
+
+ -- Subscription vars
+ roles regrole[] = array_agg(distinct us.claims_role::text)
+ from
+ unnest(subscriptions) us;
+
+ working_role regrole;
+ claimed_role regrole;
+ claims jsonb;
+
+ subscription_id uuid;
+ subscription_has_access bool;
+ visible_to_subscription_ids uuid[] = '{}';
+
+ -- structured info for wal's columns
+ columns realtime.wal_column[];
+ -- previous identity values for update/delete
+ old_columns realtime.wal_column[];
+
+ error_record_exceeds_max_size boolean = octet_length(wal::text) > max_record_bytes;
+
+ -- Primary jsonb output for record
+ output jsonb;
+
+ begin
+ perform set_config('role', null, true);
+
+ columns =
+ array_agg(
+ (
+ x->>'name',
+ x->>'type',
+ x->>'typeoid',
+ realtime.cast(
+ (x->'value') #>> '{}',
+ coalesce(
+ (x->>'typeoid')::regtype, -- null when wal2json version <= 2.4
+ (x->>'type')::regtype
+ )
+ ),
+ (pks ->> 'name') is not null,
+ true
+ )::realtime.wal_column
+ )
+ from
+ jsonb_array_elements(wal -> 'columns') x
+ left join jsonb_array_elements(wal -> 'pk') pks
+ on (x ->> 'name') = (pks ->> 'name');
+
+ old_columns =
+ array_agg(
+ (
+ x->>'name',
+ x->>'type',
+ x->>'typeoid',
+ realtime.cast(
+ (x->'value') #>> '{}',
+ coalesce(
+ (x->>'typeoid')::regtype, -- null when wal2json version <= 2.4
+ (x->>'type')::regtype
+ )
+ ),
+ (pks ->> 'name') is not null,
+ true
+ )::realtime.wal_column
+ )
+ from
+ jsonb_array_elements(wal -> 'identity') x
+ left join jsonb_array_elements(wal -> 'pk') pks
+ on (x ->> 'name') = (pks ->> 'name');
+
+ for working_role in select * from unnest(roles) loop
+
+ -- Update `is_selectable` for columns and old_columns
+ columns =
+ array_agg(
+ (
+ c.name,
+ c.type_name,
+ c.type_oid,
+ c.value,
+ c.is_pkey,
+ pg_catalog.has_column_privilege(working_role, entity_, c.name, 'SELECT')
+ )::realtime.wal_column
+ )
+ from
+ unnest(columns) c;
+
+ old_columns =
+ array_agg(
+ (
+ c.name,
+ c.type_name,
+ c.type_oid,
+ c.value,
+ c.is_pkey,
+ pg_catalog.has_column_privilege(working_role, entity_, c.name, 'SELECT')
+ )::realtime.wal_column
+ )
+ from
+ unnest(old_columns) c;
+
+ if action <> 'DELETE' and count(1) = 0 from unnest(columns) c where c.is_pkey then
+ return next (
+ jsonb_build_object(
+ 'schema', wal ->> 'schema',
+ 'table', wal ->> 'table',
+ 'type', action
+ ),
+ is_rls_enabled,
+ -- subscriptions is already filtered by entity
+ (select array_agg(s.subscription_id) from unnest(subscriptions) as s where claims_role = working_role),
+ array['Error 400: Bad Request, no primary key']
+ )::realtime.wal_rls;
+
+ -- The claims role does not have SELECT permission to the primary key of entity
+ elsif action <> 'DELETE' and sum(c.is_selectable::int) <> count(1) from unnest(columns) c where c.is_pkey then
+ return next (
+ jsonb_build_object(
+ 'schema', wal ->> 'schema',
+ 'table', wal ->> 'table',
+ 'type', action
+ ),
+ is_rls_enabled,
+ (select array_agg(s.subscription_id) from unnest(subscriptions) as s where claims_role = working_role),
+ array['Error 401: Unauthorized']
+ )::realtime.wal_rls;
+
+ else
+ output = jsonb_build_object(
+ 'schema', wal ->> 'schema',
+ 'table', wal ->> 'table',
+ 'type', action,
+ 'commit_timestamp', to_char(
+ ((wal ->> 'timestamp')::timestamptz at time zone 'utc'),
+ 'YYYY-MM-DD"T"HH24:MI:SS.MS"Z"'
+ ),
+ 'columns', (
+ select
+ jsonb_agg(
+ jsonb_build_object(
+ 'name', pa.attname,
+ 'type', pt.typname
+ )
+ order by pa.attnum asc
+ )
+ from
+ pg_attribute pa
+ join pg_type pt
+ on pa.atttypid = pt.oid
+ where
+ attrelid = entity_
+ and attnum > 0
+ and pg_catalog.has_column_privilege(working_role, entity_, pa.attname, 'SELECT')
+ )
+ )
+ -- Add "record" key for insert and update
+ || case
+ when action in ('INSERT', 'UPDATE') then
+ jsonb_build_object(
+ 'record',
+ (
+ select
+ jsonb_object_agg(
+ -- if unchanged toast, get column name and value from old record
+ coalesce((c).name, (oc).name),
+ case
+ when (c).name is null then (oc).value
+ else (c).value
+ end
+ )
+ from
+ unnest(columns) c
+ full outer join unnest(old_columns) oc
+ on (c).name = (oc).name
+ where
+ coalesce((c).is_selectable, (oc).is_selectable)
+ and ( not error_record_exceeds_max_size or (octet_length((c).value::text) <= 64))
+ )
+ )
+ else '{}'::jsonb
+ end
+ -- Add "old_record" key for update and delete
+ || case
+ when action = 'UPDATE' then
+ jsonb_build_object(
+ 'old_record',
+ (
+ select jsonb_object_agg((c).name, (c).value)
+ from unnest(old_columns) c
+ where
+ (c).is_selectable
+ and ( not error_record_exceeds_max_size or (octet_length((c).value::text) <= 64))
+ )
+ )
+ when action = 'DELETE' then
+ jsonb_build_object(
+ 'old_record',
+ (
+ select jsonb_object_agg((c).name, (c).value)
+ from unnest(old_columns) c
+ where
+ (c).is_selectable
+ and ( not error_record_exceeds_max_size or (octet_length((c).value::text) <= 64))
+ and ( not is_rls_enabled or (c).is_pkey ) -- if RLS enabled, we can't secure deletes so filter to pkey
+ )
+ )
+ else '{}'::jsonb
+ end;
+
+ -- Create the prepared statement
+ if is_rls_enabled and action <> 'DELETE' then
+ if (select 1 from pg_prepared_statements where name = 'walrus_rls_stmt' limit 1) > 0 then
+ deallocate walrus_rls_stmt;
+ end if;
+ execute realtime.build_prepared_statement_sql('walrus_rls_stmt', entity_, columns);
+ end if;
+
+ visible_to_subscription_ids = '{}';
+
+ for subscription_id, claims in (
+ select
+ subs.subscription_id,
+ subs.claims
+ from
+ unnest(subscriptions) subs
+ where
+ subs.entity = entity_
+ and subs.claims_role = working_role
+ and (
+ realtime.is_visible_through_filters(columns, subs.filters)
+ or (
+ action = 'DELETE'
+ and realtime.is_visible_through_filters(old_columns, subs.filters)
+ )
+ )
+ ) loop
+
+ if not is_rls_enabled or action = 'DELETE' then
+ visible_to_subscription_ids = visible_to_subscription_ids || subscription_id;
+ else
+ -- Check if RLS allows the role to see the record
+ perform
+ -- Trim leading and trailing quotes from working_role because set_config
+ -- doesn't recognize the role as valid if they are included
+ set_config('role', trim(both '"' from working_role::text), true),
+ set_config('request.jwt.claims', claims::text, true);
+
+ execute 'execute walrus_rls_stmt' into subscription_has_access;
+
+ if subscription_has_access then
+ visible_to_subscription_ids = visible_to_subscription_ids || subscription_id;
+ end if;
+ end if;
+ end loop;
+
+ perform set_config('role', null, true);
+
+ return next (
+ output,
+ is_rls_enabled,
+ visible_to_subscription_ids,
+ case
+ when error_record_exceeds_max_size then array['Error 413: Payload Too Large']
+ else '{}'
+ end
+ )::realtime.wal_rls;
+
+ end if;
+ end loop;
+
+ perform set_config('role', null, true);
+ end;
+ $$;
+ """
+ end
+end
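The only behavioral change from the previous `apply_rls` definition is in the subscription loop above: DELETE events are now matched against each subscriber's filters using `old_columns` (the replica-identity values) instead of being delivered to every subscriber on the entity.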
diff --git a/lib/realtime/tenants/repo/migrations/20240919163303_add_payload_to_messages.ex b/lib/realtime/tenants/repo/migrations/20240919163303_add_payload_to_messages.ex
new file mode 100644
index 0000000..5c5fcdf
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20240919163303_add_payload_to_messages.ex
@@ -0,0 +1,55 @@
+defmodule Realtime.Tenants.Migrations.AddPayloadToMessages do
+ @moduledoc false
+ use Ecto.Migration
+
+ def change do
+ alter table(:messages) do
+ add_if_not_exists :payload, :map
+ add_if_not_exists :event, :text
+ add_if_not_exists :topic, :text
+ add_if_not_exists :private, :boolean, default: true
+
+ modify :inserted_at, :utc_datetime, default: fragment("now()")
+ modify :updated_at, :utc_datetime, default: fragment("now()")
+ end
+
+ execute """
+ CREATE OR REPLACE FUNCTION realtime.send(payload jsonb, event text, topic text, private boolean DEFAULT true)
+ RETURNS void
+ AS $$
+ BEGIN
+ INSERT INTO realtime.messages (payload, event, topic, private, extension)
+ VALUES (payload, event, topic, private, 'broadcast');
+ END;
+ $$
+ LANGUAGE plpgsql;
+ """
+
+ execute """
+ CREATE OR REPLACE FUNCTION realtime.broadcast_changes (topic_name text, event_name text, operation text, table_name text, table_schema text, NEW record, OLD record, level text DEFAULT 'ROW')
+ RETURNS void
+ AS $$
+ DECLARE
+ -- Declare a variable to hold the JSONB representation of the row
+ row_data jsonb := '{}'::jsonb;
+ BEGIN
+ IF level = 'STATEMENT' THEN
+ RAISE EXCEPTION 'function can only be triggered for each row, not for each statement';
+ END IF;
+ -- Check the operation type and handle accordingly
+ IF operation = 'INSERT' OR operation = 'UPDATE' OR operation = 'DELETE' THEN
+ row_data := jsonb_build_object('old_record', OLD, 'record', NEW, 'operation', operation, 'table', table_name, 'schema', table_schema);
+ PERFORM realtime.send (row_data, event_name, topic_name);
+ ELSE
+ RAISE EXCEPTION 'Unexpected operation type: %', operation;
+ END IF;
+ EXCEPTION
+ WHEN OTHERS THEN
+ RAISE EXCEPTION 'Failed to process the row: %', SQLERRM;
+ END;
+
+ $$
+ LANGUAGE plpgsql;
+ """
+ end
+end
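`realtime.broadcast_changes/8` is meant to be called from row-level triggers, passing `TG_OP` and the `NEW`/`OLD` records through to `realtime.send`. A sketch under assumptions (the `public.notes` table and the trigger/function names are hypothetical):

```sql
-- Hypothetical trigger: broadcast every change on public.notes to a
-- per-row topic via realtime.broadcast_changes.
create or replace function public.notes_changes()
returns trigger
language plpgsql
as $$
begin
  perform realtime.broadcast_changes(
    'notes:' || coalesce(new.id, old.id)::text, -- topic_name
    tg_op,           -- event_name
    tg_op,           -- operation: INSERT / UPDATE / DELETE
    tg_table_name,   -- table_name
    tg_table_schema, -- table_schema
    new,             -- null on DELETE
    old              -- null on INSERT
  );
  return null;
end;
$$;

create trigger notes_broadcast_changes
after insert or update or delete on public.notes
for each row execute function public.notes_changes();
```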
diff --git a/lib/realtime/tenants/repo/migrations/20240919163305_change_messages_id_type.ex b/lib/realtime/tenants/repo/migrations/20240919163305_change_messages_id_type.ex
new file mode 100644
index 0000000..e64a784
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20240919163305_change_messages_id_type.ex
@@ -0,0 +1,10 @@
+defmodule Realtime.Tenants.Migrations.ChangeMessagesIdType do
+ @moduledoc false
+ use Ecto.Migration
+
+ def change do
+ alter table(:messages) do
+ add_if_not_exists :uuid, :uuid
+ end
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20241019105805_uuid_auto_generation.ex b/lib/realtime/tenants/repo/migrations/20241019105805_uuid_auto_generation.ex
new file mode 100644
index 0000000..b9acc9c
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20241019105805_uuid_auto_generation.ex
@@ -0,0 +1,10 @@
+defmodule Realtime.Tenants.Migrations.UuidAutoGeneration do
+ @moduledoc false
+ use Ecto.Migration
+
+ def change do
+ alter table(:messages) do
+ modify :uuid, :uuid, null: false, default: fragment("gen_random_uuid()")
+ end
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20241030150047_messages_partitioning.ex b/lib/realtime/tenants/repo/migrations/20241030150047_messages_partitioning.ex
new file mode 100644
index 0000000..94e6397
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20241030150047_messages_partitioning.ex
@@ -0,0 +1,126 @@
+defmodule Realtime.Tenants.Migrations.MessagesPartitioning do
+ @moduledoc false
+ use Ecto.Migration
+
+ def change do
+ execute("""
+ CREATE TABLE IF NOT EXISTS realtime.messages_new (
+ id BIGSERIAL,
+ uuid TEXT DEFAULT gen_random_uuid(),
+ topic TEXT NOT NULL,
+ extension TEXT NOT NULL,
+ payload JSONB,
+ event TEXT,
+ private BOOLEAN DEFAULT FALSE,
+ updated_at TIMESTAMP NOT NULL DEFAULT NOW(),
+ inserted_at TIMESTAMP NOT NULL DEFAULT NOW(),
+ PRIMARY KEY (id, inserted_at)
+ ) PARTITION BY RANGE (inserted_at)
+ """)
+
+ execute("ALTER TABLE realtime.messages_new ENABLE ROW LEVEL SECURITY")
+
+ execute("""
+ DO $$
+ DECLARE
+ rec record;
+ sql text;
+ role_list text;
+ BEGIN
+ FOR rec IN
+ SELECT *
+ FROM pg_policies
+ WHERE schemaname = 'realtime'
+ AND tablename = 'messages'
+ LOOP
+ -- Start constructing the create policy statement
+ sql := 'CREATE POLICY ' || quote_ident(rec.policyname) ||
+ ' ON realtime.messages_new ';
+
+ IF (rec.permissive = 'PERMISSIVE') THEN
+ sql := sql || 'AS PERMISSIVE ';
+ ELSE
+ sql := sql || 'AS RESTRICTIVE ';
+ END IF;
+
+ sql := sql || ' FOR ' || rec.cmd;
+
+ -- Include roles if specified
+ IF rec.roles IS NOT NULL AND array_length(rec.roles, 1) > 0 THEN
+ role_list := (
+ SELECT string_agg(quote_ident(role), ', ')
+ FROM unnest(rec.roles) AS role
+ );
+ sql := sql || ' TO ' || role_list;
+ END IF;
+
+ -- Include using clause if specified
+ IF rec.qual IS NOT NULL THEN
+ sql := sql || ' USING (' || rec.qual || ')';
+ END IF;
+
+ -- Include with check clause if specified
+ IF rec.with_check IS NOT NULL THEN
+ sql := sql || ' WITH CHECK (' || rec.with_check || ')';
+ END IF;
+
+ -- Output the constructed sql for debugging purposes
+ RAISE NOTICE 'Executing: %', sql;
+
+ -- Execute the constructed sql statement
+ EXECUTE sql;
+ END LOOP;
+ END
+ $$
+ """)
+
+ execute("ALTER TABLE realtime.messages RENAME TO messages_old")
+ execute("ALTER TABLE realtime.messages_new RENAME TO messages")
+ execute("DROP TABLE realtime.messages_old")
+
+ execute("CREATE SEQUENCE IF NOT EXISTS realtime.messages_id_seq")
+
+ execute("ALTER TABLE realtime.messages ALTER COLUMN id SET DEFAULT nextval('realtime.messages_id_seq')")
+
+ execute("ALTER table realtime.messages OWNER to tealbase_realtime_admin")
+
+ execute("GRANT USAGE ON SEQUENCE realtime.messages_id_seq TO postgres, anon, authenticated, service_role")
+
+ execute("GRANT SELECT ON realtime.messages TO postgres, anon, authenticated, service_role")
+ execute("GRANT UPDATE ON realtime.messages TO postgres, anon, authenticated, service_role")
+ execute("GRANT INSERT ON realtime.messages TO postgres, anon, authenticated, service_role")
+
+ execute("ALTER TABLE realtime.messages ENABLE ROW LEVEL SECURITY")
+
+ execute("""
+ CREATE OR REPLACE FUNCTION realtime.send(payload jsonb, event text, topic text, private boolean DEFAULT true)
+ RETURNS void
+ AS $$
+ DECLARE
+ partition_name text;
+ BEGIN
+ partition_name := 'messages_' || to_char(NOW(), 'YYYY_MM_DD');
+
+ IF NOT EXISTS (
+ SELECT 1
+ FROM pg_class c
+ JOIN pg_namespace n ON n.oid = c.relnamespace
+ WHERE n.nspname = 'realtime'
+ AND c.relname = partition_name
+ ) THEN
+ EXECUTE format(
+ 'CREATE TABLE %I PARTITION OF realtime.messages FOR VALUES FROM (%L) TO (%L)',
+ partition_name,
+ NOW(),
+ (NOW() + interval '1 day')::timestamp
+ );
+ END IF;
+
+ INSERT INTO realtime.messages (payload, event, topic, private, extension)
+ VALUES (payload, event, topic, private, 'broadcast');
+ END;
+ $$
+ LANGUAGE plpgsql;
+ """)
+ end
+end
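With partitioning in place, `realtime.send` lazily creates a daily partition before inserting. An illustrative call (the payload, event, and topic values are made up):

```sql
-- Inserts into realtime.messages, first creating the current day's
-- partition (e.g. messages_2024_10_30) if it does not exist yet.
select realtime.send(
  '{"hello": "world"}'::jsonb, -- payload
  'my-event',                  -- event
  'room:lobby',                -- topic
  true                         -- private
);
```

Note that this version builds the partition DDL without the `realtime.` schema prefix; the `FixSendFunction` migration below corrects that.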
diff --git a/lib/realtime/tenants/repo/migrations/20241108114728_messages_using_uuid.ex b/lib/realtime/tenants/repo/migrations/20241108114728_messages_using_uuid.ex
new file mode 100644
index 0000000..0accb47
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20241108114728_messages_using_uuid.ex
@@ -0,0 +1,15 @@
+defmodule Realtime.Tenants.Migrations.MessagesUsingUuid do
+ @moduledoc false
+ use Ecto.Migration
+
+ def change do
+ alter table(:messages) do
+ remove(:id)
+ remove(:uuid)
+ add(:id, :uuid, null: false, default: fragment("gen_random_uuid()"))
+ end
+
+ execute("ALTER TABLE realtime.messages ADD PRIMARY KEY (id, inserted_at)")
+ execute("DROP SEQUENCE realtime.messages_id_seq")
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20241121104152_fix_send_function_.ex b/lib/realtime/tenants/repo/migrations/20241121104152_fix_send_function_.ex
new file mode 100644
index 0000000..144984b
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20241121104152_fix_send_function_.ex
@@ -0,0 +1,38 @@
+defmodule Realtime.Tenants.Migrations.FixSendFunction do
+ @moduledoc false
+ use Ecto.Migration
+
+ # We missed the schema prefix of `realtime.` in the create table partition statement
+ def change do
+ execute("""
+ CREATE OR REPLACE FUNCTION realtime.send(payload jsonb, event text, topic text, private boolean DEFAULT true)
+ RETURNS void
+ AS $$
+ DECLARE
+ partition_name text;
+ BEGIN
+ partition_name := 'messages_' || to_char(NOW(), 'YYYY_MM_DD');
+
+ IF NOT EXISTS (
+ SELECT 1
+ FROM pg_class c
+ JOIN pg_namespace n ON n.oid = c.relnamespace
+ WHERE n.nspname = 'realtime'
+ AND c.relname = partition_name
+ ) THEN
+ EXECUTE format(
+ 'CREATE TABLE realtime.%I PARTITION OF realtime.messages FOR VALUES FROM (%L) TO (%L)',
+ partition_name,
+ NOW(),
+ (NOW() + interval '1 day')::timestamp
+ );
+ END IF;
+
+ INSERT INTO realtime.messages (payload, event, topic, private, extension)
+ VALUES (payload, event, topic, private, 'broadcast');
+ END;
+ $$
+ LANGUAGE plpgsql;
+ """)
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20241130184212_recreate_entity_index_using_btree.ex b/lib/realtime/tenants/repo/migrations/20241130184212_recreate_entity_index_using_btree.ex
new file mode 100644
index 0000000..3684632
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20241130184212_recreate_entity_index_using_btree.ex
@@ -0,0 +1,18 @@
+defmodule Realtime.Tenants.Migrations.RecreateEntityIndexUsingBtree do
+ @moduledoc false
+ use Ecto.Migration
+
+ def change do
+ execute("drop index if exists \"realtime\".\"ix_realtime_subscription_entity\"")
+
+ execute("""
+ do $$
+ begin
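+ -- note: create index concurrently cannot run inside a transaction block,
+ -- and a DO block always runs in one, so the handler below is expected to
+ -- fall back to a plain (blocking) create index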
+ create index concurrently if not exists ix_realtime_subscription_entity on realtime.subscription using btree (entity);
+ exception
+ when others then
+ create index if not exists ix_realtime_subscription_entity on realtime.subscription using btree (entity);
+ end$$;
+ """)
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20241220035512_fix_send_function_partition_creation.ex b/lib/realtime/tenants/repo/migrations/20241220035512_fix_send_function_partition_creation.ex
new file mode 100644
index 0000000..0d6c79e
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20241220035512_fix_send_function_partition_creation.ex
@@ -0,0 +1,38 @@
+defmodule Realtime.Tenants.Migrations.FixSendFunctionPartitionCreation do
+ @moduledoc false
+ use Ecto.Migration
+
+ # Align partition bounds to day boundaries and tolerate concurrent partition creation (duplicate_table)
+ def change do
+ execute("""
+ CREATE OR REPLACE FUNCTION realtime.send(payload jsonb, event text, topic text, private boolean DEFAULT true)
+ RETURNS void
+ AS $$
+ DECLARE
+ partition_name text;
+ partition_start timestamp;
+ partition_end timestamp;
+ BEGIN
+ partition_start := date_trunc('day', NOW());
+ partition_end := partition_start + interval '1 day';
+ partition_name := 'messages_' || to_char(partition_start, 'YYYY_MM_DD');
+
+ BEGIN
+ EXECUTE format(
+ 'CREATE TABLE IF NOT EXISTS realtime.%I PARTITION OF realtime.messages FOR VALUES FROM (%L) TO (%L)',
+ partition_name,
+ partition_start,
+ partition_end
+ );
+ EXCEPTION WHEN duplicate_table THEN
+ -- Ignore; table already exists
+ END;
+
+ INSERT INTO realtime.messages (payload, event, topic, private, extension)
+ VALUES (payload, event, topic, private, 'broadcast');
+ END;
+ $$
+ LANGUAGE plpgsql;
+ """)
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20241220123912_realtime_send_handle_exceptions_remove_partition_creation.ex b/lib/realtime/tenants/repo/migrations/20241220123912_realtime_send_handle_exceptions_remove_partition_creation.ex
new file mode 100644
index 0000000..d93d87b
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20241220123912_realtime_send_handle_exceptions_remove_partition_creation.ex
@@ -0,0 +1,34 @@
+defmodule Realtime.Tenants.Migrations.RealtimeSendHandleExceptionsRemovePartitionCreation do
+ @moduledoc false
+ use Ecto.Migration
+
+ # Remove partition creation from realtime.send and report insert failures via pg_notify on 'realtime:system' instead of raising
+ def change do
+ execute("""
+ CREATE OR REPLACE FUNCTION realtime.send(payload jsonb, event text, topic text, private boolean DEFAULT true ) RETURNS void
+ AS $$
+ BEGIN
+ BEGIN
+ -- Attempt to insert the message
+ INSERT INTO realtime.messages (payload, event, topic, private, extension)
+ VALUES (payload, event, topic, private, 'broadcast');
+ EXCEPTION
+ WHEN OTHERS THEN
+ -- Capture and notify the error
+ PERFORM pg_notify(
+ 'realtime:system',
+ jsonb_build_object(
+ 'error', SQLERRM,
+ 'function', 'realtime.send',
+ 'event', event,
+ 'topic', topic,
+ 'private', private
+ )::text
+ );
+ END;
+ END;
+ $$
+ LANGUAGE plpgsql;
+ """)
+ end
+end
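Failures inside `realtime.send` are now swallowed and reported through `pg_notify` rather than aborting the caller's transaction. A sketch of observing them from a psql session:

```sql
-- Subscribe to the error channel used by realtime.send; failed inserts
-- arrive as NOTIFY payloads (JSON with error, function, event, topic, private).
listen "realtime:system";
```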
diff --git a/lib/realtime/tenants/repo/migrations/20241224161212_realtime_send_sets_config.ex b/lib/realtime/tenants/repo/migrations/20241224161212_realtime_send_sets_config.ex
new file mode 100644
index 0000000..7510253
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20241224161212_realtime_send_sets_config.ex
@@ -0,0 +1,37 @@
+defmodule Realtime.Tenants.Migrations.RealtimeSendSetsConfig do
+ @moduledoc false
+ use Ecto.Migration
+
+ # Have realtime.send set the realtime.topic configuration before inserting the message
+ def change do
+ execute("""
+ CREATE OR REPLACE FUNCTION realtime.send(payload jsonb, event text, topic text, private boolean DEFAULT true ) RETURNS void
+ AS $$
+ BEGIN
+ BEGIN
+ -- Set the topic configuration
+ SET LOCAL realtime.topic TO topic;
+
+ -- Attempt to insert the message
+ INSERT INTO realtime.messages (payload, event, topic, private, extension)
+ VALUES (payload, event, topic, private, 'broadcast');
+ EXCEPTION
+ WHEN OTHERS THEN
+ -- Capture and notify the error
+ PERFORM pg_notify(
+ 'realtime:system',
+ jsonb_build_object(
+ 'error', SQLERRM,
+ 'function', 'realtime.send',
+ 'event', event,
+ 'topic', topic,
+ 'private', private
+ )::text
+ );
+ END;
+ END;
+ $$
+ LANGUAGE plpgsql;
+ """)
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20250107150512_realtime_subscription_unlogged.ex b/lib/realtime/tenants/repo/migrations/20250107150512_realtime_subscription_unlogged.ex
new file mode 100644
index 0000000..d135a1b
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20250107150512_realtime_subscription_unlogged.ex
@@ -0,0 +1,11 @@
+defmodule Realtime.Tenants.Migrations.RealtimeSubscriptionUnlogged do
+ @moduledoc false
+ use Ecto.Migration
+
+ def change do
+ execute("""
+ -- Commented out for OrioleDB compatibility
+ -- ALTER TABLE realtime.subscription SET UNLOGGED;
+ """)
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20250110162412_realtime_subscription_logged.ex b/lib/realtime/tenants/repo/migrations/20250110162412_realtime_subscription_logged.ex
new file mode 100644
index 0000000..63795c6
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20250110162412_realtime_subscription_logged.ex
@@ -0,0 +1,12 @@
+defmodule Realtime.Tenants.Migrations.RealtimeSubscriptionLogged do
+ @moduledoc false
+ use Ecto.Migration
+
+ # Postgres upgrades don't allow us to use UNLOGGED tables because sequences on PG14 still need to be logged
+ def change do
+ execute("""
+ -- Commented out for OrioleDB compatibility
+ -- ALTER TABLE realtime.subscription SET LOGGED;
+ """)
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20250123174212_remove_unused_publications.ex b/lib/realtime/tenants/repo/migrations/20250123174212_remove_unused_publications.ex
new file mode 100644
index 0000000..e7583f2
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20250123174212_remove_unused_publications.ex
@@ -0,0 +1,19 @@
+defmodule Realtime.Tenants.Migrations.RemoveUnusedPublications do
+ @moduledoc false
+ use Ecto.Migration
+
+ def change do
+ execute("""
+ DO $$
+ DECLARE
+ r RECORD;
+ BEGIN
+ FOR r IN
+ SELECT pubname FROM pg_publication WHERE pubname LIKE 'realtime_messages%' or pubname LIKE 'tealbase_realtime_messages%'
+ LOOP
+ EXECUTE 'DROP PUBLICATION IF EXISTS ' || quote_ident(r.pubname) || ';' ;
+ END LOOP;
+ END $$;
+ """)
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20250128220012_realtime_send_sets_topic_config.ex b/lib/realtime/tenants/repo/migrations/20250128220012_realtime_send_sets_topic_config.ex
new file mode 100644
index 0000000..4c46b4b
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20250128220012_realtime_send_sets_topic_config.ex
@@ -0,0 +1,37 @@
+defmodule Realtime.Tenants.Migrations.RealtimeSendSetsTopicConfig do
+ @moduledoc false
+ use Ecto.Migration
+
+ # Quote the topic with format/%L so realtime.topic is set to the variable's value rather than the literal word 'topic'
+ def change do
+ execute("""
+ CREATE OR REPLACE FUNCTION realtime.send(payload jsonb, event text, topic text, private boolean DEFAULT true ) RETURNS void
+ AS $$
+ BEGIN
+ BEGIN
+ -- Set the topic configuration
+ EXECUTE format('SET LOCAL realtime.topic TO %L', topic);
+
+ -- Attempt to insert the message
+ INSERT INTO realtime.messages (payload, event, topic, private, extension)
+ VALUES (payload, event, topic, private, 'broadcast');
+ EXCEPTION
+ WHEN OTHERS THEN
+ -- Capture and notify the error
+ PERFORM pg_notify(
+ 'realtime:system',
+ jsonb_build_object(
+ 'error', SQLERRM,
+ 'function', 'realtime.send',
+ 'event', event,
+ 'topic', topic,
+ 'private', private
+ )::text
+ );
+ END;
+ END;
+ $$
+ LANGUAGE plpgsql;
+ """)
+ end
+end
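The `format`/`%L` indirection matters because PL/pgSQL does not substitute variables into utility statements such as `SET`. A sketch of the difference, assuming `topic = 'room:lobby'`:

```sql
-- What the previous definition effectively ran (sets the literal word 'topic'):
set local realtime.topic to topic;
-- What the new definition runs after format with %L:
set local realtime.topic to 'room:lobby';
```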
diff --git a/lib/realtime/tenants/repo/migrations/20250506224012_subscription_index_bridging_disabled.ex b/lib/realtime/tenants/repo/migrations/20250506224012_subscription_index_bridging_disabled.ex
new file mode 100644
index 0000000..1903679
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20250506224012_subscription_index_bridging_disabled.ex
@@ -0,0 +1,10 @@
+defmodule Realtime.Tenants.Migrations.SubscriptionIndexBridgingDisabled do
+ @moduledoc false
+ use Ecto.Migration
+
+ def change do
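+ # NB: this heredoc is never passed to execute/1, so the migration is a no-op;
+ # the follow-up migration (20250523164012) actually runs the statement.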
+ """
+ alter table realtime.subscription reset (index_bridging);
+ """
+ end
+end
diff --git a/lib/realtime/tenants/repo/migrations/20250523164012_run_subscription_index_bridging_disabled.ex b/lib/realtime/tenants/repo/migrations/20250523164012_run_subscription_index_bridging_disabled.ex
new file mode 100644
index 0000000..2039f96
--- /dev/null
+++ b/lib/realtime/tenants/repo/migrations/20250523164012_run_subscription_index_bridging_disabled.ex
@@ -0,0 +1,10 @@
+defmodule Realtime.Tenants.Migrations.RunSubscriptionIndexBridgingDisabled do
+ @moduledoc false
+ use Ecto.Migration
+
+ def change do
+ execute("""
+ alter table realtime.subscription reset (index_bridging);
+ """)
+ end
+end
diff --git a/lib/realtime/user_counter.ex b/lib/realtime/user_counter.ex
index b554f1b..6190030 100644
--- a/lib/realtime/user_counter.ex
+++ b/lib/realtime/user_counter.ex
@@ -1,19 +1,24 @@
defmodule Realtime.UsersCounter do
- @moduledoc false
+ @moduledoc """
+ Counts of connected clients for a tenant across the whole cluster or for a single node.
+ """
require Logger
+ @doc """
+ Adds a RealtimeChannel pid to the `:users` scope for a tenant so we can keep track of all of that tenant's connected clients.
+ """
@spec add(pid(), String.t()) :: :ok
- def add(pid, tenant) do
- :syn.join(:users, tenant, pid)
- end
+ def add(pid, tenant), do: :syn.join(:users, tenant, pid)
+ @doc """
+ Returns the count of all connected clients for a tenant across the whole cluster.
+ """
@spec tenant_users(String.t()) :: non_neg_integer()
- def tenant_users(tenant) do
- :syn.member_count(:users, tenant)
- end
+ def tenant_users(tenant), do: :syn.member_count(:users, tenant)
+ @doc """
+ Returns the count of all connected clients for a tenant on a single node.
+ """
@spec tenant_users(atom, String.t()) :: non_neg_integer()
- def tenant_users(node_name, tenant) do
- :syn.member_count(:users, tenant, node_name)
- end
+ def tenant_users(node_name, tenant), do: :syn.member_count(:users, tenant, node_name)
end
diff --git a/lib/realtime_web.ex b/lib/realtime_web.ex
index 6895f36..9e6b51d 100644
--- a/lib/realtime_web.ex
+++ b/lib/realtime_web.ex
@@ -17,6 +17,8 @@ defmodule RealtimeWeb do
and import those modules here.
"""
+ def static_paths, do: ~w(assets fonts images favicon.svg robots.txt worker.js)
+
def controller do
quote do
use Phoenix.Controller, namespace: RealtimeWeb
@@ -24,6 +26,8 @@ defmodule RealtimeWeb do
import Plug.Conn
import RealtimeWeb.Gettext
alias RealtimeWeb.Router.Helpers, as: Routes
+
+ unquote(verified_routes())
end
end
@@ -45,7 +49,7 @@ defmodule RealtimeWeb do
def live_view do
quote do
use Phoenix.LiveView,
- layout: {RealtimeWeb.LayoutView, "live.html"}
+ layout: {RealtimeWeb.LayoutView, :live}
unquote(view_helpers())
end
@@ -80,7 +84,7 @@ defmodule RealtimeWeb do
def channel do
quote do
- use Phoenix.Channel
+ use Phoenix.Channel, log_join: false, log_handle_in: false
import RealtimeWeb.Gettext
end
end
@@ -102,6 +106,8 @@ defmodule RealtimeWeb do
alias RealtimeWeb.Router.Helpers, as: Routes
import RealtimeWeb.Components
+
+ unquote(verified_routes())
end
end
@@ -111,4 +117,13 @@ defmodule RealtimeWeb do
defmacro __using__(which) when is_atom(which) do
apply(__MODULE__, which, [])
end
+
+ def verified_routes do
+ quote do
+ use Phoenix.VerifiedRoutes,
+ endpoint: RealtimeWeb.Endpoint,
+ router: RealtimeWeb.Router,
+ statics: RealtimeWeb.static_paths()
+ end
+ end
end
diff --git a/lib/realtime_web/api_spec.ex b/lib/realtime_web/api_spec.ex
new file mode 100644
index 0000000..fd8916c
--- /dev/null
+++ b/lib/realtime_web/api_spec.ex
@@ -0,0 +1,42 @@
+defmodule RealtimeWeb.ApiSpec do
+ @moduledoc false
+
+ alias OpenApiSpex.Components
+ alias OpenApiSpex.Info
+ alias OpenApiSpex.OpenApi
+ alias OpenApiSpex.Paths
+ alias OpenApiSpex.SecurityScheme
+ alias OpenApiSpex.Server
+ alias OpenApiSpex.ServerVariable
+
+ alias RealtimeWeb.Router
+
+ @behaviour OpenApi
+
+ @impl OpenApi
+ def spec do
+ url =
+ case Mix.env() do
+ :prod -> "https://{tenant}.tealbase.co/realtime/v1"
+ _ -> "http://{tenant}.localhost:4000/"
+ end
+
+ %OpenApi{
+ servers: [
+ %Server{
+ url: url,
+ variables: %{"tenant" => %ServerVariable{default: "tenant"}}
+ }
+ ],
+ info: %Info{
+ title: to_string(Application.spec(:realtime, :description)),
+ version: to_string(Application.spec(:realtime, :vsn))
+ },
+ paths: Paths.from_router(Router),
+ components: %Components{
+ securitySchemes: %{"authorization" => %SecurityScheme{type: "http", scheme: "bearer"}}
+ }
+ }
+ |> OpenApiSpex.resolve_schema_modules()
+ end
+end
diff --git a/lib/realtime_web/channels/auth/channels_authorization.ex b/lib/realtime_web/channels/auth/channels_authorization.ex
index 122d1cc..ad2e7fb 100644
--- a/lib/realtime_web/channels/auth/channels_authorization.ex
+++ b/lib/realtime_web/channels/auth/channels_authorization.ex
@@ -4,36 +4,40 @@ defmodule RealtimeWeb.ChannelsAuthorization do
"""
require Logger
- def authorize(token, secret) when is_binary(token) do
+ @doc """
+ Authorize connection to access channel
+ """
+ @spec authorize(binary(), binary(), binary() | nil) ::
+ {:ok, map()} | {:error, any()} | {:error, :expired_token, String.t()}
+ def authorize(token, jwt_secret, jwt_jwks) when is_binary(token) do
token
|> clean_token()
- |> RealtimeWeb.JwtVerification.verify(secret)
+ |> RealtimeWeb.JwtVerification.verify(jwt_secret, jwt_jwks)
end
- def authorize(_token, _secret), do: :error
-
- defp clean_token(token) do
- Regex.replace(~r/\s|\n/, URI.decode(token), "")
- end
+ def authorize(_token, _jwt_secret, _jwt_jwks), do: {:error, :invalid_token}
- def authorize_conn(token, secret) do
- case authorize(token, secret) do
+ def authorize_conn(token, jwt_secret, jwt_jwks) do
+ case authorize(token, jwt_secret, jwt_jwks) do
{:ok, claims} ->
required = MapSet.new(["role", "exp"])
- claims_keys = Map.keys(claims) |> MapSet.new()
+ claims_keys = claims |> Map.keys() |> MapSet.new()
if MapSet.subset?(required, claims_keys) do
{:ok, claims}
else
- {:error, "Fields `role` and `exp` are required in JWT"}
+ {:error, :missing_claims}
end
+ {:error, [message: validation_timer, claim: "exp", claim_val: claim_val]}
+ when is_integer(validation_timer) ->
+ msg = "Token expired #{validation_timer - claim_val} seconds ago"
+ {:error, :expired_token, msg}
+
{:error, reason} ->
{:error, reason}
-
- error ->
- Logger.error("Unknown connection authorization error: #{inspect(error)}")
- {:error, :unknown}
end
end
+
+ defp clean_token(token), do: Regex.replace(~r/\s|\n/, URI.decode(token), "")
end
diff --git a/lib/realtime_web/channels/auth/jwt_verification.ex b/lib/realtime_web/channels/auth/jwt_verification.ex
index a4d0da2..b6aae08 100644
--- a/lib/realtime_web/channels/auth/jwt_verification.ex
+++ b/lib/realtime_web/channels/auth/jwt_verification.ex
@@ -16,7 +16,8 @@ defmodule RealtimeWeb.JwtVerification do
end
defp add_claim_validator(claims, "exp") do
- add_claim(claims, "exp", nil, &(&1 > current_time()))
+ current_time = current_time()
+ add_claim(claims, "exp", nil, &(&1 > current_time), message: current_time)
end
defp add_claim_validator(claims, claim_key, expected_val) do
@@ -25,18 +26,25 @@ defmodule RealtimeWeb.JwtVerification do
end
@hs_algorithms ["HS256", "HS384", "HS512"]
+ @rs_algorithms ["RS256", "RS384", "RS512"]
+ @es_algorithms ["ES256", "ES384", "ES512"]
+ @ed_algorithms ["Ed25519", "Ed448"]
- def verify(token, secret) when is_binary(token) do
+ @doc """
+ Verify JWT token and validate claims
+ """
+ @spec verify(binary(), binary(), binary() | nil) :: {:ok, map()} | {:error, any()}
+ def verify(token, jwt_secret, jwt_jwks) when is_binary(token) do
with {:ok, _claims} <- check_claims_format(token),
{:ok, header} <- check_header_format(token),
- {:ok, signer} <- generate_signer(header, secret) do
+ {:ok, signer} <- generate_signer(header, jwt_secret, jwt_jwks) do
JwtAuthToken.verify_and_validate(token, signer)
else
{:error, _e} = error -> error
end
end
- def verify(_token, _secret), do: {:error, :not_a_string}
+ def verify(_token, _jwt_secret, _jwt_jwks), do: {:error, :not_a_string}
defp check_header_format(token) do
case Joken.peek_header(token) do
@@ -52,9 +60,63 @@ defmodule RealtimeWeb.JwtVerification do
end
end
- defp generate_signer(%{"typ" => "JWT", "alg" => alg}, jwt_secret) when alg in @hs_algorithms do
+ defp generate_signer(%{"typ" => "JWT", "alg" => alg, "kid" => kid}, _jwt_secret, %{
+ "keys" => keys
+ })
+ when is_binary(kid) and alg in @rs_algorithms do
+ jwk = Enum.find(keys, fn jwk -> jwk["kty"] == "RSA" and jwk["kid"] == kid end)
+
+ case jwk do
+ nil -> {:error, :error_generating_signer}
+ _ -> {:ok, Joken.Signer.create(alg, jwk)}
+ end
+ end
+
+ defp generate_signer(%{"typ" => "JWT", "alg" => alg, "kid" => kid}, _jwt_secret, %{
+ "keys" => keys
+ })
+ when is_binary(kid) and alg in @es_algorithms do
+ jwk = Enum.find(keys, fn jwk -> jwk["kty"] == "EC" and jwk["kid"] == kid end)
+
+ case jwk do
+ nil -> {:error, :error_generating_signer}
+ _ -> {:ok, Joken.Signer.create(alg, jwk)}
+ end
+ end
+
+ defp generate_signer(%{"typ" => "JWT", "alg" => alg, "kid" => kid}, _jwt_secret, %{
+ "keys" => keys
+ })
+ when is_binary(kid) and alg in @ed_algorithms do
+ jwk = Enum.find(keys, fn jwk -> jwk["kty"] == "OKP" and jwk["kid"] == kid end)
+
+ case jwk do
+ nil -> {:error, :error_generating_signer}
+ _ -> {:ok, Joken.Signer.create(alg, jwk)}
+ end
+ end
+
+ # Most tealbase Auth JWTs fall into this case: they're usually signed with
+ # HS256 and carry a kid header, but no symmetric JWK is published because the
+ # secret is sensitive. In that case, the jwt_secret should be used.
+ defp generate_signer(%{"typ" => "JWT", "alg" => alg, "kid" => kid}, jwt_secret, %{
+ "keys" => keys
+ })
+ when is_binary(kid) and alg in @hs_algorithms do
+ jwk = Enum.find(keys, fn jwk -> jwk["kty"] == "oct" and jwk["kid"] == kid end)
+
+ case jwk do
+ # If there's no JWK, and HS* is being used, instead of erroring, try
+ # the jwt_secret instead.
+ nil -> {:ok, Joken.Signer.create(alg, jwt_secret)}
+ _ -> {:ok, Joken.Signer.create(alg, jwk)}
+ end
+ end
+
+ defp generate_signer(%{"typ" => "JWT", "alg" => alg}, jwt_secret, _jwt_jwks)
+ when alg in @hs_algorithms do
{:ok, Joken.Signer.create(alg, jwt_secret)}
end
- defp generate_signer(_header, _secret), do: {:error, :error_generating_signer}
+ defp generate_signer(_header, _jwt_secret, _jwt_jwks), do: {:error, :error_generating_signer}
end
diff --git a/lib/realtime_web/channels/realtime_channel.ex b/lib/realtime_web/channels/realtime_channel.ex
index 65d0cfc..20387c9 100644
--- a/lib/realtime_web/channels/realtime_channel.ex
+++ b/lib/realtime_web/channels/realtime_channel.ex
@@ -3,227 +3,209 @@ defmodule RealtimeWeb.RealtimeChannel do
Used for handling channels and subscriptions.
"""
use RealtimeWeb, :channel
-
require Logger
+ import Realtime.Logs
alias DBConnection.Backoff
- alias Phoenix.Tracker.Shard
- alias RealtimeWeb.{ChannelsAuthorization, Endpoint, Presence}
- alias Realtime.{GenCounter, RateCounter, PostgresCdc, SignalHandler, Tenants}
-
- import Realtime.Helpers, only: [cancel_timer: 1, decrypt!: 2]
-
- defmodule Assigns do
- @moduledoc false
- defstruct [
- :tenant,
- :log_level,
- :rate_counter,
- :limits,
- :tenant_topic,
- :pg_sub_ref,
- :pg_change_params,
- :postgres_extension,
- :claims,
- :jwt_secret,
- :tenant_token,
- :access_token,
- :postgres_cdc_module,
- :channel_name
- ]
- @type t :: %__MODULE__{
- tenant: String.t(),
- log_level: atom(),
- rate_counter: RateCounter.t(),
- limits: %{
- max_events_per_second: integer(),
- max_concurrent_users: integer(),
- max_bytes_per_second: integer(),
- max_channels_per_client: integer(),
- max_joins_per_second: integer()
- },
- tenant_topic: String.t(),
- pg_sub_ref: reference() | nil,
- pg_change_params: map(),
- postgres_extension: map(),
- claims: map(),
- jwt_secret: String.t(),
- tenant_token: String.t(),
- access_token: String.t(),
- channel_name: String.t()
- }
- end
-
- @confirm_token_ms_interval 1_000 * 60 * 5
+ alias Realtime.Crypto
+ alias Realtime.GenCounter
+ alias Realtime.Helpers
+ alias Realtime.PostgresCdc
+ alias Realtime.RateCounter
+ alias Realtime.SignalHandler
+ alias Realtime.Tenants
+ alias Realtime.Tenants.Authorization
+ alias Realtime.Tenants.Authorization.Policies
+ alias Realtime.Tenants.Authorization.Policies.BroadcastPolicies
+ alias Realtime.Tenants.Authorization.Policies.PresencePolicies
+ alias Realtime.Tenants.Connect
+
+ alias RealtimeWeb.ChannelsAuthorization
+ alias RealtimeWeb.RealtimeChannel.BroadcastHandler
+ alias RealtimeWeb.RealtimeChannel.Logging
+ alias RealtimeWeb.RealtimeChannel.PresenceHandler
+
+ @confirm_token_ms_interval :timer.minutes(5)
@impl true
- def join(
- "realtime:" <> sub_topic = topic,
- params,
- %{
- assigns: %{
- tenant: tenant,
- log_level: log_level,
- postgres_cdc_module: module
- },
- channel_pid: channel_pid,
- serializer: serializer,
- transport_pid: transport_pid
- } = socket
- ) do
- Logger.metadata(external_id: tenant, project: tenant)
- Logger.put_process_level(self(), log_level)
+ def join("realtime:", _params, _socket) do
+ Logging.log_error_message(:error, "TopicNameRequired", "You must provide a topic name")
+ end
- socket = socket |> assign_access_token(params) |> assign_counter()
+ def join("realtime:" <> sub_topic = topic, params, socket) do
+ %{
+ assigns: %{tenant: tenant_id, log_level: log_level, postgres_cdc_module: module},
+ channel_pid: channel_pid,
+ serializer: serializer,
+ transport_pid: transport_pid
+ } = socket
- start_db_rate_counter(tenant)
+ Logger.metadata(external_id: tenant_id, project: tenant_id)
+ Logger.put_process_level(self(), log_level)
- with false <- SignalHandler.shutdown_in_progress?(),
- :ok <- limit_joins(socket),
+ socket =
+ socket
+ |> assign_access_token(params)
+ |> assign_counter()
+ |> assign_presence_counter()
+ |> assign(:private?, !!params["config"]["private"])
+ |> assign(:policies, nil)
+
+ with :ok <- SignalHandler.shutdown_in_progress?(),
+ :ok <- only_private?(tenant_id, socket),
+ :ok <- limit_joins(socket.assigns),
:ok <- limit_channels(socket),
- :ok <- limit_max_users(socket),
- {:ok, claims, confirm_token_ref} <- confirm_token(socket) do
- Realtime.UsersCounter.add(transport_pid, tenant)
+ :ok <- limit_max_users(socket.assigns),
+ :ok <- start_db_rate_counter(tenant_id),
+ {:ok, claims, confirm_token_ref, access_token, _} <- confirm_token(socket),
+ {:ok, db_conn} <- Connect.lookup_or_start_connection(tenant_id),
+ socket = assign_authorization_context(socket, sub_topic, access_token, claims),
+ {:ok, socket} <- maybe_assign_policies(sub_topic, db_conn, socket) do
+ tenant_topic = Tenants.tenant_topic(tenant_id, sub_topic, !socket.assigns.private?)
- tenant_topic = tenant <> ":" <> sub_topic
RealtimeWeb.Endpoint.subscribe(tenant_topic)
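+ # The operations topic carries tenant-wide control messages such as :disconnect and :unsuspend_tenant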
+ Phoenix.PubSub.subscribe(Realtime.PubSub, "realtime:operations:" <> tenant_id)
+
+ is_new_api = new_api?(params)
+ pg_change_params = pg_change_params(is_new_api, params, channel_pid, claims, sub_topic)
+
+ opts = %{
+ is_new_api: is_new_api,
+ pg_change_params: pg_change_params,
+ transport_pid: transport_pid,
+ serializer: serializer,
+ topic: topic,
+ tenant: tenant_id,
+ module: module
+ }
- is_new_api =
- case params do
- %{"config" => _} -> true
- _ -> false
- end
+ postgres_cdc_subscribe(opts)
+
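+ # ids derived from :erlang.phash2 let clients correlate their postgres_changes bindings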
+ state = %{postgres_changes: add_id_to_postgres_changes(pg_change_params)}
+
+ assigns = %{
+ ack_broadcast: !!params["config"]["broadcast"]["ack"],
+ confirm_token_ref: confirm_token_ref,
+ is_new_api: is_new_api,
+ pg_sub_ref: nil,
+ pg_change_params: pg_change_params,
+ presence_key: presence_key(params),
+ self_broadcast: !!params["config"]["broadcast"]["self"],
+ tenant_topic: tenant_topic,
+ channel_name: sub_topic,
+ db_conn: db_conn
+ }
- pg_change_params =
- if is_new_api do
- send(self(), :sync_presence)
-
- params["config"]["postgres_changes"]
- |> case do
- [_ | _] = params_list ->
- params_list
- |> Enum.map(fn params ->
- %{
- id: UUID.uuid1(),
- channel_pid: channel_pid,
- claims: claims,
- params: params
- }
- end)
-
- _ ->
- []
- end
- else
- params =
- case String.split(sub_topic, ":", parts: 3) do
- [schema, table, filter] ->
- %{"schema" => schema, "table" => table, "filter" => filter}
-
- [schema, table] ->
- %{"schema" => schema, "table" => table}
-
- [schema] ->
- %{"schema" => schema}
- end
-
- [
- %{
- id: UUID.uuid1(),
- channel_pid: channel_pid,
- claims: claims,
- params: params
- }
- ]
- end
- |> case do
- [_ | _] = pg_change_params ->
- ids =
- for %{id: id, params: params} <- pg_change_params do
- {UUID.string_to_binary!(id), :erlang.phash2(params)}
- end
+ # Start presence and add user
+ send(self(), :sync_presence)
+ Realtime.UsersCounter.add(transport_pid, tenant_id)
+ {:ok, state, assign(socket, assigns)}
+ else
+ {:error, :expired_token, msg} ->
+ Logging.log_error_message(:error, "InvalidJWTToken", msg)
- metadata = [
- metadata:
- {:subscriber_fastlane, transport_pid, serializer, ids, topic, tenant, is_new_api}
- ]
+ {:error, :missing_claims} ->
+ msg = "Fields `role` and `exp` are required in JWT"
+ Logging.log_error_message(:error, "InvalidJWTToken", msg)
- # Endpoint.subscribe("realtime:postgres:" <> tenant, metadata)
+ {:error, :expected_claims_map} ->
+ msg = "Token claims must be a map"
+ Logging.log_error_message(:error, "InvalidJWTToken", msg)
- PostgresCdc.subscribe(module, pg_change_params, tenant, metadata)
+ {:error, :unauthorized, msg} ->
+ Logging.log_error_message(:error, "Unauthorized", msg)
- pg_change_params
+ {:error, :too_many_channels} ->
+ msg = "Too many channels"
+ Logging.log_error_message(:error, "ChannelRateLimitReached", msg)
- other ->
- other
- end
+ {:error, :too_many_connections} ->
+ msg = "Too many connected users"
+ Logging.log_error_message(:error, "ConnectionRateLimitReached", msg)
- Logger.debug("Postgres change params: " <> inspect(pg_change_params, pretty: true))
+ {:error, :too_many_joins} ->
+ msg = "Too many joins per second"
+ Logging.log_error_message(:error, "ClientJoinRateLimitReached", msg)
- if !Enum.empty?(pg_change_params) do
- send(self(), :postgres_subscribe)
- end
+ {:error, :increase_connection_pool} ->
+ msg = "Please increase your connection pool size"
+ Logging.log_error_message(:error, "IncreaseConnectionPool", msg)
- Logger.debug("Start channel: " <> inspect(pg_change_params, pretty: true))
-
- presence_key = presence_key(params)
-
- {:ok,
- %{
- postgres_changes:
- Enum.map(pg_change_params, fn %{params: params} ->
- id = :erlang.phash2(params)
- Map.put(params, :id, id)
- end)
- },
- assign(socket, %{
- ack_broadcast: !!params["config"]["broadcast"]["ack"],
- confirm_token_ref: confirm_token_ref,
- is_new_api: is_new_api,
- pg_sub_ref: nil,
- pg_change_params: pg_change_params,
- presence_key: presence_key,
- self_broadcast: !!params["config"]["broadcast"]["self"],
- tenant_topic: tenant_topic,
- channel_name: sub_topic
- })}
- else
- {:error, :too_many_channels} = error ->
- error_msg = inspect(error, pretty: true)
- Logger.warn("Start channel error: #{error_msg}")
- {:error, %{reason: error_msg}}
-
- {:error, :too_many_connections} = error ->
- error_msg = inspect(error, pretty: true)
- Logger.warn("Start channel error: #{error_msg}")
- {:error, %{reason: error_msg}}
-
- {:error, :too_many_joins} = error ->
- error_msg = inspect(error, pretty: true)
- Logger.warn("Start channel error: #{error_msg}")
- {:error, %{reason: error_msg}}
-
- {:error, [message: "Invalid token", claim: _claim, claim_val: _value]} = error ->
- error_msg = inspect(error, pretty: true)
- Logger.warn("Start channel error: #{error_msg}")
- {:error, %{reason: error_msg}}
+ {:error, :tenant_db_too_many_connections} ->
+ msg = "Database can't accept more connections, Realtime won't connect"
+ Logging.log_error_message(:error, "DatabaseLackOfConnections", msg)
- error ->
- error_msg = inspect(error, pretty: true)
- Logger.error("Start channel error: #{error_msg}")
- {:error, %{reason: error_msg}}
+ {:error, :unable_to_set_policies, error} ->
+ Logging.log_error_message(:error, "UnableToSetPolicies", error)
+
+ {:error, :tenant_database_unavailable} ->
+ Logging.log_error_message(
+ :error,
+ "UnableToConnectToProject",
+ "Realtime was unable to connect to the project database"
+ )
+
+ {:error, :rpc_error, :timeout} ->
+ Logging.log_error_message(:error, "TimeoutOnRpcCall", "Node request timeout")
+
+ {:error, :rpc_error, reason} ->
+ Logging.log_error_message(:error, "ErrorOnRpcCall", "RPC call error: " <> inspect(reason))
+
+ {:error, :initializing} ->
+ Logging.log_error_message(
+ :error,
+ "InitializingProjectConnection",
+ "Realtime is initializing the project connection"
+ )
+
+ {:error, :tenant_database_connection_initializing} ->
+ Logging.log_error_message(
+ :error,
+ "InitializingProjectConnection",
+ "Connecting to the project database"
+ )
+
+ {:error, invalid_exp} when is_integer(invalid_exp) and invalid_exp <= 0 ->
+ Logging.log_error_message(
+ :error,
+ "InvalidJWTExpiration",
+ "Token expiration time is invalid"
+ )
+
+ {:error, :private_only} ->
+ Logging.log_error_message(
+ :error,
+ "PrivateOnly",
+ "This project only allows private channels"
+ )
+
+ {:error, :tenant_suspended} ->
+ Logging.log_error_message(
+ :error,
+ "RealtimeDisabledForTenant",
+ "Realtime disabled for this tenant"
+ )
+
+ {:error, :signature_error} ->
+ Logging.log_error_message(:error, "JwtSignatureError", "Failed to validate JWT signature")
+
+ {:error, :shutdown_in_progress} ->
+ Logging.log_error_message(
+ :error,
+ "RealtimeRestarting",
+ "Realtime is restarting, please standby"
+ )
+
+ {:error, error} ->
+ Logging.log_error_message(:error, "UnknownErrorOnChannel", error)
end
end
+ @impl true
def handle_info(
_any,
- %{
- assigns: %{
- rate_counter: %{avg: avg},
- limits: %{max_events_per_second: max}
- }
- } = socket
+ %{assigns: %{rate_counter: %{avg: avg}, limits: %{max_events_per_second: max}}} = socket
)
when avg > max do
message = "Too many messages per second"
@@ -231,45 +213,45 @@ defmodule RealtimeWeb.RealtimeChannel do
shutdown_response(socket, message)
end
- @impl true
- def handle_info(:sync_presence, %{assigns: %{tenant_topic: topic}} = socket) do
- socket = count(socket)
+ def handle_info(%{event: "postgres_cdc_rls_down"}, socket) do
+ %{assigns: %{pg_sub_ref: pg_sub_ref}} = socket
+ Helpers.cancel_timer(pg_sub_ref)
+ pg_sub_ref = postgres_subscribe()
- push(socket, "presence_state", presence_dirty_list(topic))
+ {:noreply, assign(socket, %{pg_sub_ref: pg_sub_ref})}
+ end
+ def handle_info(
+ %{event: "presence_diff"},
+ %{assigns: %{policies: %Policies{presence: %PresencePolicies{read: false}}}} = socket
+ ) do
+ Logger.warning("Presence message ignored")
{:noreply, socket}
end
- @impl true
- def handle_info(%{event: "postgres_cdc_down"}, socket) do
- pg_sub_ref = postgres_subscribe()
-
- {:noreply, assign(socket, %{pg_sub_ref: pg_sub_ref})}
+ def handle_info(_msg, %{assigns: %{policies: %Policies{broadcast: %BroadcastPolicies{read: false}}}} = socket) do
+ Logger.warning("Broadcast message ignored")
+ {:noreply, socket}
end
- @impl true
- def handle_info(%{event: type, payload: payload}, socket) do
- socket = count(socket)
-
+ def handle_info(%{event: type, payload: payload} = msg, socket) do
+ socket = socket |> count() |> Logging.maybe_log_handle_info(msg)
push(socket, type, payload)
{:noreply, socket}
end
- @impl true
- def handle_info(
- :postgres_subscribe,
- %{
- assigns: %{
- tenant: tenant,
- pg_sub_ref: pg_sub_ref,
- pg_change_params: pg_change_params,
- postgres_extension: postgres_extension,
- channel_name: channel_name,
- postgres_cdc_module: module
- }
- } = socket
- ) do
- cancel_timer(pg_sub_ref)
+ def handle_info(:postgres_subscribe, %{assigns: %{channel_name: channel_name}} = socket) do
+ %{
+ assigns: %{
+ tenant: tenant,
+ pg_sub_ref: pg_sub_ref,
+ pg_change_params: pg_change_params,
+ postgres_extension: postgres_extension,
+ postgres_cdc_module: module
+ }
+ } = socket
+
+ Helpers.cancel_timer(pg_sub_ref)
args = Map.put(postgres_extension, "id", tenant)
@@ -278,198 +260,156 @@ defmodule RealtimeWeb.RealtimeChannel do
case PostgresCdc.after_connect(module, response, postgres_extension, pg_change_params) do
{:ok, _response} ->
message = "Subscribed to PostgreSQL"
-
Logger.info(message)
-
push_system_message("postgres_changes", socket, "ok", message, channel_name)
-
{:noreply, assign(socket, :pg_sub_ref, nil)}
error ->
- message = "Subscribing to PostgreSQL failed: #{inspect(error)}"
-
- push_system_message("postgres_changes", socket, "error", message, channel_name)
-
- Logger.error(message)
-
+ log_warning("UnableToSubscribeToPostgres", error)
+ push_system_message("postgres_changes", socket, "error", error, channel_name)
{:noreply, assign(socket, :pg_sub_ref, postgres_subscribe(5, 10))}
end
nil ->
- Logger.warning("Re-subscribed to PostgreSQL with params: #{inspect(pg_change_params)}")
+ Logger.warning("Re-connecting to PostgreSQL with params: " <> inspect(pg_change_params))
{:noreply, assign(socket, :pg_sub_ref, postgres_subscribe())}
+
+ error ->
+ log_warning("UnableToSubscribeToPostgres", error)
+ push_system_message("postgres_changes", socket, "error", error, channel_name)
+ {:noreply, assign(socket, :pg_sub_ref, postgres_subscribe(5, 10))}
end
+ rescue
+ error ->
+ log_warning("UnableToSubscribeToPostgres", error)
+ push_system_message("postgres_changes", socket, "error", error, channel_name)
+ {:noreply, assign(socket, :pg_sub_ref, postgres_subscribe(5, 10))}
end
- @impl true
def handle_info(:confirm_token, %{assigns: %{pg_change_params: pg_change_params}} = socket) do
case confirm_token(socket) do
- {:ok, claims, confirm_token_ref} ->
+ {:ok, claims, confirm_token_ref, _, _} ->
pg_change_params = Enum.map(pg_change_params, &Map.put(&1, :claims, claims))
+ {:noreply, assign(socket, %{confirm_token_ref: confirm_token_ref, pg_change_params: pg_change_params})}
- {:noreply,
- assign(socket, %{
- confirm_token_ref: confirm_token_ref,
- pg_change_params: pg_change_params
- })}
+ {:error, :missing_claims} ->
+ shutdown_response(socket, "Fields `role` and `exp` are required in JWT")
- {:error, error} ->
- message = "access token has expired: " <> inspect(error, pretty: true)
+ {:error, :expired_token, msg} ->
+ shutdown_response(socket, msg)
- shutdown_response(socket, message)
+ {:error, error} ->
+ shutdown_response(socket, to_log(error))
end
end
- def handle_info(
- {:DOWN, _, :process, _, _reason},
- %{assigns: %{pg_sub_ref: pg_sub_ref, pg_change_params: pg_change_params}} = socket
- ) do
- cancel_timer(pg_sub_ref)
-
- ref =
- case pg_change_params do
- [_ | _] -> postgres_subscribe()
- _ -> nil
- end
-
- {:noreply, assign(socket, :pg_sub_ref, ref)}
+ def handle_info(:disconnect, %{assigns: %{channel_name: channel_name}} = socket) do
+ Logger.info("Received operational call to disconnect channel")
+ push_system_message("system", socket, "ok", "Server requested disconnect", channel_name)
+ {:stop, :shutdown, socket}
end
- def handle_info(other, socket) do
- Logger.error("Undefined msg #{inspect(other, pretty: true)}")
+ def handle_info(:sync_presence, socket), do: PresenceHandler.sync(socket)
+ def handle_info(:unsuspend_tenant, socket), do: {:noreply, socket}
+
+ def handle_info(msg, socket) do
+ log_error("UnhandledSystemMessage", msg)
{:noreply, socket}
end
@impl true
- def handle_in(
- _,
- _,
- %{
- assigns: %{
- rate_counter: %{avg: avg},
- limits: %{max_events_per_second: max}
- }
- } = socket
- )
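+ # Broadcast and Presence payloads are delegated to their dedicated handlers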
+ def handle_in("broadcast", payload, socket), do: BroadcastHandler.handle(payload, socket)
+ def handle_in("presence", payload, socket), do: PresenceHandler.handle(payload, socket)
+
+ def handle_in(_, _, %{assigns: %{rate_counter: %{avg: avg}, limits: %{max_events_per_second: max}}} = socket)
when avg > max do
message = "Too many messages per second"
shutdown_response(socket, message)
end
- def handle_in(
- "access_token",
- %{"access_token" => refresh_token},
- %{assigns: %{pg_sub_ref: pg_sub_ref, pg_change_params: pg_change_params}} = socket
- )
- when is_binary(refresh_token) do
- socket = socket |> assign(:access_token, refresh_token)
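+ # A token identical to the current one (or a nil token) is ignored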
+ def handle_in("access_token", %{"access_token" => refresh_token}, %{assigns: %{access_token: access_token}} = socket)
+ when refresh_token == access_token do
+ {:noreply, socket}
+ end
- case confirm_token(socket) do
- {:ok, claims, confirm_token_ref} ->
- cancel_timer(pg_sub_ref)
+ def handle_in("access_token", %{"access_token" => refresh_token}, %{assigns: %{access_token: _access_token}} = socket)
+ when is_nil(refresh_token) do
+ {:noreply, socket}
+ end
- pg_change_params = Enum.map(pg_change_params, &Map.put(&1, :claims, claims))
+ def handle_in("access_token", %{"access_token" => refresh_token}, socket) when is_binary(refresh_token) do
+ %{
+ assigns: %{
+ access_token: access_token,
+ pg_sub_ref: pg_sub_ref,
+ db_conn: db_conn,
+ channel_name: channel_name,
+ pg_change_params: pg_change_params
+ }
+ } = socket
- pg_sub_ref =
- case pg_change_params do
- [_ | _] -> postgres_subscribe()
- _ -> nil
- end
+ socket = assign(socket, :access_token, refresh_token)
- {:noreply,
- assign(socket, %{
- confirm_token_ref: confirm_token_ref,
- pg_change_params: pg_change_params,
- pg_sub_ref: pg_sub_ref
- })}
+ with {:ok, claims, confirm_token_ref, _, socket} <- confirm_token(socket),
+ socket = assign_authorization_context(socket, channel_name, access_token, claims),
+ {:ok, socket} <- maybe_assign_policies(channel_name, db_conn, socket) do
+ Helpers.cancel_timer(pg_sub_ref)
+ pg_change_params = Enum.map(pg_change_params, &Map.put(&1, :claims, claims))
- {:error, error} ->
- message = "Received an invalid access token from client: " <> inspect(error)
-
- shutdown_response(socket, message)
- end
- end
+ pg_sub_ref =
+ case pg_change_params do
+ [_ | _] -> postgres_subscribe()
+ _ -> nil
+ end
- def handle_in(
- "broadcast" = type,
- payload,
- %{
- assigns: %{
- is_new_api: true,
- ack_broadcast: ack_broadcast,
- self_broadcast: self_broadcast,
- tenant_topic: tenant_topic
- }
- } = socket
- ) do
- socket = count(socket)
+ assigns = %{
+ pg_sub_ref: pg_sub_ref,
+ confirm_token_ref: confirm_token_ref,
+ pg_change_params: pg_change_params
+ }
- if self_broadcast do
- Endpoint.broadcast(tenant_topic, type, payload)
+ {:noreply, assign(socket, assigns)}
else
- Endpoint.broadcast_from(self(), tenant_topic, type, payload)
- end
+ {:error, :unauthorized, msg} ->
+ shutdown_response(socket, msg)
- if ack_broadcast do
- {:reply, :ok, socket}
- else
- {:noreply, socket}
- end
- end
+ {:error, :expired_token, msg} ->
+ shutdown_response(socket, msg)
- def handle_in(
- "presence",
- %{"event" => event} = payload,
- %{assigns: %{is_new_api: true, presence_key: presence_key, tenant_topic: tenant_topic}} =
- socket
- ) do
- socket = count(socket)
+ {:error, :missing_claims} ->
+ shutdown_response(socket, "Fields `role` and `exp` are required in JWT")
- result =
- event
- |> String.downcase()
- |> case do
- "track" ->
- payload = Map.get(payload, "payload", %{})
-
- with {:error, {:already_tracked, _, _, _}} <-
- Presence.track(self(), tenant_topic, presence_key, payload),
- {:ok, _} <- Presence.update(self(), tenant_topic, presence_key, payload) do
- :ok
- else
- {:ok, _} -> :ok
- {:error, _} -> :error
- end
-
- "untrack" ->
- Presence.untrack(self(), tenant_topic, presence_key)
-
- _ ->
- :error
- end
+ {:error, :expected_claims_map} ->
+ shutdown_response(socket, "Token claims must be a map")
+
+ {:error, :unable_to_set_policies, _msg} ->
+ shutdown_response(socket, "Realtime was unable to connect to the project database")
- {:reply, result, socket}
+ {:error, error} ->
+ shutdown_response(socket, inspect(error))
+ end
end
- def handle_in(_, _, socket) do
+ def handle_in(type, payload, socket) do
socket = count(socket)
+
+ # Log at info level so that bad client messages don't flood Logflare.
+ # Subscribe to a Channel with `log_level` set to `info` to see these messages.
+ message = "Unexpected message from client of type `#{type}` with payload: #{inspect(payload)}"
+ Logger.info(message)
+
{:noreply, socket}
end
@impl true
def terminate(reason, _state) do
- Logger.debug(%{terminate: reason})
+ Logger.debug("Channel terminated with reason: #{reason}")
:telemetry.execute([:prom_ex, :plugin, :realtime, :disconnected], %{})
:ok
end
- defp decrypt_jwt_secret(secret) do
- secure_key = Application.get_env(:realtime, :db_enc_key)
- decrypt!(secret, secure_key)
- end
-
- defp postgres_subscribe(min \\ 1, max \\ 5) do
+ defp postgres_subscribe(min \\ 1, max \\ 3) do
Process.send_after(self(), :postgres_subscribe, backoff(min, max))
end
@@ -478,7 +418,7 @@ defmodule RealtimeWeb.RealtimeChannel do
wait
end
- def limit_joins(%{assigns: %{tenant: tenant, limits: limits}}) do
+ def limit_joins(%{tenant: tenant, limits: limits}) do
id = Tenants.joins_per_second_key(tenant)
GenCounter.new(id)
@@ -494,23 +434,22 @@ defmodule RealtimeWeb.RealtimeChannel do
GenCounter.add(id)
case RateCounter.get(id) do
- {:ok, %{avg: avg}} ->
- if avg < limits.max_joins_per_second do
- :ok
- else
- {:error, :too_many_joins}
- end
+ {:ok, %{avg: avg}} when avg < limits.max_joins_per_second ->
+ :ok
+
+ {:ok, %{avg: _}} ->
+ {:error, :too_many_joins}
- other ->
- Logger.error("Unexpected error for #{tenant} #{inspect(other)}")
- {:error, other}
+ error ->
+ Logging.log_error_message(:error, "UnknownErrorOnCounter", error)
+ {:error, error}
end
end
def limit_channels(%{assigns: %{tenant: tenant, limits: limits}, transport_pid: pid}) do
key = Tenants.channels_per_client_key(tenant)
- if Registry.count_match(Realtime.Registry, key, pid) > limits.max_channels_per_client do
+ if Registry.count_match(Realtime.Registry, key, pid) + 1 > limits.max_channels_per_client do
{:error, :too_many_channels}
else
Registry.register(Realtime.Registry, Tenants.channels_per_client_key(tenant), pid)
@@ -518,16 +457,12 @@ defmodule RealtimeWeb.RealtimeChannel do
end
end
- defp limit_max_users(%{
- assigns: %{limits: %{max_concurrent_users: max_conn_users}, tenant: tenant}
- }) do
+ defp limit_max_users(%{limits: %{max_concurrent_users: max_conn_users}, tenant: tenant}) do
conns = Realtime.UsersCounter.tenant_users(tenant)
- if conns < max_conn_users do
- :ok
- else
- {:error, :too_many_connections}
- end
+ if conns < max_conn_users,
+ do: :ok,
+ else: {:error, :too_many_connections}
end
defp assign_counter(%{assigns: %{tenant: tenant, limits: limits}} = socket) do
@@ -549,8 +484,25 @@ defmodule RealtimeWeb.RealtimeChannel do
assign(socket, :rate_counter, rate_counter)
end
- defp assign_counter(socket) do
- socket
+ defp assign_counter(socket), do: socket
+
+ defp assign_presence_counter(%{assigns: %{tenant: tenant, limits: limits}} = socket) do
+ key = Tenants.presence_events_per_second_key(tenant)
+
+ GenCounter.new(key)
+
+ RateCounter.new(key,
+ idle_shutdown: :infinity,
+ telemetry: %{
+ event_name: [:channel, :presence_events],
+ measurements: %{limit: limits.max_events_per_second},
+ metadata: %{tenant: tenant}
+ }
+ )
+
+ {:ok, rate_counter} = RateCounter.get(key)
+
+ assign(socket, :presence_rate_counter, rate_counter)
end
defp count(%{assigns: %{rate_counter: counter}} = socket) do
@@ -561,10 +513,8 @@ defmodule RealtimeWeb.RealtimeChannel do
end
defp presence_key(params) do
- with key when is_binary(key) <- params["config"]["presence"]["key"],
- true <- String.length(key) > 0 do
- key
- else
+ case params["config"]["presence"]["key"] do
+ key when is_binary(key) and key != "" -> key
_ -> UUID.uuid1()
end
end
@@ -588,46 +538,55 @@ defmodule RealtimeWeb.RealtimeChannel do
assign(socket, :access_token, tenant_token)
end
- defp confirm_token(%{
- assigns:
- %{
- jwt_secret: jwt_secret,
- access_token: access_token
- } = assigns
- }) do
- with jwt_secret_dec <- decrypt_jwt_secret(jwt_secret),
+ defp confirm_token(%{assigns: assigns} = socket) do
+ %{
+ jwt_secret: jwt_secret,
+ access_token: access_token
+ } = assigns
+
+ topic = Map.get(assigns, :topic)
+ db_conn = Map.get(assigns, :db_conn)
+ socket = assign(socket, :policies, nil)
+ jwt_jwks = Map.get(assigns, :jwt_jwks)
+
+ with jwt_secret_dec <- Crypto.decrypt!(jwt_secret),
{:ok, %{"exp" => exp} = claims} when is_integer(exp) <-
- ChannelsAuthorization.authorize_conn(access_token, jwt_secret_dec),
- exp_diff when exp_diff > 0 <- exp - Joken.current_time() do
- if ref = assigns[:confirm_token_ref], do: cancel_timer(ref)
-
- ref =
- Process.send_after(
- self(),
- :confirm_token,
- min(@confirm_token_ms_interval, exp_diff * 1_000)
- )
+ ChannelsAuthorization.authorize_conn(access_token, jwt_secret_dec, jwt_jwks),
+ exp_diff when exp_diff > 0 <- exp - Joken.current_time(),
+ {:ok, socket} <- maybe_assign_policies(topic, db_conn, socket) do
+ if ref = assigns[:confirm_token_ref], do: Helpers.cancel_timer(ref)
- {:ok, claims, ref}
- else
- {:error, e} ->
- {:error, e}
+ interval = min(@confirm_token_ms_interval, exp_diff * 1_000)
+ ref = Process.send_after(self(), :confirm_token, interval)
- e ->
- {:error, e}
+ {:ok, claims, ref, access_token, socket}
+ else
+ {:error, error} -> {:error, error}
+ {:error, error, message} -> {:error, error, message}
+ e -> {:error, e}
end
end
- defp shutdown_response(%{assigns: %{channel_name: channel_name}} = socket, message)
- when is_binary(message) do
+ defp shutdown_response(socket, message) when is_binary(message) do
+ %{assigns: %{channel_name: channel_name, access_token: access_token}} = socket
+ metadata = log_metadata(access_token)
push_system_message("system", socket, "error", message, channel_name)
+ log_warning("ChannelShutdown", message, metadata)
+ {:stop, :normal, socket}
+ end
- Logger.error(message)
-
- {:stop, :shutdown, socket}
+ defp push_system_message(extension, socket, status, error, channel_name)
+ when is_map(error) and is_map_key(error, :error_code) and is_map_key(error, :error_message) do
+ push(socket, "system", %{
+ extension: extension,
+ status: status,
+ message: "#{error.error_code}: #{error.error_message}",
+ channel: channel_name
+ })
end
- defp push_system_message(extension, socket, status, message, channel_name) do
+ defp push_system_message(extension, socket, status, message, channel_name)
+ when is_binary(message) do
push(socket, "system", %{
extension: extension,
status: status,
@@ -636,13 +595,13 @@ defmodule RealtimeWeb.RealtimeChannel do
})
end
- def presence_dirty_list(topic) do
- [{:pool_size, size}] = :ets.lookup(Presence, :pool_size)
-
- Presence
- |> Shard.name_for_topic(topic, size)
- |> Shard.dirty_list(topic)
- |> Phoenix.Presence.group()
+ defp push_system_message(extension, socket, status, message, channel_name) do
+ push(socket, "system", %{
+ extension: extension,
+ status: status,
+ message: inspect(message),
+ channel: channel_name
+ })
end
defp start_db_rate_counter(tenant) do
@@ -651,11 +610,149 @@ defmodule RealtimeWeb.RealtimeChannel do
RateCounter.new(key,
idle_shutdown: :infinity,
- telemetry: %{
- event_name: [:channel, :db_events],
- measurements: %{},
- metadata: %{tenant: tenant}
- }
+ telemetry: %{event_name: [:channel, :db_events], measurements: %{}, metadata: %{tenant: tenant}}
)
+
+ :ok
+ end
+
+ defp new_api?(%{"config" => _}), do: true
+ defp new_api?(_), do: false
+
+ defp pg_change_params(true, params, channel_pid, claims, _) do
+ case get_in(params, ["config", "postgres_changes"]) do
+ [_ | _] = params_list ->
+ Enum.map(params_list, fn params ->
+ %{
+ id: UUID.uuid1(),
+ channel_pid: channel_pid,
+ claims: claims,
+ params: params
+ }
+ end)
+
+ _ ->
+ []
+ end
+ end
+
+ defp pg_change_params(false, _, channel_pid, claims, sub_topic) do
+ params =
+ case String.split(sub_topic, ":", parts: 3) do
+ [schema, table, filter] -> %{"schema" => schema, "table" => table, "filter" => filter}
+ [schema, table] -> %{"schema" => schema, "table" => table}
+ [schema] -> %{"schema" => schema}
+ end
+
+ [
+ %{
+ id: UUID.uuid1(),
+ channel_pid: channel_pid,
+ claims: claims,
+ params: params
+ }
+ ]
+ end
+
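+ # No postgres_changes requested, so there is nothing to subscribe to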
+ defp postgres_cdc_subscribe(%{pg_change_params: []}), do: []
+
+ defp postgres_cdc_subscribe(opts) do
+ %{
+ is_new_api: is_new_api,
+ pg_change_params: pg_change_params,
+ transport_pid: transport_pid,
+ serializer: serializer,
+ topic: topic,
+ tenant: tenant,
+ module: module
+ } = opts
+
+ ids =
+ Enum.map(pg_change_params, fn %{id: id, params: params} ->
+ {UUID.string_to_binary!(id), :erlang.phash2(params)}
+ end)
+
+ subscription_metadata =
+ {:subscriber_fastlane, transport_pid, serializer, ids, topic, tenant, is_new_api}
+
+ metadata = [metadata: subscription_metadata]
+
+ PostgresCdc.subscribe(module, pg_change_params, tenant, metadata)
+
+ send(self(), :postgres_subscribe)
+
+ pg_change_params
+ end
+
+ defp add_id_to_postgres_changes(pg_change_params) do
+ Enum.map(pg_change_params, fn %{params: params} ->
+ id = :erlang.phash2(params)
+ Map.put(params, :id, id)
+ end)
+ end
+
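+ # The authorization context bundles what the RLS checks need: tenant, topic, headers, JWT, claims and role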
+ defp assign_authorization_context(socket, topic, access_token, claims) do
+ authorization_context =
+ Authorization.build_authorization_params(%{
+ tenant_id: socket.assigns.tenant,
+ topic: topic,
+ headers: Map.get(socket.assigns, :headers, []),
+ jwt: access_token,
+ claims: claims,
+ role: claims["role"]
+ })
+
+ assign(socket, :authorization_context, authorization_context)
+ end
+
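+ # Policies are only computed for private channels; public channels skip the RLS checks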
+ defp maybe_assign_policies(
+ topic,
+ db_conn,
+ %{assigns: %{private?: true}} = socket
+ )
+ when not is_nil(topic) and not is_nil(db_conn) do
+ authorization_context = socket.assigns.authorization_context
+
+ with {:ok, socket} <- Authorization.get_read_authorizations(socket, db_conn, authorization_context) do
+ if match?(%Policies{broadcast: %BroadcastPolicies{read: false}}, socket.assigns.policies),
+ do: {:error, :unauthorized, "You do not have permissions to read from this Channel topic: #{topic}"},
+ else: {:ok, socket}
+ else
+ {:error, :increase_connection_pool} ->
+ {:error, :increase_connection_pool}
+
+ {:error, :rls_policy_error, error} ->
+ log_error("RlsPolicyError", error)
+
+ {:error, :unauthorized, "You do not have permissions to read from this Channel topic: #{topic}"}
+
+ {:error, error} ->
+ {:error, :unable_to_set_policies, error}
+ end
+ end
+
+ defp maybe_assign_policies(_, _, socket) do
+ {:ok, assign(socket, policies: nil)}
+ end
+
+ defp only_private?(tenant_id, %{assigns: %{private?: private?}}) do
+ tenant = Tenants.Cache.get_tenant_by_external_id(tenant_id)
+
+ if tenant.private_only and !private?,
+ do: {:error, :private_only},
+ else: :ok
+ end
+
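+ # Pull the JWT "sub" claim, when present, into the log metadata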
+ defp log_metadata(access_token) do
+ access_token
+ |> Joken.peek_claims()
+ |> then(fn
+ {:ok, claims} -> Map.get(claims, "sub")
+ _ -> nil
+ end)
+ |> then(fn
+ nil -> []
+ sub -> [sub: sub]
+ end)
end
end
diff --git a/lib/realtime_web/channels/realtime_channel/assign.ex b/lib/realtime_web/channels/realtime_channel/assign.ex
new file mode 100644
index 0000000..c6a1a00
--- /dev/null
+++ b/lib/realtime_web/channels/realtime_channel/assign.ex
@@ -0,0 +1,47 @@
+defmodule RealtimeWeb.RealtimeChannel.Assigns do
+ @moduledoc """
+ Assigns for RealtimeChannel
+ """
+
+ defstruct [
+ :tenant,
+ :log_level,
+ :rate_counter,
+ :limits,
+ :tenant_topic,
+ :pg_sub_ref,
+ :pg_change_params,
+ :postgres_extension,
+ :claims,
+ :jwt_secret,
+ :jwt_jwks,
+ :tenant_token,
+ :access_token,
+ :postgres_cdc_module,
+ :channel_name,
+ :headers
+ ]
+
+ @type t :: %__MODULE__{
+ tenant: String.t(),
+ log_level: atom(),
+ rate_counter: Realtime.RateCounter.t(),
+ limits: %{
+ max_events_per_second: integer(),
+ max_concurrent_users: integer(),
+ max_bytes_per_second: integer(),
+ max_channels_per_client: integer(),
+ max_joins_per_second: integer()
+ },
+ tenant_topic: String.t(),
+ pg_sub_ref: reference() | nil,
+ pg_change_params: map(),
+ postgres_extension: map(),
+ claims: map(),
+ jwt_secret: String.t(),
+ jwt_jwks: map(),
+ tenant_token: String.t(),
+ access_token: String.t(),
+ channel_name: String.t(),
+ postgres_cdc_module: module(),
+ headers: [{String.t(), String.t()}]
+ }
+end
diff --git a/lib/realtime_web/channels/realtime_channel/broadcast_handler.ex b/lib/realtime_web/channels/realtime_channel/broadcast_handler.ex
new file mode 100644
index 0000000..d8a2730
--- /dev/null
+++ b/lib/realtime_web/channels/realtime_channel/broadcast_handler.ex
@@ -0,0 +1,91 @@
+defmodule RealtimeWeb.RealtimeChannel.BroadcastHandler do
+ @moduledoc """
+ Handles the Broadcast feature for Realtime channels.
+ """
+ require Logger
+ import Phoenix.Socket, only: [assign: 3]
+ import Realtime.Logs
+
+ alias Phoenix.Socket
+ alias Realtime.GenCounter
+ alias Realtime.RateCounter
+ alias Realtime.Tenants.Authorization
+ alias Realtime.Tenants.Authorization.Policies
+ alias Realtime.Tenants.Authorization.Policies.BroadcastPolicies
+ alias RealtimeWeb.Endpoint
+
+ @event_type "broadcast"
+ @spec handle(map(), Phoenix.Socket.t()) ::
+ {:reply, :ok, Phoenix.Socket.t()} | {:noreply, Phoenix.Socket.t()}
+ def handle(payload, %{assigns: %{private?: true}} = socket) do
+ %{
+ assigns: %{
+ self_broadcast: self_broadcast,
+ tenant_topic: tenant_topic,
+ authorization_context: authorization_context,
+ db_conn: db_conn
+ }
+ } = socket
+
+ case run_authorization_check(socket, db_conn, authorization_context) do
+ {:ok,
+ %{assigns: %{ack_broadcast: ack_broadcast, policies: %Policies{broadcast: %BroadcastPolicies{write: true}}}} =
+ socket} ->
+ socket = increment_rate_counter(socket)
+ send_message(self_broadcast, tenant_topic, payload)
+ if ack_broadcast, do: {:reply, :ok, socket}, else: {:noreply, socket}
+
+ {:ok, socket} ->
+ {:noreply, socket}
+
+ {:error, error} ->
+ log_error("UnableToSetPolicies", error)
+ {:noreply, socket}
+ end
+ end
+
+ def handle(payload, %{assigns: %{private?: false}} = socket) do
+ %{
+ assigns: %{
+ tenant_topic: tenant_topic,
+ self_broadcast: self_broadcast,
+ ack_broadcast: ack_broadcast
+ }
+ } = socket
+
+ socket = increment_rate_counter(socket)
+ send_message(self_broadcast, tenant_topic, payload)
+
+ if ack_broadcast,
+ do: {:reply, :ok, socket},
+ else: {:noreply, socket}
+ end
+
+ defp send_message(self_broadcast, tenant_topic, payload) do
+ if self_broadcast,
+ do: Endpoint.broadcast(tenant_topic, @event_type, payload),
+ else: Endpoint.broadcast_from(self(), tenant_topic, @event_type, payload)
+ end
+
+ defp increment_rate_counter(%{assigns: %{policies: %Policies{broadcast: %BroadcastPolicies{write: false}}}} = socket) do
+ socket
+ end
+
+ defp increment_rate_counter(%{assigns: %{rate_counter: counter}} = socket) do
+ GenCounter.add(counter.id)
+ {:ok, rate_counter} = RateCounter.get(counter.id)
+ assign(socket, :rate_counter, rate_counter)
+ end
+
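+ # A write policy of nil means it has not been checked yet for this socket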
+ defp run_authorization_check(
+ %Socket{assigns: %{policies: %{broadcast: %BroadcastPolicies{write: nil}}}} = socket,
+ db_conn,
+ authorization_context
+ ) do
+ Authorization.get_write_authorizations(socket, db_conn, authorization_context)
+ end
+
+ defp run_authorization_check(socket, _db_conn, _authorization_context) do
+ {:ok, socket}
+ end
+end
diff --git a/lib/realtime_web/channels/realtime_channel/logging.ex b/lib/realtime_web/channels/realtime_channel/logging.ex
new file mode 100644
index 0000000..0d5b6bc
--- /dev/null
+++ b/lib/realtime_web/channels/realtime_channel/logging.ex
@@ -0,0 +1,63 @@
+defmodule RealtimeWeb.RealtimeChannel.Logging do
+ @moduledoc """
+ Log functions for Realtime channels to ensure a consistent logging format.
+ """
+ require Logger
+ import Realtime.Logs
+ alias Realtime.Telemetry
+
+ @doc """
+ Logs incoming messages when the user sets `log_level` to `info` in the channel config
+ """
+ def maybe_log_handle_info(
+ %{assigns: %{log_level: log_level, channel_name: channel_name}} = socket,
+ msg
+ ) do
+ if Logger.compare_levels(log_level, :info) == :eq do
+ msg =
+ case msg do
+ msg when is_binary(msg) -> msg
+ _ -> inspect(msg, pretty: true)
+ end
+
+ msg = "Received message on " <> channel_name <> " with payload: " <> msg
+ Logger.log(log_level, msg)
+ end
+
+ socket
+ end
+
+ @doc """
+ List of errors that are system-triggered rather than user-driven
+ """
+ def system_errors,
+ do: [
+ "UnableToSetPolicies",
+ "InitializingProjectConnection",
+ "DatabaseConnectionIssue",
+ "UnknownErrorOnChannel"
+ ]
+
+ @doc """
+ Logs errors in an expected format
+ """
+ @spec log_error_message(
+ level :: :error | :warning,
+ code :: binary(),
+ error :: term(),
+ keyword()
+ ) :: {:error, %{reason: binary()}}
+ def log_error_message(level, code, error, metadata \\ [])
+
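+ # System-triggered errors also emit telemetry so they can be tracked separately from user errors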
+ def log_error_message(:error, code, error, metadata) do
+ if code in system_errors(), do: Telemetry.execute([:realtime, :channel, :error], %{code: code}, %{code: code})
+
+ log_error(code, error, metadata)
+ {:error, %{reason: error}}
+ end
+
+ def log_error_message(:warning, code, error, metadata) do
+ log_warning(code, error, metadata)
+ {:error, %{reason: error}}
+ end
+end
diff --git a/lib/realtime_web/channels/realtime_channel/presence_handler.ex b/lib/realtime_web/channels/realtime_channel/presence_handler.ex
new file mode 100644
index 0000000..ddaf21f
--- /dev/null
+++ b/lib/realtime_web/channels/realtime_channel/presence_handler.ex
@@ -0,0 +1,156 @@
+defmodule RealtimeWeb.RealtimeChannel.PresenceHandler do
+ @moduledoc """
+ Handles the Presence feature for Realtime channels.
+ """
+ require Logger
+
+ import Phoenix.Socket, only: [assign: 3]
+ import Phoenix.Channel, only: [push: 3]
+ import Realtime.Logs
+
+ alias Phoenix.Socket
+ alias Phoenix.Tracker.Shard
+ alias Realtime.GenCounter
+ alias Realtime.RateCounter
+ alias Realtime.Tenants.Authorization
+ alias Realtime.Tenants.Authorization.Policies
+ alias Realtime.Tenants.Authorization.Policies.PresencePolicies
+ alias RealtimeWeb.Presence
+ alias RealtimeWeb.RealtimeChannel.Logging
+
+ @spec handle(map(), Phoenix.Socket.t()) :: {:reply, :error | :ok, Phoenix.Socket.t()}
+ def handle(%{"event" => event} = payload, socket) do
+ event = String.downcase(event, :ascii)
+
+ case handle_presence_event(event, payload, socket) do
+ {:ok, socket} -> {:reply, :ok, socket}
+ {:error, socket} -> {:reply, :error, socket}
+ end
+ end
+
+ def handle(_payload, socket), do: {:noreply, socket}
+
+ @doc """
+ Sends presence state to connected clients
+ """
+ @spec sync(Phoenix.Socket.t()) :: {:noreply, Phoenix.Socket.t()}
+ def sync(%{assigns: %{private?: false}} = socket) do
+ %{assigns: %{tenant_topic: topic}} = socket
+ socket = count(socket)
+ push(socket, "presence_state", presence_dirty_list(topic))
+ {:noreply, socket}
+ end
+
+ def sync(%{assigns: assigns} = socket) do
+ %{tenant_topic: topic, policies: policies} = assigns
+
+ socket =
+ case policies do
+ %Policies{presence: %PresencePolicies{read: false}} ->
+ Logger.info("Presence track message ignored on #{topic}")
+ socket
+
+ _ ->
+ socket = Logging.maybe_log_handle_info(socket, :sync_presence)
+ push(socket, "presence_state", presence_dirty_list(topic))
+ socket
+ end
+
+ {:noreply, socket}
+ end
+
+ defp handle_presence_event("track", payload, %{assigns: %{private?: false}} = socket) do
+ track(socket, payload)
+ end
+
+ defp handle_presence_event(
+ "track",
+ payload,
+ %{assigns: %{private?: true, policies: %Policies{presence: %PresencePolicies{write: nil}}}} = socket
+ ) do
+ %{assigns: %{db_conn: db_conn, authorization_context: authorization_context}} = socket
+
+ case run_authorization_check(socket, db_conn, authorization_context) do
+ {:ok, socket} ->
+ handle_presence_event("track", payload, socket)
+
+ {:error, error} ->
+ log_error("UnableToSetPolicies", error)
+ {:error, socket}
+ end
+ end
+
+ defp handle_presence_event(
+ "track",
+ payload,
+ %{assigns: %{private?: true, policies: %Policies{presence: %PresencePolicies{write: true}}}} = socket
+ ) do
+ track(socket, payload)
+ end
+
+ defp handle_presence_event(
+ "track",
+ _,
+ %{assigns: %{private?: true, policies: %Policies{presence: %PresencePolicies{write: false}}}} = socket
+ ) do
+ {:error, socket}
+ end
+
+ defp handle_presence_event("untrack", _, socket) do
+ %{assigns: %{presence_key: presence_key, tenant_topic: tenant_topic}} = socket
+ {Presence.untrack(self(), tenant_topic, presence_key), socket}
+ end
+
+ defp handle_presence_event(event, _, socket) do
+ log_error("UnknownPresenceEvent", event)
+ {:error, socket}
+ end
+
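+ # Track the caller's presence; if it is already tracked, update the existing entry's payload instead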
+ defp track(socket, payload) do
+ %{assigns: %{presence_key: presence_key, tenant_topic: tenant_topic}} = socket
+ payload = Map.get(payload, "payload", %{})
+
+ case Presence.track(self(), tenant_topic, presence_key, payload) do
+ {:ok, _} ->
+ {:ok, socket}
+
+ {:error, {:already_tracked, pid, _, _}} ->
+ case Presence.update(pid, tenant_topic, presence_key, payload) do
+ {:ok, _} -> {:ok, socket}
+ {:error, _} -> {:error, socket}
+ end
+
+ {:error, error} ->
+ log_error("UnableToTrackPresence", error)
+ {:error, socket}
+ end
+ end
+
+ defp count(%{assigns: %{presence_rate_counter: presence_counter}} = socket) do
+ GenCounter.add(presence_counter.id)
+ {:ok, presence_rate_counter} = RateCounter.get(presence_counter.id)
+
+ assign(socket, :presence_rate_counter, presence_rate_counter)
+ end
+
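+ # Read presence state straight from the tracker shards instead of going through the Presence process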
+ defp presence_dirty_list(topic) do
+ [{:pool_size, size}] = :ets.lookup(Presence, :pool_size)
+
+ Presence
+ |> Shard.name_for_topic(topic, size)
+ |> Shard.dirty_list(topic)
+ |> Phoenix.Presence.group()
+ end
+
+ defp run_authorization_check(
+ %Socket{assigns: %{private?: true, policies: %{presence: %PresencePolicies{write: nil}}}} = socket,
+ db_conn,
+ authorization_context
+ ) do
+ Authorization.get_write_authorizations(socket, db_conn, authorization_context)
+ end
+
+ defp run_authorization_check(socket, _db_conn, _authorization_context) do
+ {:ok, socket}
+ end
+end
diff --git a/lib/realtime_web/channels/user_socket.ex b/lib/realtime_web/channels/user_socket.ex
index ac29f53..f13a9e6 100644
--- a/lib/realtime_web/channels/user_socket.ex
+++ b/lib/realtime_web/channels/user_socket.ex
@@ -3,42 +3,42 @@ defmodule RealtimeWeb.UserSocket do
require Logger
- alias Realtime.{PostgresCdc, Api}
- alias Api.Tenant
+ import Realtime.Logs
+
+ alias Realtime.Api.Tenant
+ alias Realtime.Crypto
+ alias Realtime.Database
+ alias Realtime.PostgresCdc
alias Realtime.Tenants
+
alias RealtimeWeb.ChannelsAuthorization
alias RealtimeWeb.RealtimeChannel
- import Realtime.Helpers, only: [decrypt!: 2, get_external_id: 1]
-
## Channels
- channel "realtime:*", RealtimeChannel
+ channel("realtime:*", RealtimeChannel)
- @default_log_level "error"
+ @default_log_level :error
@impl true
- def connect(params, socket, connect_info) do
- if Application.fetch_env!(:realtime, :secure_channels) do
- %{uri: %{host: host}, x_headers: headers} = connect_info
-
- {:ok, external_id} = get_external_id(host)
+ def id(%{assigns: %{tenant: tenant}}), do: subscribers_id(tenant)
- log_level =
- params
- |> Map.get("log_level", @default_log_level)
- |> case do
- "" -> @default_log_level
- level -> level
- end
- |> String.to_existing_atom()
+ @spec subscribers_id(String.t()) :: String.t()
+ def subscribers_id(tenant), do: "user_socket:" <> tenant
- secure_key = Application.get_env(:realtime, :db_enc_key)
+ @impl true
+ def connect(params, socket, opts) do
+ if Application.fetch_env!(:realtime, :secure_channels) do
+ %{uri: %{host: host}, x_headers: headers} = opts
+ {:ok, external_id} = Database.get_external_id(host)
Logger.metadata(external_id: external_id, project: external_id)
- Logger.put_process_level(self(), log_level)
+ Logger.put_process_level(self(), :error)
+
+ token = access_token(params, headers)
with %Tenant{
extensions: extensions,
jwt_secret: jwt_secret,
+ jwt_jwks: jwt_jwks,
max_concurrent_users: max_conn_users,
max_events_per_second: max_events_per_second,
max_bytes_per_second: max_bytes_per_second,
@@ -46,56 +46,74 @@ defmodule RealtimeWeb.UserSocket do
max_channels_per_client: max_channels_per_client,
postgres_cdc_default: postgres_cdc_default
} <- Tenants.Cache.get_tenant_by_external_id(external_id),
- token when is_binary(token) <- access_token(params, headers),
- jwt_secret_dec <- decrypt!(jwt_secret, secure_key),
- {:ok, claims} <- ChannelsAuthorization.authorize_conn(token, jwt_secret_dec),
+ token when is_binary(token) <- token,
+ jwt_secret_dec <- Crypto.decrypt!(jwt_secret),
+ {:ok, claims} <- ChannelsAuthorization.authorize_conn(token, jwt_secret_dec, jwt_jwks),
{:ok, postgres_cdc_module} <- PostgresCdc.driver(postgres_cdc_default) do
- assigns =
- %RealtimeChannel.Assigns{
- claims: claims,
- jwt_secret: jwt_secret,
- limits: %{
- max_concurrent_users: max_conn_users,
- max_events_per_second: max_events_per_second,
- max_bytes_per_second: max_bytes_per_second,
- max_joins_per_second: max_joins_per_second,
- max_channels_per_client: max_channels_per_client
- },
- postgres_extension: PostgresCdc.filter_settings(postgres_cdc_default, extensions),
- postgres_cdc_module: postgres_cdc_module,
- tenant: external_id,
- log_level: log_level,
- tenant_token: token
- }
- |> Map.from_struct()
+ assigns = %RealtimeChannel.Assigns{
+ claims: claims,
+ jwt_secret: jwt_secret,
+ jwt_jwks: jwt_jwks,
+ limits: %{
+ max_concurrent_users: max_conn_users,
+ max_events_per_second: max_events_per_second,
+ max_bytes_per_second: max_bytes_per_second,
+ max_joins_per_second: max_joins_per_second,
+ max_channels_per_client: max_channels_per_client
+ },
+ postgres_extension: PostgresCdc.filter_settings(postgres_cdc_default, extensions),
+ postgres_cdc_module: postgres_cdc_module,
+ tenant: external_id,
+ log_level: log_level(params),
+ tenant_token: token,
+ headers: opts.x_headers
+ }
+
+ assigns = Map.from_struct(assigns)
{:ok, assign(socket, assigns)}
else
nil ->
- Logger.error("Auth error: tenant `#{external_id}` not found")
- :error
+ log_error("TenantNotFound", "Tenant not found: #{external_id}")
+ {:error, :tenant_not_found}
+
+ {:error, :expired_token, msg} ->
+ log_error_with_token_metadata(msg, token)
+ {:error, :expired_token}
+
+ {:error, :missing_claims} ->
+ log_error_with_token_metadata("Fields `role` and `exp` are required in JWT", token)
+ {:error, :missing_claims}
error ->
- Logger.error("Auth error: #{inspect(error)}")
- :error
+ log_error("ErrorConnectingToWebsocket", error)
+ error
end
end
end
- def access_token(params, headers) do
+ defp access_token(params, headers) do
case :proplists.lookup("x-api-key", headers) do
:none -> Map.get(params, "apikey")
{"x-api-key", token} -> token
end
end
- @impl true
- def id(%{assigns: %{tenant: tenant}}) do
- subscribers_id(tenant)
+ defp log_error_with_token_metadata(msg, token) do
+ case Joken.peek_claims(token) do
+ {:ok, claims} ->
+ sub = Map.get(claims, "sub")
+ log_error("InvalidJWTToken", msg, sub: sub)
+
+ _ ->
+ log_error("InvalidJWTToken", msg)
+ end
end
- @spec subscribers_id(String.t()) :: String.t()
- def subscribers_id(tenant) do
- "user_socket:" <> tenant
+ defp log_level(params) do
+ case Map.get(params, "log_level") do
+ level when level in ["info", "warning", "error"] -> String.to_existing_atom(level)
+ _ -> @default_log_level
+ end
end
end
diff --git a/lib/realtime_web/controllers/broadcast_controller.ex b/lib/realtime_web/controllers/broadcast_controller.ex
new file mode 100644
index 0000000..bf02454
--- /dev/null
+++ b/lib/realtime_web/controllers/broadcast_controller.ex
@@ -0,0 +1,40 @@
+defmodule RealtimeWeb.BroadcastController do
+ use RealtimeWeb, :controller
+ use OpenApiSpex.ControllerSpecs
+ require Logger
+
+ alias Realtime.Tenants.BatchBroadcast
+ alias RealtimeWeb.OpenApiSchemas.EmptyResponse
+ alias RealtimeWeb.OpenApiSchemas.TenantBatchParams
+ alias RealtimeWeb.OpenApiSchemas.TooManyRequestsResponse
+ alias RealtimeWeb.OpenApiSchemas.UnprocessableEntityResponse
+
+ action_fallback(RealtimeWeb.FallbackController)
+
+ operation(:broadcast,
+ summary: "Broadcasts a batch of messages",
+ parameters: [
+ token: [
+ in: :header,
+ name: "Authorization",
+ schema: %OpenApiSpex.Schema{type: :string},
+ required: true,
+ example:
+ "Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpYXQiOjE2ODAxNjIxNTR9.U9orU6YYqXAtpF8uAiw6MS553tm4XxRzxOhz2IwDhpY"
+ ]
+ ],
+ request_body: TenantBatchParams.params(),
+ responses: %{
+ 202 => EmptyResponse.response(),
+ 403 => EmptyResponse.response(),
+ 422 => UnprocessableEntityResponse.response(),
+ 429 => TooManyRequestsResponse.response()
+ }
+ )
+
+ def broadcast(%{assigns: %{tenant: tenant}} = conn, attrs) do
+ with :ok <- BatchBroadcast.broadcast(conn, tenant, attrs) do
+ send_resp(conn, :accepted, "")
+ end
+ end
+end
diff --git a/lib/realtime_web/controllers/fallback_controller.ex b/lib/realtime_web/controllers/fallback_controller.ex
index f5717f6..8e95b05 100644
--- a/lib/realtime_web/controllers/fallback_controller.ex
+++ b/lib/realtime_web/controllers/fallback_controller.ex
@@ -5,20 +5,74 @@ defmodule RealtimeWeb.FallbackController do
See `Phoenix.Controller.action_fallback/1` for more details.
"""
use RealtimeWeb, :controller
+ import RealtimeWeb.ErrorHelpers
+ import Realtime.Logs
+
+ def call(conn, {:error, :not_found}) do
+ conn
+ |> put_status(:not_found)
+ |> put_view(RealtimeWeb.ErrorView)
+ |> render("error.json", message: "Not found")
+ end
- # This clause handles errors returned by Ecto's insert/update/delete.
def call(conn, {:error, %Ecto.Changeset{} = changeset}) do
+ log_error(
+ "UnprocessableEntity",
+ Ecto.Changeset.traverse_errors(changeset, &translate_error/1)
+ )
+
conn
|> put_status(:unprocessable_entity)
|> put_view(RealtimeWeb.ChangesetView)
|> render("error.json", changeset: changeset)
end
- # This clause is an example of how to handle resources that cannot be found.
- def call(conn, {:error, :not_found}) do
+ def call(conn, {:error, _}) do
conn
- |> put_status(:not_found)
+ |> put_status(:unauthorized)
+ |> put_view(RealtimeWeb.ErrorView)
+ |> render("error.json", message: "Unauthorized")
+ end
+
+ def call(conn, {:error, status, message}) when is_atom(status) and is_binary(message) do
+ log_error("UnprocessableEntity", message)
+
+ conn
+ |> put_status(status)
+ |> put_view(RealtimeWeb.ErrorView)
+ |> render("error.json", message: message)
+ end
+
+ def call(conn, %Ecto.Changeset{} = changeset) do
+ log_error(
+ "UnprocessableEntity",
+ Ecto.Changeset.traverse_errors(changeset, &translate_error/1)
+ )
+
+ conn
+ |> put_status(:unprocessable_entity)
+ |> put_view(RealtimeWeb.ChangesetView)
+ |> render("error.json", changeset: changeset)
+ end
+
+ def call(conn, response) do
+ log_error("UnknownErrorOnController", response)
+
+ conn
+ |> put_status(:unprocessable_entity)
|> put_view(RealtimeWeb.ErrorView)
- |> render(:"404")
+ |> render("error.json", message: "Unknown error")
end
end
diff --git a/lib/realtime_web/controllers/metrics_controller.ex b/lib/realtime_web/controllers/metrics_controller.ex
index 5297c55..09bf338 100644
--- a/lib/realtime_web/controllers/metrics_controller.ex
+++ b/lib/realtime_web/controllers/metrics_controller.ex
@@ -2,22 +2,21 @@ defmodule RealtimeWeb.MetricsController do
use RealtimeWeb, :controller
require Logger
alias Realtime.PromEx
+ alias Realtime.Rpc
def index(conn, _) do
cluster_metrics =
Node.list()
|> Task.async_stream(
fn node ->
- {node, :rpc.call(node, PromEx, :get_metrics, [], 10_000)}
+ {node, Rpc.call(node, PromEx, :get_metrics, [], timeout: 10_000)}
end,
timeout: :infinity
)
|> Enum.reduce(PromEx.get_metrics(), fn {_, {node, response}}, acc ->
case response do
{:badrpc, reason} ->
- Logger.error(
- "Cannot fetch metrics from the node #{inspect(node)} because #{inspect(reason)}"
- )
+ Logger.error("Cannot fetch metrics from the node #{inspect(node)} because #{inspect(reason)}")
acc
diff --git a/lib/realtime_web/controllers/page_controller.ex b/lib/realtime_web/controllers/page_controller.ex
index 48a62c8..9594b55 100644
--- a/lib/realtime_web/controllers/page_controller.ex
+++ b/lib/realtime_web/controllers/page_controller.ex
@@ -4,4 +4,10 @@ defmodule RealtimeWeb.PageController do
def index(conn, _params) do
render(conn, "index.html")
end
+
+ def healthcheck(conn, _params) do
+ conn
+ |> put_status(:ok)
+ |> text("ok")
+ end
end
diff --git a/lib/realtime_web/controllers/ping_controller.ex b/lib/realtime_web/controllers/ping_controller.ex
index 7f836ef..00b3201 100644
--- a/lib/realtime_web/controllers/ping_controller.ex
+++ b/lib/realtime_web/controllers/ping_controller.ex
@@ -1,6 +1,5 @@
defmodule RealtimeWeb.PingController do
use RealtimeWeb, :controller
- use PhoenixSwagger
def ping(conn, _params) do
json(conn, %{message: "Success"})
diff --git a/lib/realtime_web/controllers/tenant_controller.ex b/lib/realtime_web/controllers/tenant_controller.ex
index b87aa7c..9642d9c 100644
--- a/lib/realtime_web/controllers/tenant_controller.ex
+++ b/lib/realtime_web/controllers/tenant_controller.ex
@@ -1,70 +1,79 @@
defmodule RealtimeWeb.TenantController do
use RealtimeWeb, :controller
- use PhoenixSwagger
+ use OpenApiSpex.ControllerSpecs
require Logger
+ import Realtime.Logs
alias Realtime.Api
- alias Realtime.Repo
alias Realtime.Api.Tenant
+ alias Realtime.Database
alias Realtime.PostgresCdc
- alias PhoenixSwagger.{Path, Schema}
- alias RealtimeWeb.{UserSocket, Endpoint}
+ alias Realtime.Tenants
+ alias Realtime.Tenants.Cache
+ alias Realtime.Tenants.Migrations
+ alias RealtimeWeb.OpenApiSchemas.EmptyResponse
+ alias RealtimeWeb.OpenApiSchemas.ErrorResponse
+ alias RealtimeWeb.OpenApiSchemas.NotFoundResponse
+ alias RealtimeWeb.OpenApiSchemas.TenantHealthResponse
+ alias RealtimeWeb.OpenApiSchemas.TenantParams
+ alias RealtimeWeb.OpenApiSchemas.TenantResponse
+ alias RealtimeWeb.OpenApiSchemas.TenantResponseList
+ alias RealtimeWeb.OpenApiSchemas.UnauthorizedResponse
@stop_timeout 10_000
action_fallback(RealtimeWeb.FallbackController)
- swagger_path :index do
- Path.get("/api/tenants")
- tag("Tenants")
- response(200, "Success", :TenantsResponse)
- end
+ plug :set_observability_attributes when action in [:show, :edit, :update, :delete, :reload, :health]
+
+ operation(:index,
+ summary: "List tenants",
+ parameters: [
+ authorization: [
+ in: :header,
+ name: "Authorization",
+ schema: %OpenApiSpex.Schema{type: :string},
+ required: true,
+ example:
+ "Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpYXQiOjE2ODAxNjIxNTR9.U9orU6YYqXAtpF8uAiw6MS553tm4XxRzxOhz2IwDhpY"
+ ]
+ ],
+ responses: %{
+ 200 => TenantResponseList.response(),
+ 403 => EmptyResponse.response()
+ }
+ )
def index(conn, _params) do
tenants = Api.list_tenants()
render(conn, "index.json", tenants: tenants)
end
- def create(conn, %{"tenant" => tenant_params}) do
- extensions =
- Enum.reduce(tenant_params["extensions"], [], fn
- %{"type" => type, "settings" => settings}, acc ->
- [%{"type" => type, "settings" => settings} | acc]
-
- _e, acc ->
- acc
- end)
-
- with {:ok, %Tenant{} = tenant} <-
- Api.create_tenant(%{tenant_params | "extensions" => extensions}) do
- Logger.metadata(external_id: tenant.external_id, project: tenant.external_id)
-
- conn
- |> put_status(:created)
- |> put_resp_header("location", Routes.tenant_path(conn, :show, tenant))
- |> render("show.json", tenant: tenant)
- end
- end
-
- swagger_path :show do
- Path.get("/api/tenants/{external_id}")
- tag("Tenants")
-
- parameter(:external_id, :path, :string, "",
- required: true,
- example: "72ac258c-8dcd-4f0d-992f-9b6bab5e6d19"
- )
-
- response(200, "Success", :TenantResponse)
- end
+ operation(:show,
+ summary: "Fetch tenant",
+ parameters: [
+ token: [
+ in: :header,
+ name: "Authorization",
+ schema: %OpenApiSpex.Schema{type: :string},
+ required: true,
+ example:
+ "Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpYXQiOjE2ODAxNjIxNTR9.U9orU6YYqXAtpF8uAiw6MS553tm4XxRzxOhz2IwDhpY"
+ ],
+ tenant_id: [in: :path, description: "Tenant ID", type: :string]
+ ],
+ responses: %{
+ 200 => TenantResponse.response(),
+ 403 => EmptyResponse.response(),
+ 404 => NotFoundResponse.response()
+ }
+ )
- def show(conn, %{"id" => id}) do
- Logger.metadata(external_id: id, project: id)
+ def show(conn, %{"tenant_id" => id}) do
+ tenant = Api.get_tenant_by_external_id(id)
- id
- |> Api.get_tenant_by_external_id()
- |> case do
+ case tenant do
%Tenant{} = tenant ->
render(conn, "show.json", tenant: tenant)
@@ -75,100 +84,194 @@ defmodule RealtimeWeb.TenantController do
end
end
- swagger_path :update do
- Path.put("/api/tenants/{external_id}")
- tag("Tenants")
-
- parameters do
- external_id(:path, :string, "",
+ operation(:create,
+ summary: "Create or update tenant",
+ parameters: [
+ token: [
+ in: :header,
+ name: "Authorization",
+ schema: %OpenApiSpex.Schema{type: :string},
required: true,
- maxLength: 255,
- example: "72ac258c-8dcd-4f0d-992f-9b6bab5e6d19"
- )
+ example:
+ "Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpYXQiOjE2ODAxNjIxNTR9.U9orU6YYqXAtpF8uAiw6MS553tm4XxRzxOhz2IwDhpY"
+ ]
+ ],
+ request_body: TenantParams.params(),
+ responses: %{
+ 200 => TenantResponse.response(),
+ 403 => EmptyResponse.response()
+ }
+ )
- tenant(:body, Schema.ref(:TenantReq), "", required: true)
- end
+ @spec create(any(), map()) :: any()
+ def create(conn, %{"tenant" => params}) do
+ external_id = Map.get(params, "external_id")
- response(200, "Success", :TenantResponse)
+ case Tenant.changeset(%Tenant{}, params) do
+ %{valid?: true} -> update(conn, %{"tenant_id" => external_id, "tenant" => params})
+ changeset -> changeset
+ end
end
- def update(conn, %{"id" => id, "tenant" => tenant_params}) do
- Logger.metadata(external_id: id, project: id)
+ operation(:update,
+ summary: "Create or update tenant",
+ parameters: [
+ token: [
+ in: :header,
+ name: "Authorization",
+ schema: %OpenApiSpex.Schema{type: :string},
+ required: true,
+ example:
+ "Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpYXQiOjE2ODAxNjIxNTR9.U9orU6YYqXAtpF8uAiw6MS553tm4XxRzxOhz2IwDhpY"
+ ],
+ tenant_id: [in: :path, description: "Tenant ID", type: :string]
+ ],
+ request_body: TenantParams.params(),
+ responses: %{
+ 200 => TenantResponse.response(),
+ 403 => EmptyResponse.response()
+ }
+ )
- case Api.get_tenant_by_external_id(id) do
+ def update(conn, %{"tenant_id" => external_id, "tenant" => tenant_params}) do
+ tenant = Api.get_tenant_by_external_id(external_id)
+
+ case tenant do
nil ->
- create(conn, %{"tenant" => Map.put(tenant_params, "external_id", id)})
+ tenant_params = tenant_params |> Map.put("external_id", external_id) |> Map.put("name", external_id)
+
+ extensions =
+ Enum.reduce(tenant_params["extensions"], [], fn
+ %{"type" => type, "settings" => settings}, acc -> [%{"type" => type, "settings" => settings} | acc]
+ _e, acc -> acc
+ end)
+
+ with {:ok, %Tenant{} = tenant} <- Api.create_tenant(%{tenant_params | "extensions" => extensions}),
+ res when res in [:ok, :noop] <- Migrations.run_migrations(tenant) do
+ Logger.metadata(external_id: tenant.external_id, project: tenant.external_id)
+
+ conn
+ |> put_status(:created)
+ |> put_resp_header("location", Routes.tenant_path(conn, :show, tenant))
+ |> render("show.json", tenant: tenant)
+ end
tenant ->
with {:ok, %Tenant{} = tenant} <- Api.update_tenant(tenant, tenant_params) do
- render(conn, "show.json", tenant: tenant)
+ conn
+ |> put_status(:ok)
+ |> put_resp_header("location", Routes.tenant_path(conn, :show, tenant))
+ |> render("show.json", tenant: tenant)
end
end
end
- swagger_path :delete do
- Path.delete("/api/tenants/{external_id}")
- tag("Tenants")
- description("Delete a tenant by ID")
-
- parameter(:id, :path, :string, "Tenant ID",
- required: true,
- example: "123e4567-e89b-12d3-a456-426655440000"
- )
-
- response(200, "No Content - Deleted Successfully")
- end
-
- def delete(conn, %{"id" => id}) do
- Logger.metadata(external_id: id, project: id)
-
- Repo.transaction(
- fn ->
- if Api.delete_tenant_by_external_id(id) do
- with :ok <- UserSocket.subscribers_id(id) |> Endpoint.broadcast("disconnect", %{}),
- :ok <- PostgresCdc.stop_all(id) do
- :ok
- else
- other -> Repo.rollback(other)
- end
- end
- end,
- timeout: @stop_timeout
- )
- |> case do
- {:error, reason} ->
- Logger.error("Can't remove tenant #{inspect(reason)}")
- send_resp(conn, 503, "")
-
- _ ->
+ operation(:delete,
+ summary: "Delete tenant",
+ parameters: [
+ token: [
+ in: :header,
+ name: "Authorization",
+ schema: %OpenApiSpex.Schema{type: :string},
+ required: true,
+ example:
+ "Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpYXQiOjE2ODAxNjIxNTR9.U9orU6YYqXAtpF8uAiw6MS553tm4XxRzxOhz2IwDhpY"
+ ],
+ tenant_id: [in: :path, description: "Tenant ID", type: :string]
+ ],
+ responses: %{
+ 204 => EmptyResponse.response(),
+ 403 => UnauthorizedResponse.response(),
+ 500 => ErrorResponse.response()
+ }
+ )
+
+ def delete(conn, %{"tenant_id" => tenant_id}) do
+ stop_all_timeout = Enum.count(PostgresCdc.available_drivers()) * 1_000
+
+ with %Tenant{} = tenant <- Api.get_tenant_by_external_id(tenant_id, :primary),
+ _ <- Tenants.suspend_tenant_by_external_id(tenant_id),
+ true <- Api.delete_tenant_by_external_id(tenant_id),
+ :ok <- Cache.distributed_invalidate_tenant_cache(tenant_id),
+ :ok <- PostgresCdc.stop_all(tenant, stop_all_timeout),
+ :ok <- Database.replication_slot_teardown(tenant) do
+ send_resp(conn, 204, "")
+ else
+ nil ->
+ log_error("TenantNotFound", "Tenant not found")
send_resp(conn, 204, "")
+
+ err ->
+ log_error("UnableToDeleteTenant", err)
+ conn |> put_status(500) |> json(err) |> halt()
end
end
- swagger_path :reload do
- Path.post("/api/tenants/{external_id}/reload")
- tag("Tenants")
- description("Reload tenant database supervisor")
+ operation(:reload,
+ summary: "Reload tenant",
+ parameters: [
+ token: [
+ in: :header,
+ name: "Authorization",
+ schema: %OpenApiSpex.Schema{type: :string},
+ required: true,
+ example:
+ "Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpYXQiOjE2ODAxNjIxNTR9.U9orU6YYqXAtpF8uAiw6MS553tm4XxRzxOhz2IwDhpY"
+ ],
+ tenant_id: [in: :path, description: "Tenant ID", type: :string]
+ ],
+ responses: %{
+ 204 => EmptyResponse.response(),
+ 403 => EmptyResponse.response(),
+ 404 => NotFoundResponse.response()
+ }
+ )
+
+ def reload(conn, %{"tenant_id" => tenant_id}) do
+ case Tenants.get_tenant_by_external_id(tenant_id) do
+ nil ->
+ log_error("TenantNotFound", "Tenant not found")
- parameter(:tenant_id, :path, :string, "Tenant ID",
- required: true,
- example: "123e4567-e89b-12d3-a456-426655440000"
- )
+ conn
+ |> put_status(404)
+ |> render("not_found.json", tenant: nil)
- response(204, "")
- response(404, "not found")
+ tenant ->
+ PostgresCdc.stop_all(tenant, @stop_timeout)
+ send_resp(conn, 204, "")
+ end
end
- def reload(conn, %{"tenant_id" => tenant_id}) do
- Logger.metadata(external_id: tenant_id, project: tenant_id)
+ operation(:health,
+ summary: "Tenant health",
+ parameters: [
+ token: [
+ in: :header,
+ name: "Authorization",
+ schema: %OpenApiSpex.Schema{type: :string},
+ required: true,
+ example:
+ "Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpYXQiOjE2ODAxNjIxNTR9.U9orU6YYqXAtpF8uAiw6MS553tm4XxRzxOhz2IwDhpY"
+ ],
+ tenant_id: [in: :path, description: "Tenant ID", type: :string]
+ ],
+ responses: %{
+ 200 => TenantHealthResponse.response(),
+ 403 => EmptyResponse.response(),
+ 404 => NotFoundResponse.response()
+ }
+ )
- case Api.get_tenant_by_external_id(tenant_id) do
- %Tenant{} ->
- PostgresCdc.stop_all(tenant_id, @stop_timeout)
- send_resp(conn, 204, "")
+ def health(conn, %{"tenant_id" => tenant_id}) do
+ case Tenants.health_check(tenant_id) do
+ {:ok, response} ->
+ json(conn, %{data: response})
- nil ->
- Logger.error("Atttempted to reload non-existant tenant #{tenant_id}")
+ {:error, %{healthy: false} = response} ->
+ json(conn, %{data: response})
+
+ {:error, :tenant_not_found} ->
+ log_error("TenantNotFound", "Tenant not found")
conn
|> put_status(404)
@@ -176,118 +279,11 @@ defmodule RealtimeWeb.TenantController do
end
end
- def swagger_definitions do
- %{
- Tenant:
- swagger_schema do
- title("Tenant")
-
- properties do
- id(:string, "", required: false, example: "72ac258c-8dcd-4f0d-992f-9b6bab5e6d19")
- name(:string, "", required: false, example: "tenant1")
- external_id(:string, "", required: false, example: "okumviwlylkmpkoicbrc")
- inserted_at(:string, "", required: false, example: "2022-02-16T20:41:47")
- max_concurrent_users(:integer, "", required: false, example: 10_000)
- extensions(:array, "", required: true, items: Schema.ref(:ExtensionPostgres))
- end
- end,
- ExtensionPostgres:
- swagger_schema do
- title("ExtensionPostgres")
-
- properties do
- type(:string, "", required: true, example: "postgres")
- inserted_at(:string, "", required: false, example: "2022-02-16T20:41:47")
- updated_at(:string, "", required: false, example: "2022-02-16T20:41:47")
-
- settings(:object, "",
- required: true,
- properties: %{
- db_host: %Schema{type: :string, example: "some encrypted value"},
- db_name: %Schema{type: :string, example: "some encrypted value"},
- db_password: %Schema{type: :string, example: "some encrypted value"},
- db_port: %Schema{type: :string, example: "some encrypted value"},
- db_user: %Schema{type: :string, example: "some encrypted value"},
- poll_interval_ms: %Schema{type: :integer, example: 100},
- poll_max_changes: %Schema{type: :integer, example: 100},
- poll_max_record_bytes: %Schema{type: :integer, example: 1_048_576},
- publication: %Schema{type: :string, example: "tealbase_realtime"},
- region: %Schema{type: :string, example: "us-east-1"},
- slot_name: %Schema{
- type: :string,
- example: "tealbase_realtime_replication_slot"
- }
- }
- )
- end
- end,
- TenantReq:
- swagger_schema do
- title("TenantReq")
-
- properties do
- name(:string, "", required: false, example: "tenant1", maxLength: 255)
- max_concurrent_users(:integer, "", required: false, example: 10_000, default: 10_000)
- extensions(:array, "", required: true, items: Schema.ref(:ExtensionPostgresReq))
- end
- end,
- ExtensionPostgresReq:
- swagger_schema do
- title("ExtensionPostgresReq")
-
- properties do
- type(:string, "", required: true, example: "postgres")
-
- settings(:object, "",
- required: true,
- properties: %{
- db_host: %Schema{type: :string, required: true, example: "127.0.0.1"},
- db_name: %Schema{type: :string, required: true, example: "postgres"},
- db_password: %Schema{
- type: :string,
- required: true,
- example: "postgres"
- },
- db_user: %Schema{type: :string, required: true, example: "postgres"},
- db_port: %Schema{type: :string, required: true, example: "6432"},
- region: %Schema{type: :string, required: true, example: "us-east-1"},
- poll_interval_ms: %Schema{type: :integer, default: 100, example: 100},
- poll_max_changes: %Schema{type: :integer, default: 100, example: 100},
- poll_max_record_bytes: %Schema{
- type: :integer,
- default: 1_048_576,
- example: 1_048_576
- },
- publication: %Schema{
- type: :string,
- default: "tealbase_realtime",
- example: "tealbase_realtime"
- },
- slot_name: %Schema{
- type: :string,
- default: "tealbase_realtime_replication_slot",
- example: "tealbase_realtime_replication_slot"
- }
- }
- )
- end
- end,
- Tenants:
- swagger_schema do
- title("Tenants")
- type(:array)
- items(Schema.ref(:Tenant))
- end,
- TenantsResponse:
- swagger_schema do
- title("TenantsResponse")
- property(:data, Schema.ref(:Tenants), "")
- end,
- TenantResponse:
- swagger_schema do
- title("TenantResponse")
- property(:data, Schema.ref(:Tenant), "")
- end
- }
+ defp set_observability_attributes(conn, _opts) do
+ tenant_id = conn.path_params["tenant_id"]
+ OpenTelemetry.Tracer.set_attributes(external_id: tenant_id)
+ Logger.metadata(external_id: tenant_id, project: tenant_id)
+
+ conn
end
end
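
With this change, tenant provisioning becomes an upsert: `create/2` validates the payload and delegates to `update/2`, which creates the tenant and runs its migrations when the `external_id` is new, or updates it in place otherwise. Below is a minimal sketch of exercising that flow from Elixir; the `Req` client, token variable, and payload fields are illustrative assumptions, not part of this repo:

```elixir
# A sketch only: Req is not a dependency of this codebase, and the token and
# extension settings are placeholders for your local configuration.
token = System.fetch_env!("API_JWT_TOKEN")

body = %{
  tenant: %{
    extensions: [
      %{type: "postgres_cdc_rls", settings: %{db_host: "host.docker.internal"}}
    ]
  }
}

# The first call creates the tenant (201 Created); repeating it updates the
# existing record (200 OK). Both responses carry a Location header.
Req.put!("http://localhost:4000/api/tenants/dev_tenant",
  json: body,
  auth: {:bearer, token}
)
```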
diff --git a/lib/realtime_web/controllers/tenant_metrics_controller.ex b/lib/realtime_web/controllers/tenant_metrics_controller.ex
deleted file mode 100644
index a2ebb54..0000000
--- a/lib/realtime_web/controllers/tenant_metrics_controller.ex
+++ /dev/null
@@ -1,25 +0,0 @@
-defmodule RealtimeWeb.TenantMetricsController do
- use RealtimeWeb, :controller
-
- def index(conn, %{"id" => tenant}) do
- Logger.metadata(external_id: tenant, project: tenant)
-
- if Realtime.Api.get_tenant_by_external_id(tenant) do
- metrics = [
- {"tenant_concurrent_users", Realtime.UsersCounter.tenant_users(tenant)}
- ]
-
- conn
- |> put_resp_content_type("text/plain")
- |> send_resp(200, params_to_rows(metrics))
- else
- send_resp(conn, 404, "")
- end
- end
-
- def params_to_rows(metrics) do
- Enum.reduce(metrics, "", fn {key, value}, acc ->
- "#{acc} #{key} #{value} \n"
- end)
- end
-end
diff --git a/lib/realtime_web/dashboard/process_dump.ex b/lib/realtime_web/dashboard/process_dump.ex
new file mode 100644
index 0000000..d29bd20
--- /dev/null
+++ b/lib/realtime_web/dashboard/process_dump.ex
@@ -0,0 +1,40 @@
+defmodule Realtime.Dashboard.ProcessDump do
+ @moduledoc """
+  Live Dashboard page to dump the current process tree
+ """
+ use Phoenix.LiveDashboard.PageBuilder
+
+ @impl true
+ def menu_link(_, _) do
+ {:ok, "Process Dump"}
+ end
+
+ @impl true
+ def mount(_, _, socket) do
+ ts = :os.system_time(:millisecond)
+ name = "process_dump_#{ts}"
+ content = dump_processes(name)
+ {:ok, socket |> assign(content: content) |> assign(name: name)}
+ end
+
+ @impl true
+ def render(assigns) do
+ ~H"""
+    <h1>Process Dump</h1>
+
+    <a href={"data:application/gzip;base64,#{@content}"} download={"#{@name}.tar.gz"}>Download</a>
+
+    <p>After you untar the file, you can use `File.read!("filename") |> :erlang.binary_to_term` to check the contents</p>
+ """
+ end
+
+ defp dump_processes(name) do
+ term = Process.list() |> Enum.map(&Process.info/1) |> :erlang.term_to_binary()
+ path = "/tmp/#{name}"
+ File.write!(path, term)
+ System.cmd("tar", ["-czf", "#{path}.tar.gz", path])
+ "#{path}.tar.gz" |> File.read!() |> Base.encode64()
+ end
+end
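
The dump itself is just `:erlang.term_to_binary/1` applied to `Process.info/1` for every live process, tar-gzipped and handed to the browser base64-encoded. A sketch of inspecting one from IEx after extracting the downloaded archive (the timestamped file name below is a placeholder):

```elixir
# tar strips the leading "/" when archiving, so the extracted file lands
# under ./tmp relative to wherever you untar it:
#   tar -xzf process_dump_1700000000000.tar.gz
processes =
  "tmp/process_dump_1700000000000"
  |> File.read!()
  |> :erlang.binary_to_term()

# Each entry is the keyword list Process.info/1 returned (or nil if the
# process died mid-dump); e.g. surface the busiest message queues:
processes
|> Enum.reject(&is_nil/1)
|> Enum.sort_by(& &1[:message_queue_len], :desc)
|> Enum.take(10)
```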
diff --git a/lib/realtime_web/endpoint.ex b/lib/realtime_web/endpoint.ex
index ea1f8ad..917ab65 100644
--- a/lib/realtime_web/endpoint.ex
+++ b/lib/realtime_web/endpoint.ex
@@ -1,5 +1,6 @@
defmodule RealtimeWeb.Endpoint do
use Phoenix.Endpoint, otp_app: :realtime
+ alias RealtimeWeb.Plugs.BaggageRequestId
# The session will be stored in the cookie and signed,
# this means its contents can be read but not tampered with.
@@ -14,9 +15,19 @@ defmodule RealtimeWeb.Endpoint do
websocket: [
connect_info: [:peer_data, :uri, :x_headers],
fullsweep_after: 20,
- max_frame_size: 8_000_000
+ max_frame_size: 8_000_000,
+ serializer: [
+ {Phoenix.Socket.V1.JSONSerializer, "~> 1.0.0"},
+ {Phoenix.Socket.V2.JSONSerializer, "~> 2.0.0"}
+ ]
],
- longpoll: true
+ longpoll: [
+ connect_info: [:peer_data, :uri, :x_headers],
+ serializer: [
+ {Phoenix.Socket.V1.JSONSerializer, "~> 1.0.0"},
+ {Phoenix.Socket.V2.JSONSerializer, "~> 2.0.0"}
+ ]
+ ]
socket "/live", Phoenix.LiveView.Socket, websocket: [connect_info: [session: @session_options]]
@@ -28,7 +39,7 @@ defmodule RealtimeWeb.Endpoint do
at: "/",
from: :realtime,
gzip: false,
- only: ~w(assets fonts images favicon.svg robots.txt)
+ only: RealtimeWeb.static_paths()
# plug PromEx.Plug, path: "/metrics", prom_ex_module: Realtime.PromEx
@@ -44,7 +55,7 @@ defmodule RealtimeWeb.Endpoint do
param_key: "request_logger",
cookie_key: "request_logger"
- plug Plug.RequestId
+ plug BaggageRequestId, baggage_key: BaggageRequestId.baggage_key()
plug Plug.Telemetry, event_prefix: [:phoenix, :endpoint]
plug Plug.Parsers,
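
For context on the new `serializer` lists: Phoenix matches the version requirement in each pair against the `vsn` query parameter the client sends when connecting, and picks the first serializer whose requirement matches. A quick sketch of that matching rule (not code from the endpoint):

```elixir
# ws://.../socket/websocket?vsn=2.0.0 -> Phoenix.Socket.V2.JSONSerializer
Version.match?("2.0.0", "~> 2.0.0")
#=> true

# ws://.../socket/websocket?vsn=1.0.0 -> Phoenix.Socket.V1.JSONSerializer
Version.match?("1.0.0", "~> 1.0.0")
#=> true
```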
diff --git a/lib/realtime_web/gettext.ex b/lib/realtime_web/gettext.ex
index c9797b1..fb6f86c 100644
--- a/lib/realtime_web/gettext.ex
+++ b/lib/realtime_web/gettext.ex
@@ -20,5 +20,5 @@ defmodule RealtimeWeb.Gettext do
See the [Gettext Docs](https://hexdocs.pm/gettext) for detailed usage.
"""
- use Gettext, otp_app: :realtime
+ use Gettext.Backend, otp_app: :realtime
end
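
`use Gettext.Backend` is the Gettext >= 0.26 way to declare a backend module; call sites then pull in the translation macros by pointing at that backend instead of the old `use Gettext, otp_app: ...` import style. A sketch of what a caller looks like under the new API (the module name is hypothetical):

```elixir
defmodule RealtimeWeb.SomePage do
  # Hypothetical caller; the point is the new :backend option.
  use Gettext, backend: RealtimeWeb.Gettext

  def greeting, do: gettext("Hello")
end
```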
diff --git a/lib/realtime_web/live/admin_live/index.ex b/lib/realtime_web/live/admin_live/index.ex
deleted file mode 100644
index 0249d69..0000000
--- a/lib/realtime_web/live/admin_live/index.ex
+++ /dev/null
@@ -1,24 +0,0 @@
-defmodule RealtimeWeb.AdminLive.Index do
- use RealtimeWeb, :live_view
-
- @impl true
- def mount(_params, _session, socket) do
- now = DateTime.utc_now() |> DateTime.to_string()
-
- socket =
- socket
- |> assign(:server_time, now)
-
- {:ok, socket}
- end
-
- @impl true
- def handle_params(params, _url, socket) do
- {:noreply, apply_action(socket, socket.assigns.live_action, params)}
- end
-
- defp apply_action(socket, :index, _params) do
- socket
- |> assign(:page_title, "Admin - tealbase Realtime")
- end
-end
diff --git a/lib/realtime_web/live/admin_live/index.html.heex b/lib/realtime_web/live/admin_live/index.html.heex
deleted file mode 100644
index 341429c..0000000
--- a/lib/realtime_web/live/admin_live/index.html.heex
+++ /dev/null
@@ -1,2 +0,0 @@
-<h1>tealbase Realtime</h1>
-<div>Admin stuff goes here</div>
\ No newline at end of file
diff --git a/lib/realtime_web/live/components.ex b/lib/realtime_web/live/components.ex
index 064e9f9..dd1d1e1 100644
--- a/lib/realtime_web/live/components.ex
+++ b/lib/realtime_web/live/components.ex
@@ -4,8 +4,8 @@ defmodule RealtimeWeb.Components do
"""
use Phoenix.Component
- alias Phoenix.LiveView.JS
alias Phoenix.HTML.Form
+ alias Phoenix.LiveView.JS
@doc """
Renders an h1 tag.
@@ -16,7 +16,9 @@ defmodule RealtimeWeb.Components do
def h1(assigns) do
~H"""
-    <h1><%= render_slot(@inner_block) %></h1>
+    <h1>
+      <%= render_slot(@inner_block) %>
+    </h1>
"""
end
@@ -29,7 +31,9 @@ defmodule RealtimeWeb.Components do
def h2(assigns) do
~H"""
-    <h2><%= render_slot(@inner_block) %></h2>
+    <h2>
+      <%= render_slot(@inner_block) %>
+    </h2>
"""
end
@@ -42,7 +46,9 @@ defmodule RealtimeWeb.Components do
def h3(assigns) do
~H"""
-    <h3><%= render_slot(@inner_block) %></h3>
+    <h3>
+      <%= render_slot(@inner_block) %>
+    </h3>
"""
end
@@ -62,7 +68,10 @@ defmodule RealtimeWeb.Components do
~H"""