diff --git a/public/screenshots/iam-s3-permission.png b/public/screenshots/iam-s3-permission.png
new file mode 100644
index 0000000..cee4b04
Binary files /dev/null and b/public/screenshots/iam-s3-permission.png differ
diff --git a/public/screenshots/iam-ses-permission.png b/public/screenshots/iam-ses-permission.png
new file mode 100644
index 0000000..d5f9152
Binary files /dev/null and b/public/screenshots/iam-ses-permission.png differ
diff --git a/src/app/troubleshooting/issues/aws-s3-ses-access-denied/page.mdx b/src/app/troubleshooting/issues/aws-s3-ses-access-denied/page.mdx
new file mode 100644
index 0000000..6b52b0e
--- /dev/null
+++ b/src/app/troubleshooting/issues/aws-s3-ses-access-denied/page.mdx
@@ -0,0 +1,79 @@
+---
+title: "AWS S3/SES: Access Denied & Permission Errors"
+source: ["AWS", "Storage", "Email"]
+errorCode: ["403", "AccessDenied", "MessageRejected", "User is not authorized"]
+tags: ["Self-hosting", "iam", "permissions", "s3", "ses"]
+---
+
+import { Steps, Callout } from 'nextra/components'
+import { NumberWithMdx } from "@/app/_components/NumberOfContent.jsx"
+import { CardInfo } from "@/app/_components/CardInfo.jsx"
+
+# AWS S3/SES: Access Denied
+
+**The Problem:** Your SpaceDF instance connects to AWS successfully (keys are correct), but actions like **Uploading Files** or **Sending Emails** fail because the IAM User lacks the necessary permission policies.
+
+## 1. Symptoms & Diagnosis
+
+You will see specific error messages in your backend logs depending on which service is blocked.
+
+Run `docker compose logs -f backend` to check:
+
+| Service | Feature Impact | Log Error Message Example |
+| :--- | :--- | :--- |
+| **S3 Storage** | Cannot upload avatars or device images. | `Access Denied`, `Status Code: 403`, `User is not authorized to perform: s3:PutObject` |
+| **SES Email** | Registration emails or alerts are not sent. | `MessageRejected`, `User ... is not authorized to perform: ses:SendEmail` |
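If you want to classify the failure in a script rather than by eye, matching on the action prefix in the log line is enough. A minimal sketch; the log line, account ID, and result strings below are illustrative, not actual SpaceDF output:

```shell
# Illustrative log line only; in practice pipe in real output, e.g.:
#   docker compose logs backend --tail 200
LOG_LINE='User arn:aws:iam::123456789012:user/spacedf-user is not authorized to perform: s3:PutObject'

case "$LOG_LINE" in
  *"perform: s3:"*)    RESULT="S3 permission missing (attach AmazonS3FullAccess)" ;;
  *"perform: ses:"*)   RESULT="SES permission missing (attach AmazonSESFullAccess)" ;;
  *"MessageRejected"*) RESULT="SES rejected the message (check sandbox/verification)" ;;
  *)                   RESULT="unclassified" ;;
esac
echo "$RESULT"
```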
+
+## 2. The Fix: Add Missing Policies
+You do **not** need to create a new user or generate new keys. You simply need to attach the missing policies to your existing IAM User.
+
+
+### Open AWS IAM Console
+
+ Log in to your AWS Console.
+
+
+ Navigate to **[IAM > Users](https://console.aws.amazon.com/iam/home#/users)**.
+
+
+ Click on the user name you created for SpaceDF (e.g., `spacedf-user`).
+
+
+### Attach Permissions
+
+ Go to the **Permissions** tab.
+
+
 Click the **Add permissions** dropdown, then select **Add permissions**.
+
+
+ Select the box: **Attach policies directly**.
+
+
+### Select & Save
+Search for and check the specific policy matching your error:
+
+* **For S3 Errors:** Search `S3` > Check **`AmazonS3FullAccess`**.
+* **For SES Errors:** Search `SES` > Check **`AmazonSESFullAccess`**.
+
+Click **Next** and then **Add permissions** to confirm.
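
If you have the AWS CLI configured with the same access keys, you can confirm the attachment from the terminal (`spacedf-user` is the example user name from the steps above; substitute yours):

```shell
# Lists the managed policies attached to the user; after the steps above,
# both AmazonS3FullAccess and AmazonSESFullAccess should appear in the output.
aws iam list-attached-user-policies --user-name spacedf-user
```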
+
+
+## 3. Verification
+Once the policies are attached, IAM changes typically take effect within seconds. Retry the failing action; no SpaceDF restart is required.
+
+
+ If you added the permissions but still get errors, check:
+ - **Wrong Keys:** Did you accidentally use the keys of a *different* IAM user in your `.env`?
+ - **Bucket Name:** For S3, ensure `AWS_S3_BUCKET` in `.env` matches your actual bucket name exactly.
+ - **Sandbox Mode:** For SES, if you get `Email address is not verified`, your AWS account might still be in **SES Sandbox mode**. You must verify the "From" address or request production access.
+
+
+
+ **Original Setup Guide**
+ For the complete setup instructions including screenshots, refer back to the [AWS Configuration Section](/docs/getting-started/self-hosting/docker/advanced-setup#s3-service).
+
\ No newline at end of file
diff --git a/src/app/troubleshooting/issues/google-oauth-redirect-uri-mismatch/page.mdx b/src/app/troubleshooting/issues/google-oauth-redirect-uri-mismatch/page.mdx
index 4b1cf8b..4094f15 100644
--- a/src/app/troubleshooting/issues/google-oauth-redirect-uri-mismatch/page.mdx
+++ b/src/app/troubleshooting/issues/google-oauth-redirect-uri-mismatch/page.mdx
@@ -1,7 +1,7 @@
---
title: "Google OAuth: redirect_uri_mismatch"
source: ["Auth"]
-errorCode: ["401", "500", "unauthenticated", "server_error"]
+errorCode: ["400", "redirect_uri_mismatch"]
tags: ["Self-hosting"]
---
import { Steps } from 'nextra/components'
@@ -14,92 +14,71 @@ import { Callout } from 'nextra/components'
# Google OAuth: `redirect_uri_mismatch`
-> This error occurs when the **redirect URI used by SpaceDF does not exactly match** the Authorized Redirect URI configured in the Google Cloud Console.
-## Symptoms
-Google shows an error page with:
-
-```text
-Error 400: redirect_uri_mismatch
-```
-
-Google login works in one environment (local or production) but fails in another.
-
-The login flow redirects to Google, then immediately fails.
+**The Problem:** The URL that SpaceDF sends to Google does not **exactly match** the URL you allowed in the Google Cloud Console.
+
+## 1. The Golden Rule
+To fix this, you must understand how SpaceDF constructs the URL.
+
+| Location | Config Name | Value Format | Example |
+| :--- | :--- | :--- | :--- |
+| **Your Server (.env)** | `GOOGLE_CALLBACK_URL` | **Base Domain Only** | `https://your-domain.com` |
+| **Google Console** | `Authorized Redirect URI` | **Full Path** | `https://your-domain.com/auth/google/callback` |
+
+
+ SpaceDF automatically appends `/auth/google/callback` to your `.env` value.
+ ❌ **Do NOT** add the path in your `.env` file.
+ ✅ **DO** add the path in Google Console.
+
-## Common causes (SpaceDF-specific)
-- `GOOGLE_CALLBACK_URL` in `.env` does not match the redirect URI configured in Google Cloud Console.
-- The redirect URI is correct, but:
- - Protocol is different (`http` vs `https`)
- - Port is different (`3000` vs `80`)
- - Trailing slash mismatch
-- Production domain is not added to Google OAuth settings.
-- Switching from **Quick Start** to **Advanced Setup** without updating OAuth settings.
+## 2. Step-by-Step Fix
-## Fix
-
-### Verify `GOOGLE_CALLBACK_URL`
-Check your .env file:
-
-```bash copy
-# Development
-GOOGLE_CALLBACK_URL=http://localhost:3000
-
-# Production
-GOOGLE_CALLBACK_URL=https://your-domain.com
-```
-
-> Do not include `/auth/google/callback` in `GOOGLE_CALLBACK_URL`
-SpaceDF appends it automatically.
-
-### Update Google Cloud Console
-
- Go to the [Google Cloud Console](https://console.cloud.google.com/)
-
-
- Navigate to **APIs & Services â Credentials**
-
-
- Select your **OAuth 2.0 Client ID**.
-
-
- Under **Authorized redirect URIs**, add:
-```bash
-# Development
-http://localhost:3000/auth/google/callback
-
-# Production
-https://your-domain.com/auth/google/callback
-```
-> The URI must match exactly, including protocol, domain, port, and path.
-
-
-
-### Restart SpaceDF services
-After updating `.env`, restart all services:
-
-```bash copy
-docker compose down
-docker compose up -d
-```
+ ### Check your `.env` file
+ Open your `.env` file and ensure `GOOGLE_CALLBACK_URL` contains **only the protocol and domain** (and port if local).
+ ```bash copy
+ # ✅ CORRECT (Base URL only)
+ GOOGLE_CALLBACK_URL=https://your-domain.com
+
+ # ❌ INCORRECT (Do not add the path here)
+ GOOGLE_CALLBACK_URL=https://your-domain.com/auth/google/callback
+ ```
+
+ ### Update Google Cloud Console
+ Follow the [Google OAuth Guide](/docs/getting-started/self-hosting/docker/advanced-setup#google-oauth) in the Advanced Setup section.
+
+ ### Restart Services
+ Environment variables are only loaded when the container starts.
+ ```bash copy
+ docker compose down
+ docker compose up -d
+ ```
-
- â Using localhost in production
- â Mixing HTTP and HTTPS
- â Missing the `/auth/google/callback path` in `Google Console`
- â Adding trailing slashes inconsistently
- â Forgetting to restart services after changing `.env`
+## 3. Common Mistakes Checklist
+
+
+ - Protocol: Did you write `http` in `.env` but `https` in Google Console? (Must match).
+ - Trailing Slash: Did you put `https://site.com/` in `.env`? (Remove the trailing slash).
+ - Port: Are you serving on port `80` or `443` but explicitly wrote `:3000` in one of the URLs?
+ - Environment: Did you configure the Production URL but are trying to log in from Localhost? (You need both entries in Google Console).
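
The checklist above boils down to one exact string comparison. This sketch (plain shell, placeholder values) reproduces how the final redirect URI is derived from the base URL so you can spot a mismatch quickly:

```shell
# Placeholder values; substitute your real ones.
GOOGLE_CALLBACK_URL="https://your-domain.com"               # from .env (base only)
CONSOLE_URI="https://your-domain.com/auth/google/callback"  # from Google Console

# Strip any trailing slash, then append the fixed path SpaceDF uses.
SENT_URI="${GOOGLE_CALLBACK_URL%/}/auth/google/callback"

if [ "$SENT_URI" = "$CONSOLE_URI" ]; then
  echo "MATCH: $SENT_URI"
else
  echo "MISMATCH: sent=$SENT_URI console=$CONSOLE_URI"
fi
```

If the two strings differ by even one character (protocol, port, trailing slash), Google returns `redirect_uri_mismatch`.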
-
+{/*
***Notes***
- Google OAuth is **not enabled in Quick Start**.
- This error only applies when using **Advanced Setup**.
- Each environment (local, staging, production) requires its own redirect URI entry.
-
\ No newline at end of file
+ */}
\ No newline at end of file
diff --git a/src/app/troubleshooting/issues/rabbitmq-existing-setup-with-different-credentials/page.mdx b/src/app/troubleshooting/issues/rabbitmq-existing-setup-with-different-credentials/page.mdx
index 5f5fd36..5adcfae 100644
--- a/src/app/troubleshooting/issues/rabbitmq-existing-setup-with-different-credentials/page.mdx
+++ b/src/app/troubleshooting/issues/rabbitmq-existing-setup-with-different-credentials/page.mdx
@@ -7,74 +7,96 @@ tags: ["Self-hosting", "rabbitmq", "credentials", "volume_conflict", "docker"]
import { Callout } from 'nextra/components'
import { Steps } from 'nextra/components'
+import { CardInfo } from "@/app/_components/CardInfo.jsx"
# RabbitMQ: Existing setup with different credentials
-> This issue occurs when a **previous RabbitMQ setup already exists** on the machine and was initialized with different credentials than the ones currently configured for SpaceDF.
+**The Problem:** You changed the RabbitMQ password in your `.env` file, but the service is still trying to use the old password (or default `guest`), resulting in `ACCESS_REFUSED`.
-## Symptoms
-- SpaceDF services fail to start or keep restarting
-- Logs show RabbitMQ authentication errors such as:
- - `ACCESS_REFUSED`
- - `authentication failed`
-- Updating `RABBITMQ_DEFAULT_USER` or `RABBITMQ_DEFAULT_PASS` does not fix the issue
+## 1. Understanding the Cause
+RabbitMQ has a specific behavior regarding Docker containers that confuses many users:
-## Cause
-RabbitMQ **stores credentials in persistent volumes** on first startup.
+1. **First Run:** On the very first startup, RabbitMQ reads `RABBITMQ_DEFAULT_USER` and `RABBITMQ_DEFAULT_PASS` from your `.env` file and creates the user account in its database.
+2. **Subsequent Runs:** If a database file already exists (in the Docker Volume), RabbitMQ **IGNORES** the `.env` variables. It uses the user/password already stored in the database.
-If RabbitMQ was previously started with different credentials:
-- Updating environment variables alone is **not enough**
-- RabbitMQ will continue using the **old credentials stored in volumes**
-- This results in a credential mismatch between SpaceDF services and RabbitMQ
+**Conclusion:** Simply changing the `.env` file and restarting the container (`docker compose restart`) **will not update the password**.
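
To confirm which credentials the broker actually holds (rather than what `.env` says), you can ask RabbitMQ directly. This assumes the service is named `rabbitmq` in your compose file; substitute your real username and password:

```shell
# Succeeds only if the user/password pair exists in RabbitMQ's own database.
# If this fails while your .env looks correct, the volume still holds old credentials.
docker compose exec rabbitmq rabbitmqctl authenticate_user your_username your_password
```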
-## How to verify
-Check if an existing RabbitMQ container or volume is present:
-```bash copy
-docker ps -a | grep rabbitmq
-```
-
-```bash copy
-docker volume ls | grep rabbitmq
-```
-If RabbitMQ volumes exist, they may contain credentials from a previous setup.
+## 2. Solution: Reset the Volume
+To force RabbitMQ to accept the new password from your `.env` file, you must delete its old database volume so it re-initializes from scratch.
-## Fix (reset RabbitMQ credentials)
-
-
-This will remove all RabbitMQ data.
-Only assume safe in development or fresh setups.
-
+
+ The steps below will **delete all existing queues and messages** in RabbitMQ.
+ This is safe for new installations or troubleshooting connectivity, but be careful in Production.
+
-### Stop all SpaceDF services:
-
-```bash copy
-docker compose down
-```
-
-### Remove existing RabbitMQ volumes:
-```bash copy
-docker volume rm
-```
+ ### Stop Services
+ Stop the running containers to release the volume lock.
+ ```bash copy
+ docker compose down
+ ```
+
+ ### Locate & Remove the Volume
+ Find the volume associated with RabbitMQ and remove it.
+
+ **Option A: The Clean/Automatic Way (Recommended)** If you don't mind resetting all volumes (Database, RabbitMQ, Storage):
+ ```bash copy
+ # This removes containers AND all data volumes
+ docker compose down -v
+ ```
+
+ **Option B: The Surgical Way (RabbitMQ Only)** If you want to keep other data (like PostgreSQL) and only reset RabbitMQ:
+ ```bash copy
+ # 1. List volumes to find the exact name (look for 'rabbitmq' or 'mq')
+ docker volume ls
+
+ # 2. Remove the specific volume
+ docker volume rm spacedf_rabbitmq_data
+ ```
+ > (Note: Replace `spacedf_rabbitmq_data` with the actual volume name you found in step 1.)
+
+ ### Update `.env` & Restart
+ Ensure your `.env` has the desired credentials, then start fresh.
+
+ ```bash copy
+ # 1. Check config
+ cat .env | grep RABBITMQ
+
+ # 2. Start services
+ docker compose up -d
+ ```
+
+ RabbitMQ will find no existing database file and will create a fresh user from your new `.env` values.
+
-### Verify credentials in `.env`:
-```bash copy
-RABBITMQ_DEFAULT_USER=your_username
-RABBITMQ_DEFAULT_PASS=your_password
-```
+## 3. Alternative: Change Password via CLI (Advanced)
+If you are in **Production** and cannot delete the volume/messages, you must use the command line to update the user manually.
-### Restart SpaceDF:
```bash copy
-docker compose up -d
-```
-
-RabbitMQ will be re-initialized using the new credentials.
-
+# 1. Enter the running container
+docker compose exec rabbitmq bash
-
-***Notes***
-- Changing RabbitMQ credentials **always requires resetting volumes**
-- Do not reuse RabbitMQ credentials across projects on the same host
-- For production systems, plan credential changes carefully to avoid data loss
-
\ No newline at end of file
+# 2. Change the password manually
+rabbitmqctl change_password your_username your_new_password
+```
+> Replace `your_username` and `your_new_password` with the values from your `.env` file.
+
+
+
+* **Stickiness:** Credentials are "sticky". RabbitMQ only reads the `.env` file once during the initial volume creation.
+* **Production Safety:** If you need to change passwords on a live system (Production), **do NOT delete the volume**. Instead, use the `rabbitmqctl` CLI command mentioned in Section 3 to update the password safely without losing data.
+* **Isolation:** Avoid reusing the same RabbitMQ user/password across different projects on the same server to prevent accidental cross-connection or security conflicts.
+
\ No newline at end of file
diff --git a/src/content/getting-started/self-hosting/docker/advanced-setup.mdx b/src/content/getting-started/self-hosting/docker/advanced-setup.mdx
index 84cf060..681a01d 100644
--- a/src/content/getting-started/self-hosting/docker/advanced-setup.mdx
+++ b/src/content/getting-started/self-hosting/docker/advanced-setup.mdx
@@ -113,6 +113,9 @@ git clone https://github.com/Space-DF/spacedf-core.git
# Switch to your project directory
cd spacedf-core
+# Initialize submodules (Important!)
+git submodule update --init --recursive
+
# Copy the env vars
cp .env.example .env
@@ -542,11 +545,21 @@ Now you need to create a "Service User" that has permission to read/write to tha
**Set Permissions**
- - Select **Attach policies directly**.
- - In the search bar, type `AmazonS3FullAccess`.
- - Check the box next to `AmazonS3FullAccess`
+ - Select **Attach policies directly**. You need to search for and check **two** policies:
+ - **Storage Access:** Search for `S3` and select **`AmazonS3FullAccess`**
+
+ - **Email Access:** Clear the search and type `SES`. Select **`AmazonSESFullAccess`**.
+
+
+
+ ℹ️ **Why SES:** Although Email Configuration is covered in a later section, we enable `AmazonSESFullAccess` now so you don't have to come back and edit this user later. One user, two capabilities.
+
+
- Click **Next**, then click **Create user**
- > Note: For advanced security, you can create a custom policy restricted to a single bucket later
**Generate Keys**
@@ -675,7 +688,7 @@ Breakdown of the connection string components:
---
-#### Service Credentials
+##### Service Credentials
Required
The following services (Dashboard & Device) operate independently. Each requires its own Database Password and unique Secret Key to function securely.
@@ -713,9 +726,9 @@ DEVICE_SECRET_KEY=your_device_secret_key
##### Telemetry service
Required
-This variable points to the internal Docker container that handles device data.
+This section configures the high-performance components responsible for handling real-time device data.
-**Configuration** In 99% of cases (Local & Production), you **should keep the default value**.
+**1. Service URL** This variable points to the internal Docker container that processes data.
```bash copy
# Default (recommended)
@@ -726,11 +739,26 @@ TELEMETRY_SERVICE_URL=http://telemetry:8080
container: "!mt-6 !p-3 shadow-none",
title: "!text-base",
description: "!text-sm"
-}}
- title="â Why"
->
- - **No Action Needed:** This service runs automatically inside Docker.
- - **Internal Access:** SpaceDF uses this URL to talk to the service internally. It does not need a public domain or HTTPS.
+}}>
+ ℹ️ **Configuration Note:** In 99% of cases (Local & Production), you **should keep the default URL**.
+ SpaceDF uses this to talk to the service internally via the Docker network.
+
+
+**2. TimescaleDB Password** Define the password for the time-series database where all sensor data is stored.
+
+```bash copy
+# Default: postgres
+# ⚠️ CHANGE THIS for Production!
+TIMESCALEDB_PASSWORD=postgres
+```
+
+
+ ⚠️ **Security Risk:** The default password `postgres` is publicly known.
+ If you are deploying to a public server, you **MUST** change this to a strong, random string to protect your device data.
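
One way to generate a suitable value (assumes `openssl` is available; any strong random generator works):

```shell
# 32 random bytes, base64-encoded -> a 44-character string.
TIMESCALEDB_PASSWORD="$(openssl rand -base64 32 | tr -d '\n')"
echo "length: ${#TIMESCALEDB_PASSWORD}"
```

Quote the value in your `.env` file, since base64 output can contain `+`, `/`, and a trailing `=`.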
**When to change this?**
@@ -772,29 +800,27 @@ The Bridge Service acts as a listener. It connects to the MQTT Broker to receive
Enter the credentials required to connect to the MQTT Broker (usually the same credentials used for EMQX clients).
```bash copy
-MQTT_BROKER_BRIDGE_USERNAME=your_bridge_username
+MQTT_BROKER_BRIDGE_USERNAME=your_bridge_username  # Default: BrokerBridgeService
MQTT_BROKER_BRIDGE_PASSWORD=your_bridge_password
```
-**2. Topic Subscription**
+**2. Topic Subscription** Define which MQTT topics SpaceDF should listen to.
-Define which MQTT topics SpaceDF should listen to.
+**This value is pre-configured in your `.env.example` file**. You should simply use the default value provided.
```bash copy
-MQTT_TOPICS=device/+/telemetry,device/+/status
+# ⚠️ DO NOT EDIT - Use the default value from .env.example
+MQTT_TOPICS="tenant/+/transformed/device/location"
```
-
-**Syntax Guide:**
-- Comma (`,`): Use to separate multiple topics.
-- Plus (`+`): Wildcard for a single level (e.g., matching any device ID).
-- Hash (`#`): Wildcard for all remaining levels.
-
- đ **Example:** `device/+/telemetry` will match `device/sensor-01/telemetry` and `device/sensor-02/telemetry`.
+ container: "!mt-6 !p-3 shadow-none",
+ title: "!text-base",
+ description: "!text-sm"
+ }}
+ title="⚠️ CRITICAL CAUTION: DO NOT CHANGE"
+ >
+ Please do not modify the `MQTT_TOPICS` structure.
+ ***Reason:*** The SpaceDF Backend uses specific parsers that rely on this exact topic format to identify the **Device ID** (represented by `+`) and the **Data Type** (Telemetry/Attributes). Changing this pattern will break the data ingestion pipeline, and your devices will fail to update.
---
@@ -934,12 +960,38 @@ This variable defines the public address where your Backend API is accessible.
title: "!text-base",
description: "!text-sm"
}}>
- â ī¸ **Requirement:** In production, you should use HTTPS for security. .
+ ⚠️ **Requirement:** In production, you should use HTTPS for security.
+
+
+
+
+**2. Database Password** Define the password for the core PostgreSQL database used by the Bootstrap service.
+
+
+
+ Default Password: For quick local setup, you can use the default provided password.
+ ```bash copy
+ BOOTSTRAP_POSTGRES_PASSWORD="Abcd@1234"
+ ```
+
+
+
+ Production Security: You MUST change this to a strong, random password for live deployments.
+ ```bash copy
+ BOOTSTRAP_POSTGRES_PASSWORD="YourStrongUniquePassword"
+ ```
+
+
+ ⚠️ **Security Notice:** Using the default password in production makes your database vulnerable to unauthorized access.
-**2. Access Control (`CORS`)**
+**3. Access Control (`CORS`)**
This setting controls which websites (frontends) are allowed to talk to your backend.
@@ -983,7 +1035,7 @@ This prevents unauthorized websites from using your API.
- If you forget, the app will show a `Network Error` or `CORS-ERROR`.
-**3. Security Key**
+**4. Security Key**
This key secures the internal operations of the backend.
@@ -1330,14 +1382,14 @@ The Dashboard connects directly to the MQTT Broker via **WebSockets** to display
```bash copy
# 1. Connection Details
DASHBOARD_MQTT_PROTOCOL=ws
- DASHBOARD_MQTT_PORT=8083
- DASHBOARD_MQTT_BROKER=localhost
+ DASHBOARD_MQTT_PORT=8883
+ DASHBOARD_MQTT_BROKER=emqx.localhost:8000
# 2. Public Auth (Read-Only User recommended)
DASHBOARD_MQTT_USERNAME=dashboard_user
DASHBOARD_MQTT_PASSWORD=your_password
```
- > (Note: Ensure Port `8083` is exposed in your EMQX Docker container).
+ > (Note: Ensure Port `8883` is exposed in your EMQX Docker container).
@@ -1358,7 +1410,7 @@ The Dashboard connects directly to the MQTT Broker via **WebSockets** to display
**Variable Details:**
- `DASHBOARD_MQTT_PROTOCOL`: `ws` for local, `wss` for production (SSL).
- `DASHBOARD_MQTT_BROKER`: The domain pointing to your MQTT Broker.
- - `DASHBOARD_MQTT_PORT`: The WebSocket port (Default EMQX: `8083` for ws, `8084` for wss).
+ - `DASHBOARD_MQTT_PORT`: The WebSocket port used by this setup (`8883`). Note that stock EMQX defaults are `8083` for ws and `8084` for wss.