Binary file added public/screenshots/iam-s3-permission.png
Binary file added public/screenshots/iam-ses-permission.png
79 changes: 79 additions & 0 deletions src/app/troubleshooting/issues/aws-s3-ses-access-denied/page.mdx
@@ -0,0 +1,79 @@
---
title: "AWS S3/SES: Access Denied & Permission Errors"
source: ["AWS", "Storage", "Email"]
errorCode: ["403", "AccessDenied", "MessageRejected", "User is not authorized"]
tags: ["Self-hosting", "iam", "permissions", "s3", "ses"]
---

import { Steps, Callout } from 'nextra/components'
import { NumberWithMdx } from "@/app/_components/NumberOfContent.jsx"
import { CardInfo } from "@/app/_components/CardInfo.jsx"

# AWS S3/SES: Access Denied

**The Problem:** Your SpaceDF instance connects to AWS successfully (keys are correct), but actions like **Uploading Files** or **Sending Emails** fail because the IAM User lacks the necessary permission policies.

## 1. Symptoms & Diagnosis

You will see specific error messages in your backend logs depending on which service is blocked.

Run `docker compose logs -f backend` to check:

| Service | Feature Impact | Log Error Message Example |
| :--- | :--- | :--- |
| **S3 Storage** | Cannot upload avatars, device images. | `Access Denied`, `Status Code: 403`<br/>`User is not authorized to perform: s3:PutObject` |
| **SES Email** | Registration emails or alerts are not sent. | `MessageRejected`<br/>`User ... is not authorized to perform: ses:SendEmail` |
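
To quickly confirm which service is being blocked, you can filter the backend logs for these signatures; a quick sketch assuming the service is named `backend`, as in the command above:

```bash copy
# Show only the AWS permission errors from the backend service
docker compose logs backend 2>&1 | grep -iE "AccessDenied|not authorized|MessageRejected"
```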

## 2. The Fix: Add Missing Policies
You do **not** need to create a new user or generate new keys. You simply need to attach the missing policies to your existing IAM User.
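
Before clicking through the console, you can optionally check what is already attached with the AWS CLI; a sketch assuming the CLI is configured with sufficient IAM privileges and your user is named `spacedf-user` (as in the example below):

```bash copy
# List the managed policies currently attached to the IAM user
aws iam list-attached-user-policies --user-name spacedf-user
```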

<Steps>
### Open AWS IAM Console
<NumberWithMdx number={1} className="mt-3">
Log in to your AWS Console.
</NumberWithMdx>
<NumberWithMdx number={2}>
Navigate to **[IAM > Users](https://console.aws.amazon.com/iam/home#/users)**.
</NumberWithMdx>
<NumberWithMdx number={3}>
Click on the user name you created for SpaceDF (e.g., `spacedf-user`).
</NumberWithMdx>

### Attach Permissions
<NumberWithMdx number={1} className="mt-3">
Go to the **Permissions** tab.
</NumberWithMdx>
<NumberWithMdx number={2}>
Open the **Add permissions** dropdown and select **Add permissions**.
</NumberWithMdx>
<NumberWithMdx number={3}>
Select the box: **Attach policies directly**.
</NumberWithMdx>

### Select & Save
Search for and check the specific policy matching your error:

* **For S3 Errors:** Search `S3` > Check **`AmazonS3FullAccess`**.
* **For SES Errors:** Search `SES` > Check **`AmazonSESFullAccess`**.

Click **Next** and then **Add permissions** to confirm.
</Steps>
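
If you prefer the AWS CLI over the console, the same managed policies can be attached with two commands; a sketch assuming the user is named `spacedf-user` and the CLI runs with sufficient IAM privileges:

```bash copy
# Attach the S3 and SES managed policies to the existing user
aws iam attach-user-policy --user-name spacedf-user \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam attach-user-policy --user-name spacedf-user \
  --policy-arn arn:aws:iam::aws:policy/AmazonSESFullAccess
```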

## 3. Verification
Once the policies are attached, the change typically takes effect within a few seconds. You can try the action again without restarting SpaceDF.
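
If you want to double-check outside of SpaceDF, a hedged verification with the AWS CLI (the bucket name is a placeholder; use the value from your `.env`):

```bash copy
# S3: try writing a small test object to the bucket
echo "test" > /tmp/spacedf-test.txt
aws s3 cp /tmp/spacedf-test.txt s3://your-bucket-name/spacedf-test.txt

# SES: confirm the credentials can at least read your sending quota
aws ses get-send-quota
```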

<CardInfo title="Still having issues?" classNames={{
container: "!mt-6 !p-3 shadow-none",
title: "!text-lg",
description: "!text-base"
}}>
If you added the permissions but still get errors, check:
- **Wrong Keys:** Did you accidentally use the keys of a *different* IAM user in your `.env`?
- **Bucket Name:** For S3, ensure `AWS_S3_BUCKET` in `.env` matches your actual bucket name exactly.
- **Sandbox Mode:** For SES, if you get `Email address is not verified`, your AWS account might still be in **SES sandbox mode**. You must verify the "From" address or request production access.
</CardInfo>
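
For the sandbox case in particular, these AWS CLI calls can help; the email address below is a placeholder:

```bash copy
# Check whether the account still has sandbox restrictions (see ProductionAccessEnabled)
aws sesv2 get-account

# Verify a "From" address so SES accepts it while in the sandbox
aws ses verify-email-identity --email-address no-reply@your-domain.com
```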

<Callout type="info">
**Original Setup Guide**
For the complete setup instructions including screenshots, refer back to the [AWS Configuration Section](/docs/getting-started/self-hosting/docker/advanced-setup#s3-service).
</Callout>
@@ -1,7 +1,7 @@
---
title: "Google OAuth: redirect_uri_mismatch"
source: ["Auth"]
errorCode: ["401", "500", "unauthenticated", "server_error"]
errorCode: ["400", "redirect_uri_mismatch"]
tags: ["Self-hosting"]
---
import { Steps } from 'nextra/components'
@@ -14,92 +14,71 @@ import { Callout } from 'nextra/components'


# Google OAuth: `redirect_uri_mismatch`
> This error occurs when the **redirect URI used by SpaceDF does not exactly match** the Authorized Redirect URI configured in the Google Cloud Console.

## Symptoms
Google shows an error page with:

```text
Error 400: redirect_uri_mismatch
```

Google login works in one environment (local or production) but fails in another.

The login flow redirects to Google, then immediately fails.
**The Problem:** The URL that SpaceDF sends to Google does not **exactly match** the URL you allowed in the Google Cloud Console.

## 1. The Golden Rule
To fix this, you must understand how SpaceDF constructs the URL.

| Location | Config Name | Value Format | Example |
| :--- | :--- | :--- | :--- |
| **Your Server (.env)** | `GOOGLE_CALLBACK_URL` | **Base Domain Only** | `https://your-domain.com` |
| **Google Console** | `Authorized Redirect URI` | **Full Path** | `https://your-domain.com/auth/google/callback` |

<CardInfo classNames={{
container: "!mt-6 !p-3 shadow-none",
title: "!text-base",
description: "!text-sm"
}}
title="⚠️ Crucial Difference"
>
SpaceDF automatically appends `/auth/google/callback` to your `.env` value.<br/>
👉 **Do NOT** add the path in your `.env` file.<br/>
👉 **DO** add the path in Google Console.
</CardInfo>
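
As a quick sanity check, this small shell sketch (assuming your `.env` sits in the current directory) prints the exact URI you need to register in Google Console:

```bash copy
# Read the base URL from .env, strip any trailing slash, and append the path SpaceDF adds
BASE=$(grep -E '^GOOGLE_CALLBACK_URL=' .env | cut -d= -f2-)
echo "Register this exact URI in Google Console: ${BASE%/}/auth/google/callback"
```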

## Common causes (SpaceDF-specific)
- `GOOGLE_CALLBACK_URL` in `.env` does not match the redirect URI configured in Google Cloud Console.
- The redirect URI is correct, but:
- Protocol is different (`http` vs `https`)
- Port is different (`3000` vs `80`)
- Trailing slash mismatch
- Production domain is not added to Google OAuth settings.
- Switching from **Quick Start** to **Advanced Setup** without updating OAuth settings.
## 2. Step-by-Step Fix

## Fix

<Steps>
### Verify `GOOGLE_CALLBACK_URL`
Check your `.env` file:

```bash copy
# Development
GOOGLE_CALLBACK_URL=http://localhost:3000

# Production
GOOGLE_CALLBACK_URL=https://your-domain.com
```

> Do not include `/auth/google/callback` in `GOOGLE_CALLBACK_URL`; SpaceDF appends it automatically.

### Update Google Cloud Console
<NumberWithMdx number={1} className="mt-3">
Go to the [Google Cloud Console](https://console.cloud.google.com/)
</NumberWithMdx>
<NumberWithMdx number={2}>
Navigate to **APIs & Services → Credentials**
</NumberWithMdx>
<NumberWithMdx number={3}>
Select your **OAuth 2.0 Client ID**.
</NumberWithMdx>
<NumberWithMdx number={4}>
Under **Authorized redirect URIs**, add:
```bash
# Development
http://localhost:3000/auth/google/callback

# Production
https://your-domain.com/auth/google/callback
```
> The URI must match exactly, including protocol, domain, port, and path.

</NumberWithMdx>

### Restart SpaceDF services
After updating `.env`, restart all services:

```bash copy
docker compose down
docker compose up -d
```
### Check your `.env` file
Open your `.env` file and ensure `GOOGLE_CALLBACK_URL` contains **only the protocol and domain** (and port if local).
```bash copy
# ✅ CORRECT (Base URL only)
GOOGLE_CALLBACK_URL=https://your-domain.com

# ❌ INCORRECT (Do not add the path here)
GOOGLE_CALLBACK_URL=https://your-domain.com/auth/google/callback
```
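
To see the value the running container actually received (rather than what the file says), a hedged check assuming the service is named `backend` and its image ships `printenv`:

```bash copy
docker compose exec backend printenv GOOGLE_CALLBACK_URL
```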

### Update Google Cloud Console
Please follow the [Google OAuth Guide](/docs/getting-started/self-hosting/docker/advanced-setup#google-oauth) in the Advanced Setup section.

### Restart Services
Environment variables are only loaded when the container starts.
```bash copy
docker compose down
docker compose up -d
```
</Steps>

<CardInfo title="Common mistakes to avoid" classNames={{
container: "!mt-6 !p-3 shadow-none",
title: "!text-lg",
description: "!text-base"
}}>
❌ Using localhost in production<br/>
❌ Mixing HTTP and HTTPS<br/>
❌ Missing the `/auth/google/callback` path in Google Console<br/>
❌ Adding trailing slashes inconsistently<br/>
❌ Forgetting to restart services after changing `.env`<br/>
## 3. Common Mistakes Checklist

<CardInfo classNames={{
container: "!mt-6 !p-3 shadow-none",
title: "!text-lg",
description: "!text-base"
}}
title="📝 Troubleshooting Checklist"
>
- Protocol: Did you write `http` in `.env` but `https` in Google Console? (Must match).
- Trailing Slash: Did you put `https://site.com/` in `.env`? (Remove the trailing slash).
- Port: Are you using port `80` or `443` but explicitly wrote `:3000` in one of the URLs?
- Environment: Did you configure the Production URL but are trying to log in from Localhost? (You need both entries in Google Console).
</CardInfo>

<Callout type="info">
{/* <Callout type="info">
***Notes***
- Google OAuth is **not enabled in Quick Start**.
- This error only applies when using **Advanced Setup**.
- Each environment (local, staging, production) requires its own redirect URI entry.
</Callout>
</Callout> */}
@@ -7,74 +7,96 @@ tags: ["Self-hosting", "rabbitmq", "credentials", "volume_conflict", "docker"]

import { Callout } from 'nextra/components'
import { Steps } from 'nextra/components'
import { CardInfo } from "@/app/_components/CardInfo.jsx"

# RabbitMQ: Existing setup with different credentials

> This issue occurs when a **previous RabbitMQ setup already exists** on the machine and was initialized with different credentials than the ones currently configured for SpaceDF.
**The Problem:** You changed the RabbitMQ password in your `.env` file, but the service is still trying to use the old password (or default `guest`), resulting in `ACCESS_REFUSED`.

## Symptoms
- SpaceDF services fail to start or keep restarting
- Logs show RabbitMQ authentication errors such as:
- `ACCESS_REFUSED`
- `authentication failed`
- Updating `RABBITMQ_DEFAULT_USER` or `RABBITMQ_DEFAULT_PASS` does not fix the issue
## 1. Understanding the Cause
RabbitMQ has a specific behavior regarding Docker containers that confuses many users:

## Cause
RabbitMQ **stores credentials in persistent volumes** on first startup.
1. **First Run:** On the very first startup, RabbitMQ reads `RABBITMQ_DEFAULT_USER` and `RABBITMQ_DEFAULT_PASS` from your `.env` file and creates the user account in its database.
2. **Subsequent Runs:** If a database file already exists (in the Docker Volume), RabbitMQ **IGNORES** the `.env` variables. It uses the user/password already stored in the database.

If RabbitMQ was previously started with different credentials:
- Updating environment variables alone is **not enough**
- RabbitMQ will continue using the **old credentials stored in volumes**
- This results in a credential mismatch between SpaceDF services and RabbitMQ
**Conclusion:** Simply changing the `.env` file and restarting the container (`docker compose restart`) **will not update the password**.
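
You can confirm which users actually exist inside the running broker; a sketch assuming the compose service is named `rabbitmq`:

```bash copy
# Lists the users stored in RabbitMQ's own database (the ones that really apply)
docker compose exec rabbitmq rabbitmqctl list_users
```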

## How to verify

Check if an existing RabbitMQ container or volume is present:
```bash copy
docker ps -a | grep rabbitmq
```

```bash copy
docker volume ls | grep rabbitmq
```
If RabbitMQ volumes exist, they may contain credentials from a previous setup.
## 2. Solution: Reset the Volume
To force RabbitMQ to accept the new password from your `.env` file, you must delete its old database volume so it re-initializes from scratch.

## Fix (reset RabbitMQ credentials)

<Callout type="warning">
This will remove all RabbitMQ data.
Assume this is safe only in development or on fresh setups.
</Callout>
<CardInfo classNames={{
container: "!mt-6 !p-3 shadow-none",
title: "!text-base",
description: "!text-sm"
}}
title="⛔ Data Loss Warning:"
>
The steps below will **delete all existing queues and messages** in RabbitMQ.<br/>
This is safe for new installations or troubleshooting connectivity, but be careful in Production.
</CardInfo>
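
If you want to keep your queue and exchange definitions (messages are not included) before resetting, a sketch assuming the service is named `rabbitmq`, a reasonably recent RabbitMQ (3.8.2+), and Docker Compose v2:

```bash copy
# Export definitions (users, vhosts, queues, exchanges, bindings) to a JSON file
docker compose exec rabbitmq rabbitmqctl export_definitions /tmp/definitions.json
# Copy the file out of the container
docker compose cp rabbitmq:/tmp/definitions.json ./rabbitmq-definitions.json
```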

<Steps>
### Stop all SpaceDF services:

```bash copy
docker compose down
```

### Remove existing RabbitMQ volumes:
```bash copy
docker volume rm <rabbitmq_volume_name>
```
### Stop Services
Stop the running containers to release the volume lock.
```bash copy
docker compose down
```

### Locate & Remove the Volume
Find the volume associated with RabbitMQ and remove it.

**Option A: The Clean/Automatic Way (Recommended)** If you don't mind resetting all volumes (Database, RabbitMQ, Storage):
```bash copy
# This removes containers AND all data volumes
docker compose down -v
```

**Option B: The Surgical Way (RabbitMQ Only)** If you want to keep other data (like PostgreSQL) and only reset RabbitMQ:
```bash copy
# 1. List volumes to find the exact name (look for 'rabbitmq' or 'mq')
docker volume ls

# 2. Remove the specific volume
docker volume rm spacedf_rabbitmq_data
```
> (Note: Replace `spacedf_rabbitmq_data` with the actual volume name you found in step 1.)

### Update `.env` & Restart
Ensure your `.env` has the desired credentials, then start fresh.

```bash copy
# 1. Check config
cat .env | grep RABBITMQ

# 2. Start services
docker compose up -d
```

RabbitMQ will see "no database file" and create a fresh user using your new `.env` values.
</Steps>

### Verify credentials in `.env`:
```bash copy
RABBITMQ_DEFAULT_USER=your_username
RABBITMQ_DEFAULT_PASS=your_password
```

### Restart SpaceDF:
```bash copy
docker compose up -d
```

RabbitMQ will be re-initialized using the new credentials.
</Steps>

<Callout type="info">
***Notes***
- Changing RabbitMQ credentials **always requires resetting volumes**
- Do not reuse RabbitMQ credentials across projects on the same host
- For production systems, plan credential changes carefully to avoid data loss
</Callout>

## 3. Alternative: Change Password via CLI (Advanced)
If you are in **Production** and cannot delete the volume/messages, you must use the command line to update the user manually.

```bash copy
# 1. Enter the running container
docker compose exec rabbitmq bash

# 2. Change the password manually
rabbitmqctl change_password <username> <new_password>
```
> Replace `<username>` and `<new_password>` with the values from your `.env` file.
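
Whichever route you take, you can confirm that RabbitMQ now accepts the credentials from your `.env`; the service name `rabbitmq` is an assumption:

```bash copy
# Exits successfully only if the username/password pair is accepted by the broker
docker compose exec rabbitmq rabbitmqctl authenticate_user your_username your_password
```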


<CardInfo classNames={{
container: "!mt-6 !p-3 shadow-none",
title: "!text-base",
description: "!text-sm"
}}
title="ℹ️ Key Takeaways"
>
* **Stickiness:** Credentials are "sticky". RabbitMQ only reads the `.env` file once during the initial volume creation.
* **Production Safety:** If you need to change passwords on a live system (Production), **do NOT delete the volume**. Instead, use the `rabbitmqctl` CLI command mentioned in Section 3 to update the password safely without losing data.
* **Isolation:** Avoid reusing the same RabbitMQ user/password across different projects on the same server to prevent accidental cross-connection or security conflicts.
</CardInfo>