2 changes: 2 additions & 0 deletions .gitignore
@@ -36,3 +36,5 @@ yarn-error.log*
# Misc
.DS_Store
*.pem

db-data/
19 changes: 19 additions & 0 deletions README.md
@@ -268,6 +268,25 @@ bookmarket/

### Quick Start

#### Option 1: Automated Setup (Recommended)

```bash
git clone https://github.com/yourusername/bookmarket.git
cd bookmarket
./setup.sh
```

The setup script will:

- ✅ Check prerequisites (Node.js 18+, pnpm, Docker)
- ✅ Install dependencies
- ✅ Create environment files from examples
- ✅ Start PostgreSQL database
- ✅ Run database migrations
- ✅ Provide next steps
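
The prerequisite check in the first step can be sketched in a few lines. `setup.sh` itself is not shown in this diff, so the helper below is only illustrative of the Node.js 18+ check, not the script's actual code:

```javascript
// Hypothetical sketch of the "Node.js 18+" prerequisite check from setup.sh.
// In a real script, `version` would come from `node --version` or process.versions.node.
function meetsNodeRequirement(version, minMajor = 18) {
  const major = parseInt(version.split('.')[0], 10);
  return major >= minMajor;
}

console.log(meetsNodeRequirement('18.19.0'));
console.log(meetsNodeRequirement('16.20.2'));
```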

#### Option 2: Manual Setup

1. **Clone the repository**

14 changes: 14 additions & 0 deletions apps/server/.env.example
@@ -0,0 +1,14 @@
POSTGRES_HOST=
POSTGRES_PORT=
POSTGRES_USER=
POSTGRES_PASSWORD=
POSTGRES_NAME=

JWT_SECRET=
JWT_TOKEN_AUDIENCE=
JWT_TOKEN_ISSUER=
JWT_ACCESS_TOKEN_TTL=
JWT_REFRESH_TOKEN_TTL=

GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=
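
Every value in this example file ships empty, so a fail-fast check at server startup can catch missing configuration early. A hypothetical helper (not part of the PR; the variable list mirrors the file above):

```javascript
// Hypothetical startup check: report which required variables are missing.
const required = ['POSTGRES_HOST', 'POSTGRES_PORT', 'POSTGRES_USER', 'POSTGRES_PASSWORD', 'POSTGRES_NAME', 'JWT_SECRET'];

function missingVars(env) {
  return required.filter(name => !env[name]);
}

// Example with a partial environment object
console.log(missingVars({ POSTGRES_HOST: 'localhost', JWT_SECRET: 'dev' }));
```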
4 changes: 3 additions & 1 deletion apps/server/package.json
@@ -19,8 +19,10 @@
"test:debug": "node --inspect-brk -r tsconfig-paths/register -r ts-node/register node_modules/.bin/jest --runInBand",
"test:e2e": "jest --config ./test/jest-e2e.json",
"typeorm": "node --require ts-node/register ./node_modules/typeorm/cli.js",
"db:setup": "node scripts/db-setup.js",
"db:start": "cd ../.. && docker-compose up -d db",
"migration:generate": "npm run typeorm -- migration:generate -d src/data-source.ts -p",
"migration:run": "npm run typeorm -- migration:run -d src/data-source.ts",
"migration:run": "npm run db:setup && npm run typeorm -- migration:run -d src/data-source-dev.ts",
Copilot AI (Jul 2, 2025): Chaining `db:setup` with `migration:run` causes the database to be dropped and recreated on every migration run, risking data loss in non-fresh environments. Separate these actions.

Suggested change:

```diff
-    "migration:run": "npm run db:setup && npm run typeorm -- migration:run -d src/data-source-dev.ts",
+    "migration:run": "npm run typeorm -- migration:run -d src/data-source-dev.ts",
```
Copilot AI (Jul 2, 2025): The `migration:revert` script uses `src/data-source.ts` while `migration:run` uses `src/data-source-dev.ts`, leading to inconsistency. Align both to the same data-source file.

Suggested change:

```diff
-    "migration:run": "npm run db:setup && npm run typeorm -- migration:run -d src/data-source-dev.ts",
+    "migration:run": "npm run db:setup && npm run typeorm -- migration:run -d src/data-source.ts",
```
"migration:revert": "npm run typeorm -- migration:revert -d src/data-source.ts"
},
"dependencies": {
133 changes: 133 additions & 0 deletions apps/server/scripts/db-setup.js
@@ -0,0 +1,133 @@
#!/usr/bin/env node

const { execSync } = require('child_process');
const { config } = require('dotenv');

// Load environment variables
config();

const DB_USER = process.env.POSTGRES_USER;
const DB_NAME = process.env.POSTGRES_NAME;
const DB_HOST = process.env.POSTGRES_HOST || 'localhost';
const DB_PORT = process.env.POSTGRES_PORT || '5432';

console.log('🔄 Setting up database...');

// Function to check if Docker container is running and start it if needed
function ensureDockerContainer() {
  try {
    // Check if any PostgreSQL container is running on port 5432
    const result = execSync(`docker ps --format "table {{.Names}}\t{{.Ports}}" | grep :5432`, {
      stdio: 'pipe',
      encoding: 'utf8'
    });

    if (result.trim()) {
      console.log('✅ PostgreSQL container is already running on port 5432');
      return;
    }
  } catch (error) {
    // No container running on port 5432
  }

  console.log('🐳 Starting PostgreSQL container...');
  try {
    execSync('cd ../.. && docker-compose up -d db', { stdio: 'inherit' });
    console.log('✅ PostgreSQL container started');
    // Give it a moment to initialize
    require('child_process').execSync('sleep 3');
  } catch (error) {
    // If it fails due to port conflict, the container might already be running
    if (error.message.includes('port is already allocated')) {
      console.log('⚠️ Port 5432 is already in use - checking if PostgreSQL is accessible...');
      return; // Continue with connection check
    }
    console.error('❌ Failed to start PostgreSQL container:', error.message);
    console.log('💡 Make sure Docker is running and docker-compose.yml exists');
    process.exit(1);
  }
}
Comment on lines +17 to +49

🛠️ Refactor suggestion: Improve Docker container detection and naming robustness.

The container detection logic is sound, but the hardcoded path navigation and error handling could be more robust. Consider these improvements:

```diff
 function ensureDockerContainer() {
   try {
-    const result = execSync(`docker ps --format "table {{.Names}}\t{{.Ports}}" | grep :5432`, {
+    const result = execSync(`docker ps --format "{{.Names}}" --filter "publish=5432"`, {
       stdio: 'pipe',
       encoding: 'utf8'
     });

     if (result.trim()) {
       console.log('✅ PostgreSQL container is already running on port 5432');
       return;
     }
   } catch (error) {
     // No container running on port 5432
   }

   console.log('🐳 Starting PostgreSQL container...');
   try {
-    execSync('cd ../.. && docker-compose up -d db', { stdio: 'inherit' });
+    const { execSync } = require('child_process');
+    execSync('docker-compose up -d db', { stdio: 'inherit', cwd: '../..' });
     console.log('✅ PostgreSQL container started');
-    require('child_process').execSync('sleep 3');
+    await sleep(3000);
   } catch (error) {
     // ... rest of error handling
   }
 }
```

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents: In apps/server/scripts/db-setup.js around lines 17 to 49, improve the Docker container detection by avoiding hardcoded relative paths like `cd ../..` and instead use absolute paths or Node.js path utilities to locate the docker-compose.yml file. Enhance error handling by catching specific errors more precisely and providing clearer messages. Also, consider verifying container names explicitly rather than relying solely on port checks to increase robustness.
async function setupDatabase() {
  try {
    // Ensure Docker container is running
    ensureDockerContainer();

    // Check if PostgreSQL server is responding (connect to default postgres db first)
    console.log('⏳ Waiting for PostgreSQL server to be ready...');

    let retries = 30;
    while (retries > 0) {
      try {
        execSync(`pg_isready -h ${DB_HOST} -p ${DB_PORT} -U ${DB_USER}`, {
          stdio: 'pipe'
        });
        console.log('✅ PostgreSQL server is ready!');
        break;
      } catch (error) {
        retries--;
        if (retries === 0) {
          console.error('❌ PostgreSQL server is not responding. Make sure Docker container is running.');
          console.log('💡 Try running: docker-compose up -d db');
          process.exit(1);
        }
        console.log(`⏳ Waiting for server... (${retries} retries left)`);
        await sleep(2000);
      }
    }

    // Drop and recreate database to ensure clean state
    console.log(`🔍 Checking if database '${DB_NAME}' exists...`);
    try {
      const result = execSync(`docker exec bookmarket-db-1 psql -U ${DB_USER} -d postgres -tc "SELECT 1 FROM pg_database WHERE datname = '${DB_NAME}'"`, {
Copilot AI (Jul 2, 2025): Hardcoding the container name `bookmarket-db-1` may break if compose service names or project names differ; consider using `docker-compose exec db ...` or reading the container name dynamically.
        stdio: 'pipe',
        encoding: 'utf8'
      });

      if (result.trim() === '1') {
        console.log(`🗑️ Dropping existing database '${DB_NAME}' for clean setup...`);
        execSync(`docker exec bookmarket-db-1 dropdb -U ${DB_USER} ${DB_NAME}`, {
          stdio: 'pipe'
        });
        console.log(`✅ Database '${DB_NAME}' dropped`);
      }
    } catch (error) {
      // Database doesn't exist, which is fine
    }
Comment on lines +82 to +96

🛠️ Refactor suggestion: Address hardcoded container name dependency.

The script uses a hardcoded container name `bookmarket-db-1`, which may not be consistent across different environments or Docker Compose versions. Consider making the container name dynamic:

```diff
+    // Get the actual container name/ID for the database service
+    const containerName = execSync(`docker-compose ps -q db`, {
+      stdio: 'pipe',
+      encoding: 'utf8',
+      cwd: '../..'
+    }).trim();
+
+    if (!containerName) {
+      throw new Error('Database container not found');
+    }
+
-      const result = execSync(`docker exec bookmarket-db-1 psql -U ${DB_USER} -d postgres -tc "SELECT 1 FROM pg_database WHERE datname = '${DB_NAME}'"`, {
+      const result = execSync(`docker exec ${containerName} psql -U ${DB_USER} -d postgres -tc "SELECT 1 FROM pg_database WHERE datname = '${DB_NAME}'"`, {
```

Apply similar changes to lines 89 and 100 where the container name is used.

🤖 Prompt for AI Agents: In apps/server/scripts/db-setup.js around lines 82 to 96, the container name 'bookmarket-db-1' is hardcoded in multiple execSync commands, which can cause issues in different environments. Refactor the code to use a dynamic variable for the container name, such as reading it from an environment variable or a configuration file. Replace all occurrences of the hardcoded container name on lines 82, 89, and 100 with this variable to ensure flexibility and environment compatibility.

    console.log(`🔧 Creating database '${DB_NAME}'...`);
    try {
      execSync(`docker exec bookmarket-db-1 createdb -U ${DB_USER} ${DB_NAME}`, {
        stdio: 'pipe'
      });
      console.log(`✅ Database '${DB_NAME}' created successfully`);
    } catch (createError) {
      console.error(`❌ Failed to create database '${DB_NAME}':`, createError.message);
      process.exit(1);
    }

    // Final check that we can connect to the target database
    try {
      execSync(`pg_isready -h ${DB_HOST} -p ${DB_PORT} -U ${DB_USER} -d ${DB_NAME}`, {
        stdio: 'pipe'
      });
      console.log(`✅ Database '${DB_NAME}' is ready for connections!`);
    } catch (error) {
      console.error(`❌ Cannot connect to database '${DB_NAME}':`, error.message);
      process.exit(1);
    }

    console.log('✅ Database setup complete!');
  } catch (error) {
    console.error('❌ Database setup failed:', error.message);
    console.log('💡 Make sure Docker is running and try: docker-compose up -d db');
    process.exit(1);
  }
}

setupDatabase();

// Helper function for async/await in older Node versions
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}
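
Note that `ensureDockerContainer` pauses with `execSync('sleep 3')`, which shells out to the Unix `sleep` binary and fails on Windows. A portable synchronous alternative (a sketch, not part of this PR) uses `Atomics.wait`:

```javascript
// Portable synchronous pause: Atomics.wait blocks the current thread until
// the timeout elapses (the shared int stays 0, so the wait always times out).
function sleepSync(ms) {
  Atomics.wait(new Int32Array(new SharedArrayBuffer(4)), 0, 0, ms);
}

const start = Date.now();
sleepSync(100);
console.log(Date.now() - start >= 100);
```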
20 changes: 4 additions & 16 deletions apps/server/src/bookmarks/entities/bookmark.entity.ts
@@ -1,9 +1,10 @@
import { Category } from 'src/categories/entities/category.entity';
import { User } from 'src/users/entities/user.entity';
import { Column, Entity, ManyToOne, PrimaryGeneratedColumn } from 'typeorm';
import { Category } from '../../categories/entities/category.entity';
import { BaseEntity } from '../../common/entities/base.entity';
import { User } from '../../users/entities/user.entity';

@Entity()
export class Bookmark {
export class Bookmark extends BaseEntity {
  @PrimaryGeneratedColumn('uuid')
  id: string;

@@ -19,19 +20,6 @@ export class Bookmark {
  @Column({ nullable: true })
  faviconUrl?: string;

  @Column({
    type: 'timestamp',
    default: () => 'CURRENT_TIMESTAMP',
  })
  createdAt: Date;

  @Column({
    type: 'timestamp',
    default: () => 'CURRENT_TIMESTAMP',
    onUpdate: 'CURRENT_TIMESTAMP',
  })
  updatedAt: Date;

  @ManyToOne(() => User, user => user.bookmarks, { eager: true })
  user: User;

6 changes: 3 additions & 3 deletions apps/server/src/categories/entities/category.entity.ts
@@ -1,7 +1,7 @@
import { Bookmark } from 'src/bookmarks/entities/bookmark.entity';
import { BaseEntity } from 'src/common/entities/base.entity';
import { User } from 'src/users/entities/user.entity';
import { Column, Entity, Index, ManyToOne, OneToMany, PrimaryGeneratedColumn, Unique } from 'typeorm';
import { Bookmark } from '../../bookmarks/entities/bookmark.entity';
import { BaseEntity } from '../../common/entities/base.entity';
import { User } from '../../users/entities/user.entity';

@Entity()
export class Category extends BaseEntity {
20 changes: 20 additions & 0 deletions apps/server/src/data-source-dev.ts
@@ -0,0 +1,20 @@
import { config } from 'dotenv';
import { DataSource } from 'typeorm';

// Load environment variables from .env file
config();
export const AppDataSource = new DataSource({
  type: 'postgres',
  port: parseInt(process.env.POSTGRES_PORT ?? '5432', 10),
  host: process.env.POSTGRES_HOST,
  username: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  database: process.env.POSTGRES_NAME,

  entities: [`${__dirname}/**/*.entity.ts`],
  // FIXME: Should be set to false on prod
Copilot AI (Jul 2, 2025): [nitpick] The comment is misleading (synchronize is already false). Consider clarifying that synchronize should be enabled in dev and disabled in production, or removing it if no longer applicable.

Suggested change:

```diff
-  // FIXME: Should be set to false on prod
+  // Set to true in development for automatic schema synchronization. Ensure this is set to false in production.
```
  synchronize: false,
  migrations: [`${__dirname}/migrations/**/*.ts`],
  migrationsTableName: 'migrations',
  migrationsRun: true,
⚠️ Potential issue: Remove automatic migration execution for safety.

Setting `migrationsRun: true` automatically runs migrations on application startup, which can be dangerous and unpredictable. Migrations should be run explicitly through scripts or deployment processes. Apply this diff to disable automatic migration execution:

```diff
-  migrationsRun: true,
+  migrationsRun: false,
```

🤖 Prompt for AI Agents: In apps/server/src/data-source-dev.ts at line 19, the configuration sets migrationsRun to true, which causes automatic execution of migrations on startup. To fix this, change migrationsRun to false or remove this setting entirely to disable automatic migration execution, ensuring migrations are run explicitly via scripts or deployment processes.
});
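
The `parseInt(process.env.POSTGRES_PORT ?? '5432', 10)` pattern above gives the data source a sensible default when `POSTGRES_PORT` is unset. Isolated as a helper (illustrative only, not part of the PR):

```javascript
// Mirrors the data-source config: parse a port string with a '5432' fallback.
function dbPort(env) {
  return parseInt(env.POSTGRES_PORT ?? '5432', 10);
}

console.log(dbPort({}));                        // default applies when unset
console.log(dbPort({ POSTGRES_PORT: '6543' })); // explicit value wins
```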
23 changes: 23 additions & 0 deletions apps/server/src/migrations/1740000000000-InitialUserTable.ts
@@ -0,0 +1,23 @@
import { MigrationInterface, QueryRunner } from 'typeorm';

export class InitialUserTable1740000000000 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`
      CREATE TABLE "user" (
        "id" uuid PRIMARY KEY DEFAULT gen_random_uuid(),
        "email" character varying NOT NULL,
        "password" character varying,
        "auth_provider" character varying NOT NULL,
        "google_id" character varying,
        "github_id" character varying,
        "picture" character varying,
        "createdAt" TIMESTAMP NOT NULL DEFAULT now(),
        "updatedAt" TIMESTAMP NOT NULL DEFAULT now()
      )
    `);
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query('DROP TABLE "user"');
  }
}
23 changes: 23 additions & 0 deletions apps/server/src/migrations/1740000000001-InitialBookmarkTable.ts
@@ -0,0 +1,23 @@
import { MigrationInterface, QueryRunner } from 'typeorm';

export class InitialBookmarkTable1740000000001 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
Copilot AI (Jul 2, 2025): Same as above: ensure the pgcrypto extension is enabled in the DB or include `CREATE EXTENSION IF NOT EXISTS pgcrypto;` in an earlier migration.

Suggested change:

```diff
 public async up(queryRunner: QueryRunner): Promise<void> {
+  await queryRunner.query('CREATE EXTENSION IF NOT EXISTS pgcrypto;');
```
    await queryRunner.query(`
      CREATE TABLE "bookmark" (
        "id" uuid PRIMARY KEY DEFAULT gen_random_uuid(),
        "url" character varying NOT NULL,
        "title" character varying NOT NULL,
        "description" character varying NOT NULL,
        "faviconUrl" character varying NOT NULL,
Comment on lines +10 to +11

🛠️ Refactor suggestion: Consider making `description` and `faviconUrl` nullable.

Both `description` and `faviconUrl` are marked as NOT NULL, but these fields might not always be available when scraping metadata from URLs. This could cause bookmark creation to fail unnecessarily. Apply this diff to make these fields optional:

```diff
-        "description" character varying NOT NULL,
-        "faviconUrl" character varying NOT NULL,
+        "description" character varying,
+        "faviconUrl" character varying,
```

🤖 Prompt for AI Agents: In apps/server/src/migrations/1740000000001-InitialBookmarkTable.ts around lines 10 to 11, the columns "description" and "faviconUrl" are currently set as NOT NULL, which can cause failures when these metadata fields are missing. Modify the migration to make both "description" and "faviconUrl" columns nullable by removing the NOT NULL constraint, allowing bookmark creation to succeed even if these fields are absent.
        "userId" uuid NOT NULL,
        "createdAt" TIMESTAMP NOT NULL DEFAULT now(),
        "updatedAt" TIMESTAMP NOT NULL DEFAULT now(),
        CONSTRAINT "FK_bookmark_user" FOREIGN KEY ("userId") REFERENCES "user"("id") ON DELETE CASCADE
      )
    `);
Comment on lines +6 to +17

🛠️ Refactor suggestion: Add database indexes and field length constraints.

The table lacks indexes for commonly queried fields and has no length constraints on VARCHAR fields, which could impact performance and data integrity. Consider these improvements:

```diff
       CREATE TABLE "bookmark" (
         "id" uuid PRIMARY KEY DEFAULT gen_random_uuid(),
-        "url" character varying NOT NULL,
-        "title" character varying NOT NULL,
-        "description" character varying NOT NULL,
-        "faviconUrl" character varying NOT NULL,
+        "url" character varying(2048) NOT NULL,
+        "title" character varying(500) NOT NULL,
+        "description" character varying(1000),
+        "faviconUrl" character varying(500),
         "userId" uuid NOT NULL,
         "createdAt" TIMESTAMP NOT NULL DEFAULT now(),
         "updatedAt" TIMESTAMP NOT NULL DEFAULT now(),
         CONSTRAINT "FK_bookmark_user" FOREIGN KEY ("userId") REFERENCES "user"("id") ON DELETE CASCADE
       )
     `);
+
+    await queryRunner.query(`
+      CREATE INDEX "IDX_bookmark_userId" ON "bookmark" ("userId");
+      CREATE INDEX "IDX_bookmark_createdAt" ON "bookmark" ("createdAt");
+    `);
```

🤖 Prompt for AI Agents: In apps/server/src/migrations/1740000000001-InitialBookmarkTable.ts around lines 6 to 17, the bookmark table definition lacks indexes on frequently queried columns and does not specify length constraints for VARCHAR fields. To fix this, add appropriate length limits to all VARCHAR columns to enforce data integrity and create indexes on columns like "userId" and "url" to improve query performance. Modify the table creation SQL to include these constraints and indexes accordingly.
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query('DROP TABLE "bookmark"');
  }
}
@@ -5,7 +5,7 @@ export class BookmarkCategory1741515061023 implements MigrationInterface {
    // Create Category table
    await queryRunner.query(`
Comment on lines 5 to 6

Copilot AI (Jul 2, 2025): `gen_random_uuid()` requires the pgcrypto extension; consider adding a migration step such as `CREATE EXTENSION IF NOT EXISTS pgcrypto;` before using this function to avoid errors.

Suggested change:

```diff
+    // Ensure pgcrypto extension is enabled
+    await queryRunner.query(`
+      CREATE EXTENSION IF NOT EXISTS pgcrypto
+    `);
     // Create Category table
     await queryRunner.query(`
```
      CREATE TABLE "category" (
        "id" uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
        "id" uuid PRIMARY KEY DEFAULT gen_random_uuid(),
        "name" character varying NOT NULL,
        "userId" uuid,
        "createdAt" TIMESTAMP NOT NULL DEFAULT now(),
9 changes: 5 additions & 4 deletions apps/server/src/users/entities/user.entity.ts
@@ -1,11 +1,12 @@
import { Bookmark } from 'src/bookmarks/entities/bookmark.entity';
import { Category } from 'src/categories/entities/category.entity';
import { USERNAME_MAX_LENGTH } from 'src/iam/constants/username';
import { Column, Entity, OneToMany, PrimaryGeneratedColumn } from 'typeorm';
import { Bookmark } from '../../bookmarks/entities/bookmark.entity';
import { Category } from '../../categories/entities/category.entity';
import { BaseEntity } from '../../common/entities/base.entity';
import { USERNAME_MAX_LENGTH } from '../../iam/constants/username';
import { AuthProvider } from '../enums/auth-provider.enum';

@Entity()
export class User {
export class User extends BaseEntity {
  @PrimaryGeneratedColumn('uuid')
  id: string;

24 changes: 6 additions & 18 deletions apps/web/.env.example
@@ -1,20 +1,8 @@
# Since the ".env" file is gitignored, you can use the ".env.example" file to
# build a new ".env" file when you clone the repo. Keep this file up-to-date
# when you add new variables to `.env`.
NEXT_PUBLIC_BASE_URL=

# This file will be committed to version control, so make sure not to have any
# secrets in it. If you are cloning this repo, create a copy of this file named
# ".env" and populate it with your secrets.
NEXT_PUBLIC_GOOGLE_CLIENT_ID=

# When adding additional environment variables, the schema in "/src/env.js"
# should be updated accordingly.

# Drizzle
DATABASE_URL=

# Example:
# SERVERVAR="foo"
# NEXT_PUBLIC_CLIENTVAR="bar"

NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=
CLERK_SECRET_KEY=
NEXT_PUBLIC_GITHUB_CLIENT_ID=
GITHUB_CLIENT_SECRET=
⚠️ Potential issue: Move GitHub client secret to server environment.

`GITHUB_CLIENT_SECRET` should be in the server's environment file (apps/server/.env.example) rather than the web app's environment, as client secrets should never be exposed to the frontend. Remove this line from the web app's .env.example and ensure it's properly configured in the server's environment:

```diff
-GITHUB_CLIENT_SECRET=
```

🤖 Prompt for AI Agents: In apps/web/.env.example at line 6, remove the GITHUB_CLIENT_SECRET entry to prevent exposing sensitive client secrets in the frontend environment. Then, add this variable to the server environment file at apps/server/.env.example to securely manage the secret on the backend.
NEXT_PUBLIC_GITHUB_REDIRECT_URI=
NEXT_PUBLIC_DOMAIN=
12 changes: 10 additions & 2 deletions docker-compose.yml
@@ -4,11 +4,19 @@ services:
    ports:
      - 5432:5432
    volumes:
      - ../bookmarket-db-data:/var/lib/postgresql/data
      - ./db-data:/var/lib/postgresql/data
    env_file:
      - ./apps/server/.env
    healthcheck:
      test: ['CMD', 'pg_isready', '-U', 'bokdol', '-d', 'bookmarket']
      test:
        [
          'CMD',
          'pg_isready',
          '-U',
          '${POSTGRES_USER}',
          '-d',
          '${POSTGRES_NAME}',
        ]
Comment on lines +11 to +19

⚠️ Potential issue: Fix environment variable name in healthcheck.

The healthcheck uses `${POSTGRES_NAME}` but the standard PostgreSQL environment variable is `POSTGRES_DB`. This mismatch could cause the healthcheck to fail. Apply this diff to fix the environment variable:

```diff
-          '${POSTGRES_NAME}',
+          '${POSTGRES_DB}',
```

🤖 Prompt for AI Agents: In docker-compose.yml lines 11 to 19, the healthcheck uses the environment variable ${POSTGRES_NAME} which is incorrect. Replace ${POSTGRES_NAME} with the correct standard PostgreSQL environment variable ${POSTGRES_DB} to ensure the healthcheck works properly.

      interval: 10s
      timeout: 5s
      retries: 5
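
For reference, a compose healthcheck written against the standard `POSTGRES_DB` variable, as the review above recommends, would look like the sketch below. This assumes the server's `.env` is renamed to define `POSTGRES_DB` instead of `POSTGRES_NAME`, which this PR does not do:

```yaml
# Sketch only: healthcheck using the standard POSTGRES_DB variable name
# (assumes apps/server/.env defines POSTGRES_DB rather than POSTGRES_NAME)
healthcheck:
  test: ['CMD', 'pg_isready', '-U', '${POSTGRES_USER}', '-d', '${POSTGRES_DB}']
  interval: 10s
  timeout: 5s
  retries: 5
```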