117 changes: 117 additions & 0 deletions scripts/MIGRATION_README.md
@@ -0,0 +1,117 @@
# KYC Data Migration from CSV

This script migrates user KYC data from a local CSV file into the Supabase `user_kyc_profiles` table.

## Prerequisites

### 1. Install Dependencies

If you haven't already, install the required packages:
```bash
npm install @supabase/supabase-js dotenv ts-node typescript @types/node csv-parse
```

### 2. Environment Variables

Ensure your `.env` file contains the following Supabase credentials:

```bash
# Supabase
NEXT_PUBLIC_SUPABASE_URL=your_supabase_url
SUPABASE_SERVICE_ROLE_KEY=your_service_role_key
```

### 3. CSV Data File

Place a CSV file named `kyc-export.csv` inside the `scripts/` directory (or pass a different path with the `--csv` flag). The script expects the file to have a header row with column names that match the `CsvRowRaw` interface in the script.

**Expected CSV Columns:**
- `Job ID` (required; rows missing it are skipped)
- `User ID` (required; maps to `wallet_address`)
- `Country` (maps to `id_country`)
- `ID Type` (maps to `id_type`)
- `Result` (required; only `Approved` rows are migrated)
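
For reference, a minimal input file with the headers the script parses might look like this (all values below are illustrative):

```csv
Job ID,User ID,Country,ID Type,Result
job-001,0x1111111111111111111111111111111111111111,NG,NIN,Approved
job-002,0x2222222222222222222222222222222222222222,KE,PASSPORT,Rejected
```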

## Usage

### Dry Run (Recommended First)

To preview the data that will be inserted without writing anything to the database, run:

```bash
npx ts-node scripts/migrate-kyc-data.ts --dry-run
```

This command will:
- Read and parse `kyc-export.csv` (or the file passed via `--csv`).
- Transform the first 5 records into the database format.
- Print the transformed data to the console.
- **It will NOT write any data to Supabase.**

### Full Migration

Once you have verified that the dry run output is correct, you can perform the full migration:

```bash
npx ts-node scripts/migrate-kyc-data.ts
```

This command will:
1. Read all records from `kyc-export.csv` (or the file passed via `--csv`).
2. Transform each record to match the `user_kyc_profiles` schema.
3. Upsert each record into the Supabase table, using `wallet_address` to handle conflicts.

## How It Works

### 1. Extraction
- The script reads the `kyc-export.csv` file from the `scripts/` directory (a different path can be passed with `--csv`).
- It uses the `csv-parse` library to parse the file into an array of JavaScript objects.

### 2. Transformation
For each row in the CSV:
- It maps the CSV columns to the fields in the `user_kyc_profiles` table.
- `User ID` from the CSV is lowercased and used as the `wallet_address`.
- Rows whose `Result` is `Approved` are marked `verified`, with `verified_at` set.
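
A minimal sketch of this mapping (simplified from the script's `buildRow`; the `platform` array and timestamps are omitted here):

```typescript
// Simplified version of the transformation described above.
type Row = { user_id: string; country?: string | null; id_type?: string | null; result: string };

function toProfile(r: Row) {
  const isApproved = r.result === "Approved";
  return {
    wallet_address: r.user_id.toLowerCase(), // User ID becomes the primary key
    id_country: r.country ?? null,
    id_type: r.id_type ?? null,
    verified: isApproved,
  };
}

console.log(toProfile({ user_id: "0xABC123", country: "NG", id_type: "NIN", result: "Approved" }));
```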

### 3. Loading
- The script connects to your Supabase instance using the service role key.
- It iterates through the transformed records and uses `supabase.from('user_kyc_profiles').upsert(...)` to insert or update each one.
- The `onConflict: 'wallet_address'` option ensures that existing records are updated, preventing duplicates.

## Data Mapping

| CSV Column | New Schema Field | Notes                                                                     |
|------------|------------------|---------------------------------------------------------------------------|
| `User ID`  | `wallet_address` | Primary key, lowercased for consistency.                                  |
| `ID Type`  | `id_type`        |                                                                           |
| `Country`  | `id_country`     |                                                                           |
| `Result`   | `verified`       | `verified` is `true` (and `verified_at` is set) when `Result` is `Approved`. |

## Troubleshooting

### `CSV file not found`
**Issue**: The script throws an error `CSV file not found at: <path>`.
**Solution**: Make sure your CSV file is named exactly `kyc-export.csv` and is located in the `scripts/` directory, or pass an explicit path with the `--csv` flag.

### `Missing Supabase credentials`
**Issue**: The script throws an error about missing credentials.
**Solution**: Ensure your `.env` file is in the root of the `noblocks` project and contains the correct `NEXT_PUBLIC_SUPABASE_URL` and `SUPABASE_SERVICE_ROLE_KEY`.

### Data not appearing as expected
**Issue**: The data in Supabase doesn't look right.
**Solution**:
1. Run the script with `--dry-run` and inspect the JSON output in your console.
2. Check that the column headers in your CSV file exactly match the expected names (e.g., `Job ID`, `User ID`, `Country`, `ID Type`, `Result`).
3. Verify the data formats in the CSV (e.g., dates in `verified_at` are valid).

## Rollback

If you need to undo the migration, you can run a SQL query in the Supabase SQL Editor. Be very careful with this operation.

```sql
-- Example: Delete records that were created or updated by the script.
-- You might need to adjust the timestamp to match your migration time.
DELETE FROM public.user_kyc_profiles WHERE updated_at >= '2025-12-02 00:00:00+00';
```

It is safer to identify the migrated records by a set of `wallet_address` values if possible.
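
For example, a rollback targeting a known set of migrated users might look like this (the addresses below are placeholders):

```sql
-- Delete only the specific wallet addresses you migrated.
DELETE FROM public.user_kyc_profiles
WHERE wallet_address IN (
  '0x1111111111111111111111111111111111111111',
  '0x2222222222222222222222222222222222222222'
);
```
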
172 changes: 172 additions & 0 deletions scripts/migrate-kyc-data.ts
@@ -0,0 +1,172 @@
/**
 * Migrate minimal KYC fields from a CSV export to Supabase
 * - Reads CSV with headers: Job ID, User ID, Country, ID Type, Result
 * - Filters rows where Result === "Approved"
 * - Upserts into public.user_kyc_profiles:
 *   - wallet_address ← User ID (lowercased)
 *   - id_country ← Country
 *   - id_type ← ID Type
 *   - verified=true, verified_at=now (optional; remove if not desired)
 *
 * Usage:
 *   npx ts-node scripts/migrate-kyc-data.ts --dry-run
 *   npx ts-node scripts/migrate-kyc-data.ts --csv ./kyc-export.csv
 */

import { createClient } from '@supabase/supabase-js';
import * as dotenv from 'dotenv';
import fs from 'fs';
import path from 'path';
import { fileURLToPath } from 'url';
import { parse } from 'csv-parse/sync';

dotenv.config();

// ESM-safe path resolution
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

// Env
const SUPABASE_URL = process.env.NEXT_PUBLIC_SUPABASE_URL;
const SUPABASE_SERVICE_KEY = process.env.SUPABASE_SERVICE_ROLE_KEY;

if (!SUPABASE_URL || !SUPABASE_SERVICE_KEY) {
  throw new Error('Missing Supabase credentials in .env (NEXT_PUBLIC_SUPABASE_URL, SUPABASE_SERVICE_ROLE_KEY)');
}

const supabase = createClient(SUPABASE_URL, SUPABASE_SERVICE_KEY);

// CLI: allow overriding the CSV path via --csv
const csvArgIndex = process.argv.findIndex((a) => a === '--csv');
const CSV_FILE_PATH = csvArgIndex >= 0
  ? path.resolve(process.argv[csvArgIndex + 1])
  : path.join(__dirname, 'kyc-export.csv'); // default name next to script


interface CsvRowRaw {
  'Job ID'?: string;
  'User ID'?: string;
  'SDK'?: string;
  'Date'?: string;
  'Timestamp'?: string;
  'Job Time'?: string;
  'Product'?: string;
  'Job Type'?: string;
  'Country'?: string;
  'ID Type'?: string;
  'Result'?: string;
  'Message'?: string;
  'SmartCheck User'?: string;
}

type CsvRow = {
  job_id: string;
  user_id: string;
  country?: string | null;
  id_type?: string | null;
  result: string;
  timestamp?: string | null; // original verification time from the CSV export
};

// Read + normalize CSV
function readCsv(): CsvRow[] {
  if (!fs.existsSync(CSV_FILE_PATH)) {
    throw new Error(`CSV file not found at: ${CSV_FILE_PATH}`);
  }
  console.log(`Reading data from ${CSV_FILE_PATH}`);

  const content = fs.readFileSync(CSV_FILE_PATH);
  const raw = parse(content, {
    columns: true,
    skip_empty_lines: true,
    trim: true,
  }) as CsvRowRaw[];

  console.log(`Found ${raw.length} records in CSV file.`);

  const rows: CsvRow[] = raw.map((r) => ({
    job_id: (r['Job ID'] || '').trim(),
    user_id: (r['User ID'] || '').trim(),
    country: r['Country'] ? r['Country'].trim() : null,
    id_type: r['ID Type'] ? r['ID Type'].trim() : null,
    result: (r['Result'] || '').trim(),
    timestamp: r['Timestamp'] ? r['Timestamp'].trim() : null,
  }));

  const valid = rows.filter((r) => r.job_id && r.user_id && r.result);
  const skipped = rows.length - valid.length;
  if (skipped > 0) {
    console.warn(`Skipped ${skipped} rows missing Job ID, User ID, or Result`);
  }
  return valid;
}

function buildRow(r: CsvRow) {
  const isApproved = r.result === 'Approved';
  const nowISO = new Date().toISOString();
  // Preserve the original verification time from the CSV when it parses;
  // otherwise fall back to the migration time.
  const verifiedAtISO =
    r.timestamp && !Number.isNaN(Date.parse(r.timestamp))
      ? new Date(r.timestamp).toISOString()
      : nowISO;

  return {
    wallet_address: r.user_id.toLowerCase(),
    id_country: r.country || null,
    id_type: r.id_type || null,
    platform: [
      {
        type: 'id',
        identifier: 'smile_id',
        reference: r.job_id, // Smile ID job reference, kept for traceability
      }
    ],
    verified: isApproved,
    verified_at: isApproved ? verifiedAtISO : null,
    updated_at: nowISO,
  };
}

// Upsert — conflict target: wallet_address (PK)
type KycProfileRow = {
  wallet_address: string;
  id_country: string | null;
  id_type: string | null;
  platform: Array<{ type: string; identifier: string; reference: string }>;
  verified: boolean;
  verified_at: string | null;
  updated_at: string;
};

async function upsertRows(rows: KycProfileRow[]) {
  console.log(`\nUpserting ${rows.length} records into public.user_kyc_profiles...`);
  let ok = 0, failed = 0;

  for (const row of rows) {
    const { error } = await supabase
      .from('user_kyc_profiles')
      .upsert(row, { onConflict: 'wallet_address' });

    if (error) {
      console.error(`❌ ${row.wallet_address}: ${error.message}`);
      failed++;
    } else {
      console.log(`✅ ${row.wallet_address}`);
      ok++;
    }
  }
  console.log(`\nSummary: OK=${ok}, Failed=${failed}`);
}

async function main() {
  const isDryRun = process.argv.includes('--dry-run');
  console.log(isDryRun ? 'Running in DRY RUN mode.' : 'Running in LIVE mode.');

  try {
    const all = readCsv();
    const approved = all.filter((r) => r.result === 'Approved');
    console.log(`✅ Approved rows: ${approved.length}`);

    const rows = approved.map(buildRow);

    if (isDryRun) {
      console.log('--- Dry Run Output (first 5 rows) ---');
      console.log(JSON.stringify(rows.slice(0, 5), null, 2));
      console.log('-------------------------------------');
      console.log('No data will be written to the database in dry run mode.');
    } else {
      await upsertRows(rows);
    }

    console.log('🎉 Migration script finished.');
  } catch (err: any) {
    console.error('An error occurred:', err.message || err);
    process.exit(1);
  }
}

main();