added the release-3.0.3 doc #158
prashanthShiksha wants to merge 4 commits into ELEVATE-Project:release-3.0.3 from
Conversation
📝 Walkthrough
This PR introduces Docker startup orchestration for Metabase schema updates, adds a Python script for program data deletion workflows, updates log path configuration in existing cleanup scripts, parameterizes table references in Metabase job configurations, removes hardcoded credentials, and includes v3.0.3 release documentation.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
Documentation/data-cleanup/python-script/resource_delete.py (1)
31-37: ⚠️ Potential issue | 🟠 Major

Missing directory creation for `LOG_FILE_PATH` — `RotatingFileHandler` will raise `FileNotFoundError` if the path doesn't exist.

Unlike `program_deletion.py`, which calls `os.makedirs(LOG_DIR, exist_ok=True)`, this script reads `log_path` from config but never ensures the directory exists before opening the log file handler on line 37.

Proposed fix

```diff
 LOG_FILE_PATH = config.get('common', 'log_path')
+os.makedirs(LOG_FILE_PATH, exist_ok=True)
 LOG_FILE = os.path.join(LOG_FILE_PATH, "resource_delete.log")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Documentation/data-cleanup/python-script/resource_delete.py` around lines 31 - 37, The script sets LOG_FILE_PATH and LOG_FILE and then creates a RotatingFileHandler without ensuring the directory exists, causing FileNotFoundError; before instantiating handler (referenced by LOG_FILE_PATH, LOG_FILE, logger and handler) call os.makedirs(LOG_FILE_PATH, exist_ok=True) (or validate and create the directory) so the path exists prior to creating RotatingFileHandler, then proceed to create handler and attach it to logger.
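For illustration, a minimal sketch of the patched logging setup, reusing the `config.get('common', 'log_path')` access and `resource_delete.log` filename quoted in the diff above; the ini filename and the rotation sizes are assumptions, not values taken from the script:

```python
import configparser
import logging
import os
from logging.handlers import RotatingFileHandler

config = configparser.ConfigParser()
config.read("config.ini")  # actual ini filename used by the script is an assumption

LOG_FILE_PATH = config.get('common', 'log_path')
os.makedirs(LOG_FILE_PATH, exist_ok=True)  # RotatingFileHandler creates the file, never its parent directories
LOG_FILE = os.path.join(LOG_FILE_PATH, "resource_delete.log")

logger = logging.getLogger("resource_delete")
logger.setLevel(logging.INFO)
handler = RotatingFileHandler(LOG_FILE, maxBytes=5 * 1024 * 1024, backupCount=3)  # sizes are illustrative
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)
```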
🧹 Nitpick comments (4)
Documentation/data-cleanup/python-script/program_deletion.py (1)
138-157: Unused exception variable and `logger.error` instead of `logger.exception`.

On lines 156–157, the caught exception `e` is never used and `logger.error` loses the traceback. Use `logger.exception` for automatic traceback logging (Ruff TRY400, F841).

Proposed fix

```diff
-    except Exception as e:
-        logger.error("Script failed due to an error.")
+    except Exception:
+        logger.exception("Script failed due to an error.")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Documentation/data-cleanup/python-script/program_deletion.py` around lines 138 - 157, The except block in the main script catches Exception as e but never uses e and calls logger.error (losing traceback); update the exception handler in the __main__ block that wraps calls to get_access_token, fetch_all_program_ids, and delete_required_program_ids to either remove the unused "as e" or use logger.exception instead of logger.error so the full traceback is logged (e.g., replace logger.error("Script failed due to an error.") with logger.exception("Script failed due to an error.") or logger.exception with the exception variable if you keep "as e").
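A minimal sketch of that handler in the `__main__` block, assuming the three entry-point functions named above (their exact signatures and return values are not shown in the review):

```python
if __name__ == "__main__":
    try:
        token = get_access_token()                  # signature assumed
        program_ids = fetch_all_program_ids(token)  # signature assumed
        delete_required_program_ids(program_ids)    # signature assumed
        logger.info("Program deletion workflow completed.")
    except Exception:
        # logger.exception logs at ERROR level and attaches the full traceback,
        # so the unused "as e" binding can be dropped (addresses Ruff TRY400 and F841).
        logger.exception("Script failed due to an error.")
```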
Documentation/Docker-setup/docker-compose.yml (1)

129-131: Running the Metabase container as `root` widens the attack surface.

The `user: root` setting is needed only for the `apk`/`apt-get` package install inside `update_metabase_schema.sh`. Consider installing `postgresql-client` at image build time (custom Dockerfile) so the runtime container doesn't need root privileges. If that's not feasible short-term, document the rationale.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Documentation/Docker-setup/docker-compose.yml` around lines 129 - 131, The compose service currently sets user: root to allow package installs in update_metabase_schema.sh; remove that privilege by building a custom image that installs postgresql-client at build time and embeds the script, then use that image with the entrypoint ["/bin/bash","/app/update_metabase_schema.sh"] without user: root. Specifically, create a Dockerfile FROM the Metabase base image, RUN install postgresql-client (apk/apt as appropriate) and COPY update_metabase_schema.sh into /app, keep the service using entrypoint: ["/bin/bash","/app/update_metabase_schema.sh"] and remove user: root; if a custom image isn’t feasible immediately, add a short rationale comment in the compose file explaining why user: root is temporarily required and open a task to replace it with the custom Dockerfile approach.
Documentation/data-cleanup/python-script/sample.ini (1)

1-9: Consider separating Kafka config from Database config.

The `[Database]` section mixes PostgreSQL connection settings with Kafka settings (`topic`, `group_id`, `broker`). A separate `[Kafka]` section would improve clarity, especially as more scripts consume this config.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Documentation/data-cleanup/python-script/sample.ini` around lines 1 - 9, Split the mixed settings into two INI sections by removing Kafka keys (topic, group_id, broker) from the existing [Database] section and adding a new [Kafka] section that contains topic, group_id and broker; update any code that reads these values to read database connection parameters from [Database] and Kafka parameters from [Kafka] (look for config access that references the keys "topic", "group_id", "broker" or the [Database] section and change them to read from the new [Kafka] section).
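A sketch of the suggested layout. Only `topic`, `group_id`, and `broker` are named in the comment; the PostgreSQL key names below are illustrative placeholders, not the actual contents of `sample.ini`:

```ini
[Database]
; PostgreSQL connection settings (key names are illustrative placeholders)
host = {{PGHOST}}
port = {{PGPORT}}
dbname = {{PGDBNAME}}
user = {{PGUSER}}
password = {{PGPASSWORD}}

[Kafka]
; Kafka settings moved out of [Database], as suggested above
topic = {{KAFKA_TOPIC}}
group_id = {{KAFKA_GROUP_ID}}
broker = {{KAFKA_BROKER}}
```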
Documentation/Docker-setup/update_metabase_schema.sh (1)

3-11: `postgresql-client` is installed on every container start, adding latency to each restart.

This runs `apk update && apk add` or `apt-get update && apt-get install` on every startup. Consider building a custom Metabase image with `postgresql-client` pre-installed to avoid repeated package downloads and to keep startup fast and deterministic.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Documentation/Docker-setup/update_metabase_schema.sh` around lines 3 - 11, The startup script update_metabase_schema.sh currently installs postgresql-client on every container start (the apk/apt-get block), causing latency; replace this runtime install by building a custom Metabase image that includes postgresql-client ahead of time. Create a Dockerfile FROM the official Metabase image used in your compose/k8s, add a RUN line to install postgresql-client appropriate to the image base (apk add postgresql-client for alpine-based images or apt-get install -y postgresql-client for debian-based), build and tag the image, update your deployment/docker-compose to use the custom image, and remove or guard the install block in update_metabase_schema.sh (or leave a no-op check for client presence) so the package is not reinstalled at container startup.
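A sketch of that custom-image approach, covering both this comment and the `user: root` comment on docker-compose.yml above. The base image tag, package manager, and non-root handling are assumptions — the review does not pin a specific Metabase image:

```dockerfile
# Hypothetical custom image; pin the tag to the Metabase version actually deployed.
FROM metabase/metabase:latest

# Temporarily elevate for the package install; switch back to the image's
# non-root user afterwards if the base image defines one.
USER root

# Debian/Ubuntu-based images use apt-get; swap for `apk add --no-cache postgresql-client`
# if the base image turns out to be Alpine.
RUN apt-get update \
    && apt-get install -y --no-install-recommends postgresql-client \
    && rm -rf /var/lib/apt/lists/*

# Bake the startup script into the image so the compose file no longer needs `user: root`
# or a bind mount for the script.
COPY update_metabase_schema.sh /app/update_metabase_schema.sh
RUN chmod +x /app/update_metabase_schema.sh

# Equivalent to the compose entrypoint referenced in the prompt above.
ENTRYPOINT ["/bin/bash", "/app/update_metabase_schema.sh"]
```

The compose service would then reference this image and drop both `user: root` and the runtime install block.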
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Documentation/data-cleanup/python-script/program_deletion.py`:
- Around line 60-62: The two requests.post calls that assign to res (e.g.,
requests.post(LOGIN_URL, json=payload, headers=headers) and the later
requests.post used to trigger program deletion) are missing a timeout, which can
hang indefinitely; update both calls to include a sensible timeout argument
(e.g., timeout=10) and propagate or handle requests.exceptions.Timeout where
appropriate so the script fails fast on unresponsive endpoints.
- Around line 86-92: The payload dict currently uses a hardcoded "limit": 10000
which can silently truncate results; modify the logic around payload (the dict
with keys "query", "sort", "projection", "mongoIdKeys", "limit") to paginate
instead of a one-shot fetch (e.g., loop using a smaller batch size and
cursor/last-seen _id or skip/offset until no more results) or, at minimum,
detect when len(data) == payload["limit"] and emit a clear warning/error; update
the code that consumes payload (the fetch loop that receives `data`) to repeat
requests with updated offsets/cursor and collect/process all pages until fewer
than limit items are returned.
- Around line 55-58: The headers dict currently hardcodes "origin":
"default-qa.tekdinext.com" which breaks non-QA deployments; update the script to
read the origin value from the config (e.g., the ProgramDeletion section key
"origin" in sample.ini or from an environment variable) and use that value when
building the headers variable (the headers symbol in program_deletion.py).
Ensure a safe fallback (empty or None) if the config key is missing and remove
the hardcoded string so non-QA environments can provide their own origin via the
shared config.
In `@Documentation/data-cleanup/python-script/resource_delete.md`:
- Line 3: Update the incorrect resource path in
Documentation/data-cleanup/python-script/resource_delete.md by changing the
referenced script path string `Documentation/data-cleanup/resource_delete.py` to
the correct location
`Documentation/data-cleanup/python-script/resource_delete.py`; locate the
reference inside the markdown file (resource_delete.md) and replace the wrong
path so all links and mentions point to the actual script file.
- Around line 156-158: The run command in the example uses the wrong path;
update the bash invocation to point to the actual location by including the
python-script directory so it runs the correct file (resource_delete.py) —
replace the current command that runs
"/app/Documentation/data-cleanup/resource_delete.py" with one that runs
"/app/Documentation/data-cleanup/python-script/resource_delete.py".
In `@Documentation/data-cleanup/python-script/sample.ini`:
- Around line 16-21: The placeholder names in the [ProgramDeletion] block are
inconsistent: update the uppercase placeholder key used for the password from
{{ADMINPASSWORD}} to {{ADMIN PASSWORD}} to match the spacing style of {{ADMIN
USERNAME}} (modify the value for the password entry under the ProgramDeletion
section so it reads {{ADMIN PASSWORD}}).
In `@Documentation/Docker-setup/update_metabase_schema.sh`:
- Around line 40-42: The timeout branch in update_metabase_schema.sh currently
only echoes "Timeout waiting for collection table or schema update."—change this
to emit a higher-severity failure and terminate the script with a non-zero exit
so callers detect the failure: use >&2 to send the message to stderr (or call
logger) and then call exit 1 when COUNTER equals MAX_RETRIES (referencing
COUNTER, MAX_RETRIES and the timeout branch) so the process does not silently
continue with an unpatched schema.
- Around line 24-26: The psql calls use unquoted variables (PGPASSWORD,
$MB_DB_HOST, $MB_DB_USER, $MB_DB_DBNAME) which will break with spaces/special
chars; change the invocations that check for the collection and run the ALTER
(the PGPASSWORD=... psql -h ... -U ... -d ... -c ... commands) to quote the
shell variables (e.g. PGPASSWORD="$MB_DB_PASS" and -h "$MB_DB_HOST" -U
"$MB_DB_USER" -d "$MB_DB_DBNAME") so the password and other DB credentials are
safely passed; apply the same quoting to both the existence check and the ALTER
TABLE invocation that references "collection" and "ALTER TABLE collection ALTER
COLUMN slug TYPE TEXT;".
In `@release-docs/release-3.0.3.md`:
- Line 3: Update the incorrect script path in the release note: replace the
referenced path "Documentation/data-cleanup/resource_delete.py" with the actual
file location "Documentation/data-cleanup/python-script/resource_delete.py" so
the link/reference points to the real script file (update the string in the
release-3.0.3.md content).
- Line 77: Duplicate markdown heading number: change the heading "### 2.
Metabase column data type changes" to "### 3. Metabase column data type changes"
and update all subsequent section numbers to follow the corrected sequence;
specifically search for the heading text "### 2. Metabase column data type
changes" and the earlier "### 2. Migration scripts" to ensure numbering becomes
"### 2. Migration scripts", "### 3. Metabase column data type changes", then
increment the numbers of the following headings accordingly.
- Around line 92-103: Fix the two typos in the release notes: update the
sentence "Reload all the configs by running the data-loader.sh script but first
make the necessary config chnages" to use "changes" instead of "chnages", and
correct the comment "# Json file directory path from inside contianer" to "#
Json file directory path from inside container" (the surrounding block including
DB_* variables and MAIN_FOLDER should remain unchanged).
---
Outside diff comments:
In `@Documentation/data-cleanup/python-script/resource_delete.py`:
- Around line 31-37: The script sets LOG_FILE_PATH and LOG_FILE and then creates
a RotatingFileHandler without ensuring the directory exists, causing
FileNotFoundError; before instantiating handler (referenced by LOG_FILE_PATH,
LOG_FILE, logger and handler) call os.makedirs(LOG_FILE_PATH, exist_ok=True) (or
validate and create the directory) so the path exists prior to creating
RotatingFileHandler, then proceed to create handler and attach it to logger.
---
Nitpick comments:
In `@Documentation/data-cleanup/python-script/program_deletion.py`:
- Around line 138-157: The except block in the main script catches Exception as
e but never uses e and calls logger.error (losing traceback); update the
exception handler in the __main__ block that wraps calls to get_access_token,
fetch_all_program_ids, and delete_required_program_ids to either remove the
unused "as e" or use logger.exception instead of logger.error so the full
traceback is logged (e.g., replace logger.error("Script failed due to an
error.") with logger.exception("Script failed due to an error.") or
logger.exception with the exception variable if you keep "as e").
In `@Documentation/data-cleanup/python-script/sample.ini`:
- Around line 1-9: Split the mixed settings into two INI sections by removing
Kafka keys (topic, group_id, broker) from the existing [Database] section and
adding a new [Kafka] section that contains topic, group_id and broker; update
any code that reads these values to read database connection parameters from
[Database] and Kafka parameters from [Kafka] (look for config access that
references the keys "topic", "group_id", "broker" or the [Database] section and
change them to read from the new [Kafka] section).
In `@Documentation/Docker-setup/docker-compose.yml`:
- Around line 129-131: The compose service currently sets user: root to allow
package installs in update_metabase_schema.sh; remove that privilege by building
a custom image that installs postgresql-client at build time and embeds the
script, then use that image with the entrypoint
["/bin/bash","/app/update_metabase_schema.sh"] without user: root. Specifically,
create a Dockerfile FROM the Metabase base image, RUN install postgresql-client
(apk/apt as appropriate) and COPY update_metabase_schema.sh into /app, keep the
service using entrypoint: ["/bin/bash","/app/update_metabase_schema.sh"] and
remove user: root; if a custom image isn’t feasible immediately, add a short
rationale comment in the compose file explaining why user: root is temporarily
required and open a task to replace it with the custom Dockerfile approach.
In `@Documentation/Docker-setup/update_metabase_schema.sh`:
- Around line 3-11: The startup script update_metabase_schema.sh currently
installs postgresql-client on every container start (the apk/apt-get block),
causing latency; replace this runtime install by building a custom Metabase
image that includes postgresql-client ahead of time. Create a Dockerfile FROM
the official Metabase image used in your compose/k8s, add a RUN line to install
postgresql-client appropriate to the image base (apk add postgresql-client for
alpine-based images or apt-get install -y postgresql-client for debian-based),
build and tag the image, update your deployment/docker-compose to use the custom
image, and remove or guard the install block in update_metabase_schema.sh (or
leave a no-op check for client presence) so the package is not reinstalled at
container startup.
```python
headers = {
    "origin": "default-qa.tekdinext.com",
    "Content-Type": "application/json"
}
```
Hardcoded QA environment origin header will break in non-QA deployments.
"origin": "default-qa.tekdinext.com" is environment-specific. This should be configurable (e.g., from config.ini) or removed if the API doesn't strictly require it.
Proposed fix — make it configurable
Add to sample.ini under [ProgramDeletion]:
```ini
origin = {{ORIGIN_URL}}
```

Then in the script:

```diff
 headers = {
-    "origin": "default-qa.tekdinext.com",
+    "origin": program_deletion_config.get('origin', ''),
     "Content-Type": "application/json"
 }
```

Based on learnings: the maintainer prefers to use a common-config or config file to store configuration values rather than hardcoding them in script files.
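A small end-to-end sketch of that idea, assuming the script loads `sample.ini` with `configparser` and that the `[ProgramDeletion]` section and `origin` key match the proposal above (none of this is confirmed against the actual script):

```python
import configparser
import os

config = configparser.ConfigParser()
config.read("sample.ini")  # actual config filename/path is an assumption

# Prefer the config value, fall back to an environment variable, then to empty,
# so non-QA deployments can supply their own origin without code changes.
origin = config.get("ProgramDeletion", "origin",
                    fallback=os.environ.get("ORIGIN_URL", ""))

headers = {"Content-Type": "application/json"}
if origin:
    headers["origin"] = origin  # only send the header when a value is configured
```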
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Documentation/data-cleanup/python-script/program_deletion.py` around lines 55
- 58, The headers dict currently hardcodes "origin": "default-qa.tekdinext.com"
which breaks non-QA deployments; update the script to read the origin value from
the config (e.g., the ProgramDeletion section key "origin" in sample.ini or from
an environment variable) and use that value when building the headers variable
(the headers symbol in program_deletion.py). Ensure a safe fallback (empty or
None) if the config key is missing and remove the hardcoded string so non-QA
environments can provide their own origin via the shared config.
```python
try:
    res = requests.post(LOGIN_URL, json=payload, headers=headers)
    res.raise_for_status()
```
HTTP requests without timeout can hang indefinitely.
Both requests.post calls (lines 61 and 95) lack a timeout parameter. If the remote API is unresponsive, the script will block forever. Flagged by Ruff S113.
Proposed fix

```diff
-    res = requests.post(LOGIN_URL, json=payload, headers=headers)
+    res = requests.post(LOGIN_URL, json=payload, headers=headers, timeout=30)
```

```diff
-    res = requests.post(DB_FIND_URL, json=payload, headers=headers)
+    res = requests.post(DB_FIND_URL, json=payload, headers=headers, timeout=60)
```

Also applies to: 94-96
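As a sketch, a small wrapper that applies the timeout and fails fast on unresponsive endpoints; the helper name and the 30-second default are illustrative choices, not part of the existing script:

```python
import requests

def post_with_timeout(url, payload, headers, timeout_s=30):
    """Drop-in replacement for the bare requests.post calls flagged above."""
    try:
        # timeout applies to both connecting and reading the response
        res = requests.post(url, json=payload, headers=headers, timeout=timeout_s)
        res.raise_for_status()
        return res
    except requests.exceptions.Timeout:
        # Surface a clear error instead of hanging indefinitely.
        raise RuntimeError(f"{url} did not respond within {timeout_s}s") from None
```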
🧰 Tools
🪛 Ruff (0.15.0)
[error] 61-61: Probable use of requests call without timeout
(S113)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Documentation/data-cleanup/python-script/program_deletion.py` around lines 60
- 62, The two requests.post calls that assign to res (e.g.,
requests.post(LOGIN_URL, json=payload, headers=headers) and the later
requests.post used to trigger program deletion) are missing a timeout, which can
hang indefinitely; update both calls to include a sensible timeout argument
(e.g., timeout=10) and propagate or handle requests.exceptions.Timeout where
appropriate so the script fails fast on unresponsive endpoints.
```python
payload = {
    "query": {"isAPrivateProgram": True},
    "sort": {"createdAt": "-1"},
    "projection": ["_id", "tenantId", "orgId"],
    "mongoIdKeys": ["_id"],
    "limit": 10000
}
```
Hardcoded limit: 10000 silently drops programs beyond that count.
If the dataset grows past 10,000 programs, the script will silently operate on a partial set — potentially leaving programs un-deleted. Consider paginating or at least logging a warning when len(data) == limit.
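A possible pagination loop, sketched under the assumption that the find endpoint accepts a `skip` offset alongside `limit` and wraps results in a `result` key — neither is confirmed by the review:

```python
import requests

def fetch_all_programs(db_find_url, headers, batch_size=1000):
    """Page through the find endpoint instead of a single limit=10000 request."""
    programs, skip = [], 0
    while True:
        payload = {
            "query": {"isAPrivateProgram": True},
            "sort": {"createdAt": "-1"},
            "projection": ["_id", "tenantId", "orgId"],
            "mongoIdKeys": ["_id"],
            "limit": batch_size,
            "skip": skip,  # assumed offset parameter; a last-seen _id cursor would also work
        }
        res = requests.post(db_find_url, json=payload, headers=headers, timeout=60)
        res.raise_for_status()
        page = res.json().get("result", [])  # response envelope key is an assumption
        programs.extend(page)
        if len(page) < batch_size:
            return programs  # final (possibly partial) page reached
        skip += batch_size
```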
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Documentation/data-cleanup/python-script/program_deletion.py` around lines 86
- 92, The payload dict currently uses a hardcoded "limit": 10000 which can
silently truncate results; modify the logic around payload (the dict with keys
"query", "sort", "projection", "mongoIdKeys", "limit") to paginate instead of a
one-shot fetch (e.g., loop using a smaller batch size and cursor/last-seen _id
or skip/offset until no more results) or, at minimum, detect when len(data) ==
payload["limit"] and emit a clear warning/error; update the code that consumes
payload (the fetch loop that receives `data`) to repeat requests with updated
offsets/cursor and collect/process all pages until fewer than limit items are
returned.
```
@@ -0,0 +1,178 @@
# Resource Deletion Script Documentation

Referred Script: `Documentation/data-cleanup/resource_delete.py`
```
Incorrect script path.
The referenced path Documentation/data-cleanup/resource_delete.py doesn't match the actual location Documentation/data-cleanup/python-script/resource_delete.py.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Documentation/data-cleanup/python-script/resource_delete.md` at line 3,
Update the incorrect resource path in
Documentation/data-cleanup/python-script/resource_delete.md by changing the
referenced script path string `Documentation/data-cleanup/resource_delete.py` to
the correct location
`Documentation/data-cleanup/python-script/resource_delete.py`; locate the
reference inside the markdown file (resource_delete.md) and replace the wrong
path so all links and mentions point to the actual script file.
```bash
python3 /app/Documentation/data-cleanup/resource_delete.py
```
Incorrect script path in the run command.
The path should include the python-script directory to match the actual file location.
Proposed fix
```diff
-python3 /app/Documentation/data-cleanup/resource_delete.py
+python3 /app/Documentation/data-cleanup/python-script/resource_delete.py
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Documentation/data-cleanup/python-script/resource_delete.md` around lines 156
- 158, The run command in the example uses the wrong path; update the bash
invocation to point to the actual location by including the python-script
directory so it runs the correct file (resource_delete.py) — replace the current
command that runs "/app/Documentation/data-cleanup/resource_delete.py" with one
that runs "/app/Documentation/data-cleanup/python-script/resource_delete.py".
```bash
if PGPASSWORD=$MB_DB_PASS psql -h $MB_DB_HOST -U $MB_DB_USER -d $MB_DB_DBNAME -c "\dt collection" | grep -q "collection"; then
    echo "Collection table found. Updating schema..."
    PGPASSWORD=$MB_DB_PASS psql -h $MB_DB_HOST -U $MB_DB_USER -d $MB_DB_DBNAME -c "ALTER TABLE collection ALTER COLUMN slug TYPE TEXT;"
```
Unquoted variables will break if DB credentials contain spaces or special characters.
The psql invocations on lines 24 and 26 use unquoted $MB_DB_HOST, $MB_DB_USER, $MB_DB_DBNAME, and $MB_DB_PASS. Passwords frequently contain special shell characters.
Proposed fix

```diff
-if PGPASSWORD=$MB_DB_PASS psql -h $MB_DB_HOST -U $MB_DB_USER -d $MB_DB_DBNAME -c "\dt collection" | grep -q "collection"; then
+if PGPASSWORD="$MB_DB_PASS" psql -h "$MB_DB_HOST" -U "$MB_DB_USER" -d "$MB_DB_DBNAME" -c "\dt collection" | grep -q "collection"; then
     echo "Collection table found. Updating schema..."
-    PGPASSWORD=$MB_DB_PASS psql -h $MB_DB_HOST -U $MB_DB_USER -d $MB_DB_DBNAME -c "ALTER TABLE collection ALTER COLUMN slug TYPE TEXT;"
+    PGPASSWORD="$MB_DB_PASS" psql -h "$MB_DB_HOST" -U "$MB_DB_USER" -d "$MB_DB_DBNAME" -c "ALTER TABLE collection ALTER COLUMN slug TYPE TEXT;"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-if PGPASSWORD=$MB_DB_PASS psql -h $MB_DB_HOST -U $MB_DB_USER -d $MB_DB_DBNAME -c "\dt collection" | grep -q "collection"; then
-    echo "Collection table found. Updating schema..."
-    PGPASSWORD=$MB_DB_PASS psql -h $MB_DB_HOST -U $MB_DB_USER -d $MB_DB_DBNAME -c "ALTER TABLE collection ALTER COLUMN slug TYPE TEXT;"
+if PGPASSWORD="$MB_DB_PASS" psql -h "$MB_DB_HOST" -U "$MB_DB_USER" -d "$MB_DB_DBNAME" -c "\dt collection" | grep -q "collection"; then
+    echo "Collection table found. Updating schema..."
+    PGPASSWORD="$MB_DB_PASS" psql -h "$MB_DB_HOST" -U "$MB_DB_USER" -d "$MB_DB_DBNAME" -c "ALTER TABLE collection ALTER COLUMN slug TYPE TEXT;"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Documentation/Docker-setup/update_metabase_schema.sh` around lines 24 - 26,
The psql calls use unquoted variables (PGPASSWORD, $MB_DB_HOST, $MB_DB_USER,
$MB_DB_DBNAME) which will break with spaces/special chars; change the
invocations that check for the collection and run the ALTER (the PGPASSWORD=...
psql -h ... -U ... -d ... -c ... commands) to quote the shell variables (e.g.
PGPASSWORD="$MB_DB_PASS" and -h "$MB_DB_HOST" -U "$MB_DB_USER" -d
"$MB_DB_DBNAME") so the password and other DB credentials are safely passed;
apply the same quoting to both the existence check and the ALTER TABLE
invocation that references "collection" and "ALTER TABLE collection ALTER COLUMN
slug TYPE TEXT;".
```bash
if [ $COUNTER -eq $MAX_RETRIES ]; then
    echo "Timeout waiting for collection table or schema update."
fi
```
No alerting beyond a log message on timeout — the schema may silently remain unpatched.
If all 30 retries are exhausted, the script only prints a message but Metabase continues running with the potentially broken schema. Consider exiting with a non-zero code or at least logging at a higher severity so operators notice.
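A sketch of the suggested behaviour, reusing the `COUNTER`/`MAX_RETRIES` variables from the script; the exact exit code is a choice, not something mandated by the review:

```bash
if [ "$COUNTER" -eq "$MAX_RETRIES" ]; then
    # Send the failure to stderr so it stands out in container logs, then
    # exit non-zero so orchestrators and operators see the schema was not patched.
    echo "ERROR: Timeout waiting for collection table or schema update." >&2
    exit 1
fi
```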
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Documentation/Docker-setup/update_metabase_schema.sh` around lines 40 - 42,
The timeout branch in update_metabase_schema.sh currently only echoes "Timeout
waiting for collection table or schema update."—change this to emit a
higher-severity failure and terminate the script with a non-zero exit so callers
detect the failure: use >&2 to send the message to stderr (or call logger) and
then call exit 1 when COUNTER equals MAX_RETRIES (referencing COUNTER,
MAX_RETRIES and the timeout branch) so the process does not silently continue
with an unpatched schema.
```
@@ -0,0 +1,120 @@
# Release v3.0.3 Deployment Guide

## Introduction
```
Incorrect script path reference.
Line 3 references Documentation/data-cleanup/resource_delete.py but the actual file location in this PR is Documentation/data-cleanup/python-script/resource_delete.py.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@release-docs/release-3.0.3.md` at line 3, Update the incorrect script path in
the release note: replace the referenced path
"Documentation/data-cleanup/resource_delete.py" with the actual file location
"Documentation/data-cleanup/python-script/resource_delete.py" so the
link/reference points to the real script file (update the string in the
release-3.0.3.md content).
```json
./Documentation/migration-scripts/alter-solution-table.sh
```

### 2. Metabase column data type changes
Duplicate section number: this is the second ### 2.
Line 56 already defines "### 2. Migration scripts". This section should be "### 3. Metabase column data type changes", and subsequent sections renumbered accordingly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@release-docs/release-3.0.3.md` at line 77, Duplicate markdown heading number:
change the heading "### 2. Metabase column data type changes" to "### 3.
Metabase column data type changes" and update all subsequent section numbers to
follow the corrected sequence; specifically search for the heading text "### 2.
Metabase column data type changes" and the earlier "### 2. Migration scripts" to
ensure numbering becomes "### 2. Migration scripts", "### 3. Metabase column
data type changes", then increment the numbers of the following headings
accordingly.
- **Reload all the configs by running the data-loader.sh script but first make the necessary config chnages**

```json
# Database connection parameters
DB_NAME="{{PGDBNAME}}"
DB_USER="{{PGUSER}}"
DB_PASSWORD="{{PGPASSWORD}}"
DB_HOST="{{PGHOST}}"
DB_PORT="{{PGPORT}}"
TABLE_NAME="{{ENV}}_report_config"

# Json file directory path from inside contianer
MAIN_FOLDER="/app/data-pipeline/metabase-jobs/config-data-loader/projectJson"
```
Typos in documentation.
- Line 92: "chnages" → "changes"
- Line 102: "contianer" → "container"
Proposed fix

```diff
-- **Reload all the configs by running the data-loader.sh script but first make the necessary config chnages**
+- **Reload all the configs by running the data-loader.sh script but first make the necessary config changes**
```

```diff
-# Json file directory path from inside contianer
+# Json file directory path from inside container
```

🧰 Tools
🪛 LanguageTool
[grammar] ~92-~92: Ensure spelling is correct
Context: ...ipt but first make the necessary config chnages** json # Database connection parameters DB_NAME="{{PGDBNAME}}" DB_USER="{{PGUSER}}" DB_PASSWORD="{{PGPASSWORD}}" DB_HOST="{{PGHOST}}" DB_PORT="{{PGPORT}}" TABLE_NAME="{{ENV}}_report_config" # Json file directory path from inside contianer MAIN_FOLDER="/app/data-pipeline/metabase-jobs/config-data-loader/projectJson" ### 4. Data clean up (If required) - To setu...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@release-docs/release-3.0.3.md` around lines 92 - 103, Fix the two typos in
the release notes: update the sentence "Reload all the configs by running the
data-loader.sh script but first make the necessary config chnages" to use
"changes" instead of "chnages", and correct the comment "# Json file directory
path from inside contianer" to "# Json file directory path from inside
container" (the surrounding block including DB_* variables and MAIN_FOLDER
should remain unchanged).
Summary by CodeRabbit
Release Notes
- New Features
- Documentation
- Chores