Conversation

Copilot AI commented Jan 2, 2026

Replaces overly broad except Exception clauses that could mask unexpected errors, and adds signal handling for clean process termination.

Exception Handling

  • _get_pr_diffs(): Catch requests.RequestException for network failures when fetching PR diffs
  • post_inactive_engineers(): Catch (requests.RequestException, ValueError) for API failures fetching completed issues
  • post_weekly_changelog(): Catch (requests.RequestException, ValueError) for OpenAI API call failures
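
For illustration, the narrowed pattern looks like this (a standalone sketch with a hypothetical fetch_diff helper, not the actual jobs.py code):

```python
import logging

import requests


def fetch_diff(url: str):
    """Fetch a PR diff, logging and skipping network failures.

    requests.RequestException covers ConnectionError, Timeout, and
    HTTPError, but lets programming errors (KeyError, TypeError, ...)
    surface instead of being silently logged.
    """
    try:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        return resp.text
    except requests.RequestException as e:
        logging.error("Failed to fetch diff from %s: %s", url, e)
        return None
```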

Signal Handling

Added SIGINT/SIGTERM handlers to break the scheduler loop cleanly instead of running indefinitely:

import logging
import signal
import time

import schedule

shutdown_requested = False


def handle_shutdown(signum, frame):
    """Flag the scheduler loop to exit on SIGINT/SIGTERM."""
    global shutdown_requested
    logging.info("Received shutdown signal %s, stopping scheduler loop...", signum)
    shutdown_requested = True


signal.signal(signal.SIGINT, handle_shutdown)
signal.signal(signal.SIGTERM, handle_shutdown)

while not shutdown_requested:
    schedule.run_pending()
    time.sleep(1)
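
The shutdown path can be exercised without an external supervisor by delivering the signal to the current process (a standalone sketch, independent of jobs.py; the schedule call is replaced by a short sleep):

```python
import os
import signal
import threading
import time

shutdown_requested = False


def handle_shutdown(signum, frame):
    global shutdown_requested
    shutdown_requested = True


signal.signal(signal.SIGTERM, handle_shutdown)

# Simulate an external `kill -TERM <pid>` arriving after 0.2 seconds.
threading.Timer(0.2, lambda: os.kill(os.getpid(), signal.SIGTERM)).start()

# The loop spins until the handler flips the flag, then falls through.
while not shutdown_requested:
    time.sleep(0.05)

print("loop exited cleanly")
```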

Minor Refactor

Moved support_slugs = get_support_slugs() before dependent variable declarations in post_priority_bugs() to reduce scope.

Original prompt
Please apply the following diffs and create a pull request.
Once the PR is ready, give it a title based on the messages of the fixes being applied.

[{"message":"Using a bare 'Exception' catch is too broad. Consider catching specific exceptions like 'requests.RequestException' or other expected exceptions from 'get_pr_diff()' to avoid masking unexpected errors.","fixFiles":[{"filePath":"jobs.py","diff":"diff --git a/jobs.py b/jobs.py\n--- a/jobs.py\n+++ b/jobs.py\n@@ -156,7 +156,7 @@\n         try:\n             diff = get_pr_diff(owner, repo, int(number))\n             diffs.append(diff)\n-        except Exception as e:  # pragma: no cover - network errors are ignored\n+        except requests.RequestException as e:  # pragma: no cover - network errors are ignored\n             logging.error(\n                 \"Failed to fetch diff for %s/%s#%s (error type: %s)\",\n                 owner,\n"}]},{"message":"Using a bare 'Exception' catch is too broad. Consider catching specific exceptions from 'get_completed_issues_for_person()' to avoid masking unexpected errors while still handling expected API failures.","fixFiles":[{"filePath":"jobs.py","diff":"diff --git a/jobs.py b/jobs.py\n--- a/jobs.py\n+++ b/jobs.py\n@@ -494,7 +494,7 @@\n             continue\n         try:\n             completed = get_completed_issues_for_person(login, 7)\n-        except Exception as e:\n+        except (requests.RequestException, ValueError) as e:\n             logging.error(f\"Failed to fetch completed issues for {login}: {e}\")\n             continue\n         if not completed:\n"}]},{"message":"Using a bare 'Exception' catch is too broad. 
Consider catching specific exceptions from 'get_chat_function_call()' to avoid masking unexpected errors while handling expected API failures.","fixFiles":[{"filePath":"jobs.py","diff":"diff --git a/jobs.py b/jobs.py\n--- a/jobs.py\n+++ b/jobs.py\n@@ -8,6 +8,7 @@\n import schedule\n from dotenv import load_dotenv\n from tenacity import before_sleep_log, retry, stop_after_attempt, wait_fixed\n+from requests.exceptions import RequestException\n \n from config import load_config\n from constants import PRIORITY_TO_SCORE\n@@ -680,7 +681,7 @@\n             functions=function_spec,\n             function_call_name=\"generate_changelog\",\n         )\n-    except Exception as e:\n+    except (RequestException, ValueError) as e:\n         logging.error(\n             \"Failed to generate changelog via function call. Error: %s\",\n             e,\n"}]},{"message":"[nitpick] The variables 'notified_slack_ids', 'slug_by_slack_id', and 'lead_platforms_by_slack_id' could be initialized closer to where they are used to improve code readability and reduce the scope of variables.","fixFiles":[{"filePath":"jobs.py","diff":"diff --git a/jobs.py b/jobs.py\n--- a/jobs.py\n+++ b/jobs.py\n@@ -205,10 +205,10 @@\n             if bug[\"assignee\"]\n         }\n         platforms = {bug[\"platform\"] for bug in unassigned if bug[\"platform\"]}\n+        support_slugs = get_support_slugs()\n         notified_slack_ids: set[str] = set()\n         slug_by_slack_id: dict[str, str] = {}\n         lead_platforms_by_slack_id: dict[str, set[str]] = {}\n-        support_slugs = get_support_slugs()\n         for platform in platforms:\n             platform_slug = platform.lower().replace(\" \", \"-\")\n             platform_config = config[\"platforms\"].get(platform_slug, {})\n"}]},{"message":"The infinite loop lacks a graceful shutdown mechanism. 
Consider adding signal handling to allow for clean shutdown when the process receives SIGTERM or SIGINT.","fixFiles":[{"filePath":"jobs.py","diff":"diff --git a/jobs.py b/jobs.py\n--- a/jobs.py\n+++ b/jobs.py\n@@ -3,6 +3,7 @@\n import re\n import time\n from datetime import datetime, timezone\n+import signal\n \n import requests\n import schedule\n@@ -709,6 +710,15 @@\n     post_to_slack(changelog_text)\n \n \n+shutdown_requested = False\n+\n+\n+def handle_shutdown(signum, frame):\n+    global shutdown_requested\n+    logging.info(\"Received shutdown signal %s, stopping scheduler loop...\", signum)\n+    shutdown_requested = True\n+\n+\n if os.getenv(\"DEBUG\") == \"true\":\n     post_inactive_engineers()\n     post_priority_bugs()\n@@ -726,6 +736,13 @@\n     schedule.every().friday.at(\"12:00\").do(post_upcoming_projects)\n     schedule.every().monday.at(\"12:00\").do(post_friday_deadlines)\n \n-    while True:\n-        schedule.run_pending()\n-        time.sleep(1)\n+    signal.signal(signal.SIGINT, handle_shutdown)\n+    signal.signal(signal.SIGTERM, handle_shutdown)\n+    logging.info(\"Starting scheduler loop\")\n+\n+    try:\n+        while not shutdown_requested:\n+            schedule.run_pending()\n+            time.sleep(1)\n+    except KeyboardInterrupt:\n+        logging.info(\"KeyboardInterrupt received, stopping scheduler loop...\")\n"}]}]


Copilot AI and others added 2 commits January 2, 2026 14:04
Co-authored-by: redreceipt <2659478+redreceipt@users.noreply.github.com>
Co-authored-by: redreceipt <2659478+redreceipt@users.noreply.github.com>
Copilot AI changed the title from "[WIP] Fix broad exception handling in jobs.py" to "Replace bare Exception catches with specific exception types and add graceful shutdown" Jan 2, 2026
Copilot AI requested a review from redreceipt January 2, 2026 14:08
@redreceipt redreceipt marked this pull request as ready for review January 2, 2026 14:10

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines 496 to 499
        try:
            completed = get_completed_issues_for_person(login, 7)
-       except Exception as e:
+       except (requests.RequestException, ValueError) as e:
            logging.error(f"Failed to fetch completed issues for {login}: {e}")


P1: Catch Linear client errors when listing inactive engineers

The new (requests.RequestException, ValueError) catch in post_inactive_engineers no longer matches the errors produced by get_completed_issues_for_person, which uses the gql client over aiohttp and raises TransportQueryError/aiohttp.ClientError rather than requests.RequestException. Those failures will now bubble out of the scheduled job (the with_retries wrapper re-raises), stopping the scheduler loop instead of logging and continuing as before when Linear is unavailable.
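
The fix the reviewer asks for amounts to catching what the client actually raises. Sketched below with stand-in types (LinearTransportError is a local stub standing in for gql's TransportQueryError and aiohttp.ClientError, which the real code would import and catch):

```python
import logging


class LinearTransportError(Exception):
    """Stub for gql's TransportQueryError / aiohttp.ClientError."""


def get_completed_issues_for_person(login, days):
    # Simulate Linear being unavailable for every lookup.
    raise LinearTransportError("Linear unavailable")


def list_inactive(logins):
    inactive = []
    for login in logins:
        try:
            completed = get_completed_issues_for_person(login, 7)
        except (LinearTransportError, ValueError) as e:
            # Log and continue: one failed lookup must not kill the
            # whole scheduled job, matching the pre-change behavior.
            logging.error("Failed to fetch completed issues for %s: %s", login, e)
            continue
        if not completed:
            inactive.append(login)
    return inactive
```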



@codex could you address this comment


Summary

  • Added specific exception imports for Linear GraphQL transport, aiohttp client, and OpenAI SDK to align scheduler error handling with actual client failures. (jobs.py L1-L14)
  • Updated post_inactive_engineers to log and continue on Linear transport/client errors or bad data instead of letting scheduler jobs crash. (jobs.py L489-L503)
  • Ensured weekly changelog generation catches OpenAI errors alongside request/validation issues to preserve scheduler resilience. (jobs.py L680-L692)

Testing

  • ⚠️ Not run (not requested)


Comment on lines +684 to 686
        except (requests.RequestException, ValueError) as e:
            logging.error(
                "Failed to generate changelog via function call. Error: %s",


P1: OpenAI failures now crash weekly changelog job

post_weekly_changelog now only catches requests.RequestException/ValueError, but get_chat_function_call is implemented with the OpenAI SDK and will raise OpenAIError (or other SDK/network exceptions) instead. Any transient OpenAI outage will therefore propagate and terminate the scheduler instead of being logged and returning an empty changelog as before, reducing resilience of the weekly job.
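
The same principle applies here: the except tuple should include the SDK's base error so a transient outage degrades to an empty changelog instead of killing the scheduler. A stand-in sketch (OpenAIError below is a local stub standing in for openai.OpenAIError):

```python
import logging


class OpenAIError(Exception):
    """Local stub standing in for openai.OpenAIError."""


def get_chat_function_call(**kwargs):
    # Simulate a transient OpenAI outage.
    raise OpenAIError("rate limited")


def generate_changelog():
    try:
        return get_chat_function_call(function_call_name="generate_changelog")
    except (OpenAIError, ValueError) as e:
        logging.error("Failed to generate changelog via function call. Error: %s", e)
        return ""  # empty changelog keeps the weekly job alive
```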


