Install Autoloop #24345

Closed
Copilot wants to merge 1 commit into main from copilot/install-autoloop

Conversation

Contributor

Copilot AI commented Apr 3, 2026

  • Clone autoloop repo and copy workflow/template files
  • Create .autoloop/programs directory
  • Compile autoloop and sync-branches workflows
  • Create PR: Install Autoloop #24345

Agent-Logs-Url: https://github.com/github/gh-aw/sessions/01dfbe7f-690f-41f9-a339-040f104a1697

Co-authored-by: pelikhan <4175913+pelikhan@users.noreply.github.com>
Contributor

Copilot AI left a comment


Pull request overview

Installs the Autoloop agentic workflow set (Autoloop loop + branch sync) and adds an issue template for defining Autoloop programs.

Changes:

  • Added the Autoloop workflow source (.md) and compiled lock workflow (.lock.yml).
  • Added a sync-branches workflow to keep autoloop/* branches up to date with the default branch.
  • Added an Autoloop Program issue template for issue-based program definitions.
Summary per file:

  • .github/workflows/autoloop.md: Defines the Autoloop workflow prompt + scheduling pre-step logic.
  • .github/workflows/autoloop.lock.yml: Compiled workflow used by GitHub Actions to run Autoloop.
  • .github/workflows/sync-branches.md: Defines the branch-sync workflow logic for autoloop/* branches.
  • .github/workflows/sync-branches.lock.yml: Compiled workflow used by GitHub Actions to run branch sync.
  • .github/ISSUE_TEMPLATE/autoloop-program.md: Template for creating issue-based Autoloop programs via labeled issues.

Copilot's findings


Comments suppressed due to low confidence (1)

.github/workflows/autoloop.md:107

  • repo_memory_dir is set to /tmp/gh-aw/repo-memory/autoloop, but this repo’s other workflows (and the compiled autoloop workflow) use /tmp/gh-aw/repo-memory/default as the clone destination for repo-memory branches (e.g., .github/workflows/agent-performance-analyzer.md:178). With the current path, state files will never be found and scheduling will never persist. Update the path to match the actual repo-memory checkout directory used by the workflow.
      # Repo-memory files are cloned to /tmp/gh-aw/repo-memory/{id}/ where {id}
      # is derived from the branch-name configured in the tools section (memory/autoloop → autoloop)
      repo_memory_dir = "/tmp/gh-aw/repo-memory/autoloop"
  • Files reviewed: 4/5 changed files
  • Comments generated: 6

branches: [main] # ← update this if your default branch is not 'main'
workflow_dispatch:

permissions: read-all

Copilot AI Apr 3, 2026


sync-branches performs git push to update autoloop/* branches, but the workflow only requests read-all permissions. With read-only GITHUB_TOKEN permissions, pushes will be rejected. Update the workflow permissions to grant contents: write (and ideally scope to the minimum required).

Suggested change
permissions: read-all
permissions:
  contents: write

Copilot uses AI. Check for mistakes.
Comment on lines +34 to +43
# List all remote branches matching the autoloop/* pattern
result = subprocess.run(
    ["git", "branch", "-r", "--list", "origin/autoloop/*"],
    capture_output=True, text=True
)
if result.returncode != 0:
    print(f"Failed to list remote branches: {result.stderr}")
    sys.exit(0)

branches = [b.strip().replace("origin/", "") for b in result.stdout.strip().split("\n") if b.strip()]

Copilot AI Apr 3, 2026


This branch discovery relies on git branch -r --list origin/autoloop/*, but the workflow does not fetch remote heads beyond the default checkout. With the default actions/checkout fetch depth, those remote-tracking refs typically aren’t present, so this will often report no branches. Fetch autoloop/* refs explicitly before listing, or use git ls-remote --heads to discover branches without relying on pre-fetched refs.

Suggested change
# List all remote branches matching the autoloop/* pattern
result = subprocess.run(
    ["git", "branch", "-r", "--list", "origin/autoloop/*"],
    capture_output=True, text=True
)
if result.returncode != 0:
    print(f"Failed to list remote branches: {result.stderr}")
    sys.exit(0)
branches = [b.strip().replace("origin/", "") for b in result.stdout.strip().split("\n") if b.strip()]
# List all remote branches matching the autoloop/* pattern by querying
# the remote directly, rather than relying on local remote-tracking refs.
result = subprocess.run(
    ["git", "ls-remote", "--heads", "origin", "autoloop/*"],
    capture_output=True, text=True
)
if result.returncode != 0:
    print(f"Failed to list remote branches: {result.stderr}")
    sys.exit(0)
branches = []
for line in result.stdout.strip().split("\n"):
    if not line.strip():
        continue
    parts = line.split()
    if len(parts) >= 2 and parts[1].startswith("refs/heads/"):
        branches.append(parts[1].replace("refs/heads/", "", 1))

Comment on lines +283 to +287
name: Merge default branch into all autoloop program branches
run: "python3 - << 'PYEOF'\nimport os, subprocess, sys\n\ntoken = os.environ.get(\"GITHUB_TOKEN\", \"\")\nrepo = os.environ.get(\"GITHUB_REPOSITORY\", \"\")\ndefault_branch = os.environ.get(\"DEFAULT_BRANCH\", \"main\")\n\n# List all remote branches matching the autoloop/* pattern\nresult = subprocess.run(\n [\"git\", \"branch\", \"-r\", \"--list\", \"origin/autoloop/*\"],\n capture_output=True, text=True\n)\nif result.returncode != 0:\n print(f\"Failed to list remote branches: {result.stderr}\")\n sys.exit(0)\n\nbranches = [b.strip().replace(\"origin/\", \"\") for b in result.stdout.strip().split(\"\\n\") if b.strip()]\n\nif not branches:\n print(\"No autoloop/* branches found. Nothing to sync.\")\n sys.exit(0)\n\nprint(f\"Found {len(branches)} autoloop branch(es) to sync: {branches}\")\n\nfailed = []\nfor branch in branches:\n print(f\"\\n--- Syncing {branch} with {default_branch} ---\")\n\n # Fetch both branches\n subprocess.run([\"git\", \"fetch\", \"origin\", branch], capture_output=True)\n subprocess.run([\"git\", \"fetch\", \"origin\", default_branch], capture_output=True)\n\n # Check out the program branch\n checkout = subprocess.run(\n [\"git\", \"checkout\", branch],\n capture_output=True, text=True\n )\n if checkout.returncode != 0:\n # Try creating a local tracking branch\n checkout = subprocess.run(\n [\"git\", \"checkout\", \"-b\", branch, f\"origin/{branch}\"],\n capture_output=True, text=True\n )\n if checkout.returncode != 0:\n print(f\" Failed to checkout {branch}: {checkout.stderr}\")\n failed.append(branch)\n continue\n\n # Merge the default branch into the program branch\n merge = subprocess.run(\n [\"git\", \"merge\", f\"origin/{default_branch}\", \"--no-edit\",\n \"-m\", f\"Merge {default_branch} into {branch}\"],\n capture_output=True, text=True\n )\n if merge.returncode != 0:\n print(f\" Merge conflict or failure for {branch}: {merge.stderr}\")\n # Abort the merge to leave a clean state\n subprocess.run([\"git\", \"merge\", \"--abort\"], 
capture_output=True)\n failed.append(branch)\n continue\n\n # Push the updated branch\n push = subprocess.run(\n [\"git\", \"push\", \"origin\", branch],\n capture_output=True, text=True\n )\n if push.returncode != 0:\n print(f\" Failed to push {branch}: {push.stderr}\")\n failed.append(branch)\n continue\n\n print(f\" Successfully synced {branch}\")\n\n# Return to default branch\nsubprocess.run([\"git\", \"checkout\", default_branch], capture_output=True)\n\nif failed:\n print(f\"\\n⚠️ Failed to sync {len(failed)} branch(es): {failed}\")\n print(\"These branches may need manual conflict resolution.\")\n # Don't fail the workflow — log the issue but continue\nelse:\n print(f\"\\n✅ All {len(branches)} branch(es) synced successfully.\")\nPYEOF"

- name: Configure Git credentials
env:

Copilot AI Apr 3, 2026


The merge/push step runs before Git author/credentials are configured. In the compiled workflow this means merge commits may fail (missing identity) and pushes/fetches can fail (no authenticated remote). Configure Git credentials (user.name/user.email + authenticated origin URL) before running the merge script.
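In the style of the workflow's own subprocess-based scripts, a minimal sketch of such a credentials pre-step might look like the following. The bot identity and helper names are illustrative assumptions, not part of this PR; the token and repo slug are assumed to come from the GITHUB_TOKEN and GITHUB_REPOSITORY environment variables.

```python
import os
import subprocess


def git_config_commands(token, repo):
    """Build the git commands that set a commit identity and an
    authenticated origin URL for owner/name slug `repo`."""
    return [
        ["git", "config", "user.name", "github-actions[bot]"],
        ["git", "config", "user.email",
         "41898282+github-actions[bot]@users.noreply.github.com"],
        # Embed the token in the remote URL so later fetch/push calls
        # are authenticated without extra flags.
        ["git", "remote", "set-url", "origin",
         f"https://x-access-token:{token}@github.com/{repo}.git"],
    ]


def configure_git():
    token = os.environ.get("GITHUB_TOKEN", "")
    repo = os.environ.get("GITHUB_REPOSITORY", "")
    for cmd in git_config_commands(token, repo):
        subprocess.run(cmd, check=True)
```

Running `configure_git()` (or an equivalent shell step) before the merge script would address both the missing-identity and unauthenticated-push failure modes in one place.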

Comment on lines +455 to +457
if not selected and not unconfigured:
    print("\nNo programs due this run. Exiting early.")
    sys.exit(1)  # Non-zero exit skips the agent step

Copilot AI Apr 3, 2026


sys.exit(1) will fail the workflow run (not just “skip the agent step”), creating noisy scheduled failures when no programs are due. Prefer exiting successfully and gating the later agent execution based on an output/flag from this step (or write a sentinel to /tmp/gh-aw/autoloop.json and have the agent produce a no-op).
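One hedged way to implement that gating: exit 0 and write a flag to the GITHUB_OUTPUT environment file, then condition the agent step on it. The helper name and the `due` flag below are illustrative, not part of this PR.

```python
import os


def write_step_output(name, value, output_path=None):
    """Append a name=value pair to the GITHUB_OUTPUT file so a later
    step can gate on it, e.g. `if: steps.check.outputs.due == 'true'`."""
    path = output_path or os.environ.get("GITHUB_OUTPUT", "")
    if not path:
        # Fallback for local runs without the Actions environment file
        print(f"{name}={value}")
        return
    with open(path, "a", encoding="utf-8") as f:
        f.write(f"{name}={value}\n")


# Instead of sys.exit(1) when nothing is due:
#   write_step_output("due", "false")
#   sys.exit(0)   # run stays green; the agent step is skipped by its `if:`
```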

Comment on lines +437 to +441
name: Check which programs are due
run: "python3 - << 'PYEOF'\nimport os, json, re, glob, sys\nimport urllib.request, urllib.error\nfrom datetime import datetime, timezone, timedelta\n\nprograms_dir = \".autoloop/programs\"\nautoloop_dir = \".autoloop/programs\"\ntemplate_file = os.path.join(autoloop_dir, \"example.md\")\n\n# Read program state from repo-memory (persistent git-backed storage)\ngithub_token = os.environ.get(\"GITHUB_TOKEN\", \"\")\nrepo = os.environ.get(\"GITHUB_REPOSITORY\", \"\")\nforced_program = os.environ.get(\"AUTOLOOP_PROGRAM\", \"\").strip()\n\n# Repo-memory files are cloned to /tmp/gh-aw/repo-memory/{id}/ where {id}\n# is derived from the branch-name configured in the tools section (memory/autoloop → autoloop)\nrepo_memory_dir = \"/tmp/gh-aw/repo-memory/autoloop\"\n\ndef parse_machine_state(content):\n \"\"\"Parse the ⚙️ Machine State table from a state file. Returns a dict.\"\"\"\n state = {}\n m = re.search(r'## ⚙️ Machine State.*?\\n(.*?)(?=\\n## |\\Z)', content, re.DOTALL)\n if not m:\n return state\n section = m.group(0)\n for row in re.finditer(r'\\|\\s*(.+?)\\s*\\|\\s*(.+?)\\s*\\|', section):\n raw_key = row.group(1).strip()\n raw_val = row.group(2).strip()\n if raw_key.lower() in (\"field\", \"---\", \":---\", \":---:\", \"---:\"):\n continue\n key = raw_key.lower().replace(\" \", \"_\")\n val = None if raw_val in (\"—\", \"-\", \"\") else raw_val\n state[key] = val\n # Coerce types\n for int_field in (\"iteration_count\", \"consecutive_errors\"):\n if int_field in state:\n try:\n state[int_field] = int(state[int_field])\n except (ValueError, TypeError):\n state[int_field] = 0\n if \"paused\" in state:\n state[\"paused\"] = str(state.get(\"paused\", \"\")).lower() == \"true\"\n if \"completed\" in state:\n state[\"completed\"] = str(state.get(\"completed\", \"\")).lower() == \"true\"\n # recent_statuses: stored as comma-separated words (e.g. 
\"accepted, rejected, error\")\n rs_raw = state.get(\"recent_statuses\") or \"\"\n if rs_raw:\n state[\"recent_statuses\"] = [s.strip().lower() for s in rs_raw.split(\",\") if s.strip()]\n else:\n state[\"recent_statuses\"] = []\n return state\n\ndef read_program_state(program_name):\n \"\"\"Read scheduling state from the repo-memory state file.\"\"\"\n state_file = os.path.join(repo_memory_dir, f\"{program_name}.md\")\n if not os.path.isfile(state_file):\n print(f\" {program_name}: no state file found (first run)\")\n return {}\n with open(state_file, encoding=\"utf-8\") as f:\n content = f.read()\n return parse_machine_state(content)\n\n# Bootstrap: create autoloop programs directory and template if missing\nif not os.path.isdir(autoloop_dir):\n os.makedirs(autoloop_dir, exist_ok=True)\n bt = chr(96) # backtick — avoid literal backticks that break gh-aw compiler\n template = \"\\n\".join([\n \"<!-- AUTOLOOP:UNCONFIGURED -->\",\n \"<!-- Remove the line above once you have filled in your program. -->\",\n \"<!-- Autoloop will NOT run until you do. -->\",\n \"\",\n \"# Autoloop Program\",\n \"\",\n \"<!-- Rename this file to something meaningful (e.g. training.md, coverage.md).\",\n \" The filename (minus .md) becomes the program name used in issues, PRs,\",\n \" and slash commands. Want multiple loops? Add more .md files here. -->\",\n \"\",\n \"## Goal\",\n \"\",\n \"<!-- Describe what you want to optimize. Be specific about what 'better' means. -->\",\n \"\",\n \"REPLACE THIS with your optimization goal.\",\n \"\",\n \"## Target\",\n \"\",\n \"<!-- List files Autoloop may modify. Everything else is off-limits. -->\",\n \"\",\n \"Only modify these files:\",\n f\"- {bt}REPLACE_WITH_FILE{bt} -- (describe what this file does)\",\n \"\",\n \"Do NOT modify:\",\n \"- (list files that must not be touched)\",\n \"\",\n \"## Evaluation\",\n \"\",\n \"<!-- Provide a command and the metric to extract. 
-->\",\n \"\",\n f\"{bt}{bt}{bt}bash\",\n \"REPLACE_WITH_YOUR_EVALUATION_COMMAND\",\n f\"{bt}{bt}{bt}\",\n \"\",\n f\"The metric is {bt}REPLACE_WITH_METRIC_NAME{bt}. **Lower/Higher is better.** (pick one)\",\n \"\",\n ])\n with open(template_file, \"w\") as f:\n f.write(template)\n # Leave the template unstaged — the agent will create a draft PR with it\n print(f\"BOOTSTRAPPED: created {template_file} locally (agent will create a draft PR)\")\n\n# Find all program files from all locations:\n# 1. Directory-based programs: .autoloop/programs/<name>/program.md (preferred)\n# 2. Bare markdown programs: .autoloop/programs/<name>.md (simple)\n# 3. Issue-based programs: GitHub issues with the 'autoloop-program' label\nprogram_files = []\nissue_programs = {} # name -> {issue_number, file}\n\n# Scan .autoloop/programs/ for directory-based programs\nif os.path.isdir(programs_dir):\n for entry in sorted(os.listdir(programs_dir)):\n prog_dir = os.path.join(programs_dir, entry)\n if os.path.isdir(prog_dir):\n # Look for program.md inside the directory\n prog_file = os.path.join(prog_dir, \"program.md\")\n if os.path.isfile(prog_file):\n program_files.append(prog_file)\n\n# Scan .autoloop/programs/ for bare markdown programs\nbare_programs = sorted(glob.glob(os.path.join(autoloop_dir, \"*.md\")))\nfor pf in bare_programs:\n program_files.append(pf)\n\n# Scan GitHub issues with the 'autoloop-program' label\nissue_programs_dir = \"/tmp/gh-aw/issue-programs\"\nos.makedirs(issue_programs_dir, exist_ok=True)\ntry:\n api_url = f\"https://api.github.com/repos/{repo}/issues?labels=autoloop-program&state=open&per_page=100\"\n req = urllib.request.Request(api_url, headers={\n \"Authorization\": f\"token {github_token}\",\n \"Accept\": \"application/vnd.github.v3+json\",\n })\n with urllib.request.urlopen(req, timeout=30) as resp:\n issues = json.loads(resp.read().decode())\n for issue in issues:\n if issue.get(\"pull_request\"):\n continue # skip PRs\n body = issue.get(\"body\") or \"\"\n 
title = issue.get(\"title\") or \"\"\n number = issue[\"number\"]\n # Derive program name from issue title: slugify to lowercase with hyphens\n slug = re.sub(r'[^a-z0-9]+', '-', title.lower()).strip('-')\n slug = re.sub(r'-+', '-', slug) # collapse consecutive hyphens\n if not slug:\n slug = f\"issue-{number}\"\n # Avoid slug collisions: if another issue already claimed this slug, append issue number\n if slug in issue_programs:\n print(f\" Warning: slug '{slug}' (issue #{number}) collides with issue #{issue_programs[slug]['issue_number']}, appending issue number\")\n slug = f\"{slug}-{number}\"\n # Write issue body to a temp file so the scheduling loop can process it\n issue_file = os.path.join(issue_programs_dir, f\"{slug}.md\")\n with open(issue_file, \"w\") as f:\n f.write(body)\n program_files.append(issue_file)\n issue_programs[slug] = {\"issue_number\": number, \"file\": issue_file, \"title\": title}\n print(f\" Found issue-based program: '{slug}' (issue #{number})\")\nexcept Exception as e:\n print(f\" Warning: could not fetch issue-based programs: {e}\")\n\nif not program_files:\n # Fallback to single-file locations\n for path in [\".autoloop/program.md\", \"program.md\"]:\n if os.path.isfile(path):\n program_files = [path]\n break\n\nif not program_files:\n print(\"NO_PROGRAMS_FOUND\")\n os.makedirs(\"/tmp/gh-aw\", exist_ok=True)\n with open(\"/tmp/gh-aw/autoloop.json\", \"w\") as f:\n json.dump({\"due\": [], \"skipped\": [], \"unconfigured\": [], \"no_programs\": True}, f)\n sys.exit(0)\n\nos.makedirs(\"/tmp/gh-aw\", exist_ok=True)\nnow = datetime.now(timezone.utc)\ndue = []\nskipped = []\nunconfigured = []\nall_programs = {} # name -> file path (populated during scanning)\n\n# Schedule string to timedelta\ndef parse_schedule(s):\n s = s.strip().lower()\n m = re.match(r\"every\\s+(\\d+)\\s*h\", s)\n if m:\n return timedelta(hours=int(m.group(1)))\n m = re.match(r\"every\\s+(\\d+)\\s*m\", s)\n if m:\n return timedelta(minutes=int(m.group(1)))\n if s == 
\"daily\":\n return timedelta(hours=24)\n if s == \"weekly\":\n return timedelta(days=7)\n return None # No per-program schedule — always due\n\ndef get_program_name(pf):\n \"\"\"Extract program name from file path.\n Directory-based: .autoloop/programs/<name>/program.md -> <name>\n Bare markdown: .autoloop/programs/<name>.md -> <name>\n Issue-based: /tmp/gh-aw/issue-programs/<name>.md -> <name>\n \"\"\"\n if pf.endswith(\"/program.md\"):\n # Directory-based program: name is the parent directory\n return os.path.basename(os.path.dirname(pf))\n else:\n # Bare markdown or issue-based program: name is the filename without .md\n return os.path.splitext(os.path.basename(pf))[0]\n\nfor pf in program_files:\n name = get_program_name(pf)\n all_programs[name] = pf\n with open(pf) as f:\n content = f.read()\n\n # Check sentinel (skip for issue-based programs which use AUTOLOOP:ISSUE-PROGRAM)\n if \"<!-- AUTOLOOP:UNCONFIGURED -->\" in content:\n unconfigured.append(name)\n continue\n\n # Check for TODO/REPLACE placeholders\n if re.search(r'\\bTODO\\b|\\bREPLACE', content):\n unconfigured.append(name)\n continue\n\n # Parse optional YAML frontmatter for schedule and target-metric\n # Strip leading HTML comments before checking (issue-based programs may have them)\n content_stripped = re.sub(r'^(\\s*<!--.*?-->\\s*\\n)*', '', content, flags=re.DOTALL)\n schedule_delta = None\n target_metric = None\n fm_match = re.match(r\"^---\\s*\\n(.*?)\\n---\\s*\\n\", content_stripped, re.DOTALL)\n if fm_match:\n for line in fm_match.group(1).split(\"\\n\"):\n if line.strip().startswith(\"schedule:\"):\n schedule_str = line.split(\":\", 1)[1].strip()\n schedule_delta = parse_schedule(schedule_str)\n if line.strip().startswith(\"target-metric:\"):\n try:\n target_metric = float(line.split(\":\", 1)[1].strip())\n except (ValueError, TypeError):\n print(f\" Warning: {name} has invalid target-metric value: {line.split(':', 1)[1].strip()}\")\n\n # Read state from repo-memory\n state = 
read_program_state(name)\n if state:\n print(f\" {name}: last_run={state.get('last_run')}, iteration_count={state.get('iteration_count')}\")\n else:\n print(f\" {name}: no state found (first run)\")\n\n last_run = None\n lr = state.get(\"last_run\")\n if lr:\n try:\n last_run = datetime.fromisoformat(lr.replace(\"Z\", \"+00:00\"))\n except ValueError:\n pass\n\n # Check if completed (target metric was reached)\n if str(state.get(\"completed\", \"\")).lower() == \"true\":\n skipped.append({\"name\": name, \"reason\": f\"completed: target metric reached\"})\n continue\n\n # Check if paused (e.g., plateau or recurring errors)\n if state.get(\"paused\"):\n skipped.append({\"name\": name, \"reason\": f\"paused: {state.get('pause_reason', 'unknown')}\"})\n continue\n\n # Auto-pause on plateau: 5+ consecutive rejections\n recent = state.get(\"recent_statuses\", [])[-5:]\n if len(recent) >= 5 and all(s == \"rejected\" for s in recent):\n skipped.append({\"name\": name, \"reason\": \"plateau: 5 consecutive rejections\"})\n continue\n\n # Check if due based on per-program schedule\n if schedule_delta and last_run:\n if now - last_run < schedule_delta:\n skipped.append({\"name\": name, \"reason\": \"not due yet\",\n \"next_due\": (last_run + schedule_delta).isoformat()})\n continue\n\n due.append({\"name\": name, \"last_run\": lr, \"file\": pf, \"target_metric\": target_metric})\n\n# Pick the program to run\nselected = None\nselected_file = None\nselected_issue = None\nselected_target_metric = None\ndeferred = []\n\nif forced_program:\n # Manual dispatch requested a specific program — bypass scheduling\n # (paused, not-due, and plateau programs can still be forced)\n if forced_program not in all_programs:\n print(f\"ERROR: requested program '{forced_program}' not found.\")\n print(f\" Available programs: {list(all_programs.keys())}\")\n sys.exit(1)\n if forced_program in unconfigured:\n print(f\"ERROR: requested program '{forced_program}' is unconfigured (has 
placeholders).\")\n sys.exit(1)\n selected = forced_program\n selected_file = all_programs[forced_program]\n deferred = [p[\"name\"] for p in due if p[\"name\"] != forced_program]\n if selected in issue_programs:\n selected_issue = issue_programs[selected][\"issue_number\"]\n # Find target_metric: check the due list first, then parse from the program file\n for p in due:\n if p[\"name\"] == forced_program:\n selected_target_metric = p.get(\"target_metric\")\n break\n if selected_target_metric is None:\n # Program may have been skipped (completed/paused/plateau) — parse directly\n try:\n with open(selected_file) as _f:\n _content = _f.read()\n _content_stripped = re.sub(r'^(\\s*<!--.*?-->\\s*\\n)*', '', _content, flags=re.DOTALL)\n _fm = re.match(r\"^---\\s*\\n(.*?)\\n---\\s*\\n\", _content_stripped, re.DOTALL)\n if _fm:\n for _line in _fm.group(1).split(\"\\n\"):\n if _line.strip().startswith(\"target-metric:\"):\n selected_target_metric = float(_line.split(\":\", 1)[1].strip())\n break\n except (OSError, ValueError, TypeError):\n pass\n print(f\"FORCED: running program '{forced_program}' (manual dispatch)\")\nelif due:\n # Normal scheduling: pick the single most-overdue program\n due.sort(key=lambda p: p[\"last_run\"] or \"\") # None/empty sorts first (never run)\n selected = due[0][\"name\"]\n selected_file = due[0][\"file\"]\n selected_target_metric = due[0].get(\"target_metric\")\n deferred = [p[\"name\"] for p in due[1:]]\n # Check if the selected program is issue-based\n if selected in issue_programs:\n selected_issue = issue_programs[selected][\"issue_number\"]\n\nresult = {\n \"selected\": selected,\n \"selected_file\": selected_file,\n \"selected_issue\": selected_issue,\n \"selected_target_metric\": selected_target_metric,\n \"issue_programs\": {name: info[\"issue_number\"] for name, info in issue_programs.items()},\n \"deferred\": deferred,\n \"skipped\": skipped,\n \"unconfigured\": unconfigured,\n \"no_programs\": 
False,\n}\n\nos.makedirs(\"/tmp/gh-aw\", exist_ok=True)\nwith open(\"/tmp/gh-aw/autoloop.json\", \"w\") as f:\n json.dump(result, f, indent=2)\n\nprint(\"=== Autoloop Program Check ===\")\nprint(f\"Selected program: {selected or '(none)'} ({selected_file or 'n/a'})\")\nprint(f\"Deferred (next run): {deferred or '(none)'}\")\nprint(f\"Programs skipped: {[s['name'] for s in skipped] or '(none)'}\")\nprint(f\"Programs unconfigured: {unconfigured or '(none)'}\")\n\nif not selected and not unconfigured:\n print(\"\\nNo programs due this run. Exiting early.\")\n sys.exit(1) # Non-zero exit skips the agent step\nPYEOF\n"

# Repo memory git-based storage configuration from frontmatter processed below
- name: Clone repo-memory branch (default)

Copilot AI Apr 3, 2026


The repo-memory clone step occurs after Check which programs are due, so the scheduling script cannot see persisted state from previous runs. This causes incorrect selection/skip behavior. Reorder so repo-memory is available before the scheduling step, or move scheduling into the agent phase after repo-memory is cloned.

This issue also appears on line 445 of the same file.

Comment on lines +85 to +89
- name: Check which programs are due
  env:
    GITHUB_TOKEN: ${{ github.token }}
    GITHUB_REPOSITORY: ${{ github.repository }}
    AUTOLOOP_PROGRAM: ${{ github.event.inputs.program }}

Copilot AI Apr 3, 2026


This scheduling step reads persisted state from repo_memory_dir, but in the compiled workflow the repo-memory branch is cloned after this step runs. As a result, selection/skip decisions will ignore prior state (last_run/paused/completed). Fix by ensuring repo-memory is cloned before this step (e.g., an explicit clone step) or move scheduling into the agent phase after repo-memory is available.

This issue also appears on line 105 of the same file.
