Merged
37 changes: 37 additions & 0 deletions .github/workflows/REVIEW.md
@@ -0,0 +1,37 @@
# Workflow review

Automated summary of the triggers and jobs defined in `.github/workflows`.

| Workflow | Triggers | Main jobs |
| --- | --- | --- |
| actionlint.yml | push, pull_request, workflow_dispatch | actionlint |
| agents-ci.yml | push, pull_request, workflow_dispatch | code-quality, tests, module-tests, performance, security-tests, integration-tests, build-status |
| backend-ci.yml | push, pull_request | lint, test-mysql, test-postgresql, validate-restrictions, integration-tests, summary |
| code-quality.yml | pull_request, workflow_dispatch | smoke-checks |
| codeql.yml | push, pull_request, schedule | analyze |
| dependency-review.yml | pull_request | review |
| deploy.yml | push, workflow_dispatch | pre-deployment-checks, run-tests, build-backend, build-frontend, deploy-staging, deploy-production, post-deployment-monitoring |
| docs-validation.yml | pull_request, push | validate-structure, check-old-references, check-markdown-links, validate-auto-generated-docs, count-docs-stats, summary |
| docs.yml | push, pull_request, workflow_dispatch | build, deploy, check-links |
| emoji-validation.yml | pull_request, push | check-emojis |
| frontend-ci.yml | push, pull_request | lint, test-unit, test-integration, test-e2e, build, accessibility, security, summary |
| incident-response.yml | workflow_dispatch | create-incident-issue, gather-diagnostics, execute-incident-playbook, notify-team, summary |
| infrastructure-ci.yml | push, pull_request | validate-shell-scripts, test-validation-scripts, validate-terraform, validate-docker, validate-configurations, test-health-check, summary |
| lint.yml | pull_request, push | lint-frontmatter |
| meta-architecture-check.yml | pull_request, push, workflow_dispatch | architecture-analysis, code-quality-gate |
| migrations.yml | pull_request, push | detect-migrations, validate-migrations, check-migration-safety, generate-migration-report, summary |
| pr-review.yml | issue_comment | pr-validation |
| python_ci.yml | push, pull_request, workflow_dispatch | code-quality, tests, performance, dependency-check, build-status |
| release.yml | push, workflow_dispatch | validate-version, generate-changelog, create-release-packages, update-version-files, create-github-release, notify-stakeholders, release-summary |
| requirements_index.yml | push, pull_request, workflow_dispatch | generate-indices |
| requirements_validate_traceability.yml | pull_request, push, workflow_dispatch | validate-traceability |
| security-scan.yml | push, pull_request, schedule | bandit-scan, npm-audit, safety-check, django-security-check, trivy-scan, secrets-scan, sql-injection-check, xss-check, csrf-check, generate-security-report, summary |
| sync-docs.yml | schedule, workflow_dispatch | sync-documentation, notify-failure |
| test-pyramid.yml | push, pull_request, schedule | analyze-test-pyramid, test-execution-time, summary |
| validate-guides.yml | pull_request, push, workflow_dispatch | validate-structure, check-broken-links, generate-coverage-report, quality-checks, summary |

## Key findings

- **requirements_validate_traceability.yml**: the validation script was badly indented and lacked robust front matter parsing, which could cause runtime errors. It was rewritten with PyYAML, list normalization, and explicit error reporting to avoid false positives.
- **YAML quality coverage**: there was no automated guardrail for the workflows themselves. `actionlint.yml` was added to validate GitHub Actions syntax and conventions on `push`, `pull_request`, and `workflow_dispatch`.
- **Pending optimization**: `deploy.yml` could benefit from caching Python dependencies (setup-python with `cache: 'pip'`) to speed up runs. `pr-review.yml` only runs on `issue_comment`; if validation is needed before a comment is posted, adding `workflow_dispatch` as a manual trigger would give more control.
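
The caching suggestion above can be sketched as a hypothetical excerpt for `deploy.yml`; the step names and Python version are illustrative, not taken from the actual workflow:

```yaml
# Hypothetical deploy.yml excerpt: setup-python's built-in pip cache.
- name: Setup Python
  uses: actions/setup-python@v5
  with:
    python-version: '3.11'   # assumed version; match the workflow's actual one
    cache: 'pip'             # keys the cache on requirements*.txt by default

- name: Install dependencies
  run: python -m pip install -r requirements.txt
```

`cache: 'pip'` is a documented input of `actions/setup-python@v5`; only the surrounding step layout is a placeholder here.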
32 changes: 32 additions & 0 deletions .github/workflows/actionlint.yml
@@ -0,0 +1,32 @@
name: Lint GitHub Actions

on:
push:
branches:
- main
- develop
paths:
- '.github/workflows/**'
pull_request:
branches:
- main
- develop
paths:
- '.github/workflows/**'
workflow_dispatch:

permissions:
contents: read

jobs:
actionlint:
name: Validate workflow syntax
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4

- name: Run actionlint
uses: docker://ghcr.io/rhysd/actionlint:1.7.1
with:
args: -color
108 changes: 54 additions & 54 deletions .github/workflows/requirements_validate_traceability.yml
@@ -21,6 +21,9 @@ jobs:
- name: Checkout repository
uses: actions/checkout@v4

- name: Install validation dependencies
run: pip install pyyaml

- name: Setup Python
Comment on lines +24 to 27
Copilot AI Nov 18, 2025

Dependencies are installed before Python is set up. The 'Install validation dependencies' step should be moved after 'Setup Python' to ensure pip uses the correct Python version and environment. The current order may use the system Python instead of the configured 3.11 version.

uses: actions/setup-python@v5
with:
Comment on lines +24 to 29


P1: Install PyYAML after selecting runner Python

The workflow installs PyYAML before running actions/setup-python, so the package is placed in the runner’s default Python (currently 3.10) but the validation script later runs with the 3.11 interpreter configured by the setup step. On ubuntu-latest this causes the import yaml at runtime to fail with ModuleNotFoundError, preventing the traceability check from running on any branch. Move the pip install after the setup-python step (or use python -m pip from the configured interpreter) so the dependency is available to the Python version executing the script.
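
A minimal sketch of that fix, reusing the step names from the diff above so pip resolves to the configured interpreter:

```yaml
- name: Setup Python
  uses: actions/setup-python@v5
  with:
    python-version: '3.11'

# Moved below setup-python so pip installs into the 3.11 environment
# that later runs the validation script.
- name: Install validation dependencies
  run: python -m pip install pyyaml
```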


@@ -32,7 +35,18 @@ jobs:
import os
import re
import sys
from collections import defaultdict
from typing import Iterable, List

import yaml

FRONT_MATTER_PATTERN = re.compile(r"^---\s*\n(.*?)\n---\s*", re.DOTALL)

def ensure_list(value: Iterable | str | None) -> List[str]:
Comment on lines +38 to +44

Copilot AI Nov 18, 2025


The type hint Iterable | str | None is too broad. Since str is itself an Iterable, this may cause unexpected behavior if a string is processed as an iterable of characters rather than as a single string value. Consider using Iterable[Any] | str | None or reordering to str | Iterable | None with explicit type checks to ensure strings are handled correctly before other iterables.

Suggested change
from typing import Iterable, List
import yaml
FRONT_MATTER_PATTERN = re.compile(r"^---\s*\n(.*?)\n---\s*", re.DOTALL)
def ensure_list(value: Iterable | str | None) -> List[str]:
from typing import Iterable, List, Any
import yaml
FRONT_MATTER_PATTERN = re.compile(r"^---\s*\n(.*?)\n---\s*", re.DOTALL)
def ensure_list(value: str | Iterable[Any] | None) -> List[str]:

if value is None:
return []
if isinstance(value, str):
return [value.strip()] if value.strip() else []
return [str(item).strip() for item in value if str(item).strip()]

print("Validating requirements traceability...")
print("=" * 80)
@@ -41,9 +55,10 @@ jobs:
requirements = {}
broken_links = []
orphaned_requirements = []
invalid_front_matter = []

# First pass: collect all requirement IDs
for root, dirs, files in os.walk('implementacion'):
for root, _, files in os.walk('implementacion'):
if 'requisitos' not in root:
continue

@@ -56,84 +71,62 @@ jobs:
with open(filepath, 'r', encoding='utf-8') as f:
content = f.read()

match = re.match(r'^---\s*\n(.*?)\n---\s*\n', content, re.DOTALL)
match = FRONT_MATTER_PATTERN.match(content)
if not match:
continue

yaml_content = match.group(1)
fields = {}
current_list = None

for line in yaml_content.split('\n'):
line = line.rstrip()

if current_list and line.startswith(' - '):
value = line[4:].strip()
fields[current_list].append(value)
else:
current_list = None

if ':' in line and not line.startswith(' '):
key, value = line.split(':', 1)
key = key.strip()
value = value.strip()

if value == '[]' or not value:
fields[key] = []
current_list = key
else:
fields[key] = value

if 'id' in fields:
req_id = fields['id']
all_req_ids.add(req_id)
requirements[req_id] = {
'path': filepath,
'tipo': fields.get('tipo', ''),
'upward': fields.get('trazabilidad_upward', []),
'downward': fields.get('trazabilidad_downward', [])
}
try:
metadata = yaml.safe_load(match.group(1)) or {}
except yaml.YAMLError as exc:
invalid_front_matter.append({'path': filepath, 'error': str(exc)})
continue

req_id = metadata.get('id')
if not req_id:
continue

all_req_ids.add(req_id)
requirements[req_id] = {
'path': filepath,
'tipo': metadata.get('tipo', ''),
'upward': ensure_list(metadata.get('trazabilidad_upward')),
'downward': ensure_list(metadata.get('trazabilidad_downward')),
}

print(f"Found {len(all_req_ids)} requirements")

# Second pass: validate traceability links
for req_id, data in requirements.items():
# Check upward references
for parent_id in data['upward']:
if parent_id not in all_req_ids:
broken_links.append({
'req_id': req_id,
'path': data['path'],
'missing': parent_id,
'direction': 'upward'
'direction': 'upward',
})

# Check downward references
for child_id in data['downward']:
if child_id not in all_req_ids:
broken_links.append({
'req_id': req_id,
'path': data['path'],
'missing': child_id,
'direction': 'downward'
'direction': 'downward',
})

# Check for orphaned requirements (no upward traceability)
if data['tipo'] not in ['necesidad'] and not data['upward']:
orphaned_requirements.append({
'req_id': req_id,
'path': data['path'],
'tipo': data['tipo']
'path': data['path'],
'tipo': data['tipo'],
})

print("")
print("Results:")
print(f" Broken links: {len(broken_links)}")
print(f" Orphaned requirements: {len(orphaned_requirements)}")
print(f" Invalid front matter: {len(invalid_front_matter)}")

if broken_links:
print("")
@@ -148,15 +141,22 @@ jobs:
for req in orphaned_requirements:
print(f" {req['req_id']} ({req['tipo']}) - {req['path']}")

if invalid_front_matter:
print("")
print("INVALID FRONT MATTER:")
for item in invalid_front_matter:
print(f" {item['path']}")
print(f" -> {item['error']}")

print("")
if broken_links:
print("VALIDATION FAILED: Broken traceability links found")
if broken_links or invalid_front_matter:
print("VALIDATION FAILED: Broken traceability links or invalid front matter found")
sys.exit(1)
else:
print("VALIDATION PASSED: All traceability links are valid")
if orphaned_requirements:
print(f"WARNING: {len(orphaned_requirements)} orphaned requirements (informational)")
sys.exit(0)

print("VALIDATION PASSED: All traceability links are valid")
if orphaned_requirements:
print(f"WARNING: {len(orphaned_requirements)} orphaned requirements (informational)")
sys.exit(0)
EOF

- name: Generate traceability report