Walkthrough: This PR introduces a comprehensive integration test suite for the mentoring application, adding 27+ test files with corresponding JSON schema definitions across the service's feature areas.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
📜 Recent review details: Configuration used: Path: .coderabbit.yaml | Review profile: ASSERTIVE | Plan: Pro
📒 Files selected for processing (1)
🔇 Additional comments (2)
Actionable comments posted: 128
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
dev-ops/docker-compose.yml is excluded by !dev-ops/**
📒 Files selected for processing (60)
src/integration-tests-new/admin/admin.spec.js (1 hunks)
src/integration-tests-new/admin/schemas/admin.schemas.json (1 hunks)
src/integration-tests-new/cloud-services/cloud-services.spec.js (1 hunks)
src/integration-tests-new/cloud-services/schemas/cloud-services.schemas.json (1 hunks)
src/integration-tests-new/commonTests.js (1 hunks)
src/integration-tests-new/config/config.spec.js (1 hunks)
src/integration-tests-new/config/schemas/config.schemas.json (1 hunks)
src/integration-tests-new/connections/connections.specs.js (1 hunks)
src/integration-tests-new/connections/schemas/connections.schemas.json (1 hunks)
src/integration-tests-new/default-rule/default-rule.spec.js (1 hunks)
src/integration-tests-new/default-rule/schemas/default-rule.schemas.json (1 hunks)
src/integration-tests-new/entity-type/entity-type.specs.js (1 hunks)
src/integration-tests-new/entity-type/schemas/entity-type.schemas.json (1 hunks)
src/integration-tests-new/entity/entity.spec.js (1 hunks)
src/integration-tests-new/entity/schemas/entity.schemas.json (1 hunks)
src/integration-tests-new/feedback/feedback.spec.js (1 hunks)
src/integration-tests-new/feedback/schemas/feedback.schemas.json (1 hunks)
src/integration-tests-new/form/form.spec.js (1 hunks)
src/integration-tests-new/form/schemas/form.schemas.json (1 hunks)
src/integration-tests-new/issues/issues.spec.js (1 hunks)
src/integration-tests-new/issues/schemas/issues.schemas.json (1 hunks)
src/integration-tests-new/mentees/mentees.specs.js (1 hunks)
src/integration-tests-new/mentees/schemas/mentees.schemas.json (1 hunks)
src/integration-tests-new/mentoring/mentoring.spec.js (1 hunks)
src/integration-tests-new/mentoring/schemas/mentoring.schemas.json (1 hunks)
src/integration-tests-new/mentors/mentors.specs.js (1 hunks)
src/integration-tests-new/mentors/schemas/mentors.schemas.json (1 hunks)
src/integration-tests-new/modules/modules.spec.js (1 hunks)
src/integration-tests-new/modules/schemas/modules.schemas.json (1 hunks)
src/integration-tests-new/org-admin/org-admin.spec.js (1 hunks)
src/integration-tests-new/org-admin/schemas/org-admin.schemas.json (1 hunks)
src/integration-tests-new/permissions/permissions.spec.js (1 hunks)
src/integration-tests-new/permissions/schemas/permissions.schemas.json (1 hunks)
src/integration-tests-new/profile/profile.specs.js (1 hunks)
src/integration-tests-new/profile/schemas/profile.schemas.json (1 hunks)
src/integration-tests-new/question-set/question-set.spec.js (1 hunks)
src/integration-tests-new/question-set/schemas/question-set.schemas.json (1 hunks)
src/integration-tests-new/questions/questions.spec.js (1 hunks)
src/integration-tests-new/questions/schemas/questions.schemas.json (1 hunks)
src/integration-tests-new/report-mapping/report-mapping.spec.js (1 hunks)
src/integration-tests-new/report-mapping/schemas/report-mapping.schemas.json (1 hunks)
src/integration-tests-new/report-queries/report-queries.spec.js (1 hunks)
src/integration-tests-new/report-queries/schemas/report-queries.schemas.json (1 hunks)
src/integration-tests-new/report-type/report-type.spec.js (1 hunks)
src/integration-tests-new/report-type/schemas/report-type.schemas.json (1 hunks)
src/integration-tests-new/reports/reports.spec.js (1 hunks)
src/integration-tests-new/reports/schemas/reports.schemas.json (1 hunks)
src/integration-tests-new/requestSessions/requestSessions.specs.js (1 hunks)
src/integration-tests-new/requestSessions/schemas/requestSessions.schemas.json (1 hunks)
src/integration-tests-new/role-extension/role-extension.spec.js (1 hunks)
src/integration-tests-new/role-extension/schemas/role-extension.schemas.json (1 hunks)
src/integration-tests-new/rolePermissionMapping/rolePermissionMapping.spec.js (1 hunks)
src/integration-tests-new/rolePermissionMapping/schemas/rolePermissionMapping.schemas.json (1 hunks)
src/integration-tests-new/sessions/schemas/sessions.schemas.json (1 hunks)
src/integration-tests-new/sessions/sessions.specs.js (1 hunks)
src/integration-tests-new/users/schemas/users.schemas.json (1 hunks)
src/integration-tests-new/users/users.spec.js (1 hunks)
src/integrationJest.config.js (1 hunks)
src/package.json (2 hunks)
test.js (1 hunks)
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-11-06T06:25:57.830Z
Learnt from: sumanvpacewisdom
Repo: ELEVATE-Project/mentoring PR: 1426
File: src/database/migrations/20251020081719-add-orgEntity-type.js:10-12
Timestamp: 2025-11-06T06:25:57.830Z
Learning: In the ELEVATE-Project/mentoring repository, entity type migrations (like 20251020081719-add-orgEntity-type.js) run before tenant-specific migrations. Therefore, down migrations for these entity types do not need to filter by organization_code and tenant_code, as multi-tenant data does not exist at the time these migrations execute.
Applied to files:
src/integration-tests-new/entity/entity.spec.js
src/integration-tests-new/entity-type/entity-type.specs.js
src/integration-tests-new/entity-type/schemas/entity-type.schemas.json
📚 Learning: 2025-09-16T11:00:09.176Z
Learnt from: sumanvpacewisdom
Repo: ELEVATE-Project/mentoring PR: 1334
File: src/controllers/v1/question-set.js:82-83
Timestamp: 2025-09-16T11:00:09.176Z
Learning: In the ELEVATE-Project/mentoring repository, the question-set read API intentionally accepts the code parameter in req.body.code rather than req.query.code, as this is part of their established API contract design.
Applied to files:
src/integration-tests-new/questions/questions.spec.js
src/integration-tests-new/question-set/question-set.spec.js
src/integration-tests-new/report-queries/report-queries.spec.js
src/integration-tests-new/requestSessions/requestSessions.specs.js
src/integration-tests-new/question-set/schemas/question-set.schemas.json
src/integration-tests-new/mentors/mentors.specs.js
📚 Learning: 2025-09-16T11:03:05.949Z
Learnt from: sumanvpacewisdom
Repo: ELEVATE-Project/mentoring PR: 1334
File: src/controllers/v1/sessions.js:502-524
Timestamp: 2025-09-16T11:03:05.949Z
Learning: The removeAllSessions endpoint in src/controllers/v1/sessions.js is an internal API that intentionally accepts user_id, organization_code, and tenant_code from the request body rather than deriving them from req.decodedToken, as it's designed for service-to-service communication where the calling service specifies the tenant context.
Applied to files:
src/integration-tests-new/requestSessions/requestSessions.specs.js
src/integration-tests-new/sessions/sessions.specs.js
🧬 Code graph analysis (19)
src/integration-tests-new/org-admin/org-admin.spec.js (3)
- src/integration-tests-new/admin/admin.spec.js (6): request (1-1), Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
- src/integration-tests-new/config/config.spec.js (6): request (1-1), Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
- src/integration-tests-new/mentoring/mentoring.spec.js (5): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
src/integration-tests-new/entity/entity.spec.js (1)
- src/integration-tests-new/modules/modules.spec.js (4): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5)
src/integration-tests-new/connections/connections.specs.js (2)
- src/integration-tests-new/commonTests.js (4): require (3-3), userDetails (44-51), res (22-27), res (76-82)
- src/integration-tests-new/mentoring/mentoring.spec.js (5): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
src/integration-tests-new/reports/reports.spec.js (3)
- src/integration-tests-new/mentors/mentors.specs.js (5): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (8-8)
- src/integration-tests-new/mentoring/mentoring.spec.js (5): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
- src/helpers/getDefaultOrgId.js (1): process (11-11)
src/integration-tests-new/users/users.spec.js (4)
- src/integration-tests-new/admin/admin.spec.js (5): request (1-1), Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5)
- src/integration-tests-new/form/form.spec.js (5): request (1-1), Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5)
- src/integration-tests-new/mentoring/mentoring.spec.js (4): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5)
- src/integration-tests-new/modules/modules.spec.js (4): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5)
src/integration-tests-new/mentees/mentees.specs.js (5)
- src/integration-tests-new/commonTests.js (6): request (11-11), request (64-64), require (3-3), userDetails (44-51), res (22-27), res (76-82)
- src/integration-tests-new/mentors/mentors.specs.js (8): request (1-1), Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), commonHelper (6-6), schemas (8-8), userDetails (7-7)
- src/integration-tests-new/config/config.spec.js (6): request (1-1), Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
- src/integration-tests-new/mentoring/mentoring.spec.js (6): request (1-1), Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
- src/integration-tests-new/modules/modules.spec.js (5): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
src/integration-tests-new/permissions/permissions.spec.js (2)
- src/integration-tests-new/mentoring/mentoring.spec.js (5): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
- src/integration-tests-new/modules/modules.spec.js (5): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
src/integration-tests-new/question-set/question-set.spec.js (5)
- src/integration-tests-new/commonTests.js (3): request (11-11), request (64-64), require (3-3)
- src/integration-tests-new/admin/admin.spec.js (6): request (1-1), Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
- src/integration-tests-new/form/form.spec.js (6): request (1-1), Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
- src/integration-tests-new/mentoring/mentoring.spec.js (5): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
- src/integration-tests-new/modules/modules.spec.js (5): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
src/integration-tests-new/report-queries/report-queries.spec.js (3)
- src/integration-tests-new/mentoring/mentoring.spec.js (5): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
- src/integration-tests-new/modules/modules.spec.js (5): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
- src/helpers/getDefaultOrgId.js (1): process (11-11)
src/integration-tests-new/requestSessions/requestSessions.specs.js (4)
- src/integration-tests-new/commonTests.js (4): require (3-3), mentorDetails (99-106), res (22-27), res (76-82)
- src/integration-tests-new/entity/entity.spec.js (6): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), commonHelper (6-6), schemas (8-8)
- src/integration-tests-new/mentees/mentees.specs.js (6): Ajv (3-3), BASE (4-4), TOKEN (5-5), ajv (6-6), commonHelper (7-7), schemas (8-8)
- src/integration-tests-new/modules/modules.spec.js (5): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
src/integration-tests-new/sessions/sessions.specs.js (3)
- src/integration-tests-new/commonTests.js (6): request (11-11), request (64-64), require (3-3), userDetails (44-51), res (22-27), res (76-82)
- src/integration-tests-new/requestSessions/requestSessions.specs.js (8): request (2-2), Ajv (3-3), BASE (4-4), TOKEN (5-5), ajv (6-6), commonHelper (7-7), menteeDetails (8-8), schemas (10-10)
- src/integration-tests-new/mentoring/mentoring.spec.js (5): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
src/integration-tests-new/issues/issues.spec.js (2)
- src/integration-tests-new/mentees/mentees.specs.js (6): request (2-2), Ajv (3-3), BASE (4-4), TOKEN (5-5), ajv (6-6), schemas (8-8)
- src/integration-tests-new/modules/modules.spec.js (5): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
src/integration-tests-new/role-extension/role-extension.spec.js (1)
- src/integration-tests-new/mentoring/mentoring.spec.js (5): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
src/integration-tests-new/feedback/feedback.spec.js (2)
- src/integration-tests-new/form/form.spec.js (6): request (1-1), Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
- src/integration-tests-new/modules/modules.spec.js (5): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
src/integration-tests-new/profile/profile.specs.js (2)
- src/integration-tests-new/commonTests.js (4): require (3-3), userDetails (44-51), res (22-27), res (76-82)
- src/integration-tests-new/modules/modules.spec.js (5): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
src/integration-tests-new/report-mapping/report-mapping.spec.js (1)
- src/integration-tests-new/mentoring/mentoring.spec.js (5): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
src/integration-tests-new/admin/admin.spec.js (1)
- src/integration-tests-new/modules/modules.spec.js (5): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
src/integration-tests-new/mentors/mentors.specs.js (2)
- src/integration-tests-new/commonTests.js (4): require (3-3), userDetails (44-51), res (22-27), res (76-82)
- src/integration-tests-new/modules/modules.spec.js (5): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
src/integration-tests-new/modules/modules.spec.js (1)
- src/integration-tests-new/mentoring/mentoring.spec.js (5): Ajv (2-2), BASE (3-3), TOKEN (4-4), ajv (5-5), schemas (7-7)
🔇 Additional comments (23)
src/integration-tests-new/issues/issues.spec.js (1)
16-23: Double-check the descriptaion field name in the request body
The payload uses descriptaion (note the spelling):
.send({ descriptaion: 'string', meta_data: { /* ... */ }, })
If the API actually expects description, this test will always send the wrong field and could mask problems if the backend is ignoring unknown keys. If the misspelling matches the live API contract, it's fine as-is, but then it's worth confirming and potentially documenting that this is intentional.
src/integration-tests-new/permissions/schemas/permissions.schemas.json (1)
3-3: Verify draft-04 usage with the Ajv version/config in use
Each schema declares "$schema": "http://json-schema.org/draft-04/schema#", while the tests instantiate Ajv via new Ajv({ strict: false }). Ajv's behavior w.r.t. draft-04 support depends on its major version and configuration. Please double-check that:
- The Ajv version in package.json supports draft-04 as used here, and
- Any required draft-04 compatibility setup (e.g. plugins/config) is in place.
Otherwise, you may see confusing validation behavior or warnings at runtime.
Also applies to: 46-46, 89-89, 116-116
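If the suite is on Ajv v8, which dropped built-in draft-04 support, one option is the separate ajv-draft-04 build; a minimal fail-fast sketch follows, assuming that package is acceptable as a dev dependency (migrating the schemas to draft-07 and keeping plain require('ajv') is the alternative):
// Sketch only: assumes the optional ajv-draft-04 package is installed.
const AjvDraft04 = require('ajv-draft-04')
const ajv = new AjvDraft04({ strict: false })
const schemas = require('./schemas/permissions.schemas.json')

// Compiling every schema up front surfaces draft-04 incompatibilities immediately,
// rather than as confusing validation failures inside individual tests.
for (const [name, schema] of Object.entries(schemas)) {
  try {
    ajv.compile(schema)
  } catch (err) {
    throw new Error(`Schema ${name} failed to compile: ${err.message}`)
  }
}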
src/integration-tests-new/users/schemas/users.schemas.json (1)
1-100: Consistent with other schema files.
The schema structure follows the established pattern in the PR. Consider adding required fields for stronger contract validation, similar to the recommendation for other schema files.
src/package.json (1)
75-75: No action needed. Jest version ^30.2.0 is valid; it was released on September 28, 2025, and is the latest stable version.
Likely an incorrect or invalid review comment.
src/integration-tests-new/config/config.spec.js (1)
1-27: Config endpoint test and schema validation look consistent
The test follows the shared supertest + Ajv pattern, asserts 2xx, and validates against GET_mentoring_v1_config_get. No functional issues stand out here.
src/integration-tests-new/mentoring/mentoring.spec.js (1)
1-27: Mentoring health integration test looks solid
The /mentoring/health test cleanly exercises the endpoint and validates against GET_mentoring_health via Ajv; no functional issues stand out.
src/integration-tests-new/reports/schemas/reports.schemas.json (1)
243-245: Type mismatch for the config field.
In POST_mentoring_v1_reports_update_id, the config field is typed as "string" (line 244), while in POST_mentoring_v1_reports_create it's typed as an object with nested properties. Verify if this is intentional or if the update endpoint should also expect an object.
src/integration-tests-new/admin/schemas/admin.schemas.json (1)
1-155: Admin schemas are structurally consistent and align with described endpoints
Schemas for the five admin endpoints consistently model responseCode, message, result, and meta with appropriate inner types (booleans for delete flags, string arrays for rebuild/refresh results). No structural issues stand out.
src/integration-tests-new/mentors/schemas/mentors.schemas.json (1)
1-381: Mentors response schemas match test usage and allow realistic payload variations
The mentors schemas use the same keys your tests reference and model realistic payloads (rich mentor details, reports, lists, and createdSessions with oneOf to tolerate multiple shapes). The list/createdSessions wrappers (message/result/meta) align with the active specs.
src/integration-tests-new/entity/schemas/entity.schemas.json (1)
1-234: Entity schemas align with test keys and model standard wrapper/result/meta shape
The four entity schemas use the same keys your spec file references and provide a consistent contract (responseCode, message, detailed result object, and meta with formsVersion/correlation/meeting_platform). No structural or naming issues detected.
src/integration-tests-new/role-extension/schemas/role-extension.schemas.json (1)
74-264: Consistent schema structure across CRUD endpoints.
The remaining schemas (GET_mentoring_v1_role-extension_read, POST_mentoring_v1_role-extension_update, DELETE_mentoring_v1_role-extension_delete) follow a consistent pattern with appropriate result type variations (object vs array for delete). The meta structure with formsVersion items correctly specifies required: ["id", "type", "version"].
src/integration-tests-new/sessions/sessions.specs.js (1)
68-98: Test assertions and schema validation look correct. The session details and enrollment tests properly:
- Use dynamic session IDs from the created session
- Set appropriate auth tokens (mentor for details, mentee for enrollment)
- Validate response status codes
- Perform schema validation with error logging
src/integration-tests-new/profile/schemas/profile.schemas.json (1)
331-450: Profile details schema is comprehensive.
The GET_mentoring_v1_profile_details schema properly defines nullable fields (using "type": "null"), visibility settings, and nested structures. The formsVersion here correctly uses the object structure with required fields.
src/integration-tests-new/requestSessions/schemas/requestSessions.schemas.json (2)
1-94: Create endpoint schema is well-structured.
The POST_mentoring_v1_requestSessions_create schema properly defines all expected fields, including nullable fields (meta, session_id, reject_reason, deleted_at), timestamps, and the meta section with a formsVersion array containing properly required object items.
381-449: User availability schema structure looks good.
The GET_mentoring_v1_requestSessions_userAvailability schema properly defines the array result with a nested bookedSlots structure and appropriate required constraints for slot items.
src/integration-tests-new/entity-type/schemas/entity-type.schemas.json (2)
68-179: Read schema with nested entities is well-defined.
The GET_mentoring_v1_entity-type_read schema has comprehensive required arrays for both entity types and nested entities, ensuring strict validation. The nullable parent_id using the ["null", "number"] type array is correctly handled.
181-275: Update and delete schemas follow consistent patterns.
Both POST_mentoring_v1_entity-type_update_id and DELETE_mentoring_v1_entity-type_delete_id schemas maintain consistent structure with appropriate result types (object for update, array for delete).
src/integration-tests-new/connections/schemas/connections.schemas.json (3)
149-172: Fixed null types may be too restrictive.
Fields like designation (line 150) and experience (line 171) are defined with "type": "null", meaning they must always be null. If these fields can sometimes contain actual values (strings), the schema should use a nullable type instead. Verify the actual API responses. If fields can be either null or have values:
  "designation": {
-   "type": "null"
+   "type": ["string", "null"]
  },
1-81: Initiate connection schema is well-structured.
The POST_mentoring_v1_connections_initiate schema properly defines the result object with nested meta.message and all expected fields, including timestamps and deleted_at as a null type.
503-585: Accept connection schema correctly defines result structure.
The POST_mentoring_v1_connections_accept schema includes the meta.room_id field, which is appropriately added for accepted connections, differentiating it from the initiate response.
src/integration-tests-new/requestSessions/requestSessions.specs.js (2)
33-93: Session request lifecycle tests are well-structured. The create flow properly:
- Generates dynamic timestamps for future sessions
- Uses appropriate user tokens (mentee creates, mentor accepts)
- Captures the created ID for subsequent tests
- Includes unauthorized and invalid body test cases
127-186: Rejection lifecycle tests maintain proper isolation. Creating a separate session request for the rejection flow (instead of reusing one from the accept tests) ensures test isolation and prevents test interdependencies. The 15-day offset avoids potential scheduling conflicts.
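If more lifecycles adopt the offset pattern, a tiny shared helper would keep the slots consistent; a rough sketch, where the helper name and the epoch-seconds format are assumptions rather than part of this PR:
// Hypothetical helper: returns a start/end pair `daysAhead` days in the future.
// Assumes the API expects epoch seconds; adjust if it takes ISO strings or milliseconds.
const futureSlot = (daysAhead, durationMinutes = 30) => {
  const start = new Date(Date.now() + daysAhead * 24 * 60 * 60 * 1000)
  const end = new Date(start.getTime() + durationMinutes * 60 * 1000)
  return {
    start_date: Math.floor(start.getTime() / 1000),
    end_date: Math.floor(end.getTime() / 1000),
  }
}

// e.g. the accept flow could use futureSlot(10) and the reject flow futureSlot(15),
// so the two lifecycles never compete for the same slot.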
src/integration-tests-new/sessions/schemas/sessions.schemas.json (1)
152-181: Schema field names and types are out of sync with current database structure.
- organization_ids (string) references an outdated field name that was renamed to visible_to_organizations in migration 20231030090854
- The database field visible_to_organizations is defined as ARRAY(INTEGER) (per migration 20231115113837), not string
- organization_id (array of numbers) appears to be the correct representation but uses the wrong field name in the schema
Update the schema to replace organization_ids: string with visible_to_organizations: array of integers to align with the current database structure.
⛔ Skipped due to learnings
Learnt from: sumanvpacewisdom
Repo: ELEVATE-Project/mentoring PR: 1319
File: src/database/models/reportQueries.js:19-28
Timestamp: 2025-08-17T08:39:06.865Z
Learning: In the ELEVATE-Project/mentoring codebase, when transitioning models to use composite primary keys with tenant_code and organization_code, existing unique indexes (such as those using organization_id) are intentionally kept unchanged and do not need to be updated to include the new tenant/organization fields, as this aligns with their multi-tenant architecture approach.
const BASE = process.env.BASE_URL || 'http://localhost:3000'
const TOKEN = process.env.TEST_BEARER_TOKEN || 'test-token'
const ajv = new Ajv({ strict: false })
Use the configurable TOKEN instead of hard‑coding 'string' in the auth header
You define TOKEN from TEST_BEARER_TOKEN but don’t use it; the header is currently hard-coded to 'string':
const TOKEN = process.env.TEST_BEARER_TOKEN || 'test-token'
// ...
req = req.set('x-auth-token', 'string')
This makes the test non-portable across environments and likely causes auth failures when a real token is required. Recommend wiring the constant through:
- req = req.set('x-auth-token', 'string')
+ req = req.set('x-auth-token', TOKEN)
This also removes the effectively-dead TOKEN definition.
🤖 Prompt for AI Agents
In src/integration-tests-new/issues/issues.spec.js around lines 3 to 5, the test
defines TOKEN but the auth header is hard-coded to 'string'; update the request
setup to use the TOKEN constant (e.g., req.set('x-auth-token', TOKEN)) so the
test uses TEST_BEARER_TOKEN when provided and falls back to 'test-token'; ensure
the TOKEN declaration remains and remove any truly-unused hard-coded literals so
the test becomes portable across environments.
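A shared helper along these lines would keep the header wiring in one place (the helper name and file location are suggestions, not something in this PR):
// Hypothetical shared helper, e.g. src/integration-tests-new/authedRequest.js
const request = require('supertest')

const BASE = process.env.BASE_URL || 'http://localhost:3000'
const TOKEN = process.env.TEST_BEARER_TOKEN || 'test-token'

// Returns a supertest request for `method`/`url` with the auth header already set,
// so individual specs never hard-code the token literal.
const authedRequest = (method, url) => request(BASE)[method](url).set('x-auth-token', TOKEN)

module.exports = { authedRequest }

// usage in a spec:
// const res = await authedRequest('post', '/mentoring/v1/issues/create').send(payload)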
const schema = schemas['POST_mentoring_v1_issues_create']
const validate = ajv.compile(schema)
const valid = validate(res.body)
if (!valid) {
  console.error('Schema validation errors:', validate.errors)
}
expect(valid).toBe(true)
🧹 Nitpick | 🔵 Trivial
Optional: compile the AJV schema once and remove redundant req = req
You currently compile the schema inside the test on every run and have a no‑op assignment in the request chain:
req = req
.send({ /* ... */ })
.set('Content-Type', 'application/json')
const schema = schemas['POST_mentoring_v1_issues_create']
const validate = ajv.compile(schema)
const valid = validate(res.body)
For cleaner and slightly more efficient tests:
- Move schema lookup & compilation to the top-level.
- Start the chain directly at .send and drop the req = req line.
Example:
-const ajv = new Ajv({ strict: false })
-
-const schemas = require('./schemas/issues.schemas.json')
+const ajv = new Ajv({ strict: false })
+const schemas = require('./schemas/issues.schemas.json')
+const createIssueSchema = schemas['POST_mentoring_v1_issues_create']
+const validateCreateIssue = ajv.compile(createIssueSchema)
@@
- req = req
- .send({
+ req = req.send({
descriptaion: 'string',
meta_data: {
request_type: 'string',
browserName: 'string',
browserVersion: 'string',
},
- })
- .set('Content-Type', 'application/json')
+ }).set('Content-Type', 'application/json')
@@
- const schema = schemas['POST_mentoring_v1_issues_create']
- const validate = ajv.compile(schema)
- const valid = validate(res.body)
+ const valid = validateCreateIssue(res.body)
This keeps the test body focused on the HTTP interaction.
🤖 Prompt for AI Agents
In src/integration-tests-new/issues/issues.spec.js around lines 29 to 35, the
test compiles the AJV schema on every run and contains a redundant no-op `req =
req` assignment; move the schema lookup and ajv.compile call (using
schemas['POST_mentoring_v1_issues_create']) to the top-level test file (outside
individual it blocks) so a reusable validate function is created once, remove
the `req = req` line and start the request chain directly at `.send(...)`, and
update the test to call the top-level validate on res.body and assert the
result.
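If several specs repeat this pattern, a small shared assertion helper (hypothetical, not in the PR) could centralize both the one-time compilation and the error logging:
// Hypothetical helper, e.g. src/integration-tests-new/schemaValidator.js
const Ajv = require('ajv')

const ajv = new Ajv({ strict: false, allErrors: true })
const compiled = new Map() // each schema is compiled at most once per run

const expectValidAgainst = (schemas, key, body) => {
  if (!compiled.has(key)) compiled.set(key, ajv.compile(schemas[key]))
  const validate = compiled.get(key)
  const valid = validate(body)
  if (!valid) console.error(`Schema validation errors for ${key}:`, validate.errors)
  expect(valid).toBe(true) // Jest global, so this helper is only called from specs
}

module.exports = { expectValidAgainst }

// usage: expectValidAgainst(schemas, 'POST_mentoring_v1_issues_create', res.body)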
{
  "POST_mentoring_v1_issues_create": {
    "$schema": "http://json-schema.org/draft-04/schema#",
    "type": "object",
    "properties": {
      "responseCode": {
        "type": "string"
      },
      "message": {
        "type": "string"
      },
      "result": {
        "type": "array",
        "items": {
          "type": "string"
        }
      },
      "meta": {
        "type": "object",
        "properties": {
          "formsVersion": {
            "type": "array",
            "items": {
              "type": "string"
            }
          },
          "correlation": {
            "type": "string"
          },
          "meeting_platform": {
            "type": "string"
          }
        }
      }
    }
🧹 Nitpick | 🔵 Trivial
Consider adding required (and optionally additionalProperties) to tighten contract tests
Right now none of the properties are marked as required, so responses missing responseCode, message, result, or meta will still validate successfully. If the API contract guarantees these fields, it’s worth adding required arrays (and, if desired, additionalProperties: false) at the top level and for meta to make the integration tests actually fail on incomplete responses.
Example patch:
"POST_mentoring_v1_issues_create": {
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"properties": {
"responseCode": { "type": "string" },
"message": { "type": "string" },
"result": {
"type": "array",
"items": { "type": "string" }
},
"meta": {
"type": "object",
"properties": {
"formsVersion": {
"type": "array",
"items": { "type": "string" }
},
"correlation": { "type": "string" },
"meeting_platform": { "type": "string" }
- }
+ },
+ "required": ["formsVersion", "correlation", "meeting_platform"]
}
- }
+ },
+ "required": ["responseCode", "message", "result", "meta"]
}
Adjust the required sets to match the actual API behavior if some fields are legitimately optional.
🤖 Prompt for AI Agents
In src/integration-tests-new/issues/schemas/issues.schemas.json around lines 1
to 35, the JSON Schema does not declare any required properties (so responses
missing responseCode, message, result, or meta still validate); add a "required"
array at the top-level listing the guaranteed fields (e.g.,
["responseCode","message","result","meta"]) and also add a "required" array
inside the "meta" object for its guaranteed keys (e.g.,
["formsVersion","correlation","meeting_platform"]); optionally add
"additionalProperties": false at the top-level and inside "meta" if you want to
forbid extra fields—adjust the exact required lists to reflect which fields are
truly mandatory per the API contract.
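If the required arrays are added, enabling Ajv's allErrors option in the specs (an assumption about desired behavior, not something this PR does) makes a response that is missing several required fields report all of them at once:
// Sketch: allErrors reports every violation instead of stopping at the first one.
// `schemas` and `res` are the objects already present in the surrounding spec.
const Ajv = require('ajv')
const ajv = new Ajv({ strict: false, allErrors: true })

const validate = ajv.compile(schemas['POST_mentoring_v1_issues_create'])
if (!validate(res.body)) {
  // validate.errors now lists each missing required property separately
  console.error(validate.errors)
}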
describe('question-set endpoints generated from api-doc.yaml', () => {
  describe('POST /mentoring/v1/question-set/create', () => {
    test('should return 201', async () => {
Test names don't match assertions - specify exact status codes or update test names.
The test names claim specific status codes (201, 202, 200) but assertions check for any 2xx response (200-299).
Option 1 - Update test names to match assertions:
- test('should return 201', async () => {
+ test('should return 2xx status', async () => {
- test('should return 202', async () => {
+ test('should return 2xx status', async () => {
- test('should return 200', async () => {
+ test('should return 2xx status', async () => {
Option 2 - Make assertions specific (if the exact status codes are guaranteed):
const res = await req
- expect(res.status).toBeGreaterThanOrEqual(200)
- expect(res.status).toBeLessThan(300)
+ expect(res.status).toBe(201)
Also applies to: 42-42, 73-73
🤖 Prompt for AI Agents
In src/integration-tests-new/question-set/question-set.spec.js around lines 11,
42 and 73 the test titles state specific status codes (201, 202, 200) but the
assertions use a generic 2xx check; either make the test names accurate or make
the assertions specific. Fix by choosing one approach: a) update each test title
to say "should return 2xx" (or "should return a successful 2xx response") to
match the current assertions, or b) change the assertions to assert the exact
status code you expect (e.g. expect(response.status).toBe(201) / 202 / 200) and
update any response setup if needed; ensure all three locations are updated
consistently and run tests.
test('should return 201', async () => {
  const url = `/mentoring/v1/question-set/create`
  let req = request(BASE).post(url)
  req = req.set('x-auth-token', 'string')
Critical: Use TOKEN variable instead of hardcoded 'string' for authentication.
All authorized tests pass the literal string 'string' as the auth token instead of the TOKEN variable defined on line 4. This will cause authentication to fail or test incorrect behavior.
Apply this diff to fix authentication in all three tests:
test('should return 201', async () => {
const url = `/mentoring/v1/question-set/create`
let req = request(BASE).post(url)
- req = req.set('x-auth-token', 'string')
+ req = req.set('x-auth-token', TOKEN)
req = req
.send({
questions: [1],
test('should return 202', async () => {
const url = `/mentoring/v1/question-set/update/1`
let req = request(BASE).patch(url)
- req = req.set('x-auth-token', 'string')
+ req = req.set('x-auth-token', TOKEN)
req = req
.send({
questions: [1],
test('should return 200', async () => {
const url = `/mentoring/v1/question-set/read/1`
let req = request(BASE).post(url)
- req = req.set('x-auth-token', 'string')
+ req = req.set('x-auth-token', TOKEN)
const res = await req
Also applies to: 45-45, 76-76
🤖 Prompt for AI Agents
In src/integration-tests-new/question-set/question-set.spec.js around lines 14,
45 and 76, tests are using the literal 'string' for the x-auth-token header;
replace those occurrences with the TOKEN variable (defined on line 4) so each
request sets req = req.set('x-auth-token', TOKEN). Ensure you update all three
locations (lines ~14, ~45, ~76) to use the TOKEN variable and run the tests to
confirm authentication succeeds.
const url = `/mentoring/v1/rolePermissionMapping/delete/{role_id}`
let req = request(BASE).post(url)
req = req.set('x-auth-token', 'string')
req = req
  .send({
    permission_id: 1,
  })
  .set('Content-Type', 'application/json')
Replace path parameter placeholder with actual value.
The delete endpoint has the same path parameter issue.
Apply this diff:
- const url = `/mentoring/v1/rolePermissionMapping/delete/{role_id}`
+ const roleId = 1 // should match a valid role-permission mapping
+ const url = `/mentoring/v1/rolePermissionMapping/delete/${roleId}`
🤖 Prompt for AI Agents
In src/integration-tests-new/rolePermissionMapping/rolePermissionMapping.spec.js
around lines 45 to 52, the test builds the request URL using the literal
placeholder `{role_id}` instead of substituting an actual role id; update the
URL to include a real id (e.g., replace `{role_id}` with a numeric id or a test
variable like roleId) so the request path is valid, e.g.
`/mentoring/v1/rolePermissionMapping/delete/1` or use a template string with the
existing roleId variable before sending the request.
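More broadly, the delete test would be sturdier if it created a mapping first and then deleted the id it got back; a rough sketch, where the create route and result fields are inferred from the create schema elsewhere in this review and should be checked against api-doc.yaml:
// Sketch only: the create URL shape and result fields are assumptions based on
// POST_mentoring_v1_rolePermissionMapping_create_role_id.
let createdRoleId

beforeAll(async () => {
  const res = await request(BASE)
    .post('/mentoring/v1/rolePermissionMapping/create/1') // assumed create route
    .set('x-auth-token', TOKEN)
    .send({ permission_id: 1 })
    .set('Content-Type', 'application/json')
  createdRoleId = res.body?.result?.roleId ?? 1 // fall back to a known seeded id
})

test('deletes the mapping it just created', async () => {
  const res = await request(BASE)
    .post(`/mentoring/v1/rolePermissionMapping/delete/${createdRoleId}`)
    .set('x-auth-token', TOKEN)
    .send({ permission_id: 1 })
    .set('Content-Type', 'application/json')
  expect(res.status).toBeGreaterThanOrEqual(200)
  expect(res.status).toBeLessThan(300)
})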
test('should return 201', async () => {
  const url = `/mentoring/v1/rolePermissionMapping/delete/{role_id}`
  let req = request(BASE).post(url)
  req = req.set('x-auth-token', 'string')
Use TOKEN constant instead of hardcoded authentication value.
Same authentication issue.
Apply this diff:
- req = req.set('x-auth-token', 'string')
+ req = req.set('x-auth-token', TOKEN)
🤖 Prompt for AI Agents
In src/integration-tests-new/rolePermissionMapping/rolePermissionMapping.spec.js
around line 47, replace the hardcoded authentication header value 'string' with
the shared TOKEN constant: import or require the existing TOKEN constant from
the test helper/config module used in other tests (or reference the module where
TOKEN is defined) and set the header using that TOKEN variable instead of the
literal; ensure the file has the proper import/require added at the top if
missing and update req.set('x-auth-token', TOKEN).
const url = `/mentoring/v1/rolePermissionMapping/delete/{role_id}`
let req = request(BASE).post(url)
req = req.set('x-auth-token', 'string')
req = req.send({}).set('Content-Type', 'application/json')
Replace path parameter placeholder and use TOKEN constant.
Both issues appear in this test as well.
Apply this diff:
- const url = `/mentoring/v1/rolePermissionMapping/delete/{role_id}`
+ const roleId = 1
+ const url = `/mentoring/v1/rolePermissionMapping/delete/${roleId}`
let req = request(BASE).post(url)
- req = req.set('x-auth-token', 'string')
+ req = req.set('x-auth-token', TOKEN)
🤖 Prompt for AI Agents
In src/integration-tests-new/rolePermissionMapping/rolePermissionMapping.spec.js
around lines 67 to 70, replace the literal path placeholder
"/mentoring/v1/rolePermissionMapping/delete/{role_id}" with a URL that injects
the actual role id (e.g., `/mentoring/v1/rolePermissionMapping/delete/${roleId}`
or the relevant ROLE_ID constant) and replace the hardcoded 'string' auth header
with the shared TOKEN constant (req.set('x-auth-token', TOKEN)); ensure
Content-Type remains application/json.
| "POST_mentoring_v1_rolePermissionMapping_create_role_id": { | ||
| "$schema": "http://json-schema.org/draft-04/schema#", | ||
| "type": "object", | ||
| "properties": { | ||
| "responseCode": { | ||
| "type": "string" | ||
| }, | ||
| "message": { | ||
| "type": "string" | ||
| }, | ||
| "result": { | ||
| "type": "object", | ||
| "properties": { | ||
| "roleId": { | ||
| "type": "number" | ||
| }, | ||
| "permissionId": { | ||
| "type": "number" | ||
| }, | ||
| "module": { | ||
| "type": "string" | ||
| }, | ||
| "request_type": { | ||
| "type": "array", | ||
| "items": { | ||
| "type": "string" | ||
| } | ||
| } | ||
| } | ||
| }, | ||
| "meta": { | ||
| "type": "object", | ||
| "properties": { | ||
| "correlation": { | ||
| "type": "string" | ||
| }, | ||
| "meetingPlatform": { | ||
| "type": "string" | ||
| } | ||
| } | ||
| } | ||
| } | ||
| }, |
🧹 Nitpick | 🔵 Trivial
Add required fields to strengthen schema validation.
The schema lacks a required array, making all properties optional. This means the validator will pass responses missing critical fields like responseCode, message, or result. Consider adding required field constraints to ensure API contract compliance.
Apply this diff to add required fields:
"POST_mentoring_v1_rolePermissionMapping_create_role_id": {
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"properties": {
"responseCode": {
"type": "string"
},
"message": {
"type": "string"
},
"result": {
"type": "object",
"properties": {
"roleId": {
"type": "number"
},
"permissionId": {
"type": "number"
},
"module": {
"type": "string"
},
"request_type": {
"type": "array",
"items": {
"type": "string"
}
}
- }
+ },
+ "required": ["roleId", "permissionId"]
},
"meta": {
"type": "object",
"properties": {
"correlation": {
"type": "string"
},
"meetingPlatform": {
"type": "string"
}
}
}
- }
+ },
+ "required": ["responseCode", "message", "result"]
},
🤖 Prompt for AI Agents
In
src/integration-tests-new/rolePermissionMapping/schemas/rolePermissionMapping.schemas.json
around lines 2 to 44, the schema currently has no "required" arrays so all
properties are optional; add a root "required" array including at minimum
"responseCode", "message", and "result" to enforce presence of the API contract,
then add a "required" array inside "result" requiring "roleId", "permissionId",
"module", and "request_type"; optionally add a "required" array in "meta" for
"correlation" (and "meetingPlatform" if your API requires it) so those metadata
fields must be present; ensure you add each "required" array as an array of
string property names and keep the existing type definitions unchanged.
| "POST_mentoring_v1_rolePermissionMapping_delete_role_id": { | ||
| "$schema": "http://json-schema.org/draft-04/schema#", | ||
| "type": "object", | ||
| "properties": { | ||
| "responseCode": { | ||
| "type": "string" | ||
| }, | ||
| "message": { | ||
| "type": "string" | ||
| }, | ||
| "result": { | ||
| "type": "object", | ||
| "properties": {} | ||
| }, | ||
| "meta": { | ||
| "type": "object", | ||
| "properties": { | ||
| "correlation": { | ||
| "type": "string" | ||
| }, | ||
| "meetingPlatform": { | ||
| "type": "string" | ||
| } | ||
| } | ||
| } | ||
| } | ||
| } |
🧹 Nitpick | 🔵 Trivial
Add required fields to strengthen schema validation.
The delete schema also lacks required field constraints. Apply the same pattern as the create schema.
Apply this diff:
"POST_mentoring_v1_rolePermissionMapping_delete_role_id": {
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"properties": {
"responseCode": {
"type": "string"
},
"message": {
"type": "string"
},
"result": {
"type": "object",
"properties": {}
},
"meta": {
"type": "object",
"properties": {
"correlation": {
"type": "string"
},
"meetingPlatform": {
"type": "string"
}
}
}
- }
+ },
+ "required": ["responseCode", "message", "result"]
}
🤖 Prompt for AI Agents
In
src/integration-tests-new/rolePermissionMapping/schemas/rolePermissionMapping.schemas.json
around lines 45 to 71, the DELETE schema lacks required field constraints; add a
"required" array on the schema object listing the top-level required properties
(e.g., ["responseCode","message","result","meta"]) and add a "required" array
inside "meta" to require its fields (e.g., ["correlation","meetingPlatform"])
mirroring the create schema pattern so validation enforces presence of these
fields.
var supertest = require('supertest') //require supertest
var defaults = require('superagent-defaults')
🧹 Nitpick | 🔵 Trivial
Use const instead of var for consistency.
Modern JavaScript best practice is to use const for variables that won't be reassigned.
-var supertest = require('supertest') //require supertest
-var defaults = require('superagent-defaults')
+const supertest = require('supertest')
+const defaults = require('superagent-defaults')📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| var supertest = require('supertest') //require supertest | |
| var defaults = require('superagent-defaults') | |
| const supertest = require('supertest') | |
| const defaults = require('superagent-defaults') |
🤖 Prompt for AI Agents
In src/integration-tests-new/commonTests.js around lines 1 to 2, the file uses
"var" to import modules; update those declarations to "const" (e.g.,
require('supertest') and require('superagent-defaults')) since they are not
reassigned, ensuring consistent modern JS usage and avoiding var's
function-scoped behavior.
const logIn = async () => {
  try {
    let request = defaults(supertest('http://localhost:3001'))
    let waitOn = require('wait-on')
🧹 Nitpick | 🔵 Trivial
Move require('wait-on') to top-level imports.
Requiring modules inside functions adds unnecessary overhead on each call and is unconventional.
+const waitOn = require('wait-on')
+
const logIn = async () => {
try {
let request = defaults(supertest('http://localhost:3001'))
- let waitOn = require('wait-on')
let opts = {
🤖 Prompt for AI Agents
In src/integration-tests-new/commonTests.js around line 12, the code requires
'wait-on' inside a function which causes repeated runtime module resolution and
is unconventional; move the require to the top-level imports by adding a single
top-level const or let declaration (e.g., const waitOn = require('wait-on')) at
the top of the file and replace the in-function require usage with that variable
so the module is loaded once at module initialization.
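For reference, a top-level arrangement could look like the sketch below; the tcp:localhost:3001 resource is an assumption that mirrors the user-service URL already used inside logIn, and the beforeAll would live in a Jest setup file or in the specs that consume this helper:
// Sketch: module-level requires, with wait-on invoked once before the suite runs.
const supertest = require('supertest')
const defaults = require('superagent-defaults')
const waitOn = require('wait-on')

beforeAll(async () => {
  // Fail fast (after 30s) if the dependent user service never comes up.
  await waitOn({ resources: ['tcp:localhost:3001'], timeout: 30000 })
})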
let password = faker.internet.password()
let res = await request.post('/user/v1/account/create').set('origin', 'localhost').send({
  name: 'adithya',
  email: email,
  password: 'PassworD@@@123',
Bug: Password mismatch between generated and returned values.
Line 21 generates a random password with faker.internet.password(), but lines 25 and 31 use the hardcoded 'PassworD@@@123'. The returned userDetails.password (line 49) contains the unused faker password, not the actual password used for login.
- let password = faker.internet.password()
+ const password = 'PassworD@@@123'
let res = await request.post('/user/v1/account/create').set('origin', 'localhost').send({
name: 'adithya',
email: email,
- password: 'PassworD@@@123',
+ password: password,
role: common.MENTEE_ROLE,
})
res = await request.post('/user/v1/account/login').set('origin', 'localhost').send({
identifier: email,
- password: 'PassworD@@@123',
+ password: password,
})The same issue exists in mentorLogIn (lines 74, 79, 86, 104).
🤖 Prompt for AI Agents
In src/integration-tests-new/commonTests.js around lines 21-25 (and related
blocks at ~31, 49 and the mentor login flow around lines 74, 79, 86, 104), the
test generates a password with faker.internet.password() but then sends a
hardcoded 'PassworD@@@123' to the account creation/login endpoints, causing the
returned userDetails.password to mismatch the actual credential used; fix by
using the generated password variable everywhere the request body and subsequent
login use the password (replace hardcoded strings with the faker-generated
password variable) so the created account and login steps use the same password
consistently.
} catch (error) {
  console.error(error)
}
Silent failure: caught errors are logged but not rethrown.
When an error occurs, the function logs it but returns undefined implicitly. This can mask failures in tests. Consider rethrowing or returning a structured error.
} catch (error) {
console.error(error)
+ throw error
}The same issue exists in mentorLogIn (lines 112-114).
🤖 Prompt for AI Agents
In src/integration-tests-new/commonTests.js around lines 58-60 (and similarly
for mentorLogIn at lines 112-114), the catch blocks only console.error the
exception and allow the function to return undefined; change the catch handlers
to rethrow the caught error (or throw a new Error wrapping it) after logging so
test callers receive the failure. Ensure you preserve the existing console.error
call, then add a `throw error` (or `throw new Error(...)`) so the test harness
can detect and fail the test.
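A hedged variant that keeps the log but still fails the calling test might wrap the error with the helper's name for easier triage (wrapping is optional; a bare rethrow as in the diff above is enough):
} catch (error) {
  console.error(error)
  // Rethrow so Jest marks the test as failed instead of continuing with undefined.
  throw new Error(`logIn helper failed: ${error.message}`)
}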
email: email,
password: 'PassworD@@@123',
isAMentor: true,
secretCode: 'secret-code',
🧹 Nitpick | 🔵 Trivial
Hardcoded secret code for mentor creation.
Using a hardcoded 'secret-code' assumes a specific test environment configuration. Consider making this configurable via environment variable.
+const MENTOR_SECRET_CODE = process.env.TEST_MENTOR_SECRET_CODE || 'secret-code'
+
let res = await request.post('/user/v1/account/create').set('origin', 'localhost').send({
name: 'Nevil',
email: email,
password: 'PassworD@@@123',
isAMentor: true,
- secretCode: 'secret-code',
+ secretCode: MENTOR_SECRET_CODE,
})Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In src/integration-tests-new/commonTests.js around line 81, the test uses a
hardcoded secretCode value 'secret-code'; update this to read the secret code
from an environment variable (e.g., process.env.MENTOR_SECRET_CODE) with a
sensible fallback for local/dev runs, validate its presence when required, and
document or fail the test with a clear error if the env var is missing in CI.
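One way to act on this, reusing the TEST_MENTOR_SECRET_CODE name from the diff above (the CI check is an assumption about the desired policy), is to fall back locally but fail loudly in CI:
// Sketch: tolerate a default locally, but require an explicit secret in CI.
const MENTOR_SECRET_CODE = process.env.TEST_MENTOR_SECRET_CODE || 'secret-code'

if (process.env.CI && !process.env.TEST_MENTOR_SECRET_CODE) {
  throw new Error('TEST_MENTOR_SECRET_CODE must be set in CI for mentor account creation')
}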
const request = require('supertest')
const Ajv = require('ajv')
const BASE = process.env.BASE_URL || 'http://localhost:3000'
const TOKEN = process.env.TEST_BEARER_TOKEN || 'test-token'
Unused TOKEN constant.
TOKEN is defined but never used. The tests use hardcoded 'string' instead.
🤖 Prompt for AI Agents
In src/integration-tests-new/users/users.spec.js around line 4, the TOKEN
constant is defined but unused; update the tests to use TOKEN instead of the
hardcoded 'string' values (or remove TOKEN if you prefer environment-only
configuration). Replace occurrences of the literal 'string' used as the bearer
token in requests with the TOKEN variable so the tests respect
process.env.TEST_BEARER_TOKEN (keep the current fallback 'test-token'). Ensure
imports/variables around line 4 remain in scope for all tests that need the
token.
test('should return 200', async () => {
  const url = `/mentoring/v1/users/pendingFeedbacks`
  let req = request(BASE).get(url)
  req = req.set('x-auth-token', 'string')
Hardcoded 'string' token will fail authentication.
Using the literal string 'string' as the auth token will not authenticate successfully against the API. Use the TOKEN constant or integrate with commonTests.js login helpers for valid authentication.
- req = req.set('x-auth-token', 'string')
+ req = req.set('x-auth-token', `bearer ${TOKEN}`)
The same issue exists on line 33.
🤖 Prompt for AI Agents
In src/integration-tests-new/users/users.spec.js around lines 14 and 33, the
test is using a hardcoded auth token 'string' which will fail authentication;
replace these hardcoded values with the valid TOKEN constant (or call the login
helper from commonTests.js to obtain a real token) and ensure TOKEN is imported
or the login helper is invoked before the requests so req.set('x-auth-token',
TOKEN) is used instead of the literal 'string'.
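For illustration, a minimal sketch of the login-helper route is shown below. It assumes commonTests.js exports a logIn() helper that resolves to an object carrying a token field, mirroring how the other suites in this PR obtain userDetails; adjust the import and field names to the actual helper.
const request = require('supertest')
const { logIn } = require('@commonTests') // assumed export; adjust to the real one

const BASE = process.env.BASE_URL || 'http://localhost:3000'

describe('GET /mentoring/v1/users/pendingFeedbacks', () => {
    let userDetails

    beforeAll(async () => {
        // Log in once per suite and reuse the real token instead of the literal 'string'
        userDetails = await logIn()
    })

    test('should return 200', async () => {
        const res = await request(BASE)
            .get('/mentoring/v1/users/pendingFeedbacks')
            .set('x-auth-token', userDetails.token)
        expect(res.status).toBeGreaterThanOrEqual(200)
        expect(res.status).toBeLessThan(300)
    })
})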
describe('GET /mentoring/v1/users/list?type={userType}&page={page}&limit={limit}&search={search}', () => {
    test('should return 200', async () => {
        const url = `/mentoring/v1/users/list?type=mentor&page=1&limit=2&search=jhon`
Typo in search parameter: "jhon" should be "john".
Minor typo in the test URL search parameter.
- const url = `/mentoring/v1/users/list?type=mentor&page=1&limit=2&search=jhon`
+ const url = `/mentoring/v1/users/list?type=mentor&page=1&limit=2&search=john`
🤖 Prompt for AI Agents
In src/integration-tests-new/users/users.spec.js around line 31, the test URL
uses a typo in the search query parameter (`search=jhon`); update the URL to use
the correct spelling `search=john` so the test searches for "john" as intended.
"start": "NODE_ENV=development nodemon app.js",
"prod": "NODE_ENV=production node app.js",
"stage": "NODE_ENV=stage node app.js",
"dev": "node --inspect=0.0.0.0:9229 app.js",
Security concern: Debug port bound to all interfaces.
Binding the Node.js inspector to 0.0.0.0 exposes the debugger to all network interfaces, which can be a security risk if the development environment is accessible from untrusted networks. Consider binding to 127.0.0.1 instead for local-only debugging.
- "dev": "node --inspect=0.0.0.0:9229 app.js",
+ "dev": "node --inspect=127.0.0.1:9229 app.js",
If remote debugging is intentionally required (e.g., for containerized development), document this explicitly and ensure proper network isolation.
🤖 Prompt for AI Agents
In src/package.json around line 14, the dev script binds the Node inspector to
0.0.0.0 which exposes the debug port to all network interfaces; change the bind
address to 127.0.0.1 to restrict debugging to localhost (or, if remote debugging
is intentionally required, leave as-is but add an explicit comment in the repo
and documentation explaining the requirement and ensure network
isolation/firewall rules are applied).
test.js (Outdated)
@@ -0,0 +1 @@
console.log('test file executed')
Remove debug artifact from repository root.
This file appears to be a leftover debug artifact and serves no purpose in the test infrastructure. It's not referenced by Jest configuration and doesn't follow the test naming conventions (*.spec.js or *.specs.js).
-console.log('test file executed')
Delete this file entirely.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In test.js at lines 1-1, this file is an unintended debug artifact not
referenced by Jest and not following test naming conventions; remove the file
entirely from the repository (delete test.js and any related accidental
commits), ensure no references remain in package.json or CI configs, and run the
test suite to confirm nothing breaks.
test('should return 200', async () => {
    const url = `/mentoring/v1/admin/userDelete`
    let req = request(BASE).delete(url)
    req = req.set('x-auth-token', 'string')
Use TOKEN variable instead of hard-coded 'string' for authentication.
The tests hard-code 'string' for the x-auth-token header instead of using the TOKEN constant defined at line 4. This will cause authentication failures.
Apply this pattern throughout the file:
- req = req.set('x-auth-token', 'string')
+ req = req.set('x-auth-token', TOKEN)
Also applies to: 33-33, 52-52, 71-71, 90-90
🤖 Prompt for AI Agents
In src/integration-tests-new/admin/admin.spec.js around lines 14, 33, 52, 71 and
90, tests set the header x-auth-token to the hard-coded string 'string' instead
of using the TOKEN constant declared at line 4; update each
req.set('x-auth-token', 'string') call to use req.set('x-auth-token', TOKEN) so
the tests use the shared TOKEN variable for authentication and avoid hard-coded
values.
| describe('cloud-services endpoints generated from api-doc.yaml', () => { | ||
| describe('GET /mentoring/v1/cloud-services/getSignedUrl', () => { | ||
| test('should return 200', async () => { | ||
| const url = `/mentoring/v1/cloud-services/getSignedUrl` | ||
| let req = request(BASE).get(url) | ||
| req = req.set('x-auth-token', 'string') | ||
| const res = await req | ||
| expect(res.status).toBeGreaterThanOrEqual(200) | ||
| expect(res.status).toBeLessThan(300) | ||
| // validate response schema | ||
| const schema = schemas['GET_mentoring_v1_cloud-services_getSignedUrl'] | ||
| const validate = ajv.compile(schema) | ||
| const valid = validate(res.body) | ||
| if (!valid) { | ||
| console.error('Schema validation errors:', validate.errors) | ||
| } | ||
| expect(valid).toBe(true) | ||
| }) | ||
| }) | ||
|
|
||
| describe('GET /mentoring/v1/cloud-services/getDownloadableUrl', () => { | ||
| test('should return 200', async () => { | ||
| const url = `/mentoring/v1/cloud-services/getDownloadableUrl` | ||
| let req = request(BASE).get(url) | ||
| req = req.set('x-auth-token', 'string') | ||
| const res = await req | ||
| expect(res.status).toBeGreaterThanOrEqual(200) | ||
| expect(res.status).toBeLessThan(300) | ||
| // validate response schema | ||
| const schema = schemas['GET_mentoring_v1_cloud-services_getDownloadableUrl'] | ||
| const validate = ajv.compile(schema) | ||
| const valid = validate(res.body) | ||
| if (!valid) { | ||
| console.error('Schema validation errors:', validate.errors) | ||
| } | ||
| expect(valid).toBe(true) | ||
| }) | ||
| }) | ||
| }) |
Use real auth tokens instead of "string" for cloud-services tests
Both cloud-services tests set:
req = req.set('x-auth-token', 'string')
while a TOKEN env-based constant is available and other suites use logged-in user tokens.
If these endpoints are protected in non-dev environments, this placeholder will likely yield 401/403 or bypass realistic auth flows. Consider:
- req = req.set('x-auth-token', 'string')
+ req = req.set('x-auth-token', TOKEN)
or aligning with the shared login helper pattern used in other integration tests.
🤖 Prompt for AI Agents
In src/integration-tests-new/cloud-services/cloud-services.spec.js around lines
9 to 47, the tests set the auth header to the literal 'string' which is a
placeholder and will produce invalid auth in non-dev environments; replace the
placeholder by obtaining a real token (either use the TOKEN env constant used
elsewhere via process.env.TOKEN or call the shared login helper to get a
logged-in user's token) and set that value in the x-auth-token header for both
getSignedUrl and getDownloadableUrl tests; ensure the token retrieval is awaited
where necessary and reuse the same valid token across the two tests to match
other suites' pattern.
| { | ||
| "GET_mentoring_v1_cloud-services_getSignedUrl": { | ||
| "$schema": "http://json-schema.org/draft-04/schema#", | ||
| "type": "object", | ||
| "properties": { | ||
| "responseCode": { | ||
| "type": "string" | ||
| }, | ||
| "message": { | ||
| "type": "string" | ||
| }, | ||
| "result": { | ||
| "type": "object", | ||
| "properties": { | ||
| "signed_url": { | ||
| "type": "string" | ||
| }, | ||
| "file_path": { | ||
| "type": "string" | ||
| }, | ||
| "dest_file_path": { | ||
| "type": "string" | ||
| } | ||
| } | ||
| } | ||
| } | ||
| }, | ||
| "GET_mentoring_v1_cloud-services_getDownloadableUrl": {} | ||
| } |
🧹 Nitpick | 🔵 Trivial
GET_mentoring_v1_cloud-services_getDownloadableUrl schema is too permissive
GET_mentoring_v1_cloud-services_getDownloadableUrl is defined as {}, so Ajv will accept any response shape. That’s fine as a temporary placeholder, but it won’t catch contract regressions.
If the response mirrors getSignedUrl (or otherwise follows your standard { responseCode, message, result } wrapper), consider defining at least a minimal schema instead of {}.
🤖 Prompt for AI Agents
In src/integration-tests-new/cloud-services/schemas/cloud-services.schemas.json
around lines 1 to 29, the GET_mentoring_v1_cloud-services_getDownloadableUrl
schema is currently an empty object so AJV will accept any response; replace the
empty schema with a minimal JSON Schema that mirrors the standard wrapper
(responseCode:string, message:string, result:object) and at least include the
expected result properties (e.g., downloadable_url and file_path as strings) so
the test suite will validate the response shape and catch contract regressions.
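As a sketch only, a minimal non-empty entry could mirror the getSignedUrl wrapper; the result field names below (downloadable_url, file_path) are assumptions rather than confirmed API output, so verify them against a real response before adopting.
const Ajv = require('ajv')

// The same object, serialized as JSON, would replace the empty {} in cloud-services.schemas.json
const getDownloadableUrlSchema = {
    type: 'object',
    required: ['responseCode', 'message', 'result'],
    properties: {
        responseCode: { type: 'string' },
        message: { type: 'string' },
        result: {
            type: 'object',
            properties: {
                downloadable_url: { type: 'string' }, // assumed field name
                file_path: { type: 'string' }, // assumed field name
            },
        },
    },
}

// Sanity check under the same Ajv settings the tests use
const validate = new Ajv({ strict: false }).compile(getDownloadableUrlSchema)
console.log(validate({ responseCode: 'OK', message: 'ok', result: {} })) // true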
global.request = defaults(supertest(baseURL))
global.request.set(defaultHeaders)
global.userId = res.body.result.user.id
Global state mutation may cause issues with parallel test execution.
Setting global.request and global.userId creates shared mutable state. If tests run in parallel or multiple login functions are called, this could lead to race conditions or unexpected behavior.
Consider returning the configured request object and userId directly instead of mutating globals, allowing tests to manage their own state:
- global.request = defaults(supertest(baseURL))
- global.request.set(defaultHeaders)
- global.userId = res.body.result.user.id
const userDetails = {
token: res.body.result.access_token,
refreshToken: res.body.result.refresh_token,
userId: res.body.result.user.id,
email: email,
password: password,
organizations: res.body.result.user.organizations,
+ request: defaults(supertest(baseURL)).set(defaultHeaders),
}
🤖 Prompt for AI Agents
In src/integration-tests-new/commonTests.js around lines 40 to 42, the code
mutates global.request and global.userId which can cause race conditions in
parallel tests; change the function so it does not set globals but instead
returns the configured request
(defaults(supertest(baseURL)).set(defaultHeaders)) and the userId
(res.body.result.user.id) to callers, update any tests that rely on globals to
accept and use the returned { request, userId } (or tuple) and remove the global
assignments; ensure any test helpers or teardown logic are adjusted accordingly
so no shared global state is used.
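A rough sketch of the consuming side, assuming logIn() is changed to return the per-caller state described above (the endpoint used in the test is only an example, and the returned request agent is assumed to carry the default headers the helper already sets):
// In a spec file: each suite owns its state instead of reading globals
const { logIn } = require('@commonTests') // assumed export name

let session

beforeAll(async () => {
    // session = { token, userId, request, ... } per the suggested return shape
    session = await logIn()
})

test('uses per-suite state instead of globals', async () => {
    // session.request is the preconfigured supertest agent returned by the helper
    const res = await session.request.get('/mentoring/v1/users/pendingFeedbacks')
    expect(res.status).toBe(200)
})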
/*
let profileCreate = await request(BASE).post('/mentoring/v1/profile/create').set('x-auth-token', userDetails.token).send({
    designation: ['beo', 'deo', 'testt'],
    area_of_expertise: ['educational_leadership', 'sqaa'],
    education_qualification: 'MBA',
    tags: ['Experienced', 'Technical'],
    visibility: 'visible',
    organisation_ids: [1],
    external_session_visibility: 'CURRENT',
    external_mentor_visibility: 'ALL',
})

console.log(profileCreate.body, 'profileCreatebody')
*/
🧹 Nitpick | 🔵 Trivial
Remove or activate extensive commented test scaffolding.
A significant portion of the test file is commented out. This reduces clarity and maintainability. Either remove the scaffolding if it's not needed or complete and enable the tests.
Also applies to: 32-119, 143-162
🤖 Prompt for AI Agents
In src/integration-tests-new/connections/connections.specs.js around lines 15-28
(also review ranges 32-119 and 143-162), there is large commented-out test
scaffolding; either remove these dead comments or re-enable and complete the
tests. Decide which blocks are intended: delete unnecessary commented code to
keep the file clean, or convert each scaffold into active tests by restoring
requests, assertions, and setup/teardown (ensuring valid tokens, endpoints, and
expectations) and run the suite to confirm they pass; apply the same action
consistently to the other commented ranges noted in the review.
"GET_/mentoring/v1/sessions/getRecording/{sessionId}": {},
"PATCH_/mentoring/v1/sessions/completed/{sessionId}": {},
"PATCH_/mentoring/v1/sessions/updateRecordingUrl/{internalSessionId}": {}
🧹 Nitpick | 🔵 Trivial
Additional empty schemas at end of file.
GET_/mentoring/v1/sessions/getRecording/{sessionId}, PATCH_/mentoring/v1/sessions/completed/{sessionId}, and PATCH_/mentoring/v1/sessions/updateRecordingUrl/{internalSessionId} are all empty schemas.
Consider defining these schemas or documenting that they're intentionally empty placeholders.
🤖 Prompt for AI Agents
In src/integration-tests-new/sessions/schemas/sessions.schemas.json around lines
449 to 451, the three entries
"GET_/mentoring/v1/sessions/getRecording/{sessionId}",
"PATCH_/mentoring/v1/sessions/completed/{sessionId}", and
"PATCH_/mentoring/v1/sessions/updateRecordingUrl/{internalSessionId}" are empty
objects; either define appropriate JSON schema objects for each endpoint
(request params, path variables, requestBody and response schema as applicable)
or replace each empty object with a short documented placeholder indicating they
are intentionally empty (e.g., { "description": "intentionally empty: no
payload/response" }) so the file is explicit and tooling won’t treat them as
accidental omissions.
const request = require('supertest')
const Ajv = require('ajv')
const BASE = process.env.BASE_URL || 'http://localhost:3000'
const TOKEN = process.env.TEST_BEARER_TOKEN || 'test-token'
🧹 Nitpick | 🔵 Trivial
Unused variable TOKEN.
The TOKEN constant is declared but never used in this test file. All authenticated requests use userDetails.token or menteeDetails.token instead.
-const TOKEN = process.env.TEST_BEARER_TOKEN || 'test-token'
🤖 Prompt for AI Agents
In src/integration-tests-new/sessions/sessions.specs.js around line 5, the
constant TOKEN is declared but never used; remove the unused TOKEN declaration
or replace its usages if intended—delete the line declaring TOKEN
(process.env.TEST_BEARER_TOKEN || 'test-token') and run tests to ensure no
references remain, or if you meant to use a shared token, replace individual
userDetails.token/menteeDetails.token usages with TOKEN where appropriate and
ensure TOKEN is correctly initialized from the intended fixture.
afterAll(async () => {
    // Clean up the created session
    if (createdSessionId) {
        const deleteUrl = `/mentoring/v1/sessions/update/${createdSessionId}`
        // We don't need to assert the result of cleanup, but it's good practice to ensure it runs
        // await request(BASE).delete(deleteUrl).set('x-auth-token', userDetails.token)
    }
})
🧹 Nitpick | 🔵 Trivial
Cleanup code is commented out, potentially leaving test data.
The session cleanup in afterAll is commented out. This may leave orphaned test sessions in the database, which could cause issues in subsequent test runs or pollute test/staging environments.
afterAll(async () => {
// Clean up the created session
if (createdSessionId) {
const deleteUrl = `/mentoring/v1/sessions/update/${createdSessionId}`
- // We don't need to assert the result of cleanup, but it's good practice to ensure it runs
- // await request(BASE).delete(deleteUrl).set('x-auth-token', userDetails.token)
+ // Clean up without asserting - best effort
+ await request(BASE).delete(deleteUrl).set('x-auth-token', userDetails.token)
}
})
🤖 Prompt for AI Agents
In src/integration-tests-new/sessions/sessions.specs.js around lines 59 to 66,
the cleanup HTTP call in afterAll is commented out which leaves created sessions
orphaned; restore the cleanup by making an authenticated DELETE request to the
session endpoint (use the createdSessionId) inside afterAll, wrap it in a
try/catch so failures don't fail the suite, and do not assert the result—just
ensure the request runs (i.e., uncomment and call
request(BASE).delete(deleteUrl).set('x-auth-token', userDetails.token) and
swallow/log any errors).
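A sketch of that guarded cleanup (assuming, as the commented-out original does, that a DELETE on the update route removes the session):
afterAll(async () => {
    if (!createdSessionId) return
    const deleteUrl = `/mentoring/v1/sessions/update/${createdSessionId}`
    try {
        // Best-effort cleanup; the result is intentionally not asserted
        await request(BASE).delete(deleteUrl).set('x-auth-token', userDetails.token)
    } catch (error) {
        // Log and swallow so a cleanup hiccup never fails the suite
        console.error('Session cleanup failed:', error.message)
    }
})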
test('POST /mentoring/v1/sessions/unenroll/{sessionId} - should return 200 on successful unenrollment', async () => {
    const url = `/mentoring/v1/sessions/unEnroll/${createdSessionId}` // Corrected to camelCase 'unEnroll'
    let req = request(BASE).post(url)
    req = req.set('x-auth-token', menteeDetails.token) // Use mentee's token to unenroll
    const res = await req
    expect(res.status).toBeGreaterThanOrEqual(200)
    expect(res.status).toBeLessThan(300)
    // validate response schema
    const schema = schemas['POST_/mentoring/v1/sessions/unenroll/{sessionId}']
    const validate = ajv.compile(schema)
    const valid = validate(res.body)
    if (!valid) {
        console.error('Schema validation errors:', validate.errors)
    }
    expect(valid).toBe(true)
})
URL path casing mismatch with schema key will cause validation failure.
The URL uses unEnroll (camelCase) at Line 107, but the schema key at Line 114 uses unenroll (lowercase): POST_/mentoring/v1/sessions/unenroll/{sessionId}. If the actual API path is unEnroll, schema validation will work but the schema key naming is inconsistent with the URL. Verify the actual API path and ensure the schema key matches.
If the API path is indeed unEnroll, update the schema key for consistency:
-const schema = schemas['POST_/mentoring/v1/sessions/unenroll/{sessionId}']
+const schema = schemas['POST_/mentoring/v1/sessions/unEnroll/{sessionId}']
And update the corresponding key in sessions.schemas.json.
🤖 Prompt for AI Agents
In src/integration-tests-new/sessions/sessions.specs.js around lines 106 to 121,
the test URL uses camelCase '/unEnroll' while the schema lookup uses lowercase
'unenroll', causing a mismatch; verify the actual API route and then make them
consistent by either changing the test URL to '/unenroll/{sessionId}' to match
the existing schema key or updating the schema key in sessions.schemas.json (and
any other references) to 'POST_/mentoring/v1/sessions/unEnroll/{sessionId}' so
the schema lookup matches the test URL; ensure you update all places that
reference the schema key to avoid future mismatches.
'@commonTests': '<rootDir>/integration-tests-new/commonTests',
'@helpers/(.*)': '<rootDir>/helpers/$1',
'@utils/(.*)': '<rootDir>/utils/$1',
},
testMatch: ['<rootDir>/integration-tests/**/*.spec.js'],
testMatch: ['<rootDir>/integration-tests-new/**/*.specs.js'],
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find test files in both integration test directories
echo "=== Test files in integration-tests-new ==="
find . -path "*/integration-tests-new/**" -name "*.spec.js" -o -path "*/integration-tests-new/**" -name "*.specs.js" 2>/dev/null | head -20
echo -e "\n=== Test files in integration-tests (legacy) ==="
find . -path "*/integration-tests/**" -name "*.spec.js" -o -path "*/integration-tests/**" -name "*.specs.js" 2>/dev/null | head -20
echo -e "\n=== Check for helpers directory ==="
find . -type d -name "helpers" | grep -v node_modules | head -10
echo -e "\n=== Check for utils directory ==="
find . -type d -name "utils" | grep -v node_modules | head -10
Jest testMatch will skip majority of new test files in integration-tests-new
The pattern testMatch: ['<rootDir>/integration-tests-new/**/*.specs.js'] only matches files ending in .specs.js, but the new test suite uses both naming conventions. Based on the codebase:
- integration-tests-new contains 15+ files named *.spec.js (e.g., admin.spec.js, entity.spec.js, feedback.spec.js, etc.) and 5 files named *.specs.js (e.g., connections.specs.js, entity-type.specs.js)
- Only the 5 .specs.js files will be discovered; the other 15 will be silently skipped
- The legacy integration-tests/ directory (13 test files) is also no longer included
Either standardize naming across the test suite to .specs.js, or broaden the pattern to capture both:
- testMatch: ['<rootDir>/integration-tests-new/**/*.specs.js'],
+ testMatch: ['<rootDir>/integration-tests-new/**/*.spec.js', '<rootDir>/integration-tests-new/**/*.specs.js'],
The @helpers and @utils aliases are properly configured; both directories exist at <rootDir>/helpers and <rootDir>/utils.
🤖 Prompt for AI Agents
In src/integrationJest.config.js around lines 18 to 22, the current testMatch
only catches files ending with .specs.js and thus skips ~15 files named
*.spec.js and the legacy integration-tests/ folder; update testMatch to match
both naming conventions and include the legacy dir. Replace the single pattern
with either two explicit patterns for integration-tests-new (e.g., match
**/*.spec.js and **/*.specs.js) and add equivalent patterns for
integration-tests/, or use a single glob that matches both (e.g.,
**/*.spec?(s).js) and include both <rootDir>/integration-tests-new/** and
<rootDir>/integration-tests/** so all existing tests are discovered.
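For reference, a sketch of the broadened config using the single-glob variant suggested above (other existing options omitted; add a legacy integration-tests pattern only if that suite should still run):
// src/integrationJest.config.js (sketch)
module.exports = {
    moduleNameMapper: {
        '@commonTests': '<rootDir>/integration-tests-new/commonTests',
        '@helpers/(.*)': '<rootDir>/helpers/$1',
        '@utils/(.*)': '<rootDir>/utils/$1',
    },
    // ?(s) is a micromatch extglob: matches both *.spec.js and *.specs.js
    testMatch: ['<rootDir>/integration-tests-new/**/*.spec?(s).js'],
}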
"field_0": {
    "type": "string"
},
Suspicious field_0 property - likely a mistake or placeholder.
The field_0 property at the top level of POST_mentoring_v1_entity-type_create appears to be misplaced or a leftover from schema generation. It doesn't follow the standard response envelope pattern (responseCode, message, result, meta).
Remove if not part of the actual API response:
},
- "field_0": {
- "type": "string"
- },
"meta": {
🤖 Prompt for AI Agents
In src/integration-tests-new/entity-type/schemas/entity-type.schemas.json around
lines 52 to 54, remove the misplaced top-level "field_0" property which looks
like a leftover placeholder and does not conform to the API response envelope
(responseCode, message, result, meta); update the schema to only include the
correctly structured envelope fields or move "field_0" into the appropriate
nested object if it actually belongs in the response body, then run schema
validation to ensure no other placeholders remain.
const BASE = process.env.BASE_URL || 'http://localhost:3000'
const TOKEN = process.env.TEST_BEARER_TOKEN || 'test-token'
🧹 Nitpick | 🔵 Trivial
Use environment‑driven token instead of a hard‑coded auth value
Authenticated tests currently send a fixed header:
req = req.set('x-auth-token', 'string')
Given you already expose TEST_BEARER_TOKEN via TOKEN, it’s safer to wire that into the header so the same spec can run across local/CI/dev environments that expect a real token (or easily be overridden).
For example:
- const TOKEN = process.env.TEST_BEARER_TOKEN || 'test-token'
+ const TOKEN = process.env.TEST_BEARER_TOKEN || 'test-token'
// ...
- req = req.set('x-auth-token', 'string')
+ req = req.set('x-auth-token', TOKEN)
Apply this in all three happy-path tests so auth behavior is consistent and configurable.
Also applies to: 14-15, 90-91, 166-167
🤖 Prompt for AI Agents
In src/integration-tests-new/form/form.spec.js around lines 3-4, 14-15, 90-91
and 166-167, the tests set a hard-coded auth header value ('string'); replace
those occurrences so the test uses the TOKEN constant
(process.env.TEST_BEARER_TOKEN || 'test-token') for req.set('x-auth-token',
TOKEN) in all three happy-path tests to allow environment-driven tokens across
local/CI/dev. Ensure each request that currently uses the literal 'string' uses
TOKEN instead and keep existing fallback behavior intact.
const ajv = new Ajv({ strict: false })
🧹 Nitpick | 🔵 Trivial
Pre‑compile Ajv schemas once instead of per test (optional)
Each test calls ajv.compile(schema) on every run, even though the schemas are static. This is correct but slightly inefficient and duplicated.
You can pre‑compile once near the top of the file and reuse the compiled validators:
const ajv = new Ajv({ strict: false })
const schemas = require('./schemas/form.schemas.json')
+
+const validateCreate = ajv.compile(schemas['POST_/mentoring/v1/form/create'])
+const validateUpdate = ajv.compile(schemas['PUT_/mentoring/v1/form/update/{formId}'])
+const validateRead = ajv.compile(schemas['POST_/mentoring/v1/form/read/{formId}'])
Then in tests:
- const schema = schemas['POST_/mentoring/v1/form/create']
- const validate = ajv.compile(schema)
- const valid = validate(res.body)
+ const valid = validateCreate(res.body)
(and similarly for update/read). This keeps the tests a bit leaner and avoids repeated compilation.
Also applies to: 70-76, 146-152, 177-183
🤖 Prompt for AI Agents
In src/integration-tests-new/form/form.spec.js around lines 5-6 (and repeated at
70-76, 146-152, 177-183), tests call ajv.compile(schema) inside each test
causing repeated compilation; pre-compile the static schemas once at the top of
the file after creating the Ajv instance (e.g., const validateCreate =
ajv.compile(createSchema); const validateUpdate = ajv.compile(updateSchema);
etc.) and then replace per-test ajv.compile(schema) calls with the corresponding
precompiled validator variables so tests reuse the compiled validators.
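One way to sketch that is to compile every schema in the file once at module load and reuse the cached validators:
const Ajv = require('ajv')
const schemas = require('./schemas/form.schemas.json')

const ajv = new Ajv({ strict: false })

// Compile each schema a single time; tests then look up the cached validator by key
const validators = Object.fromEntries(
    Object.entries(schemas).map(([key, schema]) => [key, ajv.compile(schema)])
)

// Inside a test:
// const validate = validators['POST_/mentoring/v1/form/create']
// const valid = validate(res.body)
// if (!valid) console.error('Schema validation errors:', validate.errors)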
| describe('PUT /mentoring/v1/form/update/{formId}', () => { | ||
| test('should return 200', async () => { | ||
| const url = `/mentoring/v1/form/update/1` | ||
| let req = request(BASE).put(url) | ||
| req = req.set('x-auth-token', 'string') | ||
| req = req | ||
| .send({ | ||
| type: 'session', | ||
| sub_type: 'createSessionsNew', | ||
| data: { | ||
| template_name: 'Test', | ||
| fields: { | ||
| controls: [ | ||
| { | ||
| name: 'title', | ||
| label: 'title', | ||
| value: '', | ||
| class: 'ion-margin', | ||
| type: 'text', | ||
| position: 'floating', | ||
| validators: { | ||
| required: true, | ||
| min_length: 5, | ||
| }, | ||
| }, | ||
| { | ||
| name: 'categories', | ||
| label: 'Select categories', | ||
| value: '', | ||
| class: 'ion-margin', | ||
| type: 'chip', | ||
| position: '', | ||
| disabled: false, | ||
| show_select_all: true, | ||
| validators: { | ||
| required: true, | ||
| }, | ||
| }, | ||
| { | ||
| name: 'ages', | ||
| label: 'Select age', | ||
| value: '', | ||
| class: 'ion-margin', | ||
| type: 'chip', | ||
| position: '', | ||
| disabled: false, | ||
| show_select_all: true, | ||
| validators: { | ||
| required: true, | ||
| }, | ||
| }, | ||
| ], | ||
| }, | ||
| }, | ||
| }) | ||
| .set('Content-Type', 'application/json') | ||
| const res = await req | ||
| expect(res.status).toBeGreaterThanOrEqual(200) | ||
| expect(res.status).toBeLessThan(300) | ||
| // validate response schema | ||
| const schema = schemas['PUT_/mentoring/v1/form/update/{formId}'] | ||
| const validate = ajv.compile(schema) | ||
| const valid = validate(res.body) | ||
| if (!valid) { | ||
| console.error('Schema validation errors:', validate.errors) | ||
| } | ||
| expect(valid).toBe(true) | ||
| }) | ||
|
|
||
| test('should return 401/403 when unauthorized', async () => { | ||
| const url = `/mentoring/v1/form/update/1` | ||
| const res = await request(BASE).put(url) | ||
| expect([401, 403]).toContain(res.status) | ||
| }) | ||
| }) | ||
|
|
||
| describe('POST /mentoring/v1/form/read/{formId}', () => { | ||
| test('should return 200', async () => { | ||
| const url = `/mentoring/v1/form/read/1` | ||
| let req = request(BASE).post(url) | ||
| req = req.set('x-auth-token', 'string') | ||
| req = req | ||
| .send({ | ||
| type: 'session', | ||
| sub_type: 'createSessionsNew', | ||
| }) | ||
| .set('Content-Type', 'application/json') | ||
| const res = await req | ||
| expect(res.status).toBeGreaterThanOrEqual(200) | ||
| expect(res.status).toBeLessThan(300) | ||
| // validate response schema | ||
| const schema = schemas['POST_/mentoring/v1/form/read/{formId}'] | ||
| const validate = ajv.compile(schema) | ||
| const valid = validate(res.body) | ||
| if (!valid) { | ||
| console.error('Schema validation errors:', validate.errors) | ||
| } | ||
| expect(valid).toBe(true) | ||
| }) | ||
|
|
||
| test('should return 401/403 when unauthorized', async () => { | ||
| const url = `/mentoring/v1/form/read/1` | ||
| const res = await request(BASE).post(url) | ||
| expect([401, 403]).toContain(res.status) | ||
| }) | ||
| }) |
🧩 Analysis chain
🏁 Script executed:
# First, check if the file exists and examine the structure around the flagged lines
fd -t f "form.spec.js" src/
🏁 Script executed:
# Get the full file to understand context around lines 86-191
wc -l src/integration-tests-new/form/form.spec.js
🏁 Script executed:
# Read the test file to see full context including any setup/fixtures
head -200 src/integration-tests-new/form/form.spec.js
🏁 Script executed:
# Check for any database setup, seeding, or fixtures
fd -t f -e "seed*" -e "fixture*" -e "setup*" src/integration-tests-new/
🏁 Script executed:
# Search for form creation tests and any beforeAll/beforeEach hooks
rg -n "beforeAll|beforeEach|describe.*form|test.*create.*form" src/integration-tests-new/form/form.spec.js -A 5
🏁 Script executed:
# Verify if there's any database seeding or setup mechanism elsewhere in the project
fd -t f --full-path "src/integration-tests-new" -name "*.js" | head -20
🏁 Script executed:
# Check if there's a shared setup or beforeAll at higher level
cat -n src/integration-tests-new/form/form.spec.js | tail -50
🏁 Script executed:
# Look for any jest setup files or database initialization
fd -t f -name "jest.config.js" -o -name "setup*.js" -o -name "*seed*" src/
Chain update and read tests from create response to avoid hard-coded formId dependency
Both the update and read tests assume a form with ID 1 exists in the test environment (lines 88 and 164). This creates environment-dependent tests that will fail if the database is not pre-seeded or if test data is reset.
The create test (lines 10–84) successfully generates a form but does not capture the response ID. The update and read tests should derive their formId from the create response instead:
- test('should return 200', async () => {
- const url = `/mentoring/v1/form/update/1`
+ test('should return 200', async () => {
+ // First, create a form to get a valid ID
+ const createRes = await request(BASE)
+ .post('/mentoring/v1/form/create')
+ .set('x-auth-token', 'string')
+ .send({
+ type: 'session',
+ sub_type: 'createSessions',
+ data: {
+ template_name: 'defaultTemplate',
+ fields: { controls: [ /* ... */ ] }
+ }
+ })
+ const formId = createRes.body?.result?.id
+
+ const url = `/mentoring/v1/form/update/${formId}`
let req = request(BASE).put(url)
Apply the same pattern to the read test (line 162–191).
Committable suggestion skipped: line range outside the PR's diff.
| { | ||
| "POST_/mentoring/v1/form/create": { | ||
| "type": "object", | ||
| "properties": { | ||
| "responseCode": { | ||
| "type": "string" | ||
| }, | ||
| "message": { | ||
| "type": "string" | ||
| }, | ||
| "result": { | ||
| "type": "object", | ||
| "properties": { | ||
| "id": { | ||
| "type": "number" | ||
| }, | ||
| "type": { | ||
| "type": "string" | ||
| }, | ||
| "sub_type": { | ||
| "type": "string" | ||
| }, | ||
| "data": { | ||
| "type": "object", | ||
| "properties": { | ||
| "template_name": { | ||
| "type": "string" | ||
| }, | ||
| "fields": { | ||
| "type": "object", | ||
| "properties": { | ||
| "controls": { | ||
| "type": "array", | ||
| "items": { | ||
| "type": "object", | ||
| "properties": { | ||
| "name": { | ||
| "type": "string" | ||
| }, | ||
| "label": { | ||
| "type": "string" | ||
| }, | ||
| "value": { | ||
| "type": "string" | ||
| }, | ||
| "class": { | ||
| "type": "string" | ||
| }, | ||
| "type": { | ||
| "type": "string" | ||
| }, | ||
| "position": { | ||
| "type": "string" | ||
| }, | ||
| "validators": { | ||
| "type": "object", | ||
| "properties": { | ||
| "required": { | ||
| "type": "boolean" | ||
| }, | ||
| "min_length": { | ||
| "type": "number" | ||
| } | ||
| } | ||
| }, | ||
| "disabled": { | ||
| "type": "boolean" | ||
| }, | ||
| "show_select_all": { | ||
| "type": "boolean" | ||
| } | ||
| }, | ||
| "required": [ | ||
| "name", | ||
| "label", | ||
| "value", | ||
| "class", | ||
| "type", | ||
| "position", | ||
| "validators", | ||
| "disabled", | ||
| "show_select_all" | ||
| ] | ||
| } | ||
| } | ||
| } | ||
| } | ||
| } | ||
| }, | ||
| "updated_at": { | ||
| "type": "string" | ||
| }, | ||
| "created_at": { | ||
| "type": "string" | ||
| }, | ||
| "deleted_at": { | ||
| "type": "null" | ||
| }, | ||
| "organization_id": { | ||
| "type": "string" | ||
| } | ||
| } | ||
| }, | ||
| "meta": { | ||
| "type": "object", | ||
| "properties": { | ||
| "formsVersion": { | ||
| "type": "array", | ||
| "items": { | ||
| "type": "object", | ||
| "properties": { | ||
| "id": { | ||
| "type": "number" | ||
| }, | ||
| "type": { | ||
| "type": "string" | ||
| } | ||
| } | ||
| } | ||
| }, | ||
| "correlation": { | ||
| "type": "string" | ||
| }, | ||
| "meeting_platform": { | ||
| "type": "string" | ||
| } | ||
| } | ||
| } | ||
| } | ||
| }, | ||
| "PUT_/mentoring/v1/form/update/{formId}": { | ||
| "type": "object", | ||
| "properties": { | ||
| "responseCode": { | ||
| "type": "string" | ||
| }, | ||
| "message": { | ||
| "type": "string" | ||
| }, | ||
| "result": { | ||
| "type": "array", | ||
| "items": { | ||
| "type": "string" | ||
| } | ||
| }, | ||
| "meta": { | ||
| "type": "object", | ||
| "properties": { | ||
| "formsVersion": { | ||
| "type": "array", | ||
| "items": { | ||
| "type": "object", | ||
| "properties": { | ||
| "id": { | ||
| "type": "number" | ||
| }, | ||
| "type": { | ||
| "type": "string" | ||
| } | ||
| } | ||
| } | ||
| }, | ||
| "correlation": { | ||
| "type": "string" | ||
| }, | ||
| "meeting_platform": { | ||
| "type": "string" | ||
| } | ||
| } | ||
| } | ||
| } | ||
| }, | ||
| "POST_/mentoring/v1/form/read/{formId}": { | ||
| "type": "object", | ||
| "properties": { | ||
| "responseCode": { | ||
| "type": "string" | ||
| }, | ||
| "message": { | ||
| "type": "string" | ||
| }, | ||
| "result": { | ||
| "type": "object", | ||
| "properties": { | ||
| "id": { | ||
| "type": "number" | ||
| }, | ||
| "type": { | ||
| "type": "string" | ||
| }, | ||
| "sub_type": { | ||
| "type": "string" | ||
| }, | ||
| "data": { | ||
| "type": "object", | ||
| "properties": { | ||
| "template_name": { | ||
| "type": "string" | ||
| }, | ||
| "fields": { | ||
| "type": "object", | ||
| "properties": { | ||
| "controls": { | ||
| "type": "array", | ||
| "items": { | ||
| "type": "object", | ||
| "properties": { | ||
| "name": { | ||
| "type": "string" | ||
| }, | ||
| "label": { | ||
| "type": "string" | ||
| }, | ||
| "value": { | ||
| "type": "string" | ||
| }, | ||
| "class": { | ||
| "type": "string" | ||
| }, | ||
| "type": { | ||
| "type": "string" | ||
| }, | ||
| "position": { | ||
| "type": "string" | ||
| }, | ||
| "validators": { | ||
| "type": "object", | ||
| "properties": { | ||
| "required": { | ||
| "type": "boolean" | ||
| }, | ||
| "min_length": { | ||
| "type": "number" | ||
| } | ||
| } | ||
| }, | ||
| "disabled": { | ||
| "type": "boolean" | ||
| }, | ||
| "show_select_all": { | ||
| "type": "boolean" | ||
| } | ||
| }, | ||
| "required": [ | ||
| "name", | ||
| "label", | ||
| "value", | ||
| "class", | ||
| "type", | ||
| "position", | ||
| "validators", | ||
| "disabled", | ||
| "show_select_all" | ||
| ] | ||
| } | ||
| } | ||
| } | ||
| } | ||
| } | ||
| }, | ||
| "updated_at": { | ||
| "type": "string" | ||
| }, | ||
| "created_at": { | ||
| "type": "string" | ||
| }, | ||
| "deleted_at": { | ||
| "type": "null" | ||
| }, | ||
| "organization_id": { | ||
| "type": "string" | ||
| } | ||
| } | ||
| }, | ||
| "meta": { | ||
| "type": "object", | ||
| "properties": { | ||
| "formsVersion": { | ||
| "type": "array", | ||
| "items": { | ||
| "type": "object", | ||
| "properties": { | ||
| "id": { | ||
| "type": "number" | ||
| }, | ||
| "type": { | ||
| "type": "string" | ||
| } | ||
| } | ||
| } | ||
| }, | ||
| "correlation": { | ||
| "type": "string" | ||
| }, | ||
| "meeting_platform": { | ||
| "type": "string" | ||
| } | ||
| } | ||
| } | ||
| } | ||
| } | ||
| } |
🧩 Analysis chain
🏁 Script executed:
# Find the form.spec.js file
find . -type f -name "form.spec.js" | head -20
🏁 Script executed:
# Check the size and read the new form.spec.js
wc -l ./src/integration-tests-new/form/form.spec.js
🏁 Script executed:
cat -n ./src/integration-tests-new/form/form.spec.js
🏁 Script executed:
cat -n ./src/integration-tests-new/form/schemas/form.schemas.json | head -120
🏁 Script executed:
# Search for any test output or configuration
find . -type f \( -name "*.log" -o -name ".github" -o -name "jest.config.*" \) | head -20
🏁 Script executed:
# Create a simple test to verify Ajv behavior with required fields
node <<'EOF'
const Ajv = require('ajv')
const ajv = new Ajv({ strict: false })
const schema = {
type: "object",
properties: {
name: { type: "string" },
disabled: { type: "boolean" },
show_select_all: { type: "boolean" }
},
required: ["name", "disabled", "show_select_all"]
}
const validate = ajv.compile(schema)
// Test 1: All required fields present
const data1 = { name: "title", disabled: false, show_select_all: true }
const valid1 = validate(data1)
console.log("Test 1 (all fields present):", valid1)
// Test 2: Missing disabled and show_select_all (like title control in test)
const data2 = { name: "title" }
const valid2 = validate(data2)
console.log("Test 2 (missing optional fields):", valid2)
if (!valid2) console.log("Errors:", validate.errors)
EOF
🏁 Script executed:
# Find package.json and check ajv dependency
find . -type f -name "package.json" | head -5
🏁 Script executed:
# Check package.json for ajv and install dependencies
cat ./src/package.json | grep -A 20 "ajv"
🏁 Script executed:
# Read package.json to see all dependencies
cat ./src/package.json | head -80
🏁 Script executed:
# Check the old form schemas to compare
ls -la ./src/integration-tests/form/schemas/ 2>/dev/null || echo "No old schemas directory"
🏁 Script executed:
# Check the rest of the schema file to verify deleted_at type and required arrays
cat ./src/integration-tests-new/form/schemas/form.schemas.json | grep -A 3 "deleted_at\|\"required\":"
🏁 Script executed:
# Get full schema to verify structure and required fields at different levels
python3 <<'EOF'
import json
with open('./src/integration-tests-new/form/schemas/form.schemas.json', 'r') as f:
schemas = json.load(f)
for endpoint, schema in schemas.items():
print(f"\n=== {endpoint} ===")
print(f"Root level has 'required': {'required' in schema}")
if 'properties' in schema:
if 'result' in schema['properties']:
result = schema['properties']['result']
print(f"result has 'required': {'required' in result}")
if 'meta' in schema['properties']:
meta = schema['properties']['meta']
print(f"meta has 'required': {'required' in meta}")
print(f"deleted_at type: {schema['properties']['result']['properties'].get('deleted_at', {}).get('type', 'NOT FOUND')}")
EOF
🏁 Script executed:
# Use jq to check structure more efficiently
jq 'keys' ./src/integration-tests-new/form/schemas/form.schemas.json
🏁 Script executed:
# Check if root schema has 'required' field
jq '.["POST_/mentoring/v1/form/create"] | has("required")' ./src/integration-tests-new/form/schemas/form.schemas.json
🏁 Script executed:
# Check if result and meta objects have required fields
jq '.["POST_/mentoring/v1/form/create"].properties.result | has("required")' ./src/integration-tests-new/form/schemas/form.schemas.json
jq '.["POST_/mentoring/v1/form/create"].properties.meta | has("required")' ./src/integration-tests-new/form/schemas/form.schemas.json
🏁 Script executed:
# Verify deleted_at type one more time to be thorough
jq '.["POST_/mentoring/v1/form/create"].properties.result.properties.deleted_at.type' ./src/integration-tests-new/form/schemas/form.schemas.json
Schema validation will fail on current test payloads—align required fields and types with actual API behavior
Three issues make this schema brittle and incompatible with the test suite:
- Controls missing optional fields: The schema requires disabled and show_select_all on all controls, but test payloads (e.g., the title control in both POST create and PUT update) omit these fields. Validation will fail unless the API always returns both fields for every control, or disabled and show_select_all are removed from the required array in all three endpoints.
- Top-level objects lack required arrays: the root schema, result, and meta objects declare no required fields, so responses missing result, meta, or responseCode/message will still validate. Add required arrays to enforce the contract (at minimum ["responseCode", "message", "result"] at root level).
- deleted_at type is overly strict: deleted_at is constrained to "type": "null". If the backend ever returns a timestamp string for soft-deleted forms, all tests will fail. Consider "type": ["null", "string"] or whatever types the API actually supports.
Example fix for controls (apply to all three endpoints):
"validators": {
"type": "object",
"properties": {
"required": {
"type": "boolean"
},
"min_length": {
"type": "number"
}
}
},
"disabled": {
"type": "boolean"
},
"show_select_all": {
"type": "boolean"
}
},
- "required": [
- "name",
- "label",
- "value",
- "class",
- "type",
- "position",
- "validators",
- "disabled",
- "show_select_all"
- ]
+ "required": [
+ "name",
+ "label",
+ "value",
+ "class",
+ "type",
+ "position",
+ "validators"
+ ]
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In src/integration-tests-new/form/schemas/form.schemas.json lines 1-302, the
schema is too strict and will fail tests: remove "disabled" and
"show_select_all" from the "required" arrays for control items in all three
endpoints (POST create, PUT update, POST read); add explicit "required" arrays
at the root object level (at least ["responseCode","message","result"]) and for
"result" and "meta" objects where appropriate to enforce the contract; and relax
"deleted_at" to allow null or string (e.g., type ["null","string"]) so
timestamps won't break validation — apply these changes consistently across the
three endpoint schemas.
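A compact sketch of those three adjustments, shown as the JS object the tests would compile (the same shape, serialized, applies to the JSON file); treat the exact field list as illustrative rather than a full replacement schema.
// Stricter envelope, relaxed deleted_at, controls no longer requiring the two optional flags
const formCreateEnvelopeSketch = {
    type: 'object',
    required: ['responseCode', 'message', 'result'],
    properties: {
        responseCode: { type: 'string' },
        message: { type: 'string' },
        result: {
            type: 'object',
            properties: {
                deleted_at: { type: ['null', 'string'] }, // tolerate soft-delete timestamps
            },
        },
        meta: { type: 'object' },
    },
}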
req = req
    .set('x-auth-token', menteeDetails.token) // Use mentee's token
    .set('org-id', menteeDetails.organizations[0]) // Correctly access the org ID string
    .set('timezone', 'Asia/Calcutta') // Add timezone header
🧩 Analysis chain
🏁 Script executed:
# First, let's check the implementations of logIn and mentorLogIn in commonTests.js
rg -n -A 25 'async.*logIn\s*\(' src/integration-tests-new/commonTests.js | head -100
🏁 Script executed:
# Also check the actual lines 51-54 and 99-102 in requestSessions.specs.js to see the exact usage
cat -n src/integration-tests-new/requestSessions/requestSessions.specs.js | sed -n '50,55p'
🏁 Script executed:
# And check lines 99-102
cat -n src/integration-tests-new/requestSessions/requestSessions.specs.js | sed -n '99,103p'
🏁 Script executed:
# Search for where menteeDetails is assigned
rg -n 'menteeDetails\s*=' src/integration-tests-new/requestSessions/requestSessions.specs.js | head -20
🏁 Script executed:
# Search for where mentorDetails is assigned
rg -n 'mentorDetails\s*=' src/integration-tests-new/requestSessions/requestSessions.specs.js | head -20
🏁 Script executed:
# Search for logIn function more broadly - might be in describe blocks or beforeEach
rg -n 'logIn\|mentorLogIn' src/integration-tests-new/requestSessions/requestSessions.specs.js | head -30
🏁 Script executed:
# Check what's imported at the top of the file
head -30 src/integration-tests-new/requestSessions/requestSessions.specs.js
🏁 Script executed:
# Find the commonTests module - it's aliased with @commonTests
find . -name "*commonTest*" -type f | grep -v node_modules
🏁 Script executed:
# Check jest config to understand the @ alias
cat jest.config.js 2>/dev/null || cat package.json | grep -A 10 jest
🏁 Script executed:
# Find the logIn and mentorLogIn functions in the new commonTests.js
rg -n 'const logIn|function logIn|async logIn|mentorLogIn' src/integration-tests-new/commonTests.js
🏁 Script executed:
# Get line count to understand file size
wc -l src/integration-tests-new/commonTests.js
🏁 Script executed:
# Read the logIn function
sed -n '9,61p' src/integration-tests-new/commonTests.js
🏁 Script executed:
# Read the mentorLogIn function
sed -n '62,124p' src/integration-tests-new/commonTests.js
🏁 Script executed:
# Search for other usages of organizations in test files to see which pattern is correct
rg -n '\.organizations' src/integration-tests-new/ | grep -v node_modules | head -30
🏁 Script executed:
# Also check if there are any database schema files or API documentation
find . -name "*user*" -type f \( -name "*.schema*" -o -name "*model*" \) | grep -v node_modules | head -20
🏁 Script executed:
# Check the users schema to see the organizations structure
cat src/integration-tests-new/users/schemas/users.schemas.json | head -100
🏁 Script executed:
# Also search for organization model or schema in the main codebase
find . -path ./node_modules -prune -o -name "*.js" -type f -exec grep -l "organization.*schema\|organizationId" {} \; | head -10
🏁 Script executed:
# Check entity-type.specs.js to see the consistent correct pattern
sed -n '20,30p' src/integration-tests-new/entity-type/entity-type.specs.js
🏁 Script executed:
# Confirm the pattern is consistent throughout entity-type.specs.js
rg -B2 -A2 'organizations\[0\]' src/integration-tests-new/entity-type/entity-type.specs.js | head -40
Fix incorrect organizations access pattern for mentee in org-id header.
Lines 53 and 147 use menteeDetails.organizations[0] as a direct string, but lines 101 and 169 use mentorDetails.organizations[0].id.toString() with an .id property. Since both logIn() and mentorLogIn() return identical structures from the same API endpoint, organizations[0] is an object with an .id property (not a raw string).
The entity-type.specs.js file confirms this correct pattern is used consistently for both mentee and mentor across all requests. Update lines 53 and 147 to use .id.toString() to match lines 101 and 169:
.set('org-id', menteeDetails.organizations[0].id.toString())
🤖 Prompt for AI Agents
In src/integration-tests-new/requestSessions/requestSessions.specs.js around
lines 51 to 54 and also update the similar instance at line 147, change the
org-id header usage because menteeDetails.organizations[0] is an object with an
id property; replace the current access that treats it as a string with code
that reads the id and converts it to a string (use the same pattern as
mentorDetails.organizations[0].id.toString()), ensuring all requests set
'org-id' using the .id.toString() accessor so both mentee and mentor use the
same correct structure.
"id": {
    "type": "number"
},
🧩 Analysis chain
🏁 Script executed:
cat -n src/integration-tests-new/requestSessions/schemas/requestSessions.schemas.json | head -250 | tail -230
🏁 Script executed:
find src -type f \( -name "*.js" \) | grep -i "requestSessions" | head -20
🏁 Script executed:
cat -n src/controllers/v1/requestSessions.js | head -100
🏁 Script executed:
cat -n src/services/requestSessions.js | head -200
🏁 Script executed:
cat -n src/services/requestSessions.js | sed -n '200,400p'
🏁 Script executed:
rg -n "getDetails\|getSessionRequestDetails" src/services/requestSessions.js
🏁 Script executed:
rg -n "async" src/services/requestSessions.js | head -20
🏁 Script executed:
cat -n src/services/requestSessions.js | sed -n '629,720p'
🏁 Script executed:
rg -A 10 "id.*DataType\|id.*INTEGER\|id.*type" src/database/models/requestSessions.js | head -30
🏁 Script executed:
cat -n src/services/requestSessions.js | sed -n '42,192p' | grep -A 20 "SessionRequestResult"
Inconsistent id field type across endpoints requires fix.
The id field type is inconsistent across endpoints: POST_mentoring_v1_requestSessions_create returns id as number (line 21), while GET_mentoring_v1_requestSessions_list (line 113) and GET_mentoring_v1_requestSessions_getDetails (line 238) return id as string. The service layer explicitly converts id to string in the list and getDetails endpoints (via String(session.id) and toString()), but the create endpoint returns the raw database value without conversion. Align the create endpoint to apply the same string conversion for consistency.
🤖 Prompt for AI Agents
In
src/integration-tests-new/requestSessions/schemas/requestSessions.schemas.json
around lines 20 to 22, the create response schema declares "id" as number while
other endpoints use string; update the schema to declare "id" as "string" and
then modify the create endpoint implementation to convert the DB-generated id to
a string before returning (e.g., replace raw id with String(id) or
id.toString()) so all endpoints consistently return id as a string.
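A minimal sketch of the conversion the comment asks for; the surrounding create flow and row shape here are simplified assumptions, and only the String() call mirrors what the list and getDetails paths already do. On the schema side, the matching change is declaring "id" as "string" in the create response schema.

// Sketch only — the row and response shapes are simplified assumptions.
function toCreateResponse(sessionRequestRow) {
	return {
		...sessionRequestRow,
		// Align with list/getDetails, which already return the id as a string
		id: String(sessionRequestRow.id),
	}
}

console.log(toCreateResponse({ id: 42, status: 'REQUESTED' }))
// -> { id: '42', status: 'REQUESTED' }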
{
	"POST_mentoring_v1_role-extension_create": {
		"type": "object",
		"properties": {
			"responseCode": {
				"type": "string"
			},
			"message": {
				"type": "string"
			},
			"result": {
				"type": "object",
				"properties": {
					"created_at": {
						"type": "string"
					},
					"updated_at": {
						"type": "string"
					},
					"status": {
						"type": "string"
					},
					"title": {
						"type": "string"
					},
					"label": {
						"type": "string"
					},
					"scope": {
						"type": "string"
					},
					"organization_id": {
						"type": "string"
					},
					"id": {
						"type": "number"
					},
					"deleted_at": {
						"type": "null"
					}
				}
			},
			"meta": {
				"type": "object",
				"properties": {
					"formsVersion": {
						"type": "array",
						"items": {
							"type": "object",
							"properties": {
								"id": {
									"type": "number"
								},
								"type": {
									"type": "string"
								},
								"version": {
									"type": "number"
								}
							},
							"required": ["id", "type", "version"]
						}
					},
					"correlation": {
						"type": "string"
					},
					"meetingPlatform": {
						"type": "string"
					}
				}
			}
		}
	},
🧹 Nitpick | 🔵 Trivial
The schema structure looks good; consider adding required constraints for stricter validation.
The schema for POST_mentoring_v1_role-extension_create defines all expected properties but lacks a required array at the top level and within the result object. This means an empty response {} would pass validation. Consider adding required arrays if strict validation is desired.
Example addition for stricter validation:
{
"POST_mentoring_v1_role-extension_create": {
"type": "object",
"properties": {
"responseCode": {
"type": "string"
},
...
},
+ "required": ["responseCode", "message", "result"]
},
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
{
	"POST_mentoring_v1_role-extension_create": {
		"type": "object",
		"properties": {
			"responseCode": {
				"type": "string"
			},
			"message": {
				"type": "string"
			},
			"result": {
				"type": "object",
				"properties": {
					"created_at": {
						"type": "string"
					},
					"updated_at": {
						"type": "string"
					},
					"status": {
						"type": "string"
					},
					"title": {
						"type": "string"
					},
					"label": {
						"type": "string"
					},
					"scope": {
						"type": "string"
					},
					"organization_id": {
						"type": "string"
					},
					"id": {
						"type": "number"
					},
					"deleted_at": {
						"type": "null"
					}
				}
			},
			"meta": {
				"type": "object",
				"properties": {
					"formsVersion": {
						"type": "array",
						"items": {
							"type": "object",
							"properties": {
								"id": {
									"type": "number"
								},
								"type": {
									"type": "string"
								},
								"version": {
									"type": "number"
								}
							},
							"required": ["id", "type", "version"]
						}
					},
					"correlation": {
						"type": "string"
					},
					"meetingPlatform": {
						"type": "string"
					}
				}
			}
		},
		"required": ["responseCode", "message", "result"]
	},
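To make the effect of the missing required array concrete, here is a small check using Ajv; whether this suite actually uses Ajv is an assumption, but any JSON Schema validator behaves the same way for this case:

// Sketch only — Ajv stands in for whatever validator the suite uses.
const Ajv = require('ajv')
const ajv = new Ajv()

const withoutRequired = {
	type: 'object',
	properties: {
		responseCode: { type: 'string' },
		message: { type: 'string' },
		result: { type: 'object' },
	},
}

const withRequired = { ...withoutRequired, required: ['responseCode', 'message', 'result'] }

console.log(ajv.validate(withoutRequired, {})) // true  — an empty response passes
console.log(ajv.validate(withRequired, {})) // false — missing keys are now reported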
…setup-2 updated the circleci file
Release Notes: Circle CI New Integration Tests
Overview
Comprehensive integration test infrastructure for the mentoring API with 28 test modules and 28 corresponding JSON schema files, covering 30+ API endpoints across all major feature areas.
Key Additions
Integration Test Suite (28 spec files, 3,756 lines)
JSON Schema Validation Framework (28 schema files, 7,426 lines)
Test Infrastructure
Shared test helpers (src/integration-tests-new/commonTests.js) with logIn() and mentorLogIn()
Configuration Updates
Test spec pattern: integration-tests-new/**/*.specs.js
Module aliases: @helpers and @utils
Dev script: dev: node --inspect=0.0.0.0:9229 app.js (see the configuration sketch after this list)
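A sketch of how the configuration items above could fit together if the suite is driven by Jest; the use of Jest and the alias target paths are assumptions, not details confirmed by this PR. The dev script itself would sit in package.json scripts as "dev": "node --inspect=0.0.0.0:9229 app.js".

// jest.config.js — sketch only; the alias target paths are illustrative.
module.exports = {
	// Pick up the new integration specs
	testMatch: ['**/integration-tests-new/**/*.specs.js'],
	// Resolve the @helpers and @utils aliases
	moduleNameMapper: {
		'^@helpers/(.*)$': '<rootDir>/src/helpers/$1',
		'^@utils/(.*)$': '<rootDir>/src/utils/$1',
	},
}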
CircleCI Changes
Statistics
Author
Test Coverage by Domain