fix: update to handle more than 3 envs #181
Conversation

suthar26 commented Dec 18, 2025
- Fixes the issue where more than 3 envs are added to the feature query params, even though 3 is the maximum allowed per request
    .join('&')
url += `&${envParams}`
const urlWithEnvParams = `${url}&${envParams}`
const response = await fetch(urlWithEnvParams, {
Check warning (Code scanning / CodeQL): File data in outbound network request (Medium)
const urlWithEnvParams = `${url}&${envParams}`
const response = await fetch(urlWithEnvParams, {
    method: 'GET',
    headers,
Check warning (Code scanning / CodeQL): File data in outbound network request (Medium)
Pull request overview
This PR addresses an issue where the LaunchDarkly API has a limitation of 3 environments maximum per feature flags request. The implementation now batches requests to handle projects with more than 3 environments.
Key changes:
- Modified getFeatureFlagsForProject to split environments into batches of 3 and make multiple API calls
- Added a 150ms delay between batched requests to prevent rate limiting
- Changed the return type from a wrapped response object to a direct array of feature items
- Updated all tests and mocks to match the new return type signature
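The batching described above can be sketched as a small helper: split the project's environment keys into groups of at most 3 (the per-request maximum this PR works around) and build one `env=` query-string fragment per group. This is an illustrative sketch, not the PR's actual code; the names chunkEnvironments and buildEnvParams are hypothetical.

```typescript
// Maximum number of environments the API accepts per feature-flags request.
const MAX_ENVS_PER_REQUEST = 3

// Split environment keys into chunks of at most `size` entries.
function chunkEnvironments(envKeys: string[], size: number = MAX_ENVS_PER_REQUEST): string[][] {
  const chunks: string[][] = []
  for (let i = 0; i < envKeys.length; i += size) {
    chunks.push(envKeys.slice(i, i + size))
  }
  return chunks
}

// Each chunk becomes one query-string fragment for a single API call.
function buildEnvParams(chunk: string[]): string {
  return chunk.map((env) => `env=${encodeURIComponent(env)}`).join('&')
}
```

With 7 environments this yields 3 chunks (3 + 3 + 1), so the importer makes 3 API calls and merges the results.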
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| src/api/LDApiWrapper.ts | Implements batching logic to split environments into groups of 3, makes multiple API calls with delays, and merges results |
| src/api/LDApiWrapper.test.ts | Updates test expectations for new batching behavior, adds comprehensive test for 7-environment scenario, adjusts empty environment edge case tests |
| src/resources/features/LDFeatureImporter.ts | Removes destructuring of items property since method now returns array directly |
| src/resources/features/LDFeatureImporter.test.ts | Updates mocks to return items array directly instead of wrapped object, removes unused import |
const mergedItems = responses.flatMap((response) => response.items || [])

return mergedItems
Copilot AI commented Dec 18, 2025
The current implementation will return duplicate feature flags when multiple batches are needed. Each batch request returns all flags for the project, but with different environment configurations based on the env parameters. Simply concatenating the items from all batches will create duplicates. The implementation should deduplicate by feature key and merge the environments objects from each batch. For example, if batch 1 returns flag-1 with environments dev, staging, prod and batch 2 returns flag-1 with environments qa, test, demo, the final result should contain flag-1 once with all six environments merged.
Suggested change:

Replace:
const mergedItems = responses.flatMap((response) => response.items || [])
return mergedItems

With:
const allItems = responses.flatMap((response) => response.items || [])
const itemsByKey: Record<string, any> = {}
for (const item of allItems) {
  const key = item?.key
  if (!key) {
    continue
  }
  if (!itemsByKey[key]) {
    // Clone the item and its environments object so we can safely mutate when merging.
    itemsByKey[key] = {
      ...item,
      environments: item.environments
        ? { ...item.environments }
        : item.environments,
    }
  } else {
    const existing = itemsByKey[key]
    const existingEnvs =
      existing.environments && typeof existing.environments === 'object'
        ? existing.environments
        : {}
    const newEnvs =
      item.environments && typeof item.environments === 'object'
        ? item.environments
        : {}
    existing.environments = {
      ...existingEnvs,
      ...newEnvs,
    }
  }
}
return Object.values(itemsByKey)
if (i < environmentChunks.length - 1) {
    await new Promise((resolve) => setTimeout(resolve, 150))
Copilot AI commented Dec 18, 2025
The 150ms delay between batches may not be sufficient to avoid rate limiting. Consider making this configurable or documenting why this specific value was chosen. Different LaunchDarkly plans have different rate limits, and 150ms may be too aggressive for some configurations while being overly conservative for others.
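One way to address this feedback is to make the delay an option and add a simple exponential backoff on HTTP 429 responses, so the client adapts to whatever rate limit the account's plan enforces. This is a hypothetical sketch, not code from the PR; BatchOptions, fetchWithBackoff, and the default values are all illustrative.

```typescript
// Illustrative options: a configurable base delay instead of a hard-coded 150ms.
interface BatchOptions {
  delayBetweenBatchesMs?: number
  maxRetries?: number
}

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms))

// Retries a request when the server answers 429 (rate limited), doubling the
// wait each attempt: base, 2x base, 4x base, up to maxRetries retries.
async function fetchWithBackoff(
  doFetch: () => Promise<{ status: number }>,
  { delayBetweenBatchesMs = 150, maxRetries = 3 }: BatchOptions = {},
): Promise<{ status: number }> {
  let attempt = 0
  while (true) {
    const response = await doFetch()
    if (response.status !== 429 || attempt >= maxRetries) {
      return response
    }
    await sleep(delayBetweenBatchesMs * 2 ** attempt)
    attempt += 1
  }
}
```

A fixed inter-batch delay then becomes just the base value of this backoff, and callers with stricter plans can raise it without code changes.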
fetchMock.mockResolvedValueOnce({
    ok: true,
    status: 200,
    json: async () => ({ items: [{ key: 'flag-1' }] })
})
fetchMock.mockResolvedValueOnce({
    ok: true,
    status: 200,
    json: async () => ({ items: [{ key: 'flag-2' }] })
})
fetchMock.mockResolvedValueOnce({
    ok: true,
    status: 200,
    json: async () => ({ items: [{ key: 'flag-3' }] })
})

const result = await ldApi.getFeatureFlagsForProject(mockProjectKey)

expect(result).toHaveLength(3) // Merged from 3 batches
Copilot AI commented Dec 18, 2025
This test expects different flags from each batch (flag-1, flag-2, flag-3), but in reality, the LaunchDarkly API returns all flags for a project in each call, with only the environment configurations filtered by the env parameters. Each batch would return the same set of flags with different environment configurations. The test should mock all three batches returning the same flags but with different environment keys in their environments objects, and verify that the implementation properly deduplicates and merges them.
Suggested change:

Replace:
fetchMock.mockResolvedValueOnce({
  ok: true,
  status: 200,
  json: async () => ({ items: [{ key: 'flag-1' }] })
})
fetchMock.mockResolvedValueOnce({
  ok: true,
  status: 200,
  json: async () => ({ items: [{ key: 'flag-2' }] })
})
fetchMock.mockResolvedValueOnce({
  ok: true,
  status: 200,
  json: async () => ({ items: [{ key: 'flag-3' }] })
})
const result = await ldApi.getFeatureFlagsForProject(mockProjectKey)
expect(result).toHaveLength(3) // Merged from 3 batches

With:
// Each batch returns the same flag, but with different environment configurations.
fetchMock.mockResolvedValueOnce({
  ok: true,
  status: 200,
  json: async () => ({
    items: [
      {
        key: 'flag-1',
        environments: {
          dev: {},
          staging: {},
          prod: {}
        }
      }
    ]
  })
})
fetchMock.mockResolvedValueOnce({
  ok: true,
  status: 200,
  json: async () => ({
    items: [
      {
        key: 'flag-1',
        environments: {
          qa: {},
          test: {},
          demo: {}
        }
      }
    ]
  })
})
fetchMock.mockResolvedValueOnce({
  ok: true,
  status: 200,
  json: async () => ({
    items: [
      {
        key: 'flag-1',
        environments: {
          sandbox: {}
        }
      }
    ]
  })
})
const result = await ldApi.getFeatureFlagsForProject(mockProjectKey)
// After deduplication, there should be a single flag with merged environments.
expect(result).toHaveLength(1)
const mergedFlag = result[0] as any
expect(mergedFlag.key).toBe('flag-1')
expect(Object.keys(mergedFlag.environments)).toHaveLength(7)
expect(Object.keys(mergedFlag.environments)).toEqual(
  expect.arrayContaining(['dev', 'staging', 'prod', 'qa', 'test', 'demo', 'sandbox'])
)
fetchMock.mockResolvedValueOnce({
    ok: true,
    status: 200,
    json: async () => ({ items: [{ key: 'flag-1' }] })
})
fetchMock.mockResolvedValueOnce({
    ok: true,
    status: 200,
    json: async () => ({ items: [{ key: 'flag-2' }] })
})
fetchMock.mockResolvedValueOnce({
    ok: true,
    status: 200,
    json: async () => ({ items: [{ key: 'flag-3' }] })
})

const result = await ldApi.getFeatureFlagsForProject(mockProjectKey)

expect(result).toHaveLength(3) // Merged from 3 batches
Copilot AI commented Dec 18, 2025
This assertion expects 3 items total, but since the LaunchDarkly API returns all flags in each batch regardless of env parameters, and the implementation simply concatenates responses, the actual result length would depend on how the API behaves. If each batch returns the same N flags, the result would have 3*N items (with duplicates). This test passes only because the mock returns different flags per batch, which doesn't reflect actual API behavior.
Suggested change:

Replace:
fetchMock.mockResolvedValueOnce({
  ok: true,
  status: 200,
  json: async () => ({ items: [{ key: 'flag-1' }] })
})
fetchMock.mockResolvedValueOnce({
  ok: true,
  status: 200,
  json: async () => ({ items: [{ key: 'flag-2' }] })
})
fetchMock.mockResolvedValueOnce({
  ok: true,
  status: 200,
  json: async () => ({ items: [{ key: 'flag-3' }] })
})
const result = await ldApi.getFeatureFlagsForProject(mockProjectKey)
expect(result).toHaveLength(3) // Merged from 3 batches

With:
// Each batch returns the same set of flags, matching LaunchDarkly behavior
const mockBatchFlags = [{ key: 'flag-1' }, { key: 'flag-2' }]
fetchMock.mockResolvedValueOnce({
  ok: true,
  status: 200,
  json: async () => ({ items: mockBatchFlags })
})
fetchMock.mockResolvedValueOnce({
  ok: true,
  status: 200,
  json: async () => ({ items: mockBatchFlags })
})
fetchMock.mockResolvedValueOnce({
  ok: true,
  status: 200,
  json: async () => ({ items: mockBatchFlags })
})
const result = await ldApi.getFeatureFlagsForProject(mockProjectKey)
expect(result).toHaveLength(3 * mockBatchFlags.length) // Concatenated from 3 batches