
Bug fix/sda fabric devices pcg #409

Open
koderchit wants to merge 4 commits into cisco-en-programmability:main from
koderchit:bug-fix/sda-fabric-devices-pcg

Conversation

@koderchit koderchit commented Apr 10, 2026

Type of Change

  • Bug fix
  • New feature
  • Breaking change
  • Documentation update

fabric_devices filter must be a list of dicts

Summary:
The fabric_devices field under component_specific_filters was typed and handled as a single dictionary. The brownfield_helper validation layer enforces a list type for all component filters, so any playbook supplying fabric_devices as a dict would fail immediately with "Component 'fabric_devices' filters must be a list".

Steps to Reproduce:

config:
  component_specific_filters:
    components_list: ["fabric_devices"]
    fabric_devices:
      fabric_name: "Global/USA/SAN-JOSE"
      device_roles: ["BORDER_NODE", "CONTROL_PLANE_NODE"]

Observed Behavior:

Component 'fabric_devices' filters must be a list

Expected Behavior:
fabric_devices should be accepted as a list of filter dictionaries:

fabric_devices:
  - fabric_name: "Global/USA/SAN-JOSE"
    device_roles: ["BORDER_NODE", "CONTROL_PLANE_NODE"]

Root Cause Analysis:
fabric_devices was declared as type: dict in the module DOCUMENTATION and processed via a single .get() call in get_fabric_devices_configuration, conflicting with validate_component_specific_filters which expects a list of dicts.

Fix Details:

  • Changed fabric_devices type from dict to list / elements: dict in DOCUMENTATION.
  • Updated get_fabric_devices_configuration to iterate over the list of filter entries; invalid/not-found entries are skipped with a warning so other valid entries in the same list are still processed.
  • Updated all EXAMPLES in the module and playbooks/sda_fabric_devices_playbook_config_generator.yaml to use list syntax.
  • Updated all fixture entries in tests/unit/modules/dnac/fixtures/sda_fabric_devices_playbook_config_generator.json to list format.
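
The loop-based handling described above can be sketched roughly as follows. This is a minimal standalone approximation, not the module's actual code: `build_fabric_query_params`, `resolve_fabric_id`, and `log` are hypothetical stand-ins for the real helpers in `get_fabric_devices_configuration`.

```python
def build_fabric_query_params(filter_entries, resolve_fabric_id, log):
    """Collect query parameters for each filter entry; unresolvable
    entries are skipped with a warning so the remaining valid entries
    in the same list are still processed."""
    params_list = []
    for filter_idx, entry in enumerate(filter_entries, start=1):
        fabric_id = resolve_fabric_id(entry.get("fabric_name"))
        if not fabric_id:
            # Skip this entry but keep processing the rest of the list.
            log("Fabric site '{0}' not found. Skipping filter entry {1}.".format(
                entry.get("fabric_name"), filter_idx))
            continue
        params = dict(entry)
        params["fabric_id"] = fabric_id
        params_list.append(params)
    return params_list
```

Because each entry is resolved independently, one bad `fabric_name` no longer aborts the whole request, which is what enables multiple fabric site filters in a single invocation.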

Impact:
Breaking change for existing playbooks using fabric_devices as a dict. All consumers must update to list syntax. The new loop-based approach also enables multiple fabric site filters in a single module invocation.


fabric_name required parameter not enforced in fabric_devices filter

Summary:
fabric_name is documented as required: true under fabric_devices in component_specific_filters, but the module executed without error when it was omitted, failing silently later at the API level.

Steps to Reproduce:

config:
  component_specific_filters:
    components_list: ["fabric_devices"]
    fabric_devices:
      - device_ip: "10.0.0.1"

Observed Behavior:
Module executes successfully with no validation error, then fails at the API level:

Sda.get_fabric_devices() missing 1 required positional argument: 'fabric_id'

Expected Behavior:

Component 'fabric_devices' filter entry 1/1 is missing required filter 'fabric_name'

Root Cause Analysis:
validate_component_specific_filters in brownfield_helper.py only iterated over filters that were present in the submitted entry. It never checked for filters marked "required": True that were absent.

Fix Details:
Added a required-field check in validate_component_specific_filters before the per-filter validation loop. For each filter entry, the code now iterates valid_filters_for_component and appends an error for any filter spec with "required": True that is missing from the entry.
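
The check described above can be sketched as a standalone function (an approximation only; the real code lives inside validate_component_specific_filters in brownfield_helper.py and works on the module's own data structures):

```python
def missing_required_filters(component_name, component_filters, valid_filters_for_component):
    """Return an error string for every filter spec marked required
    that is absent from a submitted filter entry."""
    errors = []
    for index, entry in enumerate(component_filters, start=1):
        for filter_name, filter_spec in valid_filters_for_component.items():
            if filter_spec.get("required", False) and filter_name not in entry:
                errors.append(
                    "Component '{0}' filter entry {1}/{2} is missing required filter '{3}'".format(
                        component_name, index, len(component_filters), filter_name
                    )
                )
    return errors
```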

Impact:
Any fabric_devices filter entry omitting fabric_name is now rejected at validation time with a clear error message. The fix is in the shared brownfield_helper.py and applies to all modules using validate_component_specific_filters.


Non-existent fabric_name generates empty config instead of warning

Summary:
When a fabric_name that does not exist in Cisco Catalyst Center is supplied, the module silently produced an empty configuration (fabric_devices: []) rather than reporting that no matching data was found.

Steps to Reproduce:

config:
  component_specific_filters:
    components_list: ["fabric_devices"]
    fabric_devices:
      - fabric_name: "Global/USA/New Yorks"   # Non-existent fabric_name

Observed Behavior:

config:
  - fabric_devices: []

Expected Behavior:

No configurations found for module 'sda_fabric_devices_workflow_manager'.
Verify filters and component availability.

Root Cause Analysis:
get_fabric_devices_configuration had two code paths that returned a truthy value when no data was found:

  1. When all filter entries were skipped (unresolvable fabric_name), execution continued past an empty fabric_devices_params_list_to_query and returned a result with an empty list.
  2. When the API returned zero devices, the method returned {"fabric_devices": []} instead of None.

Because yaml_config_generator only skips components when component_data is falsy, these truthy-but-empty returns were appended to final_config_list and written to the YAML file.

Fix Details:
Three changes in get_fabric_devices_configuration:

  1. Added an early-return None guard when fabric_devices_params_list_to_query is empty after processing all filter entries.
  2. Changed return {"fabric_devices": []} (no devices found) to return None.
  3. Added a None return guard after modify_parameters yields no transformed results.
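
The three guards amount to returning None instead of a truthy-but-empty dict at every no-data exit, roughly like this (a simplified standalone sketch; `fetch_devices` and `transform` are hypothetical callables standing in for the API call and modify_parameters):

```python
def collect_fabric_devices(params_list, fetch_devices, transform):
    """Return None at every no-data exit so the caller's falsy check
    (skip component when component_data is falsy) works as intended."""
    if not params_list:
        # Guard 1: every filter entry was skipped during processing.
        return None
    devices = []
    for params in params_list:
        devices.extend(fetch_devices(params) or [])
    if not devices:
        # Guard 2: the API returned zero devices.
        return None
    transformed = transform(devices)
    if not transformed:
        # Guard 3: parameter transformation yielded no results.
        return None
    return {"fabric_devices": transformed}
```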

Impact:
Users providing an invalid or non-existent fabric_name now receive a clear status message instead of a misleading empty output file.


component_list missing from valid-components list in error message

Summary:
When an invalid key is provided under component_specific_filters, the error message did not include component_list in the valid-components hint. Since component_list is a valid and expected key, the error gave users an incomplete picture of accepted keys.

Steps to Reproduce:

config:
  component_specific_filters:
    components_lists: ["fabric_devices"]

Observed Behavior:

Invalid network components provided for module 'sda_fabric_devices_workflow_manager': ['components_lists'].
Valid components are: ['fabric_devices']

Expected Behavior:

Invalid network components provided for module 'sda_fabric_devices_workflow_manager': ['components_lists'].
Valid components are: ['fabric_devices', 'component_list']

Root Cause Analysis:
validate_component_specific_filters built the valid-components hint using list(network_elements.keys()). Because component_list is extracted and handled separately before the component loop, it was never included in network_elements and never appeared in the error message.

Fix Details:
The valid-components list is now built as:

valid_components = list(network_elements.keys()) + ["component_list"]

Impact:
Users who provide an invalid key under component_specific_filters now see component_list in the valid-components hint. The fix applies to all modules using validate_component_specific_filters.


Testing Done

  • Manual testing
  • Unit tests
  • Integration tests

Test cases covered:

  • Bug 1 (list-of-dicts filter): all 10 unit tests pass (cases 2–10 cover fabric_name only, device_ip, single/multiple device_roles, all filters combined, explicit components_list, and file_mode: append).
  • Bug 2 (required fabric_name): all 10 existing unit tests pass — each provides fabric_name and is unaffected by the new required-field check.
  • Bug 3 (empty-config guard): all existing unit tests pass unchanged — they all supply valid filter values.
  • Bug 4 (valid-components hint): all existing unit tests pass unchanged — the fix is additive with no impact on existing logic.

Checklist

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • Any dependent changes have been merged and published in downstream modules
  • All the sanity checks have been completed and the sanity test cases have been executed

Ansible Best Practices

  • Tasks are idempotent (can be run multiple times without changing state)
  • Variables and secrets are handled securely (e.g., using ansible-vault or environment variables)
  • Playbooks are modular and reusable
  • Handlers are used for actions that need to run on change

Documentation

  • All options and parameters are documented clearly.
  • Examples are provided and tested.
  • Notes and limitations are clearly stated.

Screenshots (if applicable)

N/A

@koderchit koderchit requested a review from madhansansel as a code owner April 10, 2026 12:44
- Used to narrow down which fabric sites and devices should be included in the generated YAML file.
- If no filters are provided, all fabric devices from all fabric sites in Cisco Catalyst Center will be retrieved.
type: dict
- Each list entry targets a specific fabric site and optionally narrows down by device IP or roles.
Collaborator:

Can we document the AND and OR logic? Please review and update if required:

            - Within a single entry, all specified filters are combined using AND
              logic. Omitting a filter means no restriction on that attribute.
            - Multiple entries are combined using OR logic, allowing retrieval from
              different fabric sites in a single invocation.

self.log(
    f"Fabric site '{fabric_name}' not found in Cisco Catalyst Center. Skipping filter entry {filter_idx}.",
    "WARNING",
)
Collaborator:

When a filter entry is skipped (fabric not found, device not found), the code logs a WARNING and continues, but the user never sees these warnings in the module result. Can we collect the skipped entries and include them in the final result['response'] message or result['warnings'] so the user knows which entries were silently dropped? Is that a good idea, or too much overhead? Can you check with @DNACENSolutions / team and update?

I see that in many places we log with "WARNING" in log messages, but these are not captured in the final result.
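
One possible shape for this suggestion (a hypothetical sketch only, not the module's current code — `build_result` and the `(index, reason)` tuples are illustrative):

```python
def build_result(response, skipped_entries):
    """Attach notes about skipped filter entries to the module result
    so users see them without reading the debug log."""
    result = {"response": response}
    if skipped_entries:
        result["warnings"] = [
            "Filter entry {0} skipped: {1}".format(idx, reason)
            for idx, reason in skipped_entries
        ]
    return result
```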

device_info_map = self.get_device_list(device_list_params)
if not device_info_map or device_ip not in device_info_map:
    self.log(
        f"Device with IP '{device_ip}' not found in Cisco Catalyst Center. Skipping filter entry {filter_idx}.",
Collaborator:

Same as above..

params_for_query["networkDeviceId"] = network_device_id
if not params_for_query:
    self.log(
        f"No valid filters provided for filter entry {filter_idx}, skipping.",
Collaborator:

same as above

    comp for comp in components_list if comp not in network_elements
]
if invalid_components:
    valid_components = list(network_elements.keys()) + ["component_list"]
Collaborator:

Is it "component_list" or "components_list"? In line number 290, we are using components_list. Can you check and update?

if req_filter_spec.get("required", False) and req_filter_name not in component_filter:
    invalid_filters.append(
        "Component '{0}' filter entry {1}/{2} is missing required filter '{3}'".format(
            component_name, index, len(component_filters), req_filter_name
Collaborator:

Is it correct to pass index, len(component_filters)?

If len(component_filters) is 5 and index starts at 0, the messages read 0/5, 1/5, 2/5, 3/5, 4/5, which may mislead users. Can you please check? If index starts at 1 it's fine; otherwise use index + 1, len(component_filters).
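
For what it's worth, enumerate with start=1 sidesteps the off-by-one entirely:

```python
component_filters = [{"fabric_name": "A"}, {"fabric_name": "B"}]

# enumerate(..., start=1) yields human-friendly one-based labels.
labels = [
    "{0}/{1}".format(index, len(component_filters))
    for index, _entry in enumerate(component_filters, start=1)
]
# labels == ["1/2", "2/2"]
```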

for filter_name, filter_value in component_filter.items():
    self.log(
        "Processing filter '{0}' in entry {1}/{2} for component '{3}': value={4}".format(
            filter_name, index, len(component_filters), component_name, filter_value
Collaborator:

Same as above — we need to check the entire file.

  fabric_name: "Global/India/Bangalore"
  device_roles: ["BORDER_NODE"]
- fabric_name: "Global/India/Bangalore"
  device_roles: ["BORDER_NODE"]
Collaborator:

I don't see an example for multiple fabric sites. Can we add one?

# Example 8: Generate configuration for devices from multiple fabric sites
- name: Generate configuration from multiple fabric sites
  hosts: dnac_servers
  vars_files:
    - credentials.yml
  gather_facts: false
  connection: local
  tasks:
    - name: Export fabric devices from two fabric sites
      cisco.dnac.sda_fabric_devices_playbook_config_generator:
        dnac_host: "{{ dnac_host }}"
        dnac_port: "{{ dnac_port }}"
        dnac_username: "{{ dnac_username }}"
        dnac_password: "{{ dnac_password }}"
        dnac_verify: "{{ dnac_verify }}"
        dnac_debug: "{{ dnac_debug }}"
        dnac_version: "{{ dnac_version }}"
        dnac_log: true
        dnac_log_level: DEBUG
        dnac_log_append: false
        dnac_log_file_path: "{{ dnac_log_file_path }}"
        state: gathered
        config:
          component_specific_filters:
            components_list: ["fabric_devices"]
            fabric_devices:
              - fabric_name: "Global/USA/SAN-JOSE"
                device_roles: ["BORDER_NODE"]
              - fabric_name: "Global/India/Bangalore"
                device_roles: ["EDGE_NODE"]

}
]
}
},
Collaborator:

Shall we add a test case?

    "filter_multi_fabric_sites_case_11": {
        "component_specific_filters": {
            "fabric_devices": [
                {
                    "fabric_name": "Global/Site_India/Karnataka/Bangalore",
                    "device_roles": ["EDGE_NODE"]
                },
                {
                    "fabric_name": "Global/India/Telangana/Hyderabad/BLD_1",
                    "device_roles": ["CONTROL_PLANE_NODE"]
                }
            ]
        }
    }
