fix: unload entry if runtime_data is missing on device removal #92
Conversation
Walkthrough: Add an early return in async_remove_config_entry_device to handle config entries that lack runtime_data.
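A minimal sketch of the guard this walkthrough describes is shown below. It is not the repository's exact code: the DOMAIN constant, the coordinator lookup via entry.runtime_data.get_device, and the identifier handling are assumptions pieced together from the review comments further down.

```python
# Hedged sketch of the early-return guard; illustrative only, not the actual
# custom_components/renogy/__init__.py implementation.
from homeassistant.config_entries import ConfigEntry
from homeassistant.core import HomeAssistant
from homeassistant.helpers import device_registry as dr

DOMAIN = "renogy"  # assumed integration domain


async def async_remove_config_entry_device(
    hass: HomeAssistant, entry: ConfigEntry, device: dr.DeviceEntry
) -> bool:
    """Return True when Home Assistant may delete the device from the registry."""
    # Early return: if setup never attached runtime_data, there is nothing to
    # check against, so allow the removal instead of raising AttributeError.
    if getattr(entry, "runtime_data", None) is None:
        return True

    coordinator = entry.runtime_data
    # Keep the device only while at least one of its Renogy identifiers still
    # resolves to live data in the coordinator.
    return not any(
        coordinator.get_device(device_id) is not None
        for domain, device_id in device.identifiers
        if domain == DOMAIN
    )
```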
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (5 passed)
Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure. 🔧 Pylint (4.0.4): tests/test_init.py
Actionable comments posted: 2
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- custom_components/renogy/__init__.py (1 hunks)
- requirements_test.txt (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests (3.13)
🔇 Additional comments (1)
requirements_test.txt (1)
13-13: The pycares version constraint is appropriate and necessary. The constraint prevents compatibility issues with aiodns that arise from pycares 5.0, which is a valid reason for pinning. The version range >=4.11.0,<5.0.0 is appropriate: 4.11.0 supports Python 3.13 and no known security vulnerabilities exist in pycares. If this change is related to device removal via DNS/aiodns functionality, the constraint is relevant to the fix.
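For reference, the pin discussed here would look roughly like the following line in requirements_test.txt; the surrounding file contents are not shown in this review, so treat this as an illustration of the constraint rather than the exact hunk.

```
pycares>=4.11.0,<5.0.0
```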
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Codecov Report: ✅ All modified and coverable lines are covered by tests.
Additional details and impacted files:
```
@@            Coverage Diff             @@
##             main      #92      +/-   ##
==========================================
+ Coverage   96.62%   96.95%   +0.32%
==========================================
  Files           6        6
  Lines         326      328       +2
==========================================
+ Hits          315      318       +3
+ Misses         11       10       -1
```
☔ View full report in Codecov by Sentry.
Note: Unit test generation is an Early Access feature. Expect some limitations and changes as we gather feedback and continue to improve it. Generating unit tests... This may take up to 20 minutes.
✅ UTG Post-Process Complete: No new issues were detected in the generated code and all check runs have completed. The unit test generation process has completed successfully.
Creating a PR to put the unit tests in... The changes have been created in this pull request: View PR |
…y_device (#93) Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 1
🧹 Nitpick comments (1)
tests/test_init.py (1)
283-317: Consider removing unused fixtures. The test logic is correct for the empty identifiers edge case. However, the mock_api and device_registry fixtures are not used in this test since it creates a mock device directly.
🔎 Apply this diff to clean up unused parameters:
```diff
 async def test_async_remove_config_entry_device_edge_case_empty_identifiers(
-    hass, mock_api, device_registry: dr.DeviceRegistry, caplog
+    hass, caplog
 ):
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
tests/test_init.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
tests/test_init.py (2)
tests/conftest.py (1)
mock_api (44-93)
custom_components/renogy/__init__.py (2)
async_remove_config_entry_device (107-118)
async_setup (31-35)
🪛 Ruff (0.14.8)
tests/test_init.py
92-92: Unused function argument: mock_api
(ARG001)
129-129: Unused function argument: mock_api
(ARG001)
169-169: Unused function argument: mock_api
(ARG001)
203-203: Unused function argument: mock_api
(ARG001)
242-242: Unused function argument: mock_api
(ARG001)
284-284: Unused function argument: mock_api
(ARG001)
284-284: Unused function argument: device_registry
(ARG001)
320-320: Unused function argument: mock_api
(ARG001)
352-352: Do not assert blind exception: Exception
(B017)
357-357: Unused function argument: mock_api
(ARG001)
🔇 Additional comments (8)
tests/test_init.py (8)
4-4: LGTM! Necessary import for test mocks. The MagicMock import is properly added to support the new test scenarios.
91-126: Excellent coverage of the primary bug fix! This test directly validates the fix for issue #91 by ensuring that devices can be removed when runtime_data is missing from the config entry. The logic correctly verifies that async_remove_config_entry_device returns True in this scenario.
128-166: LGTM! Proper coverage of the device-exists scenario. The test correctly verifies that when runtime_data is present and the device exists (get_device returns data), the function returns False to prevent removal of active devices.
168-200: LGTM! Correctly tests orphaned device removal. This test validates the expected behavior for orphaned devices (not found in runtime_data), ensuring they can be successfully removed by returning True.
202-239: LGTM! Good coverage of cross-domain identifier edge case. The test correctly verifies that devices with no identifiers matching the Renogy domain can be removed, and appropriately confirms that get_device is never invoked for non-matching identifiers.
241-281: LGTM! Excellent test of the "any" logic. This test properly validates that if any identifier returns a valid device from runtime_data, the device should not be removed. The side_effect mock setup correctly simulates a scenario where one device exists and another doesn't.
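As a hypothetical illustration of the mock setup that comment refers to (the actual return values in the test may differ), a side_effect list makes get_device answer successive calls differently:

```python
# First identifier resolves to a device, the second does not, so the any()
# check described above must still report a match and block removal.
from unittest.mock import MagicMock

mock_runtime_data = MagicMock()
mock_runtime_data.get_device = MagicMock(side_effect=[{"model": "Rover"}, None])
```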
356-382: Verify runtime_data state in integration test. The comment at line 378 states "should not have runtime_data in this context," but the entry is set up via async_setup_entry, which typically populates runtime_data. Please verify whether the config entry actually lacks runtime_data after setup, or if this test is inadvertently testing the runtime_data-exists path instead. You can add an assertion to confirm:
```python
# After line 371
assert not hasattr(entry, "runtime_data"), "Entry should not have runtime_data for this test"
```
If runtime_data is present, consider whether this test needs to be restructured to explicitly test the intended scenario, or if the comment needs updating to reflect what's actually being tested.
91-382: Remove mock_api from test function parameters or confirm it's actually used. The mock_api fixture is imported but never directly referenced in these test bodies. If the fixture only configures HTTP mocks via side effects without being consumed by the test itself, consider marking unused fixtures with pytest.mark.usefixtures("mock_api") instead of including them as parameters, making the dependency explicit and avoiding static analysis warnings. However, verify whether async_setup() actually triggers HTTP requests that require these mocks; if so, the fixture is needed and the warning is a false positive. A sketch of the usefixtures alternative follows.
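The sketch below shows the pytest.mark.usefixtures pattern suggested above; the test name and body are placeholders, not the actual tests in tests/test_init.py.

```python
import pytest


# The fixture is requested for its side effects (HTTP mocking) without being
# bound to an unused parameter, which also silences Ruff's ARG001 warning.
@pytest.mark.usefixtures("mock_api")
async def test_remove_device_without_runtime_data(hass, caplog):
    ...
```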
```python
async def test_async_remove_config_entry_device_runtime_data_get_device_exception(
    hass, mock_api, device_registry: dr.DeviceRegistry, caplog
):
    """Test async_remove_config_entry_device when get_device raises an exception."""
    from custom_components.renogy import async_remove_config_entry_device

    entry = MockConfigEntry(
        domain=DOMAIN,
        title=DEVICE_NAME,
        data=CONFIG_DATA,
    )

    with caplog.at_level(logging.DEBUG):
        entry.add_to_hass(hass)
        assert await hass.config_entries.async_setup(entry.entry_id)
        await hass.async_block_till_done()

        # Get a device from the registry
        devices = dr.async_entries_for_config_entry(device_registry, entry.entry_id)
        assert len(devices) > 0
        device = devices[0]

        # Create a mock runtime_data that raises an exception
        mock_runtime_data = MagicMock()
        mock_runtime_data.get_device = MagicMock(
            side_effect=Exception("Test exception")
        )

        # Add runtime_data to entry
        entry.runtime_data = mock_runtime_data

        # Test that the function handles the exception gracefully
        # The any() should return False if all get_device calls raise exceptions
        with pytest.raises(Exception):
            await async_remove_config_entry_device(hass, entry, device)
```
🧩 Analysis chain
🏁 Script executed:
fd "renogy" --type d | head -20Repository: firstof9/ha-renogy
Length of output: 87
🏁 Script executed:
find . -name "__init__.py" -path "*/renogy/*" 2>/dev/null | head -10Repository: firstof9/ha-renogy
Length of output: 100
🏁 Script executed:
pwd && ls -la tests/test_init.py 2>/dev/null && head -50 tests/test_init.pyRepository: firstof9/ha-renogy
Length of output: 1690
🏁 Script executed:
#!/bin/bash
# Find the test file and implementation
find . -name "test_init.py" -type f | head -5
find . -name "__init__.py" -path "*/custom_components/renogy/*" -type f | head -5
find . -type f -name "*.py" | xargs grep -l "async_remove_config_entry_device" | head -10Repository: firstof9/ha-renogy
Length of output: 181
🏁 Script executed:
grep -n "async_remove_config_entry_device" custom_components/renogy/__init__.py -A 30Repository: firstof9/ha-renogy
Length of output: 1259
🏁 Script executed:
grep -n "def test_async_remove_config_entry_device_runtime_data_get_device_exception" tests/test_init.py -A 40Repository: firstof9/ha-renogy
Length of output: 1783
🏁 Script executed:
grep -n "def test_async_remove_config_entry_device" tests/test_init.py | head -10Repository: firstof9/ha-renogy
Length of output: 700
🏁 Script executed:
sed -n '128,210p' tests/test_init.pyRepository: firstof9/ha-renogy
Length of output: 3215
🏁 Script executed:
sed -n '91,170p' tests/test_init.pyRepository: firstof9/ha-renogy
Length of output: 3022
🏁 Script executed:
cat tests/test_init.py | grep -n "mock_api" | head -5Repository: firstof9/ha-renogy
Length of output: 435
🏁 Script executed:
find tests -name "conftest.py" -type f | head -1 | xargs cat | head -100Repository: firstof9/ha-renogy
Length of output: 2870
Clarify exception propagation and improve test specificity.
The test correctly validates that exceptions from get_device() propagate, but the test comment is misleading. The function doesn't handle exceptions gracefully—it allows them to propagate, which is the current intended behavior. Additionally, using bare Exception is too broad.
Update the test comment to reflect actual behavior and add match parameter for specificity:
```python
# Test that the function propagates exceptions from get_device
with pytest.raises(Exception, match="Test exception"):
    await async_remove_config_entry_device(hass, entry, device)
```
🧰 Tools
🪛 Ruff (0.14.8)
320-320: Unused function argument: mock_api
(ARG001)
352-352: Do not assert blind exception: Exception
(B017)
🤖 Prompt for AI Agents
In tests/test_init.py around lines 319 to 354, the test comment incorrectly
states the function handles exceptions gracefully and the test uses a bare
Exception; update the inline comment to reflect that the function propagates
exceptions (e.g., "Test that the function propagates exceptions from
get_device") and change the pytest.raises to include a match argument to be
specific about the error, e.g., use pytest.raises(Exception, match="Test
exception") so the test asserts the exact propagated message.
fixes #91
Summary by CodeRabbit
Bug Fixes
- Device removal no longer fails when a config entry is missing runtime_data.
Tests
- Added tests covering async_remove_config_entry_device scenarios (missing runtime_data, existing devices, orphaned devices, non-matching identifiers, and get_device exceptions).
Chores
- Constrained pycares to >=4.11.0,<5.0.0 in requirements_test.txt.