Merged
Conversation
…te symbol error Resolve "duplicate symbol 'google.protobuf.Any'" error that occurred when upgrading from older versions. Both Any_pb2.py and RpcStatus_pb2.py were using the default descriptor pool, which conflicts when google.protobuf.Any is already registered by the protobuf library or other Home Assistant components. Apply the same pattern already used in Common_pb2.py: create a dedicated _any_pool in Any_pb2.py and share it with RpcStatus_pb2.py which depends on the Any message type.
fix: use separate descriptor pool for protobuf files to avoid duplica…
…alization The stat was being incremented in cache.py but was never registered in the stats dictionary, causing repeated warnings in logs: "Tried to increment unknown stat 'accuracy_sanitized_count'"
…bug-NwInr fix: add missing 'accuracy_sanitized_count' stat to coordinator initi…
Handle cryptography.exceptions.InvalidTag explicitly instead of letting it fall into the generic Exception handler. This provides users with a helpful warning message explaining common causes: - Google authentication expired (needs re-auth in Google app) - Shared device where the sharing account's auth is stale - Tracker offline/dead battery causing stale encrypted data Previously, InvalidTag errors showed as ERROR with full stack trace, which was alarming to users. Now logged as WARNING with actionable guidance, since re-authenticating the Google account typically resolves the issue. Addresses user report where shared device auth became stale and was resolved by re-authenticating the sharing account in the Google app.
…ew-aQQLQ fix: improve InvalidTag error handling in decrypt_locations.py
Previously, some translations incorrectly stated that setting the stale threshold to 0 would disable the feature, but the validation required a minimum of 60 seconds. Additionally, setting it to 0 would have caused all locations to be immediately marked as stale (the opposite effect). This change adds a proper boolean toggle (stale_threshold_enabled) that allows users to disable staleness checking entirely. When disabled (default), the tracker always shows the last known location regardless of age - matching Google Find My Device behavior. Changes: - Add OPT_STALE_THRESHOLD_ENABLED constant and default (False) - Add toggle to config flow validation schema - Update device_tracker._is_location_stale() to check toggle - Update device_tracker._get_location_status() to check toggle - Add translations for the new toggle in all 9 languages - Fix incorrect "0 to disable" text in pt-BR, pl, nl, pt translations https://claude.ai/code/session_014fdKUpiQvv1trLqBVzNp3G
The previous default of 1800 seconds (30 minutes) was problematic when users configured longer poll intervals. For example, with a 1-hour poll interval, the location would be marked stale before the next poll. Changed DEFAULT_STALE_THRESHOLD to 7200 seconds (2 hours), which is at least 2x the maximum configurable poll interval (3600s), ensuring the stale threshold is always reasonable regardless of poll settings. Updated all 9 translation files to reflect the new default value. https://claude.ai/code/session_014fdKUpiQvv1trLqBVzNp3G
Reordered stale_threshold_enabled and stale_threshold keys in nl.json, pl.json, pt.json, and pt-BR.json to match the English reference file. The keys now appear after device_poll_delay, consistent with all other translation files. https://claude.ai/code/session_014fdKUpiQvv1trLqBVzNp3G
In tests using MagicMock for hass, async_create_task returns a MagicMock instead of an asyncio.Task. This caused "coroutine never awaited" warnings because the coroutines were created but never properly cleaned up. Changes: - __post_init__: Check if async_create_task returns actual Task before assuming ownership transferred; close coroutine if not - _schedule_lock_save: Track coroutine ownership and close it in exception handler if it hasn't been handed off to a task yet https://claude.ai/code/session_014fdKUpiQvv1trLqBVzNp3G
…behavior-zscmK feat: add stale_threshold_enabled toggle to disable staleness checking
…flict Guard against the Python 3.13 issue where loading both the official google-protobuf library (e.g. via Nest integration) and this custom component's vendored _pb2 files would cause a "duplicate symbol google.protobuf.Any" crash. Commit edc1a4f introduced separate descriptor pools; these 17 tests ensure the fix is not regressed. https://claude.ai/code/session_01FuBmDMytpEr32Nrz9vBAkE
….any_pb2 The vendored Any.proto / Any_pb2.py / Any_pb2.pyi were a byte-for-byte copy of the official google.protobuf.any_pb2 shipped with the protobuf package. Having two files that both define `google.protobuf.Any` caused a "duplicate symbol" crash on Python >= 3.13 when another integration (e.g. Nest) loaded the official library into the default descriptor pool. Changes: - Delete Any.proto, Any_pb2.py, Any_pb2.pyi (redundant with protobuf pkg) - Re-serialise RpcStatus_pb2.py to reference official google/protobuf/any.proto instead of the vendored ProtoDecoders/Any.proto; seed its separate pool with the official any_pb2 descriptor - Update RpcStatus_pb2.pyi to reference google.protobuf.any_pb2 - Rewrite tests: assert vendored Any_pb2 is gone, verify the default pool would reject a re-vendored copy, verify RpcStatus roundtrip works https://claude.ai/code/session_01FuBmDMytpEr32Nrz9vBAkE
nova_request.py now tries the official google.rpc.status_pb2 first (from googleapis-common-protos) and falls back to the vendored RpcStatus_pb2 only when that package is absent. This eliminates the same class of duplicate-symbol risk that existed for google.protobuf.Any: if another HA integration installs googleapis-common-protos, the vendored copy would collide in the default descriptor pool. Also adds: - TestRpcStatusResolution: verifies the prefer-official / fallback logic - TestStandaloneProtoDependencies: ensures all proto modules are importable for the standalone main.py secrets-extraction workflow and that protobuf is declared in requirements.txt https://claude.ai/code/session_01FuBmDMytpEr32Nrz9vBAkE
12 tests covering every line and branch of main.py: - _parse_args: no args, --entry flag, --help exits - list_devices: explicit entry_id, env var fallback, None fallback, KeyboardInterrupt handling, generic exception → stderr + exit(1) - main: delegates _parse_args → list_devices - __name__ == "__main__": verified via subprocess - Functional: --help exits 0, missing cache exits 1 with error message Coverage: 22/22 statements, 0 missed, 100% line+branch. https://claude.ai/code/session_01FuBmDMytpEr32Nrz9vBAkE
…TS.md Add binding guidance for AI agents and developers: - NEVER vendor types from google.* namespace (crash on Python >= 3.13) - Descriptor pool architecture table (which module uses which pool) - Rules for adding new proto modules (reuse parent pool, never default) - Updated regeneration checklist: manual pool patching step after protoc https://claude.ai/code/session_01FuBmDMytpEr32Nrz9vBAkE
HA's translation validator rejects inline URLs in data_description
strings. Replace hardcoded GitHub URLs with {subentries_docs_url}
placeholders in strings.json and all 9 translation files (de, en, es,
fr, it, nl, pl, pt, pt-BR), and pass description_placeholders to the
async_show_form calls for the settings, visibility, and credentials
option steps.
https://claude.ai/code/session_01FuBmDMytpEr32Nrz9vBAkE
Run ruff format + ruff check --fix to resolve I001 (unsorted imports) in nova_request.py, test_main.py, and test_protobuf_namespace_conflict.py. Add explicit check=False to subprocess.run calls (PLW1510). https://claude.ai/code/session_01FuBmDMytpEr32Nrz9vBAkE
… stubs - Move type: ignore[import-untyped] to the from-line so mypy sees it - Remove unused type: ignore on RpcStatus = None (keep it only on the ProtobufDecodeError = Exception fallback where misc/assignment apply) - Add google/protobuf/any_pb2.pyi local stub so mypy resolves the any_pb2 attribute used by RpcStatus_pb2.pyi (the local google/ stub directory shadows the installed types-protobuf stubs) https://claude.ai/code/session_01FuBmDMytpEr32Nrz9vBAkE
…QRaZo fix: protobuf issues (fix instead of workaround)
NOVA_MAX_RETRIES=6 means 6 retries, so total attempts = 7 (1 initial + 6 retries). The log messages and exception strings used NOVA_MAX_RETRIES as the denominator, showing "Attempt 1/6" instead of "Attempt 1/7". This made it appear as if retries never progressed past the first attempt, when in reality the second attempt was succeeding silently. Fixed in all 4 affected locations: - HTTP error retry warning (line 1565) - Rate limit exhaustion error (line 1585) - HTTP error exhaustion error (line 1590) - Network error retry warning (line 1609) https://claude.ai/code/session_01PsGnG9dtEfD4JGV7qWZbDg
…pc3G fix: correct off-by-one in retry attempt count display
- Fix typo: "hanges" → "changes" in V1.7 breaking change notice - Add Codespell and Bandit to CI pipeline description (both run in ci.yml) - Replace non-existent Make targets (test-stubs, wheelhouse, clean-wheelhouse, install-ha-stubs) with actual targets from Makefile - Update `make lint` description to reflect actual `--fix` flag behavior - Correct `make test-ha` description: uses Poetry, not .venv provisioning - Rewrite "Installing HA test dependencies" section to reference Poetry instead of pip and requirements-dev.txt - Fix wheelhouse section to reference script/bootstrap_ssot_cached.sh instead of non-existent `make wheelhouse` target - Update "Running tests locally" to use Poetry-based workflow - Expand "Available Make targets" from 3 to all 13 actual targets - Add 3 missing options to Configuration Options table: semantic_locations, stale_threshold, stale_threshold_enabled - Update Contributing section to use Poetry instead of pip https://claude.ai/code/session_01XmpdKvnSoSPaZdazEu9J8y
…fcUsT docs: verify and correct README.md against actual codebase
Full codebase security audit covering authentication, cryptography, network communication, input validation, and dependency security. Identifies 5 high, 12 medium, and 14 low severity findings with specific file/line references and remediation recommendations. https://claude.ai/code/session_01HJ9MaEB8jcA4gWJh6AdaHA
H1 (fixed): Operator precedence bug in AES-CBC block alignment check
cloud_key_decryptor.py — added parentheses so the expression
correctly evaluates as len % (block_size // 8) instead of
(len % block_size) // 8.
H3 (fixed): Removed global cache fallback in aas_token_retrieval.py
that could leak tokens across accounts in multi-account setups.
Entry-scoped fallback remains intact.
H5 (fixed): Replaced random.randint with secrets.randbelow in
token_retrieval.py for consistency with aas_token_retrieval.py.
Re-assessed H2 (scrypt N=4096) and H4 (DER slicing) as
protocol-dictated by FMDN — downgraded to informational.
Re-assessed M6/M7 (browser auth flow) as inherent to FMDN
secret retrieval — not fixable without breaking functionality.
https://claude.ai/code/session_01HJ9MaEB8jcA4gWJh6AdaHA
- Use cipher.decrypt_and_verify() instead of separate decrypt/verify in foreign_tracker_cryptor.py for atomic authenticated decryption (M1) - Delete SECURITY_REVIEW.md (findings addressed in code) - Update test_aas_token_retrieval.py: replace removed global cache fallback tests with entry-scoped-only test, fix random→secrets mock - Update test_adm_token_retrieval.py: fix random→secrets monkeypatch All checks pass: ruff format, ruff check, mypy --strict, 1996 tests (Python 3.13, HA 2026.1.2) https://claude.ai/code/session_01HJ9MaEB8jcA4gWJh6AdaHA
fix: security review
protobuf>=6.32.0 excluded HA 2025.8 (which pins protobuf==6.31.1), even though that release is within the 6-month support window. Lower to >=6.30.0 so any HA with protobuf 6.x works. selenium>=4.37.0 forced urllib3>=2.5 which is unnecessarily restrictive. Lower to >=4.25.0 (matching dev requirements) since only basic WebDriver/WebDriverWait APIs are used. Add homeassistant minimum version 2025.8.0 to hacs.json — the integration requires ConfigSubentry APIs introduced in HA 2025.8 — so HACS can warn users on older HA versions instead of producing confusing dependency-resolution errors. https://claude.ai/code/session_01CrxpKHRwsK8QtkbJ3UoxCh
…at-YQV5r fix: lower protobuf/selenium version floors to support HA 2025.8+
…-1F0KL fix: adopt HA entity naming best practice for device_tracker entities
The previous implementation set _attr_name = None for all device_tracker entities. With has_entity_name=True: - _attr_name = None → entity inherits ONLY device name (no suffix) - _attr_name not set → name comes from translation_key For the Last Location entity, we need the translation-based suffix (e.g., "Letzter Standort" in German). By deleting _attr_name after the parent's __init__, HA now correctly composes the name as "<device_name> <translated_suffix>". Before: "Galaxy S25 Ultra" (both entities had same name) After: "Galaxy S25 Ultra" and "Galaxy S25 Ultra Letzter Standort" https://claude.ai/code/session_012MaKPZBe61ozNnwEwfkRFQ
Added important lesson learned to AGENTS.md Entity naming section: - _attr_name = None (explicit) → inherits only device name, no suffix - _attr_name not set → name comes from translation_key - Child classes must use `del self._attr_name` after super().__init__() if they need translation-based naming This distinction caused a bug where both primary and secondary entities showed the same name because the parent class set _attr_name = None. https://claude.ai/code/session_012MaKPZBe61ozNnwEwfkRFQ
…-1F0KL fix: enable translation-based naming for Last Location entity
…fault - Add translated labels for contributor_mode selector options in all supported languages (high_traffic → localized, in_all_areas → localized) - Use SelectSelector with translation_key for proper HA Core translation - Fix README.md: stale_threshold default is 1800s (30min), not 7200s (2h) - Remove obsolete stale_threshold_enabled option from documentation (stale threshold is now always enabled with the Last Location entity) https://claude.ai/code/session_016Vcayg9k3z176uoEW2XyFe
fix: translate contributor_mode options and update stale_threshold de…
Root-cause fixes for all 96 mypy --strict errors: **Coordinator mixin typing (83 [misc] errors):** - Remove invalid `self: GoogleFindMyCoordinator` annotations from all 6 mixin classes (registry, subentry, locate, identity, polling, cache). mypy rejects these because the coordinator is a subtype, not a supertype, of the mixins. - Add `_MixinBase` typing base class that declares the full coordinator interface (attributes + cross-mixin methods) so mypy can resolve `self.hass`, `self.config_entry`, and all cross-mixin method calls correctly. **Entity re-exports (5 [attr-defined] errors):** - Use explicit `as` re-export pattern in entity.py for `known_ids_for_subentry_type`, `sanitize_state_text`, `subentry_type` (mypy strict no_implicit_reexport). **Type safety (4 [no-any-return], 3 [arg-type], 1 [unused-ignore]):** - Type `_get_resolver` return as `GoogleFindMyEIDResolver | None` with cast instead of `Any` in sensor.py and binary_sensor.py. - Add None guards for `_device_id: str | None` before `is_device_present()` and `get_ble_battery_state()` calls. - Use explicit `is None` check for restored sensor value before `float()`. - Replace `type: ignore[return-value]` with proper `cast()` in google_home_filter.py callback wrapper. - Type `async_get_local_ip` return via explicit annotation. **Polling variable scoping (3 [misc] errors):** - Rename manually-created `auth_exc` to `reauth_exc` to avoid name collision with exception-caught `auth_exc` variable in the same scope. **Config updates:** - Update mypy python_version to 3.14 - Remove all `disable_error_code` overrides from pyproject.toml https://claude.ai/code/session_012ADLmtgF8USbFxGP5ntkvJ
Revert mypy python_version from 3.14 to 3.13 to maintain backward compatibility with Python 3.13 and HA 2025.9. Document the coordinator mixin typing architecture (_MixinBase pattern), explicit re-export requirements, cast() usage for HA API returns, and exception variable scoping rules in the typing guidance. https://claude.ai/code/session_012ADLmtgF8USbFxGP5ntkvJ
…rence NameError - Move _MixinBase imports into the import block at top of each mixin file to fix E402 (module-level import not at top of file) - Let ruff auto-sort imports (I001) after the _MixinBase repositioning - Suppress PLC0414 for entity.py in pyproject.toml since the `import x as x` re-export pattern is required by mypy strict `no_implicit_reexport` but flagged as useless alias by ruff - Use string form in cast() for GoogleHomeFilter type to avoid evaluating the forward reference at runtime before the class is defined (fixes NameError when decorator runs during class body) https://claude.ai/code/session_012ADLmtgF8USbFxGP5ntkvJ
…s-KJSmt fix: resolve all mypy --strict errors
…ive gate The old 50m flat threshold silently dropped location updates for stationary trackers with fluctuating accuracy (upstream #127). The new adaptive threshold uses the combined measurement uncertainty of both readings: 0.5 * sqrt(acc_old² + acc_new²). Practical effect: - GPS 10m + 10m → threshold ≈ 7m (fine-grained updates pass) - BLE 200m + 200m → threshold ≈ 141m (noise suppressed) - GNSS 2m + 2m → threshold ≈ 1.4m (near-realtime) - No accuracy → 200m fallback (conservative) No config option needed — the physics of the measurement determines the threshold automatically. https://claude.ai/code/session_01P8iLL632vfvSzqv5ndbJRH
…es-d8Bp4 fix: replace hardcoded 50m significance threshold with accuracy-adapt…
Parametrized test that imports every module in the package tree via pkgutil.walk_packages. Catches broken imports, circular dependencies, and protobuf descriptor conflicts (cf. upstream #144) that linters and type checkers miss. https://claude.ai/code/session_01P8iLL632vfvSzqv5ndbJRH
…es-d8Bp4 test: add import smoke test for all 123 modules
Commit 90bf146 introduced _MixinBase with stub methods that raised NotImplementedError for async_set_updated_data, async_request_refresh, and async_set_update_error. Because _MixinBase precedes DataUpdateCoordinator in Python's C3 MRO of GoogleFindMyCoordinator, these stubs shadowed the real implementations at runtime, causing: - All coordinator data updates to fail with NotImplementedError - BLE battery sensors showing no values (entity updates never triggered) - "Failed to get location" for all devices (polling cycle crash) - FCM push updates, manual locate, and device purge all broken The fix wraps the three DataUpdateCoordinator method declarations in an `if TYPE_CHECKING:` guard so they only exist during static analysis and cannot shadow the real implementations at runtime. Adds 11 AST-based regression tests verifying: - The three methods are not defined at runtime in _MixinBase - The three methods exist in TYPE_CHECKING blocks for mypy - No runtime stub raises NotImplementedError for DUC methods - The MRO structure is correct (mixins before DataUpdateCoordinator) - All Operations mixins inherit from _MixinBase - No _MixinBase method shadows known DUC interface methods https://claude.ai/code/session_0184sHLPzZ5fBhWbyzNhxLNg
…or-l9ClE fix: guard DataUpdateCoordinator stubs in _MixinBase with TYPE_CHECKING
Contributor
Author
|
@BSkando Please merge to fix user upgrade issues. Also included fixes for unreported HA 2026.2 bugs. |
The FCM server sends crypto-key and salt values as unpadded base64url strings. Python's urlsafe_b64decode requires correct padding, causing binascii.Error: Incorrect padding when decoding these values. Apply the same dynamic padding pattern (-len(s) % 4) already used in shared_key_retrieval.py and get_owner_key.py. Also normalizes the hard-coded b"========" padding on der_data/secret to use the same consistent approach. https://claude.ai/code/session_01VWxmuXrfwXXxKi579zNJsa
…bug-94w19 fix: add base64 padding to FCM crypto_key and salt decoding
Users without an external URL configured in Home Assistant (e.g. VPN-only setups) previously got repeated warnings and no device map links at all. The external URL is only used for building absolute configuration URLs for the device registry's map view and has no impact on core functionality. Allow fallback to the internal URL so LAN users can still access the map view. External URLs are still preferred when configured. An info-level log is emitted when falling back to an internal URL so users are aware that map links will only work on their local network. https://claude.ai/code/session_01PQchA5hZTKGf31f6auSFZY
…rl config Replace `not hass.config.external_url` with an actual comparison of the resolved base URL against the internal-only URL from get_url(). This avoids a false-positive "Using internal URL" INFO log when a Nabu Casa Cloud URL is resolved but no explicit external_url is configured. https://claude.ai/code/session_01PQchA5hZTKGf31f6auSFZY
The warning text was changed from "Unable to resolve external URL" to "Unable to resolve any Home Assistant URL for map view" in a prior commit, but the test filter string was not updated. https://claude.ai/code/session_01PQchA5hZTKGf31f6auSFZY
…l-GIW2c feat: make external URL optional, fall back to internal URL for map view
The crypto-key header can contain semicolon-separated key-value pairs (e.g. `dh=BPxxx;p256ecdsa=BYyyy`). The previous code blindly sliced the first 3 / 5 characters, which included trailing parameters in the base64 payload and caused `ValueError: Invalid EC key` during decryption. Parse the header properly by splitting on `;` and matching by key name so only the intended `dh=` or `salt=` value is extracted. https://claude.ai/code/session_0126BYUTtsXPj8MAvcGSTgn1
…ror-2EMQC fix: parse FCM crypto-key and encryption headers by parameter name
Regenerated all 7 _pb2.py files using grpcio-tools 1.78.0 (protoc bundled with protobuf 6.31.1) to match the HA Core pinned runtime (protobuf==6.32.0). The binary descriptors are unchanged; only the generated Python wrapper code was updated: - Added runtime_version.ValidateProtobufRuntimeVersion() call - Changed `DESCRIPTOR._options` to `DESCRIPTOR._loaded_options` - Changed `_USE_C_DESCRIPTORS == False` to `not _USE_C_DESCRIPTORS` - Updated version comment to "Protobuf Python Version: 6.31.1" All custom descriptor pool modifications (_common_pool, _findmy_pool, _rpc_pool, _firebase_pool) are preserved to avoid symbol collisions with other HA integrations. Added 11 serialization round-trip tests covering all _pb2 modules (Common, DeviceUpdate, LocationReportsUpload, RpcStatus, mcs, checkin, android_checkin) to guard against regeneration regressions. https://claude.ai/code/session_019XSF5eK4odvQgERB7ELuGE
… errors Bump protobuf minimum from >=6.30.0 to >=6.31.1 to match the ValidateProtobufRuntimeVersion(6, 31, 1, ...) check in regenerated _pb2.py files. Fix MergeFrom pool mismatch in RpcStatus round-trip test by using details.add() instead of standalone Any from the default pool. Fix ruff I001 import sorting in test file. https://claude.ai/code/session_019XSF5eK4odvQgERB7ELuGE
…time-uSoPr Regenerate _pb2.py files with protobuf 6.31.1 and add round-trip tests
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Add this suggestion to a batch that can be applied as a single commit.This suggestion is invalid because no changes were made to the code.Suggestions cannot be applied while the pull request is closed.Suggestions cannot be applied while viewing a subset of changes.Only one suggestion per line can be applied in a batch.Add this suggestion to a batch that can be applied as a single commit.Applying suggestions on deleted lines is not supported.You must change the existing code in this line in order to create a valid suggestion.Outdated suggestions cannot be applied.This suggestion has been applied or marked resolved.Suggestions cannot be applied from pending reviews.Suggestions cannot be applied on multi-line comments.Suggestions cannot be applied while the pull request is queued to merge.Suggestion cannot be applied right now. Please check back later.
fix: use separate descriptor pool for protobuf files to avoid duplica…
fix: add missing 'accuracy_sanitized_count' stat to coordinator initi…
fix: improve InvalidTag error handling in decrypt_locations.py