New OCA-style connector module for bidirectional synchronization between Odoo and the Onshape cloud CAD/PLM platform.

Features:
- HMAC and OAuth2 authentication with the Onshape API
- Document, element, and part discovery from the Onshape workspace
- Product binding with 4-strategy SKU matching (exact, extension strip, version strip, case-insensitive)
- Assembly BOM import with component match scoring
- Part number/description export from Odoo to Onshape metadata
- Webhook receiver for real-time change notifications
- Event listener for auto-export on product changes
- Queue job integration for async processing with retry patterns
- Cron jobs for scheduled background synchronization
- Import wizard for guided document/product/BOM imports
- Rate limit handling and API quota exhaustion detection (402)

Models: `onshape.backend`, `onshape.document`, `onshape.document.element`, `onshape.product.product`, `onshape.mrp.bom`

Test suite: 51 test methods covering backend, adapter, importers, exporters, mappers, binder, webhooks, listener, and wizard.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
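The 4-strategy SKU matching listed above can be sketched as a cascade of progressively looser normalizers. This is a minimal standalone illustration — the `match_sku` helper name and the exact regexes are assumptions, not the module's actual code:

```python
import re


def match_sku(part_number, odoo_skus):
    """Match an Onshape part number against Odoo default_code values,
    trying each strategy in order. Returns (matched_sku, strategy)."""
    # Strategy 1: exact match
    if part_number in odoo_skus:
        return part_number, "exact"
    # Strategy 2: strip a file-style extension ("KF-100.step" -> "KF-100")
    stripped = re.sub(r"\.[A-Za-z0-9]+$", "", part_number)
    if stripped in odoo_skus:
        return stripped, "extension_strip"
    # Strategy 3: strip a trailing version suffix ("KF-100_v2" -> "KF-100")
    unversioned = re.sub(r"[_-]v\d+$", "", stripped, flags=re.IGNORECASE)
    if unversioned in odoo_skus:
        return unversioned, "version_strip"
    # Strategy 4: case-insensitive lookup over all remaining candidates
    lowered = {sku.lower(): sku for sku in odoo_skus}
    if unversioned.lower() in lowered:
        return lowered[unversioned.lower()], "case_insensitive"
    return None, None
```

Ordering matters: each strategy only runs if the stricter ones failed, so the reported strategy name can double as a match-confidence signal.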
📝 Walkthrough

Adds a full Onshape connector Odoo module: manifest and packaging, API adapter (HMAC/OAuth2), connector components (importers, exporters, binder, mapper, listener), webhook controller, models and views for documents/products/BOMs, crons/queue jobs, wizards, tests, and documentation.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Backend as Onshape Backend
    participant Importer as Batch Importer
    participant Adapter as Onshape API Adapter
    participant Model as Odoo Models
    participant DB as Database
    User->>Backend: action_import_documents()
    Backend->>Importer: run(backend)
    Importer->>Adapter: search_documents()
    Adapter-->>Importer: documents
    loop per document
        Importer->>Model: create/update onshape.document
        Model->>DB: write document
        Importer->>Adapter: read_document_elements(document)
        Adapter-->>Importer: elements
        loop per element
            Importer->>Model: create/update onshape.document.element
            Model->>DB: write element
        end
    end
    Importer-->>Backend: done
```

```mermaid
sequenceDiagram
    participant Onshape as Onshape Platform
    participant Webhook as Webhook Controller
    participant Validator as Signature Validator
    participant Handler as Event Handler
    participant Backend as Onshape Backend
    participant Job as Delayed Job
    Onshape->>Webhook: POST /connector_onshape/webhook/<backend_id>
    Webhook->>Validator: _validate_signature(payload)
    Validator-->>Webhook: valid/invalid
    Webhook->>Handler: _get_event_handler(event_type)
    alt metadata change
        Handler->>Job: schedule product import
        Job->>Backend: action_import_products()
    else workflow transition
        Handler->>Backend: update binding states
    else revision created
        Handler->>Backend: mark released
    end
    Webhook-->>Onshape: 200/4xx
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
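The `_validate_signature` step in the webhook diagram typically reduces to recomputing an HMAC over the raw request body. A standalone sketch — the header extraction, secret storage, and digest scheme are assumptions and may differ from what the controller actually does:

```python
import hashlib
import hmac


def validate_webhook_signature(secret, raw_body, received_signature):
    """Return True when the HMAC-SHA256 of the raw body matches the
    signature sent by the platform, using constant-time comparison."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_signature)
```

`hmac.compare_digest` avoids leaking the position of the first mismatching byte through timing, which a plain `==` comparison would.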
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 16
🧹 Nitpick comments (19)
connector_onshape/readme/USAGE.rst (1)
35-40: Add job-monitoring hint for automatic exports.

Since export is async, documenting where to check failures/retries will reduce support friction.
Suggested doc patch
```diff
 Automatic Export
 ~~~~~~~~~~~~~~~~
@@
 When a product's ``default_code`` or ``name`` is changed in Odoo, a
 background job is automatically queued to export the update to Onshape
 (if the product is bound to an Onshape part).
+You can monitor queued/failed jobs in **Queue Jobs** (Technical menu),
+and retry failed exports from there.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/readme/USAGE.rst` around lines 35 - 40, Update the "Automatic Export" section to add a brief job-monitoring hint: state that exports are performed asynchronously when a product's default_code or name changes and instruct users to check Odoo's background job/queue dashboard (Queue Jobs / Settings > Technical > Jobs or the project's job-queue UI) for failure/retry status, review worker logs for error details, and verify the product's Onshape binding/part association to troubleshoot stuck exports; reference the automatic export/job terminology (automatic export, default_code, name, export job) so readers can locate and correlate the docs with the system UI and logs.

connector_onshape/readme/CONFIGURE.rst (1)
97-107: Add queue-job runner prerequisite to scheduled sync docs.

Cron activation alone is not enough if job workers/runner are not configured; imports may queue but never execute.
Suggested doc patch
```diff
 Scheduled Sync (Cron Jobs)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 Three cron jobs are created (disabled by default):
@@
 Enable them in **Settings > Technical > Automation > Scheduled Actions**
 when you're ready for automatic background synchronization.
+
+.. note::
+   This connector uses ``queue_job`` for async processing. Ensure your Odoo
+   deployment has the queue job runner/workers configured, otherwise scheduled
+   imports may be enqueued but not processed.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/readme/CONFIGURE.rst` around lines 97 - 107, Update the "Scheduled Sync (Cron Jobs)" section to add a prerequisite note that enabling the cron jobs ("Onshape: Import Documents", "Onshape: Import Products", "Onshape: Import BOMs") is not sufficient by itself — a queue/job runner or worker process must be configured and running for queued import jobs to actually execute; include a short bullet describing how to configure/start the job runner (e.g., enable the worker service, point it at the app's job queue, and ensure it runs on boot) and a reminder to verify the jobs in Settings > Technical > Automation > Scheduled Actions after starting the worker.

connector_onshape/data/ir_cron_data.xml (1)
4-38: Consider staggering cron start times when enabling jobs.

`ir_cron_import_onshape_documents` and `ir_cron_import_onshape_products` share the same 6-hour cadence; enabling both simultaneously can create API bursts and increase rate-limit pressure.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/data/ir_cron_data.xml` around lines 4 - 38, Stagger the cron start times to avoid simultaneous API bursts: for records ir_cron_import_onshape_documents (model.cron_import_documents) and ir_cron_import_onshape_products (model.cron_import_products) add distinct nextcall values so they don't start at the same moment (e.g., offset one by a few hours), and optionally set a different nextcall for ir_cron_import_onshape_boms (model.cron_import_boms) if needed; implement this by adding a <field name="nextcall" eval="..."> with appropriate datetime offsets for each record so enabling them won't trigger concurrent calls.

connector_onshape/tests/test_wizard.py (1)
36-53: `test_documents_only_import` doesn't verify the import action.

The patched `mock_import` at line 44 is unused, and the test body only asserts wizard field state without calling `wizard.action_import()`. Move the patch into the test body using a context manager and assert that `wizard.action_import()` invokes the mocked backend method.

Proposed test hardening
```diff
-    @patch.object(
-        type(
-            OnshapeTestCase.env["onshape.backend"]
-            if hasattr(OnshapeTestCase, "env")
-            else object
-        ),
-        "action_import_documents",
-    )
-    def test_documents_only_import(self, mock_import):
-        # Just verify the wizard can be created and doesn't crash
+    def test_documents_only_import(self):
         wizard = self.env["onshape.import.wizard"].create(
             {
                 "backend_id": self.backend.id,
                 "import_type": "documents",
             }
         )
+        with patch.object(
+            type(self.env["onshape.backend"]), "action_import_documents"
+        ) as mock_import:
+            wizard.action_import()
+            mock_import.assert_called_once()
         self.assertEqual(wizard.import_type, "documents")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/tests/test_wizard.py` around lines 36 - 53, The test test_documents_only_import currently patches action_import_documents but never uses the mock or calls the wizard import; update the test to patch the backend method within the test body (e.g., using patch.object as a context manager on the same target used now), create the wizard via self.env["onshape.import.wizard"].create(...), then call wizard.action_import() and assert the patched method (action_import_documents) was called (use mock_import.assert_called_once() or similar) to verify the import action is invoked; keep references to the existing symbols: test_documents_only_import, action_import_documents, wizard.action_import(), and the backend retrieval via OnshapeTestCase.env/ self.env to locate the code.

connector_onshape/tests/test_adapter.py (1)
78-89: Consider moving the `OnshapeQuotaError` import to the module level.

The import on line 80 inside the test method body works, but placing it alongside the other imports at the top of the file (lines 4-7) would be more conventional and consistent.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/tests/test_adapter.py` around lines 78 - 89, Move the OnshapeQuotaError import out of the test body and into the module-level imports so it’s declared alongside other imports at the top of connector_onshape/tests/test_adapter.py; update the test_quota_exhausted_raises to reference the module-level OnshapeQuotaError (used when adapter._request("GET", "/api/v6/documents") raises) instead of importing it inside the function.

connector_onshape/tests/test_listener.py (1)
10-47: Tests only verify no exception, not that export was actually skipped.

All three tests rely on "no error means it worked," but they don't assert that `export_record` (or `with_delay().export_record()`) was never called. If the listener had a bug that queued exports anyway (without raising), these tests would still pass.

Consider patching `export_record` or `with_delay` and asserting `.assert_not_called()` for at least the key scenario:

Example for test_connector_no_export_context_skips
```diff
+from unittest.mock import patch
+
 class TestProductListener(OnshapeTestCase):
     ...
     def test_connector_no_export_context_skips(self):
+        with patch.object(
+            type(self.env["onshape.product.product"]),
+            "export_record",
+        ) as mock_export:
             self.product_bolt.with_context(connector_no_export=True).write(
                 {"default_code": "KF-BOLT-002"}
             )
+        mock_export.assert_not_called()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/tests/test_listener.py` around lines 10 - 47, Update the three tests (test_connector_no_export_context_skips, test_unlinked_product_no_error, test_irrelevant_field_no_export) to assert that no export was queued by patching the export entry point: mock the model method that triggers the export (e.g., export_record on the Onshape listener or the recordset's with_delay().export_record chain) using unittest.mock.patch or monkeypatch, perform the write operation under test, and then call assert_not_called() on the mock; ensure you patch the exact symbol used by the listener (export_record or the with_delay proxy) so the test fails if an export is enqueued without raising.

connector_onshape/components/listener.py (2)
26-26: Prefix unused lambda parameters with `_` to satisfy ARG005.

`record` and `**kwargs` are structurally required by the `skip_if` decorator to match the wrapped function's signature, but their values go unused. Use underscore-prefixed names to make the intent explicit and silence the linter warning.

♻️ Proposed fix
```diff
-    @skip_if(lambda self, record, **kwargs: self.env.context.get("connector_no_export"))
+    @skip_if(lambda self, _record, **_kwargs: self.env.context.get("connector_no_export"))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/listener.py` at line 26, The skip_if decorator lambda currently declares unused parameters record and **kwargs which triggers ARG005; update the lambda in the `@skip_if`(...) call to prefix unused parameters with an underscore (e.g., use lambda self, _record, **_kwargs: self.env.context.get("connector_no_export")) so the signature still matches the wrapped function while silencing the linter warning for the skip_if decorator usage.
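For readers unfamiliar with the `skip_if` pattern, here is a generic stand-in showing why the lambda must mirror the handler's signature (a simplified sketch, not the real `component_event` decorator — the `Listener` class and its context handling are invented for illustration):

```python
import functools


def skip_if(predicate):
    """Skip the wrapped handler when predicate(*args, **kwargs) is truthy."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if predicate(*args, **kwargs):
                return None  # event silently skipped
            return func(*args, **kwargs)
        return wrapper
    return decorator


class Listener:
    def __init__(self, no_export=False):
        self.context = {"connector_no_export": no_export}
        self.exported = []

    # The lambda receives exactly what on_record_write receives, which is
    # why the unused parameters still have to appear (hence the _ prefix).
    @skip_if(lambda self, _record, **_kwargs: self.context.get("connector_no_export"))
    def on_record_write(self, record, fields=None):
        self.exported.append((record, fields))
```

Because the predicate is called with the handler's own arguments, its parameter list must stay structurally identical even when some values go unused.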
24-24: Suppress RUF012 via a `ClassVar` annotation on `_apply_on`.

Ruff flags `_apply_on = ["product.product"]` as a mutable default class attribute (RUF012). Annotating with `ClassVar` is the idiomatic fix used in strict OCA setups.

♻️ Proposed fix
```diff
+from typing import ClassVar
+
 class OnshapeProductListener(Component):
     ...
-    _apply_on = ["product.product"]
+    _apply_on: ClassVar = ["product.product"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/listener.py` at line 24, Ruff flags the class attribute _apply_on as a mutable default; annotate it with typing.ClassVar to suppress RUF012 by changing the declaration of _apply_on to use a ClassVar of a string-list type (e.g. ClassVar[List[str]] or ClassVar[list[str]] depending on your typing style) and add the required import(s) (ClassVar and List or use built-in list with forward-compatible typing) at the top of connector_onshape/components/listener.py so _apply_on remains a class-level constant.

connector_onshape/components/binder.py (2)
14-20: Suppress RUF012 via `ClassVar` on mutable class attributes.

Both `_inherit` (line 14) and `_apply_on` (lines 17–20) are mutable list class attributes flagged by Ruff RUF012. Annotate with `ClassVar` for lint compliance.

♻️ Proposed fix
```diff
+from typing import ClassVar
+
 class OnshapeBinder(Component):
     ...
-    _inherit = ["onshape.base", "base.binder"]
+    _inherit: ClassVar = ["onshape.base", "base.binder"]
     _usage = "binder"
     _external_field = "external_id"
-    _apply_on = [
+    _apply_on: ClassVar = [
         "onshape.product.product",
         "onshape.mrp.bom",
     ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/binder.py` around lines 14 - 20, The mutable class attributes _inherit and _apply_on in binder.py are flagged by RUF012; update their annotations to use typing.ClassVar (e.g., ClassVar[List[str]]) and import the required names (ClassVar, List) from typing so these remain class-level constants rather than instance attributes; modify the class definition where _inherit and _apply_on are declared to use the ClassVar[List[str]] annotation for each.
52-56: `if part_id:` treats empty string the same as `None`.

`make_compound_id("d", "e", "")` silently returns `"d/e"` rather than raising or including a trailing slash. If callers can pass `part_id=""` (e.g., from a model field that is falsy-empty), the intent should be explicit. Using `if part_id is not None:` makes the contract clearer.

♻️ Proposed fix
```diff
     @staticmethod
     def make_compound_id(document_id, element_id, part_id=None):
-        if part_id:
+        if part_id is not None:
             return f"{document_id}/{element_id}/{part_id}"
         return f"{document_id}/{element_id}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/binder.py` around lines 52 - 56, The function make_compound_id treats empty-string part_id as None because it checks `if part_id:`, which drops trailing slashes; change the predicate to `if part_id is not None:` so that an explicit empty string produces a compound id with a trailing slash (i.e., `f"{document_id}/{element_id}/{part_id}"`) while None still results in the two-segment form; update the method make_compound_id accordingly to use `is not None` to make the contract explicit.

connector_onshape/tests/test_export_product.py (1)
56-78: Missing test for malformed/1-part compound ID in `split_compound_id`.

The binder's `split_compound_id` has a silent fallback for inputs with fewer than two `/`-separated parts, returning only `{"document_id": external_id}`. This code path has no test coverage. Additionally, consider moving the inline `from ..components.binder import OnshapeBinder` imports to the class or module level to avoid repetition.

Would you like me to generate a test case for the malformed ID fallback path?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/tests/test_export_product.py` around lines 56 - 78, Add a unit test that exercises OnshapeBinder.split_compound_id for the malformed / single-part fallback path (e.g., pass "doc1" and assert it returns {"document_id": "doc1"} and also test empty string behavior if desired) to cover the code branch that returns only {"document_id": external_id}; also refactor the repeated inline imports in test_make_compound_id_with_part/test_make_compound_id_without_part/test_split_compound_id by moving "from ..components.binder import OnshapeBinder" to the module or test class level so the binder is imported once for all tests.

connector_onshape/models/onshape_backend.py (3)
177-193: Cron methods run imports synchronously — consider using `with_delay()`.

Each cron method calls `action_import_*()` synchronously in a loop. If any backend's import takes long or fails, it blocks subsequent backends and could exceed the cron timeout. Given that queue_job infrastructure is already in place, delegate to the job queue.

♻️ Suggested approach
```diff
     @api.model
     def cron_import_documents(self):
         backends = self.search([("state", "=", "active")])
         for backend in backends:
-            backend.action_import_documents()
+            backend.with_delay().action_import_documents()
```

Apply the same pattern to `cron_import_products` and `cron_import_boms`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/models/onshape_backend.py` around lines 177 - 193, The cron methods currently call action_import_documents/action_import_products/action_import_boms synchronously in a loop over backends found by search([("state","=","active")]); change each direct call to enqueue the work with the job queue by using backend.with_delay().action_import_documents(), backend.with_delay().action_import_products(), and backend.with_delay().action_import_boms() respectively so each backend import runs as a queued job instead of blocking the cron.
66-79: Count computations load all related records into memory.

`len(rec.document_ids)` triggers a full ORM read of every related record just to count them. For backends with hundreds or thousands of documents/bindings, this is wasteful.

♻️ Suggested: use `search_count` or `read_group`
```diff
     @api.depends("document_ids")
     def _compute_document_count(self):
-        for rec in self:
-            rec.document_count = len(rec.document_ids)
+        data = self.env["onshape.document"].read_group(
+            [("backend_id", "in", self.ids)],
+            ["backend_id"],
+            ["backend_id"],
+        )
+        mapped = {d["backend_id"][0]: d["backend_id_count"] for d in data}
+        for rec in self:
+            rec.document_count = mapped.get(rec.id, 0)

     @api.depends("product_binding_ids")
     def _compute_product_binding_count(self):
-        for rec in self:
-            rec.product_binding_count = len(rec.product_binding_ids)
+        data = self.env["onshape.product.product"].read_group(
+            [("backend_id", "in", self.ids)],
+            ["backend_id"],
+            ["backend_id"],
+        )
+        mapped = {d["backend_id"][0]: d["backend_id_count"] for d in data}
+        for rec in self:
+            rec.product_binding_count = mapped.get(rec.id, 0)

     @api.depends("bom_binding_ids")
     def _compute_bom_binding_count(self):
-        for rec in self:
-            rec.bom_binding_count = len(rec.bom_binding_ids)
+        data = self.env["onshape.mrp.bom"].read_group(
+            [("backend_id", "in", self.ids)],
+            ["backend_id"],
+            ["backend_id"],
+        )
+        mapped = {d["backend_id"][0]: d["backend_id_count"] for d in data}
+        for rec in self:
+            rec.bom_binding_count = mapped.get(rec.id, 0)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/models/onshape_backend.py` around lines 66 - 79, The three compute methods (_compute_document_count, _compute_product_binding_count, _compute_bom_binding_count) currently call len(rec.document_ids / product_binding_ids / bom_binding_ids) which forces loading all related records; change each to use a count query against the related model (e.g. self.env['onshape.document'].search_count([('backend_id','=', rec.id)]) or equivalent domain) or use read_group to get counts for all recs at once, then assign rec.document_count / product_binding_count / bom_binding_count from those count results to avoid materializing related recordsets.
131-136: `action_export_part_numbers` loads all bindings eagerly.

`self.product_binding_ids.filtered(...)` fetches every product binding and its related `odoo_id` into memory. For large backends, consider a domain-based search instead.

♻️ Suggested fix
```diff
     def action_export_part_numbers(self):
         self.ensure_one()
         self._check_active()
-        bindings = self.product_binding_ids.filtered(lambda b: b.odoo_id.default_code)
+        bindings = self.env["onshape.product.product"].search([
+            ("backend_id", "=", self.id),
+            ("odoo_id.default_code", "!=", False),
+        ])
         for binding in bindings:
             binding.with_delay().export_record()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/models/onshape_backend.py` around lines 131 - 136, action_export_part_numbers currently loads all product_binding_ids (and their odoo_id) into memory via self.product_binding_ids.filtered(...); replace this with a domain search on the binding model to fetch only bindings for this backend that have an odoo_id.default_code and iterate them to call binding.with_delay().export_record(); specifically, keep ensure_one() and _check_active(), then use env['connector.onshape.product.binding'].search(domain) where domain targets the current backend (e.g. [('backend_id','=', self.id)]) and non-empty default_code (e.g. [('odoo_id.default_code','!=', False)]) before calling export_record on each binding.

connector_onshape/components/importer.py (3)
396-398: MD5 is flagged by static analysis — consider SHA-256 for BOM hashing.

While MD5 here is not used for security (just change detection), `hashlib.sha256` is equally fast in practice and avoids linter warnings and future audit questions. It's a low-effort swap.

♻️ Minimal fix
```diff
-        bom_hash = hashlib.md5(
+        bom_hash = hashlib.sha256(
             json.dumps(bom_items, sort_keys=True).encode()
         ).hexdigest()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/importer.py` around lines 396 - 398, Replace the MD5 usage when computing bom_hash with SHA256: change the call to hashlib.md5(...) to hashlib.sha256(...) while keeping the same json.dumps(bom_items, sort_keys=True).encode() and .hexdigest() call; update any variable/comment referencing MD5 accordingly so the BOM hash generation (bom_hash from bom_items) uses hashlib.sha256 to satisfy the linter/audit requirement.
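As context for the hash swap above: the change-detection scheme only works because `sort_keys=True` makes the serialization deterministic. A standalone sketch (the `bom_change_hash` function name is invented here, not the importer's):

```python
import hashlib
import json


def bom_change_hash(bom_items):
    """Hash a BOM deterministically: identical content yields the same
    digest regardless of dict key order, so only real changes register."""
    payload = json.dumps(bom_items, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()
```

Comparing the stored digest against a freshly computed one lets the importer skip a BOM rewrite when nothing actually changed.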
463-483: Silently skipping unmatched BOM components may confuse users.

`_build_bom_lines` drops components that can't be matched to an Odoo product without any logging. Combined with the match_score, users can see that components are missing but not which ones. A debug-level log per skipped item would aid troubleshooting.

♻️ Suggested improvement
```diff
             product = self._find_component_product(backend, item)
             if not product:
+                _logger.debug(
+                    "BOM component not matched, skipping: %s (partNumber=%s)",
+                    item.get("name", ""),
+                    item.get("partNumber", ""),
+                )
                 continue
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/importer.py` around lines 463 - 483, The _build_bom_lines function currently drops BOM items when _find_component_product returns no product; add a debug-level log for each skipped item so users can see which components were unmatched and why. In _build_bom_lines, after "if not product: continue", insert a debug log call (using the connector's existing logger, e.g., self.logger or self._logger) that outputs identifying item details (name/part number), the attempted match inputs, and any available match_score or reason from _find_component_product; ensure the log message is concise and only at debug level so normal runs aren’t noisy.
106-113: Bare `except Exception` catches mask different failure modes.

All three API call sites catch `Exception` broadly. A transient network error, an authentication failure (401/403), and a rate-limit response (402/429) all get the same treatment: a warning log and skip. Authentication failures especially should propagate rather than be silently swallowed, as they indicate a systemic configuration issue that will affect every subsequent call.

Consider catching a more specific exception hierarchy (e.g., `requests.RequestException` or a custom adapter exception) and letting auth/config errors propagate.

Also applies to: 186-194, 381-389
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/importer.py` around lines 106 - 113, The current bare "except Exception" around the API calls (the blocks that log "Could not fetch elements for document %s" and return, referencing document.onshape_document_id and workspace_id) should be replaced with targeted exception handling: catch requests.RequestException (or your connector's specific HTTP client error) to handle transient network errors and log/return, but catch requests.HTTPError (or inspect exception.response.status_code) and re-raise for authentication errors (401/403) so they propagate, and treat rate-limit codes (429/402) separately (e.g., log + backoff/retry or return), ensuring you import requests (or the adapter) and avoid swallowing unexpected exceptions.connector_onshape/controllers/webhook.py (1)
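One way to realize the suggested split is to classify status codes into fatal vs. transient exception types at the adapter boundary. This is a sketch with invented exception names — the module's adapter may already define its own hierarchy:

```python
class OnshapeAuthError(Exception):
    """401/403: configuration problem; should propagate to the caller."""


class OnshapeTransientError(Exception):
    """Rate limits and server errors: safe to log, skip, or retry."""


def classify_response(status_code):
    """Raise a typed error for failure codes; pass other codes through."""
    if status_code in (401, 403):
        raise OnshapeAuthError(f"authentication failed: HTTP {status_code}")
    if status_code in (402, 429) or status_code >= 500:
        raise OnshapeTransientError(f"transient failure: HTTP {status_code}")
    return status_code
```

Importer call sites can then catch only `OnshapeTransientError`, letting auth/config errors abort the whole run as the review suggests.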
connector_onshape/controllers/webhook.py (1)

96-125: `_handle_metadata_change` re-imports all products for the backend, not just the affected document.

`backend.with_delay().action_import_products()` triggers a full product batch import across all documents, not scoped to the document identified by `doc_id`. For backends with many documents, this is unnecessarily expensive for a single-document metadata change. Consider passing the document context to scope the re-import.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/controllers/webhook.py` around lines 96 - 125, The current _handle_metadata_change uses backend.with_delay(...).action_import_products() which triggers a full import across all documents; instead scope the job to the specific document by invoking the import on the document record or passing the document context/ID to the import task. Replace backend.with_delay(...).action_import_products() with a call that targets the document (e.g., document.with_delay(priority=10, description=f"Re-import products for doc {document.name}").action_import_products() or backend.with_delay(...).action_import_products(document_id=document.id) depending on the existing task signature) so only the affected document is re-imported.

connector_onshape/models/onshape_mrp_bom.py (1)
67-79: URL may contain empty path segments when workspace or element is missing.

When `doc.onshape_default_workspace_id` or `rec.onshape_element_id` is falsy, the fallback `or ""` produces paths like `/documents/.../w//e/`, which are malformed. The `onshape_product_product.py` snippet has the same pattern, so this may be a broader issue, but consider gating the URL construction on the presence of workspace and element IDs, or at minimum the workspace.

♻️ Suggested fix
```diff
     def _compute_onshape_url(self):
         for rec in self:
             doc = rec.onshape_document_id
-            if doc and doc.backend_id.base_url and doc.onshape_document_id:
-                workspace = doc.onshape_default_workspace_id or ""
-                elem = rec.onshape_element_id or ""
+            if (
+                doc
+                and doc.backend_id.base_url
+                and doc.onshape_document_id
+                and doc.onshape_default_workspace_id
+            ):
+                workspace = doc.onshape_default_workspace_id
+                elem = rec.onshape_element_id or ""
                 rec.onshape_url = (
                     f"{doc.backend_id.base_url}"
                     f"/documents/{doc.onshape_document_id}"
                     f"/w/{workspace}/e/{elem}"
                 )
             else:
                 rec.onshape_url = False
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/models/onshape_mrp_bom.py` around lines 67 - 79, The _compute_onshape_url method builds URLs with empty path segments when workspace or element are missing; update it to first ensure rec.onshape_document_id and doc.backend_id.base_url exist, then build the URL starting with f"{doc.backend_id.base_url}/documents/{doc.onshape_document_id}" and append "/w/{workspace}" only if doc.onshape_default_workspace_id is truthy and append "/e/{elem}" only if rec.onshape_element_id is truthy; set rec.onshape_url = False if required doc/base_url/document id is missing. Ensure you reference the existing symbols: _compute_onshape_url, onshape_document_id, onshape_default_workspace_id, onshape_element_id, and onshape_url.
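The gating suggested above can also be factored into a pure helper that never emits empty path segments. A standalone sketch with assumed parameter names, not the model code:

```python
def build_onshape_url(base_url, document_id, workspace_id=None, element_id=None):
    """Build a document URL, appending /w/ and /e/ segments only when
    their IDs are actually present."""
    if not (base_url and document_id):
        return None
    url = f"{base_url}/documents/{document_id}"
    if workspace_id:
        url += f"/w/{workspace_id}"
        if element_id:
            url += f"/e/{element_id}"
    return url
```

Keeping the string assembly in one pure function also makes the empty-segment edge cases trivially unit-testable, which the compute methods in two models currently are not.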
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@connector_onshape/components/adapter.py`:
- Around line 171-177: The Retry-After header parsing can raise ValueError and
abort the 429 retry path; update the block that computes wait (using resp,
retry_after, wait, RETRY_BACKOFF, attempt) to guard parsing: attempt to convert
retry_after to an int inside a try/except (or coerce via int(float(...))) and on
any TypeError/ValueError fall back to RETRY_BACKOFF * (attempt + 1); ensure the
final wait variable always gets a valid numeric fallback so retries continue.
- Around line 9-10: The nonce generation currently uses random.choice (imported
via "import random") which is not cryptographically secure; replace usages of
random.choice with secrets.choice and add "import secrets" (remove or keep
random only if used elsewhere). Update the nonce generation site(s) (the
expression using random.choice and string) to use secrets.choice (or better, use
secrets.token_urlsafe/token_hex if preferred) so HMAC nonces are
cryptographically secure; ensure any tests or callers still reference the same
variable name (e.g., nonce) and that imports are adjusted accordingly.
In `@connector_onshape/components/binder.py`:
- Around line 58-69: split_compound_id currently calls external_id.split("/")
without guarding against None and returns inconsistent dict shapes for malformed
IDs; update split_compound_id to validate input and normalize outputs: if
external_id is None or not a str raise a ValueError (or TypeError) with a clear
message, split into parts, and always return the same keys ("document_id",
"element_id", "part_id") populating missing values with None (or explicitly
raise on wrong segment counts if you prefer strictness) so callers receive a
predictable dict; reference function name split_compound_id and the keys
"document_id", "element_id", "part_id" when making the change.
In `@connector_onshape/components/exporter.py`:
- Around line 59-60: The early return when items is falsy prevents updating the
stored sync_date; instead, ensure you still call binding.write({"sync_date":
<current timestamp>}) before returning so an empty items response advances the
sync cursor. Locate the block in exporter.py where items is checked (the `if not
items: return`), replace the silent return with a call to binding.write(...)
using the same timestamp logic used after successful exports (or compute now via
the same helper used elsewhere), then return; this preserves existing behavior
while ensuring `sync_date` is updated even when the items list is empty.
In `@connector_onshape/components/importer.py`:
- Around line 498-504: The "Try by name" fallback incorrectly searches the
default_code field; update the search domain used in the block that references
item_name and product (self.env["product.product"].search) to query the name
field instead of default_code (i.e., use ("name", "=", item_name)) so the lookup
truly attempts a product name match; keep the limit=1 and return behavior
unchanged.
In `@connector_onshape/controllers/webhook.py`:
- Around line 43-56: The webhook currently skips HMAC verification when
backend.webhook_secret is empty, but handlers (e.g., code paths reading
payload.get("documentId") and payload.get("transitionName")) trust that data and
act on it; change behavior so webhook events cannot be forged by a guessed
backend_id by either: (A) enforcing a webhook secret requirement in webhook.py
by returning an error if backend.webhook_secret is not set (i.e., refuse
processing before calling _validate_signature), or (B) keeping optional HMAC but
making all handlers re-fetch authoritative state from the Onshape API (use the
Onshape client to retrieve document/element/transition state instead of using
payload.get(...) values) before modifying binding state; pick one option and
apply consistently to functions that currently trust the payload.
- Around line 127-159: The handlers _handle_workflow_transition (and likewise
_handle_revision_created) currently update all product bindings for a document;
change the search to narrow to the affected element/part by reading
payload.get("elementId") and payload.get("partId") and adding corresponding
search domain clauses (e.g., filter by the binding's element id and/or part id
fields in addition to backend_id and onshape_document_id) so only bindings
matching the elementId and/or partId are updated; keep the existing fallback to
document-only updates if elementId/partId are absent.
- Around line 30-61: The webhook handler currently uses `@http.route` with
type="json" and optionally skips HMAC validation when backend.webhook_secret is
empty; update the webhook to use type="http" (make intent explicit for plain
JSON), require or strongly log/warn when backend.webhook_secret is not set
(reject requests or return an error instead of silently accepting them), ensure
_validate_signature(raw_body, backend.webhook_secret, signature) is always
invoked when a secret exists and fail fast on invalid signatures, and
clarify/reinforce the docstring behavior by either re-fetching event details
from Onshape before trusting payload data or explicitly documenting that
payloads are treated as untrusted; locate these changes in the webhook method,
the route decorator, and the use of backend.webhook_secret/_validate_signature
to implement.
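The signature check itself can be sketched with the standard library (a hex-encoded HMAC-SHA256 digest is an assumption here; Onshape's actual webhook header encoding should be confirmed before relying on this):

```python
import hashlib
import hmac


def validate_signature(raw_body, secret, signature):
    """Constant-time HMAC-SHA256 check of a raw webhook payload."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(expected, signature)
```

The caller would reject the request (rather than skip the check) whenever `backend.webhook_secret` is unset, per option (A) above.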
In `@connector_onshape/data/queue_job_function_data.xml`:
- Around line 4-34: The CI fails because the field related_action_enable is
invalid on model queue.job.function; remove the <field
name="related_action_enable" eval="False" /> line from each record referencing
queue.job.function (records job_function_import_documents,
job_function_import_products, job_function_import_boms, and
job_function_export_product) in
connector_onshape/data/queue_job_function_data.xml so the records only include
valid fields (model_id, method, channel_id, retry_pattern).
In `@connector_onshape/models/onshape_document.py`:
- Around line 80-86: Replace the _compute_display_name computed-field approach
with an override of name_get() for Odoo 16 compatibility: implement a
name_get(self) method that iterates over records in self, builds the label using
rec.name and rec.onshape_document_id (use the first 8 chars of
onshape_document_id when present, e.g. "[{rec.onshape_document_id[:8]}]
{rec.name}"), and returns a list of (rec.id, label) tuples; then remove or
disable the `@api.depends/_compute_display_name` implementation so display_name
resolution consistently uses name_get across Odoo 16 code paths.
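The labelling rule can be isolated in a pure helper that a `name_get()` override would call per record, e.g. `return [(rec.id, document_label(rec.name, rec.onshape_document_id)) for rec in self]` (helper name hypothetical):

```python
def document_label(name, onshape_document_id):
    """Build the '[abcd1234] Name' label described in the review."""
    if onshape_document_id:
        return f"[{onshape_document_id[:8]}] {name}"
    return name
```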
In `@connector_onshape/models/onshape_product_product.py`:
- Around line 91-103: The _compute_onshape_url method builds malformed URLs when
either onshape_default_workspace_id or onshape_element_id is missing; update
_compute_onshape_url to only construct the URL when doc and
doc.backend_id.base_url are set AND both doc.onshape_default_workspace_id and
rec.onshape_element_id are truthy, otherwise set rec.onshape_url = False;
reference the symbols _compute_onshape_url, onshape_document_id,
onshape_default_workspace_id, onshape_element_id, onshape_url, and
backend_id.base_url when making the change.
In `@connector_onshape/models/product_template.py`:
- Around line 15-23: The dependency decorator on _compute_onshape_document_count
only tracks product_variant_ids.onshape_bind_ids but the method reads
bind.onshape_document_id; update the `@api.depends` for the
_compute_onshape_document_count method to include the nested field
(product_variant_ids.onshape_bind_ids.onshape_document_id) so changes to a
binding's onshape_document_id will invalidate the cache and trigger
recomputation of onshape_document_count.
In `@connector_onshape/static/description/index.html`:
- Line 372: Update the AGPL-3 badge href in the README.rst source
(connector_onshape/README.rst) to use HTTPS instead of HTTP: replace
"http://www.gnu.org/licenses/agpl-3.0-standalone.html" with
"https://www.gnu.org/licenses/agpl-3.0-standalone.html" (the change will then be
picked up by oca-gen-addon-readme and regenerate the index.html).
In `@connector_onshape/tests/test_import_product.py`:
- Around line 49-57: The test test_mcmaster_catalog_match currently unpacks
match_type but doesn't assert it and the comment wrongly states Strategy 1;
update the test to assert match_type equals the McMaster match value used by the
mapper (match_product) — i.e., use the same constant/string other tests assert
for McMaster matches (copy the exact value used elsewhere, e.g.,
mapper.MATCH_TYPE_MCMASTER or 'mcmaster') and change the inline comment to say
the name is matched by MCMASTER_RE extraction (Strategy 3) because
VERSION_SUFFIX_RE only strips numeric .#### suffixes and does not remove the
"_Grade 5 Steel Bolt" part, so base_no_ver still contains the underscore string
and the code falls back to the MCMASTER_RE extraction.
In `@connector_onshape/views/onshape_backend_views.xml`:
- Around line 113-117: The oauth2_token field in the view is rendered in plain
text; change its field tag to mark it as a password so it's masked in the UI by
adding password="True" to the <field name="oauth2_token" /> declaration (same
pattern used for oauth2_client_secret and api_secret), ensuring the form view
for the related model now renders oauth2_token masked.
In `@connector_onshape/wizards/onshape_import_wizard.py`:
- Around line 47-48: The wizard currently only writes True and never resets the
backend flag; update the write to persist the actual checkbox state by writing
the boolean value of self.auto_create_products (instead of only writing True) so
backend.write receives True or False accordingly—modify the block that contains
self.auto_create_products and backend.write({"auto_create_products": ...}) to
pass the real boolean state (e.g., using bool(self.auto_create_products) or the
checkbox value) so the backend flag is cleared when unchecked.
---
Nitpick comments:
In `@connector_onshape/components/binder.py`:
- Around line 14-20: The mutable class attributes _inherit and _apply_on in
binder.py are flagged by RUF012; update their annotations to use typing.ClassVar
(e.g., ClassVar[List[str]]) and import the required names (ClassVar, List) from
typing so these remain class-level constants rather than instance attributes;
modify the class definition where _inherit and _apply_on are declared to use the
ClassVar[List[str]] annotation for each.
- Around line 52-56: The function make_compound_id treats empty-string part_id
as None because it checks `if part_id:`, which drops trailing slashes; change
the predicate to `if part_id is not None:` so that an explicit empty string
produces a compound id with a trailing slash (i.e.,
`f"{document_id}/{element_id}/{part_id}"`) while None still results in the
two-segment form; update the method make_compound_id accordingly to use `is not
None` to make the contract explicit.
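A minimal sketch of the `is not None` contract:

```python
def make_compound_id(document_id, element_id, part_id=None):
    """Join IDs into a compound key; `is not None` preserves empty strings."""
    if part_id is not None:
        return f"{document_id}/{element_id}/{part_id}"
    return f"{document_id}/{element_id}"
```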
In `@connector_onshape/components/importer.py`:
- Around line 396-398: Replace the MD5 usage when computing bom_hash with
SHA256: change the call to hashlib.md5(...) to hashlib.sha256(...) while keeping
the same json.dumps(bom_items, sort_keys=True).encode() and .hexdigest() call;
update any variable/comment referencing MD5 accordingly so the BOM hash
generation (bom_hash from bom_items) uses hashlib.sha256 to satisfy the
linter/audit requirement.
- Around line 463-483: The _build_bom_lines function currently drops BOM items
when _find_component_product returns no product; add a debug-level log for each
skipped item so users can see which components were unmatched and why. In
_build_bom_lines, after "if not product: continue", insert a debug log call
(using the connector's existing logger, e.g., self.logger or self._logger) that
outputs identifying item details (name/part number), the attempted match inputs,
and any available match_score or reason from _find_component_product; ensure the
log message is concise and only at debug level so normal runs aren’t noisy.
- Around line 106-113: The current bare "except Exception" around the API calls
(the blocks that log "Could not fetch elements for document %s" and return,
referencing document.onshape_document_id and workspace_id) should be replaced
with targeted exception handling: catch requests.RequestException (or your
connector's specific HTTP client error) to handle transient network errors and
log/return, but catch requests.HTTPError (or inspect
exception.response.status_code) and re-raise for authentication errors (401/403)
so they propagate, and treat rate-limit codes (429/402) separately (e.g., log +
backoff/retry or return), ensuring you import requests (or the adapter) and
avoid swallowing unexpected exceptions.
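The MD5-to-SHA-256 suggestion above amounts to a one-line swap; as a standalone sketch (helper name hypothetical):

```python
import hashlib
import json


def compute_bom_hash(bom_items):
    """Key-order-insensitive SHA-256 fingerprint of a BOM structure."""
    payload = json.dumps(bom_items, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()
```

`sort_keys=True` makes the hash stable across dict-key ordering, so unchanged BOMs re-imported later produce the same fingerprint.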
In `@connector_onshape/components/listener.py`:
- Line 26: The skip_if decorator lambda currently declares unused parameters
record and **kwargs which triggers ARG005; update the lambda in the
`@skip_if`(...) call to prefix unused parameters with an underscore (e.g., use
lambda self, _record, **_kwargs: self.env.context.get("connector_no_export")) so
the signature still matches the wrapped function while silencing the linter
warning for the skip_if decorator usage.
- Line 24: Ruff flags the class attribute _apply_on as a mutable default;
annotate it with typing.ClassVar to suppress RUF012 by changing the declaration
of _apply_on to use a ClassVar of a string-list type (e.g. ClassVar[List[str]]
or ClassVar[list[str]] depending on your typing style) and add the required
import(s) (ClassVar and List or use built-in list with forward-compatible
typing) at the top of connector_onshape/components/listener.py so _apply_on
remains a class-level constant.
In `@connector_onshape/controllers/webhook.py`:
- Around line 96-125: The current _handle_metadata_change uses
backend.with_delay(...).action_import_products() which triggers a full import
across all documents; instead scope the job to the specific document by invoking
the import on the document record or passing the document context/ID to the
import task. Replace backend.with_delay(...).action_import_products() with a
call that targets the document (e.g., document.with_delay(priority=10,
description=f"Re-import products for doc
{document.name}").action_import_products() or
backend.with_delay(...).action_import_products(document_id=document.id)
depending on the existing task signature) so only the affected document is
re-imported.
In `@connector_onshape/data/ir_cron_data.xml`:
- Around line 4-38: Stagger the cron start times to avoid simultaneous API
bursts: for records ir_cron_import_onshape_documents
(model.cron_import_documents) and ir_cron_import_onshape_products
(model.cron_import_products) add distinct nextcall values so they don't start at
the same moment (e.g., offset one by a few hours), and optionally set a
different nextcall for ir_cron_import_onshape_boms (model.cron_import_boms) if
needed; implement this by adding a <field name="nextcall" eval="..."> with
appropriate datetime offsets for each record so enabling them won't trigger
concurrent calls.
In `@connector_onshape/models/onshape_backend.py`:
- Around line 177-193: The cron methods currently call
action_import_documents/action_import_products/action_import_boms synchronously
in a loop over backends found by search([("state","=","active")]); change each
direct call to enqueue the work with the job queue by using
backend.with_delay().action_import_documents(),
backend.with_delay().action_import_products(), and
backend.with_delay().action_import_boms() respectively so each backend import
runs as a queued job instead of blocking the cron.
- Around line 66-79: The three compute methods (_compute_document_count,
_compute_product_binding_count, _compute_bom_binding_count) currently call
len(rec.document_ids / product_binding_ids / bom_binding_ids) which forces
loading all related records; change each to use a count query against the
related model (e.g.
self.env['onshape.document'].search_count([('backend_id','=', rec.id)]) or
equivalent domain) or use read_group to get counts for all recs at once, then
assign rec.document_count / product_binding_count / bom_binding_count from those
count results to avoid materializing related recordsets.
- Around line 131-136: action_export_part_numbers currently loads all
product_binding_ids (and their odoo_id) into memory via
self.product_binding_ids.filtered(...); replace this with a domain search on the
binding model to fetch only bindings for this backend that have an
odoo_id.default_code and iterate them to call
binding.with_delay().export_record(); specifically, keep ensure_one() and
_check_active(), then use
env['connector.onshape.product.binding'].search(domain) where domain targets the
current backend (e.g. [('backend_id','=', self.id)]) and non-empty default_code
(e.g. [('odoo_id.default_code','!=', False)]) before calling export_record on
each binding.
In `@connector_onshape/models/onshape_mrp_bom.py`:
- Around line 67-79: The _compute_onshape_url method builds URLs with empty path
segments when workspace or element are missing; update it to first ensure
rec.onshape_document_id and doc.backend_id.base_url exist, then build the URL
starting with f"{doc.backend_id.base_url}/documents/{doc.onshape_document_id}"
and append "/w/{workspace}" only if doc.onshape_default_workspace_id is truthy
and append "/e/{elem}" only if rec.onshape_element_id is truthy; set
rec.onshape_url = False if required doc/base_url/document id is missing. Ensure
you reference the existing symbols: _compute_onshape_url, onshape_document_id,
onshape_default_workspace_id, onshape_element_id, and onshape_url.
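The URL-building rule suggested for both binding models can be factored into one pure helper (name hypothetical; the `/e/` segment is nested under `/w/` here on the assumption that an element path only makes sense within a workspace in Onshape's URL scheme):

```python
def build_onshape_url(base_url, document_id, workspace_id=None, element_id=None):
    """Build a document URL, omitting empty /w/ and /e/ path segments."""
    if not (base_url and document_id):
        return False  # mirrors rec.onshape_url = False in the compute methods
    url = f"{base_url}/documents/{document_id}"
    if workspace_id:
        url += f"/w/{workspace_id}"
        if element_id:
            url += f"/e/{element_id}"
    return url
```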
In `@connector_onshape/readme/CONFIGURE.rst`:
- Around line 97-107: Update the "Scheduled Sync (Cron Jobs)" section to add a
prerequisite note that enabling the cron jobs ("Onshape: Import Documents",
"Onshape: Import Products", "Onshape: Import BOMs") is not sufficient by itself
— a queue/job runner or worker process must be configured and running for queued
import jobs to actually execute; include a short bullet describing how to
configure/start the job runner (e.g., enable the worker service, point it at the
app's job queue, and ensure it runs on boot) and a reminder to verify the jobs
in Settings > Technical > Automation > Scheduled Actions after starting the
worker.
In `@connector_onshape/readme/USAGE.rst`:
- Around line 35-40: Update the "Automatic Export" section to add a brief
job-monitoring hint: state that exports are performed asynchronously when a
product's default_code or name changes and instruct users to check Odoo's
background job/queue dashboard (Queue Jobs / Settings > Technical > Jobs or the
project's job-queue UI) for failure/retry status, review worker logs for error
details, and verify the product's Onshape binding/part association to
troubleshoot stuck exports; reference the automatic export/job terminology
(automatic export, default_code, name, export job) so readers can locate and
correlate the docs with the system UI and logs.
In `@connector_onshape/tests/test_adapter.py`:
- Around line 78-89: Move the OnshapeQuotaError import out of the test body and
into the module-level imports so it’s declared alongside other imports at the
top of connector_onshape/tests/test_adapter.py; update the
test_quota_exhausted_raises to reference the module-level OnshapeQuotaError
(used when adapter._request("GET", "/api/v6/documents") raises) instead of
importing it inside the function.
In `@connector_onshape/tests/test_export_product.py`:
- Around line 56-78: Add a unit test that exercises
OnshapeBinder.split_compound_id for the malformed / single-part fallback path
(e.g., pass "doc1" and assert it returns {"document_id": "doc1"} and also test
empty string behavior if desired) to cover the code branch that returns only
{"document_id": external_id}; also refactor the repeated inline imports in
test_make_compound_id_with_part/test_make_compound_id_without_part/test_split_compound_id
by moving "from ..components.binder import OnshapeBinder" to the module or test
class level so the binder is imported once for all tests.
In `@connector_onshape/tests/test_listener.py`:
- Around line 10-47: Update the three tests
(test_connector_no_export_context_skips, test_unlinked_product_no_error,
test_irrelevant_field_no_export) to assert that no export was queued by patching
the export entry point: mock the model method that triggers the export (e.g.,
export_record on the Onshape listener or the recordset's
with_delay().export_record chain) using unittest.mock.patch or monkeypatch,
perform the write operation under test, and then call assert_not_called() on the
mock; ensure you patch the exact symbol used by the listener (export_record or
the with_delay proxy) so the test fails if an export is enqueued without
raising.
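The assertion pattern described above can be sketched outside Odoo with a stand-in listener (class and method names hypothetical; in the real tests the patch target is the listener's `export_record` or its `with_delay` proxy):

```python
from unittest.mock import patch


class ProductListener:
    """Stand-in for the Onshape listener; real code lives in listener.py."""

    RELEVANT_FIELDS = {"default_code", "name"}

    def on_record_write(self, record, fields, no_export=False):
        if no_export or not (self.RELEVANT_FIELDS & set(fields)):
            return  # export suppressed, or nothing relevant changed
        self.export_record(record)

    def export_record(self, record):
        pass  # would enqueue a queue_job export in the real component
```

Patching `export_record` and calling `assert_not_called()` makes the test fail loudly if an export is ever enqueued on an irrelevant write, instead of merely not raising.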
In `@connector_onshape/tests/test_wizard.py`:
- Around line 36-53: The test test_documents_only_import currently patches
action_import_documents but never uses the mock or calls the wizard import;
update the test to patch the backend method within the test body (e.g., using
patch.object as a context manager on the same target used now), create the
wizard via self.env["onshape.import.wizard"].create(...), then call
wizard.action_import() and assert the patched method (action_import_documents)
was called (use mock_import.assert_called_once() or similar) to verify the
import action is invoked; keep references to the existing symbols:
test_documents_only_import, action_import_documents, wizard.action_import(), and
the backend retrieval via OnshapeTestCase.env/ self.env to locate the code.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (3)
connector_onshape/security/ir.model.access.csv is excluded by !**/*.csv
connector_onshape/static/description/icon.png is excluded by !**/*.png
connector_onshape/static/description/icon.svg is excluded by !**/*.svg
📒 Files selected for processing (51)
connector_onshape/README.rst
connector_onshape/__init__.py
connector_onshape/__manifest__.py
connector_onshape/components/__init__.py
connector_onshape/components/adapter.py
connector_onshape/components/binder.py
connector_onshape/components/core.py
connector_onshape/components/exporter.py
connector_onshape/components/importer.py
connector_onshape/components/listener.py
connector_onshape/components/mapper.py
connector_onshape/controllers/__init__.py
connector_onshape/controllers/webhook.py
connector_onshape/data/ir_cron_data.xml
connector_onshape/data/queue_job_channel_data.xml
connector_onshape/data/queue_job_function_data.xml
connector_onshape/models/__init__.py
connector_onshape/models/mrp_bom.py
connector_onshape/models/onshape_backend.py
connector_onshape/models/onshape_document.py
connector_onshape/models/onshape_mrp_bom.py
connector_onshape/models/onshape_product_product.py
connector_onshape/models/product_product.py
connector_onshape/models/product_template.py
connector_onshape/readme/CONFIGURE.rst
connector_onshape/readme/CONTRIBUTORS.rst
connector_onshape/readme/DESCRIPTION.rst
connector_onshape/readme/USAGE.rst
connector_onshape/security/onshape_security.xml
connector_onshape/static/description/index.html
connector_onshape/tests/__init__.py
connector_onshape/tests/common.py
connector_onshape/tests/test_adapter.py
connector_onshape/tests/test_backend.py
connector_onshape/tests/test_export_product.py
connector_onshape/tests/test_import_bom.py
connector_onshape/tests/test_import_product.py
connector_onshape/tests/test_importer_flow.py
connector_onshape/tests/test_listener.py
connector_onshape/tests/test_webhook.py
connector_onshape/tests/test_wizard.py
connector_onshape/views/mrp_bom_views.xml
connector_onshape/views/onshape_backend_views.xml
connector_onshape/views/onshape_document_views.xml
connector_onshape/views/onshape_product_views.xml
connector_onshape/views/product_template_views.xml
connector_onshape/wizards/__init__.py
connector_onshape/wizards/onshape_import_wizard.py
connector_onshape/wizards/onshape_import_wizard_views.xml
setup/connector_onshape/odoo/addons/connector_onshape
setup/connector_onshape/setup.py
```python
@staticmethod
def split_compound_id(external_id):
    parts = external_id.split("/")
    if len(parts) == 3:
        return {
            "document_id": parts[0],
            "element_id": parts[1],
            "part_id": parts[2],
        }
    if len(parts) == 2:
        return {"document_id": parts[0], "element_id": parts[1]}
    return {"document_id": external_id}
```
split_compound_id has no guard against None input and silently swallows malformed IDs.
- Calling `split_compound_id(None)` raises `AttributeError` on `.split("/")`.
- A 1-segment input (no `/`) returns `{"document_id": external_id}` with no `element_id` key, silently returning a partial dict that callers are unlikely to expect.
🛡️ Proposed fix

```diff
 @staticmethod
 def split_compound_id(external_id):
+    if not external_id:
+        raise ValueError("external_id must be a non-empty string")
     parts = external_id.split("/")
     if len(parts) == 3:
         return {
             "document_id": parts[0],
             "element_id": parts[1],
             "part_id": parts[2],
         }
     if len(parts) == 2:
         return {"document_id": parts[0], "element_id": parts[1]}
-    return {"document_id": external_id}
+    raise ValueError(f"Cannot parse compound ID: {external_id!r}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@connector_onshape/components/binder.py` around lines 58 - 69,
split_compound_id currently calls external_id.split("/") without guarding
against None and returns inconsistent dict shapes for malformed IDs; update
split_compound_id to validate input and normalize outputs: if external_id is
None or not a str raise a ValueError (or TypeError) with a clear message, split
into parts, and always return the same keys ("document_id", "element_id",
"part_id") populating missing values with None (or explicitly raise on wrong
segment counts if you prefer strictness) so callers receive a predictable dict;
reference function name split_compound_id and the keys "document_id",
"element_id", "part_id" when making the change.
```html
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:a5b097e306fb250e6bb558ec033e08f2af79ffbb0c1f3ef97cbefb94e8916081
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! -->
<p><a class="reference external image-reference" href="https://odoo-community.org/page/development-status"><img alt="Beta" src="https://img.shields.io/badge/maturity-Beta-yellow.png" /></a> <a class="reference external image-reference" href="http://www.gnu.org/licenses/agpl-3.0-standalone.html"><img alt="License: AGPL-3" src="https://img.shields.io/badge/licence-AGPL--3-blue.png" /></a> <a class="reference external image-reference" href="https://github.com/OCA/connector/tree/16.0/connector_onshape"><img alt="OCA/connector" src="https://img.shields.io/badge/github-OCA%2Fconnector-lightgray.png?logo=github" /></a> <a class="reference external image-reference" href="https://translation.odoo-community.org/projects/connector-16-0/connector-16-0-connector_onshape"><img alt="Translate me on Weblate" src="https://img.shields.io/badge/weblate-Translate%20me-F47D42.png" /></a> <a class="reference external image-reference" href="https://runboat.odoo-community.org/builds?repo=OCA/connector&target_branch=16.0"><img alt="Try me on Runboat" src="https://img.shields.io/badge/runboat-Try%20me-875A7B.png" /></a></p>
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# First, find the README.rst file in connector_onshape directory
find . -name "README.rst" -path "*/connector_onshape/*" | head -5
```

Repository: kencove/connector
Length of output: 91

🏁 Script executed:

```shell
#!/bin/bash
# Search for the HTTP AGPL URL in RST files
rg -n "http://www.gnu.org/licenses/agpl" --type rst
```

Repository: kencove/connector
Length of output: 158

🏁 Script executed:

```shell
#!/bin/bash
# Also check for any AGPL references in README.rst files to see the context
rg -B2 -A2 "agpl.*3.*standalone" --type rst -i
```

Repository: kencove/connector
Length of output: 484

🏁 Script executed:

```shell
#!/bin/bash
# Read the README.rst file to see the full context and verify auto-generation notice
head -30 connector_onshape/README.rst
```

Repository: kencove/connector
Length of output: 1479

🏁 Script executed:

```shell
#!/bin/bash
# Check for any other HTTP URLs in the README.rst file
rg "http://" connector_onshape/README.rst
```

Repository: kencove/connector
Length of output: 126
Fix the HTTP AGPL-3 badge link in the RST source, not here.
The AGPL-3 badge href uses http:// instead of https:// at line 17 of connector_onshape/README.rst:
http://www.gnu.org/licenses/agpl-3.0-standalone.html
Because the generated index.html file is auto-generated by oca-gen-addon-readme, any edit there will be overwritten. The fix must be applied to the README.rst source.
🔧 Proposed fix in README.rst

```diff
 .. |badge2| image:: https://img.shields.io/badge/licence-AGPL--3-blue.png
-    :target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
+    :target: https://www.gnu.org/licenses/agpl-3.0-standalone.html
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@connector_onshape/static/description/index.html` at line 372, Update the
AGPL-3 badge href in the README.rst source (connector_onshape/README.rst) to use
HTTPS instead of HTTP: replace
"http://www.gnu.org/licenses/agpl-3.0-standalone.html" with
"https://www.gnu.org/licenses/agpl-3.0-standalone.html" (the change will then be
picked up by oca-gen-addon-readme and regenerate the index.html).
…, revision, vendor, project

Add comprehensive metadata extraction from Onshape API:
- 12 new fields on binding: description, appearance, revision, mass, volume, surface_area, author, designer, vendor, project, custom_properties
- Mass properties endpoint (mass/volume/surface area from Onshape)
- Property-to-field mapping via PROPERTY_FIELD_MAP dict
- Custom/unknown properties stored as JSON
- Material/appearance fallback from part data when not in metadata
- Export mapper uses Onshape property names as keys
- Updated views with Physical Properties, People & Project groups
- 4 new test methods for metadata, mass, custom props, fallback

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 3
♻️ Duplicate comments (6)
connector_onshape/components/adapter.py (2)
58-60: Use `secrets.choice` for cryptographically secure HMAC nonce generation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/adapter.py` around lines 58 - 60, The nonce generation using random.choice in the adapter (the variable named "nonce") is not cryptographically secure; replace the current random.choice(...) loop with a secrets-based generator (e.g., use secrets.choice or preferably secrets.token_urlsafe/truncation) so the HMAC nonce is cryptographically secure. Locate the nonce assignment in adapter.py (the block that builds the 25-char nonce) and swap in secrets.* API calls and import secrets at top of file, preserving the 25-character length and character set requirements.
171-177: `int(retry_after)` can raise `ValueError` on non-integer `Retry-After` headers, aborting the retry path.
🤖 Prompt for AI Agents
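A hedged sketch of tolerant `Retry-After` parsing (the `RETRY_BACKOFF` value and function name are assumptions; the real adapter's constants may differ):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

RETRY_BACKOFF = 5  # assumed base backoff in seconds


def retry_after_seconds(header_value, attempt=0):
    """Parse Retry-After as integer seconds or an HTTP-date; fall back safely."""
    if header_value:
        try:
            return max(0, int(header_value))
        except ValueError:
            pass  # not delta-seconds; try the HTTP-date form
        try:
            when = parsedate_to_datetime(header_value)
            return max(0, int((when - datetime.now(timezone.utc)).total_seconds()))
        except (TypeError, ValueError):
            pass  # unparseable header; use the existing backoff
    return RETRY_BACKOFF * (attempt + 1)
```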
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/adapter.py` around lines 171 - 177, The Retry-After handling may raise ValueError when the header is not an integer; update the block that reads resp.headers.get("Retry-After") (inside the resp.status_code == 429 branch) to robustly handle both integer seconds and HTTP-date formats: attempt int(retry_after) inside a try/except ValueError, and on ValueError parse the HTTP-date (e.g. via email.utils.parsedate_to_datetime) to compute seconds until that date, and if parsing fails fall back to RETRY_BACKOFF * (attempt + 1); ensure the final value is assigned to wait and that missing Retry-After still uses the existing backoff logic.
connector_onshape/models/onshape_product_product.py (1)
137-149: Malformed Onshape URL when workspace or element ID is missing.
`workspace` and `elem` can be empty strings, producing broken links like `/w//e/` that are surfaced via `widget="url"` in the form view.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/models/onshape_product_product.py` around lines 137 - 149, _compute_onshape_url builds URLs that include empty "/w//e/" segments when onshape_default_workspace_id or onshape_element_id are missing; update the logic in _compute_onshape_url to only append the workspace segment ("/w/{workspace}") if onshape_default_workspace_id is truthy and only append the element segment ("/e/{elem}") if onshape_element_id is truthy, ensuring the base "{doc.backend_id.base_url}/documents/{doc.onshape_document_id}" is used when both are missing, and keep setting rec.onshape_url = False when onshape_document_id or backend_id/base_url are absent.
connector_onshape/components/importer.py (1)
520-526: "Try by name" fallback incorrectly searches `default_code` instead of `name`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/importer.py` around lines 520 - 526, The fallback "Try by name" block in importer.py is searching the wrong field; replace the domain on the self.env["product.product"].search call to match the product name instead of default_code (i.e. use ("name", "=", item_name)); ensure the check still uses item_name and returns product as before (the search invocation and variable names product and item_name should be left intact).
connector_onshape/tests/test_import_product.py (1)
49-58: `match_type` is not asserted and the inline comment is incorrect.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/tests/test_import_product.py` around lines 49 - 58, The test test_mcmaster_catalog_match currently ignores match_type and has an incorrect inline comment; update the test to assert the returned match_type from mapper.match_product(part_data) against the expected value used by the match logic (use the actual match_type constant/enum or string that match_product returns, e.g., the MatchType or constant used in the mapper implementation), and correct the inline comment to accurately describe the matching mechanism (referencing mapper.match_product and self.product_mcmaster / default_code to ensure the comment matches the asserted match_type).
connector_onshape/components/exporter.py (1)
57-58: `sync_date` is never updated when `items` is empty.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/exporter.py` around lines 57 - 58, The early return at "if not items: return" prevents updating sync_date when there are no items; modify the logic so that before returning you still persist the new sync position (e.g., call the existing update/persist function or set the sync_date field) — locate the block with "if not items: return" in exporter.py and insert a call to the component that records the sync cursor (or assign self.sync_date / sync_state.sync_date) with the computed sync_date, then return.
🧹 Nitpick comments (4)
connector_onshape/components/importer.py (1)
101-113: `backend` parameter is accepted but never used in `_import_elements`.

Remove it from the signature to avoid confusion, and update the call site accordingly.
♻️ Proposed fix
```diff
-    def _import_elements(self, backend, document, workspace_id):
+    def _import_elements(self, document, workspace_id):
```

```diff
-        self._import_elements(backend, document, workspace_id)
+        self._import_elements(document, workspace_id)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/importer.py` around lines 101 - 113, The _import_elements function currently accepts an unused backend parameter; remove backend from the signature of _import_elements(document, workspace_id) and update any callers to stop passing backend, keeping the body unchanged (it should still obtain adapter via self.component(usage="backend.adapter") and call adapter.read_document_elements(document.onshape_document_id, workspace_id)); ensure tests and all call sites that invoked _import_elements(..., backend, ...) are updated to call _import_elements(document, workspace_id) so there are no mismatched arguments.

connector_onshape/components/mapper.py (1)
63-63: Unused `element` parameter.

`element` is accepted by `map_record` but never referenced. Prefix it with `_` to signal intentional non-use.

♻️ Proposed fix
```diff
-    def map_record(self, part_data, document=None, element=None):
+    def map_record(self, part_data, document=None, _element=None):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/mapper.py` at line 63, The map_record method declares an unused parameter element; rename it to _element (or prefix with an underscore) in the map_record signature to indicate intentional non-use and update any internal references (none expected) and callers if they rely on exact signature matching so the function remains clear that element is unused.

connector_onshape/components/exporter.py (1)
72-72: Redundant membership check before dictionary access.
`prop_name in export_values and export_values[prop_name]` performs two lookups; use `dict.get` instead.

♻️ Proposed fix
```diff
-    if prop_name in export_values and export_values[prop_name]:
+    if export_values.get(prop_name):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/exporter.py` at line 72, Replace the redundant membership-and-index check "prop_name in export_values and export_values[prop_name]" with a single lookup using export_values.get(prop_name) (or export_values.get(prop_name, default)) inside the exporter code; locate the occurrence using the symbols prop_name and export_values in connector_onshape/components/exporter.py and change the condition to use dict.get to avoid double lookup and handle falsy/missing values appropriately.

connector_onshape/tests/test_import_product.py (1)
95-95: Redundant import path `from ..tests.common` inside the `tests/` package.

The file is already inside connector_onshape/tests/, so the correct relative import is `from .common import MOCK_MASS_PROPERTIES`.

♻️ Proposed fix
```diff
-from ..tests.common import MOCK_MASS_PROPERTIES
+from .common import MOCK_MASS_PROPERTIES
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/tests/test_import_product.py` at line 95, The test file imports MOCK_MASS_PROPERTIES using an incorrect relative path; replace the redundant import "from ..tests.common import MOCK_MASS_PROPERTIES" with the correct package-local relative import "from .common import MOCK_MASS_PROPERTIES" so the module resolves inside the connector_onshape/tests package and avoids traversing up into tests twice; update the import line where MOCK_MASS_PROPERTIES is referenced.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@connector_onshape/components/adapter.py`:
- Around line 95-105: Wrap the json.loads call in _oauth2_headers in a
try/except that catches json.JSONDecodeError (and possibly ValueError) when
parsing backend.oauth2_token returned by _get_backend(); on exception set
token_data = {} (so access_token becomes ""), and emit a warning via an
available logger (e.g., self.logger.warning) indicating the malformed token and
fallback, then continue to build headers as before—this prevents uncaught
crashes from a corrupted backend.oauth2_token while preserving normal request
behavior.
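The guarded-parse fix above can be shown as a standalone sketch; `access_token_from` is a hypothetical helper standing in for the adapter's `_oauth2_headers`, and the warning text is illustrative:

```python
import json
import logging

_logger = logging.getLogger(__name__)


def access_token_from(raw_token):
    """A corrupted stored token must not crash the request path: fall
    back to an empty dict and log a warning instead of raising."""
    try:
        token_data = json.loads(raw_token or "{}")
    except (ValueError, TypeError):
        _logger.warning("stored oauth2 token is not valid JSON; ignoring it")
        token_data = {}
    # A valid-JSON but non-object payload is just as unusable.
    if not isinstance(token_data, dict):
        token_data = {}
    return token_data.get("access_token", "")
```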
In `@connector_onshape/components/mapper.py`:
- Around line 127-130: The loop in map_record that sums
total_mass/total_volume/total_area uses body_data.get("mass", [0.0])[0] etc.,
which raises IndexError when the API returns an empty list; update the
extraction to safely handle empty lists by checking the returned list before
indexing (e.g., retrieve vals = body_data.get("mass") and use vals[0] if vals
and len(vals)>0 else 0.0), and apply the same pattern for "volume" and
"periphery" so total_mass, total_volume, and total_area never attempt to index
an empty list.
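The safe-extraction pattern above can be factored into a small helper; `first_or_zero` and the sample `body_data` dict are illustrative names, not the mapper's actual code:

```python
def first_or_zero(data, key):
    """Take the first element of a possibly-missing or empty list,
    returning 0.0 instead of raising IndexError."""
    vals = data.get(key)
    return vals[0] if vals else 0.0


# Mimics an API payload where some property lists come back empty.
body_data = {"mass": [1.5, 0.0], "volume": [], "periphery": None}
total_mass = first_or_zero(body_data, "mass")       # 1.5
total_volume = first_or_zero(body_data, "volume")   # 0.0, no IndexError
total_area = first_or_zero(body_data, "periphery")  # 0.0
```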
In `@connector_onshape/models/onshape_product_product.py`:
- Around line 132-136: The computed, stored field onshape_url is missing
onshape_document_id.onshape_default_workspace_id from its `@api.depends`, so
changes to a document's default workspace won't trigger recomputation; update
the `@api.depends` on the compute method (the decorator above the onshape_url
compute function) to include "onshape_document_id.onshape_default_workspace_id"
alongside the existing dependencies ("onshape_document_id.backend_id.base_url",
"onshape_document_id.onshape_document_id", "onshape_element_id") so stored
values are refreshed when the document workspace changes.
---
Duplicate comments:
In `@connector_onshape/components/adapter.py`:
- Around line 58-60: The nonce generation using random.choice in the adapter
(the variable named "nonce") is not cryptographically secure; replace the
current random.choice(...) loop with a secrets-based generator (e.g., use
secrets.choice or preferably secrets.token_urlsafe/truncation) so the HMAC nonce
is cryptographically secure. Locate the nonce assignment in adapter.py (the
block that builds the 25-char nonce) and swap in secrets.* API calls and import
secrets at top of file, preserving the 25-character length and character set
requirements.
- Around line 171-177: The Retry-After handling may raise ValueError when the
header is not an integer; update the block that reads
resp.headers.get("Retry-After") (inside the resp.status_code == 429 branch) to
robustly handle both integer seconds and HTTP-date formats: attempt
int(retry_after) inside a try/except ValueError, and on ValueError parse the
HTTP-date (e.g. via email.utils.parsedate_to_datetime) to compute seconds until
that date, and if parsing fails fall back to RETRY_BACKOFF * (attempt + 1);
ensure the final value is assigned to wait and that missing Retry-After still
uses the existing backoff logic.
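The secrets-based nonce swap described above can be sketched as follows; `make_nonce` and `NONCE_CHARS` are hypothetical names, kept to the 25-character alphanumeric shape the review mentions:

```python
import secrets
import string

# CSPRNG-backed character pool replacing the `random` module.
NONCE_CHARS = string.ascii_letters + string.digits


def make_nonce(length=25):
    """Build a cryptographically secure nonce of the given length."""
    return "".join(secrets.choice(NONCE_CHARS) for _ in range(length))
```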
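The robust Retry-After handling described above can be sketched as a standalone helper; `retry_after_seconds` and the `RETRY_BACKOFF` base value are assumptions for illustration:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

RETRY_BACKOFF = 2  # assumed base backoff in seconds


def retry_after_seconds(header_value, attempt):
    """Interpret a Retry-After header that may hold integer seconds or
    an HTTP-date; fall back to linear backoff when absent or unparseable."""
    if header_value:
        try:
            return int(header_value)
        except ValueError:
            try:
                dt = parsedate_to_datetime(header_value)
                delta = (dt - datetime.now(timezone.utc)).total_seconds()
                return max(0, int(delta))
            except (TypeError, ValueError):
                pass  # not an HTTP-date either; use the backoff below
    return RETRY_BACKOFF * (attempt + 1)
```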
In `@connector_onshape/components/exporter.py`:
- Around line 57-58: The early return at "if not items: return" prevents
updating sync_date when there are no items; modify the logic so that before
returning you still persist the new sync position (e.g., call the existing
update/persist function or set the sync_date field) — locate the block with "if
not items: return" in exporter.py and insert a call to the component that
records the sync cursor (or assign self.sync_date / sync_state.sync_date) with
the computed sync_date, then return.
In `@connector_onshape/components/importer.py`:
- Around line 520-526: The fallback "Try by name" block in importer.py is
searching the wrong field—replace the domain on the
self.env["product.product"].search call to match the product name instead of
default_code (i.e. use ("name", "=", item_name)); ensure the check still uses
item_name and returns product as before (the search invocation and variable
names product and item_name should be left intact).
In `@connector_onshape/models/onshape_product_product.py`:
- Around line 137-149: _compute_onshape_url builds URLs that include empty
"/w//e/" segments when onshape_default_workspace_id or onshape_element_id are
missing; update the logic in _compute_onshape_url to only append the workspace
segment ("/w/{workspace}") if onshape_default_workspace_id is truthy and only
append the element segment ("/e/{elem}") if onshape_element_id is truthy,
ensuring the base
"{doc.backend_id.base_url}/documents/{doc.onshape_document_id}" is used when
both are missing, and keep setting rec.onshape_url = False when
onshape_document_id or backend_id/base_url are absent.
In `@connector_onshape/tests/test_import_product.py`:
- Around line 49-58: The test test_mcmaster_catalog_match currently ignores
match_type and has an incorrect inline comment; update the test to assert the
returned match_type from mapper.match_product(part_data) against the expected
value used by the match logic (use the actual match_type constant/enum or string
that match_product returns, e.g., the MatchType or constant used in the mapper
implementation), and correct the inline comment to accurately describe the
matching mechanism (referencing mapper.match_product and self.product_mcmaster /
default_code to ensure the comment matches the asserted match_type).
---
Nitpick comments:
In `@connector_onshape/components/exporter.py`:
- Line 72: Replace the redundant membership-and-index check "prop_name in
export_values and export_values[prop_name]" with a single lookup using
export_values.get(prop_name) (or export_values.get(prop_name, default)) inside
the exporter code; locate the occurrence using the symbols prop_name and
export_values in connector_onshape/components/exporter.py and change the
condition to use dict.get to avoid double lookup and handle falsy/missing values
appropriately.
In `@connector_onshape/components/importer.py`:
- Around line 101-113: The _import_elements function currently accepts an unused
backend parameter; remove backend from the signature of
_import_elements(document, workspace_id) and update any callers to stop passing
backend, keeping the body unchanged (it should still obtain adapter via
self.component(usage="backend.adapter") and call
adapter.read_document_elements(document.onshape_document_id, workspace_id));
ensure tests and all call sites that invoked _import_elements(..., backend, ...)
are updated to call _import_elements(document, workspace_id) so there are no
mismatched arguments.
In `@connector_onshape/components/mapper.py`:
- Line 63: The map_record method declares an unused parameter element; rename it
to _element (or prefix with an underscore) in the map_record signature to
indicate intentional non-use and update any internal references (none expected)
and callers if they rely on exact signature matching so the function remains
clear that element is unused.
In `@connector_onshape/tests/test_import_product.py`:
- Line 95: The test file imports MOCK_MASS_PROPERTIES using an incorrect
relative path; replace the redundant import "from ..tests.common import
MOCK_MASS_PROPERTIES" with the correct package-local relative import "from
.common import MOCK_MASS_PROPERTIES" so the module resolves inside the
connector_onshape/tests package and avoids traversing up into tests twice;
update the import line where MOCK_MASS_PROPERTIES is referenced.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- connector_onshape/components/adapter.py
- connector_onshape/components/exporter.py
- connector_onshape/components/importer.py
- connector_onshape/components/mapper.py
- connector_onshape/models/onshape_product_product.py
- connector_onshape/tests/common.py
- connector_onshape/tests/test_export_product.py
- connector_onshape/tests/test_import_product.py
- connector_onshape/views/onshape_product_views.xml
🚧 Files skipped from review as they are similar to previous changes (2)
- connector_onshape/tests/common.py
- connector_onshape/views/onshape_product_views.xml
- adapter: use secrets instead of random for HMAC nonce, guard oauth2_token JSON parse, guard Retry-After header, extract _log_rate_limit and _check_retryable to reduce complexity
- binder: validate input in split_compound_id (raise ValueError)
- exporter: update sync_date even when metadata items empty
- importer: BOM "Try by name" searches product name not default_code
- mapper: guard against empty mass property lists from API
- onshape_product_product: add workspace_id to @api.depends, guard URL requiring workspace + element IDs
- product_template: add onshape_document_id to @api.depends
- onshape_document: use name_get() for Odoo 16 compat
- wizard: always sync auto_create_products boolean
- webhook: warn on missing secret, narrow binding scope with _find_bindings helper, use elementId/partId when available
- backend: cron methods use with_delay() for async processing
- views: mask oauth2_token with password=True
- data: remove invalid related_action_enable from job functions
- tests: add malformed compound ID tests, fix mcmaster assertion

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@cursor review
Skipping Bugbot: Bugbot is disabled for this repository. Visit the Bugbot dashboard to update your settings.
Actionable comments posted: 7
♻️ Duplicate comments (11)
connector_onshape/wizards/onshape_import_wizard.py (1)
47-47: `auto_create_products` is now correctly persisted bidirectionally.

`backend.write({"auto_create_products": self.auto_create_products})` correctly propagates both `True` and `False`, resolving the previous one-way toggle issue.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/wizards/onshape_import_wizard.py` at line 47, The write call now correctly persists the boolean toggle; update the OnshapeImportWizard implementation to use backend.write({"auto_create_products": self.auto_create_products}) (in the method where the wizard commits settings) and ensure the corresponding backend field auto_create_products exists on the target model so both True and False are stored; confirm no leftover one-way toggle logic (e.g., any previous conditional that inverted or only set True) remains in methods like OnshapeImportWizard.<method handling save> or related helper functions.

connector_onshape/controllers/webhook.py (2)
133-177: `_find_bindings` correctly narrows by `elementId`/`partId` — LGTM.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/controllers/webhook.py` around lines 133 - 177, No change required: the _find_bindings function already narrows by elementId and partId correctly; leave _handle_workflow_transition, _handle_revision_created, and _find_bindings as-is (they correctly build the domain using ("onshape_element_id","=", elem_id) and ("onshape_part_id","=", part_id) and call request.env["onshape.product.product"].sudo().search(domain)).
43-62: HMAC validation remains optional with warning — no change from previous finding.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/controllers/webhook.py` around lines 43 - 62, The code currently only warns when backend.webhook_secret is missing and continues processing; change this to enforce HMAC validation: if backend.webhook_secret is falsy, return an error response (e.g., {"status":"error","message":"Webhook secret not configured"}) instead of logging a warning, and ensure the existing validation branch using request.httprequest.get_data(), request.httprequest.headers.get("X-Onshape-Webhook-Signature", ""), and self._validate_signature(raw_body, backend.webhook_secret, signature) remains unchanged so incoming requests are rejected when signature is missing or invalid.

connector_onshape/components/binder.py (1)
58-71: Input validation and malformed-ID guard are now in place — LGTM.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/binder.py` around lines 58 - 71, The split_compound_id function already includes input validation and guards for malformed IDs; no code changes required—keep the existing implementation of split_compound_id (including the ValueError checks and the parsing into document_id, element_id, and optional part_id) as reviewed.

connector_onshape/models/onshape_document.py (1)
80-88: `name_get()` correctly overrides display label for Odoo 16 — LGTM.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/models/onshape_document.py` around lines 80 - 88, No changes required: the name_get method correctly overrides the record display label for Odoo 16 by returning tuples of (rec.id, name) and handling rec.onshape_document_id; leave the name_get function as-is (function name_get in onshape_document.py).

connector_onshape/components/exporter.py (1)
57-59: `sync_date` is now updated before the early return — LGTM.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/exporter.py` around lines 57 - 59, The early-return branch in exporter.py currently updates the record's sync_date via binding.write({"sync_date": fields.Datetime.now()}) before returning when items is falsy; leave this as-is (no code change required) since the sync_date should be refreshed on an empty export, so ensure the binding.write call in the block containing the items check remains intact and unchanged.

connector_onshape/models/product_template.py (1)
15-26: Granular `@api.depends` now includes the nested `onshape_document_id` path — LGTM.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/models/product_template.py` around lines 15 - 26, The updated `@api.depends` correctly adds the nested path and the compute method _compute_onshape_document_count already collects unique onshape_document_id values from product_variant_ids -> onshape_bind_ids and sets onshape_document_count; no code change required—keep the depends decorator as ("product_variant_ids.onshape_bind_ids","product_variant_ids.onshape_bind_ids.onshape_document_id") and retain the existing _compute_onshape_document_count implementation that builds a set of bind.onshape_document_id.id and assigns len(doc_ids) to rec.onshape_document_count.

connector_onshape/models/onshape_product_product.py (1)
132-155: URL guard and missing `@api.depends` dependency are both resolved — LGTM.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/models/onshape_product_product.py` around lines 132 - 155, The review contains duplicated non-code review markers — remove the duplicate review/comment tokens from the PR text (e.g., the repeated "[duplicate_comment]" or duplicate approval lines) so there is only one approval note; no change is needed in the _compute_onshape_url method or its `@api.depends`, just clean up the duplicated comment metadata in the PR/review.

connector_onshape/components/mapper.py (1)
128-130: Empty-list `IndexError` guard is now in place — LGTM.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/mapper.py` around lines 128 - 130, The current index-guard pattern using (body_data.get("mass") or [0.0])[0] is fragile/duplicated; create a small helper like _first_or_zero(data: dict, key: str) -> float that returns the first element or 0.0 (e.g., v = data.get(key); return (v[0] if v else 0.0)), then replace the three spots updating total_mass, total_volume, total_area to use _first_or_zero(body_data, "mass") etc.; this centralizes the safe-access logic and removes repetition in mapper.py.

connector_onshape/tests/test_import_product.py (1)
49-57: McMaster `match_type` assertion is now in place — LGTM.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/tests/test_import_product.py` around lines 49 - 57, The review contains a duplicate approval comment rather than a code issue; no test changes are required for test_mcmaster_catalog_match or mapper.match_product/match_type — remove the duplicate review comment from the PR thread (clean up the redundant "[duplicate_comment]"/extra approval entry) and leave the test assertions as-is.

connector_onshape/components/importer.py (1)
520-526: Name-based fallback is now correct.

The past concern about `_find_component_product` searching `default_code` instead of `name` has been addressed; the domain now correctly uses `("name", "=", item_name)`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/importer.py` around lines 520 - 526, The name-based fallback in _find_component_product is now correct (using self.env["product.product"].search([("name","=", item_name)], limit=1)); ensure any remaining references or legacy code that searched by "default_code" are removed or updated to avoid confusion, keep the domain as ("name","=", item_name) in _find_component_product, and update or add a unit/integration test that verifies finding products by name to prevent regressions.
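The corrected two-stage lookup can be illustrated without the ORM. `find_component_product` here operates on plain dicts as a stand-in for the `search()` domains, so the names and shapes below are assumptions for illustration only:

```python
def find_component_product(products, item_name):
    """Try the internal reference (default_code) first, then fall back
    to matching the product name — the field the fix corrected."""
    for p in products:
        if p.get("default_code") == item_name:
            return p
    # Fallback "Try by name": previously this compared default_code again.
    for p in products:
        if p.get("name") == item_name:
            return p
    return None
```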
🧹 Nitpick comments (11)
connector_onshape/wizards/onshape_import_wizard.py (2)
49-50: Tautological condition — simplify or remove.
`import_type` is a `required` selection field whose only valid values are exactly `("documents", "products", "boms", "full")`. The `in (...)` guard on line 49 is therefore always `True` and adds no protection.

♻️ Proposed simplification
```diff
-        if self.import_type in ("documents", "products", "boms", "full"):
-            backend.action_import_documents()
+        backend.action_import_documents()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/wizards/onshape_import_wizard.py` around lines 49 - 50, The if-check on the import call is tautological because the selection field import_type only permits "documents","products","boms","full", so remove the redundant guard and call backend.action_import_documents() unconditionally (or replace the entire block with a direct call). Update the call site in onshape_import_wizard.py where import_type and backend.action_import_documents() are used so the code simply invokes backend.action_import_documents() without the if-check.
66-68: Merge adjacent string literals into one.

Python silently concatenates the two adjacent string literals, but a single string (or explicit `+`) is clearer and avoids a potential maintenance hazard if the strings are later separated by a variable.

♻️ Proposed fix
- "message": _( - "Import jobs have been queued. " "Check the job queue for progress." - ), + "message": _( + "Import jobs have been queued. Check the job queue for progress." + ),🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/wizards/onshape_import_wizard.py` around lines 66 - 68, The message value currently uses two adjacent string literals inside the _(...) call for the "message" key; merge them into a single string literal so the _("...") contains "Import jobs have been queued. Check the job queue for progress." (or explicitly join with +) to avoid silent concatenation—update the string inside the _("...") call where "message" is defined in onshape_import_wizard.py.

connector_onshape/components/exporter.py (1)
70-79: Simplify the redundant key-existence check (Ruff RUF019).
`prop_name in export_values and export_values[prop_name]` checks the key twice. Use `export_values.get(prop_name)` instead.

♻️ Proposed fix
```diff
-    if prop_name in export_values and export_values[prop_name]:
+    if export_values.get(prop_name):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/exporter.py` around lines 70 - 79, The loop building properties_update redundantly checks membership and value; replace the condition "prop_name in export_values and export_values[prop_name]" with a single truthy lookup using export_values.get(prop_name) (e.g., value = export_values.get(prop_name) and if value: ...), then use that value when constructing the dict for propertyId and value; update the code paths around part_item, prop_name, export_values, and properties_update accordingly.

connector_onshape/components/mapper.py (2)
53-61: Annotate `PROPERTY_FIELD_MAP` as `ClassVar` to satisfy Ruff RUF012.

Class-level dicts/lists that aren't meant to be instance attributes should be declared with `typing.ClassVar` to communicate intent and silence the warning.

♻️ Proposed fix
```diff
+from typing import ClassVar
 ...
-    PROPERTY_FIELD_MAP = {
+    PROPERTY_FIELD_MAP: ClassVar[dict] = {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/mapper.py` around lines 53 - 61, PROPERTY_FIELD_MAP is a class-level dictionary and should be annotated as a typing.ClassVar to satisfy Ruff RUF012; update the declaration of PROPERTY_FIELD_MAP in mapper.py to have a ClassVar type annotation (e.g., ClassVar[Dict[str, str]] or ClassVar[dict[str, str]]) and add the required import from typing (ClassVar and Dict if using that form), leaving the dictionary literal itself unchanged and keeping it at class scope (refer to the PROPERTY_FIELD_MAP symbol and its enclosing class in mapper.py).
63-63: Unused `element` parameter (Ruff ARG002).

`element` is accepted but never read inside `map_record`. Either use it or prefix with `_` to document the intentional non-use.

♻️ Proposed fix
```diff
-    def map_record(self, part_data, document=None, element=None):
+    def map_record(self, part_data, document=None, _element=None):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/mapper.py` at line 63, The map_record method declares an unused parameter element which triggers ARG002; either use the parameter where needed inside map_record or rename it to _element (or prefix with an underscore) to indicate intentional non-use and silence the linter. Locate the map_record function definition and update the parameter name from element to _element or incorporate element into the mapping logic (e.g., pass it to any helper or include it in returned data) so the symbol is referenced.

connector_onshape/tests/test_import_product.py (2)
95-95: Redundant relative import path: use `.common` instead of `..tests.common`.

The file is already inside connector_onshape/tests/, so `..tests.common` resolves to the same package as `.common` — the path suggests going up and back in, which is confusing.

♻️ Proposed fix
```diff
-from ..tests.common import MOCK_MASS_PROPERTIES
+from .common import MOCK_MASS_PROPERTIES
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/tests/test_import_product.py` at line 95, Replace the redundant relative import "from ..tests.common import MOCK_MASS_PROPERTIES" with a direct package-relative import "from .common import MOCK_MASS_PROPERTIES" in the test_import_product module so it references the tests.common module within the same package; update the import statement accordingly to use .common.
86-86: Move `import json` to the module top level.

In-method imports are unusual in tests and signal the import was added in a hurry. Hoist it to the top of the file alongside the other imports.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/tests/test_import_product.py` at line 86, The in-test import of the json module should be hoisted to the module top-level: remove the "import json" from inside the test in connector_onshape/tests/test_import_product.py and add "import json" with the other imports at the top of the file so that json is available module-wide (no runtime in-method import in the test function).

connector_onshape/models/onshape_document.py (1)
62-68: Annotate `_sql_constraints` with `ClassVar` to satisfy Ruff RUF012.

Applies to both `OnshapeDocument` (lines 62–68) and `OnshapeDocumentElement` (lines 116–122).

♻️ Proposed fix

```diff
+from typing import ClassVar
 ...
-    _sql_constraints = [
+    _sql_constraints: ClassVar[list] = [
         (
             "unique_document",
             ...
         ),
     ]
```

Apply the same pattern to `OnshapeDocumentElement._sql_constraints`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/models/onshape_document.py` around lines 62 - 68, Annotate the class-level _sql_constraints attributes with typing.ClassVar to satisfy Ruff RUF012: update OnshapeDocument._sql_constraints and OnshapeDocumentElement._sql_constraints to be declared as ClassVar[...] so the linter recognizes them as class variables; locate the _sql_constraints definitions in the OnshapeDocument and OnshapeDocumentElement classes and add the ClassVar typing annotation (import typing.ClassVar if missing).

connector_onshape/components/binder.py (1)
14-20: Annotate component-framework list attributes with `ClassVar` to satisfy Ruff RUF012.

`_inherit` and `_apply_on` are read-only registry hints, not instance data. Annotating them silences the warning without behavioral change.

♻️ Proposed fix
```diff
+from typing import ClassVar
 ...
-    _inherit = ["onshape.base", "base.binder"]
+    _inherit: ClassVar[list] = ["onshape.base", "base.binder"]
 ...
-    _apply_on = [
+    _apply_on: ClassVar[list] = [
         "onshape.product.product",
         "onshape.mrp.bom",
     ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/binder.py` around lines 14 - 20, The attributes _inherit and _apply_on in binder.py are class-level registry hints and should be annotated with typing.ClassVar to silence Ruff RUF012; import ClassVar from typing (or typing_extensions if needed), then change the declarations for _inherit and _apply_on to use ClassVar[list[str]] (or ClassVar[list[str | str]] as appropriate) so they remain class-level constants rather than instance fields, keeping their current values and behavior.

connector_onshape/components/adapter.py (1)
96-100: Prefer `_logger.exception` over `_logger.error` inside `except` blocks.

`_logger.error` discards the exception traceback; `_logger.exception` logs it automatically and is the idiomatic choice here.

♻️ Proposed change
```diff
         except (ValueError, TypeError):
-            _logger.error("oauth2_token for backend %s is not valid JSON", backend.id)
+            _logger.exception(
+                "oauth2_token for backend %s is not valid JSON", backend.id
+            )
             token_data = {}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/adapter.py` around lines 96 - 100, In the except block that handles json.loads(backend.oauth2_token or "{}") (catching ValueError and TypeError), change the logging call to use _logger.exception instead of _logger.error so the exception traceback is captured; leave token_data = {} unchanged and keep the same exception types to preserve behavior, referencing backend.oauth2_token, token_data and _logger.exception to locate the change.

connector_onshape/components/importer.py (1)
418-420: Suppress Ruff S324 by declaring MD5 is not used for security.

MD5 is fine for BOM change-detection; annotate it so linters and future readers agree.
♻️ Proposed change
```diff
         bom_hash = hashlib.md5(
-            json.dumps(bom_items, sort_keys=True).encode()
+            json.dumps(bom_items, sort_keys=True).encode(),
+            usedforsecurity=False,
         ).hexdigest()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/importer.py` around lines 418 - 420, Replace the insecure-linter warning by calling hashlib.md5 with the non-security flag and a short clarifying comment: change the bom_hash computation to use hashlib.md5(json.dumps(bom_items, sort_keys=True).encode(), usedforsecurity=False).hexdigest() and add a brief inline comment on the same line (e.g., "# MD5 used only for BOM change-detection, not for security") to satisfy Ruff S324 and future readers; target the existing bom_hash assignment in importer.py.
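The suggested change runs as a small standalone sketch; `bom_fingerprint` is a hypothetical wrapper around the same `hashlib.md5(..., usedforsecurity=False)` call (the keyword requires Python 3.9+):

```python
import hashlib
import json


def bom_fingerprint(bom_items):
    """Change-detection hash for BOM lines. MD5 is acceptable here
    because the digest is a cache key, not a security boundary, which
    is exactly what usedforsecurity=False declares to linters."""
    payload = json.dumps(bom_items, sort_keys=True).encode()
    return hashlib.md5(payload, usedforsecurity=False).hexdigest()
```

Because `sort_keys=True` canonicalizes the JSON, two BOM payloads that differ only in key order produce the same fingerprint.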
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@connector_onshape/components/adapter.py`:
- Around line 341-343: The current code sets body["filter"] =
json.dumps({"teamId": backend.team_id}) which produces an invalid Onshape filter
and uses a non-existent $TeamId variable; instead, remove the filter assignment
and set the company/org scope directly by adding body["companyId"] =
backend.team_id (or the appropriate company id field) before calling
self._request("POST", "/api/v6/webhooks", json_body=body) in adapter.py so the
webhook is registered at the organization level rather than trying to use an
unsupported filter expression.
- Around line 341-343: The webhook registration incorrectly sets body["filter"]
= json.dumps({"teamId": backend.team_id}) and POSTs it via self._request("POST",
"/api/v6/webhooks", json_body=body); change this to use Onshape's company
scoping by setting body["companyId"] = backend.team_id (use the raw
backend.team_id value, not json.dumps) and remove the filter field so the
request body contains companyId at the top level before calling self._request;
update any related references to body["filter"] to reflect removal.
- Around line 316-324: The read_thumbnail method currently returns bytes because
base64.b64encode(resp.content) yields bytes, which will not serialize for Odoo
Binary fields or JSON-RPC; update read_thumbnail to return a text-safe base64
string by base64-encoding resp.content and then decoding to a str (e.g.,
.decode('ascii'/'utf-8')) before returning, while preserving the None return
when status_code != 200; reference the method name read_thumbnail and the resp
handling to locate the change.
In `@connector_onshape/components/mapper.py`:
- Around line 219-221: On export you write "Weight" in
OnshapeProductExportMapper.map_record but the import mapper lacks a
corresponding entry, causing round-trip loss; add "Weight" to KNOWN_PROPERTIES
and add a PROPERTY_FIELD_MAP entry mapping "Weight" -> "onshape_mass" (or the
chosen binding field) and implement unit conversion in the import path so the
imported value populates the binding field (e.g., convert Onshape mass units to
kg and store in onshape_mass or the binding's weight field) to make
export/import symmetric.
In `@connector_onshape/controllers/webhook.py`:
- Around line 127-131: The current call to
backend.with_delay(...).action_import_products() triggers a full backend resync
for every metadata webhook; change this to only re-import products for the
specific changed document by either (A) adding a targeted method like
onshape.document.action_import_document_products(document.id) and invoking
backend.with_delay(...).action_import_document_products(document.id), or (B)
update the existing action_import_products to accept a document_id/filter
argument and call
backend.with_delay(...).action_import_products(document_id=document.id); update
the webhook handler (where document is already resolved) to pass the document ID
so only that document’s products are re-imported and avoid queuing full resyncs.
- Around line 85-91: The _validate_signature function is using hex digests and
omitting the timestamp; change it to build the signed message as
"<timestamp>.<raw_body>" (use the X-onshape-webhook-timestamp header value
passed into the function), compute the HMAC-SHA256 over the raw bytes of that
message using secret.encode("utf-8"), compare the raw binary digests (use
.digest()) and decode the incoming signature from Base64
(base64.b64decode(signature)) before calling hmac.compare_digest to perform a
constant-time comparison; update references in _validate_signature accordingly
to operate on bytes and the timestamp.
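The scheme this prompt describes (HMAC-SHA256 over `<timestamp>.<raw_body>`, a Base64-decoded signature, and a constant-time comparison on raw digests) can be sketched outside Odoo; the standalone function name and the defensive Base64 handling are illustrative, not the module's actual code:

```python
import base64
import hashlib
import hmac

def validate_signature(raw_body: bytes, secret: str, signature_b64: str, timestamp: str) -> bool:
    """Check an Onshape-style signature: Base64(HMAC-SHA256(secret, ts + '.' + body))."""
    message = timestamp.encode("utf-8") + b"." + raw_body
    expected = hmac.new(secret.encode("utf-8"), message, hashlib.sha256).digest()
    try:
        provided = base64.b64decode(signature_b64)
    except (ValueError, TypeError):
        return False  # malformed Base64 can never match
    # compare_digest on raw bytes avoids timing side channels.
    return hmac.compare_digest(expected, provided)

body = b'{"event": "onshape.model.lifecycle.changed"}'
ts = "1700000000"
good = base64.b64encode(
    hmac.new(b"s3cret", ts.encode() + b"." + body, hashlib.sha256).digest()
).decode()
assert validate_signature(body, "s3cret", good, ts)
assert not validate_signature(body, "s3cret", good, "1700000001")  # wrong timestamp
```

Binding the timestamp into the signed message is what makes replayed bodies with an altered timestamp fail, as the last assertion shows.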
In `@connector_onshape/wizards/onshape_import_wizard.py`:
- Around line 61-70: The notification text in onshape_import_wizard.py is
misleading because the wizard calls backend.action_import_documents(),
backend.action_import_products(), and backend.action_import_boms() synchronously
(they call importer.run(...) directly), so either change the wizard to dispatch
those calls asynchronously using the same with_delay() pattern as the
cron_import_* methods or update the notification message to reflect synchronous
execution; specifically, adjust the "message" and/or "title" in the return dict
so it no longer says "queued" (e.g., "Import started - running synchronously;
check the job log for results" or similar) or replace the direct calls to
action_import_documents/action_import_products/action_import_boms with delayed
invocations (backend.with_delay().action_import_documents(), etc.) to match the
message.
---
Duplicate comments:
In `@connector_onshape/components/binder.py`:
- Around line 58-71: The split_compound_id function already includes input
validation and guards for malformed IDs; no code changes required—keep the
existing implementation of split_compound_id (including the ValueError checks
and the parsing into document_id, element_id, and optional part_id) as reviewed.
In `@connector_onshape/components/exporter.py`:
- Around line 57-59: The early-return branch in exporter.py currently updates
the record's sync_date via binding.write({"sync_date": fields.Datetime.now()})
before returning when items is falsy; leave this as-is (no code change required)
since the sync_date should be refreshed on an empty export, so ensure the
binding.write call in the block containing the items check remains intact and
unchanged.
In `@connector_onshape/components/importer.py`:
- Around line 520-526: The name-based fallback in _find_component_product is now
correct (using self.env["product.product"].search([("name","=", item_name)],
limit=1)); ensure any remaining references or legacy code that searched by
"default_code" are removed or updated to avoid confusion, keep the domain as
("name","=", item_name) in _find_component_product, and update or add a
unit/integration test that verifies finding products by name to prevent
regressions.
In `@connector_onshape/components/mapper.py`:
- Around line 128-130: The current index-guard pattern using
(body_data.get("mass") or [0.0])[0] is fragile/duplicated; create a small helper
like _first_or_zero(data: dict, key: str) -> float that returns the first
element or 0.0 (e.g., v = data.get(key); return (v[0] if v else 0.0)), then
replace the three spots updating total_mass, total_volume, total_area to use
_first_or_zero(body_data, "mass") etc.; this centralizes the safe-access logic
and removes repetition in mapper.py.
In `@connector_onshape/controllers/webhook.py`:
- Around line 133-177: No change required: the _find_bindings function already
narrows by elementId and partId correctly; leave _handle_workflow_transition,
_handle_revision_created, and _find_bindings as-is (they correctly build the
domain using ("onshape_element_id","=", elem_id) and ("onshape_part_id","=",
part_id) and call request.env["onshape.product.product"].sudo().search(domain)).
- Around line 43-62: The code currently only warns when backend.webhook_secret
is missing and continues processing; change this to enforce HMAC validation: if
backend.webhook_secret is falsy, return an error response (e.g.,
{"status":"error","message":"Webhook secret not configured"}) instead of logging
a warning, and ensure the existing validation branch using
request.httprequest.get_data(),
request.httprequest.headers.get("X-Onshape-Webhook-Signature", ""), and
self._validate_signature(raw_body, backend.webhook_secret, signature) remains
unchanged so incoming requests are rejected when signature is missing or
invalid.
In `@connector_onshape/models/onshape_document.py`:
- Around line 80-88: No changes required: the name_get method correctly
overrides the record display label for Odoo 16 by returning tuples of (rec.id,
name) and handling rec.onshape_document_id; leave the name_get function as-is
(function name_get in onshape_document.py).
In `@connector_onshape/models/onshape_product_product.py`:
- Around line 132-155: The review contains duplicated non-code review markers —
remove the duplicate review/comment tokens from the PR text (e.g., the repeated
"[duplicate_comment]" or duplicate approval lines) so there is only one approval
note; no change is needed in the _compute_onshape_url method or its
`@api.depends`, just clean up the duplicated comment metadata in the PR/review.
In `@connector_onshape/models/product_template.py`:
- Around line 15-26: The updated `@api.depends` correctly adds the nested path and
the compute method _compute_onshape_document_count already collects unique
onshape_document_id values from product_variant_ids -> onshape_bind_ids and sets
onshape_document_count; no code change required—keep the depends decorator as
("product_variant_ids.onshape_bind_ids","product_variant_ids.onshape_bind_ids.onshape_document_id")
and retain the existing _compute_onshape_document_count implementation that
builds a set of bind.onshape_document_id.id and assigns len(doc_ids) to
rec.onshape_document_count.
In `@connector_onshape/tests/test_import_product.py`:
- Around line 49-57: The review contains a duplicate approval comment rather
than a code issue; no test changes are required for test_mcmaster_catalog_match
or mapper.match_product/match_type — remove the duplicate review comment from
the PR thread (clean up the redundant "[duplicate_comment]"/extra approval
entry) and leave the test assertions as-is.
In `@connector_onshape/wizards/onshape_import_wizard.py`:
- Line 47: The write call now correctly persists the boolean toggle; update the
OnshapeImportWizard implementation to use backend.write({"auto_create_products":
self.auto_create_products}) (in the method where the wizard commits settings)
and ensure the corresponding backend field auto_create_products exists on the
target model so both True and False are stored; confirm no leftover one-way
toggle logic (e.g., any previous conditional that inverted or only set True)
remains in methods like OnshapeImportWizard.<method handling save> or related
helper functions.
---
Nitpick comments:
In `@connector_onshape/components/adapter.py`:
- Around line 96-100: In the except block that handles
json.loads(backend.oauth2_token or "{}") (catching ValueError and TypeError),
change the logging call to use _logger.exception instead of _logger.error so the
exception traceback is captured; leave token_data = {} unchanged and keep the
same exception types to preserve behavior, referencing backend.oauth2_token,
token_data and _logger.exception to locate the change.
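The difference between the two logger calls is visible in a standalone sketch; `parse_token` is a hypothetical function shaped like the adapter's token handling, not the module's actual code:

```python
import json
import logging

_logger = logging.getLogger(__name__)

def parse_token(raw):
    """Parse a stored OAuth2 token blob, falling back to {} on bad JSON."""
    try:
        return json.loads(raw or "{}")
    except (ValueError, TypeError):
        # .exception() attaches the active traceback; .error() would drop it.
        _logger.exception("oauth2_token is not valid JSON")
        return {}

assert parse_token(None) == {}
assert parse_token('{"access_token": "abc"}') == {"access_token": "abc"}
assert parse_token("not-json") == {}  # logs the traceback, returns the fallback
```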
In `@connector_onshape/components/binder.py`:
- Around line 14-20: The attributes _inherit and _apply_on in binder.py are
class-level registry hints and should be annotated with typing.ClassVar to
silence Ruff RUF012; import ClassVar from typing (or typing_extensions if
needed), then change the declarations for _inherit and _apply_on to use
ClassVar[list[str]] (or ClassVar[str] where the value is a plain string) so they remain
class-level constants rather than instance fields, keeping their current values
and behavior.
In `@connector_onshape/components/exporter.py`:
- Around line 70-79: The loop building properties_update redundantly checks
membership and value; replace the condition "prop_name in export_values and
export_values[prop_name]" with a single truthy lookup using
export_values.get(prop_name) (e.g., value = export_values.get(prop_name) and if
value: ...), then use that value when constructing the dict for propertyId and
value; update the code paths around part_item, prop_name, export_values, and
properties_update accordingly.
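The single-lookup pattern this nitpick suggests, shown with a hypothetical `property_ids` mapping standing in for the real part metadata (the `export_values` and `properties_update` names come from the review):

```python
export_values = {"Part number": "SKU-100", "Description": ""}
property_ids = {"Part number": "pid-1", "Description": "pid-2"}  # hypothetical IDs

properties_update = []
for prop_name, prop_id in property_ids.items():
    value = export_values.get(prop_name)  # one lookup replaces `in` check + indexing
    if value:  # skips missing keys and falsy values alike
        properties_update.append({"propertyId": prop_id, "value": value})

assert properties_update == [{"propertyId": "pid-1", "value": "SKU-100"}]
```

`dict.get` already returns `None` for absent keys, so a single truthiness test covers both the membership and the emptiness checks.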
In `@connector_onshape/components/importer.py`:
- Around line 418-420: Replace the insecure-linter warning by calling
hashlib.md5 with the non-security flag and a short clarifying comment: change
the bom_hash computation to use hashlib.md5(json.dumps(bom_items,
sort_keys=True).encode(), usedforsecurity=False).hexdigest() and add a brief
inline comment on the same line (e.g., "# MD5 used only for BOM
change-detection, not for security") to satisfy Ruff S324 and future readers;
target the existing bom_hash assignment in importer.py.
In `@connector_onshape/components/mapper.py`:
- Around line 53-61: PROPERTY_FIELD_MAP is a class-level dictionary and should
be annotated as a typing.ClassVar to satisfy Ruff RUF012; update the declaration
of PROPERTY_FIELD_MAP in mapper.py to have a ClassVar type annotation (e.g.,
ClassVar[Dict[str, str]] or ClassVar[dict[str, str]]) and add the required
import from typing (ClassVar and Dict if using that form), leaving the
dictionary literal itself unchanged and keeping it at class scope (refer to the
PROPERTY_FIELD_MAP symbol and its enclosing class in mapper.py).
- Line 63: The map_record method declares an unused parameter element which
triggers ARG002; either use the parameter where needed inside map_record or
rename it to _element (or prefix with an underscore) to indicate intentional
non-use and silence the linter. Locate the map_record function definition and
update the parameter name from element to _element or incorporate element into
the mapping logic (e.g., pass it to any helper or include it in returned data)
so the symbol is referenced.
In `@connector_onshape/models/onshape_document.py`:
- Around line 62-68: Annotate the class-level _sql_constraints attributes with
typing.ClassVar to satisfy Ruff RUF012: update OnshapeDocument._sql_constraints
and OnshapeDocumentElement._sql_constraints to be declared as ClassVar[...] so
the linter recognizes them as class variables; locate the _sql_constraints
definitions in the OnshapeDocument and OnshapeDocumentElement classes and add
the ClassVar typing annotation (import typing.ClassVar if missing).
In `@connector_onshape/tests/test_import_product.py`:
- Line 95: Replace the redundant relative import "from ..tests.common import
MOCK_MASS_PROPERTIES" with a direct package-relative import "from .common import
MOCK_MASS_PROPERTIES" in the test_import_product module so it references the
tests.common module within the same package; update the import statement
accordingly to use .common.
- Line 86: The in-test import of the json module should be hoisted to the module
top-level: remove the "import json" from inside the test in
connector_onshape/tests/test_import_product.py and add "import json" with the
other imports at the top of the file so that json is available module-wide (no
runtime in-method import in the test function).
In `@connector_onshape/wizards/onshape_import_wizard.py`:
- Around line 49-50: The if-check on the import call is tautological because the
selection field import_type only permits "documents","products","boms","full",
so remove the redundant guard and call backend.action_import_documents()
unconditionally (or replace the entire block with a direct call). Update the
call site in onshape_import_wizard.py where import_type and
backend.action_import_documents() are used so the code simply invokes
backend.action_import_documents() without the if-check.
- Around line 66-68: The message value currently uses two adjacent string
literals inside the _(...) call for the "message" key; merge them into a single
string literal so the _("...") contains "Import jobs have been queued. Check the
job queue for progress." (or explicitly join with +) to avoid silent
concatenation—update the string inside the _("...") call where "message" is
defined in onshape_import_wizard.py.
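The silent-concatenation hazard flagged above is plain Python behavior, easy to demonstrate:

```python
# Two adjacent literals fuse at compile time -- no operator, no warning:
message = ("Import jobs have been queued. "
           "Check the job queue for progress.")
assert message == "Import jobs have been queued. Check the job queue for progress."

# The bug bites when a comma is *intended* but omitted inside a list:
parts = ["documents", "products" "boms"]   # oops: 2 items, not 3
assert parts == ["documents", "productsboms"]
```

Inside a translation call the fusion is harmless but surprising; inside a list literal it silently changes the data, which is why linters flag implicit concatenation.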
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- connector_onshape/components/adapter.py
- connector_onshape/components/binder.py
- connector_onshape/components/exporter.py
- connector_onshape/components/importer.py
- connector_onshape/components/mapper.py
- connector_onshape/controllers/webhook.py
- connector_onshape/data/queue_job_function_data.xml
- connector_onshape/models/onshape_backend.py
- connector_onshape/models/onshape_document.py
- connector_onshape/models/onshape_product_product.py
- connector_onshape/models/product_template.py
- connector_onshape/tests/test_export_product.py
- connector_onshape/tests/test_import_product.py
- connector_onshape/views/onshape_backend_views.xml
- connector_onshape/wizards/onshape_import_wizard.py
🚧 Files skipped from review as they are similar to previous changes (4)
- connector_onshape/tests/test_export_product.py
- connector_onshape/views/onshape_backend_views.xml
- connector_onshape/models/onshape_backend.py
- connector_onshape/data/queue_job_function_data.xml
    def read_thumbnail(self, document_id, size="300x300"):
        resp = self._request(
            "GET",
            f"/api/v6/thumbnails/d/{document_id}/s/{size}",
            raw=True,
        )
        if resp.status_code == 200:
            return base64.b64encode(resp.content)
        return None
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for all calls to read_thumbnail and their context
rg -n "read_thumbnail" --type py -B 2 -A 5

Repository: kencove/connector
Length of output: 987
🏁 Script executed:
#!/bin/bash
# Search for invocations of read_thumbnail (excluding definition and mocks)
rg -n "\.read_thumbnail\(" --type py -B 3 -A 3

Repository: kencove/connector
Length of output: 43
🏁 Script executed:
#!/bin/bash
# Search more broadly for any usage patterns
# Check if read_thumbnail is called via string or reflection
rg -n "read_thumbnail" --type py

Repository: kencove/connector
Length of output: 252
🌐 Web query:
Odoo Binary field base64 encoded string requirements
💡 Result:
In Odoo, a fields.Binary value is expected to be the Base64 encoding of the raw file bytes (i.e., “file content in base64”), stored by Odoo in a binary/bytea column or as an attachment depending on configuration. [1]
What to send/write into a Binary field
- A plain Base64 string of the file bytes (no extra wrapping required by Odoo itself). Example (Python):
- read bytes → `base64.b64encode(...)` → (often) decode to text before sending via RPC/JSON. [3]
- In practice:
  - Python-side value may be bytes (`b'...'`) inside server code.
  - RPC/JSON payloads typically should be a UTF‑8 string (e.g., `base64.b64encode(data).decode()`), because JSON can’t carry raw bytes. [3]
What not to send (common confusion)
- Don’t store a data-URI (e.g., `data:image/png;base64,...`) in the Binary field—Odoo stores just the Base64 data. The data-URI prefix is only something you add when embedding into HTML yourself. [9]
- If you do embed in HTML, you may need to prefix with `data:<mime>;base64,` and ensure the Base64 has no line breaks for some consumers. [8]
XML data files (modules / demo data)
If you load binary/image content from a file in XML, use type="base64" with file="..." so Odoo reads the file and encodes it automatically. [7]
Sources: Odoo binary field stores base64-encoded file content [1]; example of writing base64 string via XML-RPC (b64encode(...).decode()) [3]; XML type="base64" + file="..." usage [7]; data-URI prefix is for HTML embedding, not Odoo storage [8].
base64.b64encode returns bytes, but Odoo Binary fields require a base64-encoded string.
While this method is currently unused in this repository, the implementation is incorrect for its intended purpose. When the return value is eventually used with Odoo's Binary fields or sent via JSON-RPC, bytes will fail serialization. Apply the proposed fix:
Fix
     if resp.status_code == 200:
    -    return base64.b64encode(resp.content)
    +    return base64.b64encode(resp.content).decode("utf-8")
     return None

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@connector_onshape/components/adapter.py` around lines 316 - 324, The
read_thumbnail method currently returns bytes because
base64.b64encode(resp.content) yields bytes, which will not serialize for Odoo
Binary fields or JSON-RPC; update read_thumbnail to return a text-safe base64
string by base64-encoding resp.content and then decoding to a str (e.g.,
.decode('ascii'/'utf-8')) before returning, while preserving the None return
when status_code != 200; reference the method name read_thumbnail and the resp
handling to locate the change.
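The bytes-vs-str distinction is easy to demonstrate with the standard library; the payload below is a stand-in, not a real thumbnail:

```python
import base64
import json

content = b"\x89PNG\r\n\x1a\n"          # pretend thumbnail bytes
encoded = base64.b64encode(content)      # -> bytes, not str
assert isinstance(encoded, bytes)

try:
    json.dumps({"thumbnail": encoded})   # what a JSON-RPC layer would attempt
    raise AssertionError("expected TypeError")
except TypeError:
    pass                                 # bytes are not JSON-serializable

text = encoded.decode("ascii")           # the proposed fix
assert json.dumps({"thumbnail": text})   # now serializes fine
assert base64.b64decode(text) == content # and round-trips losslessly
```

Decoding to ASCII is safe here because the Base64 alphabet is a strict ASCII subset, so the fix loses no information.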
    if backend.team_id:
        body["filter"] = json.dumps({"teamId": backend.team_id})
    return self._request("POST", "/api/v6/webhooks", json_body=body)
🧩 Analysis chain
🌐 Web query:
Onshape webhook filter expression language format team ID variable
💡 Result:
Onshape’s webhook “filter” (in the older webhook registration style) is not a general-purpose expression language—it’s a fixed-format string that Onshape expects in one of these two shapes:
- Document-scoped: `{$DocumentId} = '[did]' && {$WorkspaceId} = '[wid]' && {$ElementId} = '[eid]'`
- User/client-scoped: `{$UserId} = '[uid]' && {$ClientId} = 'cid'`
[1]
There is no {$TeamId} variable documented for webhook filters. If what you mean by “team” is an org/company scope, Onshape’s current webhook docs describe scoping certain webhook event groups by providing a companyId in the webhook registration body (instead of a “teamId” in a filter expression). [2]
Separately (not webhooks), Onshape extensions/action URL parameterization supports {$companyId} / {$sessionCompanyId}, but that mechanism is for extension URLs, not webhook filters. [3]
Sources: [1] [2] [3]
filter parameter uses invalid format and proposed fix references non-existent Onshape variable.
The current code json.dumps({"teamId": backend.team_id}) produces '{"teamId": "xxx"}', which is not a valid Onshape filter format. However, the proposed fix using "{$TeamId} = '%s'" is also incorrect—Onshape webhook filters do not support a $TeamId variable.
Onshape webhook filters support only fixed-format scoping:
- Document-scoped: `{$DocumentId} = '[id]' && {$WorkspaceId} = '[id]' && {$ElementId} = '[id]'`
- User/client-scoped: `{$UserId} = '[id]' && {$ClientId} = '[id]'`
For org/company-level scoping, Onshape expects companyId as a direct parameter in the webhook registration body (not a filter expression). If team-scoping is the intent, pass companyId to the webhook endpoint instead of attempting to filter by a non-existent $TeamId variable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@connector_onshape/components/adapter.py` around lines 341 - 343, The current
code sets body["filter"] = json.dumps({"teamId": backend.team_id}) which
produces an invalid Onshape filter and uses a non-existent $TeamId variable;
instead, remove the filter assignment and set the company/org scope directly by
adding body["companyId"] = backend.team_id (or the appropriate company id field)
before calling self._request("POST", "/api/v6/webhooks", json_body=body) in
adapter.py so the webhook is registered at the organization level rather than
trying to use an unsupported filter expression.
🧩 Analysis chain
🏁 Script executed:
# First, let's examine the actual code at lines 341-343
fd -t f "adapter.py" | head -5

Repository: kencove/connector
Length of output: 180
🏁 Script executed:
# Find the exact file location
git ls-files | grep -i "adapter.py"

Repository: kencove/connector
Length of output: 180
🏁 Script executed:
# Once we find the file, let's examine it
cat -n connector_onshape/components/adapter.py | sed -n '330,350p'

Repository: kencove/connector
Length of output: 925
🏁 Script executed:
# Check the _request method to understand json_body handling
rg -A 10 "def _request" connector_onshape/components/adapter.py

Repository: kencove/connector
Length of output: 512
🌐 Web query:
Onshape webhook API filter field JSON object format documentation
💡 Result:
Onshape’s Webhook/createWebhook request body does not have a generic filter JSON object field like some other webhook APIs. Instead, Onshape “filters” (scopes) events by requiring resource ID fields depending on the event group—e.g. documentId or companyId for Document events, companyId for Workflow events, clientId for Application events—plus the events array and options/url. [1]
For the exact JSON schema (i.e., what fields are allowed for createWebhook and which are required for each event type), Onshape’s documentation directs you to the Glassworks API Explorer, which is the authoritative, up-to-date schema view for request bodies. [2]
Sources:
[1] Onshape “Webhooks” docs (event groups + required registration fields): (onshape-public.github.io)
[2] Onshape “API Explorer” docs (how to view request body schema in Glassworks): (onshape-public.github.io)
Citations:
- 1: https://onshape-public.github.io/docs/app-dev/webhook/
- 2: https://onshape-public.github.io/docs/api-intro/explorer/?utm_source=openai
🏁 Script executed:
# Get full _request implementation to understand json_body handling
cat -n connector_onshape/components/adapter.py | sed -n '80,130p'

Repository: kencove/connector
Length of output: 2187
🏁 Script executed:
# Check if there are tests for webhook registration
cat -n connector_onshape/tests/test_adapter.py | grep -A 20 "webhook"

Repository: kencove/connector
Length of output: 43
🌐 Web query:
Onshape webhook API teamId filter field - does Onshape support teamId for webhook scope
💡 Result:
Onshape webhooks do not support scoping (or filtering) by teamId.
What Onshape supports for webhook “scope” depends on the event group:
- Document-group events: registration requires `documentId` OR `companyId` (company-wide applies to all company-owned docs, present and future). (onshape-public.github.io)
- Workflow-group events: registration requires `companyId`. (onshape-public.github.io)
- (App-group events use `clientId`, not teamId.) (onshape-public.github.io)
And for the webhook filter field (where used), the documented “context” forms are based on Document/Workspace/Element or User/Client—not teamId. (forum.onshape.com)
Citations:
- 1: https://onshape-public.github.io/docs/app-dev/webhook/
- 2: https://onshape-public.github.io/docs/app-dev/webhook/
- 3: https://onshape-public.github.io/docs/app-dev/webhook/
- 4: https://forum.onshape.com/discussion/11355/using-webhooks-for-part-translation?utm_source=openai
🌐 Web query:
Onshape webhook registration API request body schema documentation
💡 Result:
For Onshape webhook registration (create webhook) request-body schema, the canonical docs are:
- Onshape Public Docs → App Dev → Webhooks (includes the `POST /api/v6/webhooks` example body and explains which additional fields are required depending on the event “group” you’re registering for). [1]
The base request body is documented as containing at least: events (string array), url (string), and options.collapseEvents (boolean), with additional required identifiers depending on the event type—e.g. clientId (Application group) or documentId / companyId (Document group) or companyId (Workflow group). [1]
Recent schema change:
`POST /webhooks` (and `POST /webhooks/{wid}`) added a new request property `name` in release rel-1.210 (2026-01-30). [2]
If you want the full, authoritative JSON Schema field-by-field (types, required/optional, enums), Onshape’s docs explicitly point you to the Glassworks API Explorer:
- Open the Explorer, expand `createWebhook`, then view the Request body → Schema. [1][3]
Sources:
[1] Onshape Public Docs – Webhooks page (examples + per-event required fields) (onshape-public.github.io)
[2] Onshape Public Docs – Changelog rel-1.210 (adds name to webhook POST bodies) (onshape-public.github.io)
[3] Onshape Public Docs – API Explorer guide (how to view request-body schema) (onshape-public.github.io)
Citations:
- 1: https://onshape-public.github.io/docs/app-dev/webhook/
- 2: https://onshape-public.github.io/docs/changelog/?utm_source=openai
- 3: https://onshape-public.github.io/docs/api-intro/explorer/?utm_source=openai
🏁 Script executed:
# Check for tests related to webhooks
rg -i "register_webhook|webhook" connector_onshape/tests/test_adapter.py -B 2 -A 10

Repository: kencove/connector
Length of output: 43
🏁 Script executed:
# Search for backend.team_id usage throughout codebase to understand its purpose
rg "backend\.team_id" --type py -B 2 -A 2

Repository: kencove/connector
Length of output: 888
🏁 Script executed:
# Look for any comments or documentation about team-based filtering strategy
rg -i "team.*filter|filter.*team|teamid" connector_onshape/ -B 1 -A 1

Repository: kencove/connector
Length of output: 350
🌐 Web query:
Onshape API team-scoped webhook registration companyId vs teamId filtering
💡 Result:
Onshape’s webhook registration API does not support a teamId filter. For “team-scoped” behavior, you register at the company (tenant) level using companyId (or at a single-document level with documentId).
- For Document-group events, the registration body must include either `documentId` or `companyId`; registering with `companyId` subscribes you to events for all present and future documents owned by that company. (onshape-public.github.io)
- For Workflow-group events, `companyId` is required. (onshape-public.github.io)
So if your “team” owns documents, you typically achieve “team scope” by using the companyId of the owning Onshape company/tenant (the same identifier used broadly across Onshape APIs/docs), not a teamId. (onshape-public.github.io)
Citations:
- 1: https://onshape-public.github.io/docs/app-dev/webhook/
- 2: https://onshape-public.github.io/docs/app-dev/webhook/
- 3: https://onshape-public.github.io/docs/app-dev/webhook/
🏁 Script executed:
# Get the full _request method implementation to see how json_body is handled
rg -A 30 "def _request" connector_onshape/components/adapter.py | head -40

Repository: kencove/connector
Length of output: 1153
🏁 Script executed:
# Check if there's a requests.request call in _request that uses json parameter
rg "requests\.request|requests\.post|requests\.get" connector_onshape/components/adapter.py -B 2 -A 2

Repository: kencove/connector
Length of output: 185
🏁 Script executed:
# Look at the complete _request method to understand json_body parameter handling
cat -n connector_onshape/components/adapter.py | sed -n '120,180p'

Repository: kencove/connector
Length of output: 2833
Incorrect webhook scoping: Onshape doesn't support filter with teamId.
The current code attempts to use a non-existent filter field with teamId, which Onshape's webhook API does not recognize or support. Onshape webhook registration uses companyId for company/team-level scoping.
Additionally, line 342 double-encodes the JSON: json.dumps() creates a string, which requests then JSON-serializes again, producing "filter": "{\"teamId\":\"xxx\"}" on the wire instead of the intended object structure. However, fixing only the encoding won't work—the filter field and teamId parameter are not part of Onshape's webhook schema.
Correct approach: Use companyId field (not filter) for team-scoped webhooks:
🐛 Proposed fix
     if backend.team_id:
    -    body["filter"] = json.dumps({"teamId": backend.team_id})
    +    body["companyId"] = backend.team_id

Reference: Onshape webhook registration API requires companyId for Document and Workflow event scoping.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@connector_onshape/components/adapter.py` around lines 341 - 343, The webhook
registration incorrectly sets body["filter"] = json.dumps({"teamId":
backend.team_id}) and POSTs it via self._request("POST", "/api/v6/webhooks",
json_body=body); change this to use Onshape's company scoping by setting
body["companyId"] = backend.team_id (use the raw backend.team_id value, not
json.dumps) and remove the filter field so the request body contains companyId
at the top level before calling self._request; update any related references to
body["filter"] to reflect removal.
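The corrected registration body can be sketched as a plain dict builder. The `url`/`events`/`options.collapseEvents` fields follow the webhook docs quoted in this review; the helper name, callback URL, and event string are illustrative:

```python
import json

def build_webhook_body(callback_url, events, company_id=None):
    """Assemble an Onshape webhook registration body with company-level scoping."""
    body = {
        "url": callback_url,
        "events": list(events),
        "options": {"collapseEvents": False},
    }
    if company_id:
        # Top-level companyId, not a "filter" expression, scopes the hook org-wide.
        body["companyId"] = company_id
    return body

body = build_webhook_body("https://example.com/onshape/webhook",
                          ["onshape.model.lifecycle.changed"], "abc123")
assert body["companyId"] == "abc123"
assert "filter" not in body
# json.dumps is applied once, by the HTTP layer -- no double encoding:
assert json.loads(json.dumps(body))["companyId"] == "abc123"
```

Serializing the whole body exactly once (in the request layer) also removes the double-encoding the review observed with the old `json.dumps(...)` filter value.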
    # Include weight from Odoo if set (export back to Onshape)
    if binding.odoo_id.weight:
        vals["Weight"] = str(binding.odoo_id.weight)
"Weight" export has no symmetric import path, causing round-trip data loss.
OnshapeProductExportMapper.map_record writes "Weight" to Onshape, but "Weight" is absent from both KNOWN_PROPERTIES and PROPERTY_FIELD_MAP in the import mapper. When the part is re-imported, "Weight" falls through to custom_props and is stored as JSON in onshape_custom_properties rather than updating the product's weight field, so the round-trip is asymmetric.
Either add "Weight" to KNOWN_PROPERTIES and wire it to a binding field in PROPERTY_FIELD_MAP (e.g., "onshape_mass" in kg after unit conversion), or document that the weight export is intentionally write-only.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@connector_onshape/components/mapper.py` around lines 219 - 221, On export you
write "Weight" in OnshapeProductExportMapper.map_record but the import mapper
lacks a corresponding entry, causing round-trip loss; add "Weight" to
KNOWN_PROPERTIES and add a PROPERTY_FIELD_MAP entry mapping "Weight" ->
"onshape_mass" (or the chosen binding field) and implement unit conversion in
the import path so the imported value populates the binding field (e.g., convert
Onshape mass units to kg and store in onshape_mass or the binding's weight
field) to make export/import symmetric.
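A standalone sketch of the symmetric routing the prompt describes. KNOWN_PROPERTIES, PROPERTY_FIELD_MAP, and the onshape_mass field mirror the module's import mapper but are reproduced here as assumptions, and the grams-to-kilograms conversion is a placeholder for whatever units the backend actually reports:

```python
# Hypothetical mirrors of the import mapper's tables, with "Weight" added.
KNOWN_PROPERTIES = {"Part Number", "Description", "Weight"}
PROPERTY_FIELD_MAP = {
    "Part Number": "onshape_part_number",
    "Description": "onshape_description",
    "Weight": "onshape_mass",
}


def route_property(name, value, vals, custom_props):
    """Route one Onshape property: known names go to binding fields
    (with unit conversion for mass); everything else stays custom."""
    if name not in KNOWN_PROPERTIES:
        custom_props[name] = value
        return
    field = PROPERTY_FIELD_MAP[name]
    if field == "onshape_mass":
        # Assumed grams -> kg; verify against the backend's unit settings.
        vals[field] = float(value) / 1000.0
    else:
        vals[field] = value


vals, custom = {}, {}
route_property("Weight", "1500", vals, custom)
route_property("Color", "red", vals, custom)
```

With this shape, an exported "Weight" comes back into a typed binding field instead of the JSON custom-properties blob.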
```python
def _validate_signature(self, raw_body, secret, signature):
    expected = hmac.new(
        secret.encode("utf-8"),
        raw_body,
        digestmod=hashlib.sha256,
    ).hexdigest()
    return hmac.compare_digest(expected, signature)
```
🧩 Analysis chain
🌐 Web query:
Onshape webhook X-Onshape-Webhook-Signature header format documentation
💡 Result:
Onshape documents the webhook signature headers as a raw Base64-encoded HMAC-SHA256 digest (no sha256= prefix).
Headers sent (when enabled in Company/Enterprise settings):

- `X-onshape-webhook-timestamp` (string timestamp of when Onshape sent the webhook)
- `X-onshape-webhook-signature-primary`
- `X-onshape-webhook-signature-secondary`

Signature value format / computation:

- Build the signed message as: `message = <timestamp header value> + "." + <raw webhook request body>`
- Compute: `signature = Base64(HMAC-SHA256(key=<primary or secondary signing key>, data=message))`

Citation:

- https://cad.onshape.com/help/fr_FR/Content/Plans/webhooks.htm
🏁 Script executed:
```shell
# Search for how X-Onshape-Webhook-Signature is used in the codebase
rg "X-Onshape-Webhook-Signature" -A 5 -B 5
```
🏁 Script executed:
```shell
# Look for webhook signature validation tests or examples
fd "test" -t f | xargs rg -l "webhook\|signature" -i
```
🏁 Script executed:
```shell
# Check the webhook.py file to see the full context of signature handling
cat -n connector_onshape/controllers/webhook.py | head -120
```
Fix the webhook signature validation: use Base64 encoding and include the timestamp in the message.
The signature validation has two critical issues:

1. Encoding mismatch: the code computes `.hexdigest()`, but Onshape sends a Base64-encoded HMAC-SHA256 value. The comparison will always fail silently.
2. Missing timestamp: according to Onshape's webhook documentation, the signed message must be `<X-onshape-webhook-timestamp>.<raw_body>`, but the code only uses `raw_body`.

Change the validation to decode the Base64 signature and include the timestamp header in the signed message before comparison.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@connector_onshape/controllers/webhook.py` around lines 85 - 91, The
_validate_signature function is using hex digests and omitting the timestamp;
change it to build the signed message as "<timestamp>.<raw_body>" (use the
X-onshape-webhook-timestamp header value passed into the function), compute the
HMAC-SHA256 over the raw bytes of that message using secret.encode("utf-8"),
compare the raw binary digests (use .digest()) and decode the incoming signature
from Base64 (base64.b64decode(signature)) before calling hmac.compare_digest to
perform a constant-time comparison; update references in _validate_signature
accordingly to operate on bytes and the timestamp.
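Putting the prompt's steps together, a self-contained sketch of the corrected check; the function and parameter names are illustrative, not the module's exact signature:

```python
import base64
import hashlib
import hmac


def validate_signature(raw_body: bytes, secret: str,
                       signature_b64: str, timestamp: str) -> bool:
    """Verify Base64(HMAC-SHA256(secret, "<timestamp>.<raw_body>"))
    in constant time, per Onshape's documented scheme."""
    message = timestamp.encode("utf-8") + b"." + raw_body
    expected = hmac.new(secret.encode("utf-8"), message,
                        hashlib.sha256).digest()
    try:
        received = base64.b64decode(signature_b64, validate=True)
    except ValueError:  # malformed Base64 -> reject rather than crash
        return False
    return hmac.compare_digest(expected, received)
```

The controller should try both the primary and secondary signature headers so that key rotation does not drop events.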
```python
# Queue re-import of products for this document
backend.with_delay(
    priority=10,
    description=f"Re-import products for doc {document.name}",
).action_import_products()
```
action_import_products() fans out to a full backend resync on every metadata webhook.
A single onshape.model.lifecycle.metadata event triggers backend.with_delay().action_import_products(), which re-imports all products across all documents for the backend. If a busy assembly changes frequently, this can queue many full-resync jobs simultaneously, overwhelming the job queue and the Onshape API rate limits.
Consider scoping the re-import to just the changed document (the document record is already resolved at line 112):
♻️ Proposed refinement
```diff
-# Queue re-import of products for this document
-backend.with_delay(
+# Queue re-import scoped to the changed document
+document.with_delay(
     priority=10,
     description=f"Re-import products for doc {document.name}",
-).action_import_products()
+).action_import_document_products()
```
This requires a corresponding action_import_document_products method on onshape.document; alternatively, pass the document ID as a filter argument to the existing importer.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@connector_onshape/controllers/webhook.py` around lines 127 - 131, The current
call to backend.with_delay(...).action_import_products() triggers a full backend
resync for every metadata webhook; change this to only re-import products for
the specific changed document by either (A) adding a targeted method like
onshape.document.action_import_document_products(document.id) and invoking
backend.with_delay(...).action_import_document_products(document.id), or (B)
update the existing action_import_products to accept a document_id/filter
argument and call
backend.with_delay(...).action_import_products(document_id=document.id); update
the webhook handler (where document is already resolved) to pass the document ID
so only that document’s products are re-imported and avoid queuing full resyncs.
```python
return {
    "type": "ir.actions.client",
    "tag": "display_notification",
    "params": {
        "title": _("Import Started"),
        "message": _(
            "Import jobs have been queued. "
            "Check the job queue for progress."
        ),
        "type": "success",
        "sticky": False,
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Search for with_delay() usage in the codebase
rg -n "with_delay\(\)" connector_onshape/
# Look for batch importer class definitions
rg -n "class.*[Bb]atch.*[Ii]mporter" connector_onshape/
# Search for def run methods in components
rg -n "def run" connector_onshape/components/
```
🏁 Script executed:
```shell
# Check the onshape_backend.py file for action_import_* methods around lines 109-128
git ls-files connector_onshape/ | grep -E "(backend|onshape_backend)" | head -20
```
🏁 Script executed:
```shell
# List structure to understand where files are
fd -t f "(backend|importer)" connector_onshape/ | head -30
```
🏁 Script executed:
```shell
# Read the relevant sections of onshape_backend.py to see the action_import_* methods
sed -n '175,200p' connector_onshape/models/onshape_backend.py
```
🏁 Script executed:
```shell
# Also check the wizard code to see how it calls action_import_*
sed -n '55,75p' connector_onshape/wizards/onshape_import_wizard.py
```
🏁 Script executed:
```shell
# Check onshape_backend.py lines 109-135 to see the original action_import_* definitions mentioned in review
sed -n '109,145p' connector_onshape/models/onshape_backend.py
```
The notification message is misleading: batch imports triggered from this wizard are synchronous, not queued.
The wizard calls backend.action_import_documents(), backend.action_import_products(), and backend.action_import_boms() directly (lines 65–70) without with_delay(). These methods in turn call importer.run(backend=self) directly without async dispatching. The with_delay() calls that exist in onshape_backend.py are only in the separate cron_import_* methods (not the action_import_* methods), so they apply only to scheduled imports, not wizard-initiated ones.
The success notification claims "Import jobs have been queued," but the user-initiated imports execute synchronously. Users may dismiss the notification believing work is in progress when execution has already completed (or failed), defeating the purpose of async feedback.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@connector_onshape/wizards/onshape_import_wizard.py` around lines 61 - 70, The
notification text in onshape_import_wizard.py is misleading because the wizard
calls backend.action_import_documents(), backend.action_import_products(), and
backend.action_import_boms() synchronously (they call importer.run(...)
directly), so either change the wizard to dispatch those calls asynchronously
using the same with_delay() pattern as the cron_import_* methods or update the
notification message to reflect synchronous execution; specifically, adjust the
"message" and/or "title" in the return dict so it no longer says "queued" (e.g.,
"Import started - running synchronously; check the job log for results" or
similar) or replace the direct calls to
action_import_documents/action_import_products/action_import_boms with delayed
invocations (backend.with_delay().action_import_documents(), etc.) to match the
message.
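If the wording route is taken, a minimal sketch of a notification that matches synchronous execution. The strings are suggestions, and `_` is a stand-in for Odoo's translation helper so the snippet runs alone:

```python
def _(s):
    """Stand-in for odoo's translation function in this sketch."""
    return s


def sync_import_notification():
    """Notification reflecting that the imports already ran."""
    return {
        "type": "ir.actions.client",
        "tag": "display_notification",
        "params": {
            "title": _("Import Finished"),
            "message": _(
                "Imports ran synchronously. "
                "Check the Onshape backend logs for results."
            ),
            "type": "success",
            "sticky": False,
        },
    }
```

The alternative fix, dispatching via `backend.with_delay()` as the cron methods already do, would let the original "queued" wording stand.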
Summary

Models

- onshape.backend
- onshape.document
- onshape.product.product
- onshape.mrp.bom

Metadata Fields on Product Binding

- onshape_part_number
- onshape_description
- onshape_material
- onshape_appearance
- onshape_revision
- onshape_mass
- onshape_volume
- onshape_surface_area
- onshape_author
- onshape_designer
- onshape_vendor
- onshape_project
- onshape_custom_properties

Dependencies

All already installed in the Kencove 16.0 stack.

Deployment notes

After merging, add to addons.yaml under the connector: section.

No new Python dependencies required. Onshape API credentials (HMAC or OAuth2) must be configured in the backend form after installation.
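The yaml snippet itself was lost here; assuming a doodba-style addons.yaml where each section maps to a list of addon names, the entry would look roughly like:

```yaml
# Sketch only -- match the existing layout of the repo's addons.yaml.
connector:
  - connector_onshape
```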
Test plan

`invoke test --modules=connector_onshape`

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Documentation