Implement libtorch-based policy inference in ctrader #71
Conversation
Adds policy_infer.cpp with real TorchScript model loading and inference via the libtorch C++ API, exposed through C-compatible extern "C" functions. The stub policy_infer.c remains as a fallback when TORCH_DIR is not set. The Makefile gains a download_libtorch target, a conditional C++/C build, and a test_policy target. 30 unit tests cover load/unload, forward pass, determinism, multi-symbol output, and edge cases. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
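For orientation, the C-compatible boundary the description refers to plausibly has roughly this shape (a sketch only; apart from `Policy`, `policy_load`, `policy_unload`, and the `model` field mentioned in the review below, names and signatures are assumptions and the real header may differ):

```c
/* policy_infer.h -- sketch of the C-compatible API surface.
 * Only Policy, policy_load, policy_unload, and the model field are
 * confirmed by the PR; everything else here is illustrative. */
#ifndef POLICY_INFER_H
#define POLICY_INFER_H

#ifdef __cplusplus
extern "C" {
#endif

typedef struct Policy {
    void *model;  /* opaque handle; a torch::jit::Module* in the C++ build */
} Policy;

int  policy_load(Policy *policy, const char *model_path);
void policy_unload(Policy *policy);

#ifdef __cplusplus
}
#endif

#endif /* POLICY_INFER_H */
```

Keeping the struct field an opaque pointer is what lets the C stub and the libtorch-backed C++ implementation share one header.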
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: b60ffd34f3
```cpp
int policy_load(Policy *policy, const char *model_path) {
    memset(policy, 0, sizeof(*policy));
    try {
        auto *mod = new torch::jit::Module(torch::jit::load(model_path));
```
Load TorchScript modules onto CPU explicitly
Checked ctrader/Makefile: download_libtorch fetches the CPU-only libtorch build, but policy_load() uses bare torch::jit::load(model_path) here. When a checkpoint was exported from CUDA (which is a common training path in this repo), TorchScript will try to restore tensors back to that saved device, so loading the model on the CPU trading bot fails before inference ever starts. Pass an explicit CPU device/map_location during load so GPU-trained artifacts remain deployable.
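A minimal sketch of the suggested fix, built on the diff above (`torch::jit::load` accepts an optional device argument that acts like a map_location; the `eval()` call and error handling here are illustrative additions, not from the PR):

```cpp
#include <torch/script.h>  // libtorch TorchScript API
#include <cstdio>
#include <cstring>

int policy_load(Policy *policy, const char *model_path) {
    std::memset(policy, 0, sizeof(*policy));
    try {
        // Explicit CPU device: tensors saved on CUDA are remapped to CPU
        // at load time, so GPU-trained checkpoints deploy on the
        // CPU-only libtorch build.
        auto *mod = new torch::jit::Module(
            torch::jit::load(model_path, torch::kCPU));
        mod->eval();  // inference mode; disables dropout etc.
        policy->model = mod;
        return 0;
    } catch (const c10::Error &e) {
        std::fprintf(stderr, "policy_load: %s\n", e.what());
        return -1;
    }
}
```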
```cpp
#include <cstdio>

int policy_load(Policy *policy, const char *model_path) {
    memset(policy, 0, sizeof(*policy));
```
Unload the previous module before clearing Policy
If policy_load() is called on an already-loaded Policy (for example during an in-process model refresh), the memset here wipes out policy->model before policy_unload() can free it. That leaks the original torch::jit::Module, and if the second load fails we also lose the only handle to the previously working model.
- C++ policy_infer.cpp with TorchScript model loading via libtorch
- Conditional build: uses policy_infer.cpp when TORCH_DIR is set, falls back to policy_infer.c (C stub) otherwise
- download_libtorch Make target for CPU libtorch
- Combined with PR #70: vendor/cJSON.c + mktd_reader.c in C_SRCS

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
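The conditional build described above could look roughly like this (a sketch under assumed variable names; the actual Makefile may structure this differently, and libtorch typically also needs its nested `include/torch/csrc/api/include` path):

```make
# Sketch: pick the policy backend based on TORCH_DIR (names are assumptions).
ifdef TORCH_DIR
POLICY_SRC   = policy_infer.cpp
POLICY_FLAGS = -I$(TORCH_DIR)/include \
               -I$(TORCH_DIR)/include/torch/csrc/api/include \
               -L$(TORCH_DIR)/lib -ltorch -ltorch_cpu -lc10
else
POLICY_SRC   = policy_infer.c
POLICY_FLAGS =
endif
```

This keeps the default `make` invocation identical to the pre-PR build, since nothing changes unless TORCH_DIR is exported.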
Summary
- policy_infer.cpp with real TorchScript model loading and inference via the libtorch C++ API, exposed through C-compatible extern "C" functions
- policy_infer.c remains as a fallback when TORCH_DIR is not set -- existing build is unchanged
- download_libtorch target (fetches CPU libtorch), conditional C++/C build paths, and test_policy target

Test plan
- cd ctrader && make test -- existing tests pass (38 + 11 = 49 passed)
- cd ctrader && make download_libtorch && TORCH_DIR=libtorch make test_policy -- 30 passed
- cd ctrader && make test_policy without TORCH_DIR -- gracefully skips

Generated with Claude Code