init 0.2.6, semi-major revamp #60
Conversation
…ion optimizations
- Integrate RunnerCommand channel across all API modules for unified shutdown signaling
- Add non-consuming shutdown_ref() method to Client and rename shutdown to shutdown_owned()
- Implement connection stability tracking with automatic reconnect attempt reset after 10 seconds of uptime
- Optimize timeout configuration: extend initial handshake to 20s, reduce WebSocket connection timeout to 10s
- Refine logging verbosity: downgrade connection lifecycle events from info to debug for production
- Update region configuration data with precise geographic coordinates and streamlined endpoint lists
- Modernize test infrastructure using module-scoped fixtures and reduced sleep intervals
…connection error type
- Changed WebsocketConnectionError to use Box<TungsteniteError> for better error handling.
- Added From<TungsteniteError> implementation for BinaryOptionsToolsError.
- Cleaned up imports and improved error messages for clarity.

Remove unused imports in send.rs
- Removed the unused `debug` import from tracing.

Refactor RecieverStream and FilteredRecieverStream in stream.rs
- Cleaned up code formatting and improved readability.
- Ensured consistent use of async/await patterns.
- Updated error handling in receive methods.

Update traits in traits.rs
- Cleaned up imports and ensured consistent formatting.
- Maintained trait definitions for better clarity and usability.

Refactor Data and Callback structures in types.rs
- Improved code organization and readability.
- Ensured consistent use of async patterns and error handling.
- Updated the default_validator function for clarity.

Update reimports in reimports.rs
- Organized imports for better readability.

Bump binary-options-tools-macros version to 0.2.0
- Updated version in Cargo.lock to reflect the latest changes.
…eals module panic and ExpertOptions deserialization.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior can be configured in the review settings.
📝 Walkthrough

Significant refactor: repository templates and CI streamlined; monetary values migrated from f64 to rust_decimal::Decimal and timestamps to i64 across crates and Python bindings; new RunnerCommand and ResponseRouter wiring threaded through modules; pending-order APIs, richer Socket.IO binary/text handling, centralized TLS connector/geolocation, and a redesigned event-driven WebSocket client introduced.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Python Client
    participant PyRaw as RawPocket (PyO3/Rust)
    participant Runner as Runner/Dispatcher
    participant WS as WebSocket Server
    Client->>PyRaw: open_pending_order(params)
    PyRaw->>Runner: enqueue Command(OpenPendingOrder, command_id)
    Runner->>WS: send Socket.IO request (42[...] / binary)
    WS-->>Runner: response (text or binary)
    Runner->>Runner: ResponseRouter routes by command_id
    Runner-->>PyRaw: deliver command response (PendingOrder)
    PyRaw-->>Client: return PendingOrder/result
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ Passed checks (4 passed)
No actionable comments were generated in the recent review.
Summary of Changes

Hello @sixtysixx, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces a semi-major revamp to the BinaryOptionsTools library, primarily focusing on enhancing financial precision, improving WebSocket communication robustness, and expanding trading capabilities with pending orders. The core Rust logic now leverages …

Highlights
Code Review
This pull request introduces a significant revamp, including a major refactoring to use Decimal for financial values, which greatly improves precision. The connection logic, module architecture, and project documentation have also been substantially improved for better robustness and maintainability. While these are excellent changes, I've identified several critical issues where Decimal to f64 conversions are handled unsafely, potentially leading to silent data corruption. I've also noted a couple of minor documentation errors that could mislead users.
Resolved (outdated) review comment: BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/pocketoption/synchronous.py
…synchronous.py Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Actionable comments posted: 13
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (9)
BinaryOptionsToolsUni/README.md (1)
168-182: ⚠️ Potential issue | 🟡 Minor

Section heading should be `### C#`, not `### C`. The code block contains C# (.NET) code (`using BinaryOptionsToolsUni;`, `await`, `Console.WriteLine`, etc.), but the heading reads "C" instead of "C#". This is inconsistent with the rest of the README where each language section heading matches its content.

📝 Proposed fix

```diff
-### C
+### C#
```

BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/BinaryOptionsToolsV2.pyi (1)
155-155: ⚠️ Potential issue | 🟠 Major

Update the `start_tracing` stub to match the 4-parameter Rust signature. The stub declares `start_tracing(level: str = "info") -> None`, but the Rust implementation (logs.rs) defines it with 4 required parameters: `path: String`, `level: String`, `terminal: bool`, and a layers argument. The Python code at tracing.py:157 correctly calls it with 4 arguments, so the stub will cause a `TypeError` at runtime. Update the stub to: `def start_tracing(path: str, level: str, terminal: bool, layers: list = None) -> None: ...`

BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/pocketoption/asynchronous.py (1)
503-525: ⚠️ Potential issue | 🔴 Critical

Bug: `payout()` returns `None` when called without an `asset` argument. When `asset` is `None` (the default), neither the `if isinstance(asset, str)` nor the `elif isinstance(asset, list)` branch executes, so the function falls through and implicitly returns `None`. The full payout dictionary is never returned, breaking the primary use case shown in the examples where `full_payout = await api.payout()` expects a dictionary. A `return payout` statement is missing at the end of the method.

Proposed fix

```diff
     payout = json.loads(await self.client.payout())
     if isinstance(asset, str):
         return payout.get(asset)
     elif isinstance(asset, list):
         return [payout.get(ast) for ast in asset]
+    return payout
```

crates/binary_options_tools/src/pocketoption/ssid.rs (1)
134-152: ⚠️ Potential issue | 🟡 Minor

Verify that recursive `Self::parse` terminates for all inputs. Line 141: If the trimmed input is a valid JSON string, it's unquoted and `parse` is called recursively. This is safe because `serde_json::from_str::<String>` only succeeds for a JSON-encoded string literal (e.g., `"\"inner\""` → `"inner"`), and each recursion peels one layer of encoding. However, a pathologically crafted input with many nesting levels could cause deep recursion (stack overflow). Consider adding a depth limit or converting to an iterative loop if externally-supplied SSIDs are accepted.
crates/binary_options_tools/src/pocketoption/modules/subscriptions.rs (1)
828-838: ⚠️ Potential issue | 🟠 Major

Cloning `SubscriptionStream` shares the MPSC receiver, causing messages to be split between clones. `kanal::AsyncReceiver` is an MPSC (bounded) channel receiver. Cloning it creates multiple consumers that compete for messages from the same channel, resulting in each message being received by only one clone. The `receive()` method expects sequential consumption of all messages, so cloned instances will silently miss data.

Additionally, each cloned instance has its own `Drop` implementation that sends an `Unsubscribe` command. This means dropping one clone unsubscribes the stream for all other clones, breaking them unexpectedly.

If `Clone` is needed for framework integration, use a broadcast channel instead, or document the intentional single-consumer-per-clone design and prevent accidental cloning through the type system.

crates/binary_options_tools/src/expertoptions/modules/profile.rs (2)
228-228: ⚠️ Potential issue | 🟠 Major

Remove `println!("Here")` debug artifact from production code. This will emit unstructured output to stdout on every module start. Replace with a `debug!` call or remove entirely.

Proposed fix

```diff
-        println!("Here");
+        debug!(target: "ProfileModule", "Starting profile module run loop.");
```
325-343: ⚠️ Potential issue | 🟠 Major

Remove `dbg!()` macros from `send_startup_messages`. Lines 330–331 use `dbg!()`, which writes to stderr and is not suitable for production. Replace with structured tracing.

Proposed fix

```diff
-        if dbg!(self.state.is_demo().await) {
-            dbg!("Sent demo message");
+        if self.state.is_demo().await {
+            debug!(target: "ProfileModule", "Sending demo context message.");
```
505-508: ⚠️ Potential issue | 🟡 Minor

`RunnerCommand::Connect` and `RunnerCommand::Reconnect` silently discarded during active session. As noted in the traits.rs review, the `_ => {}` catch-all on line 507 means `Reconnect` commands sent via `client.reconnect()` during an active session are consumed and ignored without feedback. At minimum, add a `debug!` log so this isn't invisible.

Proposed fix

```diff
-            _ => {}
+            other => {
+                debug!(target: "Runner", "Ignoring command {other:?} during active session.");
+            }
```

crates/binary_options_tools/src/lib.rs (1)
39-39: ⚠️ Potential issue | 🟡 Minor

Type has a spelling error: `RecieverStream` should be `ReceiverStream`. The misspelling originates in the source crate definition (`crates/core-pre/src/utils/stream.rs`, mirrored in `crates/core/src/general/stream.rs`), and since it's re-exported here in the public API, downstream users see the misspelled name. Fixing this requires renaming the type definition and updating all ~30+ usages across the codebase.
🤖 Fix all issues with AI agents
In `@crates/binary_options_tools/src/framework/virtual_market.rs`:
- Around line 309-310: The computed profit currently assigns total payout to the
local variable profit (when win is true); change the calculation so Deal.profit
stores net gain/loss: set profit to trade.amount *
Decimal::from(trade.payout_percent) / dec!(100.0) when win, dec!(0.0) when draw,
and -trade.amount when loss. Update the branch using the variables win, draw,
trade.amount and trade.payout_percent (the profit local and the Deal.profit
assignment) to implement those three cases so the stored profit matches
PocketOption API semantics.
In `@crates/binary_options_tools/src/pocketoption/modules/subscriptions.rs`:
- Around line 36-66: The background router (ResponseRouter) can drop responses
if a module replies before wait_for inserts the oneshot sender; implement a
pre-registration API on ResponseRouter (e.g., async fn register(&self, id: Uuid)
-> oneshot::Receiver<CommandResponse>) that locks pending, inserts a new
oneshot::Sender for id and returns the Receiver, and update callers (places that
currently call wait_for(id) after sending Command like subscribe()) to call
router.register(id) before sending the Command to the module, then await the
returned receiver and map the oneshot error to PocketError as wait_for does;
keep ResponseRouter::new and the background task unchanged except that it will
now find pre-registered senders in pending.
In `@crates/core/data/client_enhanced.rs`:
- Around line 485-488: BinaryOptionsToolsError::WebsocketConnectionError is
being constructed with a bare
tokio_tungstenite::tungstenite::Error::ConnectionClosed but the variant expects
a boxed error; update the constructor to wrap the tungstenite error in
Box::new(...) (e.g.,
Box::new(tokio_tungstenite::tungstenite::Error::ConnectionClosed)) wherever
BinaryOptionsToolsError::WebsocketConnectionError is created (including the
instance in client_enhanced.rs and the similar occurrence in connection.rs) so
the variant signature matches.
- Around line 388-440: The init() function currently accepts handler: Handler
and credentials: Creds but does not store them on EnhancedWebSocketInner and the
stored data: Data is never used in the receiver task; update the struct to
include fields for credentials and handler (e.g., credentials: Creds, handler:
Handler) and set them in init(), then modify the message receiver/receiver task
(the code that consumes message_receiver and emits events) to
authenticate/connect using credentials (during connect/reconnect logic) and to
pass incoming messages into the Data/Handler pipeline (use the Data<T,Transfer>
instance and call the appropriate handler methods to process messages before
emitting events). Ensure references to EnhancedWebSocketInner, init,
message_receiver, data, Handler, and Creds are updated everywhere the
receiver/connect logic runs so the handler and credentials are actually used.
In `@crates/core/data/client2.rs`:
- Around line 537-543: The closure parameter type in update_websocket_config is
wrong: update_websocket_config currently declares F: FnOnce(&mut
WebSocketConfig) but delegates to self.shared_state.update_config which expects
FnOnce(&mut WebSocketClientConfig); change update_websocket_config to accept F:
FnOnce(&mut WebSocketClientConfig) (or alternatively change
shared_state.update_config to use WebSocketConfig if you intend to unify types),
update the function signature of update_websocket_config and any callers to use
WebSocketClientConfig, and ensure imports/usages of WebSocketConfig vs
WebSocketClientConfig are reconciled so the types match when calling
shared_state.update_config.
- Around line 1-1003: The file fails to compile due to multiple API mismatches:
align the WebSocketEvent definition with its usage (make variants use the same
tuple/struct shapes or update usage sites like broadcast_event and handler
matches to the defined variants), unify the trait types by replacing/renaming
WebSocketEventHandler or EventHandler so the stored handlers
(SharedState.event_handlers) and implementations (LoggingEventHandler,
StatsEventHandler) implement the same trait used throughout (ensure the
async_trait signature matches handle_event(&self, event: &WebSocketEvent<...>)
or EventHandler::handle_event(event)), fix SharedState fields and constructors
by adding or renaming the missing stats field or updating calls to
get_stats/update_stats to use connection_state (update SharedState::new
signature or its callers so arity matches), remove the unsupported
SplitSink::clone() call in run_connection and instead move or wrap the sink into
an Arc<Mutex<...>> or spawn the sender task using the original write (transfer
ownership properly), pick one config type (WebSocketConfig vs
WebSocketClientConfig) and make get_config()/update_config()/usage consistent,
and make LoggingEventHandler’s match over WebSocketEvent exhaustive (add missing
arms or a catch-all _ arm). Apply these changes to the referenced symbols:
WebSocketEvent, WebSocketEventHandler/EventHandler, SharedState::new,
get_stats/update_stats and connection_state, run_connection (sender_task
creation and write usage), WebSocketConfig/WebSocketClientConfig and
LoggingEventHandler::handle_event.
- Around line 70-83: The code defines a new WebSocketEventHandler trait that
conflicts with the existing EventHandler trait used elsewhere
(LoggingEventHandler, StatsEventHandler, TestEventHandler), causing a type
mismatch; remove/stop using WebSocketEventHandler and unify on the existing
EventHandler trait: delete or stop referencing WebSocketEventHandler, change
SharedState::event_handlers to store Arc<dyn EventHandler<Transfer>>, update
WebSocketClient2::add_event_handler to accept Arc<dyn EventHandler<Transfer>>
and ensure the call sites and method signatures (handle_event in the handlers
and any invocations inside WebSocketClient2 that pass events) match
EventHandler’s expected signature (convert any &WebSocketEvent<...> usages to
the owned/event form EventHandler expects). Ensure imports reference
crate::general::events::EventHandler and that all handler structs implement that
trait.
- Around line 406-418: The methods get_stats and update_stats reference a
non-existent self.stats; change them to operate on the existing connection_state
field or add a new stats field—pick one: (A) Prefer changing the methods to use
connection_state: in get_stats() read and clone from self.connection_state
(return the appropriate type, e.g., ConnectionState or convert to
ConnectionStats if callers require it) and in update_stats<F> take FnOnce(&mut
ConnectionState) and apply it to self.connection_state.write().await; or (B) if
you truly need a separate ConnectionStats storage, add stats:
Arc<RwLock<ConnectionStats>> to SharedState and initialize it where SharedState
is constructed, then keep get_stats/update_stats operating on self.stats.
Reference the methods get_stats, update_stats and the field connection_state
when making the change and update all call sites (start_event_loop,
run_connection) to match the chosen type (ConnectionState vs ConnectionStats).
- Around line 738-765: The sender task currently calls write.clone() but
SplitSink (the write variable) is not Clone; instead move the write into the
spawned task and stop attempting to clone it: change the closure capture to take
ownership of write (e.g., remove write.clone() and move write into the async
move) and adjust the surrounding run_connection signature so write is not
declared as a mutable top-level borrow that must be cloned; ensure the spawned
task owns a mutable write (SplitSink<WebSocketStream<MaybeTlsStream<TcpStream>>,
Message>) and use that owned write inside the loop that reads from
message_receiver and updates shared_state (referencing sender_task, write,
run_connection, message_receiver, shared_state).
- Around line 566-568: SharedState::new is being invoked with a single argument
but its signature requires two (data, buffer_size); update the call site in
client2.rs where SharedState::new(data) is used to pass a proper buffer_size
(for example: use the buffer size value from the existing config object or a
sensible default constant) so the call matches SharedState::new(data,
buffer_size) and compiles.
- Around line 35-68: The enum WebSocketEvent is defined with struct-style
variants but is used throughout as tuple/unit variants (e.g.,
WebSocketEvent::Connected, WebSocketEvent::Disconnected(reason),
WebSocketEvent::Error(e.to_string())), which causes compilation errors; update
the WebSocketEvent declaration to use tuple and unit variants to match usage
(e.g., Connected(Option<String>) or Connected, Disconnected(String),
Error(String, Option<String>), MessageReceived(Transfer),
RawMessageReceived(Transfer::Raw), MessageSent(Transfer), PingSent(Instant),
PongReceived(Instant), etc.) so all construction and match sites across the file
compile without changing those sites.
- Around line 913-935: The match in LoggingEventHandler::handle_event over
WebSocketEvent is non‑exhaustive and misses variants (Authenticated,
BalanceUpdated, OrderOpened, OrderClosed, StreamUpdate, CandlesReceived,
PingSent, PongReceived); update the match in the handle_event method to either
add a catch‑all arm (e.g., _ => debug!("Unhandled WebSocket event: {:?}",
event)) or explicitly handle the missing variants with appropriate log calls,
ensuring WebSocketEvent is fully covered so the code compiles.
In `@README.md`:
- Around line 114-128: The README contains installation URLs hard-coded to
release version "0.2.6" which does not exist; update the three wheel URLs (the
Windows, Linux, and macOS pip install links shown) to point to the actual
published release "0.2.1" (replace every occurrence of "0.2.6" in those URLs and
filenames with "0.2.1"), or alternatively revert the lines to a generic pip
install command that references the package name without a release-specific
wheel if you prefer to wait for 0.2.6.
🟠 Major comments (14)
crates/core/data/batching.rs-145-151 (1)
145-151: ⚠️ Potential issue | 🟠 Major

Handle `rate == 0` to prevent an infinite wait. With `rate == 0`, `acquire()` will sleep forever. Either reject zero in `new()` or treat it as "no limit."

💡 Example safeguard

```diff
 pub fn new(rate: u32) -> Self {
-    Self {
+    assert!(rate > 0, "rate must be > 0");
+    Self {
         rate,
         tokens: Arc::new(Mutex::new(rate)),
         last_refill: Arc::new(Mutex::new(Instant::now())),
     }
 }
```

Also applies to: 153-176
crates/core/data/batching.rs-44-46 (1)
44-46: ⚠️ Potential issue | 🟠 Major

Guard against zero/invalid batching config to prevent panics and unexpected stalls. `batch_size == 0` causes division by zero, `batch_timeout == Duration::ZERO` will panic in `interval()` (tokio asserts the period must be non-zero), and `max_pending < batch_size` creates a zero-capacity channel that can unexpectedly block `add_message`. Add validation when constructing the batcher to enforce these invariants. Also applies to lines 109–116 (background flusher interval creation).

💡 Example validation approach (keeps API stable)

```diff
 pub fn new(config: BatchingConfig) -> Self {
-    let (batch_sender, batch_receiver) = bounded(config.max_pending / config.batch_size);
+    assert!(config.batch_size > 0, "batch_size must be > 0");
+    assert!(!config.batch_timeout.is_zero(), "batch_timeout must be > 0");
+    let capacity = (config.max_pending / config.batch_size).max(1);
+    let (batch_sender, batch_receiver) = bounded(capacity);
```

crates/binary_options_tools/src/pocketoption/modules/keep_alive.rs-110-130 (1)
110-130: ⚠️ Potential issue | 🟠 Major

Logging the user's public IP raises a privacy/compliance concern. Line 115 calls `get_public_ip().await` and logs the user's public IP address at `warn` level on auth rejection. This could conflict with GDPR/CCPA requirements if logs are retained or shipped to a centralized system. Additionally, `get_public_ip()` makes an external network call in the error path, adding latency before the shutdown signal is sent. Consider removing the IP log or gating it behind a debug/trace level so it doesn't appear in production logs by default.
crates/core/data/client2.rs-326-330 (1)
326-330: ⚠️ Potential issue | 🟠 Major

Handlers beyond `max_concurrent_handlers` are silently skipped, not deferred. When the number of matching handlers exceeds `max_concurrent_handlers`, the loop `break`s (line 328), permanently skipping the remaining handlers for this event with no warning logged. This means later-registered handlers may silently never fire. Consider using a `tokio::sync::Semaphore` to limit concurrency while still eventually invoking all handlers, or at minimum log a warning when handlers are being dropped.

crates/core/data/client2.rs-269-342 (1)
269-342: 🛠️ Refactor suggestion | 🟠 Major

Two `impl SharedState` blocks with duplicate/conflicting methods. There are two impl blocks for `SharedState<T>`:

- Block 1 (lines 269–342): `add_event_handler` / `remove_event_handler` / `get_connection_state` / `update_connection_state` / `broadcast_event`
- Block 2 (lines 380–433): `new` / `add_handler` / `remove_handler` / `get_stats` / `update_stats` / `get_config` / `update_config`

`add_event_handler` ≡ `add_handler`, and `remove_event_handler` ≡ `remove_handler` — these are near-identical duplicates. Additionally, `get_stats`/`update_stats` reference a non-existent `self.stats` field while `get_connection_state`/`update_connection_state` use the actual `self.connection_state` field. Consolidate into a single impl block with one set of methods.

Also applies to: 380-433
crates/core/data/client2.rs-352-378 (1)
352-378: 🛠️ Refactor suggestion | 🟠 Major

`WebSocketConfig` is a subset of `WebSocketClientConfig` — consolidate. `WebSocketConfig` duplicates 6 of the 13 fields from `WebSocketClientConfig`. This duplication directly caused the type mismatch bug above and will continue to be a maintenance footgun. Consider removing `WebSocketConfig` and using `WebSocketClientConfig` throughout, or embedding `WebSocketConfig` as a field within `WebSocketClientConfig`.

crates/core/data/client2.rs-866-877 (1)
866-877: ⚠️ Potential issue | 🟠 Major

Dropping the losing `JoinHandle` in `select!` leaks the task. In Tokio, dropping a `JoinHandle` does not abort the underlying task — it continues running detached. When one branch of this `select!` completes, the other task will keep running indefinitely. Abort the remaining task explicitly:

Proposed fix

```diff
-    tokio::select! {
-        result = sender_task => {
-            result??;
-        }
-        result = receiver_task => {
-            result??;
-        }
-    }
+    tokio::select! {
+        result = sender_task => {
+            receiver_task.abort();
+            result??;
+        }
+        result = receiver_task => {
+            sender_task.abort();
+            result??;
+        }
+    }
```
304-341: ⚠️ Potential issue | 🟠 Major

Read lock on `event_handlers` held across `join_all` await — can block writes for up to 5 seconds. The `handlers` read guard (line 305) is held while spawning tasks and awaiting `join_all` (line 337), which has a 5-second timeout. Any concurrent call to `add_handler`/`remove_handler` will be blocked for the entire duration, and nested handler calls that attempt to modify the registry will deadlock. Clone the handler list into a local `Vec` and drop the lock before spawning:

Proposed fix

```diff
 pub async fn broadcast_event(&self, event: WebSocketEvent<T::Transfer>) {
-    let handlers = self.event_handlers.read().await;
     let config = self.get_config().await;
+    let handlers: Vec<_> = {
+        let guard = self.event_handlers.read().await;
+        guard.clone()
+    };
     if handlers.is_empty() {
         return;
     }
```

BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/tracing.py-155-157 (1)
155-157: ⚠️ Potential issue | 🟠 Major

Update the `.pyi` stub to match the Rust function signature. The `.pyi` file at line 155 declares `def start_tracing(level: str = "info") -> None`, but the actual Rust implementation takes 4 parameters: `path`, `level`, `terminal`, and `layers`. The call at line 157 passes all 4 arguments correctly, matching the Rust signature. Update the stub to: `def start_tracing(path: str, level: str, terminal: bool, layers: List[StreamLogsLayer]) -> None: ...`

crates/core/data/client_enhanced.rs-856-891 (1)
856-891: ⚠️ Potential issue | 🟠 Major

Potential deadlock: write lock on `connection_state` held across `event_manager.emit()`. The `state` write guard (line 876) is held until the end of the function. The subsequent `emit()` call (line 882) invokes registered event handlers synchronously. If any handler attempts to read `connection_state` (e.g., via `is_connected()` or `get_connection_stats()`), it will deadlock because tokio's `RwLock` is not reentrant. Drop the guard before emitting:

Proposed fix

```diff
 pub async fn disconnect(&self) -> BinaryOptionsResult<()> {
     info!("Disconnecting WebSocket client...");
     // Stop keep-alive manager
     if let Some(keep_alive_manager) = self.keep_alive.lock().await.as_mut() {
         keep_alive_manager.stop().await;
     }
     // Stop reconnection supervisor
     if let Some(task) = self.reconnect_task.lock().await.take() {
         task.abort();
     }
     // Cancel all background tasks
     let mut tasks = self.background_tasks.lock().await;
     for task in tasks.drain(..) {
         task.abort();
     }
+    drop(tasks);
     // Update connection state
-    let mut state = self.connection_state.write().await;
-    state.is_connected = false;
-    state.connection_start_time = None;
-    state.current_region = None;
+    {
+        let mut state = self.connection_state.write().await;
+        state.is_connected = false;
+        state.connection_start_time = None;
+        state.current_region = None;
+    }
     // Emit disconnected event
     self.event_manager
         .emit(Event::new(
             EventType::Disconnected,
             serde_json::json!({"reason": "manual_disconnect"}),
         ))
         .await?;
     info!("WebSocket client disconnected successfully");
     Ok(())
 }
```

crates/binary_options_tools/src/pocketoption/utils.rs-99-99 (1)
99-99: ⚠️ Potential issue | 🟠 Major

Full IP address logged at `warn` level — PII exposure risk. Line 99 logs the unredacted `ip_address` parameter at warn level. Unlike the debug-level redacted logging in ssid.rs, this warn message will appear in production logs by default. Consider redacting the IP here as well.

🛡️ Proposed fix — redact IP in warning

```diff
-    tracing::warn!(target: "PocketUtils", "All geo providers failed for IP {}. Using fallback location.", ip_address);
+    tracing::warn!(target: "PocketUtils", "All geo providers failed for IP lookup. Using fallback location.");
```

crates/binary_options_tools/src/pocketoption/candle.rs-472-474 (1)
472-474: ⚠️ Potential issue | 🟠 Major

Panic on zero duration: `%` by zero if `duration` is `Duration::ZERO`. `duration.as_secs()` returns `0` for a zero duration, causing `86400 % 0` to panic at runtime. Add a guard before the modulo check.

🐛 Proposed fix

```diff
 pub fn time_aligned(duration: Duration) -> PocketResult<Self> {
+    if duration.as_secs() == 0 {
+        return Err(PocketError::General(
+            "Duration must be greater than zero for time-aligned subscription".to_string(),
+        ));
+    }
     if 24 * 60 * 60 % duration.as_secs() != 0 {
```
173-196: ⚠️ Potential issue | 🟠 Major

`get_candles_advanced` has no timeout and no mismatch retry limit. Unlike historical_data.rs, which has `HISTORICAL_DATA_TIMEOUT` and `MAX_MISMATCH_RETRIES`, this method loops forever awaiting a matching response. If the server never responds or keeps sending mismatched responses, the caller blocks indefinitely.

Suggested fix: add timeout and mismatch guard

```diff
 pub async fn get_candles_advanced(
     &self,
     asset: impl ToString,
     period: i64,
     time: i64,
     offset: i64,
 ) -> PocketResult<Vec<Candle>> {
     info!(target: "GetCandlesHandle", "Requesting candles for asset: {}, period: {}, time: {}, offset: {}", asset.to_string(), period, time, offset);
     let req_id = Uuid::new_v4();
     self.sender
         .send(Command::GetCandles {
             asset: asset.to_string(),
             period,
             time,
             offset,
             req_id,
         })
         .await
         .map_err(CoreError::from)?;
+    let timeout_duration = std::time::Duration::from_secs(30);
+    let mut mismatch_count = 0;
+    const MAX_MISMATCHES: usize = 5;
     loop {
-        match self.receiver.recv().await {
-            Ok(CommandResponse::CandlesResult {
+        match tokio::time::timeout(timeout_duration, self.receiver.recv()).await {
+            Ok(Ok(CommandResponse::CandlesResult {
                 req_id: response_id,
                 candles,
-            }) => {
+            })) => {
                 if req_id == response_id {
                     return Ok(candles);
                 }
-                // Continue waiting for the correct response
+                mismatch_count += 1;
+                if mismatch_count >= MAX_MISMATCHES {
+                    return Err(PocketError::General("Exceeded mismatch retries for get_candles".into()));
+                }
             }
             // ... similar for Error variant
+            Err(_) => return Err(PocketError::Timeout {
+                task: "get_candles".to_string(),
+                context: format!("period: {}, offset: {}", period, offset),
+                duration: timeout_duration,
+            }),
```
9-15: ⚠️ Potential issue | 🟠 Major

`RunnerCommand::Reconnect` sent by `reconnect()` is silently discarded.

The public `reconnect()` method (line 214) sends `RunnerCommand::Reconnect`, but in the main `select!` loop (lines 456–507 in `client.rs`), only the `Disconnect` and `Shutdown` variants are handled explicitly. The `Reconnect` and `Connect` variants fall through to the catch-all `_ => {}` arm, meaning the command is consumed without effect.

The actual reconnection logic exists separately in the outer connection loop (lines 309–350), controlled by the `is_hard_disconnect` flag. A user calling `client.reconnect()` will not trigger an immediate reconnection attempt; the method silently succeeds but the sent command is ignored.

Either handle `RunnerCommand::Reconnect` and `Connect` in the match statement, or clarify the API contract to prevent users from expecting immediate reconnection.
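The failure mode this comment describes is general: a catch-all arm quietly consumes variants the author forgot about. A minimal Python sketch (not the crate's API; `RunnerCommand` and the handler list are stand-ins for the Rust enum) shows why exhaustive dispatch is safer than `_ => {}`:

```python
from enum import Enum, auto

class RunnerCommand(Enum):
    CONNECT = auto()
    RECONNECT = auto()
    DISCONNECT = auto()
    SHUTDOWN = auto()

def dispatch(command: RunnerCommand, handled: list) -> None:
    # Handle every variant explicitly; with a catch-all arm, a Reconnect
    # command would be consumed silently and never reach the outer loop.
    if command is RunnerCommand.DISCONNECT:
        handled.append("disconnect")
    elif command is RunnerCommand.SHUTDOWN:
        handled.append("shutdown")
    elif command in (RunnerCommand.RECONNECT, RunnerCommand.CONNECT):
        handled.append("reconnect")  # signal the outer connection loop
    else:
        raise ValueError(f"unhandled command: {command}")
```

Failing loudly on an unknown variant turns a silent no-op into a diagnosable bug, which is the substance of the suggested fix.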
🟡 Minor comments (26)
BinaryOptionsToolsV2/src/error.rs-15-15 (1)
15-15: ⚠️ Potential issue | 🟡 Minor

Typo: "descerializing" → "deserializing"
```diff
- #[error("Error descerializing data, {0}")]
+ #[error("Error deserializing data, {0}")]
```

BinaryOptionsToolsV2/pyproject.toml-55-57 (1)
55-57: ⚠️ Potential issue | 🟡 Minor

Relative `testpaths` is fragile across different working directories.

`testpaths = ["../tests"]` resolves relative to the current working directory at pytest invocation, not relative to `pyproject.toml`. While this works when pytest is run from `BinaryOptionsToolsV2/`, it will fail if invoked from the repo root or other directories. CI avoids this issue by explicitly passing the test path (`pytest ../../tests`) as a command-line argument.

To make this more robust, either:

- Use a path relative to the pyproject.toml location: `testpaths = ["tests"]` with a conftest.py at the repo root, or
- Document the expected invocation directory clearly in CI configuration and project documentation.
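The working-directory pitfall is easy to demonstrate with `pathlib`. A sketch (the paths are hypothetical) of anchoring a relative path to the config file's directory rather than the process CWD:

```python
from pathlib import Path

def resolve_testpath(config_file: str, testpath: str) -> Path:
    # Anchoring to the config file's directory gives the same result
    # regardless of the directory pytest happens to be invoked from.
    return (Path(config_file).parent / testpath).resolve()
```

For example, `resolve_testpath("/repo/BinaryOptionsToolsV2/pyproject.toml", "../tests")` always yields `/repo/tests`, while pytest's own `testpaths` resolution depends on the invocation directory.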
ForLLMsAndAgents/guidelines.md-14-17 (1)
14-17: ⚠️ Potential issue | 🟡 Minor

Document the intentional deviation from PEP 8 line length.
Line 15 specifies a 120-character maximum, which deviates from PEP 8's recommended 79 characters (with flexibility up to 99 for some cases). While this is a valid project-specific choice, consider adding a brief note acknowledging this deviation to avoid confusion for contributors expecting strict PEP 8 compliance.
📝 Suggested clarification
```diff
 - **Formatting**: Follow [PEP 8](https://www.python.org/dev/peps/pep-0008/).
-- **Line Length**: Maximum of 120 characters (enforced by `ruff`).
+- **Line Length**: Maximum of 120 characters (enforced by `ruff`). Note: This deviates from PEP 8's default of 79 characters.
 - **Typing**: Use type hints for all function signatures and complex variables.
```

ForLLMsAndAgents/guidelines.md-21-26 (1)
21-26: ⚠️ Potential issue | 🟡 Minor

Fix formatting inconsistency in commit convention example.
Lines 23 and 25 have leading spaces that create formatting inconsistencies in the commit structure example. Remove these for cleaner presentation.
🎨 Proposed formatting fix
```diff
 - **Format**: [Subject Line]
- [Body]
+[Body]
- [Footer/Issues]
+[Footer/Issues]
```

BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/validator.py-29-29 (1)
29-29: ⚠️ Potential issue | 🟡 Minor

Docstring regex examples have double-escaped patterns in raw strings.

`r"[A-Z]\\w+"` in a raw string produces the literal string `[A-Z]\\w+` (two backslashes), not `[A-Z]\w+`. The regex engine will match a literal backslash followed by `w`, not the `\w` character class. Same issue on line 54 with `r"^\\d+"`.

Use either `r"\w"` (raw string, single backslash) or `"\\w"` (regular string, escaped), but not both.

📝 Proposed fix
```diff
- v1 = Validator.regex(r"[A-Z]\\w+")  # Starts with capital letter
+ v1 = Validator.regex(r"[A-Z]\w+")  # Starts with capital letter
```

```diff
- validator = Validator.regex(r"^\\d+")
+ validator = Validator.regex(r"^\d+")
```

CHANGELOG.md-158-161 (1)
158-161: ⚠️ Potential issue | 🟡 Minor

Missing reference link for `[0.2.6]`.

The heading on line 22 uses `[0.2.6]` but no corresponding link definition exists in the footer. The other versions all have their links defined here.

Proposed fix
```diff
+[0.2.6]: https://github.com/ChipaDevTeam/BinaryOptionsTools-v2/releases/tag/BinaryOptionsToolsV2-0.2.6
 [0.2.5]: https://github.com/ChipaDevTeam/BinaryOptionsTools-v2/releases/tag/BinaryOptionsToolsV2-0.2.5
```

.github/workflows/CI.yml-155-169 (1)
155-169: ⚠️ Potential issue | 🟡 Minor

Consider pinning the Alpine version for reproducibility and stability.

The `alpine:latest` image on line 159 is unpinned, which can introduce non-deterministic behavior if Alpine's base image changes. While `python3 -m venv` will work correctly with `py3-pip` installed (since `py3-pip` depends on `python3`, which includes the bundled `ensurepip` module), pinning to a specific Alpine version like `alpine:3.20` is a best practice for CI stability.

```diff
- docker run --rm -v ${{ github.workspace }}:/io -w /io/BinaryOptionsToolsV2 alpine:latest sh -c '
+ docker run --rm -v ${{ github.workspace }}:/io -w /io/BinaryOptionsToolsV2 alpine:3.20 sh -c '
```

crates/binary_options_tools/src/pocketoption/modules/keep_alive.rs-1-16 (1)
1-16: ⚠️ Potential issue | 🟡 Minor

`SID` constant is unused in `run()`, creating a routing/processing mismatch.

`SID` (`40{"sid":`) is used in `InitRule::call` (line 214) for routing, but `run()` uses the broader `text.starts_with("40")` (line 90). This means `rule()` will only route `40{"sid":...}` messages to InitModule, yet `run()` is written to handle any `40`-prefixed message. If the server ever sends a plain `40` (a valid Socket.IO connect ack without SID), `rule()` won't route it, and the authentication flow stalls.

Consider aligning the routing rule with what `run()` expects:

Proposed fix in InitRule::call

```diff
 if text.starts_with(SID_BASE)
-    || text.starts_with(SID)
+    || text.starts_with("40")
     || text.as_str() == "41"
     || text.as_str() == "2"
```

crates/binary_options_tools/src/pocketoption/modules/keep_alive.rs-248-253 (1)
248-253: ⚠️ Potential issue | 🟡 Minor

Stale `valid` flag can cause the binary part of `successauth` to be missed.

The fallback at lines 249–252 consumes the `valid` flag for any text message that doesn't match the protocol patterns above. If any unexpected text message (e.g., a server notification without `[`) arrives between the `successauth` placeholder text and its binary counterpart, this block will consume `valid` and return `true` for the wrong message. The subsequent binary part will then not be routed to InitModule, and authentication won't complete.

This is an edge-case ordering issue, but it could cause silent auth failures that are very hard to diagnose.

Suggested fix: only consume `valid` for binary messages

```diff
             }
         }
-
-        if self.valid.load(Ordering::SeqCst) {
-            self.valid.store(false, Ordering::SeqCst);
-            return true;
-        }
         false
     }
     Message::Binary(_) => {
```

This way, only `Message::Binary` (lines 255–261) can consume the flag, which is the intended next message type after a binary-placeholder `successauth`.

crates/binary_options_tools/src/pocketoption/modules/keep_alive.rs-47-189 (1)
47-189: ⚠️ Potential issue | 🟡 Minor

Add clarifying comment about InitModule's persistent loop design.

The `run()` loop correctly persists beyond successful authentication at line 156 to continue handling protocol-level keep-alive messages (e.g., Socket.IO pings at lines 132–134). However, this intent is not documented in the code. Since the module name and structure suggest dual-purpose behavior (initialization and keep-alive), add a comment above or within the authentication block (around line 155) clarifying that the loop intentionally persists to maintain the WebSocket connection and handle ongoing protocol messages:

```rust
// Module continues running to handle protocol keep-alive (Socket.IO pings/pongs)
// and to route subsequent data messages to other modules via their rules
authenticated = true;
```

This prevents future maintainers from incorrectly refactoring away the persistent loop.
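To make the persistence intent concrete, here is a minimal Python sketch of one iteration of such a loop (the `40`/`2`/`3` frames follow Engine.IO/Socket.IO conventions; the `state` dict is illustrative, not the module's actual API):

```python
from typing import Optional

def keep_alive_step(message: str, state: dict) -> Optional[str]:
    """One iteration of a read loop that outlives authentication.

    Returns a frame to send back, or None when no reply is needed.
    """
    if message == "2":              # Engine.IO ping from the server...
        return "3"                  # ...answer with a pong to keep the socket open
    if message.startswith("40"):    # Socket.IO connect ack
        state["authenticated"] = True
        return None                 # keep looping: heartbeats still need answers
    return None                     # other traffic is routed to other modules
```

The key point mirrored from the review: authentication completing does not end the loop, because the same loop must keep answering heartbeats for the lifetime of the connection.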
crates/core/data/client2.rs-1-3 (1)
1-3: ⚠️ Potential issue | 🟡 Minor

Unused import: `f32::consts::E`.

`E` from `f32::consts` is not referenced anywhere in this file. Remove it to avoid a compiler warning.

Proposed fix

```diff
 use std::{
-    collections::HashMap, f32::consts::E, ops::Deref, sync::Arc, time::{Duration, Instant}
+    collections::HashMap, ops::Deref, sync::Arc, time::{Duration, Instant}
 };
```

BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/pocketoption/synchronous.py-454-458 (1)
454-458: ⚠️ Potential issue | 🟡 Minor

Docstring uses `await` in a synchronous class.

Line 456 shows `await client.connect()` in the example, but this is `PocketOption` (sync). Should be `client.connect()`.

Fix

```diff
- await client.connect()
+ client.connect()
```

BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/pocketoption/asynchronous.py-193-205 (1)
193-205: ⚠️ Potential issue | 🟡 Minor

SSID sanitization looks reasonable; minor note on credential logging.

The regex normalization on line 195 is a good defensive measure for common shell-stripping issues. However, line 203 logs the first 20 characters of the SSID. Depending on the SSID format, this could include sensitive session token data. Consider logging only that the SSID "does not start with '42['" without including the actual prefix content.

Suggested change

```diff
- self.logger.warn(f"SSID does not start with '42[': {ssid[:20]}...")
+ self.logger.warn("SSID does not start with expected '42[' prefix")
```

BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/tracing.py-132-159 (1)
132-159: ⚠️ Potential issue | 🟡 Minor

Docstring claims the function raises but the implementation swallows all exceptions.

Lines 144–145 document `Raises: Exception`, but the actual behavior (lines 158–159) catches every exception and only prints it. The caller has no signal that logging initialization failed. Also, the `layers` parameter is missing from the docstring's Args section.

If silencing errors is intentional (to avoid crashing on logging failures), remove the `Raises` section from the docstring and consider returning a bool or logging a warning via `warnings.warn`. If errors should propagate, remove the bare `except`.

Suggested docstring fix (if swallowing is intentional)

```diff
     """
     Initialize logging system for the application.

     Args:
         path (str): Path where log files will be stored.
         level (str): Logging level (default is "DEBUG").
         terminal (bool): Whether to display logs in the terminal (default is True).
+        layers (list): Additional log layers to configure (default is empty list).

     Returns:
         None
-
-    Raises:
-        Exception: If there's an error starting the logging system.
+
+    Note:
+        Errors during initialization are printed to stdout but not raised.
     """
```

crates/core/data/client_enhanced.rs-839-853 (1)
839-853: ⚠️ Potential issue | 🟡 Minor

`messages_sent` counter incremented before the message is actually enqueued.

The counter is bumped (line 843) before `message_sender.send()` (line 847). If the send fails, the counter still reflects the failed message, making stats inaccurate.

Proposed fix: increment after successful send

```diff
 pub async fn send_message(&self, message: Message) -> BinaryOptionsResult<()> {
-    // Update stats
-    {
-        let mut state = self.connection_state.write().await;
-        state.messages_sent += 1;
-    }
-
     // Send through message batcher or directly
     self.message_sender
         .send(message)
         .await
         .map_err(|e| BinaryOptionsToolsError::ChannelRequestSendingError(e.to_string()))?;

+    // Update stats after successful enqueue
+    {
+        let mut state = self.connection_state.write().await;
+        state.messages_sent += 1;
+    }
+
     Ok(())
 }
```

crates/core/data/client_enhanced.rs-485-488 (1)
485-488: ⚠️ Potential issue | 🟡 Minor

Misleading error variant when all connection attempts fail.

Returning `tungstenite::Error::ConnectionClosed` when no connection was ever established is semantically incorrect — this error typically means an existing connection was closed. Consider using a more descriptive error (e.g., a custom variant or `Error::Url`/`Error::Io`) or returning the last encountered connection error instead.

Suggested approach: propagate the last error

```diff
+    let mut last_error = None;
     for url in &self.connection_urls {
         match self.try_connect_single(url).await {
             Ok(websocket) => {
                 // ... existing success handling ...
             }
             Err(e) => {
                 warn!("Failed to connect to {}: {}", url, e);
+                last_error = Some(e);
                 continue;
             }
         }
     }

-    Err(BinaryOptionsToolsError::WebsocketConnectionError(
-        tokio_tungstenite::tungstenite::Error::ConnectionClosed,
-    ))
+    Err(last_error.unwrap_or_else(|| BinaryOptionsToolsError::WebsocketConnectionError(
+        tokio_tungstenite::tungstenite::Error::ConnectionClosed,
+    )))
```

crates/binary_options_tools/src/expertoptions/types.rs-53-55 (1)
53-55: ⚠️ Potential issue | 🟡 Minor

`get_symbol()` fallback to `name` could cause silent HashMap key collisions in `Assets::new`.

If multiple assets have `symbol: None` and share the same `name`, or if one asset's `symbol` matches another's `name`, the `HashMap::from_iter` on lines 60–65 will silently discard all but the last entry. Consider whether this is an expected scenario from the ExpertOptions API, and if not, adding dedup detection or logging a warning.

crates/binary_options_tools/src/expertoptions/connect.rs-41-41 (1)
41-41: ⚠️ Potential issue | 🟡 Minor

Wrong log target: `"PocketConnect"` should be `"ExpertConnect"` (or `"ExpertConnectThread"`).

This logs a successful ExpertOptions connection under the `"PocketConnect"` target, which is misleading for anyone filtering logs by target. Line 44 has the same pre-existing issue.

Proposed fix

```diff
- debug!(target: "PocketConnect", "Successfully connected to ExpertOptions");
+ debug!(target: "ExpertConnect", "Successfully connected to ExpertOptions");
```

crates/binary_options_tools/src/framework/virtual_market.rs-220-241 (1)
220-241: ⚠️ Potential issue | 🟡 Minor

Potential TOCTOU: trade removal races with a concurrent `result()` call.

At lines 236–238, the trade is removed from `open_trades` if `current_time >= expiry_time`, but the `current_time` snapshot is taken inside the same lock scope. If two concurrent `result()` calls execute for the same `trade_id`, the second call will fail with "Trade not found" because the first already removed it. This is probably acceptable for a virtual market, but worth noting.

crates/binary_options_tools/src/pocketoption/candle.rs-115-131 (1)
115-131: ⚠️ Potential issue | 🟡 Minor

Silent zero defaults on malformed tick data.

`as_f64().unwrap_or_default()` silently maps unparseable values to `0` (timestamp) or `0.0` (price), which could produce nonsensical candles. Consider logging a warning or filtering out zero-timestamp ticks downstream if data integrity matters.

crates/binary_options_tools/src/pocketoption/utils.rs-59-67 (1)
59-67: ⚠️ Potential issue | 🟡 Minor

Low collision resistance in `get_index`.

The index is `"{unix_seconds}{2-digit random}"`, yielding only ~90 distinct values per second. If multiple trades or messages are initiated concurrently within the same second, collisions are likely. If uniqueness is required, consider using a wider random range or an atomic counter.

crates/binary_options_tools/src/pocketoption/modules/trades.rs-66-95 (1)
66-95: ⚠️ Potential issue | 🟡 Minor

`trade()` can block indefinitely if the server never responds.

The `rx.await` on line 91 has no timeout. If the server never sends a `successopenOrder` or `failopenOrder` response, the caller will wait forever. Unlike `DealsHandle::check_result_with_timeout`, there's no timeout variant here. Consider adding an internal timeout or documenting that callers must wrap this in `tokio::time::timeout`.

crates/binary_options_tools/src/pocketoption/modules/subscriptions.rs-491-500 (1)
491-500: ⚠️ Potential issue | 🟡 Minor

Severe formatting issue in the `History` command handler.

Lines 491–500 have wildly inconsistent indentation (deeply indented `else if` and `else` branches). This appears to be an accidental formatting artifact. Run `rustfmt` to fix.

crates/binary_options_tools/src/pocketoption/modules/historical_data.rs-354-357 (1)
354-357: ⚠️ Potential issue | 🟡 Minor

Binary placeholder detection doesn't track expected state.

When `is_binary_placeholder` is true, the code `continue`s without recording that the next message should be the binary payload. If an unrelated message arrives before the binary payload, it will be processed (or discarded) normally, and the binary payload will be treated as a new message without the context of the pending request. This works because the `pending_request` is preserved (not taken), so the binary payload will still match. However, if the binary payload is preceded by another `ServerResponse::History` for a different asset, it could be incorrectly matched first and consume the pending request.

crates/binary_options_tools/src/pocketoption/types.rs-152-159 (1)
152-159: ⚠️ Potential issue | 🟡 Minor

Zero-price fallback from `Decimal::from_f64_retain` could silently corrupt data.

If `as_f64()` returns `None` (e.g., the JSON value is a string or object), `price_f64` becomes `0.0` and `price` becomes `Decimal(0)`. A zero price on a stream data point is semantically invalid for a trading instrument and could trigger incorrect trading logic downstream. Consider returning a deserialization error instead of defaulting.

Suggested approach

```diff
-    let price_f64 = vec[0][2].as_f64().unwrap_or(0.0);
-    let price = Decimal::from_f64_retain(price_f64).unwrap_or_default();
+    let price_f64 = vec[0][2]
+        .as_f64()
+        .ok_or_else(|| serde::de::Error::custom("Invalid price value in StreamData"))?;
+    let price = Decimal::from_f64_retain(price_f64)
+        .ok_or_else(|| serde::de::Error::custom("Failed to convert price to Decimal"))?;
```

crates/binary_options_tools/src/lib.rs-24-27 (1)
24-27: ⚠️ Potential issue | 🟡 Minor

Unclosed code fence in crate-level doc comment.

Line 27 opens a `` ```text `` block, but the doc comments end immediately after — there's no closing `` ``` ``. This will produce a rustdoc warning and render the remaining crate documentation incorrectly. The example comment lines (24–26) likely belong inside the fenced block.

Proposed fix

```diff
-//! // Use the streaming utilities for real-time data processing
-//! // Serialize and deserialize data with the provided macros
-//! // Apply timeouts to async operations
-//! ```text
+//! ```text
+//! // Use the streaming utilities for real-time data processing
+//! // Serialize and deserialize data with the provided macros
+//! // Apply timeouts to async operations
+//! ```
```
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/pocketoption/synchronous.py`:
- Around line 224-237: The context manager __exit__ currently calls shutdown()
and thus leaks the event loop; change __exit__ to call self.close() so both
shutdown and loop cleanup run. In close(), remove the dead conditional that
calls self.loop.stop() after shutdown (since shutdown uses run_until_complete
and the loop will not be running), and simply perform shutdown (if not already
done) then safely close the loop (self.loop.close()), guarding if needed for
None or already-closed state; reference methods: __exit__, close, shutdown and
attribute self.loop.
- Enhanced connection management with improved statistics tracking and connection pooling.
- Updated `BinaryOptionsToolsError::WebsocketConnectionError` to box the underlying error type.
- Introduced breaking changes in the WebSocket event system, including unification of event handling.
- Added explicit request ID registration in the ResponseRouter for improved response handling.
- Updated Python type hints to reflect correct return types for trading and data methods.
- Increased pytest timeout to 60 seconds for better test stability.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (9)
crates/core/data/connection.rs (2)
281-288: ⚠️ Potential issue | 🟠 Major

`Clone` creates a fresh empty pool, silently discarding all accumulated stats.

The `Clone` impl constructs a brand-new `ConnectionPool` (with empty `connections` and `stats` maps) instead of sharing the `Arc<Mutex<…>>` state. This means every call to `self.clone()` on line 242 gives the spawned task a manager whose `pool.update_stats(…)` writes go nowhere useful — the stats are thrown away when the task finishes. Consequently, `get_best_url()` on the original manager will never see data collected during parallel connection attempts.

If the intent is for clones to share the pool, derive or implement `Clone` by cloning the `Arc`s:

Proposed fix

```diff
 impl Clone for EnhancedConnectionManager {
     fn clone(&self) -> Self {
         Self {
-            pool: ConnectionPool::new(self.pool.max_connections),
+            pool: self.pool.clone(),
             connect_timeout: self.connect_timeout,
             ssl_verify: self.ssl_verify,
         }
     }
 }
```

`ConnectionPool` already has `Arc`-wrapped fields, so deriving `Clone` (or writing the above) will share the underlying maps. You may also need to derive or implement `Clone` on `ConnectionPool`:

```rust
impl Clone for ConnectionPool {
    fn clone(&self) -> Self {
        Self {
            connections: Arc::clone(&self.connections),
            stats: Arc::clone(&self.stats),
            max_connections: self.max_connections,
        }
    }
}
```
181-185: ⚠️ Potential issue | 🟡 Minor

SSL verification bypass is a no-op — both branches produce `Connector::default()`.

The `ssl_verify` flag has no effect. If disabling verification isn't needed yet, consider removing the flag to avoid giving callers a false sense of configurability, or track this TODO in an issue.

Would you like me to open an issue to track implementing the SSL verification bypass?
crates/binary_options_tools/src/pocketoption/modules/subscriptions.rs (3)
747-761: ⚠️ Potential issue | 🟡 Minor

Orphaned router entry when `sender` is `None`.

`self.router.register(command_id)` at line 750 inserts a `oneshot::Sender` into the router's `pending` map. If `self.sender` is `None` (line 751), the method returns early at line 760 without ever consuming or removing that entry, leaking it.

Move the registration after confirming the sender exists, or guard it:

Proposed fix

```diff
 pub async fn unsubscribe(mut self) -> PocketResult<()> {
     let command_id = Uuid::new_v4();
-    let receiver = self.router.register(command_id).await;
     if let Some(sender) = self.sender.take() {
+        let receiver = self.router.register(command_id).await;
         sender
             .send(Command::Unsubscribe {
                 asset: self.asset.clone(),
```
502-521: 🛠️ Refactor suggestion | 🟠 Major

Severely broken indentation in the `History` command handler.

Lines 512–521 are indented far deeper than the surrounding code (~48 spaces vs ~24). This appears to be an accidental formatting artifact and hurts readability. Please reformat this block to match the surrounding indentation level.
852-913: ⚠️ Potential issue | 🟠 Major

`Drop` unsubscribes on every clone, which may terminate sibling streams.

`SubscriptionStream` implements `Clone` (line 853) and `Drop` sends an `Unsubscribe` command (line 903). If a stream is cloned and one clone is dropped (e.g., moved into a closure, temporary variable, etc.), the `Drop` fires an unsubscribe for the asset, tearing down the subscription for all remaining clones that share the same asset.

Consider tracking ownership more carefully — e.g., use an `Arc`-based refcount and only unsubscribe when the last reference is dropped, or remove the automatic unsubscribe from `Drop` entirely and require explicit calls to `unsubscribe()`.

BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/BinaryOptionsToolsV2.pyi (1)
59-115: ⚠️ Potential issue | 🟡 Minor

`RawPocketOption` stub is missing `shutdown`, `get_pending_deals`, and `active_assets` methods.

The async wrapper (`asynchronous.py` lines 461, 549, 745) calls `await self.client.shutdown()`, `await self.client.get_pending_deals()`, and `await self.client.active_assets()` on the raw client. These methods are exported from the Rust implementation (`pocket_client.rs`) but are absent from this type stub, causing type-checker errors.

Add missing method signatures

```diff
     async def unsubscribe(self, asset: str) -> None: ...
     async def create_raw_handler(self, validator: RawValidator, keep_alive: Optional[str]) -> RawHandler: ...
+    async def shutdown(self) -> None: ...
+    async def get_pending_deals(self) -> str: ...
+    async def active_assets(self) -> str: ...
```

crates/core/data/client_enhanced.rs (3)
527-550: ⚠️ Potential issue | 🔴 Critical

Remove the unused `try_connect_single` method (lines 527–550).

This method is dead code. The `connect()` method uses `self.connector.connect()` directly instead of calling this URL-based wrapper, and no other code in the repository calls `try_connect_single` from this file. Remove it to reduce clutter.
518-521: ⚠️ Potential issue | 🔴 Critical

Type mismatch: `KeepAliveManager::start` expects `Sender<Message>` but receives `SenderMessage`.

The call at line 520 passes `self.message_sender.clone()` (which returns `SenderMessage`) to `KeepAliveManager::start`, which expects `Sender<Message>`. `SenderMessage` does not implement `Deref`, `Into`, or any conversion trait that would make this compile. Either extract the inner `sender` field or modify `KeepAliveManager::start` to accept `SenderMessage`.
465-506: ⚠️ Potential issue | 🔴 Critical

URL-based region fallback is a no-op: `connector.connect()` never receives the target URL.

The loop iterates `connection_urls` but `connector.connect(...)` (line 469) only receives `credentials` and `config` — the `url` is never passed. Every iteration attempts an identical connection, defeating the region-fallback design.

The codebase already has the proper mechanism: `self.connection_manager.connect(&self.connection_urls)` handles URL-based fallback correctly (see `try_connect_single()` at line 525 for reference). Replace the ineffective loop with a direct call to the connection manager:

Suggested fix

```diff
-    // Try each URL in sequence (like Python)
-    for url in &self.connection_urls {
-        // First try authenticated connect using the connector
-        match self
-            .connector
-            .connect::<T, Transfer, U>(self.credentials.clone(), &self.config)
-            .await
-        {
-            Ok(websocket) => {
+    // Use connection manager to handle URL-based fallback
+    let (websocket, used_url) = self
+        .connection_manager
+        .connect(&self.connection_urls)
+        .await?;
+
+    {
         info!(
             "Connected and authenticated to region: {}",
-            url.host_str().unwrap_or("unknown")
+            Url::parse(&used_url)?.host_str().unwrap_or("unknown")
         );

         // Update connection state
         let mut state = self.connection_state.write().await;
         state.is_connected = true;
         state.successful_connections += 1;
         state.connection_start_time = Some(Instant::now());
-        state.current_region = url.host_str().map(|s| s.to_string());
+        state.current_region = Url::parse(&used_url)?.host_str().map(|s| s.to_string());
         state.reconnect_attempts = 0;
         drop(state);

         // Emit connected event
         self.event_manager
             .emit(Event::new(
                 EventType::Connected,
-                serde_json::json!({"region": url.host_str()}),
+                serde_json::json!({"region": Url::parse(&used_url)?.host_str()}),
             ))
             .await?;

         // Start connection handler
         self.start_connection_handler(websocket).await?;

         return Ok(());
-            }
-            Err(e) => {
-                warn!(
-                    "Failed to connect/authenticate to {}: {}, trying next URL",
-                    url, e
-                );
-                continue;
-            }
-        }
     }
-
-    Err(BinaryOptionsToolsError::WebsocketConnectionError(Box::new(
-        tokio_tungstenite::tungstenite::Error::ConnectionClosed,
-    )))
```
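The pattern behind both this fix and the earlier "propagate the last error" suggestion is small enough to sketch standalone. An illustrative Python version (function names are hypothetical, not this crate's API): iterate the candidate URLs, return the first success, and if everything fails, surface the last real error rather than a fabricated "connection closed":

```python
def connect_with_fallback(urls, try_connect):
    """Return the first successful connection; if every URL fails,
    re-raise the last real error instead of a fabricated one."""
    last_error = None
    for url in urls:
        try:
            return try_connect(url)  # the URL must actually reach the connector
        except ConnectionError as e:
            last_error = e           # remember why this candidate failed
    if last_error is None:
        raise ConnectionError("no URLs configured")
    raise last_error
```

Note that `try_connect` receives `url` on every iteration; the bug flagged above is precisely that the Rust loop iterated URLs without ever passing the current one to the connector.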
🤖 Fix all issues with AI agents
In `@crates/binary_options_tools/src/pocketoption/modules/subscriptions.rs`:
- Around line 805-810: In process_update, avoid silently converting Decimal→f64
with price.to_f64().unwrap_or_default() — if conversion fails return a
PocketResult::Err instead of using 0.0; change the conversion to check
price.to_f64().ok_or_else(...) and propagate a descriptive error (including the
asset from self.asset() and timestamp) so the subsequent call to
self.sub_type.update(&BaseCandle::from((timestamp, price_f64)))? never receives
an invalid zero price.
In `@crates/core/data/client_enhanced.rs`:
- Around line 591-597: The current code uses futures_util::stream::select_all
(fused_streams) to merge RecieverStream::new(message_receiver) and
RecieverStream::new(message_receiver_priority), which yields fair (round-robin)
polling and loses priority semantics; change the loop inside the tokio::spawn
task to poll the two streams explicitly with a biased tokio::select! so
message_receiver_priority is drained first. Concretely: create mutable stream
handles (e.g., let mut normal = stream1.to_stream(); let mut priority =
stream2.to_stream()), then replace the while let Some(Ok(message)) =
fused_streams.next().await loop with a loop using tokio::select! (biased) that
first tries to receive from priority and handles Some(Ok(msg)) there, and only
if priority is empty awaits normal.next() as the fallback; keep the same message
handling code paths for the matched branches.
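The biased-polling idea translates directly to asyncio. A deliberately simple sketch (the busy-wait fallback stands in for a real select over both queues) showing that a pending priority message always wins over a normal one:

```python
import asyncio

async def next_message(priority: asyncio.Queue, normal: asyncio.Queue):
    # Biased poll: always drain the priority queue before the normal one.
    while True:
        try:
            return ("priority", priority.get_nowait())
        except asyncio.QueueEmpty:
            pass
        try:
            return ("normal", normal.get_nowait())
        except asyncio.QueueEmpty:
            await asyncio.sleep(0.01)  # nothing pending; yield and retry

def demo():
    async def run():
        p, n = asyncio.Queue(), asyncio.Queue()
        await n.put("n1")   # normal message arrives first...
        await p.put("p1")   # ...but priority still wins
        return [await next_message(p, n), await next_message(p, n)]
    return asyncio.run(run())
```

This is the property a fair `select_all` merge loses: with round-robin polling, `"n1"` could be delivered before `"p1"` depending on arrival order.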
In `@crates/core/data/client2.rs`:
- Around line 354-363: The remove_handler stub currently does nothing; implement
real removal by identifying handlers by name: add a name() -> &str (or owned
String) to the EventHandler trait (or introduce a thin NamedEventHandler
wrapper/type) so each Arc<dyn EventHandler<WebSocketEvent<T::Transfer>>> exposes
an identifier, then change add_handler/remove_handler to use that identifier; in
remove_handler acquire self.event_handlers.write().await, iterate/retain to drop
handlers whose name() matches the provided &str and return true if any were
removed (false otherwise); update callers such as
WebSocketClient2::remove_event_handler to expect the real bool result.
- Around line 276-292: The loop in client2.rs currently breaks once tasks.len()
>= config.max_concurrent_handlers, silently dropping remaining handlers and
always wrapping events as EventType::Custom("ws_event"), so change the dispatch
in the function that iterates over handlers to (1) remove the break and instead
enforce concurrency with a limiter (e.g., use a tokio::sync::Semaphore, or
collect spawned futures into a FuturesUnordered and drive at most
config.max_concurrent_handlers concurrently) so every handler in handlers is
eventually awaited and executed, and (2) preserve the original event variant
when creating the Event (stop unconditionally using
EventType::Custom("ws_event".to_string()) and pass the correct EventType through
to Event::new); keep using handler.clone(), handler.handle(&e).await and log
errors as before.
In `@crates/core/data/connection.rs`:
- Around line 269-271: The code currently constructs and returns
BinaryOptionsToolsError::WebsocketConnectionError wrapping
tokio_tungstenite::tungstenite::Error::ConnectionClosed when all URL attempts
fail, which is misleading; update the connection loop (the function that
iterates URLs and returns the final Err) to capture and return the last actual
error from the failed attempts instead of fabricating ConnectionClosed, or add
and return a clearer error variant like
BinaryOptionsToolsError::AllConnectionsFailed with a message and optionally the
last error; ensure you replace the hardcoded
tokio_tungstenite::tungstenite::Error::ConnectionClosed instantiation and use
the real last_error variable (or construct the new variant) when calling
WebsocketConnectionError (or the new variant).
🧹 Nitpick comments (11)
crates/core/data/connection.rs (1)
252-267: All individual connection errors are silently discarded in the parallel path.

Both `Ok(Err(_))` and `Err(_)` arms are silently continued. When every attempt fails, the caller gets only the synthetic `ConnectionClosed` error (flagged above) with no context about why connections failed. Consider collecting or logging at least the last error for debuggability.

crates/binary_options_tools/src/framework/virtual_market.rs (2)
62-214: `buy` and `sell` are nearly identical — consider extracting a shared helper.

The two methods differ only in `Action::Call`/`Action::Put` and `command: 0`/`1`. A private method like `open_trade(&self, asset, amount, time, action)` would eliminate ~75 duplicated lines and make future changes (e.g., adding fields to `Deal`) less error-prone. The same applies to the four repeated `Deal` construction blocks across `buy`, `sell`, and `result`.
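The suggested refactor fits in a few lines. A Python miniature (field set trimmed; the names mirror the review's hypothetical `open_trade` helper, not the crate's actual structs):

```python
from dataclasses import dataclass

@dataclass
class Deal:
    asset: str
    amount: float
    duration: int
    action: str   # "call" or "put"
    command: int  # 0 = call, 1 = put

def open_trade(asset: str, amount: float, duration: int, action: str) -> Deal:
    # Single construction path: adding a field to Deal now touches one place.
    return Deal(asset, amount, duration, action, 0 if action == "call" else 1)

def buy(asset, amount, duration):
    return open_trade(asset, amount, duration, "call")

def sell(asset, amount, duration):
    return open_trade(asset, amount, duration, "put")
```

`buy` and `sell` shrink to one-line wrappers, which is the ~75-line saving the comment estimates for the Rust version.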
291-302: Note: `close_price` reflects the price at query time, not at trade expiry.

This is pre-existing behavior unchanged by this PR, but worth documenting: if `result()` is called significantly after expiry, the close price will be whatever the market price happens to be at that moment rather than the price at the actual expiry timestamp. For a virtual/testing market this may be acceptable, but callers should be aware.

crates/binary_options_tools/src/pocketoption/modules/subscriptions.rs (1)
42-57: Spawned router task has no cancellation handle.

The `JoinHandle` from `tokio::spawn` is dropped, so there's no way to await or cancel the background task. This is acceptable while the `AsyncReceiver` channel provides an implicit shutdown signal, but if graceful-shutdown semantics become important later, consider storing the handle (or using a `CancellationToken`).

BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/BinaryOptionsToolsV2.pyi (1)
73-75: Return type change from `Tuple` to `List[str]` for `buy`/`sell` is functionally correct but semantically looser.

`List[str]` works for unpacking at runtime, but since the Rust side always returns exactly 2 elements (trade_id, trade_json), a `Tuple[str, str]` would give type checkers better guarantees and match the unpacking pattern in `asynchronous.py` (`(trade_id, trade) = await self.client.buy(...)`).

crates/core/data/client2.rs (2)
700-726: `write` is now moved into the sender task — previous `Clone` issue resolved.

The `SplitSink` ownership is correctly transferred into the spawned task. One minor note: `message.clone()` on line 708 is unnecessary since `message` is owned from `recv()` and could be passed directly to `write.send(message)`.

Remove unnecessary clone:

```diff
-if let Err(e) = write.send(message.clone()).await {
+if let Err(e) = write.send(message).await {
```
313-339: Remove the unused `WebSocketConfig` struct from `client2.rs`.

The `WebSocketConfig` defined at lines 313–328 is dead code and never instantiated. The canonical `WebSocketConfig` (with connection settings, health monitoring, and performance options) lives in `websocket_config.rs` and is actively used throughout the codebase. Keeping the duplicate struct in `client2.rs` creates confusion between two similarly-named types with completely different fields and purposes.

crates/core/data/client_enhanced.rs (4)
647-728: Dual close-frame detection paths are confusing and may double-emit events.

The handler pipeline (lines 652-663) can detect close frames and emit `Disconnected` + trigger reconnect. Then, lines 704-714 perform a second raw `Message::Close` match doing the same thing. If the handler sets `should_close = true`, the `break` at line 662 prevents reaching line 705, so no double-emit. But if the handler doesn't flag close, both paths exist for the same concern.

Consider consolidating: either let the handler fully own close detection (remove the raw match for `Message::Close`), or remove close handling from the handler and rely solely on the raw match.
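One way to consolidate is to classify every incoming frame in a single routing function, so a close can only be observed (and `Disconnected` emitted) in one place. A simplified, synchronous sketch with stand-in types (`Frame`, `Routed`, `route` are all illustrative, not the crate's real API):

```rust
// Stand-in for the incoming WebSocket message type.
enum Frame {
    Text(String),
    Close,
}

// Single classification of what a frame means to the receive loop.
enum Routed {
    Payload(String),
    Shutdown,
}

// The sole close-detection path: no second raw match can disagree with it.
fn route(frame: Frame) -> Routed {
    match frame {
        Frame::Text(t) => Routed::Payload(t),
        Frame::Close => Routed::Shutdown,
    }
}

fn main() {
    let frames = vec![
        Frame::Text("hello".into()),
        Frame::Close,
        Frame::Text("late".into()),
    ];
    let mut disconnect_events = 0;
    for f in frames {
        match route(f) {
            Routed::Payload(p) => println!("payload: {p}"),
            Routed::Shutdown => {
                disconnect_events += 1; // emit Disconnected exactly once here
                break; // stop the receive loop; nothing else can fire
            }
        }
    }
    assert_eq!(disconnect_events, 1);
}
```

Because the loop has only one arm that can react to a close, the two detection mechanisms can no longer diverge or double-emit.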
597-598: Unnecessary `message.clone()` — the value can be moved.

`message` isn't used after `write.send(...)`, so the clone allocates needlessly on every message. Just move it:

```diff
-match write.send(message.clone()).await {
+match write.send(message).await {
```
903-939: `disconnect` doesn't send a WebSocket Close frame before aborting tasks.

The method is documented as "disconnect gracefully," but it aborts background tasks (including the sender) without first sending a `Message::Close(None)` through the WebSocket. Consider queuing a close frame and giving the sender a brief window to flush it before aborting.
757-884: Remove `process_text_message`, `process_socket_io_message`, and `handle_json_message` — they're unused.

These methods form an internal chain but are never called. The receiver task at line 623 routes incoming messages through `handler.process_message()` instead. Remove these methods to reduce maintenance burden.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (9)
crates/binary_options_tools/src/pocketoption/modules/subscriptions.rs (4)
747-761: ⚠️ Potential issue | 🟡 Minor

Oneshot sender leaked when `self.sender` is `None`.

`register` inserts a sender into the router's pending map on line 750, but if `self.sender` is already `None` (line 751), the function returns early without ever consuming the registered oneshot. The orphaned entry sits in the `pending` HashMap until the router task shuts down. Move the registration after the sender check, or remove the pending entry on the early-return path.
Proposed fix:

```diff
 pub async fn unsubscribe(mut self) -> PocketResult<()> {
     let command_id = Uuid::new_v4();
-    let receiver = self.router.register(command_id).await;
     if let Some(sender) = self.sender.take() {
+        let receiver = self.router.register(command_id).await;
         sender
             .send(Command::Unsubscribe {
                 asset: self.asset.clone(),
```
858-868: ⚠️ Potential issue | 🟠 Major

`Clone` + `Drop` interaction will prematurely kill subscriptions.

`SubscriptionStream` implements both `Clone` (clones the sender/router/asset) and `Drop` (sends `Command::Unsubscribe` for `self.asset`). Dropping any clone unsubscribes the asset for all clones — the surviving clone's next `receive()` will get a `Terminated` event.

Consider either:

- Removing the `Clone` impl entirely (use `Arc<SubscriptionStream>` if sharing is needed), or
- Reference-counting the sender (e.g., with `Arc<AtomicUsize>`) and only sending `Unsubscribe` when the last clone is dropped.
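The reference-counting option can be sketched with plain std atomics. `SubscriptionGuard` and the `unsubscribed` flag below are illustrative stand-ins for the real stream and the actual `Command::Unsubscribe` send:

```rust
use std::sync::{
    atomic::{AtomicBool, AtomicUsize, Ordering},
    Arc,
};

// Stand-in for SubscriptionStream: clones share a counter, and only the
// last surviving clone performs the unsubscribe on drop.
struct SubscriptionGuard {
    refs: Arc<AtomicUsize>,
    unsubscribed: Arc<AtomicBool>, // stand-in for sending Command::Unsubscribe
}

impl SubscriptionGuard {
    fn new() -> Self {
        Self {
            refs: Arc::new(AtomicUsize::new(1)),
            unsubscribed: Arc::new(AtomicBool::new(false)),
        }
    }
}

impl Clone for SubscriptionGuard {
    fn clone(&self) -> Self {
        self.refs.fetch_add(1, Ordering::Relaxed);
        Self {
            refs: Arc::clone(&self.refs),
            unsubscribed: Arc::clone(&self.unsubscribed),
        }
    }
}

impl Drop for SubscriptionGuard {
    fn drop(&mut self) {
        // fetch_sub returns the previous value; 1 means this was the last clone.
        if self.refs.fetch_sub(1, Ordering::AcqRel) == 1 {
            // This is where the real code would send Command::Unsubscribe.
            self.unsubscribed.store(true, Ordering::Release);
        }
    }
}

fn main() {
    let a = SubscriptionGuard::new();
    let flag = Arc::clone(&a.unsubscribed);
    let b = a.clone();
    drop(a);
    assert!(!flag.load(Ordering::Acquire)); // a clone survives: still subscribed
    drop(b);
    assert!(flag.load(Ordering::Acquire)); // last clone gone: unsubscribed
    println!("ok");
}
```

This preserves `Clone` for callers while making the unsubscribe fire exactly once, when the last handle is dropped.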
502-521: ⚠️ Potential issue | 🟡 Minor

Severely broken indentation in the `History` command handler.

Lines 512–521 are indented ~48 spaces deep while the surrounding code is at ~24. This looks like a paste/merge artifact and makes the block very hard to follow. Please reformat.
829-834: ⚠️ Potential issue | 🟠 Major

`to_stream` never terminates — yields infinite errors after a failure.

`unfold` always returns `Some(...)`, so after a terminal error (channel closed, `Terminated` event) the stream will repeatedly call `receive()` and yield `Err` on every poll, spinning forever. Return `None` on unrecoverable errors to signal stream completion:

Proposed fix:

```diff
 pub fn to_stream(self) -> impl futures_util::Stream<Item = PocketResult<Candle>> + 'static {
     Box::pin(unfold(self, |mut stream| async move {
         let result = stream.receive().await;
-        Some((result, stream))
+        match result {
+            Ok(candle) => Some((Ok(candle), stream)),
+            Err(e) => {
+                // Yield the error and then terminate the stream
+                Some((Err(e), stream.into_terminated()))
+            }
+        }
     }))
 }
```

Alternatively, a simpler approach — just stop on error:

```rust
pub fn to_stream(self) -> impl futures_util::Stream<Item = PocketResult<Candle>> + 'static {
    Box::pin(unfold(Some(self), |state| async move {
        let mut stream = state?;
        match stream.receive().await {
            Ok(candle) => Some((Ok(candle), Some(stream))),
            Err(e) => Some((Err(e), None)), // yield error, then end
        }
    }))
}
```

crates/core/data/client2.rs (2)
3-3: ⚠️ Potential issue | 🟡 Minor

Unused import: `std::f32::consts::E`.

This import of Euler's number is not used anywhere in the file.

```diff
 use std::{
     collections::HashMap,
-    f32::consts::E,
     ops::Deref,
     sync::Arc,
     time::{Duration, Instant},
 };
```
630-636: ⚠️ Potential issue | 🔴 Critical

`connected_at` field does not exist on `ConnectionState` — will not compile.

`ConnectionState` (lines 103–133) has `connection_start_time: Option<Instant>`, not `connected_at`. The same issue appears at line 669.

🐛 Proposed fix

At line 635:

```diff
 shared_state
     .update_stats(|stats| {
         stats.successful_connections += 1;
-        stats.connected_at = Some(std::time::Instant::now());
+        stats.connection_start_time = Some(std::time::Instant::now());
     })
     .await;
```

At line 669:

```diff
-stats.connected_at = None;
+stats.connection_start_time = None;
```

crates/core/data/client_enhanced.rs (2)
459-507: ⚠️ Potential issue | 🔴 Critical

`connect()` ignores the iterated URL — all loop iterations attempt the same connection.

The loop iterates over `self.connection_urls` (line 465) but `self.connector.connect()` (lines 467–471) uses `self.credentials` and `self.config`, completely ignoring the `url` variable. Every iteration makes the exact same connection attempt, so the URL-based region fallback is non-functional. The `url` is only used for logging (line 474) and setting `current_region` (line 483). The connection should either pass the URL to the connector or use `try_connect_single(url)` (defined at line 527).
360-374: ⚠️ Potential issue | 🔴 Critical

Closure signature mismatch with `EventHandler` trait — this code will not compile.

`add_event_handler` accepts `F: Fn(&serde_json::Value) -> BinaryOptionsResult<()>`, but `EventManager::add_handler` (events.rs:121) expects `Arc<dyn EventHandler<T>>`. The `EventHandler` blanket impl (events.rs:85-94) requires `F: Fn(&Event<T>) -> Fut` where `Fut: Future<Output = BinaryOptionsResult<()>>`.

Your closure doesn't satisfy this:

- Parameter mismatch: takes `&serde_json::Value`, needs `&Event<serde_json::Value>`
- Return type mismatch: returns `BinaryOptionsResult<()>` directly, needs `Future<Output = BinaryOptionsResult<()>>`

The `Arc<F>` cannot be coerced to `Arc<dyn EventHandler<serde_json::Value>>`.

crates/core/data/connection.rs (1)
251-276: ⚠️ Potential issue | 🔴 Critical

Type mismatch: `last_error` is `Option<BinaryOptionsToolsError>` but `unwrap_or` receives a `TungsteniteError`.

At line 266, `Ok(Err(e))` yields `e: BinaryOptionsToolsError`, making `last_error: Option<BinaryOptionsToolsError>`. However, line 275 calls `last_error.unwrap_or(tokio_tungstenite::tungstenite::Error::ConnectionClosed)`, which passes a `TungsteniteError` to `unwrap_or`. Since `unwrap_or` requires its argument to match the `Option`'s inner type, this causes a compilation error. Fix by using `unwrap_or_else` to construct a correctly typed fallback:

Proposed fix:

```diff
-Err(BinaryOptionsToolsError::WebsocketConnectionError(Box::new(
-    last_error.unwrap_or(tokio_tungstenite::tungstenite::Error::ConnectionClosed),
-)))
+Err(last_error.unwrap_or_else(|| {
+    BinaryOptionsToolsError::WebsocketConnectionError(Box::new(
+        tokio_tungstenite::tungstenite::Error::ConnectionClosed,
+    ))
+}))
```
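The distinction matters because `unwrap_or` takes an already-constructed value of the `Option`'s inner type, while `unwrap_or_else` takes a closure that can build one. A minimal, self-contained sketch — the error types here are simplified stand-ins, not the crate's real definitions:

```rust
// Stand-in for tokio_tungstenite's error type.
#[derive(Debug, PartialEq)]
struct TungsteniteError(&'static str);

// Stand-in for BinaryOptionsToolsError with its boxed-error variant.
#[derive(Debug, PartialEq)]
enum ToolsError {
    WebsocketConnection(Box<TungsteniteError>),
}

fn main() {
    let last_error: Option<ToolsError> = None;

    // `unwrap_or` would need a `ToolsError` argument here; passing a raw
    // `TungsteniteError` (as the original code does) fails to compile:
    // let e = last_error.unwrap_or(TungsteniteError("connection closed")); // type error

    // `unwrap_or_else` builds the correctly typed fallback lazily:
    let e = last_error.unwrap_or_else(|| {
        ToolsError::WebsocketConnection(Box::new(TungsteniteError("connection closed")))
    });
    assert_eq!(
        e,
        ToolsError::WebsocketConnection(Box::new(TungsteniteError("connection closed")))
    );
    println!("fallback constructed");
}
```

Beyond fixing the type error, `unwrap_or_else` also avoids eagerly constructing the fallback when `last_error` is `Some`.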
🤖 Fix all issues with AI agents
In `@CHANGELOG.md`:
- Around line 22-39: The changelog entry currently labels a release as "0.2.6"
but contains multiple breaking changes and new features (see sections "Added"
and "Changed (Breaking Logic)" including symbols Deal.profit, EventHandler,
WebSocketEvent, EnhancedWebSocketInner, PocketOption), so update the release
header to a minor version bump (e.g., change "## [0.2.6] - 2026-02-13" to "##
[0.3.0] - 2026-02-13") and ensure any internal references to "0.2.6" in this
file are updated to "0.3.0" to reflect the breaking/API changes per the
repository's semantic versioning policy.
- Around line 176-179: Add the missing release link for 0.2.6 to the versions
list in CHANGELOG.md by adding a new line in the same format as the others:
include the reference label "[0.2.6]:" followed by the GitHub release URL using
the existing tag naming pattern (BinaryOptionsToolsV2-0.2.6) so it matches
entries like [0.2.5] and [0.2.4]; ensure placement is consistent with the other
version links.
In `@crates/core/readme.md`:
- Line 27: Correct the grammatical error in the README line describing Sender:
change the phrase "will work be shared between threads" to a grammatically
correct form such as "will be shared between threads" (or "is designed to be
shared between threads") in the sentence that documents `Sender` so the
description reads clearly.
- Line 18: The README's WebSocket trait/struct list is outdated; replace the
incorrect names with the actual implementations: list the Connect trait, the
MessageHandler trait (for message processing), the SenderMessage struct (noting
it wraps an async_channel::Sender for sending), and the Data struct (for data
management) so the README accurately matches the codebase.
🧹 Nitpick comments (6)
crates/binary_options_tools/src/pocketoption/modules/subscriptions.rs (1)
544-552:retainwith side-effect to extractcommand_idworks but is fragile.If multiple history responses arrive for the same
(asset, period)— e.g., due to a server-side replay —retainwould match and remove only the first entry, butidwould be overwritten to the last match'scommand_id, routing the response to the wrong caller.This is unlikely given the duplicate-request guard on line 504, but worth noting. A
position/swap_removeordrain_filterwould be more explicit.crates/core/data/events.rs (1)
76-94: Closures are not removable by name since the blanket impl always returns `"unnamed"`.

The closure impl `impl<T, F, Fut> EventHandler<T> for F` doesn't override `name()`, so all closure-based handlers share the name `"unnamed"`. Calling `remove_handler("unnamed")` in `client2.rs` would remove all closure handlers at once, and removing a specific closure handler is impossible.

Consider either:

- Documenting this limitation clearly, or
- Providing a wrapper struct (e.g., `NamedHandler { name: String, handler: F }`) for registering closures with identifiable names.

crates/core/data/connection.rs (1)
286-293: `Clone` creates a fresh empty pool, discarding all accumulated stats.

`EnhancedConnectionManager::clone()` constructs a brand-new `ConnectionPool`, so cloned instances (e.g., line 242 in `connect()`) lose all `ConnectionStats` accumulated by the original. This means the `get_best_url()` optimization is effectively never used when connections are attempted in parallel, since each spawned task operates on an empty pool. If losing stats is intentional (to isolate parallel attempts), consider documenting it. Otherwise, share the pool via `Arc`:

♻️ Suggested approach

```diff
 pub struct EnhancedConnectionManager {
-    pool: ConnectionPool,
+    pool: Arc<ConnectionPool>,
     connect_timeout: Duration,
     ssl_verify: bool,
 }
```

This makes `Clone` share the same pool and removes the need for a manual `Clone` impl.

crates/core/data/client2.rs (2)
302-318: All handler tasks are spawned eagerly — semaphore only limits concurrent execution, not spawn count.

Every handler gets a `tokio::spawn` regardless of `max_concurrent_handlers`. Under many handlers this could create a large number of tasks. The semaphore correctly throttles concurrent execution, but the spawn overhead remains. This is likely acceptable for typical handler counts (≤ dozens), but worth noting if handler counts grow large.
386-390: `remove_handler` now works but is fragile with closure handlers.

Since closure-based `EventHandler` impls always return `"unnamed"` from `name()` (see `events.rs`), calling `remove_handler("unnamed")` would remove all closure handlers. This is functional but could surprise callers. Consider documenting this limitation or returning an error when `name == "unnamed"`.

crates/core/data/client_enhanced.rs (1)
715-740: Duplicate handling of `Message::Close` — disconnect event emitted twice and reconnect triggered twice.

After `handler.process_message()` detects a close frame (line 665, `should_close` branch), it emits a `Disconnected` event and calls `reconnect_notify.notify_one()`, then `break`s. However, if `should_close` is `false` but the raw message is `Message::Close`, lines 717–727 emit a second `Disconnected` event and call `reconnect_notify.notify_one()` again.

Even if `should_close` is `true`, the `break` at line 674 prevents reaching line 716. But if the handler doesn't detect the close (returns `should_close = false`), the low-level match handles it — this path works. The concern is that two independent close-detection mechanisms could diverge. Consider consolidating close detection in one place.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
/gemini review
Code Review
This pull request is a significant revamp, introducing numerous improvements across the board. Key enhancements include the migration to rust_decimal::Decimal for financial calculations, which greatly improves precision and correctness. The connection logic, SSID parsing, and message handling have been made more robust. New features like pending orders have been added. The codebase has also been reorganized for better maintainability.
My review focuses on the FFI boundaries where Decimal values are converted back to f64. The current use of unwrap_or_default() can lead to silent data corruption, which is a significant risk in a financial application. I've provided several comments with high and medium severity to highlight these areas and suggest safer alternatives.
```rust
amount: deal.amount.to_f64().unwrap_or_default(),
profit: deal.profit.to_f64().unwrap_or_default(),
percent_profit: deal.percent_profit,
percent_loss: deal.percent_loss,
open_price: deal.open_price.to_f64().unwrap_or_default(),
close_price: deal.close_price.to_f64().unwrap_or_default(),
```
Using `to_f64().unwrap_or_default()` for financial values like `amount`, `profit`, and prices is risky. If a `Decimal` value is outside the range of `f64` (e.g., a very large profit), it will be silently converted to 0.0, leading to incorrect data being passed to the FFI consumer. This could have serious consequences in a trading application. It would be safer to use `.unwrap()` to cause a panic on conversion failure, making the issue immediately visible, rather than allowing potentially incorrect data to be used. A better long-term solution for FFI is often to pass `Decimal` types as strings to preserve precision.
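A middle ground between `unwrap_or_default()` (silent corruption) and `.unwrap()` (panic) is to propagate the failure. A sketch using `Option<f64>` as a stand-in for the `Option` that `rust_decimal`'s `to_f64()` returns — the helper name `to_f64_strict` is hypothetical:

```rust
// Fail loudly instead of defaulting: a failed conversion becomes an Err
// naming the field, rather than a silent 0.0.
fn to_f64_strict(converted: Option<f64>, field: &str) -> Result<f64, String> {
    converted.ok_or_else(|| format!("`{field}` is outside f64 range; refusing to default to 0.0"))
}

fn main() {
    // In the real code these would be deal.amount.to_f64(), deal.profit.to_f64(), ...
    assert_eq!(to_f64_strict(Some(12.5), "amount"), Ok(12.5));

    let err = to_f64_strict(None, "profit").unwrap_err();
    assert!(err.contains("profit"));
    println!("conversion errors now surface instead of becoming 0.0");
}
```

The FFI layer can then map the `Err` to an exception (Python) or error code rather than handing the consumer a fabricated zero.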
```rust
open: candle.open.to_f64().unwrap_or_default(),
high: candle.high.to_f64().unwrap_or_default(),
low: candle.low.to_f64().unwrap_or_default(),
close: candle.close.to_f64().unwrap_or_default(),
```
Similar to the `Deal` struct, using `to_f64().unwrap_or_default()` for OHLC candle prices can lead to silent data corruption if the decimal values are out of the `f64` range. This could cause incorrect chart rendering or flawed technical analysis. It's safer to use `.unwrap()` to cause a panic on conversion failure, making the issue immediately visible, rather than allowing potentially incorrect data to be used.
```rust
pub async fn balance(&self) -> f64 {
    self.inner.balance().await.to_f64().unwrap_or_default()
}
```
The conversion from `Decimal` to `f64` using `to_f64().unwrap_or_default()` can silently hide potential data corruption. If the balance value is too large or small to be represented by an `f64`, `to_f64()` will return `None`, and this code will then default to 0.0. This could be misleading for the user. Consider returning a `Result` to propagate the conversion error, or at least logging a warning if the conversion fails.
```diff
 pub fn balance<'py>(&self, py: Python<'py>) -> PyResult<Bound<'py, PyAny>> {
     let market = self.market.clone();
-    pyo3_async_runtimes::tokio::future_into_py(py, async move { Ok(market.balance().await) })
+    pyo3_async_runtimes::tokio::future_into_py(py, async move {
+        Ok(market.balance().await.to_f64().unwrap_or_default())
+    })
 }
```
The conversion from `Decimal` to `f64` using `to_f64().unwrap_or_default()` can silently return 0.0 if the balance is too large or small to be represented by an `f64`. In the context of a trading bot's context object, providing an incorrect balance could lead to flawed strategy decisions. It would be safer to propagate a Python exception if the conversion fails.
Fix tests, examples, history, formatting, CI refactor, and SSID parsing patches.
Added functions: `shutdown`, `get_assets`, and some raw/lower-level WebSocket utilities.
Summary by CodeRabbit
New Features
Bug Fixes & Improvements
Documentation