
add library #3

Open

faddat wants to merge 10 commits into main from faddat/lib

Conversation

@faddat
Collaborator

@faddat faddat commented Dec 29, 2024

Summary by CodeRabbit

  • New Features

    • Added a new package called "cheese" to the workspace.
    • Implemented multi-chain wallet management and transaction signing.
    • Created modules for fetching and processing liquidity pool data from Meteora and Raydium.
    • Introduced a price-balancer module for managing cheese and stable tokens.
  • Chores

    • Updated workspace configuration.
    • Added new dependencies for the cheese package.
    • Updated .gitignore to exclude seedphrase files.
  • Refactor

    • Moved common utility functions to shared modules.
    • Reorganized code structure with a modular approach.

@coderabbitai

coderabbitai bot commented Dec 29, 2024

Walkthrough

This pull request introduces a new Rust package called "cheese" to the workspace, focusing on blockchain and decentralized exchange (DEX) interactions. The package includes modules for fetching and processing pool data from Meteora and Raydium, implementing a price balancer, managing multi-chain wallets, and updating README information. The changes enhance the project's capabilities in handling cryptocurrency pool data, transaction signing, and aggregating liquidity information across different blockchain networks.

Changes

File Change Summary
Cargo.toml Added "packages/cheese" to workspace members
packages/cheese/Cargo.toml New package configuration with dependencies like anyhow, reqwest, serde, tokio, bip39, ed25519-dalek, k256, and rand
packages/cheese/src/lib.rs Added public modules: common, meteora, raydium, updater, and wallet
packages/cheese/src/balancer.rs Implemented the price balancer logic with Wallet and CheesePoolPrice structs
packages/cheese/src/common.rs Defined data models for paginated API responses and pool information
packages/cheese/src/meteora.rs Introduced logic for fetching data from the Meteora API
packages/cheese/src/raydium.rs Added functionality for interacting with the Raydium API
packages/cheese/src/updater.rs Aggregator logic for fetching and processing data from both DEXes
packages/cheese/src/wallet.rs Implemented multi-chain key management using the hd-wallet crate
.gitignore Added **seedphrase to ignored patterns
packages/price-balancer/Cargo.toml Added local dependency on cheese package
packages/price-balancer/src/main.rs Removed local utility functions in favor of imported functions from cheese::common
packages/cheese/src/solana.rs Added comment indicating the module is currently empty

Sequence Diagram

sequenceDiagram
    participant Client
    participant Meteora
    participant Raydium
    participant Wallet
    participant Balancer

    Client->>Meteora: Fetch Cheese Pools
    Meteora-->>Client: Return Pool Data
    Client->>Raydium: Fetch Additional Pool Info
    Raydium-->>Client: Return Pool Details
    Client->>Wallet: Initialize Wallet
    Client->>Balancer: Run Price Balancer
    Balancer->>Wallet: Calculate Trades
    Balancer-->>Client: Return Balancing Results

Poem

🧀 A Cheesy Adventure in Code 🚀

Rust wheels spinning, pools so bright,
Meteora and Raydium take flight!
Wallets dancing, tokens in sync,
Blockchain magic faster than a blink!
Cheese, oh cheese, our code's delight!

  • Coded by CodeRabbit 🐰


@faddat
Collaborator Author

faddat commented Dec 29, 2024

@coderabbitai review

@coderabbitai

coderabbitai bot commented Dec 29, 2024

@faddat: Understood. I will review the changes.

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (21)
packages/cheese/src/balancer.rs (5)

1-5: Consider using Rust doc comments for module-level documentation.

Currently, the file begins with regular line comments. You might switch to Rust doc comments (//!) so these appear more naturally in generated documentation, should you wish to expose and reference them in library docs.
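A minimal sketch of the suggested switch, using a hypothetical module and function name: `//!` documents the enclosing module, `///` documents the item that follows, and both render in `cargo doc` output.

```rust
// Hypothetical names for illustration; not the PR's actual items.
pub mod balancer {
    //! Price-balancer logic for Cheese pools.
    //!
    //! Module-level docs written with `//!` appear on the module's rustdoc page.

    /// Fair price implied by total TVL and total Cheese quantity.
    /// Item-level docs written with `///` appear on the item's rustdoc page.
    pub fn implied_fair_price(total_tvl_usd: f64, total_cheese_qty: f64) -> f64 {
        if total_cheese_qty == 0.0 {
            0.0
        } else {
            total_tvl_usd / total_cheese_qty
        }
    }
}
```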


16-21: Clarify naming for Wallet fields.

Field leftover_other may be unclear for future readers. Consider renaming it to something more descriptive (e.g., leftover_stable or leftover_fiat) to reflect its intended usage.

- pub leftover_other: f64,
+ pub leftover_stable: f64,

35-40: Make wallet initialization configuration more flexible.

Hardcoding leftover_cheese: 10_000.0 and leftover_other: 0.0 in the code can limit reuse. Consider passing these values as parameters or loading them from a config, to make the library more adaptable in different environments.
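One way to make initialization configurable, sketched with hypothetical type names (the PR's `Wallet` may differ):

```rust
/// Starting balances supplied by the caller or a config file,
/// instead of being hard-coded in the constructor.
#[derive(Debug, Clone, Copy)]
pub struct WalletConfig {
    pub leftover_cheese: f64,
    pub leftover_other: f64,
}

#[derive(Debug)]
pub struct Wallet {
    pub leftover_cheese: f64,
    pub leftover_other: f64,
}

impl Wallet {
    /// Build a wallet from explicit starting balances.
    pub fn from_config(cfg: WalletConfig) -> Self {
        Wallet {
            leftover_cheese: cfg.leftover_cheese,
            leftover_other: cfg.leftover_other,
        }
    }
}
```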


89-128: Handle stable or partial purchase flow for underpriced pools.

When pools are underpriced, the code prints a “skip” message rather than performing a purchase (lines 126-127). If this is intentional for your prototype, it's fine. Otherwise, ensure leftover stable tokens are handled consistently (e.g., partial buys).
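If partial buys are desired, the spend decision can be isolated in a small pure function; this is a sketch of the policy, not the PR's code:

```rust
/// Spend up to `leftover_stable` on an underpriced pool, capped at the
/// USD gap needed to bring it back to fair price. Returns the stable
/// amount to spend; 0.0 means skip.
pub fn stable_to_spend(gap_usd: f64, leftover_stable: f64) -> f64 {
    gap_usd.max(0.0).min(leftover_stable.max(0.0))
}
```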


165-174: Consider structured logging instead of println!.

Using a structured logging library (e.g., tracing) can help produce more informative logs, preserve context across async calls, and be filtered at different verbosity levels. This will be particularly helpful in larger-scale or production deployments.
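The core benefit of such libraries is leveled filtering; the idea can be sketched with std only (the real `tracing` API is richer and differs from this):

```rust
/// Severity levels ordered by declaration, so the derived `PartialOrd`
/// gives Debug < Info < Warn < Error.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub enum Level {
    Debug,
    Info,
    Warn,
    Error,
}

/// Emit a record only if it meets the configured threshold -- the
/// filtering behavior that bare `println!` cannot provide.
pub fn should_emit(record: Level, threshold: Level) -> bool {
    record >= threshold
}
```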

packages/cheese/src/updater.rs (3)

1-6: Consider Rust doc comments for module-level summaries.

Switching from regular comments to //! doc comments makes it easier for consumers of the library to see these docs in rendered form, facilitating discoverability.


18-39: Add field-level documentation for DisplayPool and CheeseAggregates.

To improve maintainability, consider annotating each field (e.g., source, other_mint, etc.) with doc comments describing its intended meaning. This helps future developers or external users understand their purpose at a glance.


195-239: Consider a tabular or column-alignment library for cleaner formatting.

The manual alignment approach is prone to spacing issues as fields grow or shrink in length. Using libraries like prettytable-rs or tabled can simplify code and ensure consistent table output.
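Even without a crate, the hand-tuned widths can be replaced by computing them from the data; a std-only sketch:

```rust
/// Width of each column = its widest cell, so the table never truncates
/// or misaligns as field lengths change.
pub fn column_widths(rows: &[Vec<String>]) -> Vec<usize> {
    let cols = rows.iter().map(|r| r.len()).max().unwrap_or(0);
    (0..cols)
        .map(|c| {
            rows.iter()
                .filter_map(|r| r.get(c))
                .map(|s| s.len())
                .max()
                .unwrap_or(0)
        })
        .collect()
}
```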

packages/cheese/src/meteora.rs (2)

8-8: Monitor duplication across crates.

fetch_meteora_cheese_pools is similarly implemented in packages/lib/src/meteora.rs. If duplication is intentional for separation of concerns, it’s fine. Otherwise, consider consolidating them into a single canonical location to reduce maintenance overhead.


42-47: Parameterize base_url, page size, or concurrency.

By centralizing these values as constants or config parameters, you can more easily tune them for different environments without changing source code. For instance, dev vs. production servers might warrant different paging or concurrency strategies.
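A sketch of centralizing those knobs (struct and field names are hypothetical; the default URL is the one the code currently hard-codes):

```rust
/// Fetch settings gathered in one place so different environments
/// can override them without source changes.
#[derive(Debug, Clone)]
pub struct MeteoraConfig {
    pub base_url: String,
    pub page_size: u32,
}

impl Default for MeteoraConfig {
    fn default() -> Self {
        MeteoraConfig {
            base_url: "https://amm-v2.meteora.ag".to_string(),
            page_size: 50,
        }
    }
}
```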

packages/lib/src/meteora.rs (2)

1-3: Consolidate repeated logic if possible.

This file largely duplicates functionality from packages/cheese/src/meteora.rs. Depending on your architecture, consider unifying them to avoid parallel maintenance. If you truly need separate logic for library vs. cheese package uses, clarify in the code or documentation why it’s beneficial.


39-86: Confirm whether parallel fetches are possible.

Currently, you do a serialized pagination loop, which might increase latency. If Meteora’s API supports concurrent or batched requests without rate-limit issues, consider parallelizing or implementing batch requests for improved performance.
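One prerequisite for concurrent fetching is knowing the page set up front; once the first response reveals `total_count`, the remaining pages can be grouped into batches to issue in parallel. A sketch of that grouping step:

```rust
/// All page indices implied by `total_count` and `page_size`, grouped
/// into batches of `batch` pages that could be fetched concurrently.
pub fn page_batches(total_count: u32, page_size: u32, batch: usize) -> Vec<Vec<u32>> {
    let size = page_size.max(1);
    let pages = (total_count + size - 1) / size; // ceiling division
    (0..pages)
        .collect::<Vec<u32>>()
        .chunks(batch.max(1))
        .map(|c| c.to_vec())
        .collect()
}
```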

packages/lib/src/raydium.rs (2)

28-29: Use a descriptive Rust field name alongside the "type" alias.
The #[serde(default, alias = "type")] attribute exists because type is a reserved keyword in Rust and the alias must match the API's JSON key, so the alias itself cannot be renamed without breaking deserialization. Instead, give the Rust-side field a descriptive name (e.g., pool_type) while keeping the alias pointing at the JSON "type" key, which avoids raw identifiers like r#type and makes the code clearer to future maintainers.


78-107: Leverage structured logging instead of println.
Using structured logs (e.g., tracing or log) benefits production-grade apps, giving better filtering and context on environment or severity levels.

packages/cheese/src/common.rs (1)

57-91: Consider limiting the number of pages or implementing a backoff strategy.
When fetching multiple pages in a loop, always consider potential rate limits or large datasets that might degrade performance. Adding a backoff or page limit can make the process more robust.
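A minimal backoff sketch (parameter names are illustrative):

```rust
use std::time::Duration;

/// Exponential backoff with a cap: the delay doubles per attempt but
/// never exceeds `max_ms`, a common guard when paging a rate-limited API.
pub fn backoff_delay(attempt: u32, base_ms: u64, max_ms: u64) -> Duration {
    let delay = base_ms.saturating_mul(1u64 << attempt.min(16));
    Duration::from_millis(delay.min(max_ms))
}
```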

packages/lib/src/lib.rs (2)

40-46: Keep deserializing helpers consistent with usage.
de_string_to_f64 is useful. If additional numeric fields need conversion, consider making helper functions more generic or re-check if each numeric field needs defaulting on parse errors.
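The defaulting behavior itself is easy to factor out and reuse; a std-only sketch of the parse step, without the serde wiring:

```rust
/// Parse a numeric string, falling back to a default on failure --
/// the same defaulting policy a helper like `de_string_to_f64` would
/// apply during deserialization.
pub fn parse_f64_or(s: &str, default: f64) -> f64 {
    s.trim().parse::<f64>().unwrap_or(default)
}
```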


50-98: Ensure property-based tests for pagination.
Because pagination can lead to edge cases (e.g., off-by-one errors, large pages), consider adding property-based tests or large-scale tests to verify correctness.

Do you want me to help write tests to confirm correct handling of large pagination sequences?
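The loop-termination condition is exactly the kind of boundary such tests should probe; isolating it makes it testable. This mirrors the `fetched_so_far >= total_count` check in the fetch loop:

```rust
/// True while more pages remain: the loop continues only if fewer than
/// `total_count` items have been fetched after `page` (zero-based).
pub fn has_more_pages(page: i32, size: i32, total_count: i32) -> bool {
    (page + 1) * size < total_count
}
```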

packages/combined_code.txt (4)

450-451: Use snake_case for field names in Rust structs

The field hasNextPage doesn't follow Rust's naming convention of using snake_case.

-    hasNextPage: bool,
+    has_next_page: bool,

505-516: Add concrete documentation for CheeseAggregates fields

The struct has placeholder comments. Consider adding more specific documentation for each field to improve code maintainability.

 #[derive(Debug, Default)]
 struct CheeseAggregates {
-    // e.g. total USD liquidity across all pools
+    /// Total USD liquidity summed across all pools (Meteora + Raydium)
     total_liquidity_usd: f64,
-    // total trades all-time, or daily trades, etc.
-    // This is a placeholder example
+    /// Total number of trades across all pools
     total_trades_all_time: u64,
-    // total Cheese quantity (summed across all pools)
+    /// Total Cheese token quantity summed across all pools
     total_cheese_qty: f64,
-    // total daily volume across all Cheese pools
+    /// Total 24-hour trading volume in USD across all pools
     total_volume_24h: f64,

796-810: Improve robustness of parse_other_token_name function

The function makes assumptions about the pool name format and could be more robust.

 fn parse_other_token_name(pool_name: &str) -> String {
+    if pool_name.is_empty() {
+        return String::new();
+    }
+
     let parts: Vec<&str> = pool_name.split('-').collect();
     if parts.len() == 2 {
         let left = parts[0].trim();
         let right = parts[1].trim();
-        if left.contains("🧀") || left.to_lowercase().contains("cheese") {
+        let is_left_cheese = left.contains("🧀") || 
+            left.to_lowercase().contains("cheese");
+        let is_right_cheese = right.contains("🧀") || 
+            right.to_lowercase().contains("cheese");
+
+        if is_left_cheese && !is_right_cheese {
             return right.to_string();
         }
-        if right.contains("🧀") || right.to_lowercase().contains("cheese") {
+        if is_right_cheese && !is_left_cheese {
             return left.to_string();
         }
-        return right.to_string();
+        // If both or neither contain cheese, return the right part
+        return right.to_string();
     }
     pool_name.to_string()
 }

812-856: Extract table column widths as constants

The print_table function uses magic numbers for column widths. Consider extracting these as constants for better maintainability.

+const TABLE_WIDTHS: &[usize] = &[8, 44, 10, 10, 10, 10, 12, 12, 5, 44];
+const TABLE_HEADERS: &[&str] = &[
+    "Source",
+    "Other Mint",
+    "Symbol",
+    "Cheese Qty",
+    "Other Qty",
+    "Pool Type",
+    "Liquidity($)",
+    "Volume($)",
+    "Fee",
+    "Pool Address",
+];

 fn print_table(pools: &[DisplayPool]) {
     println!(
         "| {:<8} | {:<44} | {:<10} | {:>10} | {:>10} | {:<10} | {:>12} | {:>12} | {:>5} | {:<44} |",
-        "Source",
-        "Other Mint",
-        "Symbol",
-        "Cheese Qty",
-        "Other Qty",
-        "Pool Type",
-        "Liquidity($)",
-        "Volume($)",
-        "Fee",
-        "Pool Address",
+        TABLE_HEADERS[0],
+        TABLE_HEADERS[1],
+        TABLE_HEADERS[2],
+        TABLE_HEADERS[3],
+        TABLE_HEADERS[4],
+        TABLE_HEADERS[5],
+        TABLE_HEADERS[6],
+        TABLE_HEADERS[7],
+        TABLE_HEADERS[8],
+        TABLE_HEADERS[9],
     );
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ecb0075 and 40645fc.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (14)
  • Cargo.toml (1 hunks)
  • packages/cheese/Cargo.toml (1 hunks)
  • packages/cheese/src/balancer.rs (1 hunks)
  • packages/cheese/src/common.rs (1 hunks)
  • packages/cheese/src/lib.rs (1 hunks)
  • packages/cheese/src/meteora.rs (1 hunks)
  • packages/cheese/src/raydium.rs (1 hunks)
  • packages/cheese/src/updater.rs (1 hunks)
  • packages/combined_code.txt (1 hunks)
  • packages/lib/Cargo.toml (1 hunks)
  • packages/lib/src/common.rs (1 hunks)
  • packages/lib/src/lib.rs (1 hunks)
  • packages/lib/src/meteora.rs (1 hunks)
  • packages/lib/src/raydium.rs (1 hunks)
✅ Files skipped from review due to trivial changes (4)
  • packages/lib/src/common.rs
  • packages/cheese/src/lib.rs
  • packages/lib/Cargo.toml
  • packages/cheese/Cargo.toml
🧰 Additional context used
🪛 LanguageTool
packages/combined_code.txt

[typographical] ~862-~862: Two consecutive dots
Context: ... } else { format!("{}…", &input[..max_len.saturating_sub(1)]) } } ===...

(DOUBLE_PUNCTUATION)

🪛 Gitleaks (8.21.2)
packages/combined_code.txt

878-878: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

🔇 Additional comments (4)
packages/cheese/src/balancer.rs (1)

49-68: Verify placeholder logic for implied price.

By dividing p.pool_tvl by 500.0, you're assigning a simple placeholder for price_usd. Confirm this meets your real-world pricing model. Otherwise, refactor to a more accurate or data-driven approach, especially if the final application depends on precise values.

packages/cheese/src/updater.rs (1)

42-60: Validate the aggregator’s approach to merging data.

The aggregator straightforwardly combines Meteora and Raydium pool data, but these sources may report different or overlapping pools. Consider a strategy for deduplication or distinguishing partial/inconsistent data if they refer to the same pool in different ways.

packages/cheese/src/common.rs (1)

29-33: Validate field aliases to avoid potential conflicts.
You're using #[serde(alias = "trading_volume")] for daily_volume. Ensure that no collisions occur if additional or legacy aliases are introduced.

Cargo.toml (1)

5-5: Workspace structure confirmed.
Adding packages/lib to the workspace is appropriate. Ensure that any local dependencies or path references for packages/lib are updated across all relevant modules.

Comment on lines +130 to +163
// Step 5: For pools under $600, deposit Cheese + "other token"
let mut under_600: Vec<&CheesePoolPrice> =
pool_prices.iter().filter(|pp| pp.tvl < 600.0).collect();
under_600.sort_by(|a, b| {
a.tvl
.partial_cmp(&b.tvl)
.unwrap_or(std::cmp::Ordering::Equal)
});

for pp in under_600 {
let needed = 600.0 - pp.tvl;
println!(
"[{}] TVL ${:.2} < $600 => deposit Cheese + other token to raise ~${:.2}",
pp.pool_name, pp.tvl, needed
);
let half_needed = needed / 2.0;
let cheese_deposit = half_needed / fair_price;
if wallet.leftover_cheese < cheese_deposit {
println!("Not enough cheese leftover. Skipping...");
continue;
}
wallet.leftover_cheese -= cheese_deposit;

if wallet.leftover_other < half_needed {
println!("Not enough 'other' leftover. Skipping or partial deposit...");
continue;
}
wallet.leftover_other -= half_needed;

println!(
" -> Deposited ~{:.2} Cheese & ${:.2} of other => new TVL ~600",
cheese_deposit, half_needed
);
}

🛠️ Refactor suggestion

Enable partial deposits for low-TVL pools if feasible.

Currently, the code checks if the wallet has enough tokens to deposit the full amount. If not, it simply logs “Skipping...”. You could handle partial deposits to raise TVL closer to $600, which might be beneficial if resources aren't sufficient for a full deposit.
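A partial-deposit split can be computed before mutating the wallet; a sketch with hypothetical names, treating all amounts as USD:

```rust
/// Split the shortfall evenly across both sides of the pool, then cap
/// each side by what the wallet actually holds.
/// Returns (cheese_deposit_usd, stable_deposit_usd).
pub fn partial_deposit(needed_usd: f64, cheese_usd: f64, stable_usd: f64) -> (f64, f64) {
    let half = needed_usd.max(0.0) / 2.0;
    (half.min(cheese_usd.max(0.0)), half.min(stable_usd.max(0.0)))
}
```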

Comment on lines +78 to +102
let url = format!(
"https://api-v3.raydium.io/pools/info/mint?mint1={}&poolType=all&poolSortField=default&sortType=desc&pageSize=1000&page=1",
CHEESE_MINT
);
println!("Requesting Raydium cheese pools from {url}");

let resp = client.get(&url).send().await?;
if !resp.status().is_success() {
return Err(anyhow!("Raydium request failed: {}", resp.status()));
}

let parsed: RaydiumMintPoolsResponse = resp.json().await?;
if !parsed.success {
return Err(anyhow!(
"Raydium cheese-pools returned success=false for id {}",
parsed.id
));
}

println!(
"Raydium /pools/info/mint returned {} items\n",
parsed.data.count
);
Ok(parsed.data.data)
}

🛠️ Refactor suggestion

Consider pagination or limiting large fetches.

The Raydium endpoint is requested with pageSize=1000, but if the dataset grows, you’ll need a strategy for paging or repeated requests similar to how you handle Meteora. Otherwise, extremely large responses may degrade performance.

Comment on lines +641 to +791
#[tokio::main]
async fn main() -> Result<()> {
let client = Client::new();

// 1) fetch cheese pools from Meteora
let meteora_pools = fetch_meteora_cheese_pools(&client).await?;

// 2) gather mints from those pools
let all_mints = gather_all_mints(&meteora_pools);
let mut all_mints_vec: Vec<String> = all_mints.into_iter().collect();
all_mints_vec.sort();

// 3) fetch minted data for those from Raydium
let minted_data = fetch_raydium_mint_ids(&client, &all_mints_vec).await?;
let mut mint_to_symbol = HashMap::new();
for maybe_item in &minted_data {
if let Some(item) = maybe_item {
if !item.address.is_empty() {
mint_to_symbol.insert(item.address.clone(), item.symbol.clone());
}
}
}

// 4) convert meteora -> DisplayPool
let mut cheese_aggs = CheeseAggregates::default();
let mut final_pools = Vec::new();
for pool in &meteora_pools {
// figure out which side is Cheese, which is other
let (cheese_ix, other_ix) = if pool.pool_token_mints.len() == 2 {
if pool.pool_token_mints[0] == CHEESE_MINT {
(0, 1)
} else {
(1, 0)
}
} else {
(0, 0)
};

let cheese_amt_str = if pool.pool_token_amounts.len() > cheese_ix {
pool.pool_token_amounts[cheese_ix].clone()
} else {
"".to_string()
};
// parse it as f64 for aggregates
let cheese_amt_f64 = cheese_amt_str.parse::<f64>().unwrap_or(0.0);

let other_mint = if pool.pool_token_mints.len() > other_ix {
pool.pool_token_mints[other_ix].clone()
} else {
"".to_string()
};
let other_amt_str = if pool.pool_token_amounts.len() > other_ix {
pool.pool_token_amounts[other_ix].clone()
} else {
"".to_string()
};
let other_amt_f64 = other_amt_str.parse::<f64>().unwrap_or(0.0);

// fallback symbol
let other_symbol = mint_to_symbol
.get(&other_mint)
.cloned()
.unwrap_or_else(|| parse_other_token_name(&pool.pool_name));

// aggregate
cheese_aggs.total_liquidity_usd += pool.pool_tvl;
cheese_aggs.total_volume_24h += pool.daily_volume;
cheese_aggs.total_cheese_qty += cheese_amt_f64;
// total_trades_all_time is arbitrary placeholder here
cheese_aggs.total_trades_all_time += 1; // pretend each pool is "one trade"? replace as needed

final_pools.push(DisplayPool {
source: "Meteora".to_string(),
other_mint: other_mint,
other_symbol,
cheese_qty: format!("{:.2}", cheese_amt_f64),
other_qty: format!("{:.2}", other_amt_f64),
pool_type: pool.pool_type.clone(),
liquidity_usd: format!("{:.2}", pool.pool_tvl),
volume_usd: format!("{:.2}", pool.daily_volume),
fee: pool.total_fee_pct.clone(),
pool_address: pool.pool_address.clone(),
});
}

// 5) fetch Raydium cheese pools
let raydium_cheese_pools = fetch_raydium_cheese_pools(&client).await?;
for rp in &raydium_cheese_pools {
let (cheese_side_amt, other_side_amt, other_mint_addr, other_symbol) =
if rp.mintA.address == CHEESE_MINT {
let oh_mint = rp.mintB.address.clone();
let oh_sym = mint_to_symbol
.get(&oh_mint)
.cloned()
.unwrap_or_else(|| rp.mintB.symbol.clone());
(rp.mint_amount_a, rp.mint_amount_b, oh_mint, oh_sym)
} else {
let oh_mint = rp.mintA.address.clone();
let oh_sym = mint_to_symbol
.get(&oh_mint)
.cloned()
.unwrap_or_else(|| rp.mintA.symbol.clone());
(rp.mint_amount_b, rp.mint_amount_a, oh_mint, oh_sym)
};

cheese_aggs.total_liquidity_usd += rp.tvl;
cheese_aggs.total_volume_24h += rp.day.volume;
cheese_aggs.total_cheese_qty += cheese_side_amt;
cheese_aggs.total_trades_all_time += 2; // e.g. let's pretend each Raydium pool is "two trades"

let fee_str = format!("{:.4}", rp.feeRate);
final_pools.push(DisplayPool {
source: "Raydium".to_string(),
other_mint: other_mint_addr,
other_symbol,
cheese_qty: format!("{:.2}", cheese_side_amt),
other_qty: format!("{:.2}", other_side_amt),
pool_type: rp.r#type.clone(),
liquidity_usd: format!("{:.2}", rp.tvl),
volume_usd: format!("{:.2}", rp.day.volume),
fee: fee_str,
pool_address: rp.pool_id.clone(),
});
}

// Print stats first
println!("===== Cheese Aggregates =====");
println!(
"Total Liquidity (USD): ${:.2}",
cheese_aggs.total_liquidity_usd
);
println!(
"Total 24H Volume (USD): ${:.2}",
cheese_aggs.total_volume_24h
);
println!(
"All-Time Trades: {}",
cheese_aggs.total_trades_all_time
);
println!(
"Total Cheese qty: {:.2}",
cheese_aggs.total_cheese_qty
);
println!("=============================\n");

// Then print table
print_table(&final_pools);

sleep(Duration::from_secs(2)).await;
Ok(())
}

🛠️ Refactor suggestion

Refactor main function for better maintainability

The main function is quite long and handles multiple responsibilities. Consider breaking it down into smaller, focused functions.

  1. Extract the Meteora processing logic (lines 667-724)
  2. Extract the Raydium processing logic (lines 728-764)
  3. Extract the stats printing logic (lines 767-785)
  4. Remove the arbitrary sleep at the end

Example refactor for the Meteora processing:

async fn process_meteora_pools(
    meteora_pools: &[MeteoraPool],
    mint_to_symbol: &HashMap<String, String>,
    cheese_aggs: &mut CheeseAggregates,
) -> Vec<DisplayPool> {
    let mut final_pools = Vec::new();
    for pool in meteora_pools {
        let (cheese_ix, other_ix) = get_token_indices(pool);
        let (cheese_amt, other_amt) = get_token_amounts(pool, cheese_ix, other_ix);
        // ... rest of the processing logic
    }
    final_pools
}

Comment on lines +521 to +568
async fn fetch_meteora_cheese_pools(client: &Client) -> Result<Vec<MeteoraPool>> {
let base_url = "https://amm-v2.meteora.ag";
let search_url = format!("{}/pools/search", base_url);

let mut all_pools = Vec::new();
let mut page = 0;
let size = 50;

loop {
println!("Requesting page {page} from {search_url}");
let resp = client
.get(&search_url)
.query(&[
("page".to_string(), page.to_string()),
("size".to_string(), size.to_string()),
("include_token_mints".to_string(), CHEESE_MINT.to_string()),
])
.send()
.await?;

if !resp.status().is_success() {
return Err(anyhow!("Meteora request failed: {}", resp.status()));
}

let parsed: PaginatedPoolSearchResponse = resp.json().await?;
println!(
"Got {} pools on page {}, total_count={}",
parsed.data.len(),
parsed.page,
parsed.total_count
);

all_pools.extend(parsed.data);

let fetched_so_far = ((page + 1) * size) as i32;
if fetched_so_far >= parsed.total_count {
break;
}
page += 1;
}

println!(
"\nFetched a total of {} Cheese pools from Meteora.\n",
all_pools.len()
);

Ok(all_pools)
}

🛠️ Refactor suggestion

Add timeout and retry mechanism for API requests

The function makes HTTP requests without timeouts and retry logic, which could lead to hanging requests or unnecessary failures in case of temporary network issues.

Consider:

  1. Adding timeouts to the client
  2. Implementing a retry mechanism with backoff
  3. Moving API URLs to configuration
 async fn fetch_meteora_cheese_pools(client: &Client) -> Result<Vec<MeteoraPool>> {
-    let base_url = "https://amm-v2.meteora.ag";
+    let base_url = std::env::var("METEORA_API_URL")
+        .unwrap_or_else(|_| "https://amm-v2.meteora.ag".to_string());
     let search_url = format!("{}/pools/search", base_url);
 
     let mut all_pools = Vec::new();
     let mut page = 0;
     let size = 50;
+    let max_retries = 3;
+    let timeout = Duration::from_secs(10);
 
     loop {
         println!("Requesting page {page} from {search_url}");
-        let resp = client
-            .get(&search_url)
+        let mut retries = 0;
+        let resp = loop {
+            if retries >= max_retries {
+                return Err(anyhow!("Max retries reached"));
+            }
+            let result = client
+            .get(&search_url)
+            .timeout(timeout)
             .query(&[
                 ("page".to_string(), page.to_string()),
                 ("size".to_string(), size.to_string()),
                 ("include_token_mints".to_string(), CHEESE_MINT.to_string()),
             ])
-            .send()
-            .await?;
+            .send()
+            .await;
+            
+            match result {
+                Ok(resp) => break resp,
+                Err(e) => {
+                    retries += 1;
+                    println!("Request failed: {e}. Retrying {retries}/{max_retries}...");
+                    sleep(Duration::from_secs(2u64.pow(retries))).await;
+                    continue;
+                }
+            }
+        };

Committable suggestion skipped: line range outside the PR's diff.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 6

♻️ Duplicate comments (1)
packages/combined_code.txt (1)

868-876: 🛠️ Refactor suggestion

Add timeout and retry mechanism for API requests

The HTTP requests lack timeout configuration and retry logic, which could lead to hanging requests or unnecessary failures.

Note: This is similar to a previous review comment, but it's still valid and needs to be addressed.

🧹 Nitpick comments (9)
packages/price-balancer/Cargo.toml (1)

12-12: Consider adding version specification

While using a workspace-local path dependency is valid, it's recommended to also specify a version for better dependency management and documentation.

-cheese = { path = "../cheese" }
+cheese = { path = "../cheese", version = "0.1.0" }
packages/cheese/src/updater.rs (4)

1-16: Enhance module documentation with examples and usage details.

While the current documentation explains the module's purpose, it would be more helpful to include:

  • Example usage of the main functions
  • Expected return values and error cases
  • Rate limits or performance considerations when fetching from DEXes

18-40: Add field documentation and consider stronger typing.

The structs would benefit from:

  1. Documentation for each field in DisplayPool
  2. Consider using a custom type for monetary values to prevent precision issues

Example improvement:

/// Represents a display-friendly version of a DEX pool
#[derive(Debug)]
pub struct DisplayPool {
    /// The source DEX ("Meteora" or "Raydium")
    pub source: String,
    /// The mint address of the other token in the pool
    pub other_mint: String,
    // ... (document remaining fields)
}

179-193: Improve token name parsing robustness.

The current implementation has potential edge cases:

  1. No handling for malformed pool names
  2. Case-sensitive cheese token detection
  3. Default fallback might be confusing
 fn parse_other_token_name(pool_name: &str) -> String {
+    // Normalize the input
+    let pool_name = pool_name.trim();
+    if pool_name.is_empty() {
+        return "UNKNOWN".to_string();
+    }
+
     let parts: Vec<&str> = pool_name.split('-').collect();
     if parts.len() == 2 {
         let left = parts[0].trim();
         let right = parts[1].trim();
-        if left.contains("🧀") || left.to_lowercase().contains("cheese") {
+        let left_lower = left.to_lowercase();
+        let right_lower = right.to_lowercase();
+        if left_lower.contains("cheese") || left.contains("🧀") {
             return right.to_string();
         }
-        if right.contains("🧀") || right.to_lowercase().contains("cheese") {
+        if right_lower.contains("cheese") || right.contains("🧀") {
             return left.to_string();
         }
         return right.to_string();
     }
-    pool_name.to_string()
+    "UNKNOWN".to_string()
 }

195-239: Extract table formatting constants.

The table formatting uses hard-coded column widths. Consider:

  1. Extracting these as constants
  2. Making the table width configurable
  3. Adding support for dynamic width calculation
const COLUMN_WIDTHS: &[usize] = &[8, 44, 10, 10, 10, 10, 12, 12, 5, 44];
const COLUMN_HEADERS: &[&str] = &["Source", "Other Mint", "Symbol", "Cheese Qty", "Other Qty", 
    "Pool Type", "Liquidity($)", "Volume($)", "Fee", "Pool Address"];
packages/combined_code.txt (2)

1249-1261: Improve documentation for CheeseAggregates

The struct has placeholder comments and lacks clear documentation about the meaning and calculation of each field.

Add comprehensive documentation:

 // Additional stats about Cheese
 #[derive(Debug, Default)]
+/// Aggregated statistics for Cheese pools across all DEXes
+///
+/// This struct tracks key metrics including:
+/// * Total liquidity in USD
+/// * Trading volume
+/// * Number of trades
+/// * Total Cheese tokens in pools
 struct CheeseAggregates {
-    // e.g. total USD liquidity across all pools
+    /// Total USD value locked across all pools
     total_liquidity_usd: f64,
-    // total trades all-time, or daily trades, etc.
-    // This is a placeholder example
+    /// Total number of trades executed across all pools
+    /// Note: This is an approximate count as different DEXes track trades differently
     total_trades_all_time: u64,
-    // total Cheese quantity (summed across all pools)
+    /// Total amount of Cheese tokens in all liquidity pools
     total_cheese_qty: f64,
-    // total daily volume across all Cheese pools
+    /// Total trading volume in USD over the last 24 hours
     total_volume_24h: f64,
 }

1603-1609: Improve Unicode handling in truncate function

The current implementation might split Unicode characters in the middle when truncating strings.

Use char boundaries for safe truncation:

 fn truncate(input: &str, max_len: usize) -> String {
     if input.len() <= max_len {
         input.to_string()
     } else {
-        format!("{}…", &input[..max_len.saturating_sub(1)])
+        let kept: String = input.chars().take(max_len.saturating_sub(1)).collect();
+        format!("{kept}…")
     }
 }
🧰 Tools
🪛 LanguageTool

[typographical] ~1607-~1607: Two consecutive dots
Context: ... } else { format!("{}…", &input[..max_len.saturating_sub(1)]) } } ===...

(DOUBLE_PUNCTUATION)

packages/cheese/src/wallet.rs (2)

29-92: Key derivation logic is correct but consider full derivation paths.

While this demonstrates how to convert the mnemonic seed into multiple sub-seeds, real-world usage typically applies slip10 or chain-specific derivation paths. If you plan to handle real funds, ensure you follow recommended derivation paths for each chain to avoid potential conflicts with common wallets.

🧰 Tools
🪛 Gitleaks (8.21.2)

58-58: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


69-69: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


80-80: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


94-100: File I/O for reading the seed phrase.

Reading a local seedphrase from disk is sufficient for development, but consider a more secure secrets management approach for production to avoid accidental leakage.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 40645fc and 3c3077d.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (11)
  • .gitignore (1 hunks)
  • Cargo.toml (1 hunks)
  • packages/cheese/Cargo.toml (1 hunks)
  • packages/cheese/src/lib.rs (1 hunks)
  • packages/cheese/src/price.rs (1 hunks)
  • packages/cheese/src/raydium.rs (1 hunks)
  • packages/cheese/src/updater.rs (1 hunks)
  • packages/cheese/src/wallet.rs (1 hunks)
  • packages/combined_code.txt (1 hunks)
  • packages/price-balancer/Cargo.toml (1 hunks)
  • packages/price-balancer/src/main.rs (1 hunks)
✅ Files skipped from review due to trivial changes (2)
  • packages/cheese/src/price.rs
  • .gitignore
🚧 Files skipped from review as they are similar to previous changes (3)
  • Cargo.toml
  • packages/cheese/src/lib.rs
  • packages/cheese/src/raydium.rs
🧰 Additional context used
🪛 Gitleaks (8.21.2)
packages/cheese/src/wallet.rs

58-58: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


69-69: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


80-80: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


179-179: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


208-208: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


238-238: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

packages/combined_code.txt

1623-1623: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

🪛 LanguageTool
packages/combined_code.txt

[uncategorized] ~451-~451: Loose punctuation mark.
Context: ...ration::from_secs(2)).await; Ok(()) } fn parse_other_token_name(pool_name: &s...

(UNLIKELY_OPENING_PUNCTUATION)


[typographical] ~519-~519: Two consecutive dots
Context: ... } else { format!("{}…", &input[..max_len.saturating_sub(1)]) } } ===...

(DOUBLE_PUNCTUATION)


[typographical] ~1607-~1607: Two consecutive dots
Context: ... } else { format!("{}…", &input[..max_len.saturating_sub(1)]) } } ===...

(DOUBLE_PUNCTUATION)

🔇 Additional comments (11)
packages/price-balancer/Cargo.toml (1)

12-12: Verify the cheese package structure and manifest

The dependency on the local cheese package looks correct. However, let's verify its existence and structure.

✅ Verification successful

Cheese package structure verified and properly configured

The verification confirms that:

  • The cheese package exists at the correct location
  • It has a valid Cargo.toml with proper package metadata
  • Contains a lib.rs file in the src directory, indicating it's a library crate
  • The package version is properly specified (0.1.0)

The local dependency declaration in price-balancer's Cargo.toml is correctly structured and points to a valid Rust package.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the cheese package exists and has proper Rust package structure

# Check if the cheese package directory exists and contains Cargo.toml
if [ -f "packages/cheese/Cargo.toml" ]; then
    echo "Found cheese package Cargo.toml"
    # Display package name and version
    cat packages/cheese/Cargo.toml | rg -A 3 '^\[package\]'
    # Check for lib.rs or main.rs
    fd -t f "(lib|main)\.rs$" packages/cheese/src/
else
    echo "Error: cheese package Cargo.toml not found"
fi

Length of output: 307

packages/price-balancer/src/main.rs (3)

2-3: Good use of imported utility functions.

Replacing the local de_string_to_f64 and percent_diff with their equivalents from cheese::common keeps this file concise and avoids duplicating logic. This is a clean refactor that promotes better maintainability and reusability.


2-3: Ensure consistent error handling for de_string_to_f64.

While the imported function helps with parsing string data into f64, consider verifying if it provides adequate error handling or fallback approaches for invalid string inputs. If user-input data or uncertain sources feed into this deserialization, fallback or reporting mechanisms might be beneficial.


2-3: Confirm the behavior of negative percentages.

When using percent_diff(pp.price_usd, fair_price) to determine if a pool is overpriced or underpriced, confirm whether negative returns are correctly handled downstream. It may be helpful to incorporate an absolute value check or a specialized function if negative diffs are unintentional in some contexts.

packages/cheese/src/updater.rs (1)

99-99: Document the trade count logic.

The trade count increment seems arbitrary (1 for Meteora, 2 for Raydium). This should be either:

  1. Documented to explain the reasoning
  2. Fetched from actual trade count data if available

Also applies to: 137-137

packages/cheese/src/wallet.rs (5)

13-27: Struct initialization looks well-organized.

Your MultiChainKeys struct provides a clear model for storing chain-specific keys. However, ensure that the sources of these keys (seed file, environment variable, etc.) are secure in production environments. Typically, it's best practice to avoid committing real seed files to a public repo to prevent key exposure.


58-58: Likely false positives from static analysis regarding generic API keys.

These lines generate public keys (e.g., sol_public, sui_public, aptos_public). Although static analysis flags them as generic API keys, these are actually cryptographic keys derived from the seed. As a precaution:

  • Ensure that your seed file is not tracked in version control.
  • Avoid pushing real wallet secrets to remote repositories.

Also applies to: 69-69, 80-80

🧰 Tools
🪛 Gitleaks (8.21.2)

58-58: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


102-134: Transaction signing covers multiple chains effectively.

This implementation is straightforward but watch out for future expansions. If you add more blockchains or special signing conditions, it may be beneficial to refactor chain-specific logic into separate methods or traits for cleaner organization.


179-179: False positives from static analysis for test keys.

These lines in unit tests generate or derive public keys for testing. Gitleaks flags them as potential secrets by mistake. Keep them for test coverage only, and be mindful not to use them in production.

Also applies to: 208-208, 238-238

🧰 Tools
🪛 Gitleaks (8.21.2)

179-179: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


137-271: Unit tests look thorough.

Your tests cover each supported chain, verifying both correct signature lengths and error handling for unsupported chains. This ensures reliable coverage and lowers the risk of regressions.

🧰 Tools
🪛 Gitleaks (8.21.2)

179-179: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


208-208: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


238-238: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

packages/cheese/Cargo.toml (1)

1-14: New package dependencies look appropriate.

All dependencies are pinned, which is good for reproducible builds. Keep an eye on updates to security patches or bug fixes in these libraries.

Consider this script to check for any known vulnerabilities in pinned dependencies:

✅ Verification successful

Since we cannot use cargo audit in the sandbox environment, let me try a different approach to verify the dependencies.


Let me check the dependencies in other Cargo.toml files to see if there's any consistency in versioning or potential conflicts.


Dependencies are well-structured and consistent across workspace

The dependencies in packages/cheese/Cargo.toml are appropriate and align well with the rest of the workspace:

  • Core dependencies like tokio, reqwest, serde, and anyhow are used consistently across all packages
  • Version constraints are properly specified and compatible:
    • reqwest uses "0.12" across all packages
    • serde uses "1.0" with consistent features
    • anyhow uses "1.0"
  • Crypto-related dependencies (bip39, ed25519-dalek, k256, rand_core) are properly pinned and only included where needed (cheese package)
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Checking for known vulnerabilities in pinned crates using cargo-audit
# (If cargo-audit is available in the environment)

cargo audit

Length of output: 109


Script:

#!/bin/bash
# Check if there are any other Cargo.toml files in the repository
fd Cargo.toml

# Look for any security-related comments or documentation
rg -i "security|vulnerability|advisory" --type md

Length of output: 174


Script:

#!/bin/bash
# Check contents of all Cargo.toml files
for f in $(fd Cargo.toml); do
  echo "=== $f ==="
  cat "$f"
  echo
done

Length of output: 1636

Comment on lines +83 to +84
let cheese_amt_f64 = cheese_amt_str.parse::<f64>().unwrap_or(0.0);
let other_amt_f64 = other_amt_str.parse::<f64>().unwrap_or(0.0);

⚠️ Potential issue

Replace unwrap with proper error handling.

The current unwrap usage could cause runtime panics. Consider using proper error handling:

-        let cheese_amt_f64 = cheese_amt_str.parse::<f64>().unwrap_or(0.0);
-        let other_amt_f64 = other_amt_str.parse::<f64>().unwrap_or(0.0);
+        let cheese_amt_f64 = cheese_amt_str.parse::<f64>()
+            .map_err(|e| anyhow::anyhow!("Failed to parse cheese amount: {}", e))?;
+        let other_amt_f64 = other_amt_str.parse::<f64>()
+            .map_err(|e| anyhow::anyhow!("Failed to parse other amount: {}", e))?;

Comment on lines +45 to +50
// 1) fetch from Meteora
let meteora_pools = fetch_meteora_cheese_pools(&client).await?;

// 2) fetch from Raydium
let raydium_pools = fetch_raydium_cheese_pools(&client).await?;


🛠️ Refactor suggestion

Add timeout handling for external API calls.

The DEX API calls could potentially hang. Consider adding timeouts:

+    let timeout = Duration::from_secs(30);
+    let client = Client::builder()
+        .timeout(timeout)
+        .build()?;

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +126 to +128
let price_usd = if p.pool_tvl > 0.0 {
p.pool_tvl / 500.0
} else {

⚠️ Potential issue

Improve price calculation logic

The current price calculation (pool_tvl / 500.0) appears to be a placeholder and may not accurately reflect the true token price. This could lead to incorrect trading decisions.

Consider implementing a more accurate price calculation based on:

  • Token reserves in the pool
  • Recent trade prices
  • Oracle data if available
-        let price_usd = if p.pool_tvl > 0.0 {
-            p.pool_tvl / 500.0
-        } else {
+        let price_usd = if p.pool_tvl > 0.0 && !p.pool_token_amounts.is_empty() {
+            // Calculate price based on token reserves
+            let cheese_reserves = p.pool_token_amounts[cheese_ix].parse::<f64>().unwrap_or(0.0);
+            let other_reserves = p.pool_token_amounts[other_ix].parse::<f64>().unwrap_or(0.0);
+            if cheese_reserves > 0.0 {
+                (p.pool_tvl * other_reserves) / (2.0 * cheese_reserves)
+            } else {
+                0.0
+            }
+        } else {

Committable suggestion skipped: line range outside the PR's diff.

}

// Decide a small trade
let trade_size_cheese = 100.0;

🛠️ Refactor suggestion

Consider dynamic trade sizing

The hardcoded trade size of 100.0 CHEESE tokens may not be optimal for all pool sizes and market conditions.

Implement dynamic trade sizing based on:

  • Pool liquidity
  • Price impact
  • Historical volume
-        let trade_size_cheese = 100.0;
+        let trade_size_cheese = calculate_optimal_trade_size(
+            pp.tvl,
+            pp.price_usd,
+            fair_price,
+            diff_pct
+        );

+fn calculate_optimal_trade_size(
+    pool_tvl: f64,
+    current_price: f64,
+    fair_price: f64,
+    price_diff_pct: f64
+) -> f64 {
+    // Base size on pool TVL and price difference
+    let base_size = pool_tvl * 0.01; // 1% of TVL
+    let impact_factor = (price_diff_pct / 100.0).min(0.05); // Cap at 5%
+    base_size * impact_factor
+}

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +1332 to +1333
let url = format!("https://api-v3.raydium.io/mint/ids?mints={}", joined);
println!("Requesting minted data from Raydium for mints: {joined}");

⚠️ Potential issue

Potential security vulnerability in URL construction

Direct string interpolation in URL construction could lead to injection vulnerabilities if the mints array contains malicious data.

Use proper URL encoding:

-    let joined = mints.join(",");
-    let url = format!("https://api-v3.raydium.io/mint/ids?mints={}", joined);
+    let joined: String = mints
+        .iter()
+        .map(|m| urlencoding::encode(m))
+        .collect::<Vec<_>>()
+        .join(",");
+    let url = format!("https://api-v3.raydium.io/mint/ids?mints={}", joined);

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +332 to +387
let mut cheese_aggs = CheeseAggregates::default();
let mut final_pools = Vec::new();

for m in &meteora_pools {
let (cheese_ix, other_ix) = if m.pool_token_mints.len() == 2 {
if m.pool_token_mints[0] == CHEESE_MINT {
(0, 1)
} else {
(1, 0)
}
} else {
(0, 0)
};

let cheese_amt_str = m
.pool_token_amounts
.get(cheese_ix)
.cloned()
.unwrap_or_default();
let other_amt_str = m
.pool_token_amounts
.get(other_ix)
.cloned()
.unwrap_or_default();

let cheese_amt_f64 = cheese_amt_str.parse::<f64>().unwrap_or(0.0);
let other_amt_f64 = other_amt_str.parse::<f64>().unwrap_or(0.0);

let other_mint = m
.pool_token_mints
.get(other_ix)
.cloned()
.unwrap_or_default();
let other_symbol = mint_to_symbol
.get(&other_mint)
.cloned()
.unwrap_or_else(|| parse_other_token_name(&m.pool_name));

cheese_aggs.total_liquidity_usd += m.pool_tvl;
cheese_aggs.total_volume_24h += m.daily_volume;
cheese_aggs.total_cheese_qty += cheese_amt_f64;
cheese_aggs.total_trades_all_time += 1;

final_pools.push(DisplayPool {
source: "Meteora".to_string(),
other_mint,
other_symbol,
cheese_qty: format!("{:.2}", cheese_amt_f64),
other_qty: format!("{:.2}", other_amt_f64),
pool_type: m.pool_type.clone(),
liquidity_usd: format!("{:.2}", m.pool_tvl),
volume_usd: format!("{:.2}", m.daily_volume),
fee: m.total_fee_pct.clone(),
pool_address: m.pool_address.clone(),
});
}

🛠️ Refactor suggestion

Improve error handling in data processing

The code assumes all data conversions will succeed and uses unwrap_or without logging failures. This could hide issues with data quality.

Add proper error handling and logging:

-        let cheese_amt_f64 = cheese_amt_str.parse::<f64>().unwrap_or(0.0);
+        let cheese_amt_f64 = match cheese_amt_str.parse::<f64>() {
+            Ok(amt) => amt,
+            Err(e) => {
+                log::warn!(
+                    "Failed to parse cheese amount '{}' for pool {}: {}",
+                    cheese_amt_str,
+                    m.pool_address,
+                    e
+                );
+                0.0
+            }
+        };

Committable suggestion skipped: line range outside the PR's diff.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (7)
packages/cheese/Cargo.toml (1)

17-17: Consider limiting tokio features for optimal binary size.

The full feature flag includes all tokio features, which may unnecessarily increase binary size. Consider specifying only the required features:

-tokio = { version = "1.36.0", features = ["full"] }
+tokio = { version = "1.36.0", features = ["rt-multi-thread", "macros", "time"] }
packages/cheese/src/common.rs (3)

1-6: Consider moving module declaration to lib.rs.

The comment // lib/src/lib.rs suggests this code might be in the wrong location. Consider moving this module declaration to lib.rs and keeping the implementation in common.rs.


11-36: Add documentation and derive additional traits for data models.

The public structs lack documentation and potentially useful derived traits.

+/// Response from the Meteora API containing paginated pool information
 #[derive(Debug, Deserialize)]
 pub struct PaginatedResponse {
     pub data: Vec<PoolInfo>,
     pub page: i32,
     pub total_count: i32,
 }

+/// Information about a specific liquidity pool
-#[derive(Debug, Deserialize)]
+#[derive(Debug, Deserialize, Clone, PartialEq)]
 pub struct PoolInfo {

Also consider converting total_fee_pct to use de_string_to_f64 like pool_tvl for consistency.


99-106: Add documentation and handle floating-point edge cases.

The percent_diff function needs documentation and should handle NaN/infinity cases.

+/// Calculates the absolute percentage difference between two numbers.
+/// 
+/// Returns:
+/// - 0.0 if both numbers are zero
+/// - NaN if either number is NaN or infinite
+/// - The percentage difference otherwise
 pub fn percent_diff(a: f64, b: f64) -> f64 {
+    if a.is_nan() || b.is_nan() || a.is_infinite() || b.is_infinite() {
+        f64::NAN
+    } else
     if (a + b) == 0.0 {
         0.0
     } else {
         ((a - b).abs() * 200.0) / (a + b)
     }
 }
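A runnable version of the guarded function, matching the diff above term for term (asserting the documented behavior for zero, finite, and NaN inputs):

```rust
/// Absolute percentage difference with NaN/infinity guards.
fn percent_diff(a: f64, b: f64) -> f64 {
    if a.is_nan() || b.is_nan() || a.is_infinite() || b.is_infinite() {
        f64::NAN
    } else if (a + b) == 0.0 {
        0.0
    } else {
        ((a - b).abs() * 200.0) / (a + b)
    }
}

fn main() {
    // Both zero: defined as 0.0 rather than dividing by zero.
    assert_eq!(percent_diff(0.0, 0.0), 0.0);
    // Symmetric percentage difference: |110 - 90| * 200 / 200 = 20.
    assert!((percent_diff(110.0, 90.0) - 20.0).abs() < 1e-9);
    assert!((percent_diff(90.0, 110.0) - 20.0).abs() < 1e-9);
    // Non-finite inputs propagate as NaN instead of a misleading number.
    assert!(percent_diff(f64::NAN, 1.0).is_nan());
    assert!(percent_diff(f64::INFINITY, 1.0).is_nan());
}
```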
packages/cheese/src/lib.rs (1)

1-6: Add module-level documentation.

The module structure is well-organized and follows Rust conventions. However, consider adding documentation comments (//!) for each module to describe their purpose and functionality. This will help users understand the role of each module in the library's API.

Example:

//! Common types and utilities shared across the crate
pub mod common;

//! Meteora DEX integration and pool data fetching
pub mod meteora;

// ... (similar for other modules)
packages/cheese/src/wallet.rs (2)

21-63: Consider adding validation for derivation paths.

The function hardcodes derivation paths without validation. Consider adding checks to ensure the paths follow the BIP44 specification for each chain.

Example validation:

fn validate_derivation_path(path: &str, coin_type: u32) -> Result<()> {
    let parsed = DerivationPath::from_str(path)?;
    let components = parsed.components();
    
    if components.len() < 3 || components[1].index != coin_type {
        return Err(anyhow!("Invalid derivation path for coin type {}", coin_type));
    }
    Ok(())
}

4-10: Consider adding feature flags for chain-specific dependencies.

The current implementation includes all chain-specific dependencies unconditionally. Consider using feature flags to allow users to opt-in to specific chains they need.

Example in Cargo.toml:

[features]
default = ["ethereum", "solana"]
ethereum = []
solana = []
aptos = []
sui = []
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3c3077d and 6a80bb8.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (5)
  • packages/cheese/Cargo.toml (1 hunks)
  • packages/cheese/src/common.rs (1 hunks)
  • packages/cheese/src/lib.rs (1 hunks)
  • packages/cheese/src/solana.rs (1 hunks)
  • packages/cheese/src/wallet.rs (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • packages/cheese/src/solana.rs
🔇 Additional comments (1)
packages/cheese/Cargo.toml (1)

18-18: Verify reqwest version compatibility.

The specified reqwest version "0.12.11" seems to be ahead of the latest stable release. This might cause build issues.

✅ Verification successful

The reqwest version is correct and matches the latest stable release

The specified version "0.12.11" in the Cargo.toml file exactly matches the latest stable version available on crates.io, so there won't be any build issues related to version compatibility.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check latest reqwest version on crates.io
curl -s https://crates.io/api/v1/crates/reqwest | jq -r '.crate.max_stable_version'

Length of output: 91

Comment on lines +49 to +97
pub async fn fetch_meteora_cheese_pools(client: &Client) -> Result<Vec<PoolInfo>> {
let base_url = "https://amm-v2.meteora.ag/pools/search";

let mut all = Vec::new();
let mut page = 0;
let size = 50;

loop {
println!("Fetching page {} from Meteora...", page);
let resp = client
.get(base_url)
.query(&[
("page", page.to_string()),
("size", size.to_string()),
("include_token_mints", CHEESE_MINT.to_string()),
])
.send()
.await?;

if !resp.status().is_success() {
return Err(anyhow!(
"Meteora /pools/search request failed: {}",
resp.status()
));
}

let parsed: PaginatedResponse = resp.json().await?;
println!(
"Got {} pools on page {}, total_count={}",
parsed.data.len(),
parsed.page,
parsed.total_count
);

all.extend(parsed.data);

let fetched_so_far = ((page + 1) * size) as i32;
if fetched_so_far >= parsed.total_count {
break;
}
page += 1;
}

println!(
"\nFetched a total of {} Cheese pools from Meteora.\n",
all.len()
);
Ok(all)
}

🛠️ Refactor suggestion

Enhance API client robustness and configurability.

Several improvements could make this function more production-ready:

  1. Move configuration to constants or environment variables
  2. Add proper logging instead of println
  3. Implement timeout and retry logic
  4. Consider rate limiting for API calls

Example implementation:

use log::{info, warn};
use std::time::Duration;

const METEORA_BASE_URL: &str = "https://amm-v2.meteora.ag";
const PAGE_SIZE: i32 = 50;
const REQUEST_TIMEOUT: Duration = Duration::from_secs(10);

pub async fn fetch_meteora_cheese_pools(client: &Client) -> Result<Vec<PoolInfo>> {
    let mut all = Vec::new();
    let mut page = 0;

    loop {
        info!("Fetching page {} from Meteora...", page);
        
        let resp = client
            .get(&format!("{}/pools/search", METEORA_BASE_URL))
            .timeout(REQUEST_TIMEOUT)
            .query(&[
                ("page", page.to_string()),
                ("size", PAGE_SIZE.to_string()),
                ("include_token_mints", CHEESE_MINT.to_string()),
            ])
            .send()
            .await?;

        // ... rest of the implementation
    }
}

Comment on lines +73 to +89
fn main() -> Result<()> {
let keys = create_multichain_keys()?;

// Ethereum (Secp256k1)
println!("Ethereum Key (Extended): {:?}", keys.eth_key);

// Solana (Ed25519)
println!("Solana Key (Extended): {:?}", keys.sol_key);

// Aptos (Ed25519)
println!("Aptos Key (Extended): {:?}", keys.aptos_key);

// Sui (Ed25519)
println!("Sui Key (Extended): {:?}", keys.sui_key);

Ok(())
}

🛠️ Refactor suggestion

Remove main function from library code.

The main function should not be in a library crate. Consider moving it to an example or binary target.

Create a new file examples/derive_keys.rs:

use cheese::wallet::create_multichain_keys;
use anyhow::Result;

fn main() -> Result<()> {
    let keys = create_multichain_keys()?;
    // ... rest of the printing logic
    Ok(())
}

Comment on lines +65 to +71
/// Helper: read the `seedphrase` file and trim it.
fn read_seed_phrase_file() -> Result<String> {
let path = PathBuf::from("seedphrase");
let raw = fs::read_to_string(&path)
.with_context(|| format!("Failed to read seed phrase from {:?}", path))?;
Ok(raw.trim().to_string())
}

⚠️ Potential issue

Improve seed phrase file handling security.

The current implementation has several security concerns:

  1. Hardcoded file path may expose seed phrases in predictable locations
  2. File content is read as plain text
  3. No secure memory wiping after use

Consider:

  1. Making the path configurable
  2. Using secure storage solutions
  3. Implementing secure memory wiping
-fn read_seed_phrase_file() -> Result<String> {
-    let path = PathBuf::from("seedphrase");
-    let raw = fs::read_to_string(&path)
-        .with_context(|| format!("Failed to read seed phrase from {:?}", path))?;
-    Ok(raw.trim().to_string())
+fn read_seed_phrase_file(path: impl Into<PathBuf>) -> Result<Secret<String>> {
+    use secrecy::{Secret, ExposeSecret};
+    use zeroize::Zeroizing;
+    
+    let path = path.into();
+    let raw = Zeroizing::new(
+        fs::read_to_string(&path)
+            .with_context(|| format!("Failed to read seed phrase from {:?}", path))?
+    );
+    Ok(Secret::new(raw.trim().to_string()))
}

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +12 to +19
/// Our collection of derived keys for multiple chains.
#[derive(Debug)]
pub struct MultiChainKeys {
    pub eth_key: ExtendedKey<bip39::Secp256k1>, // secp256k1 (Ethereum)
    pub sol_key: ExtendedKey<bip39::Ed25519>,   // ed25519 (Solana)
    pub aptos_key: ExtendedKey<bip39::Ed25519>, // ed25519 (Aptos)
    pub sui_key: ExtendedKey<bip39::Ed25519>,   // ed25519 (Sui)
}
💡 Codebase verification

Add test coverage for MultiChainKeys and key derivation functionality

The codebase inspection confirms that test coverage is needed. The wallet.rs module contains critical cryptographic functionality for deriving keys across multiple chains, but lacks any test coverage. Specifically, tests should verify:

  • Correct derivation paths for each chain (Ethereum, Solana, Aptos, Sui)
  • Proper key generation using known seed phrases
  • Error handling for invalid mnemonics
  • Error handling for missing/invalid seed phrase file
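
The "correct derivation paths" bullet can be made concrete: under BIP-44 with the SLIP-0044 registry, the coin types are 60 (Ethereum), 501 (Solana), 637 (Aptos), and 784 (Sui), and ed25519 chains conventionally use fully hardened paths per SLIP-0010. A sketch of path constants a test could assert against (the account/change/index components are assumptions to verify against wallet.rs):

```rust
// Conventional BIP-44 derivation paths; coin types are from SLIP-0044,
// but the trailing account/change/index components are assumptions.
const ETH_PATH: &str = "m/44'/60'/0'/0/0";      // secp256k1
const SOL_PATH: &str = "m/44'/501'/0'/0'";      // ed25519, fully hardened
const APTOS_PATH: &str = "m/44'/637'/0'/0'/0'"; // ed25519, fully hardened
const SUI_PATH: &str = "m/44'/784'/0'/0'/0'";   // ed25519, fully hardened

fn main() {
    for path in [ETH_PATH, SOL_PATH, APTOS_PATH, SUI_PATH] {
        // Every path shares the BIP-44 purpose field.
        assert!(path.starts_with("m/44'/"));
        println!("{path}");
    }
}
```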

Key locations needing tests:

  • packages/cheese/src/wallet.rs: Add test module for MultiChainKeys and create_multichain_keys()
  • Test both successful and error cases for read_seed_phrase_file()

Note: the verification confirms the original review comment. wallet.rs performs critical cryptographic key derivation for multiple chains, yet neither the module file nor lib.rs contains any tests, so this gap should be addressed.
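
As a starting point, the error-handling cases above can be exercised with a std-only sketch against a stand-in for the seed-phrase reader (the name mirrors the PR's helper; the real tests would target `read_seed_phrase_file` in packages/cheese/src/wallet.rs and use `anyhow`):

```rust
use std::fs;
use std::io;
use std::path::PathBuf;

// Std-only stand-in for the helper under test.
fn read_seed_phrase_file(path: impl Into<PathBuf>) -> io::Result<String> {
    Ok(fs::read_to_string(path.into())?.trim().to_string())
}

fn main() -> io::Result<()> {
    let file = std::env::temp_dir().join("seedphrase_test");
    fs::write(&file, "abandon abandon about\n")?;

    // Happy path: the trailing newline is trimmed.
    assert_eq!(read_seed_phrase_file(&file)?, "abandon abandon about");

    // Error path: a missing file surfaces an Err instead of panicking.
    assert!(read_seed_phrase_file(std::env::temp_dir().join("no_such_seedphrase")).is_err());

    fs::remove_file(&file)?;
    println!("ok");
    Ok(())
}
```

The invalid-mnemonic cases would additionally feed malformed word lists into the bip39 parser and assert on the error variant.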

🔗 Analysis chain

Add test coverage for MultiChainKeys.

The struct lacks unit tests to verify correct key derivation for each chain.

Would you like me to generate comprehensive test cases for the key derivation functionality?


The previous search might have been too narrow. Let's expand our search to:

  1. Find all Rust files in the package
  2. Look for test-related patterns including test attributes and test modules

This will help us determine if tests exist in separate files or if we need to verify the test coverage specifically for MultiChainKeys.


Let's examine the actual content of these files to:

  1. Understand the full context of MultiChainKeys implementation
  2. Check if there are any existing tests in the module or in lib.rs
  3. Verify if the struct is part of the public API that needs test coverage
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for existing tests
rg -l "mod tests" packages/cheese/src/

Length of output: 40


Script:

#!/bin/bash
# Let's check for test files in a broader scope and with different patterns
fd -e rs . packages/cheese/

# Also search for test-related code patterns
rg -l "test|#\[test\]" packages/cheese/

Length of output: 342


Script:

#!/bin/bash
# Let's examine the wallet.rs file content to understand the context better
cat packages/cheese/src/wallet.rs

# Also check lib.rs as it might contain module-level tests
cat packages/cheese/src/lib.rs

Length of output: 3465
