
Conversation

@Lementknight (Member) commented Mar 29, 2025

Summary by CodeRabbit

  • New Features

    • Introduced a new module for searching BlueSky posts with improved progress tracking and support for advanced filtering options.
    • Updated default search limit to 500 posts for optimized performance.
  • Bug Fixes

    • Enhanced error handling for API requests, providing clearer feedback when issues occur.
  • Tests

    • Added comprehensive tests to verify search functionality, input validation, and error handling.

github-actions bot commented Mar 29, 2025

Dependency Review

✅ No vulnerabilities, license issues, or OpenSSF Scorecard issues found.

Scanned Files

None

@Lementknight Lementknight changed the title Staging New File Scraper Refactor Mar 31, 2025
coderabbitai bot (Contributor) commented Apr 1, 2025

"""

Walkthrough

The changes refactor the BlueSky post search logic by removing the search_posts function from mission_blue.py and delegating this responsibility to a new scraper.py module. The new module encapsulates API interaction, progress tracking, and error handling. Comprehensive unit tests for scraper.search_posts are also introduced.
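
For orientation, the sketch below shows roughly how such a search_posts function could be structured. The endpoint URL, the way posts_limit is read from params, and the use of tqdm for the progress bar are illustrative assumptions, not the PR's actual code.

# Illustrative sketch only: endpoint URL, parameter names, and tqdm usage are assumptions.
import requests
from tqdm import tqdm

SEARCH_URL = "https://bsky.social/xrpc/app.bsky.feed.searchPosts"  # assumed endpoint


def search_posts(params, token):
    """Fetch posts matching params, following the pagination cursor."""
    if not token:
        raise ValueError("An access token is required.")
    posts_limit = params.pop("posts_limit", 500)  # assumed default of 500
    headers = {"Authorization": f"Bearer {token}"}
    posts = []
    with tqdm(total=posts_limit, desc="Fetching posts") as bar:
        while len(posts) < posts_limit:
            response = requests.get(SEARCH_URL, headers=headers, params=params, timeout=30)
            response.raise_for_status()
            data = response.json()
            batch = data.get("posts", [])
            if not batch:
                break
            posts.extend(batch)
            bar.update(len(batch))
            cursor = data.get("cursor")
            if not cursor:  # no further pages
                break
            params["cursor"] = cursor
    return posts[:posts_limit]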

Changes

  • mission_blue.py: Removed the local search_posts function and related imports; calls are replaced with scraper.search_posts.
  • scraper.py: Added a new module whose search_posts function handles API queries, pagination, the progress bar, and errors.
  • tests/scraper_test.py: Added a unittest suite for scraper.search_posts covering input validation, success, and error handling.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant mission_blue
    participant scraper
    participant BlueSkyAPI

    User->>mission_blue: Initiate post search
    mission_blue->>scraper: search_posts(params, token)
    loop While more posts and limit not reached
        scraper->>BlueSkyAPI: GET /search/posts (with params, token)
        BlueSkyAPI-->>scraper: JSON response (posts, cursor)
        scraper->>scraper: Update progress bar, accumulate posts
    end
    scraper-->>mission_blue: Return list of posts
    mission_blue-->>User: Present posts

Poem

In burrows deep, I watched code hop,
From mission_blue, the search did stop.
Now scraper scurries, swift and neat,
With progress bars and tests complete.
A rabbit’s cheer for code anew—
Fetching posts, as bunnies do!
🐇✨
"""


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0a1d4f4 and 3581b2e.

📒 Files selected for processing (2)
  • mission_blue.py (4 hunks)
  • scraper.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • scraper.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • mission_blue.py

@andewmark (Collaborator) commented

@Lementknight, turning this over to you to review. Is there anything else you think I should add to the tests?

@andewmark andewmark marked this pull request as ready for review May 23, 2025 02:27
coderabbitai bot (Contributor) left a comment


Actionable comments posted: 2

🧹 Nitpick comments (7)
scraper.py (1)

83-83: Fix typo in comment.

"enxt" should be "next".

-                # Move to the enxt page if available
+                # Move to the next page if available
mission_blue.py (1)

1-1: Fix typo in module docstring.

"conatins" should be "contains".

-"""This module conatins the BlueSky Web Scrapper."""
+"""This module contains the BlueSky Web Scrapper."""
tests/scraper_test.py (5)

1-1: Fix module reference in docstring.

The docstring refers to "mission_blue module" but this file tests the scraper module.

-"""Testing suite for the mission_blue module."""
+"""Testing suite for the scraper module."""

28-28: Fix typo in docstring.

"it not provided" should be "is not provided".

-        """Test if the function raises ValueError when a token it not provided."""
+        """Test if the function raises ValueError when a token is not provided."""

38-39: Remove or complete the incomplete comment.

The comment appears to be incomplete or unnecessary.

-    # Ensure that the function returns an empty list when no posts are found
-

95-95: Fix typo in comment.

"Redircting" should be "Redirecting".

-        # Redircting stdout to StringIO
+        # Redirecting stdout to StringIO

11-104: Consider adding test coverage for pagination and posts_limit functionality.

The current tests cover basic scenarios well, but are missing coverage for:

  • Pagination handling (when cursor is returned)
  • posts_limit functionality (when total posts exceed the limit)
  • Progress bar behavior (though this might be mocked)

Would you like me to generate additional test cases to improve coverage?
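
As a starting point, a pagination test could look like the sketch below. The mocked response fields ("posts", "cursor") and the patch target are assumptions based on the walkthrough above, not the repository's actual code.

# Hypothetical pagination test: response shape and patch target are assumed.
import unittest
from unittest.mock import MagicMock, patch

import scraper


class TestSearchPostsPagination(unittest.TestCase):
    @patch("scraper.requests.get")
    def test_follows_cursor_across_pages(self, mock_get):
        first = MagicMock(status_code=200)
        first.json.return_value = {"posts": [{"id": 1}], "cursor": "page2"}
        second = MagicMock(status_code=200)
        second.json.return_value = {"posts": [{"id": 2}]}  # no cursor: last page
        mock_get.side_effect = [first, second]

        posts = scraper.search_posts({"q": "test"}, "fake-token")

        self.assertEqual(len(posts), 2)
        self.assertEqual(mock_get.call_count, 2)


if __name__ == "__main__":
    unittest.main()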

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d4a5455 and ff94fd8.

📒 Files selected for processing (3)
  • mission_blue.py (2 hunks)
  • scraper.py (1 hunks)
  • tests/scraper_test.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
mission_blue.py (1)
scraper.py (1)
  • search_posts (10-96)
🔇 Additional comments (1)
mission_blue.py (1)

6-6: LGTM!

The refactoring correctly delegates the search functionality to the new scraper module.

Also applies to: 361-361

coderabbitai bot (Contributor) left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
scraper.py (2)

35-35: Docstring is consistent with implementation.

Contrary to the previous review comment, the docstring correctly states the default as 500, which matches the implementation at line 59. No changes needed.


90-96: Improve error handling pattern.

Using 'response' in locals() is a code smell and the previous review comment correctly identified this issue.
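
One common fix, sketched below with illustrative names rather than the PR's actual code, is to bind response to None before the try block so the handler can inspect it without probing locals():

# Sketch of the pattern only: function and variable names are assumed.
import requests


def fetch_page(url, headers, params):
    response = None
    try:
        response = requests.get(url, headers=headers, params=params, timeout=30)
        response.raise_for_status()
        return response.json()
    except requests.RequestException as err:
        status = response.status_code if response is not None else "no response"
        print(f"Request failed ({status}): {err}")
        return None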

🧹 Nitpick comments (3)
scraper.py (3)

10-13: Remove redundant pylint disable comments.

The pylint disable comments appear unnecessary without clear justification for their specific violations.

 def search_posts(params, token):
-    # pylint: disable=E1102
-    # pylint: disable=C0301
-

77-81: Optimize condition and list slicing.

The condition if posts_limit is redundant since posts_limit is guaranteed to be a number. Also, slicing large lists can be memory-intensive.

-                if posts_limit and total_fetched >= posts_limit:
+                if total_fetched >= posts_limit:
                     print(
                         f"Fetched {total_fetched} posts, total: {total_fetched}/{posts_limit}"
                     )
-                    return posts[:posts_limit]
+                    # Truncate only if we exceeded the limit
+                    if len(posts) > posts_limit:
+                        posts = posts[:posts_limit]
+                    return posts

83-83: Fix typo in comment.

There's a typo "nextt" instead of "next".

-                # Move to the nextt page if available
+                # Move to the next page if available
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ff94fd8 and 5098d24.

📒 Files selected for processing (3)
  • mission_blue.py (2 hunks)
  • scraper.py (1 hunks)
  • tests/scraper_test.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • mission_blue.py
  • tests/scraper_test.py
🔇 Additional comments (4)
scraper.py (4)

1-8: LGTM! Clean module structure and appropriate imports.

The module docstring is clear and the imports are well-organized for the functionality needed (HTTP requests and progress tracking).


45-49: LGTM! Proper input validation.

The validation correctly checks for required parameters and provides clear error messages.


51-56: LGTM! Standard API request setup.

The URL, headers, and authorization are properly configured for the BlueSky API.


58-61: LGTM! Creative progress bar implementation.

The butterfly-themed progress bar adds a nice touch while maintaining functionality.
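
Assuming the bar is built with tqdm (not confirmed by this review), a themed bar can be as simple as the sketch below, since tqdm accepts a string of fill characters for its ascii parameter. The actual characters in scraper.py are not shown here, so this is illustrative only.

# Illustrative only: scraper.py's actual bar characters are not shown in this review.
import time
from tqdm import tqdm

with tqdm(total=500, desc="Fetching posts", ascii=" 🦋") as bar:
    for _ in range(500):
        time.sleep(0.001)  # stand-in for handling one fetched post
        bar.update(1)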

Lementknight and others added 2 commits June 4, 2025 22:36
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Making this cleanup ensures that our scraper remains functional when scrapes get large.
@Lementknight (Member, Author) left a comment


LGTM

@Lementknight (Member, Author) commented

@andewmark even though you wrote the logic for this refactor, since I created the PR you have to be the one to approve it and merge it.

posts_limit was still defaulted to 1000 even though the docstring said 500, and other places had the opposite mismatch. Everything is now defaulted to 500.
@andewmark (Collaborator) left a comment


Approving myself as per @Lementknight

@andewmark andewmark merged commit db99e2a into main Jun 4, 2025
10 checks passed
@andewmark andewmark deleted the scraper-refactor-1 branch June 4, 2025 23:28


Development

Successfully merging this pull request may close these issues.

Create a Python File for the scraping functions and Implement Testing for Methods
