
Conversation


@DEVisions DEVisions commented Aug 28, 2025

Hi, I'd like to be able to use remote files that are constantly updated as a source of IPs to block (e.g. https://github.com/X4BNet/lists_vpn/tree/main/output), so I implemented these changes. This works if you use it like this: `-t remote-file https://xyz.com/ip-list.txt`, but also like this: `-t _all --url https://xyz.com/ip-list.txt`. Multiple `--url` flags are supported, so sources are fetched and merged into a single output, just as with the file flag.

Summary by Sourcery

Add a remote-file provider to allow fetching IP lists from remote URLs, extend the CLI to accept --url flags, and integrate it into the existing target processing flow.

New Features:

  • Add remote-file provider to fetch and merge IP ranges from one or more URLs
  • Introduce a -u/--url CLI option that supports multiple URLs and validate its usage

Enhancements:

  • Integrate remote-file target into main loop with unified provider_vars handling and support for _all meta target
  • Register RemoteFile class in the provider mapping


sourcery-ai bot commented Aug 28, 2025

Reviewer's Guide

Introduce a new remote-file provider to fetch and merge IP ranges from remote URLs by extending the CLI, validator, and main processing flow, and implementing a dedicated RemoteFile class.

Sequence diagram for fetching and processing remote files with RemoteFile provider

sequenceDiagram
    participant Main
    participant RemoteFile
    participant requests
    Main->>RemoteFile: __init__(urls, excludeip6)
    RemoteFile->>requests: get(url) for each url
    requests-->>RemoteFile: response (text)
    RemoteFile->>RemoteFile: _get_ranges()
    RemoteFile->>RemoteFile: _process_ranges()
    RemoteFile-->>Main: processed_ranges

Entity relationship diagram for provider registration including RemoteFile

erDiagram
    PROVIDERS {
        string name
        class reference
    }
    PROVIDERS ||--o| REMOTEFILE : registers
    REMOTEFILE {
        list urls
        bool excludeip6
    }

Class diagram for the new RemoteFile provider

classDiagram
    class RemoteFile {
        +urls: list[str]
        +excludeip6: bool
        +source_ranges: dict
        +processed_ranges: dict
        +__init__(urls: list[str], excludeip6: bool = False)
        +_get_ranges() dict
        +_process_ranges() dict
    }
    RemoteFile --|> BaseProvider
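The class diagram above corresponds to a sketch like the following. This is a hypothetical reconstruction based on the diagram and the review's suggested fixes (request timeout, whitespace stripping, IPv6 check after comment splitting), not the PR's actual code:

```python
import requests


class RemoteFile:
    """Sketch of the RemoteFile provider described in the class diagram."""

    def __init__(self, urls, excludeip6=False):
        self.urls = urls
        self.excludeip6 = excludeip6
        self.source_ranges = self._get_ranges()
        self.processed_ranges = self._process_ranges()

    def _get_ranges(self):
        # Fetch each URL and keep its raw lines, keyed by URL.
        ranges = {}
        for url in self.urls:
            try:
                resp = requests.get(url, timeout=10)
                resp.raise_for_status()
                ranges[url] = resp.text.splitlines()
            except requests.RequestException as e:
                print(f"[!] Failed to fetch {url}: {e}")
        return ranges

    def _process_ranges(self):
        # Turn raw lines into {"range": ..., "comment": ...} dicts,
        # skipping blank/comment lines and, optionally, IPv6 entries.
        ranges = []
        for fname, range_list in self.source_ranges.items():
            for ip_line in range_list:
                line = ip_line.strip()
                if not line or line.startswith("#"):
                    continue
                if "#" in line:
                    ip_addr, comment = map(str.strip, line.split("#", 1))
                else:
                    ip_addr, comment = line, ""
                # Check ':' on the address only, so comments can't trigger it.
                if ":" in ip_addr and self.excludeip6:
                    continue
                ranges.append({"range": ip_addr,
                               "comment": f"{fname} {comment}".strip()})
        return ranges
```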

File-Level Changes

  • Add CLI options and validation for remote URLs (src/sephiroth/main.py)
      • Add -u/--url argument with append action
      • Enforce that the remote-file target requires at least one URL during validation
  • Refactor the main processing loop to unify provider handling (src/sephiroth/main.py)
      • Expand the _all meta-target into concrete providers
      • Introduce a provider_vars placeholder and skip providers with no inputs
      • Add a remote-file branch mirroring the file and asn handling
      • Aggregate header_comments and ranges only when provider_vars is set
  • Register and implement the new RemoteFile provider (src/sephiroth/providers/__init__.py, src/sephiroth/providers/remote_file.py)
      • Add a remote-file mapping to the providers registry
      • Create a RemoteFile class that fetches URLs, filters and processes lines, and produces ranges with comments
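The -u/--url flag with append action can be sketched with argparse as follows; the parser wiring and prog name here are illustrative, not the actual main.py:

```python
import argparse

parser = argparse.ArgumentParser(prog="sephiroth")
parser.add_argument("-t", "--target", dest="targets", action="append",
                    help="Provider target, e.g. remote-file or _all")
parser.add_argument("-u", "--url", dest="urls", action="append",
                    help="Remote IP-list URL; may be given multiple times")

# action="append" collects each repeated flag into a list.
args = parser.parse_args(["-t", "remote-file",
                          "-u", "https://example.com/a.txt",
                          "-u", "https://example.com/b.txt"])
print(args.urls)  # list of both URLs, in order

# Validation mirroring the change above: remote-file needs at least one URL.
if "remote-file" in (args.targets or []) and not args.urls:
    parser.error("the remote-file target requires at least one --url")
```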


@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes - here's some feedback:

Blocking issues:

  • Detected a 'requests' call without a timeout set. By default, 'requests' calls wait until the connection is closed. This means a 'requests' call without a timeout will hang the program if a response is never received. Consider setting a timeout for all 'requests'. (link)

General comments:

  • Consider adding a timeout (and optionally retry logic) to the requests.get calls in the remote-file provider to avoid hanging or failing silently on slow/unresponsive URLs.
  • You may want to deduplicate and sort the merged IP ranges from multiple URLs before emitting them to prevent duplicate entries and improve consistency.
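The deduplicate-and-sort suggestion could be implemented with the stdlib `ipaddress` module, for example (a sketch, not part of the PR):

```python
import ipaddress


def dedupe_and_sort(cidrs):
    """Drop duplicate networks, then sort IPv4 before IPv6 in address order."""
    # A set of ip_network objects removes duplicates; strict=False tolerates
    # host bits set in the input (e.g. "1.2.3.4/24").
    nets = {ipaddress.ip_network(c, strict=False) for c in cidrs}
    # Mixed v4/v6 networks are not directly comparable, so sort by an
    # explicit key: version, then numeric network address, then prefix.
    key = lambda n: (n.version, int(n.network_address), n.prefixlen)
    return [str(n) for n in sorted(nets, key=key)]


print(dedupe_and_sort(["10.0.0.0/8", "1.2.3.0/24", "10.0.0.0/8", "2001:db8::/32"]))
```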
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Consider adding a timeout (and optionally retry logic) to the requests.get calls in the remote-file provider to avoid hanging or failing silently on slow/unresponsive URLs.
- You may want to deduplicate and sort the merged IP ranges from multiple URLs before emitting them to prevent duplicate entries and improve consistency.

## Individual Comments

### Comment 1
<location> `src/sephiroth/providers/remote_file.py:15` </location>
<code_context>
+        ranges = {}
+        for url in self.urls:
+            try:
+                resp = requests.get(url)
+                resp.raise_for_status()
+                lines = resp.text.splitlines()
</code_context>

<issue_to_address>
Consider setting a timeout for requests.get to avoid hanging.

A timeout will prevent indefinite waiting if a server does not respond.
</issue_to_address>

<suggested_fix>
<<<<<<< SEARCH
                resp = requests.get(url)
=======
                resp = requests.get(url, timeout=10)
>>>>>>> REPLACE

</suggested_fix>
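For the optional retry logic mentioned above, `requests` supports mounting an `HTTPAdapter` with a urllib3 `Retry` policy. A sketch; the retry count, backoff factor, and status list are arbitrary choices:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def make_session(retries=3, backoff=0.5):
    """Build a Session that retries transient failures with backoff."""
    session = requests.Session()
    retry = Retry(total=retries, backoff_factor=backoff,
                  status_forcelist=(500, 502, 503, 504))
    adapter = HTTPAdapter(max_retries=retry)
    session.mount("http://", adapter)
    session.mount("https://", adapter)
    return session


# Usage: still pass a timeout; retries only cover transient failures.
# resp = make_session().get(url, timeout=10)
```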

### Comment 2
<location> `src/sephiroth/providers/remote_file.py:18` </location>
<code_context>
+                resp = requests.get(url)
+                resp.raise_for_status()
+                lines = resp.text.splitlines()
+                ranges[url] = [line + "\n" for line in lines]
+            except Exception as e:
+                print(f"[!] Failed to fetch {url}: {e}")
</code_context>

<issue_to_address>
Appending '\n' to each line may be unnecessary and could cause issues downstream.

Downstream consumers may receive lines with extra newlines or inconsistent formatting. Please verify if appending '\n' is required.
</issue_to_address>

### Comment 3
<location> `src/sephiroth/providers/remote_file.py:27` </location>
<code_context>
+        ranges = []
+        for fname, range_list in self.source_ranges.items():
+            for ip_line in range_list:
+                if ip_line.startswith("#"):
+                    continue
+                if ":" in ip_line and self.excludeip6:
</code_context>

<issue_to_address>
Lines starting with whitespace before '#' will not be treated as comments.

Consider stripping leading whitespace before checking for '#', to ensure all comment lines are correctly identified.

Suggested implementation:

```python
            for ip_line in range_list:
                stripped_ip_line = ip_line.lstrip()
                if stripped_ip_line.startswith("#"):
                    continue
                if ":" in stripped_ip_line and self.excludeip6:

```

```python
                if "#" in stripped_ip_line:
                    ip_addr, comment = map(str.strip, stripped_ip_line.split("#", 1))
                else:
                    ip_addr = stripped_ip_line.strip()
                    comment = ""

```
</issue_to_address>

### Comment 4
<location> `src/sephiroth/providers/remote_file.py:29` </location>
<code_context>
+            for ip_line in range_list:
+                if ip_line.startswith("#"):
+                    continue
+                if ":" in ip_line and self.excludeip6:
+                    continue
+                if "#" in ip_line:
</code_context>

<issue_to_address>
IPv6 exclusion logic may match false positives if ':' appears in comments.

Splitting the line to separate comments before checking for ':' will prevent incorrect exclusions.
</issue_to_address>

<suggested_fix>
<<<<<<< SEARCH
            for ip_line in range_list:
                if ip_line.startswith("#"):
                    continue
                if ":" in ip_line and self.excludeip6:
                    continue
                if "#" in ip_line:
                    ip_addr, comment = map(str.strip, ip_line.split("#", 1))
                else:
                    ip_addr = ip_line.strip()
                    comment = ""
                ranges.append({
                    "range": ip_addr,
                    "comment": f"{fname} {comment}".strip()
                })
=======
            for ip_line in range_list:
                if ip_line.startswith("#"):
                    continue
                if "#" in ip_line:
                    ip_addr, comment = map(str.strip, ip_line.split("#", 1))
                else:
                    ip_addr = ip_line.strip()
                    comment = ""
                if ":" in ip_addr and self.excludeip6:
                    continue
                ranges.append({
                    "range": ip_addr,
                    "comment": f"{fname} {comment}".strip()
                })
>>>>>>> REPLACE

</suggested_fix>

### Comment 5
<location> `src/sephiroth/main.py:290` </location>
<code_context>
-        template_vars["header_comments"] += provider_vars["header_comments"]
-        template_vars["ranges"] += provider_vars["ranges"]
+
+        if provider_vars:
+            template_vars["header_comments"] += provider_vars["header_comments"]
+            template_vars["ranges"] += provider_vars["ranges"]
</code_context>

<issue_to_address>
Skipping providers with no output may mask silent failures.

Consider adding a log or warning when provider_vars is None to make silent failures more visible during debugging.
</issue_to_address>

<suggested_fix>
<<<<<<< SEARCH
        if provider_vars:
            template_vars["header_comments"] += provider_vars["header_comments"]
            template_vars["ranges"] += provider_vars["ranges"]
=======
        if provider_vars:
            template_vars["header_comments"] += provider_vars["header_comments"]
            template_vars["ranges"] += provider_vars["ranges"]
        else:
            import logging
            logging.warning(f"No output from provider '{provider}'. This may indicate a silent failure.")
>>>>>>> REPLACE

</suggested_fix>

## Security Issues

### Issue 1
<location> `src/sephiroth/providers/remote_file.py:15` </location>

<issue_to_address>
**security (python.requests.best-practice.use-timeout):** Detected a 'requests' call without a timeout set. By default, 'requests' calls wait until the connection is closed. This means a 'requests' call without a timeout will hang the program if a response is never received. Consider setting a timeout for all 'requests'.

```suggestion
                resp = requests.get(url, timeout=30)
```

*Source: opengrep*
</issue_to_address>

